Service / Most asked for in 2026

AI Integration

Add AI to a product that already has users. Retrieval over your own data, agentic workflows where they earn their keep, evaluation harnesses that produce honest numbers. Production-grade, fixed scope.

From €25K · 6 to 10 weeks · 50% on signing, 50% on delivery

Who this is for

Founders or heads of engineering with a product already in users' hands, who need an AI feature that ships this quarter rather than the next.

  • Existing SaaS or web product. Real users, real data, real load.
  • A specific AI feature in mind. Search over docs, an assistant inside the app, classification or extraction at scale, an agent that completes a workflow.
  • Internal AI experiments that have not become production features yet.
  • Prior LLM prototypes that need to graduate from notebook to a system that operates 24/7 with cost and quality control.

What is included

  • Architecture review and integration plan delivered in week one.
  • Production deployment with retrieval, prompts, evaluation, monitoring, and cost guardrails.
  • Eval harness covering quality metrics relevant to the use case (precision, faithfulness, latency, cost per call).
  • Observability: traces, prompt versioning, response logging where allowed, anomaly alerts.
  • Cost guardrails: model tier selection, caching, batching, rate limits, budget alarms.
  • Documentation and a 30-minute handoff session with the engineering team.
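To make the eval harness concrete, here is a minimal sketch of what "honest numbers" means in practice: score a model function against a small labeled set and report exact-match accuracy, average latency, and cost per call. The function names, the stub model, and the per-token price are illustrative assumptions, not the real harness.

```python
import time

# Hypothetical price per 1K tokens; real pricing depends on provider and model tier.
PRICE_PER_1K_TOKENS = 0.002

def run_eval(model_fn, cases):
    """Score model_fn(prompt) -> (answer, tokens_used) against
    (prompt, expected_answer) pairs. Returns aggregate metrics."""
    correct, total_tokens, total_latency = 0, 0, 0.0
    for prompt, expected in cases:
        start = time.perf_counter()
        answer, tokens = model_fn(prompt)
        total_latency += time.perf_counter() - start
        total_tokens += tokens
        correct += int(answer == expected)  # exact-match accuracy
    n = len(cases)
    return {
        "accuracy": correct / n,
        "avg_latency_s": total_latency / n,
        "cost_per_call": (total_tokens / n) / 1000 * PRICE_PER_1K_TOKENS,
    }

# Stub standing in for a real provider call during harness development.
def stub_model(prompt):
    return ("positive" if "great" in prompt else "negative", 120)

cases = [("this is great", "positive"), ("awful service", "negative")]
metrics = run_eval(stub_model, cases)
```

The same loop extends to retrieval metrics such as faithfulness by swapping the scoring line; the structure (labeled cases in, aggregate numbers out) is what carries over to production.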

What is not included

  • Net-new product invention. The AI feature's scope is defined in the SOW.
  • Ongoing operations beyond the 4-week post-launch window. Available separately under an Advisory Retainer.
  • Custom model training or fine-tuning, unless explicitly scoped in the SOW.

Process

Week 1
Discovery
Read your stack, your users, your data. Architecture review and integration plan with measurable success criteria.
Weeks 2 to 4
Build core
Retrieval pipeline, prompt design, agent scaffolding. Working internal version by end of week 4.
Weeks 5 to 7
Eval and harden
Evaluation harness, monitoring, cost controls, error handling. Real production deployment behind a feature flag.
Weeks 8 to 10
Ship and handoff
Phased rollout. Documentation. Handoff to your engineering team with a 30-minute walk-through.

Frequently asked

Which models do you work with?

Whichever fits the budget, latency, and quality target. The default is to evaluate at least two providers (OpenAI, Anthropic, or open-weight models via vLLM or Together) before committing. The integration is built so models can be swapped with config changes, not code rewrites.
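A sketch of the config-driven pattern, with stubbed transports in place of real SDK calls. The provider names, model name, and helper functions are illustrative assumptions; the point is only that the model choice lives in data, so switching providers is a config edit.

```python
# Hypothetical config; model name and parameters are placeholders.
MODEL_CONFIG = {
    "provider": "openai",    # or "anthropic"; selection is data, not code
    "model": "example-model",
    "max_tokens": 512,
    "temperature": 0.2,
}

def build_client(config):
    """Return a callable prompt -> text for the configured provider.

    Each transport would wrap the real provider SDK; stubs here keep
    the sketch self-contained."""
    def openai_call(prompt):
        return f"[openai:{config['model']}] {prompt}"

    def anthropic_call(prompt):
        return f"[anthropic:{config['model']}] {prompt}"

    transports = {"openai": openai_call, "anthropic": anthropic_call}
    if config["provider"] not in transports:
        raise ValueError(f"unknown provider: {config['provider']}")
    return transports[config["provider"]]

client = build_client(MODEL_CONFIG)
reply = client("hello")
```

Because callers only ever see the returned callable, the eval harness can run the same test set against two configs and compare numbers before committing.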

Do you work on data residency / EU AI Act compliance?

Yes, for architecture and provider selection. Final compliance sign-off is a legal exercise that sits with your counsel.

What if my use case needs fine-tuning?

Discussed in the SOW. Most production AI features do not need a custom fine-tune. When they do, the work is scoped separately or as an extension to the engagement.

Can you do agentic workflows?

Yes, when they earn their keep. Agentic frameworks add latency, cost, and failure modes. Where a single LLM call or a deterministic pipeline solves the problem, that is what gets built. Agentic patterns are introduced when the use case actually needs them.
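The "deterministic first" principle can be shown in a few lines: handle the common case with plain code and reserve a single model call (stubbed below) as the escalation path. The task, the regex, and the helper name are illustrative assumptions, not a real engagement deliverable.

```python
import re

def extract_invoice_total(text):
    """Deterministic path first: a regex covers the common invoice
    format; only unmatched inputs escalate to one model call."""
    match = re.search(r"total[:\s]*\$?([\d,]+\.\d{2})", text, re.I)
    if match:
        return float(match.group(1).replace(",", ""))
    # Escalation point: a single LLM call, not an agent loop.
    return llm_extract_total(text)

def llm_extract_total(text):
    # Stub standing in for one model call; an agent loop only pays
    # for its extra latency and cost when the workflow needs tools
    # and multiple steps.
    return None
```

The agent version of this would add retries, tool selection, and intermediate reasoning; none of that is justified while one call (or no call) produces the right answer.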

What about the post-launch period?

Four weeks of bug-fix support are included. Ongoing operations, model upgrades, and feature extensions are available under an Advisory Retainer or a follow-on Build Engagement.

Start an AI integration conversation

A 30-minute call. Describe what you want to build, what is already in production, and what is on the line. The fit becomes clear in the first 15 minutes.

Book a 30-minute call →