A small practice that builds production-grade AI for companies that need real outcomes — not pilots.
Search that understands intent. Drafts that match your tone. Summaries that respect context. Built into your existing app, with the evals to keep them honest.
Document review, support triage, data extraction, drafting workflows. The systems your team uses every day, made dramatically faster.
For founders building an AI-native product. Architecture through launch, with the evaluation system that makes it hold up under real users.
When you don't need us to build, but you need clarity on what to build. Due diligence, architecture review, fractional CTO.
A short, focused engagement to define the problem, decide whether AI is the right tool, and propose a scope. If we say no, we say no.
End-to-end, in your repo, on your infrastructure. Real data, real users, evaluated against the baseline we agreed on in week two.
Training for your team, a runbook, an eval suite, clear documentation. Then it's yours. Which was always the point.
An AI-powered platform for portfolio and hedge-fund managers — risk decomposition, factor analysis, scenario simulations, stress testing. Built around the question of how a portfolio behaves in the moments that actually matter.
Quantitative finance models paired with conversational AI agents, so a manager can ask plain-English questions of the data and get back rigorous analysis. Full-stack: Python, FastAPI, React, Postgres, Redis, background workers, LLM integrations.
demo.radomir.fr →
Most legal-AI products either charge institutional-firm prices or do shallow work. Casefile Review takes the AI tool the lawyer already uses — Claude, ChatGPT, Cowork — and gives it a connector to their actual case files. The lawyer asks questions in plain English; the system drafts pleadings, verifies quotations character-for-character, traces citations to bundle pages.
Validated end-to-end on a real UK Court of Appeal matter: 645 emails ingested, a 232-paragraph Reply drafted, a 185-page exhibit bundle produced and paginated. Productisation in progress; founding cohort open to UK litigators.
The model is the easy part. The harder, more lasting work is the evaluation harness that keeps it honest after we're gone.
The person on the discovery call is the same person who will write your code, design your eval, and stand behind the result.
One number for the whole engagement. No T&M creep, no scope shuffling, no phase-two surprises.
Code that lives in your repositories, runs on your infrastructure, readable by your engineers. Nothing locked behind our tooling.
The real deliverable is the harness that keeps the model honest after we're gone.
About a third of intro calls end with the recommendation not to start. Not the conversation you expect from a consultancy — but the one that saves you a quarter.
Tell me what you're trying to build. If I can help, I'll say so. If I can't, I'll tell you who can.