Independent AI & ML practice

AI in your business, actually working.

A small practice that builds production-grade AI for companies that need real outcomes — not pilots.

What we do

Four ways AI moves a business.

AI features in your product

Search that understands intent. Drafts that match your tone. Summaries that respect context. Built into your existing app, with the evals to keep them honest.

Internal tools that compound

Document review, support triage, data extraction, drafting workflows. The systems your team uses every day, made dramatically faster.

From-scratch AI products

For founders building an AI-native product. Architecture through launch, with the evaluation system that makes it stand up under real users.

Strategy & advisory

When you don't need us to build, but you need clarity on what to build. Due diligence, architecture review, fractional CTO.

How we work

Diagnose. Build. Hand off.

01

Diagnose · two weeks

A short, focused engagement to define the problem, decide whether AI is the right tool, and propose a scope. If we say no, we say no.

02

Build · 8 to 16 weeks

End-to-end, in your repo, on your infrastructure. Real data, real users, evaluated against the baseline we agreed on in week two.

03

Hand off · plus 30 days

Training for your team, a runbook, an eval suite, clear documentation. Then it's yours. Which was always the point.

Recent work

Built end to end. Both still alive.

FundRank · in production

Portfolio analytics that survives a crisis scenario.

An AI-powered platform for portfolio and hedge-fund managers — risk decomposition, factor analysis, scenario simulations, stress testing. Built around the question of how a portfolio behaves in the moments that actually matter.

Quantitative finance models paired with conversational AI agents, so a manager can ask plain-English questions of the data and get back rigorous analysis. Full-stack: Python, FastAPI, React, Postgres, Redis, background workers, LLM integrations.

demo.radomir.fr →

Casefile Review · in development

An AI workflow that extends the lawyer's existing assistant.

Most legal-AI products either cost institutional-firm prices or do shallow work. Casefile Review takes the AI tool the lawyer already uses — Claude, ChatGPT, Cowork — and gives it a connector to their actual case files. The lawyer asks questions in plain English; the system drafts pleadings, verifies quotations character-for-character, traces citations to bundle pages.

Validated end-to-end on a real UK Court of Appeal matter: 645 emails ingested, a 232-paragraph Reply drafted, a 185-page exhibit bundle produced and paginated. Productisation in progress; founding cohort open to UK litigators.

645 emails ingested
232 paragraphs drafted
185 bundle pages
A working principle

The model is the easy part. The harder, more lasting work is the evaluation harness that keeps it honest after we're gone.
Why us

Small, senior, honest.

Senior practitioner

The person on the discovery call is the same person who will write your code, design your eval, and stand behind the result.

Fixed scope, fixed fee

One number for the whole engagement. No T&M creep, no scope shuffling, no phase-two surprises.

Your repo, your stack

Code that lives in your repositories, runs on your infrastructure, readable by your engineers. Nothing locked behind our tooling.

Evaluation is the deliverable

The model is the easy part. The harness that keeps it honest after we're gone is what we treat as the real deliverable.

Comfortable saying no

About a third of intro calls end with the recommendation not to start. Not the conversation you expect from a consultancy — but the one that saves you a quarter.

Book a call

Thirty minutes. Just your problem.

Tell me what you're trying to build. If I can help, I'll say so. If I can't, I'll tell you who can.

Prefer email? hello@magentacode.io