28 January 2026 · 2 min read
Frontier over open-weight: a capital allocator's read
The open-weight story is seductive and mostly wrong for enterprise. Here is how I think about the trade — expressed in the language allocators actually use.
AI · capital allocation · enterprise
There is a recurring argument in enterprise AI circles: run open-weight models on your own infrastructure, control your own destiny, avoid vendor lock-in. It sounds like a CIO's dream. In practice, for most enterprise workloads, it is a worse trade than it looks.
Let me put it in the language a capital allocator would use.
Compounding
Frontier models compound. Each major release lifts the ceiling on what an application can do without you shipping a single new line of code. If you have architected your product around a frontier endpoint, you get the upgrade for free. That is the purest form of positive carry in software.
Open-weight models do not compound in the same way. They move in discrete steps, each one requiring your team to re-finetune, re-evaluate, re-deploy, and re-certify. You are paying for the upside yourself, every time. That is operating leverage running in the wrong direction.
Margin of safety
The second argument for open weight is safety — data residency, sovereignty, control. Real concerns, especially in regulated industries. But for most enterprises the margin of safety comes from the commercial contract and the architecture, not the model weights.
A frontier vendor with a well-drafted enterprise agreement, private networking, and a clean data boundary will give you more real-world safety than a self-hosted open-weight stack that your team is quietly under-patching because they are busy shipping the product. The question is not "where do the weights live" but "where does the risk actually show up".
ROI
The ROI case for frontier is stark. The cost per useful unit of work on a frontier model has fallen by roughly an order of magnitude per year for the last three years, and shows no sign of slowing. The cost of maintaining your own open-weight inference stack has not fallen at anywhere near that pace — because the cost is headcount, and headcount is not on a Moore's Law curve.
If you build against the frontier, your unit economics improve automatically. If you build against your own stack, you have to earn the improvement with operator time.
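To make the divergence concrete, here is a toy compounding calculation. The 10x-per-year decline is the essay's rough figure; the starting costs and the 10% annual self-hosted efficiency gain are made-up illustrative numbers, not data.

```python
# Illustrative arithmetic only. Starting costs are arbitrary round numbers;
# the decline rates are assumptions for the sake of the sketch.
frontier_cost = 1.00      # hypothetical cost per useful unit of work, year 0
self_hosted_cost = 1.00

for year in range(1, 4):
    frontier_cost *= 0.1       # ~an order of magnitude cheaper per year
    self_hosted_cost *= 0.9    # assume a modest 10% annual efficiency gain
    print(f"year {year}: frontier={frontier_cost:.4f}, "
          f"self-hosted={self_hosted_cost:.4f}")

# After three years: frontier ~0.001 of the original unit cost,
# self-hosted ~0.729 — a gap of roughly 700x, earned without operator time.
```

The point of the sketch is not the exact numbers but the shape: one curve compounds on its own, the other you have to push.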
The honest exception
There is a real case for open weight in a narrow set of situations: highly specialised domains, very high call volumes where the per-token economics flip, or genuine regulatory hard requirements. I am not arguing those away.
But they are exceptions. For the ninety percent of enterprise workloads that sit in the middle of the distribution, frontier is the right default. The allocator's instinct — buy the compounding asset, let someone else carry the maintenance — applies here too.
The operating consequence
The teams I have seen succeed at scale have one thing in common: they treat the model as a commodity input and their own workflow intelligence as the product. They architect for portability, write their prompts like code, and keep their compiler — the translation layer between SOPs and runtime — under their own roof.
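The pattern above can be sketched in a few lines. This is a minimal illustration, not anyone's production architecture: every name below is hypothetical, and the stub model stands in for whichever vendor endpoint you actually call.

```python
# A minimal sketch of "model as commodity input". All names are invented
# for illustration; swap StubModel for any real client behind this interface.
from dataclasses import dataclass
from typing import Callable, Protocol


class Model(Protocol):
    """Any completion endpoint, frontier or self-hosted."""
    def complete(self, prompt: str) -> str: ...


@dataclass
class Workflow:
    """The product: a prompt written like code, plus your output contract.
    The model is injected at call time, so swapping vendors touches one line."""
    prompt_template: str          # versioned alongside the codebase
    parse: Callable[[str], str]   # the translation layer you keep in-house

    def run(self, model: Model, **inputs: str) -> str:
        return self.parse(model.complete(self.prompt_template.format(**inputs)))


# Usage: a deterministic stub stands in for a real endpoint.
class StubModel:
    def complete(self, prompt: str) -> str:
        return f"SUMMARY: {prompt.splitlines()[-1]}"


summarise = Workflow(
    prompt_template="Summarise for an executive:\n{text}",
    parse=lambda out: out.removeprefix("SUMMARY: "),
)
print(summarise.run(StubModel(), text="Q3 revenue grew 12%."))
```

The design choice is the one the essay argues for: the `Workflow` (prompt, parser, evaluation) is the asset you own and version; the `Model` is an interchangeable input behind a one-method interface.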
That is the right bet. Your moat is not the weights. Your moat is the taste, the data, and the judgement you encode in what sits above them.
Written by Mark Sear. Feedback welcome by email.