The Model Itself Is Not Your Competitive Advantage
GPT-4-level performance now costs roughly 1/100th of what it did two years ago. Think about what that means.
In 2023, having access to a frontier model felt like an advantage. OpenAI, Anthropic, and Google were the only places to get serious language model capability. You paid the rate, built on top of it, and the quality of the underlying model gave you an edge over competitors who had not figured that out yet.
That window has closed. GPT-4-level performance, which was the benchmark two years ago, now costs under a dollar per million tokens. DeepSeek R1 achieved reasoning performance competitive with OpenAI’s best model at a reported training cost of $5.6 million, compared to hundreds of millions for comparable US lab models. Open-source models from Alibaba, Mistral, and others have narrowed the gap to the frontier to the point where, for a growing set of tasks, they are competitive or better. The model itself is becoming a commodity.
This matters because a lot of companies built their AI strategy around which model they call. That is not a strategy. That is a vendor choice. When the vendor choices are increasingly interchangeable and the switching cost is near zero, the model you call says nothing about whether your product is defensible.
What actually differentiates outcomes now sits above and below the model in the stack. Proprietary data that you have accumulated and that a competitor cannot easily replicate is a real moat. Workflow depth, meaning how deeply your AI is embedded in a user's actual work rather than sitting as a chat window beside it, is a real moat. The speed at which you can ship and iterate is a real moat, because a technical lead that used to last months now lasts weeks. The model is infrastructure.
The companies that understood this early are designing model-agnostic architectures from day one. They are not betting the product on Claude staying ahead of GPT or vice versa. They are building the data layer, the integration layer, and the user experience in ways that compound over time regardless of which model is under the hood.
If your current AI strategy could be fully described as ‘we use GPT-5,’ you have a vendor choice, not a strategy. The question worth spending time on is what your product would have that a competitor could not replicate in six months by calling the same API.

