From Mario DiBenedetto...
In the past few weeks, OpenAI has announced a number of initiatives and acquisitions that seem very different from their quest for pure AI dominance. To be fair, they just raised a lot of money at a $500B valuation, making them the highest-valued private company on the planet. And the data centers they need to grow their AI training compute have yet to be finished, so they have to spend that money somewhere, right? Well, maybe….
In the past few weeks, OpenAI has released or announced:
The obvious question is whether OpenAI has lost its way as even the cheaply trained Chinese LLMs approach parity with ChatGPT’s strengths.
A closer inspection of OpenAI’s non-ChatGPT moves suggests a different answer. Borrowing from The AI Supremacy’s look at how the global AI competition is shaping up, we dive a little deeper into each of OpenAI’s recent excursions to uncover the key trend that should shape how you approach AI.
At first glance, the OpenAI narrative writes itself. In their mad dash to build the best AI chatbot, they eventually realized that they were working with the same training data as every other AI and, as a result, were going to have a hard time differentiating themselves. Now, to be clear, OpenAI still commands 40%+ of the AI chatbot market, so they are by far the most dominant player. But what do you do when you realize you are fighting a battle to be the low-cost leader? You provide product differentiation that makes your product more valuable!
But let’s review the actual announcements from OpenAI one by one and determine whether they really are providing differentiation:
So, individually, each of the above moves seems off the beaten path. But taken together, you probably start to see the pattern…
Let’s put OpenAI’s specific moves aside for a moment and step back and look at the broader landscape of AI competition. For this, we will lean on a recent article from The AI Supremacy that goes into the entire AI “stack” and how global competition is hotter in some areas than in others.
The (far too) short summary of their article is that China leads the West in most forms of consumer AI adoption, in the creation of AI vertical apps, and in a whole bunch of the core technologies and techniques that enable LLMs to operate accurately and efficiently. The West has clear leadership in integrating and automating AI. The latter is mostly grounded in the fact that the largest companies in the West (Meta, Google, xAI, Microsoft) control so much of both the consumer and the enterprise technology stack.
What’s assumed to be equal? The basic AI chat experience. As we’ve discussed previously, basic AI chat capabilities are becoming commoditized. And that is not good for the Western LLM providers, which have forecast massive capital expenditure requirements to build the data centers they need to keep improving their platforms.
That implies the visible parts of AI increasingly look and feel the same. When the top of the stack homogenizes, defensibility migrates to what is scarce, namely sticky personalization.
OpenAI’s moves map cleanly onto this. Sora is a distribution play. Media Manager is rights infrastructure. Hardware is an “owned” surface. ROI is personalization. OpenAI has clearly come to the conclusion that they can have the best AI around, but if the only way to use it is via their chat window, they will ultimately fail.
So if AI at the “app” layer is starting to look the same everywhere, the wins now come from how you run it: what data it has access to, what it is integrated into, and so on. The play isn’t to chase the shiniest chatbot; it’s to make smart, boring decisions that compound: lock down contracts, design controllable agents, hedge infrastructure, and meet customers where you already own trust.
Here’s the hard truth for OpenAI: while their chat is the market leader, they are woefully behind xAI, Microsoft, Google, and Meta when it comes to distributing solutions.
So how should this shape your thinking?
Treat apps like Lego bricks. Swap them when it makes sense. Just because OpenAI is the market leader, don’t get too tightly coupled to their technology. If you are building your own integrations/UX, keep it LLM agnostic, as sketched below. If you are buying third-party tools, give weight to those that can easily be pointed at any major LLM over those that are tied to a single LLM.
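To make the “Lego bricks” point concrete, here is a minimal sketch of what an LLM-agnostic integration can look like in practice. The names below (ChatModel, OpenAIAdapter, summarize_ticket) are illustrative only, not any vendor’s actual SDK: the idea is simply that your application code depends on a tiny interface, and each model provider sits behind a thin adapter you can swap without touching the rest of the codebase.

```python
# Sketch of an LLM-agnostic integration layer. All class and function names
# here are illustrative, not a specific vendor's SDK.
from typing import Protocol


class ChatModel(Protocol):
    """The only surface the rest of your codebase is allowed to touch."""

    def complete(self, prompt: str) -> str: ...


class OpenAIAdapter:
    """Wraps one specific vendor; swap this class, not your app code."""

    def complete(self, prompt: str) -> str:
        # Call the vendor SDK of your choice here and map its response
        # back to a plain string.
        raise NotImplementedError("wire up a vendor SDK here")


class EchoModel:
    """Stand-in used for local tests; the app never imports a vendor."""

    def complete(self, prompt: str) -> str:
        return f"[stub] {prompt}"


def summarize_ticket(model: ChatModel, ticket_text: str) -> str:
    """Application logic only sees the ChatModel interface, so the
    underlying LLM is swappable."""
    return model.complete(f"Summarize this support ticket:\n{ticket_text}")


if __name__ == "__main__":
    print(summarize_ticket(EchoModel(), "Customer cannot reset their password."))
```

The payoff of the stub is that your business logic can be tested and shipped without ever importing a specific vendor; switching from one LLM to another then becomes a one-line change where the adapter is constructed.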
Know demo-ware vs. real AI. There will continue to be a lot of AI that demos really well but will not be production ready. This could be because of poor design, a lack of sufficient guardrails, an inability to manage model drift, or just a fundamental lack of understanding of the non-deterministic nature of most genAI. The onus will be on you to have a scorecard to measure whether any vendor’s product is truly ready for prime time. Given there are literally thousands of vendors hawking AI products, you’ll need to be efficient, because this will very much be like finding a needle in a haystack.
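A scorecard doesn’t need to be elaborate to be useful. The sketch below is an assumption-laden starting point: the criteria, their equal weighting, and the vendor name are placeholders to replace with your own production-readiness bar.

```python
# Illustrative vendor scorecard. The criteria and equal weighting are
# assumptions, not an authoritative rubric.
from dataclasses import dataclass


@dataclass
class VendorScorecard:
    vendor: str
    has_guardrails: bool = False          # input/output filtering, refusal behavior
    monitors_model_drift: bool = False    # evals rerun when the underlying model changes
    handles_nondeterminism: bool = False  # retries, fallbacks, human-in-the-loop paths
    is_llm_agnostic: bool = False         # can be pointed at more than one major LLM
    has_audit_logging: bool = False       # prompts/responses traceable for review

    def readiness(self) -> float:
        """Fraction of criteria met; a crude but fast first filter."""
        checks = [
            self.has_guardrails,
            self.monitors_model_drift,
            self.handles_nondeterminism,
            self.is_llm_agnostic,
            self.has_audit_logging,
        ]
        return sum(checks) / len(checks)


if __name__ == "__main__":
    demo = VendorScorecard("ShinyDemoCo", has_guardrails=True)  # hypothetical vendor
    print(f"{demo.vendor}: {demo.readiness():.0%} production ready")
```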
Own your distribution. Don’t rely on a single chat window. Embed agents in your website/app, your phone systems, kiosks, and partner flows where you already have traffic and trust. Our favorite saying is that the best AI meets people where they already work!
So, has OpenAI lost its way? The evidence points instead to a company repositioning for where advantage will live next. As the app layer converges on parity, OpenAI is building on the scarce layers: compute efficiency, enforceable rights and provenance, observable agent runtimes, owned distribution, and durable personalization. The problem is that they are way behind the other major tech providers, and they may not be able to close gaps that are growing ever more important. The next winners won’t be the ones with the prettiest chatbox; they’ll be the ones who own the scarce layers that make everyone else’s apps possible.