How OpenAI is building its moat

Image created with Ideogram (with edits)

This article is part of our series that explores the business of artificial intelligence.

When ChatGPT was released, OpenAI was the only game in town and had the market, and all the attention, to itself. Analysts were heralding a new computing paradigm, a new age of AI, and an end to Google’s reign as the gatekeeper to the world’s information.

However, a little over two years later, the large language model (LLM) industry has become commoditized. Having huge compute resources and an endless pile of cash to train large models doesn’t ensure dominance in the market. And open models such as DeepSeek and Llama have proven that you can’t maintain your lead by hiding your models behind APIs and web applications.

GPT-4.5 showed that scaling pre-training is reaching its limits. And with so many models having near-identical performance on key benchmarks, developers have many options to choose from. Selling access to models has become a race to the bottom; despite having millions of paying customers for ChatGPT, OpenAI is still operating at a loss.

But like its peers, OpenAI is constantly adjusting its strategy to maintain its hold on the market and avoid getting swept aside. With models no longer being the determining factor, OpenAI is building its moat around the application and integration layers.

App in AI vs AI in app

There are two general ways that AI is shaping the future of applications: bringing applications to AI and bringing AI to applications. Applications such as ChatGPT are examples of the former: you open an AI application and have it perform tasks for you.

The original ChatGPT was a plain chatbot that could only answer queries based on its internal knowledge. Since its release in 2022, it has come a long way. In addition to being powered by increasingly powerful models, it has been enhanced with tools such as web search, code interpretation, canvas, custom GPTs, and voice mode, along with a lot of engineering scaffolding for advanced features such as memory, projects, and Deep Research.

ChatGPT still has the best user experience and, thanks to OpenAI’s brand recognition, also has the widest distribution with over 400 million weekly active users. For many people, ChatGPT is their first experience of artificial intelligence.

At the same time, OpenAI is taking the “AI to app” approach to make its AI omnipresent wherever you are. For example, the native ChatGPT app for macOS has a “Work with Apps” feature that enables the AI to interact with other applications, such as assisting you in writing and editing code in VS Code or Xcode. This gives the model additional context and removes the friction of copy-pasting information from your application into ChatGPT.

Integration wars

Impressive as these engineering feats are, all of the features that ChatGPT provides can easily be copied, and none of them gives OpenAI an ultimate moat. Moreover, OpenAI does not have an operating system (e.g., Windows) or distribution channels (e.g., Google Workspace), which puts it at a disadvantage against competitors such as Apple, Google, and Microsoft. In fact, Google’s Gemini has already caught up to a large degree with ChatGPT, and it launched Deep Research ahead of OpenAI.

Moreover, OpenAI’s partnership with Microsoft is on shaky ground, endangering its access to the big tech company’s subsidized cloud compute and massive distribution channels. Microsoft is expanding its partnership with other AI labs (e.g., support for Claude models in GitHub Copilot), investing in open source models, and developing its own brand of AI assistants integrated into its operating system, all of which are in conflict with OpenAI’s interests.

Therefore, OpenAI is also expanding its footprint by trying to win the integration and API wars. OpenAI was the first AI lab to provide an API platform to access its GPT-3 model in 2020. Since then, the platform has evolved considerably. 

The most recent update, announced on March 11, 2025, adds features such as file search, web search, and computer use. OpenAI has effectively become a product company that wants to be the one-stop shop for creating LLM and agentic applications. This is part of the broader war to set the standard for the industry.

For example, a year ago, if you wanted to create a retrieval-augmented generation (RAG) application, you would have to patch together a bunch of tools and services to make it work. Now, OpenAI brings all of that together within its API framework, making it easier for developers to get their applications started and abstracting away the complexity of vector stores and retrieval algorithms.
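To make the abstraction concrete, here is a toy sketch of the retrieval step developers used to wire up themselves: embed documents, store the vectors, and rank them against a query. The character-count "embedding" below is a hypothetical stand-in for a real embedding model, and real systems would use a vector database; OpenAI's API now bundles this plumbing behind its file search tool.

```python
# Toy sketch of a hand-rolled retrieval pipeline (the kind of plumbing
# OpenAI's API now abstracts away). Embeddings here are a hypothetical
# bag-of-characters stand-in for a real embedding model.
import math

def embed(text: str) -> list[float]:
    """Map text to a 26-dim vector of lowercase letter counts."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors (0.0 if either is empty)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank stored documents against the query; the top-k results
    would then be fed to the LLM as context."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "LLM pricing and rate limits",
    "vector stores and retrieval",
    "voice mode",
]
print(retrieve("how does retrieval work with vector stores?", docs))
```

Even this stripped-down version hints at the moving parts (embedding, storage, ranking) that a managed retrieval tool hides from the developer.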

Switching the backbone model of an LLM application is easy. But once you build enough infrastructure on top of a framework, you become locked in and switching to a different platform becomes increasingly difficult and costly. Therefore, the moat shifts from the model to all the scaffolding that surrounds it.

It is not clear how successful this strategy will be. What is certain is that there is no clear winner in sight yet. OpenAI still has a lot of legal battles to fight and is facing growing competition from Chinese AI labs, which has spurred CEO Sam Altman to try to convince U.S. lawmakers to reduce copyright barriers for training AI models.
