
The winners and losers of the DeepSeek-R1 shockwave

Image created with Ideogram

This article is part of our series that explores the business of artificial intelligence

The world is still reeling from the release of DeepSeek-R1 and its implications for the AI and tech industries. The model’s impressive capabilities and its reportedly low training and development costs upset the prevailing balance of the AI space, wiping close to a trillion dollars off the U.S. stock market, with Nvidia alone losing 17% of its market cap.

I’ll caveat everything here by saying that we still don’t know everything about R1. Although DeepSeek released the weights, it has not published the training code, and it has shared little information about the training data. It has been widely reported that training R1 cost only $6 million, as opposed to the billions of dollars it takes companies like OpenAI and Anthropic to train their models. But that figure is misleading: it reportedly covers only the compute for the final training run. Training large language models (LLMs) involves many other costs, such as research experiments, data acquisition, and salaries, that are not included in that number.

Nonetheless, the researchers at DeepSeek seem to have landed on a breakthrough, especially in their training method, and if other labs can reproduce their results, it could have a huge impact on the fast-moving AI industry. Here are the winners and losers based on what we know so far.

Losers: OpenAI and Anthropic

OpenAI and Anthropic are the clear losers of this round. Both companies expected the huge cost of training advanced models to be their main moat. However, if DeepSeek’s results hold up, they will soon lose that advantage.

To be fair, DeepSeek-R1 is not better than OpenAI o1. But it is not far behind and is much cheaper: around 27x cheaper on the DeepSeek cloud and around 7x cheaper on U.S. cloud providers. And for many applications, R1 will be sufficient. Moreover, R1 shows its full reasoning chain, making it much more convenient for developers who want to review the model’s thought process to better understand and steer its behavior. The visible reasoning chain also makes it possible to distill R1 into smaller models, which is a huge benefit for the developer community.
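For developers, that reasoning chain is not just visible in the chat interface; it comes back as a separate field in API responses. Here is a minimal sketch of retrieving it through DeepSeek’s OpenAI-compatible endpoint. The model name and the reasoning_content field follow DeepSeek’s public documentation at the time of writing, and the prompt is purely illustrative.

```python
# Minimal sketch: reading R1's reasoning chain via DeepSeek's
# OpenAI-compatible API. Assumes the `openai` Python package and a
# DEEPSEEK_API_KEY environment variable.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # the API name DeepSeek documents for R1
    messages=[{"role": "user", "content": "How many primes are below 50?"}],
)

message = response.choices[0].message
print("Reasoning chain:\n", message.reasoning_content)  # the full chain of thought
print("Final answer:\n", message.content)               # the answer alone
```

Being able to log and inspect that chain, rather than getting only a final answer, is exactly what makes debugging prompts and distilling smaller models practical.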

With a contender like DeepSeek, OpenAI and Anthropic will have a hard time defending their market share. Again, to be fair, they have the better product and user experience, but it is only a matter of time before those things are replicated. (Perplexity has already integrated DeepSeek-R1 into its product at the expense of o1 queries.) They will have to reduce prices, but they are already losing money, which will make it harder for them to raise the next round of capital.

That said, this doesn’t mean that OpenAI and Anthropic are the ultimate losers. They now have to go back to the drawing board and rethink their strategy. They have some of the brightest people on board and are likely to come up with a response.

Winner: DeepSeek

DeepSeek is the clear winner here. It is now a household name. It got a lot of free PR and attention. It will get a lot of customers. And High-Flyer, the hedge fund that owns DeepSeek, probably made a few very timely trades and pocketed a good pile of money from the release of R1.

And now, DeepSeek has a secret sauce that will enable it to take the lead and extend it while others try to figure out what to do. That said, we will still have to wait for the full details of R1 to come out to see how much of an edge DeepSeek has over others.

Winner: Small AI labs and application-layer companies

The AI arms race between big tech companies had sidelined smaller AI labs such as Cohere and Mistral. The new dynamics will bring these smaller labs back into the game. The DeepSeek formula shows that having a war chest to spend on compute will not automatically secure your position in the market. It will be interesting to see how other labs will put the findings of the R1 paper to use. 

Another clear winner is the application layer. Previously, having access to cutting-edge models meant paying hefty fees for the OpenAI and Anthropic APIs. Now companies can deploy R1 on their own servers and get access to state-of-the-art reasoning models. I already mentioned Perplexity (which is probably cutting costs by using R1). Other companies in sectors such as coding (e.g., Replit and Cursor) and finance stand to benefit immensely from R1.
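To make the self-hosting point concrete, here is a rough sketch of running one of the distilled R1 checkpoints DeepSeek published on Hugging Face. I’m using the 7B Qwen-based distill because the full 671B model needs a multi-GPU server; the package choices (transformers plus accelerate) and the generation settings are my own assumptions, not an official recipe.

```python
# Rough sketch: self-hosting a distilled R1 checkpoint with Hugging Face
# transformers. Requires `transformers`, `accelerate`, and `torch`; the
# 7B model in bfloat16 fits on a single 24 GB GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory vs. fp32
    device_map="auto",           # let accelerate place layers on available devices
)

# Format the prompt with the model's own chat template.
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Explain binary search in two sentences."}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.6)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same pattern applies to the larger distills; the only thing that changes is the hardware budget.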

Winner: Open source

R1 was a clear win for open source. With the exception of Meta, all other leading companies have been hoarding their models behind APIs, refusing to release details about architecture and data.

DeepSeek has also withheld a lot of information. But we have access to the weights, and already, there are hundreds of derivative models from R1. Hopefully, this will incentivize information-sharing, which should be the true nature of AI research. 

Winner: Hyperscalers and hardware manufacturers

R1 is a good model, but the full-sized version needs powerful servers to run. This will benefit the companies that provide the infrastructure for hosting models. Microsoft, Google, and Amazon are clear winners, but so are more specialized GPU clouds that can host models on your behalf. So all the companies that spent billions of dollars on CapEx and GPU acquisitions are still going to get good returns on their investment.

The companies selling accelerators will also benefit from the stir caused by DeepSeek in the long run. Even though Nvidia has lost a good chunk of its value over the past few days, it is likely to win the long game. The demand for compute is likely going to increase as large reasoning models become more affordable.
