DeepSeek’s breakthrough: A shift in AI economics?


The launch of DeepSeek’s R1 model has sent shockwaves through the AI world, sparking debate over its impact on competition, cost structures, and the dominance of tech giants like Meta, Microsoft, and OpenAI. While undeniably impressive, the breakthrough also raises big questions about the economics of AI model development – and whether the industry is on the cusp of real change.

The game-changer: training efficiency

DeepSeek claims R1 was trained using roughly 90% fewer chips than comparable state-of-the-art models, challenging the long-held belief that training frontier AI models requires massive computational resources. Until now, expertise in large-scale distributed systems has given hyperscalers a significant advantage. But DeepSeek’s success suggests otherwise: with a leaner chip count and a smaller team, they’ve built a model that competes at the highest levels.

This revelation is already unsettling markets. Investors have long factored AI chip demand into the valuations of players like NVIDIA, AWS, and Microsoft. If cutting-edge models can be trained at a fraction of the expected cost, it could shake financial forecasts and force a rethink on the long-term profitability of AI infrastructure.

The real cost of AI

DeepSeek claims its base model, DeepSeek-V3, cost just $5.6M to train – a figure based solely on GPU compute time. But that number likely understates the full expense. With over 100 researchers involved, factoring in salaries, infrastructure, and operational costs paints a different picture. The real investment behind R1 could be significantly higher – raising the question: are we looking at a true breakthrough, or are the gains more incremental once all costs are counted?
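To see how sensitive the headline figure is to what gets counted, here is a back-of-the-envelope sketch. The GPU-hour and price figures are illustrative assumptions broadly in line with publicly reported estimates, and the staffing numbers are entirely hypothetical – the point is the gap between a compute-only figure and a fuller accounting, not the specific values.

```python
# Back-of-the-envelope comparison: GPU-only training cost vs. a fuller
# estimate. All numbers below are illustrative assumptions, not
# figures confirmed by DeepSeek.

gpu_hours = 2_788_000        # assumed GPU-hours for pretraining
price_per_gpu_hour = 2.00    # assumed rental price per GPU-hour, USD

gpu_only_cost = gpu_hours * price_per_gpu_hour  # the "$5.6M" style figure

# A fuller picture adds people and infrastructure (hypothetical values):
researchers = 100                          # headcount cited in reporting
avg_annual_cost_per_researcher = 300_000   # assumed salary + overhead, USD
project_years = 1.5                        # assumed project duration

staff_cost = researchers * avg_annual_cost_per_researcher * project_years
total_cost = gpu_only_cost + staff_cost

print(f"GPU-only: ${gpu_only_cost / 1e6:.1f}M")
print(f"Including staff: ${total_cost / 1e6:.1f}M")
```

Under these assumptions, the all-in figure is roughly an order of magnitude larger than the compute-only one – which is exactly why the $5.6M number, taken alone, can mislead.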

Then there’s the issue of training data. Well-curated datasets have been shown to dramatically improve model efficiency – Microsoft’s Phi series is proof of that. But DeepSeek has been tight-lipped about its sources. Did they rely on pre-trained models from competitors, potentially skirting terms of service? Without more transparency, it’s hard to gauge how replicable their success really is.

What this means for AI startups

For smaller AI players, DeepSeek’s approach is both inspiring and uncertain. On one hand, it proves that major advances aren’t limited to tech giants with endless budgets. A nimble, well-optimised team can still be relevant.

But there’s a catch. The AI industry is still a race for resources, and DeepSeek’s efficiency gains may not be easy to copy. If their cost savings come from proprietary datasets or highly specialised engineering, replicating their success could be out of reach for most startups.

A moment of caution

DeepSeek’s achievement is impressive, but scepticism is warranted. The AI industry has seen big claims before – only for the fine print to reveal major caveats. The key question is whether this truly rewrites the rules of AI economics or simply shifts the costs elsewhere.

For now, DeepSeek has forced an important conversation about AI’s financial realities. But whether this is a lasting breakthrough or just an anomaly remains to be seen.

Author

Bradford Levy, Assistant Professor of Accounting, University of Chicago Booth School of Business.
