Fortune
Arvind Krishna

IBM CEO: DeepSeek proved us right—AI is not about big, proprietary systems

IBM CEO Arvind Krishna. (Credit: Christopher Goodney/Bloomberg via Getty Images)

Last week, DeepSeek challenged conventional wisdom in AI. Until now, many assumed that training cutting-edge models required over $1 billion and thousands of the latest chips. That AI had to be proprietary. That only a handful of companies had the talent to build it—so secrecy was essential.

DeepSeek proved otherwise. News reports suggest they trained their latest model with just 2,000 Nvidia chips at a fraction of the expected cost—around $6 million. This reinforces what we’ve said all along: Smaller, efficient models can deliver real results without massive, proprietary systems.

But China’s breakthrough raises a bigger question: Who will shape the future of artificial intelligence? AI development cannot be controlled by a handful of players—especially when some may not share fundamental values like protection of enterprise data, privacy, and transparency. The answer isn’t restricting progress—it’s ensuring AI is built by a broad coalition of universities, companies, research labs, and civil society organizations.

What’s the alternative? Letting AI leadership slip to those with different values and priorities. That would mean ceding control of a technology that will reshape every industry and every part of society. Innovation and true progress can only come by democratizing AI.

The time for hype is over. I believe that 2025 must be the year we free AI from the confines of a few players. By 2026, a broad swath of society shouldn’t just be using AI—they should be building it.

DeepSeek AI lesson

Smaller, open-source models are how that future will be built. DeepSeek’s lesson is that the best engineering optimizes for two things: performance and cost. For too long, AI has been seen as a game of scale, where bigger models meant better outcomes. But the real breakthrough is less about size than about efficiency. In our work at IBM, we’ve seen that fit-for-purpose models have already led to up to 30-fold reductions in AI inference costs, making AI more efficient and accessible.

I do not agree that artificial general intelligence (AGI) is around the corner or that the future of AI depends on building Manhattan-sized, nuclear-powered data centers. These narratives create false choices. There is no law of physics that dictates AI must remain expensive. The cost of training and inference isn’t fixed—it is an engineering challenge to be solved. Businesses, both incumbents and upstarts, have the ingenuity to push these costs down and make AI more practical and widespread.

We’ve seen this play out before. In the early days of computing, storage and processing power were prohibitively expensive. Yet, through technological advancements and economies of scale, these costs plummeted—unlocking new waves of innovation and adoption.

The same will be true for AI. This is promising for businesses everywhere. Technology only becomes transformative when it becomes affordable and accessible. By embracing open and efficient AI models, businesses can tap into cost-effective solutions tailored to their needs, unlocking AI’s full potential across industries.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

