Hello and welcome to Eye on AI. In this edition…AI takes from the World Economic Forum in Davos; DeepSeek changes everything; Trump signs an executive order on AI; and AI auditing is poised to be big.
Financial markets were roiled this week by the rave reviews that Chinese AI startup DeepSeek’s latest model received over the weekend, as AI researchers got more of a chance to play around with it.
Many investors believe the new technology upends several key assumptions about AI:
- That the U.S. leads China on AI development
- That proprietary models have a slight edge on open source ones
- That AI progress depends on access to huge numbers of the most advanced AI chips in data centers with metropolis-size energy demands
I happen to think the markets are probably being overly negative about what DeepSeek means for companies like Nvidia in particular, and I wrote about that here. My Fortune colleagues also covered just about every angle of the DeepSeek news yesterday, and I will highlight more of their coverage below.
I spent last week at the World Economic Forum in Davos, Switzerland, where you couldn’t walk more than two feet without seeing or hearing “AI.” Then there was the big AI news that bracketed the week: Donald Trump’s announcement of the Stargate project—the $500 billion data center building spree involving OpenAI—and the buzz around DeepSeek.
Here I’ll try to bring you some of the other highlights, both from panel discussions and one-on-one conversations I had.
Agents everywhere
Everyone is getting excited about AI agents. Salesforce CEO Marc Benioff and his team are perhaps the most totemic example: Benioff is so excited about AI agents that he told everyone he almost renamed the entire company Agentforce. Salesforce has tried to make it easy for companies to spin up simple agents to automate all sorts of tasks. Adam Evans, Salesforce’s EVP for its AI platform business, told me that London’s Heathrow Airport, the world’s second busiest, has been using Salesforce agents to orchestrate tasks, including gate changes and running software that helps travelers navigate the airport. And within Salesforce itself, Evans says, the use of agents in customer service means that 83% of the roughly 40,000 customer queries the company receives each week can now be resolved without involving a human customer service rep.
And it isn’t just Salesforce. Rodrigo Liang, CEO of AI chip startup SambaNova, told me agents are “about chaining together many of these models to create complete workflows.” This transition should be good for SambaNova’s business, Liang said, because the chips it is building are optimized for running trained AI models, a task known as inference, and can do so faster and with less power than Nvidia’s GPUs. (The company claims it can run some workloads 100 times faster while consuming one-tenth the power.) That speed advantage, he says, matters more and more with agents: if each model in a workflow takes two seconds to return an output, a workflow that chains 10 models together takes 20 seconds, which is too long for many use cases, such as customer service responses.
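The arithmetic behind Liang’s point is simple: when agent calls run one after another, each model’s latency adds up, so total workflow time is roughly the per-call latency times the number of chained calls. A quick sketch (the 0.2-second figure is a hypothetical illustration, not a SambaNova benchmark):

```python
# Why per-call inference speed compounds in agent workflows: chained
# model calls run sequentially, each consuming the previous step's
# output, so latencies sum.

def workflow_latency(per_call_seconds: float, num_chained_calls: int) -> float:
    """Total latency of a workflow whose model calls run sequentially."""
    return per_call_seconds * num_chained_calls

# Liang's example: 2 seconds per model call, 10 models chained.
slow = workflow_latency(2.0, 10)   # 20 seconds, too slow for live customer service
# The same chain on hardware that answers each call in 0.2 seconds (hypothetical):
fast = workflow_latency(0.2, 10)   # 2 seconds

print(f"slow chain: {slow:.0f}s, fast chain: {fast:.0f}s")
```

A 10x gain on a single call barely matters to a human waiting for one answer; across a 10-step chain it is the difference between a usable product and an unusable one.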
Jevons Paradox is the buzzword of the day
I also had a fascinating conversation with Jonathan Ross, CEO of chip startup Groq, which, like SambaNova, is targeting AI inference tasks. Ross told me his company plans to ship at least 400,000 of its chips this year, and perhaps, if all goes according to plan, as many as 2 million. He thinks the new reasoning models, whether DeepSeek’s R1 or OpenAI’s o1 and o3, which require more computing resources at inference time to produce their best answers, will provide powerful tailwinds for demand for Groq’s chips. (Groq claims an 18x speedup, with power consumption between one-tenth and one-third of what Nvidia’s GPUs consume.) As the cost of reasoning comes down, thanks in part to innovations such as DeepSeek’s, Ross also sees businesses deploying more and more AI agents.
Like apparently everyone these days, Ross mentioned Jevons Paradox—the idea that as technology makes a resource-consuming process more efficient, overall consumption of that resource goes up, not down. In this case, he predicts that efficiencies in running top AI models, whether due to model innovations like DeepSeek’s or hardware ones from companies like Groq, will mean companies will start deploying AI in more places, ultimately requiring more total computing resources.
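Jevons’ observation reduces to simple arithmetic: if efficiency gains cut the compute needed per query, but cheaper queries unlock a larger jump in demand, total compute consumed still rises. A toy calculation with invented numbers (nothing here comes from Groq or DeepSeek):

```python
# Toy Jevons Paradox arithmetic, all numbers hypothetical: a 10x
# efficiency gain per query is outrun by a 50x jump in query volume,
# so total compute consumed rises 5x.

def total_compute(compute_per_query: float, queries: float) -> float:
    """Total compute consumed = per-query cost times query volume."""
    return compute_per_query * queries

before = total_compute(compute_per_query=10.0, queries=1_000)   # 10,000 units
after = total_compute(compute_per_query=1.0, queries=50_000)    # 50,000 units

assert after > before  # efficiency went up, and so did total consumption
print(after / before)  # 5.0
```

The paradox holds only when demand grows faster than efficiency; that is exactly the bet Ross and others are making about AI workloads.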
A push to think about AI risks—even as Trump scraps the Biden executive order
But moving to a world of AI agents also poses distinct risks. In a striking panel on international governance of AI, deep learning pioneer Yoshua Bengio argued that the biggest risks of catastrophic harm, including perhaps even existential risks to humanity, come from giving AI models agency. It is only when AI systems can use digital tools to take actions in the real world that they potentially pose a threat to human life. What’s more, Bengio argued, agency isn’t necessary to reap many of the benefits from AI. The AI models that can discover new life-saving drugs or materials to create better batteries or biodegradable plastics don’t require agency.
Demis Hassabis, CEO of Alphabet-owned Google DeepMind, basically agreed, saying “the agentic era is a threshold moment for AI becoming more dangerous.” But he then told Bengio it was simply too late to hope that people would eschew developing agents. “It would have been good to have had a decade or more of [non-agentic, narrow AI systems aimed at solving particular science problems] coming out while giving us time to understand these general algorithms better, but it hasn’t worked out that way,” Hassabis said.
Both Hassabis and Bengio urged the global business and political leaders at Davos to keep trying to develop an international governance regime that would impose some safety controls on the development of super-powerful AI systems. But their plea came just days after President Donald Trump rescinded his predecessor’s executive order on AI, which had been America’s primary effort to contain potentially catastrophic AI risks.
Training—including some coding skills—matters
At a Fortune-hosted dinner, AI luminary Andrew Ng suggested that for businesses to achieve success with AI, their workforces needed better training in how to use AI tools safely and effectively. To get the best return on investment from AI, he said, it was more important to think in terms of specific tasks AI could help automate than about entire jobs. He also told me that while genAI models are excellent at writing code, getting the most out of them is much easier if the people using them understand at least a little about coding themselves. That’s why he thinks that even if we move into a world in which AI does much of our coding for us, students should still be taught to code.
With that, here’s more AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn