Fortune
Jeremy Kahn

At Web Summit, no sign of an AI slowdown

Moveworks CEO Bhavin Shah (left) on stage at Web Summit with Sarah Myers West of the AI Now Institute. (Credit: Tyler Miller—Sportsfile for Web Summit via Getty Images)

Hello and welcome to Eye on AI. In this edition…no sign of an AI slowdown at Web Summit; work on Amazon’s new Alexa plagued by further technical issues; a general purpose robot model; trying to bend Trump’s ear on AI policy.

Last week, I was at Web Summit in Lisbon, where AI was everywhere. There was a strange disconnect, however, between the mood at the conference, where so many companies were touting AI-powered products and features, and the tenor of last week's AI news, much of which focused on reports that the companies building foundation models were seeing diminishing returns from ever larger models, and on rampant speculation in some quarters that the AI hype cycle was about to end.

I moderated a center stage panel discussion on whether the AI bubble is about to burst, and I heard two very different, but not diametrically opposed, takes. (You can check it out on YouTube.) Bhavin Shah, the CEO of Moveworks, which offers an AI-powered service that lets employees at big companies get their IT questions answered automatically, argued, as you might expect, that not only is the bubble not about to burst, it isn't even clear there is a bubble.

AI is not like tulip bulbs or crypto

Sure, Shah said, the valuations for a few tech companies might be too high. But AI itself was very different from something like crypto or the metaverse or the tulip mania of the 17th century. Here was a technology that was having real impact on how the world's largest companies operate—and it was only just getting going. He said it was only now, two years after the launch of ChatGPT, that many companies were finding AI use cases that would create real value.

Rather than being concerned that AI progress might be plateauing, Shah argued that companies were still exploring all the possible, transformative use cases for the AI that already exists today, and that the technology's transformative effects do not depend on further progress in LLM capabilities. In fact, he said, there was far too much focus on what the underlying LLMs could do and not nearly enough on how to build systems and workflows around LLMs and other kinds of AI models that could, as a whole, deliver significant return on investment (ROI) for businesses.

The idea that simply throwing an LLM at a problem would magically produce ROI was always naïve, Shah argued. Instead, it was always going to take deliberate systems architecture and engineering to create a process in which AI could deliver value.

AI's environmental and social costs argue for a slowdown

Meanwhile, Sarah Myers West, the co-executive director of the AI Now Institute, argued not so much that the AI bubble is about to burst, but rather that it might be better for all of us if it did. West argued that the world cannot afford a technology with the energy footprint, appetite for data, and problems around unknown biases that today's generative AI systems have. In that context, a slowdown in AI progress at the frontier might not be a bad thing, as it could force companies to look for ways to make AI more energy- and data-efficient.

West was skeptical that smaller models, which are more efficient, would necessarily help. She said they might simply result in the Jevons paradox, the economic phenomenon where making the use of a resource more efficient results in more overall consumption of that resource.

As I mentioned last week, I think that for many companies trying to build applied AI solutions for specific industry verticals, the slowdown at the frontier of AI model development matters very little. Those companies are mostly bets that their teams can use current AI technology to build products that find product-market fit. Or, at least, that's how they should be valued. (Sure, there's a bit of "AI pixie dust" in the valuations too, but those companies are valued mostly on what they can create using today's AI models.)

Scaling laws do matter for the foundation model companies

But for the companies whose whole business is creating foundation models—OpenAI, Anthropic, Cohere, and Mistral—their valuations are very much based on the idea of getting to artificial general intelligence (AGI), a single AI system that is at least as capable as humans at most cognitive tasks. For these companies, diminishing returns from scaling LLMs matter.

But even here, it's important to note a few things. While the returns from pre-training larger and larger AI models seem to be slowing, AI companies are only just starting to explore the returns from scaling up "test-time compute" (i.e., giving an AI model that runs some kind of search process over possible answers more time, or more computing resources, to conduct that search). That is what OpenAI's o1 model does, and it is likely what future models from other AI labs will do too.

Also, while OpenAI has always been most closely associated with LLMs and the “scale is all you need” hypothesis, most of these frontier labs have employed, and still employ, researchers with expertise in other flavors of deep learning. If progress from scale alone is slowing, that is likely to encourage them to push for a breakthrough using a slightly different method—search, reinforcement learning, or perhaps even a completely different, non-Transformer architecture.

Google DeepMind and Meta are also in a slightly different camp here, because those companies have huge advertising businesses that support their AI efforts. Their valuations are less directly tied to frontier AI development.

It would be a different story if one lab were achieving results that Meta or Google could not replicate—which is what some people thought was happening when OpenAI leapt out ahead with the debut of ChatGPT. But since then, OpenAI has not managed to maintain a lead of more than three months for most new capabilities.

As for Nvidia, its GPUs are used for both training and inference (i.e. applying an AI model once it has been trained)—but it has optimized its most advanced chips for training. If scale stops yielding returns during training, Nvidia could potentially be vulnerable to a competitor with chips better optimized for inference. (For more on Nvidia, check out my feature on company CEO Jensen Huang that accompanied Fortune’s inaugural 100 Most Powerful People in Business list.)

With that, here’s more AI News.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Correction, Nov. 15: Due to erroneous information provided by Robin AI, last Tuesday's edition of this newsletter incorrectly identified billionaire Michael Bloomberg's family office Willets as an investor in the company's "Series B+" round. Willets was not an investor.

Before we get to the news: If you want to learn more about what's next in AI and how your company can derive ROI from the technology, join me in San Francisco on Dec. 9-10 for Fortune Brainstorm AI. We'll hear about the future of Amazon Alexa from Rohit Prasad, the company's senior vice president and head scientist, artificial general intelligence; we'll learn about the future of generative AI search at Google from Liz Reid, Google's vice president, search; and about the shape of AI to come from Christopher Young, Microsoft's executive vice president of business development, strategy, and ventures; and we'll hear from former San Francisco 49er Colin Kaepernick about his company Lumi and AI's impact on the creator economy. You can view the agenda and apply to attend here. (And remember, if you write the code KAHN20 in the "Additional comments" section of the registration page, you'll get 20% off the ticket price—a nice reward for being a loyal Eye on AI reader!)
