Fortune
Technology
Jeremy Kahn

Big Tech has not monopolized A.I. software, but Nvidia dominates A.I. chips

Photo of Nvidia CEO Jensen Huang (Credit: Patrick T. Fallon—Bloomberg/Getty Images)

I recently caught up with Ian Hogarth and Nathan Benaich, who each year produce The State of AI Report, a must-read snapshot of how commercial applications of A.I. are evolving. Benaich is the founder of Air Street Capital, a solo venture capital fund that is one of the savviest early-stage investors in A.I.-based startups I know. Hogarth is the former co-founder of concert discovery app Songkick and has since gone on to become a prominent angel investor as well as one of the founders behind the founder-led European venture capital platform Plural.

There’s always a lot to digest in their report. But one of the key takeaways from this year’s State of AI is that concerns that established tech giants and their affiliated A.I. research labs would monopolize the development of A.I. have been proven, if not exactly wrong, then at least premature. While it is true that Alphabet (which has both Google Brain and DeepMind in its stable), Meta, Microsoft, and OpenAI (which is closely partnered now with Microsoft) are building large “foundational models” for natural language processing and image and video generation, they are hardly the only players in the game. Loosely organized collectives of A.I. researchers and well-financed, venture-backed startups are challenging these tech giants and their labs with models of their own. AI21 Labs, an Israeli startup, has Jurassic, a large language model. So too does Clova, the A.I. research lab of Korean Internet company Naver.

“The traditional dogma in software is about centralization,” Benaich says. “Google, Apple, Facebook will win and build the best products because success begets success and they will just keep sucking up all the talent and having the most compute.” But this has not been the case with A.I. software. “Last year and this year, we see a lot of large scale results out of research collectives,” he says. “Progress is not centralized.”

Some of these newer players are also open-sourcing their models so anyone can build on top of them: Hugging Face coordinated the BigScience collective that created BLOOM, a very large language model. EleutherAI, another collective, has built GPT-NeoX, its own open-source riposte to OpenAI’s GPT (notably, it did so using Google’s Tensor Processing Units in Google’s datacenters, which Google allowed it to do for free). Stability AI has rolled out the very popular, open-source text-to-image generation system Stable Diffusion, which competes with OpenAI’s DALL-E. Open-source versions of DeepMind’s protein-folding A.I. AlphaFold have also been created. (It is worth mentioning that at least a few of the newer research labs, such as Anthropic and Conjecture, were funded by now-disgraced cryptocurrency mogul Sam Bankman-Fried. For more on the impact SBF’s downfall has had on A.I. research, check out last week’s issue of Eye on A.I.)

Interest in A.I. software startups targeting business use cases also remains formidable. While the total amount invested in such companies fell 33% this year as the venture capital market in general pulled back on funding in the face of fast-rising interest rates and recession fears, the total was still expected to reach $41.5 billion by the end of 2022, higher than 2020 levels, according to Benaich and Hogarth, who cited Dealroom for their data. And the combined enterprise value of public and private software companies using A.I. in their products now totals $2.3 trillion, down about 26% from 2021 but still higher than 2020 figures.

But while the race to build A.I. software may remain wide open to new entrants, the picture is very different when it comes to the hardware on which these A.I. applications run. Here Nvidia’s graphics processing units completely dominate the field, and A.I.-specific chip startups have struggled to make any inroads. The State of AI notes that Nvidia’s annual data center revenue alone, $13 billion, dwarfs the valuations of chip startups such as SambaNova ($5.1 billion), Graphcore ($2.8 billion), and Cerebras ($4 billion). Seventy-eight times more research papers used Nvidia hardware than Google’s TPUs. And 98 times more research papers were published citing Nvidia’s hardware than the combined total of papers citing chips from startups Habana Labs (now owned by Intel), Graphcore, SambaNova, Cerebras, and Cambricon. (Of all those challengers, Graphcore’s chips were used most often.)

Benaich says that the key to Nvidia’s success was not so much its hardware per se, but the popularity of CUDA, the programming interface that allows developers to implement A.I. applications on Nvidia’s GPUs. It is CUDA that has enabled Nvidia to “lock in” customers, according to Benaich. “The newer players didn’t focus on software early enough,” he says. And Nvidia has continued to evolve CUDA to make it easier to build bigger A.I. models and run them much faster on its chips. “It’s hard to compete with an incumbent that behaves like a startup,” he says.

One portion of the State of AI always deals with politics and policies around A.I. Hogarth was keen to talk about the fact that A.I. seems to be becoming rapidly more capable in areas such as language and image generation, and yet work on how to ensure that A.I. is used safely and responsibly does not seem to be keeping pace. In the past, he says, rates of adoption of some of these systems were limited by the number of companies that had access to OpenAI’s (and to some extent Google’s and DeepMind’s) large models. But the growing open-source trend was democratizing access and accelerating adoption, which was something of a double-edged sword, according to Hogarth. Open-source models are easier to audit, for example. But they are also much easier for someone to use to generate misinformation or to perpetrate fraud. Hogarth, who invested in the A.I. safety-focused research lab Anthropic, says it is possible the viral popularity of image-generation A.I. systems such as Stable Diffusion will wake people up to some of the potential dangers of the technology. He thinks there is “a moral hazard” in the asymmetry between the large amounts of funding going to creating larger and more powerful A.I. models and the relatively paltry resources, especially in terms of actual people focused on the area, devoted to A.I. safety.

Every year, Hogarth and Benaich end the State of AI with some predictions for the coming year. The ones I found most intriguing in this year’s report were:

• A generative audio A.I. will debut that attracts more than 100,000 developers by September 2023.

• A proposal to regulate research organizations working on artificial general intelligence (A.I. that can match or exceed human performance across a wide range of disparate tasks) in the same way that biology labs working with potentially dangerous pathogens are regulated will get backing from an elected politician in the U.S., U.K., or European Union.

• The inability of A.I.-specific chip startups to gain market share against Nvidia will result in one of the prominent chip startups being shut down or acquired for less than 50% of the valuation implied by its last venture capital round.

• A major user-generated content site, such as Reddit, will reach a commercial licensing deal with one of the major companies building generative models, such as OpenAI, under which the site is paid for allowing its corpus of data to be used for training. (Right now those building generative models have tended simply to scrape this material from the Internet without paying anything for it, a controversial practice that has, in the case of Microsoft’s GitHub Copilot, led to a landmark class action lawsuit.)

There’s plenty more in the State of AI to dive into. You can download the whole report here.

And here’s the rest of this week’s news in A.I.  

Jeremy Kahn
@jeremyakahn
jeremy.kahn@ampressman

***
It's not too late to join us at Brainstorm A.I.
Reid Hoffman is best known as one of the founders of PayPal and LinkedIn. But he has also been a major investor in A.I. startups as a partner at venture capital firm Greylock. He sits on the board of OpenAI. And now, along with DeepMind co-founder and Greylock colleague Mustafa Suleyman, he has co-founded his first company since LinkedIn, Inflection AI. And guess what? Hoffman will be giving the closing keynote at Fortune’s Brainstorm A.I. conference in San Francisco. The conference takes place on December 5th and 6th and includes an amazing lineup of big thinkers on A.I. and on how A.I. is impacting business. Attendees will hear from luminaries such as Stanford University’s Fei-Fei Li, Landing AI’s Andrew Ng, Meta’s Joelle Pineau, Google’s James Manyika, Microsoft’s Kevin Scott, Covariant co-founder and robotics expert Pieter Abbeel, and Stability AI founder Emad Mostaque. We will also hear from Intuit CEO Sasan Goodarzi and top executives from Sam’s Club, Land O’Lakes, Capital One, and more. And there’s still a chance to join us. You can apply here to register. (And if you use the code EOAI you’ll get a special discount.) I hope to see you there!
