Fortune
Jeremy Kahn

Meta's A.I. guru LeCun wants you to know he's no doomer

Meta Chief A.I. Scientist Yann LeCun (Credit: Chesnot/Getty Images)

Hello, and welcome to Eye on A.I. I’m writing this week from Paris, where yesterday I attended an event Meta held to showcase its latest A.I. research for members of the press, and where Viva Tech, one of Europe’s largest technology trade shows, kicked off today.

This past week was one in which a number of A.I. optimists made an effort to counter the increasingly vocal and influential A.I. doom narrative—the fear that A.I. poses a grave and existential risk to humanity and so must be heavily regulated and licensed through new national and international agencies.

One of the optimists firing a rhetorical artillery barrage was venture capitalist Marc Andreessen, who penned a 7,000-word essay accusing the doomers of being either "Baptists" (essentially messianic zealots informed by cult-like social movements, such as Effective Altruism, that have adopted the danger of extinction from runaway A.I. as a core tenet, masked in rationalist terms, but essentially no more scientific than transubstantiation) or "Bootleggers" (cynical players whose commercial interests align in hyping A.I. doom in order to prompt a regulatory response that cements their market position and hobbles the competition). I don’t agree with portions of Andreessen’s analysis—he uses strawmen, doesn’t engage deeply in taking apart counterarguments, and he is way too sanguine about the serious shortcomings of today’s A.I. systems and the risk of real harm they pose to individuals and democracy. But his essay is worth reading and the Baptist and Bootlegger analogy (which Andreessen borrowed from economic historian Bruce Yandle) is worth thinking about.

Yann LeCun, Meta’s chief A.I. scientist, is also a prominent A.I. optimist and it was bracing to hear his views at yesterday’s Meta event. LeCun thinks that the scenario that most worries the doomers—that we will somehow accidentally stumble into creating superintelligent A.I. that will escape our control before we even realize what’s happened—“is just preposterous.” LeCun says that is simply “not the way anything works in the world.” LeCun is confident that we humans, collectively, “are not stupid enough to roll out infinite power to systems that have not been thoroughly tested inside of sandboxes and virtual environments.”

Like Andreessen, LeCun thinks many recent proposals to regulate A.I. (such as a framework supported by companies like OpenAI and Microsoft to create new national and international A.I. agencies with licensing authority for the training of very large A.I. models) are a terrible idea. In LeCun's view, the companies supporting this are motivated by a desire to quash competition from open-source A.I. models and less well-resourced startups.

It's probably no coincidence that Meta, LeCun's employer, has planted its flag firmly in the open-source camp. Unlike many of its other Big Tech brethren, Meta has made many of its most advanced A.I. models and datasets freely available. (Two U.S. senators wrote to the company last week questioning whether it had been irresponsible in the way it had released its powerful LLaMA large language model. Meta had tried to “gate” access to the model, releasing it only to select researchers for non-commercial purposes, but the entire model, including all of its weights, quickly leaked online and has now been used by many people beyond the original select research partners. The fear is that LLaMA will become a ready tool for those looking to pump out misinformation, run scams, or carry out cyberattacks.)

Others warning about A.I. risk, including LeCun's friends and fellow Turing Award winners Geoff Hinton and Yoshua Bengio, who along with LeCun are often referred to as “the godfathers of A.I.,” are perhaps guilty of a failure of imagination, he suggested. LeCun, whose father was an aeronautical engineer and who remains fascinated by aircraft, says that asking people to talk about A.I. safety today is like asking people in 1930 to opine on the safety of the turbojet engine, a technology that had not even been invented yet. Like superpowerful A.I., turbojets sound scary at first, he says. And yet today, thanks to careful engineering and safety protocols, they are one of the most reliable technologies in existence. “Who could have imagined in 1930 that you could cross an ocean in complete safety, at near the speed of sound, with a couple of hundred other people?” he says.

Although LeCun has long championed the A.I. methods—in particular deep neural networks trained using self-supervised learning—that have brought about the generative A.I. boom, he’s not a huge fan of today’s large language models. Their intelligence, such as it is, he says, is too brittle. While they can seem brilliant one second, they can seem utterly stupid the next. They tend to confabulate, they are not reliably steerable or controllable, and there is growing evidence that there may not be any way to put in place guardrails around their behavior that can’t be easily overcome. In fact, he thinks fears about A.I. posing a risk to humanity are partly the result of people mistakenly extrapolating from today’s large language models to future superintelligence. “A lot of people are imagining all kinds of catastrophe scenarios because of A.I. And it’s because they have in mind these auto-regressive LLMs that kind of spew nonsense sometimes. They say it’s not safe. They are right. It’s not. But it’s also not the future,” he says. (LLMs are auto-regressive because each word they output is then fed back into them to help predict the next word in the sequence.)
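For readers curious what that parenthetical means in practice, here is a minimal, purely illustrative Python sketch of the auto-regressive feedback loop: each newly generated token is appended to the sequence and fed back in to predict the next one. The toy vocabulary and stand-in model below are hypothetical, for illustration only, and are not code from any actual LLM.

```python
import random

# A toy vocabulary standing in for a real model's tokenizer.
VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

def toy_next_token_probs(tokens):
    # Stand-in for a real language model: given the tokens so far,
    # return a probability for each vocabulary item.
    weights = [random.random() for _ in VOCAB]
    total = sum(weights)
    return [w / total for w in weights]

def generate(prompt_tokens, max_new_tokens=10):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = toy_next_token_probs(tokens)                   # predict a distribution over the next token
        next_token = random.choices(VOCAB, weights=probs)[0]   # sample one token from it
        if next_token == "<eos>":                              # stop at end-of-sequence
            break
        tokens.append(next_token)                              # feed the output back in as input
    return tokens

print(generate(["the", "cat"]))
```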

LeCun boldly predicted that within a few years LLMs, the engine of today’s generative A.I. revolution, will be almost completely abandoned in favor of better, more robust algorithms. At the A.I. day, LeCun also discussed his own thoughts about what is needed to get to more humanlike A.I. And he explained a new computer vision model called I-JEPA, which Meta CEO Mark Zuckerberg announced the company was open-sourcing yesterday. I-JEPA is the first step in LeCun’s roadmap towards safe superhuman intelligence—and it requires a very different algorithmic design than the Transformer-based systems responsible for today’s LLMs. (More on I-JEPA in the research section of this newsletter below.)

Zuckerberg’s I-JEPA announcement was also part of the Meta CEO’s efforts to parry criticism from investors and the press that the company is lagging in tech’s buzziest space, generative A.I. Unlike its Big Tech brethren, Meta has not rolled out a major consumer-facing generative A.I. product of its own so far. And many noted that when the Biden White House held a meeting with A.I. companies creating foundation A.I. models, Meta was notably absent. The White House said it wanted to meet with those companies that were "at the forefront of A.I. innovation,” which many interpreted as a diss on Meta. (LeCun said that the company has been talking to the White House “through other channels” and noted that he had personally advised French President Emmanuel Macron on A.I. policy in recent days.)

But Zuckerberg is not about to miss out on Silicon Valley’s latest boom. At an “all-hands” meeting for company employees last week, the CEO said Meta plans to put generative A.I. “into every single one of our products." He previewed a number of upcoming announcements around the technology, starting with A.I.-generated stickers that can be shared in the company’s messaging apps, WhatsApp and Messenger, and moving on to chatbot-like agents with a variety of different personas designed “to help and entertain.” The company is currently testing those agents internally and says it will debut them in WhatsApp and Messenger before pushing them out to Meta’s other apps, and eventually to the metaverse. The company is also planning on using A.I. models that can generate three-dimensional scenes and even entire virtual worlds to help augment and build out the metaverse.

With that, here’s the rest of this week’s news in A.I.


But, before you read on: Do you want to hear from some of the most important players shaping the generative A.I. revolution and learn how companies are using the technology to reinvent their businesses? Of course you do! So come to Fortune’s Brainstorm Tech 2023 conference, July 10-12 in Park City, Utah. I’ll be interviewing Anthropic CEO Dario Amodei on building A.I. we can trust and Microsoft corporate vice president Jordi Ribas on how A.I. is transforming Bing and search. We’ll also hear from Antonio Neri, CEO of Hewlett Packard Enterprise, on how the company is unlocking A.I.’s promise; Arati Prabhakar, director of the White House’s Office of Science and Technology Policy, on the Biden Administration’s latest thinking about how the U.S. can realize A.I.’s potential while enacting the regulation needed to guard against its significant risks; Meredith Whittaker, president of the Signal Foundation, on safeguarding privacy in the age of A.I.; and many, many more, including some of the top venture capital investors backing the generative A.I. boom. All that, plus fly fishing, mountain biking, and hiking. I’d love to have Eye on A.I. readers join us! You can apply to attend here.


Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com
