Fortune
Jeremy Kahn

The atom bomb or the airliner? A.I. regulation may hinge on which analogy wins

Photo of the mushroom cloud from an atomic bomb exploding. (Credit: Getty Images)

Hello and welcome to June’s special edition of Fortune’s Eye on A.I.

Political fights are won and lost over narratives. Tell a better story than your opponents and you’re likely to win. Framing is everything—which means picking the right metaphors or analogies can be crucial. So it was fascinating to watch Microsoft policy chief Brad Smith trial various ways of framing the A.I. regulation debate while speaking to reporters in Europe earlier this week.

Smith was in Europe to give an address at a conference in Brussels, where the European Union is currently in the final stages of negotiating its landmark A.I. Act. His speech covered how the policy positions Microsoft recently announced on A.I. regulation might align with the A.I. Act, as well as some things the company would like to see in the law’s final version.

As I’ve written about previously for Fortune, Microsoft is pushing a package of policies. Some of the points in the company’s A.I. regulation blueprint include what it calls “safety brakes” on the use of A.I. in critical infrastructure, like power grids and city traffic management systems, that would allow humans to quickly take back control from an automated system if something began to go awry. It has also called for regulations that would be similar to “know your customer” rules in financial services, where those making A.I. software would be required to “know your cloud, know your customer, and know your content.” Microsoft has come out in favor of licensing requirements for those creating “highly capable” foundation models and running large data centers where such models are trained. While safety brakes, licensing, and Smith’s KYC rules are not in the EU A.I. Act, he said in his speech and an accompanying blog post that implementing such rules would “be key to meeting both the spirit and obligations of the act.”

What was new in Smith’s discussion with reporters were some of the analogies he used. He said he particularly favored the comparison to civil aviation as a way to think about international governance of A.I. He pointed to the International Civil Aviation Organization, which was created in 1944 and is based in Montreal, as a possible model. Almost every nation on Earth, 192 in total, is a member of the organization. “The ICAO has fostered a common approach to safety standards,” Smith said. “Both the testing of safety when the aircraft are being designed and built, and their application and an ongoing monitoring of their safety is very similar when you step back and think about it to what people are talking about for foundational models.”

There are a couple of other things about the aviation analogy that strike me as interesting. For instance, civil aviation sounds fairly innocuous. We all want airplanes that can fly seamlessly across borders—and that also don’t fall out of the sky. It sounds a lot less tricky to manage than, say, nuclear power, which is the analogy that people such as OpenAI’s Sam Altman and A.I. researcher Gary Marcus have been floating, suggesting that we may need an organization similar to the International Atomic Energy Agency to police the use of powerful A.I. systems. In Altman’s telling, A.I. is a technology that could, if not handled carefully, bring about the destruction of the world, just like nuclear power. Framing A.I. regulation by analogy to nuclear power immediately conjures up images of mushroom clouds and Armageddon. Comparing powerful A.I. to commercial aircraft, not so much.

Of course, the big criticism of what Smith and Microsoft have called for in their regulatory blueprint is that the strict KYC and licensing requirements will be extremely difficult for open-source A.I. developers to comply with. In other words, what Microsoft is proposing amounts to a form of “regulatory capture,” where the Big Tech companies set the rules in such a way that they can comply, yet the same rules act as a moat, keeping smaller startups and independent software developers from competing.

I asked Smith about this. In response, he trotted out a different analogy, this time not to the airplane but to the automobile. We require, for public safety, that anyone who wants to drive a car get a license first. That hasn’t stopped millions of people from taking driver’s education classes and obtaining their licenses. So requiring those building A.I. foundation models to get a license, Smith said, doesn’t mean that open-source developers would be shut out of the game.

“We should require everyone who participates in a manner that could have an implication for safety, to put safety first,” Smith said. “But we all had to find a way to do it that can accommodate different models and I would imagine that we can.” As for regulatory capture, he pointed out that, at the end of the day, companies don’t write the rules, governments do. “And I would hope that they will find a way to write the rules in a way that can accomplish more than one goal at a time,” he said.

Of course, there might be a good reason the airplane isn’t the metaphor Smith wanted to use when talking about competition. While 192 countries have agreed on aviation standards, only a handful of companies have been able to compete effectively in selling commercial airliners, and that’s, at least in part, because meeting those standards is really hard.

I asked Smith if foundation models would be able to comply with the EU’s existing data privacy law, known as GDPR, as the A.I. Act says they must. Many experts say it will be difficult for these models to meet that requirement because they are often trained on such vast amounts of data, scraped from the internet, that it is hard for the creators to guarantee that no personal information has been swept up in the data haul. Others have also questioned whether companies such as OpenAI, Microsoft, and Google have a proper legal basis for handling sensitive personal information fed into the systems by users of their A.I.-powered chatbots and search engines.

Here Smith framed A.I. regulation in terms of the automobile again but in a different way. “I think one can think of the foundational model as the engine for a car,” he said. “And then you can think of the [A.I.] application as the car itself. And so you can't have a safe car without a safe engine. But it takes a lot more to have a safe car than to just have a safe engine. And so when it comes to a lot of practical safety requirements, you have to decide: Are we talking about the car or are we talking about the engine? And I think a lot of the GDPR issues that people are focused on today probably involve the car as much as the engine.”

If that leaves you scratching your head as to whether the foundation model itself—the engine in Smith’s analogy—will be GDPR compliant, you aren’t alone. But what was clear is that Microsoft wants you to think about A.I. in terms of familiar, mostly benign technologies of the past—cars and airlines—not more exotic and scary ones, such as nuclear energy or genetic engineering, as some others have suggested. We’ll see which narrative wins over the coming months.

With that, here’s some more A.I. news from the past week.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
