Don't hold your breath for global AI rules

It will likely take an AI-related catastrophe before any international rulebook or organization begins regulating AI technologies.

Why it matters: AI innovators and researchers worry about both the doomsday scenario of a runaway super-AI and the less science-fictional but more likely harms that could follow from hasty deployment of the technology, in the form of cyberattacks, scams, disinformation, surveillance, and bias.

Driving the news: Tech policymakers meet Tuesday in Sweden, near the Arctic Circle, for the twice-yearly Transatlantic Trade and Technology Council.

What’s happening: CEOs say they support global governance of the most serious risks associated with AI.

  • The founders of OpenAI, the company behind ChatGPT, think the International Atomic Energy Agency — which exists to ensure nuclear tech is used for peaceful purposes — is a good model for restraining AI that reaches "superintelligence."
  • The Organization for Economic Cooperation and Development — an economic think tank for governments — called for global technical standards for trustworthy AI in principles published in 2019.

The big picture: There's no precedent for global regulation of a potentially dangerous field or specific technology without the cue of some catastrophic event.

  • The United Nations was built from the ashes of World War II.
  • It took the U.S.'s use of nuclear weapons against civilians and a nuclear arms race that threatened global devastation to eventually prompt the adoption of guardrails in that field.

Between the lines: The IAEA opened 12 years after nuclear bombs were dropped on Hiroshima and Nagasaki.

What they're saying: Sam Altman and his OpenAI co-founders want to see “an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security.”

  • Given that neither national nor international authorities can keep pace with AI innovation, the founders suggest companies "begin implementing elements of what such an agency might one day require," followed by national governments and eventually a global coalition of governments.

Microsoft president Brad Smith is in lockstep with OpenAI (which Microsoft funds) in wanting "proper control over AI," including both government-licensed models and privately watermarked content.

  • Smith supports specific regulations for three layers of the AI technology stack — applications, models and infrastructure — without getting into details about how this could work globally.

Sundar Pichai, Google's CEO, told "60 Minutes" he supports a global treaty system for managing AI.

BSA, a software trade association that includes Adobe, Cisco, IBM, Oracle and Salesforce as members, has been advocating for AI regulation since 2021.

Flashback: The speediest modern example of international action in the face of a technological threat was set by the negotiators of the Montreal Protocol in the 1980s, who took four years to agree on phasing out around 100 chemicals that had opened a dangerous hole in the Earth's ozone layer.

Reality check: While CEOs have offered unusually strong support for regulation in theory, their actions are often inconsistent, recalling the efforts of social media platforms to resist regulation in the 2010s.
