TechRadar
Lance Ulanoff

AI regulation is impossible without compromise, which means the US is doomed

[Image: A digital face in profile against a digital background]

Nuance. That's the key to managing the balance between supporting artificial intelligence (AI) innovation and regulating it. And I don't know if you've noticed, but we as a people, especially in the US, are not particularly good at nuance.

We might just be doomed.

That was my initial thought when I read US Senator Chuck Schumer's new AI regulatory framework, which is built on five very sensible pillars:

  • Security
  • Accountability
  • Foundations
  • Explainability
  • Innovation

Buried inside those pillars are rational plans to address how rogue states might try to use AI to harm the US, managing AI-generated misinformation, protecting our elections from AI-bolstered fraud, making algorithms transparent, building a safety net for at-risk workers, and, naturally, bolstering the development of AI so that we can stay ahead of rivals like China.

Some of these plans, like the need to beat China and the equal need to protect jobs, seem at cross-purposes. Maybe they are, but the reality is that any regulatory approach to AI will take exactly that kind of nuance: the ability to thread the needle between these and other competing imperatives.

Again, I applaud Schumer for such a level-headed approach, and I also realize just how hard it will be for him to reach an accord on any of this.

And that's what's required in the US, at least. In order for us to finally have some AI regulation that addresses all these concerns while not completely stifling development, we need Congress and constituents to agree on some things. They must see the benefits and risks clearly enough to develop and vote through reasonable regulation.

I have zero faith.

No ChatGPT can bring us together

Few places in the world feel as deeply divided as the US. What sells ideas here is rhetoric that strips away all nuance and usually lays bare two diametrically opposed opinions. In current US policymaking, there is no gray.

Sure, our current President and Schumer (a Senate leader) understand the risks and rewards of AI and have spent considerable time trying to explain them to the country, but that does not mean the rest of our elected leaders or the general populace get it.

I choose not to mention a few hot-button topics here because, well, their discussion does not invite nuance and they're sure to derail any rational discourse we might have about AI. Still, there's a lesson in the ongoing battles over weapons of violence and the start of life. If you happen to stand anywhere near the middle on either topic, you are drowned out. The only arguments that matter are those for and those against. Compromise and nuance are not part of the vocabulary.

AI will demand compromise. We can't and should not kill it. We also can't allow it to run amok, unfettered by regulation of any sort.

Meet in the middle

In the coming days, weeks, and months, senators, representatives, the President of the United States, experts, onlookers, and regular people will discuss the merits and dangers of AI. My concern is that what we're in for is less a conversation than two sides competing for the spotlight: those who believe AI is the best thing since peanut butter and jelly, and those who view AI as a Frankenstein monster already on the loose.

What may save us, though, is that some of the very same people developing the most powerful AI (think OpenAI's Sam Altman and Elon Musk) have already sounded the alarm. They're almost begging for regulation.

I bet many are thinking, "If even they want regulation, maybe it's the right thing to do." However, they also might not register that, in the very same breath, people like Altman are reminding us that AI has the potential to transform the world in innumerable positive ways.

Will consumers and even legislators hear that, or will that part of the dialogue sound to them like the gibberish of the adults in a Peanuts cartoon?

Consumers are rightfully dazzled by ChatGPT, DALL-E 2, and Midjourney, but more than a little frightened, too. Fear is the kind of emotion that wins out over everything else. You are far more likely to act on fear than on amusement, or even on the satisfaction of ChatGPT finishing a project for you.

Without a nuanced understanding of AI, both the good and the bad, no useful regulation can come about. But in a world where most people don't invite, or can't understand, nuance, we're probably never going to get it.

A decade from now, we'll look back on this time, assuming the AIs let us, wondering if things would've gone differently if we'd just turned down the rhetoric, listened to both sides, and then crafted fair and useful AI regulation.
