Most Americans are pretty terrified of artificial intelligence, according to a survey just released by Reuters and Ipsos.
A whopping 61% of respondents thought A.I. could threaten humanity itself, with just 22% coming out against that proposition. Interestingly for those trying to predict the political response to A.I.’s sudden ascent—more on that later—the survey found Trump voters were notably (70% vs. 60%) more likely than Biden voters to perceive an A.I. threat to our species, and Evangelical Christians were markedly more likely than non-Evangelical Christians to “strongly agree” with that threat’s existence (32% vs. 24%).
These stats are remarkable, especially given how recently generative A.I.’s emergence started setting off alarm bells. But they’re also unsurprising, given the sheer volume of those warnings and the fact that many of them come from the people building these systems: that Musk-and-Woz-signed open letter (“potentially catastrophic effects on society”); ex-Googler Geoffrey Hinton (“more urgent” than climate change); OpenAI CEO Sam Altman (the A.I. industry could “cause significant harm to the world”).
Of course, everyone’s freaking out—you can’t check the news these days without some expert yelling that the sky may soon fall.
I have mixed feelings about this. On the one hand, great! People are listening to experts again! I’ll take it! Of course, many less well-known experts may be justifiably annoyed that the public ignored their warnings for so long, but that inattention is kind of understandable—the world is full of very pressing concerns, and serious A.I. warnings weren’t hitting the front pages or evening news a year ago.
On the other hand, there are two big problems with today’s heightened level of fear.
Firstly, as umpteen people pointed out when that open letter was published, people really should be less concerned with A.I.’s theoretical existential threats—which serve to bolster the industry’s narrative that what they’re building is oh so powerful and therefore valuable—and more concerned with its existing effects on the spread of disinformation and the perpetuation of biases.
Secondly, when everyone’s freaking out, there’s a whiff of moral panic in the air and a strong risk of bad laws emerging. That’s not to say lawmakers and regulators shouldn’t be addressing A.I.’s myriad issues at speed—they have no choice, and we need them to do so—but the paths they choose will have massive consequences for the industry and for society.
As my colleague Jeremy Kahn noted in his must-read write-up of Altman’s Capitol Hill visit yesterday, the OpenAI CEO was, uh, open about the danger of new rules being designed in a way that leaves only big companies such as his able to comply. Europe’s open-source community and some legal experts are also concerned that the EU’s almost-there A.I. Act would mandate risk assessments that small firms and projects simply can’t handle, while potentially stifling some of A.I.’s less widely appreciated positive use cases.
Do we want a future where only big companies are effectively allowed or able to train large A.I. models? Maybe we do (to keep as tight a lid on the tech as possible) and maybe we don’t (because stifling competition rarely ends well for the user). But whatever route we take, it needs to be well thought out and chosen on the basis of the facts. A majority of Americans seeing A.I. as an existential threat to civilization—which makes the issue highly political by definition—is unlikely to help that happen.
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
David Meyer
Data Sheet’s daily news section was written and curated by Andrea Guzman.