Fortune
Sage Lazzaro

What responsibility do AI companies have when people misuse their products?

Photo of a TV screen showing law enforcement forensic experts searching the remains of a burnt-out Tesla Cybertruck for evidence. (Credit: Ethan Miller—Getty Images)

Hello and welcome to Eye on AI. In today’s edition… Who is responsible if AI contributes to harm?; Anthropic will reportedly triple its 2024 valuation in a new funding round; Meta is hosting AI chatbots that break its policies; Microsoft open-sources Phi-4; AWS announces $11 billion for Georgia data centers; and AI healthcare funding levels out.

Here are two news stories you may have seen over the last few days: the man who blew up a Cybertruck outside the Trump hotel in Las Vegas used ChatGPT to help plan the attack, and people are using the AI video tool Runway to insert Minions cartoon characters into real footage of mass shootings to skirt content filters and post the clips on social media.

The stories have sparked debate over how much AI itself is to blame and, by implication, to what extent AI should be regulated. Is AI just a tool that can be used for good or ill? Is there something inherently more dangerous about AI than, say, a Google search, or traditional video editing software? When AI is involved in incidents that cause or could cause harm, should we hold the developers of that technology in any way responsible for what has occurred? 

If this sounds familiar, that’s because this is largely the same debate we’ve been having about social media for over a decade (and were having again this week in the wake of Meta’s sharp shift away from content moderation). It’s also the same conversation America has long been having about guns. But that doesn’t mean we don’t need to be having this conversation about AI. 

When ease of use is a bug, not a feature

The information ChatGPT provided to Matthew Livelsberger, the man who killed himself inside his explosive-laden Cybertruck, which then detonated outside the Trump hotel in Las Vegas, was taken from the internet and was therefore available through other means. At this time, we have an idea of what he asked ChatGPT—questions about explosive targets, the speed at which certain rounds of ammunition would travel, and whether fireworks are legal in Arizona, according to police—but not what information ChatGPT provided (OpenAI told The AP that the chatbot did provide warnings against harmful or illegal activities).

We also know that ChatGPT makes accessing information easier and more conversational—that’s the whole point of it. It’s why some top scientists have warned that generative AI could enable people to create bioweapons, for example, even when they wouldn’t otherwise have the knowledge or skills to do so. Sometimes, a barrier to entry, or “friction” as it’s often called in the tech world, is a good thing.

In the case of the Runway videos, there’s a similar consideration: generative AI makes creating such content far too easy. Yes, someone could perhaps create the same videos with traditional editing software such as Photoshop or After Effects. But while both sets of tools can be used to create similar outputs, the latter requires significant skill and experience, as well as the purchase of a pricey software package. The former requires knowing how to write a sentence.

The difference with AI

While it’s true that other technologies have also been used for both good and ill, Vincent Conitzer, a professor of computer science and a member of the Responsible AI group at Carnegie Mellon, tells me AI is different in some key respects.

“One is that our understanding of generative AI is still limited in important ways. We don't deeply understand why these systems often work as well as they do, we can't predict what outputs they will produce or figure out why they produced the output they did. And, most importantly here, our techniques for keeping them safe are still much too limited and brittle,” he said.

What’s also unique to AI is how quickly it’s being developed and released into the world. The creators of and investors in generative AI describe the technology as powerful enough to transform the economy and society. That kind of power means we should pay particular attention to what can go wrong.

Center for AI Safety director Dan Hendrycks tells me we shouldn’t wait for tragic or catastrophic incidents to occur. 

"This is especially important as in the coming months I expect to see continued rapid progress. That’s exciting on one level, but it serves no one to pretend that it can be achieved without risk mitigation and common sense safeguards,” he said.

Thanks for reading. Now, here’s more AI news.

Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com
