Fortune
Diane Brady

The deepfake threat to CEOs

(Credit: Getty Images)

Good morning.

Heading into this election, many feared the campaigns would get hijacked by deepfakes: images and audio made to look and sound almost exactly like public figures, used to spread disinformation. Yesterday, the Office of the Director of National Intelligence (ODNI), the Federal Bureau of Investigation (FBI), and the Cybersecurity and Infrastructure Security Agency (CISA) issued a warning about foreign adversaries, especially Russia, manufacturing fake videos that claim to show ballot stuffing, cyberattacks, and other election fraud. That followed another warning last week, stating that “Russian influence actors” had manufactured a video falsely depicting individuals claiming to be from Haiti voting illegally in multiple counties in Georgia.

These are terrible lies, but few seem to be falling for them yet. Still, the tools used to hone election disinformation are here to stay, and their effects are already being felt in frauds like the deepfake “CFO” of U.K. design firm Arup, who tricked an overseas employee into transferring more than $25 million to criminals.

I recently spoke with Irish libel lawyer Paul Tweed about the legal implications of deepfakes as he was promoting his book, From Holywood to Hollywood: My Life as an International Libel Lawyer to the Rich and Famous. I wouldn’t normally seek out someone whose clients range from Britney Spears to the British Royal Family, but Tweed also represents, and goes after, big corporations. He’s keeping an eye on the new liability risks created by generative AI and the deepfakes it enables.

In his view, it’s just a matter of time before we see a deepfake CEO scandal that sends a stock price tumbling before the truth gets out. “It really doesn't matter whether you are the chairman of a blue-chip corporation, or a Hollywood A-Lister worried about protecting your brand, you've got the exact same problem,” says Tweed.

His goal: to hold the platform companies accountable for failing to stop the spread of disinformation. While Section 230 of the Communications Decency Act of 1996 protects online platforms and users from being held liable for third-party content, it’s not clear whether the act covers content generated by a platform’s own AI. “What we're trying to do is to put the platforms in a corner,” he says. “If I can get a class action going in the States, I'm absolutely confident I can get it enforced in Ireland” (where many big tech players are incorporated). Nobody, of course, wants to be the test case that could make that happen.

More news below. 

Diane Brady
diane.brady@fortune.com
Follow on LinkedIn
