Reason
Politics
Stewart Baker

Taking AI Existential Risk Seriously

This episode is notable not just for cyberlaw commentary, but for its imminent disappearance from these pages and from podcast playlists everywhere. Having promised to take stock of the podcast when it reached episode 500, I've decided that I, the podcast, and the listeners all deserve a break.

So, I'll be taking one after the next episode. No final decisions have been made, so don't delete your subscription, but don't expect a new episode any time soon. It's been a great run, from the dawn of the podcast age in 2014, through the ad-fueled podcast boom, which I manfully resisted, to the podcast market correction that's still under way. It was a pleasure to engage with listeners from all over the world. (Yes, even the EU!)

As they say, in the podcast age, everyone is famous for fifteen people. That's certainly been true for me, and I'll always be grateful for listeners' support – not to mention for all the great contributors who've joined the podcast over the years.

Turning back to cyberlaw, a surprising number of people are arguing that there's no reason to worry about existential and catastrophic risks from proliferating or runaway AI. Some of that is people seeking clever takes; a lot of it is ideological, driven by fear that talking about the end of the world will distract attention from the dire danger of face recognition. One useful antidote to this view is the Gladstone Report, written for the State Department's export control agency. David Kris gives an overview of the report for this episode of the Cyberlaw Podcast. The report explains the dynamic, and some of the evidence, behind all the doom-saying, a discussion that is more persuasive than the report's prescriptions for avoiding disaster through regulation.

Speaking of the moral panic over face recognition, Paul Stephan and I unpack a New York Times piece saying that Israel is using face recognition in its Gaza conflict. Actually, we don't so much unpack it as turn it over and shake it, only to discover it's largely empty. Apparently, the editors of the NYT thought that tying face recognition to Israel and Gaza was all their readers needed to understand that the technology is evil, evil, evil.

More interesting is this story arguing that the National Security Agency, traditionally at the forefront of computers and national security, may have to sit out the AI revolution. The reason, David tells us, is that NSA's access to mass quantities of data for training is complicated by rules and traditions against intelligence agencies accessing data about Americans. And there are few training databases not contaminated with data about and by Americans.

While we're feeling sorry for the intelligence community's struggles with new technology, Paul notes that Yahoo News has assembled a long analysis of all the ways that personalized technology is making undercover operations impossible for CIA and FBI alike.

Michael Ellis weighs in with a review of a report by the Foundation for Defense of Democracies on the need for a U.S. Cyber Force to man, train, and equip warfighting nerds for Cyber Command. It's a bit of an inside-baseball solution, heavy on organizational boxology, but we're both persuaded that the current system for attracting and retaining cyberwarriors is not working. As "Yes, Minister" would tell us, we must do something, and this is something.

In contrast, it's fair to say that the latest Senate Judiciary proposal for a "compromise" 702 renewal bill is nothing, or at least nothing much – a largely phony compromise that substitutes ideological baggage for real-world solutions. David and I are unimpressed—and surprised at how muted the Biden administration has been in trying to wrangle the Democratic Senate toward a workable bill.

Paul and Michael review the latest trouble for TikTok – a likely FTC lawsuit over privacy. And Michael and I puzzle over the stories claiming that Meta may have "wiretapped" Snapchat analytic data. The stories come from trial lawyers suing Meta, and they raise a lot of unanswered questions, such as whether users consented to the collection of the data. In the end, we can't help thinking that if Meta had 41 of its lawyers reviewing the project, they probably found a way to avoid wiretapping liability.

The most intriguing story of the week is the complex and surprising three- or four-cornered fight in northern Myanmar over hundreds of thousands of women trapped in call centers to run romance and pig-butchering scams. Angry that many of the women and many of the victims are Chinese, China persuaded a warlord to attack the call centers and free many of the women, deeply embarrassing the current Myanmar ruling junta and its warlord allies, who'd been running the scams. And we thought our southern border was a mess!

And in quick hits:

Direct Download: https://traffic.libsyn.com/steptoecyber/The_Cyberlaw_Podcast_499_.mp3

You can subscribe to The Cyberlaw Podcast using iTunes, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.

