In this episode, we dive into some of the AI safety reports that have been issued in recent weeks. Jeffery Atik first takes us through the basics of attention-based AI, and then into reports on AI safety from OpenAI and Stanford. Exactly what AI safety covers remains opaque (and toxic, in my view, after the ideological purges committed in the name of "trust and safety" by Silicon Valley's content suppression bureaucracies). But there's no doubt that a potential existential issue lurks below the surface of the most ambitious AI projects.
Whether or not ChatGPT's stochastic parroting will ever pose a threat to humanity, Nick Weaver reports, it clearly poses a threat to a lot of people's reputations.
I confess that there's surprisingly little cyberlaw in the biggest intel leak of the last decade. It turns out that leakers can do as much damage as cyberspies, just by folding, carrying, and photographing classified documents. While there's some evidence that the Russian government may have piggybacked on the leak to sow disinformation, Nick says, the real puzzle is the leaker's motivation. That leads us to the question of whether being a griefer is grounds for losing your clearance.
Paul Rosenzweig educates us about the Restricting the Emergence of Security Threats that Risk Information and Communications Technology (RESTRICT) Act, which would empower the administration to limit or ban TikTok. He highlights the most prominent argument against the bill, which is, no surprise, the discretion the act would confer on the executive branch. The bill's authors, Sen. Mark Warner (D-VA) and Sen. John Thune (R-SD), have responded to this criticism, but it looks as though they'll be offering substantive limits on executive discretion only in the heat of Congressional action.
Nick is impressed by the law enforcement operation that shuttered Genesis Market, where credentials were widely sold to hackers. The data seized by the FBI in the operation will pay dividends for years.
I give a warning to anyone who has left a sensitive intelligence job to work in the private sector: If your new employer has ties to a foreign government, the Director of National Intelligence has issued a new directive that (sort of) puts you on notice that you could be violating federal law. The directive has detailed provisions for how the intelligence community will tell its current employees about the new post-employment restrictions, but it offers very little guidance to intelligence community alumni who have already moved to the private sector.
Nick is enthusiastic about the tough tone taken by the Treasury in its report on the illicit finance risk in decentralized finance.
Paul and I cover Utah's bill requiring teens to get parental approval to join social media sites. After twenty years of mocking red states and their Congressional delegations for trying to control the internet's impact on kids, it looks to me as though Knowledge Class parents are getting worried about their own children. When the idea of age-checking internet users gets endorsed by the UK, Utah, and The New Yorker, I suggest, those arguing against the proposal may have a tougher time than they did in the 90s.
And in quick hits:
- Nick comments on the massive 3CX supply-chain hack, which seems to have been a fishing-with-dynamite effort to steal a few people's cryptocurrency.
- I raise doubts about a much-cited claim that a Florida city's water system was the victim of a cyberattack.
- Nick unloads on Elon Musk for drawing a German investigation over Twitter's failure to promptly remove hate speech.
- Paul and I note the UK's most recent paper on how to exercise cyber power responsibly.
- Nick and I puzzle over the conflict between the Biden administration and the New York Times about a government contract that supposedly undermined the administration's stance on spyware.
- And for those who listen to the podcast for news about the Canadian Ski Marathon, I have released a low-fi video that should appeal to both of you.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
The post What makes AI safe? appeared first on Reason.com.