President Joe Biden today released his long-awaited executive order on AI. The headlines are justifiably about the order's safety and security aspects (we have a story up on that here), but there's also a fair amount in there about privacy and other civil liberties.
The U.S. lacks a comprehensive federal privacy law; its existing rules apply narrowly to children's online data (COPPA) or health information (HIPAA). Biden clearly doesn't like this. In his State of the Union earlier this year, he identified data privacy as a rare opportunity for bipartisan legislation, mostly with a focus on protecting under-18s, but also featuring “stricter limits on the personal data that companies collect on all of us.”
Now the president is using AI-related risks to bolster his case and slowly move towards action. From today’s White House statement: “AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems.”
“To better protect Americans’ privacy, including from the risks posed by AI, the President calls on Congress to pass bipartisan data privacy legislation to protect all Americans, especially kids,” the statement continued.
A plea to a broken Congress is one thing, but Biden also directed a slew of actions around “privacy-preserving” technologies and techniques. There will now be more federal support for their development, federal agencies will be encouraged to use them, and guidelines will be established to evaluate their effectiveness. The order also calls for an evaluation of how government agencies buy personally identifiable data from commercial sources such as data brokers, along with new guidance on avoiding “AI risks” when using it.
Biden's White House has previously laid out concerns about AI and data privacy (a whole section in last November's Blueprint for an AI Bill of Rights is devoted to it), but now it's actually starting to do something about the issue. The bar may be low, but I've never seen a U.S. administration be so proactive on privacy, and I'm intrigued to see whether this momentum can be maintained, or hopefully even increased.
Biden's AI order is also proactive on other fronts, in ways that ought to help tackle both longer-term and more immediate risks. Among other things, federal agencies will have to: develop AI safety and security standards and evaluate risks to critical infrastructure; start figuring out how to better support workers whose jobs are displaced; create resources for schools that want to use AI for things like personalized tutoring; and coordinate better on identifying and ending AI-powered civil rights violations.
There are some responsibilities here for Big AI: companies will have to share “safety test results and other critical information” with the government and give it a heads-up when training risky new models. But, so far, the industry is mostly being left to get on with it. Biden has already secured voluntary AI safety commitments from the big players, and the G7 today released a code of conduct that is, again, voluntary.
The U.K. is also hosting its AI Safety Summit this week, so let's see what comes out of that. Incidentally, a coalition of digital rights activists and trade unionists today issued a rebuke to Prime Minister Rishi Sunak, complaining that his event is shutting them out even though Sunak has acknowledged that the technology “will fundamentally alter the way we live, work, and relate to one another.”
More news below.
David Meyer
Want to send thoughts or suggestions to Data Sheet? Drop a line here.