Well, how about that—on the same day that China unveiled its strict new rules for artificial intelligence safety, the U.S. government moved forward with its own, more cautious push to keep A.I. accountable.
While Beijing’s rules are typically draconian, imposing censorship on both the inputs and outputs of generative A.I. models, the U.S. National Telecommunications and Information Administration (NTIA) has merely launched a request for comment on new rules that might be needed to ensure A.I. systems safely do what their vendors promise.
Here’s NTIA Administrator Alan Davidson: “Responsible A.I. systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them. Our inquiry will inform policies to support A.I. audits, risk and safety assessments, certifications, and other tools that can create earned trust in A.I. systems.”
There are some similarities between what the NTIA is tentatively envisioning and what China’s Cyberspace Administration just dictated—though the methods seem quite different. Most notably, the Chinese rules demand that companies submit their models for official security review before they start serving the public, while the NTIA’s request for comment outlines ideas such as independent third-party audits, which could be incentivized through bounties and subsidies.
Both China and the U.S. want to battle bias in A.I. systems, but again, Beijing just orders A.I. companies not to allow their systems to be discriminatory, while the NTIA document talks about more nuanced tactics, like the use of procurement standards.
If you want to share your thoughts with the agency, you’ll find the necessary forms here. The deadline is June 10, by which point U.S. officials will also have a better idea of what Europe’s A.I. rules might end up looking like.
The EU’s A.I. Act was first proposed a couple of years back, but a lot has happened in that time—the European Commission’s original proposal didn’t anticipate that chatbots would need regulating; insert wry chuckle here—so lawmakers are now trying to bring it up to date. Two weeks from today, the European Parliament’s committees dealing with the bill will vote on the general shape of the version they’d like to see. By the time the full Parliament votes on the bill next month, more details will need to have been worked out. Then it goes to backroom “trilogue” negotiations with the Commission and representatives of the EU’s member states.
All this painstaking democratic wrangling is a far cry from China’s simple imposition of A.I. rules, but hopefully the result will be somewhat friendlier both to the companies providing such systems and to the citizens who want a straight answer from them.
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
David Meyer
Data Sheet’s daily news section was written and curated by Andrea Guzman.