Hello and welcome to Eye on A.I.
Tomorrow, Senate Majority Leader Chuck Schumer will kick off his AI Insight Forum with a packed lineup of A.I. executives in attendance. First announced back in June, it’s the first of nine listening sessions planned to discuss both the risks and opportunities posed by A.I. and how Congress might regulate the technology.
While we won’t know exactly what happens in the forum (more on that later), it’s a major show of how Congress is putting its ear to the ground on A.I. and who it’s listening to. It’s also an interesting contrast to what’s happening at the state and local levels, where we’re starting to see more action than listening.
“These forums will build on the longstanding work of our Committees by supercharging the Senate's typical process so we can stay ahead of AI's rapid development,” Schumer wrote in his latest "Dear Colleague” letter. “This is not going to be easy, it will be one of the most difficult things we undertake, but in the twenty-first century we cannot behave like ostriches in the sand when it comes to AI.”
And yet, while this type of investigation is desperately needed, the guest list and the forum's closed-door format are already drawing backlash. Executives expected to attend Wednesday’s forum include OpenAI CEO Sam Altman, Meta CEO Mark Zuckerberg, Alphabet CEO Sundar Pichai, X CEO Elon Musk, former Microsoft CEO Bill Gates, Nvidia CEO Jensen Huang, and Palantir CEO Alex Karp.
A few ethics researchers were invited, but critics have called out the Senate for seeking input largely from the powerful executives who stand to profit from these technologies. Many of these same executives have a history of publicly welcoming regulation while deploying armies of lobbyists to campaign against it behind closed doors. Not to mention that several of these companies, including Meta and Google, have recently been fined billions in the EU for mishandling data and user privacy, an issue at the core of A.I.
“This is the room you pull together when your staffers want pictures with tech industry AI celebrities. It's not the room you'd assemble when you want to better understand what AI is, how (and for whom) it functions, and what to do about it,” tweeted Meredith Whittaker, who is president of the Signal Foundation and has previously testified before Congress regarding A.I. issues like facial recognition.
Triveni Gandhi, the responsible A.I. lead at Dataiku, shared a similar perspective with Eye on A.I., saying that “it’s vital Congress consults a complete ecosystem of A.I. innovators, not just goliaths.”
“The A.I. ecosystem is massive and is made up of many different organizations of all sizes. Congress has a checkered history of favoring the incumbents with regulations, and A.I. is too important to lock out participation in these critical conversations,” she said.
There’s also concern that these meetings will be closed to the public and press and treated as classified, prompting calls for greater transparency from researchers, journalists, and responsible-tech advocates. And the call is coming from inside the house, too; just yesterday, Democratic Colorado Sen. John Hickenlooper convened a subcommittee hearing titled “The Need for Transparency in Artificial Intelligence.”
Given how significant A.I.'s impact will be across society, transparency doesn't seem like an unreasonable thing to expect.
Before we get to the rest of this week’s A.I. news, a quick note about an online event Fortune is hosting next month called "Capturing A.I. Benefits: How to Balance Risk and Opportunity."
In this virtual conversation, part of Fortune's Brainstorm A.I., we will discuss the risks and potential harms of A.I., centering the conversation on how leaders can mitigate the technology's potential negative effects so they can confidently capture its benefits. The event will take place on Oct. 5 at 11 a.m. ET. Register for the discussion here.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com