It's time to start worrying about, and regulating, AI in equal measure.
I'll tell you what convinced me. No, it wasn't the low-key but determined calls from OpenAI CEO Sam Altman, during Tuesday's (May 16) US Senate hearing on AI, for a regulatory body and global oversight. It wasn't even the more worrisome rhetoric from almost-alarmist Professor Gary Marcus. Yes, all that had an impact, and I want to dig into it. What really got me, though, were the first few minutes of what turned into a three-hour hearing.
That was when Senator Richard Blumenthal opened the hearing with a prerecorded speech that outlined the dangers of black-box, unregulated AI, and diminishing trust, ending with "This is not the future we want."
It was a good way to kick off the hearing – Altman's first – setting as it did an inquisitive and concerned tone. Except the voice and words were not Senator Blumenthal's. He explained:
"If you were listening from home you might’ve thought that voice was mine and the words from me, but in fact that voice was not mine, the words were not mine, and the audio was an AI voice cloning software trained on my floor speeches. The remarks were written by ChatGPT when it was asked how I would open this hearing."
While I noticed Senator Josh Hawley visibly smirking next to Blumenthal, I felt a chill go down my spine. I mean, I know generative AI is capable of all this, and yet, I'm not sure it had ever been presented in such stark terms, and on such a lofty and public stage.
It was, to be honest, an OMG moment – and that's putting it politely.
Risk vs reward
The rest of the hearing was far less revelatory. Senators, for once, appeared to have done their homework, talking intelligently about models, training, and content rights. They dove into how easily these chatbots can manipulate people, and how they often lapse into hallucinatory responses.
As for Altman, he made it clear that he was not there to beat back criticism of OpenAI and GPT-4.
"We think that regulatory intervention by governments will be critical to mitigating the risks of increasingly powerful models," said Altman, who supports not just a regulatory body and the licensing of models above a certain degree of power, but also some kind of global oversight, although he accepts that the latter will be harder to achieve.
Altman displayed a surprising amount of empathy, telling the congresspeople, "We understand the people are anxious about how it could change the way we live. We are, too."
Many of the assembled Senators expressed a desire to avoid the mistakes made with social media: acting too late, and assuming that antiquated policies (like Section 230) were up to the task of navigating the modern social media landscape.
This is laudable, but after hearing Senator Blumenthal's ChatGPT recording, I wondered if maybe we're already too late. Although I would say the senator's quick admission that the speech was not his is in line with Altman's recommendation that AI-generated content and images always make their origins clear.
Senators also rightly compared AI's rapid emergence to something akin to the invention of the printing press, but also, and not unreasonably, worried that it might be more like the creation of the atomic bomb.
Everyone wants this
There were also recommendations for AI 'Nutrition Labels' that would explain exactly what went into training an AI, and which would help us understand what a generative AI produces and why.
Obviously, Altman also did his best to explain that AI, and the work OpenAI does, can be a force for good. Yes, there will be job losses, but he insists there will also be a lot of job creation. Altman outlined the safeguards his company builds into development, including testing GPT-4 for six months before it was released publicly.
He added, "The benefits of the tools we’ve deployed so far vastly outweigh the risks."
But with the next US Presidential election just a year away, and the growing realization that vast numbers of people can be fooled by the content chatbots gleefully spit out, the clock is ticking.
Despite the almost unanimous agreement that we need AI regulation now, the prospects for Congress authorizing and funding an AI regulatory body anytime soon are slim. Regulation of any kind moves at its own glacial pace, and rarely seems designed to even catch up with, let alone get ahead of, risks. We have self-driving car technology on the road right now, for instance, but few nationwide rules for managing it.
Senator Blumenthal didn't just kick off the hearing with a bit of AI showmanship; he made a perhaps unintentional point: it's already too late to get ahead of this generative AI freight train. The question is, can we climb aboard, walk to the engineer's cabin, and take control before it comes off the rails?