ChatGPT burst onto the technology scene, gaining 100 million users by the end of January 2023, just two months after its launch, and bringing with it a looming sense of change.
The technology itself is fascinating, but part of what makes ChatGPT uniquely interesting is the fact that essentially overnight, most of the world gained access to a powerful generative artificial intelligence that they could use for their own purposes. In this episode of The Conversation Weekly, we speak with researchers who study computer science, technology and economics to explore how the rapid adoption of technologies has, for the most part, failed to change social and economic systems in the past – but why AI might be different, despite its weaknesses.
Spending just a few minutes playing with new generative AI algorithms can show you just how powerful they are. You can open up DALL-E, type in a phrase like “dinosaur riding motorcycle across a bridge,” and seconds later the algorithm will produce multiple images more or less depicting what you asked for. ChatGPT does much the same, just with text as its output.
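For readers who want to go a step beyond the web interface, here is a hedged sketch of the same idea using OpenAI’s `openai` Python package as it looked in early 2023; the API key is a placeholder, the prompt is simply the one from the example above, and the library’s interface may well have changed since.

```python
# A hedged sketch of text-to-image generation with the openai Python package
# (early-2023, pre-1.0 interface). Treat it as illustrative, not current.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

response = openai.Image.create(
    prompt="dinosaur riding motorcycle across a bridge",
    n=2,             # ask for two candidate images
    size="512x512",  # sizes supported at the time: 256x256, 512x512, 1024x1024
)

# Each generated image comes back as a temporary URL.
for item in response["data"]:
    print(item["url"])
```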
These models are trained on huge amounts of data taken from the internet, and as Daniel Acuña, an associate professor of computer science at the University of Colorado, Boulder, in the U.S., explains, that can be a problem. “If we are feeding these models data from the past and data from today, they will learn some biases,” Acuña says. “They will relate words – let’s say about occupations – and find relationships between words and how they are used with certain genders or certain races.”
The problem of bias in AI is not new, but with increased access, more people are now using it, and as Acuña says, “I hope that whoever is using those models is aware of these issues.”
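Acuña’s point about word associations can be made concrete with a toy example. The sketch below uses tiny, made-up word vectors (real models learn embeddings with hundreds of dimensions from web-scale text) and compares how closely two occupation words sit to “he” versus “she”; the numbers are invented purely for illustration.

```python
# A minimal, hypothetical sketch of the word-association bias Acuña describes.
# The 3-dimensional "embeddings" below are made up; real models learn such
# vectors from patterns in web-scale training data.
import numpy as np

embeddings = {
    "nurse":    np.array([0.1, 0.9, 0.2]),
    "engineer": np.array([0.8, 0.2, 0.3]),
    "he":       np.array([0.9, 0.1, 0.2]),
    "she":      np.array([0.1, 0.8, 0.3]),
}

def cosine(a, b):
    """Cosine similarity: how closely two word vectors point the same way."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for occupation in ("nurse", "engineer"):
    bias = cosine(embeddings[occupation], embeddings["he"]) \
         - cosine(embeddings[occupation], embeddings["she"])
    leaning = "he" if bias > 0 else "she"
    print(f"{occupation}: leans toward '{leaning}' (difference = {bias:+.2f})")
```

Measuring differences like this is the basic intuition behind the association tests researchers use to quantify bias in trained models.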
With any new technology, there is always a risk of misuse, but these concerns are usually accompanied by hope that as people gain access to better tools, their lives will improve. That theory is exactly what Kentaro Toyama, a professor of community information at the University of Michigan, has studied for nearly two decades.
“What I ultimately discovered was that it is quite possible to get research results that were positive, where some kind of technology would enhance a situation in a government or school, or in a clinic,” explains Toyama. “But it was nearly impossible to take that technological idea and then have it have impact at wider scales.”
Ultimately, Toyama came to believe that “technology amplifies underlying human forces. And in our current world, those human forces are aligned in a way that the rich get richer and inequality keeps growing.” But he was open to the idea that if AI could be inserted into a system that was trying to improve equality, then it would be an excellent tool for that.
Technologies can change social and economic systems when access increases, according to Thierry Rayna, an economist who studies innovation and entrepreneurship. He has studied how widespread access to digital music, 3D printing, blockchain and other technologies fundamentally changes the relationship between producers and consumers. In each of these cases, “increasingly people have become prosumers, meaning they are actively involved in the production process.” Rayna predicts the same will be true with generative AI.
Rayna says that “In a situation where everybody’s producing stuff and people are consuming from other people, the main issue is that choice becomes absolutely overwhelming.” Once an economic system reaches this point, according to Rayna, platforms and influencers become the wielders of power. But Rayna thinks that once people can not only use AI algorithms, but train their own, “It will probably be the first time in a long time that the platforms will actually be in danger.”
This episode was written and produced by Katie Flood and hosted by Dan Merino. The interim executive producer is Mend Mariwany. Eloise Stevens does our sound design, and our theme music is by Neeta Sarl.
You can find us on Twitter @TC_Audio, on Instagram at theconversationdotcom or via email. You can also subscribe to The Conversation’s free daily email here. A transcript of this episode will be available soon.
Listen to “The Conversation Weekly” via any of the apps listed above, download it directly via our RSS feed or find out how else to listen here.
Daniel Acuña receives funding from the US Office of Research Integrity grants ORIIR180041, ORIIIR190049, ORIIIR200052, and ORIIIR210062, related to automated methods to detect image manipulation and plagiarism. He has also received funding from the National Science Foundation, the Sloan Foundation, and DARPA through the Center for Open Science's SCORE project.
Kentaro Toyama does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
This article was originally published on The Conversation. Read the original article.