Artificial intelligence has been around for decades. But the scope of the conversation around AI changed dramatically last year, when OpenAI launched ChatGPT, a chatbot powered by a large language model that, once prompted, can spit out almost-passable prose in a strange semblance of, well, artificial intelligence.
Its existence has amplified a debate among scientists, executives and regulators around the harms, threats and benefits of the technology.
Now, governments are racing to pen feasible regulation, with the U.S. so far appearing to look predominantly to prominent tech CEOs, rather than to scientists and researchers, for insight into regulatory practice. And companies are racing to increase the capabilities of their AI tech as boardrooms across nearly every industry look for ways to adopt it.
With the technology's concrete harms and risks, among them deepening social inequity, climate impact, increased fraud, misinformation and political instability, pushed to the side amid predictions of super-intelligent AI, the ethical question comes into sharper focus.
The answer, not surprisingly, is nuanced. And though there is a path forward, a litany of ethical red flags remains around AI and those responsible for its creation.
'There's going to be a hell of a lot of abuse of these technologies.'
The ethical issue intrinsic to AI has nothing to do with purported concerns of developing a world-destroying superintelligence. These fears, spouted by Elon Musk and Sam Altman, have no basis in reality, according to Suresh Venkatasubramanian, an AI researcher and professor who in 2021 served as a White House tech advisor.
"It's a ploy by some. It's an actual belief by others. And it's a cynical tactic by even more," Venkatasubramanian told TheStreet. "It's a great degree of religious fervor sort of masked as rational thinking."
"I believe that we should address the harms that we are seeing in the world right now that are very concrete," he added. "And I do not believe that these arguments about future risks are either credible or should be prioritized over what we're seeing right now. There's no science in X risk."
Rather, the issue with AI is that there is a "significant concentration of power" within the field that could, according to Nell Watson, a leading AI researcher and ethicist, exacerbate the harms the technology is causing.
"There isn't a synchronicity between the ability for people to make decisions about AI systems, what those systems are doing, how they're interpreting them and what kinds of impressions these systems are making," Watson told TheStreet.
And though ordinary citizens have no say in whether, or how, these systems get created, the vast majority of people, according to recent polling by the Institute for AI Policy, want AI development to slow down. More than 80% of those surveyed don't trust tech companies to self-regulate when it comes to AI; 82% want to slow the technology's development and 71% think the risks outweigh the potential rewards.
With the power to create and deploy AI models concentrated in just a few tech giants (companies incentivized to earn revenue in order to maximize shareholder value), Watson is not optimistic that the firms deploying AI will do so responsibly.
"Businesses can save a lot of money if they get rid of middle managers and line managers and things like that," Watson said. "The prognosis is not good. There's going to be a hell of a lot of abuse of these technologies. Not always deliberately, but simply out of complacency or out of ignorance.
"A lot of these systems are going to end up having a terrible impact on people."
This impact is not some distant threat; it has been ongoing for years. Britain's Post Office Horizon scandal involved "dozens of people being wrongfully sent to jail by an algorithmic management system that said that they were stealing when they were not," Watson said.
Dozens of these convictions were later overturned.
"There are real, actual harms to people from systems that are discriminatory, unsafe, ineffective, not transparent, unaccountable. That's real," Venkatasubramanian said. "We've had 10 years or more of people actually being harmed. We're not concerned about hypotheticals."
Responsible AI in Big Tech
This concentration of control, according to Brian Green, an ethicist with the Institute for Technology, Ethics, and Culture, is potentially dangerous given the ethical questions at hand: rampant misinformation, data scraping and the training of AI models on content without notifying, crediting or compensating the original creators.
"There are lots of things to be worried about because there are just so many things that can go wrong," Green told TheStreet. "The more power that people have, the more they can use that power for bad purposes, and they might not be intending to use it for that; it might just happen as a side effect."
Though he recognized that there is a long way to go, Green, who co-authored a handbook on ethics in emerging technology, is optimistic that if companies start handling small ethical tasks now, they will be prepared to handle larger issues, such as economic disruption, when those issues arise.
If the firms behind AI start thinking intentionally about ethics, striving to make "AI that's more fair, that's more inclusive, that's safer, that's more secure, that's more private, then that should get them prepared to take on any big issues in the future," Green said. "If you're doing these small things well, you should be able to do the big things well, also."
This effort, according to Watson, needs to go beyond mere ethical intentions; it ought to involve the combination of ethics with AI safety work to prevent some of "the worst excesses" of these models.
"The people who are impacted should have a say in how it gets implemented and developed," Venkatasubramanian said. "It absolutely can be done. But we need to make it happen. It's not going to happen by accident."
The regulatory approach
Watson, citing the importance of clear, actionable regulation to ensure that the companies developing these technologies deploy them responsibly, hopes above all that alignment comes easily and regulation comes quickly. Her greatest fear is that Congress' approach to AI will mimic its approach to carbon emissions and the environment.
"There was a point where everybody, liberal, conservative, could agree this was a good thing," Watson said. "And then it became politicized and it died. The same thing could very easily happen with AI ethics and safety."
Green, though optimistic, likewise believes that everyone, from the artists impacted by generative AI to the companies developing it to the lawmakers in Washington, must actively work to ensure the technology is equitable.
"You really need either some kind of strong social movement towards doing it or you need government regulation," Green said. "If every consumer said 'I'm not going to use a product from this company until they get their act together, ethically,' then it would work."
A growing concern around regulation, however, particularly rules that would limit the kind or quantity of data AI companies can scrape, is that it would further cement Big Tech's lead over smaller startups.
Amazon (AMZN), Google (GOOGL) and Apple (AAPL) "have all the data. They don't have to share it with anybody. How do we ever catch up?" Diana Lee, co-founder and CEO of Constellation, an automated marketing firm, told TheStreet. "When it comes to information that's on the web that's publicly traded information, we feel like that's already ethical because it's already out there."
Others, such as Microsoft (MSFT), have often discussed the importance of striking a "better balance between regulation and innovation."
But these recurring fears of hindering innovation, Venkatasubramanian said, hold little water: the same executives who raise them have done little to explain how regulation would actually hurt innovation.
"All I can hear is 'we want to conduct business as usual,'" he said. "It's not a balance."
The important thing now, Venkatasubramanian said, is for regulators to avoid the "trap of thinking there's only one thing to do. There are multiple things to do."
Chief among them is clear, enforceable regulation. Venkatasubramanian co-authored the White House's Blueprint for an AI Bill of Rights, which he said could easily be adopted into regulation. The Blueprint lays out five principles designed to protect people from AI harms: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration and fallback.
"It is really important that Congress pays attention not just to AI as generative AI but AI broadly," he said. "Everyone's thinking about ChatGPT; it'd be really terrible if all the legislation that gets proposed only focuses on generative AI.
"All the harms that we're talking about will exist even without generative AI."
Chuck Schumer's AI Forums
In an effort to better inform Congress about a constantly evolving technological landscape, Senate Majority Leader Chuck Schumer (D-N.Y.) hosted the first of a planned series of nine AI forums on Sept. 13. Musk, Altman, Bill Gates and executives from companies ranging from Google (GOOGL) to Nvidia (NVDA) attended, a fact that drew widespread criticism for focusing regulatory attention on those who stand to benefit from the technology rather than on those impacted by or studying it.
"I think they missed an opportunity because everyone pays attention to the first one. They made a very clear statement," Venkatasubramanian said. "And I think it is important, critically important, to hear from the people who are actually impacted. And I really, really hope that the future forums do that."
The executives behind the companies building and deploying these models, Venkatasubramanian added, don't seem to understand what they're creating. Some, including Musk and Altman, have "very strange ideas about what we should be concerned about. These are the folks Congress is hearing from."
The path toward a positive AI future
While the harms and risks remain incontrovertible, artificial intelligence could lead to massive societal improvements. As Gary Marcus, a leading AI researcher, has said, AI, properly leveraged, can help scientists across all fields solve problems and gain understanding at a faster rate. Medicines can be discovered and produced more quickly.
The tech could even be used to help better understand and mitigate some impacts of climate change, allowing scientists to better collate data and discover predictive trends and patterns.
Current systems, however, such as LLMs like ChatGPT, "are not going to reinvent material science and save the climate," Marcus told the New York Times in May. "I feel that we are moving into a regime where the biggest benefit is efficiency. These tools might give us tremendous productivity benefits but also destroy the fabric of society."
Further, Venkatasubramanian said, there is a growing list of innovations in the field aimed at building responsible AI: new methods for auditing AI systems, instruments for examining systems for disparities and more explainable models.
These "responsible" AI innovations are vital to get to a positive future where AI can be appropriately leveraged in a net-beneficial way, Venkatasubramanian said.
"Short term, we need laws, regulations, we need this now. What that will trigger in the medium term is market creation; we're beginning to see companies form that offer responsible AI as a service, auditing as a service," he said. "The laws and regulations will create a demand for this kind of work."
The longer-term change that Venkatasubramanian thinks must happen, though, is a cultural one. And this shift might take a few years.
"We need people to deprogram themselves from the whole, 'move fast and break things' attitude that we've had so far. People need to change their expectations," he said. "That culture change will take time because you create the laws, the laws create the market demand, that creates the need for jobs and skills which changes the educational process.
"So you see a whole pipeline playing out on different time scales. That's what I want to see. I think it's entirely doable. I think this can happen. We have the code, we have the knowledge. We just have to have the will to do it."