Al Jazeera
Technology

White House tells tech CEOs they have ‘moral duty’ on AI

The Biden administration has called on the tech industry to ensure its AI products are safe [File: Brendan Smialowski/ AFP]

Tech executives in the United States were told during a meeting at the White House that they have a “moral” duty to ensure artificial intelligence does not harm society.

The CEOs of Google, Microsoft, OpenAI and Anthropic attended the two-hour meeting about the development and regulation of AI on Thursday at the invitation of US Vice President Kamala Harris.

US President Joe Biden, who briefly attended the meeting, told the CEOs that the work they were carrying out had “enormous potential and enormous danger.”

“I know you understand that,” Biden said, according to a video posted later by the White House.

“And I hope you can educate us as to what you think is most needed to protect society as well as to the advancement.”

Harris said in a statement after the meeting that tech companies “must comply with existing laws to protect the American people” and “ensure the safety and security of their products”.

The meeting featured a “frank and constructive discussion” on the need for tech firms to be more transparent with the government about their AI technology as well as the need to ensure the safety of such products and protect them from malicious attacks, the White House said.

OpenAI CEO Sam Altman told reporters after the meeting that “we’re surprisingly on the same page on what needs to happen.”

The meeting came as the Biden administration announced a $140m investment in seven new AI research institutes, the establishment of an independent committee to carry out public assessments of existing AI systems and plans for guidelines on the use of AI by the federal government.

The stunning pace of advancement in AI has generated excitement in the tech world as well as concerns about social harm and the possibility of the technology eventually slipping out of developers’ control.

Despite being in its infancy, AI has already been embroiled in numerous controversies, from fake news and non-consensual pornography to the case of a Belgian man who reportedly took his own life following encouragement by an AI-powered chatbot.

In a Stanford University survey of 327 natural language-processing experts last year, more than one-third of researchers said they believed AI could lead to a “nuclear-level catastrophe”.

In March, Tesla CEO Elon Musk and Apple co-founder Steve Wozniak were among 1,300 signatories of an open letter calling for a six-month pause on training AI systems as “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable”.
