Fortune
Fortune Staff

Morals, judgement among things that AI tech cannot do, experts say

Experts from Santa Clara University, Indeed, and Workday in discussion during Fortune's Brainstorm AI conference in 2024.

Despite frequent talk in technology circles that advancements in generative artificial intelligence technology are on track to attain a level of “human intelligence,” there are some things industry experts believe the technology simply cannot do.

Moral reasoning and human-level judgment are two key components of human cognition that a technology like AI cannot meaningfully cultivate, according to Ann Skeet, the director of leadership ethics at Santa Clara University's Markkula Center for Applied Ethics. Skeet spoke to a room of tech executives and AI experts during Fortune’s Brainstorm AI conference in San Francisco on Tuesday, alongside Kathy Pham, vice president of artificial intelligence at Workday, and Raj Mukherjee, executive vice president of marketplace product and user experience at Indeed. (Disclosure: Workday is a sponsor of Brainstorm AI.)

“Moral reasoning is developmental, just like learning how to read and write,” Skeet said. “Ultimately, you get to a full-form consciousness at about 40 years old, but for some people it never happens. We need to be aware that AI can't do that – it doesn't have the capacity for moral reasoning.”

Some in the room expressed a view common among AI boosters, optimists, and investors: that AI and machines in general will develop at such a rate in the coming decades that the technology will simply be better than humans at anything and everything. Even if such a science-fictional outcome occurred, Pham of Workday argued, thinking among experts needs to turn toward “what parts then do we want to preserve just for our humanity.”

Mukherjee of Indeed added: “We should think about the world we want to create, and we don’t want to preclude humans.”

Pham, Skeet, and Mukherjee all agreed that the technology's uses and outcomes are still in the control of people and leaders. “This is where we decide do we want to work for the tech, or make it work for us,” Skeet said. She added that ten years ago, when she started her work at Santa Clara University, it was impossible to get business executives in a room to discuss ethics – now it’s all such leaders want to discuss.

Pham urged people in positions of power to know when to say enough is enough when it comes to how AI is used in the future, citing actor and director Ben Affleck’s recent comment to CNBC that “art is knowing when to stop.”

“Maybe good leadership is also knowing when to stop,” Pham said.
