Despite frequent talk in technology circles that generative artificial intelligence is on track to attain “human intelligence,” there are some things industry experts believe the technology simply cannot do.
Moral reasoning and human-level judgment are two key components of human cognition that a technology like AI cannot meaningfully cultivate, according to Ann Skeet, the director of leadership ethics at Santa Clara University's Markkula Center for Applied Ethics. Skeet spoke to a room of tech executives and AI experts during Fortune’s Brainstorm AI conference in San Francisco on Tuesday, alongside Kathy Pham, vice president of artificial intelligence at Workday, and Raj Mukherjee, executive vice president of marketplace product and user experience at Indeed. (Disclosure: Workday is a sponsor of Brainstorm AI.)
“Moral reasoning is developmental, just like learning how to read and write,” Skeet said. “Ultimately, you get to a full form consciousness at about 40 years old, but for some people it never happens. We need to be aware that AI can't do that – it doesn't have the capacity for moral reasoning.”
Some in the room expressed a view common among AI boosters, optimists and investors that AI and machines in general will develop at such a rate in the coming decades that the technology will simply be better than humans at anything and everything. Even if such a science-fictional outcome occurred, Pham of Workday argued that thinking among experts needs to turn toward “what parts then do we want to preserve just for our humanity.”
Mukherjee of Indeed added: “We should think about the world we want to create, and we don’t want to preclude humans.”
Pham, Skeet and Mukherjee all agreed that the technology’s uses and outcomes are still within the control of people and leaders. “This is where we decide do we want to work for the tech, or make it work for us,” Skeet said. She added that ten years ago, when she started her work at Santa Clara University, it was impossible to get business executives in a room to discuss ethics – now it’s all such leaders want to discuss.
Pham urged people in positions of power to know when to say enough is enough about how AI is used in the future, citing actor and director Ben Affleck’s recent comment to CNBC that “art is knowing when to stop.”
“Maybe good leadership is also knowing when to stop,” Pham said.