At a recent panel discussion, someone asked, “Are you excited or concerned about AI?” My answer was, “Yes.” I am excited about AI–and I am deeply concerned. Any technology that affords extraordinary power must also be wielded with extraordinary care. The more powerful the tool, the more care is required.
I’ll start with some concerns. My first is jobs. At Indeed, our mission is to help people get jobs. We spend every day thinking about technology’s impact on people’s livelihoods. The question of whether technology will help or hurt jobs goes back at least to the early 19th century, when artisan weavers in England–the Luddites–smashed the machines that threatened to replace them. It’s possible that thousands of years earlier, someone tried smashing a wheel because their job was at risk.
Waves of technological innovation–and disruption–have gotten faster over time. While the effects of the Industrial Revolution unfolded gradually over many generations, the Internet turned the travel, retail, and music industries upside down within a decade. With AI, students may find themselves learning skills in college that are obsolete by the time they graduate.
I’d like to take this opportunity to make amends on behalf of all technologists. Tech evangelists use the word “disruption” as if it were purely positive. We pitch new technologies to “disrupt the travel industry” or “disrupt the transportation industry.” While consumers benefit from convenience and lower costs, and more jobs are eventually created, the immediate consequence is to upend the lives of travel agents and taxi drivers. “Luddite” has come to mean “anti-progress,” but we need to remember that the Luddites’ fears were legitimate.
As concerned as I am about jobs, I have an even deeper underlying concern about bias. AI is powered by data, data comes from humans, and humans are flawed. This is especially true in employment: in a well-known 2003 study, identical resumes with white-sounding names received 50% more interview callbacks than those with Black-sounding names. When AI models are built from data with embedded bias, the resulting models replicate and scale that bias.
A recent study graphically represented this phenomenon. When asked to create images related to job titles and crime, generative AI amplified stereotypes about race and gender. White Americans make up 70% of fast-food workers, but 70% of the image results depicted workers with darker skin tones. Women make up 39% of doctors but appeared in only 7% of the image results. Throughout the study, higher-paying roles were represented primarily by perceived male workers and those with lighter skin, while lower-paying jobs were dominated by perceived female workers and those with darker skin. Images generated for “inmate,” “drug dealer,” and “terrorist” likewise amplified stereotypes.
So yes, I have considerable concerns about AI. And yet, I am also very excited.
Recent advancements in AI have been breathtaking. In just the past few years, we’ve seen farmers use AI to combat pests and disease, and an AI-powered brain implant help a paralyzed man walk using his thoughts. The latest AI darling, ChatGPT, has set new benchmarks for machines in feats of supposed human intelligence: scoring around the 90th percentile on the SAT, GRE, LSAT, and Uniform Bar Exam, earning a 4 or 5 on 13 different AP exams, and even acing Introductory through Advanced Sommelier theory.
AI also helps people get jobs. Every month, 350 million job seekers visit Indeed, where AI powers simple, fast connections to 30 million jobs. Thanks to AI, someone gets hired on Indeed every three seconds.
Despite the staggering innovation all around us, AI is still in its infancy. If we hope to continue to benefit from the promise of AI, we need to focus considerable time and energy on addressing the risks.
The first step is admitting we have a problem. Bias, toxicity, and hate are intermixed with the useful signals that enable AI to solve meaningful problems. In response, we have established a Responsible AI team dedicated to fairness for job seekers and employers, charged with measuring and mitigating unfair bias in our algorithmic products. The team’s members range from astrophysicists to sociologists, and their approach combines fairness evaluations, tool-building, education, and outreach. We believe algorithmic fairness is not a purely technical problem with purely technical solutions; our work treats it as a socio-technical issue, embedded in both the technology and the social systems that interact with it.
Earlier this year, I had the opportunity to speak with historian Dr. Ibram X. Kendi, who framed this issue with startling clarity: “I see the greatest danger as the supposition that AI is actually artificial, that humans did not create AI, and humans have not baked their own racist, sexist, and ethnocentric ideas into the AI that they create. If we as human beings say that artificial intelligence is fundamentally artificial, then it'll become a new way to exclude people because of their race that's defensible.”
To build truly responsible AI systems, we need to fundamentally change the way we build them. To start, we need to prepare the current and next generations of tech workers for technology’s profound impact on people’s lives. Along with statistics and linear algebra, technologists should be required to study history, philosophy, ethics, and literature. A formal code of ethics for AI practitioners, such as an adaptation of the physician’s Hippocratic oath, would be an important starting point.
Critically, we need to change who is building these systems. Every day, people are making vital decisions in rooms with little to no representation from marginalized groups, the very people most likely to be negatively impacted by those decisions. Women make up roughly 20% of recent Computer Science Ph.D.s, and barely 6% of Ph.D. holders are Black or Latinx. I say this as a white, straight, cisgender man: people like me should not be alone in making decisions that affect millions of people. Representation on its own will not solve the fundamental challenges facing AI, but I believe we will not overcome those challenges unless marginalized groups have an equitable seat at the table.
We stand at the crossroads of extraordinary innovation and disruption. With great power comes great responsibility. If we want to ensure that AI will benefit all humanity, we need to embrace our responsibility and put humanity at the center.
Chris Hyams is the CEO of Indeed.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.