A.I. has arrived, and CEOs are asking what to do. The answer might surprise them: Do what you know best.
It’s a safe bet that various forms of artificial intelligence, from algorithmic decision-support systems to machine learning applications, have already made their way into the front and back offices of most companies. Remarkably, generative A.I. is now demonstrating value in creative and imagination-driven tasks.
We’ve seen this movie before. The Internet. Mobile. Social media. And now artificial intelligence. With each development, business has been confronted with a new technology that holds both great promise and considerable uncertainty, adopted seemingly overnight by consumers, students, professionals, and businesses.
CEOs recognize the challenge. If they take a wait-and-see approach or simply clamp down on A.I. use, they risk missing a historic opportunity to supercharge their products, services, and operations. On the other hand, allowing the new technology to proliferate within their companies in uncoordinated, even haphazard, ways can lead not only to duplication and fragmentation, but to something much more serious: irresponsible uses of A.I., including the perpetuation of biases, amplification of misinformation, and inadvertent release of proprietary data.
What to do? A.I. is evolving so rapidly that there is no definitive playbook. But most of today’s CEOs have lived through prior technology inflection points, and we believe that experience equips them to apply three basic lessons:
Data governance must become data and A.I. governance
Governance may sound to some like heavy-handed, top-down oversight. But this is not a choice between centralization and decentralization. It’s about developing company-wide approaches and standards for critical enablers, from the technology architecture needed to support and scale A.I. workloads to the ways you ensure compliance with both regulation and your company’s core values. Without enterprise consistency, you won’t have a clear line of sight into your A.I. applications, and you won’t be able to integrate and scale them.
You don’t have to start from scratch. Most companies have established data governance to ensure compliance with data privacy regulations, such as the EU’s GDPR. Now, data governance must become data and A.I. governance.
A.I. applications and models throughout the company should be inventoried, mapped, and continuously monitored. Most urgently, enterprise standards for data quality should be defined and implemented, including data lineage and data provenance: documenting where, when, and how the data was collected or synthesized, and who has the right to use it. Some A.I. systems may be “black boxes,” but the data sets selected to train and feed them are knowable and manageable, particularly for business applications.
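To make this concrete, here is a minimal sketch, in Python, of what a single provenance entry in such an inventory could capture. The class and field names are illustrative assumptions for this article, not a schema prescribed by the Alliance or any regulator.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetProvenanceRecord:
    """Hypothetical minimal provenance entry for one training data set."""
    dataset_id: str          # identifier in the enterprise A.I. inventory
    source: str              # where the data came from (internal system, vendor, public corpus)
    collected_on: date       # when it was collected or synthesized
    collection_method: str   # how it was gathered: logs, surveys, purchase, synthetic generation
    usage_rights: str        # who has the right to use it, and on what legal or contractual basis
    feeds_models: list[str] = field(default_factory=list)  # models trained or fed by this data set

# Example entry (all values are hypothetical)
record = DatasetProvenanceRecord(
    dataset_id="claims-2023-q4",
    source="internal claims system",
    collected_on=date(2024, 1, 15),
    collection_method="operational logs, de-identified",
    usage_rights="internal analytics only; documented customer consent",
    feeds_models=["fraud-scoring-v3"],
)
```

Even a simple record like this makes lineage questions answerable at audit time rather than reconstructed after the fact.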
Employees don’t need to become data scientists–they need to become A.I.-literate
History teaches us that when a technology becomes ubiquitous, virtually everyone’s job changes. Here’s an example: The first project of the Data & Trust Alliance–a consortium we co-chair that develops data and A.I. practices–targeted what some might consider unlikely parts of our companies, human resources and procurement.
The Alliance developed algorithmic safety tools–safeguards to detect, mitigate, and monitor bias in the algorithmic systems supplied by vendors for employment decisions.
When the tools were introduced to HR and procurement professionals, they asked for education, not in how to be data scientists, but in how to be A.I.-literate HR and procurement professionals. We shared modules on how to evaluate the data used to train models, what types of bias testing to look for, how to assess model performance, and more.
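As one illustration of the kind of bias testing this literacy covers, consider the disparate impact ratio, the informal “four-fifths rule” sometimes used to screen employment-decision systems. The function below is a hypothetical sketch of that screening check, not the Alliance’s actual tooling and not a legal test.

```python
def disparate_impact_ratio(selected_protected: int, total_protected: int,
                           selected_reference: int, total_reference: int) -> float:
    """Ratio of selection rates between a protected group and a reference group.

    A value below 0.8 is often treated as a flag for potential adverse impact
    (the informal "four-fifths rule"); it is a screening heuristic, not a verdict.
    """
    protected_rate = selected_protected / total_protected
    reference_rate = selected_reference / total_reference
    return protected_rate / reference_rate

# Example: a resume-screening model advances 30 of 200 applicants from one group
# and 50 of 250 from another (hypothetical numbers).
ratio = disparate_impact_ratio(30, 200, 50, 250)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.75 -> below 0.8, warrants review
```

The point of such training is not that every HR professional writes this code, but that they know to ask vendors which checks like this were run, on what data, and with what results.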
The lesson? Yes, we need data scientists and machine learning experts. But it’s time to enhance the data and A.I. literacy of our entire workforce.
Set the right culture
Many companies have adopted ethical A.I. principles, but we know that trust is earned by what we do, more than by what we say. We need to be transparent with consumers and employees about when they are interacting with an A.I. system. We need to ensure that our A.I. systems–especially for high-consequence applications–are explainable, remain under human control, and can withstand the highest levels of scrutiny, including the auditing required by new and proposed regulations. In short, we need to evolve our corporate cultures for the era of A.I.
Another project by the Alliance was to create “new diligence” criteria to assess the value and risk inherent in targeting data- and A.I.-centric companies for investment or acquisition. The Alliance created Data Diligence and AI Diligence, but the greatest need was for Responsible Culture Diligence–ensuring that values, team composition, incentives, feedback loops, and decision rights support the new and unique requirements of A.I.-driven business.
CEOs have been here before. For some companies, it took decades and a pandemic to fully realize that “digital transformation” implicated every part of the company and its relationships with all stakeholders. And what were the results of misreading the Internet, mobile, and social? Disrupted business models and loss of competitiveness, as well as unintended consequences for society.
What will be the result of getting this one wrong? We could miss a once-in-a-generation opportunity to achieve radical breakthroughs, solve intractable problems, delight customers, empower employees, reduce waste and errors, and serve society. Far worse, we risk doing harm to our stakeholders and to future generations.
A.I. is not solely–indeed, not most importantly–a technology challenge. It is the next driver of enterprise transformation. It’s up to the CEO, the board, and the entire C-suite to lead that transformation. And the time to do so is now.
Kenneth I. Chenault and Samuel J. Palmisano are founders and co-chairs of the Data & Trust Alliance, a not-for-profit organization whose 25 cross-industry members develop and adopt responsible data and AI practices. Members include CVS Health, General Catalyst, GM, Humana, Mastercard, Meta, Nike, Pfizer, the Smithsonian Institution, UPS, and Walmart. Chenault is the chairman and managing director of General Catalyst and the former chairman and CEO of American Express. Palmisano is the former chairman and CEO of IBM.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.