The new generation of artificial intelligence (AI) tools are being described as “intelligent assistants” and “copilots”–a way to tell us that we still have a role in the cockpit. While that may be true today, it won’t hold for much longer, at least in the field of cybersecurity.
More than 60 startups and major vendors, including my company, have recently announced AI-powered products. Compelling as those announcements are, they’re insignificant compared to what’s coming next: Forrester Research expects the AI software market to expand to $64 billion by 2024, with cybersecurity being the fastest-growing category at a compound annual growth rate of 22.3%.
Largely, that’s good for cybersecurity and the organizations we protect. There are only 69 cybersecurity personnel for every 100 job openings. And even if we could fill all those empty seats, humans today simply can’t defend everything we need to protect on our own. We need AI to respond at the speed and scale that complex IT environments demand.
But just because cybersecurity needs AI doesn’t mean that adapting to its use will be easy. For decades, White Hat defenders have been charged with protecting everything from an organization’s printers to its most treasured intellectual property. Human ingenuity, ability, and no small amount of blood, sweat, and tears built, maintained and secured the platforms we use to live, work, and play. Being asked to turn all that over to bots is no small feat. Cybersecurity faces an identity crisis: Who we are, what we do, and the roles we play are all set to fundamentally change with the advent of AI.
For years, it’s only been humans flying the plane. In the next few years, we’ll fly alongside robot copilots. But our time in the cockpit is inevitably ending. We must begin planning our exit. While we’ll still have a role after we leave flying behind, what we do and the value we bring will be very different going forward.
The threshold at which AI will replace human defenders
For the next few years, AI will still need our help to operate. The majority of cybersecurity breaches today result in part because of the human element, including error, privilege misuse, stolen passwords, and social engineering attacks.
Without getting too deep in the weeds, AI and some basic cybersecurity hygiene (like multi-factor authentication, or MFA) can handle the majority of those incidents. AI can automate what happens when a new user joins an organization, provide them with the accounts they need to do their work, ensure that they have enabled MFA, and watch their account usage for any irregularities. While AI can manage the day-to-day, human cybersecurity professionals will supervise the more impactful decisions and handle exceptions, like what happens if a user needs some specific resource that others in the organization don’t require.
Eventually, AI will be able to handle those high-stakes decisions and exceptions as well. That’s the threshold at which we’ll need to exit the cockpit and trust AI to handle critical incidents faster and more effectively than human actors.
Even then, we’ll still have a role training, supervising, and monitoring the AI. Consider what happened with Microsoft’s Tay, which went from “innocent AI chatbot” to racist, misogynist troll in less than 24 hours. AI needs humans to set, monitor, refine, and change its parameters, the degree to which it can adapt, and the ways it interacts with other AI.
From AI’s teacher to its guardian
It’s not just that humans will need to set AI’s parameters and dictate how it should work. Humans also need to decide why AI is working: We need to present it with the right challenges and ask it the right questions.
Today’s cutting-edge AI represents a significant leap in our ability to use new technologies to solve complex problems. But that’s only possible if human researchers ask it the right questions in the right way and feed it the right information. One of the oldest rules in AI is “garbage in, garbage out.” If AI is trained on low-fidelity, mislabeled, or inaccurate data, then the outputs it creates will be worthless.
And that points to the next role we’ll need to assume once AI takes over day-to-day cybersecurity operations. As AI becomes part of our cybersecurity architecture, threat actors will try to target it with data poisoning and prompt injections. They’ll work to make AI hallucinate or turn against us. AI will protect us–and we’ll need to protect it.
The cybersecurity industry has always found ways to adapt to the newest threats and account for the latest technologies–but AI represents a new order that will test us, pushing us to evolve to a greater degree than we ever have before.
All things change–and we must change with them.
Rohit Ghai is the CEO of RSA, a global leader in identity and access management (IAM) solutions for security-first organizations. Around the world, more than 9,000 organizations rely on RSA to manage more than 60 million identities.
More must-read commentary published by Fortune:
- Return-to-office mandates: Why tax breaks are not a reason for companies in states such as Texas, Utah, and New Jersey to force employees back
- We analyzed 2 years of performance reviews for 13,000 workers. Here’s the proof that low-quality feedback is driving employee retention down
- Burnout is attacking our brains and making it harder to excel at work. ‘Deliberate calm’ can help us adapt
- The growing case for doing less: How harmless cancers are being overdiagnosed in America
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.