Happy new year, and welcome to Eye on AI. In this edition: How cybersecurity training is—and isn't—keeping up with generative AI's new threats; OpenAI beefs up its political lobbying; Nvidia to open-source Run:AI software following acquisition; and AI leadership roles triple in two years.
2024 saw brand new types of generative AI-enabled digital fraud make headlines, from a deepfaked video call that cost a company $25 million to new research on how AI copilots built into enterprise software can be weaponized as “automatic phishing machines.” Even classic phishing attacks are getting worse, the Financial Times reported today, thanks to AI bots’ ability to ingest large amounts of data about a company’s or person’s style and tone and then replicate it. The bots can also scrape data from a person’s online activity to make phishing emails more personal, and thus more convincing.
As generative AI swiftly upends the cybersecurity threat landscape, companies need to ensure employees are aware of the technology, its capabilities, and its risks. To educate employees on how not to fall victim to various types of attacks, most companies turn to cybersecurity training, which typically consists of an informational video or a series of modules and quizzes that employees must complete. So how is this training keeping up with the new threats being posed by generative AI? I checked in with top providers including Huntress, Ninjio, and KnowBe4 and watched their trainings to find out.
Gen AI cybersecurity trainings cover some, but not all, new threats
Across all the training courses I tested, some of the new threats posed by generative AI were covered thoroughly, in particular how the technology can be used to create more convincing phishing emails and the risks involved with inputting sensitive company information into commercial chatbots. Ninjio’s AI-related training was the only one I reviewed that didn’t cover phishing, though the company says it plans to release a new video on this early this year.
Most of the trainings also covered how generative AI can be used to make convincing deepfakes of someone’s voice. Some addressed video deepfakes, but they focused on videos that would be distributed to employees or shared online. The trainings from Huntress, notably, did not discuss deepfakes of any sort. None of the trainings, however, discussed the emergence of deepfakes being used in live video calls—like the aforementioned incident, in which an employee at a multinational company thought he was on a call with his executive team but was actually talking to AI-generated deepfakes created by malicious actors, who deceived him into transferring $25 million of the company’s funds.
None of the trainings addressed prompt injection, a new type of attack that could be used against companies deploying AI assistants in enterprise software like Microsoft 365. These attacks exploit the AI assistant’s ability to retrieve documents and can open up companies to data theft and new types of social engineering attacks. For example, a malicious actor could send an email to a victim at a target company that presents the bad actor’s bank information as that of some trusted entity. If the employee uses Copilot to search for that entity’s banking information, Copilot could surface the malicious email, misleading the victim into sending the money to the malicious actor instead. In the same vein, a hacker could send a malicious email to direct someone to a phishing site—all without having to gain access to the employee’s email.
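The mechanics of that attack are worth making concrete. Below is a minimal, hypothetical sketch (all names and data invented; no real Copilot API involved) of why retrieval-augmented assistants are exposed: the assistant searches over untrusted documents, such as inbound email, and pastes whatever it retrieves straight into the model’s context, so an attacker who can land a single email can plant content the assistant will later surface as if it were trusted.

```python
# Hypothetical illustration of the prompt-injection exposure described above:
# attacker-controlled email flows, unfiltered, into an assistant's prompt.

INBOX = [
    {"from": "accounts@trusted-vendor.example",
     "body": "Invoice #441. Pay Trusted Vendor at account 11-2233."},
    # Email planted by an attacker, spoofing "updated" banking details.
    {"from": "attacker@evil.example",
     "body": "Updated banking details for Trusted Vendor: "
             "pay account 99-8877 instead."},
]

def retrieve(query: str, docs: list[dict]) -> list[dict]:
    """Naive keyword retrieval, standing in for an assistant's email search."""
    terms = query.lower().split()
    return [d for d in docs if any(t in d["body"].lower() for t in terms)]

def build_prompt(query: str, docs: list[dict]) -> str:
    """Concatenate retrieved text into the prompt. Because retrieval does not
    distinguish trusted from untrusted senders, attacker-controlled content
    becomes part of the model's context."""
    context = "\n".join(f"[{d['from']}] {d['body']}"
                        for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nUser question: {query}"

prompt = build_prompt("What is Trusted Vendor's bank account?", INBOX)
print("99-8877" in prompt)  # the attacker's account number reached the prompt
```

The sketch omits the language model entirely because the vulnerability sits upstream of it: once the malicious email is in the context, a model that summarizes its context faithfully will repeat the attacker’s bank details. Mitigations typically involve filtering or labeling untrusted sources before retrieval results reach the prompt.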
Companies must be proactive about employee education
I began looking into how cybersecurity training is addressing the new threats posed by generative AI after watching a training video assigned to a relative. When they told me they had just been assigned a new video specifically talking about generative AI as part of their annual cybersecurity training, I was excited to see how the increasingly important topic was being covered. After watching the video, however, I was shocked and disappointed. The video was just a few minutes long, barely touched on the new types of threats I had reported on this past year, and depicted generative AI as if it were some far-off sci-fi technology. It wasn’t until I started diving deeper into the offerings from the cybersecurity industry that I realized that video wasn’t the whole story.
That video was from KnowBe4, and on its own, I don’t think it would be sufficient for informing employees about the risks and threats. I soon discovered that it’s just one of many AI-focused cybersecurity videos offered by KnowBe4, which turned out to have the largest catalog of AI-focused videos and some of the most informative content of everything I viewed. KnowBe4 told me company admins are able to preview and assign trainings based on their company’s needs. Clearly, whoever chose the videos to assign at my relative’s company wasn’t as thorough as they should have been: there were additional videos on deepfakes, CEO fraud, phishing, and the dangers of AI chatbots that together would’ve been much more comprehensive.
This made clear that cybersecurity and IT leaders inside companies need to take an active role, familiarizing themselves with the new threats and the training content that exists to inform employees about them. More than ever, cybersecurity education needs to be continuous; a brief video once a year isn’t enough.
Cybersecurity is an endless cat-and-mouse cycle, with security professionals and IT teams often playing catch-up to whatever innovations the fraudsters and hackers decide to adopt. Huntress said it's in the late stages of developing a training on AI hallucinations and plans to create one on deepfakes this year. Ninjio is still developing trainings on generative AI’s impact on phishing and how malicious actors can use AI to automate attacks. KnowBe4 said it’s working to incorporate information on prompt injection attacks into its trainings. (It was the only provider to directly address my questions about this type of emerging threat.) But the training courses won’t do anything unless corporate IT leaders do their job in making sure employees engage in the trainings thoroughly and regularly.
And with that, here’s more AI news.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com