Get all your news in one place.
100’s of premium titles.
One app.
Start reading
Fortune
Fortune
Sage Lazzaro

Microsoft’s AI Copilot can be weaponized as an ‘automated phishing machine,’ but the problem is bigger than one company

(Credit: MANDEL NGAN/AFP via Getty Images)

Hello and welcome to Eye on AI.

Cybersecurity professionals from around the world gathered last week at Black Hat USA, the prominent conference for catching up on the latest cyber threats and how to defend against them. While I wasn’t at the event in Las Vegas and instead soaked up demo videos and presentation slides from afar, it’s clear—and no surprise—that AI was a prominent topic of conversation. The schedule boasted a few dozen sessions focused on the technology, including a keynote titled “AI is Ruining My Life: Group Therapy for Security Leaders.” The one that has seemingly gotten the most attention, however, is a demo showcasing five ways Microsoft’s Copilot could be manipulated by attackers, including turning it into an “automated phishing machine,” as Wired put it.

The attack methods were presented by Michael Bargury, cofounder and CTO of Zenity and a former Microsoft security architect. What’s particularly interesting is how all of them rely on using the LLM-based tool as it was designed to be used—asking the chatbot questions to prompt it to retrieve data from a user’s own Microsoft workspace. Copilot lives in the company’s 365 software (like Word and Teams) and is meant to help users boost their productivity by summarizing meeting information, looking up details buried in emails, and helping craft messages. Bargury’s demo shows how the same technology and processes that make those capabilities possible could be used maliciously, too. 

For example, several of the attacks require the malicious actor to have already gained access to someone’s email account, but they drastically increase and expedite what the attacker can do once inside. Copilot’s ability to help a user draft emails in their personal writing style could also enable an attacker to easily mimic someone’s writing style at scale and blast out convincing emails with malware or malicious links to unsuspecting colleagues. Once inside an email account, Copilot’s ability to quickly retrieve data from documents, correspondences, and loads of other places inside a company’s workflow could also enable an attacker to easily access sensitive company information. Bargury even demonstrated using Copilot to circumvent the company’s access permissions, wording prompts in a particular way that gets the chatbot to relay information the user doesn’t have permission to view.  

“You talk to Copilot and it’s a limited conversation because Microsoft has put a lot of controls. But once you use a few magic words, it opens up and you can do whatever you want,” Bargury told Wired

Without having to compromise any email accounts, Bargury also demonstrated how malicious actors could use Copilot to hijack a company’s financial transactions and lead an employee to direct payments intended for a trusted entity into the hacker’s own account. First, a hacker would send an email to a victim at the company that presents their bank information as the trusted entity’s. If the employee uses Copilot to search for that entity’s banking information, Copilot could then surface the malicious email and lead the victim to send the money to the malicious actor instead. In the same vein, a hacker could send a malicious email to direct someone to a phishing site. Both scenarios would involve Copilot surfacing and presenting malicious content as a trusted information source.

While these were proof-of-concepts demonstrating how Copilot could be manipulated and not evidence of the chatbot being used widely by hackers in these ways, the techniques do mirror those for manipulating LLMs that we know are in fact being used. In his talk on the most common and impactful LLM attacks discovered throughout this past “year in the trenches,” Nvidia principal security architect Richard Harang similarly discussed LLMs incorrectly handling document permissions as well as prompt injection attacks like Bargury demonstrated, wherein attackers manipulate LLMs to leak sensitive data and assist in other harms by disguising malicious inputs as legitimate prompts.

Speaking to Wired, Microsoft head of AI incident detection and response Philim Misner said the company appreciates Bargury’s work identifying the vulnerabilities and is working with him to address them. While the implications are enormous for Microsoft and the many businesses small and large that use the company’s software, it’s important to point out that these issues are in no way unique to Microsoft or its copilot. Microsoft Copilot’s deep integration with sensitive company information and flows of communication makes for an especially vulnerable scenario, but all of its enterprise competitors are creating the same type of AI-assistant experience within their software, too. At the same time, all LLMs are vulnerable to attacks and general-purpose LLMs like ChatGPT have also been exploited as hacking tools. Ask any security researcher or executive about the impact on cybersecurity and they will sigh while telling you how generative AI has completely upended the cyber threat landscape—as was made clear at both Black Hat and Def Con (a hacking convention following the main Black Hat event where Fortune’s Sharon Goldman reported from this past weekend) and in pretty much every discussion of cybersecurity since ChatGPT was released in 2022. 

With vulnerabilities in Microsoft’s LLM in the spotlight at Black Hat, it may seem ironic that another talk by a Microsoft security engineer focused on how that same LLM technology can be leveraged to boost security responses. In fact, there’s nothing more typical cybersecurity than that. Every breakthrough in technology has created new tools and attack points for the hackers to hack, and at the same time, new ways for the defenders to defend. AI is only the latest technology to kick off a new era of cybersecurity cat-and-mouse—and it is a big one. But the cycle continues.

And with that, here’s more AI news.

Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com

Sign up to read this article
Read news from 100’s of titles, curated specifically for you.
Already a member? Sign in here
Related Stories
Top stories on inkl right now
One subscription that gives you access to news from hundreds of sites
Already a member? Sign in here
Our Picks
Fourteen days free
Download the app
One app. One membership.
100+ trusted global sources.