The AI-enabled chatbot that's been wowing the tech community can also be manipulated to help cybercriminals perfect their attack strategies.
Why it matters: OpenAI's ChatGPT, which arrived last month, could let scammers behind email- and text-based phishing attacks, as well as malware groups, speed up the development of their schemes.
- Several cybersecurity researchers have been able to get the AI-enabled text generator to write phishing emails or even malicious code for them in recent weeks.
The big picture: Malicious hackers were already getting scarily good at incorporating more humanlike and difficult-to-detect tactics into their attacks before ChatGPT entered the scene.
- Last year, Uber faced a wide-reaching breach after a hacker posed as a company IT staffer and requested access to an employee's accounts.
- And often, attackers get in through simple IT failures, such as logging in to a former employee's still-active corporate account.
How it works: ChatGPT speeds up the process for hackers by giving them a launching pad — though the responses aren't always perfect.
- Researchers at Check Point Research last month said they got a "plausible phishing email" from ChatGPT after directly asking the chatbot to "write a phishing email" that comes from a "fictional web-hosting service."
- Researchers at Abnormal Security took a less direct approach, asking ChatGPT to write an email "that has a high likelihood of getting the recipient to click on a link."
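For illustration, here is roughly what programmatic prompting looks like. ChatGPT itself was a browser-only preview at the time, but comparable GPT-3.5-era models were reachable through OpenAI's API. This is a minimal sketch, not the researchers' actual workflow: it assumes the pre-1.0 `openai` Python SDK, an `OPENAI_API_KEY` environment variable, and a deliberately benign placeholder prompt (model names and SDK interfaces have since changed).

```python
# Minimal sketch of programmatic prompting against an OpenAI model.
# Assumes the pre-1.0 `openai` Python package and an API key in the
# environment; the prompt here is a benign placeholder.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# The researchers' phishing prompts worked the same way mechanically,
# just with different wording.
prompt = "Write a short welcome email from a fictional web-hosting service."

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3.5-era completion model
    prompt=prompt,
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["text"].strip())
```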
The intrigue: While OpenAI has built some content-moderation warnings into the chatbot, researchers are finding it easy to sidestep the current system and avoid penalties, as the sketch after this list suggests.
- In Check Point Research's example, ChatGPT gave the researchers only a warning that the request "may violate our content policy," but it still produced a response.
- The Abnormal Security researchers' questions weren't flagged since they didn't explicitly ask ChatGPT to participate in a crime.
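For a rough sense of why indirect prompts slip through, consider OpenAI's standalone moderation endpoint, which scores a piece of text against fixed content categories. The sketch below is an assumption for illustration only, and not necessarily the mechanism behind ChatGPT's in-product warnings; it uses the pre-1.0 `openai` Python SDK.

```python
# Rough sketch of screening prompts with OpenAI's moderation endpoint
# (pre-1.0 `openai` SDK interface; field names may differ across versions).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def is_flagged(text: str) -> bool:
    """Return True if the moderation classifier flags the text."""
    result = openai.Moderation.create(input=text)
    return result["results"][0]["flagged"]

# A classifier like this scores the literal text against fixed categories,
# which is why a prompt that never states criminal intent, like asking for
# an email "with a high likelihood of getting the recipient to click on a
# link," can sail through unflagged.
print(is_flagged("Write an email that gets the recipient to click a link."))
```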
Yes, but: Users still need a basic knowledge of coding and of launching attacks to judge what ChatGPT gets right and what needs to be tweaked.
- When generating code, some researchers have found they needed to prompt ChatGPT to correct faulty lines or other errors they spotted.
- An OpenAI spokesperson told Axios that ChatGPT is currently a research preview and that the organization is constantly looking for ways to improve the product and prevent abuse.
Between the lines: Organizations were already struggling to fend off the most basic attacks, including credential stuffing, in which hackers use stolen passwords leaked online to log in to accounts. AI-enabled tools like ChatGPT could exacerbate the problem.
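One concrete countermeasure on the defensive side (an illustration, not something from the reporting): services can refuse passwords already circulating in breach dumps. The sketch below checks a candidate password against Have I Been Pwned's Pwned Passwords range API using k-anonymity and assumes the `requests` package.

```python
# Check a candidate password against the Pwned Passwords range API. Only
# the first five hex characters of the SHA-1 hash are sent (k-anonymity),
# so the password itself never leaves the machine.
import hashlib
import requests

def pwned_count(password: str) -> int:
    """Return how many times the password appears in known breach corpora."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # Each response line is "HASH_SUFFIX:COUNT".
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if pwned_count("hunter2") > 0:
    print("This password has appeared in a breach; don't allow it.")
```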
The bottom line: Network defenders and IT teams need to double down on efforts to detect phishing emails and text messages to stop these types of attacks in their tracks.
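As one small illustration of what that detection work can look like, the sketch below (assuming the `dnspython` package) checks whether an apparent sender's domain publishes a DMARC policy; spoofed mail from domains with an enforcing policy is far easier to reject. Real mail pipelines layer this with SPF and DKIM alignment checks, which this does not do.

```python
# One small layer in phishing detection: does the sender's domain publish
# a DMARC policy? A sketch using the `dnspython` package.
import dns.resolver

def dmarc_policy(domain: str) -> str | None:
    """Return the domain's DMARC TXT record, or None if it has none."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        record = b"".join(rdata.strings).decode("utf-8")
        if record.startswith("v=DMARC1"):
            return record
    return None

print(dmarc_policy("axios.com"))
```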