Hackers have a lot to gain from generative AI tools such as ChatGPT. While these tools are not yet mature enough to run malicious campaigns with minimal human input, they can already be used to supercharge human-run campaigns in ways that have never been seen before.
This is according to new analysis from IBM's X-Force security team, which detailed an experiment pitting human-written phishing emails against those written by ChatGPT. The goal was to see which would achieve the higher click-through rate, both on the emails themselves and on the malicious links inside.
In the end, the human-written content won, but by the tiniest of margins. The researchers' conclusion is that it's only a matter of time before AI surpasses human content in believability and authenticity, taking care of the hard work for cybercriminals.
Emotional intelligence
The humans beat the AI on emotional intelligence, personalization, and understanding of the everyday struggles of their victims. “Humans understand emotions in ways that AI can only dream of,” the researchers say. “We can weave narratives that tug at the heartstrings and sound more realistic, making recipients more likely to click on a malicious link.”
When it comes to personalization, the humans were able to reference legitimate organizations and offer tangible benefits to employees, making their emails more likely to be opened.
And finally, humans understand what makes their targets suspicious: “The human-generated phish had an email subject line that was short and to the point while the AI-generated phish had an extremely lengthy subject line, potentially causing suspicion even before employees opened the email.”
All of these shortcomings can be tweaked away with minimal human input, which still makes AI's output extremely valuable to attackers. It is also worth noting that the X-Force team got a generative AI model to write a convincing phishing email in just five minutes, using only five prompts; manually crafting a comparable email takes the team about 16 hours.
“While X-Force has not witnessed the wide-scale use of generative AI in current campaigns, tools such as WormGPT, which were built to be unrestricted or semi-restricted LLMs, were observed for sale on various forums advertising phishing capabilities – showing that attackers are testing AI’s use in phishing campaigns,” the researchers concluded.
“While even restricted versions of generative AI models can be tricked into phishing via simple prompts, these unrestricted versions may offer more efficient ways for attackers to scale sophisticated phishing emails in the future.”