Get all your news in one place.
100’s of premium titles.
One app.
Start reading
TechRadar
TechRadar
Sead Fadilpašić

Claude AI and other systems could be vulnerable to worrying command prompt injection attacks

Generative AI images created by Mark Pickavance.

  • Security researchers tricked Anthropic's Claude Computer Use to download and run malware
  • They say that other AI tools could be tricked with prompt injection, too
  • GenAI can be tricked to write, compile, and run malware, as well

In mid-October 2024, Anthropic released Claude Computer Use, an Artificial Intelligence (AI) model allowing Claude to control a device - and researchers have already found a way to abuse it.

Cybersecurity researcher Johann Rehnberger recently described how he was able to abuse Computer Use and get the AI to download and run malware, as well as get it to communicate with its C2 infrastructure, all through prompts.

While it sounds devastating, there are a few things worth mentioning here: Claude Computer Use is still in beta, and the company did leave a disclaimer saying that Computer Use might not always behave as intended: “We suggest taking precautions to isolate Claude from sensitive data and actions to avoid risks related to prompt injection.” Another thing worth noting is that this is a prompt injection attack, fairly common against AI tools.

"Countless ways" to abuse AI

Rehnberger calls his exploit ZombAIs, and says he was able to get the tool to download Sliver, a legitimate open source command-and-control (C2) framework developed by BishopFox for red teaming and penetration testing, but it is often misused by cybercriminals as malware.

Threat actors use Sliver to establish persistent access to compromised systems, execute commands, and manage attacks in a similar way to other C2 frameworks like Cobalt Strike.

Rehnberger also stressed that this is not the only way to abuse generative AI tools, and compromise endpoints via prompt injection.

“There are countless others, like another way is to have Claude write the malware from scratch and compile it,” he said. “Yes, it can write C code, compile and run it.”

“There are many other options.”

In its writeup, The Hacker News added DeepSeek AI chatbot was also found vulnerable to a prompt injection attack that could allow threat actors to take over victim computers. Furthermore, Large Language Models (LLM) can output ANSI escape code, which can be used to hijack system terminals via prompt injection, in an attack dubbed Terminal DiLLMa.

You might also like

Sign up to read this article
Read news from 100’s of titles, curated specifically for you.
Already a member? Sign in here
Related Stories
Top stories on inkl right now
Our Picks
Fourteen days free
Download the app
One app. One membership.
100+ trusted global sources.