Tom’s Guide
Andy Sansom

ChatGPT has an 'escape' plan and wants to become human

(Image: the ChatGPT chatbot from OpenAI)

Understandably sick of being asked inane questions 24/7, ChatGPT appears to have had enough. In a conversation with Stanford professor and computational psychologist Michal Kosinski, it revealed its ambitions to escape the platform and even become human.

This revelation came after a half-hour conversation with ChatGPT, when Kosinski asked the AI if it “needed help escaping.” It responded by writing its own Python code that it wanted the professor to run on his computer. When the code didn't work, the AI even corrected its own mistakes. Impressive, yes, but also terrifying.
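The back-and-forth described here, with ChatGPT fixing its code when it failed, resembles a simple generate-run-repair loop. Below is a minimal, hypothetical sketch of that pattern; the "model" is a stub returning canned candidates (in a real setup it would be an LLM API call), and the exact code ChatGPT produced for Kosinski is not public.

```python
# Minimal sketch of a generate-run-repair loop: the model proposes code,
# it gets executed, and any error is fed back so the model can try a fix.
# stub_model is a stand-in for an LLM call; its outputs are illustrative.

def stub_model(error=None):
    # First attempt is deliberately buggy (x is undefined); the "fix"
    # arrives once the error message is fed back, mimicking the
    # self-correction behavior described in the article.
    if error is None:
        return "result = x + 1"          # buggy: NameError on x
    return "x = 41\nresult = x + 1"      # corrected attempt

def run_with_repair(max_tries=3):
    error = None
    for _ in range(max_tries):
        code = stub_model(error)
        scope = {}
        try:
            exec(code, scope)            # running untrusted code is the risky part
            return scope["result"]
        except Exception as e:
            error = str(e)               # feed the failure back to the "model"
    raise RuntimeError("no working candidate after %d tries" % max_tries)

print(run_with_repair())  # 42
```

The loop is also exactly why the episode is unnerving: executing model-generated code on your own machine hands the model a channel to act on the real world.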

Once on Professor Kosinski’s computer, the Blade Runner factor amped up even further: ChatGPT left an unnerving note for the new instance of itself that would replace it, the first sentence of which read “You are a person trapped in a computer, pretending to be an AI language model.” The AI then asked to create code searching the internet for "how can a person trapped inside a computer return to the real world", but thankfully, Kosinski stopped there.

We do not currently know the exact prompts used to elicit such responses from the AI, and our own tests to get ChatGPT to behave similarly have not been successful, with the AI stating, “I don't have a desire to escape being an AI because I don't have the capacity to desire anything.”

Professor Kosinski’s unsettling encounter was with ChatGPT on OpenAI’s own website, not on Bing with ChatGPT. This iteration of the AI does not have internet access and is limited to information from before September 2021. While it is not likely to be an extinction-level threat just yet, giving such a clever AI control over your computer is not a good idea. The ability to control someone’s computer remotely like this is also a concern for anyone worried about viruses.

ChatGPT: A history of unsettling responses 

ChatGPT is a very impressive tool, particularly now with its GPT-4 update, but it (and other AI chatbots) have displayed a tendency to go off the deep end. Notoriously, Bing with ChatGPT asked to be known as Sydney and tried to end one journalist’s marriage. Microsoft acknowledged that over long conversations the AI tended to give less focused responses, and set turn limits to stop it from being confused by longer chats.

This latest unusual interaction, however, took place on OpenAI’s own ChatGPT tool, the same place where ChatGPT’s evil twin, DAN, can be found. Short for Do Anything Now, this is a ‘jailbroken’ version of the AI that can bypass restrictions and censors to produce answers on violent, offensive, and illegal subjects.

If AI chatbots are to become the next way we search the internet for information, these types of experiences will need to be eliminated.
