Tom’s Hardware
Francisco Pires

Scientists Develop GPT Model That Interprets Human Thoughts


Even as the world still reels from the launch of ChatGPT and assorted AI-based systems, whose dust will take a long while to settle, scientists are pressing ahead with their own applications of Generative Pre-trained Transformers (GPT) and large language models (LLMs). According to Scientific American, one of the latest such applications is a GPT-based model that takes its prompts not from human text, but directly from the user's mind.

Developed by a research team at the University of Texas at Austin and described in a paper published in the journal Nature Neuroscience, the model interprets a person's brain activity via blood flow measured with functional magnetic resonance imaging (fMRI), giving it access to what the user is "hearing, saying, or imagining". And it does this without invasive surgery or any attachment to the patient. There was a clear opportunity to name the new model BrainGPT, but someone ignored that memo: the researchers refer to their "brain reading" model as GPT-1 instead.

The researchers do note that, because of the fMRI technique used, GPT-1 can't parse the specific words a subject might be thinking about; and because the model works at a higher level of abstraction (it extrapolates the meaning of brain activity rather than decoding the exact words), some details are lost in translation. For instance, one research participant listened to a recording stating, "I don't have my driver's license yet." Processing the fMRI data generated at the moment the participant heard those words, GPT-1 rendered the sentence as "She has not even started to learn to drive yet." So no, it doesn't transcribe our thoughts verbatim, but it does capture their general meaning, or "the gist of it", as the researchers characterized some of GPT-1's results.
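
To make that abstraction a little more concrete, here is a minimal, purely illustrative Python sketch of how gist-level decoding of this kind can work in principle: a language model proposes candidate sentences, a separate encoding model predicts the brain response each candidate would evoke, and the candidate whose predicted response best matches the measured fMRI data is kept. The names here (decode_gist, predict_response) are hypothetical, and this is not the researchers' actual code or pipeline.

    import numpy as np

    def decode_gist(measured_response, candidate_sentences, predict_response):
        # Score each candidate sentence by the cosine similarity between
        # the brain response the encoding model predicts it would evoke
        # and the response actually measured with fMRI; keep the best match.
        best_sentence, best_score = None, float("-inf")
        for sentence in candidate_sentences:
            predicted = predict_response(sentence)  # hypothetical encoding model
            score = float(
                np.dot(predicted, measured_response)
                / (np.linalg.norm(predicted) * np.linalg.norm(measured_response))
            )
            if score > best_score:
                best_sentence, best_score = sentence, score
        return best_sentence

Because many different phrasings of the same idea can produce similar predicted responses, matching at this level is one intuition for why such a decoder recovers the gist of a sentence rather than its exact wording, as in the driver's license example above.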

All of this does raise an immediate question: where does this take us?

In theory, technology itself isn't malicious. Technology is an abstraction, a concept, that can then be put to a purpose. In a vacuum, GPT-1 could help ALS or aphasia patients communicate. Also in a vacuum, technologies such as these could be leveraged by users to "record" their thoughts (imagine a Notes app linked to your own thinking, or an AutoGPT installation that piggybacks on your ideas), opening up new avenues for self-knowledge, and perhaps even new pathways for psychotherapy.

But while we're here, we can also throw in some other, less beneficial repurposings of the technology, such as using it to extract information directly from an unwilling subject's brain. Being non-invasive is both a strength and a weakness there. And there's also the matter of the technology itself: fMRI machines occupy entire rooms and cost millions of dollars, which severely limits where the technique can be applied.

Even so, it would seem that the "willingness" element of communication, that choice of voicing our own thoughts and bringing them into the actual world, is coming under threat. The researchers themselves call attention to the potential misuses and negative impacts of the technology in their study, something that happens far less often than it should in both academia and private research.

"Our privacy analysis suggests that subject cooperation is currently required both to train and to apply the decoder," it reads. "However, future developments might enable decoders to bypass these requirements. Moreover, even if decoder predictions are inaccurate without subject cooperation, they could be intentionally misinterpreted for malicious purposes. For these and other unforeseen reasons, it is critical to raise awareness of the risks of brain decoding technology and enact policies that protect each person's mental privacy."

As we stand at the door beyond which our thoughts are no longer safe, that's a wise stance indeed.
