ChatGPT's edge: We want to believe

ChatGPT rocketed into our world because of rapidly compounding advances in artificial intelligence, but also because of the primordial wiring of our brains.

How it works: Human perceptual systems are finely tuned to recognize another person, researchers have established, and we do this so well that we project humanity even when it's not there. Two dots and a circle become a face; the moon gets a man in it.

  • We humanize elements of nature and abstract shapes — and conversations with computers, too.

We may understand that ChatGPT is code that doesn't know what words mean, yet can put one word after another in sequences that resemble human speech. But when we engage with it, we inevitably slip into the comfortable feeling that we're communing with a fellow being.
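
The models behind ChatGPT are vastly larger and more capable, but the word-after-word principle can be seen in miniature. The toy generator below is purely illustrative (nothing like OpenAI's actual system): it tallies which word follows which in a tiny text sample, then chains statistically plausible successors.

```python
import random
from collections import defaultdict

# Toy next-word generator (an illustrative sketch, not OpenAI's code).
# It counts which word follows which in a tiny corpus, then produces
# text by repeatedly sampling a plausible successor. The program has
# no idea what any token means, yet its output can resemble speech.

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat saw the dog . the dog saw the cat ."
).split()

# Table mapping each word to the words observed to follow it.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start: str, length: int = 12) -> str:
    """Chain `length` words, each sampled from those seen after the last."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:  # no observed successor: stop early
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the dog sat on the mat . the cat saw the dog"
```

Real systems replace the lookup table with a neural network trained on enormous swaths of text, but the generation loop is the same in spirit: pick a likely next word, append it, repeat.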

Why it matters: Tech giants and startups want to deploy AI programs like ChatGPT across all facets of our work and lives, and they have a powerful ally in the evolution-forged pathways of human perception.

Flashback: It doesn't take a modern supercomputer to trigger this anthropomorphic response.

  • In 1966, users who traded lines of text with Eliza, a pioneering chatbot created by MIT's Joseph Weizenbaum, readily imagined they were conversing with a human partner.
  • Eliza reflected users' statements back as open-ended questions modeled on a therapist's prompts, and they began avidly sharing their feelings (a minimal sketch of the trick follows this list).
  • Weizenbaum, whose family had fled Hitler's Germany, spent the rest of his career sounding alarms over the dangers of AI in the hands of powerful companies and governments.
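
Weizenbaum built Eliza in the mid-1960s; the sketch below is a bare-bones, hypothetical re-creation of its central trick rather than his original code. It matches a keyword pattern, swaps first-person words for second-person ones, and hands the user's own statement back as an open-ended question.

```python
import re

# Bare-bones re-creation of Eliza's central trick (a hypothetical
# sketch in the spirit of the original, not Weizenbaum's code):
# match a keyword pattern, swap first-person words for second-person
# ones, and hand the user's statement back as a question.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {}."),
]

def reflect(fragment: str) -> str:
    """Swap pronouns so the user's words point back at the user."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(line: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(reflect(match.group(1)).rstrip(".!?"))
    return "Please, go on."  # stock prompt when no rule matches

print(respond("I feel anxious about my future"))  # Why do you feel anxious about your future?
print(respond("My boss ignores me"))              # Tell me more about your boss ignores you.
```

The clumsy second reply shows how little machinery is involved; even so, a mechanism this thin was enough to get people avidly confiding.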

The big picture: The so-called Eliza Effect is part of what's propelling the hype wave for generative AI. It plugs into potent myths and images that we've built around AI itself, from its name to its representations in culture to the grand promises businesses are making for it.

  • "Artificial intelligence" sounds so much grander than "large language model" or "machine-learning-trained algorithm." The label deliberately emphasizes open-ended possibilities and downplays limits.

A Google engineer named Blake Lemoine made headlines last year by declaring that he believed Google's ChatGPT-like LaMDA program had achieved "sentience."

  • Skeptical experts, along with Lemoine's Google colleagues, argued this was just another instance of the Eliza Effect.
  • But the incident is likely to replay itself as millions of people start regularly interacting with programs like ChatGPT.

What's next: As ChatGPT-style computing spreads, we'll face more and more uncertainty online and in daily life over the nature of the entities sending words our way.

  • Already, when we engage with a corporate help desk or customer service center via text or instant message, it's getting difficult to tell whether the typing at the other end is coming from a person or a bot.