
In new AI hype frenzy, tech is applying the label to everything now

At this peak moment in the tech world's artificial intelligence craze, anything that tech companies can slap an "artificial intelligence" label on, they will.

Why it matters: The more our understanding of a new technology is distorted by hype, the less thoughtfully we can apply it — and the more likely it is we will cause harm with it.


The big picture: Real advances in machine-learning-based pattern recognition and completion have sparked a new bubble in tech-industry investment, encouraging companies to apply the "AI" label to anything that moves.

Driving the news: Paul McCartney recently told the BBC that AI was helping the surviving Beatles produce a new song featuring vocals by John Lennon, who was killed in 1980.

  • Yes, but: Technology isn't bringing the voice that sang "Imagine" and "Revolution" back from the dead. Lennon actually stood in front of a mike in 1978 and sang the words of the "new" song, "Now and Then," on a low-quality demo.
  • Tools that can cleanly extract a singer's voice from a noisy old recording are far more efficient today than in the past. But they're not fundamentally different from audiovisual pattern recognition programs that have been in use for decades — like the "magic wand" in Adobe Photoshop that isolates a foreground image from a background.
  • Calling this "artificial intelligence" suggests that it is somehow smarter or more autonomous than it is.

Today's AI promoters are trying to have it both ways: They insist that AI is crossing a profound boundary into untrodden territory with unfathomable risks. But they also define AI so broadly as to include almost any large-scale, statistically driven computer program.

  • Under this definition, everything from the Google search engine to the iPhone's face-recognition unlocking tool to the Facebook newsfeed algorithm is already "AI-driven" — and has been for years.

Zoom out: The catalyst for this hype wave was the introduction of ChatGPT late last year, which spotlighted the impressive conversational abilities of today's large language models.

  • ChatGPT is an attention-seizing program, but the chatbot approach isn't a one-hammer-fits-all-nails solution to the world's problems. The broader field of generative AI, with its powers of audiovisual mimicry, is similarly impressive but limited.
  • But 20 years ago we changed the definition of "artificial intelligence" in ways that set us up for the current frenzy of calling everything AI.

The term "artificial intelligence" emerged in the 1950s to name the goal of duplicating human capabilities of reasoning in code and circuitry, which experts at the time predicted might take 15 or 20 years to achieve.

  • For decades scientists sought to do so by painstakingly modeling the real world in data so that computers could understand it.
  • When that route proved slow and unrewarding, AI experienced a cycle of "winters" when funding dried up and progress dwindled.

A different and long-neglected road involving the creation of neural networks emerged as a promising alternative, beginning to take form in the '90s and accelerating in the aughts.

  • Instead of painstakingly organizing the world's information for the computer to ingest, this approach had the machine consume vast quantities of disorganized data to identify patterns.
  • Exponential growth in processing power and storage capacity made this machine learning technique into an increasingly effective student — and the internet itself offered a plunderable trove of digital-ready course material.
  • Since then, AI has effectively become synonymous with "efficient pattern-matching on a large scale." Under this definition, almost any kind of automation or probability-based system qualifies as "artificially intelligent."

The other side: Proponents of today's AI argue that such pattern-matching is basically what the human brain does, too, so as computers' capabilities advance they'll inevitably converge on those of humanity.

The bottom line: The ubiquity of the "artificial intelligence" category in tech today might be the most phenomenally successful act of rebranding in corporate history.

  • But under the hood, Silicon Valley's new AI products are mostly just efficient refinements of technologies we've all been using for years.
  • As Meredith Broussard, a scholar at NYU and prominent AI critic, put it in a recent interview with The Markup: "Once you use it, AI feels mundane. It just feels like using any other technology."