How AI will turbocharge misinformation — and what we can do about it

Attention-grabbing warnings of artificial intelligence's existential threats have eclipsed what many experts and researchers say is a much more imminent risk: a near-certain rise in misinformation.

Why it matters: The struggle to separate fact from fiction online didn't start with the rise of generative AI — but the red-hot new technology promises to make misinformation more abundant and more compelling.


The big picture: By some estimates, AI-generated content could soon account for 99% or more of all information on the internet, further straining already overwhelmed content moderation systems.

  • Dozens of "news sites" filled with machine-generated content of dubious quality have already cropped up, with far more likely to follow — and some media sites are helping blur the lines.
  • Without sufficient care, generative AI systems can also recycle conspiracy theories and other misinformation found on the open web.

Threat level: University of Washington professor Kate Starbird, an expert in the field, told Axios that generative AI will deepen the misinformation problem in three key ways.

  1. Generative AI is great at churning out misinformation. "Generative AI creates content that sounds reasonable and plausible, but has little regard for accuracy," Starbird said. "In other words, it functions as a BS generator." Indeed, some studies show AI-generated misinformation to be even more persuasive than false content created by humans.
  2. Generative AI helps those who deliberately seek to mislead — purveyors of disinformation. "Generative AI makes it extremely cheap and easy to generate content — including micro-targeted messages for specific audiences — to power a disinformation campaign," Starbird said.
  3. Generative AI models themselves offer a new target for those who seek to shape the information debate on a topic. "Would-be manipulators may seek to 'poison' or strategically shape the outputs of these models by feeding their content into the inputs," Starbird said.

"Taken together, Generative AI has the potential to accelerate the spread of both mis- and disinformation, and exacerbate the ongoing challenge of finding information we can trust online," Starbird said.

Between the lines: Misinformation can take many forms, from deepfake photos and videos to text-based articles to memes that combine text and images.

  • Misinformation can be spread intentionally or unknowingly and is driven by a wide range of motivations, including political or ideological goals, extortion or revenge, moneymaking and personal amusement — sometimes several of these at once.

Be smart: As daunting as the AI misinformation threats are, individuals, companies and governments can act to mitigate the risks.

1. Provenance: One of the main ways to combat misinformation is to make it clearer where a piece of content was generated and what happened to it along the way. The Adobe-led Content Authenticity Initiative aims to help image creators do this. Microsoft announced earlier this year that it will add metadata to all content created with its generative AI tools. Google, meanwhile, plans to share more details on the images catalogued in its search engine.

  • Efforts to label AI-generated text are trickier, as watermarks are easy to remove. So far, tools designed to identify content created by AI systems like ChatGPT have proven less than fully reliable, producing a considerable number of both false positives and false negatives. (A simplified sketch of the provenance idea appears just below.)
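
For illustration, here is a minimal Python sketch of the general idea behind provenance: attach a signed manifest recording how a piece of content was made and what happened to it, then verify that manifest later. This is a deliberate simplification, not the actual Content Authenticity Initiative/C2PA format; the function names (attach_manifest, verify_manifest) and the shared signing key are invented for the example, and real systems use certificate-based signatures embedded in the file itself.

```python
# Simplified provenance sketch: sign a manifest describing how an asset was
# created, then verify it later. Illustrative only -- real systems such as
# C2PA use certificate chains and embed the manifest in the file.
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret-key"  # hypothetical key held by the content creator


def attach_manifest(asset_bytes: bytes, tool: str, edits: list[str]) -> dict:
    """Build a provenance manifest and sign it with the creator's key."""
    manifest = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "generator": tool,          # e.g. which AI tool produced the asset
        "edit_history": edits,      # what happened to it along the way
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Check that the manifest is intact and matches the asset we received."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()
    )


image = b"...raw image bytes..."
manifest = attach_manifest(image, tool="generative-image-model", edits=["crop", "resize"])
print(verify_manifest(image, manifest))                # True: provenance intact
print(verify_manifest(image + b"tampered", manifest))  # False: asset changed after signing
```

The point of the design is that anyone downstream can check whether the content and its stated history still match, without trusting the platform that served it.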

2. Regulation: Laws can be another tool to fight misinformation. While such efforts can run afoul of free speech protections, there's public value in requiring identification of political advertising's funders or prohibiting deepfake-based harassment and extortion.

3. Algorithms: As counterintuitive as it may seem, some see AI itself as the tool most likely to be able to detect machine-generated misinformation. Although that is likely to prove an open-ended arms race at best, it's also hard to imagine how human content moderators could keep up with the flood of AI-generated misinformation.
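
As a rough illustration of what automated screening looks like in code, here is a toy Python sketch using scikit-learn: train a simple text classifier on a handful of invented labeled examples, then score incoming posts so the highest-risk ones can be routed to human reviewers. Production moderation systems rely on far larger models and many more signals; the training data and threshold here are made up for the example.

```python
# Toy sketch of automated screening: train a text classifier on labeled
# examples, then score incoming posts for triage to human reviewers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: 1 = previously fact-checked as false, 0 = benign.
train_texts = [
    "Miracle cure hidden by doctors, share before it is deleted",
    "Local council approves new bike lane budget for next year",
    "Secret document proves the election results were fabricated",
    "University researchers publish study on coastal erosion",
]
train_labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic regression classifier.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(train_texts, train_labels)

# Score a new post; items above a tuned threshold would go to human review.
incoming = ["Secret memo proves doctors hid the miracle cure"]
score = classifier.predict_proba(incoming)[0][1]
print(f"misinformation score: {score:.2f}")
```

Even in this toy form, the pipeline shows why the approach scales where human review alone cannot: scoring a post is cheap, and reviewers only see the items the model flags.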

4. Media literacy: Educating people to be smart consumers of information can also help combat misinformation, though it requires concerted effort and significant investment and can encounter opposition from those who benefit from a poorly informed populace.

What they're saying: Tech companies say they are applying existing policies to AI-generated content while working to develop new techniques specifically designed to address its unique traits.

  • Google: “Connecting people to high-quality information is core to our mission," Google said in a statement to Axios. "We’re taking a responsible approach to AI by giving our users the tools they need to evaluate information, such as information literacy tools on Google Search and new innovations in watermarking, metadata, and other techniques."
  • Meta: Facebook's parent company told Axios that it applies the same policies to AI-generated content as to any other content, including rules around misinformation. AI-generated content that attempts to make factual claims, for example, is subject to third-party fact-checking. The company also points to breakthroughs in AI techniques that have helped screen misinformation — such as Few Shot Learner, a technique introduced in 2021 that allows harmful content to be identified more quickly (a generic sketch of the few-shot approach follows this list).
  • Microsoft: "We rank uses of generative AI in disinformation as a top concern," Microsoft chief scientific officer Eric Horvitz said in a statement to Axios. "We have been following and tracking the use of AI tools by bad actors for creating manipulative deepfakes since the first demonstrations of capabilities. We have teams working to address issues, including efforts that are complementary to our work in cybersecurity, on tracking the evolving uses of deepfakes by nation states, detection and content filtering, and preventing the promotion of harmful or discriminatory content in line with our AI principles."
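
To make the few-shot idea concrete, here is a generic Python sketch — not Meta's actual Few Shot Learner — of how a handful of labeled policy examples can screen new posts by similarity, so a new kind of violation can be caught without retraining a large model. The example posts, labels, and the TF-IDF vectors standing in for learned embeddings are all assumptions for illustration; real systems use large multilingual and multimodal encoders.

```python
# Generic few-shot screening sketch: label new posts by similarity to a small
# set of labeled examples (TF-IDF vectors stand in for learned embeddings).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

few_shot_examples = {  # invented examples of a newly defined policy violation
    "violating": [
        "This vaccine secretly alters your DNA, spread the word",
        "Drinking bleach cures the virus, doctors won't tell you",
    ],
    "benign": [
        "Clinic extends weekend hours for flu shots",
        "Health department publishes new nutrition guidelines",
    ],
}

texts = few_shot_examples["violating"] + few_shot_examples["benign"]
vectorizer = TfidfVectorizer().fit(texts)
vectors = vectorizer.transform(texts)


def label(post: str) -> str:
    """Assign the label of the most similar few-shot example."""
    sims = cosine_similarity(vectorizer.transform([post]), vectors)[0]
    best = sims.argmax()
    return "violating" if best < len(few_shot_examples["violating"]) else "benign"


print(label("Doctors are hiding that bleach cures the virus"))  # likely "violating"
```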