Medical AI's weaponization

Machine learning can bring us cancer diagnoses with greater speed and precision than any individual doctor — but it could also bring us another pandemic at the hands of a relatively low-skilled programmer.

Why it matters: The health field is generating some of the most exciting artificial intelligence innovation, but AI can also weaponize modern medicine against the very people it is meant to cure.


Driving the news: The World Health Organization is warning about the risks of bias, misinformation and privacy breaches in the deployment of large language models in healthcare.

The big picture: As this technology races ahead, everyone — companies, government and consumers — has to be clear-eyed that it can both save lives and cost lives.

What’s happening: AI in health is delivering speed, accuracy and cost dividends — from quicker vaccines to helping doctors outsmart killer heart conditions.

1. Escaped viruses are a top worry. Around 350 companies in 40 countries are working in synthetic biology.

  • With more artificial organisms being created, there are more chances for the accidental release of antibiotic-resistant superbugs, and possibly another global pandemic.
  • The UN estimates superbugs could cause 10 million deaths each year by 2050, outranking cancer as a killer.
  • Because they can tolerate high temperatures, salt and alkaline conditions, escaped artificial organisms could overrun existing species or disrupt ecosystems.
  • What they're saying: AI models capable of generating new organisms "should not be exposed to the general public. That's really important from a national security perspective," Sean McClain, founder and CEO of Absci, which is working to develop synthetic antibodies, told Axios. McClain isn't opposed to regulatory oversight of his models.

2. One person's lab accident is another's terror weapon.

3. Today's large language models make things up when they don't have ready answers. These so-called hallucinations could be deadly in a health setting.

  • Arizona State University researchers Visar Berisha and Julie Liss say clinical AI models often have large blind spots and can sometimes get worse as more data is added.
  • Some medical research startups are turning to smaller datasets, such as the 35 million peer-reviewed studies available on PubMed, to avoid the high error rates and missing citations common in models trained on the open internet.
  • System CEO Adam Bly told Axios the company's latest AI tool for medical researchers "is not able to hallucinate, because it's not just trying to find the next best word." Answers are delivered with mandatory citations: when Axios searched for the causes of stroke, 418 citations were offered alongside the answer. (A simplified sketch of that citation-first approach follows this list.)

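The article doesn't detail how System's tool works under the hood, but the citation-first behavior Bly describes resembles retrieval-grounded answering: compose responses only from passages pulled from a curated corpus, and attach a source to every claim. The following is a minimal, hypothetical sketch of that pattern; the CORPUS, retrieve and answer_with_citations names are invented for illustration, and a real tool would search an index of PubMed abstracts rather than a hard-coded list.

# Hypothetical sketch of citation-first answering over a small curated corpus.
# CORPUS, retrieve and answer_with_citations are invented names for illustration;
# a real tool would query an index of PubMed abstracts, not a hard-coded dict.
import re

CORPUS = {
    "PMID-0001": "Hypertension is a major modifiable risk factor for ischemic stroke.",
    "PMID-0002": "Atrial fibrillation raises the risk of cardioembolic stroke.",
    "PMID-0003": "Smoking cessation reduces long-term stroke risk.",
}

def _words(text):
    # Lowercase word set, ignoring punctuation.
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question, corpus, min_overlap=1):
    # Rank documents by how many question words they share; a real system
    # would use a proper search index instead of keyword overlap.
    q = _words(question)
    hits = sorted(
        ((len(q & _words(text)), doc_id, text) for doc_id, text in corpus.items()),
        reverse=True,
    )
    return [(doc_id, text) for overlap, doc_id, text in hits if overlap >= min_overlap]

def answer_with_citations(question, corpus):
    # Compose the answer only from retrieved snippets, each tagged with its
    # source ID. If nothing relevant is found, decline rather than guess.
    hits = retrieve(question, corpus)
    if not hits:
        return "No supporting literature found; no answer generated."
    return "Findings with citations:\n" + "\n".join(
        f"- {text} [{doc_id}]" for doc_id, text in hits
    )

print(answer_with_citations("What are common causes of stroke?", CORPUS))

Run as-is, the sketch prints three stroke-related findings, each tagged with its source ID; if nothing in the corpus matched, it would refuse to answer rather than improvise, which is the behavior the quote gestures at.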
On top of the dangers of weaponizing medical research, AI in healthcare settings poses a risk of worsening racial, gender and geographic disparities, since bias is often embedded in the data used to train the models.

  • Equal access to technology matters, too.
  • German kids with Type 1 diabetes from all backgrounds are achieving better control of their glucose levels because patients are provided with smart devices and fast internet. That's not a given in the U.S., per Stanford pediatrician Ananta Addala.

Yes, but: The FDA's current framework for regulating medical devices is not equipped to handle the surge of AI-powered apps and devices hitting the market, a September FDA report found.

  • The CDC still points healthcare facilities to a guide from 1999 for tips on avoiding bioterrorism. There's no mention of AI.

What we're watching: Updated CDC and FDA guidance would be a first line of defense.
