Fortune
Jeremy Kahn

AI could revolutionize health care. But first we need to revolutionize how we regulate it

(Credit: Courtesy of Scarlet)

Hello and welcome to Eye on AI. In this edition...what's wrong with the way we regulate medical AI and how one startup plans to fix it; OpenAI rolls out its o1 reasoning model and says its structure will change; can AI chatbots implant false memories?

AI is poised to have a huge impact on medicine. As I write in my book, Mastering AI: A Survival Guide to Our Superpowered Future, it’s one of the areas where I am most optimistic about the technology’s likely effects.

But to reap these benefits, we should be careful about how we design AI medical software, how we use it, and how we regulate it.

Bad (AI) medicine is not what we need

As with all AI applications, the risks stem from bad or biased data. Medical research has historically suffered from the underrepresentation of women and people of color in studies. Training AI on this data can lead to models that don’t work well for these patients.

Computer vision systems that analyze medical imagery can easily suffer from “overfitting”—learning to perform well on a particular test data set, but doing so in a way that is not clinically relevant and won’t hold up in the real world. Famously, one AI model designed to identify serious cases of pneumonia in chest X-rays learned to place a great deal of emphasis on letters found in the margins of the X-ray film that indicated whether the image had been taken by a portable X-ray machine or a standard one. Portable chest X-rays are, of course, used on the sickest patients. So the AI had learned that the presence of the letter "P"—for portable—was the best predictor of bad pneumonia cases, rather than learning much about the appearance of patients' lungs. On images without such markings, the AI was useless.
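For the technically minded, here is a toy sketch of that failure mode, sometimes called "shortcut learning." The data and model below are invented for illustration and are not the pneumonia study's actual setup: a classifier trained on data where a spurious marker (like the "P" for portable) tracks the label will lean on the marker, and its accuracy collapses once the marker is gone.

```python
# A minimal, hypothetical sketch of "shortcut learning": the model keys on a
# spurious marker rather than the genuine (weak) signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# True disease label.
y = rng.integers(0, 2, n)

# A weak genuine signal (standing in for actual lung appearance).
signal = y + rng.normal(0, 2.0, n)

# A spurious marker that tracks the label 95% of the time in this dataset
# (like the "P" appearing mostly on images of the sickest patients).
marker = np.where(rng.random(n) < 0.95, y, 1 - y)

X = np.column_stack([signal, marker])
model = LogisticRegression().fit(X, y)

# On data that still carries the marker, accuracy looks excellent.
print("with marker:   ", model.score(X, y))

# On images without the marker (zeroed out), accuracy falls toward chance,
# because the model never really learned the underlying signal.
X_no_marker = np.column_stack([signal, np.zeros(n)])
print("without marker:", model.score(X_no_marker, y))
```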

Headline accuracy figures can also be misleading. An AI that correctly identifies 95% of pathologies on chest X-rays sounds great—except if it happens to miss a particularly aggressive type of lung tumor most of the time. False positives matter too. At best, they annoy doctors, making them less likely to use the software. At worst, they could lead to incorrect diagnoses.
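A back-of-the-envelope calculation, with made-up numbers, shows how a 95% headline figure can hide exactly this failure:

```python
# Illustrative (made-up) numbers: 95% overall accuracy can coexist with
# missing nearly every case of a rare, aggressive tumor.
total = 1000          # chest X-rays in the test set
aggressive = 50       # of which show the aggressive tumor
caught = 2            # aggressive cases the model actually flags

correct_other = 948   # correct calls on the remaining 950 images
overall_accuracy = (caught + correct_other) / total
aggressive_sensitivity = caught / aggressive

print(f"overall accuracy:        {overall_accuracy:.1%}")       # 95.0%
print(f"aggressive-tumor recall: {aggressive_sensitivity:.1%}")  # 4.0%
```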

Luckily, you might think, we have medical regulatory bodies to guard us against these dangers, while also ensuring important medical AI innovations reach patients quickly. You’d be wrong. Our current regulatory procedures are poorly suited for AI.

Lack of clinical validation

In the U.S., the Food and Drug Administration has approved close to 1,000 AI-enabled “medical devices” (which can include software as well as hardware with AI features). The vast majority of these (97%) have been approved through a process known as 510(k) that allows for faster approvals so long as the software vendor shows that their product is “substantially equivalent” to a previously approved device.

But the state of the art in AI changes rapidly, making it difficult to say that the performance of a new AI model is equivalent to that of older forms of software.

More importantly, vendors are allowed to test their AI software on historical data. They don’t need to prove it improves patient outcomes in real-world clinical settings. In a recent paper published in Nature Medicine, researchers found that 43% of FDA-approved AI medical devices lacked any clinical validation data. And of the 521 AI devices the researchers examined, only 22 were backed by results from randomized controlled trials, the gold standard for validating therapies.

The FDA rules were designed for hardware, which is generally upgraded infrequently. They never anticipated a world of Agile software development, with weekly app updates. The FDA has introduced "Predetermined Change Control Plans" (PCCPs) to allow minor software updates on a preset schedule, but this still doesn't fully address the needs of AI models, some of which can learn continuously.

One U.K. startup thinks there is a better way

In the U.K. and Europe, the situation is more flexible, but still has drawbacks. Here, government medical regulators outsource the authorization of medical devices to designated private companies called “notified bodies.”

I recently met with Scarlet, a startup that is a notified body for both the U.K. and the EU, specializing in AI medical software. It's creating a technology platform that makes it much easier for AI vendors to submit their market authorization applications for review.

James Dewar, Scarlet’s cofounder and CEO, tells me that the company’s technology helps standardize submission documentation and automatically checks whether a vendor’s application is complete, saving days or even weeks. Most importantly, software developers can submit updates to their software as frequently as they wish, and get approvals for these updates in days, instead of the six to eight months the process could take in the past.

Dewar and his cofounder, Jamie Cox, both previously worked on medical AI models at former U.K. health tech company Babylon Health (later bought by eMed Healthcare). But Scarlet’s platform doesn’t use AI itself—at least not yet, although Dewar says the company is considering how large language models might help. Human experts review the substance of each application, something that is unlikely to change, he said.

Buyer beware

More troublingly, Dewar told me that there are no explicit requirements for notified bodies to examine how well a product performs for patient subgroups or disease subtypes—or how they should deal with AI concepts such as bias and model drift.

Vendors are not required, for instance, to submit confusion matrices: tables that show how performance varies across different patient groups on metrics such as false positive and false negative rates. Scarlet, for its part, does currently ask vendors to submit these metrics.
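To see why regulators might want these numbers, here is a small, hypothetical example, with invented labels and predictions, of the per-subgroup error rates a confusion matrix makes visible:

```python
# Hypothetical per-subgroup confusion matrices: the same model can have very
# different false negative rates for different patient groups.
import numpy as np
from sklearn.metrics import confusion_matrix

# Made-up true labels and model predictions for two patient subgroups.
y_true_a = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
y_pred_a = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 0])  # group A: 1 missed case

y_true_b = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
y_pred_b = np.array([1, 0, 0, 0, 0, 0, 0, 0, 1, 0])  # group B: 3 missed cases

for name, y_true, y_pred in [("A", y_true_a, y_pred_a),
                             ("B", y_true_b, y_pred_b)]:
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print(f"group {name}: false negative rate = {fn / (fn + tp):.0%}, "
          f"false positive rate = {fp / (fp + tn):.0%}")
```

An overall accuracy number would average these groups together; the per-group breakdown is what reveals that group B's cases are being missed three times as often.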

“There’s an element of buyer beware,” Dewar says of AI medical devices. “At the moment, the regulation doesn’t ask us to do anything about bias. We would welcome those changes, but that is not what the current regulations specify.” He also said there was “a balance” to be struck between increasing requirements around clinical effectiveness and the need to “get innovation to market.”

A model for the EU AI Act?

Scarlet just received a $17.5 million Series A investment from London-based venture capital firm Atomico, with participation from prior investors Kindred Capital, Creandum, and EF (Entrepreneur First). The company is hoping to expand into the U.S., where the FDA uses accredited private organizations to conduct initial reviews of 510(k) applications—although unlike in Europe, in the U.S. these private companies do not have the final say on authorization.

Dewar said Scarlet was also considering branching out into certification of AI software in other high-risk settings besides medicine. Under the EU AI Act, any company deploying AI software in high-risk areas such as controlling electricity or water supplies, or grading university entrance exams, must have an outside party verify its risk-assessment and mitigation processes. A big question has been: Which organizations will have the expertise to conduct these checks? Well, Scarlet might be one.

And with that, here's more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Correction, Sept. 18: An earlier version of this story incorrectly identified the company where Dewar and Cox worked prior to founding Scarlet. It was Babylon Health, not Benevolent AI. The story has also been updated to clarify that the months-to-days speed advantage Scarlet's platform provides applies to updates to previously approved AI software, not to initial authorization applications, and that Scarlet currently does ask its customers to submit confusion matrices even though there is no legal requirement that they do so.

Before we get to the news: If you want to learn more about AI and its likely impacts on our companies, our jobs, our society, and even our own personal lives, please consider picking up a copy of my book, Mastering AI: A Survival Guide to Our Superpowered Future. It's out now in the U.S. from Simon & Schuster, and you can order a copy today here. In the U.K. and Commonwealth countries, you can buy the British edition from Bedford Square Publishers here.
