Hello, and welcome to February’s special edition of Eye on A.I. Today, I want to take a brief break from chronicling the generative A.I. craze and focus on a little-known company working with very different A.I. methodologies that may wind up having an outsized impact on a sector critical to almost every aspect of the modern economy: insurance.
The company, Mind Foundry, is located just down the road from my house in Oxford, England. It was co-founded by Oxford University computer scientists Stephen Roberts and Michael Osborne. (Osborne is best known as the co-author, with economist Carl Benedikt Frey, of a seminal 2013 study on the jobs that might be most susceptible to automation in the near future.) It specializes in applying machine learning in what it calls “high-stakes situations”: ones that are critical to operating equipment, like optimizing propulsion from a jet engine, or where a person’s financial health, physical health, or even life may be on the line if the A.I. software gets it wrong.
As a result, many of the machine learning techniques Mind Foundry uses are not the deep learning systems that have driven most of the past decade’s hype around A.I. Instead, they are older but still very powerful statistical models that have the benefit of being far more transparent about how they work and why they produce the predictions or classifications they do. One of the biggest problems with generative A.I. systems like ChatGPT is what researchers call “hallucination”: they make stuff up and yet express this erroneous information with great conviction. “That’s why they are absolutely not appropriate in high stakes,” Brian Mullins, Mind Foundry’s CEO, tells me. “There’s too much potential for unforeseen failure modes.” Mind Foundry’s machine learning methods, in contrast, Roberts tells me, are “humble and honest”: they understand when their predictions are less certain, and they can tell a user when they are straying into parts of a data distribution where they are likely to be less reliable.
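(For readers who want a concrete sense of what “humble and honest” can mean in practice: Gaussian process regression is one classic example of a model with this property. It returns an uncertainty estimate alongside every prediction, and that uncertainty grows as queries move away from the training data. Mind Foundry hasn’t said which models it uses for any particular client, so the short Python sketch below is purely illustrative; the toy data, kernel choice, and confidence threshold are all invented for the example.)

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy training data: the model only ever sees inputs between 0 and 5.
rng = np.random.default_rng(0)
X_train = rng.uniform(0, 5, size=(40, 1))
y_train = np.sin(X_train).ravel() + rng.normal(0, 0.1, size=40)

# A Gaussian process with a smooth (RBF) kernel plus an observation-noise term.
gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1),
    normalize_y=True,
)
gp.fit(X_train, y_train)

# Query points: two inside the training range, one far outside it.
X_query = np.array([[2.5], [4.0], [9.0]])
mean, std = gp.predict(X_query, return_std=True)

# Flag predictions whose uncertainty is large, i.e. where the model is
# straying into parts of the input space it has never seen.
for x, m, s in zip(X_query.ravel(), mean, std):
    flag = "LOW CONFIDENCE" if s > 0.5 else "ok"
    print(f"x={x:4.1f}  prediction={m:+.2f}  +/- {s:.2f}  [{flag}]")
```

The point of the sketch is simply that such a model can flag, rather than hide, the queries it is least sure about; the 0.5 threshold here is an arbitrary stand-in for whatever tolerance a real high-stakes application would demand.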
Mind Foundry has done a lot of work for government clients, including in the defense and security sector, as well as helping health systems allocate resources. But Mullins says the startup has increasingly found traction in the insurance sector, where underwriting models need to be both reliable and interpretable, so that regulators can understand how they work and how they comply with rules meant to prevent companies from either unfairly discriminating against customers or taking on too much risk.
Now the company has teamed up with Japan-based Aioi Nissay Dowa Insurance (ANDI) and its wholly owned European subsidiary Aioi Nissay Dowa Europe (AND-E), which specializes in automotive insurance, to launch a first-of-its-kind lab in Oxford. The lab is dedicated to investigating how insurance can help address global issues, from climate change to aging populations, and how machine learning can help insurance companies better understand and underwrite risk. “Decisions that are being made now affect what happens in 10 to 100 years’ time,” Roberts tells me. “And we need to work to understand what those possible futures might look like.” As part of the new lab, ANDI and AND-E are making what they term “a significant investment” in Mind Foundry, though the companies declined to disclose the amount.
Roberts says the new lab will pool expertise from throughout Oxford University, including faculty who specialize in energy and power systems engineering, economics, biology, and climate science, as well as computer science and robotics. He says that there are about seven faculty members currently affiliated with the lab, but that Mind Foundry and ANDI intend to bring in more, including those from areas such as law and politics. At the lab, they will be joined by experts in insurance from ANDI and AND-E. “We have people from Japan, we have people from the scientific side of Oxford there, we have people from our operations there, we have our data teams there, and we have security technicians inside. So it's a huge variety of people looking at all different streams of what insurance and risk management is all about,” Michael Kainzbauer, AND-E’s Group CEO, says.
He says that among the new lab’s priorities will be researching new insurance products that might follow an individual across different modes of transport: for instance, a commuter who uses a car-sharing program to drive to the train station, then takes a train into the city, followed by a shared bike or scooter rented by the minute. AND-E is also interested in the sort of insurance that may be necessary if autonomous cars become ubiquitous. That shift, Kainzbauer says, will move the industry away from its current focus on the individual liability of drivers and toward the product liability of the car and its software, which would probably be insured across a fleet. The lab will also look at insurance that is increasingly based on telematics: data streamed directly from vehicles that can give insurers a more precise and accurate picture of driving behavior. In addition, it will examine the insurance of A.I. software itself. It will explore risks associated with climate change and ways to help reduce the carbon footprint of transport. And finally, it will look at health and wellness, especially in light of aging populations in most developed countries.
Kainzbauer says that his company’s work with Mind Foundry has already led to improvements in customer retention, safer driving on the part of AND-E customers, and better fraud detection. For instance, he says that, with the help of Mind Foundry’s models, AND-E has been able to take telematics data from a car and turn it into a score that drivers can see on a mobile app, allowing them to visualize how safely they are driving. Just seeing this score, Kainzbauer says, often prompts people to drive more safely.
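(Neither company has described how that score is actually computed, but the general idea, turning raw telematics signals into a single number a driver can act on, is easy to illustrate. The toy Python function below uses invented signals and weights; a real insurer’s score would be calibrated against claims data and would be far more sophisticated.)

```python
from dataclasses import dataclass

@dataclass
class TripTelemetry:
    """Hypothetical per-trip signals a telematics device might report."""
    distance_km: float
    harsh_braking_events: int
    harsh_acceleration_events: int
    share_of_time_speeding: float  # fraction of the trip spent over the limit, 0-1

def driving_score(trip: TripTelemetry) -> int:
    """Toy 0-100 safety score: start from 100 and subtract penalties.

    The weights here are invented purely for illustration; an insurer would
    tune them against real claims and accident data.
    """
    events_per_100km = (
        (trip.harsh_braking_events + trip.harsh_acceleration_events)
        / max(trip.distance_km, 1.0) * 100
    )
    score = 100.0
    score -= 2.0 * events_per_100km               # penalize harsh maneuvers
    score -= 40.0 * trip.share_of_time_speeding   # penalize time spent speeding
    return int(max(0.0, min(100.0, round(score))))

# Example: a 20 km commute with a few harsh maneuvers and a little speeding.
print(driving_score(TripTelemetry(
    distance_km=20,
    harsh_braking_events=2,
    harsh_acceleration_events=1,
    share_of_time_speeding=0.05,
)))
```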
Roberts says the same kinds of scoring might help people make better decisions in many other areas too, such as combating climate change. “This isn’t about social manipulation,” he says. “This is actually about building a scientific chain of reasoning into everything that's done, and empowering people. It is my utmost belief, optimistically, that people want to do the right thing. They just don't necessarily have the information to help them make the right decision. And just by providing factual, evidence-based information that's priced into actions that they take, people begin to make the right decisions.”
On that optimistic note, here’s the rest of this week’s news in A.I.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn