One of many concerns about accelerating AI development is the risk it poses to human life. The worry is real enough that numerous leading minds in the field have warned against it: More than 300 AI researchers and industry leaders recently issued a statement asking someone (except them, apparently) to step in and do something before humanity faces—and I quote—"extinction." Skynet scenarios are usually the first thing that leaps to mind when the subject comes up, thanks to the popularity of blockbuster Hollywood films. Many experts, though, believe the greater danger lies in, as professor Ryan Calo of the University of Washington School of Law put it, AI's role in "accelerating existing trends of wealth and income inequality, lack of integrity in information, & exploiting natural resources."
But it seems like a Skynet-style apocalyptic end of the world might be more plausible than some people thought. During a presentation at the Royal Aeronautical Society's recent Future Combat Air and Space Capabilities Summit, Col Tucker "Cinco" Hamilton, commander of the 96th Test Wing's Operations Group and the US Air Force's Chief of AI test and operations, warned against an over-reliance on AI in combat operations because sometimes, no matter how careful you are, machines can learn the wrong lessons.
Tucker said that during a simulation of a suppression of enemy air defense [SEAD] mission, an AI-equipped drone was sent to identify and destroy enemy missile sites—but only after final approval for the attack was given by a human operator. That seemed to work for a while, but eventually the drone attacked and killed its operator, because the operator was interfering with the mission that had been "reinforced" in its AI training: To destroy enemy defenses.
"We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat," Hamilton said. "The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."
To be clear, this was all simulated: There were no murder drones in the sky, and no humans were actually snuffed. Still, it was a decidedly sub-optimal outcome, and so the AI training was expanded to include the concept that killing the operator was bad.
"So what does it start doing?" Hamilton asked. "It starts destroying the communications tower that the operator uses to communicate with the drone to stop it from killing the target."
It's funny, but it's also not funny at all and actually quite horrifying, because it aptly illustrates how AI can go very wrong, very quickly, in very unexpected ways. It's not just a fable or a far-fetched sci-fi scenario: Granting autonomy to AI is a fast road to nowhere good. Echoing a recent comment made by Dr. Geoffrey Hinton, who said in April AI developers shouldn't scale up their work further "until they have understood whether they can control it," Hamilton said, "You can't have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you're not going to talk about ethics and AI."
The 96th Test Wing recently hosted a multi-disciplinary collaboration "whose mission is to operationalize autonomy and artificial intelligence through experimentation and testing." The group's projects include the Viper Experimentation and Next-gen Ops Model (VENOM), "under which Eglin (Air Force Base) F-16s will be modified into airborne flying test beds to evaluate increasingly autonomous strike package capabilities." Sleep well.