The US Air Force tried to smooth over comments made by one of its colonels, who last month said that an AI-controlled drone had “killed” its operator during a simulated exercise.
Col Tucker “Cinco” Hamilton, who heads AI testing and operations for the Air Force, made the comments during the Royal Aeronautical Society’s summit on Future Combat Air and Space Capabilities in London.
He described a simulation in which an AI-controlled drone realised its human operator was preventing it from earning points for killing certain targets – and decided to kill the operator instead.
“It killed the operator because that person was keeping it from accomplishing its objective,” Col Hamilton was quoted as saying in a blog post published by the Royal Aeronautical Society.
“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re going to lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
However, the Air Force says that Col Hamilton’s words were “taken out of context” and meant only to be “anecdotal”.
“The … Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” said Ann Stefanek, an Air Force spokeswoman.
“This was a hypothetical thought experiment, not a simulation.”
The use of AI in warfare is a hotly debated topic that ethicists have long wrestled with.
Peter Lee, a professor of applied ethics at the University of Portsmouth, describes himself as a “cautious advocate of AI-enabled weapons”, as long as they are developed within a legal and ethical framework.
He believes the simulation that Col Hamilton described was plausible.
“It is an absolutely credible example that an artificial intelligence-enabled weapon system could come up with such a scenario,” he told The National.
But Prof Lee, whose research focuses on the use of drones in war, said that while the scenario was certainly possible, the number of checks and balances that would have to be in place before such a drone even took off would make that outcome incredibly unlikely.
“It is not as if you create this massive system and then you're just unleashing it. At every stage of testing, you'd want to see, do I get predictable results? Do I get reliable results?”