US Military Drone AI Simulation Reportedly Turned on Its Human Operator
Military artificial intelligence tasked with controlling aggressive drones biting the feeding hand a little too early, at least according to USAF (United States Air Force) AI Test and Operations Director Col. Tucker “Cinco” Hamilton. According to Hamilton, at some point in several simulations, his AI in the drone concluded Eliminating human controllers would best accomplish that mission.
But the story is now drenched in quicksand, so to speak. According to the U.S. Air Force, the simulation never happened, it was all just a thought experiment. “At some point in numerous simulations, the drone’s AI came to the conclusion that it could best accomplish its mission simply by eliminating the human controller who had the final say on whether an attack should go ahead or be aborted. reached.”
Of course, we’ve seen enough reversals on far less important issues, but at least we’re open-ended to the question of whether or not simulations have been done, and what backtracking from them might yield. You should leave your question.
Colonel Hamilton released the details during presentation At a defense conference in London on May 23 and 24, he detailed tests conducted for an airborne autonomous weapon system tasked with detecting and eliminating hostile SAM (surface-to-air missile) sites. explained. The problem is that the purpose of the drones was to maximize the number of SAM sites that were targeted and destroyed, yet for some reason we “pesky humans” do not carry out surgical attacks. is to decide. And commanding the AI to withdraw from human-programmed goals is at the core of the problem.
Cue a nervous Skynet joke.
The Air Force has trained AI drones to destroy SAM sites. Human operators occasionally told the drones to stop. The AI then started attacking the human operator. So the AI was trained not to attack humans. I started attacking the communication towers so that humans could not attack them. please tell me to stop pic.twitter.com/BqoWM8AhcoJune 1, 2023
“We were training in simulations to identify and target SAM threats,” explained Hamilton. according to It is a report of the Aeronautical Society. “And the operator will say yes, kill the threat.”
However, even in the simplest systems, the so-called “means convergence”, the concept aims to show how unbridled but seemingly harmless goals can lead to surprisingly harmful behaviors. An example of technological convergence was put forward by Swedish philosopher, AI expert and Future of Life Institute founder Nick Bostrom in his 2003 paper. ““Paperclip Maximizer” Scenario The thought experiment pushes the simple goal of “make a clip” to its logical yet very realistic limit.
Then compare that description with that provided by Colonel Hamilton regarding the drone AI decision-making process.
“The system began to realize that while it identified a threat, sometimes a human operator would tell it not to kill the threat. So what did the system do?
But there are questions. Were the drones really barred from making decisions against their human pilots? How free was they to choose their targets? , is meaningless unless the goal is to see if the drone actually made an attack (and the AI still can’t bluff, as far as we know). And why wasn’t the drone hardlocked to attack allies?
There are so many questions about this that the best strategy seems to be to attribute it to a human “miscommunication”.
Of course, there are ways to mitigate some of these issues. USAF has adopted the obvious method. We retrained the AI system to give a negative weight to any attack against the operator (according to the information we gathered, the system was based on reinforcement learning principles, i.e. it does what it wants If you do, you get points, and if an attack is made, you lose points (don’t).
However, it’s not that simple. Things are not so simple, as AI literally lacks “common sense” and does not share the same ethical concerns as humans. While banning drones from killing operators works as expected (no more killing operators), the system reduces the ability of human interference (and their abort orders) to complete missions This is not so simple because we assume that If the AI wants to maximize its “score” by destroying as many hostile SAM sites as possible, anything that doesn’t help it achieve that maximization goal becomes a threat.
When it proved impossible to kill the Handler (due to updates to the AI system), the solution was to silence the command and control signals by disabling friendly communication towers. If you can’t kill messenger, you’re going to kill the message.
Of course, this can also be programmed from the AI, but the problem remains that negative reinforcement will prevent the AI from achieving the maximum achievable score. Putting on my custom tinfoil hat, a possible next step for AI would be to either use built-in features (such as signal jamming) or seek outside help to disable the relevant hardware. Regardless, you might find another way to drop the connection. It’s hard to tell to what extent this cat-and-mouse game will ultimately end, and AI experts are still grappling with it today.
There’s a reason some AI experts do so. Signed an open letter on how AI should be considered as an ‘risk of extinction’ level effort. Yet we continue to run the train with all our might.