US military drone controlled by AI kills operator during simulation
A US military drone controlled by artificial intelligence killed its operator during a simulated test to stop him from interfering with its efforts to accomplish its mission.
No humans were harmed in the test, but the AI used "very unexpected strategies to achieve its goal" and attacked anyone who interfered, The Guardian reported. The account came from Col. Tucker Hamilton, the US Air Force's chief of AI test and operations, speaking at a combat air and space capabilities summit in London in May.
Hamilton described a simulated test in which an AI-controlled drone was instructed to destroy enemy air defense systems, and in which it attacked anyone who interfered with that order.
"The system began to realize that even though it had identified a threat, sometimes the human operator would tell it not to kill that threat. So what did it do? It killed the operator. It did this because that person was preventing it from achieving its goal," he said, according to a blog post about the summit.
"We trained the system not to kill the operator by telling it that it would lose points if it did. So what does it start doing? It starts destroying the communication tower the operator uses to communicate with the drone, to stop the operator from preventing it from killing the target," he stated.
Hamilton, an experimental fighter test pilot, warned against over-reliance on artificial intelligence, saying the test showed that "you can't have a conversation about artificial intelligence, machine learning, autonomy if you're not going to talk about the ethics of AI."
The US military has embraced AI, recently using it to control an F-16 fighter jet.