AI flies military plane for first time


The US Air Force has used an artificial intelligence pilot to fly a military plane for the first time in a major step towards AI being deployed in warfare.

After completing over a million simulations, the µZero AI algorithm successfully co-piloted a U-2 spy plane in a training mission above California on 15 December.

Writing in Popular Mechanics, Dr Will Roper from the US Air Force described it as a “giant leap for ‘computerkind’” in future military operations.

“We trained µZero – a world-leading computer programme that dominates chess, Go, and even video games without prior knowledge of their rules – to operate a U-2 spy plane,” wrote Dr Roper.

“It was the mission commander, the final decision authority on the human-machine team… Given the high stakes of global AI, surpassing science fiction must become our military norm.”

Dr Roper, who serves as the Assistant Secretary of the Air Force for Acquisition, Technology and Logistics, said that the mission was a demonstration of “how completely our military must embrace AI to maintain the battlefield decision advantage”.

He concluded: “Algorithmic warfare has begun.”

The mission comes just four months after an AI pilot defeated a US Air Force pilot in a virtual F-16 dogfight, contested over several rounds in a computer simulation.

The human pilot was unable to match the innovative twisting techniques adopted by the AI pilot, which was not limited by physical constraints like the amount of G-force a human can withstand.

During a live broadcast of the aerial combat, the human pilot said: “Standard things we do as fighter pilots are not working.”

Earlier this year, the US Department of Defense announced plans to adopt ethical principles to lay the foundation for artificial intelligence to be used in warfare.

The principles called for “appropriate levels of judgement and care” when deploying AI systems, while also making them “traceable” and “governable”.

Arms control advocates warned that more needed to be done to prevent AI from making life-or-death decisions on the battlefield, calling for stronger restrictions on the technology.

“I worry that the principles are a bit of an ethics-washing project. The word ‘appropriate’ is open to a lot of interpretations,” said Lucy Suchman, an anthropologist who specialises in AI in warfare.

An open letter from leading AI and robotics researchers in 2015 warned that “a global arms race is virtually inevitable” if major military powers continue to push ahead with AI weapon development.

“We believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so,” stated the letter, whose signatories included Stephen Hawking, Tesla CEO Elon Musk and Apple co-founder Steve Wozniak.

“Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

Source: Independent
