New ethical principles regarding artificial intelligence and its use in military operations will be employed – here’s what you need to know.
The Pentagon and AI
After Google's high-profile withdrawal from Project Maven in 2018, the Pentagon is looking to regain the trust and support of the tech industry by defining and implementing new ethical principles for the use of artificial intelligence in both military and non-military operations.
The new guidelines call for "exercising appropriate levels of judgment and care" in the deployment and use of AI technology, and for making automated systems' decisions "traceable" and "governable" so they can be stopped in time if they start behaving in an unintended way, Air Force Lt. Gen. Jack Shanahan said.
The principles will apply to the full range of military AI applications, from surveillance and intelligence-gathering operations to diagnosing problems in aircraft or naval vessels. Although the principles themselves are far less strict than what arms control advocates have proposed, they closely follow the recommendations of the Defense Innovation Board, headed by former Google CEO Eric Schmidt.
“Tech adapts. Tech evolves,” Shanahan said, adding that the implementation of ethics surrounding AI is meant to avoid outdated restrictions on the military.
The Pentagon has been rushing to upgrade its AI capabilities to keep America ahead of its main competitors in the sector, Russia and China. Meanwhile, major companies have been fighting over a $10 billion cloud computing contract called JEDI (Joint Enterprise Defense Infrastructure), which Microsoft won in October. Work on JEDI still hasn't started because the runner-up, Amazon, is suing the Pentagon, alleging favoritism and claiming that President Trump's dislike of CEO Jeff Bezos and Amazon destroyed the company's chances of winning.