You know the story. The government creates artificial intelligence—badda bing badda boom—you’re fighting Arnold Schwarzenegger in a post-apocalyptic battle for the planet. It’s a tale as old as 1984 (and still being told).
But it doesn’t have to be that way. The Department of Defense asked the Defense Innovation Board to prepare a report called “AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense,” or, as I call it, a Terminator avoidance plan.
Last week, the board released its set of principles for the ethical deployment of AI “for both combat and non-combat purposes.” Contractors in AI or related fields would do well to incorporate these principles, as the DoD will rely on contractors to act responsibly and ethically in this field. The summary linked above is a tidy 11 pages. The full report is available here.
The board settled on these principles after a 15-month study that sought feedback from experts such as “Turing Award-winning AI researchers, retired four-star generals, human rights attorneys, political theorists, arms control activists, tech entrepreneurs” and the public.
Among the legitimately scary conclusions from the study are these heaters:
“Now is the time, at this early stage of the resurgence of interest in AI, to hold serious discussions about the norms of AI development and use in a military context—long before there has been an incident.”
Incident, they say. Terminator, I read.
Here’s another:
“Our adversaries and competitors have recognized the transformative potential of AI and are investing heavily in it by modernizing their forces while actively engaging in provocative activities around the globe.”
Russian Terminators.
And here’s the kicker, from Lt. Gen. Jack Shanahan, Director of the Joint AI Center: “What I don’t want to see is a future where our potential adversaries have a fully AI-enabled force and we do not . . . I don’t have the time luxury of hours or days to make decisions. It may be seconds and microseconds where A.I. can be used.”
Yikes. That’s a general saying that the military has to consider turning battlefield decision-making over to AI or risk falling behind. I can’t stress this enough: that’s literally the plot of The Terminator.
So what is the board going to do about it? It landed on five principles. They are:
- Responsible – humans should exercise judgment and remain responsible for outcomes
- Equitable – take deliberate steps to avoid unintended bias
- Traceable – favor transparent and auditable methodologies and data sources
- Reliable – define an explicit domain of use and test safety and security within it
- Governable – be able to detect and stop unintended harm, including by disengaging the system
To put these principles into practice, the board made the following recommendations:
- Formalize these principles via official DoD channels
- Establish a DoD-wide AI steering committee
- Cultivate and grow the field of AI engineering
- Enhance DoD training and workforce programs
- Invest in research on novel security aspects of AI
- Invest in research to bolster reproducibility
- Define reliability benchmarks
- Strengthen AI test and evaluation techniques
- Develop a risk management methodology
- Ensure proper implementation of AI ethics principles
- Expand research into understanding how to implement AI ethics principles
- Convene an annual conference on AI safety, security, and robustness
So that all sounds totally reasonable. But it’s all so sci-fi-y and comic-booky. Here’s hoping that the private contractors working with the government adhere to this approach and take pains to incorporate fail-safes.
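For contractors wondering what a “governable” fail-safe might actually look like, here’s a minimal sketch in Python. To be clear, everything in it (the GovernableAgent class, the confidence floor, the human_approve callback) is a hypothetical illustration I made up for this post, not anything the report prescribes. The idea is simply that a human stays responsible for consequential actions, decisions get logged for auditing, and somebody can always pull the plug.

```python
# A minimal sketch of a "governable" fail-safe wrapper. All names here are
# hypothetical illustrations, not anything prescribed by the DIB report.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("governable-agent")


class GovernableAgent:
    """Wraps a model so a human stays in the loop and someone can pull the plug."""

    def __init__(self, model, min_confidence=0.8):
        self.model = model                  # callable: observation -> (action, confidence)
        self.min_confidence = min_confidence
        self.engaged = True                 # the kill switch ("Governable")

    def disengage(self, reason):
        # Human or automated deactivation of a misbehaving system.
        self.engaged = False
        log.warning("Agent disengaged: %s", reason)

    def decide(self, observation, human_approve):
        if not self.engaged:
            raise RuntimeError("Agent is disengaged; no actions permitted.")

        action, confidence = self.model(observation)
        # Every proposal is logged, leaving an audit trail ("Traceable").
        log.info("Proposed action=%r confidence=%.2f", action, confidence)

        # Low confidence suggests we are outside the defined domain of use ("Reliable").
        if confidence < self.min_confidence:
            self.disengage(f"confidence {confidence:.2f} below floor {self.min_confidence}")
            return None

        # A human exercises judgment and remains accountable for the outcome ("Responsible").
        if not human_approve(action):
            log.info("Human rejected action=%r", action)
            return None
        return action


# Toy usage: a stand-in model that always proposes the same action.
toy_model = lambda obs: ("hold position", 0.95)
agent = GovernableAgent(toy_model)
print(agent.decide({"sensor": 42}, human_approve=lambda a: True))
```

It’s a toy, but it maps onto the principles: no Skynet moment, because the system can’t act without a responsible human signing off, and it shuts itself down the moment it strays outside its lane.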