Defense Departments are adding a new weapon to their arsenals

AI is rapidly becoming embedded in our lives — from art to zoology — and military applications are no exception. In fact, many Defense Departments are quickly moving on offense to adopt digital strategies.

The U.S. Department of Defense (DoD) Chief Digital and Artificial Intelligence Office (CDAO) hosted the Advana Industry Day. Advana, a mashup of ‘advancing analytics,’ gives DoD personnel access to more than 400 DoD systems and the tools, services, and analytics needed to facilitate data-driven decisions across the organization.

The DoD is pursuing this initiative as a recompete: an opportunity to increase vendor competition, improve interoperability among systems, and ensure Advana uses best-in-class solutions.

AI doing what it does best — crunching the data

Anthony King, a professor of defense and security studies and Director of the Strategy and Security Institute at the University of Exeter in the United Kingdom, writes in his paper, “Digital Targeting: Artificial Intelligence, Data, and Military Intelligence,” that contrary to popular belief, AI is not primarily being used to guide autonomous weapons; instead, it is being used to process data. AI is augmenting military intelligence and accelerating military targeting.

AI in the military: It’s not as easy as you might think

For AI to be useful to a military, the application needs to be trustworthy. A dozen trustworthy instances can be quickly undermined by a single failure.

For AI to gain acceptance in military environments, the data sets must not be biased. That bias may not be evident in an initial implementation of AI use cases. But because societies are constantly changing, there is a risk of prediction drift: what was trustworthy and unbiased yesterday may not be so tomorrow, next week, or next month.
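To make the idea of drift concrete, here is a minimal, illustrative sketch (not any DoD or Advana system) of one common way teams monitor for it: comparing the distribution of newly arriving data against the model's training baseline with the Population Stability Index (PSI). The function name, bin count, and thresholds are all illustrative assumptions.

```python
import math
from typing import List

def psi(baseline: List[float], recent: List[float], bins: int = 10) -> float:
    """Population Stability Index: higher values indicate more drift."""
    lo = min(min(baseline), min(recent))
    hi = max(max(baseline), max(recent))
    width = (hi - lo) / bins or 1.0  # avoid zero width if all values equal

    def proportions(values: List[float]) -> List[float]:
        counts = [0] * bins
        for v in values:
            counts[min(int((v - lo) / width), bins - 1)] += 1
        # Floor each proportion so empty bins don't produce log(0).
        return [max(c / len(values), 1e-6) for c in counts]

    p, q = proportions(baseline), proportions(recent)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Common rule of thumb: PSI < 0.1 stable; 0.1-0.25 moderate; > 0.25 significant drift.
baseline = [0.1 * i for i in range(100)]        # data the model was trained on
drifted = [0.1 * i + 4.0 for i in range(100)]   # newer data, shifted over time
print(psi(baseline, baseline) < 0.1)   # True: identical data, no drift
print(psi(baseline, drifted) > 0.25)   # True: shifted data flags significant drift
```

A monitoring job like this would run on a schedule, alerting analysts when the score crosses a threshold so the model can be retrained or audited before its outputs stop being trustworthy.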

When AI is used in autonomous weapons or analytical systems, there is a very real concern that humans may be left out of decision making on the battlefield. In the book “Responsible Use of AI in Military Systems,” the authors point out that the U.S. DoD uses the terminology “appropriate levels of human judgment.” But Human Rights Watch asks what determines an “appropriate” level of human judgment.

The U.S. DoD also advocates for the use of Responsible AI. That means developing AI systems that operate ethically, lawfully, transparently, and traceably, protect privacy, and are continuously monitored for bias creep.

Warfare has always been a complex undertaking. The addition of AI into the military’s toolkit would seem to only make waging war more challenging still.

Learn how AI can help your business grow

Subscribe to AI Today

#AI #AIToday #DefenseDepartments #warfare #Advana #DoD

AI Today

Post Office Box 54272, San Jose, CA, 95154, US.
© 2024 Hologram LLC. All rights reserved.
