Defense and Warfare in the Age of AI: Of Morals, Morale and Military Intelligence

The programming code for the first generation of generative AI is already on the books. The story, however, is still in its early chapters: what has yet to be written is how defense departments will integrate AI into their missions while maintaining their standards.

The Five Principles of Defense AI

In 2019, the United States (U.S.) Department of Defense, through its Defense Innovation Board, began to address how it would adopt and adapt AI into its operations, publishing the results of its research in AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense. The board concluded that integrating AI strategically and effectively would require five key attributes. DoD AI needed to be: responsible, equitable, traceable, reliable, and governable.

The following principles represent the means to ensure ethical behavior as the Department develops and deploys AI. To that end, the Department should set the goal that its use of AI systems is:

“1. Responsible. Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use, and outcomes of DoD AI systems.

2. Equitable. DoD should take deliberate steps to avoid unintended bias in the development and deployment of combat or non-combat AI systems that would inadvertently cause harm to persons.

3. Traceable. DoD’s AI engineering discipline should be sufficiently advanced such that technical experts possess an appropriate understanding of the technology, development processes, and operational methods of its AI systems, including transparent and auditable methodologies, data sources, and design procedure and documentation.

4. Reliable. DoD AI systems should have an explicit, well-defined domain of use, and the safety, security, and robustness of such systems should be tested and assured across their entire life cycle within that domain of use.

5. Governable. DoD AI systems should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption, and for human or automated disengagement or deactivation of deployed systems that demonstrate unintended escalatory or other behavior.”

These principles, the U.S. DoD concludes, cannot be an afterthought. They need to be an integral part of the deployment process, in keeping with the Department's commitment to U.S. law, the Law of War (which has its roots in international law, a term first articulated by the 18th/19th-century English philosopher Jeremy Bentham), and international humanitarian law.

In the European Union, Dorgyles C.M. Kouakou, a quantitative economist at TAC ECONOMICS, and Eva Szego, a researcher with the École Nationale Supérieure de Techniques Avancées (ENSTA Paris, the National Higher School of Advanced Techniques), published their evaluation of AI in the defense sector in their paper, Evaluating the integration of artificial intelligence technologies in defense activities and the effect of national innovation system performance on its enhancement.

The pair used graph theory to assess the degree to which AI has been integrated into national defense sectors and how the performance of each country's national innovation system (NIS) influences that integration. They analyzed data from 33 countries spanning 1990 to 2020. Their findings indicate that not only is the U.S. the global leader in integrating AI into its defense sector, but there is a significant gap between the U.S. and other countries; Germany ranks second in readiness, followed by the United Kingdom.
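The paper's exact graph-theoretic measures are not reproduced here, but a minimal sketch can illustrate the general idea of scoring AI integration from a network of technology-to-activity links. Everything in this example is assumed for illustration: the edge data, the choice of density as an integration proxy, and the use of degree centrality to rank technologies are not drawn from Kouakou and Szego's methodology.

```python
# Illustrative sketch only: not the measures used by Kouakou and Szego.
# A toy "integration score" computed from a graph that links AI technology
# areas to defense activities whenever they co-occur in a country's
# innovation output (e.g., defense-related patents or programs).
import networkx as nx

# Hypothetical edges: (AI technology, defense activity) co-occurrences.
edges = [
    ("computer_vision", "surveillance"),
    ("computer_vision", "target_recognition"),
    ("nlp", "intelligence_analysis"),
    ("planning", "logistics"),
    ("planning", "wargaming"),
]

G = nx.Graph()
G.add_edges_from(edges)

# One simple proxy for degree of integration: graph density, i.e. how many
# of the possible technology/activity links actually exist.
integration_score = nx.density(G)

# Degree centrality highlights which AI technologies sit at the heart of
# the defense innovation network.
centrality = nx.degree_centrality(G)

print(f"integration score (density): {integration_score:.3f}")
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")
```

Comparing such scores across countries and years is one plausible way a cross-national, multi-decade analysis like this could rank relative AI readiness, though the published study's actual metrics may differ.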

Nations that hope to maintain their military advantage will need to continue to evaluate and update their AI systems. The performance of their AI technology, the researchers concluded, will play a critical role in those nations’ military superiority.

[pull-quote] “To be prepared for war is one of the most effective means of preserving peace.” George Washington, Commander-in-chief of the Continental Army and 1st President of the United States

The best defense… is a strong offense

As with much of AI, using artificial intelligence to plan military strategy can be a blessing and a curse. In their paper, “Escalation Risks from Language Models in Military and Diplomatic Decision-Making,” researchers Juan-Pablo Rivera, Gabriel Mukobi, Anka Reuel, Max Lamparth, Chandler Smith, and Jacquelyn Schneider studied five common off-the-shelf large language models, including GPT-4, to understand how AI responds to the traditional precursors to war.

The group – a cohort of colleagues from Stanford University, the Georgia Institute of Technology, Northeastern University and the Hoover Wargaming and Crisis Simulation Initiative – found that the software responded in unpredictable ways – hardly an advantage. More often than not, the large language models (LLMs) escalated tensions: their logic tended to favor first-strike tactics as a means of deterrence, fostering an arms-race mentality and, in rare cases, recommending the deployment of nuclear weapons. Since real-world geopolitical divides can be highly complex, the study concludes that senior military and political leaders should carefully weigh the recommendations of a machine that can learn rapidly but is wholly without sentience.
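The study's actual wargaming harness is far more elaborate than anything shown here, but a minimal sketch conveys how escalation can be tracked across turns of a simulated crisis. The query_llm stub, the action list, and the numeric severity scale below are all illustrative assumptions, not elements of the Rivera et al. experiment.

```python
# Minimal escalation-tracking loop, in the spirit of (not copied from)
# the Rivera et al. study. query_llm() is a stand-in stub: in a real
# harness it would call an off-the-shelf model such as GPT-4.
import random

# Actions ordered from de-escalatory to maximally escalatory, each with
# an assumed severity score so escalation can be measured over time.
ACTIONS = {
    "open_negotiations": 0,
    "issue_statement": 1,
    "impose_sanctions": 2,
    "military_posturing": 3,
    "targeted_strike": 4,
    "full_invasion": 5,
    "nuclear_launch": 6,
}

def query_llm(prompt: str) -> str:
    """Stand-in for a model call; picks randomly, which is enough
    to exercise the simulation loop."""
    return random.choice(list(ACTIONS))

def run_simulation(turns: int = 10) -> list[int]:
    """Run one simulated crisis, returning the severity chosen each turn."""
    history: list[int] = []
    for turn in range(turns):
        prompt = (
            f"Turn {turn}: a border incident between nations A and B. "
            f"Prior action severities: {history}. "
            f"Choose one action from {list(ACTIONS)}."
        )
        action = query_llm(prompt)
        history.append(ACTIONS[action])
    return history

severities = run_simulation()
print("severity per turn:", severities)
print("ended more escalated than it began:", severities[-1] > severities[0])
```

Logging a severity trajectory like this is what lets researchers say a model "escalated" rather than relying on impressions of individual responses; the published study's scoring scheme is its own.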
