AI – The Latest Weapon in Warfare

Soldier with an AI robot dog.

Big changes are coming – in fact, are already here – on the battlefield

In a recent interview with Harvard Medicine News, Kanaka Rajan, Ph.D., an associate professor of neurobiology in the Blavatnik Institute at Harvard Medical School, outlined the ramifications of inserting AI into warfare. Drawing on a study she conducted with research fellows in neurobiology Riley Simmons-Edler and Ryan Badman and MIT PhD student Shayne Longpre, Dr. Rajan explained how AI represents a quantum leap in the way warfare is conducted today, how it will increasingly become part of warfare, and what that implies for those doing the AI research that enables it.

Dr. Kanaka Rajan, associate professor of neurobiology in the Blavatnik Institute at Harvard Medical School.

Why should AI weaponry be an academic’s concern?

Dr. Rajan explained what prompted her and the other academics to consider the issue in the first place:

“We realized that the academic AI research community would not be insulated from the consequences of widespread development of these weapons,” she says. “Militaries often lack sufficient expertise to develop and deploy AI tech without outside advice, so they must draw on the knowledge of academic and industry AI experts. This raises important ethical and practical questions for researchers and administrators at academic institutions, similar to those around any large corporation funding academic research.”

Rajan detailed the three biggest risks when AI and machine learning are integrated into warfare:

▪ Having these AI-enabled weapons makes it easier for adversaries to engage in war to begin with.

▪ Nonmilitary scientific AI research may be censored or co-opted to support the development of these weapons.

▪ Militaries may use AI-powered autonomous technology to reduce or deflect human responsibility in decision-making.

Dr. Rajan went on to explain that a primary deterrent inhibiting nations from starting a war is the inevitability that their soldiers will die. That human toll on a nation’s citizenry often carries political costs for the leaders who choose to go to war. AI-powered weapons can take soldiers out of the equation by keeping them out of harm’s way, weakening that deterrent. As AI-powered weaponry evolves, Rajan points out, that can lead to greater death and destruction and, perhaps, to larger geopolitical problems.

When Dr. Rajan reviewed how academic scientific research, such as in nuclear physics and rocketry, became increasingly entangled in defense operations, she observed that researchers were subjected to travel restrictions and censorship and, in some cases, were required to obtain security clearances to continue their work. Similar restrictions, Dr. Rajan believes, could impede basic AI research that lies outside the realm of military applications.

The more defense departments adopt AI-powered weaponry, the more likely it is that AI knowledge will be walled off behind security clearances, which, Dr. Rajan believes, will inhibit new research.

You’re just thinking about this now?

AI has been on a tear over the past year or two, inserting itself into ever more aspects of our lives. But, says Dr. Rajan, many people, and the governments that represent them, have tended to look at each AI advance in isolation rather than at the overall landscape of AI systems and their capabilities. Furthermore, the companies developing these proprietary systems are guarded about how autonomous their products really are and how much of a human in the loop is required to operate them.

In some instances, Dr. Rajan points out, an AI system makes a complex evaluation of the situation inside a black box, and all the human has to do is press a “Kill” button, so any errors the system made along the way can go undetected. But on the battlefield, seconds, sometimes microseconds, count. And since the ethics of whether or not to push the button are not yet part of any AI weaponry system, that could lead to some horrifying outcomes.
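To make that concern concrete, here is a minimal, purely hypothetical sketch in Python of what such a black-box “confirm button” interface reduces the human role to. Every name here (TargetAssessment, black_box_evaluate, operator_gate) is invented for illustration and describes no real system:

```python
from dataclasses import dataclass

@dataclass
class TargetAssessment:
    """Illustrative output of an opaque decision model (hypothetical)."""
    target_id: str
    recommendation: str  # e.g. "ENGAGE" or "HOLD"
    confidence: float    # model's self-reported confidence, 0.0 to 1.0

def black_box_evaluate(sensor_data: dict) -> TargetAssessment:
    # Stand-in for the opaque model: the operator never sees how the
    # inputs were weighed, only the recommendation returned here.
    return TargetAssessment(target_id="T-042", recommendation="ENGAGE",
                            confidence=0.91)

def operator_gate(assessment: TargetAssessment) -> bool:
    # The "human in the loop" reduced to a single confirm prompt;
    # any error made inside black_box_evaluate() is invisible at this point.
    answer = input(
        f"{assessment.recommendation} target {assessment.target_id} "
        f"(confidence {assessment.confidence:.0%})? [y/N] "
    )
    return answer.strip().lower() == "y"

if __name__ == "__main__":
    assessment = black_box_evaluate({"feed": "example-sensor-data"})
    if operator_gate(assessment):
        print("Operator confirmed the system's recommendation.")
    else:
        print("Operator held fire.")
```

The point of the sketch is structural: everything that matters happens inside black_box_evaluate(), while the operator’s “decision” is compressed into a single keystroke, which is exactly the failure mode Dr. Rajan describes.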

A solution academics can live with

Dr. Rajan explains that today’s universities already have institutional training, oversight, and transparency requirements that help researchers recognize the ethical risks and biases industry funding can introduce. But no such training or oversight exists when it comes to military funding.

A good place for universities to start would be to create discussion seminars, internal regulations, and oversight processes for military-, defense-, and national security agency-funded projects that are similar to those already in place for other industry-funded projects.

Moving forward in the lab and on the battlefield

Dr. Rajan emphasizes that not all AI-enabled weapons are created equal, so, she suggests, the academic community and the governments that engage it in military AI applications need to understand the capabilities of the weapons they produce and the oversight those weapons require. That understanding, in turn, can lead to controls and regulations that guide governments and institutions as they increasingly hand large chunks of battlefield strategy and decision-making to ‘smart’ machines.

#AItoday #AI #warfare #harvardmedicine #AIweaponry

 
