
Much like mathematics, AI loves rules and stability. So experts were alarmed when governments sought to push AI outside its comfort zone. AI’s applications have shifted from simplistic, rules-based models to algorithms increasingly relied upon to make decisions at an expert level of analysis. Now, these methodologies are being implemented in AI-integrated warfare, a change that will reshape the geopolitical landscape by altering the very fundamentals of war.
The United States’ recent $800 million investment across four U.S.-based AI companies developing AI-integrated military technologies reflects the global trend towards computer-controlled warfare. The militarization of AI is now a policy initiative pursued by over 75 states, with the most powerful stakeholders having already utilized militarized AI to some degree.
The U.S.’s recent investment reflects an effort to “address national security challenges” while “maintain[ing] a strategic advantage,” objectives the U.S. began tackling in the late 2010s, when the military first began seriously experimenting with AI. Part of AI’s appeal is its integration into Autonomous Weapons Systems (A.W.S.), such as loitering munitions and unmanned aerial and submarine vehicles, developments that decrease soldier casualties. Though limiting soldier casualties is a net positive, it also creates the false perception that AI inherently makes war ethical.
U.S.-endorsed militarized AI has already been deployed in Iraq, Syria, and Yemen, among other places, to locate missile launchers and identify targets. Facial Recognition Technologies (F.R.T.s) are the most popular mechanism for detecting and targeting combatants, yet F.R.T.s have repeatedly proven unreliable, often unable to distinguish civilians from perceived threats.
Concerns about ethics have grown as officials incrementally find ways to take humans out of the loop, adapting AI’s function from a mere tool into an independent decision-maker. Increased demands to fund unmanned A.W.S. and interest in implementing Lethal Autonomous Weapons Systems (L.A.W.S.) reflect this shift. Despite consensus that these developments ought to maintain a high degree of meaningful human control, military-AI investments signal that the U.S. government will move to implement L.A.W.S. in the near future. Experts warn in particular about AI’s capacity to replace human judgment in decisions to take highly destructive and lethal action.
These actions have led some to question whether AI-military developments are moving too quickly, without pauses to adjust for the technology’s negative impacts. Others argue that the risk of adversarial powers overtaking the U.S. in the “AI race” outweighs the on-the-ground ramifications of accelerated militarized-AI development.
AI’s adaptation for military purposes faces both external and internal challenges.
Externally, policymakers manufacture consent for entering more conflicts on the premise that AI-military technology minimizes soldier casualties. Although combatant casualties may decrease on a per-conflict basis, the additional conflicts this premise justifies can produce more deaths in aggregate than a single conflict would have without militarized AI. One research study found that those tasked with overseeing semi-autonomous military weapons often lacked proper AI training; these individuals were also more prone to support the militarization of AI because of political biases, such as a belief in the need to win the AI race. Lastly, AI-military technology on the ground has the capacity to increase hybrid warfare and terrorism, as less technologically developed adversaries seek alternative methods to counteract increasingly advanced U.S. tactics.
Internally, AI has historically excelled in stable environments governed by rules-based frameworks, and its outputs have served as tools that experts interpret before making decisions. Now, with militarized AI, systems are being relied upon to make expert-level decisions in unstable environments, a shift many experts do not trust AI to handle accurately. Moreover, AI’s inability to make moral judgments is another red flag for the implementation of L.A.W.S. That moral deficit, coupled with F.R.T.s’ unreliability in distinguishing civilians from combatants, makes it difficult for these military advancements to comply with the Law of Armed Conflict. AI’s militarization may also increase geopolitical instability, as AI has demonstrated that it will almost always sacrifice stability for goal attainment. This pattern exemplifies not only a willingness but an eagerness to escalate conflicts, pushing decision-makers to engage and further fueling an AI arms race. Lastly, Deep Learning (D.L.) and Machine Learning (M.L.) models make it increasingly difficult to enforce accountability for lethal decisions. Because these models continually adjust their own internal parameters as they learn, extending beyond the boundaries within which they were originally coded, it is difficult to trace how any given decision was rendered.
Amid the harms and challenges of militarized AI, there exists potential for positive outcomes, as many researchers believe AI advancements have the capacity to promote peace. In stable environments, Large Language Models (L.L.M.s) can synthesize large pools of data to aid in complex game-theory and negotiation simulations, potentially diminishing the need for on-the-ground action. This methodology utilizes AI merely as a tool that aids humans in making a strategic decision, rather than as the vehicle carrying out the decision, much less autonomously.
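To make that distinction concrete, here is a minimal, purely hypothetical sketch of AI-as-tool in the game-theoretic sense: a short Python script that evaluates a toy two-player “escalate vs. negotiate” payoff matrix and reports which outcomes are stable. The payoff numbers are invented for illustration and do not come from any deployed system; the point is that the output is data and projections for human analysts to weigh, not an action.

```python
# Toy decision-support sketch: find pure-strategy Nash equilibria in a
# 2x2 "escalate vs. negotiate" game. Payoffs are illustrative only.
import itertools

STRATEGIES = ["negotiate", "escalate"]
# PAYOFFS[(row_strategy, col_strategy)] = (row_payoff, col_payoff)
PAYOFFS = {
    ("negotiate", "negotiate"): (3, 3),
    ("negotiate", "escalate"):  (0, 4),
    ("escalate",  "negotiate"): (4, 0),
    ("escalate",  "escalate"):  (1, 1),
}

def is_nash(row, col):
    """A profile is stable (a Nash equilibrium) if neither player
    gains by unilaterally switching strategies."""
    row_payoff, col_payoff = PAYOFFS[(row, col)]
    best_row = all(PAYOFFS[(alt, col)][0] <= row_payoff for alt in STRATEGIES)
    best_col = all(PAYOFFS[(row, alt)][1] <= col_payoff for alt in STRATEGIES)
    return best_row and best_col

for row, col in itertools.product(STRATEGIES, STRATEGIES):
    if is_nash(row, col):
        print(f"Stable outcome: ({row}, {col}), payoffs {PAYOFFS[(row, col)]}")
```

With these illustrative payoffs, the only stable outcome is mutual escalation, even though mutual negotiation would leave both sides better off, a Prisoner’s-Dilemma dynamic that echoes the escalation concerns discussed above. Crucially, the script merely surfaces that projection; deciding what to do with it remains a human judgment.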
Despite demands by some advocates, there is no going back to the pre-AI world of warfare; the U.S.’s continued funneling of investments into the AI-military sector makes that clear. Rather than attempting to reverse course, policymakers should lobby for AI to be used as a tool, one whose output is data and projections rather than action and destruction. The latter hinges on ethics, moral clarity, and expertise, skills that humans, though imperfect, are better suited to exercise than computerized systems. Furthermore, to ensure accountability for AI’s militarization, public-private partnerships ought to be expanded and refined to improve chain-of-command traceability, specifically in scenarios where non-combatants are wrongfully targeted. Lastly, legal frameworks governing AI in the military domain must be codified and enforced. Without the U.S.’s adherence to and implementation of these recommendations, concerns regarding the ethics of AI’s militarization will only grow.
The Zeitgeist aims to publish ideas worth discussing. The views presented are solely those of the writer and do not necessarily reflect the views of the editorial board.
