Scholar analyzes potential strategies to regulate wartime use of artificial intelligence.
No longer confined to the realm of science fiction, militarized artificial intelligence (AI) is making its way onto the battlefield. How should international regulators respond?
In a recent paper, Mark Klamberg, a professor at Stockholm University, examines three methods of regulating the use of AI in military operations at the international level. Klamberg suggests that regulators should step up their oversight by using the current international humanitarian law framework, adding AI-specific regulations to existing rules, or developing a new system of regulation altogether.
Militarized AI is not new. Under the Obama Administration, the United States expanded its use of drones. Drones are an example of narrow AI, meaning AI designed to perform a single task.
Sophisticated narrow AI that supports human decision-making has become increasingly prevalent, as seen in the war in Ukraine. The Ukrainian armed forces developed an Android application to reduce the time spent targeting artillery. Its algorithm directs human operators to fire at opponents.
But general AI, which performs tasks as well as or better than humans, stands to upend how war is waged. Klamberg argues that the combination of narrow and general AI will increase the speed of warfare and enable quick, efficient decision-making within military organizations.
Klamberg explains that current international regulatory efforts have been limited and focus on lethal autonomous weapons systems, which the U.S. Department of Defense defines as weapons systems that “select and engage targets” without further human intervention.
But Klamberg suggests that it is misleading to use the term “autonomous” in the context of these weapons systems.
Lethal autonomous weapons systems still involve humans, whether through direct control, supervision, or development of the system, so they may still comport with international humanitarian law principles. As the International Committee of the Red Cross explains, the person who has “meaningful human control” over the system is accountable for that weapon.
Because mechanisms to regulate lethal autonomous weapons systems exist, Klamberg instead emphasizes the regulation of AI in military command and control systems. These systems organize the personnel, communication, and coordination used to accomplish a given military goal.
AI in this context offers many benefits, including improving the accuracy, speed, and scale of decision-making in complex environments in a cost-effective manner.
Klamberg explains that this use of AI may lift the “fog of war,” the uncertainty that results from inefficient communication and incomplete information in a military operation. AI technology could connect soldiers and commanders, promoting efficient communication down to the lowest tiers of command.
The use of AI in military command and control systems, however, also poses challenges that regulators should address. Klamberg identifies concerns that the use of AI is more likely to endanger civilians, marks a loss of humanity, and may facilitate biased decision-making. AI may also widen the power asymmetry between nations, creating the potential for riskless warfare in which one side is so advanced that it faces little risk of failure.
Furthermore, the incorporation of AI into military command and control systems complicates how responsibility is allocated, Klamberg explains. Specifically, who is responsible for AI’s bad decisions? The software programmer, military commander, front-line operator, or even the political leader?
Klamberg identifies a concern that military personnel may be held responsible for the decisions of advanced autonomous systems even though they lack meaningful control over the system. Instead of pinning blame on low-level operators, Klamberg suggests that those overseeing any disciplinary process focus on supervisors and those with more control over the system.
Given these risks, challengers to militarized AI, also called “abolitionists,” warn against using the technology altogether.
The complexity and rapid development of these technologies make their regulation at the international level difficult. But the task is worthwhile, given Klamberg’s premise that warring nations do not have an unlimited right to injure their enemy.
Klamberg outlines three methods of regulating militarized AI.
First, Klamberg suggests applying existing rules and principles of international humanitarian law to militarized AI. International humanitarian law rests on the moral principles of distinction, proportionality, and precaution.
The principle of distinction requires that warring actors distinguish between civilians and combatants. Proportionality entails weighing the cost of harm to civilians against the military advantage of an attack, and the principle of precaution includes taking other measures before an attack to mitigate its adverse effects.
Klamberg claims that these three principles can be programmed into militarized AI and would serve as a regulatory check on the technology. For instance, the principles of distinction and proportionality could be reduced to a formulaic calculation that enables the AI to separate civilians from combatants before executing any action.
To incorporate these principles into AI, Klamberg proposes involving human oversight in AI decisions. Klamberg explains that continuous assessment of the formulas programmed into the AI would serve as reassurance that the AI is acting according to accepted moral principles.
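Klamberg’s paper contains no code, but the idea of reducing distinction and proportionality to a formula subject to continuous human oversight can be sketched. The Python snippet below is purely illustrative: the StrikeAssessment fields, the thresholds, and the recommend function are hypothetical assumptions for exposition, not anything drawn from the paper or from any real targeting system.

```python
from dataclasses import dataclass

@dataclass
class StrikeAssessment:
    """Hypothetical inputs a targeting model might estimate for one proposed strike."""
    p_combatant: float             # model confidence the target is a combatant (0.0 to 1.0)
    expected_civilian_harm: float  # estimated incidental harm to civilians (arbitrary units)
    military_advantage: float      # estimated military value of the strike (same units)

# Illustrative thresholds; under Klamberg's oversight model, these formulas would be
# continuously reassessed by humans rather than fixed once in code.
DISTINCTION_THRESHOLD = 0.95
PROPORTIONALITY_RATIO = 1.0

def ihl_gate(a: StrikeAssessment) -> bool:
    """Distinction and proportionality reduced to a formulaic check."""
    distinct = a.p_combatant >= DISTINCTION_THRESHOLD
    proportionate = a.expected_civilian_harm <= PROPORTIONALITY_RATIO * a.military_advantage
    return distinct and proportionate

def recommend(a: StrikeAssessment) -> str:
    # The formula never authorizes force on its own: a passing check is only
    # a recommendation that is routed to a human operator for review.
    if ihl_gate(a):
        return "refer to human operator for authorization"
    return "abort: fails distinction or proportionality"

print(recommend(StrikeAssessment(p_combatant=0.99,
                                 expected_civilian_harm=0.2,
                                 military_advantage=5.0)))
```

Even in this toy form, the sketch makes Klamberg’s point concrete: the moral principles become explicit, inspectable conditions that humans can audit, rather than opaque behavior buried inside the system.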
Second, Klamberg proposes AI-specific regulation that adds to existing rules, such as the military’s current rules of engagement. These rules are the internal policies that delineate the circumstances under which a military organization will engage in combat.
Klamberg proposes that militarized AI could be constrained through programming that incorporates the rules of engagement. Such programming would either restrict or permit the AI to deploy its weapons consistent with those rules. Klamberg suggests that this ethical programming could itself become part of the rules of engagement.
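Again, the paper offers no implementation, but one can imagine rules of engagement encoded as explicit, auditable conditions that default to restriction. The sketch below is a hypothetical simplification: the Zone categories, the EngagementRequest fields, and the rules themselves are illustrative assumptions, not real doctrine.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Zone(Enum):
    FRIENDLY = auto()
    CONTESTED = auto()
    PROTECTED = auto()  # e.g., hospitals or cultural sites where engagement is barred

@dataclass(frozen=True)
class EngagementRequest:
    """Hypothetical context evaluated before any weapon release."""
    zone: Zone
    hostile_act_confirmed: bool
    human_authorization: bool

def roe_permits(request: EngagementRequest) -> bool:
    """Simplified rules of engagement as coded conditions; restriction is the default."""
    if request.zone is Zone.PROTECTED:
        return False  # rule: never engage in protected areas
    if not request.hostile_act_confirmed:
        return False  # rule: a hostile act or intent must be confirmed first
    if not request.human_authorization:
        return False  # rule: a human must authorize every weapon release
    return True       # every applicable rule permits engagement
```

Encoding each rule as a separate, named condition would let auditors trace exactly which rule blocked or permitted a given action, which is one way such programming could, as Klamberg suggests, become part of the rules of engagement themselves.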
Finally, Klamberg imagines possible new frameworks for governing militarized AI.
One possibility involves implementing an arms control or trade regime to prevent an AI arms race, similar to the regimes used to curb the nuclear arms race. As international agreements, arms control and trade regimes prohibit the production and sale of certain weapons.
Some of the leading robotics companies have pledged not to weaponize their creations, but Klamberg suggests that these pledges have left companies working with the U.S. Department of Defense noticeable wiggle room. Instead of relying on voluntary pledges, Klamberg calls for the creation of a binding international treaty among countries.
Another possibility involves new regulations governing the methods of AI warfare, developed by international bodies consistent with the Geneva Conventions. But these regulations may be too slow to be effective and may not keep pace with AI’s development, Klamberg cautions.
Whatever step is taken next, Klamberg suggests it should support an international regulatory framework that adapts to the future challenges of militarized AI.