by Maurizio Simoncelli, Vice President and Co-Founder of the International Research Institute Disarmament Archive (IRIAD)
Technology is making great strides in every sector. In particular, the applications of Artificial Intelligence (AI) are opening up new perspectives, including military ones. They are now leading to the creation of LAWS (Lethal Autonomous Weapons Systems), i.e., autonomous weapon systems capable of identifying, selecting and attacking a target without human intervention. The mass media call them “killer robots,” a catchy term reminiscent of Hollywood films such as RoboCop or Terminator. We are not at that level yet, but many projects are under way, at least experimentally, in various states, including the USA, Russia, China, France, the United Kingdom and Israel. In particular, the governments of the United States, Russia and China continue to invest heavily in LAWS development, involving defense departments, arms manufacturers and universities.
What are the advantages of these weapons? They have very short reaction times and high precision, they do not suffer emotional stress in combat, they minimize the risk of hitting civilians by mistake (the so-called “collateral damage”), and they can be set to activate only in response to a clear and specific act of aggression. Last but not least, they spare the armed forces that deploy them the loss of human lives: a war of killer robots would be a clash between sophisticated machines produced by industry (if both sides field them) or between humans and machines.
Are these science-fiction scenarios? Not really, given that the economic impact of AI (civil and military) is estimated at around 13 trillion dollars by 2030. Work is already well advanced on close- or short-range weapon systems, unmanned combat aircraft, precision-guided munitions, unmanned ground vehicles and unmanned marine vehicles, and loitering munitions that fly autonomously until they find a target and then attack it. Human military control becomes practically non-existent: the autonomous weapon does everything by itself.
As early as November 2016, JC Rossi, in a study entitled The War that Will Come: Autonomous Weapons, spoke of “a technology, in essence, which, by removing the human being from the programming and management of a machine, replaces him with an artificial intelligence (AI) that avoids those risks which human impulses, on the other hand, do not rule out.”
These autonomous weapons can in fact operate at different levels of human control: with prevailing human control (human in the loop), with human control limited to the initial phases only (human on the loop), or with no human control at all, in full autonomy (human out of the loop).
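To make the distinction concrete, here is a minimal illustrative sketch in Python; the names `ControlMode` and `may_engage` are hypothetical and model only the decision gate, not any real weapon system:

```python
from enum import Enum

class ControlMode(Enum):
    """Degrees of human control over an autonomous weapon system."""
    HUMAN_IN_THE_LOOP = "in"    # a human must approve every engagement
    HUMAN_ON_THE_LOOP = "on"    # a human supervises and may veto
    HUMAN_OUT_OF_THE_LOOP = "out"  # no human involvement after activation

def may_engage(mode: ControlMode,
               human_approved: bool = False,
               human_vetoed: bool = False) -> bool:
    """Decide whether the system may engage a target it has identified,
    depending on the configured control mode."""
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        # Engagement requires explicit, positive human authorization.
        return human_approved
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        # Engagement proceeds unless a supervising human intervenes in time.
        return not human_vetoed
    # HUMAN_OUT_OF_THE_LOOP: the machine decides entirely on its own;
    # this is the case that raises the concerns discussed below.
    return True

# A human-in-the-loop system never fires without explicit approval:
assert may_engage(ControlMode.HUMAN_IN_THE_LOOP) is False
# A human-out-of-the-loop system fires regardless of any human input:
assert may_engage(ControlMode.HUMAN_OUT_OF_THE_LOOP) is True
```

In the first two modes a human can still stop the engagement; in the third, the outcome depends entirely on the machine's own classification of the target, which is precisely where the dangers discussed below arise.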
It is here that numerous scientists and experts have identified enormous dangers: the loss of human control over the machine can be extremely damaging. How can a machine, however sophisticated, understand that an opponent wants to surrender, or that he is too badly injured even to express that will? Or that the target attacked is actually a civilian, erroneously present in the area? Artificial Intelligence can make mistakes or be misled by the opponent, while the self-learning process (machine learning), based on the interaction of different algorithms, can lead to unpredictable final results. How would such a system actually behave on a real battlefield?
While Artificial Intelligence can be highly useful in the civilian field, in the military field it leaves many questions unanswered. If a civilian, by accident or out of necessity, approaches an area manned by a LAWS, will the machine be able to distinguish him and hold fire? Could a LAWS understand the motives of a plane forced into an emergency landing, or of a person fleeing from a threat? Will the program be able to switch into a recognition mode? If the LAWS gets it wrong, is the fault that of the programmer, the defective component, the maintenance team, the company that installed it, those who bought and selected it without due caution, or the politicians who wanted it? The chain of responsibility becomes ever more unstable and opaque. Faced with a protest demonstration in which objects are thrown into the air, how would a LAWS react? These questions will remain unanswered until one of these cases is tested directly in the field; the price of error, however, will be broken human lives. Time after time, engineers, programmers and technicians will be able to correct errors and improve the autonomous weapon, but the cost will not be negligible.
As early as 2012, numerous robotics researchers signed an appeal for the prohibition of LAWS, and in 2013 the Campaign to Stop Killer Robots was launched, gradually sensitizing governments and international public opinion to the issue. As a result, a Group of Governmental Experts was established in 2017; in 2019 it reached agreement on a report containing 11 guiding principles, endorsed by the participating states at the CCW (Convention on Certain Conventional Weapons) Conference in Geneva in November 2019.
To date, 30 countries have called for a ban on LAWS, but the debate is still ongoing and a possible global agreement remains a long way off.
A very recent study by the Disarmament Archive International Research Institute, entitled LAWS Lethal Autonomous Weapons Systems. The Issue of Lethal Autonomous Weapons and the Possible Italian and European Actions for an International Agreement on the Matter, highlights not only the state of R&D and the legal issues in the sector, but also the current unreliability of these technologies, noting that AI systems are vulnerable to the same cyber attacks that exploit the weaknesses of ordinary computer systems. It therefore appears necessary that, before these systems are used operationally outside the laboratory, even in a military context, their ethical, social and international-law implications be properly assessed.