The use of artificial intelligence in warfare has increased rapidly in recent years as the underlying technology has matured. Some experts warn that in the near future, killer robots will outnumber human soldiers on the battlefield.
Autonomous weapon systems are robots armed with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are investing heavily in autonomous weapons research and development, according to Newsweek. One of the main motivations is to reduce casualties and human costs: recruiting a soldier costs $15,000 per year, and treating an injured US soldier costs about $2 million a year. Autonomous weapon systems are also relatively inexpensive compared to other weapons, and more effective at hitting intended targets while avoiding unintended casualties. During the conflict in Nagorno-Karabakh in 2020, Azerbaijan’s armed drones gave it a decisive advantage over Armenia’s military. That conflict is one example of how unmanned attack drones are transforming battlefields around the world. Yet while autonomous weapon systems may in many cases minimize civilian casualties in war, a UN report revealed that killer robots may have killed an unarmed civilian for the first time ever during the Libyan civil war in 2020, showing that the use of autonomous weapon systems must be questioned.
Attempts to legislate
In mid-December last year in Geneva, the United Nations Convention on Certain Conventional Weapons tried to reach a consensus on restricting autonomous weapons, but without success. The convention has previously banned landmines, booby traps and incendiary weapons. As the convention’s review conference takes place only every five years, a number of experts warn about the potential consequences and escalation of the use of autonomous weapons in warfare.
In an article in The Conversation, human rights researcher James Dawes discusses how autonomous weapons could fall into the hands of actors outside government control, including international and domestic terrorists. Because killer robots are cheap and almost impossible to contain, they could spread through widespread sales. Killer robots also create a legal dilemma about criminal responsibility. As Dawes writes: “But how can autonomous weapons be held accountable? Who is to blame for a robot that commits war crimes? Who would be put on trial? The weapon? The soldier? The soldier’s commanders?” Right now, there are no legal regulations requiring meaningful human control of autonomous weapons, which means there will be war crimes with no war criminals to hold accountable. So far, the US is the only nation with ethical standards for AI weapons.
The present and future
As mentioned, militaries around the world are investing heavily in autonomous weapons. The U.S. alone budgeted US$18 billion for autonomous weapons between 2016 and 2020. According to Forbes, the Russian military is developing different types of unmanned systems and autonomous drones, and there have been reports of Russia’s use of killer robots in Ukraine since its invasion of the country. The LA Times reports that the Biden administration plans to send 100 “loitering missiles”, sometimes called suicide drones, to the Ukrainian military.
The nature of war has changed significantly since the end of World War II, and the more extensive use of autonomous weapons will probably usher in a new era of conflicts and methods of war in which technology is prominent. The accountability and ethics of autonomous weapons will thus change politics and law forever. It is important that politicians around the world act quickly to regulate this anarchic system; otherwise, more civilians will become victims of war robots.
Staff Writer at Utblick since November 2021