Several artificial intelligence experts have been calling for a ban on the use of so-called “killer robots”. Will their calls fall on deaf ears?
What are the dangers of using killer robots for warfare?
Back in August, Elon Musk and over 100 other industry leaders signed a petition urging world leaders and the United Nations to ban lethal autonomous weapons. According to the petition, developing this type of weapon could lead to the third revolution in warfare, the first two being the invention of gunpowder and of nuclear weapons.
“Once developed, [autonomous weapons] will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,” the petition said. “These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.” Experts in Canada and Australia have also called for their own prime ministers to take a stand against such weapons.
The question, of course, is whether these leaders and the UN will listen and take the petition into consideration. The UN is set to conduct the first-ever formal discussions on the issue, but it seems Musk and his cohorts are in for some disappointment. Beginning on 13 November, the Convention on Certain Conventional Weapons (CCW), the UN's disarmament grouping, will spend five days in Geneva discussing the use of fully automated weapons. However, Amandeep Gill, the ambassador chairing the meeting, says that those hoping for a ban will be disappointed.
"It would be very easy to just legislate a ban but I think... rushing ahead in a very complex subject is not wise," Gill said to reporters. "We are just at the starting line."
Gill also says that the five-day discussion will focus on understanding exactly what kind of weapons these killer robots are. The group, which will include civil society representatives and technology companies, is unlikely to discuss an outright ban on the weapons.
Once developed, these robots would be able to operate without human guidance or oversight. They could thus decide whether or not to kill without permission or approval from a military authority.
According to the petition, enabling robots to make such decisions without human intervention is “morally wrong”. The signatories stress that the use of killer robots should be regulated under the CCW, which governs the types and use of permissible weapons. After all, the signatories argue, robots cannot be held accountable for their actions, regardless of whether those actions determine the life or death of human beings.
Neil Davison of the International Committee of the Red Cross (ICRC) also notes that “machines can't apply the law and you can't transfer responsibility for legal decisions to machines”. The ICRC, however, has called for limits on autonomous weapons rather than an outright ban.
It’s likely that the petition's signatories will hold their stance and keep urging world leaders to hammer out a ban. After all, they reason, “[o]nce this Pandora’s box is opened, it will be hard to close.”