Conflict Podcast

What happens when machines can decide who to kill?

It’s the stuff of science fiction: machines that make decisions about who and when to kill. Referred to as “autonomous weapons”, they’re already in use to some degree. But as more sophisticated systems are developed, we wanted to ask an expert in the field whether such systems can comply with international humanitarian law and what it means for humanity to give machines power over human life and death.

One of the many roles that RCRC magazine has played over the years has been to tackle complex, emerging humanitarian issues with in-depth reporting and analysis. By exposing readers to the perspectives of experts, both within and outside the Movement, the magazine has helped inform discussions on issues such as mental health, gender and inclusion, sexual violence, urban conflict and the implications of new technologies.

One example was this cover article, “Programmed for War”, in which the magazine explored the increasing development of autonomous weapons systems: complex machines that are programmed to make life-and-death decisions on the battlefield, potentially taking the place of human soldiers, pilots or remote-control operators.

“An autonomous weapon is one that selects and applies force to targets without human intervention,” explains Neil Davison, an expert in the field who shares his insights in this sixth episode of the RCRC magazine podcast series.

“After it’s been activated, or turned on, it’s essentially the process of sensors and software or machine processes that trigger it to strike,” says Davison, senior scientific and policy adviser for the Arms and Conduct of Hostilities Unit at the ICRC. “One way to think about it is that this sensor and software process forms a target profile. So that could be, let’s say, the shape of a tank or the speed and trajectory of an incoming missile.”

“Certain autonomous weapons like this have been in use in very constrained ways for some years, but at the moment, there’s increasing interest in expanding the range of systems that function in that way, particularly the range of remote-controlled weapons.”

Some examples: airplane-like drones, small quadcopters, ground robots, or even autonomous boats and submarines. “We’re really at a point in time where there could be a big shift in whether these systems are kept under remote control or whether they target autonomously,” notes Davison.

Such autonomous weapons could also function in “swarms” — robots acting together that use software to coordinate amongst themselves to attack an enemy target.

As these systems become more complex — potentially integrating artificial intelligence or machine learning — it becomes more difficult to predict how they will behave in any given situation.

This unpredictability is one of many concerns identified by the ICRC, which has called for a ban on what it refers to as “unpredictable” autonomous weapons, arguing that they are inherently incompatible with international humanitarian law. After all, how can one be sure a highly lethal weapons system will comply with the law if we don’t know how it will function on the battlefield?

Beyond that, there are questions about legal accountability. After all, if a machine is “making decisions” based on information from sensors, complex algorithms and things it “learns” from experience, who is responsible if these systems violate the rules of war? Finally, there is the moral and ethical question about whether machines should ever be given the power to decide who lives and who dies.

These are just some of the questions we pose to Davison in this fascinating and sometimes scary 25-minute podcast. But don’t despair, he also offers some reasons for hope and some actions we can all take to help make a positive difference on this complex and challenging issue.

To learn more, please visit this link:

https://www.icrc.org/en/war-and-law/weapons/autonomous-weapon-systems

Related

‘Wildfire diaries’ and radical change in communications

In this episode, we talk with humanitarian communicator Kathy Mueller, who produced our first magazine podcast series, The Wildfire Diaries, about the massive wildfires in Northern Canada in 2017. We talk about that series, her many international missions, and the big changes in humanitarian communications since she began with the Canadian Red Cross almost 20 years ago.

The power of storytelling

In this episode, we talk about the power of storytelling to inform and inspire. “Storytelling is a fundamental aspect of human communication,” says our guest Prodip, a volunteer and multimedia storyteller for the Bangladesh Red Crescent. “It inspires us to be a hero of our own community.” We also speak with one such community hero, Dalal al-Taji, a longtime volunteer and advocate for the inclusion of people with disabilities in emergency response. “In disasters, persons with disabilities sometimes get forgotten.”

The brave new world of ‘Tech-plomacy’

Digital information technology holds tremendous potential for easing human suffering. But it also poses many risks. In countries impacted by conflict, for example, those risks can be a matter of life and death. Humanitarian ‘tech-plomat’ Philippe Stoll decodes the pluses and minuses of the humanitarian tech revolution.
