
© 2019 Decisive Artificial Intelligence Inc.

The Moral Question of Killer Weapons

August 9, 2018

How fast and overwhelming does a situation need to be before it is acceptable to put a computer at the trigger?

Note to reader: This article has more questions than answers.


Imagine an autonomous drone, one of hundreds, flying around collecting real-time data 24/7. It suddenly spots activity in a well-known terrorist hotspot and approaches for a closer look. Amid a group of people and vehicles in an unpopulated area, it identifies, through image and behaviour recognition and with 70% certainty, a long-sought high-ranking terrorist leader and his lieutenant. Aware of the constant drone surveillance, they are about to leave the area. There is no time for a human to review the data, deliberate with colleagues, and make a decision.


Should the drone be able to make the decision to strike autonomously?


Now let’s imagine a similar scenario, only this time with 90% target certainty but in a populated area. The same question arises, but is the answer different?


With the vast amounts of detailed data produced by our current and growing surveillance and intelligence networks, human intervention and decision making have quickly become a serious bottleneck to time-sensitive effectiveness. Allowing AI to reach conclusions and make decisions seems the “natural” next step if this technology is to be used efficiently. Even if we restrict the AI to making recommendations, it is still very unlikely that humans will have the speed and capacity to execute commands based on those recommendations in time.


What is a killer weapon? Where is the line between the automation and the robotization of weapons? Does software with partial access to weapons constitute a combat robot? Should software be allowed to pull the trigger, or only do everything up to that point? Which conflict situations call for a non-human decision-making process?


As I write this, the world is on the verge of answering these and similar questions in order to officially define what a killer weapon entails. For centuries, the advancement of military technology has contributed to radical improvements in civilian life. This time, it seems that advances in Artificial Intelligence, a field developing predominantly in universities and private companies, are making their way into the military, and not the other way round. Needless to say, this gradual adoption makes a lot of people very uncomfortable. An international effort to decide what counts as what, and how far we can take this, is currently in full swing.


Humans have always found ways to make things easier for themselves. This is truly fascinating. One could call it creative inventiveness born out of laziness or greed... or both. We call this technology, and AI is no different. Essentially, it is a tool that makes better decisions, faster. Applied to the military, and to combat situations in particular, it is not only a tool that makes better decisions faster; it has the additional bonus of keeping military personnel out of harm's way. It does the fighting for us, in every sense.


Until now, the military could not guarantee that its personnel would not be at risk in a conflict situation, no matter how sound the intelligence. Now that this technology offers the option of such a guarantee, is it morally acceptable for the military to opt not to use it?


Arguably, it is the perfect weapon. After all, an effective force that acts fast and efficiently, without casualties, is the ideal scenario for any strategist. Assuming that the AI is less prone to mistakes than a human operator, which seems to be the case so far, why would we limit this technology in one of the fields that needs it the most?


So where is this fear coming from? Is it simply fear of change, or are there other factors involved? The combination of efficiency and safety argues strongly in favour of its use. But then we hit a big one: if the AI decides whether someone lives or dies, can we morally accept that a computer determines the outcome of a human life?


Morality is built on customs, on established ways of doing things and behaving. If we were to do things differently, and better, in a field that desperately needs it, then why not allow that change to update our morality?


Our moral codes are in constant flux and evolution, shifting and adjusting as acceptance and open-mindedness grow in our society. These particular moral questions are no different.


So let’s open up to questioning the use of this technology on the battlefield, since it has the potential to change the world, and the way we live and die in it, forever.

