Saturday, January 10, 2015

Post 2: Drones and Automated Warfare

The development and implementation of Unmanned Aerial Vehicles (UAVs) in military applications have given operators the ability to engage targets without endangering their own lives.  It is worth noting that in many areas in which UAVs operate, such as Iraq, Afghanistan, and Yemen, the same job could be done by manned aircraft, given the lack of air defense capabilities in those regions.  In either case, the targeted killing of enemy combatants or suspected combatants resides in dangerous legal waters.  Aside from that issue, Noel Sharkey discusses the problems with developing more advanced systems in the same vein as UAVs: systems that may operate autonomously.  Like Sharkey, I argue that autonomous robots able to engage targets with no human input will not exist in the near future, and that even if they did, the dangerous lack of accountability and the inclination toward violence born of moral detachment would make them unusable.
Sharkey identifies the issues with the current methodology of drone strikes.  There are two main issues to be dealt with.  The first is the moral implication of being able to kill while the operator’s only view of the area is through a computer screen thousands of miles away.  This detachment from the violence inherently dehumanizes the people being observed and targeted.  They are not people; they are small figures seen through a camera at 30,000 feet, not unlike the figures in video games that feature the exact same scenarios.  The dehumanization of the people being observed, coupled with the lack of attachment to the killing, can lead to desensitization to, and an inclination toward, the use of violence.  Granted, there are times when violence of action is necessary and justified, but this desensitization may lead to rash judgments about who is a viable target and whether their killing is truly justified.  War is foggy, and even for those on the ground, the decision to engage a perceived combatant can be complicated.  If we agree that those on the ground face many difficult situations in which the use of force is not clear cut, we must also agree that an operator viewing the situation through a computer screen has equal if not greater difficulty discerning the full situational picture on the ground.  Again, this is not to say that drone strikes should never be used, but it does inevitably complicate the justification for the use of force.
The film “Ender’s Game” provides an interesting parallel to this modern problem.  Toward the end of the movie, Ender believes he is participating in a final simulation, which is revealed to have been a real battle in which he sacrifices thousands of lives to destroy the enemy planet.  Once Ender realizes what has actually occurred, he is racked with emotion at having destroyed an entire planet and sacrificed the lives of his own men and women.  While he believes it is only a simulation, he has no problem sacrificing his own forces and destroying an entire planet.  In a simulation the goal is to win, and there are no lives at stake.  The killing is indiscriminate and brutally efficient.  There is no emotional or moral attachment to a simulation, only military efficiency and the will to win.  Although the drones currently employed to eliminate combatants and terrorist leaders do not provide the same level of detachment, and the operators are under no illusion that it is anything but real warfare, they are still undeniably detached from the killing.  This detachment may lead to killing that is not always necessary or justified, because there is no cost to the operator behind the joystick.
The other main issue Sharkey discusses is the lack of oversight of those conducting the killing, and the lack of due process for those who are engaged while only suspected of being insurgents.  There has been no public explanation of how operators, commanders, or policymakers decide who will be engaged by drone strikes.  Again, there are surely cases that are clear cut, but there are many in which the target’s intentions and combatant status are unclear.  In such cases, an operator already desensitized to killing may be uncertain of the target’s status yet have few qualms about killing them, which clearly poses a problem, as civilian deaths should be minimized.
Aside from the issues already outlined, further automation of armed robots only creates new problems, such as discriminating between targets and civilians, recognizing when an individual transforms from one into the other, and judging proportionality.  The technology simply does not exist for a computer to discern the details of a dynamic, changing environment to the extent needed to decide whether or not to kill.  Moreover, the program would need to be able to decide what the cost to civilian life would be and whether that cost is “worth it”.  There is no objective measure of the value of a military target versus collateral damage; that is something hashed out using human intuition and judgment, something an artificial intelligence program may never have.
Even if the technology were available at some point in the future and the technical issues had been sorted out, mistakes would still be inevitable.  The only way a mistake could never happen would be a perfect program that analyzes every situation flawlessly, every time.  But programs are created by humans, who can and do make mistakes; a perfect program will never exist.  Sharkey illuminates this issue: when an armed robot makes a mistake, who is held accountable?  Conceivably, anyone in the chain of command could be held accountable, or perhaps the company that built the robot.  As Sharkey says, if there is no clear answer, “…legally speaking, these robots cannot be used” (p. 381).  The government, and the military especially, rely on accountability at all times.  There must be accountability for your people, for the actions of your subordinates, and for your own actions.  An asset for which no one is accountable when mistakes happen is unreliable and cannot be used.

6 comments:

  1. Hi John,

    Very nicely written!
    I'd like to address a couple of points in your post.

    I agree with you that robots with human ethical and moral capabilities will not exist in the near future. It is difficult, living in 2015, to imagine a scenario in which this concept can become reality.

    Next, you write that one of the problems with using drones is that the people who control them become detached and may take actions that result in unnecessary killing, with "no cost to the operator behind the joystick". Do you mean that this person would not be held accountable for their actions? If so, would the people who authorized him/her to take that action be held accountable? Where would it stop? How is the lack of accountability with drones any different from, say, the lack of accountability regarding past U.S. military engagements? What of the war crimes committed by U.S. military and intelligence personnel in places like Abu Ghraib, and at the numerous black sites used during the War on Terror? The list goes on. The truth is, only a very small percentage of those who authorized or carried out these atrocities were brought to justice, and that thought is indeed quite disturbing.

    In my opinion, the decision to use drones is not a "good" choice. Rather, it is the preferable choice between an evil and a less costly evil. The United States Government is tasked with protecting the interests of the United States and its citizens. It seems only logical that it is in the national security interest of the United States to minimize the damage to its assets (U.S. soldiers) if at all possible. If some sort of attack on a terrorist is inevitable, the question that has to be asked is: do we place the lives of U.S. soldiers at risk if that is not necessary and we can achieve the goal through a drone strike?

    Replies
    1. I agree with you, Alisa, that it is very disturbing to think that the people behind this type of warfare are seldom brought to justice or held accountable. While putting fewer U.S. soldiers at risk is a good thing, I think it will be hard to take credit for something such as killing someone on a kill list when autonomous warfare did it. In other words, if we aren't going to hold people accountable when things go wrong, I don't think those people should get credit when things go right.

      John - I agree that this autonomous warfare is much like the violent video games that kids and adolescents are playing today. I also agree that because it is a simulation, and not real, it does not feel like as big a deal (as in Ender's Game). However, do you think that because of the content of these video games, the people who play them tend to form the idea that autonomous warfare is okay?

  2. Alisa - What I meant by the section you quoted was that the operator piloting the drone is not under any threat while engaging a target, unlike soldiers on the ground. In Ender's Game, not only do they use drones, but Ender believes the entire thing is a simulation. Because Ender does not experience a threat to himself, and believes he's commanding a virtual fleet against a virtual enemy, he has no problem destroying the entire enemy planet and sacrificing some of his own forces. I attempted to draw a parallel between the movie and the mindset of drone operators who can kill at no risk to themselves. I believe this lack of attachment to the situation on the ground, and lack of threat to themselves, may make drone operators more likely to engage targets, whereas on the ground they might have sought an alternate solution, or more intelligence, before deciding to engage. Where I think the lack of accountability is an issue, as Sharkey points out, is with totally autonomous robots killing combatants. An autonomous robot cannot be punished or reprimanded, so when such a robot makes a mistake, who is held responsible? There is no clear answer, which I think makes such robots unusable, if they were ever to exist.

    Jessica - I definitely would not say that playing video games makes people more prone to violence or to accepting violence as normal. However, I would say that if drone pilots see similarities between their job and video games they may have played, it may desensitize them to what they're doing. But that is something that would need to be studied to see if it has any merit.

    Replies
    1. John,

      I believe there's a key difference in the comparison you make between Ender's Game and our reality. In the movie, the drone operators, including Ender, did not know that they were controlling real forces; they believed it was a simulation. In our reality, drone operators are quite aware that they are dealing with the lives of real people.
      Yes, they may become detached from the situation, but it would be difficult to say that these people are 100% convinced that what they're doing is a game.
      I agree with you that if robots with moral and ethical capabilities were to exist, they would be unusable. What of the drones that are in use today? Are there circumstances when they should(n't) be used?

  3. Alisa - I wholeheartedly agree that drone pilots don't believe they are playing a simulation. I do, however, think there are similarities between what Ender experienced and what drone pilots may experience. The question you pose is really situation-dependent. Surely there are situations where a drone operator spots a clear, active threat and would be justified in firing on it. I think where it gets tricky is when there is only a suspected combatant. That's where Sharkey points out the legal issue: in the absence of solid evidence, the drone operator uses their judgment, essentially acting as judge, jury, and sometimes executioner. Innocent people surely have died in such situations. Drone strikes are further complicated by the risk of collateral damage, as are many other types of military operations. As with those operations, that risk is weighed against the military value of the objective.

  4. John,

    I see what you are saying, and yes, this is problematic, though I'm not really sure how it is any different when soldiers are out on the battlefield, or when a pilot drops a bomb and flies away. They too have to make judgment calls, which are not always correct or justified.
