Friday, October 9, 2015

Killer Robots: Allowing Logic To Override Ethics

Artificial Intelligence


Inhuman Killing Machines: “The Campaign to Stop Killer Robots is a global coalition comprised of 56 international, regional, and national non-governmental organizations (NGOs) in 25 countries that have endorsed the call for a preemptive ban on fully autonomous weapons (as of 28 September 2015).”
Photo Credit & Source: Stop Killer Robots

An article by Robert Newman in Philosophy Now examines the question of whether robots can be ethical. This is no abstract question, but an important one in the face of news that many governments are developing lethal autonomous weapon systems, so-called killer robots, for military use. Both Stephen Hawking and Elon Musk, among thousands of other scientists and artificial intelligence researchers, have added their names to an open letter advocating that the U.N. ban such weapons, a sensible idea.

In “Can Robots Be Ethical?” (Oct/Nov 2015), Newman writes:
Should the driverless vehicles being developed by Apple, Google and Daimler be programmed to mount the pavement to avoid a head-on collision? Should they be programmed to swerve to hit one person in order to avoid hitting two? Two instead of four? Four instead of a lorry full of hazardous chemicals? Driverless cars programmed to select between these options would be one example of what the science journal Nature has taken to calling ‘ethical robots’. Another is the next generation of weapons. If drones weren’t bad enough, the US Defence Department is developing Lethal Autonomous Weapons Systems (LAWS). These select their own kill list using a set of algorithms, and need no human intervention, at however remote a distance. Autonomous drones in development include tiny rotorcraft smaller than a table-tennis ball, which will be able to float through homes, shops and offices to deliver a puncture to the cranium.
In July 2015, Nature published an article, ‘The Robot’s Dilemma’, which claimed that computer scientists “have written a logic program that can successfully make a decision… which takes into account whether the harm caused is the intended result of the action or simply necessary to it.” (I find the word ‘successfully’ chilling here; but not as chilling as ‘simply necessary’.) One of the scientists behind the ‘successful’ program argues that human ethical choices are made in a similar way: “Logic is how we… come up with our ethical choices.” But this can scarcely be true. To argue that logic is how we make our ethical decisions is to appeal to what American philosopher Hilary Putnam describes as “the comfortable eighteenth century assumption that all intelligent and well-informed people who mastered the art of thinking about human actions and problems impartially would feel the appropriate ‘sentiments’ of approval and disapproval in the same circumstances unless there was something wrong with their personal constitution” (The Collapse of the Fact/Value Dichotomy and Other Essays, 2002).
However, for good or ill, ethical choices often fly in the face of logic. They may come from emotion, natural cussedness, vague inkling, gut instinct, or even imagination. For instance, I am marching through North Carolina with the Union Army, utterly logically convinced that only military victory over the Confederacy will abolish the hateful institution of slavery. But when I see the face of the enemy – a scrawny, shoeless seventeen-year-old – I throw away my gun and run sobbing from the battlefield. This is an ethical decision, resulting in decisive action: only it isn’t made in cold blood, and it goes against the logic of my position.
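To make concrete just how mechanical the procedure Newman describes would be, here is a minimal sketch, in Python, of the kind of rule-based “ethical” test the quoted Nature piece gestures at: a crude doctrine-of-double-effect check bolted onto the body-count arithmetic of the driverless-car dilemma. This is emphatically not the program the article reports on; every name, field, and rule below is an illustrative assumption of mine.

# Hypothetical sketch of a rule-based "ethical" decision procedure in the
# spirit of the intended-vs-necessary-harm test described in the quoted
# Nature piece. NOT the actual program the article reports on; all names,
# fields, and rules here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    goal_is_good: bool    # does the action aim at a good end?
    harm_intended: bool   # is the harm itself the intended result,
                          # or "simply necessary" to the action?
    harm_caused: int      # people harmed if the action is taken
    harm_averted: int     # people spared if the action is taken

def permissible(action: Action) -> bool:
    """Crude double-effect test: harm may be foreseen but never intended,
    and the good achieved must outweigh the harm done."""
    if not action.goal_is_good:
        return False
    if action.harm_intended:      # harm used as a means fails the test
        return False
    return action.harm_averted > action.harm_caused

# The driverless-car dilemma from the quote: swerve into one person
# in order to avoid hitting two.
swerve = Action("swerve", goal_is_good=True, harm_intended=False,
                harm_caused=1, harm_averted=2)
print(permissible(swerve))  # True

That tidy True is the point, and it is Newman's point too: a test like this cheerfully endorses swerving into one person to spare two, yet it has no way even to represent the deserting soldier's choice in the passage above, which fails every clause of the function and is an ethical decision all the same.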
The answer is clear, both in Newman’s essay and in my own human mind: such a proposition is not only absurd, it is morally repugnant and irresponsible. Logic alone does not always lead to moral choices. Human decision-making is not done in binary code or by a set of algorithms, since humans are not machines, and to argue otherwise says much about the arguer’s failure to understand what characterizes humans. Human brains are not an accumulation of bits and bytes, as digital machines are. Then there are the matters of conscience and soul. Viewed rationally, such comparisons are fanciful rather than factual, if not reductionist and an insult to humanity. The argument for killer robots is indefensible.

Then there are the why questions: Why would some humans want to program machines to act “ethically” in what is, in any case, an inhumane enterprise? Why the need to outsource ethics and morality now?

The most plausible answer is that such persons want to take humans entirely out of the decision-making process, and thus evade responsibility for crimes against humanity during wars and other military actions. Is that what makes this technology “attractive” to military planners? Does it offer a kind of freedom that current international law does not? Does using the word “logic” make it more palatable to the public? Thankfully, young people are not buying this argument, having read enough dystopian fiction to understand its implications for humanity.

Let’s carry this argument further. The question then becomes: can robots be charged with war crimes? Perhaps. Yet this serves no real human purpose. The better answer is “no,” but the inventors of such robots or automatons can, and international laws on the use of robots in battle ought to be drafted and codified before any such autonomous killing systems are ever put into use. My hope is that such laws would permanently discourage their deployment. Or, even better, their development.

*********************
For more, go to [NewPhil]
