
Robot ethics: Thou shalt not kill?

Where wars were once fought hand-to-hand or by soldiers shooting it out, the reality of modern warfare means operators in the US can decide, at the touch of a button, whether people in Pakistan live or die.

Robots could also one day replace humans on the battlefield, but how far away are we from this type of robotic warfare and what are the ethical implications?

Computerworld Australia also spoke to the Department of Defence about its involvement in robotics for military purposes.

The move to free-thinking robots

The US is a significant user of military drones, or unmanned aerial vehicles. Its arsenal of drones has grown from fewer than 50 a decade ago to around 7000, according to a report by the New York Times, with Congress sinking nearly $5 billion into drones in the 2012 budget.

Robotics research is increasingly heading in the direction of autonomy, with a race on to create robots capable of thinking for themselves and making their own decisions.

For example, robots can now play soccer against each other and be completely autonomous during a match, making their own decisions on how to play the game.

This type of autonomy could also be applied to military robots, but instead of a friendly game of soccer, robots could theoretically be programmed to kill, whether at will or targeting specific people.

Robert Sparrow, associate professor, school of philosophical, historical and international studies at Monash University, warns we are delving into Pandora’s box with autonomous military robots and there are major ethical implications.

He argues that military robots make the decision to go to war more likely as it means governments “can achieve their foreign policy goals by sending robots without taking [on] casualties,” he told Computerworld Australia.

“If you thought you were going to [have] 10,000 casualties, for instance, in going into a conflict, then you have to have a pretty good reason to do it. If you think we’ll just send half a dozen robots in and kill a lot of high valued targets, then that calculus looks very different and favours going to war.”

Current technology also means a robot could, theoretically, be armed with weapons and programmed to kill.

Mary-Anne Williams, director, innovation and enterprise research lab at the University of Technology, Sydney, says robots can be trained to kill “with surprising ease”.

“They can aim, shoot and fire. Robots today have sophisticated sensory-perception and [are] able to detect human targets. Once detected, robots can use a wide range of weaponry to kill targets,” she says.

The potential for military robots to be used for morally questionable actions is spurring some academics to call for a code of ethics to be implemented around the use of military robots.

Williams says adhering to a robot code of ethics is currently up to the individuals designing the robots, but argues the rest of society needs to push for a set of guidelines which robots must adhere to.

“Robots can undertake physical action which can impact [on] people and property, so their actions must be governed by laws and a code of ethics. Robot actions can have a significant impact and lead to loss of life. Therefore robots must act in accordance with the law,” she says.

Isaac Asimov foresaw the technological reality we are now living in, detailing three laws of robotics in his 1950 book I, Robot:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.
  3. A robot must protect its own existence, as long as such protection does not conflict with the first two laws.

Sparrow also believes there should be ethical guidelines around the use of military robots.

“I think there is an ethics regardless of whether there’s regulation … We [also] need an arms control regime – we need international regulation of unmanned vehicles,” he says.

“We should [also] be very cautious about allowing weapons to make an autonomous decision about firing, [but] the logic of these weapons systems clearly points towards that.”

However, Sparrow says we are some way off a time when robots could enter enemy territory and fight alongside humans. He cites the “political cost” as the first barrier, along with the likely public dissent against the idea of robots marching in and killing people.

Another barrier is reliability: while facial recognition technology exists, and could in principle be used to develop algorithms attached to a gun to target specific people, Sparrow says such a system would not be reliable.

New drones, new problems

The growing use of drones is creating new headaches and international problems for the governments that use them.

Civilian resentment towards countries that use drones to hurt and kill people is reportedly on the rise. The New York Times reported growing anti-American sentiment in countries such as Pakistan, where the US carries out drone strikes.

The US commonly uses two remotely piloted drones, the Predator and the Reaper, to carry out air strikes. As recently as a few days ago, the New York Times reported the US had carried out a drone strike in regions of Afghanistan and Pakistan, allegedly killing a Pakistani Taliban commander.

An opinion piece in the New York Times also stated drone strikes in Yemen are adding to the growing hatred towards the US and spurring people on to join radical militants.

Ultimately, Sparrow points the finger of responsibility at engineers as a collective, calling on them to take a stand against the use of military robots and to cease taking part in their development.

He believes military funding is distorting robotics development around the world, and while he concedes it can be difficult for engineers to turn down scarce funding in the field, he believes engineers bear partial responsibility when military robots are used to kill people.

“Ethics isn’t just a matter of regulation. It’s a matter of right and wrong and my argument is that engineers should think about whether they really want to be working on these systems that are likely to make future wars more likely,” he says.

“I do think that there is a role for international regulation and that’s going to have to be negotiated at an international level between nations who are likely to build these weapons.”

