/ News

04.09.2012

Robot ethics: Thou shalt not kill?

Where wars were once fought hand-to-hand or in shoot-outs between soldiers, the reality of war today means operators in the US can decide whether people in Pakistan live or die at the touch of a button.

Robots could also one day replace humans on the battlefield, but how far away are we from this type of robotic warfare and what are the ethical implications?

Computerworld Australia also spoke to the Department of Defence about its involvement in robotics for military purposes.

The move to free-thinking robots

The US is a significant user of military drones, or unmanned aerial vehicles. Its arsenal of drones has increased from fewer than 50 a decade ago to around 7000, according to a report by the New York Times, with Congress sinking nearly $5 billion into drones in the 2012 budget.

Robotics research is increasingly heading in the direction of autonomy, with a race on to create robots capable of thinking for themselves and making their own decisions.

For example, robots can now play soccer against each other and be completely autonomous during a match, making their own decisions on how to play the game.

This type of autonomy could also be applied to military robots, but instead of playing a friendly game of soccer, robots could theoretically be programmed to kill – either at will or targeting specific people.

Robert Sparrow, associate professor in the School of Philosophical, Historical and International Studies at Monash University, warns we are opening Pandora’s box with autonomous military robots and that there are major ethical implications.

He argues that military robots make the decision to go to war more likely because governments “can achieve their foreign policy goals by sending robots without taking [on] casualties,” he told Computerworld Australia.

“If you thought you were going to [have] 10,000 casualties, for instance, in going into a conflict, then you have to have a pretty good reason to do it. If you think we’ll just send half a dozen robots in and kill a lot of high valued targets, then that calculus looks very different and favours going to war.”

Current technology also means a robot could, theoretically, be armed with weapons and programmed to kill.

Mary-Anne Williams, director of the Innovation and Enterprise Research Lab at the University of Technology, Sydney, says robots can be trained to kill "with surprising ease".

“They can aim, shoot and fire. Robots today have sophisticated sensory-perception and [are] able to detect human targets. Once detected, robots can use a wide range of weaponry to kill targets,” she says.

The potential for military robots to be used for morally questionable actions is spurring some academics to call for a code of ethics governing their use.

Williams says adhering to a robot code of ethics is currently left to the individuals designing the robots, and that the rest of society needs to push for a set of guidelines which robots must adhere to.

“Robots can undertake physical action which can impact [on] people and property, so their actions must be governed by laws and a code of ethics. Robot actions can have a significant impact and lead to loss of life. Therefore robots must act in accordance with the law,” she says.

Isaac Asimov foresaw the technological reality we are now living in, detailing three laws of robotics in his 1950 book I, Robot:

  1. A robot may not injure a human being or, through inaction, allow a human to come to harm.
  2. A robot must obey orders given by humans, except where such orders would conflict with the first law.
  3. A robot must protect its own existence, as long as doing so does not conflict with the first two laws.

Sparrow also believes there should be ethical guidelines around the use of military robots.

“I think there is an ethics regardless of whether there’s regulation … We [also] need an arms control regime – we need international regulation of unmanned vehicles,” he says.

“We should [also] be very cautious about allowing weapons to make an autonomous decision about firing, [but] the logic of these weapons systems clearly points towards that.”

However, Sparrow says we are some way off a time when robots could enter enemy territory and fight alongside humans. He cites the “political cost” as the first barrier, as well as likely public dissent against the idea of robots marching in and killing people.

Another barrier is reliability: while facial recognition technology exists, and could in principle be paired with a weapon and programmed to kill specific people, Sparrow says such a system would not be reliable.

New drones, new problems

The growing use of drones is also creating new headaches and international problems for the governments that deploy them.

Civilian resentment towards countries that use drones to hurt and kill people is reportedly on the rise. The New York Times reported there was growing anti-American sentiment in countries such as Pakistan, where the US carries out drone strikes.

The US commonly uses the Predator and Reaper, remotely piloted drones that carry out air strikes. As recently as a few days ago, the New York Times reported the US had carried out a drone strike on regions in Afghanistan and Pakistan, allegedly killing a Pakistani Taliban commander.

An opinion piece in the New York Times also stated that drone strikes in Yemen are adding to growing hatred towards the US and spurring people to join radical militant groups.

Ultimately, Sparrow points the finger of responsibility at engineers as a collective, calling on them to take a stand against military robots and to stop taking part in their development.

He believes military funding is distorting robotics development around the world, and while he concedes it can be difficult for engineers to turn down scarce funding in the field, he argues engineers are partially responsible when military robots are used to kill people.

“Ethics isn’t just a matter of regulation. It’s a matter of right and wrong and my argument is that engineers should think about whether they really want to be working on these systems that are likely to make future wars more likely,” he says.

“I do think that there is a role for international regulation and that’s going to have to be negotiated at an international level between nations who are likely to build these weapons.”

Source: http://www.techworld.com.au/article/434811/robot_ethics_thou_shalt_kill_/




/ About us

The 2045 Initiative was founded by Russian entrepreneur Dmitry Itskov in February 2011 with the participation of leading Russian specialists in the fields of neural interfaces, robotics, artificial organs and systems.

The main goals of the 2045 Initiative are: the creation and realization of a new strategy for the development of humanity which meets global civilization challenges; the creation of optimal conditions promoting the spiritual enlightenment of humanity; and the realization of a new futuristic reality based on five principles: high spirituality, high culture, high ethics, high science and high technologies.

The main science mega-project of the 2045 Initiative aims to create technologies enabling the transfer of an individual’s personality to a more advanced non-biological carrier, and extending life, including to the point of immortality. We devote particular attention to enabling the fullest possible dialogue between the world’s major spiritual traditions, science and society.

A large-scale transformation of humanity, comparable to some of the major spiritual and sci-tech revolutions in history, will require a new strategy. We believe this to be necessary to overcome existing crises, which threaten our planetary habitat and the continued existence of humanity as a species. With the 2045 Initiative, we hope to realize a new strategy for humanity's development, and in so doing, create a more productive, fulfilling, and satisfying future.

The "2045" team is working towards creating an international research center where leading scientists will be engaged in research and development in the fields of anthropomorphic robotics, living systems modeling and brain and consciousness modeling with the goal of transferring one’s individual consciousness to an artificial carrier and achieving cybernetic immortality.

The Initiative organizes an annual congress, "The Global Future 2045", to provide a platform for discussing mankind's evolutionary strategy based on technologies of cybernetic immortality, as well as the possible impact of such technologies on global society, politics and economies of the future.

 

Future prospects of the "2045" Initiative for society

2015-2020

The emergence and widespread use of affordable android "avatars" controlled by a "brain-computer" interface. Coupled with related technologies, "avatars" will give people a number of new capabilities: the ability to work in dangerous environments, perform rescue operations, travel in extreme situations and so on.
Avatar components will be used in medicine for the rehabilitation of fully or partially disabled patients, giving them prosthetic limbs or restoring lost senses.

2020-2025

Creation of an autonomous life-support system for the human brain linked to a robot, an ‘avatar’, will save people whose bodies are completely worn out or irreversibly damaged. Any patient with an intact brain will be able to return to a fully functioning bodily life. Such technologies will greatly expand the possibilities of hybrid bio-electronic devices, creating a new IT revolution and making all kinds of superimpositions of electronic and biological systems possible.

2030-2035

Creation of a computer model of the brain and human consciousness, with the subsequent development of means to transfer individual consciousness onto an artificial carrier. This development will profoundly change the world: it will not only give everyone the possibility of cybernetic immortality but will also create friendly artificial intelligence, expand human capabilities and provide opportunities for ordinary people to restore or modify their own brains multiple times. The final result at this stage could be a real revolution in the understanding of human nature that will completely change the human and technical prospects for humanity.

2045

This is the time when substance-independent minds will receive new bodies with capacities far exceeding those of ordinary humans. A new era for humanity will arrive! Changes will occur in all spheres of human activity – energy generation, transportation, politics, medicine, psychology, sciences, and so on.

Today it is hard to imagine a future in which bodies made of nanorobots will become affordable and capable of taking any form. It is also hard to imagine body holograms featuring controlled matter. One thing is clear, however: humanity, for the first time in its history, will make a fully managed evolutionary transition and eventually become a new species. Moreover, the prerequisites for a large-scale expansion into outer space will be created as well.

 

Key elements of the project in the future

• International social movement
• Social network immortal.me
• Charitable foundation "Global Future 2045" (Foundation 2045)
• Scientific research centre "Immortality"
• Business incubator
• University of "Immortality"
• Annual award for contribution to the realization of the "Immortality" project.
