/ News

27.05.2014

Can a robot learn right from wrong?

In Isaac Asimov’s short story "Runaround," two scientists on Mercury discover they are running out of fuel for the human base. They send a robot named Speedy on a dangerous mission to collect more, but five hours later, they find Speedy running in circles and reciting nonsense.

It turns out Speedy is having a moral crisis: he is required to obey human orders, but he’s also programmed not to cause himself harm. "It strikes an equilibrium," one of the scientists observes. "Rule three drives him back and rule two drives him forward."

AS ROBOTS FILTER OUT INTO THE REAL WORLD, MORAL SYSTEMS BECOME MORE IMPORTANT

Asimov’s story was set in 2015, which was a little premature. But home-helper robots are a few years off, military robots are imminent, and self-driving cars are already here. We’re about to see the first generation of robots working alongside humans in the real world, where they will be faced with moral conflicts. Before long, a self-driving car will find itself in the same scenario often posed in ethics classrooms as the "trolley" hypothetical — is it better to do nothing and let five people die, or do something and kill one?

There is no right answer to the trolley hypothetical — and even if there were, many roboticists believe it would be impractical to predict each scenario and program what the robot should do.

"It’s almost impossible to devise a complex system of ‘if, then, else’ rules that cover all possible situations," says Matthias Scheutz, a computer science professor at Tufts University. "That’s why this is such a hard problem. You cannot just list all the circumstances and all the actions."

WITH THE NEW APPROACH, ROBOTS REASON THROUGH CHOICES RATHER THAN APPLY RULES

Instead, Scheutz is trying to design robot brains that can reason through a moral decision the way a human would. His team, which recently received a $7.5 million grant from the Office of Naval Research (ONR), is planning an in-depth survey to analyze what people think about when they make a moral choice. The researchers will then attempt to simulate that reasoning in a robot.

At the end of the five-year project, the scientists must present a demonstration of a robot making a moral decision. One example would be a robot medic that has been ordered to deliver emergency supplies to a hospital in order to save lives. On the way, it meets a soldier who has been badly injured. Should the robot abort the mission and help the soldier?

For Scheutz’s project, the decision the robot makes matters less than the fact that it can make a moral decision and give a coherent reason why — weighing relevant factors, coming to a decision, and explaining that decision after the fact. "The robots we are seeing out there are getting more and more complex, more and more sophisticated, and more and more autonomous," he says. "It’s very important for us to get started on it. We definitely don’t want a future society where these robots are not sensitive to these moral conflicts."
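Scheutz's "weigh the factors, decide, then explain" loop can be pictured in a few lines of code. The sketch below is only an illustration of that general idea, applied to the robot-medic dilemma above; the option names, factors, and weights are invented for this example and are not taken from Scheutz's actual system.

# A minimal, hypothetical sketch of "weigh relevant factors, come to a
# decision, and explain that decision afterwards" -- not Scheutz's architecture.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    factors: dict  # factor name -> estimated moral weight (higher = weightier)

def decide(options):
    """Score each option by summing its weighted factors, pick the best,
    and return both the choice and a human-readable justification."""
    scored = [(sum(o.factors.values()), o) for o in options]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    best_score, best = scored[0]
    explanation = "Chose '{}' (score {:.1f}) because: {}".format(
        best.name,
        best_score,
        "; ".join(f"{k} (weight {v})" for k, v in best.factors.items()),
    )
    return best, explanation

# The robot-medic dilemma from the article, with made-up weights.
options = [
    Option("continue to hospital", {"lives saved by supplies": 5.0, "obeying orders": 1.0}),
    Option("stop and treat soldier", {"immediate life at risk": 4.0, "duty to aid the injured": 1.5}),
]

choice, why = decide(options)
print(choice.name)
print(why)

Even a toy model like this makes the hard part visible: the outcome hinges entirely on where the factors and weights come from, which is the kind of question the planned survey of human moral judgment is meant to probe.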

Scheutz’s approach isn’t the only one. Ron Arkin, a well-known ethicist at Georgia Institute of Technology who has also worked with the military, wrote what is arguably the first moral system for robots. His "ethical governor," a set of Asimov-like rules that intervene whenever the robot’s behavior threatens to stray outside certain constraints, was designed to keep weaponized robots in check.
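In code, the governor pattern amounts to a veto layer between the planner and the actuators: every proposed action is checked against hard constraints before it is executed. The sketch below shows that pattern only in outline; the constraint names and action fields are made up for illustration and are not drawn from Arkin's implementation.

# A minimal, hypothetical sketch of the "ethical governor" idea: a layer of
# hard constraints that vets every proposed action before execution.
def no_harm_to_noncombatants(action):
    return not (action.get("weapon_engaged") and action.get("noncombatants_present"))

def stay_inside_engagement_zone(action):
    return action.get("inside_engagement_zone", True)

CONSTRAINTS = [no_harm_to_noncombatants, stay_inside_engagement_zone]

def governor(proposed_action):
    """Allow the action only if every constraint holds; otherwise veto it."""
    violated = [c.__name__ for c in CONSTRAINTS if not c(proposed_action)]
    if violated:
        return False, f"vetoed: violates {', '.join(violated)}"
    return True, "permitted"

print(governor({"weapon_engaged": True, "noncombatants_present": True}))
# (False, 'vetoed: violates no_harm_to_noncombatants')

Unlike the factor-weighing sketch above, nothing here is traded off: an action that crosses any constraint is simply blocked, which is what makes this approach rule-based rather than reasoned.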

THE HOPE IS THAT EVENTUALLY, ROBOTS WILL MAKE BETTER MORAL DECISIONS THAN HUMANS

For the ONR grant, Arkin and his team proposed a new approach. Instead of using a rule-based system like the ethical governor or a "folk psychology" approach like Scheutz’s, Arkin’s team wants to study moral development in infants. Those lessons would be integrated into the Soar architecture, a popular cognitive system for robots that employs both problem-solving and overarching goals. Having lost out on the grant, Arkin still hopes to pursue parts of the proposal. Unfortunately, there isn’t much funding available for robot morality.

The hope is that eventually robots will be able to perform more moral calculations than a human ever could, and therefore make better choices. A human driver doesn’t have time to calculate potential harm to humans in a split-second crash, for example.

There is another major challenge before that becomes possible, however. In order to make those calculations, robots will have to gather a lot of information from the environment, such as how many humans are present and what role each of them plays in the situation. Today's robots, however, still have limited perception. It will be difficult to design a robot that can tell allied soldiers from enemies on the battlefield, for example, or immediately assess a disaster victim's physical and mental condition.

It’s uncertain whether the ONR’s effort to design a moral reasoning system will be practical. It may turn out that robots do better when making decisions according to broad, hierarchical rules. At the end of Asimov’s story, the two scientists are able to jolt Speedy out of his infinite loop by invoking the first and most heavily weighted law of robotics: never harm a human, or, through inaction, allow a human to come to harm. One scientist exposes himself to the deadly Mercurial sun until Speedy snaps out of his funk and comes to the rescue. The robot is all apologies, which seems unfair — it’s a slave to its programming, after all. And as Arkin says, "It’s hard to know what’s right and what’s wrong."

Source: http://www.theverge.com/2014/5/27/5754126/the-next-challenge-for-robots-morality




/ About us

The 2045 Initiative was founded by Russian entrepreneur Dmitry Itskov in February 2011 with the participation of leading Russian specialists in the fields of neural interfaces, robotics, artificial organs and systems.

The main goals of the 2045 Initiative are: the creation and realization of a new strategy for the development of humanity which meets global civilization challenges; the creation of optimal conditions promoting the spiritual enlightenment of humanity; and the realization of a new futuristic reality based on 5 principles: high spirituality, high culture, high ethics, high science and high technologies.

The main science mega-project of the 2045 Initiative aims to create technologies enabling the transfer of an individual’s personality to a more advanced non-biological carrier, and extending life, including to the point of immortality. We devote particular attention to enabling the fullest possible dialogue between the world’s major spiritual traditions, science and society.

A large-scale transformation of humanity, comparable to some of the major spiritual and sci-tech revolutions in history, will require a new strategy. We believe this to be necessary to overcome existing crises, which threaten our planetary habitat and the continued existence of humanity as a species. With the 2045 Initiative, we hope to realize a new strategy for humanity's development, and in so doing, create a more productive, fulfilling, and satisfying future.

The "2045" team is working towards creating an international research center where leading scientists will be engaged in research and development in the fields of anthropomorphic robotics, living systems modeling and brain and consciousness modeling with the goal of transferring one’s individual consciousness to an artificial carrier and achieving cybernetic immortality.

An annual congress, "The Global Future 2045", is organized by the Initiative to provide a platform for discussing mankind's evolutionary strategy based on technologies of cybernetic immortality, as well as the possible impact of such technologies on global society, politics and economies of the future.

 

Future prospects of the "2045" Initiative for society

2015-2020

The emergence and widespread use of affordable android "avatars" controlled by a "brain-computer" interface. Coupled with related technologies, "avatars" will give people a number of new capabilities: the ability to work in dangerous environments, perform rescue operations, travel in extreme situations, etc.
Avatar components will be used in medicine for the rehabilitation of fully or partially disabled patients, giving them prosthetic limbs or restoring lost senses.

2020-2025

Creation of an autonomous life-support system for the human brain linked to a robotic ‘avatar’ will save people whose bodies are completely worn out or irreversibly damaged. Any patient with an intact brain will be able to return to a fully functioning bodily life. Such technologies will greatly expand the possibilities of hybrid bio-electronic devices, creating a new IT revolution and making all kinds of superimpositions of electronic and biological systems possible.

2030-2035

Creation of a computer model of the brain and human consciousness, with the subsequent development of means to transfer individual consciousness onto an artificial carrier. This development will profoundly change the world: it will not only give everyone the possibility of cybernetic immortality but will also create a friendly artificial intelligence, expand human capabilities and provide opportunities for ordinary people to restore or modify their own brain multiple times. The final result at this stage could be a real revolution in the understanding of human nature that will completely change the human and technical prospects for humanity.

2045

This is the time when substance-independent minds will receive new bodies with capacities far exceeding those of ordinary humans. A new era for humanity will arrive!  Changes will occur in all spheres of human activity – energy generation, transportation, politics, medicine, psychology, sciences, and so on.

Today it is hard to imagine a future in which bodies consisting of nanorobots will become affordable and capable of taking any form. It is also hard to imagine body holograms featuring controlled matter. One thing is clear, however: humanity, for the first time in its history, will make a fully managed evolutionary transition and eventually become a new species. Moreover, prerequisites for a large-scale expansion into outer space will be created as well.

 

Key elements of the project in the future

• International social movement
• Social network immortal.me
• Charitable foundation "Global Future 2045" (Foundation 2045)
• Scientific research centre "Immortality"
• Business incubator
• University of "Immortality"
• Annual award for contribution to the realization of the "Immortality" project
