/ News

06.11.2015

We’re building superhuman robots. Will they be heroes, or villains?

Each week, In Theory takes on a big idea in the news and explores it from a range of perspectives. This week we’re talking about robot intelligence. Need a primer? Catch up here.

Patrick Lin is an associate philosophy professor at California Polytechnic State University and an affiliate scholar at Stanford Law School’s Center for Internet and Society. He works with government and industry on technology ethics, and his book “Robot Ethics” was published in 2014.

Forget about losing your job to a robot. And don’t worry about a super-smart, but somehow evil, computer. We have more urgent ethical issues to deal with right now.

Artificial intelligence is replacing human roles, and it’s assumed that those systems should mimic human behavior — or at least an idealized version of it. This may make sense for limited tasks such as product assembly, but for more autonomous systems — robots and AI systems that can “make decisions” for themselves — that goal gets complicated.

There are two problems with the assumption that AI should act like we do. First, it’s not always clear how we humans ought to behave, and programming robots becomes a soul-searching exercise on ethics, asking questions that we don’t yet have the answers to. Second, if artificial intelligence does end up being more capable than we are, that could mean that it has different moral duties, ones which require it to act differently than we would.

[Other perspectives: If robots can become like us, what does that say about humanity?]

Let’s look at robot cars to illustrate the first problem. How should they be programmed? This is important, because they’re driving alongside our families right now. Should they always obey the law? Always protect their passengers? Minimize harm in an accident if they can? Or just slam the brakes when there’s trouble?

These and other design principles are reasonable, but sometimes they conflict. For instance, an automated car may have to break the law or risk its passengers’ safety in order to spare the greatest number of lives outside the car. The right decision, whatever that is, is fundamentally an ethical call based on human values, and one that isn’t answerable by science and engineering alone.
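To make the conflict concrete, here is a minimal, purely hypothetical sketch in Python of how such principles might be encoded. Every name, rule and number below is invented for illustration; none of it reflects any real vehicle’s software.

from dataclasses import dataclass

@dataclass
class Maneuver:
    """One possible action the car could take in an emergency (hypothetical)."""
    name: str
    breaks_law: bool            # e.g., crossing a double yellow line
    passenger_risk: float       # estimated chance of harming the passengers
    total_expected_harm: float  # estimated harm summed over everyone involved

def choose(options: list[Maneuver]) -> Maneuver:
    # Principle 1: always obey the law.
    legal = [m for m in options if not m.breaks_law]
    # Principle 2: among the remaining options, minimize total harm.
    # (A "protect the passengers" principle would instead sort by
    # passenger_risk; each ordering of the principles favors someone else.)
    candidates = legal if legal else options
    return min(candidates, key=lambda m: m.total_expected_harm)

# The conflict: the illegal swerve is the least harmful option overall,
# but Principle 1 has already ruled it out.
options = [
    Maneuver("brake in lane", breaks_law=False,
             passenger_risk=0.1, total_expected_harm=0.8),
    Maneuver("swerve onto shoulder", breaks_law=True,
             passenger_risk=0.2, total_expected_harm=0.3),
]
print(choose(options).name)  # prints "brake in lane" despite the higher total harm

Whether the code filters by legality first or minimizes harm first is exactly the ethical call described above: a program can encode an ordering of principles, but it cannot justify one.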

That leads us to the second, related problem. With their unblinking sensors and networked awareness, robot cars can detect risks and react much faster than we can — that’s what artificial intelligence is meant to do. In addition, their behavior is programmed, which means crash decisions are already scripted. Therein lies a dilemma: if a human driver makes a bad decision in a sudden crash, it’s a forgivable accident, but when AI makes any decision, it’s not a reflex but premeditated.

This isn’t just philosophical; it has real implications. Being thoughtful about a crash decision — accounting for numerous factors that a human brain cannot process in a split second — should lead to better outcomes overall, yet that deliberation is exactly where new liability arises. An “accidental” accident caused by a person and a “deliberate” accident involving a computer system could have vastly different legal implications.

Why would we hold artificial intelligence to a higher standard? Because, as any comic-book fan could tell you, “With great power comes great responsibility.” The abilities of AI and robots are effectively superpowers. While it may not be our moral duty to throw a ticking bomb into outer space to save people on the ground, it’s arguably Superman’s duty because he can. Where we may duck out of harm’s way, a robot may be expected to sacrifice itself for others, since it has no life to protect.

But even superheroes need a Justice League or a Professor X for a sanity check; otherwise, campaigners emerge to fill that vacuum, on issues from love to war. Some companies, such as Google DeepMind, recognize the value of an “ethics board” to help guide their AI research and its resulting products through uncharted territory. Berkeley’s Stuart Russell, a computer science professor, supports bringing ethics into the discussion: “In the future, moral philosophy will be a key industry sector.” Stanford’s Jerry Kaplan, another AI expert, predicts that, within 10 years, a “moral programming” course will be required for a degree in computer science.

Our society is increasingly becoming a black box — we don’t know how things work anymore, because it’s hard to inspect the algorithms on which many of our products run. These formulas are mostly hidden away as corporate trade secrets, whether they are financial trading bots, car operating systems or security screening software. Even within a company, the algorithms can be too complex to understand: new code is stacked on top of old code over time, sometimes resulting in “spaghetti code” that can literally kill. The unintended acceleration of Toyota vehicles, attributed in part to badly structured code, may have been involved in the deaths of at least 89 people since 2000.

But then again, fears about AI may just reflect fears about ourselves. We know what kind of animals we are, and we worry that AI might wreak the same havoc (some algorithms have already been accused of discrimination). But in the same way that we can raise our children to do the right thing, we can ease our worries about unprincipled artificial intelligence systems by building ethics into the design. Ethics creates transparency, which builds trust. We’ll need trust to co-exist with the technological superheroes we’ve created to save us all.

Explore these other perspectives:

Q&A: Philosopher Nick Bostrom on superintelligence, human enhancement and existential risk

Francesca Rossi: Can you teach morality to a machine?

Patrick Lin: We’re building superhuman robots. Will they be heroes, or villains?

Ari Schulman: If robots can become like us, what does that say about humanity?

Murray Shanahan: Machines may seem intelligent, but it’ll be a while before they actually are

Dileep George: Killer robots? Superintelligence? Let’s not get ahead of ourselves.

Source: https://www.washingtonpost.com/news/in-theory/wp/2015/11/02/were-building-superhuman-robots-will-they-be-heroes-or-villains/




/ About us

The 2045 Initiative was founded by Russian entrepreneur Dmitry Itskov in February 2011 with the participation of leading Russian specialists in the fields of neural interfaces, robotics, artificial organs and systems.

The main goals of the 2045 Initiative are: the creation and realization of a new strategy for the development of humanity that meets global civilization challenges; the creation of optimal conditions promoting the spiritual enlightenment of humanity; and the realization of a new futuristic reality based on five principles: high spirituality, high culture, high ethics, high science and high technologies.

The main science mega-project of the 2045 Initiative aims to create technologies enabling the transfer of an individual’s personality to a more advanced non-biological carrier, extending life up to the point of immortality. We devote particular attention to enabling the fullest possible dialogue between the world’s major spiritual traditions, science and society.

A large-scale transformation of humanity, comparable to some of the major spiritual and sci-tech revolutions in history, will require a new strategy. We believe this to be necessary to overcome existing crises, which threaten our planetary habitat and the continued existence of humanity as a species. With the 2045 Initiative, we hope to realize a new strategy for humanity's development, and in so doing, create a more productive, fulfilling, and satisfying future.

The "2045" team is working towards creating an international research center where leading scientists will be engaged in research and development in the fields of anthropomorphic robotics, living systems modeling and brain and consciousness modeling with the goal of transferring one’s individual consciousness to an artificial carrier and achieving cybernetic immortality.

The Initiative organizes an annual congress, “The Global Future 2045,” to provide a platform for discussing mankind’s evolutionary strategy based on technologies of cybernetic immortality, as well as the possible impact of such technologies on global society, politics and the economies of the future.

 

Future prospects of the 2045 Initiative for society

2015-2020

The emergence and widespread use of affordable android “avatars” controlled by a “brain-computer” interface. Coupled with related technologies, “avatars” will give people a number of new abilities: working in dangerous environments, performing rescue operations, traveling in extreme situations, and so on.
Avatar components will be used in medicine for the rehabilitation of fully or partially disabled patients, giving them prosthetic limbs or restoring lost senses.

2020-2025

The creation of an autonomous life-support system for the human brain, linked to a robot “avatar,” will save people whose bodies are completely worn out or irreversibly damaged. Any patient with an intact brain will be able to return to a fully functioning bodily life. Such technologies will greatly expand the possibilities of hybrid bio-electronic devices, creating a new IT revolution and making all kinds of superimpositions of electronic and biological systems possible.

2030-2035

The creation of a computer model of the brain and human consciousness, with the subsequent development of means to transfer individual consciousness onto an artificial carrier. This development will profoundly change the world: it will not only give everyone the possibility of cybernetic immortality but will also create a friendly artificial intelligence, expand human capabilities and provide opportunities for ordinary people to restore or modify their own brains multiple times. The final result at this stage could be a real revolution in the understanding of human nature, one that completely changes the human and technical prospects for humanity.

2045

This is the time when substance-independent minds will receive new bodies with capacities far exceeding those of ordinary humans. A new era for humanity will arrive! Changes will occur in all spheres of human activity – energy generation, transportation, politics, medicine, psychology, sciences, and so on.

Today it is hard to imagine a future in which bodies consisting of nanorobots will become affordable and capable of taking any form. It is also hard to imagine body holograms featuring controlled matter. One thing is clear, however: humanity, for the first time in its history, will make a fully managed evolutionary transition and eventually become a new species. Moreover, prerequisites for a large-scale expansion into outer space will be created as well.

 

Key elements of the project in the future

• International social movement
• Social network immortal.me
• Charitable foundation "Global Future 2045" (Foundation 2045)
• Scientific research centre "Immortality"
• Business incubator
• University of "Immortality"
• Annual award for contribution to the realization of the project of "Immortality"
