Technological singularity and transhumanism - new world for old
Will technology provide a perfect future for the ascent of man? Or is it wishful thinking by techno-pundits who want to believe human progress is all toward a utopian state of existence?
The history of human invention often seems precariously unplanned, yet the questing human mind remains bent on finding meaning in and for our grand schemes, however compelling the evidence for innovation via accident and randomness. Surely, human invention must be heading somewhere - otherwise, what does it all mean?
In striving to devise a meta-significance of the Internet, for example, philosophers and pundits have come up with interpretive theories that can sometimes seem cobbled together from established notions of socio-technological evolution, such as the effects of the telegraph, the car, radio and TV, or other world-changing advances.
But the rapid rate of development of the Internet, and its intrinsically open and unregulated nature, have made it a natural environment for advocates of totally free speech and uncensored expression. No wonder connections have been argued (by technology historians like John Markoff and others) between the Internet 'consciousness', the personal computer industry, and the societal subcultures of the 1960s.
The tech-literate generations born in that decade and after represented a significant change in the relationship between 'ordinary people' and computer technology. It was foreseen that the world would need more technologists to run the computer systems that were gradually automating many workplace tasks, and playing an integral role in commercial management. Universities started to make their systems available to students to learn with, on a time-share basis. As these systems became more connected, users were able to communicate with each other in prototypical virtual communities. By the end of the 1970s the means to conduct these virtual engagements from the privacy of one's own bedroom were within sight.
Skills learned in extracurricular computer classes, and the culture of self-realisation via 'virtual' exploration, meant that in the 1970s and 1980s many young technologists entered the adult workplace knowing more about the technology than the incumbent 'experts' - and they were not backward in asserting their knowhow.
This was a major turnaround. While the computer establishment had been characterised by big international computer corporations like IBM - whose technical experts wore white lab coats, and whose executives were uniformed in conservative suits and ties - the technology they owned was viewed as oppressive rather than liberating. "Most of our generation scorned computers as the embodiment of centralised control," wrote Californian author Stewart Brand, born in 1938. But a contingent of Brand's generation "embraced computers, and set about transforming them into tools of liberation".
There's more irony in the fact that the 1960s, the decade in which IBM established itself as the epitome of clean-cut corporatism, also engendered hippies and other countercultural movements and 'alternative' ideologies. California not only attracted thousands of 'New Communalists' seeking a utopian alternative lifestyle between 1965 and 1972; it was also, around that time, becoming identified with high-tech innovation.
As Professor Fred Turner of Stanford University's Department of Communication says in 'From Counterculture to Cyberculture' (2006), "New Communalists... often embraced the collaborative social practices, the celebration of technology, and the cybernetic rhetoric of mainstream military-industrial-academic research". He adds: "Analysts of digital utopianism have dated the communitarian rhetoric surrounding the introduction of the Internet to what they have imagined to be a single, authentically revolutionary social movement that was somehow crushed or co-opted by the forces of capitalism".
Time magazine writer Charles Cooper observed in 2005 that by rights the East Coast should have bested the West Coast in the expansion of the American computer industry: "The East Coast computing axis, which ran from just north of New York City (where IBM housed its headquarters) up to Cambridge and the Massachusetts Institute of Technology, was rich in talent, money and pedigree. But most of the groundbreaking research [in computing] was getting done in California." Soon even IBM had set up its innovation centre in Silicon Valley.
Turn on, turn off, turn on again
It might have been expected that the commerce-driven Silicon Valley and the subcultures would grate on each other as abrasively as the tectonic plates that occasionally clash deep below the Californian soil. In fact there was a willing cross-fertilisation between the two outlooks. Historians have noted how the 'democratisation' of computing, brought first by the personal computer revolution of the 1980s, and then by the connectedness revolution of the 1990s, was influenced by the communal values of self-reliance and open access espoused by counterculture figureheads of the earlier decades. The computer geeks soon had their own figureheads - emerging visionaries and techno-libertarians such as Steves Jobs and Wozniak (while not forgetting their contemporary William H Gates III).
Stewart Brand also argued that the counterculture's scorn for centralised authority was what spurred the philosophical foundations of not only the 'leaderless' Internet, but also the entire personal computer revolution.
Brand was writing at the dawn of public Internet ubiquity; it's less likely that the 'leaderless', decentralised ideal still holds in 2014's world of dominant, highly-resourced online brands like Google and Amazon, around which lesser Web-based entities revolve in a kind of gravitational thrall.
The rise of the virtual world, and better insights into the effect the Internet has had - and continues to have - on most aspects of society, have encouraged a range of 'isms' that in many respects match those of the 1960s in their unorthodoxy.
Theories such as singularitarianism, transhumanism, extropianism, and cyber-utopianism - and even facets of cyberdelia and the so-called 'California ideology' - have been propounded over the last decade as ideological constructs. Their interpretive foundations can seem pseudo-scientific, yet many of their premises rest on evidence drawn from the real-world application of Internet technology.
At first hearing, these species of technological utopianism sound more Gordon Moore than Thomas More, and bespeak a visionary grounding that's based on forecasts and extrapolations of a range of technology trends, all of which are open to dispute. Most are informed by an aspiration toward making the world a better place.
So do the theories usually classed under the general heading of 'technological utopianism' provide anything of value for the earnest technologist looking to better understand the tools of their profession - or even the average human curious about how computer technology is affecting their lives? What, in short, are they about?
Technological singularity, or simply 'the singularity', is defined as a theoretical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence, radically changing civilisation, and perhaps human nature. The capabilities of such an intelligence might be hard for humans to comprehend, so the technological singularity is seen sometimes as an occurrence beyond which future developments in human history become not only unpredictable but even beyond our current powers of understanding.
Transhumanism is a cultural and intellectual movement promoting the aim of transforming the human condition fundamentally by developing - and making available - technologies to enhance human intellectual, physical, and psychological capabilities. Transhumanist thinking studies the potential benefits and hazards of emerging technologies that could overcome basic human limitations. It also addresses ethical matters involved in developing and using such technologies. Some transhumanists predict that human beings may eventually transform themselves into beings with such greatly expanded abilities that they justify a state of being known as 'posthuman'.
Extropianism, meanwhile, is an evolving framework of values and standards for continuously improving the human condition; extropians believe that advances in science and technology will at some future point enable humans to live indefinitely. Cyber-utopianism is a term put forward by Belarus-born writer Evgeny Morozov to describe - and deride - the belief that online communication is of its very nature emancipatory, and that the Internet innately favours the oppressed rather than the oppressor. Morozov argues that this belief is naive and stubbornly refuses to acknowledge the Internet's pitfalls. He reportedly blames the former hippies for promulgating this misguided utopian belief in the 1990s.
All of these idealistic concepts are - to a degree - concerned with how computer technology and the digital existence it enables may be contributing to the eventual establishment of some kind of refined, enhanced state of human existence in which the problems that beset us in the 'real' world can be alleviated or left behind altogether as we realise that the virtual world offers us a more satisfying, more 'perfect' mode of being than conventional physical existence.
It might sound like airy-fairy stuff to the average IT professional faced with everyday chores like pulling cables or fixing software glitches; but these frameworks of values and standards for making sense of the effect computer technology is having on everyone who uses it are at least a first step toward the meta-philosophies that, some might argue, mankind will need as technological progress starts to challenge our human capacity to comprehend its full ramifications.
Technological singularity
Around 70 years ago, the first digital computers relied on fragile vacuum tubes and punched-paper cards, but already their designers knew that they could become so much more than mere calculating machines, if only the clumsiness of 1940s hardware could be overcome by suitable electronics. Lecturing in 1951, computer scientist Alan Turing said: "It seems probable that once the machine-thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits." At some stage therefore, Turing advised, we should "have to expect the machines to take control".
In a conversation recalled by fellow mathematician Stanislaw Ulam in 1958, the Hungarian-American computing pioneer John von Neumann suggested that the march of technology "gives the appearance of approaching some essential singularity in the history of the race, beyond which human affairs, as we know them, cannot continue." Then in 1965 Irving Good, an outstanding mathematician who had worked alongside Alan Turing as a wartime code-breaker, predicted "an intelligence explosion" triggered by "an ultra-intelligent machine that designs even better machines. The first intelligent machine is the last invention that man need ever make, provided it is docile enough to control."
If the distinction between us and our machines begins to blur, then perhaps the question of docility vanishes, because we would hardly want to destroy ourselves (would we?). Twenty years later, at the University of California, Los Angeles (UCLA), an idea known as transhumanism took shape, just at the point when the recently-arrived personal computers were on their way to becoming commonplace. It became realistic to imagine a time, a few decades down the road, when biology and electronics might be merged, so that both brain and body could be augmented with additional powers of memory, intelligence, social connectivity, agility, and longevity. According to the transhumanist manifesto, we will become 'better than well'.
Entrepreneur and futurist Ray Kurzweil is now working on GoogleBrain, a new version of the familiar search engine with the express purpose of actually understanding search queries instead of just robotically sniffing-out keywords. Kurzweil is also the leading proponent of 'The Singularity', a term derived from von Neumann's remark, and which Kurzweil defines as "an era during which our intelligence will become increasingly nonbiological and trillions of times more powerful than it is today". In his many books, articles and lectures, he confidently predicts "the dawning of a new civilisation that will enable us to transcend our biological limitations and amplify our creativity".
For Kurzweil, the point of no return has already been reached because of what he calls the Law of Accelerating Returns. Technological change is exponential, he argues. The rate of advance is itself speeding up: "So we won't experience 100 years of progress in the 21st century - it will be more like 20,000 years of progress (at today's rate)."
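To see how that compounding works, here is a minimal back-of-the-envelope sketch - not taken from Kurzweil's own writings - that assumes, purely for illustration, that the rate of progress doubles every decade:

```python
# Illustrative sketch of the 'Law of Accelerating Returns' arithmetic.
# ASSUMPTION: the rate of progress doubles every decade - a round figure
# chosen for illustration, not a number taken from the article.

DOUBLING_YEARS = 10   # assumed doubling period for the rate of progress
CENTURY = 100         # calendar-year horizon

progress = 0.0        # cumulative progress, measured in "years at today's rate"
for year in range(CENTURY):
    # Each calendar year contributes 2**(year/10) "today's-rate" years.
    progress += 2 ** (year / DOUBLING_YEARS)

print(f"~{progress:,.0f} years of progress at today's rate over one century")
# Prints roughly 14,000 - the same order of magnitude as Kurzweil's
# oft-quoted 20,000-year figure.
```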
To put Kurzweil's argument simply, imagine a Greek philosopher from 2,000 years ago, and a clever Medieval clockmaker, magically transported to the early 20th century. Both of them may well have grasped the basics of steam engines, motor cars, aircraft; even the wire telegraph might not have seemed startling. On the other hand, the advances that mankind has experienced over just the last three decades, from deep space exploration to instant global communications, would leave them stunned, while to them biotechnology would be incomprehensible. How, then, would we citizens of the early 21st century respond if we could catch a glimpse of the technologies awaiting us in just a few decades' time?
Kurzweil and his followers believe that a crucial turning point will be reached around the year 2030, when information technology achieves 'genuine' intelligence, at the same time as biotechnology enables a seamless union between us and this super-smart new technological environment. Ultimately the human-machine mind will become free to roam a universe of its own creation, uploading itself at will on to a "suitably powerful computational substrate". We will become essentially god-like in our powers.
Scientists from other fields of research share this vision of epic change just around the corner. Eminent cosmologist Martin Rees asserts that "post-human intelligence will develop hypercomputers with the processing power to simulate living things - even entire worlds". These worlds will become places where we can really 'live', as we test our ability to reshape the meaning of existence itself.
This could sound like unabashed science fiction (predicated on an attainable societal state of man-machine harmony), until we remind ourselves of what is already out there. We have artificial limbs, for instance, activated by nerve impulses. There's Darwinian software that designs its own improved successor code. We are on the verge of wearable computing, from Google Glass to increasingly technologically-adept smartwatches (see E&T Vol 8 No 12). Already the distinction between the averagely wealthy human and their 'hardware' is blurring. It is also obvious that certain kinds (and perhaps, all kinds) of virtual experience will become indistinguishable from 'real' world sense impressions, once we've gone beyond rectangular plasma screens and gone seriously three-dimensional.
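As a purely illustrative aside (not drawn from the article), that 'Darwinian software' idea boils down to an evolutionary loop of mutation and selection, which a few lines of code can sketch with a toy fitness function standing in for 'how good is this candidate program':

```python
import random

# Toy evolutionary search illustrating the 'Darwinian software' idea:
# candidate solutions are repeatedly mutated and selected by fitness.
# The fitness function (count the 1-bits in a genome) is purely illustrative.

def fitness(genome):
    return sum(genome)

def mutate(genome, rate=0.05):
    return [1 - bit if random.random() < rate else bit for bit in genome]

# Start from a random population of 20 genomes, each 32 bits long.
population = [[random.randint(0, 1) for _ in range(32)] for _ in range(20)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)              # rank by fitness
    survivors = population[:10]                             # keep the fitter half
    children = [mutate(random.choice(survivors)) for _ in range(10)]
    population = survivors + children                       # next generation

best = max(population, key=fitness)
print("best fitness after 50 generations:", fitness(best))
```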
The 'technological singularity' concept has its critics, of course. Kurzweil and others assume a smooth, unstoppable acceleration in technological progress, regardless of resource shortages or potential limits to electronic miniaturisation, say. There is much speculative talk of quantum computers, nanorobots, and other futuristic devices for transforming our bodies and boosting our brain power. To be bluntly sceptical, aspects of technological singularity could appear to have been conceived by clever men of a certain age, who are addicted to vitamin supplements (as Kurzweil freely admits), and who don't want to shuffle off before they have been able to witness their predictions coming true.
Nevertheless, a report entitled 'Converging Technologies for Improving Human Performance', commissioned in 2003 by the US National Science Foundation (NSF), takes seriously the idea that we are approaching "a convergence of nanotechnology, biology, information systems and cognitive science that will have immense individual, societal and historical implications". The report also predicts "fast interfaces directly between the human brain and machines, enabling new modes of interaction between people". In short, concepts that 20 years ago seemed like geeky dreams now look perfectly plausible.
There may be some snakes lurking in this technological paradise. For instance, the NSF's expectation of "access to collective knowledge while safeguarding privacy" seems risible in 2014. Futurologist Ian Pearson, formerly of BT, and now running his own consultancy called Futurizon, does not see the Technological Singularity (or the 'Convergence', or whatever we want to call it) as a remedy for all our future adversities.
"It won't bring a utopia," he says. "Will the future be better than today, worse, or just different? I can't answer that question."
But maybe we'll find out soon enough. Vernor Vinge, formerly a professor of mathematics at San Diego State University (SDSU), has written many essays and works of fiction based on the 'singularity' theme. He insists that "we are on the edge of change comparable to the rise of human life on Earth", no less, and the time to start thinking about it is right now. "For all my rampant technological optimism, sometimes I think I'd be more comfortable if I were regarding these transcendental events from 1,000 years' remove instead of 20."
Transhumanism
Does human evolution need a helping hand? The idea that Homo sapiens is no longer evolving does not seem controversial, and a lot of people believe it. Naturalist David Attenborough wrote in the Daily Telegraph last autumn: "We stopped natural selection as soon as we started being able to rear 95 to 99 per cent of our babies that are born."
Biologists were quick to point out that natural selection is still happening and influences human evolution, not just because only citizens of the richest nations of the world have low infant mortality, but because death is not the only contributor to natural selection. Many children are simply never conceived because their would-be parents made choices that rendered them less likely to conceive - and this is not just about contraception, as satirised in Mike Judge's 2006 movie 'Idiocracy'. Stephen Stearns and colleagues from Yale University found that women with larger numbers of children were on average shorter and stouter, and likely to experience a later menopause.
These natural-selection pressures are not necessarily the ones we want. Responding to Attenborough's article, anthropologist Professor John Hawks of the University of Wisconsin wrote: "The traits that make a difference to selection are not typically the ones that matter to the health of 70-year-olds."
That is arguably where transhumanism comes in, by providing ways of altering the body to make it less susceptible to the effects of ageing and to evolve in a direction humans choose rather than a direction that population pressures deliver. Transhumanism has splintered into many different forms, but all assume that progress in technology will make it possible to build better humans - or, if not better, at least much longer-lived ones. Whether this is an ethically desirable outcome and will lead to a better human race remains an important but unanswered question.
Transhumanists such as utilitarian philosopher David Pearce argue that individuals will have a choice over what they will be able to upgrade, and whether to upgrade. However, we do not yet know the price of the upgrade, whether everybody will be able to afford it and whether those individual choices will benefit the rest of us.
Many transhumanists argue that superintelligence will pave the way for a more equitable, less destructive future. As smart people are not immune from making unfortunate choices, this may be one of transhumanism's more courageous assumptions. So, if we pass by the moral questions of building better humans, is it even possible?
If you look at the direction of research, it may be a process that is hard to avoid and, if you squint hard enough, we may already be there. Technology already extends us - it's just not part of our bodies in most cases; yet it is getting much closer, even for everyday objects. Take Google Glass, for example. The backlash against the computerised spectacles is arguably the first salvo to go mainstream in what will be a long-running debate over the merits (or otherwise) of transhumanism.
As a removable piece of eyewear, Google Glass is perhaps not obvious as a transhumanist technology, but the backlash focuses on concerns that it potentially gives wearers an advantage, such as the ability to perform face recognition on strangers or to monitor and record a meeting covertly. If we assume that retinal implant technology will improve over time, it is not a major stretch to the point where the vision-processing technology in Google Glass is deployed, if not in the eye itself, then in a far less obvious wireless connection to a wearable computer.
For other prosthetic technologies, we are still dealing with the low reliability, performance, and robustness of electronic and mechanical systems compared with the biological. The research focus is on how far engineers can repair failing parts of the body; the mind is the next step. Researchers are trying to find ways to fight the effects of neurodegenerative diseases such as Alzheimer's - treatments that, although they would not extend life, would reduce the proportion of life spent needing personal care.
In the autumn of 2013, US defence research agency DARPA began to talk about its brain-repair projects, one focusing on implants to treat depression, chronic pain, and post-traumatic stress disorder (PTSD) using techniques such as deep-brain stimulation. A second project, Restoring Active Memory (RAM), is an attempt to build electronic devices that can repair brain damage and reverse memory loss. Although there are clear applications for injured soldiers, spinoffs from the work, assuming it succeeds, might prove useful for dealing with the effects of dementia.
Electronics, however, has a lot of catching up to do. As with most other prosthetics, biology today does a much better job than a computer. The machine's sole advantage now is that it can record and store data more reliably than a group of biological neurons, which need repeated stimulation to maintain their connections and the memories those connections represent.
Technology in 2014 is faced with two problems. The first is the suitability of existing von Neumann architectures for emulating brain function. Research is continuing into neuromorphic elements that are better at behaving like neurons and synapses, although it remains unclear which parts of the biological neuron are essential for a machine that operates like a brain - and which might even be able to achieve consciousness.
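As a rough illustration of what 'behaving like a neuron' means in the neuromorphic context, here is a minimal sketch (not from the article) of the leaky integrate-and-fire model that such hardware commonly approximates; the parameter values are arbitrary and chosen only so the example spikes:

```python
# Minimal leaky integrate-and-fire (LIF) neuron - one of the simple models
# that neuromorphic hardware typically approximates in silicon.

def simulate_lif(input_current, dt=1e-3, tau=0.02,
                 v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    """Return the membrane-voltage trace and spike times for an input sequence."""
    v = v_rest
    trace, spikes = [], []
    for step, i_in in enumerate(input_current):
        # The membrane voltage leaks back toward rest while integrating input.
        v += (-(v - v_rest) + i_in) * (dt / tau)
        if v >= v_threshold:          # threshold crossed: emit a spike...
            spikes.append(step * dt)
            v = v_reset               # ...and reset, as a biological neuron does
        trace.append(v)
    return trace, spikes

# A constant supra-threshold input produces a regular spike train.
voltage, spike_times = simulate_lif([1.5] * 200)
print(len(spike_times), "spikes in 200 ms of simulated time")
```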
The second problem is a possible slowdown in electronic integration. The two-dimensional scaling that characterised the Moore's Law era is within sight of reaching its limits: at some point we will simply not have enough atoms to construct a working switch beyond the final technology node. Using the third dimension to scale up is a realistic alternative, but we may not be able to obtain the same degree of scaling in cost that we have had with integrated-circuit development over the past half century.
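A back-of-the-envelope calculation (an illustration, not a figure from the article) shows why the atoms run out; the feature sizes below are hypothetical round numbers measured against silicon's roughly 0.543 nm lattice constant:

```python
# Back-of-the-envelope illustration of why 2D scaling runs out of atoms.
# The silicon lattice constant (~0.543 nm) is a textbook value; the feature
# sizes are hypothetical round numbers, not specific process nodes.

SI_LATTICE_NM = 0.543  # edge length of the silicon crystal unit cell, in nm

for feature_nm in (45, 22, 10, 5, 2):
    unit_cells = feature_nm / SI_LATTICE_NM
    print(f"{feature_nm:>2} nm feature spans only ~{unit_cells:.0f} silicon unit cells")

# At a few nanometres a gate spans a handful of unit cells, so the placement
# of individual atoms starts to dominate - the 'not enough atoms' limit.
```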
Pro-brainers?
Russian businessman Dmitry Itskov is confident that the technology will develop. He decided to set up the 2045 Initiative with a plan to build a complete replacement for the human brain based on some form of electronic technology by the middle of this century. Rather than aim just for that endpoint, the team has outlined a series of four phases that make the ultimate aim seem more tractable. By the end of 2020, for example, the 2045 Initiative expects to have 'affordable' robotic avatars controlled by some form of brain-computer interface, building on techniques already in place that allow limited control of prosthetic limbs using electronic implants.
The next decade would, according to the 2045 group, turn the avatar into a life-support system for the brain, followed by an accurate computer model of a conscious brain that would provide the means to transfer the mind from the biological brain to a machine emulation. As long as the computer is repairable and powered, it would offer the mind immortality.
As we still do not understand the processes of the brain, a 40-year timescale is ambitious but assuming that the research into brain-assisting implants proceeds quickly, we may encounter an increasing number of hybrid minds that are, to some degree, transhuman if not enhanced humans. Using biology to redesign ourselves may prove to be a more viable near-term approach for long-lived transhumans.
Gerontologists such as Aubrey de Grey of the SENS Research Foundation focus on extending the life of the human body. De Grey believes that the human lifespan could be stretched to centuries using therapies that subvert the processes that lead to bodies degrading with age, and that repair the damage.
The idea of life extension (or youth extension, at least) has begun to capture attention from mainstream companies. Google created the firm Calico, headed by former Genentech CEO Art Levinson, to find ways to deal with what Larry Page described as the "challenge of ageing".
As transhumanism focuses largely on the self, most branches of the philosophy do not have a ready answer as to whether future humans will make good choices for the many. But, even though the ambitious plans of mind uploading and bodily enhancement seem fanciful today, mainstream technological development is delivering technologies that our descendants, if not our future selves, will see as 'transhuman'.
Further information
- en.wikipedia.org/wiki/Technological_singularity
- www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html
- en.wikipedia.org/wiki/Transhumanism
- lifeboat.com/ex/transhumanist.technologies
Source: http://eandt.theiet.org/magazine/2014/01/new-utopias-for-old.cfm
/ About us
The 2045 Initiative was founded by Russian entrepreneur Dmitry Itskov in February 2011 with the participation of leading Russian specialists in the fields of neural interfaces, robotics, and artificial organs and systems.
The main goals of the 2045 Initiative: the creation and realization of a new strategy for the development of humanity which meets global civilization challenges; the creation of optimal conditions promoting the spiritual enlightenment of humanity; and the realization of a new futuristic reality based on 5 principles: high spirituality, high culture, high ethics, high science and high technologies.
The main science mega-project of the 2045 Initiative aims to create technologies enabling the transfer of an individual's personality to a more advanced non-biological carrier, and extending life, including to the point of immortality. We devote particular attention to enabling the fullest possible dialogue between the world's major spiritual traditions, science and society.
A large-scale transformation of humanity, comparable to some of the major spiritual and sci-tech revolutions in history, will require a new strategy. We believe this to be necessary to overcome existing crises, which threaten our planetary habitat and the continued existence of humanity as a species. With the 2045 Initiative, we hope to realize a new strategy for humanity's development, and in so doing, create a more productive, fulfilling, and satisfying future.
The "2045" team is working towards creating an international research center where leading scientists will be engaged in research and development in the fields of anthropomorphic robotics, living systems modeling and brain and consciousness modeling with the goal of transferring one’s individual consciousness to an artificial carrier and achieving cybernetic immortality.
An annual congress, "The Global Future 2045", is organized by the Initiative to give a platform for discussing mankind's evolutionary strategy based on technologies of cybernetic immortality, as well as the possible impact of such technologies on global society, politics and economies of the future.
Future prospects of the "2045" Initiative for society
2015-2020
The emergence and widespread use of affordable android 'avatars' controlled by a 'brain-computer' interface. Coupled with related technologies, 'avatars' will give people a number of new abilities: to work in dangerous environments, perform rescue operations, travel in extreme situations, etc.
Avatar components will be used in medicine for the rehabilitation of fully or partially disabled patients, giving them prosthetic limbs or recovering lost senses.
2020-2025
Creation of an autonomous life-support system for the human brain linked to a robot 'avatar' will save people whose body is completely worn out or irreversibly damaged. Any patient with an intact brain will be able to return to a fully functioning bodily life. Such technologies will greatly enlarge the possibilities of hybrid bio-electronic devices, thus creating a new IT revolution and making all kinds of superimpositions of electronic and biological systems possible.
2030-2035
Creation of a computer model of the brain and human consciousness, with the subsequent development of means to transfer individual consciousness onto an artificial carrier. This development will profoundly change the world: it will not only give everyone the possibility of cybernetic immortality but will also create a friendly artificial intelligence, expand human capabilities and provide opportunities for ordinary people to restore or modify their own brain multiple times. The final result at this stage could be a real revolution in the understanding of human nature that will completely change the human and technical prospects for humanity.
2045
This is the time when substance-independent minds will receive new bodies with capacities far exceeding those of ordinary humans. A new era for humanity will arrive! Changes will occur in all spheres of human activity – energy generation, transportation, politics, medicine, psychology, sciences, and so on.
Today it is hard to imagine a future when bodies consisting of nanorobots will become affordable and capable of taking any form. It is also hard to imagine body holograms featuring controlled matter. One thing is clear however: humanity, for the first time in its history, will make a fully managed evolutionary transition and eventually become a new species. Moreover, prerequisites for a large-scale expansion into outer space will be created as well.
Key elements of the project in the future
• International social movement
• social network immortal.me
• charitable foundation "Global Future 2045" (Foundation 2045)
• scientific research centre "Immortality"
• business incubator
• University of "Immortality"
• annual award for contribution to the realization of the project of "Immortality".