2045 Initiative - Strategic Social Initiative (http://2045.com/)

Dmitry Itskov: www.Immortal.me - Want to be immortal? Act!

Fellow Immortalists!

Many of the daily letters that the 2045 Initiative and I receive ask the question: will only the very rich be able to afford an avatar in the future, or will they be relatively cheap and affordable for almost everyone?

I would like to answer this question once again: avatars will be cheap and affordable for many people, but only if people themselves make every effort needed to achieve this, rather than wait for someone else to do everything for them.

To facilitate and expedite this, I am hereby soft-launching a project today which will allow anyone to contribute to the creation of a ‘people’s avatar’… and perhaps even capitalize on this in the future. The project is named Electronic Immortality Corporation. It will soon be launched at http://www.immortal.me under the motto "Want to be immortal? Act!"

The Electronic Immortality Corporation will be a social network, operating under the rules of a commercial company. Instead of a user agreement, volunteers will get jobs and sign a virtual contract.

In addition to creating a ‘people’s avatar’, the Electronic Immortality Corporation will also implement various commercial and charitable projects aimed at realizing ideas of the 2045 Initiative, transhumanism and immortalism.

We will create future technologies that can be commercialized within decades (e.g. Avatar C) as well as implement ‘traditional’ business projects, such as producing commercially viable movies.

Even the smallest volunteer contribution to the work of the Corporation will be rewarded by means of its own virtual currency, which will be issued for two purposes only: a) to reward volunteer work, and b) to compensate real financial investments in the company. Who knows, our virtual currency may well become as popular and in demand as Bitcoin.

The first steps are as follows:

First, we will establish an expert group, which will shape the final concept and the statutes of the Electronic Immortality Corporation.

Second, we will announce and organize two competitions: a) to create the corporate identity of the Electronic Immortality Corporation, and b) the code of the social network.

Third, we will form the Board of Directors of the Electronic Immortality Corporation. On it, we would like to see experienced businessmen with a track record of successfully implementing large projects.

Fourth, we will engage celebrities and public figures from around the world.

Therefore, if you…

- have experience in creating social networks, online games, gaming communities and are willing to discuss the final concept of the Electronic Immortality Corporation,

- are a brilliant designer,

- are a talented programmer with experience in developing large-scale and/or open source projects,

- are a businessman with experience in managing large companies and are ready to participate in the Board of Directors of the Electronic Immortality Corporation, or know of such a person,

- are in contact with celebrities and ready to engage them in the Electronic Immortality Corporation,

and at the same time you desire to change the world, to build a high-tech reality, to participate in creating avatars and immortality technologies… if all of this is your dream and you are ready to serve it selflessly,

email us at team@immortal.me

Want to be immortal? Act!

 

Dmitry Itskov

Founder of the 2045 Initiative



Sun, 23 Apr 2045 21:50:23 +0400

San Francisco biohackers are wearing implants made for diabetes in the pursuit of 'human enhancement'

Paul Benigeri, a lead engineer at cognitive enhancement supplement startup Nootrobox, flexes his tricep nervously as his coworkers gather around him, phones set to record the scene. He runs his fingers over the part of his arm where his boss, Geoff Woo, will soon stick him with a small implant.

"This is the sweet spot," Woo says.

"Oh, shit," Benigeri says, eyeing the needle.

"Paul's fine," Woo says. "K, ooooone ..."

An instrument no bigger than an inhaler lodges a needle into the back of Benigeri's arm. Woo removes his hand to reveal a white plate sitting just above the implant. Benigeri smiles.

"You are now a tagged elephant," Woo says, admiring his handiwork.

"A bionic human," says Nootrobox cofounder Michael Brandt.

In San Francisco, a growing number of entrepreneurs and biohackers are using a lesser-known medical technology called a continuous glucose monitor, or CGM, in order to learn more about how their bodies work. They wear the device under their skin for weeks at a time.

CGMs, which cropped up on the market less than 10 years ago and became popular in the last few years, are typically prescribed by doctors to patients living with type 1 or type 2 diabetes. They test glucose levels, or the amount of sugar in a person's blood, and send real-time results to a phone or tablet. Unlike fingerstick tests, CGMs collect data passively, painlessly, and often.

For tech workers taking a DIY approach to biology, CGMs offer a way to quantify the results of their at-home experiments around fasting, exercise, stress, and sleep.

Wed, 18 Jan 2017 11:22:26 +0400

Giving rights to robots is a dangerous idea

The EU’s legal affairs committee is walking blindfold into a swamp if it thinks that “electronic personhood” will protect society from developments in AI (Give robots ‘personhood’, say EU committee, 13 January). The analogy with corporate personhood is unfortunate, as this has not protected society in general, but allowed owners of companies to further their own interests – witness the example of the Citizens United movement in the US, where corporate personhood has been used as a tool for companies to interfere in the electoral process, on the basis that a corporation has the same right to free speech as a biological human being.

Electronic personhood will protect the interests of a few, at the expense of the many. As soon as rules of robotic personhood are published, the creators of AI devices will “adjust” their machines to take the fullest advantage of this opportunity – not because these people are evil but because that is part of the logic of any commercial activity.

Just as corporate personhood has been used in ways that its original proponents never expected, so the granting of “rights” to robots will have consequences that we cannot fully predict – to take just two admittedly futuristic examples, how could we refuse a sophisticated robot the right to participate in societal decision-making, ie to vote? And on what basis could we deny an intelligent machine the right to sit on a jury?
Paul Griseri
La Genetouze, France

Mon, 16 Jan 2017 11:24:52 +0400

Bionic legs and smart slacks: exoskeletons that could enhance us all

There are tantalising signs that as well as aiding rehabilitation, devices could soon help humans run faster and jump higher.

Wearing an £80,000 exoskeleton, Sophie Morgan is half woman, half robot.

Beneath her feet are two metal plates, and at her hand a digital display, a joystick and, somewhat alarmingly, a bright red emergency button.

As she pushes the joystick forward, the bionic legs take their first steps – a loud, industrial whirring strikes up and her right foot is raised, extended and placed forward. Her left slowly follows. As she looks up, a smile spreads across her face.

Exoskeletons – touted as devices that will allow the injured to walk, elderly people to remain independent for longer, the military to get more from soldiers, and even all of us to become mechanically enhanced humans – have captured the imagination of researchers across the world, from startups to Nasa.

For now, the most obvious – and tangible – application has involved allowing paralysed people to stand and walk. “It was a mixture of surrealism and just absolute, just the most exhilarating feeling,” says Morgan, describing her first experience of the technology four years ago.

Now 31, the artist, model and presenter of Channel 4’s 2016 Paralympic coverage was paralysed in a car accident aged 18 and has used a wheelchair ever since. The idea to try the exoskeleton, she says, came from the BBC security correspondent Frank Gardner, who uses a wheelchair after being shot while reporting from Saudi Arabia.

The exoskeleton, from Rex Bionics, offered a life-changing experience, according to Morgan. “It had been 10 years, give or take, since I had properly stood, so that was in itself quite overwhelming,” she says. The impact was far reaching. “It is not just about the joy of ‘Oh, I am standing’. It is the difference it makes, the way you feel afterwards, psychologically and physiologically – it is immeasurable.”

Returning to her wheelchair, says Morgan, is a disappointing experience. “I am walking in my dreams, so it does blur that line – that liminal space between real and dream, and reality and fantasy,” she says of the device.

The exoskeleton isn’t just about stirring excitement. As Morgan points out, there are myriad health problems associated with sitting for long periods of time. A report co-commissioned by Public Health England and published last year highlighted findings showing that, compared with those up and about the most, individuals who spend the longest time sitting are around twice as likely to develop type 2 diabetes and have a 13% higher risk of developing cancer.

Wheelchair users, adds Morgan, also face side-effects, from pressure sores to urinary tract infections. “It could be the difference between longevity and not for people like me,” she says of the exoskeleton.

The competition

About 40 of the Rex Bionics devices are currently in use worldwide, including in rehabilitation centres, says Richard Little, co-founder of the company. An engineer, Little says he was inspired to develop the system after his best friend and co-founder was diagnosed with multiple sclerosis.

But there is competition. As Little points out, the development of battery technology, processing power and components has brought a number of exoskeletons on to the market in recent years, including those from the US-based companies ReWalk and Ekso Bionics. “[They] offer a whole load of different things which are similar in some ways but different in others,” says Little. “[Ours] doesn’t use crutches,” he points out, adding that the innovation removes the risk of users inadvertently damaging their shoulders, and frees their arms.

There are tantalising signs that exoskeletons could do more than just aid rehabilitation or increase the mobility options for those who have experienced a stroke or spinal cord injury.

While the bionic legs tried by Morgan are pre-programmed, researchers have developed exoskeletons controlled by a non-invasive system linked to the brain, allowing an even wider range of wheelchair users to walk. What’s more, when combined with virtual reality and tactile feedback, the systems even appear to promote a degree of recovery for people with paraplegia.

“All our patients got some degree of neurological recovery, which has never been documented in spinal cord injury,” says Miguel Nicolelis, co-director of Duke University’s centre for neuroengineering, who led the work.

It’s a development that excites Little, whose team have also been exploring the possibility of thought control with their own device.

Yet despite their transformative capabilities, the limitations of such bulky exoskeletons have left many frustrated. Tim Swift, co-founder of the US startup Roam Robotics and one of the original researchers behind the exoskeleton from Ekso Bionics, is one of them.

“It is a 50lb machine that costs $100,000 and has a half-mile-an-hour speed and can’t turn,” he says of his former work. “There are only so many applications where that makes sense. This is not a shift towards consumer, this is a hunt for somewhere we can actually use the technologies we are making.”

The dream, says Swift, is to create affordable devices that could turn us all into superhumans, augmenting our abilities by merging the biological with state-of-the-art devices to unleash a new, improved wave of soldiers, workers, agile pensioners and even everyday hikers. But in devising the underpinning technology, he says it is time to ditch the motors-and-metal approach that he himself pioneered.

While hefty, rigid devices can support someone with paraplegia, says Swift, such exoskeletons are too heavy and costly for wider applications – such as helping a runner go faster. The fundamental challenge, he adds, is to create a device that remains powerful while keeping the weight down. “I think you have two solutions,” he says. The first is to develop a new, lightweight system that efficiently uses battery energy to generate movement. The second, he says, is to stick with metals and motors but be more intelligent in how you use them.

Swift’s answer is based on the former – but it hasn’t received universal acclaim. “I have spent the last two and a half years literally getting laughed out of conferences when I tell people we are going to make inflated exoskeletons,” he says. “People think it is a running joke.”

But Swift is adamant that to produce a system that can be used in myriad ways to augment humans, be it on the building site, in the home or up a mountain, technologists must innovate. And air, he believes, is the way to do it. The result, so far, is a series of proof-of-concept devices, braces that look a little like padded shin-guards, that can be strapped on to arms or legs.

“The fundamentals allow you to have extremely lightweight structures [and] extremely low cost because everything is basically plastics and fabrics as opposed to precision machined metals,” he says. And there is another boon. “Because you can make something that is very lightweight without sacrificing power, you are actually increasing the power density, which creates these opportunities to do highly dynamic behaviours.”

In other words, according to Swift, exoskeletons made of inflated fabric could not only boost a human’s walking abilities, but also help them run, jump or even climb. “When I say I want someone to go into Footlocker and buy a shoe that makes them run 25% faster – [we are] actively looking at things that look like that,” he says.

Others agree with Swift about the need to reduce the clunkiness of exoskeletons, but take a different approach.

Augmenting humans

Hugh Herr is a rock climber, engineer and head of the biomechatronics research group at MIT. A double amputee, the result of a climbing accident on Mount Washington, Herr has pioneered the development of bionic limbs, inventing his own in the process. But it was in 2014 that his team became the first to make an all-important breakthrough: creating a powered, autonomous exoskeleton that could reduce the energy it took a human to walk.

“No one is going to want to wear an exoskeleton if it is a fancy exercise machine, if it makes you sweat more and work harder, what is the point?” says Herr. “My view is if an exoskeleton fails to reduce metabolism, one needs to start over and go back to the drawing board.”

To boost our bodies, says Herr, it is necessary to break the challenge down. “We are taking a first principle approach, and joint by joint understanding deeply what has to be done scientifically and technologically to augment a human,” he says. 

For Herr the future is not inflatables (“pneumatics tend to be very inefficient,” he says) but minimalistic, stripping away the mass of conventional exoskeletons so that the device augments, rather than weighs down, the wearer. “If you separated the device from the human, it can’t even uphold its own weight,” he says. 

The approach, he adds, was to focus on the area of the body with the biggest influence on walking: “Arguably the most important muscle to bipedal human gait is the calf muscle,” he says. “So we said in a minimalist design [with] minimal weight and mass, one arguably should build an artificial calf muscle.”

Boasting sensors for position, speed and force for feedback, and programmed to move and respond in a natural way, the device drives the foot forward, saving the wearer energy on each step. “Our artificial calf muscle pushes the human in just the right time in the gait cycle where the human is most inefficient and after that period gets out of the way completely,” he says.
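Herr’s description amounts to a torque profile gated by gait phase: push during push-off, stay out of the way everywhere else. The toy Python sketch below illustrates only that timing logic; the phase window, peak torque and half-sine shape are invented for illustration and are not the parameters of the MIT device.

```python
import math

def assist_torque(gait_phase, peak_torque=30.0, push_start=0.45, push_end=0.65):
    """Toy ankle-assistance profile: apply torque only during the push-off
    window of the gait cycle (phase in [0, 1)), zero elsewhere.
    The window and peak value are illustrative placeholders."""
    if push_start <= gait_phase <= push_end:
        # Smooth half-sine burst so the assistance ramps in and out.
        t = (gait_phase - push_start) / (push_end - push_start)
        return peak_torque * math.sin(math.pi * t)
    return 0.0  # "gets out of the way completely" for the rest of the cycle

# One stride sampled at 10% increments of the gait cycle:
print([round(assist_torque(p / 10), 1) for p in range(10)])
```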

Herr isn’t alone in focusing on such minimalist ankle-based devices. Among other pioneers is Conor Walsh at Harvard University, who has created similar exoskeletons to help stroke patients walk. The devices are a million miles from the cumbersome bionic legs with which Morgan walked across the office, but then Herr believes the future for exoskeletons lies firmly with the augmented human.

“In the future when a person is paralysed, they won’t use an exoskeleton. The reason is we are going to understand how to repair tissues,” he says. “The only time to use an exoskeleton is if you want to go beyond what the muscles are capable of, beyond innate physicality.”

In Bristol, Jonathan Rossiter is hoping to do just that with an even bolder approach: smart materials. “Fabrics and textiles and rubbers is a really good description of the things we are looking at,” he says. Professor of robotics at Bristol University and head of the Soft Robotics group at Bristol Robotics Laboratory, Rossiter believes exoskeletons of the future will look more like a pair of trousers. “Making them look like second skins and actually behave like second skins is going to happen,” he says.

The technology behind it, says Rossiter, will be hi-tech materials: rubbers that bend when electricity is applied, or fabrics that move in response to light, for example. “We build up from the materials to the mechanisms,” he says.

Conscious of an ageing population, Rossiter believes a pair of smart trousers will prove invaluable in keeping people independent for longer, from helping them out of chairs to allowing them to walk that bit further. But he too sees them becoming popular gadgets, helping hikers clamber up mountains.

There is, however, a hitch. Scaling up smart materials from the tiny dimensions explored in the lab to a full-blown set of slacks is no small feat. “You are taking something which is [a] nanomaterial. You have to fabricate it so that it layers up nicely, it doesn’t have any errors in it, it doesn’t have any fractures or anything else and see if you can transpose that into something you can wear,” says Rossiter. In short, it will be a few seasons yet before your wardrobe will be boasting some seriously smart legwear.

But as technology marches on, the dream gets closer to reality. Herr, for one, believes commercial devices are a hop, skip and a jump away – arriving within the next two decades.

“Imagine if you had leg exoskeletons where you could traverse across very, very irregular natural surfaces, natural terrains with a dramatically reduced metabolism and an increased speed while you are jumping over logs and hopping from rock to rock, going up and down mountains,” he says, conjuring up a scene of a bionic, human gazelle.

“When that device exists in the world, no one will ever use the mountain bike again.”

Tue, 10 Jan 2017 11:30:51 +0400

This CES 2017 robot can be controlled by one hand

Earlier at CES, we saw the Lego Boost announced -- a kit that lets you build and control Lego robots. Ziro is a similar kit, by the company ZeroUI, but it lets you build robots out of any material and control them with a smart glove.

Ziro has three parts to it: a motorized module, a wireless glove to control that module and an app to animate/program modules. The idea is that you build the modules into your robot. You program those modules with the Ziro app. And you remote control your creation using a smart glove worn on one hand.

Ziro is aimed at kids and their creativity, ZeroUI CEO Raja Jasti told me at CES. He said he wants to empower kids to create and design robots out of anything -- emphasizing the use of eco-friendly materials over plastic.

Jasti's passion is matched by the fun of seeing someone control a robot with just their hand. In a demonstration, a man wearing the Ziro smart glove moved his hand slightly forward. At the same time, a robot (that looked like a famous droid from a large movie franchise) moved forward. Then, the man twisted his hand in a circular motion. The robot spun in a circle.

Jasti said that they have already gotten Ziro kits into some schools, but the kit can also be used at home. Ziro could be this generation's Erector Set.

The Ziro starter kit includes a smart glove, two modules and parts for a trike assembly base. Ziro is available to preorder for $150 (which converts to £120 and AU$200) and will be available in the spring of 2017.

Sat, 7 Jan 2017 11:38:16 +0400

'Caterpillar' Robot Wriggles to Get Around

A soft, caterpillar-like robot might one day climb trees to monitor the environment, a new study finds.

Traditionally, robots have been made from rigid parts, which make them susceptible to harm from bumps, scrapes, twists and falls. These hard parts can also keep them from being able to wriggle past obstacles.

Increasingly, scientists are building robots that are made of soft, bendable plastic and rubber. These soft robots, with designs that are often inspired by octopuses, starfish, worms and other real-life boneless creatures, are generally more resistant to damage and can squirm past many of the obstacles that impair hard robots, the researchers said. [The 6 Strangest Robots Ever Created]

"I believe that this kind of robot is very suitable for our living environment, since the softness of the body can guarantee our safety when we are interacting with the robots," said lead study author Takuya Umedachi, now a project lecturer in the Graduate School of Information Science and Technology at the University of Tokyo.

However, soft materials easily deform into complex shapes that make them difficult to control when conventional robotics techniques are used, according to Umedachi and his colleagues. Modeling and predicting such activity currently requires vast amounts of computation because of the many and unpredictable ways in which such robots can move, the researchers said.

To figure out better ways to control soft robots, Umedachi and his colleagues analyzed the caterpillars of the tobacco hornworm Manduca sexta, hoping to learn how these animals coordinate their motions without a hard skeleton. Over millions of years, caterpillars have evolved to move in complex ways without using massive, complex brains.

The scientists reasoned that caterpillars do not rely on a control center like the brain to steer their bodies, because they only have a small number of neurons. Instead, the scientists suggest that caterpillars might control their bodies in a more decentralized manner. Their model demonstrates their theory that sensory neurons embedded in soft tissues relay data to groups of muscles that can then help caterpillars move in a concerted manner.

The scientists developed a caterpillar-like soft robot that was inspired by their animal model. They attached sensors to the robot, which has a soft body that can deform as it interacts with its environment, such as when it experiences friction from the surface on which it walks. This data was fed into a computer that controlled the robot's motors, and the motors could, in turn, contract the robot body's four segments.

The researchers found that they could use this sensory data to guide the robot's inching and crawling motions with very little in the way of guidance mechanisms. "We believe that the softness of the body can be crucial when designing intelligent behaviors of a robot," Umedachi told Live Science.
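A minimal sketch of that decentralized idea, assuming each segment reacts only to a local friction reading with no central gait planner; the sensor model, thresholds and values below are invented for illustration and are not taken from the study.

```python
def read_friction(seg):
    # Stand-in for a real sensor: friction here is just proportional to
    # how strongly the segment presses against the surface.
    return seg["ground_force"] * seg["surface_grip"]

def step(segments):
    """One control tick: every segment decides locally whether to contract."""
    for seg in segments:
        # Local rule: contract while anchored (high friction); relax and
        # slide forward once friction drops below the segment's threshold.
        seg["muscle_cmd"] = 1.0 if read_friction(seg) > seg["threshold"] else 0.0

segments = [
    {"ground_force": f, "surface_grip": 0.9, "threshold": 0.5, "muscle_cmd": 0.0}
    for f in (0.9, 0.7, 0.4, 0.2)  # front segments anchored, rear ones sliding
]
step(segments)
print([s["muscle_cmd"] for s in segments])  # [1.0, 1.0, 0.0, 0.0]
```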

"I would like to build a real, caterpillar-like robot that can move around on branches of trees," Umedachi said. "You can put temperature and humidity sensors and cameras on the caterpillar-like robots to use such spaces."

The scientists detailed their findings online Dec. 7 in the journal Open Science.

Original article on Live Science.

Sat, 7 Jan 2017 11:34:33 +0400

Meet Kuri, Another Friendly Robot for Your Home

Mayfield Robotics set out to build an approachable robot on wheels for surveillance and entertainment. Will anyone buy it?

Inside the Silicon Valley office of Mayfield Robotics, Kuri looks up at me and squints as if in a smile. Then the robot rolls across the floor, emitting a few R2-D2-like beeps.

Mayfield Robotics, which spun out of the research branch of Bosch, built Kuri as the next step in home robotics. It enters an increasingly crowded field: alongside smart-home devices like Amazon’s Alexa and Google Home are robots like Jibo, Pepper, and Buddy, ready to offer companionship and entertainment (see “Personal Robots: Artificial Friends with Limited Benefits”).

Kaijen Hsiao, CTO of Mayfield Robotics, says Kuri was built to focus on doing a few things very well, and its personality will be what sets it apart. The 20-inch-tall robot is essentially an Amazon Alexa on wheels, letting users play music or control their smart devices from anywhere in the home. It can also live-stream video of your home for surveillance purposes.

Kuri is currently available for pre-order for $699 and is expected to ship to buyers by the end of the year. Mayfield is beginning to manufacture the robot now but will spend the year fleshing out the software side.

While people are at home, Kuri’s mission is to provide entertainment, whether that’s playing music or a podcast or reading a story out loud. It can autonomously follow users from room to room as it performs these tasks. Through a website called IFTTT, users can also set up custom commands for specific actions.
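IFTTT’s generic Webhooks endpoint gives a feel for how such custom commands get wired up. The sketch below fires a webhook event that an applet could route to a robot action; the event name and the Kuri-side applet are assumptions for illustration, and only the Webhooks URL pattern itself is IFTTT’s documented API.

```python
import requests

IFTTT_KEY = "YOUR_WEBHOOKS_KEY"   # issued by the IFTTT Webhooks service
EVENT = "front_door_opened"       # hypothetical event name for this example

# Fire the event; an IFTTT applet could then trigger a configured action
# (for a robot like Kuri, whatever actions its IFTTT channel exposes).
resp = requests.post(
    f"https://maker.ifttt.com/trigger/{EVENT}/with/key/{IFTTT_KEY}",
    json={"value1": "hallway"},   # optional payload fields value1..value3
    timeout=10,
)
print(resp.status_code, resp.text)
```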

Kuri promises to keep working for you when you’re not home, too. Behind one of Kuri’s eyes is a 1080p camera, and users can access a live stream from the Kuri app. The video function can be used to check on a pet or make sure no intruders are present. Microphones embedded in the robot can detect unusual sounds, prompting the robot to roll in that direction and investigate. Or users can remotely pilot the robot to a specific area. The company says Kuri has “hours of battery life” and drives itself to its dock when it needs to charge.

Mayfield built this robot to perform all these tasks with personality. Kuri comes across as lovable but simple, so there’s no reason to expect it to do more than simple jobs. “He talks robot. He talks in bleeps and bloops,” Hsiao says. “It makes him endearing, but it also sets expectations appropriately.”

But will that be enough to make people want Kuri? In 2017, there will be a range of home robots that use artificial personality, says Andra Keay, the founder of Robot Launchpad and managing director of Silicon Valley Robotics.

“However, I believe that there is going to be a limit to the number of personalities we will want to have in our houses,” Keay says. “So the race is on to create not just engagement but loyalty. That’s a real challenge.” 

Thu, 5 Jan 2017 11:39:47 +0400

Languages still a major barrier to global science, new research finds

English is now considered the common language, or 'lingua franca', of global science. All major scientific journals seemingly publish in English, despite the fact that their pages contain research from across the globe.

However, a new study suggests that over a third of new scientific reports are published in languages other than English, which can result in these findings being overlooked - contributing to biases in our understanding.

As well as the international community missing important science, language hinders new findings getting through to practitioners in the field, say researchers from the University of Cambridge.

They argue that whenever science is only published in one language, including solely in English, barriers to the transfer of knowledge are created.

The Cambridge researchers call on scientific journals to publish basic summaries of a study's key findings in multiple languages, and on universities and funding bodies to encourage translations as part of their 'outreach' evaluation criteria.

"While we recognise the importance of a lingua franca, and the contribution of English to science, the scientific community should not assume that all important information is published in English," says Dr Tatsuya Amano from Cambridge's Department of Zoology.

"Language barriers continue to impede the global compilation and application of scientific knowledge."

The researchers point out an imbalance in knowledge transfer in countries where English is not the mother tongue: "much scientific knowledge that has originated there and elsewhere is available only in English and not in their local languages."

This is a particular problem in subjects where both local expertise and implementation are vital - such as environmental sciences.

As part of the study, published today in the journal PLOS Biology, those in charge of Spain's protected natural areas were surveyed. Over half the respondents identified language as an obstacle to using the latest science for habitat management.

The Cambridge team also conducted a litmus test of language use in science. They surveyed the web platform Google Scholar - one of the largest public repositories of scientific documents - in a total of 16 languages for studies relating to biodiversity conservation published during a single year, 2014.

Of the over 75,000 documents, including journal articles, books and theses, some 35.6% were not in English. Of these, the majority was in Spanish (12.6%) or Portuguese (10.3%). Simplified Chinese made up 6%, and 3% were in French.

The researchers also found thousands of newly published conservation science documents in other languages, including several hundred each in Italian, German, Japanese, Korean and Swedish.

Random sampling showed that, on average, only around half of non-English documents also included titles or abstracts in English. This means that around 13,000 documents on conservation science published in 2014 are unsearchable using English keywords.
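Those headline figures follow from simple arithmetic on the survey numbers, reproduced here as a check:

```python
total_docs = 75_000        # documents surveyed on Google Scholar for 2014
non_english_share = 0.356  # 35.6% not in English

non_english = total_docs * non_english_share  # non-English documents
no_english_abstract = non_english * 0.5       # ~half lack English titles/abstracts

print(round(non_english))          # 26700
print(round(no_english_abstract))  # 13350 -- the "around 13,000" unsearchable
```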

This can result in sweeps of current scientific knowledge - known as 'systematic reviews' - being biased towards evidence published in English, say the researchers. This, in turn, may lead to over-representation of results considered positive or 'statistically significant', and these are more likely to appear in English language journals deemed 'high-impact'.

In addition, information on areas specific to countries where English is not the mother tongue can be overlooked when searching only in English.

For environmental science, this means important knowledge relating to local species, habitats and ecosystems - but also applies to diseases and medical sciences. For example, documents reporting the infection of pigs with avian flu in China initially went unnoticed by international communities, including the WHO and the UN, due to publication in Chinese-language journals.

"Scientific knowledge generated in the field by non-native English speakers is inevitably under-represented, particularly in the dominant English-language academic journals. This potentially renders local and indigenous knowledge unavailable in English," says lead author Amano.

"The real problem of language barriers in science is that few people have tried to solve it. Native English speakers tend to assume that all the important information is available in English. But this is not true, as we show in our study.

"On the other hand, non-native English speakers, like myself, tend to think carrying out research in English is the first priority, often ending up ignoring non-English science and its communication.

"I believe the scientific community needs to start seriously tackling this issue."

Amano and colleagues say that, when conducting systematic reviews or developing databases at a global scale, speakers of a wide range of languages should be included in the discussion: "at least Spanish, Portuguese, Chinese and French, which, in theory, cover the vast majority of non-English scientific documents."

The website conservationevidence.com, a repository for conservation science developed at Cambridge by some of the authors, has also established an international panel to extract the best non-English language papers, including Portuguese, Spanish and Chinese.

"Journals, funders, authors and institutions should be encouraged to supply translations of a summary of a scientific publication - regardless of the language it is originally published in," says Amano. The authors of the new study have provided a summary in Spanish, Portuguese, Chinese and French as well as Japanese.

"While outreach activities have recently been advocated in science, it is rare for such activities to involve communication across language barriers."

The researchers suggest efforts to translate should be evaluated in a similar way to other outreach activities such as public engagement, particularly if the science covers issues at a global scale or regions where English is not the mother tongue.

Adds Amano: "We should see this as an opportunity as well as a challenge. Overcoming language barriers can help us achieve less biased knowledge and enhance the application of science globally."

Thu, 29 Dec 2016 11:31:32 +0400

Seven robots you need to know. Pointing the way to an android future

Walking. Grasping an object. Empathising. Some of the hardest problems in robotics involve trying to replicate things that humans do easily. The goal? Creating a general purpose robot (think C-3PO from Star Wars) rather than specialised industrial machines. Here are seven existing robots that point the way towards the humanoid robots of the future.

Atlas

Use: Originally built for Darpa Robotics Challenge
Made by: Boston Dynamics
What it tries to do: Achieve human-like balance and locomotion using deep learning, a form of artificial intelligence.

“Our long-term goal is to make robots that have mobility, dexterity, perception and intelligence comparable to humans and animals, or perhaps exceeding them; this robot is a step along the way.”​

MARC RAIBERT, FOUNDER, BOSTON DYNAMICS

Features: 
• 1.7m tall and weighs 82kg
• Can walk on two feet and get back up if it falls down 
Human equivalent: Legs/skeleton/musculature

Superflex

Use: Military. Part of Darpa’s Warrior Web project
Made by: SRI Robotics
What it tries to do: A suit that makes the wearer stronger and helps prevent injury

Superflex is a type of ‘soft’ robot, which can mould itself to the environment or a human body in a way that typical robots can’t. The goal is to make machines that feel and behave more like biological than mechanical systems, and give additional powers to the wearer.

Features: 
• Battery-powered compressive suit weighs seven pounds 
• Faux ‘muscles’ can withstand 250lb of force
Human equivalent: Musculature

Amazon Echo

Use: Voice-controlled speaker 
Made by: Amazon
What it tries to do: Lets you control devices by talking to them

It may not have any moving parts, but Amazon’s Echo – and Alexa, the digital assistant that lives inside it – is definitely trying to solve one of the central problems in robotics: how to create robots that can recognise human speech and provide natural voice responses.

You can tell Alexa to:
• Control your light switches
• Give you the latest sports scores
• Help tune your guitar
Human equivalent: Voice and ears

Life-like humanoids

Use: Natural interactions
Made by: Hiroshi Ishiguro Laboratories
What they try to do: Create a sense of ‘presence’, or sonzai-kan in Japanese, by making robots that look identical to humans

“Our goal is to realise an advanced robot close to humankind and, at the same time, the quest for the basis of human nature.”

Pictured: Geminoid-F.

Pepper

Use: Day-to-day companion, and customer assistant
Made by: SoftBank
What it tries to do: Recognise and respond to human emotions

While Pepper clearly looks like a robot rather than a human, it uses its body movement and tone of voice to communicate in a way designed to feel natural and intuitive.

Human equivalent: Feelings and emotions

Robo Brain

Use: Knowledge base for robots
Made by: Cornell University
What it tries to do: Accumulate all robotics-related information into an interconnected knowledge base similar to the memory and knowledge you hold in your brain.

The human brain is such a complex organ that it would be extremely difficult to create an artificial replica that sits inside a robot. But what if robots’ ‘brains’ could exist, disembodied in the cloud? Robo Brain hopes to achieve just that.
Researchers hope to integrate 100,000 data sources into the database.

Challenges: Understanding and juggling different types of data
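As a toy illustration of what an “interconnected knowledge base” means in practice, facts can be stored as subject-relation-object triples that any connected robot could query; the entries below are invented for illustration and are not drawn from Robo Brain itself.

```python
from collections import defaultdict

# Invented example facts, stored as (subject, relation, object) triples.
facts = [
    ("mug", "is_a", "container"),
    ("mug", "grasped_by", "handle"),
    ("container", "can_hold", "liquid"),
]

index = defaultdict(list)
for subj, rel, obj in facts:
    index[subj].append((rel, obj))

def query(concept):
    """Everything the knowledge base asserts about a concept."""
    return index[concept]

print(query("mug"))  # [('is_a', 'container'), ('grasped_by', 'handle')]
```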

Google Car

Use: Self-driving car
Made by: Google
What it tries to do: Group learning and real-time co-ordination

The true ambition behind Google’s automotive efforts is not just to make a car that can drive itself. Instead, it’s to use group learning to strengthen artificial intelligence, so that if one Google car makes a mistake and has an accident, all Google cars will learn from it. This involves managing large-scale, real-time co-ordination.

Sat, 24 Dec 2016 23:40:32 +0400

The '2016 Robot Revolution' and all the insane new things that robots have done this year

Robots are useful for all kinds of things: building cars, recycling old electronics, having sex with - the list goes on and on.

And 2016 has been a big year for our cyber companions as they've evolved in ways we couldn't have imagined in 2015.

Robots have taken up jobs for the first time and even stepped in to save people from parking tickets.

We've compiled the above video to show you some of the highlights of 2016 and get you either excited or terrified for what the future holds.

"The pattern for the next 10-15 years will be various companies looking towards consciousness," noted futurologist Dr. Ian Pearson told Mirror Online.

"The idea behind it that if you make a machine with emotions it will be easier for people to get on with.

"[But] There is absolutely no reason to assume that a super-smart machine will be hostile to us."

Whether it's artificial intelligence, the singularity or just more celebrity sex dolls, there's certainly going to be a lot to talk about when we all meet back here in December 2017.

Fri, 23 Dec 2016 23:30:25 +0400

Good news! You probably won’t be killed by a sex robot

After spending a fascinating two days at the International Congress on Love and Sex with Robots, where academics discussed everything from robot design to the ethics of programming lovers, I was surprised to learn from Gizmodo that “sex robots may literally f**k us to death.”

How, I wondered, could these otherwise thoughtful researchers allow humanity to walk into such a dystopian nightmare?

Quite rightly, they won’t. That headline was in fact inspired by a discussion on the ethics of artificial intelligence by Prof. Oliver Bendel, who outlined some of the broad implications of creating machines which can “think” – including how we make sure robots make good moral decisions and don’t end up causing humans harm. Far from “warning” of the dangers of oversexed robots, Bendel was actually trying to ensure that they don’t “f**k us to death”. So while I might personally fantasise about the future headlines like “Woman, 102, Sexed To Death By Robot Boyfriend”, it’s unlikely that I’ll kick the bucket with such panache. Thanks to Bendel, and others who are exploring these questions as artificial intelligence develops, sex robots will likely have a built-in kill switch (or “kill the mood” switch) to prevent anyone from being trapped in a nightmare sex marathon with a never-tiring machine.

Reporting on events like the sex robots conference is notoriously tricky. On the one hand, sex robots are guaranteed to grab the attention of anyone looking for something to distract them from their otherwise robot-less lives, so an article is guaranteed to be a hit. On the other hand, academics are notoriously careful in what they say, so quite rightly you’re unlikely to find one who’ll actually screech warnings about imminent death at the hands (or genitals) of a love machine.

But no one wants to click a Twitter link that says “Academic Research Revealed To Be More Complicated Than We Can Cram Into 20 Words.” Hence Gizmodo’s terrifying headline, and other pieces which picked an interesting observation, then sold it to readers with something more juicy than the title in the conference schedule. The Register went with “Non-existent sex robots already burning holes in men’s pockets” in reference to a paper presented by Jessica Szczuka, in which men were quizzed about their possible intentions to buy a sex robot. The Daily Mail chose to highlight the data issues which arise from intimate connections with machines by telling us “Sex Robots Could Reveal Your Secret Perversions!”

They’re blunt tools, but they get people interested, and hopefully encourage people to read further into issues they might not previously have considered. For example, during her keynote talk, Dr Kate Devlin mentioned a robot which hit the headlines last year because it “looked like Scarlett Johansson”. She posed an ethical question for makers of realistic bots and dolls: how do you get permission from the person whose likeness you’re using? Alternatively: “Celebrities Could Sue Over Sex Robot Doppelgangers!”

Dr Devlin also questioned why research into care robots for elderly people doesn’t also include meeting their sexual needs (“Academic Demands Sex Toys For Pensioners”) and pointed out that while more established parts of the sex industry tend to be male-dominated, in the sex tech field pioneering women are leading the way (“Are Women The Future Of The Sex Industry?”).

Julie Wosk – professor of art history and author of “My Fair Ladies: Female Robots, Androids and other artificial Eves” – explored pop culture representations of sex robots, from Ex Machina’s Ava to Good Girl’s brothel-owned learning sex bot. Sex robots are most commonly female, beautiful and subservient, and Wosk pointed out that in pop culture they also have a tendency to rebel. Westworld, Humans, Ex Machina – all include strong, often terrifying, female robots who gain consciousness, and could be seen as a manifestation of society’s fears of women gaining power. Put a sub editor’s hat on and voila: “Is Feminism To Blame For Our Fear of Sex Robots?”

Dr Lynne Hall focused on user experience – while sex robots are often portrayed as humanoid, in fact a robot that pleasures you may be more akin to something you strap to your body while you watch porn. She went on to point out that porn made with one or more robotic actors has a number of interesting benefits such as a lower risk of STI transmission, and perhaps better performer safety, as robot actors replace potentially predatory porn actors (“Sex Robots Will Revolutionise Porn!”). David Levy, author of “Love and Sex with Robots”, gave a controversial keynote on the implications of robot consciousness when it comes to relationships: “Humans Will Marry Robots By 2050.”

In other presentations, designers and engineers showed off the real-life robots they had built. Cristina Portalès introduced us to ‘ROMOT’ – a robotic theatre which combines moving seats, smells, virtual reality and more to create a uniquely intense experience. But while the ROMOT team have no plans to turn it into a sex show, Cristina outlined how it could be used to enhance sexual experiences - using porn videos and sex scents to create a wholly X-rated experience. Or, if you prefer: ‘Immersive Sex Theatre Could Be The Future Of Swinging.’ Other designers showed off projects designed to increase human intimacy over a long distance – like ‘Kissinger’ (‘Remarkable Gadget Helps You Smooch A Lover Over The Internet’) and ‘Teletongue’ (‘With X-Rated Lollipop You Can Make Sweet Love At A Distance’).

You get the idea. If we had a classification system for science reporting, all these headlines would be flagged to let the user know that the actual story is far more complicated. But they’d also probably languish unclicked, meaning similar research is less likely to get covered in the future.

Towards the end of the conference one of the Q+A sessions moved into the area of science and tech communication. Inevitably, with so many journalists in the room, there was an uneasiness from some academics about the way in which the conference would be covered. As someone with a bee in my bonnet about the way sex is often reported in the mainstream media, I think this wariness is often justified. But while my initial reaction to Gizmodo’s headline was to roll my eyes, their presence – and that of other journalists – made the overall topic of robotic relationships and intimacy much more accessible to the public. There have been one or two swiftly-corrected inaccuracies, but the press presence means that what could otherwise have been a small conference just for academics has sparked debate around the world. 

Thu, 22 Dec 2016 23:26:34 +0400

We will soon be able to read minds and share our thoughts

The first true brain-to-brain communication in people could start next year, thanks to huge recent advances.

Early attempts won’t quite resemble telepathy as we often imagine it. Our brains work in unique ways, and the way each of us thinks about a concept is influenced by our experiences and memories. This results in different patterns of brain activity, but if neuroscientists can learn one individual’s patterns, they may be able to trigger certain thoughts in that person’s brain. In theory, they could then use someone else’s brain activity to trigger these thoughts.

So far, researchers have managed to get two people, sitting in different rooms, to play a game of 20 questions on a computer. The participants transmitted “yes” or “no” answers, thanks to EEG caps that monitored brain activity, with a technique called transcranial magnetic stimulation triggering an electrical current in the other person’s brain. By pushing this further, it may be possible to detect certain thought processes, and use them to influence those of another person, including the decisions they make.
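Information-theoretically, that 20-questions setup is a one-bit-per-round channel. The toy simulation below shows why yes/no answers are enough to pin down an answer quickly; nothing here models EEG or TMS, and the `yes_no_link` stand-in is invented for illustration.

```python
def yes_no_link(secret):
    """Stand-in for the EEG-to-TMS channel: each call carries one bit."""
    def ask(predicate):
        return predicate(secret)
    return ask

def guess(candidates, ask):
    """Binary search over the candidates using only yes/no transmissions."""
    lo, hi = 0, len(candidates) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        # One yes/no answer crosses the link per round.
        if ask(lambda s, m=mid: candidates.index(s) <= m):
            hi = mid
        else:
            lo = mid + 1
    return candidates[lo]

animals = ["cat", "dog", "eagle", "shark", "whale"]
ask = yes_no_link("shark")
print(guess(animals, ask))  # 'shark', found in ~log2(5) questions
```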

Another approach is for the brain activity of several individuals to be brought together on a single electronic device. This has been done in animals already. Three monkeys with brain implants have learned to think together, cooperating to control and move a robotic arm.

Similar work has been done in rats, connecting their brains in a “brainet”. The next step is to develop a human equivalent that doesn’t require invasive surgery. These might use EEG caps instead, and their first users will probably be people who are paralysed. Hooking up a brainet to a robotic suit, for example, could enable them to get help from someone else when learning to use exoskeletons to regain movement.

Wed, 14 Dec 2016 23:43:54 +0400

Phantom movements in augmented reality help patients with chronic intractable phantom limb pain

Dr Max Ortiz Catalan at Chalmers University of Technology, Department of Signals and Systems, has developed a novel method of treating phantom limb pain using machine learning and augmented reality. The approach has been tested on over a dozen amputees with chronic phantom limb pain who had found no relief from other clinically available methods. The new treatment reduced their pain by approximately 50 per cent, reports a clinical study published in The Lancet.

People who lose an arm or leg often experience phantom limb pain, as if the missing limb were still there. Phantom limb pain can become a serious chronic condition that significantly reduces the patients’ quality of life. It is still unclear why phantom limb pain and other phantom sensations occur.

Several medical and non-medical treatments have been proposed to alleviate phantom limb pain. Examples include mirror therapy, various types of medications, acupuncture, and implantable nerve stimulators. However, in many cases nothing helps. This was the situation for the 14 arm amputees who took part in the first clinical trial of a new treatment, invented by Chalmers researcher Max Ortiz Catalan, and further developed with his multidisciplinary team in the past years.

“We selected the most difficult cases from several clinics,” Dr Ortiz Catalan says. “We wanted to focus on patients with chronic phantom limb pain who had not responded to any treatments. Four of the patients were constantly medicated, and the others were not receiving any treatment at all because nothing they tried had helped them. They had been experiencing phantom limb pain for an average of 10 years.”

The patients were treated with the new method for 12 sessions. At the last session the intensity, frequency, and quality of pain had decreased by approximately 50 per cent. The intrusion of pain in sleep and activities of the daily living was also reduced by half. In addition, two of the four patients who were on analgesics were able to reduce their doses by 81 per cent and 33 per cent.

“The results are very encouraging, especially considering that these patients had tried up to four different treatment methods in the past with no satisfactory results,” Ortiz Catalan says. “In our study, we also saw that the pain continuously decreased all the way through to the last treatment. The fact that the pain reduction did not plateau suggests that further improvement could be achieved with more sessions.”

Ortiz Catalan calls the new method phantom motor execution. It consists of using muscle signals from the amputated limb to control augmented and virtual environments. Electric signals in the muscles are picked up by electrodes on the skin. Artificial intelligence algorithms translate the signals into movements of a virtual arm in real time. The patients see themselves on a screen with the virtual arm in the place of the missing arm, and they can control it as they would control their biological arm.
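Schematically, the pipeline is: surface electrodes, then feature extraction, then movement classification, then a virtual arm update. The sketch below fills in that skeleton with common surface-EMG features and a nearest-centroid stand-in for the "artificial intelligence algorithms"; these specific choices are illustrative assumptions, not documented parts of the Chalmers system.

```python
import numpy as np

def extract_features(emg_window):
    """Two classic surface-EMG features per channel (an illustrative choice):
    mean absolute value and zero-crossing count."""
    mav = np.mean(np.abs(emg_window), axis=0)
    zc = np.sum(np.diff(np.sign(emg_window), axis=0) != 0, axis=0)
    return np.concatenate([mav, zc])

class NearestCentroidDecoder:
    """Stand-in for the AI step: label a feature vector with the movement
    whose training centroid is closest."""
    def __init__(self, centroids):
        self.centroids = centroids  # e.g. {"open": vector, "close": vector}

    def predict(self, features):
        return min(self.centroids,
                   key=lambda m: np.linalg.norm(features - self.centroids[m]))

# One 200 ms window of 8-channel EMG (random stand-in for electrode data):
window = np.random.randn(200, 8)
feats = extract_features(window)
decoder = NearestCentroidDecoder({"open": feats * 0.9, "close": -feats})
print(decoder.predict(feats))  # 'open' -- would drive the virtual arm's pose
```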

Thus, the perceived phantom arm is brought to life by a virtual representation that the patient can see and control. This allows the patient to reactivate areas of the brain that were used to move the arm before it was amputated, which might be the reason that the phantom limb pain decreases. No other existing treatment for phantom limb pain generates such a reactivation of these areas of the brain with certainty. The research led by Ortiz Catalan not only creates new opportunities for clinical treatment, but it also contributes to our understanding of what happens in the brain when phantom pain occurs.

The clinical trial was conducted in collaboration with Sahlgrenska University Hospital in Gothenburg, Örebro University Hospital in Örebro, Bräcke Diakoni Rehabcenter Sfären in Stockholm, all in Sweden, and the University Rehabilitation Institute in Ljubljana, Slovenia.

“Our joint project was incredibly rewarding, and we now intend to go further with a larger controlled clinical trial,” Ortiz Catalan says. “The control group will be treated with one of the current treatment methods for phantom limb pain. This time we will also include leg amputees. More than 30 patients from several different countries will participate, and we will offer more treatment sessions to see if we can make the pain go away completely.”

The technology for phantom motor execution is available in two modalities – an open source research platform, and a clinically friendly version in the process of being commercialised by the Gothenburg-based company Integrum. The researchers believe that this technology could also be used for other patient groups who need to rehabilitate their movement capability, for example after a stroke, nerve damage or hand injury.

Sat, 3 Dec 2016 19:25:59 +0400

A new minimally invasive device to treat cancer and other illnesses

A new study by Lyle Hood, assistant professor of mechanical engineering at The University of Texas at San Antonio (UTSA), describes a new device that could revolutionize the delivery of medicine to treat cancer as well as a host of other diseases and ailments (Journal of Biomedical Nanotechnology, "Nanochannel Implants for Minimally-Invasive Insertion and Intratumoral Delivery"). Hood developed the device in partnership with Alessandro Grattoni, chair of the Department of Nanomedicine at Houston Methodist Research Institute.

"The problem with most drug-delivery systems is that you have a specific minimum dosage of medicine that you need to take for it to be effective," Hood said. "There's also a limit to how much of the drug can be present in your system so that it doesn't make you sick."

As a result of these limitations, a person who needs frequent doses of a specific medicine is required to take a pill every day or visit a doctor for injections. Hood's creation negates the need for either of these approaches, because it's a tiny implantable drug delivery system.

"It's an implantable capsule, filled with medicinal fluid that uses about 5000 nanochannels to regulate the rate of release of the medicine," Hood said. "This way, we have the proper amount of drugs in a person's system to be effective, but not so much that they'll harm that person."

The capsule can deliver medicinal doses for several days or a few weeks. According to Hood, it can be used for any kind of ailment that needs a localized delivery over several days or a few weeks. This makes it especially tailored for treating cancer, while a larger version of the device, which was originally created by Grattoni, can treat diseases like HIV for up to a year.

"In HIV treatment, you can bombard the virus with drugs to the point that that person is no longer infectious and shows no symptoms," Hood said. "The danger is that if that person stops taking their drugs, the amount of medicine in his or her system drops below the effective dose and the virus is able to become resistant to the treatments."

The capsule, however, could provide a constant delivery of the HIV-battling drugs to prevent such an outcome. Hood noted it can also be used to deliver cortisone to damaged joints to avoid painful, frequent injections, and possibly even to pursue immunotherapy treatments for cancer patients.

"The idea behind immunotherapy is to deliver a cocktail of immune drugs to call attention to the cancer in a person's body, so the immune system will be inspired to get rid of the cancer itself," he said.

The current prototype of the device is permanent and injected under the skin, but Hood is working with Teja Guda, assistant professor of biomedical engineering, to collaborate on 3-D printing technology to make a new, fully biodegradable iteration of the device that could potentially be swallowed.
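The constraint Hood describes, a floor (minimum effective dose) and a ceiling (toxicity), is easy to visualise with a toy one-compartment model comparing daily pills against slow constant release. Every constant below is invented for illustration and has no clinical meaning.

```python
import math

# Toy one-compartment model: the body clears drug exponentially.
HALF_LIFE_H = 6.0
K_ELIM = math.log(2) / HALF_LIFE_H

MIN_EFFECTIVE = 1.0  # below this the drug does nothing (arbitrary units)
TOXIC = 4.0          # above this it causes harm

def daily_pills(dose, hours=72):
    """Concentration under one bolus dose every 24 h: big peaks and troughs."""
    c, series = 0.0, []
    for h in range(hours):
        if h % 24 == 0:
            c += dose
        c *= math.exp(-K_ELIM)
        series.append(c)
    return series

def nanochannel_implant(rate, hours=72):
    """Concentration under slow constant release, as from the capsule."""
    c, series = 0.0, []
    for _ in range(hours):
        c += rate
        c *= math.exp(-K_ELIM)
        series.append(c)
    return series

def time_in_window(series):
    return sum(MIN_EFFECTIVE <= c <= TOXIC for c in series) / len(series)

print(f"pills:   {time_in_window(daily_pills(5.0)):.0%} of hours in window")
print(f"implant: {time_in_window(nanochannel_implant(0.3)):.0%} of hours in window")
```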

Thu, 1 Dec 2016 19:30:12 +0400

For robots, artificial intelligence gets physical

In a high-ceilinged laboratory at Children’s National Health System in Washington, D.C., a gleaming white robot stitches up pig intestines.

The thin pink tissue dangles like a deflated balloon from a sturdy plastic loop. Two bulky cameras watch from above as the bot weaves green thread in and out, slowly sewing together two sections. Like an experienced human surgeon, the robot places each suture deftly, precisely — and with intelligence.

Or something close to it.

For robots, artificial intelligence means more than just “brains.” Sure, computers can learn how to recognize faces or beat humans in strategy games. But the body matters too. In humans, eyes and ears and skin pick up cues from the environment, like the glow of a campfire or the patter of falling raindrops. People use these cues to take action: to dodge a wayward spark or huddle close under an umbrella.

Part of intelligence is “walking around and picking things up and opening doors and stuff,” says Cornell computer scientist Bart Selman. It “has to do with our perception and our physical being.” For machines to function fully on their own, without humans calling the shots, getting physical is essential. Today’s robots aren’t there yet — not even close — but amping up the senses could change that.


“If we’re going to have robots in the world, in our home, interacting with us and exploring the environment, they absolutely have to have sensing,” says Stanford roboticist Mark Cutkosky. He and a group of like-minded scientists are making sensors for robotic feet and fingers and skin — and are even helping robots learn how to use their bodies, like babies first grasping how to squeeze a parent’s finger.

The goal is to build robots that can make decisions based on what they’re sensing around them — robots that can gauge the force needed to push open a door or figure out how to step carefully on a slick sidewalk. Eventually, such robots could work like humans, perhaps even caring for the elderly.

The whole story...

]]>
Sun, 20 Nov 2016 19:03:22 +0400
<![CDATA[The robot suit providing hope of a walking cure]]>http://2045.com/news/35075.html35075Clothing that can help people learn how to walk again after a stroke is the brainchild of a Harvard team reinventing the way we use robot technology

Conor Walsh’s laboratory at Harvard University is not your everyday research centre. There are no bench-top centrifuges, no fume cupboards for removing noxious gases, no beakers or crucibles, no racks of test tubes and only a handful of laptop computers. Instead, the place is dominated by clothing.

On one side of the lab stands a group of mannequins dressed in T-shirts and black running trousers. Behind them, there are racks of sweatshirts and running shoes. On another wall of shelves, shorts and leggings have been carefully folded and labelled for different-size wearers. On my recent visit, one student was sewing a patch on a pair of slacks.

Walk in off the street and you might think you had stumbled into a high-class sports shop. But this is no university of Nike. This is the Harvard Biodesign Lab, home of a remarkable research project that aims to revolutionise the science of “soft robotics” and, in the process, transform the fortunes of stroke victims by helping them walk again.

“Essentially, we are making clothing that will give power to people who have suffered mobility impairment and help them move,” says Professor Walsh, head of the biodesign laboratory. “It will help them lift their feet and walk again. It is the ultimate in power-dressing.”

Last week, at a ceremony in Los Angeles, 35-year-old Walsh was awarded a Rolex award for enterprise for his work. He plans to use the prize money – 100,000 Swiss francs (about £82,000) – to expand “soft robotics” to develop suits that could also enhance the ability of workers and soldiers to lift and carry weights and also improve other areas of medical care, including treatments for patients suffering from Parkinson’s disease, cerebral palsy and other ailments that affect mobility.

Walsh is a graduate – in manufacturing and mechanical engineering – of Trinity College Dublin. While a student, he became fascinated with robotics after he read about the exoskeletons being developed in the United States to help humans handle heavy loads. Essentially, an exoskeleton is a hard, robot-like shell that fits around a user and moves them about. Think of the metal suit worn by Robert Downey Jr in Iron Man or the powered skeletal frame Sigourney Weaver used in Aliens to deal with the acid-dribbling extraterrestrial that threatened her spaceship.

“I thought that it all looked really, really cool,” says Walsh. So he applied, and was accepted, to study at the Massachusetts Institute of Technology (MIT) under biomechatronics expert Professor Hugh Herr. But when Walsh began working on rigid exoskeletons, he found the experience unsatisfactory. “It was like being inside a robotic suit of armour. It was hard, uncomfortable and ponderous and the suit didn’t always move the way a human would,” he says.

So when Walsh moved to Harvard, where he set up the biodesign lab, he decided to take a different approach to the problem. “I saw immediately that if you had a softer suit that accentuated the right actions, was comfy to wear and didn’t encumber you, it could have huge biomedical applications,” he says. “I began to wonder: can we make wearable robots soft?”

The answer turned out to be yes. Walsh, assisted by colleagues Terry Ellis, Louis Awad and Ken Holt of Boston University, worked with experts in electronics, mechanical engineering, materials science and neurology to create an ingenious, low-tech way to boost walking: the soft exosuit. A band of cloth is wrapped around a person’s calf muscles. Pulleys, made from bicycle brake cables, are attached to these calf wraps, and the other ends of the cables are fitted to a power pack worn on the patient’s back. When the wearer starts to lift a foot to take a step, the power pack pulls the cables, helping to lift the leg. Then, as the foot swings forward, another cable, attached to the toecap of the shoe, tightens to help raise the toe so that it does not drag on the ground. Toe drag of this kind is known as “foot drop” and is a common difficulty for stroke patients.

In this way, an often critical problem for someone who can no longer control their muscles properly is alleviated. They can lift their legs and, just as importantly, keep their toes from turning down so that they do not drag on the ground and make them stumble. It is the perfect leg-up, in short.

“Designing robotic devices that target specific joints just hadn’t been done before,” says Walsh. “People had only looked at constructing a full-leg exoskeleton. We are targeting just one joint, not a whole leg. Crucially, in the case of strokes, it is the one that is often most badly impaired. Also, we have managed to keep our materials very light and easily wearable. Simple is best. That is our mantra.”


Originally, the pulleys that lifted the cables that helped wearers raise their legs and toes were powered by a trolley-like device that trundled alongside them. One of the key improvements in Walsh’s project has been to reduce that power pack to a size that can be worn reasonably comfortably. The unit weighs 10lb (4.5kg) and Walsh expects his team to make further reductions in the near future. “Motors are going to get lighter, batteries are going to get lighter. That will all be of great benefit, without doubt.”

The packs are also fitted with devices known as inertial measurement units (IMU), which analyse the forces created by foot movements and raise and lower the brake-cable pulleys. These sensors have to work with millisecond accuracy for the system to work properly. “Timing is absolutely critical,” says Walsh.

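To make the timing requirement concrete, here is a minimal sketch of a gait-phase-triggered assistance loop of the kind described above. The thresholds and the imu/cable interfaces are hypothetical stand-ins, not the Harvard team’s controller.

```python
import time

HEEL_OFF_ACCEL_G = 1.2   # assumed accelerometer reading marking heel-off
SWING_GYRO_RAD_S = 0.8   # assumed gyro reading marking the swing phase

def assist_loop(imu, calf_cable, toe_cable):
    """Toy controller: tension the calf cable at push-off, then the
    toe cable during swing so the toe does not drag (foot drop)."""
    while True:
        accel, gyro = imu.read()          # hypothetical IMU interface
        if accel > HEEL_OFF_ACCEL_G:
            calf_cable.pull()             # help lift the leg
        elif gyro > SWING_GYRO_RAD_S:
            toe_cable.pull()              # raise the toe during swing
        else:
            calf_cable.release()
            toe_cable.release()
        time.sleep(0.001)                 # ~1 ms cycle: timing is critical
```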
Test runs have already proved successful, however. Videos of stroke patients wearing soft exosuits and walking on treadmills reveal a marked improvement in their movement. Once fitted with the suits, they no longer clutch the handrails and their strides become much quicker and more confident. “We are not saying our system is the only solution to impaired mobility,” adds Walsh. “There will always be a place for hard exoskeleton power suits, for example, for people who are completely paralysed. But for less severe problems, soft robotic suits, with their lightness and flexibility, are a better solution.”

Every year, about 110,000 people suffer a stroke in the UK. Most patients survive, but strokes are still the third-largest cause of death, after heart disease and cancer, in this country. Strokes occur when the blood supply to the brain is stopped by a blood clot or when a weakened blood vessel bursts. One major impact is on how muscles work. As the Stroke Association points out, your brain sends signals to your muscles, through your nerves, to make them move. A stroke, in damaging your brain, disrupts these signals. Classic symptoms include foot drop and loss of stamina. Patients feel tired and become clumsier, making it even more difficult to control their movements.

“Patients often withdraw from life. They stop going out and miss out on all sorts of social events – their grandchildren’s sports events or parties,” says Ignacio Galiana of the Wyss Institute for Biologically Inspired Engineering at Harvard University, which is also involved in the soft exosuit project. “They prefer to stay at home and to stop exercising because it is so tiring and draining. They withdraw from the world. By making it possible to walk normally again, we hope we can stop that sort of thing happening.”

The soft exosuits will not be worn all of the time, it is thought, but instead be put on for a few hours so patients can get out of their homes without exhausting themselves. The devices should also help in physiotherapy sessions aimed at restoring sufferers’ ability to walk. “This is a new tool that will greatly extend and accelerate rehabilitation therapy for stroke patients,” says Walsh. “Patients no longer have to think about the process of moving. It starts to come naturally to them, as it was before they had their stroke.”

As to timing, Walsh envisages that his team will be able to get their prototypes on to the market in about three years. Nor will soft exoskeleton use be confined to stroke cases. “Cerebral palsy, Alzheimer’s, multiple sclerosis, Parkinson’s, old age: patients with any of these conditions could benefit,” adds Walsh. “When muscles no longer generate sufficient forces to allow people to walk, soft, wearable robots will be able to help them.”

]]>
Sun, 20 Nov 2016 18:59:52 +0400
<![CDATA[Medical Bionic Implant And Artificial Organs Market Volume Forecast and Value Chain Analysis 2016-2026]]>http://2045.com/news/35074.html35074Artificial organs and implants are specially made devices or prosthetics implanted in the human body to imitate the function of the original organ. The crucial requirement of such an organ is that it function like the natural one. Bionics is the combination of biology and electronics; medical bionics replace or improve body parts with robotic versions. Medical bionic implants differ from artificial organs in that they reproduce the original function very closely, or even outperform it.

Organ transplantation becomes necessary when an organ is damaged by injury or disease, but the number of organ donors is far smaller than the demand. Even after an organ is transplanted, there is a chance it will be rejected, meaning the recipient's immune system does not accept it. Artificial organs and bionics are made of biomaterials. A biomaterial is a living or non-living substance introduced into the body as part of an artificial organ or bionic device to substitute for an organ or the functions associated with it. The heart and kidney are the most developed artificial organs, while pacemakers and cochlear implants are the most developed medical bionics.

Medical Bionic Implant and Artificial Organs Market: Drivers and Restraints

Currently, the global medical bionic implant and artificial organs market is driven by the fact that a large number of patients need organ transplants but not everyone can get an organ, as donors are scarce. Growing advancements in medical technologies are fueling the market. Growing public awareness of various diseases, advancements in medical bionic implant and artificial organ procedures, and the need for screenings for early diagnosis and treatment of various diseases are also expected to favor the market. The expiry of 3D-printing patents will also play an important role in the development of 3D-printed artificial organs. However, the high cost of organ transplant procedures and the price of medical bionics act as restraints on the global market.


Medical Bionic Implant and Artificial Organs Market: Segmentation

Based on product type, the global medical bionic implant and artificial organs market is segmented into:

  • Heart Bionics
    • Ventricular Assist Device
    • Total Artificial Heart
    • Artificial Heart Valves
    • Pacemaker
      • Implantable Cardiac Pacemaker
      • External Pacemaker
  • Orthopedic Bionics
    • Bionic Hand
    • Bionic Limb
    • Bionic Leg
  • Ear Bionics
    • Bone Anchored Hearing Aid
    • Cochlear Implant

Based on implant location, the global medical bionic implant and artificial organs market is segmented into:

  • Externally Worn
  • Implantable


Medical Bionic Implant and Artificial Organs Market: Overview

With rapid technological advancement in the medical field and ever-increasing demand for medical bionic implants and artificial organs, the global market is anticipated to see vigorous development during the forecast period.

Medical Bionic Implant and Artificial Organs Market: Region-wise Outlook

By geographic region, the global medical bionic implant and artificial organs market is segmented into seven key regions: North America, Latin America, Eastern Europe, Western Europe, Asia Pacific excluding Japan, Japan, and the Middle East & Africa. North America is the leading market, owing to rapid technological innovation, heavy investment in research and development, and increased healthcare expenditure on artificial prostheses. Asia-Pacific and Europe are expected to grow significantly, as a large consumer base, rising government initiatives to enhance healthcare, and high disposable incomes contribute to the market value, which is expected to exhibit a robust CAGR over the forecast period.

Medical Bionic Implant and Artificial Organs Market: Key Players

Some of the key players in the global medical bionic implant and artificial organs market are Touch Bionics Inc., LifeNet Health Inc., Cochlear Ltd., Sonova, Otto Bock Inc., Edwards Lifesciences Corporation, Medtronic, Inc., HeartWare, Orthofix Holdings, Inc., BionX Medical Technologies, Inc. and others.

]]>
Wed, 16 Nov 2016 18:56:00 +0400
<![CDATA[Modular Exoskeleton Reduces Risk of Work-Related Injury]]>http://2045.com/news/35073.html35073Robotics startup suitX is turning human laborers into bionic workers with a new modular, full-body exoskeleton that will help reduce the number of on-the-job injuries.

The flexible MAX (Modular Agile eXoskeleton) system is designed to support those body parts—shoulders, lower back, knees—most prone to injury during heavy physical exertion.

A spinoff of the University of California Berkeley's Robotics and Human Engineering Lab, suitX built MAX out of three units: backX, shoulderX, and legX. Each can be worn independently or in any combination necessary.

"All modules intelligently engage when you need them, and don't impede you" when moving up or down stairs and ladders, driving, or biking, the product page said.

Field evaluations conducted in the US and Japan, as well as in laboratory settings, indicate the MAX system "reduces muscle force required to complete tasks by as much as 60 percent."

The full-body suit and its modules are aimed primarily at those working in industrial settings like construction, airport baggage handling, assembly lines, shipbuilding, warehouses, courier delivery services, and factories.

The full MAX Suit (BackX, ShoulderX, LegX together) will run you $10,000; the BackX and ShoulderX are $3,000 each; and a LegX is $5,000. SuitX suggests consumers contact sales@suitx.com for more details.

The company is perhaps best known for its Phoenix exoskeleton, which enables people with mobility disorders to stand up, walk, and interact with others. The lightweight device—still in the testing phase—carries a charge for up to four hours of constant use, or eight hours of intermittent walking.

]]>
Wed, 16 Nov 2016 18:53:04 +0400
<![CDATA[Advanced robot can understand how humans THINK and knows how the brain works]]>http://2045.com/news/35067.html35067The latest generation of artificially intelligent robots took centre stage recently at the 2016 World Robot Conference held in the Chinese capital Beijing.

But one of the standout devices was a robot that can actually understand the intricacies of the human brain, and how a human thinks.

Xiao I has the ability to analyse human languages as well as huge amounts of data, and can emulate functions of the human brain.

The advanced robot can understand and act on users’ instructions by analysing their specific context, thanks to a massive database that has accumulated information about daily life and industries over decades, according to an exhibitor at the Xiao I booth.

"The top four companies representing the best human-computer interaction technology were voted for at a summit in Orlando the day before yesterday.

Xiao I ranks as the top one, and others include Apple's Siri, Microsoft's Cortana and Amazon's Echo," said the exhibitor.

Over the past few years, Beijing authorities have been giving policy support to the robot developers in an attempt to stimulate growth of the city’s high-tech industry.

"Without artificial intelligence a robot will be nothing but a machine. Most robot-related research is developing towards the direction of artificial intelligence, which will enhance the sensory ability of robots and enable them to offer better services," said Sheng Licheng, deputy director of Beijing’s Yizhuang Development Zone Administration.

The five-day 2016 World Robot Conference wrapped up on Tuesday, after dazzling visitors with the very latest advancements in robot technology.

]]>
Mon, 31 Oct 2016 20:43:24 +0400
<![CDATA[Soft robot with a mouth and gut can forage for its own food]]>http://2045.com/news/35066.html35066Lying in a bath in Bristol, UK, is a robotic scavenger, gorging itself on its surroundings. It’s able to get just enough energy to take in another stomach full of food, before ejecting its waste and repeating the process. This is no ordinary robot. It’s a self-sustaining soft robot with a mouth and gut.

Developed by a Bristol-based collaboration, this robot imitates the life of salps – squishy tube-shaped marine organisms. Salps have an opening at each end, one for food to enter and one for waste to leave. They digest any tasty treats that pass through their body, giving them just enough energy to wiggle about. The same is true for the Bristol bot.

By opening its “mouth”, made from a soft polymer membrane, the robot can suck in a belly full of water and biomatter. The artificial gut – a microbial fuel cell (MFC) – is filled with greedy microbes that break down the biomass and convert its chemical energy into electrical energy, which powers the robot. Digested waste matter is then expelled out the rear end, just as more water is sucked in the front for the next feed. With every mouthful, the robot’s reserves are replenished, so in theory it could roam indefinitely.

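Self-sustainability comes down to a simple inequality: each gulp must yield more energy than the feeding cycle consumes. The toy per-cycle budget below makes the point; every figure is an assumption for illustration only.

```python
# Toy per-cycle energy budget for a self-feeding MFC robot.
# All figures are invented for illustration.

ENERGY_PER_GULP_J = 0.06    # assumed energy extracted from one stomach-full
PUMPING_COST_J = 0.03       # assumed cost of ingesting food and ejecting waste
ELECTRONICS_COST_J = 0.02   # assumed control-electronics cost per cycle

surplus_j = ENERGY_PER_GULP_J - (PUMPING_COST_J + ELECTRONICS_COST_J)
print(f"net energy per cycle: {surplus_j * 1000:.0f} mJ")
print("self-sustaining" if surplus_j > 0 else "runs down over time")
```

Using soft materials lowers the pumping cost, and stacking MFCs in series raises the harvest; the robot can roam indefinitely only while the surplus stays positive.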

“Squeezing out enough energy to be self-sustainable is the real breakthrough,” says Fumiya Iida, a robotics researcher from the University of Cambridge.

Leave it alone

The energy that an MFC can get from food like this is currently pretty low. But by using soft materials for the mouth and the gut, the team was able to reduce the robot’s energy consumption. They got more power by putting several MFCs in series, like a battery.

One advantage of a self-sustaining robot is that it never needs to be charged, have its batteries changed or be hooked up to a power source, so it requires no human intervention. This would make it ideal for use in inhospitable environments: leave the robot in a radioactive disaster zone or a lake filled with pollution, then let it get to work.

At the moment, it is just a proof of concept. The surrounding water is idealised, meaning that the nutrients have been evenly spread and are in an easy-to-digest form, but other researchers have shown that MFCs can work in more testing conditions.

A self-sustaining robot could one day clean up “red tides” like this one in China, as well as collecting rubbish

Now that self-sustainability has been achieved, the team wants to get more power so that the robot can start performing useful tasks.

“In the future, robots like this could be released into the ocean to collect garbage,” says Hemma Philamore, one of the robot’s creators from the University of Bristol. Another application could see the robots feeding in agricultural irrigation systems while monitoring plants or applying chemicals to crops. “What we are developing is a robot that can act naturally, in a natural environment,” says Philamore.

Journal reference: Soft Robotics, DOI: 10.1089/soro.2016.0020

]]>
Mon, 31 Oct 2016 20:40:00 +0400
<![CDATA[See a sweating robot do push-ups like it's Schwarzenegger]]>http://2045.com/news/35058.html35058Wasn't it Thomas Edison who said genius is 99 percent perspiration and 1 percent inspiration? Here's a new development that leans heavily on both. The University of Tokyo has developed Kengoro, a musculoskeletal humanoid robot that cools its motors by sweating.

Kengoro, which stands 5 feet 6 inches (1.7 meters) tall, made its debut at the International Conference on Intelligent Robots and Systems held this week in Daejeon, Korea. Japanese researchers needed to find a way to cool it down without adding a batch of tubes and fans, so they decided to make it sweat.

According to IEEE Spectrum, fake sweat glands allow deionized water to seep out through Kengoro's frame around its 108 motors. As the motors heat up, the water cools them. Kengoro's metal frame is embedded with permeable channels, kind of like a sponge. The deionized water seeps slowly from the inner layers to the more porous layers as needed for cooling.

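The physics here is ordinary evaporative cooling: each gram of water that evaporates carries away roughly 2,260 joules of heat. A quick estimate, with the sweat rate assumed rather than taken from the Kengoro work:

```python
# Rough evaporative-cooling estimate for a sweating robot.
# The latent heat of vaporization of water is a standard constant;
# the sweat rate is an assumption, not a figure from the paper.

LATENT_HEAT_J_PER_G = 2260      # J removed per gram of water evaporated
sweat_rate_g_per_min = 1.0      # assumed seepage rate through the frame

cooling_watts = sweat_rate_g_per_min * LATENT_HEAT_J_PER_G / 60
print(f"~{cooling_watts:.0f} W of heat removed")   # ~38 W at 1 g/min
```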
But Kengoro doesn't have to worry about wiping down its gym equipment -- the water evaporates as it cools, so it doesn't drip in gross puddles like it does with the guy on the Stairmaster next to you.

The creative cooling method allowed Kengoro to demonstrate doing push-ups for an impressive 11 minutes straight without overheating. That's right, push-ups. It's a skinless Arnold Schwarzenegger, in other words. Let's just hope it sticks to "Kindergarten Cop" Arnold, and not "Terminator" Arnold, because we all know how mankind's little adventure with super-advanced robots turned out there.

]]>
Sat, 15 Oct 2016 13:58:13 +0400
<![CDATA[Brain implant provides sense of touch with robotic hand – and that’s just the start]]>http://2045.com/news/35057.html35057A dozen years ago, an auto accident left Nathan Copeland paralyzed, without any feeling in his fingers. Now that feeling is back, thanks to a robotic hand wired up to a brain implant.

“I can feel just about every finger – it’s a really weird sensation,” the 28-year-old Pennsylvanian told doctors a month after his surgery.

Today the brain-computer interface is taking a share of the spotlight at the White House Frontiers Conference in Pittsburgh, with President Barack Obama and other luminaries in attendance.

The ability to wire sensors into the part of the brain that registers the human sense of touch is just one of many medical marvels being developed on the high-tech frontiers of rehabilitation.

“You learn completely new and different things every time you come at this from different directions,” Arati Prabhakar, director of the Pentagon’s Defense Advanced Research Projects Agency, said last week at the GeekWire Summit in Seattle.

Prabhakar provided a preview of Copeland’s progress during her talk. DARPA’s Revolutionizing Prosthetics program provided the primary funding for the project, which was conducted at the University of Pittsburgh and its medical center, UPMC.

The full details of the experiment were published online today in Science Translational Medicine.

Copeland’s spinal cord was severely injured in an accident in the winter of 2004, when he was an 18-year-old college freshman. The injury left him paralyzed from the upper chest down, with no ability to feel or move his lower arms or legs.

Right after the accident, Copeland put himself on Pitt’s registry of patients willing to participate in clinical trials. Nearly a decade later, a medical team led by Pitt researcher Robert Gaunt chose him to participate in a groundbreaking series of operations.

Gaunt and his colleagues had been working for years on developing brain implants that let disabled patients control prosthetic limbs with their thoughts. “Slowly but surely, we have been moving this research forward,” study co-author Michael Boninger, a professor at Pitt as well as the director of post-acute care for UPMC’s Health Services Division, said in a news release.

This experiment moved the team’s efforts in a new direction. Four arrays of microelectrodes were implanted into the region of Copeland’s brain that would typically take in sensory signals from his fingers. Over the course of several months, researchers stimulated specific points in the somatosensory cortex, and mapped which points made Copeland feel as if a phantom finger was being touched.

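The mapping procedure amounts to building a lookup table from cortical stimulation sites to reported finger sensations. A schematic sketch, with stimulate() and ask_subject() as hypothetical stand-ins for the real experimental apparatus:

```python
def map_electrodes(electrodes, stimulate, ask_subject):
    """Stimulate one site at a time and record which phantom finger,
    if any, the subject reports feeling."""
    sensation_map = {}
    for electrode in electrodes:
        stimulate(electrode)        # brief pulse at this cortical site
        finger = ask_subject()      # e.g. "index", "little", or None
        if finger is not None:
            sensation_map[electrode] = finger
    return sensation_map
```

Run in reverse, the same table tells the system which electrode to drive when a given robotic finger is touched.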
“Sometimes it feels electrical, and sometimes it’s pressure,” Copeland said, “but for the most part, I can tell most of the fingers with definite precision. It feels like my fingers are getting touched or pushed.”

To test the results, the researchers placed sensors onto each of the fingers of a robotic hand. They connected the system to Copeland’s brain electrodes, and put a blindfold over his eyes. Then an experimenter touched the robo-hand’s fingers and asked Copeland if he could tell where the feeling was coming from.

Over the course of 13 sessions, each involving hundreds of finger touches, Copeland’s success rate was 84 percent. The index and little fingers were easy to identify, while the middle and ring fingers were harder.

During the experiment, Copeland learned to distinguish the intensity of the touch to some extent – but for what it’s worth, he couldn’t distinguish between hot and cold. That’ll have to come later.

“The ultimate goal is to create a system which moves and feels just like a natural arm would,” Gaunt said. “We have a long way to go to get there, but this is a great start.”

Prabhakar said neurotechnology is a high priority for DARPA, in part because of the kinds of injuries that warfighters have suffered in conflicts abroad.

“Lower-limb prosthetics have gotten very good – but upper-limb prosthetics, until very recently, have still been limited to a very simple hook,” she said.

]]>
Sat, 15 Oct 2016 13:54:32 +0400
<![CDATA[Anki's Cozmo robot is the new, adorable face of artificial intelligence]]>http://2045.com/news/35059.html35059Human beings have an uneasy relationship with robots. We’re fascinated by the prospect of intelligent machines. At the same time, we’re wary of the existential threat they pose, one emboldened by decades of Hollywood tropes. In the near-term, robots are supposed to pose a threat to our livelihood, with automation promising to replace human workers while the steady march of artificial intelligence puts a machine behind every fast food counter, toll booth, and steering wheel.

In comes Cozmo. The palm-sized robot, from San Francisco-based company Anki, is both a harmless toy and a bold refutation of that uneasy relationship so loved by film and television. The $180 bot, which starts shipping on October 16th, is powered by AI, and the end result is a WALL-E-inspired personality more akin to a clever pet than a do-everything personal assistant.

Anki isn’t trying to sell us a vision of the future like Apple, Google, and so many other Bay Area tech companies. Instead, it wants to offer an alternative. AI promises to change our lives in drastic ways. With Cozmo, Anki wants to show AI can also be a source of joy and a unique way to deepen our relationship with technology beyond the tired crusades to reinvent productivity and connect the world.

The company largely succeeds here. In my time with Cozmo over the last week, it’s been an endearing experience to discover all of the robot’s many subtle quirks, and to revisit what it’s like to play with something that feels mysteriously organic in ways you can’t quite understand. I’m reminded of childhood experiences trying to push the linguistic limits of the Furby I got for Christmas, and later on finding myself fascinated by the perceived depth of the AOL Instant Messenger bot SmarterChild.

This is intentional. Cozmo is supposed to appeal to young kids and early teenagers. It’s the same demographic Anki targeted with its first product line: a series of smartphone-controlled toy cars that can deftly maneuver a circuit-embedded track. The company, founded by Carnegie Mellon roboticists, has always proclaimed its interest in AI and robotics. Yet until the unveiling of Cozmo earlier this year, it was unclear how a toy car startup could make use of such expertise. Now, it’s evident all the software and hardware experience has paid off.

Unlike its less sophisticated predecessors in the toy market, Cozmo has advanced software to backup its smarts. Anki has programmed the robot with what it calls an emotion engine. That means Cozmo can react to situations as a human would, with a full range of emotions from happy and calm to frustrated and bold. If you pick it up, Cozmo’s blue square-shaped eyes will turn to angry slivers and its lift-like arms will raise and fall rapidly to exhibit its displeasure. Agree to play a game with Cozmo, however, and its eyes will turn into upside-down U’s to show glee. When it loses at a contest, it’ll get mad and pound the table.

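At its simplest, an emotion engine of this kind can be pictured as a mapping from events to expressive states. The sketch below is illustrative only; the states and animations are invented, and this is not Anki’s implementation or SDK.

```python
# Toy event-to-emotion mapping in the spirit of the "emotion engine"
# described above. Everything here is invented for illustration.

REACTIONS = {
    "picked_up":  ("angry",      "eyes narrow to slivers, arms pump"),
    "game_start": ("happy",      "eyes become upside-down U's"),
    "game_lost":  ("frustrated", "pounds the table"),
}

def react(event, current_emotion="calm"):
    emotion, animation = REACTIONS.get(event, (current_emotion, "idle chirp"))
    print(f"{event} -> {emotion}: {animation}")
    return emotion

emotion = react("picked_up")            # angry
emotion = react("game_start", emotion)  # happy
```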
Anki programmed in dozens upon dozens of nuanced personality displays to make Cozmo feel more alive, and seeing new ones pop up serendipitously is one of the product’s most enjoyable aspects. To create Cozmo’s personality profile and many expressions, Anki employed the help of former Pixar animator Carlos Baena, who was hired last year to give Cozmo the feeling of an animated film character come to life. The robot also emits a wide-ranging series of emotive chirps to give it a sense of constant awareness in your presence.

To further keep Cozmo feeling like a living, breathing machine, Anki uses a number of popular AI staples. The robot can employ facial recognition to remember faces and recite names. It also uses sophisticated path planning — aided by its three sensor-imbued toy cubes — to maneuver environments and avoid falling off tables. Most of these computations are not happening on the robot’s internal hardware, which keeps it light and relatively durable. Instead, Cozmo connects to an iOS or Android app, which communicates with Anki’s servers, where more of the heavy lifting is taken care of.

As for what you actually do with Cozmo, the activities vary. You can play a number of games with the robot using the three cubes. Those include a Whac-A-Mole game and your standard keep-away, where Cozmo tries to snatch a cube from your hand before you can pull it back. This is all coordinated through the mobile app, which uses a gamification system to let you unlock more skills for Cozmo by completing one of three daily goals. Those can include simple things like letting Cozmo free roam on your coffee table for 10 minutes. Others give you specific scenarios to create, like beating Cozmo at a game of "tap the cube" after reaching a 4-4 tie. One of the most fun features the app allows is a remote-control mode, where you can see through Cozmo's camera and use him as a kind of reconnaissance tool.

Overall, the biggest criticism you can direct toward Cozmo at the moment is that it’s just a toy, one best enjoyed by young smartphone-savvy kids. That presents a bit of a problem, because Anki’s most impressive achievements here — facial recognition, its versatile emotion engine — will be lost on the target audience. Meanwhile, adults who find Cozmo fascinating, enough to plunk down $180 at least, will be frustrated by the robot’s initial limitations. Walking that line, between appealing to kids with a fondness for Pixar films and impressing robot-loving older customers, will be difficult.

There are other downsides to Cozmo at its initial launch. Though the robot is controlled by the relatively simple mobile app, younger children will most likely need a parent or sibling’s help in getting Cozmo set up. It needs to be activated every now and again through a special Wi-Fi network, and getting it to wake up can sometimes be tricky unless Cozmo is kept in its charging dock when not in use. Being tied to the special Cozmo Wi-Fi network means the phone can’t connect to the internet, and exiting the app will put Cozmo to sleep after a few moments. These kinks may be ironed out with future software updates, but they’ll likely frustrate kids who expect toys to work out of the box or want Cozmo to have a persistent, always-on mode less reliant on a phone.

The robot does have a great deal of potential. Anki is releasing a finished software development kit in the coming months to let developers take advantage of the robot’s advanced capabilities to perform unforeseen tasks. Anki wants Cozmo to have an impact similar to Microsoft’s original Kinect motion camera, which roboticists tapped for computer vision capabilities that were at the time available only with far more expensive components. One possibility the company has floated in the past is programming Cozmo to work with smart appliances and your media center, so it can dim your Philips Hue lights and put on Netflix when it recognizes two different people sitting on the couch.

For now, though, it’s mostly a neat toy designed for kids, while only the most hardcore of robotics fans and programmers will want to pick one up for their office or at-home tinkering projects. But that may be good enough. What Anki wants to accomplish — to bring robotics and AI to everyone, in a kid-friendly package — doesn’t require a sophisticated humanoid bot to help you around the house or an ultra-capable online assistant to manage your entire life. The goal can be achieved with a likable personality that people will develop a fondness for. In that regard, Cozmo easily clears the bar.

An early look at the Cozmo robot


]]>
Fri, 14 Oct 2016 14:01:46 +0400
<![CDATA[Robotic surrogates help chronically ill kids maintain social, academic ties at school]]>http://2045.com/news/35053.html35053Chronically ill, homebound children who use robotic surrogates to "attend" school feel more socially connected with their peers and more involved academically, according to a first-of-its-kind study by University of California, Irvine education researchers.

"Every year, large numbers of K-12 students are not able to go to school due to illness, which has negative academic, social and medical consequences," said lead author Veronica Newhart, a Ph.D. student in UCI's School of Education. "They face falling behind in their studies, feeling isolated from their friends and having their recovery impeded by depression. Tutors can make occasional home visits, but until recently, there hasn't been a way to provide these homebound students with inclusive academic and social experiences."

Telepresence robots could do just that. The Internet-enabled, two-way video streaming automatons have wheels for feet and a screen showing the user's face at the top of a vertical "body." From home, a student controlling the device with a laptop can see and hear everything in the classroom, talk with friends and the teacher, "raise his or her hand" via flashing lights to ask or answer questions, move around and even take field trips.

However, the robots have gone straight from production to consumer, the researchers noted, and there is great need for objective, formal studies in order for schools, hospitals and communities to responsibly engage in this innovative educational practice.

The exploratory case study -- co-authored by Mark Warschauer, UCI professor of education and informatics -- involved five homebound children, five parents, 10 teachers, 35 classmates and six school/district administrators. The students -- four males and one female -- ranged in age from 6 to 16, and their chronic illnesses included an immunodeficiency disorder, cancer and heart failure.

Getting to see their friends and staying socially connected was what they said they liked best about using the robots. The school day felt more normal, they reported, because they were able to participate in discussions, interact with peers and undergo new experiences with their classmates.

"Further research is required to determine the impact of robot utilization on students' health and well-being, as well as the most effective ways to implement this technology in various settings," said Newhart, who presented the findings at the 23rd International Conference on Learning, held in July at the University of British Columbia.

"Collaboration among education, technology and healthcare teams is key to the success of virtual inclusion in the classroom for improved learning, social and health outcomes for vulnerable children."

This fall, telepresence robots will become available on the UCI campus -- a gift from the class of 2016. "This is a solution for any student who's prevented from completing a course or degree program because of a long-term injury or illness," said Newhart, who will soon launch additional studies in school districts across the country.

Story Source:

The above post is reprinted from materials provided by University of California, Irvine. Note: Content may be edited for style and length.

]]>
Fri, 16 Sep 2016 00:43:23 +0400
<![CDATA[How a small implanted device could help limit metastatic breast cancer]]>http://2045.com/news/35052.html35052A small device implanted under the skin can improve breast cancer survival by catching cancer cells, slowing the development of metastatic tumors in other organs and allowing time to intervene with surgery or other therapies.

These findings, reported in Cancer Research, suggest a path for identifying metastatic cancer early and intervening to improve outcomes.

"This study shows that in the metastatic setting, early detection combined with a therapeutic intervention can improve outcomes. Early detection of a primary tumor is generally associated with improved outcomes. But that's not necessarily been tested in metastatic cancer," says study author Lonnie D. Shea, Ph.D., William and Valerie Hall Department Chair of Biomedical Engineering at the University of Michigan.

The study, done in mice, expands on earlier research from this team showing that the implantable scaffold device effectively captures metastatic cancer cells. Here, the researchers improve upon their device and show that surgery prior to the first signs of metastatic cancer improved survival.

"Currently, early signs of metastasis can be difficult to detect. Imaging may be done once a patient experiences symptoms, but that implies the burden of disease may already be substantial. Improved detection methods are needed to identify metastasis at a point when targeted treatments can have a significant beneficial impact on slowing disease progression," says study author Jacqueline S. Jeruss, M.D., Ph.D., associate professor of surgery and biomedical engineering and director of the Breast Care Center at the University of Michigan Comprehensive Cancer Center.

The scaffold is made of FDA-approved material commonly used in sutures and wound dressings. It's biodegradable and can last up to two years within a patient. The researchers envision it would be implanted under the skin, monitored with non-invasive imaging and removed upon signs of cancer cell colonization, at which point treatment could be administered.

The scaffold is designed to mimic the environment in other organs before cancer cells migrate there. The scaffold attracts the body's immune cells, and the immune cells draw in the cancer cells. This then limits the immune cells from heading to the lung, liver or brain, where breast cancer commonly spreads.

“Typically, immune cells initially colonize a metastatic site and then pave the way for cancer cells to spread to that organ. Our results suggest that bringing immune cells into the scaffold limits the ability of those immune cells to prepare the metastatic sites for the cancer cells. Having more immune cells in the scaffold attracts more cancer cells to this engineered environment,” Shea says.

In the mouse study, at day 5 after tumor initiation, the researchers found a detectable percentage of tumor cells within the scaffold but none in the lung, liver or brain, suggesting that the cancer cells hit the scaffold first.

At 15 days after tumor initiation, they found 64 percent fewer cancer cells in the liver and 75 percent fewer cancer cells in the brains of mice with scaffolds compared to mice without scaffolds. This suggests that the presence of the scaffold slows the progress of metastatic disease.

The researchers removed the tumors at day 10, which is after detection but before substantial spreading, and found the mice that had the scaffold in place survived longer than mice that did not have a scaffold. While surgery was the primary intervention in this study, the researchers suggest that additional medical treatments might also be tested as early interventions.

In addition, researchers hope that by removing the scaffold and examining the cancer cells within it, they can use precision medicine techniques to target the treatment most likely to have an impact.

This system is early detection and treatment, not a cure, the researchers emphasize. The scaffold won't prevent metastatic disease or reverse disease progression for patients with established metastatic cancer.

The team will develop a clinical trial protocol using the scaffold to monitor for metastasis in patients treated for early stage breast cancer. In time, the researchers hope it could also be used to monitor for breast cancer in people who are at high risk due to genetic susceptibility. They are also testing the device in other types of cancer.

Story Source:

The above post is reprinted from materials provided by University of Michigan Health System. Note: Content may be edited for style and length.

]]>
Fri, 16 Sep 2016 00:40:57 +0400
<![CDATA[THE HYPE—AND HOPE—OF ARTIFICIAL INTELLIGENCE]]>http://2045.com/news/35046.html35046Earlier this month, on his HBO show “Last Week Tonight,” John Oliver skewered media companies’ desperate search for clicks. Like many of his bits, it became a viral phenomenon, clocking in at nearly six million views on YouTube. At around the ten-minute mark, Oliver took his verbal bat to the knees of Tronc, the new name for Tribune Publishing Company, and its parody-worthy promotional video, in which a robotic spokeswoman describes the journalistic benefits of artificial intelligence, as a string section swells underneath.

Tronc is not the only company to enthusiastically embrace the term “artificial intelligence.” A.I. is hot, and every company worth its stock price is talking about how this magical potion will change everything. Even Macy’s recently announced that it was testing an I.B.M. artificial-intelligence tool in ten of its department stores, in order to bring back customers who are abandoning traditional retail in favor of online shopping.

Much like “the cloud,” “big data,” and “machine learning” before it, the term “artificial intelligence” has been hijacked by marketers and advertising copywriters. A lot of what people are calling “artificial intelligence” is really data analytics—in other words, business as usual. If the hype leaves you asking “What is A.I., really?,” don’t worry, you’re not alone. I asked various experts to define the term and got different answers. The only thing they all seem to agree on is that artificial intelligence is a set of technologies that try to imitate or augment human intelligence. To me, the emphasis is on augmentation, in which intelligent software helps us interact and deal with the increasingly digital world we live in.

Three decades ago, I read newspapers, wrote on an electric typewriter, and watched a handful of television channels. Today, I have streaming video from Netflix, Amazon, HBO, and other places, and I’m sometimes paralyzed by the choices. It is becoming harder for us to stay on top of the onslaught—e-mails, messages, appointments, alerts. Augmented intelligence offers the possibility of winnowing an increasing number of inputs and options in a way that humans can’t manage without a helping hand.

Computers in general, and software in particular, are much more difficult than other kinds of technology for most people to grok, and they overwhelm us with a sense of mystery. There was a time when you would record a letter or a document on a dictaphone and someone would transcribe it for you. A human was making the voice-to-text conversion with the help of a machine. Today, you can speak into your iPhone and it will transcribe your messages itself. If people could have seen our current voice-to-text capabilities fifty years ago, it would have looked as if technology had become sentient. Now it’s just a routine way to augment how we interact with the world. Kevin Kelly, the writer and futurist, whose most recent book is “The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future,” said, “What we can do now would be A.I. fifty years ago. What we can do in fifty years will not be called A.I.”

You don’t have to look up from Facebook to get his point. Before we had the Internet, we would either call or write to our friends, one at a time, and keep up with their lives. It was a slow process, and took a lot of effort and time to learn about each other. As a result, we had fewer interactions—there was a cost attached to making long-distance phone calls and a time commitment attached to writing letters. With the advent of the Internet, e-mail emerged as a way to facilitate and speed up those interactions. Facebook did one better—it turned your address book into a hub, allowing you to simultaneously stay in touch with hundreds, even thousands, of friends. The algorithm allows us to maintain more relationships with much less effort at almost no cost.

Michelle Zhou spent over a decade and a half at I.B.M. Research and I.B.M. Watson Group before leaving to become a co-founder of Juji, a sentiment-analysis startup. An expert in a field where artificial intelligence and human-computer interaction intersect, Zhou breaks down A.I. into three stages. The first is recognition intelligence, in which algorithms running on ever more powerful computers can recognize patterns and glean topics from blocks of text, or perhaps even derive the meaning of a whole document from a few sentences. The second stage is cognitive intelligence, in which machines can go beyond pattern recognition and start making inferences from data. The third stage will be reached only when we can create virtual human beings, who can think, act, and behave as humans do.

We are a long way from creating virtual human beings. Despite what you read in the media, no technology is perfect, and the most valuable function of A.I. lies in augmenting human intelligence. To even reach that point, we need to train computers to mimic humans. An April 2016 story in Bloomberg Business provided a good example. It described how companies that provide automated A.I. personal assistants (of the sort that arrange schedules or help with online shopping) had hired human “trainers” to check and evaluate the A.I. assistants’ responses before they were sent out. “It’s ironic that we define artificial intelligence with respect to its ability to replicate human intelligence,” said Sean Gourley, the founder of Primer, a data-analytics company, and an expert on deriving intelligence from large data sets with the help of algorithms.

Whether it is Spotify or Netflix or a new generation of A.I. chat bots, all of these tools rely on humans themselves to provide the data. When we listen to songs, put them on playlists, and share them with others, we are sending vital signals to Spotify that train its algorithms not only to discover what we might like but also to predict hits.

Even the much talked-about “computer vision” has become effective only because humans have uploaded billions of photos and tagged them with metadata to give those photos context. Increasingly powerful computers can scan through these photos and find patterns and meaning. Similarly, Google can use billions of voice samples it has collected over the years to build a smart system that understands accents and nuances, which make its voice-based search function possible.

Using Zhou’s three stages as a yardstick, we are only in the “recognition intelligence” phase—today’s computers use deep learning to discover patterns faster and better. It’s true, however, that some companies are working on technologies that can be used for inferring meanings, which would be the next step. “It does not matter whether we will end up at stage 3,” Zhou wrote to me in an e-mail. “I’m still a big fan of man-machine symbiosis, where computers do the best they can (that is being consistent, objective, precise), and humans do our best (creative, imprecise but adaptive).” For a few more decades, at least, humans will continue to train computers to mimic us. And, in the meantime, we’re going to have to deal with the hyperbole surrounding A.I.

]]>
Sun, 28 Aug 2016 20:37:37 +0400
<![CDATA[Bot tech controls drug release when needed]]>http://2045.com/news/35047.html35047(Tech Xplore)—A study shows that nanobots can release drugs inside your brain. The nanorobots, New Scientist reported on Thursday, are built out of DNA. Drugs can be tethered to their shell-like shapes.

Helen Thomson had details on how this all works: "The bots also have a gate, which has a lock made from iron oxide nanoparticles. The lock opens when heated using electromagnetic energy, exposing the drug to the environment. Because the drug remains tethered to the DNA parcel, a body's exposure to the drug can be controlled by closing and opening the gate."

Their study has been published in PLOS ONE as "Thought-Controlled Nanoscale Robots in a Living Host." New Scientist noted the value of the work: it demonstrates more precise control over when a drug is active in the body. "Because the bots can open and close when required, the technology should minimize unwanted side effects."

Therein lies the challenge: getting drugs to where they need to be exactly when they are wanted. "Most drugs diffuse through the blood stream over time – and you're stuck with the side effects until the drug wears off," wrote Thomson.

Kate Baggaley in Popular Science said, "This technology could eventually give people more control over when and where a medication is active in their body." Thomson said the technique may be useful for treating brain disorders such as schizophrenia and ADHD.

"The technology released a drug inside cockroaches in response to the man's brain activity. As described in Popular Science: "A man's brain activity prompted nanobots made out of DNA to release drugs inside a cockroach."

The system is from a team at the Interdisciplinary Center in Herzliya and Bar-Ilan University in Ramat Gan, Israel. Following their research effort, the question becomes if and when we will see this applied to humans. According to the New Scientist report, the technology is not ready for use in humans.

They still have to work on the basic setup. To work in humans, the system needs a smaller, more portable method of measuring brain activity: the team envisions a person wearing a small, hearing-aid-like EEG device to monitor brain activity and detect when drugs are needed – for example, when a person with ADHD's concentration begins to lapse. A smart watch would then create the electromagnetic field required to release a dose of Ritalin.

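The study's abstract (quoted below) describes the whole system as a closed loop: EEG patterns are classified online, and the classifier's output switches the electromagnetic field that heats the robots' gates open. A schematic of that loop, with the eeg, classifier, and field interfaces as hypothetical stand-ins:

```python
# Schematic of the closed loop described in the study: classify EEG
# online and switch the electromagnetic field accordingly. All three
# interfaces are hypothetical stand-ins, not a real API.

def thought_controlled_release(eeg, classifier, field):
    while True:
        window = eeg.read(seconds=1)          # one second of EEG samples
        state = classifier.predict(window)    # e.g. "strained" or "relaxed"
        # Heating the iron oxide lock opens the gate and exposes the drug;
        # switching the field off lets the gate close again.
        field.set(on=(state == "strained"))
```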
The authors wrote that "so far no interface has been established between a human mind and a therapeutic molecule, which are 10 orders of magnitude apart. The purpose of this study was to show that DNA robots can bridge this gap." They said the robots they designed can be electronically remote-controlled: "This was done by adding metal nanoparticles to the robotic gates, which could heat in response to an electromagnetic field."

More information: Shachar Arnon et al. Thought-Controlled Nanoscale Robots in a Living Host, PLOS ONE (2016). DOI: 10.1371/journal.pone.0161227

Abstract 
We report a new type of brain-machine interface enabling a human operator to control nanometer-size robots inside a living animal by brain activity. Recorded EEG patterns are recognized online by an algorithm, which in turn controls the state of an electromagnetic field. The field induces the local heating of billions of mechanically-actuating DNA origami robots tethered to metal nanoparticles, leading to their reversible activation and subsequent exposure of a bioactive payload. As a proof of principle we demonstrate activation of DNA robots to cause a cellular effect inside the insect Blaberus discoidalis, by a cognitively straining task. This technology enables the online switching of a bioactive molecule on and off in response to a subject's cognitive state, with potential implications to therapeutic control in disorders such as schizophrenia, depression, and attention deficits, which are among the most challenging conditions to diagnose and treat. 

]]>
Sat, 27 Aug 2016 20:40:33 +0400
<![CDATA[Stretchy supercapacitors power wearable electronics]]>http://2045.com/news/35048.html35048A future of soft robots that wash your dishes or smart T-shirts that power your cell phone may depend on the development of stretchy power sources. But traditional batteries are thick and rigid—not ideal properties for materials that would be used in tiny malleable devices. In a step toward wearable electronics, a team of researchers has produced a stretchy micro-supercapacitor using ribbons of graphene.

The researchers will present their work today at the 252nd National Meeting & Exposition of the American Chemical Society (ACS).

"Most power sources, such as phone batteries, are not stretchable. They are very rigid," says Xiaodong Chen, Ph.D. "My team has made stretchable electrodes, and we have integrated them into a supercapacitor, which is an energy storage device that powers electronic gadgets."

Supercapacitors, developed in the 1950s, have a higher power density and longer life cycle than standard capacitors or batteries. And as devices have shrunk, so too have supercapacitors, bringing to the fore a generation of two-dimensional micro-supercapacitors that are integrated into cell phones, computers and other devices. However, these supercapacitors have remained rigid, and are thus a poor fit for soft materials that need the ability to elongate.

In this study, Chen of Nanyang Technological University, Singapore, and his team sought to develop a micro-supercapacitor from graphene. This carbon sheet is renowned for its thinness, strength and conductivity. "Graphene can be flexible and foldable, but it cannot be stretched," he says. To fix that, Chen's team took a cue from skin. Skin has a wave-like microstructure, Chen says. "We started to think of how we could make graphene more like a wave."

The researchers' first step was to make graphene micro-ribbons. Most graphene is produced with physical methods—like shaving the tip of a pencil—but Chen uses chemistry to build his material. "We have more control over the graphene's structure and thickness that way," he explains. "It's very difficult to control that with the physical approach. Thickness can really affect the conductivity of the electrodes and how much energy the supercapacitor overall can hold."

The next step was to create the stretchable polymer chip with a series of pyramidal ridges. The researchers placed the graphene ribbons across the ridges, creating the wave-like structure. The design allowed the material to stretch without the graphene electrodes of the supercapacitor detaching, cracking or deforming. In addition, the team developed kirigami structures, which are variations of origami folds, to make the supercapacitors 500 percent more flexible without degrading their electrochemical performance. As a final test, Chen powered an LCD from a calculator with the stretchy graphene-based micro-supercapacitor. Similarly, such stretchy supercapacitors could be used in pressure or chemical sensors.

In future experiments, the researchers hope to increase the electrode's surface area so it can hold even more energy. The current version only stores enough energy to power LCD devices for a minute, he says.

More information: Flexible Micro-supercapacitors based on graphene, 252nd National Meeting & Exposition of the American Chemical Society (ACS).

Abstract 
Micro-supercapacitors with unique two-dimensional (2D) structures are gaining attention due to their small size, high energy density and potential applications in on-chip and portable electronics. Compared to the sandwich structure of conventional supercapacitors, the 2D structure of micro-supercapacitors enables a reduction in the ionic diffusing pathway, and more efficient utilization of the surface area of electrode materials. Meanwhile, emerging wearable electronics require the property of stretchability in addition to flexibility for application on the soft and curved human body that is covered with highly extensible skins. Micro-supercapacitors, as a candidate for essential integrated energy conversion and storage units on wearable electronics, ought to be capable of accommodating large strain while retaining their performance. In this talk, I will present our recent development of highly stretchable micro-supercapacitors with stable electrochemical performance. The excellent stretchable and electrochemical performance relies on the out-of-plane wavy structures of graphene micro-ribbons. This geometry decreases the strain concentration on the electrode fingers, so that detaching and cracking of the electrode materials can be prevented. In addition, it keeps the electrode fingers at a relatively constant distance, enhancing the stability of the micro-supercapacitors. 

]]>
Thu, 25 Aug 2016 20:42:36 +0400
<![CDATA[Meet DevBot, a self-driving electric racing car]]>http://2045.com/news/35045.html35045DevBot is a test mule for Roborace, the first driverless racing series.

There are less than two months to go until the start of Formula E's third season, which kicks off in Hong Kong on October 9. One of the more interesting things about Formula E's upcoming season is the new support series, Roborace. As the name suggests, it's a series for self-driving race cars, and the organizers have just unveiled the mule—called DevBot—that teams will use to develop their control software.

All of the Roborace teams will use identical Robocars, but each will develop its own control algorithms. The race cars are fully electric—in keeping with the ethos of Formula E—and have more than a little Speed Racer about them. But DevBot will look much more familiar to fans of sports car racing; it's a Le Mans-style prototype coupe, shown in the test photos without the front and rear bodywork.

DevBot also has a cockpit for a human driver, unlike the Robocars, but it does have the same powertrain, sensor suite, processors, and communication systems as the forthcoming autonomous race cars. DevBot is also fully electric, suggesting the handiwork of Drayson Racing Technologies. Several years ago, Drayson converted its Lola B10 Le Mans Prototype racer from internal combustion to electric power and has been involved in developing the technology used by Formula E.

Although DevBot has been testing in private for several months now, it will officially break cover later this week at the Formula E preseason test, being held at Donington Park in Leicestershire, England, on August 24.

]]>
Tue, 23 Aug 2016 23:31:01 +0400
<![CDATA[Paralyzed Man Regains Hand Movement, Thanks to First-Ever Nerve-Transfer Surgery]]>http://2045.com/news/35042.html35042HEADFIRST

Tim Raglin regularly dove, headfirst, into the water at his family's lake house. The 45-year-old Canadian man had done so thousands of times without incident. In 2007, though, Raglin hit his head on a rock in the shallow water, shattering a vertebra in his cervical spine.

His family pulled him to safety, saving him from drowning. However, for nine years, both his hands and feet were left paralyzed.

Now though, there’s hope for Raglin and others like him.

Raglin is the first Canadian to ever undergo a nerve transfer surgery. Dr. Kirsty Boyd from the Ottawa Hospital essentially rewired Raglin's body, rerouting some of his fully functional elbow nerves to his hand. Although Raglin had to wait several months for the nerves to regrow, this procedure allowed him to regain some control over his right hand.

ROAD TO INDEPENDENCE

After persevering for 18 months, Raglin was finally able to open his fingers during an occupational therapy session at The Ottawa Hospital Rehabilitation Centre.

“It was kind of a shock,” he said in an interview. “And it’s really moving now: There’s a lot of nerves touching muscles that are getting stronger…Every iteration, it just gets more and more exciting.”

It’s still a slow uphill battle for Raglin. The muscles in his hand have deteriorated from lack of use, so they tire easily. In addition, because Raglin is using a different nerve pathway to activate the muscles in his hand, it will take some time for his brain to adjust to the new system.

Despite these challenges, he has learned to close his fingers on something by flexing his bicep. In time, however, it’s expected his brain will figure out how to separate the triggers for his hand and his bicep.

“I’m not quite at the point where I can get a cup off the table, but I can envision myself doing that. I know I will be able to do that eventually—so it’s exciting to see that.”

]]>
Tue, 23 Aug 2016 23:14:10 +0400
<![CDATA[Tiny robot caterpillar can move objects ten times its size]]>http://2045.com/news/35041.html35041Soft robots aren't easy to make, since they require a completely different set of components from their rigid counterparts. It's even tougher to scale down the parts they typically use for locomotion. A team of researchers from the Faculty of Physics at the University of Warsaw, however, successfully created a 15-millimeter soft micromachine that only needs light to be able to move. The microrobot is made of Liquid Crystalline Elastomers (LCEs), smart materials that change shape when exposed to visible light. Under a light source, the machine's body contracts like a caterpillar and forms waves to propel it forward.

The researchers said the robo-caterpillar can climb steep slopes, squeeze into minuscule spaces and move objects ten times its size. A tiny machine like this that can operate in challenging environments could be used for scientific research, and maybe even espionage if someone can find a way to attach a camera or a mic to it. But if the robot's a bit too small for a specific application, researchers could also adopt the team's method to make something a wee bit bigger.

]]>
Sun, 21 Aug 2016 23:09:00 +0400
<![CDATA[Putting a computer in your brain is no longer science fiction]]>http://2045.com/news/35040.html35040Like many in Silicon Valley, technology entrepreneur Bryan Johnson sees a future in which intelligent machines can do things like drive cars on their own and anticipate our needs before we ask.

What’s uncommon is how Johnson wants to respond: find a way to supercharge the human brain so that we can keep up with the machines.

From an unassuming office in Venice Beach, his science-fiction-meets-science start-up, Kernel, is building a tiny chip that can be implanted in the brain to help people suffering from neurological damage caused by strokes, Alzheimer’s or concussions. Top neuroscientists who are building the chip — they call it a neuroprosthetic — hope that in the longer term, it will be able to boost intelligence, memory and other cognitive tasks.

The medical device is years in the making, Johnson acknowledges, but he can afford the time. He sold his payments company, Braintree, to PayPal for $800 million in 2013. A former Mormon raised in Utah, the 38-year-old speaks about the project with missionary-like intensity and focus.

“Human intelligence is landlocked in relationship to artificial intelligence — and the landlock is the degeneration of the body and the brain,” he said in an interview about the company, which he had not discussed publicly before. “This is a question of keeping humans front and center as we progress.”

Johnson stands out among an elite set of entrepreneurs who believe Silicon Valley can play a role in funding large-scale scientific discoveries — the kind that can dramatically improve human life in ways that go beyond building software.

Many of their ventures draw from software principles: in the last two years, venture capital firms like Y Combinator, Andreessen Horowitz, Peter Thiel's Founders Fund, Khosla Ventures and others have poured money into start-ups that focus on "bio-hacking" — the notion that you can engineer the body the way you would a software program. They've funded companies that aim to sequence the bacteria in the gut, reprogram the DNA you were born with, or conduct cancer biopsies from samples of blood. They've backed what are known as cognitive-enhancement businesses like Thync, which builds a headset that sends mood-altering electrical pulses to the brain, and Nootrobox, a start-up that makes chewable coffee supplements that combine doses of caffeine with active ingredients in green tea, leading to a precisely engineered, zenlike high.

It’s easy to dismiss these efforts as the hubristic, techno-utopian fantasies of a self-involved elite that believes it can defy death and human decline — and in doing so, confer even more advantages on the already-privileged.

And while there’s no shortage of hubris in Silicon Valley, it’s also undoubtable some of these projects will accelerate scientific breakthroughs and fill some of the gaps left in the wake of declining public funding for scientific research, said Laurie Zoloth, professor of  bioethics and medical humanities at Northwestern University. Moreover, techies are motivated by the fact that many biological and health challenges increasingly involve data-mining and computation; they’re looking more like problems that they know how to solve. Large-scale genome sequencing, for example, has long been seen as key to unlocking targeted cancer therapies and detecting disease far earlier than current methods; it’s becoming more of a reality as the cost of sequencing, storing and analyzing the data has dropped dramatically, leading to a flood of investments in that area.

Kernel is cognitive enhancement of the not-gimmicky variety. The concept is based on the work of Theodore Berger, a pioneering biomedical engineer who directs the Center for Neural Engineering at the University of Southern California, and is the start-up’s chief science officer.

For over two decades, Berger has been working on building a neuroprosthetic to help people with dementia, strokes, concussions, brain injuries and Alzheimer's disease, which afflicts 1 in 9 adults over 65.

The implanted devices try to replicate the way brain cells communicate with one another. Let’s say, for example, that you are having a conversation with your boss. A healthy brain will convert that conversation from short-term memory to long-term memory by firing off a set of electrical signals. The signals fire in a specific code that is unique to each person and is a bit like a software command.

Brain diseases throw off these signaling codes. Berger's software tries to assist the communication between brain cells by making an instantaneous prediction as to what the healthy code should be, and then firing off in that pattern. In separate studies funded by the Defense Advanced Research Projects Agency over the last several years, Berger's chips were shown to improve recall functions in both rats and monkeys.
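
As a toy illustration of that predict-and-fire idea (emphatically not Berger's actual model, which is a nonlinear multi-input, multi-output system fitted to recorded spike data), one can picture a learned mapping from the observed input pattern to the output pattern a healthy circuit would produce:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical learned weights mapping 8 input channels to 8 outputs;
    # a real neuroprosthetic would fit these from healthy recordings.
    W = rng.normal(size=(8, 8))

    def predict_healthy_output(input_spikes, threshold=0.5):
        # Predict which output channels a healthy circuit would fire,
        # then stimulate in that pattern.
        return (W @ input_spikes > threshold).astype(int)

    observed = rng.integers(0, 2, size=8)  # spikes recorded upstream
    print("stimulation pattern:", predict_healthy_output(observed))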

A year ago, Berger felt he had reached a ceiling in his research. He wanted to begin testing his devices with humans and was thinking about commercial opportunities when he got a cold call from Johnson in October 2015. He hadn’t heard of Johnson; the Google search said he was a tech entrepreneur who had founded a payments processing company and invested in out-there science start-ups. The two met in Berger’s office later that month. They talked for four hours, skipping lunch, and by the end of the day, Johnson said he would put up the funds for the two to start something together. “I don’t know who, but somebody was looking over us,” Berger said of the meeting.

For Johnson, the meeting was a culmination of a longtime obsession with intelligence and the brain.

Shortly after he sold Braintree, he was already restless to start another company. He spent six months calling everyone he knew who was doing "something audacious" — about 200 people in all. "I wanted to understand what mental models people maintained — how did they define what to work on and why?" he says.

He then set up a $100 million fund that invests in science and technology start-ups that could “radically improve quality of life.” The fund, which comes exclusively from his personal fortune, was called OS Fund, because he wanted to support companies that were making changes at the operating-system level, he said. Johnson’s goal was to take projects from “crazy to viable” — including start-ups attempting to mine asteroids for precious metals and water, delivery drones for developing countries, and an artificial-intelligence company building the world’s largest human genetic database.

 

At the same time, he kept returning to intelligence, both artificial and real. As he saw it, artificial intelligence was booming — technology advances were moving at an accelerated pace; the pace of the human brain’s evolution was sluggish by comparison. So he hired a team of neuroscientists and tasked them with combing through all the relevant research, with the goal of forming a brain company. Eventually they settled on Berger.

Ten months later, the team is starting to sketch out prototypes of the device and is conducting tests with epilepsy patients in hospitals. They hope to start a clinical trial, but first they have to figure out how to make the device portable. (Right now, patients who use it are hooked up to a computer.)

Zoloth says one of the big risks of technologists funding science is that they fund their own priorities, which can be disconnected from the greater public good. Many people don’t have enough resources to fulfill the brain potential they currently have, let alone enhance it. “Saying that if tech billionaires fund what they want may inadvertently fund science for the larger public, as a sort of leftover effect, is a problematic argument,” she said. “If brilliantly creative high school teachers in the inner city, for example, could fund science, too, then perhaps the needs of the poor might be found more interesting.”

Johnson says he is acutely aware of those concerns. He recognizes that the notion of people walking around with chips implanted in their heads to make them smarter seems far-fetched, to put it mildly. He says the goal is to build a product that is widely affordable, but acknowledges there are challenges. He points out that many scientific discoveries and inventions — even the printing press — started out for a privileged group but ended up providing massive benefits to humanity. The primary benefits of Kernel, he says, will be for the sick, for the millions of people who have lost their memories because of brain disorders. Even a small improvement in memory — a person with dementia might be able to remember the location of the bathroom in their home, for example — can help people maintain their dignity and enjoy a greater quality of life.

And in an age of AI, he insists that boosting the capacity of our brains is itself an urgent public concern. “Whatever endeavor we imagine — flying cars, go to Mars — it all fits downstream from our intelligence,” he says. “It is the most powerful resource in existence. It is the master tool.”

]]>
Wed, 17 Aug 2016 21:54:12 +0400
<![CDATA[Researchers 'reprogram' network of brain cells in mice with thin beam of light]]>http://2045.com/news/35037.html35037Neurons that fire together really do wire together, says a new study in Science, suggesting that the three-pound computer in our heads may be more malleable than we think.

In the latest issue of Science, neuroscientists at Columbia University demonstrate that a set of neurons trained to fire in unison could be reactivated as much as a day later if just one neuron in the network was stimulated. Though further research is needed, their findings suggest that groups of activated neurons may form the basic building blocks of learning and memory, as originally hypothesized by psychologist Donald Hebb in the 1940s.

"I always thought the brain was mostly hard-wired," said the study's senior author, Dr. Rafael Yuste, a neuroscience professor at Columbia University. "But then I saw the results and said 'Holy moly, this whole thing is plastic.' We're dealing with a plastic computer that's constantly learning and changing."

The researchers were able to control and observe the brain of a living mouse using the optogenetic tools that have revolutionized neuroscience in the last decade. They injected the mouse with a virus containing light-sensitive proteins engineered to reach specific brain cells. Once inside a cell, the proteins allowed researchers to remotely activate the neuron with light, as if switching on a TV.

The mouse was allowed to run freely on a treadmill while its head was held still under a microscope. With one laser, the researchers beamed light through its skull to stimulate a small group of cells in the visual cortex. With a second laser, they recorded rising levels of calcium in each neuron as it fired, thus imaging the activity of individual cells.

Before optogenetics, scientists had to open the skull and implant electrodes into living tissue to stimulate neurons with electricity and measure their response. Even a mouse brain of 100 million neurons, nearly a thousandth the size of ours, was too dense to get a close look at groups of neurons.

Optogenetics allowed researchers to get inside the brain non-invasively and control it far more precisely. In the last decade, researchers have restored sight and hearing to blind and deaf mice, and turned normal mice aggressive, all by manipulating specific brain regions.

The breakthrough that allowed researchers to reprogram a cluster of cells in the brain is the culmination of more than a decade of work. With tissue samples from the mouse visual cortex, Yuste and his colleagues showed in a 2003 study in Nature that neurons coordinated their firing in small networks called neural ensembles. A year later, they demonstrated that the ensembles fired off in sequential patterns through time.

As techniques for controlling and observing cells in living animals improved, they learned that these neural ensembles are active even without stimulation. They used this information to develop mathematical algorithms for finding neural ensembles in the visual cortex. They were then able to show, as they had in the tissue samples earlier, that neural ensembles in living animals also fire one after the other in sequential patterns.
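
A crude stand-in for the ensemble-finding step: binarize each cell's calcium trace, compute pairwise correlations, and group cells whose activity tracks a seed cell. This is only a sketch; the published algorithms are considerably more sophisticated:

    import numpy as np

    def find_ensembles(activity, corr_threshold=0.6):
        # activity: (n_cells, n_frames) binary matrix of firing events.
        corr = np.corrcoef(activity)
        n = activity.shape[0]
        visited, ensembles = set(), []
        for seed in range(n):
            if seed in visited:
                continue
            group = {seed} | {j for j in range(n)
                              if j != seed and corr[seed, j] > corr_threshold}
            if len(group) > 1:
                ensembles.append(sorted(group))
            visited |= group
        return ensembles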

The current study in Science shows that these networks can be artificially implanted and replayed, says Yuste, much as the scent of a tea-soaked madeleine takes novelist Marcel Proust back to his memories of childhood.

Pairing two-photon stimulation technology with two-photon calcium imaging allowed the researchers to document how individual cells responded to light stimulation. Though previous studies have targeted and recorded individual cells, none have demonstrated that a bundle of neurons could be fired off together to imprint what they call a "neuronal microcircuit" in a live animal's brain.

"If you told me a year ago we could stimulate 20 neurons in a mouse brain of 100 million neurons and alter their behavior, I'd say no way," said Yuste, who is also a member of the Data Science Institute. "It's like reconfiguring three grains of sand at the beach."

The researchers think that the network of activated neurons they artificially created may have implanted an image completely unfamiliar to the mouse. They are now developing a behavioral study to try and prove this.

"We think that these methods to read and write activity into the living brain will have a major impact in neuroscience and medicine," said the study's lead author, Luis Carrillo-Reid, a postdoctoral researcher at Columbia.

Dr. Daniel Javitt, a psychiatry professor at Columbia University Medical Center who was not involved in the study, says the work could potentially be used to restore normal connection patterns in the brains of people with epilepsy and other brain disorders. Major technical hurdles, however, would need to be overcome before optogenetic techniques could be applied to humans.

The research is part of a $300 million brain-mapping effort called the U.S. BRAIN Initiative, which grew out of an earlier proposal by Yuste and his colleagues to develop tools for mapping the brain activity of fruit flies to more complex mammals, including humans.

Story Source:

The above post is reprinted from materials provided by Columbia University. Note: Content may be edited for style and length.

Journal Reference:

  1. Rafael Yuste et al. Imprinting and recalling cortical ensembles. Science, August 2016. DOI: 10.1126/science.aaf7560
]]>
Sat, 13 Aug 2016 23:54:28 +0400
<![CDATA[MIT’s DuoSkin turns temporary tattoos into on-skin interfaces]]>http://2045.com/news/35036.html35036Your next tattoo could be functional as well as aesthetic. A new MIT Media Lab product called DuoSkin created in partnership with Microsoft Research turns temporary tattoos into connected interfaces, letting them act as input for smartphones or computers, display output based on changes in body temperature and transmit data to other devices via NFC.

DuoSkin: Functional, stylish on-skin user interfaces from MIT Media Lab on Vimeo.

Cindy Hsin-Liu Kao, PhD Student at the MIT Media Lab, explains the origins of the project in the video above. Kao says that metallic jewelry-like temporary tattoos are a growing trend, providing a great opportunity for creating something that meshes with existing fashion while also adding genuinely useful functional capabilities. She notes that in Taiwan, there’s a “huge culture” of cosmetics and street fashion, which is affordable and accessible enough that “you can very easily change and edit your appearance whenever you want.” The DuoSkin team wanted to achieve the same thing with their technological twist on the tattoo trend.

As a result, the system is actually designed to be fairly inexpensive and easy to set up for just about anyone. It uses gold leaf, the same thing you'll occasionally find delicately flaked atop swanky desserts, for basic conductivity, but otherwise employs everyday crafting tools and materials like a vinyl cutter and temporary tattoo printing paper. You can use any desktop graphics creation software you like to design the circuit, then feed that design through the vinyl cutter, layer the gold leaf on top and apply it as you would a standard temporary tattoo. Small surface-mount electronic components, including NFC chips, complete the connectivity picture.

Researchers devised three different ways in which the DuoSkin tattoos could be used, including as input devices that can turn your skin into a trackpad, or a capacitive virtual control knob for adjusting volume on your connected device, for example. The tattoos can also display output, changing color based on your body temp like a Hypercolor T-shirt. Finally, they can contain data to be read by other devices, via NFC wireless communication. Kao also shows how they can contain embedded LEDs for on-skin light effects.
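
The data-storage mode is standard NFC: the tattoo's chip carries an NDEF message that any NFC-capable phone can read. A minimal sketch of encoding such a payload, using the ndeflib Python package (my choice for illustration; the team does not name its tooling):

    import ndef  # pip install ndeflib

    # A text record of the sort a DuoSkin NFC tattoo might carry.
    record = ndef.TextRecord("DuoSkin demo payload", "en")
    octets = b"".join(ndef.message_encoder([record]))
    print(len(octets), "bytes:", octets.hex())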

Kao ends by suggesting they’d like to see this tech come to tattoo parlours, so it’s easy for anyone to get connected ink. It’s definitely something that could further the use cases and value appeal of wearable tech as a category, especially among price sensitive customers who place a high value on aesthetics and don’t want to have to wear a watch or other more cumbersome piece of tech.

Startups like Inkbox are already working on material science advancements that extend the life of temporary tattoos, too, so there could be a collaboration opportunity down the road that gives people access to tattoo-based interfaces that don’t last forever, but don’t wash off overnight, either.

]]>
Sat, 13 Aug 2016 23:15:13 +0400
<![CDATA[How to Give Fake Hands Real Feeling]]>http://2045.com/news/35033.html35033In Zhenan Bao’s lab at Stanford, researchers are ­inventing materials for touch-sensitive prosthetics.

The human hand has 17,000 touch sensors that help us pick things up and connect us to the physical world. A prosthetic hand or foot has no feeling at all.

Zhenan Bao hopes to change that by wrapping prosthetics with electronic skin that can sense pressure, heal when cut, and process sensory data. It’s a critical step toward prosthetics that one day could be wired to the nervous system to deliver a sense of touch. Even before that is possible, soft yet grippy electronic skin would let amputees and burn victims do more everyday tasks like picking up delicate objects—and possibly help alleviate phantom-limb pain.

To mimic and in some ways surpass the capabilities of the skin on human hands, Bao is rethinking what an electronic material can be. Electronic skin should be not only sensitive to pressure but also lightweight, durable, stretchy, pliable, and self-healing, just like real skin. It should also be relatively inexpensive to manufacture in large sheets for wrapping around prosthetics. Traditional electronic materials are none of these things.

Bao (an MIT Technology Review Innovator Under 35 in 2003) has been working on electronic skin since 2010. She has had to create new chemical recipes for every electronic component, replacing rigid materials like silicon with flexible organic molecules, polymers, and nanomaterials.

Bao’s group uses stretchy rubber materials that are similar to human skin in the way they give and recover. Sometimes her team mixes electronic materials into the rubber; other times they build on top of it. To make a touch sensor, researchers mix in carbon that is electrically conductive. The voltage across this conductive rubber sheet changes when the material is pressed. Bao’s group found that covering these touch sensors with a pattern of microscale pyramids improves their touch sensitivity—much as the whorls of our fingerprints do. Depending on the design, these sensors can be made at least as sensitive as the skin on our hands. Her group also prints transistors, electrical leads, and other components on the rubbery skins to make stretchy circuits that could process data from touch sensors on a prosthetic hand.

Now Bao is working on weirder materials. One polymer she developed is much stretchier than human skin: it can be pulled to 100 times its normal length without breaking. This material also heals when cut, without any heat or other trigger. And it can act as a weak artificial muscle, expanding and contracting when an electric field is applied.

With the basic materials and designs in place, she’s working on semiconductors and other electronic materials that have the same healing and stretching prowess. But reinventing the electronic materials won’t be enough: data from these artificial skins has to be delivered to the nervous system in a format that the body can understand. Bao’s group is now working on circuit designs that will send signals to the nervous system, so that electronic skins will one day not only help amputees regain dexterity but also let them feel the touch of their loved ones.

]]>
Tue, 9 Aug 2016 16:27:40 +0400
<![CDATA[Sprinkling of neural dust opens door to electroceuticals]]>http://2045.com/news/35032.html35032University of California, Berkeley engineers have built the first dust-sized, wireless sensors that can be implanted in the body, bringing closer the day when a Fitbit-like device could monitor internal nerves, muscles or organs in real time.

Wireless, batteryless implantable sensors could improve brain control of prosthetics, avoiding wires that go through the skull. Video by Roxanne Makasdjian and Stephen McNally.

Because these batteryless sensors could also be used to stimulate nerves and muscles, the technology also opens the door to “electroceuticals” to treat disorders such as epilepsy or to stimulate the immune system or tamp down inflammation.

The so-called neural dust, which the team implanted in the muscles and peripheral nerves of rats, is unique in that ultrasound is used both to power and read out the measurements. Ultrasound technology is already well-developed for hospital use, and ultrasound vibrations can penetrate nearly anywhere in the body, unlike radio waves, the researchers say.

“I think the long-term prospects for neural dust are not only within nerves and the brain, but much broader,“ said Michel Maharbiz, an associate professor of electrical engineering and computer sciences and one of the study’s two main authors. “Having access to in-body telemetry has never been possible because there has been no way to put something supertiny superdeep. But now I can take a speck of nothing and park it next to a nerve or organ, your GI tract or a muscle, and read out the data.“

The sensor, 3 millimeters long and 1×1 millimeters in cross section, attached to a nerve fiber in a rat. Once implanted, the batteryless sensor is powered and the data read out by ultrasound. Ryan Neely photo.

Maharbiz, neuroscientist Jose Carmena, a professor of electrical engineering and computer sciences and a member of the Helen Wills Neuroscience Institute, and their colleagues will report their findings in the August 3 issue of the journal Neuron.

The sensors, which the researchers have already shrunk to a 1 millimeter cube – about the size of a large grain of sand – contain a piezoelectric crystal that converts ultrasound vibrations from outside the body into electricity to power a tiny, on-board transistor that is in contact with a nerve or muscle fiber. A voltage spike in the fiber alters the circuit and the vibration of the crystal, which changes the echo detected by the ultrasound receiver, typically the same device that generates the vibrations. The slight change, called backscatter, allows them to determine the voltage.

Motes sprinkled throughout the body

In their experiment, the UC Berkeley team powered up the passive sensors every 100 microseconds with six 540-nanosecond ultrasound pulses, which gave them a continual, real-time readout. They coated the first-generation motes – 3 millimeters long, 1 millimeter high and 4/5 millimeter thick – with surgical-grade epoxy, but they are currently building motes from biocompatible thin films which would potentially last in the body without degradation for a decade or more.
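
Those figures pin down the interrogation scheme: one ping every 100 microseconds is a 10 kHz readout rate, and six 540-nanosecond pulses per ping keep the transducer active only about 3 percent of the time. Worked out explicitly:

    interval_s = 100e-6   # time between interrogations
    pulse_s = 540e-9      # duration of one ultrasound pulse
    pulses = 6

    readout_rate_hz = 1 / interval_s                # 10,000 Hz
    duty_cycle = pulses * pulse_s / interval_s      # 0.0324, i.e. ~3.2%

    print(f"readout: {readout_rate_hz:.0f} Hz, duty cycle: {duty_cycle:.1%}")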

The sensor mote contains a piezoelectric crystal (silver cube) plus a simple electronic circuit that responds to the voltage across two electrodes to alter the backscatter from ultrasound pulses produced by a transducer outside the body. The voltage across the electrodes can be determined by analyzing the ultrasound backscatter. Ryan Neely photo.

While the experiments so far have involved the peripheral nervous system and muscles, the neural dust motes could work equally well in the central nervous system and brain to control prosthetics, the researchers say. Today’s implantable electrodes degrade within 1 to 2 years, and all connect to wires that pass through holes in the skull. Wireless sensors – dozens to a hundred – could be sealed in, avoiding infection and unwanted movement of the electrodes.

“The original goal of the neural dust project was to imagine the next generation of brain-machine interfaces, and to make it a viable clinical technology,” said neuroscience graduate student Ryan Neely. “If a paraplegic wants to control a computer or a robotic arm, you would just implant this electrode in the brain and it would last essentially a lifetime.”

In a paper published online in 2013, the researchers estimated that they could shrink the sensors down to a cube 50 microns on a side – about 2 thousandths of an inch, or half the width of a human hair. At that size, the motes could nestle up to just a few nerve axons and continually record their electrical activity.

“The beauty is that now, the sensors are small enough to have a good application in the peripheral nervous system, for bladder control or appetite suppression, for example,“ Carmena said. “The technology is not really there yet to get to the 50-micron target size, which we would need for the brain and central nervous system. Once it’s clinically proven, however, neural dust will just replace wire electrodes. This time, once you close up the brain, you’re done.“

The team is working now to miniaturize the device further, find more biocompatible materials and improve the surface transceiver that sends and receives the ultrasound, ideally using beam-steering technology to focus the sound waves on individual motes. They are now building little backpacks for rats to hold the ultrasound transceiver that will record data from implanted motes.

Diagram showing the components of the sensor. The entire device is covered in a biocompatible gel.

They’re also working to expand the motes’ ability to detect non-electrical signals, such as oxygen or hormone levels.

“The vision is to implant these neural dust motes anywhere in the body, and have a patch over the implanted site send ultrasonic waves to wake up and receive necessary information from the motes for the desired therapy you want,” said Dongjin Seo, a graduate student in electrical engineering and computer sciences. “Eventually you would use multiple implants and one patch that would ping each implant individually, or all simultaneously.”

Ultrasound vs radio

Maharbiz and Carmena conceived of the idea of neural dust about five years ago, but attempts to power an implantable device and read out the data using radio waves were disappointing. Radio attenuates very quickly with distance in tissue, so communicating with devices deep in the body would be difficult without using potentially damaging high-intensity radiation.

A sensor implanted on a peripheral nerve is powered and interrogated by an ultrasound transducer. The backscatter signal carries information about the voltage across the sensor’s two electrodes. The ‘dust’ mote was pinged every 100 microseconds with six 540-nanosecond ultrasound pulses.

Maharbiz hit on the idea of ultrasound, and in 2013 published a paper with Carmena, Seo and their colleagues describing how such a system might work. “Our first study demonstrated that the fundamental physics of ultrasound allowed for very, very small implants that could record and communicate neural data,” said Maharbiz. He and his students have now created that system.

“Ultrasound is much more efficient when you are targeting devices that are on the millimeter scale or smaller and that are embedded deep in the body,” Seo said. “You can get a lot of power into it and a lot more efficient transfer of energy and communication when using ultrasound as opposed to electromagnetic waves, which has been the go-to method for wirelessly transmitting power to miniature implants.”

“Now that you have a reliable, minimally invasive neural pickup in your body, the technology could become the driver for a whole gamut of applications, things that today don’t even exist,“ Carmena said.

Other co-authors of the Neuron paper are graduate student Konlin Shen, undergraduate Utkarsh Singhal and UC Berkeley professors Elad Alon and Jan Rabaey. The work was supported by the Defense Advanced Research Projects Agency of the Department of Defense.

]]>
Fri, 5 Aug 2016 22:43:07 +0400
<![CDATA[Bad news for Bob the Builder: Watch Hadrian X the robo-builder create an entire house in just two days ]]>http://2045.com/news/35030.html35030It can build an entire house in just two days - and never takes tea breaks.

An Australian firm has revealed the Hadrian X, a giant truck-mounted building robot that can lay 1,000 bricks an hour, gluing them into place.

It can work 24 hours a day, and finish an entire house in just two days. 

Mounted on the back of a truck, Hadrian X is simply driven onto a building site, and can put down 1,000 bricks an hour using a 30m boom, allowing it to stay in a single position while it builds.

Fastbrick, the firm behind it, says it could revolutionise building.

CEO Mike Pivac said: 'We are a frontier company, and we are one step closer to bringing fully automated, end-to-end 3D printing brick construction into the mainstream.'

The bricks travel along the boom and are gripped by a clawlike device that lays them out methodically, directed by a laser guiding system.

Mortar or adhesive is also delivered under pressure to the hand of the arm and applied to the brick, so no external human element is required.

'We're very excited to take the world-first technology we proved with the Hadrian 105 demonstrator and manufacture a state-of-the-art machine,' he added.

Instead of traditional cement, Hadrian X will use a construction glue.

'By utilising a construction adhesive rather than traditional mortar, the Hadrian X will maximise the speed of the build and the strength and thermal efficiency of the final structure,' the firm said.

The Hadrian X can handle different sized bricks, and also cuts, grinds and mills each brick to fit. 

The company describes the robots as '3D automated robotic bricklaying technology.'

]]>
Fri, 5 Aug 2016 21:49:30 +0400
<![CDATA[How bionic limbs are being made more affordable, with a hand from Deus Ex]]>http://2045.com/news/35026.html35026Earlier this year, Eidos Montreal announced a partnership with prosthetics specialist Open Bionics to create bionic arms inspired by the game franchise Deus Ex.

The human augmentations of Deus Ex made for an obvious tie-in, but the announcement also brought to light the work Open Bionics is doing to make prosthetics more affordable.

Open Bionics is using 3D printing to bring less expensive prosthetics to market, and will also be making its blueprints available royalty-free, letting people modify their own designs.

We went down to Open Bionics to talk to them about what they're doing to radically change the process of creating prosthetics, and how their partnership with Deus Ex has got them thinking about bionic limbs being used not just as medical products to replace limbs, but as additional devices to augment the body.

]]>
Thu, 28 Jul 2016 11:11:52 +0400
<![CDATA[Rise of the Surgical Robot and What Doctors Want]]>http://2045.com/news/35024.html35024They want robots to provide a way to feel the body’s tissue remotely.

Even though many doctors see need for improvement, surgical robots are poised for big gains in operating rooms around the world.

Within five years, one in three U.S. surgeries – more than double current levels – is expected to be performed with robotic systems, with surgeons sitting at computer consoles guiding mechanical arms. Companies developing new robots also plan to expand their use in India, China and other emerging markets.

Robotic surgery has long been dominated by pioneer Intuitive Surgical, which has more than 3,600 of its da Vinci machines in hospitals worldwide and said last week that the number of procedures using them jumped by 16% in the second quarter compared to a year earlier.

The anticipated future growth – and perceived weaknesses of the current generation of robots – is attracting deep-pocketed rivals, including Medtronic and a startup backed by Johnson & Johnson and Google.

Developers of the next wave aim to make the robots less expensive, more nimble and capable of performing more types of procedures, company executives and surgeons told Reuters.

Although surgical robots run an average of $1.5 million and entail ongoing maintenance expenses, insurers pay no more for surgeries that utilize the systems than for other types of minimally-invasive procedures, such as laparoscopy.

Still, most top U.S. hospitals for cancer treatment, urology, gynecology and gastroenterology have made the investment. The robots are featured prominently in hospital marketing campaigns aimed at attracting patients, and new doctors are routinely trained in their use.

Surgical robots are used in hernia repair, bariatric surgery, hysterectomies and the vast majority of prostate removals in the United States, according to Intuitive Surgical data.

Doctors say they reduce fatigue and give them greater precision.

But robot-assisted surgery can take more of the surgeon's time than traditional procedures, reducing the number of operations doctors can perform. That's turned off some surgeons, like Dr. Helmuth Billy.

Billy was an early adopter of Intuitive’s da Vinci system 15 years ago. But equipping its arms with instruments slowed him down. He rarely uses it now.

“I like to do five operations a day,” Billy said. “If I have to constantly dock and undock da Vinci, it becomes cumbersome.”

SURGEONS’ WISH LIST

To gain an edge, new robots will need to outperform laparoscopic surgery, said Dr. Dmitry Oleynikov, who heads a robotics task force for the Society of American Gastrointestinal and Endoscopic Surgeons.

Surgeons told Reuters they want robots to provide a way to feel the body’s tissue remotely, called haptic sensing, and better camera image quality.

New systems also will need to be priced low enough to entice hospitals and outpatient surgical centers that have not yet invested in a da Vinci, as well as convince those with established robotic programs to consider a second vendor or switching suppliers altogether.

“That is where competitors can differentiate,” said Vik Srinivasan of the Advisory Board Co, a research and consulting firm that advises hospitals.

Developers say they are paying attention. Verb Surgical, the J&J-Google venture that is investing about $250 million in its project, said creating a faster and easier-to-use system is a priority.

Verb also envisions a system that is “always there, always on,” enabling the surgeon to use the robot for parts of a procedure as needed, said Chief Executive Scott Huennekens.

Intuitive said it too is looking to improve technology at a reasonable cost, but newcomers will face the same challenges.

“As competitors come in, they are going to have to work within that same framework,” CEO Gary Guthart said in an interview.

Device maker Medtronic has said it expects to launch its surgical robot before mid-2018 and will start in India. Others developing surgical robots include TransEnterix and Canada's Titan Medical.

An RBC Capital Markets survey found that U.S. surgeons expect about 35% of operations will involve robots in five years, up from 15% today.

J&J, which hopes to be second to market with a product from Verb, has said it sees robotics as a multibillion-dollar market opportunity. Huennekens said Verb's surgical robot will differ from another Google robotics effort, the driverless car, in one important aspect.

“There will always be a surgeon there,” he said.

]]>
Thu, 28 Jul 2016 11:08:48 +0400
<![CDATA[Scientists program cells to remember and respond to series of stimuli]]>http://2045.com/news/35021.html35021Synthetic biology allows researchers to program cells to perform novel functions such as fluorescing in response to a particular chemical or producing drugs in response to disease markers. In a step toward devising much more complex cellular circuits, MIT engineers have now programmed cells to remember and respond to a series of events.

These cells can remember, in the correct order, up to three different inputs, but this approach should be scalable to incorporate many more stimuli, the researchers say. Using this system, scientists can track cellular events that occur in a particular order, create environmental sensors that store complex histories, or program cellular trajectories.

"You can build very complex computing systems if you integrate the element of memory together with computation," says Timothy Lu, an associate professor of electrical engineering and computer science and of biological engineering, and head of the Synthetic Biology Group at MIT's Research Laboratory of Electronics.

This approach allows scientists to create biological "state machines" -- devices that exist in different states depending on the identities and orders of inputs they receive. The researchers also created software that helps users design circuits that implement state machines with different behaviors, which can then be tested in cells.

Lu is the senior author of the new study, which appears in the 22 July issue of Science. Nathaniel Roquet, an MIT and Harvard graduate student, is the paper's lead author. Other authors on the paper include Scott Aaronson, an associate professor of electrical engineering and computer science, recent MIT graduate Ava Soleimany, and recent Wellesley College graduate Alyssa Ferris.

Long-term memory

In 2013, Lu and colleagues designed cell circuits that could perform a logic function and then store a memory of the event by encoding it in their DNA.

The state machine circuits that they designed in the new paper rely on enzymes called recombinases. When activated by a specific input in the cell, such as a chemical signal, recombinases either delete or invert a particular stretch of DNA, depending on the orientation of two DNA target sequences known as recognition sites. The stretch of DNA between those sites may contain recognition sites for other recombinases that respond to different inputs. Flipping or deleting those sites alters what will happen to the DNA if a second or third recombinase is later activated. Therefore, a cell's history can be determined by sequencing its DNA.

In the simplest version of this system, with just two inputs, there are five possible states for the circuit: states corresponding to neither input, input A only, input B only, A followed by B, and B followed by A. The researchers also designed and built circuits that record three inputs, in which 16 states are possible.
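
Those counts follow from enumerating ordered sequences of distinct inputs, including the empty history. A quick check in Python (illustrative only):

    from itertools import permutations

    def circuit_states(inputs):
        # Every state is an ordered sequence of distinct inputs seen so far.
        states = []
        for k in range(len(inputs) + 1):
            states.extend(permutations(inputs, k))
        return states

    print(len(circuit_states("AB")))   # 5 states for two inputs
    print(len(circuit_states("ABC")))  # 16 states for three inputs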

For this study, the researchers programmed E. coli cells to respond to substances commonly used in lab experiments, including ATc (an analogue of the antibiotic tetracycline), a sugar called arabinose, and a chemical called DAPG. However, for medical or environmental applications, the recombinases could be re-engineered to respond to other conditions such as acidity or the presence of specific transcription factors (proteins that control gene expression).

Gene control

After creating circuits that could record events, the researchers then incorporated genes into the array of recombinase binding sites, along with genetic regulatory elements. In these circuits, when recombinases rearrange the DNA, the circuits not only record information but also control which genes get turned on or off.

The researchers tested this approach with three genes that code for different fluorescent proteins -- green, red, and blue -- constructing a circuit that expressed a different combination of the fluorescent proteins for each identity and order of two inputs. For example, when cells carrying this circuit received input A followed by input B, they fluoresced red and green, while cells that received B before A fluoresced red and blue.

Lu's lab now hopes to use this approach to study cellular processes that are controlled by a series of events, such as the appearance of cytokines or other signaling molecules, or the activation of certain genes.

"This idea that we can record and respond to not just combinations of biological events but also their orders opens up a lot of potential applications. A lot is known about what factors regulate differentiation of specific cell types or lead to the progression of certain diseases, but not much is known about the temporal organization of those factors. That's one of the areas we hope to dive into with our device," Roquet says.

For example, scientists could use this technique to follow the trajectory of stem cells or other immature cells into differentiated, mature cell types. They could also follow the progression of diseases such as cancer. A recent study has shown that the order in which cancer-causing mutations are acquired can determine the behavior of the disease, including how cancer cells respond to drugs and develop into tumors. Furthermore, engineers could use the state machine platform developed here to program cell functions and differentiation pathways.

Story Source:

The above post is reprinted from materials provided by Massachusetts Institute of Technology. The original item was written by Anne Trafton. Note: Materials may be edited for content and length.

Journal Reference:

  1. Nathaniel Roquet, Ava P. Soleimany, Alyssa C. Ferris, Scott Aaronson, Timothy K. Lu. Synthetic recombinase-based state machines in living cells. Science, 2016 DOI: 10.1126/science.aad8559
]]>
Mon, 25 Jul 2016 21:06:30 +0400
<![CDATA[Tiny robot unfolds in stomach, goes to work]]>http://2045.com/news/35020.html35020Researchers at the Massachusetts Institute of Technology are designing an ingestible robot that could patch wounds, deliver medicine or dislodge a foreign object. They call their experiment an "origami robot" because the accordion-shaped gadget gets folded up and frozen into an ice capsule.

"You swallow the robot, and when it gets to your stomach the ice melts and the robot unfolds," said Daniela Rus, a professor who directs MIT's Computer Science and Artificial Intelligence Laboratory. "Then, we can direct it to a very precise location."

The device is still a long way from being deployed in a human or an animal. In the meantime, the researchers have created an artificial stomach made of silicone to test it.

Rus said one of the robot's most important missions could be to save the lives of children who swallow the disc-shaped button batteries that increasingly power electronic devices. If swallowed, the battery can quickly burn through the stomach lining and be fatal.

The robots could seek out and capture the battery before it causes too much damage, pushing it down through the gastrointestinal tract and out of the body.

The robot's flexible frame is biodegradable, made of the same dried pig intestine used for sausage casing. The researchers scoured markets in Boston's Chinatown before finding the right material to build an agile robot body that could dissolve once its mission was accomplished.

"They tried rice paper and sugar paper and hydrogel paper, all sorts of different materials," Rus said. "We found that sausage casing has the best properties when it comes to folding and unfolding and controllability."

Embedded in its meaty body -- it wouldn't be hard to make a kosher version, Rus said -- is a neodymium magnet that looks like a tiny metal cube.

Magnetic forces control its movement. Researchers use remote-control joysticks to change the magnetic field, allowing the robot to slip and crawl through the stomach on the way to the object it is trying to retrieve or the wound where it must deliver drugs.

Would it hurt to ingest a robot? Probably not, said research team member Steven Guitron, an MIT graduate student in mechanical engineering.

"I'm sure if you swallowed an ice cube accidentally, it's very similar," he said.

MIT's team has a patent pending and presented its research at a robotics conference in Sweden this spring. Rus said medical companies have expressed interest in clinical applications, which require going through the regulatory process of conducting animal and human studies.

"It's a nifty idea," but it could be a decade or so before hospitals could use such a device, said William Messner, a professor of mechanical engineering at Tufts University in Massachusetts who is not involved with the project. He said it could also have promise in performing biopsies.

The U.S. Food and Drug Administration "has to get involved with anything like this and they're rightfully very careful about any kind of medical instrument," Messner said. "The big problem is: What if it gets stuck? Now you've really got a problem."

The multidisciplinary project fits into the growing field of soft robotics that coalesced with the 2013 founding of the peer-reviewed Soft Robotics Journal, based at Tufts. The Boston region is a hub for research into the moving machines made of flexible materials that can change shape and size, making them useful for surgery and other complex environments.

]]>
Mon, 25 Jul 2016 21:04:44 +0400
<![CDATA[Patch that delivers drug, gene, and light-based therapy to tumor sites shows promising results]]>http://2045.com/news/35025.html35025Approximately one in 20 people will develop colorectal cancer in their lifetime, making it the third-most prevalent form of the disease in the U.S. In Europe, it is the second-most common form of cancer.

The most widely used first line of treatment is surgery, but this can result in incomplete removal of the tumor. Cancer cells can be left behind, potentially leading to recurrence and increased risk of metastasis. Indeed, while many patients remain cancer-free for months or even years after surgery, tumors are known to recur in up to 50 percent of cases.

Conventional therapies used to prevent tumors recurring after surgery do not sufficiently differentiate between healthy and cancerous cells, leading to serious side effects.

In a paper published today in the journal Nature Materials, researchers at MIT describe an adhesive patch that can stick to the tumor site, either before or after surgery, to deliver a triple-combination of drug, gene, and photo (light-based) therapy.

Releasing this triple combination therapy locally, at the tumor site, may increase the efficacy of the treatment, according to Natalie Artzi, a principal research scientist at MIT’s Institute for Medical Engineering and Science (IMES) and an assistant professor of medicine at Brigham and Women’s Hospital, who led the research.

The general approach to cancer treatment today is the use of systemic, or whole-body, therapies such as chemotherapy drugs. But the lack of specificity of anticancer drugs means they produce undesired side effects when systemically administered.

What’s more, only a small portion of the drug reaches the tumor site itself, meaning the primary tumor is not treated as effectively as it should be.

Indeed, recent research in mice has found that only 0.7 percent of nanoparticles administered systemically actually found their way to the target tumor.

“This means that we are treating both the source of the cancer — the tumor — and the metastases resulting from that source, in a suboptimal manner,” Artzi says. “That is what prompted us to think a little bit differently, to look at how we can leverage advancements in materials science, and in particular nanotechnology, to treat the primary tumor in a local and sustained manner.”

The researchers have developed a triple-therapy hydrogel patch, which can be used to treat tumors locally. This is particularly effective as it can treat not only the tumor itself but any cells left at the site after surgery, preventing the cancer from recurring or metastasizing in the future.

First, the patch contains gold nanorods, which heat up when near-infrared radiation is applied to the local area. This heat is used to thermally ablate, or destroy, the tumor.

These nanorods are also equipped with a chemotherapy drug, which is released when they are heated, to target the tumor and its surrounding cells.

Finally, gold nanospheres that do not heat up in response to the near-infrared radiation are used to deliver RNA-based gene therapy to the site, in order to silence an important oncogene in colorectal cancer. Oncogenes are genes that can cause healthy cells to transform into tumor cells.

The researchers envision that a clinician could remove the tumor, and then apply the patch to the inner surface of the colon, to ensure that no cells that are likely to cause cancer recurrence remain at the site. As the patch degrades, it will gradually release the various therapies.

The patch can also serve as a neoadjuvant, a therapy designed to shrink tumors prior to their resection, Artzi says.

When the researchers tested the treatment in mice, they found that in 40 percent of cases where the patch was not applied after tumor removal, the cancer returned.

But when the patch was applied after surgery, the treatment resulted in complete remission.

Indeed, even when the tumor was not removed, the triple-combination therapy alone was enough to destroy it.

The technology is an extraordinary and unprecedented synergy of three concurrent modalities of treatment, according to Mauro Ferrari, president and CEO of the Houston Methodist Research Institute, who was not involved in the research.

“What is particularly intriguing is that by delivering the treatment locally, multimodal therapy may be better than systemic therapy, at least in certain clinical situations,” Ferrari says.

Unlike existing colorectal cancer surgery, this treatment can also be applied in a minimally invasive manner. In the next phase of their work, the researchers hope to move to experiments in larger models, in order to use colonoscopy equipment not only for cancer diagnosis but also to inject the patch to the site of a tumor, when detected.

“This administration modality would enable, at least in early-stage cancer patients, the avoidance of open field surgery and colon resection,” Artzi says. “Local application of the triple therapy could thus improve patients’ quality of life and therapeutic outcome.”

Artzi is joined on the paper by João Conde, Nuria Oliva, and Yi Zhang, of IMES. Conde is also at Queen Mary University of London.

]]>
Mon, 25 Jul 2016 11:09:51 +0400
<![CDATA[Human brain mapped in unprecedented detail]]>http://2045.com/news/35017.html35017Think of a spinning globe and the patchwork of countries it depicts: such maps help us to understand where we are, and that nations differ from one another. Now, neuroscientists have charted an equivalent map of the brain’s outermost layer — the cerebral cortex — subdividing each hemisphere's mountain- and valley-like folds into 180 separate parcels.

Ninety-seven of these areas have never previously been described, despite showing clear differences in structure, function and connectivity from their neighbours. The new brain map is published today in Nature.

Each discrete area on the map contains cells with similar structure, function and connectivity. But these areas differ from each other, just as different countries have well-defined borders and unique cultures, says David Van Essen, a neuroscientist at Washington University Medical School in St Louis, Missouri, who supervised the study.

Neuroscientists have long sought to divide the brain into smaller pieces to better appreciate how it works as a whole. One of the best-known brain maps chops the cerebral cortex into 52 areas based on the arrangement of cells in the tissue. More recently, maps have been constructed using magnetic resonance imaging (MRI) techniques — such as functional MRI, which measures the flow of blood in response to different mental tasks.

Yet until now, most such maps have been based on a single type of measurement. That can provide an incomplete or even misleading view of the brain's inner workings, says Thomas Yeo, a computational neuroscientist at the National University of Singapore. The new map is based on multiple MRI measurements, which Yeo says “greatly increases confidence that they are producing the best in vivo estimates of cortical areas”.

Divide and conquer

To construct the map, a team led by neuroscientist Matthew Glasser at Washington University Medical School used imaging data collected from 210 healthy young adults participating in the Human Connectome Project, a US government-funded initiative to map the brain’s structural and functional connections. The information included measurements of cortical thickness; brain function; connectivity between regions; topographic organization of cells in brain tissue; and levels of myelin — a fatty substance that speeds up neural signalling.

Glasser looked for areas in the cerebral cortex where he saw significant changes in two or more properties, and used these to delineate borders on the map. “If you crawl along the cortical surface, at some point you are going to get to a location where the properties start changing, and where multiple independent properties change in the same place,” he says.
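
The border-finding idea lends itself to a simple illustration. The Python sketch below is a toy version under assumed inputs: it flags locations where the spatial gradients of two or more independent property maps are steep in the same place. The function names, threshold and grid are invented for illustration and are not the Human Connectome Project's actual pipeline.

    import numpy as np

    def gradient_magnitude(feature_map):
        # Spatial rate of change of one property (e.g. myelin or thickness).
        gy, gx = np.gradient(feature_map)
        return np.hypot(gx, gy)

    def candidate_borders(feature_maps, threshold=0.5, min_agreeing=2):
        # Normalise each gradient map to [0, 1] so properties are comparable.
        grads = [gradient_magnitude(fm) for fm in feature_maps]
        grads = [g / g.max() for g in grads]
        # A border candidate is anywhere at least `min_agreeing` independent
        # properties change sharply at the same location.
        agree = sum(g > threshold for g in grads)
        return agree >= min_agreeing

    # Toy inputs: three 100 x 100 "property maps" on a flattened cortical patch.
    rng = np.random.default_rng(0)
    maps = [rng.random((100, 100)) for _ in range(3)]
    print(candidate_borders(maps).sum(), "grid points flagged as borders")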

The technique confirmed the existence of 83 previously reported brain areas and identified 97 new ones. Scientists tested their map by looking for these regions in the brains of 210 additional people. They found that the map was accurate, but that the size of the areas in it varied from person to person. These differences may reveal new insights into individual variability in cognitive ability and disease risk.

Limited view

“While the focus of this work was on creating a beautiful, reliable, average brain template, it really opens up the possibility to further explore the unique intersection of individual talents with intellectual and creative abilities — the things that make us uniquely human,” says Rex Jung, a neuropsychologist at the University of New Mexico in Albuquerque.

But the map is limited in some important ways. For one, it reveals little about the biochemical underpinnings of the brain — or about the activity of single neurons or small groups. “It is analogous to having a fantastic Google Earth map of your neighbourhood, down to your individual back yard,” says Jung. “Yet, you cannot really see how your neighbours are moving around, where they are going or what sort of jobs they have.”

“We’re thinking of this as version 1.0,” says Glasser. “That doesn’t mean it’s the final version, but it’s a far better map than the ones we’ve had before.”

]]>
Fri, 22 Jul 2016 09:58:28 +0400
<![CDATA[Researchers build a crawling robot from sea slug parts and a 3-D printed body]]>http://2045.com/news/35016.html35016Researchers at Case Western Reserve University have combined tissues from a sea slug with flexible 3-D printed components to build "biohybrid" robots that crawl like sea turtles on the beach.

Muscle from the slug's mouth provides the movement, which is currently controlled by an external electrical field. However, future iterations of the device will include ganglia, bundles of neurons and nerves that normally conduct signals to the muscle as the slug feeds, as an organic controller.

The researchers also manipulated collagen from the slug's skin to build an organic scaffold to be tested in new versions of the robot.

In the future, swarms of biohybrid robots could be released for such tasks as locating the source of a toxic leak in a pond that would send animals fleeing, the scientists say. Or they could search the ocean floor for a black box flight data recorder, a potentially long process that may leave current robots stilled with dead batteries.

"We're building a living machine—a biohybrid robot that's not completely organic—yet," said Victoria Webster, a PhD student who is leading the research. Webster will discuss mining the sea slug for materials and constructing the hybrid, which is a little under 2 inches long, at the Living Machines conference in Edinburgh, Scotland, this week.

Webster worked with Roger Quinn, the Arthur P. Armington Professor of Engineering and director of Case Western Reserve's Biologically Inspired Robotics Laboratory; Hillel Chiel, a biology professor who has studied the California sea slug for decades; Ozan Akkus, professor of mechanical and aerospace engineering and director of the CWRU Tissue Fabrication and Mechanobiology Lab; Umut Gurkan, head of the CWRU Biomanufacturing and Microfabrication Laboratory; undergraduate researchers Emma L. Hawley and Jill M. Patel; and recent master's graduate Katherine J. Chapin.

By combining materials from the California sea slug, Aplysia californica, with three-dimensional printed parts, "we're creating a robot that can manage different tasks than an animal or a purely manmade robot could," Quinn said.

The researchers chose the sea slug because the animal is durable down to its cells, withstanding substantial changes in temperature, salinity and more as Pacific Ocean tides shift its environment between deep water and shallow pools. Compared to mammal and bird muscles, which require strictly controlled environments to operate, the slug's are much more adaptable.

For the searching tasks, "we want the robots to be compliant, to interact with the environment," Webster said. "One of the problems with traditional robotics, especially on the small scale, is that actuators—the units that provide movement—tend to be rigid."

Muscle cells are compliant and also carry their own fuel source—nutrients in the medium around them. Because they're soft, they're safer for operations than nuts-and-bolts actuators and have a much higher power-to-weight ratio, Webster said.

The researchers originally tried using muscle cells but changed to using the entire I2 muscle from the mouth area, or buccal mass. "The muscle already had the optimal structure and form to provide the function and strength needed," Chiel said.

Akkus said, "When we integrate the muscle with its natural biological structure, it's hundreds to 1,000 times better."

In their first robots, the buccal muscle, which naturally has two "arms," is connected to the robot's printed polymer arms and body. The robot moves when the buccal muscle contracts and releases, swinging the arms back and forth. In early testing, the bot pulled itself about 0.4 centimeters per minute.

To control movement, the scientists are turning to the animal's own ganglia. They can use either chemical or electrical stimuli to induce the nerves to contract the muscle.

"With the ganglia, the muscle is capable of much more complex movement, compared to using a manmade control, and it's capable of learning," Webster said.

The team hopes to train ganglia to move the robot forward in response to one signal and backward in response to a second.

With the goal of making a completely organic robot, Akkus' lab gelled collagen from the slug's skin and also used electrical currents to align and compact collagen threads together, to build a lightweight, flexible, yet strong scaffold.

The team is preparing to test organic versions as well as new geometries for the body, designed to produce more efficient movement.

If completely organic robots prove workable, the researchers say, a swarm released at sea, in a pond or on a remote piece of land won't be much of a worry if they can't be recovered. They're likely to be inexpensive, and rather than polluting the location with metals and battery chemicals, they will be eaten or degrade into compost.

]]>
Mon, 18 Jul 2016 09:12:39 +0400
<![CDATA[Inside Facebook’s Artificial Intelligence Engine Room]]>http://2045.com/news/35015.html35015Access Facebook from the western half of North America and there’s a good chance your data will be pulled from a computer cooled by the juniper- and sage-scented air of central Oregon’s high desert.

In the town of Prineville, home to roughly 9,000 people, Facebook stores the data of hundreds of millions more. Rows and rows of computers stand inside four giant buildings totaling nearly 800,000 square feet, precisely aligned to let in the dry and generally cool summer winds that blow in from the northwest. The aisles of stacked servers with blinking blue and green lights make a dull roar as they process logins, likes, and LOLs.

[Photo caption: Facebook's new high-powered servers, built to help its artificial intelligence researchers move faster, are powered by GPU chips made by Nvidia.]

Facebook has lately added some new machines to the mix in Prineville. The company has installed new, high-powered servers designed to speed up efforts to train software to do things like translate posts between languages, be a smarter virtual assistant, or follow written narratives.

Facebook’s new Big Sur servers are designed around high-powered processors of a kind originally developed for graphics processing, known as GPUs. These chips underpin recent leaps in artificial intelligence technology that have come from a technique known as deep learning. Software has become strikingly better at understanding images and speech thanks to the power of GPUs allowing old ideas about how to train software to be applied to much larger, more complex data sets (see “Teaching Machines to Understand Us”).

Kevin Lee, an engineer at Facebook who works on the servers, says they help Facebook’s researchers train software using more data, by working faster. “These servers are purpose-built hardware for AI research and machine learning,” he says. “GPUs can take a photo and split it into tiny pieces and work on them all at once.”
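
Lee's photo example is the classic data-parallel pattern. As a rough illustration only, the NumPy sketch below expresses per-pixel work as one vectorized operation; on a GPU, each of those pixel-sized pieces would be handled by its own thread at the same time. The image and coefficients are stand-ins, not Facebook code.

    import numpy as np

    photo = np.random.rand(1080, 1920, 3)        # a stand-in RGB photo
    luma = np.array([0.299, 0.587, 0.114])       # standard grayscale weights

    # One expression touches all ~2 million pixels; a GPU would assign each
    # pixel to its own thread and process them simultaneously.
    grayscale = photo @ luma
    edges = np.abs(np.diff(grayscale, axis=1))   # simple horizontal gradient
    print(grayscale.shape, edges.shape)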

Facebook builds each Big Sur server around eight GPUs made by Nvidia, the leading supplier of such chips. Lee declined to say exactly how many of the servers have been deployed but said the company has “thousands” of GPUs at work. Big Sur servers have been installed in the company’s Prineville and Ashburn, Virginia, data centers.

Because GPUs are extremely power hungry, Facebook has to pack them less densely than it does other types of server in the data center, to avoid creating hot spots that would make things harder for the cooling system and require extra power. Eight Big Sur servers are stacked into a seven-foot-tall rack that might otherwise hold 30 standard Facebook servers that do the more routine work of serving up user data.
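
The packing trade-off is easy to put in rough numbers. The wattages below are invented purely for illustration (the article gives no figures), but they show why a rack of GPU servers dissipates far more heat than the same rack filled with standard servers.

    # All wattages are made-up assumptions for illustration only.
    big_sur_server_w = 8 * 300 + 600     # eight ~300 W GPUs plus host overhead
    standard_server_w = 400              # a typical non-GPU server

    print(8 * big_sur_server_w)          # 24000 W: 8 Big Sur servers per rack
    print(30 * standard_server_w)        # 12000 W: 30 standard servers per rack
    # Roughly twice the heat in the same rack, hence the sparser packing.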

Facebook is far from alone in running giant data centers or collecting GPUs to power machine learning research. Microsoft, Google, and Chinese search company Baidu have all relied on GPUs to power deep learning research.

Facebook’s new servers for artificial intelligence research, inside the company’s data center in Prineville, Oregon.

The social network is unusual in that it has opened up the designs for Big Sur and its other servers, as well as the plans for its Prineville data center. The company contributes them to a nonprofit called the Open Compute Project, started by Facebook in 2011 to encourage computing companies to work together on designs for low-cost, high-efficiency data center hardware. The project is seen as having helped Asian hardware companies while squeezing traditional vendors such as Dell and HP.

Facebook’s director of AI research, Yann LeCun, said when Big Sur was announced earlier this year that he believed making the designs available could accelerate progress in the field by enabling more organizations to build powerful machine learning infrastructure (see “Facebook Joins Stampede of Tech Giants Giving Away Artificial Intelligence Technology”).

Future machine learning servers built on Facebook’s plans may not be built around the GPUs at their heart today, though. Multiple companies are working on new chip designs more specifically tailored to the math of deep learning than GPUs.

Google announced in May that it had started using a chip of its own design, called a TPU, to power deep learning software in products such as speech recognition. The current chip appears to be suited to running algorithms after they have been trained, not the initial training step that Big Sur servers are designed to expedite, but Google is working on a second-generation chip. Nvidia and several startups including Nervana Systems are also working on chips customized for deep learning (see “Intel Outside As Other Companies Prosper from AI Chips”).

Eugenio Culurciello, an associate professor at Purdue University, says that the usefulness of deep learning means such chips look sure to be very widely used. “There’s been a big need for a while and it’s only growing,” he says.

Asked whether Facebook was working on its own custom chips, Lee says the company is “looking into it.”

]]>
Fri, 15 Jul 2016 20:26:38 +0400
<![CDATA[Artificial intelligence: can we control it?]]>http://2045.com/news/35014.html35014It is the world’s greatest opportunity, and its greatest threat, believes Oxford philosopher Nick Bostrom

Scientists reckon there have been at least five mass extinction events in the history of our planet, when a catastrophically high number of species were wiped out in a relatively short period of time. We are possibly now living through a sixth — caused by human activity. But could humans themselves be next?

This is the sort of question that preoccupies the staff at the Future of Humanity Institute in Oxford. The centre is an offshoot of the Oxford Martin School, founded in 2005 by the late James Martin, a technology author and entrepreneur who was among the university’s biggest donors. As history has hit the fast-forward button, it seems to have become the fashion among philanthropists to endow research institutes that focus on the existential challenges of our age, and this is one of the most remarkable.

Tucked away behind Oxford’s modern art museum, the institute is in a bland modern office block; the kind of place you might expect to find a provincial law firm. The dozen or so mathematicians, philosophers, computer scientists and engineers who congregate here spend their days thinking about how to avert catastrophes: meteor strikes, nuclear winter, environmental destruction, extraterrestrial threats. On the afternoon I visit there is a fascinating and (to me) largely unfathomable seminar on the mathematical probability of alien life.

Presiding over this extraordinary institute since its foundation has been Professor Nick Bostrom, who, in his tortoise-shell glasses and grey, herringbone jacket, appears a rather ordinary academic, even if his purple socks betray a streak of flamboyance. His office resembles an Ikea showroom, brilliantly lit by an array of floor lamps, somewhat redundant on a glorious sunny day. He talks in a kind of verbal origami, folding down the edges of his arguments with precision before revealing his final, startling conclusions. The slightest monotonal accent betrays his Swedish origins.

 … 

Bostrom makes it clear that he and his staff are not interested in everyday disasters; they deal only with the big stuff: “There are a lot of things that can go and have gone wrong throughout history — earthquakes and wars and plagues and whatnot. But there is one kind of thing that has not ever gone wrong; we have never, so far, permanently destroyed the entire future.”

Anticipating the obvious next question, Bostrom argues that it is fully justified to devote resources to studying such threats because, even if they are remote, the downside is so terrible. Staving off future catastrophes (assuming that is possible) would bring far more benefit to far greater numbers of people than solving present-day problems such as cancer or extreme poverty. The number of lives saved in the future would be many times greater, particularly if “Earth civilisation”, as he calls it, spreads to other stars and galaxies. “We have a particular interest in future technologies that might potentially transform the human condition in some fundamental way,” he says.

So what tops the institute’s list of existential threats? A man-made one: that rapidly advancing research into artificial intelligence might lead to a runaway “superintelligence” which could threaten our survival. The 43-year-old philosopher is himself a notable expert on AI and the author of Superintelligence, a startling and controversial book that discusses what he describes as “quite possibly the most important and most daunting challenge humanity has ever faced.”

“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct,” he wrote.

Whole article...

]]>
Fri, 15 Jul 2016 20:22:52 +0400
<![CDATA[Robots come to each other's aid when they get the signal]]>http://2045.com/news/35013.html35013Sometimes all it takes to get help from someone is to wave at them, or point. Now the same is true for robots. Researchers at KTH Royal Institute of Technology in Sweden have completed work on an EU project aimed at enabling robots to cooperate with one another on complex jobs, by using body language.

Dimos Dimarogonas, an associate professor at KTH and project coordinator for RECONFIG, says the research project has developed protocols that enable robots to ask for help from each other and to recognize when other robots need assistance—and change their plans accordingly.

"Robots can stop what they're doing and go over to assist another robot which has asked for help," Dimarogonas says. "This will mean flexible and dynamic robots that act much more like humans—robots capable of constantly facing new choices and that are competent enough to make decisions."

As autonomous machines take on more responsibilities, they are bound to encounter tasks that are too big for a single robot. Shared work could include lending an extra hand to lift and carry something, or holding an object in place, but Dimarogonas says the concept can be scaled up to include any number of functions in a home, a factory or other kinds of workplaces.

The project was completed in May 2016, with project partners at Aalto University in Finland, the National Technical University of Athens in Greece, and the École Centrale Paris in France.

In a series of filmed presentations, the researchers demonstrate the newfound abilities of several off-the-shelf autonomous machines, including NAO robots. One video shows a robot pointing out an object to another robot, conveying the message that it needs the robot to lift the item.

Dimarogonas says that common perception among the robots is one key to this collaborative work.

"The visual feedback that the robots receive is translated into the same symbol for the same object," he says. "With updated vision technology they can understand that one object is the same from different angles. That is translated to the same symbol one layer up to the decision-making—that it is a thing of interest that we need to transport or not. In other words, they have perceptual agreement."

In another demonstration two robots carry an object together. One leads the other, which senses what the lead robot wants by the force it exerts on the object, he says.

"It's just like if you and I were carrying a table and I knew where it had to go," he says. "You would sense which direction I wanted to go by the way I turn and push, or pull."

The important point is that all of these actions take place without human interaction or help, he says.

"This is done in real time, autonomously," he says. The project also uses a novel communication protocol that sets it apart from other collaborative robot concepts. "We minimize communication. There is a symbolic communication protocol, but it's not continuous. When help is needed, a call for help is broadcast and a helper robot brings the message to another robot. But it's a single shot."

]]>
Fri, 15 Jul 2016 20:19:49 +0400
<![CDATA[Georgia Tech's DURUS robot has a more natural human-like stride]]>http://2045.com/news/35009.html35009Last time we saw the DURUS robot walking like a human, it was still doing so relatively flat-footed. The folks at Georgia Tech's AMBER-Lab have improved the robot's movements to incorporate even more human-like heel strikes and push-offs. The new range of motion gives DURUS a more natural stride, and the ability to wear some sweet sneakers. Until about a week ago, the robot shuffled along flat-footed before getting a pair of new metal feet with arched soles. After some tweaking of the algorithms and a few falls, DURUS now strides like the rest of us.

"Our robot is able to take much longer, faster steps than its flat-footed counterparts because it's replicating human locomotion," said director Georgia Tech's lab and engineering professor Aaron Ames. He explained that the new behavior makes strides towards the eventual goal of having DURUS walk outdoors.

DURUS has springs between its ankles and feet that act like elastic tendons in humans. The springs allow the robot to store mechanical energy from the heel strike to be used when the toe pushes off the ground. As you might expect, this makes the system very efficient: DURUS walks with a cost of transport of 1.4, a common measure of locomotion efficiency. Compare that to the 3.0 cost of transport for other humanoid robots and you can see the kinds of upgrades Ames and the students at Georgia Tech are making. Ames also said that updates like this one to DURUS could mean big improvements to robotic devices like prostheses and exoskeletons.
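
Cost of transport is a dimensionless ratio: the power a walker consumes divided by its weight times its speed, so lower is better. A minimal sketch, with made-up mass, speed and power values (DURUS's published specifications are not in the article):

    # Cost of transport: CoT = P / (m * g * v). All numbers below are
    # illustrative assumptions, not DURUS's actual specifications.
    g = 9.81  # gravitational acceleration, m/s^2

    def cost_of_transport(power_w, mass_kg, speed_m_s):
        return power_w / (mass_kg * g * speed_m_s)

    # A hypothetical 80 kg biped walking at 0.6 m/s while drawing 660 W:
    print(round(cost_of_transport(660, 80, 0.6), 2))   # -> 1.4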

]]>
Tue, 12 Jul 2016 23:29:04 +0400
<![CDATA[DNA origami lights up a microscopic glowing Van Gogh]]>http://2045.com/news/35007.html35007Using folded DNA to precisely place glowing molecules within microscopic light resonators, researchers at Caltech have created one of the world's smallest reproductions of Vincent van Gogh's The Starry Night. The reproduction and the technique used to create it are described in a paper published in the advance online edition of the journal Nature on July 11.

The monochrome image -- just the width of a dime across -- was a proof-of-concept project that demonstrated, for the first time, how the precision placement of DNA origami can be used to build chip-based devices like computer circuits at smaller scales than ever before.

DNA origami, developed 10 years ago by Caltech's Paul Rothemund (BS '94), is a technique that allows researchers to fold a long strand of DNA into any desired shape. The folded DNA then acts as a scaffold onto which researchers can attach and organize all kinds of nanometer-scale components, from fluorescent molecules to electrically conductive carbon nanotubes to drugs.

"Think of it a bit like the pegboards people use to organize tools in their garages, only in this case, the pegboard assembles itself from DNA strands and the tools likewise find their own positions," says Rothemund, research professor of bioengineering, computing and mathematical sciences, and computation and neural systems. "It all happens in a test tube without human intervention, which is important because all of the parts are too small to manipulate efficiently, and we want to make billions of devices."

The process has the potential to influence a variety of applications from drug delivery to the construction of nanoscale computers. But for many applications, organizing nanoscale components to create devices on DNA pegboards is not enough; the devices have to be wired together into larger circuits and need to have a way of communicating with larger-scale devices.

One early approach was to make electrodes first, and then scatter devices randomly on a surface, with the expectation that at least a few would land where desired, a method Rothemund describes as "spray and pray."

In 2009, Rothemund and colleagues at IBM Research first described a technique through which DNA origami can be positioned at precise locations on surfaces using electron-beam lithography to etch sticky binding sites that have the same shape as the origami. For example, triangular sticky patches bind triangularly folded DNA.

Over the last seven years, Rothemund and Ashwin Gopinath, senior postdoctoral scholar in bioengineering at Caltech, have refined and extended this technique so that DNA shapes can be precisely positioned on almost any surface used in the manufacture of computer chips. In the Nature paper, they report the first application of the technique -- using DNA origami to install fluorescent molecules into microscopic light sources.

"It's like using DNA origami to screw molecular light bulbs into microscopic lamps," Rothemund says.

In this case, the lamps are microfabricated structures called photonic crystal cavities (PCCs), which are tuned to resonate at a particular wavelength of light, much like a tuning fork vibrates with a particular pitch. Created within a thin glass-like membrane, a PCC takes the form of a bacterium-shaped defect within an otherwise perfect honeycomb of holes.

"Depending on the exact size and spacing of the holes, a particular wavelength of light reflects off the edge of the cavity and gets trapped inside," says Gopinath, the lead author of the study. He built PCCs that are tuned to resonate at around 660 nanometers, the wavelength corresponding to a deep shade of the color red. Fluorescent molecules tuned to glow at a similar wavelength light up the lamps -- provided they stick to exactly the right place within the PCC.

"A fluorescent molecule tuned to the same color as a PCC actually glows more brightly inside the cavity, but the strength of this coupling effect depends strongly on the molecule's position within the cavity. A few tens of nanometers is the difference between the molecule glowing brightly, or not at all," Gopinath says.

By moving DNA origami through the PCCs in 20-nanometer steps, the researchers found that they could map out a checkerboard pattern of hot and cold spots, where the molecular light bulbs either glowed weakly or strongly. As a result, they were able to use DNA origami to position fluorescent molecules to make lamps of varying intensity. Similar structures have been proposed to power quantum computers and for use in other optical applications that require many tiny light sources integrated together on a single chip.

"All previous work coupling light emitters to PCCs only successfully created a handful of working lamps, owing to the extraordinary difficulty of reproducibly controlling the number and position of emitters in a cavity," Gopinath says. To prove their new technology, the researchers decided to scale-up and provide a visually compelling demonstration. By creating PCCs with different numbers of binding sites, Gopinath was able to reliably install any number from zero to seven DNA origami, allowing him to digitally control the brightness of each lamp. He treated each lamp as a pixel with one of eight different intensities, and produced an array of 65,536 of the PCC pixels (a 256 x 256 pixel grid) to create a reproduction of Van Gogh's "The Starry Night."

Now that the team can reliably combine molecules with PCCs, they are working to improve the light emitters. Currently, the fluorescent molecules last about 45 seconds before reacting with oxygen and "burning out," and they emit a few shades of red rather than a single pure color. Solving both these problems will help with applications such as quantum computers.

"Aside from applications, there's a lot of fundamental science to be done," Gopinath says.

]]>
Tue, 12 Jul 2016 23:23:37 +0400
<![CDATA[Stingray Robot Powered by Light, and Living Rat Cells]]>http://2045.com/news/35011.html35011If a robot is made of living cells, can respond to external stimuli and has the ability to compute and coordinate movement, is it alive?

This question can be posed of a new, tiny stingray-inspired robot that is able to follow pulses of light to swim through an obstacle course.

“It’s not an organism per se, but it’s certainly alive,” said Kevin Kit Parker, a professor of bioengineering at Harvard University and one of the authors of a paper detailing the robot, published in Science on Thursday.

To create the robot, which measures 16 millimeters in length, Dr. Parker’s team layered heart cells from rats onto a gold and silicone scaffold that they designed to resemble a stingray. They then injected a gene into the cells that caused them to contract when exposed to blue light.

By shining pulses of blue light, the researchers were able to control the robot’s movements. Flashing the light more rapidly caused the robot to swim faster. Blinking the light on the robot’s right side caused it to turn left, and vice versa.

[Photo caption: Scientists built an artificial stingray that responds to pulses of blue light; by blinking the light, they were able to guide the robot through an obstacle course with the undulating movements of an actual stingray. Credit: Sung-Jin Park and Kyung Soo Park.]

Using these techniques, the engineers navigated the robot along a curving obstacle course at an average speed of 1.5 millimeters per second.
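
The stimulus-to-motion mapping the researchers describe can be caricatured in a few lines. In the toy sketch below, flash rate sets speed and the left-right difference sets the turn; the gains and function names are invented for illustration, not the paper's model.

    # Faster flashing -> faster swimming; light on one side -> turn the
    # other way. Gains and names are illustrative assumptions.
    def ray_command(flash_hz_left, flash_hz_right, speed_gain=0.5):
        forward = speed_gain * (flash_hz_left + flash_hz_right) / 2.0
        turn = flash_hz_right - flash_hz_left   # positive means turn left
        return forward, turn

    print(ray_command(2.0, 2.0))   # equal flashing: swim straight
    print(ray_command(1.0, 3.0))   # stronger right stimulus: turn left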

The new artificial stingray advances the nascent field of “biohybrid” robotics, which integrates mechanical engineering with genetic and tissue engineering, said Rashid Bashir, a professor of bioengineering at the University of Illinois at Urbana-Champaign. Earlier this spring, his research group built a similar light-controlled robot that crawls rather than swims.

Among other applications, this work could lead to the development of robots that aid in environmental cleanups or cargo transport, added Dr. Bashir, who was not involved in building the stingray robot.

Dr. Parker, meanwhile, is most interested in what the robot can teach him about the human heart.

By studying how rat heart cells work together to propel the robot forward, he hopes to gain insight into how heart cells communicate with each other and generate force.

He also plans to apply this research to the development of a light-activated pacemaker, which would involve injecting the light-sensitive gene into heart tissue so the organ can be controlled with pulses of light.

Using light to pace the heart would “be a change for the whole medical device industry,” he said.

]]>
Mon, 11 Jul 2016 23:35:20 +0400