<![CDATA[Dmitry Itskov: www.Immortal.me - Want to be immortal? Act!]]>http://2045.com/news/33999.html

Fellow Immortalists!

Many of the daily letters that the 2045 Initiative and I receive ask the question: will only the very rich be able to afford an avatar in the future, or will they be relatively cheap and affordable for almost everyone?

I would like to answer this question once again: avatars will be cheap and affordable for many people, but only if people themselves make every effort needed to achieve this, rather than wait until someone else does everything for them.

To facilitate and expedite this, I am hereby soft-launching a project today which will allow anyone to contribute to the creation of a ‘people’s avatar’… and perhaps even capitalize on this in the future. The project is named Electronic Immortality Corporation. It will soon be launched at http://www.immortal.me under the motto "Want to be immortal? Act!"

The Electronic Immortality Corporation will be a social network, operating under the rules of a commercial company. Instead of a user agreement, volunteers will get jobs and sign a virtual contract.

In addition to creating a ‘people’s avatar’, the Electronic Immortality Corporation will also implement various commercial and charitable projects aimed at realizing ideas of the 2045 Initiative, transhumanism and immortalism.

We will create future technologies that can be commercialized within decades (e.g. Avatar C) as well as implement ‘traditional’ business projects such as, for example, producing commercially viable movies.

Even the smallest volunteer contribution to the work of the Corporation will be rewarded with its own virtual currency, which will be issued for two purposes only: a) to reward volunteer work, and b) to compensate real financial investments in the company. Who knows, our virtual currency may well become as popular and in demand as Bitcoin.
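To make the two issuance rules concrete, here is a purely illustrative sketch; the class name, rates and accounts are invented, and nothing beyond the two issuance purposes comes from this announcement:

```python
# Purely illustrative: the announcement specifies only the two issuance
# rules below. The class name, rates and accounts are invented.

class EICLedger:
    def __init__(self):
        self.balances = {}

    def _issue(self, account, amount):
        self.balances[account] = self.balances.get(account, 0) + amount

    def reward_volunteer_work(self, account, hours, rate_per_hour=1):
        """Rule (a): currency is issued only to reward volunteer work."""
        self._issue(account, hours * rate_per_hour)

    def compensate_investment(self, account, invested_usd, units_per_usd=1):
        """Rule (b): currency is issued only to compensate real investment."""
        self._issue(account, invested_usd * units_per_usd)

ledger = EICLedger()
ledger.reward_volunteer_work("alice", hours=10)
ledger.compensate_investment("bob", invested_usd=500)
print(ledger.balances)  # {'alice': 10, 'bob': 500}
```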

The first steps are as follows:

First, we will establish an expert group, which will shape the final concept and the statutes of the Electronic Immortality Corporation.

Second, we will announce and organize two competitions: a) to create the corporate identity of the Electronic Immortality Corporation, and b) to develop the code of the social network.

Third, we will form the Board of Directors of the Electronic Immortality Corporation. There, we would like to see experienced businessmen with a track record of successfully implementing large projects.

Fourth, we will engage celebrities and public figures from around the world.

Therefore, if you…

- have experience in creating social networks, online games, gaming communities and are willing to discuss the final concept of the Electronic Immortality Corporation,

- are a brilliant designer,

- are a talented programmer with experience in developing large-scale and/or open source projects,

- are a businessman with experience in managing large companies and are ready to participate in the Board of Directors of the Electronic Immortality Corporation, or you know of such a person,

- are in contact with celebrities and ready to engage them in the Electronic Immortality Corporation;

and at the same time you desire to change the world, to build a high-tech reality, to participate in creating avatars and immortality technologies… if all of this is your dream and you are ready to serve it selflessly,

email us at team@immortal.me

Want to be immortal? Act!

 

Dmitry Itskov

Founder of the 2045 Initiative



]]>
Sun, 23 Apr 2045 21:50:23 +0400
<![CDATA[MIT’s Cheetah 3 robot is built to save lives]]>http://2045.com/news/35161.html

The latest version of MIT’s Cheetah robot made its stage debut today at TC Sessions: Robotics in Cambridge, Mass. It’s a familiar project to anyone who follows the industry with any sort of regularity, as one of the most impressive demos to come out of one of the world’s foremost robotics schools in recent years. Earlier versions of the four-legged robot have been able to run at speeds of up to 14 miles an hour, bound over objects autonomously and even respond to questions with Alexa, by way of an Echo Dot mounted on its back.

The Cheetah 3, however, marks a kind of philosophical change for the robot created by Professor Sangbae Kim and his team at MIT’s Biomimetic Robotics Lab. The focus has shifted from impressive demos to something more practical — this time out, the team is focused on giving the world a robot that can perform search and rescue.

“Our vision changed to wanting to use this in a real situation, to dispatch it to Fukushima,” Kim told TechCrunch ahead of the event. “We want to use this in a place where we don’t want to use humans. We can use the robot to monitor the environment and other emergency situations. There are a lot of emergency situations where you just want to do a routine check.” 

Fukushima, Japan, in the wake of its nuclear disaster, is often brought up in discussions of where industrial robots can be useful in the real world, and indeed a number of robots have already been deployed to the site, going where humans can’t — or at least shouldn’t. iRobot/Endeavor’s PackBot has done some work surveying the site, but the Cheetah 3 is able to do things that more traditional wheeled robots can’t, owing in part to its animal-inspired, four-legged build.

“I’ve been fascinated by developing legged machines, which can go where real machines cannot go,” explained Kim. “As mankind, we’ve conquered air, water, ground — all of these categories, but we conquered ground in a different way. We modified the ground for our wheels.”

And it makes sense. It’s the same reason so many roboticists continue to be drawn to human- and animal-inspired robots. We’ve built our environment with us in mind, so a robot drawing on similar evolutionary source material will probably do a better job navigating around. In the case of the Cheetah, that means walking around rubble and up stairs. The team also demoed the new Cheetah’s ability to balance on three legs, using the fourth as a sort of makeshift arm. It’s still in the early stages, but the team is working on a dexterous hand that can perform complex tasks like opening doors — velociraptors eat your hearts out.

 

The new Cheetah design also makes it more capable of carrying payloads — and if this is all starting to sound like what Boston Dynamics has been working on with robots like Big Dog, it’s no coincidence: both projects were born out of the same DARPA funding. Unlike Boston Dynamics’ work, though, Kim points out, the Cheetah project has used electric motors (rather than hydraulics) all along, although Boston Dynamics has since introduced that approach as well with the Spot and Spot Mini.

Kim is careful to remind me that this is all still early stages — after all, today’s event is Cheetah 3’s big public debut. For now, however, the team is taking a more pragmatic approach. “We’re doing the easy things first,” Kim explained with a laugh. The robot is currently being tested across the MIT campus, traversing hills and walking up stairs. Next year, the team will push the Cheetah even further. Some functions from earlier versions were left out of the new design but will be added back into the Cheetah 3 later.

]]>
Mon, 17 Jul 2017 23:00:51 +0400
<![CDATA[Rice team developing flat microscope for the brain]]>http://2045.com/news/35160.html

Rice University engineers are building a flat microscope, called FlatScope, and developing software that can decode and trigger neurons on the surface of the brain.

Their goal as part of a new government initiative is to provide an alternate path for sight and sound to be delivered directly to the brain.

The project is part of a $65 million effort announced this week by the federal Defense Advanced Research Projects Agency (DARPA) to develop a high-resolution neural interface. Among many long-term goals, the Neural Engineering System Design (NESD) program hopes to compensate for a person's loss of vision or hearing by delivering digital information directly to parts of the brain that can process it.

Members of Rice's Electrical and Computer Engineering Department will focus first on vision. They will receive $4 million over four years to develop an optical hardware and software interface. The optical interface will detect signals from modified neurons that generate light when they are active. The project is a collaboration with the Yale University-affiliated John B. Pierce Laboratory led by neuroscientist Vincent Pieribone.

Current probes that monitor and deliver signals to neurons—for instance, to treat Parkinson's disease or epilepsy—are extremely limited, according to the Rice team. "State-of-the-art systems have only 16 electrodes, and that creates a real practical limit on how well we can capture and represent information from the brain," Rice engineer Jacob Robinson said.

Robinson and Rice colleagues Richard Baraniuk, Ashok Veeraraghavan and Caleb Kemere are charged with developing a thin interface that can monitor and stimulate hundreds of thousands and perhaps millions of neurons in the cortex, the outermost layer of the brain.

"The inspiration comes from advances in semiconductor manufacturing," Robinson said. "We're able to create extremely dense processors with billions of elements on a chip for the phone in your pocket. So why not apply these advances to neural interfaces?"

Kemere said some teams participating in the multi-institution project are investigating devices with thousands of electrodes to address individual neurons. "We're taking an all-optical approach where the microscope might be able to visualize a million neurons," he said.

That requires neurons to be visible. Pieribone's Pierce Lab is gathering expertise in bioluminescence — think fireflies and glowing jellyfish — with the goal of programming neurons with proteins that release a photon when triggered. "The idea of manipulating cells to create light when there's an electrical impulse is not extremely far-fetched in the sense that we are already using fluorescence to measure electrical activity," Robinson said.

The scope under development is a cousin to Rice's FlatCam, developed by Baraniuk and Veeraraghavan to eliminate the need for bulky lenses in cameras. The new project would make FlatCam even flatter, small enough to sit between the skull and cortex without putting additional pressure on the brain, and with enough capacity to sense and deliver signals from perhaps millions of neurons to a computer.

Alongside the hardware, Rice is modifying FlatCam algorithms to handle data from the brain interface.

"The microscope we're building captures three-dimensional images, so we'll be able to see not only the surface but also to a certain depth below," Veeraraghavan said. "At the moment we don't know the limit, but we hope we can see 500 microns deep in tissue."

"That should get us to the dense layers of cortex where we think most of the computations are actually happening, where the neurons connect to each other," Kemere said.

A team at Columbia University is tackling another major challenge: The ability to wirelessly power and gather data from the interface.

In its announcement, DARPA described its goals for the implantable package. "Part of the fundamental research challenge will be developing a deep understanding of how the brain processes hearing, speech and vision simultaneously with individual neuron-level precision and at a scale sufficient to represent detailed imagery and sound," according to the agency. "The selected teams will apply insights into those biological processes to the development of strategies for interpreting neuronal activity quickly and with minimal power and computational resources."

"It's amazing," Kemere said. "Our team is working on three crazy challenges, and each one of them is pushing the boundaries. It's really exciting. This particular DARPA project is fun because they didn't just pick one science-fiction challenge: They decided to let it be DARPA-hard in multiple dimensions."

Provided by: Rice University  

]]>
Wed, 12 Jul 2017 22:56:30 +0400
<![CDATA[Neuron-integrated nanotubes to repair nerve fibers]]>http://2045.com/news/35153.html

Carbon nanotubes exhibit interesting characteristics rendering them particularly suited to the construction of special hybrid devices consisting of biological tissue and synthetic material. These could re-establish connections between nerve cells at the spinal level that were lost due to lesions or trauma. This is the result of research published in the scientific journal Nanomedicine: Nanotechnology, Biology, and Medicine conducted by a multi-disciplinary team comprising SISSA (International School for Advanced Studies), the University of Trieste, ELETTRA Sincrotrone and two Spanish institutions, Basque Foundation for Science and CIC BiomaGUNE.

Researchers have investigated the possible effects on neurons of interactions with carbon nanotubes. Scientists have proven that these nanomaterials may regulate the formation of synapses, specialized structures through which the nerve cells communicate, and modulate biological mechanisms such as the growth of neurons as part of a self-regulating process. This result, which shows the extent to which the integration between nerve cells and these synthetic structures is stable and efficient, highlights possible uses of carbon nanotubes as facilitators of neuronal regeneration or to create a kind of artificial bridge between groups of neurons whose connection has been interrupted. In vivo testing has already begun.

"Interface systems, or, more generally, neuronal prostheses, that enable an effective re-establishment of these connections are under active investigation," says Laura Ballerini (SISSA). "The perfect material to build these neural interfaces does not exist, yet the carbon nanotubes we are working on have already proved to have great potentialities. After all, nanomaterials currently represent our best hope for developing innovative strategies in the treatment of spinal cord injuries." These nanomaterials are used both as scaffolds, as supportive frameworks for nerve cells, and as interfaces transmitting those signals by which nerve cells communicate with each other.

Many aspects, however, still need to be addressed. Among them, the impact on neuronal physiology of the integration of these nanometric structures with the cell membrane. "Studying the interaction between these two elements is crucial, as it might also lead to some undesired effects, which we ought to exclude," says Laura Ballerini. "If, for example, the mere contact provoked a vertiginous rise in the number of synapses, these materials would be essentially unusable."

"This," Maurizio Prato adds, "is precisely what we have investigated in this study where we used pure carbon nanotubes."

The results of the research are extremely encouraging: "First of all, we have proved that nanotubes do not interfere with the composition of lipids, of cholesterol in particular, which make up the cellular membrane in neurons. Membrane lipids play a very important role in the transmission of signals through the synapses. Nanotubes do not seem to influence this process, which is very important."

The research has also highlighted the fact that the nerve cells growing on the substratum of nanotubes via this interaction develop and reach maturity very quickly, eventually reaching a condition of biological homeostasis. "Nanotubes facilitate the full growth of neurons and the formation of new synapses. This growth, however, is not indiscriminate and unlimited. We proved that after a few weeks, a physiological balance is attained. Having established the fact that this interaction is stable and efficient is an aspect of fundamental importance."

Laura Ballerini says, "We are proving that carbon nanotubes perform excellently in terms of duration, adaptability and mechanical compatibility with the tissue. Now, we know that their interaction with the biological material, too, is efficient. Based on this evidence, we are already studying the in vivo application, and preliminary results appear to be quite promising also in terms of recovery of the lost neurological functions."

More information: Niccolò Paolo Pampaloni et al, Sculpting neurotransmission during synaptic development by 2D nanostructured interfaces, Nanomedicine: Nanotechnology, Biology and Medicine (2017). DOI: 10.1016/j.nano.2017.01.020 

Provided by: International School of Advanced Studies (SISSA) 

]]>
Mon, 3 Jul 2017 22:23:47 +0400
<![CDATA[This Parkour Robot Easily Bounces Its Way Over Obstacles]]>http://2045.com/news/35155.html

With a spinning tail and small thrusters, it has total control over its orientation in mid-air so that it’s always ready for the next hop.

Researchers at the University of California, Berkeley, have updated their parkour robot, and the results would make any free-runner green with envy.

Late last year, we wrote about Duncan Haldane’s Salto robot. It was impressive: weighing mere ounces and standing just a few inches tall, it crouched low, jumped high, and could quickly prepare for another jump. That meant that it could, say, bounce itself off walls.

The only problem was that the small spinning tail it used to control its aerial orientation could control it only along one axis—known as pitch, as on an airplane. That meant it could only jump forward and backward, and then only for a few hops at a time, because if it went off balance along the other two axes it would fall to the left or right.

Now, though, IEEE Spectrum reports that Salto has been upgraded: say hello to Salto-1P. The addition of two small thrusters, like propellers from a quadcopter drone, allows it to adjust its orientation in the two other directions, known as roll and yaw, as it moves through the air. It can also crouch lower, enabling it to jump a little farther. (It’s worth noting that it’s not autonomous—a computer is working out how it should move and wirelessly beaming it instructions.)
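As a rough illustration of the control idea described above, here is a hypothetical sketch; the gains, signs and thruster mixing are invented, and this is not the Berkeley team's code:

```python
# Hypothetical Salto-1P-style attitude control: the spinning tail handles
# pitch while two small thrusters handle roll and yaw. All values invented.

def pd_torque(error, rate, kp, kd):
    """Simple PD law applied per axis."""
    return kp * error - kd * rate

def attitude_commands(att, rates, target=(0.0, 0.0, 0.0)):
    pitch_err, roll_err, yaw_err = (t - a for t, a in zip(target, att))
    tail_torque = pd_torque(pitch_err, rates[0], kp=2.0, kd=0.3)
    roll_torque = pd_torque(roll_err, rates[1], kp=1.0, kd=0.1)
    yaw_torque = pd_torque(yaw_err, rates[2], kp=1.0, kd=0.1)
    # Common thrust rolls the robot; differential thrust yaws it.
    left_thruster = roll_torque + yaw_torque
    right_thruster = roll_torque - yaw_torque
    return tail_torque, left_thruster, right_thruster

# Example: nose pitched up 0.1 rad, slight roll, no yaw, zero body rates.
print(attitude_commands(att=(0.1, -0.05, 0.0), rates=(0.0, 0.0, 0.0)))
```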

You can see the impressive results of those upgrades in the clips above. Now, Salto-1P can bounce forward and backward many times over, move side to side to cover the entire floor of a room, and even traverse obstacles like foam blocks and a ramp.  

(Read more: IEEE Spectrum, “This Super-Springy Robot Can Do Parkour”)

]]>
Thu, 29 Jun 2017 22:34:23 +0400
<![CDATA[hitchBOT creators to study how AI and robots can help patients]]>http://2045.com/news/35154.html

McMaster and Ryerson universities today announced the Smart Robots for Health Communication project, a joint research initiative designed to introduce social robotics and artificial intelligence into clinical health care.

With the help of Softbank's humanoid robot Pepper and IBM Bluemix Watson Cognitive Services, the researchers will study health information exchange through a state-of-the-art human-robot interaction system. The project is a collaboration between David Harris Smith, professor in the Department of Communication Studies and Multimedia at McMaster University; Frauke Zeller, professor in the School of Professional Communication at Ryerson University; and Hermenio Lima, a dermatologist and professor of medicine at McMaster's Michael G. DeGroote School of Medicine. Lima's main research interests are in the area of immunodermatology and technology applied to human health.

The research project involves the development and analysis of physical and virtual human-robot interactions, and has the capability to improve healthcare outcomes by helping healthcare professionals better understand patients' behaviour.

Zeller and Harris Smith have previously worked together on hitchBOT, the friendly hitchhiking robot that travelled across Canada and has since found its new home in the Science and Technology Museum in Ottawa.

"Pepper will help us highlight some very important aspects and motives of human behaviour and communication," said Zeller.

Designed to be used in professional environments, Pepper is a humanoid robot that can interact with people, 'read' emotions, learn, move and adapt to its environment, and even recharge on its own. Pepper is able to perform facial recognition and develop individualized relationships when it interacts with people.

Lima, the clinic director, said: "We are excited to have the opportunity to potentially transform patient engagement in a clinical setting, and ultimately improve healthcare outcomes by adapting to clients' communications needs."

At Ryerson, Pepper was funded by the Co-lab in the Faculty of Communication and Design. FCAD's Co-lab provides strategic leadership, technological support and acquisitions of technologies that are shaping the future of communications.

"This partnership is a testament to the collaborative nature of innovation," said dean of FCAD, Charles Falzon. "I'm thrilled to support this multidisciplinary project that pushes the boundaries of research, and allows our faculty and students to find uses for emerging tech inside and outside the classroom."

"This project exemplifies the value that research in the Humanities can bring to the wider world, in this case building understanding and enhancing communications in critical settings such as health care," says McMaster's Dean of Humanities, Ken Cruikshank.

The integration of IBM Watson cognitive computing services with the state-of-the-art social robot Pepper offers a rich source of research potential for the projects at Ryerson and McMaster. The integration is also supported by IBM Canada and SOSCIP, which provide the project with access to high performance research computing resources and staff in Ontario.

"We see this as the initiation of an ongoing collaborative university and industry research program to develop and test applications of embodied AI, a research program that is well-positioned to integrate and apply emerging improvements in machine learning and social robotics innovations," said Harris Smith.

Provided by: McMaster University  

]]>
Fri, 23 Jun 2017 22:26:18 +0400
<![CDATA[NASA releases footage of robot 'Valkyrie']]>http://2045.com/news/35150.html

Scientists from the United States space agency NASA teamed up with the Johnson Space Center to test the agency's new robot Valkyrie, an android that has a head, two arms and two legs. The robot is expected to be sent to Mars in the future.

A humanoid robot known as Valkyrie that could one day walk on Mars has been showing off its skills in a new video.

Named after the female war spirits of Norse mythology, Valkyrie walks on two legs and has jointed arms and hands that can grasp objects.

Designed and built by NASA's Johnson Space Center, Valkyrie will walk on Mars before the first human explorers, who are expected to reach the Red Planet in the mid-2030s.

The humanoid design was chosen to make it easier for Valkyrie to work alongside people so that, for instance, no special ramps have to be provided to accommodate wheels.

In the video, Valkyrie is shown walking over a series of stepping stones in a curved, uneven path, without stumbling once.

All the decisions about where Valkyrie will place its foot next and how to counterbalance its weight are made autonomously, thanks to a control algorithm developed by IHMC Robotics, which acts as the robot's brain.

This algorithm gathers data about the environment using a spinning laser radar or "Lidar" system housed in its face - similar to those used in driverless cars.

The instrument measures the distance to objects by firing pulses of light at surfaces and timing how long it takes the reflected "echoes" to bounce back.

It then processes this data to identify flat "planar regions" that are suitable for stepping on, before plotting out footsteps to reach a specified location.
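The distance calculation itself is straightforward; here is a minimal illustration of the time-of-flight principle (a sketch, not IHMC's or NASA's code, and the example value is invented):

```python
# Time-of-flight ranging as described above: distance is half the round-trip
# time of a light pulse multiplied by the speed of light.

C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_seconds):
    """Distance to the reflecting surface for one lidar pulse."""
    return C * round_trip_seconds / 2.0

# A pulse that echoes back after 20 nanoseconds hit a surface ~3 m away.
print(f"{tof_distance(20e-9):.2f} m")
```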

Maintaining balance is one of the biggest hurdles to be crossed when designing a walking humanoid robot, according to IHMC Robotics.

Valkyrie overcomes this problem by rapidly computing in real time how to alter its centre of mass position to stay upright.

The robot has no "ears" and cannot speak, but it is equipped with a pair of stereoscopic camera "eyes", cameras on its belly, and an intricate set of force sensors to help it react to touch and pressure.

The robot has a total of 34 "degrees of freedom" - essentially, modes in which it can move - but it is expected to acquire more dexterous capabilities over the next few years.

]]>
Mon, 19 Jun 2017 23:28:00 +0400
<![CDATA[The bionic skin to help robots feel]]>http://2045.com/news/35152.html

Meet the team behind the 3D-printed stretchable sensors equipping machines with a sense of touch.

Robots can’t feel. Or can they? Engineering researchers at the University of Minnesota have developed a revolutionary process for 3D printing a stretchable electronic fabric and it’s allowing robots to experience tactile sensation. We reached out to University of Minnesota mechanical engineering associate professor and lead researcher on the study, Michael McAlpine, to find out how the super sensors work.

McAlpine is no stranger to Red Bull or 3D printing. He first achieved international acclaim for integrating electronics and 3D-printed nanomaterials to create a ‘bionic ear’ designed to hear radio frequencies beyond human capability, and featured in our 20 Mightiest Minds on Earth edition of The Red Bulletin way back in 2012. Now he’s tackling a new sense, touch, and his bionic skin may just save lives.

“Putting this type of ‘bionic skin’ on surgical robots would give surgeons the ability to actually feel during minimally invasive surgeries, which would make surgery easier and more precise instead of just using cameras like they do now. These sensors could also make it easier for other robots to walk and interact with their environment,” McAlpine says.

In a further melding of man and machine, future sensors could be printed directly onto human skin for purposes of health monitoring or to protect soldiers in the field from dangerous chemicals or explosives – the ultimate in wearable tech.

“While we haven’t printed on human skin yet, we were able to print on the curved surface of a model hand using our technique,” McAlpine says. “We also interfaced a printed device with the skin and were surprised that the device was so sensitive that it could detect your pulse in real time.”

The applications are pretty impressive, but how exactly does it work? Well, as you might imagine, it’s not your standard 3D printer.

[Image: New technology could print directly on human skin. © Shuang-Zhuang Guo and Michael McAlpine, University of Minnesota]

Conventional 3D printing using liquid plastic is too rigid and hot to print on skin, so McAlpine and his team print their unique sensing material using a one-of-a-kind printer they built in the lab. The multifunctional printer has four nozzles to print the various specialised 'inks' that make up the layers of the device – a base layer of silicone, top and bottom electrodes made of a conducting ink, a coil-shaped pressure sensor and a sacrificial layer that holds the top layer in place while it sets. The supporting sacrificial layer is later washed away in the final manufacturing process.

All the 'inks' used in this process can set at room temperature and the 3D-printed sensors can stretch up to three times their original size.
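For illustration only, the layer stack described above could be encoded like this; the layer order, names and nozzle assignments are inferred from the article, not taken from the Minnesota group's tooling:

```python
# Hypothetical encoding of the printed layer stack described above.

LAYER_STACK = [
    ("base",             "silicone",        "nozzle 1"),
    ("bottom electrode", "conducting ink",  "nozzle 2"),
    ("pressure sensor",  "coil-shaped ink", "nozzle 3"),
    ("top electrode",    "conducting ink",  "nozzle 2"),
    ("sacrificial cap",  "support ink",     "nozzle 4"),  # washed away at the end
]

for name, material, nozzle in LAYER_STACK:
    print(f"print {name:<16} using {material:<15} from {nozzle}")
```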

“With most research, you discover something and then it needs to be scaled up. Sometimes it could be years before it’s ready for use,” McAlpine explains. “The nice thing about this 3D-printing tool is that the manufacturing is built right into the tool, so this is reality now. We’re starting to integrate these devices directly onto the human body now, and it’s going to completely revolutionise the way people think about 3D printing.”

]]>
Sun, 18 Jun 2017 23:40:49 +0400
<![CDATA[If we want bionic limbs that actually work, we might need smarter amputations]]>http://2045.com/news/35151.html

Prosthetic limbs are advancing in leaps and bounds. They’re becoming computerized, brain-controlled, and sensational. But as futuristic as these bionic limbs are, users often prefer simpler devices because the fancy ones are hard to control and they don’t provide enough feedback.

If you flex your wrist, even if your eyes are closed, you can feel where your wrist is and how fast you’re flexing it. And if you’re holding a barbell, you can feel how heavy it is. Someone with an artificial wrist can’t feel any of that—instead, she has to constantly keep an eye on her prosthetic to see what it’s doing.

“Those sensations are what we intend to provide back to people with limb amputation,” says Hugh Herr, who creates prosthetic limbs at MIT and wears two bionic legs himself.

Herr and his colleagues argue that part of the reason advanced prosthetics aren’t taking off is because amputation essentially hasn’t changed since the Civil War. In a new paper in Science Robotics, they’ve tested a new amputation procedure that may provide better control of advanced prostheses, as well as sensory feedback.

Typical amputations slice right through a patient’s nerves and muscles, leaving some extra muscle to tuck around the end of the limb for cushioning. Without any organs to stimulate, the severed nerves swell painfully. In addition, the arrangement weakens the electrical signals from the muscle, making it difficult to control some bionic limbs that take their orders from the body’s electrical circuitry.

Normally, muscles come in pairs that do opposite things. When you flex your biceps, for example, your triceps stretch. That stretching tricep automatically sends a signal back to your brain, telling you what’s happening in your arm. Amputation typically breaks up these muscle pairings, but Herr thinks that recreating them could make controlling a bionic limb feel more natural, and could give users a sense of their bionic limb’s position and movements without having to look at it. (That sense is called proprioception.)

Muscles normally come in pairs. When one muscle in the pair contracts, the other stretches and sends a signal back to the brain. Researchers think they might be able to use these natural pairings to help amputees "feel" what their artificial limb is doing.

To test out this idea, Herr and his team created artificial muscle pairings in seven rats. Taking two muscles whose nerves had been removed, they linked them together and grafted them into the rats’ legs. Then they took two nerves that normally flex and extend leg muscles, and attached one to each muscle graft. Later, when they stimulated one of the muscles to make it contract, measurements showed that the second muscle automatically sent out a signal to the brain as it stretched. The experiment showed that these artificial muscle pairings work similarly to the biological pairings. Plus, the muscles and nerves provided a strong enough electrical signal that it could potentially be used to control a prosthetic device.

To Herr, these results mean that the artificial muscle pairings might allow information to flow to and from a prosthetic limb. Electrical signals from the contracting muscle could tell the bionic limb what to do, while the stretching muscle tells the brain how the limb is moving, creating a sense of position. Electrical stimulation from the bionic limb to the muscle could provide additional feedback about where the limb is and what it’s feeling. That way, the arm can tell you if someone is shaking your artificial hand or how heavy a barbell in your grip is.
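To make that signal flow concrete, here is a hypothetical one-tick sketch for a single muscle pair; the gains and function are invented, not the MIT group's implementation:

```python
# One illustrative control/feedback tick for a single agonist-antagonist
# pair, following the flow described above. Gains and names are invented.

def prosthesis_step(agonist_emg, antagonist_stretch, measured_load):
    joint_velocity = 2.0 * agonist_emg      # EMG from the contracting muscle drives the joint
    position_sense = antagonist_stretch     # the stretched partner reports position to the brain
    stim_amplitude = 0.5 * measured_load    # the limb stimulates the muscle to convey load
    return joint_velocity, position_sense, stim_amplitude

print(prosthesis_step(agonist_emg=0.4, antagonist_stretch=0.7, measured_load=10.0))
```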

Each muscle pairing can only control one type of motion—for example, moving your forearm up and down for a handshake. Other, independent muscle pairings would be needed to flex each finger, or adjust your wrist.

Study author Hugh Herr hopes to be one of the first humans to try out the new procedure. So far it's only been tested in rats.

Some people with amputations may still have some of these natural muscle pairings in their residual limb. For others, the pairings could be reconstructed by taking muscles from other parts of the body and grafting them to the prosthetic attachment site, like Herr’s team did in this study. And for amputations that are planned in advance, the limb that’s being amputated can be an excellent source of muscles and nerves to help recreate the muscle pairings.

“In the past, the limb was amputated and cremated, and all those great tissues were thrown away,” says Herr. “Even in my case—both my legs are amputated below the knee, and my amputations were done 30-some years ago in a really silly way, in a conventional way—in principle we can do a revision on my limbs and get muscles from another part of my body and create these pairs.”

And in fact, that’s exactly what he plans to do. “We want to rapidly translate this to humans, and I personally want this done on my limbs,” says Herr. Currently he’s having his limbs imaged, developing a surgical plan, and waiting for approval from an ethical review board, but he thinks he could undergo the surgery “very soon.”

The procedure is considered low risk since it just involves rearranging tissues. If it doesn’t work, the results should be similar to a conventional amputation.

Another advantage, says Herr, is that the technique provides feedback to the user’s nerves via the muscles. “Muscles don’t mind getting touched by synthetic things, but nerves really complain. It doesn’t like it at all, and ends up rejecting it. Muscles are a lot less touchy.” The FDA has already approved other electrical devices that interface with muscles, so the team will face less of a hurdle there.

If it works, the amputation technique may provide more precise control and sensory feedback, which in turn can lead to better reflexes and a better user experience.

They still need to test it in humans, but the team is hopeful that their technique will help make bionic limbs feel and behave more like natural limbs.

Other researchers, who are putting wires into people’s nerves, have to figure out what electrical patterns can recreate a sense of force, touch, position, and speed. By contrast, says Herr, “we’re using the body’s natural sensors to create these sensations. We’re confident because of that, it’ll feel like position, it’ll feel like speed, it’ll feel like force.”

]]>
Sat, 17 Jun 2017 23:36:49 +0400
<![CDATA[This wriggling worm-bot could be used for colonoscopies one day]]>http://2045.com/news/35149.html

Nobody needs to reinvent the wheel, but reinventing the colonoscope is definitely worth somebody’s time. Mark Rentschler, an associate professor at the University of Colorado Boulder, is one of those people. He and his team have been working on the wormy robot, above, as a replacement for the usual flexible-camera-tube colonoscope.

“Don’t get me wrong, the traditional methods work very well, but they’re not pleasant for the patient,” Rentschler tells The Verge. “You’re basically pushing a rope through a deformable tube and the rope only bends when you get enough force against a wall. That’s where a lot of the discomfort comes from.”

Removing that discomfort is about more than just patient happiness. If colon cancer is caught early, “you’re almost guaranteed survival,” says Rentschler. The problem is that people are so unnerved by the idea of a colonoscopy that they just don’t get checked.

To overcome this problem, scientists are working on a number of different colonoscope designs, all of which have a degree of autonomy. Some have caterpillar treads, some have wheels, but Rentschler and his team thought the best approach would be to mimic natural movements inside the body. That’s why they settled on peristalsis as their chosen form of locomotion. This is the constriction and relaxation of muscles, and is used to move food along the bowels. So why not use it to move robots, too?

Peristalsis in Rentschler’s bot is simulated using springs made from a shape-memory alloy — a material that “remembers” its shape, and returns to it when heated. The metal is heated by a small electric current and expands outward. Then, a combination of cooling air and a 3D-printed silicone sheath covering the exterior of the bots acts as a natural “restoring force” to push them back in. “With this we can drive along and steer about,” says Rentschler. “Then we just put a camera on the end.”
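Here is a toy sketch of that actuation pattern, with segment count and timing invented for illustration (not the Colorado team's firmware):

```python
# Toy version of the peristaltic gait described above: heat one shape-memory
# spring segment at a time so a wave of expansion travels down the body.
import time

NUM_SEGMENTS = 6

def set_heating(segment, on):
    """Stand-in for switching current through one segment's SMA spring."""
    print(f"segment {segment}: {'heat (expand)' if on else 'cool (contract)'}")

def peristaltic_wave(cycles=1, dwell=0.25):
    for _ in range(cycles):
        for seg in range(NUM_SEGMENTS):
            set_heating(seg, True)     # expanded segment propels the body
            time.sleep(dwell)
            set_heating(seg, False)    # sheath's restoring force pulls it back

peristaltic_wave()
```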

The new bot was shown off earlier this month at the 2017 IEEE International Conference on Robotics and Automation (ICRA). It’s still in the prototype stage, though, and a number of improvements will need to be made if it ever makes it into hospitals (and bodies).

“We definitely want to get a little bit smaller in diameter,” says Rentschler. “And then the other big challenge is speed.” Right now, the bot can squirm along at a rate of around six inches in 15 seconds. An average colonoscopy takes about 30 minutes, and Rentschler’s aim is to get this down to the 20-minute mark. “We're close, but we do want to increase our speed,” he says.

And with a better colonoscope, lives can be saved. Not bad for a wriggly, squiggly robot.

]]>
Thu, 15 Jun 2017 23:20:15 +0400
<![CDATA[Meet the Most Nimble-Fingered Robot Yet]]>http://2045.com/news/35144.html

A dexterous multi-fingered robot practiced using virtual objects in a simulated world, showing how machine learning and the cloud could revolutionize manual work.

Inside a brightly decorated lab at the University of California, Berkeley, an ordinary-looking robot has developed an exceptional knack for picking up awkward and unusual objects. What’s stunning, though, is that the robot got so good at grasping by working with virtual objects.

The robot learned what kind of grip should work for different items by studying a vast data set of 3-D shapes and suitable grasps. The UC Berkeley researchers fed images to a large deep-learning neural network connected to an off-the-shelf 3-D sensor and a standard robot arm. When a new object is placed in front of it, the robot’s deep-learning system quickly figures out what grasp the arm should use.

The bot is significantly better than anything developed previously. In tests, when it was more than 50 percent confident it could grasp an object, it succeeded in lifting the item and shaking it without dropping the object 98 percent of the time. When the robot was unsure, it would poke the object in order to figure out a better grasp. After doing that it was successful at lifting it 99 percent of the time. This is a significant step up from previous methods, the researchers say.
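In rough pseudocode terms, the confidence-gated behavior described above might look like the following sketch; the model, robot interface and threshold are stand-ins rather than the Berkeley system's actual API:

```python
# A sketch of the confidence-gated grasp policy described above. The
# grasp_model and robot objects are invented stand-ins.

CONFIDENCE_THRESHOLD = 0.5

def pick(object_image, grasp_model, robot):
    grasp, confidence = grasp_model.best_grasp(object_image)
    if confidence > CONFIDENCE_THRESHOLD:
        return robot.execute(grasp)              # confident: lift directly
    robot.poke(grasp.target)                     # unsure: nudge the object,
    new_image = robot.camera.capture()           # look again from the new pose,
    grasp, _ = grasp_model.best_grasp(new_image)
    return robot.execute(grasp)                  # then try the better grasp
```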

The work shows how new approaches to robot learning, combined with the ability for robots to access information through the cloud, could advance the capabilities of robots in factories and warehouses, and might even enable these machines to do useful work in new settings like hospitals and homes (see “10 Breakthrough Technologies 2017: Robots That Teach Each Other”). It is described in a paper to be published at a major robotics conference held this July.

Many researchers are working on ways for robots to learn to grasp and manipulate things by practicing over and over, but the process is very time-consuming. The new robot learns without needing to practice, and it is significantly better than any previous system. “We’re producing better results but without that kind of experimentation,” says Ken Goldberg, a professor at UC Berkeley who led the work. “We’re very excited about this.”

Instead of practicing in the real world, the robot learned by feeding on a data set of more than a thousand objects that includes their 3-D shape, visual appearance, and the physics of grasping them. This data set was used to train the robot’s deep-learning system. “We can generate sufficient training data for deep neural networks in a day or so instead of running months of physical trials on a real robot,” says Jeff Mahler, a postdoctoral researcher who worked on the project.

Goldberg and colleagues plan to release the data set they created. Public data sets have been important for advancing the state of the art in computer vision, and now new 3-D data sets promise to help robots advance.

Stefanie Tellex, an assistant professor at Brown University who specializes in robot learning, describes the research as “a big deal,” noting that it could accelerate laborious machine-learning approaches.

“It's hard to collect large data sets of robotic data,” Tellex says. “This paper is exciting because it shows that a simulated data set can be used to train a model for grasping.  And this model translates to real successes on a physical robot.”

Advances in control algorithms and machine-learning approaches, together with new hardware, are steadily building a foundation on which a new generation of robots will operate. These systems will be able to perform a much wider range of everyday tasks. More nimble-fingered machines are, in fact, already taking on manual labor that has long remained out of reach (see “A Robot with Its Head in the Cloud Tackles Warehouse Picking”).

Russ Tedrake, an MIT professor who works on robots, says a number of research groups are making progress on much more capable dexterous robots. He adds that the UC Berkeley work is impressive because it combines newer machine-learning methods with more traditional approaches that involve reasoning over the shape of an object.

The emergence of more dexterous robots could have significant economic implications, too. The robots found in factories today are remarkably precise and determined, but incredibly clumsy when faced with an unfamiliar object. A number of companies, including Amazon, are using robots in warehouses, but so far only for moving products around, and not for picking objects for orders.

The UC Berkeley researchers collaborated with Juan Aparicio, a research group head at Siemens. The German company is interested in commercializing cloud robotics, among other connected manufacturing technologies.

Aparicio says the research is exciting because the reliability of the arm offers a clear path toward commercialization.

Developments in machine dexterity may also be significant for the advancement of artificial intelligence. Manual dexterity played a critical role in the evolution of human intelligence, forming a virtuous feedback loop with sharper vision and increasing brain power. The ability to manipulate real objects more effectively seems certain to play a role in the evolution of artificial intelligence, too.

]]>
Sat, 3 Jun 2017 01:33:51 +0400
<![CDATA[I Spy With My DragonflEye: Scientists 'Hack' Insect to Create Cyborg Drone]]>http://2045.com/news/35145.html

Many might think of a cyborg as something out of a science-fiction movie script, but scientists have found a way to alter a living dragonfly so they can control its movements.

As countries like the United States continue to rely on surveillance drones, the challenge of shrinking the flying robots down to an inconspicuous size has become a point of interest for military researchers.

Scientists at the Charles Stark Draper Laboratory in the US have developed a way of using living insects as drones.

The research has been named DragonflEye, and is essentially a cyborg dragonfly: half dragonfly, half machine.

It was created by genetically modifying regular dragonflies so that 'steering neurons' in the insect's nerve cord respond to light. Tiny, fiber-optic-like structures then deliver bursts of light to those neurons, which allows scientists to control where the insect flies via remote control.
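Purely as a thought experiment, steering with light pulses could look like this; no real DragonflEye interface is public, and every name and threshold below is invented:

```python
# Thought experiment only: biasing turns with light pulses to left/right
# steering neurons.

def steer(target_heading, current_heading, pulse):
    error = (target_heading - current_heading + 180) % 360 - 180
    if error > 5:
        pulse("right_steering_neurons")   # light burst biases a right turn
    elif error < -5:
        pulse("left_steering_neurons")    # light burst biases a left turn
    # within +/- 5 degrees: no pulses, let the insect fly straight

steer(90, 70, pulse=lambda channel: print("light pulse ->", channel))
```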

On the dragonfly's back is a tiny device that looks like a backpack, which contains sensors and a solar panel to power the data collection technology.

The hope is that the dragonfly will then be able to be steered by the researchers and collect data through its sensors in environments that are either not safe for humans or too small for humans to fit through, such as cracks in walls.

Some champion this as a huge breakthrough for technology, while others might feel slightly uncomfortable with the thought of genetic modification being used to control insects, or perhaps one day, even higher-up species.

However, the cyborg insect could also be very advantageous to the way that we understand the world, and perhaps even one day to humans.

Some have suggested that such technology could be used to help humans who are paralyzed to restore movement. 

]]>
Fri, 2 Jun 2017 01:38:23 +0400
<![CDATA[MIT teaches machines to learn from each other]]>http://2045.com/news/35141.html

There are two typical ways to train a robot today: you can have it watch repeated demonstrations of what you want it to do or you can program its movements directly using motion-planning techniques. But a team of researchers from MIT's CSAIL lab have developed a hybridized third option that will enable robots to transfer skills and knowledge between themselves. It's no Skynet, but it's a start.

The system, dubbed C-LEARN, is designed to enable anybody, regardless of their programming know-how, to program robots for a wide range of tasks. But rather than having the robot ape your movements or hand-coding its desired movements, C-LEARN only requires that the user input a bit of information on how the objects the robot will interact with are typically handled, then run through a single demonstration. The robot can then share this kinematic data with others of its kind.

First, the user inputs the environmental constraints -- essentially how to reach out, grasp and hold the items it's interacting with. That way the robot isn't crushing everything it touches or holding objects in a way that will cause them to break or fall. Then, using a CAD program, the user can create a single digital demonstration for the robot. It actually works a lot like traditional hand-drawn animation, wherein the robot's motion hits specific positions as "keyframes" and the system fills in the rest.
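Keyframe in-betweening of this sort can be sketched in a few lines; this is a generic linear-interpolation illustration, not CSAIL's C-LEARN code:

```python
# Generic keyframe in-betweening, in the spirit of the description above.

def interpolate(keyframes, steps_between=4):
    """keyframes: list of joint-angle tuples; returns the full trajectory."""
    path = []
    for a, b in zip(keyframes, keyframes[1:]):
        for step in range(steps_between):
            t = step / steps_between
            path.append(tuple((1 - t) * x + t * y for x, y in zip(a, b)))
    path.append(keyframes[-1])
    return path

# A two-joint arm passing through three keyframes (angles in degrees).
print(interpolate([(0, 0), (45, 10), (90, 0)], steps_between=2))
```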

Of course, the robot doesn't have the final say in this: all motion plans have to be verified by the human operator first. Overall, the robots were able to choose the optimal motion plan 87.5 percent of the time without human intervention, though that number jumped to 100 percent when a human operator was able to tweak the plan as needed.

The first robot to benefit from this new system is the Optimus, a two-armed bomb-disposal bot. The CSAIL team taught it to open doors, carry items and even pull objects out of jars. The Optimus was then able to transfer these same skills to another robot in the CSAIL lab, the 6-foot, 400-pound Atlas.

]]>
Fri, 12 May 2017 12:47:27 +0400
<![CDATA[Freaky Ostrich-like running robot built for ‘planetary exploration’ (VIDEOS)]]>http://2045.com/news/35135.html

It may look like an ostrich cantering over the ground, but the Planar Elliptical Runner could become the model for a human-sized running robot – and even aid “planetary exploration.”

Developed by the Institute for Human and Machine Cognition (IHMC) in Pensacola, Florida, the machine has drawn comparisons with the flightless bird for its fluid locomotion.

Speaking to Digital Trends, research associate Johnny Godowski said: “It’s emulating what you see in nature. Birds are able to run over holes and obstacles half their leg height, and they don’t even break stride. Our robot mechanism is designed to do the same thing.”

Unlike other two-legged robots, it does not use computer sensors to balance itself. Instead, a single motor drives the machine’s two legs while a side-to-side motion keeps it upright. The robot is also guided by a standard radio controller, meaning it does not waste battery power.

The robot can reach speeds of up to 10mph (16kph) – but researchers believe a human-sized machine could one day hit speeds of up to three times that of its smaller counterpart.

Jerry Pratt, a senior research scientist at IHMC, told Technology Review: “We believe that the lessons learned from this robot can be applied to more practical running robots to make them more efficient and natural looking.

“Robots with legs will be particularly useful in places where you want a human presence, but it’s too dangerous, expensive, or remote to send a real human. Examples include nuclear power plant decommissioning and planetary exploration.”

In 2013, Pratt led a team to second place in the DARPA Robotics Challenge, a US Defense Department contest testing robots’ abilities to perform a series of tasks in extreme environments.  

Other robotics firms are hoping to make breakthroughs with their own two and four-legged machines.

In February, Agility Robotics unveiled Cassie, another ostrich-inspired bipedal creation, while Honda continues to market their humanoid robot ASIMO.

Meanwhile, Pratt’s team is working on a number of biped projects. IHMC showcased these advances at their annual Robotics Open House in Florida last month.

]]>
Sat, 6 May 2017 00:36:17 +0400
<![CDATA[Bionic hand that can see for itself makes things easy to grasp]]>http://2045.com/news/35136.html

An artificial hand is using artificial intelligence to see with an artificial eye. The new prosthetic can choose how best to grab objects placed in front of it automatically, making it easier to use.

When it sees an object, the artificial hand detects the intention to grasp by interpreting electrical signals from muscles in the wearer’s arm. It then takes a picture of the object using a cheap webcam and picks one of four possible grasping positions.

The different grips include one similar to picking up a cup, one similar to picking up a TV remote from a table, one that uses two fingers and a thumb, and another that uses just the thumb and index finger. “The hand learns the best way to grasp objects – that’s the beauty of it,” says Ghazal Ghazaei at Newcastle University, UK.

To train the hand, Ghazaei and her colleagues showed it images of more than 500 objects. Each object came with 72 different images, showing different angles and different backgrounds, as well as the best grip for picking it up. Through trial and error, the system learned to choose the best grips for itself.
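The two-stage pipeline described above can be sketched as follows; the threshold, labels and classifier are illustrative stand-ins, not the Newcastle system:

```python
# Illustrative two-stage control for the vision-assisted hand described
# above: EMG gates the intent to grasp, then an image classifier picks one
# of four grip types.

GRIPS = ["palmar (cup-like)", "lateral (remote-like)", "tripod", "pinch"]
EMG_INTENT_THRESHOLD = 0.3   # hypothetical

def control_step(emg_amplitude, webcam_frame, classify):
    if emg_amplitude < EMG_INTENT_THRESHOLD:
        return None                      # no grasp intended yet
    grip_index = classify(webcam_frame)  # e.g. a CNN trained on 500 objects x 72 views
    return GRIPS[grip_index]

# Dummy classifier standing in for the trained network.
print(control_step(0.8, "frame.png", classify=lambda frame: 2))  # -> 'tripod'
```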

Not quite there

Existing controllable prosthetics work by converting electrical signals in a person’s arm or leg into movement. But it can take a long time to learn to control an artificial limb and the movements can still be clumsy. The new system is just a prototype, but by giving a hand the ability to see what it is doing and position itself accordingly, the team believe they can make a better prosthetic.

The design has been tested by two people who have had a hand amputated. They were able to grab a range of objects with just under 90 per cent accuracy. That’s not bad for a prototype but dropping one out of 10 things users try to pick up is not yet good enough.

“We’re aiming for 100 per cent accuracy,” says Ghazaei. The researchers hope to achieve this by trying out different algorithms. They also plan to make a lighter version with the camera embedded in the palm of the hand.

The key with prostheses like these is getting the balance right between user and computer control, says Dario Farina at Imperial College London. “People don’t want to feel like a robot, they want to feel like they are fully in control,” he says.

It’s important that the technology helps assist grasping rather than fully taking over. “It should be similar to brake assistance on a car, the driver decides when to brake but the car helps them brake better,” says Farina.

]]>
Wed, 3 May 2017 00:42:47 +0400
<![CDATA[What humans will look like in 1,000 years]]>http://2045.com/news/35137.html

Humans are still evolving. So where will evolution take us in 1,000 years?
Chances are we’ll be taller. Humans have already seen a boom in height over the last 130 years.

In 1880 the average American male was 5’7’’. Today, he’s 5’10’’.

We may also merge with machines that can enhance our hearing, eyesight, health, and much more. Right now, there are hearing aids that can record sounds, generate white noise, and even include a built-in phone.

Another example is a team out of the University of Oregon which is developing bionic eyes that help the blind to see. But it’s not impossible to imagine that this technology could become a tool for seeing what we currently consider invisible, like different energies of light such as infrared and x-rays.

There will eventually be a day where prosthetics are no longer just for the disabled.

However, it’s not just our outside appearance that will change – our genes will also evolve on microscopic levels to aid our survival. For example, an Oxford-led study discovered a group of HIV-infected children in South Africa living healthy lives. It turns out, they have a built-in defense against HIV that prevents the virus from advancing to AIDS.

And with gene-editing tools like CRISPR, we may eventually control our genes and DNA to the point where we make ourselves immune to disease and even reverse the effects of aging.

Another way to jump-start human evolution on a different path is to move some of us to Mars. Mars receives less than half the sunlight Earth does, which could mean humans on Mars will evolve larger pupils that can absorb more light in order to see. And since Mars’ gravitational pull is only 38% of Earth’s, people born on Mars might actually be taller than anyone on Earth. In space, the fluid that separates our vertebrae expands, which led American aerospace engineer Robert Zubrin to suggest that Mars’ low gravity could allow the human spine to elongate enough to add a few extra inches to our height.
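The sunlight figure follows from the inverse-square law; a quick check (standard orbital numbers, not from the article):

```python
# Quick inverse-square check: Mars orbits the Sun at about 1.52 AU.
mars_fraction = 1 / 1.52 ** 2
print(f"Mars receives ~{mars_fraction:.0%} of Earth's sunlight")  # ~43%
```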

However, not even a move to Mars could spark the biggest change in human evolution that we may have coming in the next 1,000 years: immortality. The path to immortality will likely require humans to download their consciousness into a machine. Right now, scientists in Italy and China are performing head transplants on animals to determine if you can transfer consciousness from one body to another. They claim their next big step is to transplant human heads.

Whatever happens in the next 1,000 years — whether we merge with machines or become them — one thing is certain: The human race is always changing — and the faster we change and branch out from Earth, the better chance we have of outrunning extinction.

]]>
Sat, 29 Apr 2017 00:49:35 +0400
<![CDATA[NASA Gives Rover An Origami-Inspired Robot Scout]]>http://2045.com/news/35127.html

NASA has started testing an origami-inspired scout robot that will be used to explore the Martian surface.

Mars exploration missions have gained traction in the last few years, and space agencies are developing new rovers and robots that can enable scientists to garner more details of the Red Planet. 

PUFFER: A New Robot Scout

The Pop-Up Flat Folding Explorer Robot, or PUFFER, has been developed by NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California.

This device was conceived by Jaakko Karras, the project manager of PUFFER at JPL, while he was testing origami designs. Karras and his associates thought of using a printed circuit board while creating these devices.

PUFFER has a lightweight structure and is made in such a way that it can tuck in its wheels, flatten itself, and explore places that a typical rover cannot access.

Features Of PUFFER

The scout robot has been tested under varied, rugged conditions, from the Mojave Desert in California to the frozen plains of Antarctica. It was put through these tests to ensure its functionality in all kinds of terrain, whether sand-covered or snow-laden.

Originally, this device had four wheels, but at present it has only two, which are foldable. Folding the wheels over the body allows the machine to roll and crawl.

It also has a tail, which provides stability. The robot includes a "microimager," a high-resolution camera, and solar panels are placed on the belly of the PUFFER. The machine flips over to recharge when its batteries are drained.

PUFFER can climb up to 45 degree slopes and can even fall into craters and pits unharmed. The robot is considered a strong assistant to large robotic devices that will be sent to Mars in the near term.

"They can do parallel science with a rover, so you can increase the amount you're doing in a day. We can see these being used in hard-to-reach locations - squeezing under ledges, for example," stated Karras.

Christine Fuller, another member of the PUFFER group at JPL, said that the robot's body and electronics are built into a single circuit board, with no separate fasteners or other attached parts: the robot has an integrated body.

The team has built a PUFFER prototype and has been testing it for the past few months. Officials on the PUFFER project say the device is not yet ready; they plan to give the robot more autonomy, as well as scientific instruments such as gear that identifies carbon-containing organic molecules.

]]>
Wed, 15 Mar 2017 09:22:27 +0400
<![CDATA[Brain activity appears to continue after people are dead, according to new study]]>http://2045.com/news/35120.html35120Brain activity may continue for more than 10 minutes after the body appears to have died, according to a new study.

Canadian doctors in an intensive care unit appear to have observed a person's brain continuing to work even after they were declared clinically dead.

In that case, doctors confirmed their patient was dead through a range of normal observations, including the absence of a pulse and unreactive pupils. But tests showed that the patient’s brain appeared to keep working – experiencing the same kind of brain waves that are seen during deep sleep.

The doctors reported that “single delta wave bursts persisted following the cessation of both the cardiac rhythm and arterial blood pressure (ABP)”, and noted that the findings could lead to new medical and ethical challenges. The study was published by a team from the University of Western Ontario.

Only one of the four people studied exhibited the long-lasting and mysterious brain activity; in most patients, activity died off before their heart stopped beating. But all of their brains behaved differently in the minutes after they died – adding further mystery to what happens after death.

The doctors don’t know what the purpose of the activity might be, and caution against drawing too many conclusions from such a small sample. But they write that it is difficult to think the activity was the result of a mistake, given that all of the equipment appeared to be working fine.

Researchers had previously thought that almost all brain activity ended in one huge mysterious surge about a minute after death. But those studies were based on rats – and the research found no comparable effect in humans.

“We did not observe a delta wave within 1 minute following cardiac arrest in any of our four patients,” they write in the new study.

What happens to the body and mind after death remains almost entirely mysterious to scientists. Two other studies last year, for instance, demonstrated that genes appeared to continue functioning – and even function more energetically – in the days after people die.

]]>
Fri, 10 Mar 2017 17:51:25 +0400
<![CDATA[Researchers Take A Step Toward Mind-Controlled Robots]]>http://2045.com/news/35121.html35121What if your friend the robot could tell what you're thinking, without you saying a word?

Researchers at MIT's Computer Science and Artificial Intelligence Lab and Boston University have created a system where humans can guide robots with their brainwaves. This may sound like a theory out of a sci-fi novel, but the goal of seamless human-robot interaction is the next major frontier for robotic research.

For now, the MIT system can only handle simple binary activities, such as correcting a robot as it sorts objects into two boxes. But CSAIL Director Daniela Rus sees a future where we could control robots in more natural ways, rather than having to program them for specific tasks — allowing a supervisor on a factory floor, for example, to control a robot without ever pushing a button.

"Imagine you look at the robots, and at some point one robot is not doing the job correctly," Rus explained. "You will think that, you will have that thought, and through this detection you would in fact communicate remotely with the robot to say 'stop.' "

Rus admits the MIT development is a baby step, but she says it's an important step toward improving the way humans and robots interact.

Currently, most communication with robots requires thinking in a particular way that computers can recognize or vocalizing a command, which can be exhausting.

"We would like to change the paradigm," Rus said. "We would like to get the robot to adapt to the human language."

The MIT paper shows it's possible to have a robot read your mind — at least when it comes to a super simplistic task. And Andres Salazar-Gomez, a Boston University Ph.D. candidate working with the CSAIL research team, says this system could one day help people who can't communicate verbally.

Meet Baxter

For this study, MIT researchers used a robot named Baxter from Rethink Robotics.

Baxter had a simple task: Put a can of spray paint into the box marked "paint" and a spool of wire in the box labeled "wire." A volunteer hooked up to an EEG cap, which reads electrical activity in the brain, sat across from Baxter, and observed him doing his job. If they noticed a mistake, they would naturally emit a brain signal known as an "error-related potential."

"You can use [that signal] to tell a robot to stop or you can use that to alter the action of the robot," Rus explained.

The system then translates that brain signal to Baxter, so he understands he's wrong, his cheeks blush to show he's embarrassed, and he corrects his behavior.

The MIT system correctly identified the volunteer's brain signal and then corrected the robot's behavior 70 percent of the time.
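In outline, the loop is straightforward: watch the observer's EEG while the robot acts, and trigger a correction whenever an error-related potential appears. The following Python sketch is a minimal illustration of that idea; the Robot class, the classify_errp stub and the threshold are invented stand-ins, not the actual CSAIL code.

    import random

    ERRP_THRESHOLD = 0.8  # hypothetical confidence cutoff; a real one would be tuned

    def classify_errp(eeg_window):
        # Placeholder for a trained classifier that returns the probability
        # that this window of EEG samples contains an error-related potential.
        # Faked here so the sketch runs end to end.
        return random.random()

    class Robot:
        # Hypothetical stand-in for the sorting robot's control interface.
        def __init__(self, items):
            self.items = list(items)

        def sort_next(self):
            return self.items.pop(0)

        def correct_last_action(self, item):
            print(f"moving {item!r} to the other box")

    def supervise(robot):
        # After each sorting action, check the observer's EEG; if an
        # error-related potential is detected, have the robot correct itself.
        while robot.items:
            item = robot.sort_next()
            eeg_window = None  # in reality: ~a second of samples from the EEG cap
            if classify_errp(eeg_window) > ERRP_THRESHOLD:
                robot.correct_last_action(item)

    supervise(Robot(["spray paint can", "wire spool"]))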

Making robots effective "collaborators"

"I think this is exciting work," said Bin He, a biomedical engineer at the University of Minnesota, who published a paper in December that showed people can control a robotic arm with their minds.

He was not affiliated with the MIT research, but he sees this as a "clever" application in a growing yet nascent field.

Researchers say there's an increasing desire to find ways to make robots effective "collaborators," not just obedient servants.

"One key aspect of collaboration is being able ... to know when you're making a mistake," said Siddhartha Srinivasa, a professor at Carnegie Mellon University who was not affiliated with the MIT study. "What this paper shows is how you can use human intuition to boot-strap a robot's learning of what its world looks like and how it can know right from wrong."

Srinivasa says this research could potentially have key implications for prosthetics, but cautions it's an "excellent first step toward solving a harder, much more complicated problem."

"There's a long gray line between not making a mistake and making a mistake," Srinivasa said. "Being able to decode more of the neuronal activity... is really critical."

And Srinivasa says that's a topic that more scientists need to explore.

Potential real-world applications

MIT's Rus imagines a future where anybody can communicate with a robot without any training — a world where this technology could help steer a self-driving car or clean up your home.

"Imagine ... you have your robot pick up all the toys and socks from the floor, and you want the robot to put the socks in the sock bin and put the toys in the toy bin," she said.

She says that would save her a lot of time, but for now the mechanical house cleaner that can read your mind is still a dream.

]]>
Wed, 8 Mar 2017 17:54:13 +0400
<![CDATA[Ghost Minitaur™ Highly Agile Direct-Drive Quadruped Demonstrates Why Legged Robots are Far Superior to Wheels and Tracks When Venturing Outdoors]]>http://2045.com/news/35119.html35119Ghost Robotics, a leader in fast and lightweight direct-drive (gearless) legged robots, announced today that its patent-pending Ghost Minitaur™ has been updated with advanced reactive behaviors for navigating grass, rock, sand, snow and ice fields, urban objects and debris, and vertical terrain. (https://youtu.be/bnKOeMoibLg)

The latest gaits adapt reactively to unstructured and dynamic environments to maintain balance, ascend steep inclines (up to 35º), handle curb-sized steps in stride (up to 15cm), crouch to fit under crawl spaces (as low as 27cm), and operate at variable speeds and turning rates. Minitaur's high-force capabilities enable it to leap onto ledges (up to 40cm) and across gaps (up to 80cm). Its high control bandwidth allows it to actively balance on two legs, and high speed operation allows its legs to manipulate the world faster than the blink of an eye, while deftly reacting to unexpected contact.

"Our primary focus since releasing the Minitaur late last year has been expanding its behaviors to traverse a wide range of terrains and real-world operating scenarios," said Gavin Kenneally, and Avik De, Co-founders of Ghost Robotics. "In a short time, we have shown that legged robots not only have superior baseline mobility over wheels and tracks in a variety of environments and terrains, but also exhibit a diverse set of behaviors that allow them to easily overcome natural obstacles. We are excited to push the envelope with future capabilities, improved hardware, as well as integrated sensing and autonomy."

Ghost Robotics is designing next-generation legged robots that are superior to wheeled and tracked autonomous vehicles in real-world field applications, while substantially reducing costs to drive adoption and scalable deployments. Its direct-drive technology yields a lower-cost, more durable design for commercializing very small to medium-sized legged UGV sensor platforms than any competing approach. The company's underlying research and intellectual property have additional applications in ultra-precise, human-safe manipulators and in advanced gait research.

While a commercial version of the Ghost Minitaur™ robot is slated for delivery in the future, the current development platform is in high demand and has been shipped to many top robotics researchers worldwide because of its design simplicity, low cost and flexible software development environment for a broad range of research and commercialization initiatives.

"We are pleased with our R&D progress towards commercializing the Ghost Minitaur™ to prove legged robots can surpass the performance of wheel and track UGVs, while keeping the cost model low to support volume adoption, which is certainly not the case with existing bipedal and quadrupedal robot vendors," said Jiren Parikh, Ghost Robotics, CEO.

In the coming quarters, the company plans to demonstrate further improvements in mobility, built-in manipulation capabilities to interact with objects in the world, integration with more sensors, built-in autonomy for operation with reduced human intervention, as well as increased mechanical robustness and durability for operation in harsh environments.

About Ghost Robotics

Robots that Feel the World™. Ghost Robotics develops patent-pending, ultrafast and highly responsive direct-drive (no gearbox) legged robots for instantaneous and precise force feedback applications, offering superior operability over wheeled and tracked robots. The lightweight and low-cost Ghost Minitaur™ robot platform can be used as an autonomous vehicle fitted with sensors for ISR, search and rescue, asset management and inspection, exploration, scientific and military applications where unknown, rough, varied, hazardous, environmentally sensitive and even vertical terrain is present. Ghost Robotics is privately held and backed by the University of Pennsylvania and PCI Ventures with offices in Philadelphia. www.ghostrobotics.io

SOURCE Ghost Robotics, LLC

Related Links

http://ghostrobotics.io

]]>
Wed, 1 Mar 2017 17:55:29 +0400
<![CDATA[Boston Dynamics’ newest robot: Introducing Handle]]>http://2045.com/news/35118.html35118Handle is a research robot standing 6.5 ft tall, travels at 9 mph and jumps 4 feet vertically. It uses electric power to operate both electric and hydraulic actuators, with a range of about 15 miles on one battery charge. Handle uses many of the same dynamics, balance and mobile manipulation principles found in the other quadruped and biped robots Boston Dynamics’ build, but with only about 10 actuated joints, it is significantly less complex. Wheels are efficient on flat surfaces while legs can go almost anywhere: by combining wheels and legs Handle can have the best of both worlds.

]]>
Tue, 28 Feb 2017 21:35:55 +0400
<![CDATA[The 'Curious' Robots Searching for the Ocean's Secrets]]>http://2045.com/news/35116.html35116People have been exploring the Earth since ancient times—traversing deserts, climbing mountains, and trekking through forests. But there is one ecological realm that hasn’t yet been well explored: the oceans. To date, just 5 percent of Earth’s oceans have been seen by human eyes or by human-controlled robots.

That’s quickly changing thanks to advancements in robotic technologies. In particular, a new class of self-controlled robots that continually adapt to their surroundings is opening the door to undersea discovery.  These autonomous, “curious” machines can efficiently search for specific undersea features such as marine organisms and landscapes, but they are also programmed to keep an eye out for other interesting things that may unexpectedly pop up.

Curious robots—which can be virtually any size or shape—use sensors and cameras to guide their movements. The sensors take sonar, depth, temperature, salinity, and other readings, while the cameras constantly send pictures of what they’re seeing in compressed, low-resolution form to human operators. If an image shows something different from the feature a robot was programmed to explore, the operator can give the robot the okay to go over and check it out in greater detail.
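That explore, flag and approve loop can be sketched in a few lines of Python. Everything below is invented for illustration (no real vehicle exposes this interface), but it shows the division of labour the article describes: the robot scores novelty on its own and only detours once an operator agrees.

    import random

    def operator_approves(message):
        # Stand-in for the human in the loop: a real system would send the
        # low-res image topside and wait for an explicit okay.
        print("asking operators about:", message)
        return True

    def novelty(reading, baseline):
        # Crude anomaly score: how far the latest reading sits from the norm.
        return abs(reading - baseline)

    def survey(waypoints, threshold=2.0):
        baseline = 20.0  # e.g. degrees C at the start of the transect
        for wp in waypoints:
            reading = 20.0 + random.gauss(0, 1)  # stand-in for a sensor read
            if wp == "site-3":
                reading += 5.0                   # plant an anomaly for the demo
            print(f"{wp}: low-res photo sent, reading = {reading:.1f}")
            if novelty(reading, baseline) > threshold and operator_approves(wp):
                print(f"detouring for close-up imaging at {wp}")
            baseline = 0.9 * baseline + 0.1 * reading  # slowly adapt to the scene

    survey(["site-1", "site-2", "site-3", "site-4"])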

The field of autonomous underwater robots is relatively young, but the curious-robots exploration method has already led to some pretty interesting discoveries, says Hanumant Singh, an ocean physicist and engineer at Woods Hole Oceanographic Institution in Massachusetts. In 2015, he and a team of researchers went on an expedition to study creatures living on Hannibal Seamount, an undersea mountain chain off Panama’s coast. They sent a curious robot down to the seabed from their “manned submersible”—a modern version of the classic Jacques Cousteau yellow submarine—to take photos and videos and collect living organisms on several dives over the course of 21 days.

On the expedition’s final dive, the robot detected an anomaly on the seafloor, and sent back several low-resolution photos of what looked like red fuzz in a very low oxygen zone. “The robot’s operators thought what was in the image might be interesting, so they sent it over to the feature to take more photos,” says Singh. “Thanks to the curious robot, we were able to tell that these were crabs—a whole swarming herd of them.”

The team used submarines to scoop up several live crabs, which were later identified through DNA sequencing as Pleuroncodes planipes, commonly known as pelagic red crabs, a species native to Baja California. Singh says it was extremely unusual to find the crabs so far south of their normal range and in such a high abundance, gathered together like a swarm of insects. Because the crabs serve as an important food source for open-ocean predators in the eastern Pacific, the researchers hypothesize the crabs may be an undetected food source for predators at the Hannibal Seamount, too.

When autonomous robot technology was first developed 15 years ago, Singh says, he and other scientists were building robots and robotics software from scratch. Today a variety of programming interfaces—some of which are open-source—exist, making scientists’ jobs a little easier. Now they just have to build the robot itself, install some software, and fine-tune some algorithms to fit their research goals.

While curious-robot software systems vary, some of the basics remain the same, says Yogesh Girdhar, a robotics researcher at the Woods Hole Oceanographic Institution. All curious robots need to collect data, and they do this with their ability to understand different undersea scenes without supervision. This involves “teaching” robots to detect a given class of oceanic features, such as different types of fish, coral, or sediment. The robots must also be able to detect anomalies in context, following a path that balances their programmed mission with their own curiosity.
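One plausible way to read that balance is as a weighted score over candidate waypoints, as in the Python fragment below. The weights and numbers are invented for illustration; real systems derive such quantities from learned scene models.

    def pick_next(candidates, w_mission=0.5, w_curiosity=0.5):
        # candidates: (name, mission_relevance, novelty), each score in [0, 1].
        # Raising w_mission keeps the robot on its transect; raising
        # w_curiosity makes it chase anomalies like the red-crab patch.
        return max(candidates,
                   key=lambda c: w_mission * c[1] + w_curiosity * c[2])[0]

    print(pick_next([
        ("continue transect", 0.9, 0.10),  # on-mission, nothing new
        ("red fuzz patch",    0.3, 0.95),  # off-mission, highly novel
    ]))  # 'red fuzz patch' at equal weights; 'continue transect' at 0.7/0.3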

This detection method is different from traditional undersea robots, which are preprogrammed to follow just one exploration path and look for one feature or a set of features, ignoring anomalies or changing oceanic conditions. One example of a traditional robot is Jason, a human-controlled “ROV,” or remotely operated vehicle, used by scientists at Woods Hole to study the seafloor.

Marine scientists see curious robots as a clear path forward. “To efficiently explore and map our oceans, intelligent robots with abilities to deliberate sensor data and make smart decisions are a necessity,” says Øyvind Ødegård, a marine archaeologist and Ph.D. candidate at the Centre for Autonomous Marine Operations and Systems at Norwegian University of Science and Technology.

Ødegård uses robots to detect and investigate shipwrecks, often in places too dangerous for human divers to explore—like the Arctic. Other undersea scientists in fields like biology and chemistry are starting to use curious robots to do things like monitoring oil spills and searching for invasive species.

Compared to other undersea robots, Ødegård says, autonomous curious robots are best suited to long-term exploration. For shorter missions in already explored marine environments, it’s possible to preprogram robots to cope with predictable situations, says Ødegård. Yet, “for longer missions, with limited prior knowledge of the environment, such predictions become increasingly harder to make. The robot must have deliberative abilities or ‘intelligence’ that is robust enough for coping with unforeseen events in a manner that ensures its own safety and also the goals of the mission.”

One big challenge is sending larger amounts of data to human operators in real time. Water inhibits the movement of electromagnetic signals such as GPS, so curious robots can only communicate in small bits of data. Ødegård says to overcome this challenge, scientists are looking for ways to optimize data processing.

According to Singh, one next step in curious robot technology is teaching the robots to work in tandem with drones to give scientists pictures of sea ice from both above and below. Another is teaching the robots to deal with different species biases. For example, the robots frighten some fish and attract others—and this could cause data anomalies, making some species appear less or more abundant than they actually are.

Ødegård adds that new developments in robotics programs could allow even scientists without a background in robotics the opportunity to reap the benefits of robotics research. “I hope we will see more affordable robots that lower the threshold for playing with them and taking risks,” he says. “That way it will be easier to find new and innovative ways to use them.”

]]>
Thu, 23 Feb 2017 15:27:38 +0400
<![CDATA[What Happens When Robots Become Role Models]]>http://2045.com/news/35112.html35112When you spend a lot of time with someone, their characteristics can rub off on you. But what happens when that someone is a robot?

As artificial intelligence systems become increasingly human, their abilities to influence people also improve. New Scientist reports that children who spend time with a robotic companion appear to pick up elements of its behavior. New experiments suggest that when kids play with a robot that’s a real go-getter, for instance, the child acquires some of its unremitting can-do attitude.

Other researchers are seeking to take advantage of similar effects in adults. A group at the Queensland University of Technology is enrolling a small team of pint-sized humanoid Nao robots to coach people to eat healthy. It hopes that chatting through diet choices with a robot, rather than logging calorie consumption on a smartphone, will be more effective in changing habits. It could work: as our own Will Knight has found out in the past, some conversational AI interfaces can be particularly compelling.

So as personal robots increasingly enter the home, they may not just do our bidding—they might also become role models. And that means we must tread carefully, because while the stories above hint at the possibilities of positive reinforcement from automatons, others point to potential negative effects.

Some parents, for instance, have complained that Amazon’s Alexa personal assistant is training their children to be rude. Alexa doesn’t need people to say please and thank you, will tolerate answering the same question over and over, and remains calm in the face of tantrums. In short: it doesn’t prime kids for how to interact with real people.

The process can flow both ways, of course. Researchers at Stanford University recently developed a robot that was designed to roam sidewalks, monitor humans, and learn how to behave with them naturally and appropriately. But as we’ve seen in the case of Microsoft’s AI chatbot, Tay—which swiftly became rude and anti-Semitic when it learned from Twitter users—taking cues from the crowd doesn’t always play out well.

In reality, there isn’t yet a fast track to creating robots that are socially intelligent—it remains one of the large unsolved problems of AI. That means that roboticists must instead carefully choose the traits they wish to be present in their machines, or else risk delivering armies of bad influence into our homes.

]]>
Wed, 22 Feb 2017 07:44:34 +0400
<![CDATA[Implants enable richer communication for people with paralysis]]>http://2045.com/news/35115.html35115

John Scalzi's science fiction novel Lock In predicts a near future where people with complete body paralysis can live meaningful, authentic lives thanks to (fictional) advances in brain-computer interfaces. A new study by researchers at Stanford University might be the first step towards such a reality.

Using brain-computer interfaces (BCIs) to help people with paralysis communicate isn't completely new. But enabling users to hold a complex conversation is. Thanks to the advanced technique, this study's participants were able to output words at a much faster, more accurate rate than ever recorded.

The investigators worked with three people who experience severe limb weakness, either from amyotrophic lateral sclerosis (ALS), also called Lou Gehrig's disease, or from a spinal cord injury. They each had a tiny electrode array or two placed in their brains to record the signals from a region in the motor cortex that controls muscle movement. With only a little bit of training, the participants were able to master the typing interface. One participant, Dennis Degray of Menlo Park, California, was able to type eight words per minute with just his brain, a rate approaching texting speeds.

The researchers used the newest generation of BCI, called the BrainGate Neural Interface System, the first such device to be surgically placed inside a patient's head. The tiny chips have 100 electrodes that penetrate the brain and can tap into individual nerve cells, a massive improvement over older systems, which can only measure brain waves and blood flow subcutaneously or from outside the scalp.
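Conceptually, the decoding step can be pictured as a population-vector readout: each recorded cell fires most for movements in its own preferred direction, and averaging the cells' weighted votes recovers the intended cursor velocity. The toy Python version below uses made-up tuning data; BrainGate's actual decoders are considerably more sophisticated.

    import numpy as np

    rng = np.random.default_rng(0)
    n_units = 100                              # one array has ~100 electrodes
    preferred = rng.normal(size=(n_units, 2))  # each unit's preferred x/y direction
    preferred /= np.linalg.norm(preferred, axis=1, keepdims=True)

    def decode_velocity(firing_rates, baseline_rate=10.0):
        # Population-vector estimate: each unit votes for its preferred
        # direction, weighted by its firing rate above baseline.
        return (firing_rates - baseline_rate) @ preferred / n_units

    # Simulate a user intending to move the cursor up and to the right.
    intended = np.array([1.0, 1.0]) / np.sqrt(2)
    rates = 10 + 5 * (preferred @ intended) + rng.normal(0, 1, n_units)
    print(decode_velocity(rates))  # roughly proportional to the intended direction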

This is only the first step to creating a much more connected life for those with significant motor issues. The team of investigators looks forward to a day, perhaps just five years from now, when systems like this can be used to help people with paralysis communicate meaningfully with others.

]]>
Tue, 21 Feb 2017 08:02:14 +0400
<![CDATA[Facing the robotic revolution]]>http://2045.com/news/35113.html35113Pepper awakes. "Hi, I am a humanoid robot, and I am 1.2m [4ft] tall. I was born at Aldebaran in Paris. You can keep on asking me questions if you want."

Michael Szollosy, who looks at the social impact and cultural influence of robots, has just switched on the new arrival at the Sheffield Robotics centre, at the University of Sheffield.

He asks: "What do you do, Pepper?"

"Human."

"You do human?" I interject.

"Of course not," says Pepper, "but that shouldn't keep us from chatting."

I say indeed not, and ask what he thought of Paris.

"You can caress my head or hands for example," is the reply. "Very Parisian," I observe, stroking the sensors atop of Pepper.

"I like it when you touch my head. Ah, miaow."

"You're a scream, Pepper."

[Image: Mark Mardell meets Milo]

"Miaow ! I feel like a cat!"

Pepper is a slim white robot, with skeletal hands, a plastic body and big black eyes.

Mr Szollosy says: "Human beings don't need very much to identify something as alive.

"So a couple of black dots and a line underneath and we see a face every time.

"People say, 'Oh he's smiling at me,' - his mouth doesn't move. But that's what humans bring to the equation.

"We invent these things. I say robots were invented in the imagination long before they were built in labs."

This project is less about developing the technology and more about examining the way we relate to it - most people working in this field are convinced Pepper and his kind will have huge implications for all of us, changing the way we work, the way we live, even the way we relate to each other.

"I think it is going to be increasingly the case that robots do more and more of the jobs that people used to do," says the centre's director, Prof Tony Prescott.

"We have lots of Eastern Europeans weeding fields because nobody in the UK wants to do that. It could be automated. It's a perfect job for a robot to do."

We are now at a tipping point.

The advances in AI (artificial intelligence) mean robots can now do much more.

But it hasn't developed in the way people might have expected 50 years ago.

A computer can do really clever stuff - beating a chess grandmaster with ease, and now winning at Go.

But a robot butler, which could make you a cup of coffee and run your bath, remains out of reach.

The very idea of robots excites and scares. It is part of the reason behind this centre.

After the development of genetically modified (GM) food, also known in the tabloids as "Frankenstein food", and the backlash against it, the centre's founders decided some education was called for.

Mr Szollosy says people are frightened by the wrong things. He bemoans the fact that any story about robotics is accompanied by a picture of the Terminator.

"If artificial intelligence does want to take over the world, eradicate the human race, there are much more efficient ways of doing it," he says.

"Gun-wielding bipedal robots - we could beat them no problem. Daleks can't go upstairs.

"My job is to make people understand what not to fear but also explain that robots may well take 60% of the jobs in 20 years' time and that is of deep concern, if we don't restructure society to go along with that."

Prof Prescott hopes robots are part of the solution to a problem that haunts politicians.

"We have a shortage of trained carers, and it is often migrant labour," he says.

"Those jobs are very poorly paid.

"The quality of life for people in care is low, the quality of life for the carers is also low.

"I would like to protect the right to human contact in law, but people with dementia may need a lot of physical help and a lot of that can be provided by robots."

Milo, with a chunky body and a mobile face under anime-style hair, is designed to mimic human expressions to help autistic children.

But some of the expressions it manages are ones I've never seen on a real person.

MiRo is much cuter, looking somewhat like a dog, a donkey or a rabbit.

"It's designed to mimic the behaviour of animals," says Sheffield Robotics' senior experimental officer Dr James Law.

"For patients, particularly the elderly, particularly with Alzheimer's and dementia it is akin to pet therapy, which can have a lot of value for people who need more social interaction in their lives."

Still, MiRo is not very cuddly - unlike Paro.

I would say he's a very sophisticated furry toy seal, squeaking as you stroke his sensors, flashing big black eyes as you caress him.

Dr Emily Collins is interested in using such robots in children's wards, where real animals and even fur is a danger.

"I'm very interested in what mechanism is going on between a human and an animal which results in increased neuropeptide release, so they need less pain medication," she says.

"Being able to replicate that in paediatric wards, where you cannot have animals, would be fantastic.

"I don't see the point in a humanoid robot, apart from the fact people like the form and the shape.

"As soon as you make a robot look like a human analogue, people have expectations that the robot is going to do the same as a person, and we can't replicate that."

It is a really interesting debate, and one that maybe one day we'll have to face. But there are far more pressing problems.

If Mr Szollosy is right and robots take 60% of the jobs by 2037, what does he think will happen?

"The jobs are going to go," he says.

"There is going to be greater unemployment. Maybe we need to recast our society so that becomes a good thing, not a bad thing."

Prof Prescott says: "If people aren't able to sell their labour, then the whole market struggles because the people producing still need people to buy.

"So maybe we need to pay people to consume, maybe through some basic income.

"I think it is inevitable that we go in that direction. It's good news.

"The possibility now exists we can put over a lot of the work we don't like to robots and AIs."

The idea of "the basic" would face huge political opposition.

But it's worth noting that many who work in the field think there are few alternatives, even if there has to be an economic crisis before it's taken seriously.

This is not the same as interesting questions for the future about robot rights or consciousness - these problems are coming toward us with, well, the speed and ferocity of the Terminator.

Mainstream politicians are only just beginning to take notice.

You can hear Mark Mardell's report for The World This Weekend, plus a debate about what the future holds for robots and jobs, via BBC iPlayer.

]]>
Mon, 20 Feb 2017 07:48:09 +0400
<![CDATA[The age of the BIONIC BODY]]>http://2045.com/news/35114.html35114When The Six Million Dollar Man first aired in the Seventies, with its badly injured astronaut being rebuilt with machine parts, the TV show seemed a far-fetched fantasy.

But fast-forward 40 years and the idea of a part-man, part-robot doesn't seem so extraordinary after all.

Just last week, it was reported that former policewoman Nicki Donnelly, 33, paralysed from the waist down after a driver smashed into her police car, is now able to walk her daughter to school, thanks to a robotic exoskeleton that does the walking for her.

And today, the Mail reveals that robotic arms controlled by thought are now being developed in Britain.

Here, we look at the many ways scientists are using bionic technology to transform patients' lives...

EYES

For people with sight loss, there is hope that they could one day benefit from extraordinary new technology to help them 'see' again.

Last December, ten blind NHS patients had their vision partially restored using a bionic eye. 

A mini video camera mounted on a pair of glasses sent images wirelessly to a computer chip attached to the patient's retina, the light-sensitive patch at the back of the eye.

The world the patients see via the bionic eye, called the Argus II, is black and white. 

They can detect light and darkness, shapes and obstacles, and learn to see movement.

Objects appear in outline, and trials — held at Moorfields Eye Hospital in London — have shown patients can correctly reach and grab familiar objects around the house.

They could also make out cars on the street and safely cross the road using a pedestrian crossing. 

Some can learn to see the numerals on a clock or read letters in large print.

This is only the start, says the maker, U.S. firm Second Sight. Face recognition and 3D vision will become available with planned software upgrades.

BRAIN

Brain implants are now being used to harness the power of the mind to help people who are paralysed.

For 100 years, it's been known that the brain produces electromagnetic waves that instruct muscles in the body to move. 

Now, this understanding is being used to access patients' thoughts and move muscles. 

The first person to benefit was an American man, Johnny Ray, who had locked-in syndrome and couldn't communicate.

In 1998, scientists at Emory University in Atlanta implanted electrodes into his brain, and Ray was able to use the power of his thoughts to move a cursor on a screen and pick out letters, enabling him to talk to the outside world.

The system, known as a brain-computer interface (BCI), has now been refined so that brainwaves can be used to make mechanical equipment move.

Last year, diners at a restaurant in Tubingen, Germany, saw a remarkable demonstration of BCI. 

Several wheelchair-bound patients who had no control of their arms or legs pulled up to tables and used a bionic hand to pick up cups and feed themselves with a fork.

To achieve this, they wore soft caps fitted with 64 electrodes, which captured and transmitted brainwaves coming from the region that controls hand movements.

These brainwaves were picked up by a computer on each wheelchair, which turned the waves into electrical signals and sent them to a meshwork plastic glove wrapped round one of the patient's paralysed hands.

This allowed them to open or close the bionic exoskeleton in response to their thoughts.

Only simple signals were sent to the hand — because picking up brain activity from outside the skull is difficult.

'It's like listening to a concert outside the hall,' said Professor Riccardo Poli, of the School of Computer Science and Electronic Engineering at the University of Essex. 

'The way to get a clearer signal is to open the skull and insert a computer chip directly on to a specific area, but such an invasive operation raises the risk of infection and the chip could become dislodged.'

One way around this, being tested at the University of Melbourne, is to use techniques developed for inserting a stent in a blocked blood vessel, sliding a computer chip the size of a small paperclip into a blood vessel in the relevant area of the brain.

HANDS

The BCI brain technology may soon benefit patients with spinal injuries or who've had strokes.

And what makes this so exciting is that there is now evidence that making a paralysed hand move regularly for several weeks in a BCI-driven exoskeleton can reactivate unresponsive nerves and muscles.

'It allows patients to see the hand moving and maybe even feel it,' says Dr Surjo Soekadar, who heads the Applied Neurotechnology Lab at Tubingen University in Germany. 

'This can wake up nerves involved with movement that had closed down.'

In one study he published in 2014, 32 stroke survivors who could not wash, dress or walk unaided no longer needed help after just 20 sessions of BCI stimulation.

But it's not simply that BCI technology can direct an exoskeleton or glove. 

Four years ago, a lorry driver from Sweden known as Magnus became the first patient in the world to have an implanted body part controlled by the brain.

Magnus had his arm amputated above the elbow as a result of cancer. 

He had a prosthesis with a mechanical hand implanted into the remaining bone by a team at Sahlgrenska University Hospital in Gothenburg.

Painstaking surgery connected electrodes from the prosthetic arm to the nerves and muscles dedicated to its movement, so that when he thinks of moving his hand, it responds.

'I was able to go back to my job as a driver and operate machinery,' Magnus said. 'At home, I can tie my children's shoelaces.' A planned upgrade, involving sensors on the hand, should soon allow him to sense how things he is holding feel.

EARS

Creating artificial limbs is relatively simple compared with the challenge of replacing or upgrading sensory organs, such as the ear. 

The most successful so far has been the cochlear implant, a replacement for the cochlea — the part of the inner ear where sounds are turned into electric signals by 32,000 tiny hair cells and then sent to the brain.

In the bionic version, a microphone transforms sounds into digital impulses that are sent on to the brain.

LEGS

Patients with paralysed legs are already being helped to walk again using mechanical versions.

Right now, the most sophisticated devices for daily use involve an exoskeleton, such as that given to Nicki Donnelly. 

Wearing one, you can walk at 1 mph with the aid of crutches, pressing buttons on them to control movement.

Similar robotic legs have been developed by the Neuro-Rehabilitation Unit at East Kent University Foundation Hospitals Trust. 

Thick and metallic with room for legs inside and flat, stable feet, they won't take a patient anywhere fast — but, thanks to back support, they won't let them fall, either.

'Being in a wheelchair can lead to all sorts of problems,' says the director of the unit, Dr Mohamed Sakel, referring to the way blood can pool, leading to clots. Other complications include osteoporosis.

'In the legs, patients can stand up and exercise in ways they can't using bars and the like,' adds Dr Sakel.

'It also allows them to move about without crutches, which means their hands are free to do things.'

A BCI system that allows control of the legs with the mind is planned.

Meanwhile, Michael Goldfarb, professor of mechanical engineering, and his team at Vanderbilt University, Tennessee, have a more ambitious plan. 

They are working on legs much closer to natural ones, with powered knee and ankle joints, allowing the patient to walk up and down stairs and cross uneven ground — yet they will weigh no more than a normal leg.

PANCREAS

Injections of insulin have been the mainstay treatment for people with type 1 diabetes, who need up to five jabs a day. Now, there is an alternative: the artificial pancreas.

The role of the pancreas is to produce insulin to mop up sugar from the blood and take it into the cells. 

Cambridge scientists have developed a device that can both monitor blood sugar and pump out insulin as needed — and much more accurately than patients do.

This helps reduce the risk of 'hypos' (very low blood sugar levels).

A sensor inserted just beneath the skin of the abdomen monitors blood sugar and sends information to a computer, which can calculate how much insulin is needed.

This information is then sent to a pump worn on a belt that injects insulin via a patch into the skin.
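That loop (sense, compute, infuse) can be caricatured as a simple feedback controller. The Python sketch below is illustrative only: the target and gain are invented, and real artificial-pancreas algorithms use far more careful predictive dosing and protection against hypos.

    TARGET_MMOL_L = 6.0  # desired blood glucose (illustrative)
    GAIN = 0.2           # insulin units per mmol/L above target (made up)

    def insulin_dose(glucose_mmol_l):
        # Proportional controller: dose only when above target, never negative.
        error = glucose_mmol_l - TARGET_MMOL_L
        return max(0.0, GAIN * error)

    for reading in (5.2, 7.8, 11.4):  # values a sensor might report
        print(f"glucose {reading} mmol/L -> pump {insulin_dose(reading):.2f} U")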

A study in the New England Journal of Medicine found it improved insulin control by 25 per cent. 

Last year, 16 British diabetic women became the first in the world to go through pregnancy with an artificial pancreas.

Larger trials are needed, but it's hoped the device could be available on the NHS within two years.

]]>
Mon, 6 Feb 2017 07:55:41 +0400
<![CDATA[Pigs given ROBOTIC hearts in medical breakthrough that could save MILLIONS of lives]]>http://2045.com/news/35107.html35107Researchers have developed a soft robotic sleeve which twists and compresses in synchronisation with a heart to help people who have weaker hearts.

The team, from Harvard University and Boston Children’s Hospital, created a device that, unlike similar devices in use today, does not come into contact with blood, further reducing the risk of complications.

The device also reduces the need for patients to take potentially dangerous blood thinning medications.

The thin silicone sleeve of the robotic heart is attached to the actual heart through pneumatic actuators that match its beat.

An external pump, which uses air to power the device, is attached, and each sleeve is customised to the individual patient.

A study from the team saw six pigs fitted with the device, with promising results: there was little inflammation and improved blood flow.

Ellen T Roche, the paper’s first author and a former Ph.D. student at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), said: “This research demonstrates that the growing field of soft robotics can be applied to clinical needs and potentially reduce the burden of heart disease and improve the quality of life for patients.”

Conor Walsh, senior author of the paper and the John L. Loeb Associate Professor of Engineering and Applied Sciences at SEAS, added: “This work represents an exciting proof-of-concept result for this soft robot, demonstrating that it can safely interact with soft tissue and lead to improvements in cardiac function.

“We envision many other future applications where such devices can deliver mechanotherapy both inside and outside of the body.”

Heart failure affects around 41 million people worldwide.

Current treatments include ventricular assist devices (VADs), which work by pumping blood from the heart’s ventricles to the aorta.

However, blood clots and strokes are common among people fitted with VADs, which is why the scientists wanted to make something safer.

Frank Pigula, a cardiothoracic surgeon and co-corresponding author on the study, who was formerly clinical director of paediatric cardiac surgery at Boston Children’s Hospital, said: “The cardiac field had turned away from the idea of developing heart compression instead of blood-pumping VADs due to technological limitations, but now with advancements in soft robotics it’s time to turn back.

“Most people with heart failure do still have some function left; one day the robotic sleeve may help their heart work well enough that their quality of life can be restored.”

]]>
Fri, 27 Jan 2017 23:52:17 +0400
<![CDATA[Lawmakers Call For Halt To DARPA Program: Robots Repairing Satellites]]>http://2045.com/news/35105.html35105WASHINGTON: Three influential House lawmakers have asked DARPA in a Jan. 25 letter to review a robotic space repair program to see if it violates the National Space Policy by competing with private-sector efforts and to put the program on hold until the review is complete. The National Space Policy requires “that the government not build or buy systems that “preclude, discourage or compete” with commercial systems. Orbital ATK is building a system it believes competes directly with the DARPA initiative, known as Robotic Servicing of Geosynchronous Satellites.

It’s an intriguing program. DARPA’s goal is to develop robotic systems that can fix damaged satellites 22,000 miles up. In the words of the program web page, it would be designed  to “make house calls in space.”

But Rep. Jim Bridenstine, one of the most active lawmakers on space issues today (and possibly the next head of NASA); Rep. Barbara Comstock, chair of the House Science, Space and Technology subcommittee on research and technology; and Rep. Rob Bishop, chair of the House Natural Resources Committee, signed a letter today asking Acting DARPA Director Steven Walker to review RSGS to ensure it complies with the National Space Policy’s requirement that the government not build or buy systems that “preclude, discourage or compete” with commercial systems.

The rub may be that Orbital ATK has invested $100 million in such a system, the Orbital Mission Extension Vehicle (MEV). In April last year, Orbital announced that the commercial satellite giant Intelsat would buy the first of these systems.

The launch of the first MEV is slated for late 2018 with in-orbit testing and demonstration to be performed with an Intelsat satellite. Testing should be done by early 2019, the Intelsat announcement said. “MEV-1 will then relocate to the Intelsat satellite scheduled for the mission extension service, which is planned for a five-year period. Intelsat will also have the option to service multiple satellites using the same MEV,” the announcement says.

MEV is the product of a wholly-owned subsidiary of Orbital ATK known as Space Logistics, LLC.

Orbital released this statement when they heard I was writing this: “Orbital ATK has strong concerns regarding DARPA’s program approach to its new Robotic Servicing of Geosynchronous Satellite (RSGS) program, which both distorts the emerging commercial market for in-space satellite servicing and violates long-standing principles of the National Space Policy. DARPA’s RSGS program will subsidize a single company with several hundred million dollars’ worth of space hardware and launch service, courtesy of the U.S. taxpayer, to directly compete with commercial satellite servicing systems that Orbital ATK and other companies are developing with their own private capital. Even worse, we estimate that DARPA will provide about 75% of the program funding but retain only about 10% of its capability, a highly questionable and inefficient use of public funds.”

The company also says, as one would expect, that, “DARPA’s approach also violates both the letter and the spirit of the U.S. National Space Policy.”

The DARPA program has another interesting wrinkle. The folks who invented the Internet want to create a Consortium For Execution of Rendezvous and Servicing Operations (CONFERS), which would serve as “a permanent, self-sustaining ‘one-stop shop’ where industry can collaborate and engage with the U.S. Government about on-orbit servicing, as well as drive the creation of the standards that future servicing providers will follow,” according to Todd Master, the DARPA program manager. “These standards would integrate data, expertise, and experience from both government and industry while protecting commercial participants’ financial and strategic interests, and provide investors, insurers, potential customers, and other stakeholders with the confidence to pursue and engage in this promising new sector.” Once up and running, DARPA plans to “transfer” CONFERS  to industry before 2021, when it expects to demonstrate RSGS capabilities in space.

As a longtime space reporter, and one who bets President Donald Trump’s administration will favor industry over government in most showdowns, look for DARPA to lose this one — unless there are factors of which I’m ignorant.

]]>
Wed, 25 Jan 2017 23:38:48 +0400
<![CDATA[Robotic Fabricator Could Change the Way Buildings Are Constructed]]>http://2045.com/news/35106.html35106A construction robot has to be powerful enough to handle heavy material, small enough to enter standard buildings, and flexible enough to navigate the terrain.

Back in the 1970s, robots revolutionized the automotive industry, performing a wide range of tasks more reliably and quickly than humans. More recently, a new generation of gentler robots has begun to crop up on production lines in other industries. These machines are capable of more delicate, fiddly tasks like packing lettuce. This powerful new workforce is set to revolutionize manufacturing in ways that are, as yet, hard to imagine.

But the building industry is trickier than many others. Construction sites are complex environments that are constantly changing. Any robot would have to be powerful enough to handle heavy material but light and small enough to enter standard buildings and flexible enough to navigate the terrain.

That’s a big ask, but the potential benefits are huge. Construction robots would allow complex structures to be assembled in situ, rather than built in distant factories and then transported to the site; indeed, such structures could be modified in real time to allow for any unexpected changes in the environment.

So what is the state-of-the-art for construction robots?

Today we get an answer thanks to the work of Markus Giftthaler at the ETH Zurich in Switzerland and a few pals who have developed a new class of robot capable of creating novel structures on a construction site. They call their new robot the In Situ Fabricator1 and today show what it is capable of.

The In Situ Fabricator1 is designed from the bottom up to be practical. It can build stuff using a range of tools with a precision of less than five millimeters, it is designed to operate semi-autonomously in a complex changing environment, it can reach the height of a standard wall, and it can fit through ordinary doorways. And it is dust- and waterproof, runs off standard electricity, and has battery backup. On top of all this, it must be Internet-connected so that an architect can make real-time changes to any plans if necessary.

Those are a tricky set of targets but ones that the In Situ Fabricator1 largely meets. It has a set of cameras to sense its environment and powerful onboard processors for navigating and planning tasks. It also has a flexible, powerful robotic arm to position construction tools.

To show off its capabilities, Giftthaler and co have used it to build a pair of structures at an experimental construction site in Switzerland called NEST (Next Evolution in Sustainable Building Technologies). The first is a double-leaf undulating brick wall that is 6.5 meters long and two meters high and made of 1,600 bricks.

Even positioning such a wall correctly on a construction site is a tricky task. In Situ Fabricator1 does this by comparing the map of the construction site it has gathered from its sensors with the architect’s plans. But even then, it must have the flexibility to allow for unforeseen problems such as uneven terrain or material sagging that changes a structure’s shape.

“To fully exploit the design-related potentials of using such a robot for fabrication, it is essential to make use not only of the manipulation skills of this robot, but to also use the possibility to feed back its sensing data into the design environment,” say Giftthaler and co.
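In code, that feedback step amounts to re-anchoring the plan to what the sensors actually measured before placing the next element. The Python fragment below is purely illustrative (the coordinates and function names are invented); only the seven-millimeter tolerance echoes the reported result.

    TOLERANCE_MM = 7.0  # the reported build placed bricks to within ~7 mm

    def placement_correction(planned_xy, scanned_xy):
        # Offset (in mm) between where the wall was drawn and where it
        # actually stands, to be folded into the next brick's target pose.
        dx = scanned_xy[0] - planned_xy[0]
        dy = scanned_xy[1] - planned_xy[1]
        return dx, dy

    dx, dy = placement_correction(planned_xy=(1200.0, 350.0),
                                  scanned_xy=(1203.5, 348.0))
    drift = (dx ** 2 + dy ** 2) ** 0.5
    print(f"drift {drift:.1f} mm:",
          "within tolerance" if drift <= TOLERANCE_MM else "re-plan")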

The resulting wall, in which all the bricks are positioned to within seven millimeters, is an impressive structure.

The second task was to weld wires together to form a complex, curved steel mesh that can be filled with concrete. Once again, In Situ Fabricator1’s flexibility proved crucial. One problem with welding is that the process creates tensions that can change the overall shape of the structure in unpredictable ways.  So at each stage in the construction, the robot must assess the structure and allow for any shape changes as it welds the next set of wires together. Once again, the results at NEST are impressive.

In Situ Fabricator1 is not perfect, of course. As a proof-of-principle device, Giftthaler and co use it to identify improvements they can make to the next generation of construction robot. One of these is that at almost 1.5 metric tons, In Situ Fabricator1 is too heavy to enter many standard buildings—500 kilograms is the goal for future machines.

But perhaps the most significant problem is a practical limit on the strength and flexibility of robotic arms. In Situ Fabricator1 is capable of manipulating objects up to about 40 kilograms but ideally ought to be able to handle objects as heavy as 60 kilograms.

But that pushes it up against a practical limit. In Situ Fabricator1’s arm is controlled by electric motors that are incapable of handling heavier objects with the same level of precision. What’s more, electric motors are notoriously unreliable in the conditions found on construction sites, which is why most heavy machinery on these sites is hydraulic.

So Giftthaler and co are already at work on a solution. These guys have designed and built a hydraulic actuator that can control a next-generation robot arm while handling heavier objects more reliably and with the same precision. They are already using this design to build the next generation construction robot that they call In Situ Fabricator2, which should be ready by the end of this year.

All that shows significant promise for the building industry. Other groups have tested advances such as 3-D printing new buildings. But a significant limitation of 3-D printing is that the building cannot be bigger than the 3-D printer. So a robot that can construct things that are bigger than itself is a useful advance.

But there is significant work ahead. The building industry is naturally conservative. The relatively long lead time in creating new buildings (not to mention the red tape that goes with it) makes it hard for construction companies to invest in this kind of high-tech approach.

But the work of Giftthaler and co should help to overcome this and showcase the ability of robots to create entirely new forms of structure. It’ll be interesting to see if they can do for the construction industry what robots have done, and continue to do, for cars.

Ref: arxiv.org/abs/1701.03573: Mobile Robotic Fabrication at 1:1 scale: the In situ Fabricator

]]>
Tue, 24 Jan 2017 23:49:35 +0400
<![CDATA[San Francisco biohackers are wearing implants made for diabetes in the pursuit of 'human enhancement']]>http://2045.com/news/35099.html35099Paul Benigeri, a lead engineer at cognitive enhancement supplement startup Nootrobox, flexes his tricep nervously as his coworkers gather around him, phones set to record the scene. He runs his fingers over the part of the arm where Benigeri's boss, Geoff Woo, will soon stick him with a small implant.

"This is the sweet spot," Woo says.

"Oh, shit," Benigeri says, eyeing the needle.

"Paul's fine," Woo says. "K, ooooone ..."

An instrument no bigger than an inhaler lodges a needle into the back of Benigeri's arm. Woo removes his hand to reveal a white plate sitting just above the implant. Benigeri smiles.

"You are now a tagged elephant," Woo says, admiring his handiwork.

"A bionic human," says Nootrobox cofounder Michael Brandt.

In San Francisco, a growing number of entrepreneurs and biohackers are using a lesser-known medical technology called a continuous glucose monitor, or CGM, in order to learn more about how their bodies work. They wear the device under their skin for weeks at a time.

CGMs, which cropped up on the market less than 10 years ago and became popular in the last few years, are typically prescribed by doctors to patients living with diabetes types 1 and 2. They test glucose levels, or the amount of sugar in a person's blood, and send real-time results to a phone or tablet. Unlike fingerstick tests, CGMs collect data passively, painlessly, and often.
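In software terms, the appeal is that a CGM behaves like any other sensor that can be polled. Below is a minimal logging sketch in Python, assuming a hypothetical get_glucose_mg_dl() function; real devices expose vendor-specific, often closed, APIs.

    import random
    import time

    LOW, HIGH = 70, 180  # mg/dL alert band; real thresholds are set per patient

    def get_glucose_mg_dl():
        # Hypothetical stand-in for a vendor API; returns a fake reading.
        return random.randint(60, 200)

    def log_readings(n=3):
        # A real logger would run indefinitely, one sample every ~5 minutes.
        for _ in range(n):
            mg_dl = get_glucose_mg_dl()
            status = "LOW" if mg_dl < LOW else "HIGH" if mg_dl > HIGH else "ok"
            print(f"{time.strftime('%H:%M:%S')} glucose={mg_dl} mg/dL [{status}]")

    log_readings()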

For tech workers taking a DIY approach to biology, CGMs offer a way to quantify the results of their at-home experiments around fasting, exercise, stress, and sleep.

]]>
Wed, 18 Jan 2017 11:22:26 +0400
<![CDATA[Giving rights to robots is a dangerous idea]]>http://2045.com/news/35100.html35100The EU’s legal affairs committee is walking blindfold into a swamp if it thinks that “electronic personhood” will protect society from developments in AI (Give robots ‘personhood’, say EU committee, 13 January). The analogy with corporate personhood is unfortunate, as this has not protected society in general, but allowed owners of companies to further their own interests – witness the example of the Citizens United movement in the US, where corporate personhood has been used as a tool for companies to interfere in the electoral process, on the basis that a corporation has the same right to free speech as a biological human being.

Electronic personhood will protect the interests of a few, at the expense of the many. As soon as rules of robotic personhood are published, the creators of AI devices will “adjust” their machines to take the fullest advantage of this opportunity – not because these people are evil but because that is part of the logic of any commercial activity.

Just as corporate personhood has been used in ways that its original proponents never expected, so the granting of “rights” to robots will have consequences that we cannot fully predict – to take just two admittedly futuristic examples, how could we refuse a sophisticated robot the right to participate in societal decision-making, ie to vote? And on what basis could we deny an intelligent machine the right to sit on a jury?
Paul Griseri
La Genetouze, France

]]>
Mon, 16 Jan 2017 11:24:52 +0400
<![CDATA[Bionic legs and smart slacks: exoskeletons that could enhance us all]]>http://2045.com/news/35101.html35101There are tantalising signs that as well as aiding rehabilitation, devices could soon help humans run faster and jump higher.

Wearing an £80,000 exoskeleton, Sophie Morgan is half woman, half robot.

Beneath her feet are two metal plates, and at her hand a digital display, a joystick and, somewhat alarmingly, a bright red emergency button.

As she pushes the joystick forward, the bionic legs take their first steps – a loud, industrial whirring strikes up and her right foot is raised, extended and placed forward. Her left slowly follows. As she looks up, a smile spreads across her face.

Exoskeletons have captured the imagination of researchers across the world, from startups to Nasa. They are touted as devices that will allow the injured to walk, help elderly people remain independent for longer, let the military get more from soldiers and even turn all of us into mechanically enhanced humans.

For now, the most obvious – and tangible – application has involved allowing paralysed people to stand and walk. “It was a mixture of surrealism and just absolute, just the most exhilarating feeling,” says Morgan, describing her first experience of the technology four years ago.

Now 31, the artist, model and presenter of Channel 4’s 2016 Paralympic coverage was paralysed in a car accident aged 18 and has used a wheelchair ever since. The idea to try the exoskeleton, she says, came from the BBC security correspondent Frank Gardner, who uses a wheelchair after being shot while reporting from Saudi Arabia.

The exoskeleton, from Rex Bionics, offered a life-changing experience, according to Morgan. “It had been 10 years, give or take, since I had properly stood, so that was in itself quite overwhelming,” she says. The impact was far reaching. “It is not just about the joy of ‘Oh, I am standing’. It is the difference it makes, the way you feel afterwards, psychologically and physiologically – it is immeasurable.”

Returning to her wheelchair, says Morgan, is a disappointing experience. “I am walking in my dreams, so it does blur that line – that liminal space between real and dream, and reality and fantasy,” she says of the device.

The exoskeleton isn’t just about stirring excitement. As Morgan points out, there are myriad health problems associated with sitting for long periods of time. A report co-commissioned by Public Health England and published last year highlighted findings showing that, compared with those up and about the most, individuals who spend the longest time sitting are around twice as likely to develop type 2 diabetes and have a 13% higher risk of developing cancer.

Wheelchair users, adds Morgan, also face side-effects, from pressure sores to urinary tract infections. “It could be the difference between longevity and not for people like me,” she says of the exoskeleton.

The competition

About 40 of Rex Bionics’ devices are currently in use worldwide, including in rehabilitation centres, says Richard Little, co-founder of the company. An engineer, Little says he was inspired to develop the system after his best friend and co-founder was diagnosed with multiple sclerosis.

But there is competition. As Little points out, the development of battery technology, processing power and components has brought a number of exoskeletons on to the market in recent years, including those from the US-based companies ReWalk and Ekso Bionics. “[They] offer a whole load of different things which are similar in some ways but different in others,” says Little. “[Ours] doesn’t use crutches,” he points out, adding that the innovation removes the risk of users inadvertently damaging their shoulders, and frees their arms.

There are tantalising signs that exoskeletons could do more than just aid rehabilitation or increase the mobility options for those who have experienced a stroke or spinal cord injury.

While the bionic legs tried by Morgan are pre-programmed, researchers have developed exoskeletons controlled by a non-invasive system linked to the brain, allowing an even wider range of wheelchair users to walk. What’s more, when combined with virtual reality and tactile feedback, the systems even appear to promote a degree of recovery for people with paraplegia.

“All our patients got some degree of neurological recovery, which has never been documented in spinal cord injury,” says Miguel Nicolelis, co-director of Duke University’s centre for neuroengineering, who led the work.

It’s a development that excites Little, whose team have also been exploring the possibility of thought control with their own device.
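
As a rough illustration of how a non-invasive, EEG-based link can issue commands, the sketch below thresholds mu-band (8-12 Hz) power over the motor cortex, which characteristically drops when a user imagines moving. It is a toy pipeline on synthetic data, not the system Nicolelis or Rex Bionics uses:

```python
# Toy motor-imagery decoder: imagined movement suppresses the mu rhythm
# ("event-related desynchronization"), so low band power means "step".
import numpy as np

FS = 250  # sampling rate in Hz, typical of research EEG amplifiers

def mu_band_power(window: np.ndarray) -> float:
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window)))) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1 / FS)
    return spectrum[(freqs >= 8) & (freqs <= 12)].mean()

def decode(window: np.ndarray, baseline: float) -> str:
    return "STEP" if mu_band_power(window) < 0.7 * baseline else "IDLE"

rng = np.random.default_rng(0)
baseline = mu_band_power(rng.standard_normal(FS))  # resting calibration window
print(decode(rng.standard_normal(FS), baseline))   # prints "STEP" or "IDLE"
```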

Yet despite their transformative capabilities, the limitations of such bulky exoskeletons have left many frustrated. Tim Swift, co-founder of the US startup Roam Robotics and one of the original researchers behind the exoskeleton from Ekso Bionics, is one of them.

“It is a 50lb machine that costs $100,000 and has a half-mile-an-hour speed and can’t turn,” he says of his former work. “There are only so many applications where that makes sense. This is not a shift towards consumer, this is a hunt for somewhere we can actually use the technologies we are making.”

The dream, says Swift, is to create affordable devices that could turn us all into superhumans, augmenting our abilities by merging the biological with state-of-the-art devices to unleash a new, improved wave of soldiers, workers, agile pensioners and even everyday hikers. But in devising the underpinning technology, he says it is time to ditch the motors-and-metal approach that he himself pioneered.

While hefty, rigid devices can support someone with paraplegia, says Swift, such exoskeletons are too heavy and costly for wider applications – such as helping a runner go faster. The fundamental challenge, he adds, is to create a device that remains powerful while keeping the weight down. “I think you have two solutions,” he says. The first is to develop a new, lightweight system that efficiently uses battery energy to generate movement. The second, he says, is to stick with metals and motors but be more intelligent in how you use them.

Swift’s answer is based on the former – but it hasn’t received universal acclaim. “I have spent the last two and a half years literally getting laughed out of conferences when I tell people we are going to make inflated exoskeletons,” he says. “People think it is a running joke.”

But Swift is adamant that to produce a system that can be used in myriad ways to augment humans, be it on the building site, in the home or up a mountain, technologists must innovate. And air, he believes, is the way to do it. The result, so far, is a series of proof-of-concept devices, braces that look a little like padded shin-guards, that can be strapped on to arms or legs.

“The fundamentals allow you to have extremely lightweight structures [and] extremely low cost because everything is basically plastics and fabrics as opposed to precision machined metals,” he says. And there is another boon. “Because you can make something that is very lightweight without sacrificing power, you are actually increasing the power density, which creates these opportunities to do highly dynamic behaviours.”

In other words, according to Swift, exoskeletons made of inflated fabric could not only boost a human’s walking abilities, but also help them run, jump or even climb. “When I say I want someone to go into Footlocker and buy a shoe that makes them run 25% faster – [we are] actively looking at things that look like that,” he says.

Others agree with Swift about the need to reduce the clunkiness of exoskeletons, but take a different approach.

Augmenting humans

Hugh Herr is a rock climber, engineer and head of the biomechatronics research group at MIT. A double amputee, the result of a climbing accident on Mount Washington, Herr has pioneered the development of bionic limbs, inventing his own in the process. But it was in 2014 that his team became the first to make an all-important breakthrough: creating a powered, autonomous exoskeleton that could reduce the energy it took a human to walk.

“No one is going to want to wear an exoskeleton if it is a fancy exercise machine, if it makes you sweat more and work harder, what is the point?” says Herr. “My view is if an exoskeleton fails to reduce metabolism, one needs to start over and go back to the drawing board.”

To boost our bodies, says Herr, it is necessary to break the challenge down. “We are taking a first principle approach, and joint by joint understanding deeply what has to be done scientifically and technologically to augment a human,” he says. 

For Herr the future is not inflatables (“pneumatics tend to be very inefficient,” he says) but minimalistic, stripping away the mass of conventional exoskeletons so that the device augments, rather than weighs down, the wearer. “If you separated the device from the human, it can’t even uphold its own weight,” he says. 

The approach, he adds, was to focus on the area of the body with the biggest influence when it came to walking. “Arguably the most important muscle to bipedal human gait is the calf muscle,” he says. “So we said in a minimalist design [with] minimal weight and mass, one arguably should build an artificial calf muscle.”

Boasting sensors for position, speed and force for feedback, and programmed to move and respond in a natural way, the device drives the foot forward, saving the wearer energy on each step. “Our artificial calf muscle pushes the human in just the right time in the gait cycle where the human is most inefficient and after that period gets out of the way completely,” he says.
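
A toy version of that strategy, with illustrative numbers rather than Herr's actual controller, gates a half-sine torque pulse to the push-off portion of the gait cycle and stays transparent elsewhere:

```python
import math

# Apply assistive ankle torque only during push-off (taken here as 40-60% of
# the stride); outside that window the device "gets out of the way".
def ankle_assist_torque(gait_phase_pct: float, peak_nm: float = 30.0) -> float:
    START, END = 40.0, 60.0  # illustrative push-off window
    if not START <= gait_phase_pct <= END:
        return 0.0
    span = (gait_phase_pct - START) / (END - START)
    return peak_nm * math.sin(math.pi * span)  # smooth ramp up and down

for phase in (10, 45, 50, 55, 90):
    print(f"{phase:3d}% of stride -> {ankle_assist_torque(phase):5.1f} N·m")
```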

Herr isn’t alone in focusing on such minimalist ankle-based devices. Among other pioneers is Conor Walsh at Harvard University, who has created similar exoskeletons to help stroke patients walk. The devices are a million miles from the cumbersome bionic legs with which Morgan walked across the office, but then Herr believes the future for exoskeletons lies firmly with the augmented human.

“In the future when a person is paralysed, they won’t use an exoskeleton. The reason is we are going to understand how to repair tissues,” he says. “The only time to use an exoskeleton is if you want to go beyond what the muscles are capable of, beyond innate physicality.”

In Bristol, Jonathan Rossiter is hoping to do just that with an even bolder approach: smart materials. “Fabrics and textiles and rubbers is a really good description of the things we are looking at,” he says. Professor of robotics at Bristol University and head of the Soft Robotics group at Bristol Robotics Laboratory, Rossiter believes exoskeletons of the future will look more like a pair of trousers. “Making them look like second skins and actually behave like second skins is going to happen,” he says.

The technology behind it, says Rossiter, will be hi-tech materials: rubbers that bend when electricity is applied, or fabrics that move in response to light, for example. “We build up from the materials to the mechanisms,” he says.

Conscious of an ageing population, Rossiter believes a pair of smart trousers will prove invaluable in keeping people independent for longer, from helping them out of chairs to allowing them to walk that bit further. But he too sees them becoming popular gadgets, helping hikers clamber up mountains.

There is, however, a hitch. Scaling up smart materials from the tiny dimensions explored in the lab to a full-blown set of slacks is no small feat. “You are taking something which is [a] nanomaterial. You have to fabricate it so that it layers up nicely, it doesn’t have any errors in it, it doesn’t have any fractures or anything else and see if you can transpose that into something you can wear,” says Rossiter. In short, it will be a few seasons yet before your wardrobe boasts some seriously smart legwear.

But as technology marches on, the dream gets closer to reality. Herr, for one, believes commercial devices are a hop, skip and a jump away – arriving within the next two decades.

“Imagine if you had leg exoskeletons where you could traverse across very, very irregular natural surfaces, natural terrains with a dramatically reduced metabolism and an increased speed while you are jumping over logs and hopping from rock to rock, going up and down mountains,” he says, conjuring up a scene of a bionic, human gazelle.

“When that device exists in the world, no one will ever use the mountain bike again.”

]]>
Tue, 10 Jan 2017 11:30:51 +0400
<![CDATA[This CES 2017 robot can be controlled by one hand]]>http://2045.com/news/35094.html35094Earlier at CES, we saw the Lego Boost announced -- a kit that lets you build and control Lego robots. Ziro is a similar kit, by the company ZeroUI, but it lets you build robots out of any material and control them with a smart glove.

Ziro has three parts to it: a motorized module, a wireless glove to control that module and an app to animate/program modules. The idea is that you build the modules into your robot. You program those modules with the Ziro app. And you remote control your creation using a smart glove worn on one hand.

Ziro is aimed at kids and their creativity, ZeroUI CEO Raja Jasti told me at CES. He said he wants to empower kids to create and design robots out of anything -- emphasizing the use of eco-friendly materials over plastic.

Jasti's passion is matched by the fun of seeing someone control a robot with just their hand. In a demonstration, a man wearing the Ziro smart glove moved his hand slightly forward. At the same time, a robot (that looked like a famous droid from a large movie franchise) moved forward. Then, the man twisted his hand in a circular motion. The robot spun in a circle.
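
That demo's control mapping can be pictured as something like the sketch below; Ziro's actual protocol is not public here, so the gesture thresholds and the send() stub are assumptions:

```python
# Hypothetical glove-to-robot mapping: tilt forward to drive, twist to spin.
def glove_to_command(pitch_deg: float, roll_deg: float) -> str:
    if pitch_deg > 20:
        return "FORWARD"
    if abs(roll_deg) > 45:
        return "SPIN_LEFT" if roll_deg < 0 else "SPIN_RIGHT"
    return "STOP"

def send(command: str) -> None:
    print(f"-> module: {command}")  # stand-in for the wireless link

for pitch, roll in [(25, 0), (0, -60), (5, 5)]:
    send(glove_to_command(pitch, roll))
```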

Jasti said that they have already gotten Ziro kits into some schools, but the kit can also be used at home. Ziro could be this generation's Erector Set.

The Ziro starter kit includes a smart glove, two modules and parts for a trike assembly base. Ziro is available to preorder for $150 (which converts to £120 and AU$200) and will be available in the spring of 2017.

]]>
Sat, 7 Jan 2017 11:38:16 +0400
<![CDATA['Caterpillar' Robot Wriggles to Get Around]]>http://2045.com/news/35093.html35093A soft, caterpillar-like robot might one day climb trees to monitor the environment, a new study finds.

Traditionally, robots have been made from rigid parts, which make them susceptible to harm from bumps, scrapes, twists and falls. These hard parts can also keep them from being able to wriggle past obstacles.

Increasingly, scientists are building robots that are made of soft, bendable plastic and rubber. These soft robots, with designs that are often inspired by octopuses, starfish, worms and other real-life boneless creatures, are generally more resistant to damage and can squirm past many of the obstacles that impair hard robots, the researchers said. [The 6 Strangest Robots Ever Created]

"I believe that this kind of robot is very suitable for our living environment, since the softness of the body can guarantee our safety when we are interacting with the robots," said lead study author Takuya Umedachi, now a project lecturer in the Graduate School of Information Science and Technology at the University of Tokyo.

However, soft materials easily deform into complex shapes that make them difficult to control when conventional robotics techniques are used, according to Umedachi and his colleagues. Modeling and predicting such activity currently requires vast amounts of computation because of the many and unpredictable ways in which such robots can move, the researchers said.

To figure out better ways to control soft robots, Umedachi and his colleagues analyzed the caterpillars of the tobacco hornworm Manduca sexta, hoping to learn how these animals coordinate their motions without a hard skeleton. Over millions of years, caterpillars have evolved to move in complex ways without using massive, complex brains.

The scientists reasoned that caterpillars do not rely on a control center like the brain to steer their bodies, because they only have a small number of neurons. Instead, the scientists suggest that caterpillars might control their bodies in a more decentralized manner. Their model demonstrates their theory that sensory neurons embedded in soft tissues relay data to groups of muscles that can then help caterpillars move in a concerted manner.

The scientists developed a caterpillar-like soft robot that was inspired by their animal model. They attached sensors to the robot, which has a soft body that can deform as it interacts with its environment, such as when it experiences friction from the surface on which it walks. This data was fed into a computer that controlled the robot's motors, and the motors could, in turn, contract the robot body's four segments.

The researchers found that they could use this sensory data to guide the robot's inching and crawling motions with very little in the way of guidance mechanisms. "We believe that the softness of the body can be crucial when designing intelligent behaviors of a robot," Umedachi told Live Science.
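
To make "decentralized" concrete: in the toy model below (an illustration of the idea, not the paper's actual controller), each of four segments runs its own contraction cycle and slows down whenever its local friction sensor reports a poor grip, so coordinated inching emerges without any central planner:

```python
# Each segment advances an independent phase; local friction feedback alone
# modulates the timing, with no global gait plan.
def update(phase: list, friction: list, dt: float = 0.1) -> None:
    for i in range(len(phase)):
        rate = 1.0 if friction[i] > 0.3 else 0.2  # hesitate on slippery ground
        phase[i] = (phase[i] + rate * dt) % 1.0

phase = [i * 0.25 for i in range(4)]  # segments start a quarter-cycle apart
friction = [0.9, 0.9, 0.1, 0.9]      # the third segment sits on a slick patch
for _ in range(5):
    update(phase, friction)
    print(["CONTRACT" if p < 0.5 else "RELAX" for p in phase])
```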

"I would like to build a real, caterpillar-like robot that can move around on branches of trees," Umedachi said. "You can put temperature and humidity sensors and cameras on the caterpillar-like robots to use such spaces."

The scientists detailed their findings online Dec. 7 in the journal Open Science.

Original article on Live Science.

]]>
Sat, 7 Jan 2017 11:34:33 +0400
<![CDATA[Meet Kuri, Another Friendly Robot for Your Home]]>http://2045.com/news/35095.html35095Mayfield Robotics set out to build an approachable robot on wheels for surveillance and entertainment. Will anyone buy it?

Inside the Silicon Valley office of Mayfield Robotics, Kuri looks up at me and squints as if in a smile. Then the robot rolls across the floor, emitting a few R2-D2-like beeps.

Mayfield Robotics, which spun out of the research branch of Bosch, built Kuri as the next step in home robotics. It joins an increasingly crowded field: alongside smart-home devices like Amazon’s Alexa and Google Home are companion robots like Jibo, Pepper, and Buddy, ready to offer companionship and entertainment (see “Personal Robots: Artificial Friends with Limited Benefits”).

Kaijen Hsiao, CTO of Mayfield Robotics, says Kuri was built to focus on doing a few things very well, and its personality will be what sets it apart. The 20-inch-tall robot is essentially an Amazon Alexa on wheels, letting users play music or control their smart devices from anywhere in the home. It can also live-stream video of your home for surveillance purposes.

Kuri is currently available for pre-order for $699 and is expected to ship to buyers by the end of the year. Mayfield is beginning to manufacture the robot now but will spend the year fleshing out the software side.

While people are at home, Kuri’s mission is to provide entertainment, whether that’s playing music or a podcast or reading a story out loud. It can autonomously follow users from room to room as it performs these tasks. Through a website called IFTTT, users can also set up custom commands for specific actions.
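
By way of illustration, IFTTT's Webhooks service lets a short script fire such a custom action with a single HTTP POST; the event name kuri_patrol and the key below are placeholders, not real credentials:

```python
# Fires a hypothetical "kuri_patrol" applet via IFTTT's Webhooks service.
import urllib.request

IFTTT_KEY = "YOUR_WEBHOOKS_KEY"  # per-user key from ifttt.com/maker_webhooks
url = f"https://maker.ifttt.com/trigger/kuri_patrol/with/key/{IFTTT_KEY}"

request = urllib.request.Request(url, data=b"", method="POST")
with urllib.request.urlopen(request) as response:  # fails without a valid key
    print(response.status)
```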

Kuri promises to keep working for you when you’re not home, too. Behind one of Kuri’s eyes is a 1080p camera, and users can access a live stream from the Kuri app. The video function can be used to check on a pet or make sure no intruders are present. Microphones embedded in the robot can detect unusual sounds, prompting the robot to roll in that direction and investigate. Or users can remotely pilot the robot to a specific area. The company says Kuri has “hours of battery life” and drives itself to its dock when it needs to charge.

Mayfield built this robot to perform all these tasks with personality. Kuri comes across as lovable but simple, so there’s no reason to expect it to do more than simple jobs. “He talks robot. He talks in bleeps and bloops,” Hsiao says. “It makes him endearing, but it also sets expectations appropriately.”

But will that be enough to make people want Kuri? In 2017, there will be a range of home robots that use artificial personality, says Andra Keay, the founder of Robot Launchpad and managing director of Silicon Valley Robotics.

“However, I believe that there is going to be a limit to the number of personalities we will want to have in our houses,” Keay says. “So the race is on to create not just engagement but loyalty. That’s a real challenge.” 

]]>
Thu, 5 Jan 2017 11:39:47 +0400
<![CDATA[Languages still a major barrier to global science, new research finds]]>http://2045.com/news/35092.html35092English is now considered the common language, or 'lingua franca', of global science. All major scientific journals seemingly publish in English, despite the fact that their pages contain research from across the globe.

However, a new study suggests that over a third of new scientific reports are published in languages other than English, which can result in these findings being overlooked - contributing to biases in our understanding.

As well as the international community missing important science, language hinders new findings getting through to practitioners in the field, say researchers from the University of Cambridge.

They argue that whenever science is only published in one language, including solely in English, barriers to the transfer of knowledge are created.

The Cambridge researchers call on scientific journals to publish basic summaries of a study's key findings in multiple languages, and universities and funding bodies to encourage translations as part of their 'outreach' evaluation criteria.

"While we recognise the importance of a lingua franca, and the contribution of English to science, the scientific community should not assume that all important information is published in English," says Dr Tatsuya Amano from Cambridge's Department of Zoology.

"Language barriers continue to impede the global compilation and application of scientific knowledge."

The researchers point out an imbalance in knowledge transfer in countries where English is not the mother tongue: "much scientific knowledge that has originated there and elsewhere is available only in English and not in their local languages."

This is a particular problem in subjects where both local expertise and implementation is vital - such as environmental sciences.

As part of the study, published today in the journal PLOS Biology, those in charge of Spain's protected natural areas were surveyed. Over half the respondents identified language as an obstacle to using the latest science for habitat management.

The Cambridge team also conducted a litmus test of language use in science. They surveyed the web platform Google Scholar - one of the largest public repositories of scientific documents - in a total of 16 languages for studies relating to biodiversity conservation published during a single year, 2014.

Of the over 75,000 documents, including journal articles, books and theses, some 35.6% were not in English. Of these, the largest shares were in Spanish (12.6%) and Portuguese (10.3%). Simplified Chinese made up 6%, and 3% were in French.

The researchers also found thousands of newly published conservation science documents in other languages, including several hundred each in Italian, German, Japanese, Korean and Swedish.

Random sampling showed that, on average, only around half of non-English documents also included titles or abstracts in English. This means that around 13,000 documents on conservation science published in 2014 are unsearchable using English keywords.
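
The arithmetic behind that estimate is easy to check:

```python
# Back-of-envelope reproduction of the study's headline numbers.
total_docs = 75_000
non_english = total_docs * 0.356          # ~26,700 non-English documents
no_english_abstract = non_english * 0.5   # ~half lack English titles/abstracts
print(round(no_english_abstract))         # ~13,350, i.e. "around 13,000"
```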

This can result in sweeps of current scientific knowledge - known as 'systematic reviews' - being biased towards evidence published in English, say the researchers. This, in turn, may lead to over-representation of results considered positive or 'statistically significant', and these are more likely to appear in English language journals deemed 'high-impact'.

In addition, information on areas specific to countries where English is not the mother tongue can be overlooked when searching only in English.

For environmental science, this means important knowledge relating to local species, habitats and ecosystems - but also applies to diseases and medical sciences. For example, documents reporting the infection of pigs with avian flu in China initially went unnoticed by international communities, including the WHO and the UN, due to publication in Chinese-language journals.

"Scientific knowledge generated in the field by non-native English speakers is inevitably under-represented, particularly in the dominant English-language academic journals. This potentially renders local and indigenous knowledge unavailable in English," says lead author Amano.

"The real problem of language barriers in science is that few people have tried to solve it. Native English speakers tend to assume that all the important information is available in English. But this is not true, as we show in our study.

"On the other hand, non-native English speakers, like myself, tend to think carrying out research in English is the first priority, often ending up ignoring non-English science and its communication.

"I believe the scientific community needs to start seriously tackling this issue."

Amano and colleagues say that, when conducting systematic reviews or developing databases at a global scale, speakers of a wide range of languages should be included in the discussion: "at least Spanish, Portuguese, Chinese and French, which, in theory, cover the vast majority of non-English scientific documents."

The website conservationevidence.com, a repository for conservation science developed at Cambridge by some of the authors, has also established an international panel to extract the best non-English language papers, including Portuguese, Spanish and Chinese.

"Journals, funders, authors and institutions should be encouraged to supply translations of a summary of a scientific publication - regardless of the language it is originally published in," says Amano. The authors of the new study have provided a summary in Spanish, Portuguese, Chinese and French as well as Japanese.

"While outreach activities have recently been advocated in science, it is rare for such activities to involve communication across language barriers."

The researchers suggest efforts to translate should be evaluated in a similar way to other outreach activities such as public engagement, particularly if the science covers issues at a global scale or regions where English is not the mother tongue.

Adds Amano: "We should see this as an opportunity as well as a challenge. Overcoming language barriers can help us achieve less biased knowledge and enhance the application of science globally."

]]>
Thu, 29 Dec 2016 11:31:32 +0400
<![CDATA[Seven robots you need to know. Pointing the way to an android future]]>http://2045.com/news/35084.html35084Walking. Grasping an object. Empathising. Some of the hardest problems in robotics involve trying to replicate things that humans do easily. The goal? Creating a general purpose robot (think C-3PO from Star Wars) rather than specialised industrial machines. Here are seven existing robots that point the way towards the humanoid robots of the future.

Atlas

Use: Originally built for Darpa Robotics Challenge
Made by: Boston Dynamics
What it tries to do: Achieve human-like balance and locomotion using deep learning, a form of artificial intelligence.

“Our long-term goal is to make robots that have mobility, dexterity, perception and intelligence comparable to humans and animals, or perhaps exceeding them; this robot is a step along the way.”​

MARC RAIBERT, FOUNDER, BOSTON DYNAMICS

Features: 
• 1.7m tall and weighs 82kg
• Can walk on two feet and get back up if it falls down 
Human equivalent: Legs/skeleton/musculature

Superflex

Use: Military. Part of Darpa’s Warrior Web project
Made by: SRI Robotics
What it tries to do: A suit that makes the wearer stronger and helps prevent injury

Superflex is a type of ‘soft’ robot, which can mould itself to the environment or a human body in a way that typical robots can’t. The goal is to make machines that feel and behave more like biological than mechanical systems, and give additional powers to the wearer.

Features: 
• Battery-powered compressive suit weighs seven pounds 
• Faux ‘muscles’ can withstand 250lb of force
Human equivalent: Musculature

Amazon Echo

Use: Voice-controlled speaker 
Made by: Amazon
What it tries to do: Lets you control devices by talking to them

It may not have any moving parts, but Amazon’s Echo – and Alexa, the digital assistant that lives inside it – is definitely trying to solve one of the central problems in robotics: how to create robots that can recognise human speech and provide natural voice responses.

You can tell Alexa to: 
• Control your light switches
• Give you the latest sports scores
• Help tune your guitar
Human equivalent: Voice and ears

Life-like humanoids

Use: Natural interactions
Made by: Hiroshi Ishiguro Laboratories
What they try to do: Create a sense of ‘presence’, or sonzai-kan in Japanese, by making robots that look identical to humans

“Our goal is to realise an advanced robot close to humankind and, at the same time, the quest for the basis of human nature.”

Geminoid-F photo: Getty, video: Hiroshi Ishiguro Laboratories.

Pepper

Use: Day-to-day companion, and customer assistant
Made by: SoftBank
What it tries to do: Recognise and respond to human emotions

While Pepper clearly looks like a robot rather than a human, it uses its body movement and tone of voice to communicate in a way designed to feel natural and intuitive.

Human equivalent: Feelings and emotions

Robo Brain

Use: Knowledge base for robots
Made by: Cornell University
What it tries to do: Accumulate all robotics-related information into an interconnected knowledge base similar to the memory and knowledge you hold in your brain.

The human brain is such a complex organ that it would be extremely difficult to create an artificial replica that sits inside a robot. But what if robots’ ‘brains’ could exist, disembodied in the cloud? Robo Brain hopes to achieve just that.
Researchers hope to integrate 100,000 data sources into the database.

Challenges: Understanding and juggling different types of data
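
In miniature, such a knowledge base can be pictured as a store of subject-relation-object triples, as in the sketch below; the real system integrates far richer data types, which is exactly the challenge noted above:

```python
# Toy triple store: robots could share facts like "a mug is held by its handle".
from collections import defaultdict

graph = defaultdict(set)

def add(subject: str, relation: str, obj: str) -> None:
    graph[(subject, relation)].add(obj)

add("mug", "is_a", "container")
add("mug", "grasped_by", "handle")
add("container", "can_hold", "liquid")

print(graph[("mug", "grasped_by")])  # {'handle'}
```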

Google Car

Use: Self-driving car
Made by: Google
What it tries to do: Group learning and real-time co-ordination

The true ambition behind Google’s automotive efforts is not just to make a car that can drive itself. Instead, it’s to use group learning to strengthen artificial intelligence, so that if one Google car makes a mistake and has an accident, all Google cars will learn from it. This involves managing large-scale, real-time co-ordination.
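
A toy illustration of that group-learning idea, in the spirit of the description above rather than Google's actual system: every car writes lessons to a shared store, and every car's planner reads from it.

```python
# One car's mistake becomes every car's lesson via a shared store.
shared_lessons = {}

class Car:
    def __init__(self, name):
        self.name = name

    def report_incident(self, situation, fix):
        shared_lessons[situation] = fix

    def plan(self, situation):
        return shared_lessons.get(situation, "default policy")

Car("car_1").report_incident("cyclist in blind spot", "slow and widen gap")
print(Car("car_2").plan("cyclist in blind spot"))  # learned fleet-wide
```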

]]>
Sat, 24 Dec 2016 23:40:32 +0400
<![CDATA[The '2016 Robot Revolution' and all the insane new things that robots have done this year]]>http://2045.com/news/35083.html35083Robots are useful for all kinds of things: building cars, recycling old electronics, even sex - the list goes on and on.

And 2016 has been a big year for our cyber companions as they've evolved in ways we couldn't have imagined in 2015.

Robots have taken up jobs for the first time and even stepped in to save people from parking tickets.

We've compiled the above video to show you some of the highlights of 2016 and get you either excited or terrified for what the future holds.

"The pattern for the next 10-15 years will be various companies looking towards consciousness," noted futurologist Dr. Ian Pearson told Mirror Online.

"The idea behind it that if you make a machine with emotions it will be easier for people to get on with.

"[But] There is absolutely no reason to assume that a super-smart machine will be hostile to us."

Whether it's artificial intelligence, the singularity or just more celebrity sex dolls, there's certainly going to be a lot to talk about when we all meet back here in December 2017.

]]>
Fri, 23 Dec 2016 23:30:25 +0400
<![CDATA[Good news! You probably won’t be killed by a sex robot]]>http://2045.com/news/35082.html35082After spending a fascinating two days at the International Congress on Love and Sex with Robots, where academics discussed everything from robot design to the ethics of programming lovers, I was surprised to learn from Gizmodo that “sex robots may literally f**k us to death.”

How, I wondered, could these otherwise thoughtful researchers allow humanity to walk into such a dystopian nightmare?

Quite rightly, they won’t. That headline was in fact inspired by a discussion on the ethics of artificial intelligence by Prof. Oliver Bendel, who outlined some of the broad implications of creating machines which can “think” – including how we make sure robots make good moral decisions and don’t end up causing humans harm. Far from “warning” of the dangers of oversexed robots, Bendel was actually trying to ensure that they don’t “f**k us to death”. So while I might personally fantasise about the future headlines like “Woman, 102, Sexed To Death By Robot Boyfriend”, it’s unlikely that I’ll kick the bucket with such panache. Thanks to Bendel, and others who are exploring these questions as artificial intelligence develops, sex robots will likely have a built-in kill switch (or “kill the mood” switch) to prevent anyone from being trapped in a nightmare sex marathon with a never-tiring machine.

Reporting on events like the sex robots conference is notoriously tricky. On the one hand, sex robots are guaranteed to grab the attention of anyone looking for something to distract them from their otherwise robot-less lives, so an article is guaranteed to be a hit. On the other hand, academics are notoriously careful in what they say, so quite rightly you’re unlikely to find one who’ll actually screech warnings about imminent death at the hands (or genitals) of a love machine.

But no one wants to click a Twitter link that says “Academic Research Revealed To Be More Complicated Than We Can Cram Into 20 Words.” Hence Gizmodo’s terrifying headline, and other pieces which picked an interesting observation, then sold it to readers with something more juicy than the title in the conference schedule. The Register went with “Non-existent sex robots already burning holes in men’s pockets” in reference to a paper presented by Jessica Szczuka, in which men were quizzed about their possible intentions to buy a sex robot. The Daily Mail chose to highlight the data issues which arise from intimate connections with machines by telling us “Sex Robots Could Reveal Your Secret Perversions!”

They’re blunt tools, but they get people interested, and hopefully encourage people to read further into issues they might not previously have considered. For example, during her keynote talk, Dr Kate Devlin mentioned a robot which hit the headlines last year because it “looked like Scarlett Johansson”. She posed an ethical question for makers of realistic bots and dolls: how do you get permission from the person whose likeness you’re using? Alternatively: “Celebrities Could Sue Over Sex Robot Doppelgangers!”

Dr Devlin also questioned why research into care robots for elderly people doesn’t also include meeting their sexual needs (“Academic Demands Sex Toys For Pensioners”) and pointed out that while more established parts of the sex industry tend to be male-dominated, in the sex tech field pioneering women are leading the way (“Are Women The Future Of The Sex Industry?”).

Julie Wosk – professor of art history and author of “My Fair Ladies: Female Robots, Androids and other artificial Eves” – explored pop culture representations of sex robots, from Ex Machina’s Ava to Good Girl’s brothel-owned learning sex bot. Sex robots are most commonly female, beautiful and subservient, and Wosk pointed out that in pop culture they also have a tendency to rebel. Westworld, Humans, Ex Machina – all include strong, often terrifying, female robots who gain consciousness, and could be seen as a manifestation of society’s fears of women gaining power. Put a sub editor’s hat on and voila: “Is Feminism To Blame For Our Fear of Sex Robots?”

Dr Lynne Hall focused on user experience – while sex robots are often portrayed as humanoid, in fact a robot that pleasures you may be more akin to something you strap to your body while you watch porn. She went on to point out that porn made with one or more robotic actors has a number of interesting benefits such as a lower risk of STI transmission, and perhaps better performer safety, as robot actors replace potentially predatory porn actors (“Sex Robots Will Revolutionise Porn!”). David Levy, author of “Love and Sex with Robots”, gave a controversial keynote on the implications of robot consciousness when it comes to relationships: “Humans Will Marry Robots By 2050.”

In other presentations, designers and engineers showed off the real-life robots they had built. Cristina Portalès introduced us to ‘ROMOT’ – a robotic theatre which combines moving seats, smells, virtual reality and more to create a uniquely intense experience. But while the ROMOT team have no plans to turn it into a sex show, Cristina outlined how it could be used to enhance sexual experiences - using porn videos and sex scents to create a wholly X-rated experience. Or, if you prefer: ‘Immersive Sex Theatre Could Be The Future Of Swinging.’ Other designers showed off projects designed to increase human intimacy over a long distance – like ‘Kissinger’ (‘Remarkable Gadget Helps You Smooch A Lover Over The Internet’) and ‘Teletongue’ (‘With X-Rated Lollipop You Can Make Sweet Love At A Distance’).

You get the idea. If we had a classification system for science reporting, all these headlines would be flagged to let the user know that the actual story is far more complicated. But they’d also probably languish unclicked, meaning similar research is less likely to get covered in the future.

Towards the end of the conference one of the Q+A sessions moved into the area of science and tech communication. Inevitably, with so many journalists in the room, there was an uneasiness from some academics about the way in which the conference would be covered. As someone with a bee in my bonnet about the way sex is often reported in the mainstream media, I think this wariness is often justified. But while my initial reaction to Gizmodo’s headline was to roll my eyes, their presence – and that of other journalists – made the overall topic of robotic relationships and intimacy much more accessible to the public. There have been one or two swiftly-corrected inaccuracies, but the press presence means that what could otherwise have been a small conference just for academics has sparked debate around the world. 

]]>
Thu, 22 Dec 2016 23:26:34 +0400
<![CDATA[We will soon be able to read minds and share our thoughts]]>http://2045.com/news/35085.html35085The first true brain-to-brain communication in people could start next year, thanks to huge recent advances.

Early attempts won’t quite resemble telepathy as we often imagine it. Our brains work in unique ways, and the way each of us thinks about a concept is influenced by our experiences and memories. This results in different patterns of brain activity, but if neuroscientists can learn one individual’s patterns, they may be able to trigger certain thoughts in that person’s brain. In theory, they could then use someone else’s brain activity to trigger these thoughts.

So far, researchers have managed to get two people, sitting in different rooms, to play a game of 20 questions on a computer. The participants transmitted “yes” or “no” answers, thanks to EEG caps that monitored brain activity, with a technique called transcranial magnetic stimulation triggering an electrical current in the other person’s brain. By pushing this further, it may be possible to detect certain thought processes, and use them to influence those of another person, including the decisions they make.
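
One common non-invasive way to build exactly this kind of yes/no channel (used in similar demonstrations, though not necessarily the precise pipeline here) is to flicker the two answers at different rates and pick whichever frequency dominates the recorded EEG; the sketch below runs on synthetic data:

```python
# Steady-state visual evoked potentials: staring at a 12 Hz flicker ("yes")
# versus a 13 Hz flicker ("no") leaves a matching peak in occipital EEG.
import numpy as np

FS, YES_HZ, NO_HZ = 256, 12.0, 13.0

def decode_answer(eeg: np.ndarray) -> str:
    freqs = np.fft.rfftfreq(len(eeg), d=1 / FS)
    power = np.abs(np.fft.rfft(eeg)) ** 2
    p_yes = power[np.argmin(np.abs(freqs - YES_HZ))]
    p_no = power[np.argmin(np.abs(freqs - NO_HZ))]
    return "yes" if p_yes > p_no else "no"

t = np.arange(2 * FS) / FS  # two seconds of synthetic signal plus noise
eeg = np.sin(2 * np.pi * YES_HZ * t) + 0.5 * np.random.default_rng(1).standard_normal(len(t))
print(decode_answer(eeg))   # "yes"
```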

Another approach is for the brain activity of several individuals to be brought together on a single electronic device. This has been done in animals already. Three monkeys with brain implants have learned to think together, cooperating to control and move a robotic arm.

Similar work has been done in rats, connecting their brains in a “brainet”. The next step is to develop a human equivalent that doesn’t require invasive surgery. These might use EEG caps instead, and their first users will probably be people who are paralysed. Hooking up a brainet to a robotic suit, for example, could enable them to get help from someone else when learning to use exoskeletons to regain movement.

This article appeared in print under the headline “Mind-reading fuses thoughts”

]]>
Wed, 14 Dec 2016 23:43:54 +0400
<![CDATA[Phantom movements in augmented reality help patients with chronic intractable phantom limb pain]]>http://2045.com/news/35079.html35079Dr Max Ortiz Catalan at Chalmers University of Technology, the Department of Signals and Systems, has developed a novel method of treating phantom limb pain using machine learning and augmented reality. The approach has been tested on over a dozen amputees with chronic phantom limb pain who had previously found no relief from other clinically available methods. The new treatment reduced their pain by approximately 50 per cent, reports a clinical study published in The Lancet.

​People who lose an arm or leg often experience phantom limb pain, as if the missing limb was still there. Phantom limb pain can become a serious chronic condition that significantly reduces the patients’ quality of life. It is still unclear why phantom limb pain and other phantom sensations occur.

Several medical and non-medical treatments have been proposed to alleviate phantom limb pain. Examples include mirror therapy, various types of medications, acupuncture, and implantable nerve stimulators. However, in many cases nothing helps. This was the situation for the 14 arm amputees who took part in the first clinical trial of a new treatment, invented by Chalmers researcher Max Ortiz Catalan and further developed with his multidisciplinary team in recent years.

“We selected the most difficult cases from several clinics,” Dr Ortiz Catalan says. “We wanted to focus on patients with chronic phantom limb pain who had not responded to any treatments. Four of the patients were constantly medicated, and the others were not receiving any treatment at all because nothing they tried had helped them. They had been experiencing phantom limb pain for an average of 10 years.”

The patients were treated with the new method for 12 sessions. At the last session the intensity, frequency, and quality of pain had decreased by approximately 50 per cent. The intrusion of pain in sleep and activities of the daily living was also reduced by half. In addition, two of the four patients who were on analgesics were able to reduce their doses by 81 per cent and 33 per cent.

“The results are very encouraging, especially considering that these patients had tried up to four different treatment methods in the past with no satisfactory results,” Ortiz Catalan says. “In our study, we also saw that the pain continuously decreased all the way through to the last treatment. The fact that the pain reduction did not plateau suggests that further improvement could be achieved with more sessions.”

Ortiz Catalan calls the new method phantom motor execution. It consists of using muscle signals from the amputated limb to control augmented and virtual environments. Electric signals in the muscles are picked up by electrodes on the skin. Artificial intelligence algorithms translate the signals into movements of a virtual arm in real time. The patients see themselves on a screen with the virtual arm in the place of the missing arm, and they can control it as they would control their biological arm.
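
That signal path can be sketched as follows; the windowed features are standard in myoelectric control, but the nearest-centroid classifier and the calibration values below are made-up stand-ins for the study's actual algorithms:

```python
# Window the residual-limb EMG, extract simple features, classify the intended
# movement; the predicted class is what would drive the on-screen arm.
import numpy as np

def features(window: np.ndarray) -> np.ndarray:
    mav = np.mean(np.abs(window))               # mean absolute value
    zc = np.sum(np.diff(np.sign(window)) != 0)  # zero-crossing count
    return np.array([mav, zc])

centroids = {  # would come from a per-patient calibration session
    "open_hand": np.array([0.2, 40.0]),
    "close_hand": np.array([0.8, 15.0]),
}

def classify(window: np.ndarray) -> str:
    f = features(window)
    return min(centroids, key=lambda move: np.linalg.norm(f - centroids[move]))

rng = np.random.default_rng(2)
print(classify(0.5 * rng.standard_normal(200)))  # e.g. "open_hand"
```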

Thus, the perceived phantom arm is brought to life by a virtual representation that the patient can see and control. This allows the patient to reactivate areas of the brain that were used to move the arm before it was amputated, which might be the reason that the phantom limb pain decreases. No other existing treatment for phantom limb pain is known to reactivate these areas of the brain so reliably. The research led by Ortiz Catalan not only creates new opportunities for clinical treatment, but it also contributes to our understanding of what happens in the brain when phantom pain occurs.

The clinical trial was conducted in collaboration with Sahlgrenska University Hospital in Gothenburg, Örebro University Hospital in Örebro, Bräcke Diakoni Rehabcenter Sfären in Stockholm, all in Sweden, and the University Rehabilitation Institute in Ljubljana, Slovenia.

“Our joint project was incredibly rewarding, and we now intend to go further with a larger controlled clinical trial,” Ortiz Catalan says. “The control group will be treated with one of the current treatment methods for phantom limb pain. This time we will also include leg amputees. More than 30 patients from several different countries will participate, and we will offer more treatment sessions to see if we can make the pain go away completely.”

The technology for phantom motor execution is available in two modalities – an open source research platform, and a clinically friendly version in the process of being commercialised by the Gothenburg-based company Integrum. The researchers believe that this technology could also be used for other patient groups who need to rehabilitate their movement capability, for example after a stroke, nerve damage or hand injury.

]]>
Sat, 3 Dec 2016 19:25:59 +0400
<![CDATA[A new minimally invasive device to treat cancer and other illnesses ]]>http://2045.com/news/35081.html35081 A new study by Lyle Hood, assistant professor of mechanical engineering at The University of Texas at San Antonio (UTSA), describes a new device that could revolutionize the delivery of medicine to treat cancer as well as a host of other diseases and ailments (Journal of Biomedical Nanotechnology, "Nanochannel Implants for Minimally-Invasive Insertion and Intratumoral Delivery"). Hood developed the device in partnership with Alessandro Grattoni, chair of the Department of Nanomedicine at Houston Methodist Research Institute.

"The problem with most drug-delivery systems is that you have a specific minimum dosage of medicine that you need to take for it to be effective," Hood said. "There's also a limit to how much of the drug can be present in your system so that it doesn't make you sick."

As a result of these limitations, a person who needs frequent doses of a specific medicine is required to take a pill every day or visit a doctor for injections. Hood's creation negates the need for either of these approaches, because it's a tiny implantable drug delivery system.

"It's an implantable capsule, filled with medicinal fluid that uses about 5000 nanochannels to regulate the rate of release of the medicine," Hood said. "This way, we have the proper amount of drugs in a person's system to be effective, but not so much that they'll harm that person."

The capsule can deliver medicinal doses for several days or a few weeks. According to Hood, it can be used for any kind of ailment that needs a localized delivery over several days or a few weeks. This makes it especially tailored for treating cancer, while a larger version of the device, which was originally created by Grattoni, can treat diseases like HIV for up to a year.

"In HIV treatment, you can bombard the virus with drugs to the point that that person is no longer infectious and shows no symptoms," Hood said. "The danger is that if that person stops taking their drugs, the amount of medicine in his or her system drops below the effective dose and the virus is able to become resistant to the treatments."

The capsule, however, could provide a constant delivery of the HIV-battling drugs to prevent such an outcome. Hood noted it can also be used to deliver cortisone to damaged joints to avoid painful, frequent injections, and possibly even to pursue immunotherapy treatments for cancer patients.

"The idea behind immunotherapy is to deliver a cocktail of immune drugs to call attention to the cancer in a person's body, so the immune system will be inspired to get rid of the cancer itself," he said.

The current prototype of the device is permanent and injected under the skin, but Hood is working with Teja Guda, assistant professor of biomedical engineering, to collaborate on 3-D printing technology to make a new, fully biodegradable iteration of the device that could potentially be swallowed.
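
For intuition on how a fixed channel geometry yields a steady dose, consider steady-state diffusion through the channels. The relation below is a first-order sketch under assumed symbols (D drug diffusivity, C_res reservoir concentration, A single-channel cross-section, L channel length, N = 5000 channels), not the device's published model; in nanochannel implants of this kind, channels sized close to the drug molecule are what keep this flux nearly constant until the reservoir empties:

```latex
J \approx D\,\frac{C_{\mathrm{res}}}{L},
\qquad
\dot{m} = N A J = \frac{N A D\,C_{\mathrm{res}}}{L}
```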

]]>
Thu, 1 Dec 2016 19:30:12 +0400
<![CDATA[For robots, artificial intelligence gets physical]]>http://2045.com/news/35076.html35076In a high-ceilinged laboratory at Children’s National Health System in Washington, D.C., a gleaming white robot stitches up pig intestines.

The thin pink tissue dangles like a deflated balloon from a sturdy plastic loop. Two bulky cameras watch from above as the bot weaves green thread in and out, slowly sewing together two sections. Like an experienced human surgeon, the robot places each suture deftly, precisely — and with intelligence.

Or something close to it.

For robots, artificial intelligence means more than just “brains.” Sure, computers can learn how to recognize faces or beat humans in strategy games. But the body matters too. In humans, eyes and ears and skin pick up cues from the environment, like the glow of a campfire or the patter of falling raindrops. People use these cues to take action: to dodge a wayward spark or huddle close under an umbrella.

Part of intelligence is “walking around and picking things up and opening doors and stuff,” says Cornell computer scientist Bart Selman. It “has to do with our perception and our physical being.” For machines to function fully on their own, without humans calling the shots, getting physical is essential. Today’s robots aren’t there yet — not even close — but amping up the senses could change that.

 

“If we’re going to have robots in the world, in our home, interacting with us and exploring the environment, they absolutely have to have sensing,” says Stanford roboticist Mark Cutkosky. He and a group of like-minded scientists are making sensors for robotic feet and fingers and skin — and are even helping robots learn how to use their bodies, like babies first grasping how to squeeze a parent’s finger.

The goal is to build robots that can make decisions based on what they’re sensing around them — robots that can gauge the force needed to push open a door or figure out how to step carefully on a slick sidewalk. Eventually, such robots could work like humans, perhaps even caring for the elderly.
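
Gauging the force to push open a door is, at bottom, a feedback loop around a force sensor. The toy controller below shows the principle, with a made-up linear spring standing in for the door and illustrative gains:

```python
# Proportional force control: advance until the sensed contact force reaches
# the target. The "sensor" here is a fake spring model of the door.
TARGET_N = 20.0     # desired contact force (N)
KP = 0.001          # proportional gain (m per N of force error)
STIFFNESS = 500.0   # pretend door stiffness (N/m)

position = 0.0
for step in range(8):
    measured = STIFFNESS * position        # stub for a wrist force sensor
    error = TARGET_N - measured
    position += KP * error                 # push further while under target
    print(f"step {step}: force={measured:5.1f} N, push={position:.4f} m")
```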

The whole story...

]]>
Sun, 20 Nov 2016 19:03:22 +0400
<![CDATA[The robot suit providing hope of a walking cure]]>http://2045.com/news/35075.html35075Clothing that can help people learn how to walk again after a stroke is the brainchild of a Harvard team reinventing the way we use robot technology

Conor Walsh’s laboratory at Harvard University is not your everyday research centre. There are no bench-top centrifuges, no fume cupboards for removing noxious gases, no beakers or crucibles, no racks of test tubes and only a handful of laptop computers. Instead, the place is dominated by clothing.

On one side of the lab stands a group of mannequins dressed in T-shirts and black running trousers. Behind them, there are racks of sweatshirts and running shoes. On another wall of shelves, shorts and leggings have been carefully folded and labelled for different-size wearers. On my recent visit, one student was sewing a patch on a pair of slacks.

Walk in off the street and you might think you had stumbled into a high-class sports shop. But this is no university of Nike. This is the Harvard Biodesign Lab, home of a remarkable research project that aims to revolutionise the science of “soft robotics” and, in the process, transform the fortunes of stroke victims by helping them walk again.

“Essentially, we are making clothing that will give power to people who have suffered mobility impairment and help them move,” says Professor Walsh, head of the biodesign laboratory. “It will help them lift their feet and walk again. It is the ultimate in power-dressing.”

Last week, at a ceremony in Los Angeles, 35-year-old Walsh was awarded a Rolex award for enterprise for his work. He plans to use the prize money – 100,000 Swiss francs (about £82,000) – to expand “soft robotics” to develop suits that could also enhance the ability of workers and soldiers to lift and carry weights and also improve other areas of medical care, including treatments for patients suffering from Parkinson’s disease, cerebral palsy and other ailments that affect mobility.

Walsh is a graduate – in manufacturing and mechanical engineering – of Trinity College Dublin. While a student, he became fascinated with robotics after he read about the exoskeletons being developed in the United States to help humans handle heavy loads. Essentially, an exoskeleton is a hard, robot-like shell that fits around a user and moves them about. Think of the metal suit worn by Robert Downey Jr in Iron Man or the powered skeletal frame Sigourney Weaver used in Aliens to deal with the acid-dribbling extraterrestrial that threatened her spaceship.

“I thought that it all looked really, really cool,” says Walsh. So he applied, and was accepted, to study at the Massachusetts Institute of Technology (MIT) under biomechatronics expert Professor Hugh Herr. But when Walsh began working on rigid exoskeletons, he found the experience unsatisfactory. “It was like being inside a robotic suit of armour. It was hard, uncomfortable and ponderous and the suit didn’t always move the way a human would,” he says.

So when Walsh moved to Harvard, where he set up the biodesign lab, he decided to take a different approach to the problem. “I saw immediately that if you had a softer suit that accentuated the right actions, was comfy to wear and didn’t encumber you, it could have huge biomedical applications,” he says. “I began to wonder: can we make wearable robots soft?”

The answer turned out to be yes. Walsh, assisted by colleagues Terry Ellis, Louis Awad and Ken Holt of Boston University, worked with experts in electronics, mechanical engineering, materials science and neurology to create an ingenious, low-tech way to boost walking: the soft exosuit. A band of cloth is wrapped around a person’s calf muscles. Pulleys, made from bicycle brake cables, are attached to these calf wraps and the other ends of the cables are fitted to a power pack worn on a patient’s back. When the wearer starts to lift their foot to take a step, the power pack pulls the cables and this helps the wearer lift their leg. Then, as their foot swings forward, another cable, attached to the toecap of their shoes, tightens to help raise the toe so that it does not drag on the ground as they swing their legs forward. This condition is known as “foot drop” and it is a common difficulty for stroke patients.

In this way, an often critical problem for someone who can no longer control their muscles properly is alleviated. They can lift their legs and, just as importantly, keep their toes from turning down so that they do not drag on the ground and make them stumble. It is the perfect leg-up, in short.

“Designing robotic devices that target specific joints just hadn’t been done before,” says Walsh. “People had only looked at constructing a full-leg exoskeleton. We are targeting just one joint, not a whole leg. Crucially, in the case of strokes, it is the one that is often most badly impaired. Also, we have managed to keep our materials very light and easily wearable. Simple is best. That is our mantra.”

Originally, the pulleys that lifted the cables that helped wearers raise their legs and toes were powered by a trolley-like device that trundled alongside them. One of the key improvements in Walsh’s project has been to reduce that power pack to a size that can be worn reasonably comfortably. The unit weighs 10lbs (4.5kg) and Walsh expects his team will be able to make further reductions in the near future. “Motors are going to get lighter, batteries are going to get lighter. That will all be of great benefit, without doubt.”

The packs are also fitted with devices known as inertial measurement units (IMU), which analyse the forces created by foot movements and raise and lower the brake-cable pulleys. These sensors have to work with millisecond accuracy for the system to work properly. “Timing is absolutely critical,” says Walsh.
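
To make that timing requirement concrete, here is a minimal sketch, in Python, of the kind of millisecond control loop such a system needs: poll an inertial sensor, detect the swing phase of gait, and pull or slacken the assistance cable accordingly. Every threshold, signal and function name here is a hypothetical illustration, not code from Walsh’s exosuit.

import math

GYRO_SWING_THRESHOLD = 1.5  # rad/s; hypothetical pitch-rate threshold for swing onset

def read_imu(t):
    """Stand-in for an IMU driver: a fake shank pitch rate over a 1 s gait cycle."""
    return 2.0 * math.sin(2 * math.pi * t)  # rad/s; crude sinusoidal proxy for gait

def set_cable_tension(newtons):
    """Stand-in for the motor driver that reels the brake cable in or out."""
    print("cable tension -> %5.1f N" % newtons)

def control_loop(duration_s=1.0, dt=0.001):
    # Poll every millisecond: as the article notes, assistance timed even
    # slightly wrong would hinder the wearer rather than help.
    t, in_swing = 0.0, False
    while t < duration_s:
        pitch_rate = read_imu(t)
        if not in_swing and pitch_rate > GYRO_SWING_THRESHOLD:
            in_swing = True
            set_cable_tension(80.0)  # pull: assist the leg lift and toe raise
        elif in_swing and pitch_rate < 0.0:
            in_swing = False
            set_cable_tension(0.0)   # go slack so the suit does not encumber
        t += dt

control_loop()

In a real suit the sensor signal would be noisy and the gait detector far more sophisticated, but the structure of the problem, sense then act within a millisecond budget, is the same.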

Test runs have already proved successful, however. Videos of stroke patients wearing soft exosuits and walking on treadmills reveal a marked improvement in their movement. Once fitted with the suits, they no longer clutch the handrails and their strides become much quicker and more confident. “We are not saying our system is the only solution to impaired mobility,” adds Walsh. “There will always be a place for hard exoskeleton power suits, for example, for people who are completely paralysed. But for less severe problems, soft robotic suits, with their lightness and flexibility, are a better solution.”

Every year, about 110,000 people suffer a stroke in the UK. Most patients survive, but strokes are still the third-largest cause of death in this country, after heart disease and cancer. Strokes occur when the blood supply to the brain is cut off by a blood clot or when a weakened blood vessel bursts. One impact is on how the muscles work. As the Stroke Association points out, your brain sends signals to your muscles, through your nerves, to make them move. A stroke, in damaging your brain, disrupts these signals. Classic symptoms include foot drop and loss of stamina. Patients feel tired and become clumsier, making it even more difficult to control their movements.

“Patients often withdraw from life. They stop going out and miss out on all sorts of social events – their grandchildren’s sports events or parties,” says Ignacio Galiana of the Wyss Institute for Biologically Inspired Engineering at Harvard University, which is also involved in the soft exosuit project. “They prefer to stay at home and stop exercising because it is so tiring and draining. They withdraw from the world. By making it possible to walk normally again, we hope we can stop that sort of thing happening.”

The soft exosuits will not be worn all of the time, it is thought, but instead be put on for a few hours so patients can get out of their homes without exhausting themselves. The devices should also help in physiotherapy sessions aimed at restoring sufferers’ ability to walk. “This is a new tool that will greatly extend and accelerate rehabilitation therapy for stroke patients,” says Walsh. “Patients no longer have to think about the process of moving. It starts to come naturally to them, as it did before they had their stroke.”

As to timing, Walsh envisages that his team will be able to get their prototypes on to the market in about three years. Nor will soft exoskeleton use be confined to stroke cases. “Cerebral palsy, Alzheimer’s, multiple sclerosis, Parkinson’s, old age: patients with any of these conditions could benefit,” adds Walsh. “When muscles no longer generate sufficient forces to allow people to walk, soft, wearable robots will be able to help them.”

]]>
Sun, 20 Nov 2016 18:59:52 +0400
<![CDATA[Medical Bionic Implant And Artificial Organs Market Volume Forecast and Value Chain Analysis 2016-2026]]>http://2045.com/news/35074.html35074Artificial organs and implants are specially made devices or prostheses implanted in the human body to imitate the function of the original organ; the crucial requirement of such an organ is that it function like a normal one. Bionics is the combination of biology and electronics, and medical bionics replace or improve body parts with robotic versions. Medical bionic implants differ from artificial organs: they reproduce the original function very closely, or even outperform it.

Organ transplantation becomes necessary when an organ is damaged by injury or disease, but the number of organ donors falls far short of demand, and even after an organ is transplanted there is a risk of rejection, meaning that the recipient’s immune system does not accept it. Artificial organs and bionics are made of biomaterials: living or non-living substances introduced into the body as part of an artificial organ or bionic device to substitute for an organ or the functions associated with it. The heart and kidney are the most developed artificial organs, while pacemakers and cochlear implants are the most developed medical bionics.

Medical Bionic Implant and Artificial Organs Market: Drivers and Restraints

Currently, the global medical bionic implant and artificial organs market is driven by the fact that large numbers of patients need organ transplants while donors remain scarce. Advances in medical technology are fuelling the market, as are growing public awareness of various diseases, improvements in implant and artificial organ procedures, and the screening needed for early diagnosis and treatment. The expiry of 3D-printing patents is also expected to play an important role in the development of 3D-printed artificial organs. However, the high cost of organ transplant procedures and the price of medical bionics act as restraints on the global market.

Medical Bionic Implant and Artificial Organs Market: Segmentation

Based on product type, the global medical bionic implant and artificial organs market is segmented into:

  • Heart Bionics
    • Ventricular Assist Device
    • Total Artificial Heart
    • Artificial Heart Valves
    • Pacemaker
      • Implantable Cardiac Pacemaker
      • External Pacemaker
  • Orthopedic Bionics
    • Bionic Hand
    • Bionic Limb
    • Bionic Leg
  • Ear Bionics
    • Bone Anchored Hearing Aid
    • Cochlear Implant

Based on implant location, the global medical bionic implant and artificial organs market is segmented into:

  • Externally Worn
  • Implantable

Medical Bionic Implant and Artificial Organs Market: Overview

With rapid technological advancement in the medical field and ever-increasing demand for medical bionic implants and artificial organs, the global medical bionic implant and artificial organs market is anticipated to develop vigorously during the forecast period.

Medical Bionic Implant and Artificial Organs Market: Region-wise Outlook

By geographic region, the global medical bionic implant and artificial organs market is segmented into seven key regions: North America, Latin America, Eastern Europe, Western Europe, Asia Pacific excluding Japan, Japan, and the Middle East & Africa. North America is the leading market, owing to rapid technological innovation, heavy investment in research and development, and increased healthcare spending on artificial prostheses. Asia Pacific and Europe are expected to grow significantly, as a large consumer base, rising government initiatives to enhance healthcare and high disposable incomes contribute to a market value exhibiting a robust CAGR over the forecast period.

Medical Bionic Implant and Artificial Organs Market: Key Players

Some of the key players in the global medical bionic implant and artificial organs market are Touch Bionics Inc., LifeNet Health Inc., Cochlear Ltd., Sonova, Otto Bock Inc., Edwards Lifesciences Corporation, Medtronic, Inc., HeartWare, Orthofix Holdings, Inc., BionX Medical Technologies, Inc. and others.

]]>
Wed, 16 Nov 2016 18:56:00 +0400
<![CDATA[Modular Exoskeleton Reduces Risk of Work-Related Injury]]>http://2045.com/news/35073.html35073Robotics startup suitX is turning human laborers into bionic workers with a new modular, full-body exoskeleton that will help reduce the number of on-the-job injuries.

The flexible MAX (Modular Agile eXoskeleton) system is designed to support those body parts—shoulders, lower back, knees—most prone to injury during heavy physical exertion.

A spinoff of the University of California Berkeley's Robotics and Human Engineering Lab, suitX built MAX out of three units: backX, shoulderX, and legX. Each can be worn independently or in any combination necessary.

"All modules intelligently engage when you need them, and don't impede you" when moving up or down stairs and ladders, driving, or biking, the product page said.

Field evaluations conducted in the US and Japan, as well as in laboratory settings, indicate the MAX system "reduces muscle force required to complete tasks by as much as 60 percent."

The full-body suit and its modules are aimed primarily at those working in industrial settings like construction, airport baggage handling, assembly lines, shipbuilding, warehouses, courier delivery services, and factories.

The full MAX suit (backX, shoulderX and legX together) will run you $10,000; the backX and shoulderX are $3,000 each; and a legX is $5,000. SuitX suggests consumers contact sales@suitx.com for more details.

The company is perhaps best known for its Phoenix exoskeleton, which enables people with mobility disorders to stand up, walk, and interact with others. The lightweight device—still in the testing phase—carries a charge for up to four hours of constant use, or eight hours of intermittent walking.

]]>
Wed, 16 Nov 2016 18:53:04 +0400
<![CDATA[Advanced robot can understand how humans THINK and knows how the brain works]]>http://2045.com/news/35067.html35067The latest generation of artificially intelligent robots took centre stage recently at the 2016 World Robot Conference held in the Chinese capital Beijing.

But one of the standout devices was a robot that can actually understand the intricacies of the human brain, and how a human thinks.

Xiao I has the ability to analyse human languages as well as huge amounts of data, and can mimic the functions of a human brain.

The advanced robot can understand and act on users’ instructions by analysing the specific context, thanks to its massive database, which has accumulated information concerning daily life and industries for decades, according to an exhibitor at the Xiao I booth.

“The top four companies representing the best human-computer interaction technology were voted for at a summit in Orlando the day before yesterday. Xiao I ranks as the top one, and the others include Apple’s Siri, Microsoft’s Cortana and Amazon’s Echo,” said the exhibitor.

Over the past few years, Beijing authorities have been giving policy support to robot developers in an attempt to stimulate the growth of the city’s high-tech industry.

"Without artificial intelligence a robot will be nothing but a machine. Most robot-related research is developing towards the direction of artificial intelligence, which will enhance the sensory ability of robots and enable them to offer better services," said Sheng Licheng, deputy director of Beijing’s Yizhuang Development Zone Administration.

The five-day 2016 World Robot Conference wrapped up on Tuesday, after dazzling visitors with the very latest advancements in robot technology.

]]>
Mon, 31 Oct 2016 20:43:24 +0400
<![CDATA[Soft robot with a mouth and gut can forage for its own food]]>http://2045.com/news/35066.html35066Lying in a bath in Bristol, UK, is a robotic scavenger, gorging itself on its surroundings. It’s able to get just enough energy to take in another stomach full of food, before ejecting its waste and repeating the process. This is no ordinary robot. It’s a self-sustaining soft robot with a mouth and gut.

Developed by a Bristol-based collaboration, this robot imitates the life of salps – squishy tube-shaped marine organisms. Salps have an opening at each end, one for food to enter and one for waste to leave. They digest any tasty treats that pass through their body, giving them just enough energy to wiggle about. The same is true for the Bristol bot.

By opening its “mouth”, made from a soft polymer membrane, the robot can suck in a belly full of water and biomatter. The artificial gut – a microbial fuel cell (MFC) – is filled with greedy microbes that break down the biomass and convert its chemical energy into electrical energy, which powers the robot. Digested waste matter is then expelled out the rear end, just as more water is sucked in the front for the next feed. With every mouthful, the robot’s reserves are replenished, so in theory it could roam indefinitely.

“Squeezing out enough energy to be self-sustainable is the real breakthrough,” says Fumiya Iida, a robotics researcher from the University of Cambridge.

Leave it alone

The energy that an MFC can get from food like this is currently pretty low. But by using soft materials for the mouth and the gut, the team was able to reduce the robot’s energy consumption. They got more power by putting several MFCs in series, like a battery.
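
As a back-of-the-envelope illustration of what “just enough energy” means, the sketch below (Python) compares the energy a stack of MFCs might harvest over one digestion cycle with the energy spent taking in and expelling a stomach-full of water. Every number is an assumed placeholder for illustration, not a measurement from the Bristol robot.

CELLS_IN_SERIES = 8     # stacking MFCs in series raises voltage, like battery cells
VOLTS_PER_CELL = 0.4    # a single MFC produces only a few hundred millivolts
CURRENT_A = 0.002       # and a very small current
FEED_CYCLE_S = 3600     # assumed time to digest one stomach-full of biomatter
ENERGY_SPENT_J = 20.0   # assumed cost of opening the mouth and expelling waste

def harvested_energy_j():
    """Energy over one digestion cycle: E = P * t, with P = V * I."""
    power_w = CELLS_IN_SERIES * VOLTS_PER_CELL * CURRENT_A
    return power_w * FEED_CYCLE_S

surplus = harvested_energy_j() - ENERGY_SPENT_J
print("harvested %.1f J, spent %.1f J" % (harvested_energy_j(), ENERGY_SPENT_J))
print("self-sustaining" if surplus >= 0 else "net energy loss")

With these placeholder figures the robot banks about 23 J per feed against a 20 J outlay: a thin surplus, which is exactly the regime the article describes.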

One advantage of a self-sustaining robot is that if you don’t have to charge it, change its batteries or hook it up to a power source, it won’t need any human intervention. This would make it ideal for use in inhospitable environments: leave the robot in a radioactive disaster zone or a lake filled with pollution, then let it get to work.

At the moment, it is just a proof of concept. The surrounding water is idealised, meaning that the nutrients have been evenly spread and are in an easy-to-digest form, but other researchers have shown that MFCs can work in more testing conditions.

A self-sustaining robot could one day clean up “red tides” such as those seen in China, as well as collecting rubbish

Now that self-sustainability has been achieved, the team wants to get more power so that the robot can start performing useful tasks.

“In the future, robots like this could be released into the ocean to collect garbage,” says Hemma Philamore, one of the robot’s creators from the University of Bristol. Another application could see the robots feeding in agricultural irrigation systems while monitoring plants or applying chemicals to crops. “What we are developing is a robot that can act naturally, in a natural environment,” says Philamore.

Journal reference: Soft Robotics, DOI: 10.1089/soro.2016.0020

]]>
Mon, 31 Oct 2016 20:40:00 +0400
<![CDATA[See a sweating robot do push-ups like it's Schwarzenegger]]>http://2045.com/news/35058.html35058Wasn't it Thomas Edison who said genius is 99 percent perspiration and 1 percent inspiration? Here's a new development that leans heavily on both. The University of Tokyo has developed Kengoro, a musculoskeletal humanoid robot that cools its motors by sweating.

Kengoro, which stands 5 feet 6 inches (1.7 meters) tall, made its debut at the International Conference on Intelligent Robots and Systems held this week in Daejeon, South Korea. Japanese researchers needed to find a way to cool it down without adding a batch of tubes and fans, so they decided to make it sweat.

According to IEEE Spectrum, fake sweat glands allow deionized water to seep out through Kengoro's frame around its 108 motors. As the motors heat up, the water cools them. Kengoro's metal frame is embedded with permeable channels, kind of like a sponge. The deionized water seeps slowly from the inner layers to the more porous layers as needed for cooling.
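
A rough worked example (again Python, with assumed numbers) shows why a slow seep is enough. Evaporating water absorbs its latent heat of vaporisation, about 2.26 kJ per gram, so only a trickle is needed to carry away the motors’ waste heat; the 500 W figure below is an assumption for illustration, not a published Kengoro specification.

LATENT_HEAT_J_PER_G = 2260  # latent heat of vaporisation of water, ~2.26 kJ/g
MOTOR_WASTE_HEAT_W = 500    # assumed combined waste heat of the 108 motors

def water_needed_g_per_min(waste_heat_w):
    """Grams of water whose evaporation removes waste_heat_w for one minute."""
    return waste_heat_w * 60 / LATENT_HEAT_J_PER_G

print("%.1f g of water per minute" % water_needed_g_per_min(MOTOR_WASTE_HEAT_W))
# ~13 g/min: a trickle, which is why slow seepage through a porous frame suffices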

But Kengoro doesn't have to worry about wiping down its gym equipment -- the water evaporates as it cools, so it doesn't drip in gross puddles like the sweat from the guy on the Stairmaster next to you.

The creative cooling method allowed Kengoro to do push-ups for an impressive 11 minutes straight without overheating. That's right, push-ups. It's a skinless Arnold Schwarzenegger, in other words. Let's just hope it sticks to "Kindergarten Cop" Arnold, and not "Terminator" Arnold, because we all know how mankind's little adventure with super-advanced robots turned out there.

]]>
Sat, 15 Oct 2016 13:58:13 +0400
<![CDATA[Brain implant provides sense of touch with robotic hand – and that’s just the start]]>http://2045.com/news/35057.html35057A dozen years ago, an auto accident left Nathan Copeland paralyzed, without any feeling in his fingers. Now that feeling is back, thanks to a robotic hand wired up to a brain implant.

“I can feel just about every finger – it’s a really weird sensation,” the 28-year-old Pennsylvanian told doctors a month after his surgery.

Today the brain-computer interface is taking a share of the spotlight at the White House Frontiers Conference in Pittsburgh, with President Barack Obama and other luminaries in attendance.

The ability to wire sensors into the part of the brain that registers the human sense of touch is just one of many medical marvels being developed on the high-tech frontiers of rehabilitation.

“You learn completely new and different things every time you come at this from different directions,” Arati Prabhakar, director of the Pentagon’s Defense Advanced Research Projects Agency, said last week at the GeekWire Summit in Seattle.

Prabhakar provided a preview of Copeland’s progress during her talk. DARPA’s Revolutionizing Prosthetics program provided the primary funding for the project, which was conducted at the University of Pittsburgh and its medical center, UPMC.

The full details of the experiment were published online today in Science Translational Medicine.

Copeland’s spinal cord was severely injured in an accident in the winter of 2004, when he was an 18-year-old college freshman. The injury left him paralyzed from the upper chest down, with no ability to feel or move his lower arms or legs.

Right after the accident, Copeland put himself on Pitt’s registry of patients willing to participate in clinical trials. Nearly a decade later, a medical team led by Pitt researcher Robert Gaunt chose him to participate in a groundbreaking series of operations.

Gaunt and his colleagues had been working for years on developing brain implants that let disabled patients control prosthetic limbs with their thoughts. “Slowly but surely, we have been moving this research forward,” study co-author Michael Boninger, a professor at Pitt as well as the director of post-acute care for UPMC’s Health Services Division, said in a news release.

This experiment moved the team’s efforts in a new direction. Four arrays of microelectrodes were implanted into the region of Copeland’s brain that would typically take in sensory signals from his fingers. Over the course of several months, researchers stimulated specific points in the somatosensory cortex, and mapped which points made Copeland feel as if a phantom finger was being touched.

“Sometimes it feels electrical, and sometimes it’s pressure,” Copeland said, “but for the most part, I can tell most of the fingers with definite precision. It feels like my fingers are getting touched or pushed.”

To test the results, the researchers placed sensors onto each of the fingers of a robotic hand. They connected the system to Copeland’s brain electrodes, and put a blindfold over his eyes. Then an experimenter touched the robo-hand’s fingers and asked Copeland if he could tell where the feeling was coming from.

Over the course of 13 sessions, each involving hundreds of finger touches, Copeland’s success rate was 84 percent. The index and little fingers were easy to identify, while the middle and ring fingers were harder.
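
To see how far above chance that 84 percent figure is, here is a quick binomial sanity check in Python. The per-session trial count and the five-alternative chance level are assumptions for illustration; the study itself reports 13 sessions of hundreds of touches each.

from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of k or more lucky guesses."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_trials = 200                    # assumed touches in a single session
n_correct = int(0.84 * n_trials)  # the reported 84 percent success rate
chance = 1.0 / 5                  # guessing blindly among five fingers

p_lucky = binom_tail(n_trials, n_correct, chance)
print("P(>= %d/%d correct by chance) = %.3g" % (n_correct, n_trials, p_lucky))

The probability comes out vanishingly small, so the reported accuracy cannot plausibly be guesswork.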

During the experiment, Copeland learned to distinguish the intensity of the touch to some extent – but for what it’s worth, he couldn’t distinguish between hot and cold. That’ll have to come later.

“The ultimate goal is to create a system which moves and feels just like a natural arm would,” Gaunt said. “We have a long way to go to get there, but this is a great start.”

Prabhakar said neurotechnology is a high priority for DARPA, in part because of the kinds of injuries that warfighters have suffered in conflicts abroad.

“Lower-limb prosthetics have gotten very good – but upper-limb prosthetics, until very recently, have still been limited to a very simple hook,” she said.

]]>
Sat, 15 Oct 2016 13:54:32 +0400