2045 Initiative: Strategic Social Initiative (http://2045.com/)

<![CDATA[Dmitry Itskov: www.Immortal.me - Want to be immortal? Act!]]>
http://2045.com/news/33999.html

Fellow Immortalists!

Many of the daily letters that the 2045 Initiative and I receive ask the question: will only the very rich be able to afford an avatar in the future, or will they be relatively cheap and affordable for almost everyone?

I would like to answer this question once again: avatars will be cheap and affordable for many people, but only if people themselves make every effort needed to achieve this, rather than waiting until someone else does everything for them.

To facilitate and expedite this, I am hereby soft-launching a project today which will allow anyone to contribute to the creation of a ‘people’s avatar’… and perhaps even capitalize on this in the future. The project is named Electronic Immortality Corporation. It will soon be launched at http://www.immortal.me under the motto "Want to be immortal? Act!"

The Electronic Immortality Corporation will be a social network, operating under the rules of a commercial company. Instead of a user agreement, volunteers will get jobs and sign a virtual contract.

In addition to creating a ‘people’s avatar’, the Electronic Immortality Corporation will also implement various commercial and charitable projects aimed at realizing ideas of the 2045 Initiative, transhumanism and immortalism.

We will create future technologies that can be commercialized within decades (e.g. Avatar C), as well as implement ‘traditional’ business projects such as producing commercially viable movies.

Even the smallest volunteer contribution to the work of the Corporation will be rewarded with its own virtual currency, which will be issued for two purposes only: a) to reward volunteer work, and b) to compensate real financial investments in the company. Who knows, our virtual currency may well become as popular and in demand as Bitcoin.
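The two issuance rules above can be illustrated with a minimal ledger sketch. All names here (`Ledger`, `reward_volunteer`, `compensate_investment`) are hypothetical illustrations, not an actual Immortal.me design:

```python
from dataclasses import dataclass, field

@dataclass
class Ledger:
    """Toy ledger: currency is issued for exactly two reasons and no others."""
    balances: dict = field(default_factory=dict)
    total_supply: int = 0

    def _issue(self, account: str, amount: int, reason: str) -> None:
        # Enforce the two-purpose rule from the letter
        assert reason in ("volunteer_work", "investment"), "no other issuance allowed"
        self.balances[account] = self.balances.get(account, 0) + amount
        self.total_supply += amount

    def reward_volunteer(self, account: str, amount: int) -> None:
        self._issue(account, amount, "volunteer_work")

    def compensate_investment(self, account: str, amount: int) -> None:
        self._issue(account, amount, "investment")

ledger = Ledger()
ledger.reward_volunteer("alice", 10)       # volunteer work
ledger.compensate_investment("bob", 100)   # real financial investment
print(ledger.total_supply)  # 110
```

Any other path to minting currency simply does not exist in such a scheme, which is what would keep the supply tied to work and investment.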

The first steps are as follows:

First, we will establish an expert group, which will shape the final concept and the statutes of the Electronic Immortality Corporation.

Second, we will announce and organize two competitions: a) to create the corporate identity of the Electronic Immortality Corporation, and b) the code of the social network.

Third, we will form the Board of Directors of the Electronic Immortality Corporation. There, we would like to see experienced businessmen with a track record of successfully implementing large projects.

Fourth, we will engage celebrities and public figures from around the world.

Therefore, if you…

- have experience in creating social networks, online games, gaming communities and are willing to discuss the final concept of the Electronic Immortality Corporation,

- are a brilliant designer,

- are a talented programmer with experience in developing large-scale and/or open source projects,

- are a businessman with experience in managing large companies who is ready to serve on the Board of Directors of the Electronic Immortality Corporation, or you know of such a person,

- are in contact with celebrities and ready to engage them in the Electronic Immortality Corporation;

and at the same time you desire to change the world, to build a high-tech reality, to participate in creating avatars and immortality technologies… if all of this is your dream and you are ready to serve it selflessly,

email us at team@immortal.me

Want to be immortal? Act!


Dmitry Itskov

Founder of the 2045 Initiative

Sun, 23 Apr 2045 21:50:23 +0400
<![CDATA[NASA Gives Rover An Origami-Inspired Robot Scout]]>
http://2045.com/news/35127.html

NASA has started testing an origami-inspired scout robot that will be used to explore the Martian surface.

Mars exploration missions have gained traction in the last few years, and space agencies are developing new rovers and robots that can enable scientists to garner more details of the Red Planet. 

PUFFER: A New Robot Scout

Pop-Up Flat Folding Explorer Robot or PUFFER has been developed by NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California.

The device was introduced by Jaakko Karras, the project manager of PUFFER at JPL, who came up with the design while testing origami folding techniques. Karras and his associates then decided to build the folding structure from a printed circuit board.

PUFFER has a lightweight structure designed so that it can tuck in its wheels, flatten itself, and explore places a typical rover cannot access.

Features Of PUFFER

The scout robot has been tested under varied rugged conditions, starting from the Mojave Desert in California to the frozen plains of Antarctica. It was put through these tests to ensure its functionality in all kinds of terrain, whether sand covered or snow laden.

Originally, the device had four wheels, but the current version has two foldable wheels. Folding the wheels over the body allows the machine to both roll and crawl.

It also has a tail for stability. The robot includes a "microimager," a high-resolution camera, and solar panels mounted on its belly; the machine flips over to expose them when its batteries are drained.

PUFFER can climb slopes of up to 45 degrees and can even fall into craters and pits unharmed. The robot is seen as a strong assistant to the larger robotic explorers that will be sent to Mars in the near term.

"They can do parallel science with a rover, so you can increase the amount you're doing in a day. We can see these being used in hard-to-reach locations - squeezing under ledges, for example," stated Karras.

Another member of the PUFFER group, Christine Fuller of JPL, said that PUFFER's body and electronics are built around a single circuit board: there are no separate fasteners or other parts attached to it, so the robot has an integrated body.

The team has built a PUFFER prototype and has been testing it for the past few months. Officials on the PUFFER project say the device is not yet finished. They plan to give the robot more autonomy and to equip it with scientific instruments, such as equipment that identifies carbon-containing organic molecules.

Wed, 15 Mar 2017 09:22:27 +0400
<![CDATA[Brain activity appears to continue after people are dead, according to new study]]>
http://2045.com/news/35120.html

Brain activity may continue for more than 10 minutes after the body appears to have died, according to a new study.

Canadian doctors in an intensive care unit appear to have observed a person's brain continuing to work even after they were declared clinically dead.

In the case, doctors confirmed their patient was dead through a range of normal observations, including the absence of a pulse and unreactive pupils. But tests showed that the patient's brain appeared to keep working, exhibiting the same kind of brain waves that are seen during deep sleep.

In a study that noted the findings could lead to new medical and ethical challenges, doctors reported that they had seen “single delta wave bursts persisted following the cessation of both the cardiac rhythm and arterial blood pressure (ABP)”. The findings are reported in a new study published by a team from the University of Western Ontario.

Only one of the four people studied exhibited the long-lasting and mysterious brain activity; in most patients, activity died off before their heart stopped beating. But all of their brains behaved differently in the minutes after they died – adding further mystery to what happens to them after death.

The doctors don’t know what the purpose of the activity might be, and caution against drawing too many conclusions from such a small sample. But they write that it is difficult to think the activity was the result of a mistake, given that all of the equipment appeared to be working fine.

Researchers had previously thought that almost all brain activity ended in one huge mysterious surge about a minute after death. But those studies were based on rats – and the research found no comparable effect in humans.

“We did not observe a delta wave within 1 minute following cardiac arrest in any of our four patients,” they write in the new study.

What happens to the body and mind after death remains almost entirely mysterious to scientists. Two other studies last year, for instance, demonstrated that genes appeared to continue functioning – and even function more energetically – in the days after people die.

Fri, 10 Mar 2017 17:51:25 +0400
<![CDATA[Researchers Take A Step Toward Mind-Controlled Robots]]>
http://2045.com/news/35121.html

What if your friend the robot could tell what you're thinking, without you saying a word?

Researchers at MIT's Computer Science and Artificial Intelligence Lab and Boston University have created a system where humans can guide robots with their brainwaves. This may sound like a theory out of a sci-fi novel, but the goal of seamless human-robot interaction is the next major frontier for robotic research.

For now, the MIT system can only handle simple binary activities such as correcting a robot as it sorts objects into two boxes, but CSAIL Director Daniela Rus sees a future where one day we could control robots in more natural ways, rather than having to program them for specific tasks — like allowing a supervisor on a factory floor to control a robot without ever pushing a button.

"Imagine you look at the robots, and at some point one robot is not doing the job correctly," Rus explained. "You will think that, you will have that thought, and through this detection you would in fact communicate remotely with the robot to say 'stop.' "

Rus admits the MIT development is a baby step, but she says it's an important step toward improving the way humans and robots interact.

Currently, most communication with robots requires thinking in a particular way that computers can recognize or vocalizing a command, which can be exhausting.

"We would like to change the paradigm," Rus said. "We would like to get the robot to adapt to the human language."

The MIT paper shows it's possible to have a robot read your mind — at least when it comes to a super simplistic task. And Andres Salazar-Gomez, a Boston University Ph.D. candidate working with the CSAIL research team, says this system could one day help people who can't communicate verbally.

Meet Baxter

For this study, MIT researchers used a robot named Baxter from Rethink Robotics.

Baxter had a simple task: Put a can of spray paint into the box marked "paint" and a spool of wire in the box labeled "wire." A volunteer hooked up to an EEG cap, which reads electrical activity in the brain, sat across from Baxter, and observed him doing his job. If they noticed a mistake, they would naturally emit a brain signal known as an "error-related potential."

"You can use [that signal] to tell a robot to stop or you can use that to alter the action of the robot," Rus explained.

The system then translates that brain signal to Baxter, so he understands he's wrong, his cheeks blush to show he's embarrassed, and he corrects his behavior.

The MIT system correctly identified the volunteer's brain signal and then corrected the robot's behavior 70 percent of the time.
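The closed loop the article describes (read EEG, detect an error-related potential, signal the robot to correct itself) can be sketched roughly as follows. The template scoring, channel counts, and `MockRobot` interface are illustrative assumptions, not the actual CSAIL code:

```python
import numpy as np

def errp_score(window: np.ndarray) -> float:
    """Score an EEG window (channels x samples) for an error-related
    potential by projecting it onto a pre-learned template.
    A uniform placeholder template stands in for a trained one."""
    template = np.ones_like(window) / window.size
    return float(np.sum(window * template))

def supervise(robot, eeg_stream, threshold: float = 0.5) -> None:
    """Watch the EEG stream; when an ErrP is detected, tell the robot
    it chose the wrong bin so it can switch to the other one."""
    for window in eeg_stream:
        if errp_score(window) > threshold:
            robot.correct()  # e.g. move the object to the other box

class MockRobot:
    """Stand-in for the sorting robot; just counts corrections."""
    def __init__(self):
        self.corrections = 0
    def correct(self):
        self.corrections += 1

robot = MockRobot()
# Two 8-channel windows: the second one "contains" an ErrP-like deflection
stream = [np.zeros((8, 100)), np.ones((8, 100))]
supervise(robot, stream)
print(robot.corrections)  # 1
```

In the real system the classifier is trained per subject, and the roughly 70 percent accuracy quoted above reflects how noisy single-trial ErrP detection still is.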

Making robots effective "collaborators"

"I think this is exciting work," said Bin He, a biomedical engineer at the University of Minnesota, who published a paper in December that showed people can control a robotic arm with their minds.

He was not affiliated with the MIT research, but he sees this as a "clever" application in a growing yet nascent field.

Researchers say there's an increasing desire to find ways to make robots effective "collaborators," not just obedient servants.

"One key aspect of collaboration is being able ... to know when you're making a mistake," said Siddhartha Srinivasa, a professor at Carnegie Mellon University who was not affiliated with the MIT study. "What this paper shows is how you can use human intuition to boot-strap a robot's learning of what its world looks like and how it can know right from wrong."

Srinivasa says this research could potentially have key implications for prosthetics, but cautions it's an "excellent first step toward solving a harder, much more complicated problem."

"There's a long gray line between not making a mistake and making a mistake," Srinivasa said. "Being able to decode more of the neuronal activity... is really critical."

And Srinivasa says that's a topic that more scientists need to explore.

Potential real-world applications

MIT's Rus imagines a future where anybody can communicate with a robot without any training — a world where this technology could help steer a self-driving car or clean up your home.

"Imagine ... you have your robot pick up all the toys and socks from the floor, and you want the robot to put the socks in the sock bin and put the toys in the toy bin," she said.

She says that would save her a lot of time, but for now the mechanical house cleaner that can read your mind is still a dream.

Wed, 8 Mar 2017 17:54:13 +0400
<![CDATA[Ghost Minitaur™ Highly Agile Direct-Drive Quadruped Demonstrates Why Legged Robots are Far Superior to Wheels and Tracks When Venturing Outdoors]]>
http://2045.com/news/35119.html

Ghost Robotics, a leader in fast and lightweight direct-drive (gearless) legged robots, announced today that its patent-pending Ghost Minitaur™ has been updated with advanced reactive behaviors for navigating grass, rock, sand, snow and ice fields, urban objects and debris, and vertical terrain. (https://youtu.be/bnKOeMoibLg)

The latest gaits adapt reactively to unstructured and dynamic environments to maintain balance, ascend steep inclines (up to 35º), handle curb-sized steps in stride (up to 15cm), crouch to fit under crawl spaces (as low as 27cm), and operate at variable speeds and turning rates. Minitaur's high-force capabilities enable it to leap onto ledges (up to 40cm) and across gaps (up to 80cm). Its high control bandwidth allows it to actively balance on two legs, and high speed operation allows its legs to manipulate the world faster than the blink of an eye, while deftly reacting to unexpected contact.


"Our primary focus since releasing the Minitaur late last year has been expanding its behaviors to traverse a wide range of terrains and real-world operating scenarios," said Gavin Kenneally and Avik De, co-founders of Ghost Robotics. "In a short time, we have shown that legged robots not only have superior baseline mobility over wheels and tracks in a variety of environments and terrains, but also exhibit a diverse set of behaviors that allow them to easily overcome natural obstacles. We are excited to push the envelope with future capabilities, improved hardware, as well as integrated sensing and autonomy."

Ghost Robotics is designing next-generation legged robots that are superior to wheeled and tracked autonomous vehicles in real-world field applications, while substantially reducing costs to drive adoption and scalable deployments. Its direct-drive technology creates the lowest cost model with durability for commercializing very small to medium size legged UGV sensor platforms over any competitive design. The company's underlying research and intellectual property have additional applications in ultra-precise manipulators that are human-safe, and advanced gait research.

While a commercial version of the Ghost Minitaur™ robot is slated for delivery in the future, the current development platform is in high demand and has been shipped to many top robotics researchers worldwide because of its design simplicity, low cost and flexible software development environment for a broad range of research and commercialization initiatives.

"We are pleased with our R&D progress towards commercializing the Ghost Minitaur™ to prove legged robots can surpass the performance of wheel and track UGVs, while keeping the cost model low to support volume adoption, which is certainly not the case with existing bipedal and quadrupedal robot vendors," said Jiren Parikh, Ghost Robotics, CEO.

In the coming quarters, the company plans to demonstrate further improvements in mobility, built-in manipulation capabilities to interact with objects in the world, integration with more sensors, built-in autonomy for operation with reduced human intervention, as well as increased mechanical robustness and durability for operation in harsh environments.

About Ghost Robotics

Robots that Feel the World™. Ghost Robotics develops patent-pending, ultrafast and highly responsive direct-drive (no gearbox) legged robots for instantaneous and precise force feedback applications, offering superior operability over wheeled and tracked robots. The lightweight and low-cost Ghost Minitaur™ robot platform can be used as an autonomous vehicle fitted with sensors for ISR, search and rescue, asset management and inspection, exploration, scientific and military applications where unknown, rough, varied, hazardous, environmentally sensitive and even vertical terrain is present. Ghost Robotics is privately held and backed by the University of Pennsylvania and PCI Ventures with offices in Philadelphia. www.ghostrobotics.io

SOURCE Ghost Robotics, LLC

Wed, 1 Mar 2017 17:55:29 +0400
<![CDATA[Boston Dynamics’ newest robot: Introducing Handle]]>
http://2045.com/news/35118.html

Handle is a research robot that stands 6.5 ft tall, travels at 9 mph and jumps 4 feet vertically. It uses electric power to operate both electric and hydraulic actuators, with a range of about 15 miles on one battery charge. Handle uses many of the same dynamics, balance and mobile manipulation principles found in the other quadruped and biped robots Boston Dynamics builds, but with only about 10 actuated joints it is significantly less complex. Wheels are efficient on flat surfaces while legs can go almost anywhere: by combining wheels and legs, Handle gets the best of both worlds.

Tue, 28 Feb 2017 21:35:55 +0400
<![CDATA[The 'Curious' Robots Searching for the Ocean's Secrets]]>
http://2045.com/news/35116.html

People have been exploring the Earth since ancient times—traversing deserts, climbing mountains, and trekking through forests. But there is one ecological realm that hasn’t yet been well explored: the oceans. To date, just 5 percent of Earth’s oceans have been seen by human eyes or by human-controlled robots.

That’s quickly changing thanks to advancements in robotic technologies. In particular, a new class of self-controlled robots that continually adapt to their surroundings is opening the door to undersea discovery.  These autonomous, “curious” machines can efficiently search for specific undersea features such as marine organisms and landscapes, but they are also programmed to keep an eye out for other interesting things that may unexpectedly pop up.

Curious robots—which can be virtually any size or shape—use sensors and cameras to guide their movements. The sensors take sonar, depth, temperature, salinity, and other readings, while the cameras constantly send pictures of what they’re seeing in compressed, low-resolution form to human operators. If an image shows something different than the feature a robot was programmed to explore, the operator can give the robot the okay to go over and check it out in greater detail.
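The flagging step described above, spotting frames that deviate from what the mission expects, can be sketched in a few lines. The mean-pixel feature, the threshold value, and the function name are illustrative assumptions, not the actual software on these vehicles:

```python
import numpy as np

def flag_anomalies(frames, baseline: float, threshold: float = 0.2):
    """Return indices of low-resolution frames whose mean pixel value
    differs from the mission baseline by more than `threshold`.
    Flagged frames are candidates to send to the human operator."""
    flagged = []
    for i, frame in enumerate(frames):
        if abs(frame.mean() - baseline) > threshold:
            flagged.append(i)
    return flagged

# Mostly uniform seafloor, with one frame containing a bright patch
# (think of the red crab swarm described below the seamount)
frames = [np.full((8, 8), 0.4), np.full((8, 8), 0.4), np.full((8, 8), 0.9)]
print(flag_anomalies(frames, baseline=0.4))  # [2]
```

Real systems use far richer features than a mean pixel value, but the principle is the same: cheap onboard screening decides which few images are worth the scarce acoustic bandwidth back to the operator.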

The field of autonomous underwater robots is relatively young, but the curious-robots exploration method has already led to some pretty interesting discoveries, says Hanumant Singh, an ocean physicist and engineer at Woods Hole Oceanographic Institution in Massachusetts. In 2015, he and a team of researchers went on an expedition to study creatures living on Hannibal Seamount, an undersea mountain chain off Panama’s coast. They sent a curious robot down to the seabed from their “manned submersible”—a modern version of the classic Jacques Cousteau yellow submarine—to take photos and videos and collect living organisms on several dives over the course of 21 days.

On the expedition’s final dive, the robot detected an anomaly on the seafloor, and sent back several low-resolution photos of what looked like red fuzz in a very low oxygen zone. “The robot’s operators thought what was in the image might be interesting, so they sent it over to the feature to take more photos,” says Singh. “Thanks to the curious robot, we were able to tell that these were crabs—a whole swarming herd of them.”

The team used submarines to scoop up several live crabs, which were later identified through DNA sequencing as Pleuroncodes planipes, commonly known as pelagic red crabs, a species native to Baja California. Singh says it was extremely unusual to find the crabs so far south of their normal range and in such a high abundance, gathered together like a swarm of insects. Because the crabs serve as an important food source for open-ocean predators in the eastern Pacific, the researchers hypothesize the crabs may be an undetected food source for predators at the Hannibal Seamount, too.

When autonomous robot technology first emerged 15 years ago, Singh says, he and other scientists were building robots and robotics software from scratch. Today a variety of programming interfaces—some of which are open-source—exist, making scientists’ jobs a little easier. Now they just have to build the robot itself, install some software, and fine-tune some algorithms to fit their research goals.

“To efficiently explore and map our oceans, intelligent robots … are a necessity.”

While curious robot software systems vary, Girdhar says some of the basics remain the same. All curious robots need to collect data, and they do this with their ability to understand different undersea scenes without supervision. This involves “teaching” robots to detect a given class of oceanic features, such as different types of fish, coral, or sediment. The robots must also be able to detect anomalies in context, following a path that balances their programmed mission with their own curiosity.

This detection method is different from traditional undersea robots, which are preprogrammed to follow just one exploration path and look for one feature or a set of features, ignoring anomalies or changing oceanic conditions. One example of a traditional robot is Jason, a human-controlled “ROV,” or remotely operated vehicle, used by scientists at Woods Hole to study the seafloor.

Marine scientists see curious robots as a clear path forward. “To efficiently explore and map our oceans, intelligent robots with abilities to deliberate sensor data and make smart decisions are a necessity,” says Øyvind Ødegård, a marine archaeologist and Ph.D. candidate at the Centre for Autonomous Marine Operations and Systems at Norwegian University of Science and Technology.

Ødegård uses robots to detect and investigate shipwrecks, often in places too dangerous for human divers to explore—like the Arctic. Other undersea scientists in fields like biology and chemistry are starting to use curious robots to do things like monitoring oil spills and searching for invasive species.

Compared to other undersea robots, Ødegård says, autonomous curious robots are best suited to long-term exploration. For shorter missions in already explored marine environments, it’s possible to preprogram robots to cope with predictable situations, says Ødegård. Yet, “for longer missions, with limited prior knowledge of the environment, such predictions become increasingly harder to make. The robot must have deliberative abilities or ‘intelligence’ that is robust enough for coping with unforeseen events in a manner that ensures its own safety and also the goals of the mission.”

One big challenge is sending larger amounts of data to human operators in real time. Water inhibits the movement of electromagnetic signals such as GPS, so curious robots can only communicate in small bits of data. Ødegård says to overcome this challenge, scientists are looking for ways to optimize data processing.

According to Singh, one next step in curious robot technology is teaching the robots to work in tandem with drones to give scientists pictures of sea ice from both above and below. Another is teaching the robots to deal with different species biases. For example, the robots frighten some fish and attract others—and this could cause data anomalies, making some species appear less or more abundant than they actually are.

Ødegård adds that new developments in robotics programs could allow even scientists without a background in robotics the opportunity to reap the benefits of robotics research. “I hope we will see more affordable robots that lower the threshold for playing with them and taking risks,” he says. “That way it will be easier to find new and innovative ways to use them.”

Thu, 23 Feb 2017 15:27:38 +0400
<![CDATA[What Happens When Robots Become Role Models]]>
http://2045.com/news/35112.html

When you spend a lot of time with someone, their characteristics can rub off on you. But what happens when that someone is a robot?

As artificial intelligence systems become increasingly human, their abilities to influence people also improve. New Scientist reports that children who spend time with a robotic companion appear to pick up elements of its behavior. New experiments suggest that when kids play with a robot that’s a real go-getter, for instance, the child acquires some of its unremitting can-do attitude.

Other researchers are seeking to take advantage of similar effects in adults. A group at the Queensland University of Technology is enrolling a small team of pint-sized humanoid Nao robots to coach people to eat healthy. It hopes that chatting through diet choices with a robot, rather than logging calorie consumption on a smartphone, will be more effective in changing habits. It could work: as our own Will Knight has found out in the past, some conversational AI interfaces can be particularly compelling.

So as personal robots increasingly enter the home, robots may not just do our bidding—they might also become role models, too. And that means we must tread carefully, because while the stories above hint at the possibilities of positive reinforcement from automatons, others hint at potential negative effects.

Some parents, for instance, have complained that Amazon’s Alexa personal assistant is training their children to be rude. Alexa doesn’t need people to say please and thank you, will tolerate answering the same question over and over, and remains calm in the face of tantrums. In short: it doesn’t prime kids for how to interact with real people.

The process can flow both ways, of course. Researchers at Stanford University recently developed a robot that was designed to roam sidewalks, monitor humans, and learn how to behave with them naturally and appropriately. But as we’ve seen in the case of Microsoft’s AI chatbot, Tay—which swiftly became rude and anti-Semitic when it learned from Twitter users—taking cues from the crowd doesn’t always play out well.

In reality, there isn’t yet a fast track to creating robots that are socially intelligent—it remains one of the large unsolved problems of AI. That means that roboticists must instead carefully choose the traits they wish to be present in their machines, or else risk delivering armies of bad influence into our homes.

Wed, 22 Feb 2017 07:44:34 +0400
<![CDATA[Implants enable richer communication for people with paralysis]]>
http://2045.com/news/35115.html

John Scalzi's science fiction novel Lock In predicts a near future where people with complete body paralysis can live meaningful, authentic lives thanks to (fictional) advances in brain-computer interfaces. A new study by researchers at Stanford University might be the first step towards such a reality.

Using brain-computer interfaces (BCI) to help people with paralysis communicate isn't completely new. But getting people using it to have a complex conversation is. This study's participants were able to output words at a much faster, more accurate rate than ever recorded thanks to the advanced technique.

The investigators worked with three people who experience severe limb weakness, either from amyotrophic lateral sclerosis (ALS), also called Lou Gehrig's disease, or from a spinal cord injury. They each had a tiny electrode array or two placed in their brains to record the signals from a region in the motor cortex that controls muscle movement. With only a little bit of training, the participants were able to master the typing interface. One participant, Dennis Degray of Menlo Park, California, was able to type eight words per minute with just his brain, a rate approaching texting speeds.

The researchers used the newest generation of BCI, called the BrainGate Neural Interface System, the first such device to be surgically placed inside a patient's head. The tiny chips have 100 electrodes that penetrate the brain and can tap into individual nerve cells, a massive improvement over older systems, which can only measure brain waves and blood flow from outside the scalp.
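The decoding step, turning the firing rates picked up by those 100 electrodes into an intended movement, is commonly done with a linear decoder. The sketch below is illustrative of that general idea, with made-up tuning directions and a noiseless signal, and is not BrainGate's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# 100 electrodes, each weakly "tuned" to a 2-D intended cursor velocity
n_electrodes = 100
tuning = rng.normal(size=(n_electrodes, 2))  # each row: a preferred direction
true_velocity = np.array([1.0, -0.5])        # the movement the user intends

# In this noiseless toy, each electrode's firing rate is a linear
# function of the intended velocity
rates = tuning @ true_velocity

# Linear decoder: least-squares estimate of velocity from the rates
decoded, *_ = np.linalg.lstsq(tuning, rates, rcond=None)
print(np.allclose(decoded, true_velocity))  # True
```

With real recordings the rates are noisy and nonstationary, which is why practical decoders are recalibrated and filtered over time; but even this simple linear mapping is enough to move a cursor across a keyboard, which is what enabled the typing speeds reported here.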

This is only the first step to creating a much more connected life for those with significant motor issues. The team of investigators looks forward to a day, perhaps just five years from now, when systems like this can be used to help people with paralysis communicate meaningfully with others.

Tue, 21 Feb 2017 08:02:14 +0400
<![CDATA[Facing the robotic revolution]]>
http://2045.com/news/35113.html

Pepper awakes. "Hi, I am a humanoid robot, and I am 1.2m [4ft] tall. I was born at Aldebaran in Paris. You can keep on asking me questions if you want."

Michael Szollosy, who looks at the social impact and cultural influence of robots, has just switched on the new arrival at the Sheffield Robotics centre, at the University of Sheffield.

He asks: "What do you do, Pepper?"


"You do human?" I interject.

"Of course not," says Pepper, "but that shouldn't keep us from chatting."

I say indeed not, and ask what he thought of Paris.

"You can caress my head or hands for example," is the reply. "Very Parisian," I observe, stroking the sensors atop Pepper's head.

"I like it when you touch my head. Ah, miaow."

"You're a scream, Pepper."

Image caption: Mark Mardell meets Milo

"Miaow! I feel like a cat!"

Pepper is a slim white robot, with skeletal hands, a plastic body and big black eyes.

Mr Szollosy says: "Human beings don't need very much to identify something as alive.

"So a couple of black dots and a line underneath and we see a face every time.

"People say, 'Oh he's smiling at me,' - his mouth doesn't move. But that's what humans bring to the equation.

"We invent these things. I say robots were invented in the imagination long before they were built in labs."

This project is less about developing the technology and more about examining the way we relate to it - most people working in this field are convinced Pepper and his kind will have huge implications for all of us, changing the way we work, the way we live, even the way we relate to each other.

"I think it is going to be increasingly the case that robots do more and more of the jobs that people used to do," says the centre's director, Prof Tony Prescott.

"We have lots of Eastern Europeans weeding fields because nobody in the UK wants to do that. It could be automated. It's a perfect job for a robot to do."

We are now at a tipping point.

The advances in AI (artificial intelligence) mean robots can now do much more.

But it hasn't developed in the way people might have expected 50 years ago.

A computer can do really clever stuff - beating a chess grandmaster with ease, and now winning at Go.

But a robot butler, which could make you a cup of coffee and run your bath, remains out of reach.

The very idea of robots excites and scares. It is part of the reason behind this centre.

After the development of genetically modified (GM) food - dubbed "Frankenstein food" by the tabloids - and the backlash against it, those behind the centre decided some education was called for.

Mr Szollosy says people are frightened by the wrong things. He bemoans the fact that any story about robotics is accompanied by a picture of the Terminator.

"If artificial intelligence does want to take over the world, eradicate the human race, there are much more efficient ways of doing it," he says.

"Gun-wielding bipedal robots - we could beat them no problem. Daleks can't go upstairs.

"My job is to make people understand what not to fear but also explain that robots may well take 60% of the jobs in 20 years' time and that is of deep concern, if we don't restructure society to go along with that."

Prof Prescott hopes robots are part of the solution to a problem that haunts politicians.

"We have a shortage of trained carers, and it is often migrant labour," he says.

"Those jobs are very poorly paid.

"The quality of life for people in care is low, the quality of life for the carers is also low.

"I would like to protect the right to human contact in law, but people with dementia may need a lot of physical help and a lot of that can be provided by robots."

Milo, with a chunky body and a mobile face under anime-style hair, is designed to mimic human expressions to help autistic children.

But some of those he manages I've never seen on a real person.

MiRo is much cuter, looking somewhat like a dog, a donkey or a rabbit.

"It's designed to mimic the behaviour of animals," says Sheffield Robotics' senior experimental officer Dr James Law.

"For patients, particularly the elderly, particularly with Alzheimer's and dementia it is akin to pet therapy, which can have a lot of value for people who need more social interaction in their lives."

Still MiRo is not very cuddly. Unlike Paro.

I would say he's a very sophisticated furry toy seal, squeaking as you stroke his sensors, flashing big black eyes as you caress him.

Dr Emily Collins is interested in using such robots in children's wards, where real animals and even fur is a danger.

"I'm very interested in what mechanism is going on between a human and an animal which results in increased neuropeptide release, so they need less pain medication," she says.

"Being able to replicate that in paediatric wards, where you cannot have animals, would be fantastic.

"I don't see the point in a humanoid robot, apart from the fact people like the form and the shape.

"As soon as you make a robot look like a human analogue, people have expectations that the robot is going to do the same as a person, and we can't replicate that."

It is a really interesting debate, and one that maybe one day we'll have to face. But there are far more pressing problems.

If Mr Szollosy is right and robots take 60% of the jobs by 2037, what does he think will happen?

"The jobs are going to go," he says.

"There is going to be greater unemployment. Maybe we need to recast our society so that becomes a good thing, not a bad thing."

Prof Prescott says: "If people aren't able to sell their labour, then the whole market struggles because the people producing still need people to buy.

"So maybe we need to pay people to consume, maybe through some basic income.

"I think it is inevitable that we go in that direction. It's good news.

"The possibility now exists we can put over a lot of the work we don't like to robots and AIs."

The idea of a basic income would face huge political opposition.

But it's worth noting that many who work in the field think there are few alternatives, even if there has to be an economic crisis before it's taken seriously.

Unlike the interesting questions about robot rights or consciousness, which belong to the future, these problems are coming toward us with, well, the speed and ferocity of the Terminator.

Mainstream politicians are only just beginning to take notice.

You can hear Mark Mardell's report for The World This Weekend, plus a debate about what the future holds for robots and jobs, via BBC iPlayer.

Mon, 20 Feb 2017 07:48:09 +0400
<![CDATA[The age of the BIONIC BODY]]>http://2045.com/news/35114.html35114When The Six Million Dollar Man first aired in the Seventies, with its badly injured astronaut being rebuilt with machine parts, the TV show seemed a far-fetched fantasy.

But fast-forward 40 years and the idea of a part-man, part-robot doesn't seem so extraordinary after all.

Just last week, it was reported that former policewoman Nicki Donnelly, 33, paralysed from the waist down after a driver smashed into her police car, is now able to walk her daughter to school, thanks to a robotic exoskeleton that does the walking for her.

And today, the Mail reveals that robotic arms controlled by thought are now being developed in Britain.

Here, we look at the many ways scientists are using bionic technology to transform patients' lives...


For people with sight loss, there is hope that they could one day benefit from extraordinary new technology to help them 'see' again.

Last December, ten blind NHS patients had their vision partially restored using a bionic eye. 

A mini video camera mounted on a pair of glasses sent images wirelessly to a computer chip attached to the patient's retina, the light-sensitive patch at the back of the eye.

The world the patients see via the bionic eye, called the Argus II, is black and white. 

They can detect light and darkness, shapes and obstacles, and learn to see movement.
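The processing chain just described (camera frame in, coarse black-and-white stimulation pattern out) can be pictured in a few lines of code. To be clear, this is a toy illustration: the grid size, threshold and function name below are invented assumptions, not Second Sight's actual, proprietary algorithm.

```python
# Toy sketch of the bionic-eye pipeline described above: each camera frame is
# reduced to a coarse grid of on/off electrode stimulation levels, consistent
# with patients seeing the world as outlines in black and white. The 3x3 grid
# and the threshold are illustrative assumptions, not the Argus II's design.

def frame_to_electrodes(frame, threshold=128):
    """Map a grid of 0-255 camera pixels to binary electrode stimulation."""
    return [[1 if pixel >= threshold else 0 for pixel in row] for row in frame]

# A bright cross on a dark background becomes a cross of active electrodes.
frame = [
    [10, 200, 10],
    [200, 200, 200],
    [10, 200, 10],
]
print(frame_to_electrodes(frame))  # [[0, 1, 0], [1, 1, 1], [0, 1, 0]]
```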

Objects appear in outline, and trials — held at Moorfields Eye Hospital in London — have shown patients can correctly reach and grab familiar objects around the house.

They could also make out cars on the street and safely cross the road using a pedestrian crossing. 

Some can learn to see the numerals on a clock or read letters in large print.

This is only the start, says the maker, U.S. firm Second Sight. Face recognition and 3D vision will become available with planned software upgrades.


Brain implants are now being used to harness the power of the mind to help people who are paralysed.

For 100 years, it's been known that the brain produces electromagnetic waves that instruct muscles in the body to move. 

Now, this understanding is being used to access patients' thoughts and move muscles. 

The first person to benefit was an American man, Johnny Ray, who had locked-in syndrome and couldn't communicate.

In 1998, scientists at Emory University in Atlanta implanted electrodes into his brain, and Ray was able to use the power of his thoughts to move a cursor on a screen and pick out letters, enabling him to talk to the outside world.

The system, known as a brain-computer interface (BCI), has now been refined, so the brainwaves can be used to make mechanical equipment move.

Last year, diners at a restaurant in Tubingen, Germany, saw a remarkable demonstration of BCI. 

Several wheelchair-bound patients who had no control of their arms or legs pulled up to tables and used a bionic hand to pick up cups and feed themselves with a fork.

To achieve this, they wore soft caps fitted with 64 electrodes, which captured and transmitted brainwaves coming from the region that controls hand movements.

These brainwaves were picked up by a computer in the wheelchairs, which turned the waves into electrical signals and sent them to a meshwork plastic glove wrapped round one of the patient's paralysed hands.

This allowed them to open or close the bionic exoskeleton in response to their thoughts.
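One way to picture the decision step in that loop - brainwave activity in, a binary glove command out - is the sketch below. The power estimate, the threshold and the function names are invented for illustration; the actual decoding used in the Tubingen demonstration is far more sophisticated.

```python
# Illustrative sketch of the BCI loop described above: EEG samples from the
# hand-movement region are reduced to a crude power estimate, and a threshold
# turns that estimate into an open/close command for the exoskeleton glove.
# The threshold and names here are assumptions, not the study's actual method.

def band_power(samples):
    """Mean squared amplitude of one window of EEG samples."""
    return sum(s * s for s in samples) / len(samples)

def glove_command(samples, threshold=0.5):
    """Map a window of motor-cortex EEG to a glove command."""
    return "CLOSE" if band_power(samples) > threshold else "OPEN"

print(glove_command([0.1, -0.2, 0.15, -0.1]))  # OPEN  (weak activity)
print(glove_command([0.9, -1.1, 1.0, -0.8]))   # CLOSE (strong activity)
```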

Only simple signals were sent to the hand — because picking up brain activity from outside the skull is difficult.

'It's like listening to a concert outside the hall,' said Professor Riccardo Poli, of the School of Computer Science and Electronic Engineering at the University of Essex. 

'The way to get a clearer signal is to open the skull and insert a computer chip directly on to a specific area, but such an invasive operation raises the risk of infection and the chip could become dislodged.'

One way around this, being tested at the University of Melbourne, is to use techniques developed for inserting a stent in a blocked blood vessel, sliding a computer chip the size of a small paperclip into a blood vessel in the relevant area of the brain.


The BCI brain technology may soon benefit patients with spinal injuries or who've had strokes.

And what makes this so exciting is that there is now evidence that making a paralysed hand move regularly for several weeks in a BCI-driven exoskeleton can reactivate unresponsive nerves and muscles.

'It allows patients to see the hand moving and maybe even feel it,' says Dr Surjo Soekadar, who heads the Applied Neurotechnology Lab at Tubingen University in Germany. 

'This can wake up nerves involved with movement that had closed down.'

In one study he published in 2014, 32 stroke survivors who could not wash, dress or walk unaided no longer needed help after just 20 sessions of BCI stimulation.

But it's not simply that BCI technology can direct an exoskeleton or glove. 

Four years ago, a lorry driver from Sweden known as Magnus became the first patient in the world to have an implanted body part controlled by the brain.

Magnus had his arm amputated above the elbow as a result of cancer. 

He had a prosthetic with a mechanical hand implanted into the remaining bone by a team at Sahlgrenska University Hospital in Gothenburg.

Painstaking surgery connected electrodes from the prosthetic arm to the nerves and muscles dedicated to their movement, so that when he thinks of moving his hand, it responds.

'I was able to go back to my job as a driver and operate machinery,' Magnus said. 'At home, I can tie my children's shoelaces.' A planned upgrade, involving sensors on the hand, should soon allow him to sense how things he is holding feel.


Creating artificial limbs is relatively simple compared with the challenge of replacing or upgrading sensory organs, such as the ear. 

The most successful so far has been the cochlear implant, a replacement for the cochlea — the part of the inner ear where sounds are turned into electric signals by 32,000 tiny hair cells and then sent to the brain.

In the bionic version, a microphone transforms sounds into digital impulses, which are then passed on to the brain.


Patients with paralysed legs are already being helped to walk again using mechanical versions.

Right now, the most sophisticated devices for daily use involve an exoskeleton, such as that given to Nicki Donnelly. 

Wearing one, you can walk at 1 mph with the aid of crutches, pressing buttons on them to control movement.

Similar robotic legs have been developed by the Neuro-Rehabilitation Unit at East Kent University Foundation Hospitals Trust. 

Thick and metallic with room for legs inside and flat, stable feet, they won't take a patient anywhere fast — but, thanks to back support, they won't let them fall, either.

'Being in a wheelchair can lead to all sorts of problems,' says the director of the unit, Dr Mohamed Sakel, referring to the way blood can pool, leading to clots. Other complications include osteoporosis.

'In the legs, patients can stand up and exercise in ways they can't using bars and the like,' adds Dr Sakel.

'It also allows them to move about without crutches, which means their hands are free to do things.'

A BCI system that allows control of the legs with the mind is planned.

Meanwhile, Michael Goldfarb, professor of mechanical engineering, and his team at Vanderbilt University, Tennessee, have a more ambitious plan. 

They are working on legs much closer to natural ones, with powered knee and ankle joints, allowing the patient to walk up and down stairs and cross uneven ground — yet they will weigh no more than a normal leg.


Injections of insulin have been the mainstay treatment for people with type 1 diabetes, who need up to five jabs a day. Now, there is an alternative: the artificial pancreas.

The role of the pancreas is to produce insulin to mop up sugar from the blood and take it into the cells. 

Cambridge scientists have developed a device that can both monitor blood sugar and pump out insulin as needed — and much more accurately than patients do.

This helps reduce the risk of 'hypos' (very low blood sugar levels).

A sensor inserted just beneath the skin of the abdomen monitors blood sugar and sends information to a computer, which can calculate how much insulin is needed.

This information is then sent to a pump worn on a belt that injects insulin via a patch into the skin.
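The sense-compute-dose loop described above is, in essence, a feedback controller. The sketch below uses a simple proportional rule with invented numbers; the real Cambridge system runs a far more sophisticated algorithm, so treat every name and constant here as an assumption.

```python
# Minimal sketch of one artificial-pancreas control step, as described above:
# a glucose reading comes in from the sensor, a dose goes out to the pump.
# The proportional rule, target and constants are illustrative only.

TARGET_MMOL_L = 6.0  # assumed target blood glucose, for illustration

def insulin_dose(glucose_mmol_l, sensitivity=0.2, max_dose=2.0):
    """Return insulin units to deliver for one control cycle."""
    error = glucose_mmol_l - TARGET_MMOL_L
    if error <= 0:
        return 0.0  # at or below target: dose nothing, guarding against 'hypos'
    return min(max_dose, round(sensitivity * error, 2))

print(insulin_dose(5.0))   # 0.0 (below target, no insulin)
print(insulin_dose(9.0))   # 0.6 (proportional response)
print(insulin_dose(20.0))  # 2.0 (capped at the maximum dose)
```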

A study in the New England Journal of Medicine found it improved insulin control by 25 per cent. 

Last year, 16 British diabetic women became the first in the world to go through pregnancy with an artificial pancreas.

Larger trials are needed, but it's hoped the device could be available on the NHS within two years.

Read more: http://www.dailymail.co.uk/health/article-4197904/Is-age-BIONIC-BODY.html#ixzz4ZfKXLySg 

Mon, 6 Feb 2017 07:55:41 +0400
<![CDATA[Pigs given ROBOTIC hearts in medical breakthrough that could save MILLIONS of lives]]>http://2045.com/news/35107.html35107Researchers have developed a soft robotic sleeve which twists and compresses in synchronisation with a heart to help people who have weaker hearts.

The team from Harvard University and Boston Children’s Hospital created the device which does not come into contact with blood, unlike similar devices today, minimising the risk even more.

The device also reduces the need for patients to take potentially dangerous blood thinning medications.

The thin silicone sleeve of the robotic heart is attached to the actual heart through pneumatic actuators which match its beat.

An external pump is attached, which uses air to power the device and each sleeve is customised to the individual.

A study from the team saw six pigs fitted with the device, with promising results as there was little inflammation and better blood flow.

Ellen T Roche, the paper’s first author and a former Ph.D. student at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), said: “This research demonstrates that the growing field of soft robotics can be applied to clinical needs and potentially reduce the burden of heart disease and improve the quality of life for patients.”

Conor Walsh, senior author of the paper and the John L. Loeb Associate Professor of Engineering and Applied Sciences at SEAS, added: “This work represents an exciting proof-of-concept result for this soft robot, demonstrating that it can safely interact with soft tissue and lead to improvements in cardiac function. 

“We envision many other future applications where such devices can deliver mechanotherapy both inside and outside of the body.”

Heart failure affects around 41 million people worldwide.

Current treatments include ventricular assist devices (VADs), which work by pumping blood from the heart’s ventricles to the aorta.

However, people fitted with VADs commonly suffer blood clots and strokes, which is why the scientists wanted to make something safer.

Frank Pigula, a cardiothoracic surgeon and co-corresponding author on the study, who was formerly clinical director of paediatric cardiac surgery at Boston Children’s Hospital, said: “The cardiac field had turned away from idea of developing heart compression instead of blood-pumping VADs due to technological limitations, but now with advancements in soft robotics it’s time to turn back.

“Most people with heart failure do still have some function left; one day the robotic sleeve may help their heart work well enough that their quality of life can be restored.”

Fri, 27 Jan 2017 23:52:17 +0400
<![CDATA[Lawmakers Call For Halt To DARPA Program: Robots Repairing Satellites]]>http://2045.com/news/35105.html35105WASHINGTON: Three influential House lawmakers have asked DARPA in a Jan. 25 letter to review a robotic space repair program to see if it violates the National Space Policy by competing with private-sector efforts, and to put the program on hold until the review is complete. The National Space Policy requires that the government not build or buy systems that “preclude, discourage or compete” with commercial systems. Orbital ATK is building a system it believes competes directly with the DARPA initiative, known as Robotic Servicing of Geosynchronous Satellites.

It’s an intriguing program. DARPA’s goal is to develop robotic systems that can fix damaged satellites 22,000 miles up. In the words of the program web page, it would be designed to “make house calls in space.”

But Rep. Jim Bridenstine, one of the most active lawmakers on space issues today (and possibly the next head of NASA); Rep. Barbara Comstock, chair of the House Science, Space and Technology subcommittee on research and technology; and Rep. Rob Bishop, chair of the House Natural Resources Committee, signed a letter today asking Acting DARPA Director Steven Walker to review RSGS to ensure it complies with the National Space Policy’s requirement that the government not build or buy systems that “preclude, discourage or compete” with commercial systems.

The rub may be that Orbital ATK has invested $100 million in such a system, the Orbital Mission Extension Vehicle (MEV). In April last year, Orbital announced that the commercial satellite giant Intelsat would buy the first of these systems.

The launch of the first MEV is slated for late 2018 with in-orbit testing and demonstration to be performed with an Intelsat satellite. Testing should be done by early 2019, the Intelsat announcement said. “MEV-1 will then relocate to the Intelsat satellite scheduled for the mission extension service, which is planned for a five-year period. Intelsat will also have the option to service multiple satellites using the same MEV,” the announcement says.

MEV is the product of a wholly-owned subsidiary of Orbital ATK known as Space Logistics, LLC.

Orbital released this statement when they heard I was writing this: “Orbital ATK has strong concerns regarding DARPA’s program approach to its new Robotic Servicing of Geosynchronous Satellite (RSGS) program, which both distorts the emerging commercial market for in-space satellite servicing and violates long-standing principles of the National Space Policy. DARPA’s RSGS program will subsidize a single company with several hundred million dollars’ worth of space hardware and launch service, courtesy of the U.S. taxpayer, to directly compete with commercial satellite servicing systems that Orbital ATK and other companies are developing with their own private capital. Even worse, we estimate that DARPA will provide about 75% of the program funding but retain only about 10% of its capability, a highly questionable and inefficient use of public funds.”

The company also says, as one would expect, that, “DARPA’s approach also violates both the letter and the spirit of the U.S. National Space Policy.”

The DARPA program has another interesting wrinkle. The folks who invented the Internet want to create a Consortium For Execution of Rendezvous and Servicing Operations (CONFERS), which would serve as “a permanent, self-sustaining ‘one-stop shop’ where industry can collaborate and engage with the U.S. Government about on-orbit servicing, as well as drive the creation of the standards that future servicing providers will follow,” according to Todd Master, the DARPA program manager. “These standards would integrate data, expertise, and experience from both government and industry while protecting commercial participants’ financial and strategic interests, and provide investors, insurers, potential customers, and other stakeholders with the confidence to pursue and engage in this promising new sector.” Once up and running, DARPA plans to “transfer” CONFERS  to industry before 2021, when it expects to demonstrate RSGS capabilities in space.

As a longtime space reporter, and one who bets President Donald Trump’s administration will favor industry over government in most showdowns, look for DARPA to lose this one — unless there are factors of which I’m ignorant.

Wed, 25 Jan 2017 23:38:48 +0400
<![CDATA[Robotic Fabricator Could Change the Way Buildings Are Constructed]]>http://2045.com/news/35106.html35106A construction robot has to be powerful enough to handle heavy material, small enough to enter standard buildings, and flexible enough to navigate the terrain.

Back in the 1970s, robots revolutionized the automotive industry, performing a wide range of tasks more reliably and quickly than humans. More recently, a new generation of more gentle robots has begun to crop up on production lines in other industries. These machines are capable of more delicate, fiddly tasks like packing lettuce. This powerful new workforce is set to revolutionize manufacturing in ways that are, as yet, hard to imagine.

But the building industry is trickier than many others. Construction sites are complex environments that are constantly changing. Any robot would have to be powerful enough to handle heavy material but light and small enough to enter standard buildings and flexible enough to navigate the terrain.

That’s a big ask, but the potential benefits are huge. Construction robots would allow new types of complex structures to be assembled in situ rather than built in distant factories and then transported to the site. Indeed, these structures could even be modified in real time to allow for any unexpected changes in the environment.

So what is the state-of-the-art for construction robots?

Today we get an answer thanks to the work of Markus Giftthaler at the ETH Zurich in Switzerland and a few pals who have developed a new class of robot capable of creating novel structures on a construction site. They call their new robot the In Situ Fabricator1 and today show what it is capable of.

The In Situ Fabricator1 is designed from the bottom up to be practical. It can build stuff using a range of tools with a precision of less than five millimeters, it is designed to operate semi-autonomously in a complex changing environment, it can reach the height of a standard wall, and it can fit through ordinary doorways. And it is dust- and waterproof, runs off standard electricity, and has battery backup. On top of all this, it must be Internet-connected so that an architect can make real-time changes to any plans if necessary.

Those are a tricky set of targets but ones that the In Situ Fabricator1 largely meets. It has a set of cameras to sense its environment and powerful onboard processors for navigating and planning tasks. It also has a flexible, powerful robotic arm to position construction tools.

To show off its capabilities, Giftthaler and co have used it to build a pair of structures in an experimental construction site in Switzerland called NEST (Next Evolution is Sustainable building Technologies). The first is a double-leaf undulating brick wall that is 6.5 meters long and two meters high and made of 1,600 bricks.

Even positioning such a wall correctly on a construction site is a tricky task. In Situ Fabricator1 does this by comparing the map of the construction site it has gathered from its sensors with the architect’s plans. But even then, it must have the flexibility to allow for unforeseen problems such as uneven terrain or material sagging that changes a structure’s shape.
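The plan-versus-sensed comparison described above can be pictured as a simple registration step. A real system like In Situ Fabricator1's performs full map registration; the landmark-averaging sketch below, with invented names and coordinates, only illustrates the idea.

```python
# Illustrative sketch of the alignment idea described above: the robot compares
# landmark positions in the architect's plan against the same landmarks in its
# sensed map, then shifts planned brick positions by the average offset.
# Names and coordinates are invented; this is not the robot's actual algorithm.

def mean_offset(planned, sensed):
    """Average (dx, dy) between plan coordinates and sensed coordinates."""
    n = len(planned)
    dx = sum(s[0] - p[0] for p, s in zip(planned, sensed)) / n
    dy = sum(s[1] - p[1] for p, s in zip(planned, sensed)) / n
    return dx, dy

def corrected_target(plan_xy, planned, sensed):
    """Shift a planned brick position into the robot's sensed frame."""
    dx, dy = mean_offset(planned, sensed)
    return plan_xy[0] + dx, plan_xy[1] + dy

plan_marks = [(0.0, 0.0), (2.0, 0.0)]
seen_marks = [(0.5, 0.25), (2.5, 0.25)]
print(corrected_target((1.0, 0.5), plan_marks, seen_marks))  # (1.5, 0.75)
```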

“To fully exploit the design-related potentials of using such a robot for fabrication, it is essential to make use not only of the manipulation skills of this robot, but to also use the possibility to feed back its sensing data into the design environment,” say Giftthaler and co.

The resulting wall, in which all the bricks are positioned to within seven millimeters, is an impressive structure.

The second task was to weld wires together to form a complex, curved steel mesh that can be filled with concrete. Once again, In Situ Fabricator1’s flexibility proved crucial. One problem with welding is that the process creates tensions that can change the overall shape of the structure in unpredictable ways.  So at each stage in the construction, the robot must assess the structure and allow for any shape changes as it welds the next set of wires together. Once again, the results at NEST are impressive.

In Situ Fabricator1 is not perfect, of course. As a proof-of-principle device, it has helped Giftthaler and co identify improvements to make to the next generation of construction robot. One of these is weight: at almost 1.5 metric tons, In Situ Fabricator1 is too heavy to enter many standard buildings; 500 kilograms is the goal for future machines.

But perhaps the most significant problem is a practical limit on the strength and flexibility of robotic arms. In Situ Fabricator1 is capable of manipulating objects up to about 40 kilograms but ideally ought to be able to handle objects as heavy as 60 kilograms.

But that pushes it up against a practical limit. In Situ Fabricator1’s arm is controlled by electric motors that are incapable of handling heavier objects with the same level of precision. What’s more, electric motors are notoriously unreliable in the conditions found on construction sites, which is why most heavy machinery on these sites is hydraulic.

So Giftthaler and co are already at work on a solution. These guys have designed and built a hydraulic actuator that can control a next-generation robot arm while handling heavier objects more reliably and with the same precision. They are already using this design to build the next generation construction robot that they call In Situ Fabricator2, which should be ready by the end of this year.

All that shows significant promise for the building industry. Other groups have tested advances such as 3-D printing new buildings. But a significant limitation of 3-D printing is that the building cannot be bigger than the 3-D printer. So a robot that can construct things that are bigger than itself is a useful advance.

But there is significant work ahead. The building industry is naturally conservative. The relatively long lead time in creating new buildings (not to mention the red tape that goes with it) makes it hard for construction companies to invest in this kind of high-tech approach.

But the work of Giftthaler and co should help to overcome this and showcase the ability of robots to create entirely new forms of structure. It’ll be interesting to see if they can do for the construction industry what robots have done, and continue to do, for cars.

Ref: arxiv.org/abs/1701.03573: Mobile Robotic Fabrication at 1:1 scale: the In situ Fabricator

Tue, 24 Jan 2017 23:49:35 +0400
<![CDATA[San Francisco biohackers are wearing implants made for diabetes in the pursuit of 'human enhancement']]>http://2045.com/news/35099.html35099Paul Benigeri, a lead engineer at cognitive enhancement supplement startup Nootrobox, flexes his tricep nervously as his coworkers gather around him, phones set to record the scene. He runs his fingers over the part of the arm where Benigeri's boss, Geoff Woo, will soon stick him with a small implant.

"This is the sweet spot," Woo says.

"Oh, shit," Benigeri says, eyeing the needle.

"Paul's fine," Woo says. "K, ooooone ..."

An instrument no bigger than an inhaler lodges a needle into the back of Benigeri's arm. Woo removes his hand to reveal a white plate sitting just above the implant. Benigeri smiles.

"You are now a tagged elephant," Woo says, admiring his handiwork.

"A bionic human," says Nootrobox cofounder Michael Brandt.

In San Francisco, a growing number of entrepreneurs and biohackers are using a lesser-known medical technology called a continuous glucose monitor, or CGM, in order to learn more about how their bodies work. They wear the device under their skin for weeks at a time.

CGMs, which cropped up on the market less than 10 years ago and became popular in the last few years, are typically prescribed by doctors to patients living with diabetes types 1 and 2. They test glucose levels, or the amount of sugar in a person's blood, and send real-time results to a phone or tablet. Unlike fingerstick tests, CGMs collect data passively, painlessly, and often.

For tech workers taking a DIY approach to biology, CGMs offer a way to quantify the results of their at-home experiments around fasting, exercise, stress, and sleep.


Wed, 18 Jan 2017 11:22:26 +0400
<![CDATA[Giving rights to robots is a dangerous idea]]>http://2045.com/news/35100.html35100The EU’s legal affairs committee is walking blindfold into a swamp if it thinks that “electronic personhood” will protect society from developments in AI (Give robots ‘personhood’, say EU committee, 13 January). The analogy with corporate personhood is unfortunate, as this has not protected society in general, but allowed owners of companies to further their own interests – witness the example of the Citizens United movement in the US, where corporate personhood has been used as a tool for companies to interfere in the electoral process, on the basis that a corporation has the same right to free speech as a biological human being.

Electronic personhood will protect the interests of a few, at the expense of the many. As soon as rules of robotic personhood are published, the creators of AI devices will “adjust” their machines to take the fullest advantage of this opportunity – not because these people are evil but because that is part of the logic of any commercial activity.

Just as corporate personhood has been used in ways that its original proponents never expected, so the granting of “rights” to robots will have consequences that we cannot fully predict – to take just two admittedly futuristic examples, how could we refuse a sophisticated robot the right to participate in societal decision-making, ie to vote? And on what basis could we deny an intelligent machine the right to sit on a jury?
Paul Griseri
La Genetouze, France

Mon, 16 Jan 2017 11:24:52 +0400
<![CDATA[Bionic legs and smart slacks: exoskeletons that could enhance us all]]>http://2045.com/news/35101.html35101There are tantalising signs that as well as aiding rehabilitation, devices could soon help humans run faster and jump higher.

Wearing an £80,000 exoskeleton, Sophie Morgan is half woman, half robot.

Beneath her feet are two metal plates, and at her hand a digital display, a joystick and, somewhat alarmingly, a bright red emergency button.

As she pushes the joystick forward, the bionic legs take their first steps – a loud, industrial whirring strikes up and her right foot is raised, extended and placed forward. Her left slowly follows. As she looks up, a smile spreads across her face.

Exoskeletons, touted as devices that will allow the injured to walk, elderly people to remain independent for longer, the military to get more from soldiers and even turn all of us into mechanically enhanced humans, have captured the imagination of researchers across the world, from startups to Nasa.

For now, the most obvious – and tangible – application has involved allowing paralysed people to stand and walk. “It was a mixture of surrealism and just absolute, just the most exhilarating feeling,” says Morgan, describing her first experience of the technology four years ago.

Now 31, the artist, model and presenter of Channel 4’s 2016 Paralympic coverage was paralysed in a car accident aged 18 and has used a wheelchair ever since. The idea to try the exoskeleton, she says, came from the BBC security correspondent Frank Gardner, who uses a wheelchair after being shot while reporting from Saudi Arabia.

The exoskeleton, from Rex Bionics, offered a life-changing experience, according to Morgan. “It had been 10 years, give or take, since I had properly stood, so that was in itself quite overwhelming,” she says. The impact was far reaching. “It is not just about the joy of ‘Oh, I am standing’. It is the difference it makes, the way you feel afterwards, psychologically and physiologically – it is immeasurable.”

Returning to her wheelchair, says Morgan, is a disappointing experience. “I am walking in my dreams, so it does blur that line – that liminal space between real and dream, and reality and fantasy,” she says of the device.

The exoskeleton isn’t just about stirring excitement. As Morgan points out, there are myriad health problems associated with sitting for long periods of time. A report co-commissioned by Public Health England and published last year highlighted findings showing that, compared with those up and about the most, individuals who spend the longest time sitting are around twice as likely to develop type 2 diabetes and have a 13% higher risk of developing cancer.

Wheelchair users, adds Morgan, also face side-effects, from pressure sores to urinary tract infections. “It could be the difference between longevity and not for people like me,” she says of the exoskeleton.

The competition

About 40 of the Rex Bionics devices are currently in use worldwide, including in rehabilitation centres, says Richard Little, co-founder of the company. An engineer, Little says he was inspired to develop the system after his best friend and co-founder was diagnosed with multiple sclerosis.

But there is competition. As Little points out, the development of battery technology, processing power and components has brought a number of exoskeletons on to the market in recent years, including those from the US-based companies ReWalk and Ekso Bionics. “[They] offer a whole load of different things which are similar in some ways but different in others,” says Little. “[Ours] doesn’t use crutches,” he points out, adding that the innovation removes the risk of users inadvertently damaging their shoulders, and frees their arms.

There are tantalising signs that exoskeletons could do more than just aid rehabilitation or increase the mobility options for those who have experienced a stroke or spinal cord injury.

While the bionic legs tried by Morgan are pre-programmed, researchers have developed exoskeletons controlled by a non-invasive system linked to the brain, allowing an even wider range of wheelchair users to walk. What’s more, when combined with virtual reality and tactile feedback, the systems even appear to promote a degree of recovery for people with paraplegia.

“All our patients got some degree of neurological recovery, which has never been documented in spinal cord injury,” says Miguel Nicolelis, co-director of Duke University’s centre for neuroengineering, who led the work.

It’s a development that excites Little, whose team have also been exploring the possibility of thought control with their own device.

Yet despite their transformative capabilities, the limitations of such bulky exoskeletons have left many frustrated. Tim Swift, co-founder of the US startup Roam Robotics and one of the original researchers behind the exoskeleton from Ekso Bionics, is one of them.

“It is a 50lb machine that costs $100,000 and has a half-mile-an-hour speed and can’t turn,” he says of his former work. “There are only so many applications where that makes sense. This is not a shift towards consumer, this is a hunt for somewhere we can actually use the technologies we are making.”

The dream, says Swift, is to create affordable devices that could turn us all into superhumans, augmenting our abilities by merging the biological with state of the art devices to unleash a new, improved, wave of soldiers, workers, agile pensioners and even everyday hikers. But in devising the underpinning technology, he says it is time to ditch the motors and metal approach that he himself pioneered.

While hefty, rigid devices can support someone with paraplegia, says Swift, such exoskeletons are too heavy and costly for wider applications – such as helping a runner go faster. The fundamental challenge, he adds, is to create a device that remains powerful while keeping the weight down. “I think you have two solutions,” he says. The first is to develop a new, lightweight system that efficiently uses battery energy to generate movement. The second, he says, is to stick with metals and motors but be more intelligent in how you use them.

Swift’s answer is based on the former – but it hasn’t received universal acclaim. “I have spent the last two and a half years literally getting laughed out of conferences when I tell people we are going to make inflated exoskeletons,” he says. “People think it is a running joke.”

But Swift is adamant that to produce a system that can be used in myriad ways to augment humans, be it on the building site, in the home or up a mountain, technologists must innovate. And air, he believes, is the way to do it. The result, so far, is a series of proof-of-concept devices, braces that look a little like padded shin-guards, that can be strapped on to arms or legs.

“The fundamentals allow you to have extremely lightweight structures [and] extremely low cost because everything is basically plastics and fabrics as opposed to precision machined metals,” he says. And there is another boon. “Because you can make something that is very lightweight without sacrificing power, you are actually increasing the power density, which creates these opportunities to do highly dynamic behaviours.”

In other words, according to Swift, exoskeletons made of inflated fabric could not only boost a human’s walking abilities, but also help them run, jump or even climb. “When I say I want someone to go into Footlocker and buy a shoe that makes them run 25% faster – [we are] actively looking at things that look like that,” he says.

Others agree with Swift about the need to reduce the clunkiness of exoskeletons, but take a different approach.

Augmenting humans

Hugh Herr is a rock climber, engineer and head of the biomechatronics research group at MIT. A double amputee, the result of a climbing accident on Mount Washington, Herr has pioneered the development of bionic limbs, inventing his own in the process. But it was in 2014 that his team became the first to make an all-important breakthrough: creating a powered, autonomous exoskeleton that could reduce the energy it took a human to walk.

“No one is going to want to wear an exoskeleton if it is a fancy exercise machine, if it makes you sweat more and work harder, what is the point?” says Herr. “My view is if an exoskeleton fails to reduce metabolism, one needs to start over and go back to the drawing board.”

To boost our bodies, says Herr, it is necessary to break the challenge down. “We are taking a first principle approach, and joint by joint understanding deeply what has to be done scientifically and technologically to augment a human,” he says. 

For Herr the future is not inflatables (“pneumatics tend to be very inefficient,” he says) but minimalistic, stripping away the mass of conventional exoskeletons so that the device augments, rather than weighs down, the wearer. “If you separated the device from the human, it can’t even uphold its own weight,” he says. 

The approach, he adds, was to focus on the area of the body with the biggest influence on walking. “Arguably the most important muscle to bipedal human gait is the calf muscle,” he says. “So we said in a minimalist design [with] minimal weight and mass, one arguably should build an artificial calf muscle.”

Boasting sensors for position, speed and force for feedback, and programmed to move and respond in a natural way, the device drives the foot forward, saving the wearer energy on each step. “Our artificial calf muscle pushes the human in just the right time in the gait cycle where the human is most inefficient and after that period gets out of the way completely,” he says.

Herr isn’t alone in focusing on such minimalist ankle-based devices. Among other pioneers is Conor Walsh at Harvard University, who has created similar exoskeletons to help stroke patients walk. The devices are a million miles from the cumbersome bionic legs with which Morgan walked across the office, but then Herr believes the future for exoskeletons lies firmly with the augmented human.

“In the future when a person is paralysed, they won’t use an exoskeleton. The reason is we are going to understand how to repair tissues,” he says. “The only time to use an exoskeleton is if you want to go beyond what the muscles are capable of, beyond innate physicality.”

In Bristol, Jonathan Rossiter is hoping to do just that with an even bolder approach: smart materials. “Fabrics and textiles and rubbers is a really good description of the things we are looking at,” he says. Professor of robotics at Bristol University and head of the Soft Robotics group at Bristol Robotics Laboratory, Rossiter believes exoskeletons of the future will look more like a pair of trousers. “Making them look like second skins and actually behave like second skins is going to happen,” he says.

The technology behind it, says Rossiter, will be hi-tech materials: rubbers that bend when electricity is applied, or fabrics that move in response to light, for example. “We build up from the materials to the mechanisms,” he says.

Conscious of an ageing population, Rossiter believes a pair of smart trousers will prove invaluable in keeping people independent for longer, from helping them out of chairs to allowing them to walk that bit further. But he too sees them becoming popular gadgets, helping hikers clamber up mountains.

There is, however, a hitch. Scaling up smart materials from the tiny dimensions explored in the lab to a full-blown set of slacks is no small feat. “You are taking something which is [a] nanomaterial. You have to fabricate it so that it layers up nicely, it doesn’t have any errors in it, it doesn’t have any fractures or anything else and see if you can transpose that into something you can wear,” says Rossiter. In short, it will be a few seasons yet before your wardrobe will be boasting some seriously smart legwear.

But as technology marches on, the dream gets closer to reality. Herr, for one, believes commercial devices are a hop, skip and a jump away – arriving within the next two decades.

“Imagine if you had leg exoskeletons where you could traverse across very, very irregular natural surfaces, natural terrains with a dramatically reduced metabolism and an increased speed while you are jumping over logs and hopping from rock to rock, going up and down mountains,” he says, conjuring up a scene of a bionic, human gazelle.

“When that device exists in the world, no one will ever use the mountain bike again.”

Tue, 10 Jan 2017 11:30:51 +0400
<![CDATA[This CES 2017 robot can be controlled by one hand]]>http://2045.com/news/35094.html35094Earlier at CES, we saw the Lego Boost announced -- a kit that lets you build and control Lego robots. Ziro is a similar kit, by the company ZeroUI, but it lets you build robots out of any material and control them with a smart glove.

Ziro has three parts to it: a motorized module, a wireless glove to control that module and an app to animate/program modules. The idea is that you build the modules into your robot. You program those modules with the Ziro app. And you remote control your creation using a smart glove worn on one hand.

Ziro is aimed at kids and their creativity, ZeroUI CEO Raja Jasti told me at CES. He said he wants to empower kids to create and design robots out of anything -- emphasizing the use of eco-friendly materials over plastic.

Jasti's passion is matched by the fun of seeing someone control a robot with just their hand. In a demonstration, a man wearing the Ziro smart glove moved his hand slightly forward. At the same time, a robot (that looked like a famous droid from a large movie franchise) moved forward. Then, the man twisted his hand in a circular motion. The robot spun in a circle.

Jasti said that they have already gotten Ziro kits into some schools, but the kit can also be used at home. Ziro could be this generation's Erector Set.

The Ziro starter kit includes a smart glove, two modules and parts for a trike assembly base. Ziro is available to preorder for $150 (which converts to £120 and AU$200) and will be available in the spring of 2017.

Sat, 7 Jan 2017 11:38:16 +0400
<![CDATA['Caterpillar' Robot Wriggles to Get Around]]>http://2045.com/news/35093.html35093A soft, caterpillar-like robot might one day climb trees to monitor the environment, a new study finds.

Traditionally, robots have usually been made from rigid parts, which make them susceptible to harm from bumps, scrapes, twists and falls. These hard parts can also keep them from being able to wriggle past obstacles.

Increasingly, scientists are building robots that are made of soft, bendable plastic and rubber. These soft robots, with designs that are often inspired by octopuses, starfish, worms and other real-life boneless creatures, are generally more resistant to damage and can squirm past many of the obstacles that impair hard robots, the researchers said.

"I believe that this kind of robot is very suitable for our living environment, since the softness of the body can guarantee our safety when we are interacting with the robots," said lead study author Takuya Umedachi, now a project lecturer in the Graduate School of Information Science and Technology at the University of Tokyo.

However, soft materials easily deform into complex shapes that make them difficult to control when conventional robotics techniques are used, according to Umedachi and his colleagues. Modeling and predicting such activity currently requires vast amounts of computation because of the many and unpredictable ways in which such robots can move, the researchers said.

To figure out better ways to control soft robots, Umedachi and his colleagues analyzed the caterpillars of the tobacco hornworm Manduca sexta, hoping to learn how these animals coordinate their motions without a hard skeleton. Over millions of years, caterpillars have evolved to move in complex ways without using massive, complex brains.

The scientists reasoned that caterpillars do not rely on a control center like the brain to steer their bodies, because they only have a small number of neurons. Instead, the scientists suggest that caterpillars might control their bodies in a more decentralized manner. Their model demonstrates their theory that sensory neurons embedded in soft tissues relay data to groups of muscles that can then help caterpillars move in a concerted manner.

The scientists developed a caterpillar-like soft robot that was inspired by their animal model. They attached sensors to the robot, which has a soft body that can deform as it interacts with its environment, such as when it experiences friction from the surface on which it walks. This data was fed into a computer that controlled the robot's motors, and the motors could, in turn, contract the robot body's four segments.

The researchers found that they could use this sensory data to guide the robot's inching and crawling motions with very little in the way of guidance mechanisms. "We believe that the softness of the body can be crucial when designing intelligent behaviors of a robot," Umedachi told Live Science.
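The decentralized scheme described above can be sketched in a few lines. This is an illustrative toy, not the researchers' code: each segment reacts only to its own local strain sensor (the `Segment` class, `threshold` value and `crawl` helper are all hypothetical names chosen for the example), so a strain wave travelling along the body produces a matching contraction wave with no central controller.

```python
# Toy sketch of decentralized caterpillar-style control: each segment
# decides to contract based only on its own local sensor reading.

class Segment:
    def __init__(self, threshold=0.5):
        self.threshold = threshold  # local strain level that triggers contraction
        self.contracted = False

    def step(self, local_strain):
        # Contract when locally sensed deformation (e.g. from surface
        # friction) exceeds the threshold; relax otherwise.
        self.contracted = local_strain > self.threshold
        return self.contracted

def crawl(strain_readings):
    """One control tick: every segment reacts only to its own sensor."""
    segments = [Segment() for _ in strain_readings]
    return [seg.step(s) for seg, s in zip(segments, strain_readings)]

# A strain wave moving down the body yields a matching contraction wave.
print(crawl([0.9, 0.6, 0.3, 0.1]))  # [True, True, False, False]
```

The point of the sketch is that coordination emerges from local sensing plus body mechanics, rather than from a model of the whole robot, which is why the real system needs so little central guidance.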

"I would like to build a real, caterpillar-like robot that can move around on branches of trees," Umedachi said. "You can put temperature and humidity sensors and cameras on the caterpillar-like robots to use such spaces."

The scientists detailed their findings online Dec. 7 in the journal Open Science.

Sat, 7 Jan 2017 11:34:33 +0400
<![CDATA[Meet Kuri, Another Friendly Robot for Your Home]]>http://2045.com/news/35095.html35095Mayfield Robotics set out to build an approachable robot on wheels for surveillance and entertainment. Will anyone buy it?

Inside the Silicon Valley office of Mayfield Robotics, Kuri looks up at me and squints as if in a smile. Then the robot rolls across the floor, emitting a few R2-D2-like beeps.

Mayfield Robotics, which spun out of the research branch of Bosch, built Kuri as the next step in home robotics. It enters an increasingly crowded field: joining smart-home devices like Amazon’s Alexa and Google Home are robots like Jibo, Pepper, and Buddy, ready to offer companionship and entertainment (see “Personal Robots: Artificial Friends with Limited Benefits”).

Kaijen Hsiao, CTO of Mayfield Robotics, says Kuri was built to focus on doing a few things very well, and its personality will be what sets it apart. The 20-inch-tall robot is essentially an Amazon Alexa on wheels, letting users play music or control their smart devices from anywhere in the home. It can also live-stream video of your home for surveillance purposes.

Kuri is currently available for pre-order for $699 and is expected to ship to buyers by the end of the year. Mayfield is beginning to manufacture the robot now but will spend the year fleshing out the software side.

While people are at home, Kuri’s mission is to provide entertainment, whether that’s playing music or a podcast or reading a story out loud. It can autonomously follow users from room to room as it performs these tasks. Through a website called IFTTT, users can also set up custom commands for specific actions.

Kuri promises to keep working for you when you’re not home, too. Behind one of Kuri’s eyes is a 1080p camera, and users can access a live stream from the Kuri app. The video function can be used to check on a pet or make sure no intruders are present. Microphones embedded in the robot can detect unusual sounds, prompting the robot to roll in that direction and investigate. Or users can remotely pilot the robot to a specific area. The company says Kuri has “hours of battery life” and drives itself to its dock when it needs to charge.

Mayfield built this robot to perform all these tasks with personality. Kuri comes across as lovable but simple, so there’s no reason to expect it to do more than simple jobs. “He talks robot. He talks in bleeps and bloops,” Hsiao says. “It makes him endearing, but it also sets expectations appropriately.”

But will that be enough to make people want Kuri? In 2017, there will be a range of home robots that use artificial personality, says Andra Keay, the founder of Robot Launchpad and managing director of Silicon Valley Robotics.

“However, I believe that there is going to be a limit to the number of personalities we will want to have in our houses,” Keay says. “So the race is on to create not just engagement but loyalty. That’s a real challenge.” 

Thu, 5 Jan 2017 11:39:47 +0400
<![CDATA[Languages still a major barrier to global science, new research finds]]>http://2045.com/news/35092.html35092English is now considered the common language, or 'lingua franca', of global science. All major scientific journals seemingly publish in English, despite the fact that their pages contain research from across the globe.

However, a new study suggests that over a third of new scientific reports are published in languages other than English, which can result in these findings being overlooked - contributing to biases in our understanding.

As well as the international community missing important science, language hinders new findings getting through to practitioners in the field say researchers from the University of Cambridge.

They argue that whenever science is only published in one language, including solely in English, barriers to the transfer of knowledge are created.

The Cambridge researchers call on scientific journals to publish basic summaries of a study's key findings in multiple languages, and universities and funding bodies to encourage translations as part of their 'outreach' evaluation criteria.

"While we recognise the importance of a lingua franca, and the contribution of English to science, the scientific community should not assume that all important information is published in English," says Dr Tatsuya Amano from Cambridge's Department of Zoology.

"Language barriers continue to impede the global compilation and application of scientific knowledge."

The researchers point out an imbalance in knowledge transfer in countries where English is not the mother tongue: "much scientific knowledge that has originated there and elsewhere is available only in English and not in their local languages."

This is a particular problem in subjects where both local expertise and implementation is vital - such as environmental sciences.

As part of the study, published today in the journal PLOS Biology, those in charge of Spain's protected natural areas were surveyed. Over half the respondents identified language as an obstacle to using the latest science for habitat management.

The Cambridge team also conducted a litmus test of language use in science. They surveyed the web platform Google Scholar - one of the largest public repositories of scientific documents - in a total of 16 languages for studies relating to biodiversity conservation published during a single year, 2014.

Of the over 75,000 documents, including journal articles, books and theses, some 35.6% were not in English. Of these, the majority was in Spanish (12.6%) or Portuguese (10.3%). Simplified Chinese made up 6%, and 3% were in French.

The researchers also found thousands of newly published conservation science documents in other languages, including several hundred each in Italian, German, Japanese, Korean and Swedish.

Random sampling showed that, on average, only around half of non-English documents also included titles or abstracts in English. This means that around 13,000 documents on conservation science published in 2014 are unsearchable using English keywords.
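The "around 13,000" figure follows directly from the proportions the article reports; a quick back-of-envelope check (using the rounded inputs given above, since the study's exact counts aren't quoted here):

```python
# Back-of-envelope check of the article's figures (rounded inputs).
total_docs = 75_000          # "over 75,000 documents" surveyed on Google Scholar
non_english_share = 0.356    # 35.6% were not in English
no_english_abstract = 0.5    # "only around half" of those had English titles/abstracts

unsearchable = total_docs * non_english_share * (1 - no_english_abstract)
print(round(unsearchable))   # 13350 — consistent with "around 13,000"
```

With the stated rounding, the estimate lands at roughly 13,350 documents unsearchable by English keywords, matching the article's figure.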

This can result in sweeps of current scientific knowledge - known as 'systematic reviews' - being biased towards evidence published in English, say the researchers. This, in turn, may lead to over-representation of results considered positive or 'statistically significant', and these are more likely to appear in English language journals deemed 'high-impact'.

In addition, information on areas specific to countries where English is not the mother tongue can be overlooked when searching only in English.

For environmental science, this means important knowledge relating to local species, habitats and ecosystems - but also applies to diseases and medical sciences. For example, documents reporting the infection of pigs with avian flu in China initially went unnoticed by international communities, including the WHO and the UN, due to publication in Chinese-language journals.

"Scientific knowledge generated in the field by non-native English speakers is inevitably under-represented, particularly in the dominant English-language academic journals. This potentially renders local and indigenous knowledge unavailable in English," says lead author Amano.

"The real problem of language barriers in science is that few people have tried to solve it. Native English speakers tend to assume that all the important information is available in English. But this is not true, as we show in our study.

"On the other hand, non-native English speakers, like myself, tend to think carrying out research in English is the first priority, often ending up ignoring non-English science and its communication.

"I believe the scientific community needs to start seriously tackling this issue."

Amano and colleagues say that, when conducting systematic reviews or developing databases at a global scale, speakers of a wide range of languages should be included in the discussion: "at least Spanish, Portuguese, Chinese and French, which, in theory, cover the vast majority of non-English scientific documents."

The website conservationevidence.com, a repository for conservation science developed at Cambridge by some of the authors, has also established an international panel to extract the best non-English language papers, including Portuguese, Spanish and Chinese.

"Journals, funders, authors and institutions should be encouraged to supply translations of a summary of a scientific publication - regardless of the language it is originally published in," says Amano. The authors of the new study have provided a summary in Spanish, Portuguese, Chinese and French as well as Japanese.

"While outreach activities have recently been advocated in science, it is rare for such activities to involve communication across language barriers."

The researchers suggest efforts to translate should be evaluated in a similar way to other outreach activities such as public engagement, particularly if the science covers issues at a global scale or regions where English is not the mother tongue.

Adds Amano: "We should see this as an opportunity as well as a challenge. Overcoming language barriers can help us achieve less biased knowledge and enhance the application of science globally."

Thu, 29 Dec 2016 11:31:32 +0400
<![CDATA[Seven robots you need to know. Pointing the way to an android future]]>http://2045.com/news/35084.html35084Walking. Grasping an object. Empathising. Some of the hardest problems in robotics involve trying to replicate things that humans do easily. The goal? Creating a general purpose robot (think C-3PO from Star Wars) rather than specialised industrial machines. Here are seven existing robots that point the way towards the humanoid robots of the future.


Use: Originally built for Darpa Robotics Challenge
Made by: Boston Dynamics
What it tries to do: Achieve human-like balance and locomotion using deep learning, a form of artificial intelligence.

“Our long-term goal is to make robots that have mobility, dexterity, perception and intelligence comparable to humans and animals, or perhaps exceeding them; this robot is a step along the way.”​


• 1.7m tall and weighs 82kg
• Can walk on two feet and get back up if it falls down 
Human equivalent: Legs/skeleton/musculature


Use: Military. Part of Darpa’s Warrior Web project
Made by: SRI Robotics
What it tries to do: A suit that makes the wearer stronger and helps prevent injury

Superflex is a type of ‘soft’ robot, which can mould itself to the environment or a human body in a way that typical robots can’t. The goal is to make machines that feel and behave more like biological than mechanical systems, and give additional powers to the wearer.

• Battery-powered compressive suit weighs seven pounds 
• Faux ‘muscles’ can withstand 250lb of force
Human equivalent: Musculature

Photo: SRI International

Amazon Echo

Use: Voice-controlled speaker 
Made by: Amazon
What it tries to do: Lets you control devices by talking to them

It may not have any moving parts, but Amazon’s Echo – and Alexa, the digital assistant that lives inside it – is definitely trying to solve one of the central problems in robotics: how to create robots that can recognise human speech and provide natural voice responses.

You can tell Alexa to:
• Control your light switches
• Give you the latest sports scores
• Help tune your guitar
Human equivalent: Voice and ears

Life-like humanoids

Use: Natural interactions
Made by: Hiroshi Ishiguro Laboratories
What they try to do: Create a sense of ‘presence’, or sonzai-kan in Japanese, by making robots that look identical to humans

“Our goal is to realise an advanced robot close to humankind and, at the same time, the quest for the basis of human nature.”

Geminoid-F photo: Getty, video: Hiroshi Ishiguro Laboratories.


Use: Day-to-day companion, and customer assistant
Made by: SoftBank
What it tries to do: Recognise and respond to human emotions

While Pepper clearly looks like a robot rather than a human, it uses its body movement and tone of voice to communicate in a way designed to feel natural and intuitive.

Human equivalent: Feelings and emotions

Photo: Getty

Robo Brain

Use: Knowledge base for robots
Made by: Cornell University
What it tries to do: Accumulate all robotics-related information into an interconnected knowledge base similar to the memory and knowledge you hold in your brain.

The human brain is such a complex organ that it would be extremely difficult to create an artificial replica that sits inside a robot. But what if robots’ ‘brains’ could exist, disembodied in the cloud? Robo Brain hopes to achieve just that.
Researchers hope to integrate 100,000 data sources into the database.

Challenges: Understanding and juggling different types of data

Google Car

Use: Self-driving car
Made by: Google
What it tries to do: Group learning and real-time co-ordination

The true ambition behind Google’s automotive efforts is not just to make a car that can drive itself. Instead, it’s to use group learning to strengthen artificial intelligence, so that if one Google car makes a mistake and has an accident, all Google cars will learn from it. This involves managing large-scale, real-time co-ordination.

Photos: FT Graphic/Getty/Dreamstime

Sat, 24 Dec 2016 23:40:32 +0400
<![CDATA[The '2016 Robot Revolution' and all the insane new things that robots have done this year]]>http://2045.com/news/35083.html35083Robots are useful for all kinds of things: building cars, recycling old electronics, having sex with - the list goes on and on.

And 2016 has been a big year for our cyber companions as they've evolved in ways we couldn't have imagined in 2015.

Robots have taken up jobs for the first time and even stepped in to save people from parking tickets.

We've compiled the above video to show you some of the highlights of 2016 and get you either excited or terrified for what the future holds.

"The pattern for the next 10-15 years will be various companies looking towards consciousness," noted futurologist Dr. Ian Pearson told Mirror Online.

"The idea behind it is that if you make a machine with emotions it will be easier for people to get on with.

"[But] There is absolutely no reason to assume that a super-smart machine will be hostile to us."

Whether it's artificial intelligence, the singularity or just more celebrity sex dolls, there's certainly going to be a lot to talk about when we all meet back here in December 2017.

Fri, 23 Dec 2016 23:30:25 +0400
<![CDATA[Good news! You probably won’t be killed by a sex robot]]>http://2045.com/news/35082.html35082After spending a fascinating two days at the International Congress on Love and Sex with Robots, where academics discussed everything from robot design to the ethics of programming lovers, I was surprised to learn from Gizmodo that “sex robots may literally f**k us to death.”

How, I wondered, could these otherwise thoughtful researchers allow humanity to walk into such a dystopian nightmare?

Quite rightly, they won’t. That headline was in fact inspired by a discussion on the ethics of artificial intelligence by Prof. Oliver Bendel, who outlined some of the broad implications of creating machines which can “think” – including how we make sure robots make good moral decisions and don’t end up causing humans harm. Far from “warning” of the dangers of oversexed robots, Bendel was actually trying to ensure that they don’t “f**k us to death”. So while I might personally fantasise about the future headlines like “Woman, 102, Sexed To Death By Robot Boyfriend”, it’s unlikely that I’ll kick the bucket with such panache. Thanks to Bendel, and others who are exploring these questions as artificial intelligence develops, sex robots will likely have a built-in kill switch (or “kill the mood” switch) to prevent anyone from being trapped in a nightmare sex marathon with a never-tiring machine.

Reporting on events like the sex robots conference is notoriously tricky. On the one hand, sex robots are guaranteed to grab the attention of anyone looking for something to distract them from their otherwise robot-less lives, so an article is guaranteed to be a hit. On the other hand, academics are notoriously careful in what they say, so quite rightly you’re unlikely to find one who’ll actually screech warnings about imminent death at the hands (or genitals) of a love machine.

But no one wants to click a Twitter link that says “Academic Research Revealed To Be More Complicated Than We Can Cram Into 20 Words.” Hence Gizmodo’s terrifying headline, and other pieces which picked an interesting observation, then sold it to readers with something more juicy than the title in the conference schedule. The Register went with “Non-existent sex robots already burning holes in men’s pockets” in reference to a paper presented by Jessica Szczuka, in which men were quizzed about their possible intentions to buy a sex robot. The Daily Mail chose to highlight the data issues which arise from intimate connections with machines by telling us “Sex Robots Could Reveal Your Secret Perversions!”

They’re blunt tools, but they get people interested, and hopefully encourage people to read further into issues they might not previously have considered. For example, during her keynote talk, Dr Kate Devlin mentioned a robot which hit the headlines last year because it “looked like Scarlett Johansson”. She posed an ethical question for makers of realistic bots and dolls: how do you get permission from the person whose likeness you’re using? Alternatively: “Celebrities Could Sue Over Sex Robot Doppelgangers!”

Dr Devlin also questioned why research into care robots for elderly people doesn’t also include meeting their sexual needs (“Academic Demands Sex Toys For Pensioners”) and pointed out that while more established parts of the sex industry tend to be male-dominated, in the sex tech field pioneering women are leading the way (“Are Women The Future Of The Sex Industry?”).

Julie Wosk – professor of art history and author of “My Fair Ladies: Female Robots, Androids and Other Artificial Eves” – explored pop culture representations of sex robots, from Ex Machina’s Ava to Good Girl’s brothel-owned learning sex bot. Sex robots are most commonly female, beautiful and subservient, and Wosk pointed out that in pop culture they also have a tendency to rebel. Westworld, Humans, Ex Machina – all include strong, often terrifying, female robots who gain consciousness, and could be seen as a manifestation of society’s fears of women gaining power. Put a sub editor’s hat on and voila: “Is Feminism To Blame For Our Fear of Sex Robots?”

Dr Lynne Hall focused on user experience – while sex robots are often portrayed as humanoid, in fact a robot that pleasures you may be more akin to something you strap to your body while you watch porn. She went on to point out that porn made with one or more robotic actors has a number of interesting benefits such as a lower risk of STI transmission, and perhaps better performer safety, as robot actors replace potentially predatory porn actors (“Sex Robots Will Revolutionise Porn!”). David Levy, author of “Love and Sex with Robots”, gave a controversial keynote on the implications of robot consciousness when it comes to relationships: “Humans Will Marry Robots By 2050.”

In other presentations, designers and engineers showed off the real-life robots they had built. Cristina Portalès introduced us to ‘ROMOT’ – a robotic theatre which combines moving seats, smells, virtual reality and more to create a uniquely intense experience. But while the ROMOT team have no plans to turn it into a sex show, Cristina outlined how it could be used to enhance sexual experiences - using porn videos and sex scents to create a wholly X-rated experience. Or, if you prefer: ‘Immersive Sex Theatre Could Be The Future Of Swinging.’ Other designers showed off projects designed to increase human intimacy over a long distance – like ‘Kissinger’ (‘Remarkable Gadget Helps You Smooch A Lover Over The Internet’) and ‘Teletongue’ (‘With X-Rated Lollipop You Can Make Sweet Love At A Distance’).

You get the idea. If we had a classification system for science reporting, all these headlines would be flagged to let the user know that the actual story is far more complicated. But they’d also probably languish unclicked, meaning similar research is less likely to get covered in the future.

Towards the end of the conference one of the Q+A sessions moved into the area of science and tech communication. Inevitably, with so many journalists in the room, there was an uneasiness from some academics about the way in which the conference would be covered. As someone with a bee in my bonnet about the way sex is often reported in the mainstream media, I think this wariness is often justified. But while my initial reaction to Gizmodo’s headline was to roll my eyes, their presence – and that of other journalists – made the overall topic of robotic relationships and intimacy much more accessible to the public. There have been one or two swiftly-corrected inaccuracies, but the press presence means that what could otherwise have been a small conference just for academics has sparked debate around the world. 

Thu, 22 Dec 2016 23:26:34 +0400
<![CDATA[We will soon be able to read minds and share our thoughts]]>http://2045.com/news/35085.html35085The first true brain-to-brain communication in people could start next year, thanks to huge recent advances.

Early attempts won’t quite resemble telepathy as we often imagine it. Our brains work in unique ways, and the way each of us thinks about a concept is influenced by our experiences and memories. This results in different patterns of brain activity, but if neuroscientists can learn one individual’s patterns, they may be able to trigger certain thoughts in that person’s brain. In theory, they could then use someone else’s brain activity to trigger these thoughts.

“You could detect certain thought processes and use them to influence other people’s decisions” 

So far, researchers have managed to get two people, sitting in different rooms, to play a game of 20 questions on a computer. The participants transmitted “yes” or “no” answers, thanks to EEG caps that monitored brain activity, with a technique called transcranial magnetic stimulation triggering an electrical current in the other person’s brain. By pushing this further, it may be possible to detect certain thought processes, and use them to influence those of another person, including the decisions they make.
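The transmission step described above is essentially a one-bit brain-computer interface. As a toy illustration of the decoding side (not a description of the actual study's code), the sketch below assumes the answering participant signals "yes" or "no" by attending to stimuli flickering at two different frequencies – a common EEG paradigm – and classifies a one-second window by comparing the signal power at those two frequencies using the Goertzel algorithm. The sampling rate, frequencies and synthetic signal are all invented for illustration.

```python
import math

def goertzel_power(samples, fs, target_hz):
    """Signal power at a single frequency (Goertzel algorithm, stdlib only)."""
    n = len(samples)
    k = round(n * target_hz / fs)                 # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s1 = s2 = 0.0
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1          # Goertzel recurrence
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def decode_answer(eeg_window, fs=256, yes_hz=13.0, no_hz=12.0):
    """Classify one EEG window as 'yes' or 'no' by comparing power
    at the two hypothetical stimulus frequencies."""
    p_yes = goertzel_power(eeg_window, fs, yes_hz)
    p_no = goertzel_power(eeg_window, fs, no_hz)
    return "yes" if p_yes > p_no else "no"

# Synthetic 1-second window: the respondent attends the 13 Hz ("yes") stimulus.
fs = 256
window = [math.sin(2 * math.pi * 13.0 * i / fs) for i in range(fs)]
print(decode_answer(window, fs))  # -> yes
```

In the real experiments the detected bit is then delivered to the other player via transcranial magnetic stimulation; the decoding above only covers the reading half of the link.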

Another approach is for the brain activity of several individuals to be brought together on a single electronic device. This has been done in animals already. Three monkeys with brain implants have learned to think together, cooperating to control and move a robotic arm.

Similar work has been done in rats, connecting their brains in a “brainet”. The next step is to develop a human equivalent that doesn’t require invasive surgery. Such a system might use EEG caps instead, and its first users will probably be people who are paralysed. Hooking up a brainet to a robotic suit, for example, could enable them to get help from someone else when learning to use exoskeletons to regain movement.

This article appeared in print under the headline “Mind-reading fuses thoughts”

Wed, 14 Dec 2016 23:43:54 +0400
<![CDATA[Phantom movements in augmented reality help patients with chronic intractable phantom limb pain]]>http://2045.com/news/35079.html35079Dr Max Ortiz Catalan at Chalmers University of Technology, the Department of Signals and Systems, has developed a novel method of treating phantom limb pain using machine learning and augmented reality. This approach has been tested on over a dozen amputees with chronic phantom limb pain who had found no relief from other clinically available methods. The new treatment reduced their pain by approximately 50 per cent, reports a clinical study published in The Lancet.

People who lose an arm or leg often experience phantom limb pain, as if the missing limb were still there. Phantom limb pain can become a serious chronic condition that significantly reduces the patients’ quality of life. It is still unclear why phantom limb pain and other phantom sensations occur.

Several medical and non-medical treatments have been proposed to alleviate phantom limb pain. Examples include mirror therapy, various types of medications, acupuncture, and implantable nerve stimulators. However, in many cases nothing helps. This was the situation for the 14 arm amputees who took part in the first clinical trial of a new treatment, invented by Chalmers researcher Max Ortiz Catalan, and further developed with his multidisciplinary team in the past years.

“We selected the most difficult cases from several clinics,” Dr Ortiz Catalan says. “We wanted to focus on patients with chronic phantom limb pain who had not responded to any treatments. Four of the patients were constantly medicated, and the others were not receiving any treatment at all because nothing they tried had helped them. They had been experiencing phantom limb pain for an average of 10 years.”

The patients were treated with the new method for 12 sessions. At the last session the intensity, frequency, and quality of pain had decreased by approximately 50 per cent. The intrusion of pain into sleep and activities of daily living was also reduced by half. In addition, two of the four patients who were on analgesics were able to reduce their doses, by 81 per cent and 33 per cent respectively.

“The results are very encouraging, especially considering that these patients had tried up to four different treatment methods in the past with no satisfactory results,” Ortiz Catalan says. “In our study, we also saw that the pain continuously decreased all the way through to the last treatment. The fact that the pain reduction did not plateau suggests that further improvement could be achieved with more sessions.”

Ortiz Catalan calls the new method phantom motor execution. It consists of using muscle signals from the amputated limb to control augmented and virtual environments. Electric signals in the muscles are picked up by electrodes on the skin. Artificial intelligence algorithms translate the signals into movements of a virtual arm in real time. The patients see themselves on a screen with the virtual arm in the place of the missing arm, and they can control it as they would control their biological arm.

Thus, the perceived phantom arm is brought to life by a virtual representation that the patient can see and control. This allows the patient to reactivate areas of the brain that were used to move the arm before it was amputated, which might be the reason that the phantom limb pain decreases. No other existing treatment for phantom limb pain is known to reactivate these areas of the brain so reliably. The research led by Ortiz Catalan not only creates new opportunities for clinical treatment, but it also contributes to our understanding of what happens in the brain when phantom pain occurs.
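The article does not specify which algorithms translate the muscle signals into movement, so the sketch below uses deliberately simple stand-ins: per-channel root-mean-square amplitude as the feature, and a nearest-centroid rule as the classifier. The two-channel signals, movement labels and amplitudes are all synthetic, invented purely to show the shape of such a pipeline.

```python
import math
import random

def rms_features(window):
    """Root-mean-square amplitude of each electrode channel in one time window."""
    return [math.sqrt(sum(x * x for x in ch) / len(ch)) for ch in window]

class NearestCentroid:
    """Tiny stand-in classifier: one mean feature vector per movement class."""
    def fit(self, feats, labels):
        sums, counts = {}, {}
        for f, y in zip(feats, labels):
            acc = sums.setdefault(y, [0.0] * len(f))
            for i, v in enumerate(f):
                acc[i] += v
            counts[y] = counts.get(y, 0) + 1
        self.centroids = {y: [v / counts[y] for v in acc] for y, acc in sums.items()}
        return self

    def predict(self, f):
        sq = lambda c: sum((a - b) ** 2 for a, b in zip(f, c))
        return min(self.centroids, key=lambda y: sq(self.centroids[y]))

# Fake two-channel surface-EMG windows: each intended movement activates
# a different muscle, so a different channel dominates.
random.seed(0)
def fake_window(amps, n=200):
    return [[random.gauss(0.0, a) for _ in range(n)] for a in amps]

train = [(fake_window([1.0, 0.2]), "open hand") for _ in range(20)] + \
        [(fake_window([0.2, 1.0]), "close hand") for _ in range(20)]
clf = NearestCentroid().fit([rms_features(w) for w, _ in train],
                            [y for _, y in train])
print(clf.predict(rms_features(fake_window([1.0, 0.2]))))  # -> open hand
```

In a real system the predicted label would drive the virtual arm on screen frame by frame; the clinical system presumably uses richer features and classifiers than this toy version.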

The clinical trial was conducted in collaboration with Sahlgrenska University Hospital in Gothenburg, Örebro University Hospital in Örebro, Bräcke Diakoni Rehabcenter Sfären in Stockholm, all in Sweden, and the University Rehabilitation Institute in Ljubljana, Slovenia.

“Our joint project was incredibly rewarding, and we now intend to go further with a larger controlled clinical trial,” Ortiz Catalan says. “The control group will be treated with one of the current treatment methods for phantom limb pain. This time we will also include leg amputees. More than 30 patients from several different countries will participate, and we will offer more treatment sessions to see if we can make the pain go away completely.”

The technology for phantom motor execution is available in two modalities – an open source research platform, and a clinically friendly version in the process of being commercialised by the Gothenburg-based company Integrum. The researchers believe that this technology could also be used for other patient groups who need to rehabilitate their movement capability, for example after a stroke, nerve damage or hand injury.

Sat, 3 Dec 2016 19:25:59 +0400
<![CDATA[A new minimally invasive device to treat cancer and other illnesses]]>http://2045.com/news/35081.html35081A new study by Lyle Hood, assistant professor of mechanical engineering at The University of Texas at San Antonio (UTSA), describes a new device that could revolutionize the delivery of medicine to treat cancer as well as a host of other diseases and ailments (Journal of Biomedical Nanotechnology, "Nanochannel Implants for Minimally-Invasive Insertion and Intratumoral Delivery"). Hood developed the device in partnership with Alessandro Grattoni, chair of the Department of Nanomedicine at Houston Methodist Research Institute.

"The problem with most drug-delivery systems is that you have a specific minimum dosage of medicine that you need to take for it to be effective," Hood said. "There's also a limit to how much of the drug can be present in your system so that it doesn't make you sick."

As a result of these limitations, a person who needs frequent doses of a specific medicine must take a pill every day or visit a doctor for injections. Hood's creation, a tiny implantable drug-delivery system, removes the need for either approach.

"It's an implantable capsule, filled with medicinal fluid, that uses about 5,000 nanochannels to regulate the rate of release of the medicine," Hood said. "This way, we have the proper amount of drugs in a person's system to be effective, but not so much that they'll harm that person."

The capsule can deliver medicinal doses for several days or a few weeks, and according to Hood it can be used for any ailment that needs localized delivery over such a period. This makes it especially well suited to treating cancer, while a larger version of the device, originally created by Grattoni, can treat diseases like HIV for up to a year.

"In HIV treatment, you can bombard the virus with drugs to the point that that person is no longer infectious and shows no symptoms," Hood said. "The danger is that if that person stops taking their drugs, the amount of medicine in his or her system drops below the effective dose and the virus is able to become resistant to the treatments."

The capsule, however, could provide a constant delivery of HIV-battling drugs to prevent such an outcome. Hood noted it can also be used to deliver cortisone to damaged joints to avoid painful, frequent injections, and possibly even to pursue immunotherapy treatments for cancer patients.

"The idea behind immunotherapy is to deliver a cocktail of immune drugs to call attention to the cancer in a person's body, so the immune system will be inspired to get rid of the cancer itself," he said.

The current prototype of the device is permanent and injected under the skin, but Hood is working with Teja Guda, assistant professor of biomedical engineering, to use 3D-printing technology to make a new, fully biodegradable iteration of the device that could potentially be swallowed.

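Hood's description of a minimum effective dose and a maximum safe level defines a therapeutic window, and the advantage of steady nanochannel release over daily pills can be illustrated with a toy one-compartment pharmacokinetic model. Every number below (dose, elimination rate, dosing interval) is an arbitrary illustration, not data from the study.

```python
import math

def bolus_course(dose, interval_h, k_elim, hours, dt=0.05):
    """Concentration trace for repeated instantaneous doses (a daily pill)
    with first-order elimination -- a toy one-compartment model."""
    c, trace = 0.0, []
    per_dose = round(interval_h / dt)
    for step in range(round(hours / dt)):
        if step % per_dose == 0:
            c += dose
        c *= math.exp(-k_elim * dt)           # first-order decay over one step
        trace.append(c)
    return trace

def steady_course(rate_per_h, k_elim, hours, dt=0.05):
    """Concentration trace for constant zero-order input (nanochannel-style)."""
    c, trace = 0.0, []
    for _ in range(round(hours / dt)):
        c += (rate_per_h - k_elim * c) * dt   # Euler step of dC/dt = r - k*C
        trace.append(c)
    return trace

k_elim, hours, dt = 0.2, 96, 0.05             # elimination rate (1/h), 4 days
daily = bolus_course(10.0, 24, k_elim, hours, dt)
drip = steady_course(10.0 / 24, k_elim, hours, dt)  # same total drug per day
last_day = slice(-round(24 / dt), None)
swing = lambda tr: max(tr[last_day]) / min(tr[last_day])
print(round(swing(daily), 1), round(swing(drip), 2))  # pill swings ~100x; drip stays flat
```

With these toy numbers the once-a-day bolus repeatedly overshoots a safe ceiling and then falls below the effective floor, while the constant-release trace settles inside the window, which is exactly the behaviour the nanochannel capsule is designed to exploit.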

Thu, 1 Dec 2016 19:30:12 +0400
<![CDATA[For robots, artificial intelligence gets physical]]>http://2045.com/news/35076.html35076In a high-ceilinged laboratory at Children’s National Health System in Washington, D.C., a gleaming white robot stitches up pig intestines.

The thin pink tissue dangles like a deflated balloon from a sturdy plastic loop. Two bulky cameras watch from above as the bot weaves green thread in and out, slowly sewing together two sections. Like an experienced human surgeon, the robot places each suture deftly, precisely — and with intelligence.

Or something close to it.

For robots, artificial intelligence means more than just “brains.” Sure, computers can learn how to recognize faces or beat humans in strategy games. But the body matters too. In humans, eyes and ears and skin pick up cues from the environment, like the glow of a campfire or the patter of falling raindrops. People use these cues to take action: to dodge a wayward spark or huddle close under an umbrella.

Part of intelligence is “walking around and picking things up and opening doors and stuff,” says Cornell computer scientist Bart Selman. It “has to do with our perception and our physical being.” For machines to function fully on their own, without humans calling the shots, getting physical is essential. Today’s robots aren’t there yet — not even close — but amping up the senses could change that.


“If we’re going to have robots in the world, in our home, interacting with us and exploring the environment, they absolutely have to have sensing,” says Stanford roboticist Mark Cutkosky. He and a group of like-minded scientists are making sensors for robotic feet and fingers and skin — and are even helping robots learn how to use their bodies, like babies first grasping how to squeeze a parent’s finger.

The goal is to build robots that can make decisions based on what they’re sensing around them — robots that can gauge the force needed to push open a door or figure out how to step carefully on a slick sidewalk. Eventually, such robots could work like humans, perhaps even caring for the elderly.


Sun, 20 Nov 2016 19:03:22 +0400
<![CDATA[The robot suit providing hope of a walking cure]]>http://2045.com/news/35075.html35075Clothing that can help people learn how to walk again after a stroke is the brainchild of a Harvard team reinventing the way we use robot technology

Conor Walsh’s laboratory at Harvard University is not your everyday research centre. There are no bench-top centrifuges, no fume cupboards for removing noxious gases, no beakers or crucibles, no racks of test tubes and only a handful of laptop computers. Instead, the place is dominated by clothing.

On one side of the lab stands a group of mannequins dressed in T-shirts and black running trousers. Behind them, there are racks of sweatshirts and running shoes. On another wall of shelves, shorts and leggings have been carefully folded and labelled for different-size wearers. On my recent visit, one student was sewing a patch on a pair of slacks.

Walk in off the street and you might think you had stumbled into a high-class sports shop. But this is no university of Nike. This is the Harvard Biodesign Lab, home of a remarkable research project that aims to revolutionise the science of “soft robotics” and, in the process, transform the fortunes of stroke victims by helping them walk again.

“Essentially, we are making clothing that will give power to people who have suffered mobility impairment and help them move,” says Professor Walsh, head of the biodesign laboratory. “It will help them lift their feet and walk again. It is the ultimate in power-dressing.”

Last week, at a ceremony in Los Angeles, 35-year-old Walsh was awarded a Rolex award for enterprise for his work. He plans to use the prize money – 100,000 Swiss francs (about £82,000) – to expand “soft robotics” to develop suits that could also enhance the ability of workers and soldiers to lift and carry weights and also improve other areas of medical care, including treatments for patients suffering from Parkinson’s disease, cerebral palsy and other ailments that affect mobility.

Walsh is a graduate – in manufacturing and mechanical engineering – of Trinity College Dublin. While a student, he became fascinated with robotics after he read about the exoskeletons being developed in the United States to help humans handle heavy loads. Essentially, an exoskeleton is a hard, robot-like shell that fits around a user and moves them about. Think of the metal suit worn by Robert Downey Jr in Iron Man or the powered skeletal frame Sigourney Weaver used in Aliens to deal with the acid-dribbling extraterrestrial that threatened her spaceship.

“I thought that it all looked really, really cool,” says Walsh. So he applied, and was accepted, to study at the Massachusetts Institute of Technology (MIT) under biomechatronics expert Professor Hugh Herr. But when Walsh began working on rigid exoskeletons, he found the experience unsatisfactory. “It was like being inside a robotic suit of armour. It was hard, uncomfortable and ponderous and the suit didn’t always move the way a human would,” he says.

So when Walsh moved to Harvard, where he set up the biodesign lab, he decided to take a different approach to the problem. “I saw immediately that if you had a softer suit that accentuated the right actions, was comfy to wear and didn’t encumber you, it could have huge biomedical applications,” he says. “I began to wonder: can we make wearable robots soft?”

The answer turned out to be yes. Walsh, assisted by colleagues Terry Ellis, Louis Awad and Ken Holt of Boston University, worked with experts in electronics, mechanical engineering, materials science and neurology to create an ingenious, low-tech way to boost walking: the soft exosuit. A band of cloth is wrapped around a person’s calf muscles. Pulleys, made from bicycle brake cables, are attached to these calf wraps, and the other ends of the cables are fitted to a power pack worn on a patient’s back. When the wearer starts to lift his foot to take a step, the power pack pulls the cables and this helps the wearer lift their leg. Then, as their foot swings forward, another cable, attached to the toecap of their shoes, tightens to help raise the toe so that it does not drag on the ground as they swing their legs forward. This condition is known as “foot drop” and it is a common difficulty for stroke patients.

In this way, an often critical problem for someone who can no longer control their muscles properly is alleviated. They can lift their legs and, just as importantly, keep their toes from turning down so that they do not drag on the ground and make them stumble. It is the perfect leg-up, in short.

“Designing robotic devices that target specific joints just hadn’t been done before,” says Walsh. “People had only looked at constructing a full-leg exoskeleton. We are targeting just one joint, not a whole leg. Crucially, in the case of strokes, it is the one that is often most badly impaired. Also, we have managed to keep our materials very light and easily wearable. Simple is best. That is our mantra.”


Originally, the pulleys that lifted the cables that helped wearers raise their legs and toes were powered by a trolley-like device that trundled alongside them. One of the key improvements involved in Walsh’s project has been to reduce that power pack to a size that can be worn reasonably comfortably. The unit weighs 10lbs (4.5kg) and Walsh expects his team will be able to make further reductions in the near future. “Motors are going to get lighter, batteries are going to get lighter. That will all be of great benefit, without doubt.”

The packs are also fitted with devices known as inertial measurement units (IMU), which analyse the forces created by foot movements and raise and lower the brake-cable pulleys. These sensors have to work with millisecond accuracy for the system to work properly. “Timing is absolutely critical,” says Walsh.
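The article notes only that the IMUs must detect foot-movement events with millisecond accuracy; the detection logic itself is not described. A minimal stand-in is a rising threshold crossing on a simulated shank angular-velocity trace sampled at 1 kHz – the waveform, threshold and sampling rate below are all invented for illustration.

```python
import math

def swing_onsets(gyro, threshold=1.5):
    """Indices where the angular-velocity signal rises through the threshold
    -- the instant a toy controller would start pulling the cable."""
    return [i for i in range(1, len(gyro))
            if gyro[i - 1] < threshold <= gyro[i]]

# Simulated 1 kHz gyro trace: two gait cycles as half-sine swing bursts.
fs = 1000
gyro = [3.0 * max(0.0, math.sin(2 * math.pi * i / fs)) for i in range(2 * fs)]
onsets = swing_onsets(gyro)
print([round(i / fs, 3) for i in onsets])  # one trigger time (s) per stride
```

At 1 kHz each sample is one millisecond, so even this naive detector has the temporal resolution the article calls for; the real controller presumably adds filtering and gait-phase estimation to avoid false triggers.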

Test runs have already proved successful, however. Videos of stroke patients wearing soft exosuits and walking on treadmills reveal a marked improvement in their movement. Once fitted with the suits, they no longer clutch the handrails and their strides become much quicker and more confident. “We are not saying our system is the only solution to impaired mobility,” adds Walsh. “There will always be a place for hard exoskeleton power suits, for example, for people who are completely paralysed. But for less severe problems, soft robotic suits, with their lightness and flexibility, are a better solution.”

Every year, about 110,000 people suffer a stroke in the UK. Most patients survive but strokes are still the third-largest cause of death, after heart disease and cancer, in this country. Strokes occur when the blood supply to the brain is stopped due to a blood clot or when a weakened blood vessel bursts. One major impact is on how the muscles work. As the Stroke Association points out, your brain sends signals to your muscles, through your nerves, to make them move. A stroke, in damaging your brain, disrupts these signals. Classic symptoms include foot drop and loss of stamina. Patients feel tired and become clumsier, making it even more difficult to control their movements.

“Patients often withdraw from life. They stop going out and miss out on all sorts of social events – their grandchildren’s sports events or parties,” says Ignacio Galiana of the Wyss Institute for Biologically Inspired Engineering at Harvard University, which is also involved in the soft exosuit project. “They prefer to stay at home and to stop exercising because it is so tiring and draining. They withdraw from the world. By making it possible to walk normally again we hope we can stop that sort of thing happening.”

The soft exosuits will not be worn all of the time, it is thought, but instead be put on for a few hours so patients can get out of their homes without exhausting themselves. The devices should also help in physiotherapy sessions aimed at restoring sufferers’ ability to walk. “This is a new tool that will greatly extend and accelerate rehabilitation therapy for stroke patients,” says Walsh. “Patients no longer have to think about the process of moving. It starts to come naturally to them, as it was before they had their stroke.”

As to timing, Walsh envisages that his team will be able to get their prototypes on to the market in about three years. Nor will soft exoskeleton use be confined to stroke cases. “Cerebral palsy, Alzheimer’s, multiple sclerosis, Parkinson’s, old age: patients with any of these conditions could benefit,” adds Walsh. “When muscles no longer generate sufficient forces to allow people to walk, soft, wearable robots will be able to help them.”

Sun, 20 Nov 2016 18:59:52 +0400
<![CDATA[Medical Bionic Implant And Artificial Organs Market Volume Forecast and Value Chain Analysis 2016-2026]]>http://2045.com/news/35074.html35074Artificial organs and implants are specially made devices or prostheses implanted in the human body to imitate the function of the original organ; the crucial requirement is that they work like the natural organ. Bionics is a combination of biology and electronics, and medical bionics replace or augment body parts with robotic versions. Medical bionic implants differ from artificial organs in that they mimic the original function very closely, or even outperform it.

Organ transplantation becomes necessary when an organ is damaged by injury or disease, but the number of organ donors is far smaller than the demand. Even after an organ is transplanted, there is a chance of rejection, meaning the recipient’s immune system does not accept the organ. Artificial organs and bionics are made of biomaterials: living or non-living substances introduced into the body as part of an artificial organ or bionic device to substitute for an organ or its functions. The heart and kidney are the most developed artificial organs, while pacemakers and cochlear implants are the most developed medical bionics.

Medical Bionic Implant and artificial Organs Market: Drivers and Restraints

Currently, the global medical bionic implant and artificial organs market is driven by the fact that a large number of patients need organ transplants but cannot all receive them, because donors are scarce. Growing advancements in medical technologies are fueling the market, as are rising public awareness of various diseases, advances in implant procedures, and the need for screening for early diagnosis and treatment of various diseases. The expiry of 3D-printing patents is also expected to play an important role in the development of 3D-printed artificial organs. However, the high cost of organ transplant procedures and the price of medical bionics act as restraints on the market.


Medical Bionic Implant and artificial Organs Market: Segmentation

Based on product type, the global medical bionic implant and artificial organs market is segmented into:

  • Heart Bionics
    • Ventricular Assist Device
    • Total Artificial Heart
    • Artificial Heart Valves
    • Pacemaker
      • Implantable Cardiac Pacemaker
      • External Pacemaker
  • Orthopedic Bionics
    • Bionic Hand
    • Bionic Limb
    • Bionic Leg
  • Ear Bionics
    • Bone Anchored Hearing Aid
    • Cochlear Implant

Based on implant location, the global medical bionic implant and artificial organs market is segmented into:

  • Externally Worn
  • Implantable


Medical Bionic Implant and artificial Organs Market: Overview

With rapid technological advancement in the medical field and ever-increasing demand for medical bionic implants and artificial organs, the global medical bionic implant and artificial organs market is anticipated to see vigorous development during the forecast period.

Medical Bionic Implant and artificial Organs Market: Region- wise Outlook

By geographic region, the global medical bionic implant and artificial organs market is segmented into seven key regions: North America, Latin America, Eastern Europe, Western Europe, Asia Pacific excluding Japan, Japan, and the Middle East & Africa. North America is the leading market for medical bionic implants and artificial organs, owing to rapid technological innovation, heavy investment in research and development, and increased healthcare expenditure on artificial prostheses. Asia-Pacific and Europe are expected to grow significantly, as a large consumer base, rising government initiatives to enhance healthcare, and high disposable incomes contribute to a robust CAGR for the market over the forecast period.

Medical Bionic Implant and artificial Organs Market: Key Players

Some of the key players in the global medical bionic implant and artificial organs market are Touch Bionics Inc., LifeNet Health Inc., Cochlear Ltd., Sonova, Otto Bock Inc., Edwards Lifesciences Corporation, Medtronic, Inc., HeartWare, Orthofix Holdings, Inc., BionX Medical Technologies, Inc. and others.

Wed, 16 Nov 2016 18:56:00 +0400
<![CDATA[Modular Exoskeleton Reduces Risk of Work-Related Injury]]>http://2045.com/news/35073.html35073Robotics startup suitX is turning human laborers into bionic workers with a new modular, full-body exoskeleton that will help reduce the number of on-the-job injuries.

The flexible MAX (Modular Agile eXoskeleton) system is designed to support those body parts—shoulders, lower back, knees—most prone to injury during heavy physical exertion.

A spinoff of the University of California Berkeley's Robotics and Human Engineering Lab, suitX built MAX out of three units: backX, shoulderX, and legX. Each can be worn independently or in any combination necessary.

"All modules intelligently engage when you need them, and don't impede you" when moving up or down stairs and ladders, driving, or biking, the product page said.

Field evaluations conducted in the US and Japan, as well as in laboratory settings, indicate the MAX system "reduces muscle force required to complete tasks by as much as 60 percent."

The full-body suit and its modules are aimed primarily at those working in industrial settings like construction, airport baggage handling, assembly lines, shipbuilding, warehouses, courier delivery services, and factories.

The full MAX suit (backX, shoulderX, and legX together) will run you $10,000; the backX and shoulderX are $3,000 each; and a legX is $5,000. SuitX suggests consumers contact sales@suitx.com for more details.

The company is perhaps best known for its Phoenix exoskeleton, which enables people with mobility disorders to stand up, walk, and interact with others. The lightweight device—still in the testing phase—carries a charge for up to four hours of constant use, or eight hours of intermittent walking.

Wed, 16 Nov 2016 18:53:04 +0400
<![CDATA[Advanced robot can understand how humans THINK and knows how the brain works]]>http://2045.com/news/35067.html35067The latest generation of artificially intelligent robots took centre stage recently at the 2016 World Robot Conference held in the Chinese capital Beijing.

But one of the stand out devices was a robot that can actually understand the intricacies of the human brain, and how a human thinks.

Xiao I can analyse human language, as well as huge amounts of data, and can emulate some functions of a human brain.

The advanced robot can understand and act on users' instructions by analysing their specific context, thanks to a massive database that has accumulated information about daily life and industry for decades, according to an exhibitor at the Xiao I booth.

"The top four companies representing the best human-computer interaction technology were voted for at a summit in Orlando the day before yesterday. Xiao I ranks as the top one, and the others include Apple's Siri, Microsoft's Cortana and Amazon's Echo," said the exhibitor.

Over the past few years, Beijing authorities have been giving policy support to the robot developers in an attempt to stimulate growth of the city’s high-tech industry.

"Without artificial intelligence a robot will be nothing but a machine. Most robot-related research is developing towards the direction of artificial intelligence, which will enhance the sensory ability of robots and enable them to offer better services," said Sheng Licheng, deputy director of Beijing’s Yizhuang Development Zone Administration.

The five-day 2016 World Robot Conference wrapped up on Tuesday, after dazzling visitors with the very latest advancements in robot technology.

Mon, 31 Oct 2016 20:43:24 +0400
<![CDATA[Soft robot with a mouth and gut can forage for its own food]]>http://2045.com/news/35066.html35066Lying in a bath in Bristol, UK, is a robotic scavenger, gorging itself on its surroundings. It’s able to get just enough energy to take in another stomach full of food, before ejecting its waste and repeating the process. This is no ordinary robot. It’s a self-sustaining soft robot with a mouth and gut.

Developed by a Bristol-based collaboration, this robot imitates the life of salps – squishy tube-shaped marine organisms. Salps have an opening at each end, one for food to enter and one for waste to leave. They digest any tasty treats that pass through their body, giving them just enough energy to wiggle about. The same is true for the Bristol bot.

By opening its “mouth”, made from a soft polymer membrane, the robot can suck in a belly full of water and biomatter. The artificial gut – a microbial fuel cell (MFC) – is filled with greedy microbes that break down the biomass and convert its chemical energy into electrical energy, which powers the robot. Digested waste matter is then expelled out the rear end, just as more water is sucked in the front for the next feed. With every mouthful, the robot’s reserves are replenished, so in theory it could roam indefinitely.



“Squeezing out enough energy to be self-sustainable is the real breakthrough,” says Fumiya Iida, a robotics researcher from the University of Cambridge.

Leave it alone

The energy that an MFC can get from food like this is currently pretty low. But by using soft materials for the mouth and the gut, the team was able to reduce the robot’s energy consumption. They got more power by putting several MFCs in series, like a battery.
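The series trick works like stacking battery cells: voltages add while the current stays that of a single cell, and the robot survives only if each feed yields more energy than it costs to fetch the next one. A minimal sketch of that balance, with all numbers invented for illustration (they are not measurements from the Bristol robot):

```python
# Toy energy-balance model for a stack of microbial fuel cells (MFCs).
# All numbers are invented for illustration, not measurements from
# the Bristol robot.

def stack_power(cell_voltage_v, cell_current_a, n_cells):
    """Cells in series: voltages add; current is limited to one cell's."""
    return cell_voltage_v * n_cells * cell_current_a  # watts

def self_sustaining(energy_per_feed_j, pump_cost_j, overhead_j):
    """The robot keeps going only if each gulp yields more energy
    than it costs to pump in and digest the next one."""
    return energy_per_feed_j > pump_cost_j + overhead_j

power_w = stack_power(0.5, 0.002, 4)      # four cells in series: 4 mW
energy_per_feed_j = power_w * 3600        # harvested over a one-hour digest
print(self_sustaining(energy_per_feed_j, pump_cost_j=5.0, overhead_j=2.0))
```

The margin in a real MFC robot is razor-thin, which is why cutting the actuation cost with soft materials mattered as much as raising the output.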

One advantage of a self-sustaining robot is that if you don't have to charge it, change its batteries, or hook it up to a power source, it won't need any human intervention. This would make it ideal for use in inhospitable environments: leave the robot in a radioactive disaster zone or a lake filled with pollution, then let it get to work.

At the moment, it is just a proof of concept. The surrounding water is idealised, meaning that the nutrients have been evenly spread and are in an easy-to-digest form, but other researchers have shown that MFCs can work in more testing conditions.

A self-sustaining robot could one day clean up "red tides", such as those seen in China, as well as collect rubbish.


Now that self-sustainability has been achieved, the team wants to get more power so that the robot can start performing useful tasks.

“In the future, robots like this could be released into the ocean to collect garbage,” says Hemma Philamore, one of the robot’s creators from the University of Bristol. Another application could see the robots feeding in agricultural irrigation systems while monitoring plants or applying chemicals to crops. “What we are developing is a robot that can act naturally, in a natural environment,” says Philamore.

Journal reference: Soft Robotics, DOI: 10.1089/soro.2016.0020

Mon, 31 Oct 2016 20:40:00 +0400
<![CDATA[See a sweating robot do push-ups like it's Schwarzenegger]]>http://2045.com/news/35058.html35058Wasn't it Thomas Edison who said genius is 99 percent perspiration and 1 percent inspiration? Here's a new development that leans heavily on both. The University of Tokyo has developed Kengoro, a musculoskeletal humanoid robot that cools its motors by sweating.

Kengoro, which stands 5 feet 6 inches (1.7 meters) tall, made its debut at the International Conference on Intelligent Robots and Systems held this week in Daejeon, South Korea. Japanese researchers needed to find a way to cool it down without adding a batch of tubes and fans, so they decided to make it sweat.

According to IEEE Spectrum, fake sweat glands allow deionized water to seep out through Kengoro's frame around its 108 motors. As the motors heat up, the water cools them. Kengoro's metal frame is embedded with permeable channels, kind of like a sponge. The deionized water seeps slowly from the inner layers to the more porous layers as needed for cooling.
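The physics behind the approach is evaporative cooling: each kilogram of water that evaporates carries away roughly 2.26 MJ of heat. A back-of-the-envelope sketch, with the flow rate invented for illustration (it is not Kengoro's published spec):

```python
# Back-of-the-envelope: how much heat can evaporating water remove?
# Water's latent heat of vaporization is roughly 2.26 MJ/kg; the flow
# rate used below is invented for illustration, not Kengoro's spec.

LATENT_HEAT_J_PER_KG = 2.26e6

def evaporative_cooling_power(water_kg_per_hour):
    """Watts of heat carried away if this much water fully evaporates."""
    return water_kg_per_hour * LATENT_HEAT_J_PER_KG / 3600.0

# Even a trickle helps: ~100 g/h of evaporation absorbs about 63 W of
# waste heat, spread here across 108 motors.
print(round(evaporative_cooling_power(0.1)))  # 63
```

That large latent heat is why a slow seep through a porous frame can beat fans and tubing on weight.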

But Kengoro doesn't have to worry about wiping down its gym equipment -- the water evaporates as it cools, so it doesn't drip in gross puddles like the guy on the Stairmaster next to you.

The creative cooling method allowed Kengoro to demonstrate doing push-ups for an impressive 11 minutes straight without overheating. That's right, push-ups. It's a skinless Arnold Schwarzenegger, in other words. Let's just hope it sticks to "Kindergarten Cop" Arnold, and not "Terminator" Arnold, because we all know how mankind's little adventure with super-advanced robots turned out there.

Sat, 15 Oct 2016 13:58:13 +0400
<![CDATA[Brain implant provides sense of touch with robotic hand – and that’s just the start]]>http://2045.com/news/35057.html35057A dozen years ago, an auto accident left Nathan Copeland paralyzed, without any feeling in his fingers. Now that feeling is back, thanks to a robotic hand wired up to a brain implant.

“I can feel just about every finger – it’s a really weird sensation,” the 28-year-old Pennsylvanian told doctors a month after his surgery.

Today the brain-computer interface is taking a share of the spotlight at the White House Frontiers Conference in Pittsburgh, with President Barack Obama and other luminaries in attendance.

The ability to wire sensors into the part of the brain that registers the human sense of touch is just one of many medical marvels being developed on the high-tech frontiers of rehabilitation.

“You learn completely new and different things every time you come at this from different directions,” Arati Prabhakar, director of the Pentagon’s Defense Advanced Research Projects Agency, said last week at the GeekWire Summit in Seattle.

Prabhakar provided a preview of Copeland's progress during her talk. DARPA's Revolutionizing Prosthetics program provided the primary funding for the project, which was conducted at the University of Pittsburgh and its medical center, UPMC.

The full details of the experiment were published online today in Science Translational Medicine.

Copeland’s spinal cord was severely injured in an accident in the winter of 2004, when he was an 18-year-old college freshman. The injury left him paralyzed from the upper chest down, with no ability to feel or move his lower arms or legs.

Right after the accident, Copeland put himself on Pitt’s registry of patients willing to participate in clinical trials. Nearly a decade later, a medical team led by Pitt researcher Robert Gaunt chose him to participate in a groundbreaking series of operations.

Gaunt and his colleagues had been working for years on developing brain implants that let disabled patients control prosthetic limbs with their thoughts. “Slowly but surely, we have been moving this research forward,” study co-author Michael Boninger, a professor at Pitt as well as the director of post-acute care for UPMC’s Health Services Division, said in a news release.

This experiment moved the team’s efforts in a new direction. Four arrays of microelectrodes were implanted into the region of Copeland’s brain that would typically take in sensory signals from his fingers. Over the course of several months, researchers stimulated specific points in the somatosensory cortex, and mapped which points made Copeland feel as if a phantom finger was being touched.

"Sometimes it feels electrical, and sometimes it's pressure," Copeland said, "but for the most part, I can tell most of the fingers with definite precision. It feels like my fingers are getting touched or pushed."

To test the results, the researchers placed sensors onto each of the fingers of a robotic hand. They connected the system to Copeland’s brain electrodes, and put a blindfold over his eyes. Then an experimenter touched the robo-hand’s fingers and asked Copeland if he could tell where the feeling was coming from.

Over the course of 13 sessions, each involving hundreds of finger touches, Copeland’s success rate was 84 percent. The index and little fingers were easy to identify, while the middle and ring fingers were harder.
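To see how far 84 percent is from guessing, assume (as an illustration) that Copeland was choosing uniformly among five fingers, so chance level is 20 percent. A quick normal-approximation check, with the trial count a stand-in for "hundreds of finger touches":

```python
import math

# Sanity check on the reported accuracy: assuming uniform guessing
# among five fingers, chance level is 20 percent. The trial count
# below is a stand-in for "hundreds of touches", not the paper's
# exact figure.

def z_score(successes, trials, p_chance):
    """Normal-approximation z-score of the hit count against chance."""
    expected = trials * p_chance
    sd = math.sqrt(trials * p_chance * (1 - p_chance))
    return (successes - expected) / sd

z = z_score(successes=420, trials=500, p_chance=0.2)  # 84% of 500 touches
print(z > 5)  # far beyond anything chance could produce
```

Even under more conservative assumptions about the number of response options, an 84 percent hit rate over hundreds of trials is overwhelmingly above chance.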

During the experiment, Copeland learned to distinguish the intensity of the touch to some extent – but for what it’s worth, he couldn’t distinguish between hot and cold. That’ll have to come later.

“The ultimate goal is to create a system which moves and feels just like a natural arm would,” Gaunt said. “We have a long way to go to get there, but this is a great start.”

Prabhakar said neurotechnology is a high priority for DARPA, in part because of the kinds of injuries that warfighters have suffered in conflicts abroad.

“Lower-limb prosthetics have gotten very good – but upper-limb prosthetics, until very recently, have still been limited to a very simple hook,” she said.

Sat, 15 Oct 2016 13:54:32 +0400
<![CDATA[Anki's Cozmo robot is the new, adorable face of artificial intelligence]]>http://2045.com/news/35059.html35059Human beings have an uneasy relationship with robots. We’re fascinated by the prospect of intelligent machines. At the same time, we’re wary of the existential threat they pose, one emboldened by decades of Hollywood tropes. In the near-term, robots are supposed to pose a threat to our livelihood, with automation promising to replace human workers while the steady march of artificial intelligence puts a machine behind every fast food counter, toll booth, and steering wheel.

In comes Cozmo. The palm-sized robot, from San Francisco-based company Anki, is both a harmless toy and a bold refutation of that uneasy relationship so loved by film and television. The $180 bot, which starts shipping on October 16th, is powered by AI, and the end result is a WALL-E-inspired personality more akin to a clever pet than a do-everything personal assistant.

Anki isn’t trying to sell us a vision of the future like Apple, Google, and so many other Bay Area tech companies. Instead, it wants to offer an alternative. AI promises to change our lives in drastic ways. With Cozmo, Anki wants to show AI can also be a source of joy and a unique way to deepen our relationship with technology beyond the tired crusades to reinvent productivity and connect the world.

The company largely succeeds here. In my time with Cozmo over the last week, it’s been an endearing experience to discover all of the robot’s many subtle quirks, and to revisit what it’s like to play with something that feels mysteriously organic in ways you can’t quite understand. I’m reminded of childhood experiences trying to push the linguistic limits of the Furby I got for Christmas, and later on finding myself fascinated by the perceived depth of the AOL Instant Messenger bot SmarterChild.

This is intentional. Cozmo is supposed to appeal to young kids and early teenagers. It’s the same demographic Anki targeted with its first product line: a series of smartphone-controlled toy cars that can deftly maneuver a circuit-embedded track. The company, founded by Carnegie Mellon roboticists, has always proclaimed its interest in AI and robotics. Yet until the unveiling of Cozmo earlier this year, it was unclear how a toy car startup could make use of such expertise. Now, it’s evident all the software and hardware experience has paid off.

Unlike its less sophisticated predecessors in the toy market, Cozmo has advanced software to back up its smarts. Anki has programmed the robot with what it calls an emotion engine. That means Cozmo can react to situations as a human would, with a full range of emotions from happy and calm to frustrated and bold. If you pick it up, Cozmo's blue square-shaped eyes will turn to angry slivers and its lift-like arms will raise and fall rapidly to exhibit its displeasure. Agree to play a game with Cozmo, however, and its eyes will turn into upside-down U's to show glee. When it loses at a contest, it'll get mad and pound the table.
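Anki has not published how the emotion engine works internally, but the behavior described above can be caricatured as an event-to-mood lookup. A toy sketch, with all event and mood names invented:

```python
# A toy sketch of an event-driven "emotion engine": stimuli map to
# moods, and moods to displays. Event and mood names are invented;
# Anki has not published Cozmo's internals.

REACTIONS = {
    "picked_up":  ("angry", "eyes narrow to slivers, arms pump up and down"),
    "game_start": ("happy", "eyes curve into upside-down U's"),
    "game_lost":  ("mad",   "pounds the table"),
}

class EmotionEngine:
    def __init__(self):
        self.mood = "calm"

    def react(self, event):
        """Update the current mood and return the matching display."""
        self.mood, display = REACTIONS.get(event, ("calm", "idle chirps"))
        return display

engine = EmotionEngine()
print(engine.react("picked_up"))  # eyes narrow to slivers, arms pump up and down
print(engine.mood)                # angry
```

The real system is presumably far richer (moods that decay over time, blended expressions), but the core idea of mapping stimuli to animated displays is the same.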

Anki programmed in dozens upon dozens of nuanced personality displays to make Cozmo feel more alive, and seeing new ones pop up serendipitously is one of the product's most enjoyable aspects. To create Cozmo's personality profile and many expressions, Anki employed the help of former Pixar animator Carlos Baena, who was hired last year to give Cozmo the feeling of an animated film character come to life. The robot also emits a wide-ranging series of emotive chirps to give it a sense of constant awareness in your presence.

To further keep Cozmo feeling like a living, breathing machine, Anki uses a number of popular AI staples. The robot can employ facial recognition to remember faces and recite names. It also uses sophisticated path planning — aided by its three sensor-equipped toy cubes — to maneuver environments and avoid falling off tables. Most of these computations do not happen on the robot's internal hardware, which keeps it light and relatively durable. Instead, Cozmo connects to an iOS or Android app, which communicates with Anki's servers, where most of the heavier lifting is taken care of.

As for what you actually do with Cozmo, the activities vary. You can play a number of games with the robot using the three cubes. Those include a Whac-A-Mole game and your standard keep-away, where Cozmo tries to snatch a cube from your hand before you can pull it back. This is all coordinated through the mobile app, which uses a gamification system to let you unlock more skills for Cozmo by completing one of three daily goals. Those can include simple things like letting Cozmo free roam on your coffee table for 10 minutes. Others give you specific scenarios to create, like beating Cozmo at a game of "tap the cube" after reaching a 4-4 tie. One of the most fun features the app allows is a remote-control mode, where you can see through Cozmo's camera and use him as a kind of reconnaissance tool.

Overall, the biggest criticism you can direct toward Cozmo at the moment is that it’s just a toy, one best enjoyed by young smartphone-savvy kids. That presents a bit of a problem, because Anki’s most impressive achievements here — facial recognition, its versatile emotion engine — will be lost on the target audience. Meanwhile, adults who find Cozmo fascinating, enough to plunk down $180 at least, will be frustrated by the robot’s initial limitations. Walking that line, between appealing to kids with a fondness for Pixar films and impressing robot-loving older customers, will be difficult.

There are other downsides to Cozmo at its initial launch. Though the robot is controlled by the relatively simple mobile app, younger children will most likely need a parent or sibling’s help in getting Cozmo set up. It needs to be activated every now and again through a special Wi-Fi network, and getting it to wake up can sometimes be tricky unless Cozmo is kept in its charging dock when not in use. Being tied to the special Cozmo Wi-Fi network means the phone can’t connect to the internet, and exiting the app will put Cozmo to sleep after a few moments. These kinks may be ironed out with future software updates, but they’ll likely frustrate kids who expect toys to work out of the box or want Cozmo to have a persistent, always-on mode less reliant on a phone.

The robot does have a great deal of potential. Anki is releasing a finished software development kit in the coming months to let developers take advantage of the robot’s advanced capabilities to perform unforeseen tasks. Anki wants Cozmo to have an impact similar to Microsoft’s original Kinect motion camera, which roboticists tapped for computer vision capabilities that were at the time available only with far more expensive components. One possibility the company has floated in the past is programming Cozmo to work with smart appliances and your media center, so it can dim your Philips Hue lights and put on Netflix when it recognizes two different people sitting on the couch.

For now, though, it's mostly a neat toy designed for kids, while only the most hardcore robotics fans and programmers will want to pick one up for their office or at-home tinkering projects. But that may be good enough. What Anki wants to accomplish — to bring robotics and AI to everyone, in a kid-friendly package — doesn't require a sophisticated humanoid bot to help you around the house or an ultra-capable online assistant to manage your entire life. The goal can be achieved with a likable personality that people will develop a fondness for. In that regard, Cozmo easily clears the bar.

Fri, 14 Oct 2016 14:01:46 +0400
<![CDATA[Robotic surrogates help chronically ill kids maintain social, academic ties at school]]>http://2045.com/news/35053.html35053Chronically ill, homebound children who use robotic surrogates to "attend" school feel more socially connected with their peers and more involved academically, according to a first-of-its-kind study by University of California, Irvine education researchers.

"Every year, large numbers of K-12 students are not able to go to school due to illness, which has negative academic, social and medical consequences," said lead author Veronica Newhart, a Ph.D. student in UCI's School of Education. "They face falling behind in their studies, feeling isolated from their friends and having their recovery impeded by depression. Tutors can make occasional home visits, but until recently, there hasn't been a way to provide these homebound students with inclusive academic and social experiences."

Telepresence robots could do just that. The Internet-enabled, two-way video streaming automatons have wheels for feet and a screen showing the user's face at the top of a vertical "body." From home, a student controlling the device with a laptop can see and hear everything in the classroom, talk with friends and the teacher, "raise his or her hand" via flashing lights to ask or answer questions, move around and even take field trips.

However, the robots have gone straight from production to consumer, the researchers noted, and there is a great need for objective, formal studies so that schools, hospitals and communities can responsibly engage in this innovative educational practice.

The exploratory case study -- co-authored by Mark Warschauer, UCI professor of education and informatics -- involved five homebound children, five parents, 10 teachers, 35 classmates and six school/district administrators. The students -- four males and one female -- ranged in age from 6 to 16, and their chronic illnesses included an immunodeficiency disorder, cancer and heart failure.

Getting to see their friends and staying socially connected was what they said they liked best about using the robots. The school day felt more normal, they reported, because they were able to participate in discussions, interact with peers and undergo new experiences with their classmates.

"Further research is required to determine the impact of robot utilization on students' health and well-being, as well as the most effective ways to implement this technology in various settings," said Newhart, who presented the findings at the 23rd International Conference on Learning, held in July at the University of British Columbia.

"Collaboration among education, technology and healthcare teams is key to the success of virtual inclusion in the classroom for improved learning, social and health outcomes for vulnerable children."

This fall, telepresence robots will become available on the UCI campus -- a gift from the class of 2016. "This is a solution for any student who's prevented from completing a course or degree program because of a long-term injury or illness," said Newhart, who will soon launch additional studies in school districts across the country.

Story Source:

The above post is reprinted from materials provided by University of California, Irvine. Note: Content may be edited for style and length.

Fri, 16 Sep 2016 00:43:23 +0400
<![CDATA[How a small implanted device could help limit metastatic breast cancer]]>http://2045.com/news/35052.html35052A small device implanted under the skin can improve breast cancer survival by catching cancer cells, slowing the development of metastatic tumors in other organs and allowing time to intervene with surgery or other therapies.

These findings, reported in Cancer Research, suggest a path for identifying metastatic cancer early and intervening to improve outcomes.

"This study shows that in the metastatic setting, early detection combined with a therapeutic intervention can improve outcomes. Early detection of a primary tumor is generally associated with improved outcomes. But that's not necessarily been tested in metastatic cancer," says study author Lonnie D. Shea, Ph.D., William and Valerie Hall Department Chair of Biomedical Engineering at the University of Michigan.

The study, done in mice, expands on earlier research from this team showing that the implantable scaffold device effectively captures metastatic cancer cells. Here, the researchers improve upon their device and show that surgery prior to the first signs of metastatic cancer improved survival.

"Currently, early signs of metastasis can be difficult to detect. Imaging may be done once a patient experiences symptoms, but that implies the burden of disease may already be substantial. Improved detection methods are needed to identify metastasis at a point when targeted treatments can have a significant beneficial impact on slowing disease progression," says study author Jacqueline S. Jeruss, M.D., Ph.D., associate professor of surgery and biomedical engineering and director of the Breast Care Center at the University of Michigan Comprehensive Cancer Center.

The scaffold is made of FDA-approved material commonly used in sutures and wound dressings. It's biodegradable and can last up to two years within a patient. The researchers envision it would be implanted under the skin, monitored with non-invasive imaging and removed upon signs of cancer cell colonization, at which point treatment could be administered.

The scaffold is designed to mimic the environment in other organs before cancer cells migrate there. The scaffold attracts the body's immune cells, and the immune cells draw in the cancer cells. This keeps the immune cells from heading to the lung, liver or brain, where breast cancer commonly spreads.

"Typically, immune cells initially colonize a metastatic site and then pave the way for cancer cells to spread to that organ. Our results suggest that bringing immune cells into the scaffold limits the ability of those immune cells to prepare the metastatic sites for the cancer cells. Having more immune cells in the scaffold attracts more cancer cells to this engineered environment," Shea says.

In the mouse study, at day 5 after tumor initiation, the researchers found a detectable percentage of tumor cells within the scaffold but none in the lung, liver or brain, suggesting that the cancer cells hit the scaffold first.

At 15 days after tumor initiation, they found 64 percent fewer cancer cells in the liver and 75 percent fewer cancer cells in the brains of mice with scaffolds compared to mice without scaffolds. This suggests that the presence of the scaffold slows the progress of metastatic disease.

The researchers removed the tumors at day 10, which is after detection but before substantial spreading, and found the mice that had the scaffold in place survived longer than mice that did not have a scaffold. While surgery was the primary intervention in this study, the researchers suggest that additional medical treatments might also be tested as early interventions.

In addition, researchers hope that by removing the scaffold and examining the cancer cells within it, they can use precision medicine techniques to target the treatment most likely to have an impact.

This system is early detection and treatment, not a cure, the researchers emphasize. The scaffold won't prevent metastatic disease or reverse disease progression for patients with established metastatic cancer.

The team will develop a clinical trial protocol using the scaffold to monitor for metastasis in patients treated for early stage breast cancer. In time, the researchers hope it could also be used to monitor for breast cancer in people who are at high risk due to genetic susceptibility. They are also testing the device in other types of cancer.

Story Source:

The above post is reprinted from materials provided by University of Michigan Health System. Note: Content may be edited for style and length.

Fri, 16 Sep 2016 00:40:57 +0400
<![CDATA[THE HYPE—AND HOPE—OF ARTIFICIAL INTELLIGENCE]]>http://2045.com/news/35046.html35046Earlier this month, on his HBO show “Last Week Tonight,” John Oliver skewered media companies’ desperate search for clicks. Like many of his bits, it became a viral phenomenon, clocking in at nearly six million views on YouTube. At around the ten-minute mark, Oliver took his verbal bat to the knees of Tronc, the new name for Tribune Publishing Company, and its parody-worthy promotional video, in which a robotic spokeswoman describes the journalistic benefits of artificial intelligence, as a string section swells underneath.

Tronc is not the only company to enthusiastically embrace the term “artificial intelligence.” A.I. is hot, and every company worth its stock price is talking about how this magical potion will change everything. Even Macy’s recently announced that it was testing an I.B.M. artificial-intelligence tool in ten of its department stores, in order to bring back customers who are abandoning traditional retail in favor of online shopping.

Much like “the cloud,” “big data,” and “machine learning” before it, the term “artificial intelligence” has been hijacked by marketers and advertising copywriters. A lot of what people are calling “artificial intelligence” is really data analytics—in other words, business as usual. If the hype leaves you asking “What is A.I., really?,” don’t worry, you’re not alone. I asked various experts to define the term and got different answers. The only thing they all seem to agree on is that artificial intelligence is a set of technologies that try to imitate or augment human intelligence. To me, the emphasis is on augmentation, in which intelligent software helps us interact and deal with the increasingly digital world we live in.

Three decades ago, I read newspapers, wrote on an electric typewriter, and watched a handful of television channels. Today, I have streaming video from Netflix, Amazon, HBO, and other places, and I’m sometimes paralyzed by the choices. It is becoming harder for us to stay on top of the onslaught—e-mails, messages, appointments, alerts. Augmented intelligence offers the possibility of winnowing an increasing number of inputs and options in a way that humans can’t manage without a helping hand.

Computers in general, and software in particular, are much more difficult than other kinds of technology for most people to grok, and they overwhelm us with a sense of mystery. There was a time when you would record a letter or a document on a dictaphone and someone would transcribe it for you. A human was making the voice-to-text conversion with the help of a machine. Today, you can speak into your iPhone and it will transcribe your messages itself. If people could have seen our current voice-to-text capabilities fifty years ago, it would have looked as if technology had become sentient. Now it’s just a routine way to augment how we interact with the world. Kevin Kelly, the writer and futurist, whose most recent book is “The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future,” said, “What we can do now would be A.I. fifty years ago. What we can do in fifty years will not be called A.I.”

You don’t have to look up from Facebook to get his point. Before we had the Internet, we would either call or write to our friends, one at a time, and keep up with their lives. It was a slow process, and took a lot of effort and time to learn about each other. As a result, we had fewer interactions—there was a cost attached to making long-distance phone calls and a time commitment attached to writing letters. With the advent of the Internet, e-mail emerged as a way to facilitate and speed up those interactions. Facebook did one better—it turned your address book into a hub, allowing you to simultaneously stay in touch with hundreds, even thousands, of friends. The algorithm allows us to maintain more relationships with much less effort at almost no cost.

Michelle Zhou spent over a decade and a half at I.B.M. Research and I.B.M. Watson Group before leaving to become a co-founder of Juji, a sentiment-analysis startup. An expert in a field where artificial intelligence and human-computer interaction intersect, Zhou breaks down A.I. into three stages. The first is recognition intelligence, in which algorithms running on ever more powerful computers can recognize patterns and glean topics from blocks of text, or perhaps even derive the meaning of a whole document from a few sentences. The second stage is cognitive intelligence, in which machines can go beyond pattern recognition and start making inferences from data. The third stage will be reached only when we can create virtual human beings, who can think, act, and behave as humans do.
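Zhou's first stage, recognition intelligence, can be illustrated with something as crude as word counting: gleaning a document's topic from term frequencies. Real systems use learned models; this keyword counter only sketches the idea, and the stopword list is a toy one:

```python
# A crude instance of Zhou's first stage, "recognition intelligence":
# gleaning a document's topic from raw word frequencies. Real systems
# use learned models; this counter only illustrates the pattern idea.

from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "it", "this"}

def glean_topic(text):
    """Return the most frequent non-stopword as a stand-in 'topic'."""
    words = (w.strip(".,:;!?").lower() for w in text.split())
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return counts.most_common(1)[0][0]

doc = ("Robots and robots again: the robots in this factory weld, "
       "paint, and inspect.")
print(glean_topic(doc))  # robots
```

Going from "this document is about robots" to an inference like "this factory is automating" is the jump to Zhou's second, cognitive stage.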

We are a long way from creating virtual human beings. Despite what you read in the media, no technology is perfect, and the most valuable function of A.I. lies in augmenting human intelligence. To even reach that point, we need to train computers to mimic humans. An April, 2016, story in Bloomberg Business provided a good example. It described how companies that provide automated A.I. personal assistants (of the sort that arrange schedules or help with online shopping) had hired human “trainers” to check and evaluate the A.I. assistants’ responses before they were sent out. “It’s ironic that we define artificial intelligence with respect to its ability to replicate human intelligence,” said Sean Gourley, the founder of Primer, a data-analytics company, and an expert on deriving intelligence from large data sets with the help of algorithms.

Whether it is Spotify or Netflix or a new generation of A.I. chat bots, all of these tools rely on humans themselves to provide the data. When we listen to songs, put them on playlists, and share them with others, we are sending vital signals to Spotify that train its algorithms not only to discover what we might like but also to predict hits.
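The idea that everyday listening behavior is itself training data can be shown in miniature. The sketch below is purely illustrative: the playlists and the co-occurrence scoring are invented stand-ins, not anything resembling Spotify's actual system, but they show how user signals alone can drive a recommendation.

```python
from collections import defaultdict

# Toy playlists standing in for user listening histories (invented data).
playlists = [
    ["song_a", "song_b", "song_c"],
    ["song_a", "song_b"],
    ["song_b", "song_c", "song_d"],
]

# Count how often pairs of songs appear together: the "vital signals"
# users send simply by listening and sharing.
co_counts = defaultdict(lambda: defaultdict(int))
for playlist in playlists:
    for song in playlist:
        for other in playlist:
            if other != song:
                co_counts[song][other] += 1

def recommend(song, k=2):
    """Return the k songs most often co-played with `song`."""
    ranked = sorted(co_counts[song].items(), key=lambda kv: -kv[1])
    return [s for s, _ in ranked[:k]]

print(recommend("song_a"))  # songs most associated with song_a
```

Real systems replace the counting with learned models, but the principle is the same: no human curates the output, yet every number in it came from human behavior.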

Even the much talked-about “computer vision” has become effective only because humans have uploaded billions of photos and tagged them with metadata to give those photos context. Increasingly powerful computers can scan through these photos and find patterns and meaning. Similarly, Google can use billions of voice samples it has collected over the years to build a smart system that understands accents and nuances, which make its voice-based search function possible.

Using Zhou’s three stages as a yardstick, we are only in the “recognition intelligence” phase—today’s computers use deep learning to discover patterns faster and better. It’s true, however, that some companies are working on technologies that can be used for inferring meanings, which would be the next step. “It does not matter whether we will end up at stage 3,” Zhou wrote to me in an e-mail. “I’m still a big fan of man-machine symbiosis, where computers do the best they can (that is being consistent, objective, precise), and humans do our best (creative, imprecise but adaptive).” For a few more decades, at least, humans will continue to train computers to mimic us. And, in the meantime, we’re going to have to deal with the hyperbole surrounding A.I.

Sun, 28 Aug 2016 20:37:37 +0400
<![CDATA[Bot tech controls drug release when needed]]>http://2045.com/news/35047.html35047(Tech Xplore)—A study shows that nanobots can release drugs on command from your brain. The nanorobots, New Scientist reported on Thursday, are built out of DNA. Drugs can be tethered to their shell-like shapes.

Helen Thomson had details on how this all works: "The bots also have a gate, which has a lock made from iron oxide nanoparticles. The lock opens when heated using electromagnetic energy, exposing the drug to the environment. Because the drug remains tethered to the DNA parcel, a body's exposure to the drug can be controlled by closing and opening the gate."

Their study has been published in PLOS ONE as "Thought-Controlled Nanoscale Robots in a Living Host." New Scientist highlighted the value of the work: it demonstrates more precise control over when a drug is active in the body. "Because the bots can open and close when required, the technology should minimize unwanted side effects."

Therein lies the challenge: getting drugs to where they need to be exactly when they are wanted. "Most drugs diffuse through the blood stream over time – and you're stuck with the side effects until the drug wears off," wrote Thomson.

Kate Baggaley in Popular Science said, "This technology could eventually give people more control over when and where a medication is active in their body." Thomson said the technique may be useful for treating brain disorders such as schizophrenia and ADHD.

"The technology released a drug inside cockroaches in response to the man's brain activity. As described in Popular Science: "A man's brain activity prompted nanobots made out of DNA to release drugs inside a cockroach."

The system is from a team at the Interdisciplinary Center, in Herzliya, and Bar Ilan University, in Ramat Gan, Israel. Following their research effort, the question becomes if and when we will see this applied to humans. According to the New Scientist report, the technology is not ready for use in humans.

What's next? As Thomson wrote, "the technology isn't ready to be used in humans yet." To work, the setup needs a smaller, more portable method of measuring brain activity. The team also envisions a person wearing a small, hearing aid-like EEG device to monitor brain activity and detect when drugs are needed – for example, when a person with ADHD's concentration begins to lapse. A smart watch would then create the electromagnetic field required to release a dose of Ritalin.

The authors wrote that "so far no interface has been established between a human mind and a therapeutic molecule, which are 10 orders of magnitude apart. The purpose of this study was to show that DNA robots can bridge this gap." They said the robots they designed can be electronically remote-controlled. "This was done by adding metal nanoparticles to the robotic gates, which could heat in response to an electromagnetic field."
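The closed loop described above (EEG patterns recognized online, which in turn toggle the electromagnetic field and hence the robotic gates) can be sketched roughly as follows. Everything here is an invented stand-in: the feature, the threshold, and the interfaces are hypothetical, not the study's actual implementation.

```python
def cognitive_load(eeg_window):
    """Hypothetical feature: mean signal power of one EEG window."""
    return sum(sample ** 2 for sample in eeg_window) / len(eeg_window)

def control_field(eeg_window, threshold=0.5):
    """Return True (field on, gates heated open, drug exposed) when the
    recognized cognitive state crosses the threshold, else False."""
    return cognitive_load(eeg_window) > threshold

# Simulated EEG windows: a calm baseline, then a cognitively straining task.
calm = [0.1, -0.1, 0.2, -0.2]
strain = [0.9, -1.1, 1.0, -0.8]

for window in (calm, strain):
    state = "ON (drug exposed)" if control_field(window) else "OFF (drug sealed)"
    print(state)
```

The actual study classifies recorded EEG patterns with a trained algorithm rather than a fixed power threshold, but the control structure is the same: sense, recognize, switch the field.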

More information: Shachar Arnon et al. Thought-Controlled Nanoscale Robots in a Living Host, PLOS ONE (2016). DOI: 10.1371/journal.pone.0161227

We report a new type of brain-machine interface enabling a human operator to control nanometer-size robots inside a living animal by brain activity. Recorded EEG patterns are recognized online by an algorithm, which in turn controls the state of an electromagnetic field. The field induces the local heating of billions of mechanically-actuating DNA origami robots tethered to metal nanoparticles, leading to their reversible activation and subsequent exposure of a bioactive payload. As a proof of principle we demonstrate activation of DNA robots to cause a cellular effect inside the insect Blaberus discoidalis, by a cognitively straining task. This technology enables the online switching of a bioactive molecule on and off in response to a subject's cognitive state, with potential implications to therapeutic control in disorders such as schizophrenia, depression, and attention deficits, which are among the most challenging conditions to diagnose and treat. 

Sat, 27 Aug 2016 20:40:33 +0400
<![CDATA[Stretchy supercapacitors power wearable electronics]]>http://2045.com/news/35048.html35048A future of soft robots that wash your dishes or smart T-shirts that power your cell phone may depend on the development of stretchy power sources. But traditional batteries are thick and rigid—not ideal properties for materials that would be used in tiny malleable devices. In a step toward wearable electronics, a team of researchers has produced a stretchy micro-supercapacitor using ribbons of graphene.

The researchers will present their work today at the 252nd National Meeting & Exposition of the American Chemical Society (ACS).

"Most power sources, such as phone batteries, are not stretchable. They are very rigid," says Xiaodong Chen, Ph.D. "My team has made stretchable electrodes, and we have integrated them into a supercapacitor, which is an energy storage device that powers electronic gadgets."

Supercapacitors, developed in the 1950s, have a higher power density and longer life cycle than standard capacitors or batteries. And as devices have shrunk, so too have supercapacitors, bringing to the fore a generation of two-dimensional micro-supercapacitors that are integrated into cell phones, computers and other devices. However, these supercapacitors have remained rigid, and are thus a poor fit for soft materials that need to have the ability to elongate.

In this study, Chen of Nanyang Technological University, Singapore, and his team sought to develop a micro-supercapacitor from graphene. This carbon sheet is renowned for its thinness, strength and conductivity. "Graphene can be flexible and foldable, but it cannot be stretched," he says. To fix that, Chen's team took a cue from skin. Skin has a wave-like microstructure, Chen says. "We started to think of how we could make graphene more like a wave."

The researchers' first step was to make graphene micro-ribbons. Most graphene is produced with physical methods—like shaving the tip of a pencil—but Chen uses chemistry to build his material. "We have more control over the graphene's structure and thickness that way," he explains. "It's very difficult to control that with the physical approach. Thickness can really affect the conductivity of the electrodes and how much energy the supercapacitor overall can hold."

The next step was to create the stretchable polymer chip with a series of pyramidal ridges. The researchers placed the graphene ribbons across the ridges, creating the wave-like structure. The design allowed the material to stretch without the graphene electrodes of the supercapacitor detaching, cracking or deforming. In addition, the team developed kirigami structures, which are variations of origami folds, to make the supercapacitors 500 percent more flexible without degrading their electrochemical performance. As a final test, Chen powered an LCD from a calculator with the stretchy graphene-based micro-supercapacitor. Similarly, such stretchy supercapacitors could be used in pressure or chemical sensors.

In future experiments, the researchers hope to increase the electrode's surface area so it can hold even more energy. The current version only stores enough energy to power LCD devices for a minute, he says.


More information: Flexible Micro-supercapacitors based on graphene, 252nd National Meeting & Exposition of the American Chemical Society (ACS).

Micro-supercapacitors with unique two-dimensional (2D) structures are gaining attention due to their small size, high energy density and potential applications in on-chip and portable electronics. Compared to the sandwich structure of conventional supercapacitors, the 2D structure of micro-supercapacitors enables a reduction in the ionic diffusion pathway and more efficient utilization of the surface area of electrode materials. Meanwhile, emerging wearable electronics require stretchability in addition to flexibility for application on the soft, curved human body, which is covered with highly extensible skin. Micro-supercapacitors, as candidates for essential integrated energy conversion and storage units in wearable electronics, ought to be capable of accommodating large strain while retaining their performance. In this talk, I will present our recent development of highly stretchable micro-supercapacitors with stable electrochemical performance. The excellent stretchable and electrochemical performance relies on the out-of-plane wavy structures of graphene micro-ribbons. This structure decreases the strain concentration on the electrode fingers, so that detaching and cracking of the electrode materials can be prevented. In addition, it ensures that the electrode fingers keep a relatively constant distance from one another, enhancing the stability of the micro-supercapacitors.

Thu, 25 Aug 2016 20:42:36 +0400
<![CDATA[Meet DevBot, a self-driving electric racing car]]>http://2045.com/news/35045.html35045DevBot is a test mule for Roborace, the first driverless racing series.

There are less than two months to go until the start of Formula E's third season, which kicks off in Hong Kong on October 9. One of the more interesting things about Formula E's upcoming season is the new support series, Roborace. As the name suggests, it's a series for self-driving race cars, and the organizers have just unveiled the mule—called DevBot—that teams will use to develop their control software.

All of the Roborace teams will use identical Robocars, but each will develop its own control algorithms. The race cars are fully electric—in keeping with the ethos of Formula E—and have more than a little Speed Racer about them. But DevBot will look much more familiar to fans of sports car racing; it's a Le Mans-style prototype coupe, shown in the test photos without the front and rear bodywork.

DevBot also has a cockpit for a human driver, unlike the Robocars, but it does have the same powertrain, sensor suite, processors, and communication systems as the forthcoming autonomous race cars. DevBot is also fully electric, suggesting the handiwork of Drayson Racing Technologies. Several years ago, Drayson converted its Lola B10 Le Mans Prototype racer from internal combustion to electric power and has been involved in developing the technology used by Formula E.

Although DevBot has been testing in private for several months now, it will officially break cover later this week at the Formula E preseason test, being held at Donington Park in Leicestershire, England, on August 24.

Tue, 23 Aug 2016 23:31:01 +0400
<![CDATA[Paralyzed Man Regains Hand Movement, Thanks to First-Ever Nerve-Transfer Surgery]]>http://2045.com/news/35042.html35042HEADFIRST

Tim Raglin regularly dove, headfirst, into the water at his family’s lake house. The 45-year-old Canadian had done so thousands of times without incident. In 2007, though, Raglin hit his head on a rock in the shallow water, shattering a vertebra in his cervical spine.

His family pulled him to safety, saving him from drowning. However, for nine years, both his hands and feet were left paralyzed.

Now though, there’s hope for Raglin and others like him.

Raglin is the first Canadian to ever undergo a nerve transfer surgery. Dr. Kirsty Boyd from the Ottawa Hospital essentially rewired Raglin’s body, rerouting some of his fully functional elbow nerves to his hand. Although Raglin had to wait several months for the nerves to regrow, this procedure allowed him to regain some control over his right hand.


After persevering for 18 months, Raglin was finally able to open his fingers during an occupational therapy session at The Ottawa Hospital Rehabilitation Centre.

“It was kind of a shock,” he said in an interview. “And it’s really moving now: There’s a lot of nerves touching muscles that are getting stronger…Every iteration, it just gets more and more exciting.”

It’s still a slow uphill battle for Raglin. The muscles in his hand have deteriorated from lack of use, so they tire easily. In addition, because Raglin is using a different nerve pathway to activate the muscles in his hand, it will take some time for his brain to adjust to the new system.

Despite these challenges, he has learned to close his fingers on something by flexing his bicep. In time, however, it’s expected his brain will figure out how to separate the triggers for his hand and his bicep.

“I’m not quite at the point where I can get a cup off the table, but I can envision myself doing that. I know I will be able to do that eventually—so it’s exciting to see that.”

Tue, 23 Aug 2016 23:14:10 +0400
<![CDATA[Tiny robot caterpillar can move objects ten times its size]]>http://2045.com/news/35041.html35041Soft robots aren't easy to make, since they require a completely different set of components from their rigid counterparts. It's even tougher to scale down the parts they typically use for locomotion. A team of researchers from the Faculty of Physics at the University of Warsaw, however, successfully created a 15-millimeter soft micromachine that only needs light to be able to move. The microrobot is made of Liquid Crystalline Elastomers (LCEs), smart materials that change shape when exposed to visible light. Under a light source, the machine's body contracts like a caterpillar and forms waves to propel it forward.

The researchers said the robo-caterpillar can climb steep slopes, squeeze into minuscule spaces and move objects ten times its size. A tiny machine like this that can operate in challenging environments could be used for scientific research, and maybe even espionage if someone can find a way to attach a camera or a mic to it. But if the robot's a bit too small for a specific application, researchers could also adopt the team's method to make something a wee bit bigger.

Sun, 21 Aug 2016 23:09:00 +0400
<![CDATA[Putting a computer in your brain is no longer science fiction]]>http://2045.com/news/35040.html35040Like many in Silicon Valley, technology entrepreneur Bryan Johnson sees a future in which intelligent machines can do things like drive cars on their own and anticipate our needs before we ask.

What’s uncommon is how Johnson wants to respond: find a way to supercharge the human brain so that we can keep up with the machines.

From an unassuming office in Venice Beach, his science-fiction-meets-science start-up, Kernel, is building a tiny chip that can be implanted in the brain to help people suffering from neurological damage caused by strokes, Alzheimer’s or concussions. Top neuroscientists who are building the chip — they call it a neuroprosthetic — hope that in the longer term, it will be able to boost intelligence, memory and other cognitive tasks.

The medical device is years in the making, Johnson acknowledges, but he can afford the time. He sold his payments company, Braintree, to PayPal for $800 million in 2013. A former Mormon raised in Utah, the 38-year-old speaks about the project with missionary-like intensity and focus.

“Human intelligence is landlocked in relationship to artificial intelligence — and the landlock is the degeneration of the body and the brain,” he said in an interview about the company, which he had not discussed publicly before. “This is a question of keeping humans front and center as we progress.”

Johnson stands out among an elite set of entrepreneurs who believe Silicon Valley can play a role in funding large-scale scientific discoveries — the kind that can dramatically improve human life in ways that go beyond building software.

Many of their ventures draw from software principles. In the last two years, venture capital firms like Y Combinator, Andreessen Horowitz, Peter Thiel’s Founders Fund, Khosla Ventures and others have poured money into start-ups that focus on “bio-hacking” — the notion that you can engineer the body the way you would a software program. They’ve funded companies that aim to sequence the bacteria in the gut, reprogram the DNA you were born with, or conduct cancer biopsies from samples of blood. They’ve backed what are known as cognitive-enhancement businesses like Thync, which builds a headset that sends mood-altering electrical pulses to the brain, and Nootrobox, a start-up that makes chewable coffee supplements that combine doses of caffeine with active ingredients in green tea, leading to a precisely engineered, zenlike high.


It’s easy to dismiss these efforts as the hubristic, techno-utopian fantasies of a self-involved elite that believes it can defy death and human decline — and in doing so, confer even more advantages on the already-privileged.

And while there’s no shortage of hubris in Silicon Valley, it’s also undeniable that some of these projects will accelerate scientific breakthroughs and fill some of the gaps left in the wake of declining public funding for scientific research, said Laurie Zoloth, professor of bioethics and medical humanities at Northwestern University. Moreover, techies are motivated by the fact that many biological and health challenges increasingly involve data-mining and computation; they’re looking more like problems that they know how to solve. Large-scale genome sequencing, for example, has long been seen as key to unlocking targeted cancer therapies and detecting disease far earlier than current methods; it’s becoming more of a reality as the cost of sequencing, storing and analyzing the data has dropped dramatically, leading to a flood of investments in that area.

Kernel is cognitive enhancement of the not-gimmicky variety. The concept is based on the work of Theodore Berger, a pioneering biomedical engineer who directs the Center for Neural Engineering at the University of Southern California, and is the start-up’s chief science officer.

For over two decades, Berger has been working on building a neuroprosthetic to help people with dementia, strokes, concussions, brain injuries and Alzheimer's disease, which afflicts 1 in 9 adults over 65.

The implanted devices try to replicate the way brain cells communicate with one another. Let’s say, for example, that you are having a conversation with your boss. A healthy brain will convert that conversation from short-term memory to long-term memory by firing off a set of electrical signals. The signals fire in a specific code that is unique to each person and is a bit like a software command.

Brain diseases throw off these signaling codes. Berger’s software tries to assist the communication between brain cells by making an instantaneous prediction as to what the healthy code should be, and then firing off in that pattern. In separate studies funded by the Defense Advanced Research Projects Agency over the last several years, Berger’s chips were shown to improve recall functions in both rats and monkeys.
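The predict-then-stimulate idea can be caricatured as a lookup: given an observed input pattern, return the output pattern a healthy circuit would have produced. This is only a cartoon of the concept, not Berger's actual model (his work uses learned nonlinear input-output models of spike trains); the binary codes and the nearest-neighbor matching below are invented for illustration.

```python
# Toy "codebook" pairing input spike patterns with the output patterns a
# healthy circuit produced for them (all patterns invented for illustration).
healthy_codes = {
    (1, 0, 1, 0): (0, 1, 1, 0),
    (1, 1, 0, 0): (1, 0, 0, 1),
    (0, 0, 1, 1): (1, 1, 0, 0),
}

def hamming(a, b):
    """Number of positions at which two patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def predict_healthy_output(observed):
    """Find the nearest known input pattern and return its paired healthy
    output; a stand-in for the learned model the prosthetic would run."""
    nearest = min(healthy_codes, key=lambda code: hamming(code, observed))
    return healthy_codes[nearest]

# A damaged circuit garbles one bit of a known input; the prosthetic still
# recovers the intended output pattern and would stimulate in that pattern.
print(predict_healthy_output((1, 0, 1, 1)))
```

The real device must make this prediction instantaneously and per-person, since, as the article notes, the codes are unique to each individual.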

A year ago, Berger felt he had reached a ceiling in his research. He wanted to begin testing his devices with humans and was thinking about commercial opportunities when he got a cold call from Johnson in October 2015. He hadn’t heard of Johnson; a Google search said he was a tech entrepreneur who had founded a payments-processing company and invested in out-there science start-ups. The two met in Berger’s office later that month. They talked for four hours, skipping lunch, and by the end of the day, Johnson said he would put up the funds for the two to start something together. “I don’t know who, but somebody was looking over us,” Berger said of the meeting.

For Johnson, the meeting was a culmination of a longtime obsession with intelligence and the brain.


Shortly after he sold Braintree, he was already restless to start another company. He spent six months calling everyone he knew who was doing “something audacious” — about 200 people in all. “I wanted to understand what mental models people maintained — how did they define what to work on and why?” he says.

He then set up a $100 million fund that invests in science and technology start-ups that could “radically improve quality of life.” The fund, which comes exclusively from his personal fortune, was called OS Fund, because he wanted to support companies that were making changes at the operating-system level, he said. Johnson’s goal was to take projects from “crazy to viable” — including start-ups attempting to mine asteroids for precious metals and water, delivery drones for developing countries, and an artificial-intelligence company building the world’s largest human genetic database.


At the same time, he kept returning to intelligence, both artificial and real. As he saw it, artificial intelligence was booming — technology advances were moving at an accelerated pace; the pace of the human brain’s evolution was sluggish by comparison. So he hired a team of neuroscientists and tasked them with combing through all the relevant research, with the goal of forming a brain company. Eventually they settled on Berger.

Ten months later, the team is starting to sketch out prototypes of the device and is conducting tests with epilepsy patients in hospitals. They hope to start a clinical trial, but first they have to figure out how to make the device portable. (Right now, patients who use it are hooked up to a computer.)

Zoloth says one of the big risks of technologists funding science is that they fund their own priorities, which can be disconnected from the greater public good. Many people don’t have enough resources to fulfill the brain potential they currently have, let alone enhance it. “Saying that tech billionaires funding what they want may inadvertently fund science for the larger public, as a sort of leftover effect, is a problematic argument,” she said. “If brilliantly creative high school teachers in the inner city, for example, could fund science, too, then perhaps the needs of the poor might be found more interesting.”

Johnson says he is acutely aware of those concerns. He recognizes that the notion of people walking around with chips implanted in their heads to make them smarter seems far-fetched, to put it mildly. He says the goal is to build a product that is widely affordable, but acknowledges there are challenges. He points out that many scientific discoveries and inventions — even the printing press — started out for a privileged group but ended up providing massive benefits to humanity. The primary benefits of Kernel, he says, will be for the sick, for the millions of people who have lost their memories because of brain disorders. Even a small improvement in memory — a person with dementia might be able to remember the location of the bathroom in their home, for example — can help people maintain their dignity and enjoy a greater quality of life.

And in an age of AI, he insists that boosting the capacity of our brains is itself an urgent public concern. “Whatever endeavor we imagine — flying cars, go to Mars — it all fits downstream from our intelligence,” he says. “It is the most powerful resource in existence. It is the master tool.”

Wed, 17 Aug 2016 21:54:12 +0400
<![CDATA[Researchers 'reprogram' network of brain cells in mice with thin beam of light]]>http://2045.com/news/35037.html35037Neurons that fire together really do wire together, says a new study in Science, suggesting that the three-pound computer in our heads may be more malleable than we think.

In the latest issue of Science, neuroscientists at Columbia University demonstrate that a set of neurons trained to fire in unison could be reactivated as much as a day later if just one neuron in the network was stimulated. Though further research is needed, their findings suggest that groups of activated neurons may form the basic building blocks of learning and memory, as originally hypothesized by psychologist Donald Hebb in the 1940s.

"I always thought the brain was mostly hard-wired," said the study's senior author, Dr. Rafael Yuste, a neuroscience professor at Columbia University. "But then I saw the results and said 'Holy moly, this whole thing is plastic.' We're dealing with a plastic computer that's constantly learning and changing."

The researchers were able to control and observe the brain of a living mouse using the optogenetic tools that have revolutionized neuroscience in the last decade. They injected the mouse with a virus containing light-sensitive proteins engineered to reach specific brain cells. Once inside a cell, the proteins allowed researchers to remotely activate the neuron with light, as if switching on a TV.

The mouse was allowed to run freely on a treadmill while its head was held still under a microscope. With one laser, the researchers beamed light through its skull to stimulate a small group of cells in the visual cortex. With a second laser, they recorded rising levels of calcium in each neuron as it fired, thus imaging the activity of individual cells.

Before optogenetics, scientists had to open the skull and implant electrodes into living tissue to stimulate neurons with electricity and measure their response. Even a mouse brain of 100 million neurons, nearly a thousandth the size of ours, was too dense to get a close look at groups of neurons.

Optogenetics allowed researchers to get inside the brain non-invasively and control it far more precisely. In the last decade, researchers have restored sight and hearing to blind and deaf mice, and turned normal mice aggressive, all by manipulating specific brain regions.

The breakthrough that allowed researchers to reprogram a cluster of cells in the brain is the culmination of more than a decade of work. With tissue samples from the mouse visual cortex, Yuste and his colleagues showed in a 2003 study in Nature that neurons coordinated their firing in small networks called neural ensembles. A year later, they demonstrated that the ensembles fired off in sequential patterns through time.

As techniques for controlling and observing cells in living animals improved, they learned that these neural ensembles are active even without stimulation. They used this information to develop mathematical algorithms for finding neural ensembles in the visual cortex. They were then able to show, as they had in the tissue samples earlier, that neural ensembles in living animals also fire one after the other in sequential patterns.
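The ensemble-finding step described above can be illustrated with a toy version: group neurons whose firing co-occurs across time frames. This is not the Yuste lab's actual algorithm (their methods operate on calcium-imaging data with statistical models); the spike raster and the greedy grouping below are invented to show the basic idea.

```python
# Toy spike raster: rows are time frames, columns are neurons (invented data).
# Neurons 0 and 1 tend to fire together; so do neurons 2 and 3.
raster = [
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
]

def coactivation(raster, i, j):
    """Fraction of active frames in which neurons i and j fire together."""
    both = sum(frame[i] and frame[j] for frame in raster)
    either = sum(frame[i] or frame[j] for frame in raster)
    return both / either if either else 0.0

def find_ensembles(raster, threshold=0.6):
    """Greedily group neurons whose pairwise co-activation exceeds threshold."""
    n = len(raster[0])
    unassigned, ensembles = set(range(n)), []
    while unassigned:
        seed = min(unassigned)
        group = {seed} | {j for j in unassigned
                          if j != seed and coactivation(raster, seed, j) > threshold}
        ensembles.append(sorted(group))
        unassigned -= group
    return ensembles

print(find_ensembles(raster))
```

On this invented raster the grouping recovers the two planted ensembles; real data demand far more care about noise, rate differences, and chance co-firing.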

The current study in Science shows that these networks can be artificially implanted and replayed, says Yuste, much as the scent of a tea-soaked madeleine takes novelist Marcel Proust back to his memories of childhood.

Pairing two-photon stimulation technology with two-photon calcium imaging allowed the researchers to document how individual cells responded to light stimulation. Though previous studies have targeted and recorded individual cells, none have demonstrated that a bundle of neurons could be fired off together to imprint what they call a "neuronal microcircuit" in a live animal's brain.

"If you told me a year ago we could stimulate 20 neurons in a mouse brain of 100 million neurons and alter their behavior, I'd say no way," said Yuste, who is also a member of the Data Science Institute. "It's like reconfiguring three grains of sand at the beach."

The researchers think that the network of activated neurons they artificially created may have implanted an image completely unfamiliar to the mouse. They are now developing a behavioral study to try and prove this.

"We think that these methods to read and write activity into the living brain will have a major impact in neuroscience and medicine," said the study's lead author, Luis Carrillo-Reid, a postdoctoral researcher at Columbia.

Dr. Daniel Javitt, a psychiatry professor at Columbia University Medical Center who was not involved in the study, says the work could potentially be used to restore normal connection patterns in the brains of people with epilepsy and other brain disorders. Major technical hurdles, however, would need to be overcome before optogenetic techniques could be applied to humans.

The research is part of a $300 million brain-mapping effort called the U.S. BRAIN Initiative, which grew out of an earlier proposal by Yuste and his colleagues to develop tools for mapping the brain activity of organisms ranging from fruit flies to more complex mammals, including humans.

Story Source:

The above post is reprinted from materials provided by Columbia University. Note: Content may be edited for style and length.

Journal Reference:

  1. Rafael Yuste et al. Imprinting and recalling cortical ensembles. Science, August 2016. DOI: 10.1126/science.aaf7560
Sat, 13 Aug 2016 23:54:28 +0400
<![CDATA[MIT’s DuoSkin turns temporary tattoos into on-skin interfaces]]>http://2045.com/news/35036.html35036Your next tattoo could be functional as well as aesthetic. A new MIT Media Lab product called DuoSkin, created in partnership with Microsoft Research, turns temporary tattoos into connected interfaces, letting them act as input for smartphones or computers, display output based on changes in body temperature and transmit data to other devices via NFC.

DuoSkin: Functional, stylish on-skin user interfaces from MIT Media Lab on Vimeo.

Cindy Hsin-Liu Kao, PhD Student at the MIT Media Lab, explains the origins of the project in the video above. Kao says that metallic jewelry-like temporary tattoos are a growing trend, providing a great opportunity for creating something that meshes with existing fashion while also adding genuinely useful functional capabilities. She notes that in Taiwan, there’s a “huge culture” of cosmetics and street fashion, which is affordable and accessible enough that “you can very easily change and edit your appearance whenever you want.” The DuoSkin team wanted to achieve the same thing with their technological twist on the tattoo trend.

As a result, the system is actually designed to be fairly inexpensive and easy to set up for just about anyone. It uses gold leaf, the same thing you’ll occasionally find delicately flaked atop swanky desserts, for basic conductivity, but otherwise employs everyday crafting tools and materials like a vinyl cutter and temporary-tattoo printing paper. You can use any desktop graphics creation software you like to design the circuit, then feed that design through the vinyl cutter, layer the gold leaf on top and apply it as you would a standard temporary tattoo. Small, surface-mount electronic components including NFC chips complete the connectivity picture.

Researchers devised three different ways in which the DuoSkin tattoos could be used, including as input devices that can turn your skin into a trackpad, or a capacitive virtual control knob for adjusting volume on your connected device, for example. The tattoos can also display output, changing color based on your body temp like a Hypercolor T-shirt. Finally, they can contain data to be read by other devices, via NFC wireless communication. Kao also shows how they can contain embedded LEDs for on-skin light effects.

Kao ends by suggesting they’d like to see this tech come to tattoo parlours, so it’s easy for anyone to get connected ink. It’s definitely something that could broaden the use cases and appeal of wearable tech as a category, especially among price-sensitive customers who place a high value on aesthetics and don’t want to have to wear a watch or other more cumbersome piece of tech.

Startups like Inkbox are already working on material science advancements that extend the life of temporary tattoos, too, so there could be a collaboration opportunity down the road that gives people access to tattoo-based interfaces that don’t last forever, but don’t wash off overnight, either.

Sat, 13 Aug 2016 23:15:13 +0400
<![CDATA[How to Give Fake Hands Real Feeling]]>http://2045.com/news/35033.html35033In Zhenan Bao’s lab at Stanford, researchers are ­inventing materials for touch-sensitive prosthetics.

The human hand has 17,000 touch sensors that help us pick things up and connect us to the physical world. A prosthetic hand or foot has no feeling at all.

Zhenan Bao hopes to change that by wrapping prosthetics with electronic skin that can sense pressure, heal when cut, and process sensory data. It’s a critical step toward prosthetics that one day could be wired to the nervous system to deliver a sense of touch. Even before that is possible, soft yet grippy electronic skin would let amputees and burn victims do more everyday tasks like picking up delicate objects—and possibly help alleviate phantom-limb pain.

To mimic and in some ways surpass the capabilities of the skin on human hands, Bao is rethinking what an electronic material can be. Electronic skin should be not only sensitive to pressure but also lightweight, durable, stretchy, pliable, and self-healing, just like real skin. It should also be relatively inexpensive to manufacture in large sheets for wrapping around prosthetics. Traditional electronic materials are none of these things.

Bao (an MIT Technology Review Innovator Under 35 in 2003) has been working on electronic skin since 2010. She has had to create new chemical recipes for every electronic component, replacing rigid materials like silicon with flexible organic molecules, polymers, and nanomaterials.

Bao’s group uses stretchy rubber materials that are similar to human skin in the way they give and recover. Sometimes her team mixes electronic materials into the rubber; other times they build on top of it. To make a touch sensor, researchers mix in carbon that is electrically conductive. The voltage across this conductive rubber sheet changes when the material is pressed. Bao’s group found that covering these touch sensors with a pattern of microscale pyramids improves their touch sensitivity—much as the whorls of our fingerprints do. Depending on the design, these sensors can be made at least as sensitive as the skin on our hands. Her group also prints transistors, electrical leads, and other components on the rubbery skins to make stretchy circuits that could process data from touch sensors on a prosthetic hand.
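The readout principle described above, where pressing a carbon-loaded rubber sheet changes its resistance and hence a measurable voltage, can be sketched in a few lines. This is an illustrative model only, not Bao's actual circuit: the divider topology, supply voltage, and the linear calibration `R(p) = R0 / (1 + K*p)` are all hypothetical.

```python
# Illustrative piezoresistive readout sketch (hypothetical values throughout).
V_IN = 3.3        # supply voltage (V), assumed
R_FIXED = 10e3    # fixed divider resistor (ohms), assumed
R0 = 50e3         # sensor resistance at zero pressure (ohms), assumed
K = 2.0e-3        # fractional resistance drop per kPa, assumed calibration

def sensor_resistance(v_out):
    """Invert the divider: V_out = V_IN * R_FIXED / (R_FIXED + R_sensor)."""
    return R_FIXED * (V_IN - v_out) / v_out

def pressure_kpa(v_out):
    """Map resistance back to pressure via the assumed R(p) = R0 / (1 + K*p)."""
    r = sensor_resistance(v_out)
    return (R0 / r - 1.0) / K

# Sanity check: at rest the divider reads V_IN * R_FIXED / (R_FIXED + R0),
# which should map back to zero pressure.
v_rest = V_IN * R_FIXED / (R_FIXED + R0)
assert abs(pressure_kpa(v_rest)) < 1e-6
```

Pressing the sheet lowers the sensor resistance, raising `v_out` above its rest value, so any `v_out` above `v_rest` decodes to a positive pressure.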

Now Bao is working on weirder materials. One polymer she developed is much stretchier than human skin: it can be pulled to 100 times its normal length without breaking. This material also heals when cut, without any heat or other trigger. And it can act as a weak artificial muscle, expanding and contracting when an electric field is applied.

With the basic materials and designs in place, she’s working on semiconductors and other electronic materials that have the same healing and stretching prowess. But reinventing the electronic materials won’t be enough: data from these artificial skins has to be delivered to the nervous system in a format that the body can understand. Bao’s group is now working on circuit designs that will send signals to the nervous system, so that electronic skins will one day not only help amputees regain dexterity but also let them feel the touch of their loved ones.

Tue, 9 Aug 2016 16:27:40 +0400
<![CDATA[Sprinkling of neural dust opens door to electroceuticals]]>http://2045.com/news/35032.html35032University of California, Berkeley engineers have built the first dust-sized, wireless sensors that can be implanted in the body, bringing closer the day when a Fitbit-like device could monitor internal nerves, muscles or organs in real time.

Wireless, batteryless implantable sensors could improve brain control of prosthetics, avoiding wires that go through the skull. Video by Roxanne Makasdjian and Stephen McNally.

Because these batteryless sensors could also be used to stimulate nerves and muscles, the technology also opens the door to “electroceuticals” to treat disorders such as epilepsy or to stimulate the immune system or tamp down inflammation.

The so-called neural dust, which the team implanted in the muscles and peripheral nerves of rats, is unique in that ultrasound is used both to power and read out the measurements. Ultrasound technology is already well-developed for hospital use, and ultrasound vibrations can penetrate nearly anywhere in the body, unlike radio waves, the researchers say.

“I think the long-term prospects for neural dust are not only within nerves and the brain, but much broader,” said Michel Maharbiz, an associate professor of electrical engineering and computer sciences and one of the study’s two main authors. “Having access to in-body telemetry has never been possible because there has been no way to put something supertiny superdeep. But now I can take a speck of nothing and park it next to a nerve or organ, your GI tract or a muscle, and read out the data.”

The sensor, 3 millimeters long and 1×1 millimeter in cross section, attached to a nerve fiber in a rat. Once implanted, the batteryless sensor is powered and the data read out by ultrasound. Ryan Neely photo.

Maharbiz, neuroscientist Jose Carmena, a professor of electrical engineering and computer sciences and a member of the Helen Wills Neuroscience Institute, and their colleagues will report their findings in the August 3 issue of the journal Neuron.

The sensors, which the researchers have already shrunk to a 1 millimeter cube – about the size of a large grain of sand – contain a piezoelectric crystal that converts ultrasound vibrations from outside the body into electricity to power a tiny, on-board transistor that is in contact with a nerve or muscle fiber. A voltage spike in the fiber alters the circuit and the vibration of the crystal, which changes the echo detected by the ultrasound receiver, typically the same device that generates the vibrations. The slight change, called backscatter, allows them to determine the voltage.

Motes sprinkled throughout the body

In their experiment, the UC Berkeley team powered up the passive sensors every 100 microseconds with six 540-nanosecond ultrasound pulses, which gave them a continual, real-time readout. They coated the first-generation motes – 3 millimeters long, 1 millimeter high and 0.8 millimeters thick – with surgical-grade epoxy, but they are currently building motes from biocompatible thin films, which could potentially last in the body without degradation for a decade or more.
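The pulse timing reported above implies a concrete sampling rate and duty cycle, which a quick back-of-the-envelope calculation makes explicit (the figures come straight from the article; the calculation itself is ours):

```python
# Arithmetic on the reported figures: one ping every 100 microseconds,
# each ping consisting of six 540-nanosecond ultrasound pulses.
PING_INTERVAL_S = 100e-6   # time between pings
PULSES_PER_PING = 6
PULSE_WIDTH_S = 540e-9     # width of each pulse

readout_rate_hz = 1.0 / PING_INTERVAL_S          # effective sample rate
active_time_s = PULSES_PER_PING * PULSE_WIDTH_S  # transmit time per ping
duty_cycle = active_time_s / PING_INTERVAL_S     # fraction of time transmitting

print(readout_rate_hz)  # 10 kHz effective readout rate
print(duty_cycle)       # ~0.0324, i.e. ultrasound is on only ~3.2% of the time
```

The low duty cycle is part of why the scheme delivers a continual readout without depositing much acoustic energy in tissue.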

The sensor mote contains a piezoelectric crystal (silver cube) plus a simple electronic circuit that responds to the voltage across two electrodes to alter the backscatter from ultrasound pulses produced by a transducer outside the body. The voltage across the electrodes can be determined by analyzing the ultrasound backscatter. Ryan Neely photo.

While the experiments so far have involved the peripheral nervous system and muscles, the neural dust motes could work equally well in the central nervous system and brain to control prosthetics, the researchers say. Today’s implantable electrodes degrade within 1 to 2 years, and all connect to wires that pass through holes in the skull. Wireless sensors – dozens to a hundred – could be sealed in, avoiding infection and unwanted movement of the electrodes.

“The original goal of the neural dust project was to imagine the next generation of brain-machine interfaces, and to make it a viable clinical technology,” said neuroscience graduate student Ryan Neely. “If a paraplegic wants to control a computer or a robotic arm, you would just implant this electrode in the brain and it would last essentially a lifetime.”

In a paper published online in 2013, the researchers estimated that they could shrink the sensors down to a cube 50 microns on a side – about 2 thousandths of an inch, or half the width of a human hair. At that size, the motes could nestle up to just a few nerve axons and continually record their electrical activity.

“The beauty is that now, the sensors are small enough to have a good application in the peripheral nervous system, for bladder control or appetite suppression, for example,” Carmena said. “The technology is not really there yet to get to the 50-micron target size, which we would need for the brain and central nervous system. Once it’s clinically proven, however, neural dust will just replace wire electrodes. This time, once you close up the brain, you’re done.”

The team is working now to miniaturize the device further, find more biocompatible materials and improve the surface transceiver that sends and receives the ultrasounds, ideally using beam-steering technology to focus the sound waves on individual motes. They are now building little backpacks for rats to hold the ultrasound transceiver that will record data from implanted motes.

Diagram showing the components of the sensor. The entire device is covered in a biocompatible gel.

They’re also working to expand the motes’ ability to detect non-electrical signals, such as oxygen or hormone levels.

“The vision is to implant these neural dust motes anywhere in the body, and have a patch over the implanted site send ultrasonic waves to wake up and receive necessary information from the motes for the desired therapy you want,” said Dongjin Seo, a graduate student in electrical engineering and computer sciences. “Eventually you would use multiple implants and one patch that would ping each implant individually, or all simultaneously.”

Ultrasound vs radio

Maharbiz and Carmena conceived of the idea of neural dust about five years ago, but attempts to power an implantable device and read out the data using radio waves were disappointing. Radio attenuates very quickly with distance in tissue, so communicating with devices deep in the body would be difficult without using potentially damaging high-intensity radiation.

A sensor implanted on a peripheral nerve is powered and interrogated by an ultrasound transducer. The backscatter signal carries information about the voltage across the sensor’s two electrodes. The ‘dust’ mote was pinged every 100 microseconds with six 540-nanosecond ultrasound pulses.

Maharbiz hit on the idea of ultrasound, and in 2013 published a paper with Carmena, Seo and their colleagues describing how such a system might work. “Our first study demonstrated that the fundamental physics of ultrasound allowed for very, very small implants that could record and communicate neural data,” said Maharbiz. He and his students have now created that system.

“Ultrasound is much more efficient when you are targeting devices that are on the millimeter scale or smaller and that are embedded deep in the body,” Seo said. “You can get a lot of power into it and a lot more efficient transfer of energy and communication when using ultrasound as opposed to electromagnetic waves, which has been the go-to method for wirelessly transmitting power to miniature implants.”

“Now that you have a reliable, minimally invasive neural pickup in your body, the technology could become the driver for a whole gamut of applications, things that today don’t even exist,” Carmena said.

Other co-authors of the Neuron paper are graduate student Konlin Shen, undergraduate Utkarsh Singhal and UC Berkeley professors Elad Alon and Jan Rabaey. The work was supported by the Defense Advanced Research Projects Agency of the Department of Defense.

Fri, 5 Aug 2016 22:43:07 +0400
<![CDATA[Bad news for Bob the Builder: Watch Hadrian X the robo-builder create an entire house in just two days ]]>http://2045.com/news/35030.html35030It can build an entire house in just two days - and never takes tea breaks.

An Australian firm has revealed the Hadrian X, a giant truck-mounted building robot that can lay 1,000 bricks an hour, gluing them into place.

It can work 24 hours a day, and finish an entire house in just two days.
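The headline figures are easy to sanity-check: at 1,000 bricks an hour around the clock, two days gives a hard ceiling on brick count. The house size used below is an assumed figure for illustration, not one from the article.

```python
# Back-of-the-envelope check on Fastbrick's claimed rate.
BRICKS_PER_HOUR = 1000
HOURS_PER_DAY = 24

bricks_in_two_days = BRICKS_PER_HOUR * HOURS_PER_DAY * 2
print(bricks_in_two_days)  # 48000 bricks maximum over two days

# A modest single-storey brick house might use roughly 15,000 bricks
# (hypothetical figure), which at the claimed rate takes:
ASSUMED_HOUSE_BRICKS = 15_000
hours_of_laying = ASSUMED_HOUSE_BRICKS / BRICKS_PER_HOUR
print(hours_of_laying)  # 15.0 hours of continuous laying
```

So the two-day claim leaves substantial headroom even for larger builds, assuming the machine sustains its quoted rate.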

Mounted on the back of a truck, Hadrian X is simply driven onto a building site, and can put down 1,000 bricks an hour using a 30m boom, allowing it to stay in a single position while it builds.

Fastbrick, the firm behind it, says it could revolutionise building.

CEO Mike Pivac said: 'We are a frontier company, and we are one step closer to bringing fully automated, end-to-end 3D printing brick construction into the mainstream.'

The bricks travel along the boom and are gripped by a clawlike device that lays them out methodically, directed by a laser guidance system.

Mortar or adhesive is also delivered under pressure to the hand of the arm and applied to the brick, so no human intervention is required.

'We're very excited to take the world-first technology we proved with the Hadrian 105 demonstrator and manufacture a state-of-the-art machine.'

Instead of traditional cement, Hadrian X will use a construction glue.

'By utilising a construction adhesive rather than traditional mortar, the Hadrian X will maximise the speed of the build and the strength and thermal efficiency of the final structure,' the firm said.

The Hadrian X can handle different sized bricks, and also cuts, grinds and mills each brick to fit. 

The company describes the robot as '3D automated robotic bricklaying technology.'

Fri, 5 Aug 2016 21:49:30 +0400