/ News
Human Memory, Computer Memory, and Memento
Video Interview: http://spectrum.ieee.org/podcast/robotics/artificial-intelligence/human-memory-computer-memory-and-memento
Transcript:
Hi, this is Steven Cherry for IEEE Spectrum’s “Techwise Conversations.”
Alan Turing dreamt of a computer program that could imitate a person, but we’re nowhere near that yet.
While the goal of computer science is a truly general-purpose problem solver—an artificial intelligence as open-ended and flexible as the human brain itself—the reality of computers today is a collection of specific problem solvers that get better and better within limited domains.
Deep Blue can beat the world champion at chess, but you or I could beat it in checkers. Google does a brilliant job at searching the Web for information, but it can’t answer trivia questions like Watson, the program that beat the world “Jeopardy!” champion but which can’t play checkers either.
My guest today has devoted much of his professional life to the creation of a general problem solver. John Laird is the John L. Tishman Professor of Engineering at the University of Michigan. He’s the leading developer of Soar, S-o-a-r, which stands for state, operator, and result, and he’s the author of a new book, The Soar Cognitive Architecture, published this month by MIT Press.
John, welcome to the podcast.
John Laird: Thanks, Steven.
Steven Cherry: You wrote that one of the things that makes Soar more general—and I’m going to quote here—“traditionally the locus of decision making is the selection of the next rule to fire. In Soar, all matching rules fire in parallel, and the locus of decision making is selecting the next operator.” What does that mean, and is that more brainlike?
John Laird: Well, let’s get to the brain part later and start with what I meant by that. I was contrasting traditional rule-based systems, in which there are lots of rules, and the way the system reasons is to look through the rules, find the rule that best matches the current situation, select it, and then do the actions associated with that rule. Instead, what we’re trying to do in Soar is combine lots of rules at the same time. When it’s in a given situation, many rules will match, and instead of picking one, it will fire all of them. And instead of those rules doing actions, say, in the world, what they’re doing in the first phase is proposing separate actions. Then there’ll be other rules that come along, look at what has been proposed, and evaluate it, saying, “Well, in this situation this operator is better than another one,” or that the expected value of this operator is very high. And then there’s a decision procedure that looks at the information retrieved from those rules and selects an operator. That gives us the chance to bring in knowledge to make the decision about what to do next. In traditional rule-based systems, there’s not this chance to bring in additional knowledge to decide what to do next; it’s really just matching on the conditions. What we wanted to do was make it more flexible, so that you could have knowledge that would impact the selection of the next thing to do. Now, I think that is more humanlike, in that humans are able to look at a lot of different aspects of a situation before they decide what their next action will be. It’s not just a reflex of what to do, which is what you end up with in rule-based systems. Rule-based systems end up being very reactive; they don’t allow the system these multiple sources of knowledge. So one of the things we’ve done recently is add more memories to Soar, so that it can not just look at rules to determine what to do next but can also access these other memories, which provide additional information about what to do next.
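In code terms, the cycle Laird describes looks roughly like the minimal Python sketch below: all matching rules fire in parallel to propose operators, other rules attach preferences to those proposals, and a fixed decision procedure picks one. The rule contents, operator names, and state fields are illustrative assumptions, not Soar’s actual syntax or API.

```python
# A minimal sketch of Soar's propose/evaluate/decide cycle.
# Illustrative only: the rules and operator names are made up.

def propose(state):
    """Phase 1: all matching proposal rules fire in parallel, each suggesting an operator."""
    proposals = []
    if state["thirsty"]:
        proposals.append("go-to-well")
    if state["tired"]:
        proposals.append("go-to-shelter")
    return proposals

def evaluate(state, proposals):
    """Phase 2: evaluation rules fire in parallel, attaching preferences to the proposals."""
    preferences = {}
    for op in proposals:
        if op == "go-to-well" and state["thirst_level"] > 5:
            preferences[op] = "best"
        else:
            preferences[op] = "acceptable"
    return preferences

def decide(preferences):
    """Phase 3: a fixed decision procedure selects the next operator from the preferences."""
    best = [op for op, pref in preferences.items() if pref == "best"]
    if best:
        return best[0]
    return next(iter(preferences), None)

state = {"thirsty": True, "tired": True, "thirst_level": 7}
print(decide(evaluate(state, propose(state))))  # -> go-to-well
```

The point of the contrast: in a classic rule-based system the “decide” step would happen inside rule matching itself, whereas here additional knowledge (the evaluation rules) gets a say before anything is executed.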
Steven Cherry: Maybe an example would help here. You describe in the book something you call “Well World.” This is a hypothetical environment in which there are two water wells and a shelter and a thirsty computer.
John Laird: Yes, and we just used that for some experiments on how the system could learn when it should look back into its prior memories to help it make a decision in the current situation. In Well World, the system would be confused if it just looked at the current situation and didn’t consider where it had been in the past. We designed Well World so that whenever the system had to make a decision, it needed to ask, “Well, what happened in the past, and how should that influence my current decision?” That made it possible for it to learn through experience when it should ask about what it has seen in the past and when it should just make a decision based on what it sees right now.
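One toy way to picture that experiment, as a Python sketch: treat “consult episodic memory first” as just another choice whose value the agent learns from experience. The actions, payoffs, and simple learning rule below are my illustrative assumptions, not the actual Well World setup.

```python
import random

# Toy sketch: the agent learns, from reward alone, whether querying its
# episodic memory before deciding pays off. All numbers are assumptions.

actions = ["act-on-current-percept", "query-episodic-memory-first"]
q = {a: 0.0 for a in actions}   # learned value of each choice
alpha, epsilon = 0.1, 0.2       # learning rate, exploration rate

def reward(action):
    """Assumed payoffs: decisions informed by past episodes (e.g., which
    well was dry on the last visit) succeed more often than guesses."""
    p_success = 0.8 if action == "query-episodic-memory-first" else 0.3
    return 1.0 if random.random() < p_success else 0.0

for _ in range(1000):
    a = random.choice(actions) if random.random() < epsilon else max(q, key=q.get)
    q[a] += alpha * (reward(a) - q[a])

print(q)  # the memory-query action ends up with the higher learned value
```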
Steven Cherry: Your book talks about Frogger, and I have to say that caught my eye. This is the classic video game in which a player has to maneuver his frog across a busy highway and avoid all the cars rushing past it, and some listeners will have seen the classic Seinfeld television episode in which George has to run a real-life gauntlet of crossing the street; ironically, he’s pushing an old Frogger arcade machine. You call Frogger a, quote, “very difficult problem,” and that’s even after you narrow it down to just crossing the road once. A 6-year-old can win at Frogger. Why is it so difficult for a computer?
John Laird: Well, I don’t know about a 6-year-old winning at Frogger the first couple of times. One of the things we’re trying to do here is start with a system that doesn’t know very much about Frogger at all and then, through trial and error, by playing the game and finding out what works and what doesn’t, have it learn to play better. And one of the components of Soar that we’re illustrating in that example is what we call “mental imagery.” Most computer systems or AI systems do not have the ability to create internal images of the situations they’ve been in in the past and use those for reasoning. What our Frogger agent does is imagine, “Well, if I move in this direction, will I hit one of the logs, or will I get eaten by a fish?” It takes that imaginative step and uses it to evaluate whether a move is going to be useful or not. And then only through experience, by either succeeding or failing, does it end up learning how to play Frogger. And you have to realize that this is a system that hasn’t played lots of video games before, like a 6-year-old has; this is a system that’s learning its first video game, and that makes it very challenging.
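A rough sketch of that “imagine, then evaluate” step in Python: before acting, the agent predicts each candidate move on an internal copy of the scene and rejects moves whose imagined outcome is fatal. The one-lane world, hazard model, and scoring below are illustrative assumptions, not Soar’s actual imagery system.

```python
# Imagery-style lookahead for a drastically simplified Frogger:
# predict each move internally, score the imagined scene, act on the best.

def imagine(frog, cars, move):
    """Predict the next scene in the agent's head, without acting in the world."""
    dx = {"left": -1, "right": 1, "stay": 0}[move]
    next_frog = frog + dx
    next_cars = [c + 1 for c in cars]   # imagined: cars scroll one cell per tick
    return next_frog, next_cars

def evaluate(next_frog, next_cars, goal=9):
    """Score the imagined scene: collisions are fatal; progress toward the goal is good."""
    if next_frog in next_cars:
        return -100                      # imagined collision
    return goal - abs(goal - next_frog)  # closer to the far side is better

frog, cars = 4, [3, 6]
scores = {m: evaluate(*imagine(frog, cars, m)) for m in ("left", "right", "stay")}
print(scores, "->", max(scores, key=scores.get))  # staying put gets hit; move right
```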
Steven Cherry: You mentioned memory before, and I guess how software handles memory is one of the really important things about this Soar architecture. You write that your work was in part inspired by the movie Memento. The movie came out in the year 2000; it’s about a guy who has no short-term memory at all. He forgets things minutes after they happen, so he writes himself notes, and for really important things, he tattoos himself so he can’t possibly lose the note. And I should say that the drama comes from the fact that he’s hunting for somebody he thinks killed his wife, and he has to try and keep track of all these clues. How did Memento inspire your system at all?
John Laird: Well, a little correction: what he has lost is the ability to consolidate short-term memory into long-term memory. He can have a memory of the current situation, look away, and remember it for a little while, but he doesn’t have those sort of medium-term memories. And if we look at the AI systems I’ve been developing, and everybody else has been developing, they don’t have the ability to automatically save the history of what they’ve experienced over time. What the movie did for me was show a human who lacks an ability that none of our AI systems have either, and he’s a cognitive cripple. He has to do all these things with his body, or with notes, in order to try to survive in the world. How can I expect to create an AI system that has the capabilities of humans when I’m missing this key component of human-level cognition, which is episodic memory? That, among other things, was one of the inspirations for adding it to Soar. And I had a similar inspiration for adding what I talked about as “mental imagery”: when I solve problems, I’m often creating images of that problem in my mind (at least that’s how it appears to me when I think about it), whereas the original version of Soar could only have very abstract symbolic descriptions of situations, sort of describing them in language. That gets you so far, but there are a lot of problems you can’t solve unless you can also do that kind of imagery. So that’s been a real inspiration for our work: to look at where people have certain deficits that really hurt their abilities in life and ask, “Do our AI systems have those same deficits?” They seem to, so we should be looking at how to add those capabilities to our AI systems.
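In code terms, the missing capability is roughly an append-only log of experiences plus cue-based retrieval. The little Python class below is a guess at the shape of such a memory, not Soar’s actual episodic module; the state fields and matching scheme are assumptions.

```python
import time

# Minimal sketch of an episodic memory: automatically record a snapshot of
# each experienced state, then retrieve the best-matching episode for a cue.
# Illustrative only; not Soar's implementation.

class EpisodicMemory:
    def __init__(self):
        self.episodes = []                       # append-only history

    def record(self, state: dict):
        """Consolidation: every experienced state is saved automatically."""
        self.episodes.append((time.time(), dict(state)))

    def retrieve(self, cue: dict):
        """Return the most recent episode that best matches the partial cue."""
        def match(episode):
            _, state = episode
            return sum(state.get(k) == v for k, v in cue.items())
        return max(reversed(self.episodes), key=match, default=None)

mem = EpisodicMemory()
mem.record({"place": "well-1", "water": "empty"})
mem.record({"place": "well-2", "water": "full"})
print(mem.retrieve({"water": "full"}))           # -> the episode at well-2
```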
Steven Cherry: The imagery thing reminds me of the movie Inception, and the two movies have the same director, Christopher Nolan.
John Laird: My favorite director, by the way.
Steven Cherry: [laughs] No surprise. John, you say the “Jeopardy!”-playing computer Watson, which did so extraordinarily well on the show, even Watson isn’t a general-purpose problem solver in the way that Soar is, and yet the researchers there at IBM who worked on Watson, they next set their sights on medical diagnostics, and Watson was pretty quickly modified to attack that new problem area, and I gather they’re having quite a bit of success. Soar can’t win at “Jeopardy!” and it can’t do medical diagnosis. Maybe some specificity is a good thing.
John Laird: Oh, I think it’s very important. I think having a system that can use lots and lots of what we call “domain knowledge” is going to be critical for the success of these systems. And there’s another research project, called Cyc, that’s been going on as long as Soar, where the goal was to encode lots of very specific—well, a combination of general and specific—knowledge. And I think both Cyc and Watson are examples of the other side of what you need in order to get intelligence. So I would in no way say that that isn’t critical; it’s just that, tactically or strategically, what we wanted to do in our research was to go after the more general aspects of intelligence first. Then, I think, we have to incorporate the same kind of task-specific or domain-specific knowledge that they’re including in Watson. So maybe someday we will want to have a lot of what’s in Watson in our systems as well. The other side of it is that we are just a research project going along at a university, whereas Watson really had a huge team that went and mined a lot of that knowledge, and I don’t have the capability of doing that right now.
Steven Cherry: Yeah, and that’s true of some other domain-specific areas of greatprogress, such as smart wheelchairs and self-driving cars and language translation. Do you see any particular areas where Soar would be particularly useful, or do you think some of these other systems would just do better if they built themselves on a backbone of Soar?
John Laird: Well, I think what we’re going to see, and what I’d like to see, is hybrids, where Soar provides more general problem solving, similar to what happens when a person gets into a new problem, but there’s also access to these very smart knowledge bases, so the system has been preloaded with that knowledge, has that expertise, and can use it; you sort of get the best of both worlds. Down the road, there’s a subfield of robotics called cognitive robotics, where you want the robot to know something about the person it’s interacting with and to be able to learn through interaction with a human. I think that’s going to be an area where we can have some impact. And one of the projects we’re working on is teaching robots new tasks and new language through interaction with a human. That’s an area where we’ll see a lot of growth in the future.
Steven Cherry: It strikes me that the self-driving car could really benefit from learning from its memories in the way that you describe, but also from imagery.
John Laird: Well, I think they already have some of those components. If you look at the internal guts of those programs, they’re building representations that are similar to, or possibly even more sophisticated than, the imagery systems we have. But they’re really focused on the driving task, so other components of them are not as general as what we’re trying to do right now.
Steven Cherry: Very good. Well, John, it’s going to be an amazing world for our grandchildren to live in, and we have researchers like yourself to thank for it, so thanks for it, and thanks for joining us today.
John Laird: Well, thank you for calling me up, and I enjoyed this very much.
Steven Cherry: We’ve been speaking with computer scientist John Laird about artificial intelligence software that tackles problems in a very human way. His new book, The Soar Cognitive Architecture, is being published this month by MIT Press. For IEEE Spectrum’s “Techwise Conversations,” I’m Steven Cherry.
Source: http://spectrum.ieee.org/podcast/robotics/artificial-intelligence/robots-and-human-evolution
/ About us
The 2045 Initiative was founded by Russian entrepreneur Dmitry Itskov in February 2011 with the participation of leading Russian specialists in the fields of neural interfaces, robotics, and artificial organs and systems.
The main goals of the 2045 Initiative: the creation and realization of a new strategy for the development of humanity that meets global civilizational challenges; the creation of optimal conditions promoting the spiritual enlightenment of humanity; and the realization of a new futuristic reality based on 5 principles: high spirituality, high culture, high ethics, high science and high technologies.
The main science mega-project of the 2045 Initiative aims to create technologies enabling the transfer of an individual’s personality to a more advanced non-biological carrier, and extending life, up to the point of immortality. We devote particular attention to enabling the fullest possible dialogue between the world’s major spiritual traditions, science and society.
A large-scale transformation of humanity, comparable to some of the major spiritual and sci-tech revolutions in history, will require a new strategy. We believe this to be necessary to overcome existing crises, which threaten our planetary habitat and the continued existence of humanity as a species. With the 2045 Initiative, we hope to realize a new strategy for humanity's development, and in so doing, create a more productive, fulfilling, and satisfying future.
The "2045" team is working towards creating an international research center where leading scientists will be engaged in research and development in the fields of anthropomorphic robotics, living systems modeling and brain and consciousness modeling with the goal of transferring one’s individual consciousness to an artificial carrier and achieving cybernetic immortality.
An annual congress, “The Global Future 2045,” is organized by the Initiative to provide a platform for discussing mankind’s evolutionary strategy based on technologies of cybernetic immortality, as well as the possible impact of such technologies on global society, politics and economies of the future.
Future prospects of "2045" Initiative for society
2015-2020
The emergence and widespread use of affordable android “avatars” controlled by a “brain-computer” interface. Coupled with related technologies, avatars will give people a number of new abilities: to work in dangerous environments, perform rescue operations, travel in extreme situations, etc.
Avatar components will be used in medicine for the rehabilitation of fully or partially disabled patients, giving them prosthetic limbs or restoring lost senses.
2020-2025
Creation of an autonomous life-support system for the human brain linked to a robot, an “avatar,” will save people whose bodies are completely worn out or irreversibly damaged. Any patient with an intact brain will be able to return to a fully functioning bodily life. Such technologies will greatly expand the possibilities of hybrid bio-electronic devices, creating a new IT revolution and making all kinds of superimpositions of electronic and biological systems possible.
2030-2035
Creation of a computer model of the brain and human consciousness, with the subsequent development of means to transfer individual consciousness onto an artificial carrier. This development will profoundly change the world; it will not only give everyone the possibility of cybernetic immortality but will also create a friendly artificial intelligence, expand human capabilities, and provide opportunities for ordinary people to restore or modify their own brains multiple times. The final result at this stage could be a real revolution in the understanding of human nature that will completely change the human and technical prospects for humanity.
2045
This is the time when substance-independent minds will receive new bodies with capacities far exceeding those of ordinary humans. A new era for humanity will arrive! Changes will occur in all spheres of human activity – energy generation, transportation, politics, medicine, psychology, sciences, and so on.
Today it is hard to imagine a future when bodies consisting of nanorobots will become affordable and capable of taking any form. It is also hard to imagine body holograms featuring controlled matter. One thing is clear however: humanity, for the first time in its history, will make a fully managed evolutionary transition and eventually become a new species. Moreover, prerequisites for a large-scale expansion into outer space will be created as well.
Key elements of the project in the future
• International social movement
• Social network immortal.me
• Charitable foundation “Global Future 2045” (Foundation 2045)
• Scientific research centre “Immortality”
• Business incubator
• University of “Immortality”
• Annual award for contribution to the realization of the “Immortality” project