
03.04.2012

The engineering challenge to make minds substrate-independent via whole brain emulation within our lifetimes

Randal Koene heads carboncopies.org, the outreach and roadmapping organization for action toward Advancing Substrate-Independent Minds (ASIM)

So, my name is Randal Koene, and part of the job that I do is I work with a company called Halcyon Molecular, to build basically the world's most accurate and longest read-length genome sequencing technology, which is something we need people with lots of different skills for, and my background, for example, is physics, electrical engineering, information theory, computational neuroscience and neural engineering. But in addition to that, I am the founder and organizer of an SIM group [Substrate-Independent Minds], which I'll explain in a second, called Carbon Copies. What I'm going to try to explain today is what that group does, which is to bring together the experts and projects needed to do something called substrate-independent minds. And I want to tell you what that is, why it's important, and how it can be done. Specifically I want to talk to you about how feasible it is.

So, in groups like this, we often talk about things like life extension or life expansion, augmentation, matters like that, and when we do, I think it's important to consider what the objective really is, what it is that we're trying to do. What I mean is: what are we trying to extend, or augment? So I want to take a little step back and think about what we are, what you are. If you look at these slides, you can see in the background a lot of different experiences a person can have. So experiences are a part of you. Another part of you is your body. It's the senses that you have, it's the actions that you can take. And then on the right you see all these different expressions; I'm trying to represent the idea that each one of us has unique characteristic responses to things, so if you put us in the same situations, with the same input, different individuals react slightly differently; there are unique responses that we have.

So where does all of this actually take place? Now all of it takes place in the mind. That’s where we do all of this stuff. Really, I can’t even touch anything in the universe, that’s how unreal that is. Imagine that I’m trying to touch this table. If you think about the atoms in my finger and the atoms in the table, the protons and neutrons in there, they never actually collide. It’s mostly space. There’s some forces there, there’s electrical activity that goes up my arm, ends up in the brain, and then I don’t notice anything, I don’t realize that, until something’s been processed. So, when we talk about things like life extension, what we really mean is that we’re trying to safeguard those processes, that being, that experience of being that’s going on inside our minds. And there are really two different ways that you can do it. One is that you can say: we know that right now all of this processing is taking place in the brain. So what you need to do is you need to maintain the brain. You need to make sure it keeps functioning well. You need to keep the body functioning well, so that it keeps the brain going. And you need to make sure that your environment stays OK, because the body depends on that environment, it’s not built for every type of environment. Imagine if we suddenly lost the atmosphere. Well, this body would die, and the brain would go, and everything would stop. 

But we also just said that what we're really talking about is the processes going on in the mind, that's the experience of being. So the alternative is that you could address that directly, do what you would do if you were treating it like a valuable piece of information, or a valuable program. What would you do with a valuable program? The first thing you would do is make a back-up. You would archive something. The next thing you might want to do is fix errors if they're there, or you may want to upgrade it to run on the next best hardware. So you want access to the code. That's really what I'm talking about. Substrate-independent minds is about data acquisition. It's about access. The notion being that right now, these processes in my mind are running on one kind of substrate. But if they can run on many different kinds of substrates, then we call them substrate-independent. By analogy, think of computer programs written as platform-independent code: they can run on different kinds of platforms.

Which of these two strategies is better? Well, I think they're both valuable, and they should be looked at very concretely. If you're thinking about where you would put resources, where you would put effort, what we can achieve within our lifetimes, then you need to look very concretely at the plans that are there. So that's what I hope to do today. But before I rush into that, just two quick terms. What you hear a lot of people in some communities talking about with technologies like this is the term "mind-uploading". Mind-uploading is a perfectly good term, but it doesn't say anything about what you do once you've put your data somewhere, once you've uploaded. So it's not very objective-oriented; it's really about the transition from this biological substrate to some other one. And then, where would you like to end up? If you're trying to do substrate-independent minds, there are many ways of doing it. Preferably, in the ideal case, on every substrate you went to, the code would be optimized for that substrate; it's like compiling a program. But we don't really understand it that well. If you ask a neuroscientist, do you understand this part of the brain, they will always say no. Because they know that what you're actually asking is, do you understand all the different strategies the brain is using at different levels of the hierarchy, right down to the neurons. And of course we don't. So we can't use that approach right now. But what we can do is go down to the bottom level, where we know more about how neurons work and how they pass electrical information to one another, and this is what we call whole brain emulation: the idea that you can emulate the neuroanatomy and the neurophysiology at that level.

Now this isn't an entirely new field. I know it hasn't been talked about as much as the biological methods, like when you hear Aubrey de Grey talking about the SENS Foundation and things like that, but there has been work going on at least since 1994 by researchers dedicated to the pursuit. There was the Mind Uploading Research Group that I inherited from Joe Strout, who's the person in grey up there; the knife-edge scanning microscope was developed by Bruce McCormick, who's right next to him; and then in 2007 we had an Oxford workshop on whole brain emulation that a roadmap came out of, published by Anders Sandberg, who you see in the picture down here, next to Suzanne Gildert and myself. And many other interesting people: Ted Berger and Ken Hayworth, Winfried Denk, Sebastian Seung, who popularized the idea of the connectome, all the connections of the brain being important, at the 2008 Neuroscience conference, Peter Passaro, Young-Don Son, Zang-Hee Cho, Jeff Lichtman, Diane Becker, and then of course the formation of Carbon Copies down there with Suzanne Gildert and myself. Since then it's become much more mainstream. We have people like optogenetics star Ed Boyden working on brain emulation, extracting circuitry for neural function emulation, and agencies such as IARPA [Intelligence Advanced Research Projects Activity] are interested in our work.

Now if you want to do substrate-independent minds, there are different routes you can take. And I’m only going to talk about one today. Another really interesting one, besides the whole brain emulation route, is the brain-computer interfaces route, but it has a lot of other driving factors behind it, because it has a lot of commercial potential at every stage. So it was so important that I thought that just here, in 2012, we’ll do a whole separate workshop on that, so I’m leaving that for then.

Now if you're going to try to do something like whole brain emulation, then you need to make representations of things, and every time you make a model or a representation, you need to choose a resolution, some level at which you're going to describe the elements and describe what they do. That means you need to characterize them. And if you're characterizing them at a resolution, that means that below that level, below that resolution, you don't talk about the mechanisms that are going on inside; instead, you look at their input and their output, treating each element like a black box and describing the functions that are there. And you want to capture all of them. You don't want to miss any latent function. In these boxes here are just some examples. At the top you see how Wu et al. did that for sensory neurons; at the bottom you see the famous Izhikevich neuron model, which uses several parameters to easily model many different biological types of neurons. And on the right is some work of my own, where I focus mostly on the currency of the brain, which is the spikes in the system. After all, whether a neuron spikes, and the exact timing of that spike, is what determines both where memories are laid down, so what things are remembered, and which muscles are activated, so how we can interact with the environment around us, even just by speaking.
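The Izhikevich model mentioned above is compact enough to sketch in a few lines. This is a minimal Python rendering of the published two-equation model with the standard "regular spiking" parameter set; the input current and simulation length are arbitrary illustrative choices, not values from the talk:

```python
# Minimal Izhikevich (2003) neuron: two coupled equations plus a reset.
# Parameters a, b, c, d are the standard 'regular spiking' set; the input
# current I and duration T are arbitrary illustrative choices.
def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, T=1000.0, dt=0.5):
    """Simulate one neuron for T ms; return the list of spike times (ms)."""
    v, u = c, b * c              # membrane potential (mV), recovery variable
    spikes = []
    for step in range(int(T / dt)):
        # v' = 0.04 v^2 + 5 v + 140 - u + I ;  u' = a (b v - u)
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:            # spike peak reached: record and reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

spikes = izhikevich()
print(len(spikes), "spikes in one second")
```

Changing the four parameters reproduces bursting, chattering, fast-spiking and other firing classes, which is exactly why the model is popular for large simulations: one cheap equation pair covers many biological neuron types.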

So, these are important issues if you want to model something correctly. Then, above that level, the important thing is to capture emergent properties. You have these elements, you characterize them, and what you need to do is understand how they all work together, because this gives you emergent function. And a good way to find out what they're doing together is, first of all, to know how they're connected. That's why extracting the human connectome, the connections in the brain, is really important.

Now, you can do this black box selection, choosing which level you want to work at, in many different ways. In SIM this happens too, depending on the route people choose. There's a route called loosely-coupled off-loading, or at least that's what we call it generally speaking, where the entire body, or person, is considered a black box. What you're trying to do is model their behavior, model how they act, so you could basically make a simulacrum of them and say this is that person. It means you need video recordings, audio recordings, self-report, maybe an AI that learns how to interpret what you're doing. This is very similar to a method called the Bainbridge-Rothblatt model, which is used for trying to create what they call an upload.

Now there are differing opinions about that. There's also the next level: you can take the entire brain, or parts of the brain, as the black box. This happens, for example, when you have a cognitive architecture that you consider an interesting or correct architecture for the human brain, and you say this area does that, that area does this, and you want to model how each one of them works, and you personalize it so that it seems like one specific person. But then, at the next level, and this is where it gets really interesting, you can take either neurons or parts of neurons, like the morphology of neurons, as the black boxes. This is what's done in computational neuroscience and neuroinformatics, and it's the level that I'm going to be talking about, because it's the most concrete and the most usable right now for whole brain emulation. These levels are also interesting for brain-computer interfaces, of course.

Now if you're going to do whole brain emulation, and you look at it as a strategy, as a roadmap, there are really four big requirements, four things you need to do. One of them is to choose your resolution and scope, as I just said. You need to validate that; you need to do some tests on that. I know a few people who are working on this, and it's very interesting work. You also need to extract that connectome, the structural connectome, where things are connected, as I was just saying. And because you need to characterize all of those elements that you chose at that bottom resolution throughout that connectome, you basically need to get a functional connectome. And finally, because you want to reimplement this, you need some sort of platform that can do the emulation.

OK, just a quick word on function and structure. That's something that often gets brought up about neural networks and made to look rather mystical, but of course function and structure and their interaction exist in every kind of system that we have; think for example about a computer chip. It's full of transistors, those are the basic elements, and they are arranged in a certain structure. If we're looking at scope and resolution, then there are sometimes shortcuts that we would like to take, because it's a very difficult problem, and two different kinds of shortcuts have been contemplated. One of them is to look only at the function. Take neurons, record from the neurons, and if you record from many neurons at the same time, then instead of looking at how they're connected by actually looking at those connections, you can derive connectivity, because you see how they interact. So if you apply something like Granger causality, for example, you can try to calculate a functional connectivity map.
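To make the functional-connectivity idea concrete, here is a toy sketch in Python. It uses lagged correlation on synthetic spike trains rather than a full Granger causality analysis, which is a deliberate simplification; all the spike statistics are invented for illustration:

```python
import numpy as np

# Synthetic spike recordings (1 ms bins): neuron B tends to fire 2 ms after
# neuron A, neuron C is independent background. A real pipeline would apply
# Granger causality or similar to real recordings; this lagged correlation
# just shows the intuition behind deriving a functional connectivity map.
rng = np.random.default_rng(0)
T, delay = 5000, 2
spikes_a = (rng.random(T) < 0.05).astype(float)
spikes_b = np.roll(spikes_a, delay) * (rng.random(T) < 0.9)  # follows A
spikes_c = (rng.random(T) < 0.05).astype(float)              # unrelated

def lagged_corr(x, y, lag):
    """Correlation between x(t) and y(t + lag)."""
    return float(np.corrcoef(x[:-lag], y[lag:])[0, 1])

fc_ab = lagged_corr(spikes_a, spikes_b, delay)  # strong: infer edge A -> B
fc_ac = lagged_corr(spikes_a, spikes_c, delay)  # near zero: no edge
print(round(fc_ab, 2), round(fc_ac, 2))
```

The mislabeling risk described next follows directly from this setup: a strong lagged correlation can also come from a hidden common input, so observation time and coverage matter enormously.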

The problem with this is that you can easily mislabel function, because if you want to study a very complex system that has a lot of elements inside of it, and you want to see everything it may have remembered and how it can respond, you have to observe it over a very long period of time, perhaps since birth. The other shortcut is to look only at structure. What you can do is look at the morphology of the neuron, how it looks. You've studied many neurons that look the same way, and you know that they generally have a certain kind of receptor channel, so that they can use certain neurotransmitters. You know that they respond in a certain way, so you can build a library, where you can map from this morphology to that kind of function, and you can map to parameter distributions. And then, if you have a very detailed morphological model, you can make what you see on the right there, which is a compartmental model: you build a very large morphological model of what the neuron is like, and each of the little compartments is basically an electrical circuit model. You set the parameters according to what you found in your library, and then you hope that the entire system works.
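A compartmental model of the kind described above can be sketched at its smallest useful size: two passive compartments, each an RC circuit, coupled by an axial resistance. All parameter values here are illustrative placeholders, not figures from any library of measured neurons:

```python
# Minimal two-compartment passive neuron model: soma and dendrite, each an
# RC circuit (membrane resistance R_m, capacitance C_m), coupled through an
# axial resistance R_a. All values are illustrative placeholders.
E_rest = -70.0                    # resting potential, mV
R_m, C_m, R_a = 10.0, 1.0, 5.0    # MOhm, nF, MOhm
dt, T = 0.01, 200.0               # timestep and duration, ms
I_inj = 1.0                       # nA injected into the soma
v = [E_rest, E_rest]              # [soma, dendrite] voltages

for _ in range(int(T / dt)):
    axial = (v[1] - v[0]) / R_a                       # dendrite -> soma
    v[0] += dt * ((E_rest - v[0]) / R_m + axial + I_inj) / C_m
    v[1] += dt * ((E_rest - v[1]) / R_m - axial) / C_m

print(round(v[0], 2), round(v[1], 2))  # both settle above rest, soma higher
```

Real compartmental models have thousands of such compartments per neuron, each with active ion-channel conductances drawn from exactly the kind of morphology-to-parameter library the talk describes; the coupling arithmetic stays the same.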

Now, the problem with this is that measurement is never quite precise, and we also need to understand that the library may not be a one-to-one mapping. Is there always just one morphology that maps to just one type of function? It may break down there. And then there are errors. When we talk about error in neural networks, one of the things they're famous for is that they deal well with error. If you have random errors, the system is fairly robust. But that's not true for systematic errors, or for cumulative errors. With electron microscopy, for example, you have areas that are out of focus, or where you don't know exactly what your resolution is, so you may be measuring things wrong. The same when you're cutting slices of brain: the knife has features in it, so you get characteristic errors, and those accumulate.

Now, it can be very difficult to tune a large system like that. If you take the entire thing and just try to tune it, with hundreds of billions of neurons in it, then this becomes an optimization problem that is way too difficult, even for quantum computers when they eventually come along. We can look at it, or at least explore it, using something like this. This is a modeling environment that I originally built to explore the emergent functions that pop right out of the structure as neurons grow and connect into networks. But it can also be used to generate models where we know exactly what we're putting in, in terms of function and in terms of structure, so that by looking at what kinds of errors tend to occur, you can find out how bad it is when you get these cumulative errors.

But in the end, what you really need to do is simplify the problem. This is a problem of system identification, and what you can do in system identification to make it tractable, to make characterization possible, and to make the whole problem computationally feasible, is to make it a smaller problem, to simplify it. You want to take subsystems out of there for which it's easy to describe the input-output functions, where it's easy to get a characterization. That means you need the functional responses at that level, so you need high-resolution functional characterization, functional measurements, not just structure. And that's what I'm trying to get to: we need both of those, structure and function. And there are real projects going on to do this, for whole brain emulation. You see here the four yellow requirements, and on the bottom, I didn't put in the projects for the left-most one, but that's OK. Anyway, on the bottom you see a number of projects that are going on, and I'm only going to talk about two because of time constraints today, but they're all very interesting, so I'll quickly run through what they're about.

If you want to get the structural connectome out, the obvious thing to do is to say: this is structure, so it's spatial, it's something you want to look at. So what you can do is slice it really thin, look at the brain under an electron microscope, and reconstruct. I'll talk a bit about that. But the alternative is to say that what we're really interested in is the connections, and not all of this other messy stuff. So Anthony Zador at Cold Spring Harbor, and Ed Callaway, are working on methods that use a virus to transfect neurons en masse and to deliver unique DNA barcodes to the presynaptic and postsynaptic sites. Then when you pull out the tag that bridges the presynaptic and postsynaptic sites, you pull out both barcodes. It's like pulling out pointers that point to each other, saying this neuron's connected to that neuron, and that neuron's connected to this one. It's an interesting approach.

Functionally, you also have a few different ways of going about things. There's a general idea of hierarchical measurement that we call a demux-tree, for demultiplexing, and a specific implementation of it, where Rodolfo Llinás came up with the idea of making nanowires that you can push through the capillary system in the brain, so that it reaches every neuron and can measure from them. The nanowires actually exist; they've been developed at the New York University School of Medicine. But there is still no way of actually getting them to branch like that. Also, they consume a lot of volume in the brain, which is a bit of a problem.

On the other hand, you again have a biological approach, like the one I just described from Anthony Zador. The good thing about biology is that you can easily apply it en masse, in large amounts, and it's also already at the right scale. So there's this thing called the molecular ticker-tape, which is a project that Ed Boyden at MIT, George Church at Harvard University, Konrad Kording at Northwestern and Halcyon Molecular are working on. The idea is that inside the neurons, you can record voltage events with voltage-gated channels, and you can affect the writing onto DNA, DNA being used as a medium for storing information, telling you when an event occurred. Then all you have to do is pluck out that DNA and sequence it, and you know when activity has been going on, and you can do this in many neurons at the same time. The problem with this approach is that biology is still kind of difficult for us to work with as an engineering project; there's a lot of searching around for the right tools to use. What we'd really like to do is work at the cellular scale and be able to engineer to our hearts' content. That's kind of a third approach, which I'll talk about specifically. You see it already shown as a picture over there.

It would be nice if we could have hardware that's actually designed for the problem, co-designed with what we're trying to emulate. So neuromorphic chips are a really good thing coming out. Here's just one example, from the SyNAPSE project that IBM is doing for DARPA.

OK, so volume microscopy, or tape-to-SEM, which is the approach of actually looking at the tissue to get the structure out. This is something that Winfried Denk has been doing in Germany for a while with something called SBF-SEM, Serial Block-Face Scanning Electron Microscopy. He takes a block of brain tissue, takes an image, then ablates off a piece of the surface, takes another image, image, image, all the way down. The problem with this approach is that it doesn't work very well for large volumes.

But Ken Hayworth has been working at Harvard for many years on building a system that can deal with the whole volume of the brain. That's called tape-to-SEM; it used to be called the ATLUM, the Automatic Tape-collecting Lathe Ultramicrotome. The idea is that a diamond knife cuts off pieces of this block of brain tissue and puts them on a tape. The tape can be stored, as you see on the right over there, although it looks a bit messy. You have random access to all of those pieces, so that you can do microscopy on them at any time.

Now, when you look at what these slices actually look like, you can see, inside the red square here on one of these images, where a synapse is actually approaching a dendrite: we've got that dark sort of area where the two meet, and you could reconstruct the morphology if you had many of these slices on top of one another. You can even see, where the arrows are pointing, circles that are vesicles containing neurotransmitters, so you can even get an idea of what the chemical strength of the synapse is just by looking at this.

Here you see an example of reconstructions like this, where you eventually get the whole cell. With the help of Winfried Denk, Briggman et al. and Bock et al. recently published two papers in Nature in 2011 in which they explored this method: Briggman et al. worked in the retina, Bock et al. in the visual cortex and the visual system. What they did, Briggman for example, was first look at the retinal cells functionally, observing how they operate, which receptive fields they have, what they were sensitive to. Then he did the serial reconstruction, used that to predict what they would be sensitive to, and found that they could predict that function from the structure. So they verified that this is indeed possible when you know a lot about the system.

OK, so now we'd like to be able to do something similar functionally. Something that we are rather good at is working with integrated circuit technology. That's something we have experience with. We know how to make hierarchies of systems that work, communications, how to do aggregation of data, measurement, etc. When you look at what's possible right now with the resolution available in integrated circuit technology, if you want to get down to the cellular level and you make a circuit the size of a red blood cell, so 8 microns, you can build something that can be powered, for example, by distributed infrared radiation at wavelengths between 800 and 1,000 nanometers, in what we call a transparency window for tissue, where it is not strongly absorbed. So you could use that for communication or for power. Also, if you want to do passive communication, it's like RFID; you know RFID tags, which work with radio frequencies. If you want to do this in tissue, again you can go to infrared. And Al McGuire at MIT is doing this specifically for these kinds of ICs that they want to use inside the body.

So we've got both communication and power, and you can fit about 2,300 transistors on something this size; that's the same number of transistors as in the original Intel 4004 CPU. And with today's technology you can actually do four times as much, and put on as many transistors as were in the guidance system of the original cruise missiles.
Now you also need to make this work in the body, so it needs to be biocompatible. The easiest thing is to just encase it in a little blob of silicon. But you could also embed it, for example, in an artificial red blood cell, which is basically a constructed protein shell that can be functionalized so that we can use it in many different ways.

But again, this isn't exactly everything we want. These hubs are complex, they can do data aggregation, they can do all sorts of tasks, but they're still fairly large. They work inside the vasculature, which leads to every neuron, but we'd really like to work outside the vasculature as well, in the interstitial spaces between the cells. For that you need something smaller. And this is where it's nice that with integrated circuits we can easily make these kinds of hierarchical systems, or teams.

So you can make, for instance, 2-micron ones, which have less transistor surface but just enough intelligence to do something simple like sensing or stimulating when required, as long as they're in contact with such a hub.

What you end up with is basically a cloud of computation that you can use in the brain concurrently with its activity in vivo. And the nice thing about this, because it doesn't have all those long wires but is just nodes, is that even if you had one of those little chips for each one of the neurons in the brain, it would only take up about one cubic centimeter of space, which is about 1/1700 of the size of the brain. I'll just skip that one.
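As a sanity check on that volume figure, here is the arithmetic with every input stated explicitly. The chip size and neuron count are rough assumptions chosen for illustration, not numbers taken from the talk's slides:

```python
# Quick arithmetic behind "one chip per neuron fits in about a cubic
# centimeter". Rough assumptions: ~8.6e10 neurons, one 2.3-micron cube per
# neuron; the 1/1700 ratio quoted in the talk implies a brain volume of
# roughly 1700 cm^3.
NEURONS = 8.6e10
chip_side_cm = 2.3e-4                 # 2.3 microns, expressed in cm
total = NEURONS * chip_side_cm ** 3   # total chip volume, cm^3
print(round(total, 2), "cm^3")        # on the order of 1 cm^3
print(round(1700.0 / total), ": 1")   # fraction of an ~1700 cm^3 brain
```

The point of the exercise is that even at one device per neuron, the hardware occupies a negligible fraction of the tissue it instruments, because there are no long wires, only nodes.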

Anyway, so computational demands – I'm not going into the details of this, but if you calculate how much energy the brain uses, how much it takes for one action potential, and how many action potentials typically occur, then you can calculate that if you had to translate this to the model, to the compartmental model I was describing for whole brain emulation, with 10,000 compartments' worth, you would need about 1.2 exaflops to be able to do the computations. That sounds like a lot right now, but for instance, the Indian government has already put 2 billion dollars down for building a computer that will run at 102 exaflops in 2017. That would be fast enough for 100 of those whole brain emulations at the same time. And the price is quickly dropping for this.
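The compute estimate can be reconstructed as back-of-envelope arithmetic following the talk's logic (count action potentials, then assign a cost per event). Every constant below is an assumption chosen so the product lands near the quoted 1.2 exaflops; the talk derives its figure from the brain's energy budget, which this sketch does not reproduce:

```python
# Back-of-envelope reconstruction of the ~1.2 exaflops estimate. All four
# constants are assumed round numbers, not figures from the talk's slides.
NEURONS = 8.6e10           # neurons in a human brain (common estimate)
AVG_RATE_HZ = 1.0          # assumed average firing rate per neuron
COMPARTMENTS = 1e4         # compartments per neuron in the model
FLOP_PER_EVENT = 1.4e3     # assumed flops per compartment per spike event

flops = NEURONS * AVG_RATE_HZ * COMPARTMENTS * FLOP_PER_EVENT
print(round(flops / 1e18, 2), "exaflops")  # lands near the quoted 1.2
```

Note how sensitive the total is to each assumption: doubling the firing rate or the per-event cost doubles the requirement, which is why such estimates are best read as order-of-magnitude targets.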

What I'm trying to say, the whole crux of the message, is that these are very concrete requirements and projects. The volume microscopy exists and just needs to be scaled up. Molecular ticker-tape is coming out in 3, 6, maybe 18 months, depending on whether we're talking about a prototype or a real system. And the chips will also be made eventually. This is just an example showing the real work that is going on. For example, on the right-hand side, those are chips that have been integrated inside cells and that were functioning while the cells were still alive. OK, I'll skip that one.

And I just wanted to compare quickly: how does this compare with the biological approach? On the left we see the diagram of where we break down in aging, all the places in biological systems where breakdowns happen. This was made by John Furber. The problem is that all of the different connections you see here require different projects to solve them. It's not like you can have one engineered solution for all of them. So it's a really big problem. On the right, when you look at the data acquisition approach, well, it's quite a bit more direct, and simpler.

I'm going to skip all this stuff and just say that what we do at Carbon Copies is, on the one hand, look at the big picture, because it's very important to look outside the box at the different approaches that are possible. That's how we come across these kinds of projects, how we figure out how to put them together, how to get people working together and talking to each other. But at the same time, because this really is something concrete and feasible, it's very important that when we talk about solutions we get down to the details, like what I am showing here in that corner on the right. We need to design and engineer the systems. It's time to do that. Often you hear people saying, well, it would be really cool to do this, cool to do that. That's nice, but we need actual projects going forward. So thank you very much for listening to me.

Source: http://gf2045.com/read/138/





International Manifesto of the "2045" Strategic Social Initiative

Mankind has turned into a consumer society standing at the edge of a total loss of the conceptual guidelines necessary for further evolution. The majority of people are almost exclusively absorbed in merely maintaining their own comfortable lives.

Modern civilization, with its space stations, nuclear submarines, iPhones and Segways cannot save mankind from the limitations in the physical abilities of our bodies, nor from diseases and death.

We are not satisfied with modern achievements of scientific and technical progress. Science working for the satisfaction of consumer needs will not be able to ensure a technological breakthrough towards a radically different way of life.

We believe that the world needs a different ideological paradigm. Within its framework it is necessary to form a major objective capable of pointing out a new direction for the development of all mankind and ensuring the achievement of a scientific and technical revolution.

The new ideology should assert, as one of its priorities, the necessity of using breakthrough technology for an improvement of man himself and not only of his environment.

We believe that it is possible and necessary to eliminate aging and even death, and to overcome the fundamental limits of the physical and mental capabilities currently set by the restrictions of the physical body.

Scientists from various countries around the world are already developing technology that will enable the creation of an artificial human body prototype within the next decade. We believe the biggest technological project of our times will be the creation of such an artificial human body and the subsequent transfer of individual human consciousness to that body.

Implementation of this technological project will inevitably result in an explosive development of innovations and global changes in our civilization and will improve human life.

We believe that before 2045 an artificial body will be created that will not only surpass the existing body in terms of functionality, but will achieve perfection of form and be no less attractive than the human body. People will make independent decisions about the extension of their lives and the possibilities for personal development in a new body after the resources of the biological body have been exhausted.

The new human being will receive a huge range of abilities and will be capable of withstanding extreme external conditions easily: high temperatures, pressure, radiation, lack of oxygen, etc. Using a neural interface, humans will be able to operate several bodies of various forms and sizes remotely.

We suggest the implementation of not just a mechanistic project to create an artificial body, but a whole system of views, values and technology which will render assistance to humankind in intellectual, moral, physical, mental and spiritual development.

We invite all interested specialists: scientists, politicians, mass media personalities, philosophers, futurologists and businessmen to join the "2045" strategic social initiative. We welcome all who share our vision of the future and are ready to make the next jump.

 

The main objectives of our movement are:

1. To achieve the support of the International community and create conditions for international co-operation of interested specialists around the "2045" Initiative.  

2. To create an international research center for cybernetic immortality to advance practical implementations of the main technical project – the creation of the artificial body and the preparation for subsequent transfer of individual human consciousness to such a body.
 
3. To engage experts in the selection and support of the most interesting projects in the quest to ensure technological breakthroughs.

4. To support innovative industries and create special scientific education programs for schools and institutes of higher education.

5. To create educational programs for television, radio and internet, to hold forums, conferences, congresses and exhibitions, and to establish awards and produce books, movies and computer games with the view of raising the profile of the initiative and spreading its ideas.

6. To form a culture connected with the ideology of the future, promoting technical progress, artificial intelligence, "multi-body", immortality, and cyborgization.
