ARTIFICIAL BRAIN

Monday, February 25, 2008

An ambitious project in Switzerland was scoffed at - but researchers have just succeeded in simulating a rat's brain in silicon
• Clint Witchalls
• The Guardian, Thursday December 20 2007

Computer model of a single neocortical column from a rat's brain (Photo: IBM)
In a laboratory in Switzerland, a group of neuroscientists is developing a mammalian brain - in silicon. The researchers at the Ecole Polytechnique Fédérale de Lausanne (EPFL), in collaboration with IBM, have just completed the first phase of an ambitious project to reproduce a fully functioning brain on a supercomputer. By strange coincidence, their lab happens to lie on the same shores of Lake Geneva where Mary Shelley dreamt up Dr Frankenstein and his creation.
In June 2005, Henry Markram, director of the Blue Brain project, announced his intention to build a human brain using one of the most powerful supercomputers in the world. "The critics were unbelievable," recalls Markram. "Everybody thought we were crazy. Even the most eminent computational neuroscientists and theoreticians said the project would fail."
Some of Markram's peers said there simply wasn't enough data available to simulate a human brain. "There is no neuroscientist on the planet that has the authority to say we don't understand enough," says Markram. "We all know a tiny slice. Nobody even knows how much we know."
Markram was not dissuaded by the negative reaction to his announcement. Two years on, he has already developed a computer simulation of the neocortical column - the basic building block of the neocortex, the higher functioning part of our brains - of a two-week-old rat, and it behaves exactly like its biological counterpart. It's something quite beautiful when you watch it pulse on the giant 3D screens the researchers have constructed.
The neocortical column is the most recently evolved part of our brain and is responsible for such things as reasoning and self-awareness. It was a quantum leap in evolution. The human brain contains a thousand times more neocortical columns than a rat's brain, but there is very little difference, biologically speaking, between a rat's brain and our own. Build one column, and you can effectively build the entire neocortex - if you have the computational power.
Although a neocortical column is only 2 millimetres long and half a millimetre in diameter, it contains 10,000 neurons and 30 million synapses. The machine that simulates this column, an IBM Blue Gene/L supercomputer, is capable of 18.7 trillion calculations per second. It has 8,000 processors and is one of the most powerful supercomputers in the world.
Markram believes that with the state of technology today, it is possible to build an entire rat's neocortex, which is the next phase of the Blue Brain project, due to begin next year. From there, it's cats, then monkeys and finally, a human brain.
Markram is banking on Moore's law holding steady, as a computer with the power of the human brain, using today's technology, would take up several football pitches and run up an electricity bill of $3bn a year. But by the time Markram gets around to mimicking a full human brain, computing will have moved on.
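The Moore's-law bet above can be made concrete with a back-of-envelope projection. The Blue Gene/L speed is the figure quoted earlier in the article; the exaflop-scale target and the two-year doubling cadence are illustrative assumptions, not numbers from the piece:

```python
# Illustrative Moore's-law projection: how many doublings until a
# machine at Blue Gene/L speed reaches a hypothetical human-brain-scale
# target? Target and doubling cadence are assumptions for illustration.
import math

current_flops = 18.7e12   # Blue Gene/L speed quoted in the article
target_flops = 1e18       # assumed exaflop-scale requirement
doubling_years = 2.0      # classic Moore's-law cadence (assumption)

doublings = math.log2(target_flops / current_flops)
years = doublings * doubling_years
print(f"{doublings:.1f} doublings, roughly {years:.0f} years")
```

Under those assumptions the gap closes in about three decades; a slower doubling cadence or a larger target stretches the timeline accordingly.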
Modelling the future
Modelling seems to be the way forward for neuroscience. Each year, there are about 35,000 neuroscience papers published - and the number of papers being published is increasing at a rate of between 20% and 30% a year. Most neuroscientists only get to read about 100 of these papers a year, if they're lucky. Pouring all of this knowledge into Blue Brain seems an obvious way to use and preserve it.
Markram, a 44-year-old South African, first became interested in recording the electrophysiology of neurons when he was at the Max Planck Institute in Germany. He was recording two neurons and he saw them communicate. "I thought, my God, this is incredible, you can actually capture neurons communicating," he says. "Then I wanted to find out how they all communicated, so I started to map the whole circuit. It took 15 years." Markram describes the data he has collected over the past decade and a half as "too boring to be published".
The model is there to unify the data and test that it works. A neurobiologist who wants to test a certain theory of how a specific brain function, such as memory retention and retrieval, works can use Blue Brain to do so. The model will be open to the entire world's research community.
Simulation-based research becomes possible when you have a critical mass of computing power. Today, every commercial aircraft that is built starts life as a simulation. Even cameras are simulated before they're built. In physics, we no longer detonate nuclear weapons; we simulate them.
"We don't use simulation in life sciences because biology requires the most powerful computers," says Markram. "We do experiments on animals, but that is going to change in the near future and this project will drive that change."
One thing Markram is keen to stress is that this isn't another artificial intelligence (AI) system. "We're not looking for the brain of a robot," he says. "You can get an engineer to do that. They are much better at it and they can do it really quickly. But in the end, it [Blue Brain] will probably be much better. If we build it right, it should speak."
Decoding dysfunction
However, Markram is not holding his breath, waiting for some emergent consciousness to arise from the silicon brain. What he is after is something more prosaic, but also a lot more useful than a talking machine. By understanding the function of the brain, we can also begin to understand its dysfunction.
Disorders such as depression, schizophrenia and dementia are the price we pay for having complicated brains. "We don't understand what goes wrong inside those circuits," says Markram. "We're still in empirical medicine. If a drug compound works: good. If not, we try another one." Blue Brain could accelerate experimentation tremendously. It will be much more efficient than wet-lab experiments and it will reduce animal experimentation.
However, Steven Rose, emeritus professor of biology at the Open University, is sceptical that a biologically accurate model of the entire human brain can be built, given our current state of knowledge and technology. He argues that the integration between the different regions of the brain is too complex to recreate in a computer simulation. "I'm not against people playing with models," says Rose, "but the idea that you can use it for anything very sophisticated as opposed to looking at real animals with real behaviour at the moment seems to me to be pie in the sky."
Rose warns against underestimating the difficulties that still remain. Then, rather grudgingly, he admits that the Blue Brain project is impressive. "Impressive but modest," he adds. Clearly, Markram still has some doubters to win over.
Brain thinks positively when dying

A recent study indicates that when faced with death, the human brain tends to instinctively shift towards happier ideas and images.

This may shed light on what happens in an individual's mind at the time of death, which researchers say is ruled by happiness, not fear. The study shows that people are emotionally stronger when faced with their own or a loved one's death than they may have ever thought possible.

According to Nathan DeWall and co-researcher Roy Baumeister of Florida State University, as humans became aware of death, they also evolved what's been called the "psychological immune system."

Due to this mechanism, thoughts and attitudes incline toward the positive, no matter how grim the circumstances. According to the researchers, this shift is normally an unconscious one.

The study involved two groups of volunteers. While the first group was made to think about death as a reality and imagine the process of their own death, the other was asked to think about an unpleasant event, like a trip to the dentist's office, but not death.

The two groups then underwent standard word tests that tap into unconscious emotional states: participants were given a word stem - 'jo_', for example - and asked to complete it to form a word (e.g., 'job', 'jog', 'joy').

The scientists said that the individuals asked to think about death were more prone than the other participants to choose the word "joy" rather than more neutral or negative words.
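The scoring logic of a stem-completion test like this is simple to sketch. The stems and word lists below are invented for illustration; the actual stimuli used by DeWall and Baumeister are not given in the article:

```python
# Minimal sketch of word-stem completion scoring: count what fraction
# of a participant's completions are positive words. Word lists are
# illustrative assumptions, not the study's actual stimuli.

POSITIVE = {"joy", "glad", "love"}
OTHER = {"job", "jog", "glum", "lost"}  # neutral/negative completions

def positivity_score(completions):
    """Fraction of scored completions that are positive words."""
    scored = [w for w in completions if w in POSITIVE | OTHER]
    if not scored:
        return 0.0
    return sum(w in POSITIVE for w in scored) / len(scored)

# Hypothetical responses from the two groups:
death_group = ["joy", "glad", "jog"]
dentist_group = ["job", "jog", "glum"]

print(positivity_score(death_group))    # higher fraction of positive words
print(positivity_score(dentist_group))  # lower fraction
```

Comparing the two groups' mean scores is what lets the researchers infer an unconscious emotional shift without asking participants directly.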


Brain Cells More Powerful Than You Think
WEDNESDAY, Dec. 19 (HealthDay News) -- The human brain constantly sorts through its 1 trillion cells, looking for perhaps only one or a handful of neurons to carry out a particular action, a trio of new studies says.
The research, conducted with rodents and published in the Dec. 20 issue of Nature, could rewrite the textbooks on just how important individual brain cells or cell clusters are to the working mind.
Before these insights, "The thinking was that very large ensembles of neurons [brain cells] had to be activated at some point for the animal to feel or perceive" a stimulus, explained the senior researcher of two of the studies, Karel Svoboda, a group leader at the Howard Hughes Medical Institute in Ashburn, Va.
"But it turns out that a remarkably small number -- on the order of 50 or so activated neurons -- is sufficient to drive reliable behaviors," said Svoboda, who is also associated with the Cold Spring Harbor Laboratory, in New York.
Another study, this one conducted by scientists at Humboldt University Berlin and Erasmus Medical Center in Rotterdam, the Netherlands, found that stimulating just one out of the estimated 100 million neurons in a rat's brain was enough to cause the rodent to act differently.
"The fact that a single cell can influence behavior in the cortex is fascinating," said neuroscientist Paul Sanberg, director of the Center for Excellence for Aging and Brain Repair at the University of South Florida, Tampa. The new findings are "allowing us to answer questions about how the brain controls behavior at the cellular level," added Sanberg, who was not involved in the studies.
In one of the studies, Svoboda and his colleagues genetically engineered a select few brain cells in active mice so that the cells would react to a light stimulus.
Then they exposed a part of the rodent's brain and placed a small light-emitting diode over the area. The experiment "was essentially a trick to stimulate [only] these cells," Svoboda explained.
Finally, they adjusted the amount of light downward until they found the lowest number of brain cells needed to evoke a measurable response in the mice. That number turned out to be less than 50 -- far fewer than the far-flung networks of cellular activation neuroscientists had previously assumed would be necessary, Svoboda said.
The mouse brain's ability to tap into a mere 50 cells is even more remarkable when you consider that the activity of this cluster of cells takes place amid a background roar of other neurological "noise" from millions of cells, he said.
"At the same time, the functional brain area just chatters along and produces perhaps a hundred thousand spontaneous action potentials [electrical signals]," he noted. "So, the brain can actually distinguish the tiny, tiny number of action potentials from that huge background."
According to Svoboda, the experiment strongly supports a theory of brain function called "sparse coding," in which "neurons that listen to the neurons that we have activated have to be able to pull out very sparse subsets of activity."
In another study, Svoboda and co-researcher Christopher Harvey, also of the HHMI and Cold Spring Harbor Laboratory, focused on the synapse -- the microscopic gap separating individual neurons. Messages are passed neuron-to-neuron across the synapse by a complex mechanism of electrochemical signaling.
"Scientists had shown that synapses behave rather independently," Svoboda said, so that long-term electrical activation ("potentiation") of one synapse didn't directly affect a neighboring synapse. Long-term potentiation is, in essence, the key cellular step in how the brain lays down memory.
However, computer models had suggested that activation at one synapse might more subtly strengthen the synapses around it. In their experiments, Svoboda and Harvey found this to be true.
They report that "neighborhoods" of 10 or 20 synapses "influence each other cooperatively," strengthening discrete groups of synapses.
What's more, this type of synaptic teamwork happens within a specific time-frame -- about 10 minutes, a perfect amount of time for laying down the kinds of memories that can lead to learning, Svoboda said.
"That's a very behavioral timescale for learning and memory," he said. For example, a mouse can be placed in a chamber, explore it for a few minutes, be removed, and still retain a working memory of the chamber when it is reintroduced to it.
That's probably due to the fact that the mouse's brain formed synaptic clusters (i.e., memory) specific to the new chamber while it was exploring it, Svoboda explained.
"In this way, they can be dissociated [from the stimulus] over several minutes but still lead to learning," he said.
While many of these experiments were done in mice, the human brain should work similarly, albeit on a much larger scale, Svoboda said. While the mouse brain contains about 100 million neurons, human brains top out at a trillion such cells, he said.
And even though the research looked at healthy brain function, it may have implications for research into aging or diseased brains, as well.
"You need to understand the fundamental mechanisms. Then you can gain better insight into what might go wrong during neurodevelopmental and neurodegenerative disorders," Svoboda said.
Sanberg agreed.
"This work clearly shows us that all cells are important, and we should try and maintain and keep as many brain cells as possible," he said. "But the number is always flexible and, as you can see, even one cell can influence a number of others."
