Neurons’ “antennae” are unexpectedly active in neural computation

Most neurons have many branching extensions called dendrites that receive input from thousands of other neurons. Dendrites aren’t just passive information-carriers, however. According to a new study from MIT, they appear to play a surprisingly large role in neurons’ ability to translate incoming signals into electrical activity.

Neuroscientists had previously suspected that dendrites might be active only rarely, under specific circumstances, but the MIT team found that dendrites are nearly always active when the main cell body of the neuron is active.

“It seems like dendritic spikes are an intrinsic feature of how neurons in our brain can compute information. They’re not a rare event,” says Lou Beaulieu-Laroche, an MIT graduate student and the lead author of the study. “All the neurons that we looked at had these dendritic spikes, and they had dendritic spikes very frequently.”

The findings suggest that the role of dendrites in the brain’s computational ability is much larger than had previously been thought, says Mark Harnett, who is the Fred and Carole Middleton Career Development Assistant Professor of Brain and Cognitive Sciences, a member of the McGovern Institute for Brain Research, and the senior author of the paper.

“It’s really quite different than how the field had been thinking about this,” he says. “This is evidence that dendrites are actively engaged in producing and shaping the outputs of neurons.”

Graduate student Enrique Toloza and technical associate Norma Brown are also authors of the paper, which appears in Neuron on June 6.

“A far-flung antenna”

Dendrites receive input from many other neurons and carry those signals to the cell body, also called the soma. If stimulated enough, a neuron fires an action potential — an electrical impulse that spreads to other neurons. Large networks of these neurons communicate with each other to perform complex cognitive tasks such as producing speech.
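The threshold behavior described above can be caricatured with a simple leaky integrate-and-fire model. The sketch below is illustrative only, with made-up constants, and is not the authors' model: inputs accumulate on the membrane, and the neuron fires an action potential once the voltage crosses a threshold.

```python
# A minimal leaky integrate-and-fire sketch of the mechanism described
# above: inputs accumulate on the membrane, and the neuron fires an
# action potential once the voltage crosses a threshold. All constants
# are illustrative, not measured values.
def simulate(inputs, threshold=1.0, leak=0.95):
    """Integrate a sequence of input currents; return spike times."""
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        v = v * leak + current      # leaky integration of dendritic input
        if v >= threshold:          # enough stimulation -> action potential
            spikes.append(t)
            v = 0.0                 # reset after the spike
    return spikes

print(simulate([0.3, 0.4, 0.5, 0.0, 0.2]))  # fires at t=2
```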

Through imaging and electrical recording, neuroscientists have learned a great deal about the anatomical and functional differences between different types of neurons in the brain’s cortex, but little is known about how they incorporate dendritic inputs and decide whether to fire an action potential. Dendrites give neurons their characteristic branching tree shape, and the size of the “dendritic arbor” far exceeds the size of the soma.

“It’s an enormous, far-flung antenna that’s listening to thousands of synaptic inputs distributed in space along that branching structure from all the other neurons in the network,” Harnett says.

Some neuroscientists have hypothesized that dendrites are active only rarely, while others thought it possible that dendrites play a more central role in neurons’ overall activity. Until now, it has been difficult to test which of these ideas is more accurate, Harnett says.

To explore dendrites’ role in neural computation, the MIT team used calcium imaging to simultaneously measure activity in both the soma and dendrites of individual neurons in the visual cortex of the brain. Calcium flows into neurons when they are electrically active, so this measurement allowed the researchers to compare the activity of dendrites and soma of the same neuron. The imaging was done while mice performed simple tasks such as running on a treadmill or watching a movie.

Unexpectedly, the researchers found that activity in the soma was highly correlated with dendrite activity. That is, when the soma of a particular neuron was active, the dendrites of that neuron were also active most of the time. This was particularly surprising because the animals weren’t performing any kind of cognitively demanding task, Harnett says.
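To make that comparison concrete, here is a minimal sketch, not the study's analysis code, of how one might quantify the soma-dendrite relationship: it builds synthetic calcium traces with NumPy and computes their Pearson correlation. The trace values and names are illustrative assumptions.

```python
# Illustrative sketch only -- not the study's analysis pipeline.
# Compares a somatic calcium trace with a dendritic trace from the
# same (synthetic) neuron by computing their Pearson correlation.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic fluorescence traces (assumed stand-ins for real dF/F data):
# a shared underlying activity signal plus independent measurement noise.
activity = rng.poisson(lam=0.2, size=1000).astype(float)
soma_trace = activity + 0.1 * rng.standard_normal(1000)
dendrite_trace = activity + 0.1 * rng.standard_normal(1000)

# Pearson correlation between the two compartments' activity.
r = np.corrcoef(soma_trace, dendrite_trace)[0, 1]
print(f"soma-dendrite correlation: {r:.3f}")  # close to 1 => highly correlated
```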

“They weren’t engaged in a task where they had to really perform and call upon cognitive processes or memory. This is pretty simple, low-level processing, and already we have evidence for active dendritic processing in almost all the neurons,” he says. “We were really surprised to see that.”

Evolving patterns

The researchers don’t yet know precisely how dendritic input contributes to neurons’ overall activity, or what exactly the neurons they studied are doing.

“We know that some of those neurons respond to some visual stimuli, but we don’t necessarily know what those individual neurons are representing. All we can say is that whatever the neuron is representing, the dendrites are actively participating in that,” Beaulieu-Laroche says.

While more work remains to determine exactly how the activity in the dendrites and the soma are linked, “it is these tour-de-force in vivo measurements that are critical for explicitly testing hypotheses regarding electrical signaling in neurons,” says Marla Feller, a professor of neurobiology at the University of California at Berkeley, who was not involved in the research.

The MIT team now plans to investigate how dendritic activity contributes to overall neuronal function by manipulating dendrite activity and then measuring how it affects the activity of the cell body, Harnett says. They also plan to study whether the activity patterns they observed evolve as animals learn a new task.

“One hypothesis is that dendritic activity will actually sharpen up for representing features of a task you taught the animals, and all the other dendritic activity, and all the other somatic activity, is going to get dampened down in the rest of the cortical cells that are not involved,” Harnett says.

The research was funded by the Natural Sciences and Engineering Research Council of Canada and the U.S. National Institutes of Health.

Materials provided by the Massachusetts Institute of Technology.

Researchers use magnetic properties to improve artificial intelligence systems

Researchers at Purdue University have developed a method for integrating magnets with brain-like networks to program and teach devices such as robots and self-driving cars.

Kaushik Roy, professor of electrical and computer engineering at Purdue University, said the stochastic neural networks his team developed attempt to mimic some of the activities of the human brain, computing them with a network of neurons and synapses. This lets the networks distinguish between objects, make inferences about them, and store information.

The work was announced at the German Physical Sciences Conference, and the report has been published in Frontiers in Neuroscience.

The switching dynamics of nanomagnets closely resemble the electrical dynamics of neurons, and magnetic junction devices exhibit similar switching behavior. Stochastic neural networks have random variation built into them through stochastic synaptic weights.
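As a rough illustration of that idea, the following sketch models a synapse as a two-state device that switches only with some probability when pulsed. The class name, switching probability, and pulse count are assumptions for illustration, not the Purdue device model.

```python
# A minimal sketch of a stochastic binary synapse, assuming a simple
# abstraction of a nanomagnet: the weight is one of two magnetization
# states, and a programming pulse flips it only with some probability.
import random

class StochasticSynapse:
    def __init__(self, p_switch=0.2):
        self.state = 0            # magnetization state: 0 or 1
        self.p_switch = p_switch  # intrinsic switching probability

    def pulse(self, target_state):
        """Apply a programming pulse toward target_state; the switch
        succeeds only probabilistically, like a real nanomagnet."""
        if self.state != target_state and random.random() < self.p_switch:
            self.state = target_state

# Repeated pulses gradually drive the synapse toward the target state,
# so the average over many devices encodes an analog weight in binary parts.
syn = StochasticSynapse()
for _ in range(20):
    syn.pulse(target_state=1)
print("final state:", syn.state)
```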

The researchers proposed a new stochastic training algorithm for the synapses, called Stochastic-STDP, based on spike-timing-dependent plasticity, a learning rule characterized in the rat hippocampus. The algorithm uses the magnet's intrinsic stochastic behavior to switch magnetization states probabilistically, allowing the network to learn representations of different objects.
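The paper's exact algorithm is not spelled out here, but a hedged sketch of a stochastic STDP-style rule might look like the following: the pre/post spike-timing difference sets the probability that a binary weight switches, rather than the size of an analog weight change. The time constant and function names are assumed.

```python
# A sketch of a stochastic STDP-style update rule (the published
# Stochastic-STDP algorithm may differ in detail). The spike-timing
# difference sets the probability that a binary weight switches.
import math
import random

TAU = 20.0  # STDP time constant in ms (assumed value)

def switch_probability(dt):
    """Map the spike-timing difference dt = t_post - t_pre (ms) to a
    switching probability that decays exponentially with |dt|."""
    return math.exp(-abs(dt) / TAU)

def stochastic_stdp_update(weight, t_pre, t_post):
    """Potentiate (set weight to 1) if the presynaptic spike precedes
    the postsynaptic spike, depress (set to 0) otherwise -- each
    outcome occurring only with a timing-dependent probability."""
    dt = t_post - t_pre
    if random.random() < switch_probability(dt):
        weight = 1 if dt > 0 else 0
    return weight

w = 0
w = stochastic_stdp_update(w, t_pre=5.0, t_post=12.0)  # likely potentiation
print("weight after update:", w)
```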

The trained synaptic weights, encoded in the magnetization states of the magnets, are then used for inference. High-energy-barrier magnets offer a significant advantage: they enable compact stochastic primitives while also serving as stable memory elements that retain data. Roy, who also leads the Center for Brain-inspired Computing Enabling Autonomous Intelligence at Purdue University, said the magnet technology his team developed is highly energy efficient. A network of neurons and synapses built this way makes optimal use of the available memory and energy, much like the computations performed by the brain.

Such brain-like networks can also be used to solve other problems, including graph coloring, the traveling salesman problem, and other combinatorial optimization problems. The traveling salesman problem is a classic optimization benchmark: it asks for the shortest route that visits each location exactly once and returns to the starting point. First posed in the work of the Irish mathematician W.R. Hamilton and the British mathematician Thomas Kirkman, it is an example of an NP-hard problem, for which the cost of finding an exact solution grows rapidly with the number of locations.
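To see why the problem is hard, the brute-force sketch below enumerates every candidate tour over a handful of made-up city coordinates; the number of tours grows factorially with the number of cities, which is what makes exact solution intractable at scale.

```python
# A small illustration of why the traveling salesman problem is hard:
# exact solution by brute force examines (n-1)! candidate tours.
# City coordinates here are arbitrary made-up values.
from itertools import permutations
from math import dist

cities = [(0, 0), (2, 1), (5, 3), (1, 4), (4, 0)]

def tour_length(order):
    """Total length of a closed tour visiting cities in the given order."""
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Fix city 0 as the start to avoid counting rotations of the same tour.
best = min(
    ((0,) + perm for perm in permutations(range(1, len(cities)))),
    key=tour_length,
)
print("best tour:", best, "length:", round(tour_length(best), 3))
```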

The work aligns with Purdue University's Giant Leaps celebration, which acknowledges the university's advancements in artificial intelligence.