Like the cell phone in your pocket or the television in your house, signals in your brain may be encoding several pieces of information at once, a Duke study found.
The paper, authored by a team of Duke neuroscientists and statisticians, found that monkeys had small populations of neurons that could encode information from two simultaneous sounds. These cells did so by switching between the individual signals for each sound—similar to the strategy in the telecommunications industry known as multiplexing.
All of the brain’s signaling may be more complicated than scientists thought. This mixing of signals may not only enable the processing of multifaceted sensory information, but also be the culprit behind limitations on cognition and working memory. Researchers published the paper in the journal Nature Communications July 13.
“Neurons seemed to be showing these fluctuations between coding one sound and coding the other,” said Jennifer Groh, professor of psychology, neuroscience and neurobiology. “Much like in engineering and telecommunications, if you have got one wire and you want to send two signals, you could chunk [the signals] up across time.”
The study tried to address how the brain processes sound to locate the sources in space. With purely visual stimuli, the eye can easily locate objects because it works like a camera—light is projected onto the retina and then translated directly into electrical information.
The ear, however, has no such clear-cut map of the world.
“By the time [sounds] reach the eardrum, there is nothing about the location of the sound that is impacting the way that your drum is going to move, except in the timing and the intensity of it,” Groh said.
She added that the subtle differences in timing between the right and left ear are all the brain has to work with. Previous studies had found that the brain transmits the spatial information of different sounds not through different groups of neurons, but through different intensities in signal.
“It appears that it is more the level of neurons in the auditory pathway that indicate that angle rather than particular neurons being sensitive just to straight ahead or 10 o’clock to the right or to the left,” Groh said.
The question then became, if the signal intensity from one population of neurons is the only information available to locate a sound’s origin, how could this one population process multiple sound locations simultaneously?
By measuring the electrical activity in a brain region involved in processing sound, the Groh Lab sought to determine how single cells could transmit multiple signals simultaneously.
The researchers trained monkeys to look toward the source of two different sounds—each originating from a different location. They examined the responses of single neurons to hearing both sounds individually and then both sounds simultaneously. As expected, the monkeys were able to locate the source of both sounds even when the sounds were played simultaneously.
The next step was to determine how they were processing these sounds simultaneously. To do this, the team attempted to use the standard statistical analyses for these types of experiments. They had hoped that such “garden variety analyses” would do the trick.
“That didn’t turn out to be the case,” Groh said.
Consequently, the Groh Lab collaborated with Surya Tokdar, associate professor of statistical science, to develop a new form of statistical analysis.
This phenomenon of signal mixing is not unique to auditory signaling.
“In fact, in the visual system, you can see some indications that when two stimuli are presented some neurons are pushing and pulling and responding to one and then responding to the other,” Groh said.
Although only a small population of sampled cells has so far been found to exhibit this behavior, Groh claimed that “the statistical analysis is really the key,” and as the power of the analysis develops, it may come to light that even more cells are capable of this behavior.
The lab is currently working not only to improve the statistical analysis, but also to offer it to labs that have previously conducted or are currently conducting similar experiments.
“There are a lot of people who are ready for this,” Groh said, explaining that many rich conclusions about neural signaling are emerging with the development and use of this type of analysis.
Groh hypothesized that researchers will find this signaling mechanism may manifest not only in how we perceive and process the world, but also in limitations to our working memory.
The rate at which the brain can process multiple signals “is going to limit how many [pieces of information] you can pack into a certain channel,” Groh said. This may support the claim that the brain can only handle around four items in working memory at a time.
Hypotheses aside, Groh said that this experiment supports the growing evidence that there is a “rhythm” or “cyclical sampling nature of the brain.” By changing the framework in which neural signals are interpreted and analyzed, her team’s paper is beginning to pave a new road for how we view the brain.