How does the brain decode sensory information?


Researchers at the University of Pennsylvania are filling in gaps between two prevailing theories about how the brain generates our perception of the world.

The first theory, termed ‘Bayesian decoding’, holds that to best recognize what is in front of us, the brain combines the sensory signals it receives, such as a scene we are looking at, with our preconceived notions. For example, experience says that a school bus is typically yellow, so the one in front of us is more likely yellow than blue.
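As a rough illustration of Bayesian decoding, consider a Gaussian prior combined with a noisy Gaussian measurement; the numbers below (a "hue" scale where yellow sits near 0.15) are entirely hypothetical, chosen only to show how the reliability-weighted average pulls the percept toward the expectation.

```python
# Hypothetical example: the prior says buses are usually yellow
# (hue value 0.15), but the noisy sensory measurement reads higher.
# Bayesian decoding combines the two, weighting each by its
# reliability (inverse variance).
prior_mean, prior_var = 0.15, 0.02   # expectation: buses are yellow
sensed_hue, sense_var = 0.25, 0.04   # noisy measurement of this bus

# For a Gaussian prior and Gaussian likelihood, the posterior is
# Gaussian with a precision-weighted mean:
w_prior = (1 / prior_var) / (1 / prior_var + 1 / sense_var)
posterior_mean = w_prior * prior_mean + (1 - w_prior) * sensed_hue
posterior_var = 1 / (1 / prior_var + 1 / sense_var)

print(posterior_mean)  # lands between 0.25 and 0.15: pulled toward the prior
```

Because the prior here is more reliable than the measurement, the percept is biased toward yellow, which is the classic "attractive" Bayesian prediction the article contrasts with the repulsive effect described below.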

The second theory, termed ‘efficient coding’, holds that for limited sensory resources, such as neurons in the brain and the retina, to do their jobs efficiently, they should process and store more precise information about the most frequently encountered stimuli. If we see a yellow bus every day, the brain encodes an accurate picture of the vehicle, so on the next viewing we will know its color. Conversely, the brain won’t expend many resources on remembering a blue bus.
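One way to sketch the efficient-coding idea is to allocate a fixed budget of coding precision in proportion to how often each stimulus value occurs. Everything below is a hypothetical toy model, not the authors’ formulation: a made-up hue axis, a frequency distribution peaked at yellow, and an arbitrary precision budget.

```python
import numpy as np

# Toy model: hues from 0 to 1, with yellow (~0.15) encountered far
# more often than blue (~0.65). All numbers are hypothetical.
hues = np.linspace(0.0, 1.0, 101)
frequency = np.exp(-((hues - 0.15) ** 2) / (2 * 0.05 ** 2))
frequency /= frequency.sum()

# Efficient coding: spend a fixed neural resource budget in
# proportion to stimulus frequency, so common stimuli are
# represented with higher precision.
total_precision = 1000.0
precision = total_precision * frequency

yellow = np.argmin(np.abs(hues - 0.15))
blue = np.argmin(np.abs(hues - 0.65))
print(precision[yellow] > precision[blue])  # True
```

The key property is simply that the everyday yellow bus gets a far more precise internal representation than the rarely seen blue one, at no extra total cost.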

Alan Stocker, an assistant professor in the psychology and the electrical and systems engineering departments, and Xue-Xin Wei, a psychology graduate student, combined these two concepts into a new theory of how we perceive our world: how often we observe an object or scene shapes both what we expect of something similar in the future and how accurately we’ll see it.

“There are two forces that determine what we perceive,” Stocker said. According to his research, those forces sometimes work against each other, so we end up perceiving the opposite of what our expectations alone would predict. Keeping with the bus example, a bus that is actually blue would look even bluer than it is.
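A numerical sketch can make this interplay concrete. The construction below loosely follows the idea in the article: encode stimuli through the cumulative of their frequency distribution (so common stimuli get finer internal resolution), add internal noise, then decode with a Bayesian posterior-mean observer. The parameter values, the hue interpretation, and the specific encoding are our own assumptions, not the authors’ exact model.

```python
import numpy as np

# Stimulus space (e.g. hue) and a frequency distribution peaked at
# "yellow" (0.15); all numbers are hypothetical.
theta = np.linspace(0.0, 1.0, 2001)
prior = np.exp(-((theta - 0.15) ** 2) / (2 * 0.1 ** 2))
prior /= prior.sum()

# Efficient encoding: map theta through the prior's CDF, so internal
# units are spent in proportion to stimulus frequency.
cdf = np.cumsum(prior)
cdf /= cdf[-1]

theta_true = 0.3   # a bus whose hue is actually off-yellow
noise_sd = 0.05    # internal (neural) noise, in encoded space

# Average the Bayesian (posterior-mean) estimate over the internal
# measurement distribution.
enc_true = np.interp(theta_true, theta, cdf)
m_grid = enc_true + noise_sd * np.linspace(-4.0, 4.0, 81)
m_weights = np.exp(-0.5 * ((m_grid - enc_true) / noise_sd) ** 2)
m_weights /= m_weights.sum()

estimates = []
for m in m_grid:
    likelihood = np.exp(-0.5 * ((cdf - m) / noise_sd) ** 2)
    posterior = prior * likelihood
    posterior /= posterior.sum()
    estimates.append(float((theta * posterior).sum()))

bias = float(np.dot(m_weights, estimates)) - theta_true
print(bias)  # sign: < 0 means attraction toward the prior peak, > 0 repulsion
```

The sign of the printed bias says whether the average percept is pulled toward or pushed away from the most common hue; Stocker and Wei’s analysis is that the efficient-coding asymmetry can win, producing the repulsive, “even bluer” percept described above.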

Stocker and Wei also discovered that where the uncertainty arises, in the stimulus itself or in the neural response, changes the viewer’s perception. “If the stimulus is cloudy, for example, that would have a different effect than if the ‘noise’ was at the neural level,” Stocker said. “If I show you something briefly, the neurons in the brain only have time to respond with a few spikes. That gives a low-accuracy representation in the brain.”

Given the prevalence of the Bayesian model, Stocker said there is sometimes hesitance to discuss the notion of repulsive bias, something he hopes their research makes more acceptable. “It’s a big step forward bringing efficient coding into the picture,” said Stocker. “It opens the door to a whole class of data that was considered ‘anti-Bayesian’ before.”

More information can be found in the original publication in Nature Neuroscience.

Source: Medical Xpress.
