Congratulations are in order for Professor Andy King's lab, whose paper 'Sensory cortex is optimised for prediction of future input' was published this week in eLife, a high-impact journal.

Yosef Singer, a graduate student in the group, was the paper's first author, and Nicol Harper, a Senior Postdoctoral Research Scientist in the lab, was its senior author and Yosef's day-to-day supervisor. Yayoi Teramoto, Ben Willmore, Jan Schnupp, and Andy King were also authors on the publication.

A large part of the brain is devoted to processing sensory input. This processing allows us to tell, for example, whether the image we see is of a cat or a dog, or whether the sound we hear is a bark or a meow. Neurons respond to sensory input by generating spikes of activity. For example, in primary visual cortex, each neuron typically responds best to an edge-like structure moving before the eyes with a particular location, orientation, speed and direction of motion. In primary auditory cortex, each neuron typically responds best to changes in the loudness of sounds over a particular range of sound frequencies.
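
The classic textbook model of such an edge-selective visual receptive field is a Gabor filter: an oriented sinusoidal grating windowed by a Gaussian envelope (real receptive fields extend this over time, which is what gives them their preferred speed and direction of motion). The short sketch below illustrates this standard model only; it is not taken from the paper, and all parameter values are arbitrary choices.

```python
import numpy as np

def gabor(size=21, wavelength=6.0, theta=np.pi / 4, sigma=4.0, phase=0.0):
    """A Gabor patch: the standard model of a V1 simple cell's receptive
    field, selective for an edge at a particular location and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)  # axis along preferred orientation
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * x_rot / wavelength + phase)
    return envelope * carrier

# A model neuron's response to an image patch: how well the patch matches
# the neuron's filter (here a random patch stands in for a real image).
rf = gabor()
patch = np.random.default_rng(0).standard_normal(rf.shape)
response = float(np.sum(rf * patch))
```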

The King Group sought to understand the neural code used by primary sensory cortex -- why neurons respond to the particular set of stimulus features that they do. For example, why do visual neurons prefer moving oriented edges rather than, say, rotating hexagons, and why do auditory neurons prefer sounds that change in loudness or frequency composition over time rather than steady, unchanging sounds? A dominant hypothesis, which can explain much of the behaviour of sensory neurons, is that neural codes are optimised to be sparse -- in other words, to minimise the number of spikes required to represent stimuli. In this paper, the group shows that a simple alternative principle may explain the code used by the sensory brain -- namely, that neurons use the code that most efficiently allows prediction of future input. This would make sense, since features in the world that are predictive of the future are informative for guiding future actions.
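
To make the two hypotheses concrete, the toy sketch below writes the two objectives side by side. It is purely illustrative, not the paper's formulation, and every variable name and value in it is an arbitrary stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
x_now = rng.standard_normal(100)    # current sensory input (toy vector)
x_next = rng.standard_normal(100)   # the input that follows it in time
D = rng.standard_normal((100, 50))  # input weights / feature dictionary
a = rng.standard_normal(50)         # activations of 50 model neurons
W = rng.standard_normal((100, 50))  # readout used to predict the future
lam = 0.1                           # regularisation strength (arbitrary)

# Sparse coding: reconstruct the CURRENT input with as little activity
# (as few "spikes") as possible.
sparse_objective = np.sum((x_now - D @ a) ** 2) + lam * np.sum(np.abs(a))

# Temporal prediction: let activity driven by the recent input predict
# the FUTURE input as accurately and efficiently as possible.
hidden = np.maximum(D.T @ x_now, 0)  # toy rectified encoding of the present
prediction_objective = (np.sum((x_next - W @ hidden) ** 2)
                        + lam * np.sum(np.abs(hidden)))
```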

To test this idea, the group simulated networks of neurons in a computer, optimising the connection strengths of these neurons so that they efficiently predicted the immediate future of videos of natural scenes from their past. They then examined the preferred stimuli of the simulated neurons. These turned out to be moving, oriented edges, just as in the real primary visual cortex of mammals. King's lab also optimised the same network to predict the immediate future of recordings of natural sounds from their past. The auditory stimuli preferred by the simulated neurons again closely matched those preferred by neurons in the real primary auditory cortex. In particular, for both vision and audition, the temporal structure of these preferred stimuli was similar to that found for real neurons -- which is not the case for other principled models such as sparse coding.
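
A minimal sketch of this kind of optimisation is given below. It is not the group's code: it assumes a single-hidden-layer network trained with a mean-squared prediction error plus a small activity penalty, uses random tensors where clips of natural video would go, and all names (TemporalPredictionNet, K_PAST, l1_weight and so on) are invented for illustration.

```python
import torch
import torch.nn as nn

K_PAST, H, W = 7, 20, 20  # assumed temporal window and patch size (pixels)
N_HIDDEN = 400            # assumed number of simulated neurons

class TemporalPredictionNet(nn.Module):
    """Predicts the next frame of a movie patch from the K_PAST preceding frames."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Linear(K_PAST * H * W, N_HIDDEN)  # past -> "neurons"
        self.decode = nn.Linear(N_HIDDEN, H * W)           # "neurons" -> future

    def forward(self, past):
        hidden = torch.relu(self.encode(past))
        return self.decode(hidden), hidden

model = TemporalPredictionNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
l1_weight = 1e-4  # arbitrary strength of the activity penalty

# Random data standing in for flattened movie clips: `past` holds K_PAST
# frames per example, `future` holds the single frame that follows them.
past = torch.randn(256, K_PAST * H * W)
future = torch.randn(256, H * W)

for step in range(200):
    pred, hidden = model(past)
    loss = ((pred - future) ** 2).mean() + l1_weight * hidden.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Each simulated neuron's "preferred stimulus" can then be read off from its
# input weights, reshaped into a K_PAST-frame spatiotemporal filter.
filters = model.encode.weight.detach().reshape(N_HIDDEN, K_PAST, H, W)
```

Training on real natural movies (and, analogously, on spectrogram-like representations of natural sounds) rather than random data is what yields the moving, edge-like filters described above.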

The group's results suggest that coding for efficient prediction of the future may be a general principle behind the way the brain represents the sensory world. Disorders of sensory processing are unfortunately all too common, and a better understanding of the computational principles underlying sensory processing should help us to interpret what goes wrong in the brain and why. Temporal prediction may also be relevant to machine learning and artificial intelligence applications, providing a simple method by which smart devices might be trained to process sensory inputs.