A computational modelling study from the King Group demonstrates that the way sounds are transformed from the ear to the brain’s auditory cortex may be simpler than expected. These findings not only highlight the value of computational modelling for determining the principles underlying neural processing, but could also be useful for improving treatments for patients with hearing loss.

A simple model of the ear and auditory nerve provides suitable input for predicting how brain cells in the auditory cortex respond to natural sounds such as human speech

Our sensory systems, such as the ear and the auditory regions of the brain, are known to be extremely complicated. The ear is arguably the most mechanically complex part of the body and allows us to hear a vast array of sounds. The cochlea – the hearing part of the inner ear – converts these sounds into patterns of neural activity, which travel through the auditory brainstem, a cluster of brain regions comprising many diverse cells and connections, to the auditory cortex. The auditory cortex is the brain region central to the processing of natural sounds such as speech and music. Consequently, it is widely assumed that the computations behind our ability to hear, and indeed all computations performed by our sensory systems, are also complex.

A new study from King Group researchers sought to understand how the transformation of sounds by the ear and early levels of the auditory pathway impacts cortical activity. The team, led by Monzilur Rahman and Dr Nicol Harper, examined how well different models of the ear and auditory nerve could be used to predict responses of brain cells in the primary auditory cortex. The models ranged from detailed simulations of the cochlea and auditory nerve to simple models that were a rudimentary approximation of the information processing in these structures. The simple models retained only a few biological features. First, the models decomposed the sounds into different frequencies, and did so more finely at lower frequencies. Second, their response increased steeply with intensity for quiet sounds, but less so for louder sounds. Finally, some models had multiple outputs with different sensitivities, to approximate the different kinds of fibre in the auditory nerve.
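The three features above can be sketched in code. This is a minimal illustrative toy, not the study's actual models: the channel count, frequency range, compression exponent, and fibre sensitivities are all assumed for illustration.

```python
import numpy as np

def simple_auditory_model(sound, fs=16000, n_channels=8, exponent=0.3,
                          sensitivities=(0.5, 1.0, 2.0)):
    """Toy sketch of a simplified auditory-periphery model.

    All parameter values here are illustrative assumptions, not
    the models used in the study.
    """
    # 1. Decompose the sound into frequency channels, spaced more
    #    finely at low frequencies (log spacing approximates this).
    centre_freqs = np.geomspace(100, fs / 4, n_channels)
    t = np.arange(len(sound)) / fs
    envelopes = []
    for fc in centre_freqs:
        # Crude channel envelope: demodulate at the centre frequency,
        # then smooth the magnitude with a short moving average.
        analytic = sound * np.exp(-2j * np.pi * fc * t)
        env = np.abs(np.convolve(analytic, np.ones(64) / 64, mode='same'))
        envelopes.append(env)
    env = np.array(envelopes)  # shape: (n_channels, n_samples)

    # 2. Compressive nonlinearity: output grows steeply for quiet
    #    sounds but shallowly for loud ones (power law, exponent < 1).
    compressed = env ** exponent

    # 3. Multiple outputs with different sensitivities, loosely
    #    mimicking the different auditory nerve fibre types.
    outputs = np.stack([compressed * g for g in sensitivities])
    return outputs  # shape: (n_fibre_types, n_channels, n_samples)
```

For example, feeding in a 440 Hz tone yields an array of fibre-type by frequency-channel by time responses, and doubling the tone's amplitude increases the output by much less than a factor of two, reflecting the compressive growth with intensity.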

Remarkably, the simple models, which left out many of the biological details, predicted the neural responses to diverse natural and artificial sounds more consistently well than the biologically-detailed models. This implies that only certain features of the processing that takes place in the ear and nerve are transmitted through the brainstem to the cortex, and that many details have little impact on cortical activity. Last author Dr Nicol Harper said: “This suggests that there may be an underlying simplicity to the signal transformation from ear to cortex that is hidden among the detail. This hidden simplicity may be a feature of other sensory systems too.”

“Understanding the computations performed by the auditory pathway, as well as providing insight into brain function in general, will aid us in developing better hearing aids and ear and brain implants to help people with hearing loss.”

First author Monzilur Rahman said: "The ability to predict the time course of the responses of auditory neurons is very important when it comes to improving our understanding of how the brain processes the sounds we hear. However, achieving high accuracy in predicting the time course of neural responses has always proven to be very challenging. We have explored this hard problem, attempting to improve our ability to predict the responses of auditory cortical neurons, while also relating it to the complexity of the auditory periphery. I found it astonishing how a simple model aimed at capturing the computational essence of the auditory periphery can perform similarly to a biologically-detailed model. While measuring prediction performance for particular stimuli is a good test for a model, we have also put our models to a more rigorous test by assessing their ability to predict well across different datasets and brain states."
