The neocortex constructs an internal representation of the world, but the underlying circuitry and computational principles remain unclear. Inspired by self-supervised learning algorithms, we propose a computational theory in which layer 2/3 (L2/3) integrates past sensory input, relayed via layer 4, with top-down context to predict incoming sensory stimuli. Learning is self-supervised by comparing L2/3 predictions with the latent representations of actual sensory input arriving at L5. We demonstrate that our model accurately predicts sensory information in context-dependent temporal tasks, and that its predictions are robust to noisy and occluded sensory input. Additionally, our model generates layer-specific sparsity, consistent with experimental observations. Next, using a sensorimotor task, we show that the model's L2/3 and L5 prediction errors mirror mismatch responses observed in awake, behaving mice. Finally, through manipulations, we offer testable predictions to unveil the computational roles of various cortical features. In summary, our findings suggest that the multi-layered neocortex empowers the brain with self-supervised predictive learning.
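The predictive scheme described above can be illustrated with a minimal, hypothetical sketch: an L2/3 module combines past sensory input (relayed via L4) with top-down context to predict the latent representation that an L5 encoder assigns to the next input, and learning minimizes the prediction error. All names, dimensions, and the single-layer gradient update below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper)
n_in, n_l23, n_latent = 8, 16, 4

# Fixed L5 encoder: maps actual sensory input to a latent target
W_l5 = rng.normal(scale=0.5, size=(n_latent, n_in))

# L2/3 pathway: L4 relay of past input plus top-down context
W_l4 = rng.normal(scale=0.1, size=(n_l23, n_in))
W_ctx = rng.normal(scale=0.1, size=(n_l23, n_in))
W_out = rng.normal(scale=0.1, size=(n_latent, n_l23))

def l23_predict(x_past, context):
    """L2/3 integrates past input (via L4) with top-down context."""
    h = np.tanh(W_l4 @ x_past + W_ctx @ context)
    return W_out @ h, h

def train_step(x_past, context, x_next, lr=0.05):
    """Self-supervised step: match the L2/3 prediction to the L5 latent."""
    global W_out
    target = W_l5 @ x_next          # latent representation of actual input at L5
    pred, h = l23_predict(x_past, context)
    err = pred - target             # prediction-error signal
    W_out -= lr * np.outer(err, h)  # gradient step on the readout (simplification)
    return float(np.mean(err ** 2))

# Toy temporal task: the next input is a fixed permutation of the current one
P = np.roll(np.eye(n_in), 1, axis=0)
context = np.full(n_in, 0.1)
losses = [train_step(x := rng.normal(size=n_in), context, P @ x)
          for _ in range(500)]

print(f"loss: {np.mean(losses[:50]):.3f} -> {np.mean(losses[-50:]):.3f}")
```

In this toy setting the prediction error declines as the readout learns to map the L2/3 activity onto the L5 latent of the upcoming stimulus; the error signal itself is the quantity the paper relates to mismatch responses.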

Original publication

DOI: 10.1038/s41467-025-61399-5
Type: Journal article
Journal: Nat Commun
Publication Date: 04/07/2025
Volume: 16
Keywords: Animals, Neocortex, Mice, Models, Neurological, Algorithms, Male, Learning, Female