To develop transformation-invariant representations of objects, the visual system must exploit constraints that the environment places on object transformations: for example, objects transform continuously from one point to another in both space and time. These two constraints have previously been exploited separately, in the form of continuous transformation learning and temporal trace learning, to develop translation and view invariance in a hierarchical multilayer model of the primate ventral visual pathway. We show for the first time that these two learning rules can work cooperatively in the model. Used together, they support the development of invariance in cells and help maintain object selectivity when stimuli are presented over a large number of locations or trained separately over a large number of viewing angles.
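The temporal trace learning mentioned in the abstract can be sketched in a few lines. This is a minimal, illustrative implementation assuming a standard Földiák-style trace rule (an exponentially decaying trace of postsynaptic activity gating a Hebbian weight update, followed by weight normalisation); the function name, parameter values, and single-cell setup are hypothetical and are not taken from the paper's model.

```python
import numpy as np

def trace_learning(x_seq, w, eta=0.8, alpha=0.1):
    """Update weight vector w for one output cell over a temporal
    sequence of input vectors x_seq.

    The trace y_trace carries postsynaptic activity across time steps,
    so inputs that occur close together in time (e.g. successive
    transforms of the same object) reinforce the same output cell.
    """
    y_trace = 0.0
    for x in x_seq:
        y = max(0.0, float(w @ x))                # feedforward activation
        y_trace = (1 - eta) * y + eta * y_trace   # exponentially decaying trace
        w = w + alpha * y_trace * x               # trace-modulated Hebbian update
        w = w / np.linalg.norm(w)                 # keep weights normalised
    return w
```

Because the weight change is gated by the trace rather than by the instantaneous activity alone, repeated presentations of transformed views in temporal sequence pull the cell's weights toward the shared input statistics of that object, which is the mechanism the model uses to build invariance.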

Original publication

DOI: 10.1016/j.visres.2015.12.008
Type: Journal article
Journal: Vision Res
Publication Date: 02/2016
Volume: 119
Pages: 16-28
Keywords: Continuous transformation, Inferior temporal cortex, Trace learning, Visual object recognition, Animals, Computer Simulation, Form Perception, Humans, Learning, Models, Neurological, Photic Stimulation, Primates, Visual Cortex, Visual Pathways