
We present a new approach to modelling grouping in a highly parallel and flexible system. The system is based on the Selective Attention for Identification model (SAIM) [1], but extends it by incorporating feature extraction and grouping processes: the Grouping and Selective Attention for Identification model (G-SAIM). The main grouping mechanism is implemented in a layered grouping-selection network. In this network, activation spreads across similar adjacent pixels in a bottom-up manner through similarity-modulated excitatory connections. This spread of activation is controlled by top-down connections from stored knowledge. These top-down connections assign different groups within a known object to different layers of the grouping-selection network in such a way that the spatial relationship between the groups is maintained. In addition, the top-down connections allow multiple instances of the same object to be selected from an image. In contrast, selection operates on single objects when the multiple stimuli present are different. This implementation of grouping within and between objects matches a broad range of experimental data on human visual attention. Moreover, as G-SAIM maintains crucial features of SAIM, earlier modelling successes are expected to be repeated. © Springer-Verlag Berlin Heidelberg 2005.
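To make the bottom-up part of this mechanism concrete, the sketch below (ours, not from the paper; the function name, similarity measure, and all parameters are illustrative assumptions) spreads activation from a seed pixel across 4-connected neighbours, with the excitatory coupling between two pixels weighted by a Gaussian similarity of their intensities, so activation fills a region of similar pixels while being blocked at intensity boundaries:

```python
import numpy as np

def spread_activation(image, seed, sigma=0.1, steps=50, rate=0.5):
    """Bottom-up activation spread over similar adjacent pixels.

    image: 2D array of pixel intensities in [0, 1].
    seed:  (row, col) pixel where activation is injected.

    Activation flows between 4-connected neighbours, scaled by a
    Gaussian similarity of their intensities (a stand-in for the
    similarity-modulated excitatory connections described above).
    """
    h, w = image.shape
    act = np.zeros((h, w))
    act[seed] = 1.0
    for _ in range(steps):
        new = act.copy()
        for r in range(h):
            for c in range(w):
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < h and 0 <= nc < w:
                        # Excitation is strong only between pixels of
                        # similar intensity.
                        sim = np.exp(-(image[r, c] - image[nr, nc]) ** 2
                                     / (2 * sigma ** 2))
                        new[r, c] += rate * sim * act[nr, nc]
        act = np.clip(new, 0.0, 1.0)
    return act
```

On an image split into a dark and a bright half, seeding the dark half saturates activation throughout that region while the bright half stays near zero. The full model additionally gates this spread with top-down connections from stored object knowledge, which this sketch omits.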

Original publication

Journal article

Lecture Notes in Computer Science

Pages

148–158