Facial self-imitation: objective measurement reveals no improvement without visual feedback.
Cook R., Johnston A., Heyes C.
Imitation of facial gestures requires the cognitive system to equate the seen-but-unfelt with the felt-but-unseen. Rival accounts propose that this "correspondence problem" is solved either by an innate supramodal mechanism (the active intermodal-mapping, or AIM, model) or by learned, direct links between the corresponding visual and proprioceptive representations of actions (the associative sequence-learning, or ASL, model). Two experiments tested these alternative models using a new technology that permits, for the first time, the automated, objective measurement of imitative accuracy. Euclidean distances, measured in an image-derived principal component space, were used to quantify the accuracy of adult participants' attempts to replicate their own facial expressions before, during, and after training. Results supported the ASL model. In Experiment 1, participants relying solely on proprioceptive feedback became progressively worse at self-imitation. In Experiment 2, participants who received visual feedback that did not match their execution of facial gestures likewise failed to improve. However, in both experiments, groups that received visual feedback contingent on their execution of facial gestures showed progressive improvement.
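The accuracy measure described above, a Euclidean distance between a target expression and an imitation attempt computed in an image-derived principal component space, could be sketched roughly as follows. This is a minimal illustration only, not the authors' actual pipeline: it assumes aligned, equal-sized grayscale face frames, a scikit-learn PCA, and arbitrary choices of function names, component count, and preprocessing.

```python
# Minimal sketch (illustrative assumptions, not the authors' system) of scoring
# imitation accuracy as a Euclidean distance in an image-derived PC space.
import numpy as np
from sklearn.decomposition import PCA


def fit_face_space(face_images, n_components=20):
    """Fit a PCA 'face space' from a stack of aligned face images.

    face_images: array of shape (n_images, height, width). The component
    count and preprocessing here are assumptions for illustration.
    """
    X = face_images.reshape(len(face_images), -1).astype(float)
    pca = PCA(n_components=n_components)
    pca.fit(X)
    return pca


def imitation_error(pca, target_image, attempt_image):
    """Euclidean distance between target and attempt in PC space.

    Smaller values indicate a more accurate imitation of the target
    expression by the attempt.
    """
    target = pca.transform(target_image.reshape(1, -1).astype(float))
    attempt = pca.transform(attempt_image.reshape(1, -1).astype(float))
    return float(np.linalg.norm(target - attempt))


# Example with synthetic arrays standing in for aligned face frames.
rng = np.random.default_rng(0)
frames = rng.random((100, 64, 64))            # stack of "face" images
space = fit_face_space(frames, n_components=20)
err = imitation_error(space, frames[0], frames[1])
print(f"imitation error in PC space: {err:.3f}")
```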