NU author(s): Professor Jenny Read
Full text for this publication is not currently held within this repository. Alternative links are provided below where available.
Abstract: Stereo depth perception depends on the fact that objects project to different positions in the two eyes. Because our eyes are offset horizontally, these retinal disparities are mainly horizontal, and horizontal disparity suffices to give an impression of depth. However, depending on eye position, there may also be small vertical disparities. These are significant because, given both vertical and horizontal disparities, the brain can deduce eye position from purely retinal information and, hence, derive the position of objects in space. However, we show here that, to achieve this, the brain need measure only the magnitude of vertical disparity; for physically possible stimuli, the sign then follows from the stereo geometry. The magnitude of vertical disparity - and hence eye position - can be deduced from the response of purely horizontal-disparity sensors because vertical disparity moves corresponding features off the receptive fields, reducing the effective binocular correlation. As proof, we demonstrate an algorithm that can accurately reconstruct gaze and vergence angles from the population activity of pure horizontal-disparity sensors and show that it is subject to the induced effect. Given that disparities experienced during natural viewing are overwhelmingly horizontal and that eye position measures require only horizontal-disparity sensors, this work raises two questions: Does the brain in fact contain sensors tuned to nonzero vertical disparities, and if so, why? © 2006 ARVO.
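The abstract's central mechanism - that vertical disparity shifts corresponding image features off a horizontal-disparity sensor's receptive field and thereby lowers its effective binocular correlation - can be illustrated with a toy simulation. The sketch below is not the authors' algorithm; it is a minimal, hypothetical demonstration assuming a zero-horizontal-disparity sensor that simply correlates the two eyes' images, with the right-eye image being a vertically shifted copy of the left-eye image.

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.standard_normal((64, 64))
kernel = np.ones(5) / 5
# Smooth each column so nearby rows are correlated, crudely mimicking
# the spatial correlations of natural images.
left = np.apply_along_axis(
    lambda col: np.convolve(col, kernel, mode="same"), 0, noise
)

def correlation(a, b):
    """Normalized cross-correlation between two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

def sensor_response(dy):
    """Response of a pure horizontal-disparity sensor (tuned to zero
    disparity) when the right-eye image equals the left-eye image
    shifted vertically by dy pixels."""
    right = np.roll(left, dy, axis=0)
    return correlation(left, right)

# Correlation falls monotonically as |vertical disparity| grows, so the
# magnitude of vertical disparity is readable from the reduced response.
# Note the response depends only on |dy|, consistent with the abstract's
# point that only the magnitude of vertical disparity need be measured.
responses = {dy: sensor_response(dy) for dy in range(4)}
```

In this toy setup `responses[0]` is 1 (identical images) and the response decays with increasing vertical offset; an eye-position estimator could in principle invert that decay to recover the vertical-disparity magnitude, which is the intuition the paper develops with realistic stereo geometry.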
Author(s): Read JCA, Cumming BG
Publication type: Article
Publication status: Published
Journal: Journal of Vision
ISSN (electronic): 1534-7362
Publisher: Association for Research in Vision and Ophthalmology
PubMed id: 17209738
Notes: Article no. 1