
Modeling the Gaze Control System

Schematic Model

Model Schematic. The desired gaze displacement is decomposed into separate desired eye
and head displacement signals. Desired eye displacement is compared to an internal
representation of current eye displacement; their difference (eye motor error) is the
input to a nonlinear function, the saccadic burst generator. This portion of the model
is adapted from van Gisbergen et al. 1981. As illustrated, the current hypothesis is
that a head velocity command (Vh) modifies the gain of the saccadic burst generator.
Details in Freedman 2001.

Using control systems modeling techniques, the following hypothesis was developed into a model of the gaze system (Freedman, 2001). Hypothesis: a vectorial signal of desired gaze displacement is derived from the location of the active population in the deeper layers of the superior colliculus. This signal is then decomposed into desired eye and head displacement signals, which serve as inputs to separate controllers. A dynamic signal of horizontal head velocity inhibits the gain of the exponential function describing the horizontal eye burst generator.
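The decomposition and gain-modulation hypothesis can be sketched in a few lines of simulation code. This is a minimal illustration under assumed parameter values (the burst-generator constants, the head-velocity gain law, and the stand-in proportional head controller are all hypothetical), not the published Freedman (2001) implementation.

```python
import math

def burst_generator(eye_motor_error, head_velocity,
                    b_max=600.0, m0=10.0, k=0.01):
    """Exponential saccadic burst generator (after van Gisbergen et al.
    1981); its gain is reduced by head velocity, per the hypothesis.
    b_max, m0, and k are illustrative values."""
    gain = 1.0 / (1.0 + k * abs(head_velocity))        # assumed gain law
    drive = b_max * (1.0 - math.exp(-abs(eye_motor_error) / m0))
    return math.copysign(gain * drive, eye_motor_error)

def simulate(gaze_target=80.0, head_fraction=0.6, dt=0.001, duration=0.5):
    # Decompose desired gaze displacement into desired head and eye parts.
    head_goal = head_fraction * gaze_target
    eye_goal = gaze_target - head_goal
    eye = head = head_vel = 0.0
    for _ in range(int(duration / dt)):
        # Eye motor error: desired eye displacement minus an internal
        # representation of current eye displacement.
        eye_err = eye_goal - eye
        eye_vel = burst_generator(eye_err, head_vel)
        # Saturating proportional head command (a stand-in controller).
        head_vel = max(-300.0, min(300.0, 20.0 * (head_goal - head)))
        eye += eye_vel * dt
        head += head_vel * dt
    return eye, head, eye + head

eye, head, gaze = simulate()
```

Note that gaze position here is simply the sum of eye and head displacement; there is no gaze feedback loop, yet the gaze shift lands near the 80° target because each controller nulls its own motor error.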

Plots of Gaze Control

Figure 9 A–F. A pulse is added to (“with”) or subtracted from
(“against”) the input of the head plant, 350 ms after simulation onset.
Unperturbed (“control,” light grey), perturbed in the direction of the
head movement (“with,” dark grey), and perturbed in the opposite
direction (“against,” black) trajectories during 80° gaze shifts are shown.
Head (A), gaze (B), and eye (C) velocity, and position (D, E, and F, respectively)
are shown. Note the minimal gaze error despite the lack of gaze feedback control.

Development of this model was motivated, in part, by the observation that during gaze shifts with large head components, gaze and eye velocity profiles have two peaks (cf. Freedman and Sparks 2000). The hypothesized interaction between eye and head control signals is sufficient to account for gaze, eye, and head movement metrics and kinematics. This hypothesis is consistent with two classes of gaze control models, which differ in where a gaze signal is decomposed into separate eye and head control signals. In the current implementation, this decomposition occurs upstream from two separate, interacting controllers. By using neck reflexes to compensate for externally applied head perturbations, this model accounts for existing head perturbation data without the use of gaze feedback. Because it can account for the data that led to the gaze feedback proposal, this model represents a viable alternative that makes differential predictions about the neural substrates of visual orienting movements.
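The head-perturbation test of Figure 9 can likewise be sketched. A pulse is injected at the head plant input 350 ms after onset, and a proportional head command, standing in here for the neck reflexes, restores the head trajectory; no gaze feedback signal appears anywhere in the loop. All parameter values (pulse width and amplitude, controller gains, burst-generator constants) are illustrative assumptions, not published values.

```python
import math

def run(pulse=0.0, gaze_target=80.0, head_fraction=0.6, dt=0.001, duration=0.8):
    head_goal = head_fraction * gaze_target
    eye_goal = gaze_target - head_goal
    eye = head = 0.0
    for i in range(int(duration / dt)):
        t = i * dt
        # Proportional head command: a stand-in for neck reflexes that
        # null head position error (no gaze feedback involved).
        head_vel = max(-300.0, min(300.0, 20.0 * (head_goal - head)))
        if 0.35 <= t < 0.40:      # 50 ms perturbation pulse (assumed width)
            head_vel += pulse     # "with" (+) or "against" (-) the movement
        # Eye motor error drives the burst generator, whose gain is
        # reduced by head velocity (assumed gain law and constants).
        eye_err = eye_goal - eye
        gain = 1.0 / (1.0 + 0.01 * abs(head_vel))
        eye_vel = math.copysign(
            gain * 600.0 * (1.0 - math.exp(-abs(eye_err) / 10.0)), eye_err)
        head += head_vel * dt
        eye += eye_vel * dt
    return eye + head             # final gaze position

control = run(0.0)
with_p  = run(+200.0)
against = run(-200.0)
```

In all three conditions the final gaze position ends near the 80° target: the head command drives the perturbed head back toward its own goal, reproducing the minimal terminal gaze error seen in the figure without any gaze feedback loop.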
