
Processing and Integration of Face and Vocal Information in the Frontal Lobe


The integration of auditory and visual stimuli is crucial for recognizing objects by sight and sound, communicating effectively, and navigating through our complex world. While auditory and visual information are combined at many sites in the human brain, the frontal lobes are associated with memory and language, functions that depend on the multisensory integration of complex auditory and visual stimuli. In our laboratory we are interested in how the ventral prefrontal cortex represents high-level auditory information and in the neuronal mechanisms that underlie the integration of complex auditory and visual information, primarily face and vocal information during communication. Studies in our laboratory have shown that neurons within specific regions of the ventral prefrontal cortex respond robustly to complex sounds, including species-specific vocalizations, while previous studies have shown that adjacent ventral prefrontal regions respond selectively to faces.

Recently we have demonstrated that neurons within the ventral prefrontal cortex are multisensory, responding both to faces and to their corresponding vocalizations.

The functional organization of the primate prefrontal cortex.

We have begun to explore some of the factors that affect the integration of dynamic faces and vocalizations in the frontal lobe, including temporal coincidence and stimulus congruence, as well as the emotional expression conveyed by the face-vocalization pair and the identity of the speaker. To this end, we have examined changes in neural activity when face-vocalization pairs are mismatched either in content or in temporal sequence.

Coronal section through the prefrontal cortex.

Our data indicate that incongruent and temporally offset stimuli evoke changes in single prefrontal neurons under both active and passive task conditions. Single-unit recordings demonstrate that prefrontal neurons can encode these mismatches with changes in response latency and/or response magnitude. Additional studies are aimed at determining the connections of face/vocalization processing areas in the frontal lobe with unimodal and polymodal processing regions of the temporal lobe. Further analysis of the neural mechanisms that support face and voice integration in non-human primates may help us understand the mechanisms underlying social communication and social cognition.