Temporal Synchrony and Audiovisual Integration in Autism and Typical Development

Project Collaborator:

Dr. Rafael Klorman

[Figure: Graph of mismatch negativity]

In this study, we are investigating children's ability to detect and use temporal synchrony cues from a speaker's voice (auditory) and lips (visual), using both behavioral and psychophysiological measures. In one aim of the study, we use electroencephalography (EEG) to monitor children's neural activity while they watch videos with high versus low synchrony between the auditory and visual information. In particular, we are examining characteristics of the mismatch negativity (MMN) component of the EEG waveform, which provides information about how quickly auditory differences are discriminated and how this is affected by the presence and synchrony of visual cues.

Have you ever noticed how difficult it is to watch television when the auditory and visual information are out of sync? Our brains use temporal cues to link information coming from different sources. Individuals with autism often have difficulty combining information from different sources, and part of this difficulty may stem from problems using temporal information. Among other things, difficulties in this area can have a significant impact on communication (e.g., quickly picking up on which person in a group is talking to you). This study investigates whether children and adolescents with autism are able to use temporal information from the lips and voice to locate and identify different sounds. In addition, we will record EEG (a noninvasive measure of neural activity) to examine how temporal processing of speech is represented in the brain in individuals with autism and in their peers without autism.

This study is being funded by the National Institutes of Health (NIH).
