We combine neurophysiological, behavioral, and computational modeling techniques to understand the neural mechanisms underlying the perception of complex sounds. Most of our work focuses on listeners with normal hearing, but we are also interested in applying our laboratory's results to the design of physiologically based signal-processing strategies that aid listeners with hearing loss.
We are currently studying three specific problems:
- Detection of acoustic signals in background noise
- Detection of fluctuations in the amplitude of sounds
- Neural coding of speech sounds
These problems are of interest because they are tasks at which the healthy auditory system excels but that can present great difficulty for listeners with hearing loss. We study the psychophysical limits of ability in these tasks, and we also study the neural coding and processing of these sounds using stimuli matched to those of our behavioral studies.
Computational modeling helps bridge the gap between our behavioral and physiological studies. For example, using computational models derived from neural population recordings, we make predictions of behavioral abilities that can be directly compared to actual behavioral results. The cues and mechanisms used by our computational models can be manipulated to test different hypotheses for neural coding and processing.
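To make this model-to-behavior comparison concrete, here is a minimal toy sketch of the general approach, not the laboratory's actual model: a stand-in "population model" maps signal level to a sensitivity index (d'), and the predicted detection threshold is the level at which d' reaches a criterion, which could then be compared against measured behavioral thresholds. All function names, parameter values, and the form of the model are illustrative assumptions.

```python
import numpy as np

def population_dprime(signal_level_db, noise_sigma=1.0, n_neurons=50):
    """Toy stand-in for a neural population model (illustrative only).

    Mean pooled response grows with signal amplitude, and pooling across
    n_neurons independent neurons improves sensitivity by sqrt(n_neurons).
    Returns d', the separation between signal-plus-noise and noise-alone
    response distributions in units of the noise standard deviation.
    """
    gain = 0.1 * np.sqrt(n_neurons)  # arbitrary per-neuron gain, pooled
    return gain * 10 ** (signal_level_db / 20) / noise_sigma

def predicted_threshold_db(criterion_dprime=1.0):
    """Predicted detection threshold: the lowest signal level (dB) at
    which the model's d' reaches the criterion, found by a grid search."""
    levels = np.linspace(-40, 40, 801)  # 0.1-dB steps
    dprimes = population_dprime(levels)
    return float(levels[np.argmax(dprimes >= criterion_dprime)])

# A predicted threshold like this could be computed for each masker
# condition and compared directly with behavioral thresholds measured
# using the same stimuli.
print(f"Predicted threshold: {predicted_threshold_db():.1f} dB")
```

In a real application, `population_dprime` would be replaced by a physiologically detailed model fit to neural population recordings, and the model's internal cues (e.g., rate versus timing information) could be selectively disabled to test competing hypotheses, as described above.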
By identifying the cues involved in detecting signals in noise and fluctuations in signals, we aim to guide novel signal-processing strategies that preserve, restore, or enhance these cues for listeners with hearing loss.
Development of this site was supported by a NIH-NIDCD Administrative Supplement for Data and Resource Sharing associated with grant NIDCD-01641 (PI: L.H. Carney) and the National Organization for Hearing Research Foundation (NOHR).