Friday, September 22, 2017
Ross Maddox, PhD
Ross Maddox's lab has posted the preprint for his latest paper on bioRxiv! They showed that it is possible to measure the response of the auditory brainstem to natural speech using EEG.
Speech is an ecologically essential signal whose processing begins in the subcortical nuclei of the auditory brainstem, but there are few experimental options for studying these early responses under natural conditions. While encoding of continuous natural speech has been successfully probed in the cortex with neurophysiological tools such as electro- and magnetoencephalography, the rapidity of subcortical response components combined with unfavorable signal-to-noise ratios has prevented application of those methods to the brainstem. Instead, experiments have used thousands of repetitions of simple stimuli such as clicks, tone bursts, or brief spoken syllables, with deviations from those paradigms leading to ambiguity in the neural origins of measured responses. In this study we developed and tested a new way to measure the auditory brainstem response to ongoing, naturally uttered speech. We found a high degree of morphological similarity between the speech-evoked auditory brainstem response (ABR) and the standard click-evoked ABR, notably a preserved wave V, the most prominent voltage peak in the standard click-evoked ABR. Because this method yields distinct peaks at latencies too short to originate from the cortex, the responses measured can be unambiguously determined to be subcortical in origin. The use of naturally uttered speech to evoke the ABR allows the design of engaging behavioral tasks, facilitating new investigations of the effects of cognitive processes like language processing and attention on brainstem processing.

Read More: Ross Maddox Finds Auditory Brainstem Responses to Continuous Natural Speech in Human Listeners
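The abstract above describes deriving an evoked response from EEG recorded during continuous speech. One generic way such responses can be estimated (a minimal sketch, not the paper's specific method; the regressor, sampling rate, and latency window here are illustrative assumptions) is to cross-correlate a stimulus-derived regressor with the EEG and read off the response at short lags:

```python
import numpy as np

def estimate_response(regressor, eeg, fs, t_max=0.015):
    """Estimate an evoked response by cross-correlating a stimulus
    regressor (e.g., a transformed speech waveform) with the EEG.
    Returns lags (seconds) and the normalized cross-correlation."""
    n_lags = int(t_max * fs)
    x = regressor - regressor.mean()  # zero-mean both signals
    y = eeg - eeg.mean()
    # resp[k] ~ sum_n x[n] * y[n + k], normalized by the regressor power
    resp = np.array([np.dot(x[:len(x) - k], y[k:]) for k in range(n_lags)])
    resp /= np.dot(x, x)
    lags = np.arange(n_lags) / fs
    return lags, resp

# Toy check with synthetic data: recover a known impulse response
# (a single "wave V"-like peak at 4 ms) from noisy simulated "EEG".
fs = 10000
rng = np.random.default_rng(0)
x = rng.standard_normal(fs * 10)           # stand-in regressor
kernel = np.zeros(60)
kernel[40] = 1.0                           # peak at 40 samples = 4 ms
y = np.convolve(x, kernel)[:len(x)] + 0.5 * rng.standard_normal(len(x))
lags, resp = estimate_response(x, y, fs)
print(lags[np.argmax(resp)])               # peak latency near 0.004 s
```

With white-noise input this reduces to a reverse-correlation estimate of the system's impulse response; a real analysis would additionally need artifact rejection and averaging across many minutes of data to reach the signal-to-noise ratios the abstract mentions.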
Professor Ross Maddox receives grant for collaborative project with ECE & CS professors
Monday, August 28, 2017
Professor Ross Maddox (BME and Neuroscience) and collaborators Zhiyao Duan (ECE) and Chenliang Xu (CS) have received a pilot grant from the University of Rochester's Arts, Sciences and Engineering and the Center for Emerging and Innovative Sciences. Their project is titled "Real-time synthesis of a virtual talking face from acoustic speech."