Sam V Norman-Haignere, Ph.D. - Columbia University
Mind Brain Behavior Institute
To understand the meaning of a sentence, recognize a familiar voice in a crowd, or pick out the melody in a song, the brain must rapidly recognize, remember, and synthesize information across timescales spanning milliseconds to seconds. Human auditory cortex is essential to this process, but many basic questions about its organization and computational properties remain unanswered, in part due to the inherent challenge of probing responses to high-dimensional natural stimuli using coarse and noisy neuroimaging methods. In this talk, I will describe three methodological advances that make it possible to: (1) infer brain organization from responses to high-dimensional natural stimuli; (2) test whether a computational model can explain a neural response by comparing natural and "model-matched" stimuli; and (3) infer how complex sensory responses integrate temporal information in natural stimuli. By applying these methods to neuroimaging data (fMRI, functional ultrasound) and human intracranial recordings, I have uncovered some of the dominant response dimensions that organize human auditory cortex, inferred computational principles that underlie their responses, and begun to unravel how auditory cortex flexibly integrates across the multiscale temporal structure that defines natural sounds like speech and music.
Feb 11, 2021 @ 3:30 p.m.
Host: University of Rochester School of Medicine & Dentistry
Department of Biostatistics & Computational Biology
Del Monte Neuroscience Institute