Faculty Q&A: Samuel Norman-Haignere
Samuel Norman-Haignere, Ph.D., is an assistant professor of Neuroscience and of Biostatistics and Computational Biology. He received his B.A. in Cognitive Science from Yale University and completed his Ph.D. in Neuroscience at MIT. His research aims to understand how the brain represents natural sounds like speech and music.
Please tell us about your research.
My research is motivated by the fact that hearing is really challenging. We take for granted being able to understand one another during a conversation or recognize a familiar piece of music or melody. I'm broadly interested in how the brain perceives and codes natural sounds like speech and music. These sounds appear to be uniquely important to humans, and we have evidence that the human brain has mechanisms for encoding them that are distinct from those present in other animals.
I work with two main data modalities: MRI and intracranial recordings. For the latter, I work with neurologists and neurosurgeons at the Medical Center to measure neural responses from epilepsy patients who have electrodes implanted in their brain (electrocorticography, or ECoG) to localize seizure-related activity as part of their clinical care. ECoG enables much more precise measurements of electrical activity in the brain and is the only setting in which we can measure responses with high spatiotemporal precision in the human brain.
Another central goal of my research is to develop computational methods and models that allow us to understand how the brain codes natural sounds using these neural recordings. We develop statistical methods to reveal underlying structure from high-dimensional neural responses to natural sounds, and we develop computational models that can predict those responses and link them with perception and behavior.
How did you become interested in this field?
I first became interested in perception. Perception feels simple and effortless, but when you start to "look under the hood" and try to understand the underlying mechanisms, you realize that what the brain is doing is incredibly impressive. The kinds of things we do every day, like understanding speech or recognizing a familiar melody, are highly challenging to replicate in machine systems, and we find them easy only because a large chunk of the brain is devoted to making sense of sounds, images, smells, and so on. My research is specifically focused on the auditory cortex. In my doctoral and postdoctoral work, we found evidence that there are distinct neural populations in the human brain that respond highly selectively to music, speech, and singing. The music- and song-selective populations were particularly surprising and had not been seen clearly before, in part because uncovering them required more sophisticated computational methods.
What brought you to the University of Rochester?
I have a joint position between the departments of Biostatistics and Computational Biology and Neuroscience, which is an ideal fit for someone like me with substantial methodological and experimental interests. Rochester has strong neuroscience, statistics, and auditory communities, and the Medical Center has an outstanding neurology team, with whom I am collaborating to collect intracranial data.