2011 Research Awards
Sensory Progenitors in the Inner Ear: Molecular and In Vitro Analysis
PI: Amy Kiernan, Ph.D.
Co-Investigator: Patricia White, Ph.D.
The sensory progenitors in the inner ear give rise to hair cells, supporting cells, and the cochlear-vestibular neurons. This progenitor cell population could therefore be used to produce several essential inner ear cell types that are often defective in deafness and balance disorders. Unfortunately, little is known about the molecular requirements for the development of this cell population, or about when it becomes specified to develop as neurons or sensory cells in vitro. We previously demonstrated that both the Notch and SOX2 pathways are important for sensory progenitor development. Using these genes, we have established a number of mouse genetic tools to investigate the sensory progenitor population, including loss- and gain-of-function mutations in both pathways. In Aim 1 we propose to use these genetic tools to discover novel downstream genes in the Notch and SOX2 pathways using a microarray approach. In Aim 2 we will use a Sox2-GFP mouse allele in conjunction with cell sorting to investigate the differentiation capacities of the sensory progenitor population in vitro. The innovative aspects of our proposal include a unique combinatorial approach to the microarray experiments, which will overcome the data-overload problem inherent in large-scale expression studies. Moreover, by applying Dr. White's unique experience in cell sorting and culture of inner ear cells, we will investigate the potential of a progenitor population that has not previously been studied in culture.
Multisensory Processing of Valence and Incongruence in the Primate Amygdala and Ventral Prefrontal Cortex
PI: Liz Romanski, Ph.D.
Co-Investigator: Katalin M Gothard, M.D., Ph.D. (University of Arizona)
The integration of auditory and visual stimuli is crucial for recognition and communication. While many brain areas process face and vocal information, two regions stand out as essential for processing the social and emotional aspects of face and vocal stimuli. The amygdala and the orbitofrontal cortex have been implicated in the processing of social and emotional information in many studies of human and non-human primates (e.g., Kringelbach and Rolls, 2003; Bechara, 2004). These structures often show joint activity during tasks that require evaluating the social-emotional significance of facial and vocal signals.
In the present proposal, two laboratories will perform identical recordings, using identical stimuli, in the amygdala (Gothard lab) and in the ventrolateral prefrontal cortex (VLPFC; Romanski lab). We will characterize neurons as multisensory on the basis of their responses to species-specific, positively and negatively valenced face-vocalization stimuli. We will then mismatch the stimuli to create valence mismatches (negative face/positive voice) and semantic+valence mismatches (affiliative face/aggressive voice). We hypothesize that amygdala neurons will show superadditive multisensory responses when cross-modal stimuli carry congruent and redundant valence information, but a decreased response when face and vocalization stimuli are mismatched in valence. In contrast, VLPFC neurons will be less affected by valence but will show significant changes when semantic information is incongruent. These parallel experiments, performed in different labs under identical experimental conditions with identical stimulus sets, will allow us to compare response latencies, response amplitudes, and the functional connectivity between the two areas. Findings from the present study will help reveal the circuitry and cellular mechanisms underlying the integration of audiovisual information used during communication. Our findings will also have implications for understanding neurological disorders that affect social-emotional processing and communication, including schizophrenia and autism, in which dysfunction of the amygdala and ventral frontal lobe has been suggested. In short, the aim of the present study is to determine how amygdala and ventral prefrontal neurons integrate positively and negatively valenced, dynamic face-vocalization information as part of a network that processes social information.
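The "superadditive" classification used here follows the standard criterion from the multisensory literature: a neuron is superadditive when its response to the combined face-vocalization stimulus exceeds the sum of its unimodal responses, and suppressed when the bimodal response falls below the best unimodal response. A minimal illustrative sketch of that criterion (the function name and firing-rate values are hypothetical, not data or analysis code from the study):

```python
def classify_multisensory(r_face, r_voice, r_combined, tol=0.0):
    """Classify an audiovisual interaction by comparing the bimodal
    response to the unimodal responses (rates in spikes/s)."""
    if r_combined > r_face + r_voice + tol:
        # bimodal response exceeds the sum of unimodal responses
        return "superadditive"
    if r_combined < max(r_face, r_voice) - tol:
        # bimodal response falls below the best unimodal response
        return "suppressive"
    return "additive/subadditive"
```

In practice a statistical test (with `tol` reflecting response variability) would replace the raw comparison, but the logic of the classification is as above.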
Estrogen-Modulation of Visual Cortical Processing and Plasticity
PI: Ania Majewska, Ph.D.
Co-Investigators: Greg DeAngelis, Ph.D. & Raphael Pinaud, Ph.D. (University of Oklahoma)
The classic estrogen 17β-estradiol (E2) regulates a wide array of brain processes, including the brain's regulation of reproductive behavior, mood, and cognitive processes such as learning and memory. Mounting evidence suggests that E2 may also affect sensory processing. We recently provided the first demonstration that brain-generated E2 directly regulates auditory physiology in the vertebrate brain, in real time, by controlling the strength of local inhibitory transmission via a novel, non-genomic, presynaptic mechanism [1]. We also showed that one functional consequence of this E2 modulation of auditory processing is an increase in the information that auditory neurons carry about stimulus structure, which enhances the neural and behavioral discrimination of auditory signals [2]. Finally, we showed that this brain-generated E2 is both necessary and sufficient to drive the plasticity-associated gene expression required for sensory learning [1]. Surprisingly, all of these effects of E2 on auditory processing can be blocked by interfering with local E2 production within the auditory forebrain. This suggests that neuronal, rather than gonadal, estrogen can profoundly and rapidly modulate neuronal circuits within higher-order brain regions. Interestingly, we now have preliminary data showing that the rodent primary visual cortex (V1) is a major site of E2 synthesis and sensitivity. The effects of E2, in particular brain-generated E2, on visual processing are completely unstudied. Here, we will explore for the first time the hypothesis that E2 produced in V1 can affect visual cortical processing and plasticity in vivo. We bring together three investigators with complementary expertise to investigate in depth the effects of local E2 production on visual cortical circuits, using state-of-the-art electrophysiological, computational, and in vivo imaging approaches.
CNS Axon Regeneration in Glaucoma and Traumatic Injury
PI: Rick Libby, Ph.D.
Co-Investigator: Peter Shrager, Ph.D.
In the CNS, axonal regeneration following injury is severely limited. Inhibitory factors have been identified in myelin, the insulating sheath that surrounds axons. Myelin debris is not cleared efficiently after traumatic injury to the CNS, and several proteins within it, including Nogo, myelin-associated glycoprotein, and oligodendrocyte myelin glycoprotein, can inhibit axonal growth. These proteins are thought to act by binding to neuronal receptors, but the identity and function of these receptors are not well understood. The Nogo receptors (NgR family) and the paired immunoglobulin-like receptor B (PirB) bind all three myelin-associated inhibitors with high affinity. However, the role of these receptors, both in the normal animal and following injury, has not been established in vivo. Overcoming growth inhibition is important in trauma, e.g., in the spinal cord, but it is receiving increased attention in glaucoma as well, where the primary site of damage to retinal ganglion cells has been shown to be the lamina cribrosa, the partition through which optic nerve axons pass as they exit the eye. This project investigates a new approach to monitoring optic nerve function in a glaucoma mouse, the possible involvement of myelin-associated inhibitors in limiting regeneration in both trauma and glaucoma, and the applicability of nerve crush models to glaucoma. It is designed to link a laboratory with expertise in glaucoma with one focused on CNS axonal regeneration.
Ability to Use Recovered Visual Motion Percepts in Cortical Blindness
PI: Krystel Huxlin, Ph.D.
Co-Investigator: Tania Pasternak, Ph.D.
Damage to the adult primary visual cortex (V1) causes a loss of conscious vision over the same part of the visual field in both eyes (cortical blindness, CB). This increasingly common cause of permanent disability in older adults has long been considered untreatable. Huxlin and colleagues recently showed that perceptual learning strategies can be used to recover direction discrimination and awareness of motion stimuli at retrained, blind-field locations. While this finding has exciting therapeutic implications, we know very little about the properties of the recovered vision. Because the requirements of the training task are currently limited to identifying motion direction, the person's ability to actively use this recovered percept is unknown. To address this problem, we propose a set of experiments using a task that requires subjects not only to identify the motion direction of each stimulus presented, but also to retain the stimulus information and compare it across space and time. As such, the properties of recovered vision will be characterized for the first time in the context of visual working memory, in humans with permanent V1 damage. The proposed experiments will use novel behavioral paradigms to reveal fundamental properties of training-induced visual re-learning. This work has important implications for our understanding of plasticity in the damaged, adult visual system, providing a base of knowledge from which more effective treatment regimens for people with visual disorders can be developed. The grant will enable us to collect pilot data in preparation for submission of an R01 application to the National Eye Institute.
Sensori-Motor Control and Adaptation of Head Movements
PI: Ed Freedman, Ph.D.
Co-Investigator: Marc Schieber, M.D., Ph.D.
Sensori-motor adaptation is the neural process that alters the output of the nervous system (actions, behaviors) in response to persistent real or perceived errors. A focus of research in the Freedman lab has been the neural mediation of gaze (line-of-sight) adaptation. We recently published a paper in the Journal of Neuroscience (Quessy et al., 2010) describing the nature of signals in the superior colliculus during saccadic adaptation. We have also focused efforts on the control of head movements, whose brainstem control is much less well understood. One barrier to understanding the neural computations involved in head movement control is limited knowledge of the connectivity between brainstem neurons and the neck muscles that accomplish the movements. The collaboration proposed here is designed to fill this gap by combining the Freedman lab's expertise in brainstem and cerebellar recording in head-unrestrained subjects with the Schieber lab's expertise in measuring and analyzing the functional connectivity of neurons and muscles (using spike- and stimulus-triggered averaging techniques). The brainstem region of interest, the nucleus reticularis gigantocellularis (NRG), is an important part of the head motor control circuit. It is also a key player in the brainstem-cerebellar loop that may modulate this behavior during sensori-motor adaptation. The proposed project will set the stage for tapping into the adaptation circuitry, in an effort to understand its failure in neurological disorders (cerebellar ataxia and cervical dystonia, two disorders with links to cerebellar function) and, ultimately, to provide insight into new techniques for manipulating this circuitry to improve motor performance.
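Spike-triggered averaging, named above as one of the Schieber lab's connectivity tools, estimates a neuron's influence on a muscle by averaging the EMG signal in a window around every spike: activity unrelated to the neuron averages toward baseline, while any muscle response time-locked to the spike survives as a post-spike peak. A minimal sketch of the averaging step (the function name, sampling rate, and window values are illustrative assumptions, not details from the proposal):

```python
import numpy as np

def spike_triggered_average(emg, spike_times, fs, window=(-0.02, 0.04)):
    """Average EMG segments aligned on each spike time.

    emg: 1-D EMG signal sampled at fs Hz.
    spike_times: spike times in seconds.
    window: (pre, post) extent around each spike, in seconds.
    Returns the mean EMG trace across all spikes whose window fits
    inside the record.
    """
    pre = int(window[0] * fs)   # samples before the spike (negative)
    post = int(window[1] * fs)  # samples after the spike
    segments = [emg[i + pre : i + post]
                for i in (int(t * fs) for t in spike_times)
                if i + pre >= 0 and i + post <= len(emg)]
    return np.mean(segments, axis=0)
```

In a real analysis, a consistent facilitation (or suppression) at a short latency after the spike in this average is taken as evidence of a functional connection from the recorded neuron to the muscle.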