
Athena Willis, Ph.D.

Athena Willis, Ph.D., is a postdoctoral associate under the mentorship of John Foxe, Ph.D., in the Department of Neuroscience. In the Cognitive Neurophysiology Lab, she investigates the role of semantics in multisensory perception using electrophysiological studies of people’s sensory and language experience.

Before joining the Rochester Postdoc Partnership program, Willis earned a Ph.D. in Educational Neuroscience from Gallaudet University, where she studied in the Action & Brain Lab under the mentorship of Lorna Quandt, Ph.D. During her graduate education, she was part of an NSF-funded interdisciplinary team in human-centered computing and neuroscience, where she investigated how the brain perceives one of the poorly understood basic language units in signed languages: movement.

Her EEG time-frequency analyses and behavioral findings showed that embodied experience with the dynamic visual aspects of everyday human movement, whether from lifelong experience of being deaf or the modality-specific effects of signed language, enhances people’s perception and understanding of biological motion in point-light displays of complex actions and in signing virtual humans. These novel findings have implications not only for the design and development of digital information and media in signed languages, such as AI or educational technology, but also for clinical and translational science for deaf people and signers.

Willis has also presented 20 posters at 10 neuroscience and human-centered computing conferences and has won 13 awards and grants, including a Neuroscience Scholars Program Fellow award from the Society for Neuroscience and an honorable mention for the NSF Graduate Research Fellowship Program.

Education

Gallaudet University
Ph.D., Educational Neuroscience
2023

Gallaudet University
B.A., Psychology
2018

Research

Willis studies the role of semantics in multisensory perception using electrophysiological studies of people’s sensory and language experience. Her past research focused on sensorimotor processing during embodied cognition, biological motion perception, and signed language.

Selected Publications

Willis, A.S., Leannah, C., Schwenk, M., Palagano, J., & Quandt, L.C. (In process). Differences in Biological Motion Perception Associated with Hearing Status and Signed Language Use.

Willis, A.S. (2023). The Role of the Mirror Mechanism in Perception of Signing Avatars (Doctoral dissertation). ProQuest. [https://www.proquest.com/docview/2805280683]

Leannah, C., Willis, A.S., & Quandt, L.C. (2022). Perceiving fingerspelling via point-light displays: the stimulus and the perceiver both matter. PLOS ONE. [https://doi.org/10.1371/journal.pone.0272838]

Quandt, L.C., Lamberton, J., Leannah, C., Willis, A., & Malzkuhn, M. (2022). Signing avatars in a new dimension: Challenges and opportunities in virtual reality. In Proceedings of the 7th International Workshop on Sign Language Translation and Avatar Technology (SLTAT). [https://aclanthology.org/2022.sltat-1.13]

Quandt, L.C., Willis, A.S., Schwenk, M., Weeks, K., & Ferster, R. (2022). Attitudes toward signing avatars vary depending on hearing status, age of signed language acquisition, and avatar type. Frontiers in Psychology. [https://doi.org/10.3389/fpsyg.2022.730917]

Quandt, L.C., Kubicek, E., Willis, A.S., & Lamberton, J. (2021). Enhanced biological motion perception in deaf native signers. Neuropsychologia. [https://doi.org/10.1016/j.neuropsychologia.2021.107996]

Quandt, L.C. & Willis, A.S. (2021). Earlier and more robust sensorimotor discrimination of ASL signs in deaf signers during imitation. Language, Cognition and Neuroscience, 1–17. [https://doi.org/10.1080/23273798.2021.1925712]

Quandt, L.C., Lamberton, J., Willis, A.S., Wang, J., Weeks, K., Kubicek, E., & Malzkuhn, M. (2020). Teaching ASL signs using signing avatars and immersive learning in virtual reality. Presented at the 22nd International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’20), October 26–28, Virtual Event, Greece. [https://doi.org/10.1145/3373625.3418042]

Willis, A.S., Codick, E., Boudreault, P., Vogler, C., & Kushalnagar, R. (2019). Multimodal visual languages user interface, M3UI. The Journal on Technology and Persons with Disabilities, 172. [http://hdl.handle.net/10211.3/210399]