Institution

Georgetown University

Key Question

How do humans recognize spoken words?

What We Did

Identified the relevant neural computations and the units over which auditory word recognition is performed.

Recognizing spoken words is vital to daily life. Identifying the relevant neural computations, and the units over which they operate, is critical to furthering our understanding of how the brain performs this process.

Research from both psycholinguistics and auditory neuroscience predicts hierarchical speech representations, progressing from phonetic features to phonemes, syllables, and finally whole spoken words.

Studies of visual word recognition have found evidence for whole-word (lexical) and sublexical representations in the brain, but comparable evidence has been lacking for spoken words. Building on our work on written word representations (e.g., Glezer et al., Neuron, 2009; Journal of Neuroscience, 2015; NeuroImage, 2016), this project leverages functional magnetic resonance imaging (fMRI) and a rapid adaptation paradigm to investigate the existence and location of a spoken word lexicon.

Investigating the existence and location of auditory lexica is critical for understanding the neural bases of speech processing and, consequently, the neural underpinnings of speech processing disorders.

Source

Max Riesenhuber, PhD, Evidence for a Spoken Word Lexicon in the Auditory Ventral Stream

Using fMRI rapid adaptation techniques, we provide evidence for an auditory lexicon in the “Auditory Word Form Area” (AWFA) in the human left anterior superior temporal gyrus, a region containing representations highly selective for individual spoken words. Furthermore, we show that familiarization with novel spoken words sharpens the selectivity of their representations in the AWFA.
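To illustrate the logic of the rapid adaptation paradigm (this is a minimal sketch with made-up numbers, not the study's actual analysis pipeline), the example below computes a release-from-adaptation index from hypothetical per-condition BOLD estimates: a region whose response recovers when the two words in a pair differ, but stays suppressed when the same word repeats, behaves as if it represents individual words. All variable names and values here are hypothetical.

```python
# Illustrative sketch of the fMRI rapid adaptation logic (hypothetical data,
# not the study's analysis). In a rapid adaptation design, pairs of stimuli
# are presented; a population selective for individual words should show a
# reduced (adapted) response to "same word" pairs and a recovered response
# to "different word" pairs.

import numpy as np

def adaptation_index(bold_same, bold_different):
    """Release-from-adaptation index: positive values indicate that the
    response recovers when the two words in a pair differ, i.e. the
    underlying representations distinguish individual words."""
    same = np.mean(bold_same)
    diff = np.mean(bold_different)
    return (diff - same) / (diff + same)

# Hypothetical per-trial BOLD estimates (arbitrary units) for one region.
rng = np.random.default_rng(0)
bold_same = rng.normal(loc=0.8, scale=0.1, size=20)       # repeated-word pairs
bold_different = rng.normal(loc=1.2, scale=0.1, size=20)  # different-word pairs

print(f"Adaptation index: {adaptation_index(bold_same, bold_different):.2f}")
# An index well above zero is consistent with word-selective representations;
# on this logic, familiarization with novel words would be expected to
# increase the index for those words.
```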

These findings reveal strong parallels in how the brain represents written and spoken words, showing convergent processing strategies across language modalities in the visual and auditory ventral streams.

More Work

Livermore takes implantable microsystems to the next level

Research proposal: Building a brain that speaks – programming the auditory-motor interface

Georgetown neuroscientists design a model to mirror human visual learning