Institution

Georgetown University

Researchers

Max Riesenhuber

Key Question

How does the brain process sensory information to help us learn to speak?

What We Did

Trained hearing participants to recognize speech syllables presented as vibrotactile stimulation patterns.

Our sensory organs are exquisitely designed to bring sensory information about our world to the brain. However, it is the brain's ability to process this information that allows us to actually perceive the world.

The goal of sensory substitution is to convey the information normally transduced by one sensory system through a different sensory modality. One example is vibrotactile (VT) speech, in which acoustic speech is transformed into vibrotactile patterns. Despite an almost century-long history of research on vibrotactile speech, the neural bases of VT speech learning had not been studied. In a collaboration with Dr. Lynne Bernstein's group at George Washington University (which included Drs. Ed Auer and Silvio Eberhardt), we trained hearing adult participants to recognize words presented via a custom-built, MRI-compatible vibrotactile transducer built by Dr. Eberhardt.
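To make the acoustic-to-tactile transformation concrete, here is a minimal, illustrative sketch of a channel-vocoder-style mapping from a speech waveform to vibrotactile drive signals: the speech is split into frequency bands, each band's amplitude envelope is extracted, and the envelopes modulate a fixed-frequency tactor carrier. The band edges, carrier frequency, and function names here are hypothetical choices for illustration, not the parameters of the transducer used in the study.

```python
# Illustrative channel-vocoder-style acoustic-to-vibrotactile transform.
# Assumes numpy and scipy; all parameter values are hypothetical.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def speech_to_vt(audio, fs, band_edges=(100, 400, 1000, 3000), carrier_hz=250.0):
    """Map a speech waveform onto per-channel vibration waveforms."""
    t = np.arange(len(audio)) / fs
    carrier = np.sin(2 * np.pi * carrier_hz * t)  # tactor drive frequency
    channels = []
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        # Band-pass one analysis band of the speech signal.
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, audio)
        # Amplitude envelope via the analytic signal.
        env = np.abs(hilbert(band))
        # Modulate the tactor carrier with the speech envelope.
        channels.append(env * carrier)
    return np.stack(channels)  # one row per vibrotactile channel
```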

Figure: Training-related differences in seed-to-voxel functional connectivity for vocoded VT stimuli. (A) Using the right mid-STG ROI as a seed revealed two significant clusters of increased functional connectivity after training: one in the left STG (MNI: -50, -19, 7) and one in the left supramarginal gyrus (MNI: -55, -28, 21). (B) Using the left SII seed derived from the Glasser atlas revealed a significant cluster in the left central sulcus (MNI: -40, -19, 42), as well as two significant clusters in the right hemisphere: the first encompassing the right insula and Heschl's gyrus (MNI: 40, -17, 11), the other on the right STG (MNI: 63, -22, 7). All results shown are corrected at two-tailed voxel-wise α = 0.005 and cluster-level p ≤ 0.05. Colors reflect across-subject t-statistics.
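As background on the analysis named in the caption, a seed-to-voxel functional connectivity map is the correlation of a seed region's average BOLD time series with the time series of every voxel, typically Fisher z-transformed before group statistics. The sketch below shows that core computation under simplifying assumptions (preprocessed, detrended time series held in numpy arrays); the published analysis used its own pipeline, including the voxel-wise thresholding and cluster-level correction described above.

```python
# Minimal sketch of seed-to-voxel functional connectivity (illustrative).
import numpy as np

def seed_to_voxel_fc(seed_ts, voxel_ts):
    """Correlate a seed time series with every voxel's time series.

    seed_ts:  (T,) mean time series of the seed ROI (e.g., right mid-STG)
    voxel_ts: (T, V) time series for V brain voxels
    Returns Fisher z-transformed correlations, shape (V,).
    """
    # Standardize the seed and each voxel column (zero mean, unit std).
    s = (seed_ts - seed_ts.mean()) / seed_ts.std()
    v = (voxel_ts - voxel_ts.mean(axis=0)) / voxel_ts.std(axis=0)
    r = (s @ v) / len(s)  # Pearson r per voxel
    # Fisher z-transform for use in across-subject statistics.
    return np.arctanh(np.clip(r, -0.999999, 0.999999))
```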

Learn more:
ResearchGate
Journal of Neuroscience
