Researchers at Georgetown University Medical Center, in collaboration with George Washington University, leveraged their understanding of auditory speech processing in the brain to enable volunteers to perceive speech through the sense of touch. This may aid in the design of novel sensory substitution devices — swapping sound for touch, for example — for hearing-impaired people.
The findings appear in the Journal of Neuroscience on May 17, 2023.
“In the past few years, our understanding of how the brain processes information from different senses has expanded greatly as we are starting to understand how brain networks are connected across different sensory pathways, such as vision, hearing and touch,” says Maximilian Riesenhuber, PhD, professor in the Department of Neuroscience at Georgetown University Medical Center and senior author of the study.
“For instance, previous work has shown that Braille words can activate the visual brain areas of blind individuals. This suggests that input from the touch system activates their visual system. Similar connections exist in individuals without sensory impairments, such as those between the hearing and touch system. These connections offer the intriguing possibility that it might be possible to process speech through the sense of touch by coming up with a way to effectively couple information from the sense of touch into the auditory speech system.”
Given the possibility of connecting touch and hearing, the researchers’ challenge was to transform spoken words into patterns of vibration that would provide “natural” input to brain areas that process spoken words. They collaborated with Lynne Bernstein, PhD, Silvio Eberhardt, PhD, and Edward Auer, PhD, at George Washington University, who have a long track record of investigating vibrotactile speech and who developed a vibrotactile transducer that could be used inside a magnetic resonance scanner. To test their hypothesis, the research teams trained 20 volunteers to recognize 60 words generated by one of two auditory-to-vibration transformation processes. Volunteers’ brains were scanned before and after training to identify where the vibration-transformed speech was being recognized in the brain.
“Quite remarkably, we found that vibration-transformed speech could activate the brain’s auditory speech recognition system in the same way auditory speech does,” said lead author Srikanth Damera, MD, PhD, who performed the research at Georgetown but is currently a resident at Children’s National Hospital in Washington, D.C.
The key variable in the experiment was the auditory-to-vibration transformation process. In the first group of 10 people, words were transformed in a fluid manner intended to resemble the dynamics of auditory speech, whereas in the second group, the transformation broke each word into phonemes and produced distinct, chunked patterns (like Morse code). Only in the first group were the vibratory stimuli processed like spoken words in the brain, with the vibratory words activating the auditory speech system after training.
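To make the contrast between the two schemes concrete, here is a toy sketch in Python. This is not the researchers' actual transformation algorithm; it is an invented illustration, with arbitrary parameters, of the difference between a continuous, speech-like envelope signal and a discrete, Morse-code-like burst pattern.

```python
import numpy as np

def fluid_vibration(audio, fs, cutoff=30.0):
    """Continuous transformation (illustrative): track the signal's
    amplitude envelope so the vibration rises and falls fluidly
    with the speech sound."""
    rectified = np.abs(audio)
    # Simple moving-average low-pass filter as an envelope follower
    win = max(1, int(fs / cutoff))
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

def chunked_vibration(n_phonemes, chunk_len=80, gap_len=40):
    """Discrete transformation (illustrative): one fixed on/off
    burst per phoneme, separated by silent gaps, producing a
    Morse-code-like pattern."""
    burst = np.ones(chunk_len)
    gap = np.zeros(gap_len)
    return np.concatenate([np.concatenate([burst, gap])
                           for _ in range(n_phonemes)])

# Toy "word": a 0.5 s amplitude-modulated tone sampled at 16 kHz
fs = 16000
t = np.linspace(0, 0.5, int(0.5 * fs), endpoint=False)
audio = np.sin(2 * np.pi * 200 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))

smooth = fluid_vibration(audio, fs)      # varies continuously, like speech
chunks = chunked_vibration(n_phonemes=3) # three on/off bursts
```

The study's finding was that only stimuli resembling the first, continuous kind engaged the brain's auditory speech system; the chunked kind did not.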
“This finding is significant because it opens the door to the development of better sensory substitution devices which aim to replace lost sensory function,” says Riesenhuber. “Our ultimate goal is to connect novel senses to networks that the brain is using to normally process input so that sensory substitution appears effortless and is perceived as natural — ideally, with people not even realizing that they are perceiving speech through touch.”
In addition to Riesenhuber and Damera, collaborators at Georgetown University include Patrick S. Malone, Benson W. Stevens and Richard Klein.
A portion of the funding for this research was provided by Facebook. Further support was provided by National Science Foundation grant BCS-1439338.
The authors report having no personal financial interests related to the study.