Metamodal Coupling of Vibrotactile and Auditory Speech Processing Systems Through Matched Stimulus Representations

It has been postulated that the brain is organized into “metamodal”, sensory-independent cortical modules capable of performing tasks (e.g., word recognition) in both “standard” and novel sensory modalities. However, this theory has primarily been tested in sensory-deprived individuals, with mixed evidence in neurotypical subjects, limiting its support as a general principle of brain organization. Critically, current theories of metamodal processing do not specify requirements for successful metamodal processing at the level of neural representations. Specification at this level may be particularly important in neurotypical individuals, in whom novel sensory modalities must interface with existing representations of the standard sense. Here, we hypothesized that effective metamodal engagement of a cortical area requires congruence between stimulus representations in the standard and novel sensory modalities in that region. To test this, we first used fMRI to identify bilateral auditory speech representations. We then trained 20 human participants (12 female) to recognize vibrotactile versions of auditory words using one of two auditory-to-vibrotactile algorithms. The vocoded algorithm preserved the dynamics and representational similarities of auditory speech, whereas the token-based algorithm did not. Crucially, using fMRI, we found that only in the vocoded group did trained vibrotactile stimuli recruit speech representations in the superior temporal gyrus and lead to increased coupling between these representations and somatosensory areas. Our results advance our understanding of brain organization by providing new insight into unlocking the metamodal potential of the brain, thereby benefiting the design of novel sensory substitution devices that aim to tap into existing processing streams in the brain.


It has been proposed that the brain is organized into “metamodal”, sensory-independent modules specialized for performing certain tasks. This idea has inspired therapeutic applications such as sensory substitution devices, e.g., enabling blind individuals “to see” by transforming visual input into soundscapes. Yet other studies have failed to demonstrate metamodal engagement. Here, we tested the hypothesis that metamodal engagement in neurotypical individuals requires matched representational similarities between stimuli in the novel and standard modalities. We trained two groups of subjects to recognize words generated by one of two auditory-to-vibrotactile transformations. Critically, only vibrotactile stimuli that preserved the representational similarities of auditory speech engaged auditory speech areas after training. This suggests that matching representational similarities is critical to unlocking the brain’s metamodal potential.


  • The authors declare no competing financial interests.
  • A portion of the funding for this research was provided by Facebook. Further support was provided by NSF (BCS-1439338). We would also like to acknowledge Ali Israr, Frances Lau, Keith Klumb, Robert Turcott, and Freddy Abnousi for their involvement in the early stages of the project, including the design and evaluation of the token-based algorithm. Finally, we would like to acknowledge Dr. Ella Striem-Amit for helpful feedback on earlier versions of this manuscript.