Ongoing Projects

Neural mechanisms of deep multimodal learning in the brain
The goal of the project is to test a mechanistic model of how the brain incrementally learns new multimodal concepts without catastrophic interference, instead leveraging prior learning to acquire new concepts from only a few examples. The project is based on a biologically plausible computational model that extends previous models of sensory processing hierarchies and drives a set of experiments in which human participants are trained on novel multimodal (audiovisual) concepts. The experiments combine the high temporal resolution of electroencephalography (EEG) with the exquisite spatial resolution of high-field (7 Tesla) functional magnetic resonance imaging (fMRI), including layer-specific imaging. Testing and validating this model is expected to provide a blueprint for the development of more robust few-shot neuromorphic algorithms for multimodal object detection.
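To make the computational intuition concrete, below is a minimal sketch in Python (hypothetical names, synthetic features; not the lab's actual model). Under the assumption that a fixed sensory hierarchy supplies feature vectors, each new concept can be stored as the average of a few examples, so learning it requires only a handful of samples and leaves previously learned concepts untouched:

```python
# Minimal sketch of incremental few-shot concept learning without
# catastrophic interference (illustrative only; not the lab's model).
# Assumption: a fixed "sensory hierarchy" supplies feature vectors, and
# each new concept is stored as the mean (a prototype) of a few examples.
import numpy as np

class PrototypeLearner:
    def __init__(self):
        self.prototypes = {}  # concept name -> stored feature prototype

    def learn_concept(self, name, examples):
        # Learning touches only this concept's own prototype, so
        # previously learned concepts are unaffected (no interference).
        self.prototypes[name] = np.mean(examples, axis=0)

    def classify(self, features):
        # Nearest-prototype readout over all concepts learned so far.
        return min(self.prototypes,
                   key=lambda n: np.linalg.norm(features - self.prototypes[n]))

# Usage: learn two concepts from 3 examples each, then add a third later
# without revisiting the first two.
rng = np.random.default_rng(0)
learner = PrototypeLearner()
learner.learn_concept("concept_A", rng.normal(loc=0.0, size=(3, 16)))
learner.learn_concept("concept_B", rng.normal(loc=4.0, size=(3, 16)))
learner.learn_concept("concept_C", rng.normal(loc=8.0, size=(3, 16)))
print(learner.classify(rng.normal(loc=0.0, size=16)))  # -> "concept_A"
```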

Interactions of memory and perception in semantic processing
We are often presented with imperfect or ambiguous visual information about our world, and we rely heavily on memory to fill in the missing data and ultimately guide behavior. What mechanisms in the brain are responsible for recognizing, identifying, and categorizing ambiguous visual stimuli, all within hundreds of milliseconds? Answering this question requires understanding how the brain integrates imperfect sensory information with our previous experiences, likely in the form of semantic memory: our preexisting knowledge of the meaning of words, images, and concepts. To address these questions, we are analyzing human electrocorticography (ECoG) data in collaboration with Dr. Kareem Zaghloul’s lab at NINDS.

An Auditory Lexicon in the Brain
Recognizing spoken words is vital to daily life. Identifying the relevant neural computations, and the units over which they are performed, is critical to furthering our understanding of how the brain accomplishes this process. Research from both psycholinguistics and auditory neuroscience predicts hierarchical speech representations, ranging from phonetic features to phonemes, syllables, and finally whole spoken words.
Studies of visual word recognition have found evidence for whole-word (lexical) as well as sublexical representations in the brain, but comparable evidence has not been found for spoken words. This project leverages functional magnetic resonance imaging (fMRI) and a rapid adaptation paradigm to investigate the existence and location of a spoken word lexicon. The results can provide further evidence to adjudicate between competing models of speech recognition in the brain.
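For readers unfamiliar with the paradigm, the sketch below illustrates its inferential logic with hypothetical numbers (not real data): a region carrying whole-word representations should show reduced responses to exact repetitions but full recovery for different words, even ones sharing phonemes.

```python
# Illustrative sketch of the logic behind fMRI rapid adaptation (the
# numbers are hypothetical, not real data). If a region houses whole-word
# (lexical) representations, its response should adapt when the identical
# word repeats but recover ("release from adaptation") for a different
# word, even one sharing phonemes with the first.
mean_bold = {                 # hypothetical mean BOLD responses per condition
    "same_word":       0.42,  # e.g. "cat" ... "cat"
    "shared_phonemes": 0.70,  # e.g. "cat" ... "cap"
    "different_word":  0.73,  # e.g. "cat" ... "dog"
}

# Lexical signature: adaptation for exact repeats, while words that merely
# share sublexical units release from adaptation like unrelated words do.
is_lexical = (mean_bold["same_word"] < mean_bold["shared_phonemes"]
              and abs(mean_bold["shared_phonemes"]
                      - mean_bold["different_word"]) < 0.05)
print("consistent with a whole-word (lexical) representation:", is_lexical)
```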

Learning Speech Through Touch
Our sensory organs are exquisitely designed to bring information about our world to our brain. However, it is our brain’s ability to process this information that allows us to actually perceive the world. The goal of sensory substitution is to use one sensory system to provide information to the brain that is usually delivered via another sense. For example, braille conveys information about visual words through touch. In this project, we investigate the neural bases of learning to associate vibrotactile stimuli with spoken words. We use fMRI and representational similarity analysis (RSA) to test under what conditions trained vibrotactile stimuli engage auditory word representations in the brain. The results can provide further evidence about interactions between sensory systems and the principles of multisensory learning.
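For readers curious what RSA involves, here is a minimal sketch with synthetic data standing in for real fMRI patterns (the library calls are standard numpy/scipy; everything else is hypothetical). Each stimulus is summarized by its dissimilarity to every other stimulus, and the resulting neural representational dissimilarity matrix (RDM) is compared against a model RDM, e.g. one predicting that trained vibrotactile stimuli pattern like their associated spoken words:

```python
# Minimal sketch of representational similarity analysis (RSA), with
# made-up data in place of real fMRI patterns. RSA abstracts away from
# individual voxels: each stimulus is characterized by its dissimilarity
# to every other stimulus, and the neural RDM can then be compared with
# an RDM predicted by a model.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
patterns = rng.normal(size=(8, 200))   # 8 stimuli x 200 voxels (synthetic)

neural_rdm = pdist(patterns, metric="correlation")  # condensed RDM (1 - r)
model_rdm = pdist(rng.normal(size=(8, 5)))          # stand-in model RDM

# Rank-correlate the two RDMs (the condensed form holds each unique pair once).
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"model-neural RDM correlation: rho={rho:.2f}, p={p:.3f}")
```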

On the Specificity of Descending Intracortical Connections
Descending connections are believed to be involved in sensory, motor, and intellectual processes, including selective attention, object recognition, disambiguation, mental imagery, preparation for movement, visuo-motor coordination, visual surround suppression, efference copy and corollary discharge, and a variety of Gestalt phenomena.
What are the specificities of descending connections, and how do they arise? Answers will be complicated by the multiplicity of targets of descending connections onto pyramidal cells and inhibitory interneurons in superficial and deep layers of the cortex and by the multiplicity of sources of descending connections from both superficial and deep pyramidal cells in higher cortical areas. This project involves a review of the specificities of descending intracortical connections and how those specificities are acquired.

Investigating the Neural Bases of Internal Models During Speech Production
Language is a uniquely sophisticated cognitive process in humans, and language processing can be broken down into two core components: speech recognition (i.e., perception) and speech production. Speech recognition is the process of extracting meaning from what one hears (“sound to meaning”), whereas speech production involves generating a word from an idea (“meaning to articulation”). The neural bases of these processes, in particular speech production, are still poorly understood. Recent major models share an overall “two stream” architecture, with one stream mediating speech recognition and the other mediating speech production. However, these models disagree substantially about which brain regions are involved and how the two systems (perception and production) interact; these disagreements are barriers to developing improved treatments for disorders involving speech recognition and speech production. The present study is grounded in computational theories that predict interactions between the recognition and articulatory motor control systems in the form of internal models.
We use cutting-edge multivariate imaging techniques, namely functional magnetic resonance imaging representational similarity analysis (fMRI-RSA) and electroencephalography representational similarity analysis (EEG-RSA), to test novel, model-based hypotheses regarding the spatial location and temporal dynamics of neural representations for speech production and speech perception. The figure below illustrates how we use EEG-RSA to probe the different neural representations within the speech system during perception, overt production, and covert production tasks, as a means of investigating the neural substrates of forward and inverse models in speech production.
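As a rough illustration of time-resolved EEG-RSA (synthetic data; a deliberately simplified stand-in for the actual analysis pipeline), an RDM can be computed from the EEG sensor patterns at each time point and correlated with a model RDM, yielding a time course of when a given representation, such as a forward-model prediction, is expressed in the signal:

```python
# Sketch of time-resolved EEG-RSA on synthetic data. At each time point,
# an RDM is computed across stimuli from the sensor patterns and
# rank-correlated with a model RDM; the resulting time course indicates
# when the modeled representation is expressed in the EEG signal.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_stim, n_sensors, n_times = 10, 64, 100
eeg = rng.normal(size=(n_stim, n_sensors, n_times))  # synthetic evoked data
model_rdm = pdist(rng.normal(size=(n_stim, 4)))      # stand-in model RDM

# Correlate the neural RDM with the model RDM at every time point.
time_course = np.array([
    spearmanr(pdist(eeg[:, :, t], metric="correlation"), model_rdm)[0]
    for t in range(n_times)
])
print("peak model correlation at time index:", int(time_course.argmax()))
```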
If you’re interested in learning more, shoot Plamen an email at pn243 at georgetown dot edu!