
New Research Building, WP-01
3970 Reservoir Rd NW
Washington, DC 20007

Maxlab

Who I Am

I am a Professor in the Department of Neuroscience at Georgetown University Medical Center and Co-Director of the CNE. My research uses computational modeling, brain imaging, and EEG to understand how the brain makes sense of the world, and how these insights can be translated to neuromorphic AI and augmented cognition applications. I obtained my Master’s degree in physics from the University of Frankfurt, Germany, in 1995, and my PhD in computational neuroscience from MIT in 2000 (with Tomaso Poggio). I have received several awards, including an NSF CAREER award and selection for Technology Review’s “TR100” list of the “100 innovators 35 or younger whose technologies are poised to make a dramatic impact on our world.” My research has been funded by NIH, NSF, DARPA, DoD, and industry.

A Selection of My Work

Evidence for a Spoken Word Lexicon in the Auditory Ventral Stream

Metamodal Coupling of Vibrotactile and Auditory Speech Processing Systems through Matched Stimulus Representations

Leveraging Prior Concept Learning Improves Generalization From Few Examples in Computational Models of Human Object Recognition

Microsaccades During High Speed Continuous Visual Search

Visual Categorization: How the Monkey Brain Does It

My Research

My research combines computational models with human behavioral, functional magnetic resonance imaging (fMRI), and electroencephalography (EEG) data. This comprehensive approach addresses one of the major challenges in neuroscience today: combining experimental data from a range of approaches to develop a rigorous and predictive model of human brain function that quantitatively and mechanistically links neurons to behavior. This is of interest not only for basic research, but also for investigating the neural bases of behavioral deficits in disorders. Understanding the neural mechanisms underlying object recognition, and how these skills are acquired, is also of significant relevance for Artificial Intelligence, as pattern recognition systems in engineering (e.g., in machine vision or speech recognition) still lag far behind their human counterparts in robustness, flexibility, and the ability to learn from few exemplars. Finally, a mechanistic understanding of the neural processing networks that enable the brain to make sense of stimuli across different senses opens the door to supporting and extending human cognitive abilities through, for instance, hybrid brain-machine systems (“augmented cognition”) and novel technologies, e.g., for sensory substitution.

I am interested in understanding how the brain processes information in different sensory modalities, such as vision, audition, and touch, and how the brain extracts meaning from sensory signals. A new research focus in my lab investigates how different senses interact (for instance, in speech processing) and how sensory processing interacts with the motor system, for instance in speech production.

Meet the Alliance

We are a group of technologists, clinicians, scientists, engineers, and ethicists working together as a single synergistic alliance.

Meet Us