
The Beauchamp Lab studies the neural mechanisms for multisensory integration and visual perception in human subjects. Of special interest is human communication.

When conversing with someone, we use both visual information from the talker's face and auditory information from the talker's voice. While multisensory speech perception engages a broad network of brain areas, the most important is the superior temporal sulcus. Multisensory integration is particularly beneficial for understanding speech when the auditory signal is degraded, such as in a noisy room.

To understand the neural mechanisms of multisensory integration and visual perception, the Beauchamp Lab uses a variety of methods, including intracranial electroencephalography (iEEG) and blood-oxygen-level-dependent functional magnetic resonance imaging (BOLD fMRI). Through these studies, the lab hopes to unlock one of nature's great mysteries: how the brain performs the remarkable computational feats, such as understanding speech, that allow us to make sense of the auditory and visual world around us. Every advance in our knowledge of these processes is not only exciting in its own right but will also help children and patients with language and perceptual difficulties.