All Ears for You

Science Fields

Imagine sitting in a café on a busy Sunday afternoon with your friend. The place is buzzing with energy as people laugh, talk and sip on their favorite hot drinks. Meanwhile, your friend starts telling you about the plans she has for the new year. All of a sudden, everyone else’s conversation becomes blurred and migrates to the background. Enveloping the cup of coffee with your hands, you create an invisible bubble around you and your friend where the only conversation that matters is the one that is happening right here, right now.

Our brains are truly amazing, especially when it comes to focusing on what is salient in the moment. We can follow a single conversation among many others without much effort. Has anyone ever thought to themselves, “Hey, brain, let’s switch conversations!”? Probably not, mainly because everything happens so fast in the brain that we can hop in and out of conversations automatically.

A study recently published in The Journal of Neuroscience reveals how the brain is able to zero in on a single voice in a crowded room. To understand this phenomenon, Edmund Lalor, an associate professor of Neuroscience and Biomedical Engineering at the University of Rochester Medical Center, and colleagues examined the neural mechanisms that underlie attention and speech processing.

For the study, 14 participants took part in a “cocktail party” attention experiment: they listened to two stories at the same time but were instructed to pay attention to only one. Meanwhile, electroencephalogram (EEG) recordings tracked which sounds the brain registered and which of those sounds were converted into words.

The researchers found that the brain processed the acoustics of both stories, the attended one and the ignored one, similarly. However, the EEG recordings revealed that only the story the participants were told to listen to was converted into phonemes, the units of sound that distinguish one word from another. In other words, the brain takes an extra step to turn the voice it attends to into distinct sounds and words, a step it skips for the voices it ignores.
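Studies of this kind typically quantify “encoding” by asking how well different stimulus representations predict the recorded EEG signal. Purely as an illustration (the data, weights, and feature sets below are invented, not taken from the paper), here is a minimal sketch in which a simulated EEG channel is predicted better when phonetic features are added on top of the acoustic envelope, mirroring the idea that the attended story carries extra phonetic-level structure:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # number of time samples (toy value)

# Invented stimulus features, not the study's actual data:
envelope = np.abs(rng.normal(size=n))                      # acoustic envelope
phonetic = rng.integers(0, 2, size=(n, 5)).astype(float)   # 5 binary phonetic features

# Simulate an "EEG" channel that encodes both feature sets plus noise.
w_env = 0.8
w_pho = rng.normal(size=5)
eeg = w_env * envelope + phonetic @ w_pho + rng.normal(scale=1.0, size=n)

def ridge_corr(X, y, lam=1.0):
    """Fit a ridge regression from features X to signal y and
    return the correlation between the prediction and y."""
    X = np.column_stack([np.ones(len(X)), X])  # add intercept column
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return np.corrcoef(X @ w, y)[0, 1]

r_env = ridge_corr(envelope[:, None], eeg)
r_both = ridge_corr(np.column_stack([envelope, phonetic]), eeg)

print(f"envelope only:       r = {r_env:.2f}")
print(f"envelope + phonetic: r = {r_both:.2f}")
```

In this toy setup the combined model predicts the simulated signal better than the envelope alone, which is the kind of contrast researchers use to argue that a neural signal encodes phonetic features beyond raw acoustics.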

All in all, the findings shed new light on how the brain is able to selectively focus on a single talker in a crowded room and show the difference between processing the acoustics of speech and processing the phonetic features of language.



  • 1. Teoh, E. S., Ahmed, F., & Lalor, E. C. (2021). Attention differentially affects acoustic and phonetic feature encoding in a multispeaker environment. The Journal of Neuroscience. DOI: 10.1523/JNEUROSCI.1455-20.2021
  • 2. https://www.sciencedaily.com/releases/2021/12/211217113242.htm