Human echolocation works step ...

Echolocation enables blind individuals to perceive and navigate their environment by emitting clicks and interpreting their returning echoes. While expert blind echolocators demonstrate remarkable spatial accuracy, the behavioral and neural mechanisms by which spatial echoacoustic cues are combined across repeated samples remain underexplored. Here, we investigated the temporal dynamics of spatial information processing in human click-based echolocation using EEG. Blind expert echolocators (n=4, all males) and novice sighted participants (n=21, 12 males) localized virtual spatialized echoes derived from realistic synthesized mouth clicks, presented in trains of 2–11 clicks. Behavioral results showed that blind expert echolocators significantly outperformed sighted controls in spatial localization. For these experts, localization thresholds decreased as the number of clicks increased, a pattern consistent with cumulative integration of spatial information across repeated samples. EEG decoding analyses revealed reliable neural discrimination of echo laterality from the first click onward, which correlated with overall spatial localization performance. Across successive clicks, neural responses evolved systematically, reflecting sequence-position–dependent changes in neural dynamics. EEG trial-level modeling further allowed us to distinguish accumulation-consistent decision readout policies from alternative repetition-based accounts, revealing individual differences in decision policies among expert echolocators. These findings provide, to our knowledge, the first fine-grained account of the temporal neural dynamics supporting human click-based echolocation, directly linked to behavioral performance across multiple samples. They reveal how, in expert echolocators who successfully performed the task, successive echoes are progressively integrated into coherent spatial representations, demonstrating adaptive sensory processing in the absence of vision.
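The cumulative-integration account above has a simple quantitative signature: if each click yields an independent noisy estimate of echo azimuth and the observer averages estimates across clicks, localization noise should fall roughly as 1/√n with the number of clicks. The sketch below is a minimal Monte-Carlo illustration of that prediction only; it is not the trial-level model used in the study, and the function name, the noise level `sigma`, and the Gaussian single-click noise are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def localization_noise(n_clicks, sigma=10.0, n_trials=20000):
    """Monte-Carlo estimate of residual localization noise (in degrees)
    when the azimuth estimates from n_clicks clicks are averaged.

    Assumes each click gives one independent Gaussian estimate of the
    true azimuth (set to 0 deg) with standard deviation `sigma`.
    """
    # one noisy per-click estimate per click, per simulated trial
    estimates = rng.normal(0.0, sigma, size=(n_trials, n_clicks))
    # cumulative-integration readout: average all estimates so far
    return estimates.mean(axis=1).std()

# noise shrinks toward sigma / sqrt(n) as clicks accumulate
thresholds = {n: localization_noise(n) for n in (1, 2, 4, 8)}
```

A repetition-based (no-integration) readout, by contrast, would use only a single click's estimate regardless of train length, predicting flat noise across `n`; comparing these two curves against behavioral thresholds is the intuition behind distinguishing the decision policies described above.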
Significance Statement
Remarkably, some blind individuals navigate the world using echolocation, producing mouth clicks and interpreting returning echoes to perceive their surroundings. Yet how the brain combines successive echoes to build spatial representations remains poorly understood. Here, we show that expert blind echolocators localized echoes more accurately than sighted novices and that their performance improved as additional clicks provided more spatial information over time. Brain recordings revealed that neural activity distinguished sound location from the earliest echoes and evolved systematically across click sequences in parallel with behavioral improvements. These findings provide a detailed account of how the human brain transforms repeated acoustic information into stable spatial representations, supporting navigation in the absence of vision.
