Human #echolocation works step by step https://www.sciencenews.org/article/human-echolocation-blind-brain "A study reveals how individual tongue clicks and their echoes contribute to object sensing"

Neural and behavioral correlates of evidence accumulation in human click-based echolocation https://www.eneuro.org/content/early/2026/03/26/ENEURO.0342-25.2026

The brain stacks sound information to navigate the dark https://neurosciencenews.com/human-echolocation-brain-mapping-30462/ Each #echolocation "click acts like a brushstroke, building a high-resolution mental representation of the surroundings in real-time."
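
The linked eNeuro paper frames this stacking as evidence accumulation across successive clicks. As a toy numpy sketch of the idea (made-up numbers, not the paper's model): each click yields one noisy echo-based distance estimate, and averaging estimates across clicks shrinks the error roughly as 1/sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_DISTANCE_M = 2.0  # hypothetical object distance
NOISE_SD = 0.5         # per-click noise in the echo-based estimate (made up)

def estimate(n_clicks: int) -> float:
    """Each click contributes one noisy distance estimate; stacking
    (averaging) the clicks sharpens the overall percept."""
    clicks = TRUE_DISTANCE_M + rng.normal(0.0, NOISE_SD, size=n_clicks)
    return clicks.mean()

for n in (1, 4, 16, 64):
    err = np.mean([abs(estimate(n) - TRUE_DISTANCE_M) for _ in range(2000)])
    print(f"{n:3d} clicks -> mean abs error {err:.3f} m")
```
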
(YouTube) The new AI depth view option of The vOICe for Android might bring it closer to a kind of "visual echolocation" that is far easier to master than interpreting the regular camera soundscapes? https://www.youtube.com/watch?v=gs8IjFhVCKM
@seeingwithsound Wow! It's an AI-processed image from an ordinary webcam, isn't it?
@johan Yes, in this particular case it is; I was lugging my laptop around backwards to let its built-in webcam face the room. :-) It also works on Android smartphones running The vOICe for Android app from Google Play https://play.google.com/store/apps/details?id=vOICe.vOICe (toggle it in the menu under Options | AI depth view). I was at first hesitant to release it in view of liability concerns, so whenever the AI depth view is turned on, the user is warned that AI can get things wrong; it can never be 100% right.
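
As a minimal sketch of what a depth-based soundscape could amount to, assuming The vOICe's left-to-right scan with pitch for height, but with loudness driven by nearness instead of image brightness; the synthetic depth map is a hypothetical stand-in for the per-frame output of a monocular depth model:

```python
import wave
import numpy as np

# Hypothetical stand-in for a depth model's output (meters, small = near).
H, W = 16, 64
depth = np.full((H, W), 5.0)
depth[4:12, 20:30] = 1.0  # a nearby object patch

SR = 22050
t = np.arange(int(SR / W)) / SR         # one-second left-to-right sweep
freqs = np.geomspace(3000.0, 300.0, H)  # top rows -> high pitch

cols = []
for x in range(W):
    nearness = np.clip(1.0 - depth[:, x] / 5.0, 0.0, 1.0)  # near -> loud
    col = sum(a * np.sin(2.0 * np.pi * f * t) for a, f in zip(nearness, freqs))
    cols.append(col / H)
signal = np.concatenate(cols)

pcm = (signal / np.max(np.abs(signal)) * 32767.0).astype(np.int16)
with wave.open("depth_sweep.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(SR)
    w.writeframes(pcm.tobytes())
```

Playing depth_sweep.wav gives a one-second sweep in which the nearby patch stands out as a loud band against silence.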

@seeingwithsound

But it looks really cool. In theory, you could even have the AI filter the image, removing unnecessary background detail while preserving important objects, for example by taking the parallax effect between individual frames into account. Sounds like a real breakthrough, doesn't it? πŸ˜‰

@johan Thanks. Yes, the current AI model works frame by frame, and future AI models may do better still by additionally exploiting motion effects when moving to and fro (where nearby objects change in apparent size faster than distant ones) and parallax when moving sideways (where nearby objects shift differently from the background, and parts of the background move in and out of occlusion). Keeping "important" objects is a form of censorship, though: the blind user may have specific interests.
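
As a hedged sketch of one way a model might pick up that sideways-motion parallax cue, using OpenCV's dense Farneback optical flow between two frames; the median-flow background estimate and the fixed threshold are illustrative simplifications, not anything The vOICe is confirmed to do:

```python
import cv2
import numpy as np

def parallax_mask(prev_bgr, curr_bgr, thresh=2.0):
    """Flag pixels whose apparent motion between two frames deviates from
    the global (background) motion; under sideways camera motion, nearby
    objects shift more than the distant background."""
    prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    background = np.median(flow.reshape(-1, 2), axis=0)  # crude global motion
    residual = np.linalg.norm(flow - background, axis=2)
    return residual > thresh  # True where motion deviates, i.e. likely near
```

A real system would also have to separate camera rotation from translation before reading the residual flow as depth.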

@seeingwithsound

Well, this is where the AI has to "think" and decide that the wallpaper texture is just a distraction, while a sign with an inscription, on the contrary, should be emphasized.

@johan My "hobby" or "designer" interest as a blind user might just lie in those wallpaper textures. I aim to support maximum human agency. I remember one blind user of The vOICe loving to "see" flower beds that in fact "distracted" her from efficiently going from A to B. If the user can indicate to the AI that wallpapers are a distraction, then it's fine to filter them out. It's delicate and requires more than thinking.
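
As a toy illustration of that user-driven filtering, with a hypothetical filter_detections helper and made-up detector labels; the point is only that the distractor list comes from the user, never from the AI's own notion of importance:

```python
# Per-user preference list: set by the user, not inferred by the AI.
USER_DISTRACTORS = {"wallpaper"}

def filter_detections(detections, distractors=USER_DISTRACTORS):
    """Drop only what this user has declared a distraction; everything
    else, flower beds included, stays in the soundscape."""
    return [d for d in detections if d["label"] not in distractors]

detections = [
    {"label": "wallpaper", "box": (0, 0, 640, 480)},
    {"label": "flower bed", "box": (120, 200, 260, 320)},
    {"label": "sign", "box": (300, 80, 380, 140)},
]
print(filter_detections(detections))  # wallpaper dropped, flower bed kept
```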

@seeingwithsound

Of course, flexible processing mode settings are needed. I just liked the idea that AI could provide real benefits, and not just be used to draw kitties πŸ˜‰