If you're near Besançon and interested in auditory science or psychophysics, check out this upcoming series of talks at the FEMTO-ST Institute https://neuro-team-femto.github.io/revcor25/
I'll be giving one of them, looking forward to great discussions!
#AuditoryScience #Psychophysics
Mini-workshop on reverse correlation

Recent advances in auditory reverse correlation: on the occasion of the PhD Defense of Aynaz Adl Zarrabi, the FEMTO Neuro group is hosting...

Neuro group at the Dept of Automation and Robotics, FEMTO-ST Institute

New preprint/paper alert: a first effort at studying global properties of natural auditory scenes. It was just accepted at Open Mind, a relatively new diamond open access #cognitivescience journal.

This is Maggie McMullin's master's thesis, plus some brilliant computational modeling by our colleagues at Johns Hopkins, Rohit and Mounya. Brian Gygi provided tons of code for acoustical analysis, and our former postdoc Nate Higgins helped with a lot of MATLAB coding. Maggie recorded all of our stimuli with a Zoom Q8 recorder, and they are available on OSF.

#psychology #neuroscience #auditory #auditoryscience #deeplearning #Computational_Neuroscience #computational

https://osf.io/preprints/psyarxiv/r7zx4


“A new experiment from Groh’s lab has now taken her observation a step further and suggests the faint sounds — dubbed “eye movement-related eardrum oscillations,” or EMREOs for short — serve to link two sensory systems.”

Such a shame: they had “oculomotor-related eardrum oscillations,” or OREOs, right there waiting. #neuroscience #auditoryScience #DadJokes

https://www.thetransmitter.org/sensory-perception/tiny-eardrum-sounds-may-help-sync-visual-auditory-perception/

Tiny eardrum sounds may help sync visual, auditory perception

Studies of the oscillations reveal that horizontal and vertical eye movements generate distinct sounds.

The Transmitter: Neuroscience News and Perspectives

Today I want to see if I can predict preference for hearing aid amplification, or predict speech recognition performance, from how long it takes someone to repeat back a sentence.

Listeners did a sentence recognition task, and I recorded audio of everything. I'm running the wav files through #Whisper to try to get automatic word-level time markers, and then I'll estimate participant response times from those.

https://github.com/linto-ai/whisper-timestamped
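The response-time step could be sketched roughly like this. This is just an illustration, not my actual analysis code: the nested `segments`/`words` structure follows the word-level JSON that whisper-timestamped produces, but the `response_time` function, the `prompt_offset_s` parameter (when the target sentence ended, relative to the start of the wav file), and the mock data are all made up for the example.

```python
# Estimate a participant's response time from whisper-timestamped output.
# Result structure assumed (per the linto-ai/whisper-timestamped README):
#   {"segments": [{"words": [{"text": ..., "start": ..., "end": ...}, ...]}]}
# `prompt_offset_s` is hypothetical: seconds into the recording at which
# the prompt sentence ended, so the return value is time-to-speech-onset.

def response_time(result, prompt_offset_s=0.0):
    """Seconds from the prompt offset to the onset of the first word,
    or None if no speech was recognized."""
    for segment in result.get("segments", []):
        for word in segment.get("words", []):
            return round(word["start"] - prompt_offset_s, 3)
    return None

# Mock transcription result standing in for a real call like:
#   import whisper_timestamped as whisper
#   model = whisper.load_model("small")
#   result = whisper.transcribe(model, "trial_001.wav")
mock = {"segments": [{"words": [
    {"text": "The", "start": 1.42, "end": 1.55},
    {"text": "cat", "start": 1.58, "end": 1.90},
]}]}

print(response_time(mock, prompt_offset_s=1.0))  # 0.42
```

In practice you'd probably also want to screen the per-word `confidence` values the tool reports before trusting an onset time.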

#Auditory #Science #AuditoryScience

GitHub - linto-ai/whisper-timestamped: Multilingual Automatic Speech Recognition with word-level timestamps and confidence


Hello to the #auditory #auditoryneuroscience #auditoryscience people on Mastodon!

We have a guppe (🐟) group now. How to use:

"I want to get group messages in my feed":

Follow the account @audsci

"I want to broadcast messages to the group":

Just mention the group in your toot, like this: @audsci