@Chancerubbage

But it's an empirical question, and the answer could very well be yes despite the above!

(3/3)

#ToneLanguage #SpeechPerception #GoodQuestion

@Chancerubbage

1. Music is as much a part of culture in areas with tone languages as elsewhere.
2. Pitch is used in non-tone languages for lots of things (emphasis, enumerating list items, distinguishing statements from questions, etc.), so maybe speakers of any language would be comparably affected by such distractors.
3. Whisper studies (and others) suggest that listeners have plenty of cues besides pitch to distinguish different tones.

(2/3)

#ToneLanguage #SpeechPerception #GoodQuestion

@Chancerubbage

First, I have no idea - that's a great question. However, here are some thoughts that lead me to guess that tone-language speakers would have no problem (compared to speakers of non-tone languages):

(1/3)

#ToneLanguage #SpeechPerception #GoodQuestion

How do we understand speech-eech-eech in the presence of an #echo-o-o? This study of the robustness of human #SpeechPerception in echoic environments shows that speech and its echo are processed separately in the brain, which helps intelligibility #PLOSBiology https://plos.io/3uAUnRf
Original speech and its echo are segregated and separately processed in the human brain

Slow temporal modulations in speech are important for its perception, but are absent in the presence of echo, such as during online meetings. The authors study the robustness of human speech perception in echoic environments, showing that speech and its echo are processed separately in the brain to facilitate intelligibility.
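As a toy illustration of why echo flattens slow envelope modulations (my own back-of-envelope demo, not the paper's analysis): adding a copy of a signal delayed by τ acts like a comb filter on the envelope, cancelling amplitude modulation at f = 1/(2τ). With a hypothetical 125 ms echo, the notch falls exactly on a 4 Hz, syllable-rate modulation:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 16000
t = np.arange(0, 4.0, 1 / fs)

# Speech-like stimulus: noise carrier with 4 Hz amplitude modulation
# (roughly the syllable rate of running speech)
x = (1 + np.cos(2 * np.pi * 4 * t)) * rng.normal(size=t.size)

# Echoic version: add a copy delayed by 125 ms = 1/(2 * 4 Hz),
# which puts a comb-filter notch right at the 4 Hz modulation
delay = int(0.125 * fs)
y = x.copy()
y[delay:] += x[:-delay]

def mod_depth_4hz(sig):
    """4 Hz modulation depth of the power envelope (Fourier projection)."""
    p = sig ** 2                       # crude power envelope
    c = (2 / p.size) * np.dot(p, np.cos(2 * np.pi * 4 * t))
    s = (2 / p.size) * np.dot(p, np.sin(2 * np.pi * 4 * t))
    return np.hypot(c, s) / p.mean()   # normalize by mean power

depth_clean = mod_depth_4hz(x)
depth_echo = mod_depth_4hz(y)
# The echo nearly wipes out the 4 Hz envelope modulation
```

The delay and modulation rate here are chosen to make the cancellation exact; real rooms smear modulations over a continuum of delays rather than notching a single rate.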

...and in the second, I discuss how this method can be applied to the study of speech perception ("À la recherche des indices acoustiques de la parole" http://dbao.leo-varnet.fr/2019/05/25/limage-de-classification-auditive-partie-2-a-la-recherche-des-indices-acoustiques-de-la-parole/). #psycholinguistics #psycholinguistique #epistemologie #SpeechPerception #Perception
L’Image de Classification Auditive, partie 2 : À la recherche des indices acoustiques de la parole | DBAO | Léo Varnet

Yesterday Géraldine Carranante presented our ongoing project to the #fa2023 conference. In short, we apply #ReverseCorrelation to the #auditory modality to uncover the #acoustic cues used during #SpeechPerception. More info here: https://qoto.org/web/statuses/110575198775228972 #acoustics #psychoacoustics #psycholinguistics
Léo Varnet (@[email protected])

Exciting news! Our conference paper titled "Auditory reverse correlation applied to the study of place and voicing: four new phoneme-discrimination tasks" has been accepted for presentation at #ForumAcusticum2023 #FA2023! This is the foundation stone for a bigger study to be published next year, and also a summary of our overall scientific aim in the team. https://hal.science/hal-04130939 #psycholinguistics @[email protected]

New release! 💻 The TMST toolbox v2.0 includes a modulation scalogram function and a step-by-step demonstration of the main features. https://github.com/LeoVarnet/TMST/blob/main/README.md#example-walkthrough #matlab #SpeechPerception #SpeechProduction #SpeechProcessing @psycholinguistics
TMST/README.md at main · LeoVarnet/TMST

Temporal Modulation Spectrum Toolbox. A Matlab toolbox for the computation of amplitude- and f0-modulation spectra and spectrograms. - LeoVarnet/TMST

Happy to introduce the first release of TMST, a #Matlab toolbox for the computation of amplitude- and f0-modulation spectra and spectrograms. This toolbox provides different tools to explore the modulation content and dynamics of #audio signals, in particular #speech #sounds. https://github.com/LeoVarnet/TMST #SpeechPerception #SpeechProduction @psycholinguistics @linguistics
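For readers curious what an amplitude modulation spectrum is, here is a minimal sketch of the general idea in Python (extract the amplitude envelope, then take the spectrum of that envelope). This is not the TMST implementation, which should be consulted for the actual analysis choices; everything below is an illustrative toy:

```python
import numpy as np
from scipy.signal import hilbert

def amplitude_modulation_spectrum(x, fs):
    """Crude amplitude modulation spectrum: FFT of the Hilbert envelope."""
    env = np.abs(hilbert(x))            # amplitude envelope
    env = env - env.mean()              # remove DC so the AM peak stands out
    spec = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), d=1 / fs)
    return freqs, spec

# Synthetic test signal: 1 kHz carrier, fully modulated at 4 Hz
# (a speech-like syllable rate)
fs = 16000
t = np.arange(0, 2.0, 1 / fs)
x = (1 + np.cos(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)

freqs, spec = amplitude_modulation_spectrum(x, fs)
# Restrict to the modulation range relevant for speech
band = (freqs >= 0.5) & (freqs <= 32)
peak_hz = freqs[band][np.argmax(spec[band])]
print(peak_hz)  # → 4.0, the modulation rate of the stimulus
```

A full modulation-spectrum analysis would first split the signal into frequency channels and compute envelopes per channel; this single-band version only conveys the core envelope-then-spectrum idea.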

Out now in Developmental Psychology -- Auditory and visual category learning in children and adults by me, Erica Lescht, Mandy Hampton Wray, and Bharath Chandrasekaran!

https://psycnet.apa.org/doi/10.1037/dev0001525

We found that adults learned better than children, BUT this enhanced performance was asymmetrical across categories in different modalities.

We link these asymmetrical differences to the #development of skills such as #SpeechPerception and #reading.

To start, two articles that present, in accessible terms, the method at the heart of my research: reverse correlation (#ReverseCorrelation, or #Revcorr for short).
In the first post, I describe the philosophy of this approach ("Le cerveau comme boîte noire" http://dbao.leo-varnet.fr/2018/11/29/limage-de-classification-auditive-partie-1-le-cerveau-comme-boite-noire/).
In the second, I discuss how it can be applied to the study of speech perception ("À la recherche des indices acoustiques de la parole" http://dbao.leo-varnet.fr/2019/05/25/limage-de-classification-auditive-partie-2-a-la-recherche-des-indices-acoustiques-de-la-parole/). #psycholinguistics #psycholinguistique #epistemologie #psychophysics #psychophysique #SpeechPerception #Perception
L’image de classification auditive, partie 1 : Le cerveau comme boîte noire | DBAO | Léo Varnet
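To make the reverse-correlation logic concrete, here is a minimal simulation (all numbers and names hypothetical, not the protocol of these studies; Python for brevity rather than the lab's Matlab tooling). A simulated listener answers yes/no based on a hidden template plus internal noise, and averaging the noise fields by response recovers that template as a "classification image":

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "template": the acoustic cue the simulated listener
# relies on, here 8 adjacent bins out of 64 time-frequency bins
template = np.zeros(64)
template[20:28] = 1.0

# Each trial presents a random noise field; the listener responds "yes"
# when the noise matches the template, corrupted by internal noise
n_trials = 20_000
noise = rng.normal(size=(n_trials, 64))
decision = noise @ template + rng.normal(scale=2.0, size=n_trials)
yes = decision > 0

# Classification image: mean noise on "yes" trials minus "no" trials
ci = noise[yes].mean(axis=0) - noise[~yes].mean(axis=0)

# The recovered image correlates strongly with the hidden template,
# revealing which stimulus regions drove the responses
r = np.corrcoef(ci, template)[0, 1]
```

Real auditory classification-image experiments embed the noise in speech stimuli and often use regularized model-based estimation rather than this plain difference of means, but the brain-as-black-box logic is the same.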