๐—ฅ๐—ฒ๐˜€๐—ฒ๐—ฎ๐—ฟ๐—ฐ๐—ต ๐—ฎ๐˜ ๐—จ๐—œ๐—ผ๐˜„๐—ฎ: ๐—ฅ๐—ฒ๐—ณ๐—ถ๐—ป๐—ถ๐—ป๐—ด ๐—›๐—ผ๐˜„ ๐—Ÿ๐—ฎ๐—ฟ๐—ด๐—ฒ ๐—Ÿ๐—ฎ๐—ป๐—ด๐˜‚๐—ฎ๐—ด๐—ฒ ๐— ๐—ผ๐—ฑ๐—ฒ๐—น๐˜€ ๐—ฃ๐—ฟ๐—ผ๐—ฐ๐—ฒ๐˜€๐˜€ ๐—”๐˜‚๐—ฑ๐—ถ๐—ผ

Weiran Wang has defined his career by exploring machine learning and speech processing. 💬

Google DeepMind is helping fund his personal research on advancing audio comprehension within Large Language Models. 💻

โ€œBy preventing phantom narratives and limiting AIโ€™s responses to facts present in the audio, the reliability of models increases.โ€

Read more at https://cs.uiowa.edu/news/2026/04/ai-research-uiowa-refining-how-large-language-models-process-audio!

#LLM #ML #SpeechProcessing
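The quoted idea, limiting a model's answers to facts actually present in the audio, can be illustrated with a toy grounding check. Everything below is a hypothetical sketch, not the UIowa method: a real system would use a learned entailment or attribution model rather than word matching against a transcript.

```python
def grounded(answer: str, transcript: str) -> bool:
    """Toy check: every content word of the answer must appear in the
    transcript. Purely illustrative; real grounding uses entailment models."""
    stopwords = {"the", "a", "an", "is", "are", "of", "in", "to", "and"}
    transcript_words = set(transcript.lower().split())
    content_words = (w.strip(".,!?").lower() for w in answer.split())
    return all(w in transcript_words or w in stopwords
               for w in content_words if w)

transcript = "the meeting starts at noon on friday"
print(grounded("The meeting starts at noon", transcript))  # True
print(grounded("The meeting starts at nine", transcript))  # False
```

The second answer is rejected because "nine" never occurs in the transcript, a miniature version of flagging a phantom narrative.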

Basically, non-blackbox interpretive AI seems a lot more useful than generative AI from a โ€œletโ€™s not destroy the worldโ€ standpoint

#AI #generativeAI #interpretiveAI #tokenization #blackbox #nonblackbox #savesocial #SaveTheUS #kanji #grammar #speech #speechprocessing #languages #language #LLM #dialectrecognition #SaveTheWorld #mediapreservation

Speech and Language Processing

Voxtral | Mistral AI

Introducing frontier open source speech understanding models.

Voxtral

We present Voxtral Mini and Voxtral Small, two multimodal audio chat models. Voxtral is trained to comprehend both spoken audio and text documents, achieving state-of-the-art performance across a diverse range of audio benchmarks, while preserving strong text capabilities. Voxtral Small outperforms a number of closed-source models, while being small enough to run locally. A 32K context window enables the model to handle audio files up to 40 minutes in duration and long multi-turn conversations. We also contribute three benchmarks for evaluating speech understanding models on knowledge and trivia. Both Voxtral models are released under the Apache 2.0 license.

arXiv.org
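The abstract's figures allow a quick back-of-envelope check: if roughly 40 minutes of audio must fit in a 32K-token context, the implied audio token rate is about 13 tokens per second. This is a ballpark sketch; the exact context size and tokenizer frame rate are assumptions, not values stated above.

```python
# Rough arithmetic implied by the abstract: ~40 min of audio in a 32K context.
context_tokens = 32_000   # "32K" taken as 32,000 (could also mean 32,768)
audio_seconds = 40 * 60   # 40 minutes = 2400 s

tokens_per_second = context_tokens / audio_seconds
print(round(tokens_per_second, 1))  # ~13.3 audio tokens per second
```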
New neuroscience research upends traditional cognitive models of reading

A new study finds that the left posterior inferior frontal cortex activates within 100 milliseconds of reading onset, playing a critical early role in turning text into speech and challenging traditional models that assumed a slower, step-by-step process.

PsyPost
How to talk to your #dog... @giraudlab &co show that dogs & humans share similar but not identical #SpeechProcessing mechanisms and that dog-human vocal interactions match #dogs' sensory-motor tuning #PLOSBiology https://plos.io/3ZN8dgx
Dogโ€“human vocal interactions match dogsโ€™ sensory-motor tuning

Human-to-pet communication requires speech processing by the animal and adjustments of the human speaking rate to match their petโ€™s receptive abilities. This study shows that dogs and humans share similar but not identical speech processing mechanisms and that dog-human vocal interactions match dogsโ€™ sensory-motor tuning.

Apply for a fully funded PhD position now! Topics in my team range from privacy in speech processing, speech enhancement and low-resource speech processing to speech interaction modelling, while FCAI in general covers most areas of machine learning and AI. #phdposition #aaltouniversity #fcai #speechprocessing #privacy #machinelearning
https://www.linkedin.com/posts/tombackstrom_applications-are-open-for-the-doctoral-program-activity-7173576830879313921-FUT_?utm_source=combined_share_message&utm_medium=member_desktop
Tom Bรคckstrรถm on LinkedIn: #phdposition #aaltouniversity #fcai #speechprocessing #privacyโ€ฆ

Rhythmic modulation of prediction errors: A top-down gating role for the beta-range in #speechprocessing โ€“ new work by Hovsepyan et al. (2023).

🌐 journals.plos.org/ploscompbiol/a…

#betaoscillation #sensoryperception #language #modeling

Hi all,
This account belongs to the Neural Dynamics Lab, based at the University of Geneva.
We aim to identify and model the spatiotemporal dynamics of neural activity, at the local field potential and single-unit levels, that support speech and language representations on the cognitive side, and epileptic seizures on the pathological side.
We use a variety of methods, from short- and long-term intracranial recordings in patients with epilepsy to statistical and mechanistic modeling of neural activity and natural language processing tools such as GPT. Check out our website for more information: https://ndlab.ch

Give this message a boost with a repost, and follow us if you're interested in our work.

#introduction #neuroscience #gpt #speechprocessing #epilepsy #ai #computationalneuroscience #computationalneuro #firstpost

Neural Dynamics Lab