16 Followers
13 Following
35 Posts
Postdoc at University of Washington. Opinions are my own.
Website: https://changyeli.github.io/
ACL virtual poster session is such a waste 😅
Yesterday was the spring equinox, which marks the midpoint of spring in my hometown. Meanwhile, me, who lives in a place far, far north of my hometown:

So in this poster, we want to answer a question:

Do imperfect ASR transcripts provide sufficient information for acceptable performance on downstream tasks? The short answer: yes, and, paradoxically, they can yield even better performance than manual transcripts!

Check our preprint here: https://arxiv.org/abs/2211.07430
And code here: https://github.com/LinguisticAnomalies/paradox-asr

The Far Side of Failure: Investigating the Impact of Speech Recognition Errors on Subsequent Dementia Classification

Linguistic anomalies detectable in spontaneous speech have shown promise for various clinical applications including screening for dementia and other forms of cognitive impairment. The feasibility of deploying automated tools that can classify language samples obtained from speech in large-scale clinical settings depends on the ability to capture and automatically transcribe the speech for subsequent analysis. However, the impressive performance of self-supervised learning (SSL) automatic speech recognition (ASR) models with curated speech data is not apparent with challenging speech samples from clinical settings. One of the key questions for successfully applying ASR models for clinical applications is whether the imperfect transcripts they generate provide sufficient information for downstream tasks to operate at an acceptable level of accuracy. In this study, we examine the relationship between the errors produced by several deep learning ASR systems and their impact on the downstream task of dementia classification. One of our key findings is that, paradoxically, ASR systems with relatively high error rates can produce transcripts that result in better downstream classification accuracy than classification based on verbatim transcripts.
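
For anyone curious what this pipeline looks like in practice, here is a minimal sketch (not the paper's code; see the linked repo for that). The checkpoints, the audio file, and the reference transcript below are illustrative stand-ins:

```python
# Minimal sketch of the pipeline described above: SSL ASR -> WER -> text
# classification. Not the paper's code; checkpoints, the audio file, and
# the reference transcript are illustrative stand-ins.
from transformers import pipeline
from jiwer import wer

# 1) Transcribe a speech sample with a self-supervised ASR model.
asr = pipeline("automatic-speech-recognition",
               model="facebook/wav2vec2-base-960h")
hypothesis = asr("speech_sample.wav")["text"].lower()

# 2) Quantify ASR errors against the manual (verbatim) transcript.
reference = "the boy is reaching for the cookie jar"  # made-up reference
print(f"word error rate: {wer(reference, hypothesis):.2f}")

# 3) Classify the imperfect ASR transcript with a fine-tuned text model,
#    then compare against classifying the verbatim transcript.
clf = pipeline("text-classification",
               model="my-dementia-classifier")  # hypothetical checkpoint
print(clf(hypothesis), clf(reference))
```

The paradox in the title lives in step 3: transcripts with nontrivial WER can still classify as well as, or better than, the verbatim ones.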

I will be (physically) at Machine Learning for Health (#ML4H) at #NeurIPS next Monday to present on ASR errors and their impact on subsequent classification performance. I will also be around at the main conference. Ping me if you want to grab a coffee and chat!

If I trained a T5/BART-like model (an encoder-decoder transformer) on papers from *ACL and some journals in my field, would this model write my thesis for me? Or write my thesis with a little prompt?
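
(For concreteness, a toy sketch of what that fine-tuning could look like with Hugging Face Transformers. The t5-small checkpoint and the single prompt/target pair are made up for illustration, and one gradient step will definitely not write a thesis:)

```python
# Toy seq2seq fine-tuning step for a T5/BART-style encoder-decoder model.
# Everything here (checkpoint, prompt, target) is illustrative.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# One hypothetical (prompt, target) pair mined from a paper.
prompt = "write a related-work paragraph about: ASR errors in dementia screening"
target = "Prior work has examined how speech recognition errors propagate ..."

batch = tok(prompt, return_tensors="pt")
labels = tok(target, return_tensors="pt").input_ids
loss = model(**batch, labels=labels).loss  # teacher-forced cross-entropy
loss.backward()  # one training step (optimizer omitted)

# Later, "write my thesis with a little prompt":
out = model.generate(**batch, max_new_tokens=60)
print(tok.decode(out[0], skip_special_tokens=True))
```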

Turns out Meta already did it, and it went so wrong. Okay, time to open Overleaf then.

Ohhh, I really miss the custom emoji I had on other instances
I will attend #NeurIPS and present a poster at the Machine Learning for Health symposium next week!

#Introduction

Hello everyone, I'm Changye, a 4th-year PhD candidate at the University of Minnesota. My research is on the #interpretability and #explainability of #NLP applications in behavioral health, especially #dementia of the Alzheimer's type. Recently moved over from Twitter for the academic side of me :)