| website | https://changyeli.github.io/ |

In this poster, we set out to answer a question:
Do imperfect ASR transcripts provide sufficient information for acceptable performance on downstream tasks? The short answer: yes, and, paradoxically, they can even outperform manual transcripts!
Check our preprint here: https://arxiv.org/abs/2211.07430
And code here: https://github.com/LinguisticAnomalies/paradox-asr
Linguistic anomalies detectable in spontaneous speech have shown promise for various clinical applications, including screening for dementia and other forms of cognitive impairment. The feasibility of deploying automated tools that can classify language samples obtained from speech in large-scale clinical settings depends on the ability to capture and automatically transcribe the speech for subsequent analysis. However, the impressive performance of self-supervised learning (SSL) automatic speech recognition (ASR) models on curated speech data does not carry over to challenging speech samples from clinical settings. One of the key questions for successfully applying ASR models to clinical applications is whether the imperfect transcripts they generate provide sufficient information for downstream tasks to operate at an acceptable level of accuracy. In this study, we examine the relationship between the errors produced by several deep learning ASR systems and their impact on the downstream task of dementia classification. One of our key findings is that, paradoxically, ASR systems with relatively high error rates can produce transcripts that result in better downstream classification accuracy than classification based on verbatim transcripts.
If I trained a T5/BART-like model (an encoder-decoder transformer) on papers from *ACL and some journals in my field, would this model write my thesis for me? Or at least write my thesis from a little prompt?
Turns out Meta already did it, and it went so wrong.
Okay, time to open Overleaf then.
Hello everyone, I'm Changye, a 4th-year PhD candidate at the University of Minnesota. My research focuses on the #interpretability and #explainability of #NLP applications in behavioral health, especially #dementia of the Alzheimer's type. Recently moved over from Twitter for the academic side of me :)