Hosein Mohebbi

13 Followers
26 Following
5 Posts
PhD candidate at Tilburg University
Web: https://hmohebbi.github.io/
Twitter: https://twitter.com/hmohebbi75

Our upcoming #EACL2024 tutorial “Transformer-specific Interpretability” will focus on the growing family of interpretability methods that exploit architecture-specific features of Transformers to understand LLMs, and discuss their pros & cons!

Jointly presented w/ @jaapjumelet, Michael Hanna, Afra Alishahi & @wzuidema

More info: https://projects.illc.uva.nl/indeep/tutorial/

Hope to see you in Malta!


New paper accepted to #EMNLP2023, with @gchrupala, @wzuidema, and Afra Alishahi.

We adapted and applied measures of 'context-mixing' developed for text models to models of spoken language, and discovered striking differences between the behavior of encoder-only and encoder-decoder speech Transformers.

🔶 Check it out: https://arxiv.org/abs/2310.09925

Homophone Disambiguation Reveals Patterns of Context Mixing in Speech Transformers

Transformers have become a key architecture in speech processing, but our understanding of how they build up representations of acoustic and linguistic structure is limited. In this study, we address this gap by investigating how measures of 'context-mixing' developed for text models can be adapted and applied to models of spoken language. We identify a linguistic phenomenon that is ideal for such a case study: homophony in French (e.g. livre vs livres), where a speech recognition model has to attend to syntactic cues such as determiners and pronouns in order to disambiguate spoken words with identical pronunciations and transcribe them while respecting grammatical agreement. We perform a series of controlled experiments and probing analyses on Transformer-based speech models. Our findings reveal that representations in encoder-only models effectively incorporate these cues to identify the correct transcription, whereas encoders in encoder-decoder models mainly relegate the task of capturing contextual dependencies to decoder modules.


I shared a few thoughts here as a side note on interpretability with *Value Zeroing*. Hope the community finds it useful🤗:
https://hmohebbi.github.io/blog/value-zeroing

Your thoughts and comments are greatly appreciated!
#NLProc #XAI

Why Value Zeroing?

This post serves as a side note on Value Zeroing, an interpretability method for quantifying context mixing in Transformers. It is based on our recent research paper in which we show that the token importance scores obtained through Value Zeroing offer better interpretations compared to previous analysis methods in terms of plausibility, faithfulness, and agreement with probing.


🥳 Thrilled to announce our paper got accepted to #EACL2023!
We introduce *Value Zeroing*, a new interpretability method for quantifying context mixing in Transformers.

Joint work w/ @wzuidema, @gchrupala, and Afra Alishahi

📑Paper: https://arxiv.org/abs/2301.12971
☕Code: https://github.com/hmohebbi/ValueZeroing

#NLProc #InDeep

Quantifying Context Mixing in Transformers

Self-attention weights and their transformed variants have been the main source of information for analyzing token-to-token interactions in Transformer-based models. But despite their ease of interpretation, these weights are not faithful to the models' decisions as they are only one part of an encoder, and other components in the encoder layer can have considerable impact on information mixing in the output representations. In this work, by expanding the scope of analysis to the whole encoder block, we propose Value Zeroing, a novel context mixing score customized for Transformers that provides us with a deeper understanding of how information is mixed at each encoder layer. We demonstrate the superiority of our context mixing score over other analysis methods through a series of complementary evaluations with different viewpoints based on linguistically informed rationales, probing, and faithfulness analysis.
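For readers curious about the mechanics, here is a minimal, illustrative sketch of the core idea. This is my own simplification, not the authors' released code (see the GitHub link above): the paper zeroes value vectors within the full encoder block, including residuals and LayerNorm, whereas this toy version covers only a single attention sublayer; all function names are hypothetical. Zeroing token j's value vector and measuring how much each token i's output representation changes yields a context-mixing score for the pair (i, j):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_output(Q, K, V):
    # Standard scaled dot-product attention for one head.
    scores = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    return scores @ V

def value_zeroing_scores(Q, K, V):
    """Toy context-mixing matrix: entry (i, j) is the cosine distance
    between token i's output with and without token j's value vector."""
    n = Q.shape[0]
    base = attention_output(Q, K, V)
    scores = np.zeros((n, n))
    for j in range(n):
        Vz = V.copy()
        Vz[j] = 0.0                     # zero out token j's value vector
        alt = attention_output(Q, K, Vz)
        for i in range(n):
            cos = (base[i] @ alt[i]) / (
                np.linalg.norm(base[i]) * np.linalg.norm(alt[i]) + 1e-9)
            scores[i, j] = 1.0 - cos    # larger = token i relied more on j
    return scores

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
S = value_zeroing_scores(Q, K, V)       # 4x4 context-mixing matrix
```

Note that, unlike raw attention weights, this score reflects the effect of the whole computation on the output representations, which is the point the abstract makes about faithfulness.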

Excited to be involved in organizing Blackbox next year with Sophie Hao, @jaapjumelet, @hmohebbi, @arya and @boknilev!