Thread: Excited to announce the v1.0 release of the Learning Interpretability Tool (🔥LIT), an interactive platform to debug, validate, and understand ML model behavior. This release brings exciting new features — including layouts, demos, and metrics — and a simplified Python API. https://pair-code.github.io/lit

(1/5)

LIT now supports four layout configurations (single panel, two-panel split top/bottom, two-panel split left/right, and three-panel) and includes the new three-panel layout among its standard layout options. Please send us your feedback on this experimental but highly requested feature!
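If you build your own LIT instance, layouts are declared in Python when you set up the server. The sketch below shows roughly what a custom three-panel layout could look like; the module choices and the left/upper/lower field names are illustrative, so check the lit_nlp.api.layout documentation for the exact v1.0 API.

```python
# Rough sketch of a custom three-panel layout for a LIT server.
# Module and field names are illustrative; see lit_nlp.api.layout for the
# definitive v1.0 API.
from lit_nlp.api import layout

modules = layout.LitModuleName  # enum of built-in frontend modules

THREE_PANEL = layout.LitCanonicalLayout(
    left={"Data": [modules.DataTableModule, modules.DatapointEditorModule]},
    upper={"Predictions": [modules.ClassificationModule]},
    lower={"Analysis": [modules.MetricsModule, modules.EmbeddingsModule]},
    description="Data on the left, predictions on top, aggregate analysis below.",
)
```

A layout like this is then passed to the LIT server alongside your models and datasets.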

(2/5)

Thanks to our Summer of Code contributor, Aryan Chaurasia, LIT has added two new demos: one exploring the performance of a multilingual question-answering model on the TyDi QA dataset, and one for generating and reviewing images from text prompts with DALL-E Mini.

(3/5)

LIT now provides built-in metrics for analyzing multi-label classification (exact match, precision, recall, F1) and seq2seq generation tasks (exact match). Additionally, we have considerably simplified the Python API by consolidating LIT's data representation and streamlining Model definitions.
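Concretely, a custom dataset or model is now just a small Python class whose examples and predictions are plain dicts described by a spec. The sketch below illustrates the shape of the API; names and signatures here are an approximation, so see the API documentation for the exact v1.0 definitions.

```python
# Minimal sketch of the simplified Python API: datasets and models are small
# classes whose examples and predictions are plain dicts described by a spec.
# Names are illustrative; see the LIT API documentation for exact details.
from lit_nlp.api import dataset as lit_dataset
from lit_nlp.api import model as lit_model
from lit_nlp.api import types as lit_types


class ToyDataset(lit_dataset.Dataset):
  """A handful of labeled sentences, each example a plain dict."""

  def __init__(self):
    self._examples = [
        {"sentence": "great movie!", "label": "1"},
        {"sentence": "terrible plot.", "label": "0"},
    ]

  def spec(self):
    return {
        "sentence": lit_types.TextSegment(),
        "label": lit_types.CategoryLabel(vocab=["0", "1"]),
    }


class ToyModel(lit_model.Model):
  """Wraps an underlying classifier; predictions are also plain dicts."""

  def input_spec(self):
    return {"sentence": lit_types.TextSegment()}

  def output_spec(self):
    return {"probas": lit_types.MulticlassPreds(vocab=["0", "1"], parent="label")}

  def predict(self, inputs):
    # Replace with real inference; here we return a uniform distribution
    # for every input example.
    return [{"probas": [0.5, 0.5]} for _ in inputs]
```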

Check out the full release notes at https://github.com/PAIR-code/lit/releases/tag/v1.0.1

(4/5)

Many thanks to all of the contributors who supported this release: @Ryanmullins, @iftenney, @nhussein, Minsuk Kahng, @mahima, @cjqian, @jameswex, Bin Du, Cibi Arjun, and Oscar Wahltinez, as well as all those who have helped build LIT since the project began.

If you find LIT useful for your project, please cite our EMNLP paper (https://aclanthology.org/2020.emnlp-demos.15/), and drop us a line on GitHub or Mastodon!

(5/5)

Ian Tenney, James Wexler, Jasmijn Bastings, Tolga Bolukbasi, Andy Coenen, Sebastian Gehrmann, Ellen Jiang, Mahima Pushkarna, Carey Radebaugh, Emily Reif, and Ann Yuan. 2020. The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for NLP Models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations.