438 Followers
36 Following
5 Posts
Professor, Programmer in NYC.
Cornell, Hugging Face 🤗
Website: https://rush-nlp.com/

Named Tensor Notation (TMLR version, https://arxiv.org/abs/2102.13196)

A rigorous description, opinionated style guide, and gentle polemic for named tensors in math notation.

* Macros: https://ctan.org/tex-archive/macros/latex/contrib/namedtensor

Named Tensor Notation is an attempt to define a mathematical notation with named axes. The central conceit is that deep learning is not linear algebra, and that by forcing deep learning into linear-algebra notation we leave many technical details ambiguous to readers.

Named Tensor Notation

We propose a notation for tensors with named axes, which relieves the author, reader, and future implementers of machine learning models from the burden of keeping track of the order of axes and the purpose of each. The notation makes it easy to lift operations on low-order tensors to higher order ones, for example, from images to minibatches of images, or from an attention mechanism to multiple attention heads. After a brief overview and formal definition of the notation, we illustrate it through several examples from modern machine learning, from building blocks like attention and convolution to full models like Transformers and LeNet. We then discuss differential calculus in our notation and compare with some alternative notations. Our proposals build on ideas from many previous papers and software libraries. We hope that our notation will encourage more authors to use named tensors, resulting in clearer papers and more precise implementations.
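As a rough illustration of the idea (a minimal Python sketch, not the paper's notation or any library's API; the `NamedTensor` class and its methods are hypothetical), the point is that contraction is specified by axis name rather than axis position:

```python
import numpy as np

class NamedTensor:
    """A toy tensor whose axes are addressed by name, not position."""
    def __init__(self, data, names):
        self.data = np.asarray(data)
        self.names = tuple(names)
        assert self.data.ndim == len(self.names)

    def dot(self, other, name):
        """Contract self and other along the axis called `name`."""
        i = self.names.index(name)
        j = other.names.index(name)
        out = np.tensordot(self.data, other.data, axes=(i, j))
        out_names = (tuple(n for n in self.names if n != name)
                     + tuple(n for n in other.names if n != name))
        return NamedTensor(out, out_names)

# A (batch, feature) activation times a (feature, hidden) weight:
x = NamedTensor(np.ones((2, 3)), ("batch", "feature"))
W = NamedTensor(np.ones((3, 4)), ("feature", "hidden"))
y = x.dot(W, "feature")
print(y.names, y.data.shape)  # ('batch', 'hidden') (2, 4)
```

Because axes are looked up by name, the same `dot` call works no matter what order the axes happen to be stored in, which is the ambiguity the notation is trying to remove.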


Hi 😀.

If you are looking for a winter break project, here is the full collection of ML/coding puzzles.

* https://github.com/srush/tensor-puzzles
* https://github.com/srush/gpu-puzzles
* https://github.com/srush/autodiff-puzzles
* https://github.com/srush/raspy
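To give a flavor of the Tensor Puzzles (an invented example in the same spirit, not a puzzle from the repo): the game is to reimplement standard ops using only `arange`, broadcasting, and comparisons.

```python
import numpy as np

def outer_eye(n):
    # Identity matrix built from arange + broadcasting, no np.eye allowed:
    # position (i, j) is 1 exactly when i == j.
    r = np.arange(n)
    return (r[:, None] == r[None, :]).astype(int)

print(outer_eye(3))
# [[1 0 0]
#  [0 1 0]
#  [0 0 1]]
```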

GitHub - srush/Tensor-Puzzles: Solve puzzles. Improve your pytorch.


The blog post is a Python reimplementation and visualizer of the paper "Thinking Like Transformers" (https://arxiv.org/abs/2106.06981).

The full version of the RASP language is here:

https://github.com/tech-srl/RASP

Thinking Like Transformers

What is the computational model behind a Transformer? Where recurrent neural networks have direct parallels in finite state machines, allowing clear discussion and thought around architecture variants or trained models, Transformers have no such familiar parallel. In this paper we aim to change that, proposing a computational model for the transformer-encoder in the form of a programming language. We map the basic components of a transformer-encoder -- attention and feed-forward computation -- into simple primitives, around which we form a programming language: the Restricted Access Sequence Processing Language (RASP). We show how RASP can be used to program solutions to tasks that could conceivably be learned by a Transformer, and how a Transformer can be trained to mimic a RASP solution. In particular, we provide RASP programs for histograms, sorting, and Dyck-languages. We further use our model to relate their difficulty in terms of the number of required layers and attention heads: analyzing a RASP program implies a maximum number of heads and layers necessary to encode a task in a transformer. Finally, we see how insights gained from our abstraction might be used to explain phenomena seen in recent works.
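To give a feel for the select/aggregate primitives the paper is built on, here is a plain-Python sketch (not RASP's or raspy's actual API; `select` and `selector_width` here are simplified stand-ins) of the histogram program, where each token attends to the positions holding an equal token and counts them:

```python
def select(keys, queries, predicate):
    # Attention-like selector: for each query position, a row of booleans
    # marking which key positions it attends to.
    return [[predicate(k, q) for k in keys] for q in queries]

def selector_width(sel):
    # How many positions each query selected (RASP's selector-width idea).
    return [sum(row) for row in sel]

tokens = list("hello")
same = select(tokens, tokens, lambda k, q: k == q)
print(selector_width(same))  # [1, 1, 2, 2, 1]
```

The restricted primitives are the point: counting how many heads and selectors a program needs gives the paper's bound on the layers and attention heads a Transformer would need for the same task.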


Blog Post: On "Thinking Like Transformers"

In which I get a bit obsessed with learning to code in Transformer lang 🤖.

https://github.com/srush/raspy

(You can follow along, or do the exercises yourself, in a Colab notebook.)

GitHub - srush/raspy: An interactive exploration of Transformer programming.
