A new adventure: mechanistic interpretability with NeuroScope

What if we could compose LLMs from reusable circuits? It’s always bugged me that we can’t explain how large language models do what they do. It makes the models difficult to trust, poss…
