Who's at #cosyne2023 and wants to send back highlights?

Every year has its trends. Sparse coding, efficient coding, ring attractors, manifolds. What's the thing this year? Any new emerging big ideas?

Or just links to posters you liked?

@Catrina_Hacker @dlevenstein @tyrell_turing @cllantz @chrisXrodgers

@NicoleCRust Tatiana Engel asked us to move beyond the manifold. (So maybe the manifold is down but not out!) She wants to connect small latent circuits to the dynamics of the full larger system.

Two talks so far on heading changes in animals during directed motion - one in flies (Collie .. R Wilson) and one in mice (Green .. Harvey).

But we're still early on and I'm still rooting for team prediction!

@chrisXrodgers @NicoleCRust Tatiana’s talk was great! Making the connection between RNNs and classic circuit-level models.
@chrisXrodgers
Interesting! Please keep us updated as you have bandwidth.

@NicoleCRust Now that some time has passed after coming back from #cosyne23, I would say the main theme of the meeting this year was "interpretability of models". For a while people have been talking about explainable AI and how neural networks are a black box. What was different this year was that there was actually quite a bit of skepticism that interpretability is even possible, or in some cases even desirable.

My favorite take on this issue came from a workshop led by Sabera Talukder and Eric Trautmann. https://sites.google.com/view/taming-complexity-cosyne23/home I think it was best summarized by Yisong Yue, who said that for interpretability to mean anything at all, the modeler and observer need to at least agree on a shared language and objective function (he put it better but I can't remember his wording).

The backdrop for this is that Cosyne is becoming increasingly ML-driven, to the point where some people expressed confusion about why anyone is still trying to understand the brain itself, rather than neural networks, which are powerful objects of study in their own right.

Finally Cosyne continues to have unresolved culture issues, leading to some people feeling non-included based on gender, race, and ability. There was also concern about the ethics of holding it at an extraordinarily expensive ski resort. OTOH, the memorial for Krishna Shenoy was moving and inspired us all to remember that people come first and science after that.

My meeting report sounds kind of like a downer, but overall it was certainly thought-provoking and I enjoyed reconnecting with great scientists and ideas!


@chrisXrodgers @NicoleCRust I was rather saddened to hear that in the introductory session there was talk of who had "won" Cosyne by having most abstracts accepted, and even disputes about that on twitter afterwards. Certainly doesn't help less rich groups feel welcome. I had thought after the experience of the last few years of making everything more openly accessible to everyone that we might be past all that. 😔

@chrisXrodgers
So insightful. Not a downer at all, in my eyes. Every year has things that ebb and in a few years they fade, like waves. Your take on the scientific ebb for 2023 is really helpful. And Cosyne is part of the conversation we are all having around diversity issues and how to navigate.

As you say, people come first. RIP and gratitude, Krishna.

Thank you!

@chrisXrodgers @NicoleCRust Thanks for sharing that insight on explainability. That's really interesting, and makes a lot of sense. Definitely a little scary, though.

We like to talk about human beings having "general intelligence." We may be way more flexible than other animals or AIs, but we still have a very specialized, and particular kind of intelligence. When we design intelligent systems, we must remember they do not think or experience reality like we do. They may always be quite alien to us, and that seems to be a difficult idea for us to wrap our minds around.