@NicoleCRust Now that some time has passed after coming back from #cosyne23, I would say the main theme of the meeting this year was "interpretability of models". For a while people have been talking about explainable AI and how neural networks are a black box. What was different this year was that there was actually quite a bit of skepticism that interpretability is even possible, or in some cases even desirable.
My favorite take on this issue came from a workshop led by Sabera Talukder and Eric Trautmann. https://sites.google.com/view/taming-complexity-cosyne23/home I think it was best summarized by Yisong Yue, who said that for interpretability to mean anything at all, the modeler and observer need to at least agree on a shared language and objective function (he put it better, but I can't remember his exact wording).
The backdrop for this is that Cosyne is becoming increasingly ML-driven, to the point where some people expressed confusion about why anyone is still trying to understand the brain itself, rather than neural networks, which are powerful objects of study in their own right.
Finally, Cosyne continues to have unresolved culture issues, leading some people to feel excluded based on gender, race, and ability. There was also concern about the ethics of holding it at an extraordinarily expensive ski resort. OTOH, the memorial for Krishna Shenoy was moving and reminded us all that people come first and science second.
My meeting report sounds kind of a downer, but overall it was certainly thought-provoking, and I enjoyed reconnecting with great scientists and ideas!
