Blaise's presence stood in stark contrast to the other speakers and panelists at #COSYNE2024, who have so far given beautiful and thoughtful talks, considered questions deeply, and acknowledged the limitations of their work or the field at large.
His presence amplified all the toxic traits that are stereotypical of Silicon Valley and antithetical to the mission of societies like COSYNE.
- Uncritical acceptance of SOTA technology without any consideration of the limitations or criticisms that have been raised.
- A lack of acknowledgement of almost any prior work, particularly that done by anyone who isn't a white male.
- A lack of consideration of relevant ethical issues.
- Advocacy of positions that ignore or dismiss the humanity of others (e.g., the ableist stance on aging, putting considerations of AI "rights" over current civil rights issues affecting humans).
Now, I love COSYNE and its community dearly. It's a conference I always look forward to attending when I can, and much of the work that I cite and study comes from the COSYNE community. I think having a Silicon Valley techbro come and lecture us about the glory of modern AI, without any clear awareness of the context of these issues, was a notable stain on an otherwise wonderful conference. I hope we can avoid such unnecessary and toxic behavior in the future.
4/4
Now let's move on to his behavior during the panel discussion later in the evening.
Here's a sampling of his positions.
- Neuroscientists who do not accept modern AI in its current SOTA form (really only the models being pushed by Silicon Valley) are clearly ignorant about the issue of intelligence.
- Academic research cannot afford the $100+ million to train the next gen large models (so it just shouldn't try to work in this space).
- AI doesn't really need neuroscience, but neuroscientists need AI (a position also supported by a few other panelists).
- Aging is something to be cured (an extremely ableist position).
- When asked what we will be debating in 20 years, his response was AI personhood and rights. This was said in a room filled with women and multiple trans/non-binary scientists from America whose rights are currently being taken away with no clear resolution in sight.
3/4
First, let's summarize Blaise's talk itself. I posted my opinions on it yesterday during the talk.
https://neuromatch.social/@tdverstynen/112020275221817874
Here's a quick summary of his main points:
- AGI is exemplified by the abilities of LLMs like ChatGPT, therefore AGI is here and LLMs are an imperfect example of it. People who cannot accept this fact are ignorant of the issues.
- Prediction is the core microfunction that makes intelligence work.
- Life is intelligence.
- Genetic algorithms work. (No, seriously, he presented a genetic algorithm model as if it were the first time anyone had looked at the emergence of intelligent structure, with absolutely no acknowledgement of prior work that now goes back almost half a century.)
The talk was full of very bold, but unsupported, statements and a complete lack of acknowledgement of prior work (except for the occasional obligatory references to Schrödinger, von Neumann, and Turing). Despite the bold talk title, "What is intelligence?", we didn't really learn the answer to that question (unless you accept his premise that LLMs are AGI and thus intelligence).
2/4
Apparently, according to Blaise Aguera y Arcas at #COSYNE2024, AGI is defined as the abilities that LLMs have, thus LLMs are AGI and it has arrived. Maybe the folks working on #AI should study circular inference a little bit more.
The more I think about it, the more I feel that Blaise Aguera y Arcas's presence at #COSYNE2024 was both toxic and counterproductive to the mission and goals of the meeting and community.
Get out your sewing kits: a vent thread is inbound!
1/4
To whoever it was that asked the panelists (paraphrasing) “Why are you even here and what is the purpose of this debate?” I salute you. Thank you for your service!