NancyKanwisher

@NancyKanwisher@sigmoid.social

U.S. Senator Ed Markey (D-MA) is calling on Supreme Court Justice Clarence Thomas to resign over allegations of corruption spanning more than two decades. He is the first sitting U.S. Senator to do so.

https://www.alternet.org/send/clarence-thomas/

'Unsalvageable': Senator becomes first to call for Clarence Thomas to resign over corruption allegations

Are you also excited about, or curious to learn more about, the new opportunities #ANNs provide to ask #why questions of #minds and #brains?

Check out our new #review #paper with the amazing Meenakshi Khosla and @NancyKanwisher in @TrendsNeuro

https://www.cell.com/trends/neurosciences/fulltext/S0166-2236(22)00262-4

#neuralnetworks #optimization #organization #auditory #visual #system

RT @social_brains
I'm teaching Searle's Chinese Room in my new class. So I had ChatGPT explain it and then gave that script to a http://d-id.com avatar to read. Seemed appropriately meta.
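
For anyone curious how that ChatGPT-to-avatar pipeline might be wired up, here is a minimal Python sketch. The OpenAI chat call follows the current openai SDK; the D-ID endpoint, payload fields, auth scheme, model name, and avatar image URL are assumptions to verify against both services' docs.

import os

import requests
from openai import OpenAI

# 1) Ask ChatGPT for a short script explaining Searle's Chinese Room.
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
chat = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; any chat model works
    messages=[{"role": "user",
               "content": "Explain Searle's Chinese Room argument in about 150 words."}],
)
script = chat.choices[0].message.content

# 2) Hand the script to a talking-head avatar via D-ID's /talks API
#    (endpoint, fields, and Basic auth are assumptions from their public docs).
resp = requests.post(
    "https://api.d-id.com/talks",
    headers={"Authorization": f"Basic {os.environ['DID_API_KEY']}"},
    json={
        "source_url": "https://example.com/avatar.jpg",  # hypothetical face image
        "script": {"type": "text", "input": script},
    },
)
resp.raise_for_status()
print(resp.json())  # returns a talk id; the rendered video is fetched separately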

My little book is now officially published by Cambridge Univ Press and available for free here: https://www.cambridge.org/core/elements/attending-to-moving-objects/0914ABF2EF7D03676124F7250874071A
I draw lessons from object tracking research to illuminate the nature of the bottlenecks on human visual processing.

The official version is available for free for two weeks. The version I published with bookdown (https://tracking.whatanimalssee.com/intro.html#summary) will be available for free at least until I die, or become too ashamed of the book. #attention #perception
#openaccess @cognition

To Any Recently Laid-off Tech Workers,
If you have strong ML/Python skills, consider applying for this position in my lab at MIT:
https://careers.peopleclick.com/careerscp/client_mit/external/jobDetails/jobDetail.html?jobPostId=25473&localeCode=en-us
The pay is lower and the free food less fancy, but we do cool research on the human brain and have a lot of fun. And I have a near-perfect record of helping my lab techs get into top Ph.D. programs in cog sci/neuroscience.
Check us out here: https://web.mit.edu/bcs/nklab/index.shtml
Technical Associate I - Kanwisher Lab (MIT, Cambridge MA 02139)

It’s been fun working with a brilliant team of coauthors - @kmahowald @ev_fedorenko @ibandlank @NancyKanwisher & Josh Tenenbaum

We’ve done a lot of work refining our views and revising our arguments every time a new big model came out. In the end, we still think a cogsci perspective is valuable - and hope you do too :) 10/10

Congrats to @neuranna @kmahowald @IbanDlank @ev_fedorenko on a fascinating analysis of large language models informed by the brain: don't blame LLMs for being unable to think; neither can the brain's language system!
https://arxiv.org/abs/2301.06627

https://twitter.com/neuranna/status/1615737072207400962

Dissociating language and thought in large language models

Large Language Models (LLMs) have come closest among all models to date to mastering human language, yet opinions about their linguistic and cognitive capabilities remain split. Here, we evaluate LLMs using a distinction between formal linguistic competence -- knowledge of linguistic rules and patterns -- and functional linguistic competence -- understanding and using language in the world. We ground this distinction in human neuroscience, which has shown that formal and functional competence rely on different neural mechanisms. Although LLMs are surprisingly good at formal competence, their performance on functional competence tasks remains spotty and often requires specialized fine-tuning and/or coupling with external modules. We posit that models that use language in human-like ways would need to master both of these competence types, which, in turn, could require the emergence of mechanisms specialized for formal linguistic competence, distinct from functional competence.

I argued in Nature Neuroscience in 2000 that the "expertise hypothesis" seemed implausible because "the visual features that are diagnostic in discriminating cars (for example) are bound to be different from the features that are diagnostic in discriminating faces." Now @KathaDobs and @pranjulgupta have shown this rigorously with CNNs here:
https://mastodon.social/@KathaDobs/109711150946753646
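
One toy way to see the logic in code: train a small CNN on one discrimination task, freeze it, and check how well a linear probe on its features handles the other task compared with a natively trained network. A minimal PyTorch sketch with synthetic stand-in data (the tiny ConvNet, shapes, and random tensors are placeholders, not the actual Dobs & Gupta setup):

import torch
import torch.nn as nn

torch.manual_seed(0)

def tiny_cnn():
    # Placeholder backbone; the actual studies use much deeper networks.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten())  # -> 32-d features

def fit(modules, x, y, steps=100):
    # Optimize only the parameters in `modules` that require gradients.
    params = [p for m in modules for p in m.parameters() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        out = x
        for m in modules:
            out = m(out)
        loss = loss_fn(out, y)
        loss.backward()
        opt.step()
    return loss.item()

# Synthetic stand-ins for "face" and "car" discrimination datasets.
faces_x, faces_y = torch.randn(256, 3, 32, 32), torch.randint(0, 10, (256,))
cars_x, cars_y = torch.randn(256, 3, 32, 32), torch.randint(0, 10, (256,))

# 1) Train a backbone + head end to end on faces.
face_net, face_head = tiny_cnn(), nn.Linear(32, 10)
fit([face_net, face_head], faces_x, faces_y)

# 2) Freeze the face-trained backbone; fit only a linear probe on cars.
for p in face_net.parameters():
    p.requires_grad = False
transfer_loss = fit([face_net, nn.Linear(32, 10)], cars_x, cars_y)

# 3) Baseline: a backbone trained natively on cars.
native_loss = fit([tiny_cnn(), nn.Linear(32, 10)], cars_x, cars_y)

# With real images, a large transfer-vs-native gap is the signature that
# features diagnostic for one domain are not diagnostic for the other.
print(f"transfer {transfer_loss:.3f} vs native {native_loss:.3f}")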

I'm teaching a "machine learning for climate change" course to data science students. If you work in climate, what do you want them to learn? (boost for reach, please!)

#ClimateChange
#EnergyMastodon
#ClimateMastodon
#ClimateScience
#Biodiversity
#Climate

RT @KiaNobre
Mark Stokes (@StokesNeuro), RIP
You enriched us with your fortitude and gentleness.
You changed our scientific views with your brilliant mind.
Thank you. Now brighten the stars.