Appreciated @willknight's piece amidst the AI reporting dreck, but @emilymbender, one of the most perspicacious scholars in the AI space rn, notes that the article falls prey to what it argues against, anthropomorphizing the system as one that "reasons":
https://dair-community.social/@emilymbender/110749184977041975
(🧵1/n)
Emily M. Bender (she/her) (@[email protected])

And then he describes LLMs this way: "It may be best to think of them as preternaturally knowledgeable and gifted mimics that, although capable of surprisingly sophisticated reasoning, are deeply flawed and have only a limited “knowledge” of the world." >>

@willknight @emilymbender
This subtly influences us readers to ascribe reasoning to these systems without even realizing it. I assume this effect was also at play for Will in the first place, shaped by similar writing that came before, as it is for all of us. (2/n)
@willknight @emilymbender
Am so so so thankful we (people reading about AI) have Emily's input to shed light on our blind spots. Global understanding of AI systems is fundamentally better informed because of how well she can see things and describe what she sees. 😍 (/end)