Helped a local journalist understand the fundamental mechanisms behind LLMs and generative algorithms, to illuminate why they "get things wrong," and connected the dots between how they're currently trained and controlled, colonialism, and rising authoritarianism.

It feels good to help clever and kind people connect these dots and see those lightbulb moments.

(One of these days I have to do that with my fellow Petaluma Pride board members... A few folks are a bit too taken by these things.)

@robin Was there anything that was particularly helpful in communicating that stuff? I see a lot of people advocating for LLM use who seem to misunderstand what they are and aren't capable of, so I'm interested in how to do that education effectively.

@elplatt I started by unpacking "AI" with familiar examples of ML in recommendation algorithms, autocorrect, and predictive texting — and then highlighted that LLMs are, at their core, the same thing. That made it easier to explain why and how they "get things wrong," insofar as they have no sense of what's right, only what is most common.
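That "most common, not most correct" point can be sketched with a toy next-word predictor — a minimal, purely illustrative bigram counter, nothing like a real LLM's architecture, but the same core idea of emitting the statistically likeliest continuation (the corpus and names here are my own invention):

```python
from collections import Counter, defaultdict

# Toy corpus: "green" follows "is" more often than "blue" does,
# so the model will prefer "green" regardless of which is true.
corpus = (
    "the sky is blue . the sky is blue . the sky is green . "
    "the grass is green . the grass is green ."
).split()

# Count, for each word, which word follows it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Return the most frequent next word: "most common," not "most right."
    return follows[word].most_common(1)[0][0]

print(predict("is"))  # picks whichever continuation appeared most often
```

Here `predict("is")` returns "green" simply because "is green" outnumbers "is blue" in the corpus — a frequency judgment with no notion of the sky's actual color.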

Then I highlighted the limitations of training only on digitized content (much of it used without consent, ofc), and the problem of centralization and control by rich extremists.

@elplatt the journo is well versed in colonialism and issues of capitalism, so they were able to put all that in context quickly!
@robin Thanks! Sounds pretty similar to my approach. I've been working on a blog post about related issues in scientific research. I'll share here when I finish!
@elplatt Yesss glad I'm in good company and can't wait to read your blog post!