In this issue of "Human-Generated Content" I wrote about how some publishers are turning to AI while others are turning to the Fediverse:

- @ben.werdmuller talks about the media apocalypse
- @vanessa_lea_otero and Bryan Walsh talk AI news summaries
- @verge and @404mediaco double down on the social web
- @crumbler endorses the Fediverse on @pjvogt's "Search Engine" podcast

Oh, and I try using ChatGPT for news. Spoiler: it's bad.

All here 👉🏼 http://www.augment.ink/human-generated-content-2
Human-Generated Content #2

Publishers are seeing two very different futures for their businesses. Is the future of media aggregated and summarized or is it direct-to-audience?

I find it hilarious that the first time I asked ChatGPT to tell me about a specific news event - a huge moment in Indian politics - it just straight up lied to me:
It's really awful for anything that requires a factual response, especially one based on recent information. Calling it a "lie" implies ChatGPT understood that it was providing false information. It's a pet peeve, but I think using terms that suggest agency makes it seem like chatbots are intelligent in a way that they aren't.
That's really fair pushback; I'll use different terms in the future. Any recommendations on better terminology to call these out? "Hallucination", I guess?
I've mostly been saying a response "includes incorrect information," but that doesn't have a ring to it. "Hallucination" is a bit better, because at least it doesn't imply that the chatbot is doing it intentionally.
I'll call it a hallucination from now on then -- thanks for the info!