OK, now this is fascinating, and I’m not sure what I make of it: ChatGPT gives advice to Zelenskyy.

https://link.medium.com/4AWPidHRVwb

@elyseea I didn’t find ChatGPT’s advice all that interesting. It was very vague and seemed to fixate on “ceasefire” and “de-escalation” in response to every question. We tried that after 2014 already, and it got us where we are now.

@Gregnee I completely agree — the substance of the advice was anodyne and formulaic, though the ChatGPT ‘voice’ is always confident. I’m curious about whether folks would start to turn to a tool like this for advice about social and political issues. That’s kind of what blew my mind.

It made me curious to see if you could get the bot to recommend escalation, for example, or whether it would always default to bland balance. Just a curiosity…

@elyseea ChatGPT’s tagline should be, “Not just wrong, but smugly and confidently wrong!”

I wonder whether the lack of humility, doubt, and nuance in ChatGPT’s voice comes entirely from the training data, or whether there’s something in the technology itself that makes it that way. I don’t know enough about AI/large language models to make an educated guess. Garbage in, garbage out, as we used to say.

@elyseea We’re not far from a social media landscape where SM companies not only design algorithms to increase engagement, but actually employ ChatGPT-like technology to simulate users to elicit engagement and outrage. For all we know, we’re already in that environment now.
@Gregnee Yes, the way that ChatGPT and similar AI tools could be used to "flood the media with @#$%" is impressive.