When ChatGPT turns snitch

The largely overlooked privacy risks of using AI apps that not only remember your conversations, but are capable of using these to reveal your deepest secrets to others

The Future of Being Human
I'm not sure I see how this is meaningfully different from the threat posed by a search engine. It's a very real threat, and as a result I've always done my best to search from a browser context that isn't logged in. But it's not a new threat, or something distinctive to AI.
I don't understand how you don't understand. Trying to recreate someone's internal thoughts and attitudes from looking at their search history is a pale imitation of this. Just the thought experiment of a customs officer asking ChatGPT to summarise your political viewpoints was eye opening to me.
How so? You'd have a very, very good understanding of my political viewpoints from the log of my Google searches. I'm asking sincerely, not simply to push back on you.

Asking Google for details about January 6th is different than telling ChatGPT I think the election was stolen, and then arguing with it for hours about it.

It would be harder to argue in front of a jury that what you typed wasn't an accurate representation of what you were thinking, and that you were merely being duplicitous with ChatGPT.

I don't think it really is, in the circumstances where we contemplate this threat. In both the search engine case and the ChatGPT case, we're talking about circumstantial evidence (which, to be clear, is real and legally weighty in the US). Particularly in the CBP setting that keeps coming up here, a Border Agent doesn't need the additional ChatGPT context you're talking about to draw an adverse conclusion!

I think at this point the fulcrum of my argument is that people might be inadvertently lulling themselves into thinking they're revealing meaningfully less about themselves to Google than to ChatGPT. My claim is that if there is a difference, it's not clear to me that it's a material one.