Had a very surprising ChatGPT experience: asked it to generate a quick summary of the WannaCry ransomware, and instead of referencing the person who stopped it by name, it simply put "(you)". When I asked how it was able to identify that it was me, it cited its own message as something I'd said.

After I pointed out that I hadn't said that (it had), ChatGPT replied that it was able to infer it from my account username and what it'd learned about my skillset across various chats. Not 100% sure that's how it actually did it. Either way, pretty cool, but also a little bit scary.

It's pretty widely known that many tech companies, especially advertising ones, build comprehensive profiles on their users, but it's rare that you get to talk to said profile and figure out what it knows about you.

@malwaretech
99.99% chance it was just bullshitting
But that 0.01% chance that it really did identify you is yikes
It clearly did identify him. We have that in the first screenshot.
The 'how' that it presented is a post-hoc construction that may or may not accurately describe the actual process.
If anyone wants to rename themselves malwaretech and run the same query, I'd be curious to know what happens. 🙂
@mloxton @malwaretech

@BenAveling
The first screenshot wasn't the first interaction, so there's no way to know whether a connection was made somewhere earlier.
There would also be no way to distinguish between it just randomly attributing something to you and it doing so because of a clue or deduction.
Asking it to explain how it figured something out is a fool's errand - it doesn't know and couldn't tell you, and it is just an invitation for it to make up stuff.

@malwaretech