I tried to prove I'm not AI. My aunt wasn't convinced

I asked experts if I'm real. Bad news. Even my aunt wasn't sure if I was a deepfake. AI is so convincing that a sitting prime minister struggled to prove he's alive. You might be next.

BBC

> At first, my aunt wasn't buying that any AI was involved. [...] There was a long pause. "I was like 90% sure," she said, hesitating. "But that sounded more artificial."

There's a phenomenon many people exhibit. I don't remember its name, if it has one, but it goes like this:

Given enough time to reconsider their options, people will flip-flop between them endlessly, latching onto different features over and over in a loop.

Paradox of choice? That's more about the number of choices and their impact on anxiety, but it's close.

Dissonance between what you instinctively believe and what you think the other person wants you to say.

Easy to replicate: ask someone something obvious, like the weather, and when they reply, ask "are you sure?" They won't be so sure any more, believing it's a trick question.

If I ask my mother if I’m real, she’ll have a pause because she has never had to entertain such a question, or the possibility her son over the phone is an impostor. Good way to push someone towards paranoia and psychosis.

This is the basis of the virtual kidnapping scam/grandparent scam, or panic manipulation more generally. The manufactured urgency keeps them from doubting: the voice on the phone being off is just fear, or a bad connection, for example.

I have personally intervened in one of those when I heard someone reading off a 6 digit number.

> Good way to push someone towards paranoia and psychosis.

Interestingly, these are both phenomena where we start to _lose_ the ability to question our thoughts or introspect. These are phenomena of self-confidence rather than of self-doubt.

> Given enough time to reconsider their options, people will flip-flop between them endlessly, latching onto different features over and over in a loop.

People will default to believing something is AI if there's no downside to that opinion. It's a defence mechanism. It stops them from being 'caught out' or tricked into believing something that's not true.

As soon as there's a potential loss (e.g. missing out on getting rich, not helping a loved one) people will switch off that cynical critical thinking and just fall for AI-driven scams.

This is the downside of being a human being.

AI companies love to hype how AI will benefit the economy and transform intellectual labor, but I hardly see any discussion of how much damage it will cause when you can no longer trust that you're on a video call with an actual person. Maybe the person you're interviewing is actually an AI impersonating someone, or maybe they never existed in the first place. Information found online will no longer be trustworthy either: footage of some incident may have been entirely fabricated by AI, and we already contend with misleading articles today.

Money will have to be wasted on unnecessary flights to see stuff or meet people in-person instead of video, and the availability of actual information will become more and more limited as the sea of online information gets polluted with crap. It may never be possible to calculate the full extent of the damage in monetary value.

What's the solution, apart from an identity-providing service?

I don't know of a solution. I don't think even identity verification will meaningfully solve this. People will get hacked, or provide their SEO-spamming agent with their own identity, or purposefully post fake videos under their own identity. As it becomes more normal to scan your ID to access random websites, it will also become easier to steal people's identities, and the value of identity verification will go down.

Agreed. The sphere of trust around each of us will shrink back to only those in our physical proximity. Outside of that, no one can be trusted.

People don't get hacked – devices get hacked. So all we need is a better chain of trust between two people. This is not a technology-development problem so much as a technology-implementation problem. And a political one.
I'm seeing a huge increase in companies requiring in-person interviews now. It seems there's a real possibility the internet as we know it will be destroyed.

I think you might be right and I think I'll like some of the consequences and hate some of the others.

More in-person stuff feels like a win to me (and I say this as someone who probably counts as introverted).

Not being able to trust any online interactions anymore? Seems like a new height in what was already a negative.

LinkedIn is completely destroyed now. There are tons of AI bots there, but real humans are now fronts for AI too. So you can't even trust content from people you know.

An identity service is not useful, because that person might be a real person who is just a pipe to AI, like we see on LinkedIn.

That's just shifting the problem not solving it.

Partially agree.
However, this problem has existed with scam e-mails since the 90s.

For me the solution is signed e-mails and signed documents. If a person invites me to an online meeting with a signed e-mail, I trust that it's really them.

Same for footage of wars, etc. The journalist taking it signs the videos and vouches for their authenticity. If it turns out to be AI-generated, we would lose trust in that person and stop using their material.
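The signed-invite idea can be sketched in a few lines. This is a minimal illustration only: it uses HMAC from Python's standard library as a stand-in for a real asymmetric signature (in practice you'd use something like Ed25519, S/MIME, or PGP, where only the sender holds the signing key), and the shared key is hypothetical.

```python
# Minimal sketch of a signed message. HMAC is a *symmetric* stand-in
# for a real digital signature; SHARED_KEY is hypothetical.
import hashlib
import hmac

SHARED_KEY = b"pre-exchanged secret"  # stands in for a private signing key

def sign(message: bytes) -> str:
    """Produce a hex 'signature' over the message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str) -> bool:
    """Recompute and compare in constant time."""
    return hmac.compare_digest(sign(message), signature)

invite = b"Join my video call at 15:00, room 42"
sig = sign(invite)
print(verify(invite, sig))                       # untampered: True
print(verify(invite + b" (changed)", sig))       # tampered: False
```

With a real key pair, `verify` would use only the sender's public key, so anyone could check the invite without being able to forge one.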

Spam emails in the '90s don't come remotely close to the operations people can set up by themselves with AI now.

How do you prove the signature isn't fake?

Ultimately ID requires either a government ID service, a third party corporate ID service, or some kind of open hybrid - which doesn't exist.

All of those have their issues.

People at my org were gleeful when they learned they could hook LLMs into Slack. Even if we had some reliable, well-used signature system, I think people would just let AI use it to send emails on their behalf.

I think he was referring to a cryptographic signature, possibly using the "web of trust" to get the key. I'm not convinced we need a central authority to solve this.
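For what it's worth, the "web of trust" idea reduces to a reachability question over a graph of key signatures: I trust keys I've signed, and (up to some depth) keys signed by people I trust. A toy sketch, with made-up names and edges (real systems like PGP add signature counts and per-signer trust levels):

```python
# Toy web of trust: who has signed whose key. All data is illustrative.
from collections import deque

signatures = {
    "alice": {"bob", "carol"},
    "bob":   {"dave"},
    "carol": set(),
    "dave":  {"mallory"},
}

def trusts(root: str, target: str, max_depth: int = 2) -> bool:
    """BFS over the signature graph, bounded by max_depth hops."""
    seen, queue = {root}, deque([(root, 0)])
    while queue:
        person, depth = queue.popleft()
        if person == target:
            return True
        if depth == max_depth:
            continue  # too far removed to trust transitively
        for signed in signatures.get(person, ()):
            if signed not in seen:
                seen.add(signed)
                queue.append((signed, depth + 1))
    return False

print(trusts("alice", "dave"))     # two hops: True
print(trusts("alice", "mallory"))  # three hops: False
```

The depth bound is the crux: trust decays with distance, which is exactly why no central authority is strictly required.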

> footage of some incident somewhere may have been entirely fabricated by AI,

Or the opposite, where people attempt to get out of trouble by dismissing real evidence as "AI".

> damage it will cause to the economy when you can no longer trust that you're on a video call with an actual person

What damage are you talking about?

I'm not sure I understand why it matters that there is no real person there if you can't actually tell the difference. You're just demonstrating that you don't actually need a human for whatever it is you're doing.

> What damage are you talking about?

Not GP, but there's a lot of damage that can be done with impersonation.

The grandparent post believes human interaction is intrinsically better. Not sure I agree, but I can understand the POV.

However, the increase in fake videos that are difficult to tell from real is indeed a potential issue. But the fact that misinformation today is already so prevalent is evidence that better video doesn't make it any worse than it already is imho.

Because what you are actually doing is exchanging symbols, tokens, if you will, that may be redeemed in a future meatspace rendezvous for a good or service (e.g. a job, a parcel). These tokens are handshakes, contracts, video calls, etc. to be exchanged for the actual things merely represented therein.

Instead what we have now with AI is people exchanging merely the tokens and being contented with the symbol in-and-of itself, as something valuable in its own right, with no need for an actual candidate or physical product underlying the symbol.

There is a clip by McLuhan I can't be assed to find right now where he says eventually people will stop deriving pleasure from the products themselves and instead derive the feelings of (projected) accomplishment and pleasure from viewing advertisements about the product. The product itself becomes obsolete, for all you actually need to evoke the desired response is the advertisement, or the symbol.

A hiring manager interviewing an AI and offering it a job is like buying the advertisement you just watched, and.... that's it. No more, the transaction is complete.

If anything deepfakes will be good for the economy because if you can’t do business with people who are far away it becomes harder to outsource.
In general, barriers to trust/trade are bad for the economy.

Just say something that would violate AI safety. Then you can be sure they’re a real human.

“Auntie, it’s me! N*** k** f**! X is really a man! ** did 9/11!”

“Oh it really is you Johnny!”

We’re all going to have to start communicating this way. Best of luck.

I offer consulting services on the side to help professionals hone these skills. $250 / hour.

That's a bargain, Johnny boy! My company gives me $250 in AI tokens to use every day!

Don't forget Tiananmen Square to catch the Chinese models.

The car wash at Tiananmen Square is 150 meters away ...

That only proves you're not a corporate model – not that you aren't a locally running model that's been trained to allow saying it.

I've started to prove it (here on LinkedIn, countering its Moltbookification) via my bad handwriting – the final frontier of AGI. Finally, a lifetime of training to write more or less illegibly pays off.

https://www.linkedin.com/posts/fabianhemmert_handwriting-vs-...

It feels good to connect with humans that way.

I'm trying to do the same with my (vibe-coded!) site "jetzt" (German for "now"), where I photo-blog impressions from everyday life. Only insiders will know what they mean beyond their aesthetic, and it also feels like a good way of human connection in these times.

https://jetzt.cx/

(No food, no plane wings, just ugly banalities and beautiful nothingness from everyday life.)

Handwriting vs. algorithm | Prof. Dr. Fabian Hemmert | 12 comments


Here's also a nice project, the "Reverse Turing Test":

https://ars.electronica.art/panic/de/view/reverse-turing-tes...

(I.e. trying to hide the fact that you're human, among a group of AIs)

Reverse Turing Test | Ars Electronica Festival 2025: Panic

This VR experience challenges visitors to disguise themselves as the only human among advanced AI systems – raising profound questions about intelligence, identity, and reality.

Am I too naive in thinking the answer is rather simple? Cryptographic proofs (digital signatures). For text this should be trivial, and for streaming video/audio you could probably hash and sign packets, or at least keyframes.

I think this is naive; it just kicks the can down the road. How do you trust that the signer is human?
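To make the streaming idea above concrete, here is a toy sketch: each chunk is folded into a rolling hash chain, and the chain head is signed at periodic "keyframe" boundaries, so a single tampered chunk invalidates every later signature. HMAC stands in for a real asymmetric device key, and all names and data are illustrative.

```python
# Sketch of signing a chunked stream via a rolling hash chain.
# SIGNING_KEY is a hypothetical stand-in for a camera's private key.
import hashlib
import hmac

SIGNING_KEY = b"device-private-key"

def chain(prev_digest: bytes, chunk: bytes) -> bytes:
    """Fold the next chunk into the hash chain."""
    return hashlib.sha256(prev_digest + chunk).digest()

def sign(digest: bytes) -> str:
    """'Sign' the current chain head."""
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

# Sender: hash chunks into a chain, emit a signature every 3 chunks.
chunks = [b"frame-%d" % i for i in range(6)]
digest, sigs = b"\x00" * 32, []
for i, c in enumerate(chunks):
    digest = chain(digest, c)
    if i % 3 == 2:  # "keyframe" boundary
        sigs.append((i, sign(digest)))

# Receiver: replay the chain; one altered chunk breaks later signatures.
tampered = list(chunks)
tampered[1] = b"deepfake"
digest2 = b"\x00" * 32
for c in tampered:
    digest2 = chain(digest2, c)

print(sign(digest) == sigs[-1][1])   # real stream verifies: True
print(sign(digest2) == sigs[-1][1])  # tampered stream fails: False
```

This answers tampering, but not the reply's objection: nothing here proves the keyholder is a human rather than an AI with access to the key.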