When Does AI Fakery Become AI Reality?
We are living in the precise historical moment when the question “Is this real?” has become unanswerable in real time, and the fact that nobody seems particularly alarmed by this should alarm us all. The case study arrived this month with the force of a wartime broadcast, which is exactly what it was: Israeli Prime Minister Benjamin Netanyahu, whose whereabouts and physical condition have been the subject of intense speculation since the U.S. and Israel launched strikes on Iran on February 28, appeared in a video address on March 12. Social media users immediately claimed he had six fingers on his right hand. The rumor spread to millions of viewers within hours. Fact-checkers at Snopes, PolitiFact, and Newsweek scrambled to establish that the supposed extra digit was, in fact, the hypothenar eminence, the fleshy pad at the base of the little finger, rendered ambiguous by video compression. Netanyahu’s office declared, flatly, that the Prime Minister was “fine.”
But here is the part that should keep you staring at the ceiling at three in the morning: even after the debunking, nobody believed it. When Netanyahu posted a follow-up video on March 15 showing himself at a cafe outside Jerusalem, ordering coffee, joking that he was “dying for coffee,” and holding up both hands to count his fingers for the camera, the conspiracy only deepened. Social media users analyzed the coffee cup for evidence of impossible fluid dynamics. They scrutinized his wedding ring. They compared his facial geometry frame by frame and declared that his face shape shifted from round to oval when he looked down. Elon Musk’s chatbot Grok, feeding on the volume of user suspicion, attached a community note to the cafe video labeling it as likely “deepfake” or “AI generated,” a determination it later reversed on a subsequent Nowruz greeting video. The coffee shop itself, The Sataf in the Jerusalem Hills, posted its own photographs confirming the visit. Reuters verified the location from archival interior imagery. None of it mattered. The suspicion is now self-sustaining.
This is new. We have moved past the familiar story of political misinformation, where a false claim circulates until it is corrected. What replaced it is an epistemological shift in which correction itself has become suspect. Video, photographic evidence, official statements, corroborating witnesses: the entire apparatus of verification is now presumed to be compromised. We have entered a period where the existence of deepfake technology has made all video evidence permanently conditional, regardless of whether any given video is actually fake. The capacity to fake has poisoned the capacity to trust what is not faked.
Consider the Iranian side of the same conflict. The Israeli disinformation detection firm Cyabra identified networks of tens of thousands of accounts, almost entirely pro-Iranian and concentrated on TikTok, whose material drew 145 million views in the first two weeks of the war. These networks circulated fabricated imagery showing missile strikes flattening Tel Aviv and American troops captured by Iranian forces, none of which occurred. The fog of war has always included propaganda, but the fog has never before been this photorealistic. The fabrications are no longer crude Photoshop composites that a trained eye can spot in seconds. They are rendered with sufficient fidelity that a reasonable person, scrolling quickly, would have no immediate basis for doubt.
The Netanyahu case is a wartime extreme, but the underlying dynamic is already operating in peacetime contexts that are far more intimate and far less examined. Live television broadcasts now routinely employ real-time processing that smooths skin, adjusts lighting, and enhances color in ways that alter the appearance of the people on screen. This is not the traditional makeup and lighting of broadcast television, which have existed since the medium began. What we are watching now are computational interventions applied to the video signal itself, and they operate on a continuum with no clear boundary between “enhancement” and “replacement.” When a news anchor’s wrinkles are softened in real time by a processing algorithm, at what point does the person on screen cease to be a representation of the actual person and become a representation of a computational ideal? The answer, of course, is that no one has defined the point, because defining the point would require acknowledging that the line has already been crossed.
The timeline for how this unfolds is not speculative. It is already underway, and its stages are visible if you are willing to look at them directly.
The first stage, which we passed through roughly between 2020 and 2024, was the period of detectable fakery. Deepfakes existed, but they carried tells: mismatched lighting, uncanny eye movement, hands with too many or too few fingers, audio that did not quite sync with lip movement. During this stage, the existence of deepfakes was alarming but containable. A sufficiently careful viewer could, with effort, distinguish real from generated. The six-finger test was, during this period, a reliable heuristic. It no longer is.
The second stage, which we entered in 2025 and now inhabit fully, is the period of plausible deniability in both directions. The technology has improved to the point where generated content is frequently indistinguishable from real content at normal viewing resolution and speed. But the critical feature of this stage has little to do with the quality of the fakes. The quality is a prerequisite. The real achievement is the weaponization of doubt itself. Because deepfakes are now plausible, all real video is also potentially fake, and all fake video is potentially real. This is the stage at which Netanyahu can hold up his hands and count to five and still not be believed. The doubt is no longer attached to any specific piece of evidence. It has become ambient. It is the atmosphere.
The third stage, which is arriving faster than anyone in a position of authority seems prepared to address, is the period of accepted substitution. This is the stage at which the question “Is this real?” is replaced by the question “Does it matter?” We are already seeing the leading edge of this transition. Virtual influencers with millions of followers sell products and cultivate parasocial relationships with audiences who know, on some level, that the person does not exist. Customer service interactions are conducted by chatbots that simulate empathy with increasing sophistication. Telehealth appointments are mediated through screens that already abstract the physical presence of the physician. Each of these represents a small concession, a minor substitution of the computational for the human, and each one makes the next substitution slightly easier to accept.
The fourth stage, which I believe will arrive within a decade if present trends continue unchallenged, is the period of preferential replacement. This is the stage at which people begin to prefer the generated version to the real one. The logic is seductive: the generated version is more consistent, more available, more accommodating, and more aesthetically optimized than any real person can be. A generated therapist never has a bad day. A generated teacher never loses patience. A generated romantic partner never gains weight, never ages, never says the wrong thing, never leaves. The appeal has nothing to do with indistinguishability. The generated version does not need to pass for real. It needs only to be better than real, or at least better than the parts of reality that cause friction, disappointment, and pain.
This is where the loss becomes catastrophic, and it is a loss that will not announce itself. It will arrive as convenience, then as optimization, then as the quiet replacement of difficult, unpredictable, mortal human beings with frictionless digital surrogates. The people who accept the replacement will not experience it as a loss at all. They will experience it as an improvement.
Information survives this exchange. Efficiency survives it. Even aesthetic pleasure survives it. The thing that does not survive is the specific quality of human presence that cannot be computed: the knowledge that another consciousness is regarding you, that another mortal being with its own fears and desires and confusions is making the choice to attend to you. Martin Buber called this the I-Thou relationship, the encounter between subjects that cannot be reduced to the encounter between a subject and an object. A generated face, no matter how perfect, is always an object. It is always an It. And the slow, imperceptible replacement of Thou with It across every domain of human interaction represents a spiritual impoverishment that no amount of technological sophistication can compensate for.
The Netanyahu case makes this concrete because it operates in the domain of political authority, where the stakes of presence are explicit. A head of state who cannot prove he is alive by appearing on camera has lost something fundamental about the nature of political legitimacy, which has always depended, at some level, on the leader’s physical embodiment. The king’s body was, for centuries, the metaphorical body of the state. When the body becomes optional, or interchangeable, or generatable, the concept of political authority itself becomes unmoored. If a sufficiently advanced system can generate a Netanyahu who gives speeches, responds to events, and projects confidence, and if that generated Netanyahu is indistinguishable from the real one, then the real Netanyahu becomes, in a functional sense, unnecessary. The office consumes the officer.
And here is the trap that no one operating inside this logic has thought through to its end: the fakery, once deployed, must eventually collide with the truth, and the collision destroys the credibility of every previous communication. If Netanyahu is alive and well, Israel will at some point have to produce him in a setting that satisfies even the most hostile skeptic, and at that moment, every algorithmically smoothed video, every carefully staged cafe visit, every broadcast conducted via video link rather than in the physical presence of journalists becomes retroactive evidence of a government that chose simulation over transparency during wartime. The question will no longer be “Was he dead?” but “Why did you make it so easy to believe he might be?” If, on the other hand, Netanyahu was in fact killed or incapacitated, then every video released after that event is a state-produced fabrication, and the Israeli government will have to explain why it fed generated imagery of a living leader to its own citizens and to the world while conducting a war in his name. Either outcome is devastating, because both outcomes reveal the same underlying choice: the decision to substitute a generated image for an accountable human presence. The lie, once constructed, offers no clean exit. You cannot quietly stop using a generated version of a leader and resume using the real one without admitting that the generated version existed. You cannot announce the leader’s death after weeks of generated appearances without admitting that the state lied about his survival. The technology that made the deception possible also makes the unwinding of the deception impossible, because every frame of every video released during the period of ambiguity is now permanently suspect. This is the structural problem with institutional fakery that no amount of technical sophistication can solve: reality always collects what it is owed, and the interest compounds.
But this dynamic does not stop at heads of state. It extends to every relationship mediated by a screen, which is to say, in 2026, nearly every relationship. If your doctor appears on a telehealth screen and you cannot be certain whether you are speaking to a human physician or a generated avatar trained on that physician’s mannerisms and medical knowledge, then the bond between patient and healer has been hollowed out. If your child’s teacher conducts class through a video feed that may or may not be running real-time enhancement or substitution, then the trust between student and mentor has been corrupted at its source. If your elderly parent’s weekly video call with you is, unbeknownst to you, mediated by a system that smooths their tremor and brightens their complexion to spare you worry, then the intimacy between parent and child has been overwritten by an act of tenderness, and the tenderness is what makes it worse.
Will we care? That is the question that matters most, and I suspect the honest answer is that most people will not, at least not in the way that caring requires. Caring, in this context, means insisting on the real even when the real is less pleasant, less convenient, and less optimized than the alternative. It means choosing the trembling hand over the steady avatar. It means tolerating the wrinkles, the bad lighting, the awkward pauses, the six-fingered ambiguity of actual human presence captured by imperfect technology. It means understanding that the imperfections are evidence of life, markers of the irreducible difference between a person and a performance.
The people who will care are the people who already understand that beauty, meaning, and truth are inseparable from vulnerability, impermanence, and risk. They are the people who prefer a live theater performance with a missed cue to a flawless film, who prefer a handwritten letter with a misspelling to a generated email with perfect grammar, who prefer the face of a friend aging in real time to a photograph retouched into permanent youth. These people will be, increasingly, a minority. They will be regarded as eccentrics, romantics, Luddites, people who refuse to accept improvement.
They will also be the last people who know what it means to be in the presence of another human being. And when they are gone, the knowledge will go with them, because it is not the kind of knowledge that can be stored or transmitted or generated. It is the kind of knowledge that can only be lived.
#ai #cyabra #fakery #israel #netanyahu #networks #politics #reality #snopes #video #virtual #war

