French judiciary flags a possible artificial valuation of #X to US authorities - RTBF Actus
The investigations focus in particular on suspicions of biased algorithms, complicity in the possession of images of minors of a child sexual abuse nature, complicity in the distribution, offer, or making available, as part of an organized group, of images of minors of a #pédopornographique (child sexual abuse) nature, sexual #deepfake content, and #négationnisme (Holocaust denial).
How did #Spiegel research the #CollienFernandes case? What evidence against #ChristianUlmen was Spiegel able to examine and verify? ⬇️
#DigitaleGewalt #FakePorn #DeepFake #AI #Missbrauch #sexualabuse #Collien #Ulmen #jerks
Browsing X for geopolitical news is like wading through a quagmire. This is a difficult situation. X, and social media in general, now operates as a feedback loop driven by paranoia. Users often begin with a firm conclusion, such as the belief that a leader has died, driven by geopolitical hopes, sentiments, or a desire for disorder. They then search backwards for anything that supports this view.
AI that creates realistic images adds a layer of plausible deniability. This transforms healthy skepticism into a form of nihilism where nothing is accepted as real.
The BOOM Live article effectively explains a key feature of modern information warfare [1]. When generative AI becomes sufficiently advanced, it does more than produce convincing fakes. It also weakens trust in authentic evidence.
The piece describes a repeating cycle. Rumors spread that a leader has died, often linked to conflict or attacks. Videos then appear to show the person is alive. Some viewers, and even AI tools, label these videos as fake. They cite reasons like video compression, lighting, or small visual errors. This creates more doubt, which leads to more video evidence, which then invites further scrutiny. The cycle continues without resolution.
AI models can contribute to this problem. When they hallucinate or make errors in complex situations, they can spread misinformation rather than correct it. This is especially true during fast-moving events where video quality is poor. In the case mentioned [1], this likely prolonged the confusion.
#AI #DeepFake #DeepFakes #GeoPolitics #SocialMedia #Disinformation
📰 Netanyahu dead and replaced by AI? What the "AIpocondria" (AI hypochondria) trap is and how to protect yourself
#️⃣ #FACTCHECKING #BenjaminNetanyahu #Complotti #Deepfake #Disinformazione #Gaza #Intelligenzaartificiale #Iran #Israele #Palestina #Propaganda #Russia #Ucraina #VolodymyrZelensky #OpenOnline #TheLabSocial #News #Notizie #Italia
🔗 https://www.open.online/2026/03/21/aipocondria-video-netanyahu-morto-falso-ai/
I usually prefer the following fact-checking organizations, which adhere to the International Fact-Checking Network (IFCN) Code of Principles (administered by the Poynter Institute [1]). The code requires nonpartisanship, source transparency, clear corrections, and methodology disclosure.
* AP Fact Check
* Reuters Fact Check
* FactCheck.org
* Newschecker (India)
* FactChecker.in (India)
* BOOM (India)
* Alt News (India)