Reading tip -> Cabinet steps up fight against sexual violence and sex-crime cases | Minister Van Weel is pushing for faster handling of sex-crime cases and a tougher approach to online child sexual abuse. | #ATKM #deepfakes #JustitieenVeiligheid #preventie #Seksueelgeweld #zedenzaken |

https://hbpmedia.nl/kabinet-voert-strijd-tegen-seksueel-geweld-en-zedenzaken-op/

👉🏿 To follow the debate on #DeepFakes: "Creating synthetic videos that present women as products in shop windows, or exploiting tools such as Grok, X's AI, to generate fake nude images, are examples of the growing use of artificial intelligence (AI) to produce disinformation content that degrades and objectifies women on the internet." https://verifica.efe.com/ia-potencia-arsenal-desinformacion-degradante-contra-mujeres/ 🚨 Have you seen it? Any opinions on the #RedesLibres?
AI is boosting the production of degrading disinformation against women

Creating synthetic videos of women in shop windows, or exploiting tools such as Grok to generate fake nudes, are examples of the growing use of AI to produce disinformation content that objectifies women.

EFE Verifica

Deepfake Influencers Push Supplements Online

An Amish woman who rails against processed food and praises a $50 “detox” powder has amassed hundreds of thousands of followers online. She also doesn’t exist. The New York Times reports that “Melanskia” is one of several AI-generated personali…
#dining #cooking #diet #food #Nutrition #ArtificialIntelligence #deepfakes #instagram #nutrition #supplements #wellness
https://www.diningandcooking.com/2556732/deepfake-influencers-push-supplements-online/

RE: https://bildung.social/@mkz/116211254926023187

#KünstlicheIntelligenz is one of the great hype topics of our time. While some expect #KI to bring a kind of salvation from the shackles of being human, others worry about the end of the world it will unleash upon us.

As so often, neither extreme is likely to be right: #AI entails both opportunities and risks!

Our #Kinder, but also we ourselves, should therefore learn to recognize these and handle them wisely, so that the #Technologie can create real value for us and for society.

Applications in #Medizin or #IT, or fault analysis in the #Industrie sector, can be a genuine help, whereas #Deepfakes from #Grok or hallucinated court rulings from #ChatGPT are something truly nobody needs.

It is no different in #Schule and #Bildung. AI can be a great tool for research, for example, but the creative part of the work must come from the students themselves.

The @mkz offers help with finding your bearings!

https://www.medienkulturzentrum.de/seminar/kuenstliche-intelligenz-in-der-bildung-chancen-risiken-orientierung/

Maine Morning Star: Maine considers regulation of AI-generated political ads. “Maine lawmakers advanced a proposal on Tuesday that would require political campaigns and political action committees to provide a disclosure label for any content significantly altered by artificial intelligence.”

https://rbfirehose.com/2026/03/15/maine-morning-star-maine-considers-regulation-of-ai-generated-political-ads/
Maine Morning Star: Maine considers regulation of AI-generated political ads

ResearchBuzz: Firehose
What can you do when big tech doesn't give access to data showing which content is labelled as AI-generated? I wrote about this opacity with regard to flagging #deepfakes here: https://www.linkedin.com/posts/guy-berger-b641b2_informationintegrity-digitalpolicy-deepfakeresearch-activity-7438939907928109056--mbQ?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAAHWvwBMzqENNprJJQLiV3Og7gUmSKkZtw
#informationintegrity #digitalpolicy #deepfakeresearch | Guy Berger

DEEPFAKES: WHEN PARALYSIS HITS ANALYSIS

How are we supposed to understand, and effectively mitigate, deepfakes that pose a danger, without evidence? I investigated this in an Issue Brief for last year's G20 hosted by South Africa. (Link in the comments)

My assessment: we are partly paralysed by opacity in the companies that create generative AI tech and that circulate the results. (These are increasingly the same corporate culprits, though the operations differ.)

A key research problem lies in the datasets made available by platforms (directly, which is rare, but even when access comes via costly data brokers): these sets don't show which content items are labelled as AI-generated. That deliberate design is common across most big platforms, all of whom committed in 2024 to flagging deepfakes. Yet today, the extent of their follow-through can't be assessed at scale. Researchers are forced into less optimal ways of assessing detection, reach and engagement, such as anecdotal cases, or scraping "samples" off public-facing content.

On the audience consumption (and resharing) side, the field is more open:
· In 2024, the OECD conducted a study in 21 countries involving 40,765 individuals.
· In Brazil, the Regional Centre for Studies on the Development of the Information Society (Cetic.br|NIC.br) has been working with a representative panel drawn from the national ICT Household survey.

Such huge efforts take time and money, far beyond the means of most actors or urgent cases. The alternative of experimental, ethnographic and interview research, into both producers and receivers of deepfakes, can be cheaper and quicker. But the results are hard to generalise and use for action. These simpler techniques also remain largely reactive, as well as kept in the dark about the mediating influence (or not) of the enabling toolmaking and distribution tech companies.

The further upshot is:
· Anyone trying to assess the effectiveness of mitigation strategies is working in the dark.
· This in turn cascades into uninformed consumer education and regulation.

Research isn't pointless. But, as per the G20 Issue Brief, it needs to go hand in hand with more attention to foresight. Planning for dangerous deepfake scenarios can help to write options for playbooks, and give guidance for a modicum of monitoring. Responses can then be more on the front foot, even though deep research insight remains elusive. Some G20 work in 2025 & 2024 advocated for increased transparency on the part of tech companies. (links below)

#InformationIntegrity #DigitalPolicy #DeepfakeResearch

LinkedIn
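The "scraping samples" workaround described in the post above can be sketched roughly: given a hand-collected sample of posts, a researcher can at least estimate what fraction of apparently synthetic items actually carry a platform's AI label. Everything in this sketch is hypothetical (the `Post` structure and its field names are invented for illustration); real platform data rarely exposes labelling status, which is exactly the opacity the post criticises.

```python
# Hypothetical sketch: estimating AI-label coverage in a scraped sample of posts.
# The Post fields are invented; no platform export is assumed to look like this.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    id: str
    is_ai_labelled: bool   # did the platform flag it as AI-generated?
    looks_synthetic: bool  # researcher's own judgment or detector output

def label_coverage(posts: list[Post]) -> Optional[float]:
    """Fraction of apparently synthetic posts that the platform actually labelled."""
    synthetic = [p for p in posts if p.looks_synthetic]
    if not synthetic:
        return None  # nothing to measure coverage against
    flagged = sum(1 for p in synthetic if p.is_ai_labelled)
    return flagged / len(synthetic)

sample = [
    Post("a", True, True),
    Post("b", False, True),
    Post("c", False, True),
    Post("d", True, True),
]
print(label_coverage(sample))  # 0.5
```

Even this toy estimate inherits the weaknesses the post names: the sample is whatever could be scraped from public-facing content, and the `looks_synthetic` judgment substitutes for ground truth the platforms withhold.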

New York Times: Cascade of A.I. Fakes About War With Iran Causes Chaos Online This link goes to a gift article. “The content has become a potent informational weapon for Tehran as it seeks to shake the public’s tolerance for war by depicting scenes of devastation and destruction across the region. The majority of A.I. videos about the war push pro-Iranian views, often to falsely demonstrate […]

https://rbfirehose.com/2026/03/15/new-york-times-cascade-of-a-i-fakes-about-war-with-iran-causes-chaos-online/
A glut of fake, AI-created videos circulate on Elon Musk's X despite a policy crackdown to curb wartime disinformation. https://www.japantimes.co.jp/business/2026/03/15/tech/ai-fakes-iran-us-war-x/?utm_medium=Social&utm_source=mastodon #business #tech #deepfakes #misinformation #socialmedia #x #iran #ai #grok #elonmusk
AI fakes about Iran-U.S. war swirl on X despite policy crackdown

The Middle East war has unleashed an avalanche of AI-generated visuals, leaving many social media users unable to distinguish fabrication from reality.

The Japan Times
Opinion | Why I’m Suing Grammarly

A tech company made a deepfake of my mind. I’m fighting back.

The New York Times

Good! Now extend this ban to ALL commercial generative-AI services, as we have been demanding since we started this fight, because ALL generative-AI models allow the creation of this kind of content, and all of them were trained on data WITHOUT THE CONSENT of its owners.

#AI #UE #EuropeanCouncil #genAI #generativeAI #grok #Deepfakes #IA #IAgenerativa #porn #UnionEuropea #AIAct #Spain