
#informationintegrity #digitalpolicy #deepfakeresearch | Guy Berger
DEEPFAKES: WHEN PARALYSIS HITS ANALYSIS

How are we supposed to understand - and effectively mitigate - dangerous deepfakes without evidence? I investigated this in an Issue Brief for last year's G20, hosted by South Africa. (Link in the comments.)

My assessment: we are partly paralysed by opacity in the companies that create generative AI tech and those that circulate its outputs. (These are increasingly the same corporate culprits, though the operations differ.)

A key research problem lies in the datasets platforms make available - whether directly, which is rare, or via costly data brokers. These datasets do not show which content items are labelled as AI-generated. That deliberate design choice is common across most big platforms - all of whom committed in 2024 to flagging deepfakes, yet today the extent of their follow-through cannot be assessed at scale.

Researchers are therefore forced into less optimal ways of assessing detection, reach and engagement: anecdotal cases, or scraping "samples" of public-facing content.

On the audience consumption (and resharing) side, the field is more open:
· In 2024, the OECD conducted a study of 40,765 individuals across 21 countries.
· In Brazil, the Regional Centre for Studies on the Development of the Information Society (Cetic.br|NIC.br) has been working with a representative panel drawn from the national ICT Household survey.

Such huge efforts take time and money - far beyond the means of most actors or the timelines of urgent cases.

The alternative - experimental, ethnographic and interview research into both producers and receivers of deepfakes - can be cheaper and quicker. But the results are hard to generalise and to translate into action. These simpler techniques also remain largely reactive, and they leave us in the dark about the mediating influence (or not) of the tech companies whose tools enable creation and whose platforms handle distribution.
The further upshot:
· Anyone trying to assess the effectiveness of mitigation strategies is working in the dark.
· This in turn cascades into uninformed consumer education and regulation.

Research isn't pointless. But, as per the G20 Issue Brief, it needs to go hand in hand with more attention to foresight. Planning for dangerous deepfake scenarios can help to draft options for playbooks - and give guidance for a modicum of monitoring. Responses can then be more on the front foot, even though deep research insight remains elusive.

Some G20 work in 2025 and 2024 advocated for increased transparency on the part of tech companies. (Links below.)
