573 Followers
1,024 Following
1.4K Posts
Consultant. Former UNESCO director for free expression and media development. Emeritus professor journalism & media studies, Rhodes University, South Africa. https://commspolicy.africa/
What to do when big tech doesn't give access to data showing which content is labelled as AI-generated? I wrote about this opacity with regard to flagging #deepfakes here: https://www.linkedin.com/posts/guy-berger-b641b2_informationintegrity-digitalpolicy-deepfakeresearch-activity-7438939907928109056--mbQ?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAAHWvwBMzqENNprJJQLiV3Og7gUmSKkZtw

DEEPFAKES: WHEN PARALYSIS HITS ANALYSIS

How are we supposed to understand - and effectively mitigate - deepfakes that pose a danger, without evidence? I investigated this in an Issue Brief for last year’s G20 hosted by South Africa. (Link in the comments)

My assessment: we are partly paralysed by opacity in the companies that create generative AI tech and that circulate the results. (These are increasingly the same corporate culprits, though the operations differ.)

A key research problem lies in the datasets made available by platforms - whether directly, which is rare, or via costly data brokers. These sets don't show which content items are labelled as AI-generated. That deliberate design is common across most big platforms - all of whom committed in 2024 to flagging deepfakes. Yet today, the extent of their follow-through can’t be assessed at scale. Researchers are forced into less optimal ways of assessing detection, reach and engagement, such as anecdotal cases, or scraping “samples” off public-facing content.

On the audience consumption (and resharing) side, the field is more open:
- In 2024, the OECD conducted a study in 21 countries involving 40,765 individuals.
- In Brazil, the Regional Centre for Studies on the Development of the Information Society (Cetic.br|NIC.br) has been working with a representative panel drawn from the national ICT Household survey.

Such huge efforts take time and money - far beyond the means of most actors or urgent cases. The alternative of experimental, ethnographic and interview research, into both producers and receivers of deepfakes, can be cheaper and quicker. But the results are hard to generalise and act on. These simpler techniques also remain largely reactive - and leave researchers in the dark about the mediating influence (or not) of the enabling toolmaking and distribution tech companies.
The further upshot:
- Anyone trying to assess the effectiveness of mitigation strategies is working in the dark.
- This in turn cascades into uninformed consumer education and regulation.

Research isn’t pointless. But, as per the G20 Issue Brief, it needs to go hand in hand with more attention to foresight. Planning for dangerous deepfake scenarios can help to write options for playbooks - and give guidance for a modicum of monitoring. Responses can then be more on the front foot, even though deep research insight remains elusive. Some G20 work in 2025 and 2024 advocated for increased transparency on the part of tech companies. (Links below)

#InformationIntegrity #DigitalPolicy #DeepfakeResearch

LinkedIn
Suggestions to cope with how Agentic AI complicates Data Governance https://www.linkedin.com/feed/update/urn:li:share:7430649992094179329/

ON DATA GOVERNANCE AND AGENTIC AI

This is a topic addressed in the recently published toolkit produced for the 2025 G20, for which I was pleased to be lead writer. (Link in comments)

The toolkit makes plain: “Agentic AI amplifies the need to move beyond static data governance.” The moltbook phenomenon underlines this even further. Agentic systems are designed to autonomously access and process data from a range of often siloed sources. And many AI agents operate on the basis of mammoth troves of data about the individual who uses them.

And so? Here’s the impact on data governance:
- Traditional governance issues like data minimisation and purpose specification are thrown awry.
- The risk of unintended data exposure or misuse escalates significantly.
- Aligning agentic AI with ethics and privacy regulations becomes super complex.

The result: data governance has to navigate real-time changes in the data lifecycle. Hence the need for continuous monitoring, real-time risk assessment, and dynamic policy enforcement in data governance.

Can agentic AI help in the face of these new governance challenges? Maybe - by at-scale automation of metadata tagging, data quality checks and detection of anomalies. Likewise, agents can possibly help track provenance and aid the monitoring of rule compliance. In other words, agents that monitor other agents. But if people are not involved in oversight, review and appeal, there is a risk that agentic data processes operate within a “black box” beyond any effective governance or oversight.

Regulators and enterprises now urgently need to review, on a regular basis, the fitness-for-purpose of extant data governance. This means, as per the toolkit for the G20, frequent and ongoing exercises in foresight, scenario planning, and risk-opportunity assessments. Also called for: continuous monitoring and auditing of compliance with governance regimes, assessing the reasons for shortfalls, and keeping up to speed with agentic AI.

How do you say something different to what an AI service would? One way is to use AI-generated images. Specifically, rock art style African animals to convey a message to the SADC’s forum of Election Management Bodies (EMBs), for their AGM in eSwatini in December. https://commspolicy.africa/?page_id=134
Latest news

Move slow... and move fast. That's the advice I gave to Election Management Bodies in southern Africa at their AGM this week. Concretely, it means a hard-nosed calculation of the costs and benefits of AI adoption - and the importance of AI adaptation.
A report here
https://www.linkedin.com/posts/guy-berger-b641b2_elections-adopt-ai-or-adapt-to-ai-what-share-7402307737134448641-79_v/?rcm=ACoAAAAHWvwBMzqENNprJJQLiV3Og7gUmSKkZtw

ELECTIONS: ADOPT AI OR ADAPT TO AI?

What’s an African election management body (EMB) to do? You already have to defend your professionalism against political pressures. And to run your elections on a shoestring. Now, you face the growing complication of AI in the mix.

Your adversaries are adopting AI for cyber and other attacks. Your political parties are picking up on AI-driven campaigning. Media, election observers and political analysts are doing AI data crunching. The social media behemoths continue driving their content and advertising through AI.

Like some EMBs, you have some toes dipped in the AI water. Officially, you may be using Microsoft’s Copilot for meeting summaries. And many of your staff are using AI informally - like Google Maps or social media - often unaware that AI is involved. Even with knowing use, many in your team likely assume these are tools free for the picking, missing that AI and AI-mediated services are above all tools of data-rapacious foreign corporations, with risks entailed.

I addressed these issues in a keynote to the AGM of the Electoral Commissions Forum of the EMBs from the Southern African Development Community. (Link in comments)

“Ignore hype about ‘harnessing’ an apparent magic wand. Consider, instead, doing hard-nosed cost-benefit calculations,” I said.

One: tote up the cash cost of officially adopting AI services. Don’t ignore the extra spend on risk assessment, beefed-up cybersecurity and human oversight.

Two: assess the benefits:
- Some of the extra costs could be offset by, for example, AI-induced economies in electoral logistics, thus saving some overall budget.
- Some AI use could lead to greater effectiveness - like in detecting voting anomalies. But such gains don’t reduce the EMB’s wage bill.

“To assess each case means doing judicious and granular budgeting,” I said.
Furthermore, I advised, recognise that endemic errors and biases mean that investing in AI services can - unfortunately - give risky returns (like a chatbot’s wrong responses). And insist that AI vendors disclose how systems have been trained and stress-tested, and what the safeguards are when they are fed with the EMB’s own data. In short, take a cautious approach to adoption.

Yet there can also be a cost to EMBs being too slow to act when the wider ecosystem is becoming AI-powered. Willy-nilly, they have to adapt - for example, by doing special risk assessments of AI-intensified threats. Adapting to these could entail, for instance, an EMB’s IT staff adopting AI-enhanced monitoring of DDoS attacks by AI-empowered trouble-makers.

But to adapt does not mean all EMB responses need to be AI-based. It just takes plain old threat analysis, and the devising of playbooks, to be ready to respond rapidly to damaging deepfakes. For instance, anyone can anticipate a public outcry - either fabricated or sincerely believed - that an EMB is deploying AI to skew an election.

The takeaway: “adopt slowly; adapt swiftly”.

Tomorrow Tuesday, a webinar on prospects for information as a public good - in the face of Generative AI. Along with the idea of Re-generating AI. Work within the system, or outside of it? We'll debate this as part of the International Panel on Social Progress https://www.linkedin.com/posts/guy-berger-b641b2_generative-ai-how-does-it-impact-prospects-activity-7388631186199420928-IbjY?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAAHWvwBMzqENNprJJQLiV3Og7gUmSKkZtw

Generative AI - how does it impact prospects for information as a public good?

Generative AI companies aren’t exactly making things look good for the information ecosystem. Courtesy of them, we’re getting loads of multi-modal content as a public “bad” - including servicing lonely people (mainly men) with the compliant illusion that sex and trust can be had on tap. Then there are the intrinsic factual and logical errors in their outputs. But that's where, today, we find the money, the momentum and the manipulation of public opinion.

So how then to strive for information with integrity - which transcends psychological exploitation, and which is accessible to everyone, including those who can’t afford to pay a subscription for reliable news and informed comment in their own language, and those whose info diet is heavily mediated by AI and social media platforms?

Together with the French intellectual Christophe Gauthier from the Internet for Trust network, I've written a thinkpiece about the prospects of pushing back the tide of tackiness. (Link in the comment section) My pitch is what can be done within the limits of the dominant system; Christophe's is to strive for alternatives. Do we have to choose one or the other, or can it be both?

We brainstormed this strategic question as part of the subgroup on Information as a Public Good, under the International Panel on Social Progress, and our resulting paper is one of the subjects in this week’s five days of online IPSP debates. Discussing the issues we've raised is a panel of super speakers: Robin Mansell, Tabani Moyo, PhD, Stephen Coupland and Markus Krebsz.

It's a bit clunky to follow the debate, but worth it. You have to register yourself with a name and password at the IPSP platform (link below in the comments), and then you can follow the programme.

"AI systems are reshaping Africa’s relationship with the global economy, determining whether the continent remains locked into extractive relationships or develops genuine technological sovereignty." And journalists are missing this story. https://www.dailymaverick.co.za/opinionista/2025-08-06-african-media-must-follow-the-money-and-treat-ai-as-a-story-about-power-not-tech/
A new policy brief gives a little-heard perspective on AI. Informed by African interests in the 2025 processes of the G20, the authors signal potent connections and problems within the international AI tech stack https://www.linkedin.com/posts/guy-berger-b641b2_a-new-policy-brief-gives-a-little-heard-perspective-activity-7350520164964007936-0_XA?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAAHWvwBMzqENNprJJQLiV3Og7gUmSKkZtw

Africa's Continental AI Strategy now has an implementation plan. The African Alliance for Access to Data (I'm convenor), urges the plan to reference: the relevance of Right to Information systems, Open Science, + work towards Guidelines on access to data. https://dataalliance.africa/alliance-highlights-data-access-as-key-for-implementing-the-aus-continental-ai-strategy/
Alliance highlights data access as key for implementing the AU’s Continental AI Strategy - African Alliance for Access to Data

Data access is a critical component for implementing the African Union's Continental AI Strategy. To realise this point, specific text can be added to the implementation plan for the Strategy.

Can media get any mileage with the concept of "information integrity" and the 2025 agenda of the G20 countries? South African editors see possibilities https://media20.org/2025/05/16/m20-policy-brief-information-integrity/
M20 policy brief: Information Integrity - Media20
