
Reda Sadki

How do we stop AI-generated ‘poverty porn’ fake images?

There is an important and necessary conversation happening right now about the use of generative artificial intelligence in global health and humanitarian communications.

Researchers like Arsenii Alenichev are correctly identifying a new wave of “poverty porn 2.0,” where artificial intelligence is used to generate stereotypical, racialized images of suffering – the very tropes many of us have worked for decades to banish.

The alarms are valid.

The images are harmful.

But I am deeply concerned that in our rush to condemn the new technology, we are misdiagnosing the cause.

The problem is not the tool.

The problem is the user.

Generative artificial intelligence is not the cause of poverty porn.

The root cause is the deep-seated racism and colonial mindset that have defined the humanitarian aid and global health sectors since their inception.

This is not a new phenomenon.

It is a long-standing pattern.

In my private conversations with colleagues and researchers like Alenichev, I find we often agree on this point.

Yet, the public-facing writing and research seem to stop short, focusing on the technological symptom rather than the systemic illness.

It is vital we correct this focus before we implement the wrong solutions.

The old poison in a new bottle

Long before Midjourney, large organizations and their communications teams were propagating the worst kinds of caricatures.

I know this.

Many of us know this.

We remember the history of award-winning photographers being sent from the Global North to “find… miserable kids” and stage images to meet the needs of funders. Organizations have always been willing to manufacture narratives that “show… people on the receiving end of aid as victims”.

These working cultures, which demand images of suffering, which view Black and Brown bodies as instruments for fundraising, and which prioritize the “western gaze”, existed decades before artificial intelligence.

Artificial intelligence did not create this impulse.

It just made it cheaper, faster, and easier to execute.

It is an enabler, not an originator.

If an organization’s communications philosophy is rooted in colonial stereotypes, it will produce colonial stereotypes, whether it is using a $1,000-a-day photographer or a $30-a-month software subscription.

The danger of a misdiagnosis

If we incorrectly identify artificial intelligence as the cause of this problem, our “solution” will be to ban the technology.

This would be a catastrophic mistake.

First, it is a superficial fix.

It allows the very organizations producing this content to performatively cleanse themselves by banning a tool, all while evading the fundamental, painful work of challenging their own underlying racism and colonial impulses.

The problem will not be solved. It will simply revert to being expressed through traditional (and often staged) photography.

Second, it punishes the wrong people.

For local actors and other small organizations, generative artificial intelligence is not necessarily a tool for creating poverty porn.

It is a tactical advantage in a fight for survival.

Such organizations may lack the resources for a full communication team.

They are then “punished by algorithms” that demand a constant stream of visuals, burying stories of organizations that cannot provide them.

Furthermore, some organizations committed to dignity in representation are also using artificial intelligence to solve other deep ethical problems.

They use it to create dignified portraits for stories without having to navigate the complex and often extractive issues of child protection and consent.

They use it to avoid exploiting real people.

A blanket ban on artificial intelligence in our sector would disarm small, local organizations.

It would silence those of us trying to use the tool ethically, while allowing the large, wealthy organizations to continue their old, harmful practices unchanged.

The real work ahead

This is why I must insist we reframe the debate.

The question is not if we should use artificial intelligence.

The question is, and has always been, how we challenge the racist systems that demand these images in the first place.

My Algerian ancestors fought colonialism.

I cannot separate my work at The Geneva Learning Foundation from the struggle against racism and the fight for the right to tell our own stories.

That philosophy guides how I use any tool, whether it is a word processor or an image generator.

The tool is not the ethic.

We need to demand accountability from organizations like the World Health Organization, Plan International, and even the United Nations.

We must challenge the working cultures that green-light these campaigns.

We should also, as Arsenii rightly points out, support local photographers and artists.

But we must not let organizations off the hook by allowing them to blame a piece of software for their own lack of imagination and their deep, unaddressed colonial legacies.

Artificial intelligence is not the problem.

Our sector’s colonial mindset is.

References

Image: The Geneva Learning Foundation Collection © 2025

#ArseniiAlenichev #ArtificialIntelligence #decolonization #generativeAI #globalHealth #photography #povertyPorn #representation

PetaPixel: Aid agencies are using AI images instead of real photos. “And stock photo websites are full of fake images.”

The Register: Aid groups use AI-generated ‘poverty porn’ to juice fundraising efforts. “The starving child whose picture broke your heart when you saw it on a charity website may not be real. Global health researchers say that stock image companies like Adobe are profiting from AI-generated ‘poverty porn’ that non-profits are using to drum up donations.”

https://rbfirehose.com/2025/10/21/the-register-aid-groups-use-ai-generated-poverty-porn-to-juice-fundraising-efforts/

The Guardian: AI-generated ‘poverty porn’ fake images being used by aid agencies. “Exclusive: Pictures depicting the most vulnerable and poorest people are being used in social media campaigns in the sector, driven by concerns over consent and cost.”

https://www.theguardian.com/global-development/2025/oct/20/ai-generated-poverty-porn-fake-images-being-used-by-aid-agencies
