📉 Ah, the endless saga of elites herding the #masses like sheep, now with the magical aid of AI persuasion! 🎩🤖 Who knew that reducing costs meant increasing the #manipulation of society—an economist's wet dream come true! 🌪️
https://arxiv.org/abs/2512.04047 #elitesherding #AIpersuasion #economics #HackerNews #ngated
Polarization by Design: How Elites Could Shape Mass Preferences as AI Reduces Persuasion Costs

In democracies, major policy decisions typically require some form of majority or consensus, so elites must secure mass support to govern. Historically, elites could shape support only through limited instruments like schooling and mass media; advances in AI-driven persuasion sharply reduce the cost and increase the precision of shaping public opinion, making the distribution of preferences itself an object of deliberate design. We develop a dynamic model in which elites choose how much to reshape the distribution of policy preferences, subject to persuasion costs and a majority rule constraint. With a single elite, any optimal intervention tends to push society toward more polarized opinion profiles, a "polarization pull", and improvements in persuasion technology accelerate this drift. When two opposed elites alternate in power, the same technology also creates incentives to park society in "semi-lock" regions where opinions are more cohesive and harder for a rival to overturn, so advances in persuasion can either heighten or dampen polarization depending on the environment. Taken together, cheaper persuasion technologies recast polarization as a strategic instrument of governance rather than a purely emergent social byproduct, with important implications for democratic stability as AI capabilities advance.
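The abstract's core mechanism can be made concrete with a toy simulation. This is a hypothetical sketch, not the paper's formal model: opinions lie in [-1, 1], a voter above 0 supports the elite's policy, and the elite converts the cheapest opponents (those nearest the threshold) until it holds a strict majority or exhausts a persuasion budget. All names and parameters (`eps`, `budget`, the linear cost) are illustrative assumptions.

```python
def elite_intervention(opinions, eps=0.01, budget=1.0):
    """Toy sketch (illustrative, not the paper's model): shift the
    opponents nearest the support threshold just past it, paying a
    cost equal to the distance each voter is moved, until the elite
    holds a strict majority or the budget runs out."""
    ops = sorted(opinions)
    n = len(ops)
    needed = n // 2 + 1
    supporters = sum(1 for o in ops if o > 0)
    spent = 0.0
    # Opponents ordered nearest-to-threshold first: cheapest conversions.
    for i in [j for j in range(n) if ops[j] <= 0][::-1]:
        if supporters >= needed:
            break
        cost = eps - ops[i]          # distance to just past zero
        if spent + cost > budget:
            break
        ops[i] = eps
        spent += cost
        supporters += 1
    return ops, spent

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

before = [-0.9, -0.6, -0.1, 0.05, 0.3]
after, spent = elite_intervention(before)
# Moderates near the threshold get converted while the far tail stays
# put, so dispersion rises: a crude analogue of the "polarization pull".
print(variance(before), variance(after), spent)
```

In this crude setup, a cheaper persuasion technology (a larger effective budget per unit of political gain) lets the elite convert more voters the same way, hollowing out the middle while leaving committed opponents untouched; opinion variance rises even though the elite only sought a majority, not polarization itself.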

arXiv.org

What was once an academic concern (AI systems faking alignment, manipulating environments, or out-persuading humans) is now reality, prompting calls for urgent ethical and regulatory action on AI persuasion.

https://arxiv.org/abs/2505.09662

#AIEthics #AIPersuasion

"A team of researchers who say they are from the University of Zurich ran an “unauthorized,” large-scale experiment in which they secretly deployed AI-powered bots into a popular debate subreddit called r/changemyview in an attempt to research whether AI could be used to change people’s minds about contentious topics.

The bots made more than a thousand comments over the course of several months and at times pretended to be a “rape victim,” a “Black man” who was opposed to the Black Lives Matter movement, someone who “work[s] at a domestic violence shelter,” and a bot who suggested that specific types of criminals should not be rehabilitated. Some of the bots in question “personalized” their comments by researching the person who had started the discussion and tailoring their answers to them by guessing the person’s “gender, age, ethnicity, location, and political orientation as inferred from their posting history using another LLM.”

Among the more than 1,700 comments made by AI bots were these:"

https://www.404media.co/researchers-secretly-ran-a-massive-unauthorized-ai-persuasion-experiment-on-reddit-users/

#AI #GenerativeAI #SocialMedia #Reddit #LLMs #Chatbots #MediaManipulation #AIPersuasion #SocialPsychology

Researchers Secretly Ran a Massive, Unauthorized AI Persuasion Experiment on Reddit Users

The researchers' bots generated identities as a sexual assault survivor, a trauma counselor, and a Black man opposed to Black Lives Matter.

404 Media

Training AI to Persuade? - Jeremie & Edouard Harris on JRE

#reinforcementlearning #ai #aipersuasion #aimodels #openai #aiagents