"Two independent analyses of social media content in the lead-up to the German federal election in 2025 have shown that extremist parties, in particular the right-wing Alternative for Germany (AfD), were disproportionately favored by X, TikTok, YouTube, and Instagram. A report prepared by the German nonprofit organization Bertelsmann Stiftung found that on TikTok, for example, 50% of all suggested political content was found to be AfD-related, with the mainstream conservatives a distant second at 15%. The outsized prominence of extremist content cannot be explained by the parties’ actions alone, because they all used very similar social media strategies. Another study, which has been shared by the authors via the pre-publication platform Arxiv, showed that the X algorithm disproportionately amplified content by extreme parties, especially on the extreme right. This selective amplification is particularly concerning in light of earlier research conducted by me and my team, which showed that German politicians from the extreme right and left share far more untrustworthy content on Twitter than politicians of the four mainstream parties.

A recent field experiment investigated the consequences of algorithmic amplification by re-ranking content that the X/Twitter algorithm favored and that expressed antidemocratic attitudes and partisan animosity. When such content was downranked, participants’ outgroup animosity declined compared to a control group exposed to the standard X/Twitter algorithm, both during the study and afterwards. Reduced exposure to antidemocratic content also reduced people’s negative emotions during the study. This is not an isolated finding but adds to existing evidence that social media causally contributes to hate crimes and xenophobia.

The EU’s Digital Services Act (DSA) was designed to address such challenges."

https://www.science.org/doi/full/10.1126/science.aee9835

#SocialMedia #SocialNetworks #ContentModeration #Algorithms #AlgorithmicRecommendation #EU #DSA
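
An aside on mechanics: the intervention in the field experiment above is, at its core, a re-ranking step applied before a feed is served. Below is a minimal sketch of that idea, assuming a hypothetical per-post classifier probability and a tunable penalty; this is an illustration, not the study's actual implementation.

```python
# Minimal sketch of downranking flagged content in a ranked feed.
# Post, antidemocratic_prob, and PENALTY are hypothetical names,
# not the study's actual code.
from dataclasses import dataclass

PENALTY = 0.8  # fraction of the ranking score removed from flagged posts

@dataclass
class Post:
    post_id: str
    score: float                 # platform's original ranking score
    antidemocratic_prob: float   # classifier output in [0, 1]

def downrank(posts: list[Post], threshold: float = 0.5) -> list[Post]:
    """Re-rank a feed: posts whose classifier probability exceeds
    the threshold lose a fixed fraction of their ranking score."""
    def adjusted(p: Post) -> float:
        if p.antidemocratic_prob >= threshold:
            return p.score * (1.0 - PENALTY)
        return p.score
    return sorted(posts, key=adjusted, reverse=True)

feed = [
    Post("a", score=0.9, antidemocratic_prob=0.7),
    Post("b", score=0.6, antidemocratic_prob=0.1),
]
print([p.post_id for p in downrank(feed)])  # -> ['b', 'a']
```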

"The algorithmic recommender systems that select, filter, and personalize experiences across online platforms and services play a significant role in shaping user experiences online. These systems largely determine what users see, read, and watch, fueling debates around their potential to amplify harmful content, foster societal division, and prioritize engagement over user well-being. In reaction, some policymakers have turned to blanket bans on personalization or to the promotion of chronological feeds. But there are many better alternatives. Suggesting that users must choose between today’s default feeds and chronological or non-personalized feeds creates a false choice.

This report, prepared by the KGI Expert Working Group on Recommender Systems, offers comprehensive insights and policy guidance aimed at optimizing recommender systems for long-term user value and high-quality experiences. Drawing on a multidisciplinary research base and industry expertise, the report highlights key challenges in the current design and regulation of recommender systems and proposes actionable solutions for policymakers and product designers.

A key concern is that some platforms optimize their recommender systems to maximize certain forms of predicted engagement, which can prioritize clicks and likes over stronger signals of long-term user value. Maximizing the chances that users will click, like, share, and view content this week, this month, and this quarter aligns well with the business interests of tech platforms monetized through advertising. Product teams are rewarded for showing short-term gains in platform usage, and financial markets and investors reward companies that can deliver large audiences to advertisers."

https://kgi.georgetown.edu/research-and-commentary/better-feeds/

#SocialMedia #SocialNetworks #Algorithms #AlgorithmicRecommendation #RecommendationEngines

Better Feeds: Algorithms That Put People First – Knight-Georgetown Institute

Knight-Georgetown Institute
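
The distinction the report draws between short-term engagement and long-term user value can be made concrete as two different ranking objectives. The signals and weights in the sketch below are invented for illustration; the report does not prescribe any specific formula.

```python
# Illustrative contrast between an engagement-optimized objective and a
# value-weighted one. All signals and weights here are hypothetical.

def engagement_score(item: dict) -> float:
    """Short-horizon objective: predicted clicks, likes, shares, views."""
    return (0.4 * item["p_click"]
            + 0.3 * item["p_like"]
            + 0.2 * item["p_share"]
            + 0.1 * item["p_watch"])

def long_term_value_score(item: dict) -> float:
    """Blends the engagement prediction with slower, stronger signals of
    user value: explicit quality ratings and predicted regret."""
    return (0.5 * engagement_score(item)
            + 0.4 * item["survey_quality"]  # e.g. "was this worth your time?"
            - 0.3 * item["p_regret"])       # predicted chance of regret/hide

item = {"p_click": 0.8, "p_like": 0.5, "p_share": 0.2, "p_watch": 0.6,
        "survey_quality": 0.2, "p_regret": 0.7}
print(engagement_score(item))        # high: clickbait scores well
print(long_term_value_score(item))   # low: regret signal pulls it down
```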

"An “error” in Instagram Reels caused its algorithm to show some users video after video of horrific violence, animal abuse, murders, dead bodies, and other gore, Meta told 404 Media. The company said “we apologize for the mistake.”

Sometime in the last few days, this error caused people’s Reels algorithms to suddenly change. A 404 Media reader who has a biking-related Instagram account reached out to me and said that his feed, which is “typically dogs and bikes,” had become videos of people getting killed: “I had never seen someone being eaten by a shark, followed by someone getting killed by a car crash, followed by someone getting shot,” he told 404 Media.

To test this, the person let me log in to his Instagram account, and I scrolled Reels for about 15 minutes. There were a couple of videos about dogs and a couple about bikes, but the vast majority of videos were hidden behind a “sensitive content” warning. I will describe videos I saw when I clicked through the warnings, many of which had thousands of likes and hundreds of comments:"

https://www.404media.co/instagram-error-turned-reels-into-neverending-scroll-of-murder-gore-and-violence/

#SocialMedia #Meta #Instagram #Algorithms #AlgorithmicRecommendation

Instagram 'Error' Turned Reels Into Neverending Scroll of Murder, Gore, and Violence

“Today’s algorithm showed me around 70 murders, 100+ accidents, and around 115 violence videos, is anyone on Instagram noticing it?”

404 Media

"I'm not saying that it's a sin to read an algorithmic feed, but relying on algorithmic feeds is a recipe for feeling empty, and regretful of your misspent attention. This is true even when the algorithm is good at its job, as with Tiktok, whose whole appeal is to take your hands off the wheel and give total control over to the autopilot. Even when an algorithm makes many good guesses about what you'll like, seeing something you like isn't as nice, as pleasing, as useful, as seeing that same thing as the result of someone else's intention.

And, of course, once you let the app drive, you become a soft target for the cupidity and deceptions of the app's makers. Tiktok, for example, uses its "heating tool" to selectively boost things into your feed – not because they think you'll like it, but because they want to trick the person whose content they're boosting into thinking that Tiktok is a good place to distribute their work through:"

https://pluralistic.net/2025/02/19/gimme-five/#jeffty

#AttentionEconomy #SocialMedia #Email #RSS #Algorithms #AlgorithmicFeeds #AlgorithmicRecommendation

Pluralistic: Pluralistic is five (19 Feb 2025) – Pluralistic: Daily links from Cory Doctorow
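
Mechanically, a "heating" tool of the kind Doctorow describes is just a manual override sitting on top of a personalized ranker: hand-picked items receive a flat boost regardless of predicted relevance. A toy sketch, with invented item names and boost values; this is not TikTok's actual system.

```python
# Sketch of a manual "heating" override on top of a personalized ranker.
# HEATED_IDS and HEAT_BOOST are hypothetical; purely illustrative.

HEATED_IDS = {"brand_clip_42"}   # editorially boosted items
HEAT_BOOST = 10.0                # large enough to dominate relevance scores

def final_score(item_id: str, predicted_relevance: float) -> float:
    """Relevance plus a flat boost for hand-picked items."""
    boost = HEAT_BOOST if item_id in HEATED_IDS else 0.0
    return predicted_relevance + boost

candidates = {"dog_video": 0.92, "bike_video": 0.88, "brand_clip_42": 0.10}
ranked = sorted(candidates, key=lambda i: final_score(i, candidates[i]),
                reverse=True)
print(ranked)  # -> ['brand_clip_42', 'dog_video', 'bike_video']
```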

"This technical report presents findings from a two-phase analysis investigating potential algorithmic bias in engagement metrics on X (formerly Twitter) by examining Elon Musk’s account against a group of prominent users and subsequently comparing Republican-leaning versus Democrat-leaning accounts. The analysis reveals a structural engagement shift around mid-July 2024, suggesting platform-level changes that influenced engagement metrics for all accounts under examination. The date at which the structural break (spike) in engagement occurs coincides with Elon Musk’s formal endorsement of Donald Trump on 13th July 2024.

In Phase One, focused on Elon Musk’s account, the analysis identified a marked differential uplift across all engagement metrics (view counts, retweet counts, and favourite counts) following the detected change point. Musk’s account not only started with a higher baseline compared to the other accounts in the analysis but also received a significant additional boost post-change, indicating a potential algorithmic adjustment that preferentially enhanced visibility and interaction for Musk’s posts.

In Phase Two, comparing Republican-leaning and Democrat-leaning accounts, we again observed an engagement shift around the same date, affecting all metrics. However, only view counts showed evidence of a group-specific boost, with Republican-leaning accounts exhibiting a significant post-change increase relative to Democrat-leaning accounts. This finding suggests a possible recommendation bias favouring Republican content in terms of visibility, potentially via recommendation mechanisms such as the “For You” feed. Conversely, retweet and favourite counts did not display the same group-specific boost, indicating a more balanced distribution of engagement across political alignments."

https://eprints.qut.edu.au/253211/

#USA #Trump #PresidentialElections #SocialMedia #Musk #Twitter #AlgorithmicBias #Algorithms #AlgorithmicRecommendation
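
The structural-break detection at the heart of this report can be illustrated with a minimal change-point search: choose the split of an engagement time series that minimizes within-segment squared error, then compare pre- and post-break means. The sketch below uses synthetic data in place of the report's actual view counts, and is not the authors' code.

```python
# Minimal single change-point search on an engagement time series.
# Synthetic data stands in for real view counts.
import numpy as np

rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(100, 10, 60),   # pre-break level
                         rng.normal(180, 10, 60)])  # post-break uplift

def find_break(x: np.ndarray) -> int:
    """Return the index that best splits x into two constant-mean
    segments, by minimizing total within-segment squared error."""
    costs = []
    for k in range(2, len(x) - 2):
        left, right = x[:k], x[k:]
        costs.append(((left - left.mean()) ** 2).sum()
                     + ((right - right.mean()) ** 2).sum())
    return int(np.argmin(costs)) + 2

k = find_break(series)
print(f"break at t={k}, pre-mean={series[:k].mean():.1f}, "
      f"post-mean={series[k:].mean():.1f}")
```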

#SocialMedia #SocialSciences #BigTech #AlgorithmicRecommendation: "In downplaying the role of algorithmic content curation for issues such as misinformation and political polarisation, the study became a beacon for sowing doubt and uncertainty about the harmful influence of social media algorithms.

To be clear, I am not suggesting the researchers who conducted the original 2023 study misled the public. The real problem is that social media companies not only control researchers’ access to data, but can also manipulate their systems in a way that affects the findings of the studies they fund.

What’s more, social media companies have the power to promote certain studies on the very platform the studies are about. In turn, this helps shape public opinion. It can create a scenario where scepticism and doubt about the impacts of algorithms can become normalised – or where people simply start to tune out.

This kind of power is unprecedented. Even big tobacco could not control the public’s perception of itself so directly.

All of this underscores why platforms should be mandated to provide both large-scale data access and real-time updates about changes to their algorithmic systems."

https://theconversation.com/is-big-tech-harming-society-to-find-out-we-need-research-but-its-being-manipulated-by-big-tech-itself-240110

Is big tech harming society? To find out, we need research – but it’s being manipulated by big tech itself

Big tech doesn’t just fund research into itself – it also controls researchers’ access to data. Even big tobacco didn’t have this kind of power.

The Conversation

#AI #Algorithms #AlgorithmicRecommendation #Taiwan: "Taiwan’s regulator is aiming to strengthen investor protection against potential mis-selling and inappropriate investment advice from robo-advisers as take-up of artificial intelligence grows more widespread.

The move could see Taiwan become the first market in the world to regulate the algorithms that robo-advisers adopt to make recommendations.

Chang Tzu-min, deputy director-general of the Securities and Futures Bureau, in April said the authority planned to require industry participants to set up supervisory committees that include external experts to enhance investor protection.

Chang said Taiwan’s Financial Supervisory Commission would set up an external expert panel to review the algorithms to assess their ability to react to changes in the market and review whether financial groups could manipulate results generated by the algorithms."

https://www.ft.com/content/f07b6d17-0d63-4e0e-b982-262ad67f9006

Taiwan aims to enhance investor protections against robo-adviser risks

Regulator concerned about potential mis-selling as financial companies look to incorporate artificial intelligence

Financial Times

Learn from Ms. Rasha Abdul-Rahim (director of Amnesty Tech) and Dr. Nakeema Stefflbauer (founder of FrauenLoop) about #AlgorithmicRecommendation systems.

Listen to Mr. Tim Hwang (author of Subprime Attention Crisis) in a panel on the economics of #advertising.

And Prof. Tommaso Valletti (Imperial College) will go on to explain the #lobbying methods of big tech, and what can be done to counter them. 2/2

#FAGAM #BigTech #DigitalSummit #Freedom