Today we released our latest threat research into the influence operations we’ve disrupted. A few highlights:
1. A lookback at Russia’s IO since the invasion of Ukraine began (both covert + overt)
2. Three new CIB takedowns in Bolivia, Cuba, and Serbia
First, we dive into the increase in RU-origin covert IO, which remained largely ineffective and increasingly relied on slapdash, unsophisticated techniques. In contrast, overt IO (e.g. from RU state media) declined significantly in popularity on platform after we took steps to limit their reach, per the latest research from Graphika (http://graphika.com/lose-influence).
This new research shows that engagement with RU state media content on our services declined considerably (by ~80%) after we launched interventions to label that content as state-controlled and to limit its reach on platform.
This shows the impact that non-binary content moderation levers can have: our goal here is to make sure people have context about the origin of content they encounter before amplifying it. But it also raises important questions about when/how such levers should be applied.
It also builds on research from @BrookingsInst that found similar drops in engagement with RU state-controlled media entities (SCME) in LatAm after our interventions were applied: https://www.brookings.edu/research/working-the-western-hemisphere/
Second, our report details three new CIB cases, all of which were linked in some way to governments or ruling parties and targeted people in their own countries.
We’ve called out this trend as particularly concerning: it combines the deceptive nature of IO with the powers of a state. And domestic ops (gov + non-gov) have been outpacing foreign ones: of the 200+ CIB operations we’ve disrupted since 2017, more than 2/3 are wholly or partially domestic.
Finally, we continue to see threat actors target multiple platforms. In this report, that included Facebook, Instagram, Telegram, Twitter, YouTube, TikTok, Spotify, Picta, and websites the threat actors created to pose as news outlets. This cross-platform behavior requires defenders across the industry to stay alert to threats, share information, and take action where appropriate.
Some additional info on those CIB cases:
A short thread on two of the CIB operations in LatAm we detail in our new report today (https://about.fb.com/news/2023/02/metas-adversarial-threat-report-q4-2022/):
1. An operation linked to the current gov and the MAS party in Bolivia
2. A government-linked operation in Cuba
In Bolivia, we removed over 1,600 accounts, Pages, and Groups for violating our policies against both coordinated inauthentic behavior and coordinated abusive reporting (aka mass reporting). This network originated in Bolivia and focused primarily on domestic audiences in that country.
Our internal investigation linked it to the current Bolivian government and Movimiento al Socialismo (MAS), including individuals claiming to be part of a group known as “Guerreros Digitales” (“digital warriors”). We banned this group from our services.
Like other operations we’ve disrupted in LatAm, this network was reported to be running fake accounts out of office buildings in Santa Cruz, Bolivia, and was active across many internet services, including Facebook, Instagram, Twitter, YouTube, TikTok, Spotify, Telegram, and its own websites.
This operation engaged in both coordinated inauthentic behavior and coordinated abusive reporting: it posted in support of the Bolivian government, criticized the opposition, and submitted false reports against opposition voices in an attempt to get their content taken down.
In Cuba, we removed 900+ accounts, Pages, and Groups for violating our policy against coordinated inauthentic behavior. This network originated in Cuba and primarily targeted domestic audiences there, as well as the Cuban diaspora abroad.
Our internal investigation linked the activity to the Cuban government. The operation focused on promoting the government and criticizing the opposition in Cuba.
Across many platforms, including Facebook, Instagram, Telegram, Twitter, YouTube, and Picta (a Cuban social network), the operation pursued two main efforts: (1) fake amplification and (2) fake personas and brands designed to deceive.
A few tactics that we’ve seen in other campaigns showed up here, including AI-generated profile photos and calls to report critics in hopes of getting their content taken down. Neither of these tactics appeared to be very effective.
Notably, after we removed this deceptive campaign, we saw its operators try aggressively to rebuild. We expect threat actors to attempt this, and we moved quickly to block those attempts. The operation eventually shifted elsewhere, including to Telegram.
After our initial takedown, they’ve had to spend their time and resources trying to evade our enforcement rather than pursuing their goals, leaving them with little to show for it. That is exactly what we want to see.
