Sent an email to the "Global Internet Forum to Counter Terrorism" #GIFCT, an NGO with mostly corporate members. Phrased it as if all I had seen were their "shoobidoo" music videos about how their database of perceptual hashes for presumed terrorist content is maintained.
That database is used by its members to flag posted content as originating from a "terrorist group".
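The matching itself is conceptually simple: fuzzy comparison of bit strings. A minimal sketch of how perceptual-hash flagging works in general (the hash format, the Hamming-distance threshold, and all names here are my illustrative assumptions, not GIFCT's actual scheme):

```python
# Illustrative sketch only: GIFCT's real hash format, thresholds, and
# matching rules are not modeled here; everything below is made up.

def hamming_distance(h1: int, h2: int) -> int:
    """Number of differing bits between two 64-bit perceptual hashes."""
    return bin(h1 ^ h2).count("1")

def is_flagged(media_hash: int, database: list[int], threshold: int = 10) -> bool:
    """Flag media if any database hash is within the distance threshold.

    Perceptual hashes of visually similar images differ in only a few
    bits, so matching is fuzzy by design -- which is exactly why false
    positives happen and human review is assumed downstream.
    """
    return any(hamming_distance(media_hash, h) <= threshold for h in database)

db = [0xF0F0F0F0F0F0F0F0]             # one hypothetical database entry
print(is_flagged(0xF0F0F0F0F0F0F0F1, db))  # 1 bit off: flagged
print(is_flagged(0x0F0F0F0F0F0F0F0F, db))  # 64 bits off: not flagged
```

Note that the threshold is a policy knob: loosen it and you catch more re-encoded copies but also more innocent look-alikes. Someone has to review what the matcher flags.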
Looking forward to their response.
Their public documents show how fuzzy and messy their processes and rules are – and how dangerous for groups that authoritarian regimes love to label as "terrorist".
Some are looking forward to #Mastodon admins being able to cross-check posted media against such a database. Even giving it the benefit of the doubt, that hope is at best naive.
Had they actually read all the caveats and cautions the GIFCT members are advised to take into account, they'd soon realize that Fediverse admins are not able to cope with such a task:
Try figuring out, again and again, issues like whether you, as an admin, should consider a group "terrorist" that was labeled as such merely by an oppressive regime.
The obvious, desperate "solution" for admins will be to shrug helplessly and go along with banning everything that is flagged. After all, reducing workload was the reason they considered a hash-matching solution in the first place.
As a result, establishing the GIFCT database as a "tool" for the Fediverse would ironically resemble the establishment of a "Fediverse Interpol", with a perceptual hash effectively becoming a kind of #RedNotice.
Except that #Fediverse admins, unlike well-resourced Interpol member states, would be hopelessly overwhelmed by having to assess each raised flag and weigh it against their own standards.
Which, in turn, means that legitimate opposition voices may be silenced on the Fediverse. Automatically.
Such hash matching is an integral part of many oppressive projects, like #ChatControl. Once the tech is in place, those who control the hash databases control what can be said.