  • Adding a risk component, a calculation of the potential for harm of a given piece of content, seems the next logical stage in the process. This would prioritise content for moderation.

  • From this, and making a broad incursion into potentially Orwellian territory, follows identifying risk at the level of individuals or groups (detected clusters or networks), which might then feed into moderation and distribution (prevalence/view-rate) scores or targets.

  • Given that social media are, well, social, with groups tending to have a high level of coherence, behaviour towards standards might be assessed at the group level, with behaviours such as failure-to-flag or amplification of offending content considered further. Effectively a trust metric for posting and amplification (a rough sketch follows this list).
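
Purely as an illustration of how those three ideas might compose (every name, weight, and formula below is invented for the sketch, not any platform's actual method): per-item harm risk times predicted views gives a moderation priority, inflated for content from low-trust accounts, with trust decaying on behaviours like posting, amplifying, or failing to flag offending content.

```python
from dataclasses import dataclass

@dataclass
class TrustScore:
    """Hypothetical per-account (or per-cluster) trust in (0, 1]."""
    value: float = 1.0

    def penalise(self, weight: float) -> None:
        # Multiplicative decay, so repeated offences compound.
        self.value *= (1.0 - weight)

# Invented penalty weights for trust-relevant behaviours.
PENALTIES = {
    "posted_violative": 0.30,     # authored offending content
    "amplified_violative": 0.15,  # boosted/shared offending content
    "failed_to_flag": 0.05,       # saw offending content, did not report it
}

def moderation_priority(harm_risk: float,
                        predicted_views: float,
                        trust: TrustScore) -> float:
    """Expected harm if left up, inflated for low-trust sources.

    harm_risk: classifier estimate of potential for harm, in [0, 1].
    predicted_views: forecast impressions for the item.
    """
    return harm_risk * predicted_views / max(trust.value, 0.01)

# The same risky post ranks higher in the review queue when it comes
# from an account with a history of offending behaviour.
trusted, untrusted = TrustScore(), TrustScore()
untrusted.penalise(PENALTIES["posted_violative"])
untrusted.penalise(PENALTIES["amplified_violative"])

print(moderation_priority(0.6, 10_000, trusted))    # 6000.0
print(moderation_priority(0.6, 10_000, untrusted))  # ~10084: reviewed first
```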

#contentModeration #moderation #youtube #google #ViewRate #Prevalence

3/

On content moderation, the metric that seems to be getting picked up is #Prevalence (Facebook) or #ViewRate (YouTube/Google), which looks not just at the number of items posted but at the number of times each is viewed. This is beginning to approach a useful metric, but it still poses several problems.

  • It's easy to focus on simple numbers or characteristics, but these almost always provide an oversimplified and incomplete view. Counting how many violative or disinformational pieces of content are posted, without accounting for how they are presented within members' streams or search results, is incomplete.

  • Computing items times impressions is much better. It also provides a basis for determining the moderation load required versus the degree of access granted, given that prevalence tends strongly to follow a power-law distribution. (More below; a toy sketch follows this list.)
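
A toy sketch of the items times impressions point, assuming an invented catalogue with views drawn from a rough power law (the prevalence() function below only loosely mirrors the published Facebook/YouTube definitions):

```python
import random

def prevalence(items: list[tuple[int, bool]]) -> float:
    """View-weighted prevalence: the share of all impressions that land
    on violative content. items: (views, is_violative) per piece."""
    violative_views = sum(v for v, bad in items if bad)
    total_views = sum(v for v, _ in items)
    return violative_views / total_views if total_views else 0.0

# Pareto-distributed views: a few items collect most impressions.
random.seed(1)
catalogue = [(int(random.paretovariate(1.2) * 100), random.random() < 0.05)
             for _ in range(10_000)]

print(f"prevalence: {prevalence(catalogue):.4%}")

# Moderation load vs. coverage: because views follow a power law,
# reviewing only the most-viewed 1% of items covers a disproportionate
# share of total impressions (and so of prevalence).
catalogue.sort(key=lambda item: item[0], reverse=True)
head_views = sum(v for v, _ in catalogue[:100])
all_views = sum(v for v, _ in catalogue)
print(f"view share of top 1% of items: {head_views / all_views:.1%}")
```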

#facebook #google #youtube #moderation #contentModeration

2/