Adding a risk component, a calculation of the potential for harm of a given piece of content, seems the next logical stage in the process: it prioritises content for moderation.
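As a rough illustration of the idea, risk-prioritised moderation could be a priority queue keyed on a harm score. This is a minimal sketch; the scoring formula, weights, and names (`risk_score`, `ModerationQueue`) are invented for illustration, not any platform's actual system.

```python
import heapq

def risk_score(views_per_hour, harm_severity, reporter_count):
    """Toy risk score: severity of potential harm scaled by reach and reports.
    Weights are arbitrary placeholders."""
    return harm_severity * (1 + views_per_hour / 1000) * (1 + reporter_count)

class ModerationQueue:
    """Surfaces the highest-risk item first for human review."""
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so heapq never compares item ids

    def push(self, item_id, score):
        # Negate the score: heapq is a min-heap, we want highest risk first.
        heapq.heappush(self._heap, (-score, self._counter, item_id))
        self._counter += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]

q = ModerationQueue()
q.push("post-a", risk_score(50, 2, 1))
q.push("post-b", risk_score(5000, 9, 40))
q.push("post-c", risk_score(200, 1, 0))
print(q.pop())  # post-b: highest potential for harm surfaces first
```

The point of the sketch is the ordering, not the formula: whatever the score, moderator attention goes first to content with the greatest potential for harm.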
Building on this, and straying into potentially Orwellian territory, is identifying risk by individual or group (detected clusters or networks), which might then feed into moderation and distribution (prevalence/view-rate) scores or targets.
Given that social media are, well, social, with groups tending to show a high level of coherence, behaviour towards standards might be assessed, with behaviours such as failing to flag or amplifying offending content factored in. Effectively, a trust metric for posting and amplification.
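A trust metric of that kind might look something like the following sketch. Every field name and weight here is an assumption made up for illustration: a clean posting record builds trust, while amplifying offending content or failing to flag it erodes it.

```python
def trust_score(posts_total, posts_violating, amplified_violating, flags_missed):
    """Hypothetical per-account trust metric in [0, 1].
    Starts from the account's clean-posting ratio, then penalises
    amplification of offending content and failures to flag it.
    All weights (0.1, 0.05) are placeholder assumptions."""
    if posts_total == 0:
        return 0.5  # neutral prior for a new or silent account
    clean_ratio = 1 - posts_violating / posts_total
    penalty = 0.1 * amplified_violating + 0.05 * flags_missed
    return max(0.0, min(1.0, clean_ratio - penalty))

# An account with a mostly clean record, but a habit of amplifying
# and ignoring offending content, scores below the neutral prior:
print(trust_score(posts_total=100, posts_violating=2,
                  amplified_violating=3, flags_missed=4))  # 0.48
```

Such a score could then gate both what an account may post and how far its amplification reaches, which is exactly where the Orwellian worry above bites.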
#contentModeration #moderation #youtube #google #ViewRate #Prevalence
3/