@Anupam_Guha it seems very free of jargon but also rather introductory? But since I've been reading about this topic for years, perhaps I'm not the intended audience.
A few points:
- It seems like saying "the machine did it" should be treated not as blaming a scapegoat, but as admitting an error, though a different kind. An organization must take responsibility for its machines.
But admitting this is just the first step. How are errors remedied and new ones prevented? Is there a robust process to investigate and fix mistakes? Admitting a serious error and then doing nothing about it is either arrogance or helplessness.
- An army of workers would also make mistakes when they lack the background knowledge to understand the meaning of a photo or poem. I don't see how this can be avoided unless the moderator is sufficiently part of the community they moderate to know the people and understand what they are saying. That seems incompatible with Twitter's flat, global structure and fits better with Mastodon, Reddit, or other places that have small subdivisions with their own moderators.
- Machine filtering seems to work well when combined with local overrides. Consider Gmail's spam filter, which works for many people because it lets each user train the algorithm and override its verdicts. Perhaps this early success made people too optimistic that the same approach could work globally for social media?
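To make the override idea concrete, here is a minimal sketch of the pattern: a shared model produces scores, but a user's explicit feedback always wins. All names here are hypothetical, not any real filter's API.

```python
class FilterWithOverrides:
    """A global classifier whose verdicts can be overridden per user."""

    def __init__(self, global_scores):
        # global_scores: sender -> spam probability from a shared model
        self.global_scores = global_scores
        # per-user corrections: sender -> bool (True means spam)
        self.overrides = {}

    def mark(self, sender, is_spam):
        """Record user feedback ('report spam' / 'not spam')."""
        self.overrides[sender] = is_spam

    def is_spam(self, sender, threshold=0.5):
        # A local override takes precedence over the global model.
        if sender in self.overrides:
            return self.overrides[sender]
        return self.global_scores.get(sender, 0.0) >= threshold

f = FilterWithOverrides({"ads@example.com": 0.9, "friend@example.com": 0.1})
f.mark("ads@example.com", False)  # the user says this sender is fine
print(f.is_spam("ads@example.com"))     # False: override wins
print(f.is_spam("friend@example.com"))  # False: below threshold
```

The point is only the precedence rule: local knowledge corrects the global model, which is exactly what a flat, global moderation system lacks.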