I've been speaking and writing lately about how tech companies try to shift the discussion on misinformation, polarization, harassment, etc., away from the systems and structures that are inherently toxic and toward questions of individual behavior.

This way they can blame their own users for any pathology and steer clear of calls for systemic change.

Today Elon Musk has come through with a perfect illustration for my future talks.

#socialmedia #twitter #ElonMusk

This should be obvious, but having an algorithm that behaves that way is a DELIBERATE CHOICE.

It would be easy enough, for example, to implement basic sentiment analysis so that the algorithm stops boosting content similar to posts you've reacted negatively to.
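To make the idea concrete, here's a minimal toy sketch of that kind of gating: before boosting a candidate post, check whether the viewer's past reactions to its author have been mostly negative. All of the names and thresholds here are hypothetical, not any real platform's API.

```python
# Illustrative only: a boost gate driven by the viewer's own negative
# reactions. "user_history" is a list of (author, reaction) pairs.

NEGATIVE_REACTIONS = {"angry_reply", "mute", "block", "report"}

def negative_affinity(user_history, author):
    """Fraction of the user's past reactions to this author that were negative."""
    reactions = [r for (a, r) in user_history if a == author]
    if not reactions:
        return 0.0
    return sum(r in NEGATIVE_REACTIONS for r in reactions) / len(reactions)

def should_boost(user_history, author, threshold=0.5):
    # Suppress algorithmic boosting once negative reactions dominate.
    return negative_affinity(user_history, author) < threshold

history = [("loud_account", "angry_reply"),
           ("loud_account", "mute"),
           ("friend", "like")]
should_boost(history, "loud_account")  # False: both past reactions were negative
should_boost(history, "friend")        # True
```

The point isn't that this exact heuristic is right; it's that some version of it is cheap to build, which is what makes keeping the rage-boosting behavior a choice.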

Musk is playing it both ways. He keeps the algorithm that boosts inflammatory content and drives the online conflicts that draw views and clicks, while pushing the blame for this off onto the individuals involved.

That sucks.

@ct_bergstrom @dalias and nobody wants to admit that software developers aren't that smart and can't really build a good algorithm that scales to a global audience; easier to blame the users.
@Paxxi @ct_bergstrom I'm skeptical of that claim. Regardless of whether you can build it (probably easy) it's entirely counter to business interests to do so.
@dalias @ct_bergstrom I haven't seen anyone doing sentiment analysis that handles sarcasm, for example. Then translate that across different languages and cultures and you end up with something that just weights likes, replies, and retweets.
@Paxxi @ct_bergstrom Don't do sentiment analysis on text contents. Just use social graphs and assume for example that someone in a deplorable circle interacting with someone in an antifa circle is hostile interaction.
@Paxxi @ct_bergstrom I suspect you could somewhat automate discovery of this kind of social graph knowledge by using trends in textual sentiment analysis - it's horribly wrong on individual messages but might be significant at scale looking at trends across all messages crossing social graph clusters.
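The two posts above sketch a pipeline: derive clusters from the follow graph, then average the noisy per-message sentiment across each pair of clusters so individual errors wash out at scale. A toy version, with made-up names and data throughout:

```python
# Illustrative sketch: union mutual-follow edges into clusters, then
# aggregate noisy per-message sentiment scores (in [-1, 1]) per pair
# of clusters. A single score is unreliable; the mean over many
# cross-cluster messages is a better hostility signal.
from collections import defaultdict
from statistics import mean

def clusters_from_follows(follows):
    """Map each user to a cluster root via a tiny union-find over follow edges."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in follows:
        parent[find(a)] = find(b)
    return {u: find(u) for u in parent}

def hostile_pairs(follows, messages, threshold=-0.3):
    """messages: iterable of (sender, recipient, noisy_sentiment)."""
    cluster_of = clusters_from_follows(follows)
    scores = defaultdict(list)
    for sender, recipient, s in messages:
        a = cluster_of.get(sender, sender)
        b = cluster_of.get(recipient, recipient)
        if a != b:  # only interactions that cross cluster boundaries
            scores[frozenset((a, b))].append(s)
    return {pair for pair, vals in scores.items() if mean(vals) < threshold}

follows = [("a1", "a2"), ("b1", "b2")]          # two small clusters
messages = [("a1", "b1", -0.8),                  # per-message scores are noisy...
            ("a2", "b2", -0.4),
            ("a1", "b2", 0.1)]
hostile_pairs(follows, messages)                 # ...but the trend is hostile
```

In a real system the clustering would come from community detection on the whole graph rather than naive union-find, but the shape of the argument, graph structure first, text sentiment only in aggregate, is the same.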
@dalias @ct_bergstrom I still think that fails. If a group keeps telling nazis to fuck off, will they be part of the bad group? They keep being negative and interacting with the nazis. Will they be a separate group also labeled as bad?
@Paxxi @ct_bergstrom No, "social graph" means follows/mutual relationships. But you could also include likes.
@dalias @ct_bergstrom that solves being grouped with the nazis, but it doesn't solve the other issue