Watching the House Oversight hearing, and the ex-Twitter witnesses are doing a great job while being attacked by pretty much everybody.

Some initial reactions:
1) Despite the high-profile nature of this hearing, members don't seem much better prepared than in other tech hearings. Lots of misuse of technical terms (Rep. Jordan asking about "hard coding" multiple times) and confusion over the Hunter Biden non-consensual imagery tweets.

https://www.youtube.com/watch?v=-Fo_yD8r3w4

Full Committee Hearing - Part 1: Twitter’s Role in Suppressing the Biden Laptop Story

2) The panel, especially Yoel, is doing an excellent job of staying calm and explaining why the wildest theories espoused by the committee are not supported by the evidence.

3) This is a clear demonstration of the no-way-to-win dynamic around all political content moderation. I expect it will have the (intentional?) effect of reducing companies' willingness to take any action on political accounts. This was a dynamic @evelyndouek, Nate Persily and I discussed here:

https://moderated-content.simplecast.com/episodes/meta-reinstates-trumps-accounts-7l3bLsEf

Meta Reinstates Trump's Accounts | Moderated Content

Evelyn sits down with Nate Persily, Professor at Stanford Law School, and Alex Stamos, director of the Stanford Internet Observatory, to discuss Meta's decision that it is reinstating former President Trump's accounts. Nate is pragmatic, Alex is cynical, and Evelyn is a naive little formalist about it all. Here's their quick takes.

@alex @evelyndouek

Framing it as "no-way-to-win" is a false dichotomy. Neither the panel nor the podcast discussed a third option: informing users when their content has been moderated, whether removed or reduced. In fact, you still champion new "reach reduction" techniques (16:12). That's the wrong approach.

Facebook still has a "Hide comment" button. On Reddit, over 50% of accounts have had comments removed that they don't know about, because the system hides the removal from the author.
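
For context on that last point, here is a minimal sketch of how a tool like Reveddit can detect these hidden removals without any privileged access. It assumes two behaviors the technique relies on: that Reddit's public profile listing (/user/<name>/comments.json) still returns a removed comment with its original text, and that the same comment fetched anonymously via /api/info.json comes back with its body replaced by "[removed]". The username is a placeholder and the endpoints' exact behavior may have changed, so treat this as illustrative rather than definitive.

```python
import time
import requests

# Reddit asks API clients to send a descriptive User-Agent.
USER_AGENT = "shadow-removal-check/0.1 (illustrative example)"

def fetch_json(url: str) -> dict:
    resp = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=10)
    resp.raise_for_status()
    return resp.json()

def find_unnotified_removals(username: str) -> list[str]:
    """Return permalinks of comments that appear intact on the author's
    profile but read "[removed]" when fetched as an anonymous reader."""
    profile = fetch_json(
        f"https://www.reddit.com/user/{username}/comments.json?limit=25"
    )
    removed = []
    for child in profile["data"]["children"]:
        comment = child["data"]
        # Re-fetch the same comment by fullname (t1_ prefix) the way any
        # logged-out reader would see it in context.
        info = fetch_json(
            f"https://www.reddit.com/api/info.json?id={comment['name']}"
        )
        children = info["data"]["children"]
        if not children:
            continue  # comment deleted entirely, or ID not found
        public = children[0]["data"]
        # Assumption: a moderator-removed comment shows "[removed]" to the
        # public while the author's profile still shows the original body.
        if public.get("body") == "[removed]" and comment.get("body") != "[removed]":
            removed.append("https://www.reddit.com" + comment["permalink"])
        time.sleep(1)  # stay polite to the unauthenticated rate limit
    return removed

if __name__ == "__main__":
    # "some_username" is a hypothetical account name.
    for link in find_unnotified_removals("some_username"):
        print("removed without notice:", link)
```

The point of the comparison is that neither request requires login; the asymmetry between the two public views is itself the evidence that the author was never told.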

Platforms could take the hard step of being transparent, and we can hold them to that. Asking them to remove more content before demanding transparency makes you party to the censorship regime, or at least a lot like those who want to build lists of "toxic" users without concern for how that data is used:

https://old.reddit.com/r/TheoryOfReddit/comments/ymeqaz/hate_on_reddit_a_global_lists_of_toxic_users/iv4aet6/

Hate on Reddit: A Global Lists of "Toxic" Users

This is problematic when combined with Shadow Moderation, which is how comment removals work on Reddit (comment in...

@rhaksw @evelyndouek

Yoel's testimony specifically talked about moderation transparency, and you are apparently not familiar with my work, as I've been championing such things for years.

https://youtu.be/ATmQj787Jcc?t=1823

The Platform Challenge: Balancing Safety, Privacy and Freedom — Alex Stamos (DataEDGE 2019)

@alex @evelyndouek

Alex, I built Reveddit. My argument pertains to the ordering of your priorities.

Demands for transparency must be louder than demands to remove disinformation. In other words, it's not worth helping platforms flag content when the moderation, be it removal or reduction, is done without the knowledge of that content's author.

The reason we're all stuck on the same platforms is that people don't know they're being moderated, on both the left and the right, on every major platform. That's no "wild theory".

As for Yoel, he only advocated transparency of rules, not of judicial outcomes as you described in the linked video:

https://oversight.house.gov/wp-content/uploads/2023/02/Roth-House-Oversight-opening-statement-V4-Final.pdf

"what we tried to do at Twitter — across every decision — was to create a rules-based system of governance that would make clear what’s allowed, or not, on Twitter, and why. Transparency is at the heart of this work, and it’s where I think Twitter — and all of social media — can and must do better"