Social Media as Common Carrier and Policing

I've argued for a while that the phone company does not promote content or connections, while algorithmically-driven social media platforms do just that in the name of “driving engagement”.

Pointing this out on Diaspora, Simons Mith responds:

Therefore: if social media companies either choose to or are forced to become common carriers, the ‘driving engagement’ activities that they currently perform will transfer to other parties. But those activities will remain just as pervasive and odious as they are now, because that’s what works, and the social media companies, once they’re common carriers, won’t be obliged to police it. And I also reckon the police will continue to remain exactly as interested in policing it as they are now.

That's highly cogent.

I'm not entirely sure how to respond, though I do note that earlier networked common carriers were not entirely barred from restricting certain types of conduct or exchanges. Additionally, postal services, railroads, and transit agencies have their own inspectors or police forces. Broadcast networks had the network censor and government oversight (the FCC in the US). Even hotels have detectives.

@pluralistic might be interested.

https://joindiaspora.com/posts/36cdd860cef60139a303002590d8e506#ed282c70cfde0139eab1005056264835

#CommonCarrier #Internet #PostalService #Telegraph #Telephone #Communications #Railroads #TransitCops #PostalInspectors #RailroadPolice #NetworkCensor #Regulation #Moderation

On the anti-usability of the Modern Web

So ... I've discovered a since-discontinued radio programme which has online archives. There is an "archives" feature for the originating station, which presents ... ten episodes at a time, scattered over 50+ pages.

Fortunately, some digging into the HTML source reveals that the audio file naming convention is consistent, featuring a YYMMDD sequence, though the programme air-dates (or recording dates) are inconsistent enough that there's no predictable 7-day stride to that pattern.

So, as one does, I write a simple Bash loop to sequence through 4,000+ possible dates (year, month, day), using curl to check which are found (HTTP status 200). I can then loop through downloading those...

That'll take an hour or so to run. Fortunately I don't have to be there for it. (I could parallelise the queries by year or month to speed that up markedly, but I'm being forgiving to the website. The 404s seem to be expensive, taking a second or so to return...)
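A minimal sketch of that probing loop. The base URL, the `show-YYMMDD.mp3` filename pattern, and the year range are all hypothetical stand-ins for the real station's conventions; the structure (generate candidate dates, HEAD-request each, keep the 200s) is the point:

```shell
#!/usr/bin/env bash
# Hypothetical base URL and file naming convention -- substitute the real ones.
base_url="https://example.com/archive"

# Emit every valid YYMMDD date in an (assumed) five-year range.
# GNU date's -d parsing does the calendar checking: invalid dates
# (e.g. Feb 30) fail and are skipped.
candidate_dates() {
  local y m d
  for y in $(seq 95 99); do
    for m in $(seq -w 1 12); do
      for d in $(seq -w 1 31); do
        date -d "19${y}-${m}-${d}" >/dev/null 2>&1 && echo "${y}${m}${d}"
      done
    done
  done
}

# Probe each candidate with a HEAD request; record the ones that exist.
# Guarded behind RUN_PROBE so the script can be sourced/tested offline.
if [ "${RUN_PROBE:-0}" = "1" ]; then
  for stamp in $(candidate_dates); do
    status=$(curl -s -o /dev/null -w '%{http_code}' -I \
      "${base_url}/show-${stamp}.mp3")
    [ "$status" = "200" ] && echo "$stamp" >> found-dates.txt
  done
fi
```

Run with `RUN_PROBE=1` to actually probe; the hits collected in `found-dates.txt` can then feed a second, slower download loop. Sequential probing is deliberately polite to the server, as noted above.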