ourdumbfuture

1 Followers
74 Following
54 Posts
East coast enthusiast. Used to work in big tech. Posting about tech+society/software/labor issues/rap music mostly, probably.

New blog: Twitter released some of its code, but not the ML models that recommend tweets, while cutting off researchers' API access. It shows the limited utility of source code transparency.

The one useful thing we learned was how Twitter defines engagement. But that wasn't actually part of the source and was published separately! This type of transparency about how algorithms are configured should be considered essential, and doesn't require release of source code.
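To make the distinction concrete: ranked feeds typically combine predicted engagement probabilities into one weighted score, and it's the weights (configuration) rather than the scoring code that tell you what the platform values. The sketch below is illustrative only; the action names and weights are hypothetical examples, not Twitter's published configuration.

```python
# Illustrative sketch of weighted engagement scoring in a ranked feed.
# The action names and weights are hypothetical, NOT Twitter's actual
# published values -- the point is that this config table, not the
# scoring code, is where the interesting transparency lives.

WEIGHTS = {
    "like": 0.5,
    "retweet": 1.0,
    "reply": 13.5,  # hypothetical: replies weighted far above likes
}

def engagement_score(predicted_probs: dict) -> float:
    """Weighted sum of a model's predicted engagement probabilities."""
    return sum(WEIGHTS[action] * p
               for action, p in predicted_probs.items()
               if action in WEIGHTS)

# Example: model predictions for one tweet/user pair.
probs = {"like": 0.2, "retweet": 0.05, "reply": 0.01}
print(engagement_score(probs))  # 0.2*0.5 + 0.05*1.0 + 0.01*13.5 = 0.285
```

Note that the scoring function itself is trivial; everything substantive is in the weight table, which is exactly the kind of configuration that can be disclosed without releasing source code.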

https://knightcolumbia.org/blog/twitter-showed-us-its-algorithm-what-does-it-tell-us

Twitter showed us its algorithm. What does it tell us?

The more we talk about AI image generation, the more I think back to Susan Sontag's essay 'Regarding the Pain of Others' where she discusses what makes a picture authentic or inauthentic (amongst many other fascinating topics about the history of the photo in culture).

It's a devastating essay whether you're interested in AI image gen or not, though. Here's a PDF!

https://monoskop.org/images/a/a6/Sontag_Susan_2003_Regarding_the_Pain_of_Others.pdf

I’m realizing that growing up as a Wired-magazine-reading cyberpunk believer— a kid who thought Adbusters and the internet could save the world— and then watching every technological miracle turned not toward revolution or saving humanity but toward ever more pernicious and intractable methods of exploitation has made me uncomfortable with technological optimism. “Assume all new technology will be used against you” —
Seeing some great amateur code analysis from the Twitter recommender release. Waiting for all hell to break loose when this is discovered
Maybe it's just me but extremely strong 'Lennie from Of Mice and Men' energy on this illustration

Boy do I have some thoughts about this CEO's quote in @drewharwell's great reporting on Midjourney.

1) THIS IS HOW YOU END UP WITH TOOLS THAT APPLY ONE COUNTRY'S AUTHORITARIAN RULES TO A GLOBAL AUDIENCE

2) He doesn't at all consider that Chinese people might also want to satirize Xi Jinping. Do they not matter?

3) "Minimize drama" is condescending nonsense. Being able to criticize one of the most authoritarian leaders in emerging tech is not a question of "drama."

https://www.washingtonpost.com/technology/2023/03/30/midjourney-ai-image-generation-rules/

How a tiny company with few rules is making fake images go mainstream

Midjourney, the year-old firm behind recent fake visuals of Trump and the pope, illustrates the lack of oversight accompanying spectacular strides in AI.

The Washington Post
For the people who have advocated fruitlessly for years for the US to have any substantial data privacy law, it has to feel like gaslighting to see the country's national security apparatus finally focus on what TikTok collects and then conclude the answer is to ban that one app.

Yesterday I had a number of conversations with people working in the scholarly publishing sphere about what happens when AI chatbots pollute our information environment and then start feeding on this pollution.

As is so often the case, we didn’t have to wait long to get some hint of the kind of mess we could be looking at.

https://www.theverge.com/2023/3/22/23651564/google-microsoft-bard-bing-chatbots-misinformation

Google and Microsoft’s chatbots are already citing one another in a misinformation shitshow

Microsoft’s AI chatbot Bing incorrectly reported the demise of Google’s AI chatbot Bard. It’s an early warning sign that this technology is fueling a massive game of misinformation telephone.

The Verge
One of the interesting and disturbing aspects of modern LLMs isn’t the new things they do, but the things they make cheap. Low-quality, duplicative content makes retrieving quality information incredibly hard, especially in commercially lucrative domains such as product reviews. Making that content 3+ orders of magnitude less expensive to produce will not help our information ecosystem, even if AI content is identical in quality and substance to what humans are already producing.
Reminded this morning that the NYT op-ed pages may be one of the worst places on the internet