It’s not always easy to distinguish between existentialism and a bad mood.
Trailer for “the AI doc”, an upcoming doc featuring the usual suspects, notably the big yud himself - awful.systems

Full doc title: “The AI Doc: Or How I Became an Apocaloptimist”. Per wiki:

>The AI Doc: Or How I Became an Apocaloptimist is a 2026 American documentary film directed by Daniel Roher and Charlie Tyrell. It is produced by the Academy Award-winning teams behind Everything Everywhere All at Once (Daniel Kwan and Jonathan Wang) and Navalny (Shane Boris and Diane Becker).

What to say here? This is a doc being produced by the producer and one of the directors of Everything Everywhere All at Once, who have notably been making efforts to, uh, negotiate? I guess? with AI companies vis-à-vis making movies. Anyway, the title is a piece of shit and this trailer makes it look like this is just critihype: the movie. I guess we’ll hear more about it in the coming months. Really interesting that they frame it as brought about by thinking about the director’s child, given Yud’s recent comments about how one should raise a daughter if you had certain beliefs about AI.

exciting new roles of liquid management

algorithmic uh sovereignty

fantastic

Sam Altman wants his eye scanning crypto bullshit to be used to verify AI agents so he can save the internet from himself.

Rather than blocking automated traffic outright as a safety or data-protection measure, World suggests sites could instead require AI agents to present an associated World ID token to prove they represent an actual human who’s behind any request. In this way, the site could allow agents to access limited resources like restaurant reservations, ticket purchase opportunities, free trials, or even bandwidth without worrying about a single user flooding the process with thousands of anonymous bots. The same idea could apply to sensitive reputational systems like online forums and polls, where it’s important to prevent automated astroturfing or dogpiling.

World ID wants you to put a cryptographically unique human identity behind your AI agents

Iris scan-backed tokens could help stop agent swarms from overwhelming online systems.

Ars Technica

increasing fidelity of game graphics was actually making games better, or just more expensive

I really liked what Control did with cranking up the verisimilitude and the photorealism, namely to accentuate the weirdness and really up the uncanny vibe.

Maybe it’s just me, but even the enhanced lighting doesn’t look especially good, at least where faces are concerned; shining a hard light sideways so every facial nook and cranny gets highlighted in excruciating detail looks less natural and more like the old Android HDR photo filter, even before you notice it’s giving some characters Instagram makeovers.
Probably should’ve written ‘not a deal breaker’ instead of ‘not a big deal’.

It’s possible the attempt to shove AI into every nook and cranny of the Pentagon didn’t especially pan out, and since his face was all over that project, he’s desperate for a scapegoat.

Like for sure he’d have had the logistics of the entire US Army running smoothly despite layoffs by now, if it weren’t for the wokies at Anthropic acting up.

It is nuts to deny the experiences these people are having. They’re not vibe-coding mission-critical AWS modules. They’re not generating tech debt at scale:

pluralistic.net/2026/01/06/1000x-liability/#grace…

They’re just adding another automation tool to a highly automated practice, and using it when it makes sense. Perhaps they won’t always choose wisely, but that’s normal too. There’s plenty of ways that pre-AI automation tools for software development led programmers astray. A skilled, centaur-configured programmer learns from experience which automation tools they should trust, and under which circumstances, and guides themselves accordingly.

Whoa, the whole thing is indefensibly capital-W Wrong, just an utterly weird rose-colored-glasses view of the current corporate experience.

Pluralistic: Code is a liability (not an asset) (06 Jan 2026) – Pluralistic: Daily links from Cory Doctorow

The one-shotting phenomenon (or how a positive initial experience with the technology seems to lead to a heavily biased view of its merits) should probably be considered a distinct cognitive bias at this point.

Turns out a lot of bright people can’t deal with a technology whose efficiency is utterly subjective, or with the fact that this is precisely the part that reduces it to being so narrowly useful as to force the existential question, given the insane resource burn and the socioeconomic disruption that come as part and parcel, even if, like Doctorow, you think their rape and pillage of artists’ rights and intellectual property in general isn’t an especially big deal.

Also, local LLMs are hardly extricable from the whole mess; they’re basically a byproduct, and updated versions will only keep coming as long as their imperial-size online counterparts remain a going concern.

In the original post he kept referring to Ollama like it was an LLM instead of a server app that hosts LLMs, so I’d say the jury’s out on that.