Cory Doctorow: The real (economic) AI apocalypse is nigh

https://awful.systems/post/5842461

Do leaders even believe that generative AI is useful?

https://awful.systems/post/5159223

There’s a very long history of extremely effective labor-saving tools in software: writing in C rather than assembly, especially for more than one platform; standard libraries; Unix itself; more recently, developing games in Unity or Unreal instead of rolling your own engine. And whenever any of these tools came on the scene, there was a mad gold rush to develop products that weren’t feasible before. Not layoffs, not “we don’t need to hire junior developers any more”.

Rank-and-file vibe coders seem to perceive Claude Code (for some reason, mostly just Claude Code) as something akin to the advantage of using C rather than assembly. They are legit excited to code new things they couldn’t code before. Boiling the rivers to give them an occasional morale boost with “You are absolutely right!” is completely fucked up, and I dread the day I’ll have to deal with AI-contaminated codebases, but apart from that, they have something positive going for them, at least in this brief moment. They seem to be sincerely enthusiastic. I almost don’t want to shit on their parade.

The bigwigs, on the other hand, are firing people, closing projects, talking about not hiring juniors any more, and got the media to report on it as AI layoffs. The standard answer is that they hate having employees. But they always hated having employees, and there were always labor-saving technologies.

So I have a thesis here, or a synthesis perhaps. The bigwigs who tout AI (while acknowledging that it needs humans, for now) don’t see AI as ultimately useful, in the way the C compiler was useful. Even if it’s useful in some context, they still don’t. They don’t believe it can be useful. They see it as more powerfully useless. Each new version is meant to be a bit more like AM or (clearly AM-inspired, but more familiar) GLaDOS, one that will get rid of all the employees once and for all.

Meta was “allegedly” seeding porn to speed up their book downloads.

https://awful.systems/post/5119182

Sounds like Meta’s judge will have to invent a grand unified theory of fair use to excuse this. I kept saying about various lawsuits that the important thing is discovery: nobody knew all the idiotic shit these folks were doing, so nobody could sue them properly.

Hmm, maybe that was premature - ChatGPT has history on by default now, so maybe that’s where it got the idea it was a classic puzzle?

With history off, it still sounds like it has the problem in its training dataset, but the output is much more bizarre:

markdownpastebin.com/?id=68b58bd1c4154789a493df96…

Could also be randomness.

We did it. 2 people and many boats problem is a classic now. [content warning: botshit]

https://awful.systems/post/5053309

They train on sneer-problems now:

> Here’s the “ferry‑shuttle” strategy, exactly analogous to the classic two‑ferryman/many‑boats puzzle, but with planes and pilots

And lo and behold, singularity - it can solve variants that no human can solve: https://chatgpt.com/share/68813f81-1e6c-8004-ab95-5bafc531a969

> Two ferrymen and three boats are on the left bank of a river. Each boat holds exactly one man. How can they get both men and all three boats to the right bank?
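For what it’s worth, this variant really is unsolvable, and a brute-force search over the whole (tiny) state space shows it in a few lines. A minimal sketch, assuming (per the puzzle) that a boat moves only when rowed by exactly one man:

```python
from collections import deque

# State: (men_on_left, boats_on_left). Start: both men and all three
# boats on the left bank; goal: everything on the right bank.
# Assumption from the puzzle: each crossing moves exactly one man
# rowing exactly one boat.
START, GOAL = (2, 3), (0, 0)
TOTAL_MEN, TOTAL_BOATS = 2, 3

def moves(state):
    men_l, boats_l = state
    # Row left -> right: need a man and a boat on the left bank.
    if men_l >= 1 and boats_l >= 1:
        yield (men_l - 1, boats_l - 1)
    # Row right -> left: need a man and a boat on the right bank.
    if TOTAL_MEN - men_l >= 1 and TOTAL_BOATS - boats_l >= 1:
        yield (men_l + 1, boats_l + 1)

def solvable():
    seen, queue = {START}, deque([START])
    while queue:
        state = queue.popleft()
        if state == GOAL:
            return True
        for nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(solvable())  # False
```

The search confirms the obvious invariant: every crossing moves one man and one boat together, so (men on left − boats on left) stays at −1 forever and can never reach the 0 the goal state needs.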

AI solves every river crossing puzzle, we can go home now [content warning: botshit]

https://awful.systems/post/4738230

I think this summarizes in one conversation what is so fucking irritating about this thing: I am supposed to believe that it wrote that code. No siree, no RAG, no trickery with training a model to transform the code while maintaining an identical expression graph - it just goes from word-salading all over the place on a natural-language task to outputting 100 lines of coherent code.

Although that does suggest a new dunk on computer touchers, of the AI-enthusiast kind: you can point at that and say that coding clearly does not require any logical reasoning. (Also, as usual with AI, it is not always that good; sometimes it fucks up the code, too.)

Oh, and also for the benefit of our AI fanboys who can’t understand why we would expect something as mundane as doing math from this upcoming super-intelligence, here’s why:

lmao: they have fixed this issue; it seems to always run Python now. Got to love how they just put this shit in production as “stable” Gemini 2.5 pro, with that idiotic multiplication thing that everyone knows about, and expect what? To Eliza Effect people into marrying Gemini 2.5 pro?

Google's Gemini 2.5 pro is out of beta.

https://awful.systems/post/4694071

I love to show that kind of shit to AI boosters. (In case you’re wondering, the numbers were chosen randomly and the answer is incorrect.) They go “waaa waaa, it’s not a calculator”, and then I can point out that it got the leading 6 digits and the last digit correct, which is a lot better than it did on the “softer” parts of the test.
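The check itself is trivial to automate. The operands from the actual test aren’t reproduced here, so the numbers below are made-up stand-ins, and the “LLM answer” is fabricated by corrupting one middle digit of the true product; the sketch just shows the digit-match comparison being described (leading digits right from estimation, last digit right from easy modular arithmetic, middle garbled):

```python
# Hypothetical operands, not the ones from the actual test.
a, b = 742_591_863_201, 558_213_777_409
truth = a * b

# Fabricate an LLM-style wrong answer: corrupt one middle digit.
digits = list(str(truth))
digits[8] = str((int(digits[8]) + 3) % 10)  # guaranteed to differ
claimed = int("".join(digits))

def matching_ends(claimed: int, truth: int) -> tuple[int, int]:
    """How many leading and trailing digits of `claimed` match `truth`."""
    cs, ts = str(claimed), str(truth)
    lead = next((i for i, (c, t) in enumerate(zip(cs, ts)) if c != t),
                min(len(cs), len(ts)))
    trail = next((i for i, (c, t) in enumerate(zip(cs[::-1], ts[::-1])) if c != t),
                 min(len(cs), len(ts)))
    return lead, trail

lead, trail = matching_ends(claimed, truth)
print(claimed == truth, lead, trail)  # False, 8 leading digits match
```

By construction the answer here is wrong despite matching a long run of leading digits and the entire tail, which is exactly the failure shape the dunk is about.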

Actually, having read it carefully, it is interesting that they don’t claim it was hacked; they claim that the modification was unauthorized. They also don’t claim that they removed the access of that mysterious “employee” who made it.