Angus McIntyre

@angusm
1.8K Followers
148 Following
4.1K Posts
I play with words, cameras & computers
Science-Fiction/Fantasy Writing: https://angus.pw
Photography & More: https://raingod.com
Travel/Photo Blog: https://disoriented.net
Languages: en (native) / fr, it (fluent) / es, nl (some)

Reading all the “Happy World Backup Day!” posts and discovering that storage devices that were moderately affordable a year ago are now well into “down payment on a house” territory is … depressing.

To make matters worse, it's now emerged that the entire component industry was basically turned upside down by Sam Altman going to visit a couple of Korean manufacturers and pinky-swearing that he was going to buy a lot of RAM chips some day.

https://xcancel.com/aakashgupta/status/2038813799856374135

Aakash Gupta (@aakashgupta)

The timeline on this is genuinely insane.

October 2025: Sam Altman flies to Seoul and signs simultaneous deals with Samsung and SK Hynix for 900,000 DRAM wafers per month. That's 40% of global supply. Neither company knew the other was signing a near-identical commitment at the same time. Those deals were letters of intent. Non-binding. No RAM actually changed hands. But the market treated them as gospel. Contract DRAM prices jumped 171%. A 64GB DDR5 kit went from $190 to $700 in three months.

December 2025: Micron kills Crucial, its 29-year-old consumer memory brand, to reallocate every wafer to AI and enterprise customers. The company explicitly said it was exiting consumer memory to "improve supply and support for our larger, strategic customers in faster-growing segments." Translation: the AI demand signal was so loud that selling RAM to PC builders stopped making financial sense.

March 2026: Google publishes TurboQuant, a compression algorithm that reduces AI memory requirements by 6x with zero accuracy loss. Cloudflare's CEO called it "Google's DeepSeek." The entire thesis that AI would consume infinite memory forever just got a six-month expiration date on it.

Same month: OpenAI and Oracle cancel the Abilene Stargate expansion. The $500 billion data center vision that justified the RAM deals couldn't survive its own financing terms. Bloomberg attributed the collapse partly to OpenAI's "often-changing demand forecasting."

MU is now down ~33% from its post-earnings high. Revenue up 196% year over year, EPS up 682%, and the stock is in freefall because the company restructured its entire business around a demand signal that came from non-binding letters and is now being compressed out of existence by a research paper.

Micron bet the consumer division on Sam Altman's signature. The signature was worth exactly what the paper said: nothing binding.

Free Photography Tool Wants to Be a Landscape Photographer's Best Friend

Free tool for photographers built by a university student and photography enthusiast.

PetaPixel

Signs from today’s “No Kings” protest in Manhattan.

#NoKings #NYC #protests

(4/4)

Signs from today’s “No Kings” protest in Manhattan.

#NoKings #NYC #protests

(3/4)

Signs from today’s “No Kings” protest in Manhattan.

#NoKings #NYC #protests

(2/4)

Signs from today’s “No Kings” protest in Manhattan.

#NoKings #NYC #protests

(1/4)

For #caturday, here's a comfortable kitty taking a nap in the gardens of the Kasbah of the Udayas, in Rabat, Morocco.

#cats #CatsOfMastodon #Morocco

It's also the case that the more untrustworthy LLM output becomes, the harder the people who have invested hundreds of billions in the tech will try to convince us that we must Trust the Superintelligent Machine That Knows Everything, and, indeed, to cut us off from competing knowledge sources. So we have that to look forward to.

Anyway, TL;DR: artificial gullibility is a problem that's only going to get worse, so brace yourselves.

/END

I once described the US as a complex distributed system with an attack surface of 300 million people. Gullible LLMs are a new vector for attacking that system, one that targets the weakest links in the chain: the people who don't know enough to distrust those handy-dandy “AI Overview” boxes in their favorite search engine.

7/

LLMs are essentially gullible. And many people, even otherwise smart people, are gullible enough to believe that "AI" distillations of facts are trustworthy. It's a problem of gullibility compounded. But there's also an entire industry that's devoted to trying to convince us NOT to be skeptical of AI, not to see it for what it is -- an often-naive statistical model that can and will increasingly be gamed by bad actors.

6/