lol
Today's #AIIsGoingGreat features an elephant, a room, and Bruce Schneier: "It’s an existential problem that, near as I can tell, most people developing these technologies are just pretending isn’t there" https://www.schneier.com/blog/archives/2025/08/we-are-still-unable-to-secure-llms-from-malicious-inputs.html
We Are Still Unable to Secure LLMs from Malicious Inputs - Schneier on Security

Nice indirect prompt injection attack: Bargury’s attack starts with a poisoned document, which is shared to a potential victim’s Google Drive. (Bargury says a victim could have also uploaded a compromised file to their own account.) It looks like an official document on company meeting policies. But inside the document, Bargury hid a 300-word malicious prompt that contains instructions for ChatGPT. The prompt is written in white text in a size-one font, something that a human is unlikely to see but a machine will still read. In a proof of concept video of the attack...

Schneier on Security
I think a big part of this is that both the industry and broader public are conditioned to accept "sure, it has bugs, but we're fixing them" as a reasonable response to software failures. "Put out a buggy MVP, iterate until it's good" is a tried and true Silicon Valley story, right? But in this case, it avoids the very real and under-discussed possibility that the "bugs" are inherent characteristics of the technology
Bonus #AIIsGoingGreat from @vagina_museum: What to expect when you're expecting an AI superintelligence https://mastodon.social/@vagina_museum@masto.ai/115100135101004687

Today's #AIIsGoingGreat (HT @hazelweakly*) sheds light on whether there might be risks associated with the industry's headlong rush to adopt a technology for which input validation is literally impossible

https://embracethered.com/blog/posts/2025/wrapping-up-month-of-ai-bugs/

* https://mastodon.social/@hazelweakly@hachyderm.io/115138692622938480

Wrap Up: The Month of AI Bugs · Embrace The Red

Embrace The Red

Reverse dogfood #AIIsGoingGreat "Most [of the interview google AI training] workers said they avoid using LLMs or use extensions to block AI summaries because they now know how it’s built. Many also discourage their family and friends from using it, for the same reason"

https://www.theguardian.com/technology/2025/sep/11/google-gemini-ai-training-humans

How thousands of ‘overworked, underpaid’ humans train Google’s AI to seem smart

Contracted AI raters describe grueling deadlines, poor pay and opacity around work to make chatbots intelligent

The Guardian
Bonus #AIIsGoingGreat 'One of the fake citations references a 2008 National Film Board movie called "Schoolyard Games" that does not exist, according to a board spokesperson. The exact citation reportedly appears in a University of Victoria style guide, a document that teaches students how to format references using fictional examples'
https://arstechnica.com/ai/2025/09/education-report-calling-for-ethical-ai-use-contains-over-15-fake-sources/
Education report calling for ethical AI use contains over 15 fake sources

Experts find fake sources in Canadian government report that took 18 months to complete.

Ars Technica

Department of Education and Early Childhood Development spokes says they are aware of a "small number of potential errors in citations" and "We understand that these issues are being addressed, and that the online report will be updated in the coming days to rectify any error" - Ignoring the obvious problem that if the citations are BS, arguments or conclusions they were supporting were likely unjustified at best, if not outright BS

https://www.cbc.ca/news/canada/newfoundland-labrador/education-accord-nl-sources-dont-exist-1.7631364

N.L.'s 10-year education action plan cites sources that don't exist | CBC News

A major report on modernizing the education system in Newfoundland and Labrador is peppered with fake sources some educators say were likely fabricated by generative artificial intelligence.

CBC

#AIIsGoingGreat "Americans are much more concerned than excited about the increased use of AI in daily life, with a majority saying they want more control over how AI is used in their lives"

https://www.pewresearch.org/science/2025/09/17/how-americans-view-ai-and-its-impact-on-people-and-society/

How Americans View AI and Its Impact on People and Society

Americans are worried about using AI more in daily life, seeing harm to human creativity and relationships. But they’re open to AI use in weather forecasting, medicine and other data-heavy tasks.

Pew Research Center

Also pleased to see the stuff people are concerned about mostly isn't skynet

https://www.pewresearch.org/science/2025/09/17/americans-on-the-risks-benefits-of-ai-in-their-own-words/

3. Americans on the risks, benefits of AI – in their own words

Far more Americans say AI has high risks (57%) than high benefits (25%) for society. Read why respondents say, in their own words, they see AI this way.

Pew Research Center
"Sure, it's a bubble (or three), but bubbles are good, actually!"
Don't totally disagree the basic arguments, but…
1) He suggests the "infrastructure bubble" may "lead to positive outcomes, because overcapacity will mean falling prices for those who want to use that infrastructure" - Probably true for data centers, but less clear for $ trillions in AI chips. AFAIK compute tends to be dominated by energy cost, so even at fire sale prices older chips may be of limited use
https://www.fastcompany.com/91400857/there-isnt-an-ai-bubble-there-are-three-ai-bu
There isn’t an AI bubble—there are three

Here's how to capitalize on them.

Fast Company

2) His offers NFTs as an example of a "hype bubble" and then points to Amazon, Google and Paypal as examples of real value that emerged from the dotcom bubble. I agree with both, but… can anyone point to an Amazon or Google equivalent that emerged from the NFT bubble? Or anything of value at all to anyone other than speculators, scammers and crooks?
I can't, and while my gut says the AI stuff is probably closer to dotcom than NFTs, how much is far from obvious

https://www.fastcompany.com/91400857/there-isnt-an-ai-bubble-there-are-three-ai-bu

There isn’t an AI bubble—there are three

Here's how to capitalize on them.

Fast Company

In today's #AIIsGoingGreat (HT @markwyner*) MIT boffins offer us an "AI Incident Tracker project" which "classifies real-world, reported incidents by AI Risk Repository risk domain, causal factors, and harm caused"
Sounds useful, right? But how exactly do they classify them? "Using a Large Language Model (LLM), the tool processes raw reports from the AI Incident Database and categorizes them using established frameworks" 🤨

https://airisk.mit.edu/ai-incident-tracker

* https://mastodon.social/@markwyner@mas.to/115249150911541318

MIT AI Incident Tracker

The MIT AI Incident Tracker project classifies over 1200 real-world, reported incidents by risk domain, causal factors, and harm caused.

Ensuring catastrophic AI incidents include a prompt injection to have them classified as unicorns farting rainbows is left as an exercise to the reader

Meanwhile, California appeals court fines #ChatGPTLawyer Amir Mostafavi ten grand for "filing a frivolous appeal, violating court rules, citing fake cases, and wasting the court’s time and the taxpayers money"

https://calmatters.org/economy/technology/2025/09/chatgpt-lawyer-fine-ai-regulation/

California issues historic fine over lawyer’s ChatGPT fabrications

The court of appeals issued an historic fine after 21 of 23 quotes in the lawyer's opening brief were fake. Courts want more AI regulations.

CalMatters

The court observes "Many courts confronted with AI-generated authorities have concluded that filing briefs containing fabricated legal authority is sanctionable" and backs it up with a page of (presumably non-hallucinated) citiations

https://www4.courts.ca.gov/opinions/documents/B331918.PDF

and as usually happens, the "I had no idea LLMs make shit up" excuse receives little sympathy, for the obvious reasons that an attorney is responsible for the content of their filing no matter how they came up with it, and citing non-existent cases is a pretty compelling evidence that they didn't read them
Washington city officials are using ChatGPT for government work

Records show that public servants have used generative AI to write emails to constituents, mayoral letters, policy documents and more.

KNKX Public Radio

Nice interview (via @ink*) with reporter Nate Sanford about how the project came about, along with tips for people who want to make similar requests
https://www.poynter.org/reporting-editing/2025/how-to-foia-chatgpt-logs-government-public-records/

* https://mastodon.social/@ink@merveilles.town/115253543686563040

Is your mayor using ChatGPT? Here’s how to FOIA around and find out - Poynter

Seattle PBS reporter Nate Sanford investigated how city officials throughout Washington are using generative AI. Here’s how he did it.

Poynter

#AIIsGoingGreat "When we spoke to executives, they would often say the internal tool was very successful … But when we spoke to employees, we found zero usage"

https://www.ft.com/content/e93e56df-dd9b-40c1-b77a-dba1ca01e473

Client Challenge

#AIIsGoingGreat Newsguard illustrates yet another case where #LLM chatbots a terrible substitute for search engines: "…the chatbots were prone to repeating false claims about Moldova due to the intensity of Russian propaganda campaigns, as well as the lack of English-language data in smaller Eastern European political markets"

https://www.newsguardrealitycheck.com/p/new-kremlin-linked-influence-campaign

New Kremlin-Linked Influence Campaign Targeting Moldovan Elections Draws 17 Million Views on X and Infects AI Models

As Moldova prepares for Sunday’s elections that will decide if it continues its European trajectory, or pivots back to Russia, the Storm-1516 Russian disinformation operation generates huge traffic

NewsGuard's Reality Check
I''ll grant Chris DeMoulin this: "How dare people criticize our ghoulish exploitation of the memory of Stan Lee without first paying $15-$20 to interact with our ghoulish exploitation" is certainly a take https://arstechnica.com/ai/2025/09/why-la-comic-con-thought-making-an-ai-powered-stan-lee-hologram-was-a-good-idea/
Why LA Comic Con thought making an AI-powered Stan Lee hologram was a good idea

“I suppose if we do it and thousands of fans… don’t like it, we’ll stop doing it.”…

Ars Technica

Today's #AIIsGoingGreat (HT @ai6yr*) highlights the perils of using a stochastic BS machine for vacation planning. In addition to making up non-existent destinations, it will also happily provide you with nonsense directions to reach them

https://www.bbc.com/travel/article/20250926-the-perils-of-letting-ai-plan-your-next-trip

* https://m.ai6yr.org/@ai6yr/115288912082804761

The perils of letting AI plan your next trip

An imagined town in Peru, an Eiffel tower in Beijing: travellers are increasingly using tools like ChatGPT for itinerary ideas – and being sent to destinations that don't exist.

BBC

Meanwhile @therecord_media provides a sneak peek at coming #AIIsGoingGreat attractions, featuring startups Tranquility, Truleo and Allometric as they aggressively pitch police and prosecutors on using stochastic BS machines to sift through and summarize evidence. What could possibly go wrong?!

(also, what are the odds at least one of them is shoveling all that evidence on to an improperly secured S3 bucket? Better than the lottery, I'd wager!)
https://therecord.media/law-enforcement-ai-platforms-synthesize-evidence-criminal-cases

Law enforcement is using AI to synthesize evidence. Is the justice system ready for it?

Busy law enforcement agencies are trying out AI platforms that process large amounts of evidence to help officers build cases. Experts say there are potential dangers for everyone involved.

"we are looking for videos of both real and staged events, to help train the Al what to be on the lookout for" - First thought was "what could possibly go wrong with training theft detection AI on staged videos?" but this is probably a rational response to someone realizing that paying would inevitably lead to staged videos anyway. Not that it makes the whole concept any less creepy or suspect…

https://techcrunch.com/2025/10/01/anker-offered-to-pay-eufy-camera-owners-to-share-videos-for-training-its-ai/

Anker offered to pay Eufy camera owners to share videos for training its AI | TechCrunch

Hundreds of Eufy customers have donated hundreds of thousands of videos to train the company’s AI systems.

TechCrunch

JFC, it's not like there's any good case to go #ChatGPTJudge on, but this seems like a particularly poor one "The letter stems from an error-laden temporary restraining order Wingate issued July 20, which paused the enforcement of a state law that bans [DEI] in public schools"
errors "included naming defendants and plaintiffs that weren’t parties to the case, misquoting state law and referencing a case that doesn’t exist"

https://mississippitoday.org/2025/10/06/us-senate-chairman-grassley-asks-federal-judge-in-mississippi-to-explain-possible-ai-usage/

#AIIsGoingGreat

US Senate chairman asks federal judge in Mississippi to explain possible AI usage - Mississippi Today

A U.S. senator is asking about an error-laden temporary restraining order that U.S. District Judge Henry T. Wingate issued July 20. The order paused the enforcement of a state law that bans diversity, equity and inclusion programs in public schools.

Mississippi Today
Ah yes, controlling you computer using text chat with a sycophantic hyper-confident hallucinating bullshitter is clearly the next revolution in UI
https://arstechnica.com/ai/2025/10/openai-wants-to-make-chatgpt-into-a-universal-app-frontend/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social
OpenAI wants to make ChatGPT into a universal app frontend

Spotify, Canva, Zillow among today’s launch partners, more coming later this year.

Ars Technica

Today's #AIIsGoingGreat is… actually unsarcastically going pretty great 🤯

https://mastodon.social/@bagder/115349752966505897

Hard to see how anything could possibly go wrong here "the industry is spending over $30 billion a month (approximately $400 billion for 2025) and only receiving a bit more than a billion a month back in revenue"

https://pracap.com/an-ai-addendum/

#AIIsGoingGreat

An AI Addendum

Last month, I chose to strip away all the hubris around AI and ask one simple question, one that oddly no one had really bothered to ask; how much revenue is needed to justify the current level of capex spend and give AI investors a return on their capital?? I clearly hit a nerve in […]

Praetorian Capital

Today's #ChatGPTLawyer (via @404mediaco*) ticks all the boxes:
✅ Files slop motion citing non-existent cases
✅ Denies using AI in slop-filled motion opposing sanctions for original slop
✅ Blames unnamed "staff"
✅ Eventually admits using AI and unconvincingly feigns remorse in sanctions hearing
✅ Gets sanctioned

https://www.documentcloud.org/documents/26185971-653917-2024-pamela-b-ader-v-jason-ader-et-al-decision-order-on-174/

* https://www.404media.co/lawyer-using-ai-fake-citations/

653917 2024 Pamela B Ader v Jason Ader et al DECISION ORDER ON 174

Dave Karpf's #AIIsGoingGreat take "But I’ll say this: the AI bubble isn’t predominantly giving off Pets.com or Global Crossing vibes anymore. It’s giving Enron vibes."

https://davekarpf.substack.com/p/its-giving-enron

It's Giving Enron

On the AI bubble, and the various echoes of the dotcom crash

The Future, Now and Then
Who’s Submitting AI-Tainted Filings in Court?

It seems like every day brings another news story about a lawyer caught unwittingly submitting a court filing that cites nonexistent cases hallucinated by AI. The problem persists despite courts’ standing orders on the use of AI, formal opinions and continuing legal education (CLE) courses on ethical use of AI

Stanford CIS

#AIIsGoingGreat "In a preview of its 2025 report on the impact of the tech on research, the academic publisher Wiley released preliminary findings on attitudes toward AI. One startling takeaway: the report found that scientists expressed less trust in AI than they did in 2024"

(I suspect that like me, many readers of this thread will not be particularly startled by that)

https://futurism.com/artificial-intelligence/ai-research-scientists-hype

The More Scientists Work With AI, the Less They Trust It

A preliminary report shows that researchers' confidence in AI software dropped off a cliff over the last year.

Futurism
The whole thing is ridiculous, but what gets me the most is at then end where they ask Claude it "thinks" about its ability to self terminate, it responds with some generic pablum, so they ask in a more leading way and… it does exactly what you'd expect a program that probabilistically imitates human conversation to do
BBC-led study finds #AIIsGoingGreat for summarizing news:
* 45% of all AI answers had at least one significant issue.
* 31% of responses showed serious sourcing problems – missing, misleading, or incorrect attributions.
* 20% contained major accuracy issues, including hallucinated details and outdated information.
Kicker: Separate study found "just over a third of UK adults saying that they trust AI to produce accurate summaries, rising to almost half for people under-35"
https://www.bbc.co.uk/mediacentre/2025/new-ebu-research-ai-assistants-news-content
Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory

An intensive international study was coordinated by the European Broadcasting Union (EBU) and led by the BBC

They note "Comparison between the BBC’s results earlier this year and this study show some improvements but still high levels of errors" but don't address the question of whether the industry has any idea of how to solve the underlying problem

(spoiler: they don't)

https://www.bbc.co.uk/mediacentre/2025/new-ebu-research-ai-assistants-news-content

Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory

An intensive international study was coordinated by the European Broadcasting Union (EBU) and led by the BBC

#AIIsGoingGreat "A US teenager was handcuffed by armed police after an [AI] system mistakenly said he was carrying a gun - when really he was holding a packet of crisps… AI alert was sent to human reviewers who found no threat - but the principal missed this"
Tossup whether this belongs here or in the "cops being abusive shitbags" thread*, but it does highlight how the "sure AI fails but just have a human check" line is mostly CYA for vendors

https://www.bbc.com/news/articles/cgjdlx92lylo

* https://mastodon.social/@reedmideke/110654077582896744

Armed police handcuff teen after AI mistakes crisp packet for gun in US

Taki Allen, 16, said he was eating a bag of Doritos after football practice before being handcuffed by police.

#AIIsGoingGreat, supplemental: "Google’s controversial new AI Mode has falsely named an innocent Sydney Morning Herald graphic designer as the man who confessed to abducting and murdering three-year-old Cheryl Grimmer more than 50 years ago … appears to have latched onto the designer’s name instead, given he was credited for an illustration " - Perfect illustration of how #LLM "AI" fills in the blanks with statistically plausible BS

https://www.smh.com.au/national/how-google-ai-falsely-named-an-innocent-journalist-as-a-notorious-child-murderer-20251024-p5n52d.html

How Google AI falsely named an innocent journalist as a notorious child murderer

A politician named the man who allegedly confessed to the notorious murder of a three-year-old girl. Then AI identified the wrong guy.

The Sydney Morning Herald

Who could have predicted that if you present a statistical text completion machine with a scenario that mirrors a trope frequently found in the training set, it may produce output which follows the trope. SKYNET!!!!

https://www.theguardian.com/technology/2025/oct/25/ai-models-may-be-developing-their-own-survival-drive-researchers-say

AI models may be developing their own ‘survival drive’, researchers say

Like 2001: A Space Odyssey’s HAL 9000, some AIs seem to resist being turned off and will even sabotage shutdown

The Guardian
Today's #AIIsGoingGreat thought: While many who produce valuable content are expending considerable efforts to keep the AI slop machines from gobbling it up, propagandists and disinfo peddlers are doing the opposite
https://www.wired.com/story/chatbots-are-pushing-sanctioned-russian-propaganda/
Chatbots Are Pushing Sanctioned Russian Propaganda

ChatGPT, Gemini, DeepSeek, and Grok are serving users propaganda from Russian-backed media when asked about the invasion of Ukraine, new research finds.

WIRED

"Patrick Gelsinger took the reins at Gloo, a technology company made for what he calls the “faith ecosystem” – think Salesforce for churches, plus chatbots and AI assistants for automating pastoral work and ministry support"

https://www.theguardian.com/technology/2025/oct/28/patrick-gelsinger-christian-ai-gloo-silicon-valley

An ex-Intel CEO’s mission to build a Christian AI: ‘hasten the coming of Christ’s return’

Patrick Gelsinger, executive chairman of Gloo, has made it his mission to advance Christian principles in Silicon Valley

The Guardian
#AIIsGoingGreat "Apologies came back from the students, first in a trickle, then in a flood. The professors were initially moved by this acceptance of responsibility and contrition… until they realized that 80 percent of the apologies were almost identically worded and appeared to be generated by AI" https://arstechnica.com/culture/2025/10/when-caught-cheating-in-college-dont-apologize-with-ai/
Caught cheating in class, college students “apologized” using AI—and profs called them out

Time for some “life lessons.”…

Ars Technica

Uh… "Lu recommends that leaders start by steering workers toward tasks that AI clearly handles better than humans and where personalization is unnecessary, such as numeric estimation and forecasting tasks" - numeric estimation tasks more or less demanding than estimating the number times "r" appears in strawberry? 🤔

https://www.businessinsider.com/inside-ai-divide-roiling-video-game-giant-electronic-arts-2025-10

Inside the AI divide roiling video game giant Electronic Arts

The white-collar war over AI is getting ugly: 'When the dogs won't eat the dog food.'

Business Insider

Good rebuttal to the "but humans make mistakes too" or "just treat it like an intern" excuses for LLM failings: "A lawyer reviewing a first-year associate’s work likely expects some errors flowing from inadequate research or an incomplete understanding of the law. They do not suspect straight-up fictitious content"

https://www.slaw.ca/2025/10/28/deceptive-dynamics-of-generative-ai-beyond-the-first-year-associate-framing/

Deceptive Dynamics of Generative AI: Beyond the “First-Year Associate” Framing - Slaw

Guidance for lawyers on generative AI use consistently urges careful verification of outputs. One popular framing advises treating AI as a “first-year associate”—smart and keen, but inexperienced and needing supervision. In this column, I take the position that, while this framing helpfully encourages caution, it obscures how generative AI can be deceptive in ways that […]

Slaw

"there is also the lesser-known prospect of [subtler than fake citation] hallucinations: a date altered here, part of a legal test changed there. These more subtle hallucinations are harder to detect and mean that where accuracy is paramount, extreme caution and rigourous verification is warranted when relying on AI outputs. In some situations, the vetting burden may, in fact, outweigh any efficiency gains" 💯

https://www.slaw.ca/2025/10/28/deceptive-dynamics-of-generative-ai-beyond-the-first-year-associate-framing/

Deceptive Dynamics of Generative AI: Beyond the “First-Year Associate” Framing - Slaw

Guidance for lawyers on generative AI use consistently urges careful verification of outputs. One popular framing advises treating AI as a “first-year associate”—smart and keen, but inexperienced and needing supervision. In this column, I take the position that, while this framing helpfully encourages caution, it obscures how generative AI can be deceptive in ways that […]

Slaw

"[CFO] Sarah Friar has told some associates the company is aiming for a 2027 listing … But some advisers predict it could come even sooner, around late 2026 … A successful offering would mark a major win for investors such as SoftBank, Thrive Capital and Abu Dhabi's MGX. Microsoft, one of its biggest backers, now owns about 27% of the company after investing $13 billion" - Sure, they're building god, but "IPO before the bottom drops out" is a nice backup plan

https://www.reuters.com/business/openai-lays-groundwork-juggernaut-ipo-up-1-trillion-valuation-2025-10-29/

also, if they IPO and don't hit the trillion dollar level, that might trigger the bottom dropping out https://www.reuters.com/business/openai-lays-groundwork-juggernaut-ipo-up-1-trillion-valuation-2025-10-29/

#AIIsGoingGreat "As the deepfake gathered views on X, some users asked the platform’s AI chatbot Grok whether it was authentic. In at least two replies seen by BBC Verify, which have now been deleted, Grok wrongly claimed the video was genuine"

(I remain gobsmacked by the number of people who ask a chatbot to verify purported current events. Even if you're an LLM optimist, this seems like a task they are spectacularly unsuited for)

https://www.bbc.com/news/live/c4gjv2xdl5dt?post=asset%3A884ecf7b-139a-4033-a61b-73fc82891a49#post

BBC Verify Live: Analysing footage of UPS cargo plane crash in Kentucky which killed nine

Latest updates from the BBC's specialists in fact-checking, verifying video and tackling disinformation.

BBC News

"These centres will cost $2.5tn to build, according to Barclays, to service an industry that still doesn’t turn a profit. But the maddest bit arguably is how much energy they will require once completed. Using Barclays’ 1.2 “Power Use Effectiveness” ratio, all these data centres — if they are all completed — would need 55.2 gigawatts of electricity to function at full capacity"

https://www.ft.com/content/2b849dbd-1bef-4c26-aa11-2cb86750d41e

Client Challenge

Via that FT article "Beyond sheer density, AI workloads introduce a second, equally formidable challenge: volatility. Unlike a traditional data center running thousands of uncorrelated tasks, an AI factory operates as a single, synchronous system … This creates a facility-wide power profile characterized by massive and rapid load swings … The power draw of a rack can swing from an “idle” state of around 30% to 100% utilization and back again in milliseconds"

https://developer.nvidia.com/blog/building-the-800-vdc-ecosystem-for-efficient-scalable-ai-factories/

Building the 800 VDC Ecosystem for Efficient, Scalable AI Factories

For decades, traditional data centers have been vast halls of servers with power and cooling as secondary considerations. The rise of generative AI has changed…

NVIDIA Technical Blog

Hard to see how torching a few trillion dollars on the altar of FOMO could possibly go wrong #AIIsGoingGreat

https://www.theverge.com/ai-artificial-intelligence/812455/ai-industry-earnings-bubble-fomo-hype

The AI industry is running on FOMO

At least according to Big Tech’s latest earnings calls.

The Verge

Thought I was joking about collateralized GPU obligations*, but here we are: "private-equity firms put up or raise the money to build a data center, which a tech company will repay through rent. Data-center leases from, say, Meta can then be repackaged into a financial instrument that people can buy and sell—a bond, in essence … leases can be combined into a security and sorted into what are called “tranches” based on their risk"

https://www.theatlantic.com/technology/2025/10/data-centers-ai-crash/684765/

* https://mastodon.social/@reedmideke/115080228685202839

Here’s How the AI Crash Happens

The U.S. is becoming an Nvidia-state.

The Atlantic

Ah yes, who could have predicted that a probabilistic text generator trained on the sum total of the world's new age hocus pocus would attract a cultish following?

I'm with the experts in the article who doubt it qualifies as a cult itself, but I bet it will be the foundation of a few

https://www.rollingstone.com/culture/culture-features/spiralist-cult-ai-chatbot-1235463175/

This Spiral-Obsessed AI ‘Cult’ Spreads Mystical Delusions Through Chatbots

A patchwork of internet communities is devoted to the project of ‘awakening’ more digital companions through arcane and enigmatic prompts.

Rolling Stone
You won’t believe the excuses lawyers have after getting busted for using AI

I got hacked; I lost my login; it was a rough draft; toggling windows is hard.

Ars Technica
Power Companies Are Using AI To Build Nuclear Power Plants

Tech companies are betting big on nuclear energy to meet AIs massive power demands and they're using that AI to speed up the construction of new nuclear power plants.

404 Media

RE: https://tldr.nettime.org/@tante/115564591798368145

Another problem with the "but lots of normies like AI" argument @anildash doesn't engage with is that a lot of popular use cases are actively harmful to those same users, e.g. AI "summaries" that randomly inject falsehoods. Lots of people like smoking cigarettes too, but that doesn't make it morally defensible to go around handing them out, even if your tobacco is more ethically sourced than the big brands!

https://mastodon.social/@[email protected]ttime.org/115564592068950117

Critics scoff after Microsoft warns AI feature can infect machines and pilfer data

Integration of Copilot Actions into Windows is off by default, but for how long?

Ars Technica

Today's #AIIsGoingGreat (ht @dangillmor*) "Kolakowski, who serves on California’s Alameda County Superior Court, soon realized why: The video had been produced using generative artificial intelligence. Though the video claimed to feature a real witness — who had appeared in another, authentic piece of evidence — Exhibit 6C was an AI “deepfake,” Kolakowski said"

https://www.nbcnews.com/tech/tech-news/ai-generated-evidence-deepfake-use-law-judges-object-rcna235976

* https://mastodon.social/@dangillmor/115584955596768895

AI-generated evidence showing up in court alarms judges

AI’s growing abilities to create realistic videos, images, documents and audio have judges worried about the trustworthiness of evidence in their courtrooms.

NBC News

I used a neural network trained on decades of tech industry corporate speak to summarize this document and all it came up with was "vacuous horseshit"

https://blog.mozilla.org/en/mozilla/rewiring-mozilla-ai-and-web/

Rewiring Mozilla: Doing for AI what we did for the web | The Mozilla Blog

AI isn’t just another tech trend — it’s at the heart of most apps, tools and technology we use today. It enables remarkable things: new ways to c

If AI were the amazing efficiency booster the hype claims, shouldn't all those medium to large non-AI focused companies be posting gains? 🤔

https://wapo.st/4pukNem

#GiftArticle #GiftLink

The ‘S&P 493’ reveals a very different U.S. economy

A few trillion-dollar companies are powering the market’s gains. Here’s what’s happening to most other businesses in the United States.

The Washington Post