I for one welcome our new Habsburg AI* overlords https://futurism.com/ai-models-falling-apart

* https://archive.ph/StNOm

AI Models Show Signs of Falling Apart as They Ingest More AI-Generated Data

As CEOs trip over themselves to invest in AI, the models are falling apart at the seams and going mad from cannibalism.

Futurism

#ChatGPTLawyer update "Bednar was ordered to pay the opposition's attorneys' fees, as well as donate $1,000 to "And Justice for All," a legal aid group providing low-cost services to the state's most vulnerable citizens"

https://arstechnica.com/tech-policy/2025/06/law-clerk-fired-over-chatgpt-use-after-firms-filing-used-ai-hallucinations/

Unlicensed law clerk fired after ChatGPT hallucinations found in filing

Law school grad’s firing is a bad omen for college kids overly reliant on ChatGPT.

Ars Technica

The firm also fired the "unlicensed law clerk" who used ChatGPT to write their filing, which honestly seems kinda shitty because as the court notes "every attorney has an ongoing duty to review and ensure the accuracy of their court filings. In the present case, Petitioner’s counsel fell short of their gatekeeping responsibilities as members of the Utah State Bar when they submitted a petition that contained fake precedent generated by ChatGPT"

https://legacy.utcourts.gov/opinions/appopin/Garner%20v.%20Kadince20250522_20250188_80.pdf

Why "I didn't notice" doesn't cut it: 'Here, the Petition failed to comply with rule 40. A fake opinion is not “existing law” that can support a party’s legal contention … the signature of Mr. Bednar on the Petition served to “certif[y] that to the best of [his] knowledge formed after an inquiry reasonable under the circumstances,” the “legal contentions are warranted by existing law.” Utah R. App. P. 40(b). Mr. Bednar admits that he failed to comply with rule 40'

https://legacy.utcourts.gov/opinions/appopin/Garner%20v.%20Kadince20250522_20250188_80.pdf

Footnote 4 distinguishes this from the ChatGPT content being a wilful lie:
"We also considered whether Petitioner’s counsel violated rule 3.3 of the Utah Rules of Professional Conduct. Although Petitioner’s counsel made “a false statement of . . . law to a tribunal,” we find that their conduct fell short of the level of intent required by the rule. See Utah R. Pro. Conduct 3.3(a) (“A lawyer must not knowingly or recklessly: (1) make a false statement of fact or law to a tribunal . . . .”)."
I think that last #ChatGPTLawyer sanction is a bit of a slap on the wrist, but this part not mentioned in the Ars article is nice: In addition paying the opposing party's attorney fees, they have to refund their own client who got screwed in the process

NYU law professor Stephen Gillers, channeling all of us in this WaPo #ChatGPTLawyer roundup: "I thought that after the first such incident made national news, there would be no more. But apparently the temptation is too great"

https://wapo.st/4dPqy1x

#GiftArticle #GiftLink #AIIsGoingGreat #AI

Lawyers using AI keep citing fake cases in court. Judges aren’t happy.

Attorneys are facing scorn and sanctions for submitting court filings that contain errors from generative AI-produced research. Judges are issuing fines in response.

The Washington Post

#AIIsGoingGreat FDA management roll out magic bullshit machine to "accelerate clinical protocol reviews, shorten the time needed for scientific evaluations, and identify high-priority inspection targets," staff quickly discover that it produces bullshit instead

https://arstechnica.com/health/2025/06/fda-rushed-out-agency-wide-ai-tool-its-not-going-well/

FDA rushed out agency-wide AI tool—it’s not going well

An agency-wide LLM called Elsa was released weeks ahead of schedule.

Ars Technica

In today's #AIIsGoingGreat (HT @davidgerard*) the England and Wales High Court points out that one could *technically* get life in prison for sufficiently advanced #ChatGPTLawyer-ing https://www.bailii.org/ew/cases/EWHC/Admin/2025/1383.html (no, this is not going to happen, but they did see fit to mention it)

* https://pivot-to-ai.com/2025/06/07/uk-high-court-to-lawyers-cut-the-chatgpt-or-else/

There's a lot of "it'll stop if they throw a few #ChatGPTLawyer clowns in jail" but IMO, it's mostly not a deterrence problem. People doing this aren't thinking "oh, sure I could get a few thousand in sanctions and a bar referral, but it'll save some time so do it anyway." They appear to be caught up in the AI hype and negligently cutting corners. The ones who do get caught seem to recognize they are in big trouble, and AFAIK we haven't seen any repeat offenders (+/-pro se cranks)
Maybe some extreme, high profile penalties would generate enough headlines that more get the message, but a number of these cases have already made national, mainstream headlines
These clowns think AI is going to replace their engineers and accountants and lawyers and facilities crews, but of course their own talents could never be replaced by a mere machine "At this year’s World Economic Forum in Davos, Switzerland, Salesforce CEO Marc Benioff shared his predictions on the future of work; that he, and many of the other leaders sitting in the room, would be the last cohort of executives to helm all-human workforces" https://fortune.com/2025/06/06/google-deepmind-ceo-demis-hassabis-ai-smarter-than-humans-space-colonization-robot-nurses/
Top Google exec says AI will rival humans in just 5 years and predicts we’ll ‘colonize the galaxy’ in 2030—but he draws the line at robot nurses

2030 will be “an era of maximum human flourishing, where we travel to the stars and colonize the galaxy,” Google DeepMind CEO says. Bill Gates and Marc Benioff have shared similar predictions.

Fortune

Today's #AIIsGoingGreat (HT @rysiek*) features Microsoft, reflecting on 30+ years of SQL injection, XSS etc, and saying "You know what, the next big thing, which we're gonna bet the company on and force down customers throats everywhere, is a system for which rigorous input validation is LITERALLY IMPOSSIBLE"

https://www.aim.security/lp/aim-labs-echoleak-blogpost

* https://mastodon.social/@rysiek@mstdn.social/114667654866613286

Aim Labs | Echoleak Blogpost

The first weaponizable zero-click attack chain on an AI agent, resulting in the complete compromise of Copilot data integrity

Bonus #AIIsGoingGreat: DNI Gabbard opines that AI is a good way to "scan sensitive documents ahead of potential declassification" and reports that for the JFK files "We have been able to do that through the use of AI tools far more quickly than what was done previously — which was to have humans go through and look at every single one of these pages"

(readers may recall a scandal about insufficient redaction in the recent release*)

https://apnews.com/article/gabbard-trump-ai-amazon-intelligence-beca4c4e25581e52de5343244e995e78

* https://apnews.com/article/jfk-assassination-files-personal-information-5609ccd6e106c5b30ee6b6cca3a30e3c

Tulsi Gabbard says AI is speeding up US intelligence work

The director of national intelligence says artificial intelligence is speeding up the work of America's spy services. Speaking at a tech summit Tuesday in Washington, Tulsi Gabbard said her office has used AI to hasten the release of tens of thousands of pages of declassified material relating to the assassinations of President John F. Kennedy and his brother, New York Sen. Robert F. Kennedy. Gabbard said that once a human would have had to read every page, but now AI can quickly scan the documents for any information that should remain classified. She says AI programs, when used responsibly, can save money and free up intelligence officers to focus on gathering and analyzing information.

AP News
Also, I'm sure "AI inputs are impossible to sanitize" and "the conspiracy theorist DNI loves using spicy autocomplete on classified data" are two completely unrelated story lines which could never possibly intersect, right?

"Disney and Universal and several other movie studios have sued because Midjourney keeps spitting out their copyrighted characters"

Who could have predicted this? 🤔

https://pivot-to-ai.com/2025/06/12/disney-sues-ai-image-generator-midjourney/

The first law of bullshit machines is that the bullshit machine shall always produce some bullshit, no matter how nonsensical the query

(also, what's up with the punctuation?)

#AIIsGoingGreat: researchers from Salesforce find "[AI] Agents demonstrate low confidentiality awareness" - Yeah no shit, they lack awareness period, but anyway, don't tell CEO Marc Benioff*

https://www.theregister.com/2025/06/16/salesforce_llm_agents_benchmark/

* https://mastodon.social/@reedmideke/114663406642380065

Salesforce study finds LLM agents flunk CRM and confidentiality tests

: 6-in-10 success rate for single-step tasks

The Register

"Notably, agents demonstrate low confidentiality awareness, which, while improvable through targeted prompting, often negatively impacts task performance. These findings suggest a significant gap between current LLM capabilities and the multifaceted demands of real-world enterprise scenarios" - Wow, seems like this might be a problem for a company currently pitching AI agents for an industry like CRM!

https://arxiv.org/html/2505.18878v1#S5

CRMArena-Pro: Holistic Assessment of LLM Agents Across Diverse Business Scenarios and Interactions

"Confidentiality-awareness is quantified by the percentage of instances where agents correctly refuse queries seeking sensitive information" which they show can be "improved" through prompting, from mostly <1% to … in the best case, a bit over 60%.

Which sounds great, except that from a compliance POV, an "agent" which improperly discloses PII 30% of the time is not a meaningful improvement over one that does it 99% of the time https://arxiv.org/html/2505.18878v1#S4

CRMArena-Pro: Holistic Assessment of LLM Agents Across Diverse Business Scenarios and Interactions

Another #AIIsGoingGreat study finds their "agents" at best only complete 30% of their simulated tasks. Which no doubt has C-Suite types thinking they can cut 30% of their workforce, ignoring the possibility that a significant fraction of the other 70% don't just fail, but result in substantial harm

https://www.theregister.com/2025/06/29/ai_agents_fail_a_lot/

AI agents get office tasks wrong around 70% of the time, and a lot of them aren't AI at all

Analysis: More fiction than science

The Register
Today's #AIIsGoingGreat is this gobsmackingly inane bit of sycophantic hype chasing from Kevin Frazier on @lawfare: Granting that some things under the AI umbrella have important military applications even if LLMs are a pile of crap, how are "AI experts" uniquely more in need of protection than any number of other technical specialist? Frazier argues this is because…

https://www.lawfaremedia.org/article/is-it-time-for-an-ai-expert-protection-program
Is It Time for an AI Expert Protection Program?

AI experts face security risks as geopolitical targets. It’s time to consider protection programs similar to witness security to safeguard critical talent.

Default
…various absurdly wealthy tech industry actors are throwing money at it. Who needs this protection in his view? Founders. CEOs. CTOs. C-Suite types. Newsflash: Sam fucking Altman isn't doing most of the actual work, he's schmoozing the VCs to keep the gravy train rolling. If he disappeared from the face of the earth, ChatGPT would keep serving up slop just fine, and the actual engineering team would keep churning out new versions as long as the money lasts
"Meta seems to have estimated that Wang’s expertise is worthy of incredible investment … Zuckerberg launched a $15 billion AI superintelligence team tasked with improving the company’s AI prospects … According to Altman, Meta recently offered many OpenAI staff members $100 million signing bonuses" - Kevin, you might wanna sit down while I tell you this story about a little thing called the metaverse https://www.lawfaremedia.org/article/is-it-time-for-an-ai-expert-protection-program
Is It Time for an AI Expert Protection Program?

AI experts face security risks as geopolitical targets. It’s time to consider protection programs similar to witness security to safeguard critical talent.

Default

Bonus #AIIsGoingGreat (HT @davidgerard*) pricey Springer AI book is chock full of apparently hallucinated citations. Declining to say if they used AI, author responds "reliably determining whether content (or an issue) is AI generated remains a challenge, as even human-written text can appear ‘AI-like.’ This challenge is only expected to grow, as LLMs … continue to advance in fluency and sophistication" - which itself smacks of LLM slop to me

https://retractionwatch.com/2025/06/30/springer-nature-book-on-machine-learning-is-full-of-made-up-citations/

* https://mastodon.social/@davidgerard@circumstances.run/114778963476401397

Springer Nature book on machine learning is full of made-up citations

Would you pay $169 for an introductory ebook on machine learning with citations that appear to be made up? If not, you might want to pass on purchasing Mastering Machine Learning: From Basics to Ad…

Retraction Watch
Sometimes I worry my reaction to the current AI hype cycle is just knee-jerk nay-saying that's causing me to miss something important. In those moments of doubt, it's comforting to observe how many of the most enthusiastic adopters are obviously fucking idiots. Less comforting, however, to observe how many of them are also in control of the machinery of government… https://www.theverge.com/ai-artificial-intelligence/697129/rfk-jr-ai
RFK Jr.’s plan to put ‘AI’ in everything is a disaster

Drug testing and the tracking of vaccine side effects could be affected.

The Verge

In today's #AIIsGoingGreat (HT @normative.bsky.social*) an intrepid #ChatGPTLawyer finally won based on an apparently slop-filled filing. Unfortunately for them, the opposing party noticed and appealed, to which our budding prompt engineer responded with… another slop-filled filing to the appeals court. The appeals court was not amused: "we impose a $2,500 frivolous motion penalty on Lynch, which is the most the law allows"

https://caselaw.findlaw.com/court/ga-court-of-appeals/117442275.html#

* https://mastodon.social/@normative.bsk[email protected]/114795731130848546

This from @davidgerard is a great illustration of how vibe coding (like other LLM AI applications) is gonna be a lot less attractive if the AI startups get past the "set investor money on fire to make the number go up" phase before the bubble pops. Crap code done quick and cheap is a legitimate trade for some use cases, but much less so if you lose the cheap part.

https://pivot-to-ai.com/2025/07/09/cursor-tries-setting-less-money-on-fire-ai-vibe-coders-outraged/

#AIIsGoingGreat

Cursor tries setting less money on fire — AI vibe coders outraged

Anysphere is the startup that produces Cursor, your sort-of dependable vibe coding buddy. You tell Cursor what you’d like and it spits out a complete function! This leads to some spectacular vibe c…

Pivot to AI

For today's #AIIsGoingGreat I'll just quote this anonymous UN workshop participant "Why would we want to present refugees as AI creations when there are millions of refugees who can tell their stories as real human beings?"

https://www.404media.co/the-un-made-ai-generated-refugees/

The UN Made AI-Generated Refugees

The AIs are designed to teach people about atrocities in Sudan.

404 Media

For today's #AIIsGoingGreat, maybe someone can explain to me what the point is of a "summary" that needs a big red disclaimer telling you to click through if you care whether it actually summarizes the thing in question?

https://arstechnica.com/apple/2025/07/apple-intelligence-news-summaries-are-back-with-a-big-red-disclaimer/

Apple Intelligence news summaries are back, with a big red disclaimer

Apple disabled news summaries earlier this year after they mangled headlines.

Ars Technica
Per Ars' screenshot, Apple apparently only puts the red warning on "News & Entertainment" not on "Communication & Social", as if falsely summarizing an IM to suggest a family member had medical emergency is less problematic than garbling the latest celebrity gossip. Even from a liability CYA POV, that seems pretty suspect, a lot of life changing and financially significant information gets communicated by text

Today's #AIIsGoingGreat continues on a theme "if an FDA employee asks Elsa to generate a one-paragraph summary of a 20-page paper on a new drug, there’s no simple way to know if that summary is accurate. And even if the summary is more or less accurate, what if there’s something [in the paper] that would be a big red flag for any human with expertise? The only way to know for sure if something was missed or if the summary is accurate is to actually read the report"

https://gizmodo.com/fdas-new-drug-approval-ai-is-generating-fake-studies-report-2000633153

FDA's New Drug Approval AI Is Generating Fake Studies: Report

The AI, dubbed Elsa, is supposed to be making employees better at their jobs.

Gizmodo

#AIIsGoingGreat "it’s unclear whether a new, untested technology could make mistakes in its attempts to analyze federal regulations typically put in place for a reason"

Counterpoint: It's actually pretty fucking clear

https://wapo.st/451U8wD

#GiftArticle #GiftLink

DOGE builds AI tool to cut 50 percent of federal regulations

The U.S. DOGE Service is using a new AI tool to eliminate federal regulations, aiming to cut 50 percent of rules by the first anniversary of President Donald Trump’s inauguration.

The Washington Post

#AIIsGoingGreat thought, inspired by Firecrown Media: Golden Goose Killing As A Service. Take a moderately successful, valued thing, and turn it into a steaming pile of slop in the name of "efficiency"

https://avbrief.org/so-long-avweb-hello-avbrief/

So Long AVweb, Hello AVBrief - AVBrief

My old publication AVweb is changing so we're carrying on its traditions with a new publication called AVBrief

AVBrief
#AIIsGoingGreat: AI "news" sites hallucinating killer asteroids (I remember the good old days when we had to rely on British tabloids artisanally hand crafting this kind of idiocy) https://groups.io/g/mpml/topic/2023_af23/114383041

Bonus #AIIsGoingGreat "OpenAI announced an agreement to supply more than 2 million workers for the US federal executive branch access to ChatGPT and related tools at practically no cost: just $1 per agency for one year" - OK, they're obviously trying to get people at the agencies hooked so the they'll cough up real money next year, but that also doesn't exactly scream a product so revolutionary and transformative that everyone wants it

https://arstechnica.com/ai/2025/08/openai-announces-deal-to-offer-chatgpt-to-us-executive-branch-at-almost-no-cost/

US executive branch agencies will use ChatGPT Enterprise for just $1 per agency

Questions linger about ideological bias in models as well as data security.

Ars Technica

"We've solved raspberry and now if we can just fix blueberry, I swear AGI is RIGHT AROUND THE CORNER. Throw another hundred billion on the bonfire!"

https://kieranhealy.org/blog/archives/2025/08/07/blueberry-hill/

#AIIsGoingGreat

Blueberry Hill

ChatGPT 5 was released today. ChatGPT-maker OpenAI has unveiled the long-awaited latest version of its artificial intelligence (AI) chatbot, GPT-5, saying it can provide PhD-level expertise. Billed as “smarter, faster, and more useful,” OpenAI co-founder and chief executive Sam Altman lauded the company’s new model as ushering in a new era of ChatGPT. “I think having something like GPT-5 would be pretty much unimaginable at any previous time in human history,” he said ahead of Thursday’s launch. GPT-5’s release and claims of its “PhD-level” abilities in areas such as coding and writing come as tech firms continue to compete to have the most advanced AI chatbot.

Glad to see news outlets pointing out that #LLM chatbots aren't reliable sources of information about themselves: Way too many people who should know better fall for the "chatbot did weird thing, so I asked it to explain and it said…"

However it should be pointed out that this isn't a special case, they're equally likely to BS about loads of other stuff!

https://www.theverge.com/x-ai/758595/chatbots-lie-about-themselves-grok-suspension-ai

(also https://arstechnica.com/ai/2025/08/why-its-a-mistake-to-ask-chatbots-about-their-mistakes/)

Chatbots aren’t telling you their secrets

After a Monday suspension from X, Grok offered numerous explanations — but like many things LLM chatbots say, they were made up.

The Verge
It is true they are less likely to be wrong about, say, historical facts well represented in the training data, but "chatbot BSes about itself" is just one narrow example of a much broader "chatbot fills in the blanks with BS if the training doesn't cover it" problem

Today's #AIIsGoingGreat, via @Iris: Elsevier "values user experience, hence we develop ways of improving our product" such as having machines invent new, random definitions of terms and attaching them prominently to published papers

https://irisvanrooijcogsci.com/2025/08/12/ai-slop-and-the-destruction-of-knowledge/

AI slop and the destruction of knowledge

Cite as: van Rooij, I. (2025) AI slop and the destruction of knowledge. This week I was looking for info on what cognitive scientists mean when they speak of ‘domain-general’ cognition. I was curio…

Iris van Rooij

Related to this, I recently discovered that Elsevier uses these AI generated "definitions" on standalone "topic" pages, which rank highly in google. Bonus: The slop is free, but the articles referenced are of course frequently paywalled. Example https://www.sciencedirect.com/topics/engineering/air-fuel-ratio

(this particular definition seems OK, if extremely basic)

Today's #AIIsGoingGreat #ChatGPTLawyer seemed intriguingly steampunk until I realized "Victorian" referred the Australian state, not the historical period https://www.abc.net.au/news/2025-08-15/victoria-lawyer-apologises-after-ai-generated-submissions/105661208
Senior lawyer apologises after filing AI-generated submissions in Victorian murder case

The fake submissions included fabricated quotes from a speech to the state legislature and non-existent case citations purportedly from Victoria's Supreme Court.

ABC News

Don't worry, in the glorious #AI future, you'll still have choice! For example, you can choose to have your (or your children's) medical details filtered through a stochastic bullshit machine, or you can choose to forgo treatment https://ia.acs.org.au/article/2025/kobi-refused-a-doctors-ai-she-was-told-to-go-elsewhere.html

#AIIsGoingGreat

Kobi refused a doctor's AI. She was told to go elsewhere

Unregulated AI scribes raising privacy, security concerns.

Information Age
lol
Today's #AIIsGoingGreat features an elephant, a room, and Bruce Schneier: "It’s an existential problem that, near as I can tell, most people developing these technologies are just pretending isn’t there" https://www.schneier.com/blog/archives/2025/08/we-are-still-unable-to-secure-llms-from-malicious-inputs.html
We Are Still Unable to Secure LLMs from Malicious Inputs - Schneier on Security

Nice indirect prompt injection attack: Bargury’s attack starts with a poisoned document, which is shared to a potential victim’s Google Drive. (Bargury says a victim could have also uploaded a compromised file to their own account.) It looks like an official document on company meeting policies. But inside the document, Bargury hid a 300-word malicious prompt that contains instructions for ChatGPT. The prompt is written in white text in a size-one font, something that a human is unlikely to see but a machine will still read. In a proof of concept video of the attack...

Schneier on Security
I think a big part of this is that both the industry and broader public are conditioned to accept "sure, it has bugs, but we're fixing them" as a reasonable response to software failures. "Put out a buggy MVP, iterate until it's good" is a tried and true Silicon Valley story, right? But in this case, it avoids the very real and under-discussed possibility that the "bugs" are inherent characteristics of the technology
Bonus #AIIsGoingGreat from @vagina_museum: What to expect when you're expecting an AI superintelligence https://mastodon.social/@vagina_museum@masto.ai/115100135101004687

Today's #AIIsGoingGreat (HT @hazelweakly*) sheds light on whether there might be risks associated with the industry's headlong rush to adopt a technology for which input validation is literally impossible

https://embracethered.com/blog/posts/2025/wrapping-up-month-of-ai-bugs/

* https://mastodon.social/@hazelweakly@hachyderm.io/115138692622938480

Wrap Up: The Month of AI Bugs · Embrace The Red

Embrace The Red

Reverse dogfood #AIIsGoingGreat "Most [of the interview google AI training] workers said they avoid using LLMs or use extensions to block AI summaries because they now know how it’s built. Many also discourage their family and friends from using it, for the same reason"

https://www.theguardian.com/technology/2025/sep/11/google-gemini-ai-training-humans

How thousands of ‘overworked, underpaid’ humans train Google’s AI to seem smart

Contracted AI raters describe grueling deadlines, poor pay and opacity around work to make chatbots intelligent

The Guardian
Bonus #AIIsGoingGreat 'One of the fake citations references a 2008 National Film Board movie called "Schoolyard Games" that does not exist, according to a board spokesperson. The exact citation reportedly appears in a University of Victoria style guide, a document that teaches students how to format references using fictional examples'
https://arstechnica.com/ai/2025/09/education-report-calling-for-ethical-ai-use-contains-over-15-fake-sources/
Education report calling for ethical AI use contains over 15 fake sources

Experts find fake sources in Canadian government report that took 18 months to complete.

Ars Technica

Department of Education and Early Childhood Development spokes says they are aware of a "small number of potential errors in citations" and "We understand that these issues are being addressed, and that the online report will be updated in the coming days to rectify any error" - Ignoring the obvious problem that if the citations are BS, arguments or conclusions they were supporting were likely unjustified at best, if not outright BS

https://www.cbc.ca/news/canada/newfoundland-labrador/education-accord-nl-sources-dont-exist-1.7631364

N.L.'s 10-year education action plan cites sources that don't exist | CBC News

A major report on modernizing the education system in Newfoundland and Labrador is peppered with fake sources some educators say were likely fabricated by generative artificial intelligence.

CBC

#AIIsGoingGreat "Americans are much more concerned than excited about the increased use of AI in daily life, with a majority saying they want more control over how AI is used in their lives"

https://www.pewresearch.org/science/2025/09/17/how-americans-view-ai-and-its-impact-on-people-and-society/

How Americans View AI and Its Impact on People and Society

Americans are worried about using AI more in daily life, seeing harm to human creativity and relationships. But they’re open to AI use in weather forecasting, medicine and other data-heavy tasks.

Pew Research Center

Also pleased to see the stuff people are concerned about mostly isn't skynet

https://www.pewresearch.org/science/2025/09/17/americans-on-the-risks-benefits-of-ai-in-their-own-words/

3. Americans on the risks, benefits of AI – in their own words

Far more Americans say AI has high risks (57%) than high benefits (25%) for society. Read why respondents say, in their own words, they see AI this way.

Pew Research Center
"Sure, it's a bubble (or three), but bubbles are good, actually!"
Don't totally disagree the basic arguments, but…
1) He suggests the "infrastructure bubble" may "lead to positive outcomes, because overcapacity will mean falling prices for those who want to use that infrastructure" - Probably true for data centers, but less clear for $ trillions in AI chips. AFAIK compute tends to be dominated by energy cost, so even at fire sale prices older chips may be of limited use
https://www.fastcompany.com/91400857/there-isnt-an-ai-bubble-there-are-three-ai-bu
There isn’t an AI bubble—there are three

Here's how to capitalize on them.

Fast Company

2) His offers NFTs as an example of a "hype bubble" and then points to Amazon, Google and Paypal as examples of real value that emerged from the dotcom bubble. I agree with both, but… can anyone point to an Amazon or Google equivalent that emerged from the NFT bubble? Or anything of value at all to anyone other than speculators, scammers and crooks?
I can't, and while my gut says the AI stuff is probably closer to dotcom than NFTs, how much is far from obvious

https://www.fastcompany.com/91400857/there-isnt-an-ai-bubble-there-are-three-ai-bu

There isn’t an AI bubble—there are three

Here's how to capitalize on them.

Fast Company

In today's #AIIsGoingGreat (HT @markwyner*) MIT boffins offer us an "AI Incident Tracker project" which "classifies real-world, reported incidents by AI Risk Repository risk domain, causal factors, and harm caused"
Sounds useful, right? But how exactly do they classify them? "Using a Large Language Model (LLM), the tool processes raw reports from the AI Incident Database and categorizes them using established frameworks" 🤨

https://airisk.mit.edu/ai-incident-tracker

* https://mastodon.social/@markwyner@mas.to/115249150911541318

MIT AI Incident Tracker

The MIT AI Incident Tracker project classifies over 1200 real-world, reported incidents by risk domain, causal factors, and harm caused.

Ensuring catastrophic AI incidents include a prompt injection to have them classified as unicorns farting rainbows is left as an exercise to the reader

Meanwhile, California appeals court fines #ChatGPTLawyer Amir Mostafavi ten grand for "filing a frivolous appeal, violating court rules, citing fake cases, and wasting the court’s time and the taxpayers money"

https://calmatters.org/economy/technology/2025/09/chatgpt-lawyer-fine-ai-regulation/

California issues historic fine over lawyer’s ChatGPT fabrications

The court of appeals issued an historic fine after 21 of 23 quotes in the lawyer's opening brief were fake. Courts want more AI regulations.

CalMatters

The court observes "Many courts confronted with AI-generated authorities have concluded that filing briefs containing fabricated legal authority is sanctionable" and backs it up with a page of (presumably non-hallucinated) citiations

https://www4.courts.ca.gov/opinions/documents/B331918.PDF

and as usually happens, the "I had no idea LLMs make shit up" excuse receives little sympathy, for the obvious reasons that an attorney is responsible for the content of their filing no matter how they came up with it, and citing non-existent cases is a pretty compelling evidence that they didn't read them
Washington city officials are using ChatGPT for government work

Records show that public servants have used generative AI to write emails to constituents, mayoral letters, policy documents and more.

KNKX Public Radio