Bonus #AIIsGoingGreat (HT @acdha*) features Cascade PBS and KNKX using public records requests to get local Washington governments' #LLM chat logs
* https://mastodon.social/@acdha@code4lib.social/115253478967518855
Nice interview (via @ink*) with reporter Nate Sanford about how the project came about, along with tips for people who want to make similar requests
https://www.poynter.org/reporting-editing/2025/how-to-foia-chatgpt-logs-government-public-records/
* https://mastodon.social/@ink@merveilles.town/115253543686563040
#AIIsGoingGreat "When we spoke to executives, they would often say the internal tool was very successful … But when we spoke to employees, we found zero usage"
https://www.ft.com/content/e93e56df-dd9b-40c1-b77a-dba1ca01e473
#AIIsGoingGreat Newsguard illustrates yet another case where #LLM chatbots are a terrible substitute for search engines: "…the chatbots were prone to repeating false claims about Moldova due to the intensity of Russian propaganda campaigns, as well as the lack of English-language data in smaller Eastern European political markets"
https://www.newsguardrealitycheck.com/p/new-kremlin-linked-influence-campaign
As Moldova prepares for Sunday’s elections that will decide if it continues its European trajectory, or pivots back to Russia, the Storm-1516 Russian disinformation operation generates huge traffic
Today's #AIIsGoingGreat (HT @ai6yr*) highlights the perils of using a stochastic BS machine for vacation planning. In addition to making up non-existent destinations, it will also happily provide you with nonsense directions to reach them
https://www.bbc.com/travel/article/20250926-the-perils-of-letting-ai-plan-your-next-trip
Meanwhile @therecord_media provides a sneak peek at coming #AIIsGoingGreat attractions, featuring startups Tranquility, Truleo and Allometric as they aggressively pitch police and prosecutors on using stochastic BS machines to sift through and summarize evidence. What could possibly go wrong?!
(also, what are the odds at least one of them is shoveling all that evidence on to an improperly secured S3 bucket? Better than the lottery, I'd wager!)
https://therecord.media/law-enforcement-ai-platforms-synthesize-evidence-criminal-cases
"we are looking for videos of both real and staged events, to help train the AI what to be on the lookout for" - First thought was "what could possibly go wrong with training theft detection AI on staged videos?" but this is probably a rational response to someone realizing that paying would inevitably lead to staged videos anyway. Not that it makes the whole concept any less creepy or suspect…
JFC, it's not like there's any good case to go #ChatGPTJudge on, but this seems like a particularly poor one "The letter stems from an error-laden temporary restraining order Wingate issued July 20, which paused the enforcement of a state law that bans [DEI] in public schools"
errors "included naming defendants and plaintiffs that weren’t parties to the case, misquoting state law and referencing a case that doesn’t exist"
A U.S. senator is asking about an error-laden temporary restraining order that U.S. District Judge Henry T. Wingate issued July 20. The order paused the enforcement of a state law that bans diversity, equity and inclusion programs in public schools.
Today's #AIIsGoingGreat is… actually unsarcastically going pretty great 🤯
Hard to see how anything could possibly go wrong here "the industry is spending over $30 billion a month (approximately $400 billion for 2025) and only receiving a bit more than a billion a month back in revenue"
Last month, I chose to strip away all the hubris around AI and ask one simple question, one that oddly no one had really bothered to ask; how much revenue is needed to justify the current level of capex spend and give AI investors a return on their capital?? I clearly hit a nerve in […]
Today's #ChatGPTLawyer (via @404mediaco*) ticks all the boxes:
✅ Files slop motion citing non-existent cases
✅ Denies using AI in slop-filled motion opposing sanctions for original slop
✅ Blames unnamed "staff"
✅ Eventually admits using AI and unconvincingly feigns remorse in sanctions hearing
✅ Gets sanctioned
Dave Karpf's #AIIsGoingGreat take "But I’ll say this: the AI bubble isn’t predominantly giving off Pets.com or Global Crossing vibes anymore. It’s giving Enron vibes."
Excellent deep dive into who goes #ChatGPTLawyer by @riana
https://cyberlaw.stanford.edu/blog/2025/10/whos-submitting-ai-tainted-filings-in-court/

It seems like every day brings another news story about a lawyer caught unwittingly submitting a court filing that cites nonexistent cases hallucinated by AI. The problem persists despite courts’ standing orders on the use of AI, formal opinions and continuing legal education (CLE) courses on ethical use of AI
#AIIsGoingGreat "In a preview of its 2025 report on the impact of the tech on research, the academic publisher Wiley released preliminary findings on attitudes toward AI. One startling takeaway: the report found that scientists expressed less trust in AI than they did in 2024"
(I suspect that like me, many readers of this thread will not be particularly startled by that)
https://futurism.com/artificial-intelligence/ai-research-scientists-hype
RE: https://mastodon.social/@lawfare/115390271811742445
Pass the bong, @lawfare https://mastodon.social/@lawfare/115390271811742445
They note "Comparison between the BBC’s results earlier this year and this study show some improvements but still high levels of errors" but don't address the question of whether the industry has any idea of how to solve the underlying problem
(spoiler: they don't)
https://www.bbc.co.uk/mediacentre/2025/new-ebu-research-ai-assistants-news-content
#AIIsGoingGreat "A US teenager was handcuffed by armed police after an [AI] system mistakenly said he was carrying a gun - when really he was holding a packet of crisps… AI alert was sent to human reviewers who found no threat - but the principal missed this"
Tossup whether this belongs here or in the "cops being abusive shitbags" thread*, but it does highlight how the "sure AI fails but just have a human check" line is mostly CYA for vendors
#AIIsGoingGreat, supplemental: "Google’s controversial new AI Mode has falsely named an innocent Sydney Morning Herald graphic designer as the man who confessed to abducting and murdering three-year-old Cheryl Grimmer more than 50 years ago … appears to have latched onto the designer’s name instead, given he was credited for an illustration " - Perfect illustration of how #LLM "AI" fills in the blanks with statistically plausible BS
Who could have predicted that if you present a statistical text completion machine with a scenario that mirrors a trope frequently found in the training set, it may produce output which follows the trope. SKYNET!!!!
"Patrick Gelsinger took the reins at Gloo, a technology company made for what he calls the “faith ecosystem” – think Salesforce for churches, plus chatbots and AI assistants for automating pastoral work and ministry support"
Uh… "Lu recommends that leaders start by steering workers toward tasks that AI clearly handles better than humans and where personalization is unnecessary, such as numeric estimation and forecasting tasks" - are numeric estimation tasks more or less demanding than estimating the number of times "r" appears in strawberry? 🤔
https://www.businessinsider.com/inside-ai-divide-roiling-video-game-giant-electronic-arts-2025-10
Good rebuttal to the "but humans make mistakes too" or "just treat it like an intern" excuses for LLM failings: "A lawyer reviewing a first-year associate’s work likely expects some errors flowing from inadequate research or an incomplete understanding of the law. They do not suspect straight-up fictitious content"
Guidance for lawyers on generative AI use consistently urges careful verification of outputs. One popular framing advises treating AI as a “first-year associate”—smart and keen, but inexperienced and needing supervision. In this column, I take the position that, while this framing helpfully encourages caution, it obscures how generative AI can be deceptive in ways that […]
"there is also the lesser-known prospect of [subtler than fake citation] hallucinations: a date altered here, part of a legal test changed there. These more subtle hallucinations are harder to detect and mean that where accuracy is paramount, extreme caution and rigourous verification is warranted when relying on AI outputs. In some situations, the vetting burden may, in fact, outweigh any efficiency gains" 💯
"[CFO] Sarah Friar has told some associates the company is aiming for a 2027 listing … But some advisers predict it could come even sooner, around late 2026 … A successful offering would mark a major win for investors such as SoftBank, Thrive Capital and Abu Dhabi's MGX. Microsoft, one of its biggest backers, now owns about 27% of the company after investing $13 billion" - Sure, they're building god, but "IPO before the bottom drops out" is a nice backup plan
#AIIsGoingGreat "As the deepfake gathered views on X, some users asked the platform’s AI chatbot Grok whether it was authentic. In at least two replies seen by BBC Verify, which have now been deleted, Grok wrongly claimed the video was genuine"
(I remain gobsmacked by the number of people who ask a chatbot to verify purported current events. Even if you're an LLM optimist, this seems like a task they are spectacularly unsuited for)
https://www.bbc.com/news/live/c4gjv2xdl5dt?post=asset%3A884ecf7b-139a-4033-a61b-73fc82891a49#post
"These centres will cost $2.5tn to build, according to Barclays, to service an industry that still doesn’t turn a profit. But the maddest bit arguably is how much energy they will require once completed. Using Barclays’ 1.2 “Power Use Effectiveness” ratio, all these data centres — if they are all completed — would need 55.2 gigawatts of electricity to function at full capacity"
https://www.ft.com/content/2b849dbd-1bef-4c26-aa11-2cb86750d41e
Via that FT article "Beyond sheer density, AI workloads introduce a second, equally formidable challenge: volatility. Unlike a traditional data center running thousands of uncorrelated tasks, an AI factory operates as a single, synchronous system … This creates a facility-wide power profile characterized by massive and rapid load swings … The power draw of a rack can swing from an “idle” state of around 30% to 100% utilization and back again in milliseconds"
Hard to see how torching a few trillion dollars on the altar of FOMO could possibly go wrong #AIIsGoingGreat
https://www.theverge.com/ai-artificial-intelligence/812455/ai-industry-earnings-bubble-fomo-hype
Thought I was joking about collateralized GPU obligations*, but here we are: "private-equity firms put up or raise the money to build a data center, which a tech company will repay through rent. Data-center leases from, say, Meta can then be repackaged into a financial instrument that people can buy and sell—a bond, in essence … leases can be combined into a security and sorted into what are called “tranches” based on their risk"
https://www.theatlantic.com/technology/2025/10/data-centers-ai-crash/684765/
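The repackaging the Atlantic piece describes — pooling lease payments and slicing them into risk-ranked tranches — works like any other securitization waterfall. A toy sketch (all tenants, figures, and tranche sizes invented for illustration):

```python
# Toy model of the data-center lease securitization described above:
# pool the rent streams, then pay tranches in order of seniority
# (a "waterfall"). All figures are invented for illustration.
leases = {"tenant_a": 60.0, "tenant_b": 30.0, "tenant_c": 10.0}  # $M rent per period
pool = sum(leases.values())  # 100.0 total cash collected this period

# Senior holders are paid first; the equity tranche absorbs any shortfall,
# which is why it carries the highest risk (and nominal yield).
tranches = [("senior", 70.0), ("mezzanine", 20.0), ("equity", 15.0)]

remaining = pool
payouts = {}
for name, owed in tranches:
    paid = min(owed, remaining)
    payouts[name] = paid
    remaining -= paid

print(payouts)  # senior and mezzanine paid in full; equity eats the 5.0 shortfall
```

The snark about "collateralized GPU obligations" lands because this structure is only as sound as the tenants' rent: if an AI tenant walks, the losses cascade up from the equity tranche, 2008-style.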
Ah yes, who could have predicted that a probabilistic text generator trained on the sum total of the world's new age hocus pocus would attract a cultish following?
I'm with the experts in the article who doubt it qualifies as a cult itself, but I bet it will be the foundation of a few
https://www.rollingstone.com/culture/culture-features/spiralist-cult-ai-chatbot-1235463175/
#ChatGPTLawyer roundup from @arstechnica (leaning heavily on Damien Charlotin's excellent database*)
Today's #AIIsGoingGreat: Going so great we gotta wear shades
https://www.404media.co/power-companies-are-using-ai-to-build-nuclear-power-plants/
RE: https://tldr.nettime.org/@tante/115564591798368145
Another problem with the "but lots of normies like AI" argument @anildash doesn't engage with is that a lot of popular use cases are actively harmful to those same users, e.g. AI "summaries" that randomly inject falsehoods. Lots of people like smoking cigarettes too, but that doesn't make it morally defensible to go around handing them out, even if your tobacco is more ethically sourced than the big brands!
https://mastodon.social/@tante@tldr.nettime.org/115564592068950117
Achievement unlocked: Scoffing critic
Today's #AIIsGoingGreat (ht @dangillmor*) "Kolakowski, who serves on California’s Alameda County Superior Court, soon realized why: The video had been produced using generative artificial intelligence. Though the video claimed to feature a real witness — who had appeared in another, authentic piece of evidence — Exhibit 6C was an AI “deepfake,” Kolakowski said"
I used a neural network trained on decades of tech industry corporate speak to summarize this document and all it came up with was "vacuous horseshit"
https://blog.mozilla.org/en/mozilla/rewiring-mozilla-ai-and-web/
If AI were the amazing efficiency booster the hype claims, shouldn't all those medium to large non-AI focused companies be posting gains? 🤔
In today's #AIIsGoingGreat (ht @daedalus) Deloitte charges Newfoundland and Labrador $1.6 million (CAD, presumably) for a report with AI hallucinated citations, and then insists it "stands by its conclusions and findings" and just needs to fix the citations. As ever in these cases, the question of how they came up with the assertions the citations supposedly supported is not addressed
https://theindependent.ca/news/lji/deloitte-breaks-silence-on-n-l-healthcare-report/
* https://mastodon.social/@daedalus@eigenmagic.net/115619136163500955
#AIIsGoingGreat 'Instead, per [District Judge Sara Ellis] footnote, body camera footage revealed that an agent “asked ChatGPT to compile a narrative for a report based off of a brief sentence about an encounter and several images.” The officer reportedly submitted the output from ChatGPT as the report'
https://gizmodo.com/judge-says-ice-used-chatgpt-to-write-use-of-force-reports-2000692370
This suggests a good question to ask healthcare providers who are falling over themselves to shove* #AI into everything: Does your malpractice insurance cover AI related errors?
* e.g. https://mastodon.social/@reedmideke/115047332404466187
In today's #AIIsGoingGreat (ht @GossiTheDog*) the Economist brings us this chart of Goldman Sachs index of companies with the "largest estimated potential change to baseline earnings from AI adoption via increased productivity" vs the S&P500
* https://mastodon.social/@GossiTheDog@cyberplace.social/115638306307720246
The same article notes that "According to a poll of executives by Deloitte, a consultancy, and the Centre for AI, Management and Organisation at Hong Kong University, 45% reported returns from AI initiatives that were below their expectations"
Loyal readers may recall that Deloitte themselves was recently featured in this thread* charging big bucks for hallucinated BS
I feel like the various surveys about "what percent of workers use AI at work" would be more informative if "use" was defined more specifically. You can hardly use Microsoft or Google's business suites without stepping in AI somewhere, but that doesn't mean users are benefiting from it. The Census Bureau's "in producing goods and services" qualification may be confusing, but at least it suggests the AI has to have some material role
#AIIsGoingGreat. See replies in thread for more greatness. Apologists will say stuff like "that's a silly question, just look at the calendar on your phone, no one uses google for that" but I'm sorry, if you dumped a few hundred billion dollars into this magic answer machine and you can't get it to stop doing stupid shit like this, I'm gonna be a *little* skeptical that it's ready to run health care, solve climate change and revolutionize science
Bonus #AIIsGoingGreat - With the power of #AI, I predict that by 2026 there will be at least 30 "r"s in "year"
(I did this a second time in a new private window because I realized after I closed the first one I should see what the supposedly supporting link was…)
edit: one more for old times' sake
RE: https://infosec.exchange/@timb_machine/115657160615736269
A succinct "WTF are we even doing here" that applies to vast swathes of the use cases GenAI is being hyped for, to which the entire industry has no coherent response 👇
https://mastodon.social/@timb_machine@infosec.exchange/115657160659807487
The optimistic scenario here is this is just a cynical attempt to jump on the AI gravy train knowing the bubble will pop before anything gets built…
https://www.404media.co/nuclear-rian-bahran-iaea-international-symposium-on-artificial-intelligence/
Today's #AIIsGoingGreat, courtesy of the UK NCSC: "SQL injection can be properly mitigated with parameterised queries, but there's a good chance prompt injection will never be properly mitigated in the same way. The best we can hope for is reducing the likelihood or impact of attacks" - Will this affect the market's willingness to throw more billions on the #LLM bonfire? Probably not, but only time will tell
¯\_(ツ)_/¯
https://www.ncsc.gov.uk/blog-post/prompt-injection-is-not-sql-injection
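The NCSC contrast holds because parameterised queries keep code and data structurally separate — something prompts can't do, since an LLM's "instructions" and "data" travel in the same token stream. A minimal sqlite3 sketch of the SQL side of that distinction (table and payload are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Vulnerable: attacker input is spliced into the query string,
# so the payload rewrites the query's structure and matches everything.
rows_bad = conn.execute(
    "SELECT * FROM users WHERE name = '%s'" % user_input
).fetchall()

# Mitigated: a parameterised query binds the input as pure data;
# the payload is compared literally and matches nothing.
rows_good = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(rows_bad), len(rows_good))  # 1 0
```

There is no prompt-injection equivalent of that `?` placeholder, which is the NCSC's whole point.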
RE: https://infosec.exchange/@malwarejake/115695789576148295
Infosec industry AI hype: AI agents automating full attack chains, AI polymorphic code, SKYNET!!
Infosec AI reality: Using AI products as a glorified pastebin
https://mastodon.social/@malwarejake@infosec.exchange/115695789609999560