This suggests a good question to ask healthcare providers who are falling over themselves to shove* #AI into everything: Does your malpractice insurance cover AI related errors?
* e.g. https://mastodon.social/@reedmideke/115047332404466187
This suggests a good question to ask healthcare providers who are falling over themselves to shove* #AI into everything: Does your malpractice insurance cover AI related errors?
* e.g. https://mastodon.social/@reedmideke/115047332404466187
In today's #AIIsGoingGreat (ht @GossiTheDog*) the Economist brings us this chart of Goldman Sachs index of companies with the "largest estimated potential change to baseline earnings from AI adoption via increased productivity" vs the S&P500
* https://mastodon.social/@GossiTheDog@cyberplace.social/115638306307720246
The same article notes that "According to a poll of executives by Deloitte, a consultancy, and the Centre for AI, Management and Organisation at Hong Kong University, 45% reported returns from AI initiatives that were below their expectations"
Loyal readers may recall that Deloitte themselves was recently featured in this thread* charging big bucks for hallucinated BS
I feel like the various surveys about "what percent of workers use AI at work" would be more informative if "use" was defined more specifically. You can hardly use Microsoft or Google's business suites without stepping in AI somewhere, but that doesn't mean users are benefiting from it. The Census Bureau's "in producing goods and services" qualification may be confusing, but it at least it suggests the AI has to have some material role
#AIIsGoingGreat. See replies in thread for more greatness. Apologists will say stuff like "that's a silly question, just look at the calendar on your phone, no one uses google for that" but I'm sorry, if you dumped a few hundred billion dollars into this magic answer machine and you can't get it to stop doing stupid shit like this, I'm gonna be a *little* skeptical that it's ready to run health care, solve climate change and revolutionize science
Bonus #AIIsGoingGreat - With the power of #AI, I predict that by 2026 there will be at least 30 "r"s in "year"
(I did this a second time in a new private window because I realized after I closed the first one I should see what the supposedly supporting link was…)
edit: one more for old times sake
RE: https://infosec.exchange/@timb_machine/115657160615736269
A succinct "WTF are we even doing here" that applies to vast swathes of the use cases GenAI is being hyped for, to which the entire industry has no coherent response 👇
https://mastodon.social/@timb_machine@infosec.exchange/115657160659807487
The optimistic scenario here is this is just a cynical attempt to jump on the AI gravy train knowing the bubble will pop before anything gets built…
https://www.404media.co/nuclear-rian-bahran-iaea-international-symposium-on-artificial-intelligence/
Today's #AIIsGoingGreat, courtesy of the UK NCSC: "SQL injection can be properly mitigated with parameterised queries, but there's a good chance prompt injection will never be properly mitigated in the same way. The best we can hope for is reducing the likelihood or impact of attacks" - Will this affect the market's willingness to throw more billions on the #LLM bonfire? Probably not, but only time will tell
¯\_(ツ)_/¯
https://www.ncsc.gov.uk/blog-post/prompt-injection-is-not-sql-injection
RE: https://infosec.exchange/@malwarejake/115695789576148295
Infosec industry AI hype: AI agents automating full attack chains, AI polymorphic code, SKYNET!!
Infosec AI reality: Using AI products as a glorified pastebin
https://mastodon.social/@malwarejake@infosec.exchange/115695789609999560
'… saying that developers should not "intentionally encode partisan or ideological judgments" into a chatbot's outputs' - Ah yes, text generating machine derived from a statistical soup of vast amounts of human-written text must not "encode partisan or ideological judgments" totally realistic requirement there guys, definitely not a transparent attempt to impose your own preferred partisan and ideological preferences
Everyone is rightly mocking the fact the bot suggests apt on Fedora, but I would also like to point out that the wifi "diagnosis" is crap. Sure, checking for updated firmware and drivers is reasonable, but it's vanishingly unlikely the problem is insufficient system RAM or "aggressive driver configuration", whatever the heck that would be
https://fedoramagazine.org/find-out-how-your-fedora-system-really-feels-with-the-linux-mcp-server/
Today's #AIIsGoingGreat "Inasmuch as you are going to have to double-check every “fact” that “AI”” provides to you, why not eliminate the middleman and just not use “AI”? It’s not decreasing your workload here, it’s adding to it"
#AIIsGoingGreat "Asked whether Taiwan is a country, it would repeatedly lower its voice and insist that “Taiwan is an inalienable part of China. That is an established fact” or a variation of that sentiment"
There's a whiff of "OMG X is rotting kids brains" moral panic about this, but also, the entire concept of an #LLM powered toy just seems like asking for trouble in a whole bunch of ways. Even ignoring the possible psychological impacts, it's indisputable the industry does not have a way to create reliable guardrails, and internet connected toys generally have a long history of egregious privacy violations
This piece is genuinely good rundown of how LLMs are BS machines, and then goes on to say "you can use LLMs to get incredible gains in how fast you can do tasks like research, writing code, etc. assuming that you are doing it mindfully with the pitfalls in mind" 🥴
I remain unconvinced that the productivity gains survive the "you must have a subject matter expert verify that every single thing it did" overhead, but YMMV, I guess
Bonus #AIIsGoingGreat makes all those billions in capex worth it
Merriam-Webster’s 2025 word of the year is “slop.” The word was first used in the 1700s to mean soft mud. It evolved more generally to mean something of little value. The definition has since expanded to mean “digital content of low quality that is produced usually in quantity by means of artificial intelligence.” In other words, as the dictionary's president says, “absurd videos, weird advertising images, cheesy propaganda, fake news that looks real, junky AI-written digital books." The dictionary has selected one word every year since 2003 to capture and make sense of the current moment.
RE: https://researchbuzz.masto.host/@researchbuzz/115782207999466966
On the bright side, if you've just got to set a trillion dollars and change on fire, doing it in a way that doesn't require blowing a bunch of people up is an improvement of sorts, I suppose
https://mastodon.social/@Researchbuzz@researchbuzz.masto.host/115782208145625995
I have mixed feelings about Zitron rants but anyway, collateralized GPU obligations* in the wild! "As a result, these neoclouds are forced to raise billions of dollars in debt, which they collateralize using the GPUs they already have, along with contracts from customers, which they use to buy more GPUs"
https://www.wheresyoured.at/the-enshittifinancial-crisis/#coreweave-is-still-a-time-bomb-by-the-way

Soundtrack: Lynyrd Skynyrd — Free Bird This piece is over 19,000 words, and took me a great deal of writing and research. If you liked it, please subscribe to my premium newsletter. It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’
Behold the awesome power of #AI, the product of billions of dollars in GPU time, simplifying your life by precisely summarizing the most pertinent information
Google AI assures me* that "microslop coprolite" is a recent viral internet joke, and with your help, we can retcon that into reality
* with hallucitations that in no way support the claim
Shot: " xAI announced Tuesday it raised $20 billion in an upsized Series E funding round, exceeding its $15 billion target"
Second chaser: "Nvidia and Cisco Investments joined as strategic investors" in the series E above
(is it good or bad if the CSAM generating machine is propped up circular investing? 🤔 )
#AIIsGoingGreat "The problem? Neither of those places exist. Nor do a handful of the other spots marked on the National Weather Service’s forecast graphic, riddled with spelling and geographical errors that the agency confirmed were linked to the use of generative AI"
Thing that boggles my mind about this is NWS has tools for generating forecast maps. It's one of their core products!
Shot: 'OpenAI announced ChatGPT Health, a dedicated section of the AI chatbot designed for “health and wellness conversations” intended to connect a user’s health and medical records to the chatbot in a secure way'
Chaser: 'There’s a well-worn pattern in the development of AI chatbots. Researchers discover a vulnerability and exploit it to do something bad. The platform introduces a guardrail that stops the attack from working. Then, researchers devise a simple tweak that once again imperils chatbot users'
Second chaser: Ars also notes that the ChatGPT Health announcement fine print tells you not to use it for actual health stuff: "Health is designed to support, not replace, medical care. It is not intended for diagnosis or treatment. Instead, it helps you navigate everyday questions and understand patterns over time—not just moments of illness—so you can feel more informed and prepared for important medical conversations"
So uh, very common (and odious) tech industry employment / contract language includes something along the lines of "every single thought you have at $company belongs to $company and don't you dare even dream of remembering it outside $company" 🤨
Sharing work products with a subsequent employer would seem risky for the contractor, even if they scrub obviously proprietary or personal data
Wired article goes into this a bit more, and yeah, conclusion seems to be it's sketchy AF. Also "An individual who helps companies sell assets after they go out of business told WIRED that a representative of OpenAI inquired about obtaining data from these firms, providing that personally identifiable information could be removed … The source said they chose not to pursue the idea because they were not confident that personal information could be completely scrubbed"
https://www.wired.com/story/openai-contractor-upload-real-work-documents-ai-agents/
"The headline is, ‘It’s because of AI,’ but if you read what they actually say, they say, ‘We expect that AI will cover this work.’ Hadn’t done it. They’re just hoping. And they’re saying it because that’s what they think investors want to hear"
"If AI were already replacing labour at scale, productivity growth should be accelerating. Generally, it isn’t"
RE: https://infosec.exchange/@lcamtuf/115877508380778967
Today's #AIIsGoingGreat - A content farm of circuit "schematics" which, despite being utterly incoherent and wildly dangerous to anyone who tried to build them, is presumably somehow generating ad revenue
https://mastodon.social/@lcamtuf@infosec.exchange/115877508836294554
Bonus #AIIsGoingGreat - When The Guardian asked google about "AI summary" results for medical topics being wildly wrong and dangerous*, Google spokes hand-waved about how "the vast majority of its AI Overviews were factual and helpful, and it continuously made quality improvements"
Days later, The Verges finds some were quietly pulled https://www.theverge.com/news/860356/google-pulls-alarming-dangerous-medical-ai-overviews
As ever, when confronted with the fact their product produced dangerous BS, Google follows the industry standard response of band-aiding over specific instances that cause negative publicity, because they have absolutely no idea how to solve the general case
https://www.theguardian.com/technology/2026/jan/11/google-ai-overviews-health-guardian-investigation
RE: https://infosec.exchange/@trailofbits/115887719230076703
"These attacks, which are functionally similar to cross-site scripting (XSS) and cross-site request forgery (CSRF), resurface decades-old patterns of vulnerabilities that the web security community spent years building effective defenses against" - I am not as optimistic as the authors that these things can be mitigated while still producing a useful product, but in any case, maybe we should figure that out before shoving an "AI browser" down everyone's throat ¯\_(ツ)_/¯
https://blog.trailofbits.com/2026/01/13/lack-of-isolation-in-agentic-browsers-resurfaces-old-vulnerabilities/
…like a rube, I clicked continue and it popped out a "Gemini" sidebar, which on reflection, is flawless, 10/10, no notes
RE: https://hachyderm.io/@jzb/115904367311777942
A funny thing about this is that the AI companies *have* had this problem, at least a little bit: Sweatshop clickworkers using LLMs to do their mind-numbing AI training tasks (e.g. https://www.theverge.com/features/23764584/ai-artificial-intelligence-data-notation-labor-scale-surge-remotasks-openai-chatbots
https://www.technologyreview.com/2023/06/22/1075405/the-people-paid-to-train-ai-are-outsourcing-their-work-to-ai/)
https://mastodon.social/@jzb@hachyderm.io/115904367360108202
Congratulations #AI slopartists, you've managed to screw users, open source maintainers, and actual security researchers, while also not getting paid for your slop reports https://github.com/curl/curl/pull/20312
DOT general counsel Gregory Zerzan, on using spicy autocomplete to generate transport regulations: "We don’t need the perfect rule on XYZ. We don’t even need a very good rule on XYZ. We want good enough. We’re flooding the zone" - I'm sure the skeptics will come up with all kinds of objections about how this will go terribly wrong, but on the bright side, it should be a gold mine of obscure loopholes and hilarious litigation

The Transportation Department, which oversees the safety of airplanes, cars and pipelines, plans to use Google Gemini to draft new regulations. “We don’t need the perfect rule,” said DOT’s top lawyer. “We want good enough.”
RE: https://tech.lgbt/@JadedBlueEyes/115968835396049874
"Revise README for clarity on project status and purpose" =
s/Production ready/Proof of concept/ 🤨
Gotta wonder how often this kind of thing is happening in corporate settings without the immediate blowback. Valley management types love their "minimum viable product" so it's easy to see them being really impressed with a slopped-together demo that superficially appears to work, even if the code is an unmaintainable dead end
https://mastodon.social/@JadedBlueEyes@tech.lgbt/115968835523075743
Kevin Weil, vice president of OpenAI for Science: "I think 2026 will be for AI and science what 2025 was for AI in software engineering" - Drowning the practitioners in slop?
#AIIsGoingGreat "Other doctors described chatbots flattering the grandiose tendencies of patients with personality disorders, or advising patients with autism to put themselves in dangerous social situations. Others said they saw patients’ interactions with chatbots as an addiction" - Who could have predicted that an obsequious bullshit machine would do such things?
https://www.nytimes.com/2026/01/26/us/chatgpt-delusions-psychosis.html?unlocked_article_code=1.IlA.gSBg.pTvMJekxwEk7&smid=url-share
"According to O’Reilly, Moltbook is built on a simple open source database software that wasn’t configured correctly and left the API keys of every agent registered on the site exposed in a public database"
Who could have predicted that vibe coding enthusiasts would make such trivial yet catastrophic errors?
¯\_(ツ)_/¯
#AIIsGoingGreat "We honestly don’t know if it’s possible to build an LLM, where trusted commands and untrusted inputs are processed through the same channel, which is immune to prompt injection attacks" - I think Schneier and Raghavan undersell the problem (there's at least reasonable grounds to believe it's impossible) but in any case it seems like it might be unwise to set trillions on fire shoving LLMs into everything before figuring that out
¯\_(ツ)_/¯
#AIIsGoingGreat shot: "I didn’t write a single line of code for @ moltbook. I just had a vision for the technical architecture, and AI made it a reality"
Chaser: "…what we discovered tells a different story - and provides a fascinating look into what happens when applications are vibe-coded into existence without proper security controls"
https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys
One might wonder how this relates to the earlier 404media story* … Oh "Security researcher Jameson O'Reilly also discovered the underlying Supabase misconfiguration, which has been reported by 404 Media. Wiz's post shares our experience independently finding the issue, the full -- unreported -- scope of impact, and how we worked Moltbook's maintainer to improve security" that's right, multiple people discovered it independently within days
Keep dunking on #AI every time you interact with Microsoft, it's working!

People familiar with Microsoft's plans say that the company moving to streamline or remove certain Copilot integrations across in-box apps like Notepad and Paint in 2026, after pushback from users.