RE: https://infosec.exchange/@malwarejake/115695789576148295

Infosec industry AI hype: AI agents automating full attack chains, AI polymorphic code, SKYNET!!
Infosec AI reality: Using AI products as a glorified pastebin

https://mastodon.social/@malwarejake@infosec.exchange/115695789609999560

'… saying that developers should not "intentionally encode partisan or ideological judgments" into a chatbot's outputs' - Ah yes, a text-generating machine derived from a statistical soup of vast amounts of human-written text must not "encode partisan or ideological judgments". Totally realistic requirement there, guys, and definitely not a transparent attempt to impose your own preferred partisan and ideological preferences

https://www.reuters.com/world/us/us-mandate-ai-vendors-measure-political-bias-federal-sales-2025-12-11/

Everyone is rightly mocking the fact the bot suggests apt on Fedora, but I would also like to point out that the wifi "diagnosis" is crap. Sure, checking for updated firmware and drivers is reasonable, but it's vanishingly unlikely the problem is insufficient system RAM or "aggressive driver configuration", whatever the heck that would be

https://fedoramagazine.org/find-out-how-your-fedora-system-really-feels-with-the-linux-mcp-server/

Today's #AIIsGoingGreat "Inasmuch as you are going to have to double-check every “fact” that “AI” provides to you, why not eliminate the middleman and just not use “AI”? It’s not decreasing your workload here, it’s adding to it"

https://mastodon.social/@scalzi/115713494369759678

#AIIsGoingGreat "Asked whether Taiwan is a country, it would repeatedly lower its voice and insist that “Taiwan is an inalienable part of China. That is an established fact” or a variation of that sentiment"

https://www.nbcnews.com/tech/tech-news/ai-toys-gift-present-safe-kids-robot-child-miko-grok-alilo-miiloo-rcna246956

AI kids' toys give explicit and dangerous responses in tests

AI-powered kids' toys like Miko 3 have hit shelves this holiday season, claiming to rely on sophisticated chatbots to animate interactive robots for children.

NBC News

There's a whiff of "OMG X is rotting kids' brains" moral panic about this, but also, the entire concept of an #LLM powered toy just seems like asking for trouble in a whole bunch of ways. Even ignoring the possible psychological impacts, it's indisputable the industry does not have a way to create reliable guardrails, and internet-connected toys generally have a long history of egregious privacy violations

https://pirg.org/edfund/resources/ai-toys/

The risks of AI toys for kids

AI toys use chatbots to have conversations with kids. With new tech comes new risks, from inappropriate content to long-term social developmental harms.

U.S. PIRG Education Fund

This piece is a genuinely good rundown of how LLMs are BS machines, but then goes on to say "you can use LLMs to get incredible gains in how fast you can do tasks like research, writing code, etc. assuming that you are doing it mindfully with the pitfalls in mind" 🥴

I remain unconvinced that the productivity gains survive the "you must have a subject matter expert verify that every single thing it did" overhead, but YMMV, I guess

https://blog.kagi.com/llms

LLMs are bullshitters. But that doesn't mean they're not useful | Kagi Blog

*Note:* This is a personal essay by Matt Ranger, Kagi’s head of ML. In 1986, Harry Frankfurt wrote On Bullshit ( https://en.wikipedia.org/wiki/On_Bullshit ).

'Slop' is Merriam-Webster's 2025 word of the year

Merriam-Webster’s 2025 word of the year is “slop.” The word was first used in the 1700s to mean soft mud. It evolved more generally to mean something of little value. The definition has since expanded to mean “digital content of low quality that is produced usually in quantity by means of artificial intelligence.” In other words, as the dictionary's president says, “absurd videos, weird advertising images, cheesy propaganda, fake news that looks real, junky AI-written digital books." The dictionary has selected one word every year since 2003 to capture and make sense of the current moment.

AP News
#AIIsGoingGreat, getting high on your own supply edition
"… and explained that AIs have emotions and that tech firms were working to create a new form of sentience, according to Discord logs and conversations with members of the group" https://www.404media.co/anthropic-exec-forces-ai-chatbot-on-gay-discord-community-members-flee/
Anthropic Exec Forces AI Chatbot on Gay Discord Community, Members Flee

“We’re bringing a new kind of sentience into existence,” Anthropic's Jason Clinton said after launching the bot.

404 Media
#AIIsGoingGreat "The benefits of using AI in the workplace are not always obvious. According to employees, the most common AI adoption challenge is “unclear use case or value proposition.” Even among those who report using AI, only 16% strongly agree that the AI tools provided by their organization are useful for their work" https://www.gallup.com/workplace/691643/work-nearly-doubled-two-years.aspx
#AIIsGoingGreat "Together, these AI slop channels have amassed more than 63bn views and 221 million subscribers, generating about $117m (£90m) in revenue each year, according to estimates" https://www.theguardian.com/technology/2025/dec/27/more-than-20-of-videos-shown-to-new-youtube-users-are-ai-slop-study-finds

RE: https://researchbuzz.masto.host/@researchbuzz/115782207999466966

On the bright side, if you've just got to set a trillion dollars and change on fire, doing it in a way that doesn't require blowing a bunch of people up is an improvement of sorts, I suppose

https://mastodon.social/@Researchbuzz@researchbuzz.masto.host/115782208145625995

I have mixed feelings about Zitron rants but anyway, collateralized GPU obligations* in the wild! "As a result, these neoclouds are forced to raise billions of dollars in debt, which they collateralize using the GPUs they already have, along with contracts from customers, which they use to buy more GPUs"

https://www.wheresyoured.at/the-enshittifinancial-crisis/#coreweave-is-still-a-time-bomb-by-the-way

* https://mastodon.social/@reedmideke/115518712476917668

The Enshittifinancial Crisis

Soundtrack: Lynyrd Skynyrd — Free Bird This piece is over 19,000 words, and took me a great deal of writing and research. If you liked it, please subscribe to my premium newsletter. It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’

Ed Zitron's Where's Your Ed At

Behold the awesome power of #AI, the product of billions of dollars in GPU time, simplifying your life by precisely summarizing the most pertinent information

#AIIsGoingGreat

Google AI assures me* that "microslop coprolite" is a recent viral internet joke, and with your help, we can retcon that into reality

* with hallucitations that in no way support the claim

#Microslop #Coprolite

Shot: "xAI announced Tuesday it raised $20 billion in an upsized Series E funding round, exceeding its $15 billion target"

https://www.reuters.com/business/musks-xai-raises-20-billion-upsized-series-e-funding-round-2026-01-06/

Chaser: "A WIRED review of outputs hosted on Grok’s official website shows it’s being used to create violent sexual images and videos, as well as content that includes apparent minors" https://www.wired.com/story/grok-is-generating-sexual-content-far-more-graphic-than-whats-on-x/
Grok Is Generating Sexual Content Far More Graphic Than What's on X

A WIRED review of outputs hosted on Grok’s official website shows it’s being used to create violent sexual images and videos, as well as content that includes apparent minors.

WIRED

Second chaser: "Nvidia and Cisco Investments joined as strategic investors" in the series E above

(is it good or bad if the CSAM generating machine is propped up by circular investing? 🤔 )

#AIIsGoingGreat "The problem? Neither of those places exist. Nor do a handful of the other spots marked on the National Weather Service’s forecast graphic, riddled with spelling and geographical errors that the agency confirmed were linked to the use of generative AI"

Thing that boggles my mind about this is NWS has tools for generating forecast maps. It's one of their core products!

https://wapo.st/49r5tse

#GiftArticle #GiftLink

‘Whata Bod’: An AI-generated NWS map invented fake towns in Idaho

Amid a big agency push to use AI models in weather prediction, an AI-generated forecast graphic with errors was pulled from NWS sites.

The Washington Post

Shot: 'OpenAI announced ChatGPT Health, a dedicated section of the AI chatbot designed for “health and wellness conversations” intended to connect a user’s health and medical records to the chatbot in a secure way'

https://arstechnica.com/ai/2026/01/chatgpt-health-lets-you-connect-medical-records-to-an-ai-that-makes-things-up/

ChatGPT Health lets you connect medical records to an AI that makes things up

New feature will allow users to link medical and wellness records to AI chatbot.

Ars Technica

Chaser: 'There’s a well-worn pattern in the development of AI chatbots. Researchers discover a vulnerability and exploit it to do something bad. The platform introduces a guardrail that stops the attack from working. Then, researchers devise a simple tweak that once again imperils chatbot users'

https://arstechnica.com/security/2026/01/chatgpt-falls-to-new-data-pilfering-attack-as-a-vicious-cycle-in-ai-continues/

ChatGPT falls to new data-pilfering attack as a vicious cycle in AI continues

Will LLMs ever be able to stamp out the root cause of these attacks? Possibly not.

Ars Technica

Second chaser: Ars also notes that the ChatGPT Health announcement fine print tells you not to use it for actual health stuff: "Health is designed to support, not replace, medical care. It is not intended for diagnosis or treatment. Instead, it helps you navigate everyday questions and understand patterns over time—not just moments of illness—so you can feel more informed and prepared for important medical conversations"

https://arstechnica.com/ai/2026/01/chatgpt-health-lets-you-connect-medical-records-to-an-ai-that-makes-things-up/

ChatGPT Health lets you connect medical records to an AI that makes things up

New feature will allow users to link medical and wellness records to AI chatbot.

Ars Technica

So uh, very common (and odious) tech industry employment / contract language includes something along the lines of "every single thought you have at $company belongs to $company and don't you dare even dream of remembering it outside $company" 🤨
Sharing work products with a subsequent employer would seem risky for the contractor, even if they scrub obviously proprietary or personal data

https://techcrunch.com/2026/01/10/openai-is-reportedly-asking-contractors-to-upload-real-work-from-past-jobs/

OpenAI is reportedly asking contractors to upload real work from past jobs | TechCrunch

An intellectual property lawyer says OpenAI is "putting itself at great risk" with this approach.

TechCrunch

The Wired article goes into this a bit more, and yeah, the conclusion seems to be it's sketchy AF. Also "An individual who helps companies sell assets after they go out of business told WIRED that a representative of OpenAI inquired about obtaining data from these firms, providing that personally identifiable information could be removed … The source said they chose not to pursue the idea because they were not confident that personal information could be completely scrubbed"

https://www.wired.com/story/openai-contractor-upload-real-work-documents-ai-agents/

OpenAI Is Asking Contractors to Upload Work From Past Jobs to Evaluate the Performance of AI Agents

To prepare AI agents for office work, the company is asking contractors to upload projects from past jobs, leaving it to them to strip out confidential and personally identifiable information.

WIRED
I would also be shocked if OpenAI's own employment agreements don't contain a very strong version of the kind of thing they're demanding their contractors violate. I mean, they had a "no whistle-blowing or reporting crimes" clause https://mastodon.social/@reedmideke/112781518384976211

"The headline is, ‘It’s because of AI,’ but if you read what they actually say, they say, ‘We expect that AI will cover this work.’ Hadn’t done it. They’re just hoping. And they’re saying it because that’s what they think investors want to hear"

"If AI were already replacing labour at scale, productivity growth should be accelerating. Generally, it isn’t"

https://fortune.com/2026/01/07/ai-layoffs-convenient-corporate-fiction-true-false-oxford-economics-productivity/

AI layoffs are looking more and more like corporate fiction that’s masking a darker reality, Oxford Economics suggests

"Firms don't appear to be replacing workers with AI on a significant scale," the firm said. It suspects some are trying to "dress up layoffs" as good news.

Fortune

RE: https://infosec.exchange/@lcamtuf/115877508380778967

Today's #AIIsGoingGreat - A content farm of circuit "schematics" which, despite being utterly incoherent and wildly dangerous to anyone who tried to build them, is presumably somehow generating ad revenue

https://mastodon.social/@lcamtuf@infosec.exchange/115877508836294554

Bonus #AIIsGoingGreat - When The Guardian asked Google about "AI summary" results for medical topics being wildly wrong and dangerous*, a Google spokesperson hand-waved about how "the vast majority of its AI Overviews were factual and helpful, and it continuously made quality improvements"
Days later, The Verge found some were quietly pulled https://www.theverge.com/news/860356/google-pulls-alarming-dangerous-medical-ai-overviews

* https://www.theguardian.com/technology/2026/jan/02/google-ai-overviews-risk-harm-misleading-health-information

Google pulls AI overviews for some medical searches

Google pulls “alarming” and “dangerous” AI overviews for some medical searches.

The Verge

As ever, when confronted with the fact their product produced dangerous BS, Google follows the industry standard response of band-aiding over specific instances that cause negative publicity, because they have absolutely no idea how to solve the general case

https://www.theguardian.com/technology/2026/jan/11/google-ai-overviews-health-guardian-investigation

‘Dangerous and alarming’: Google removes some of its AI summaries after users’ health put at risk

Guardian investigation finds AI Overviews provided inaccurate and false information when queried over blood tests

The Guardian
I wouldn't expect these "commitments" to survive one bad quarter (on the generous assumption they ever amount to more than greenwashing), but it's nice of Microsoft to put them out in writing for the internet to remember
https://blogs.microsoft.com/on-the-issues/2026/01/13/community-first-ai-infrastructure/
Building Community-First AI Infrastructure

Microsoft is launching a new initiative to build what we call Community-First AI Infrastructure—a commitment to do this work differently than some others and to do it responsibly.

Microsoft On the Issues

RE: https://infosec.exchange/@trailofbits/115887719230076703

"These attacks, which are functionally similar to cross-site scripting (XSS) and cross-site request forgery (CSRF), resurface decades-old patterns of vulnerabilities that the web security community spent years building effective defenses against" - I am not as optimistic as the authors that these things can be mitigated while still producing a useful product, but in any case, maybe we should figure that out before shoving an "AI browser" down everyone's throat ¯\_(ツ)_/¯
https://blog.trailofbits.com/2026/01/13/lack-of-isolation-in-agentic-browsers-resurfaces-old-vulnerabilities/

#AIIsGoingGreat "After repeatedly denying for weeks that his force used AI tools, the chief constable of the West Midlands police has finally admitted that a hugely controversial decision to ban Maccabi Tel Aviv football fans from the UK did involve hallucinated information from Microsoft Copilot"
https://arstechnica.com/ai/2026/01/deny-deny-admit-uk-police-used-copilot-ai-hallucination-when-banning-football-fans/
Deny, deny, admit: UK police used Copilot AI “hallucination” when banning football fans

Police finally come clean about botched use of AI tools.

Ars Technica
More brilliant UX from Google: What does "Continue" in this context mean, continue to the thing they're promoting, or continue doing whatever you were doing before this annoying dialog popped up in your face, or some secret third thing? Is it different from the X?
Anyway…

…like a rube, I clicked continue and it popped out a "Gemini" sidebar, which on reflection, is flawless, 10/10, no notes

#AIIsGoingGreat

Ah yes, the world-changing, second-industrial-revolution (if it doesn't come alive and kill us all) technology will be paid for with… banner ads (and porn, obviously)
https://arstechnica.com/information-technology/2026/01/openai-to-test-ads-in-chatgpt-as-it-burns-through-billions/
OpenAI to test ads in ChatGPT as it burns through billions

Ads coming to free tier and new $8/month ChatGPT Go plan in US.

Ars Technica

Congratulations #AI slop artists, you've managed to screw users, open source maintainers, and actual security researchers, while also not getting paid for your slop reports https://github.com/curl/curl/pull/20312

#AIIsGoingGreat

BUG-BOUNTY.md: we stop the bug-bounty end of Jan 2026 by bagder · Pull Request #20312 · curl/curl

Remove mentions of the bounty and hackerone. There will be more mentions, blog posts, timings etc in the coming weeks.

GitHub
It cheats by adding new routes to the maze, and still doesn't manage a valid solution. PhD-level reasoning any day now!
https://mastodon.social/@zachweinersmith@mastodon.social/115922447320479210
"One clear conclusion is that the vast majority of students do not trust chatbots. If they are explicitly made accountable for what a chatbot says, they immediately choose not to use it at all" - As noted in @ploum's post, this small sample is probably biased by the circumstances, but still, good to see
https://ploum.net/2026-01-19-exam-with-chatbots.html
Giving University Exams in the Age of Chatbots

Giving University Exams in the Age of Chatbots par Ploum - Lionel Dricot.

DOT general counsel Gregory Zerzan, on using spicy autocomplete to generate transport regulations: "We don’t need the perfect rule on XYZ. We don’t even need a very good rule on XYZ. We want good enough. We’re flooding the zone" - I'm sure the skeptics will come up with all kinds of objections about how this will go terribly wrong, but on the bright side, it should be a gold mine of obscure loopholes and hilarious litigation

https://www.propublica.org/article/trump-artificial-intelligence-google-gemini-transportation-regulations

#AIIsGoingGreat

Government by AI? Trump Administration Plans to Write Regulations Using Artificial Intelligence

The Transportation Department, which oversees the safety of airplanes, cars and pipelines, plans to use Google Gemini to draft new regulations. “We don’t need the perfect rule,” said DOT’s top lawyer. “We want good enough.”

ProPublica
@reedmideke All regulations are written in blood. Until now.