An extremely weird take which doesn't engage at all with the possibility wikipedians rejected AI summaries because they're obviously garbage and completely antithetical to everything wikipedia stands for
Former dropbox CTO Aditya Agarwal: "It was very clear that we will never ever write code by hand again"
I was gonna say if you have anything you value on dropbox, you might wanna fix that, but apparently he left in 2017
https://www.ft.com/content/fd134065-c2c6-4a99-99df-404d658127e6
Haven't had a #ChatGPTLawyer on here for a while but here's a new achievement unlocked: Steven Feldman prompt-engineered his client's case all the way to a default judgment against them
Anthropic's C compiler brings to mind a submarine made of cheese - Sure, it's not a *good* compiler, but even with the absurd amount of compute, the not-insignificant human babysitting, and the fact it had every open source compiler ever written to plagiarize from, it's still incredibly impressive that it works to the extent it does
WaPo on the scale of the #AI cash bonfire, and how it's distorting markets far outside the immediate tech industry: "There are not enough skilled electricians and other specialized trade workers for both data center projects and other complex construction … such as apartment buildings, factories and health care facilities. AI data centers tend to be more lucrative for construction firms, which relegates anything else to a lower priority"
Roger McNamee: "It is possible that the amount invested in AI in the U.S. since the middle of 2022 exceeds all prior investments in the entire tech industry. That alone should give everyone pause."
Who could have predicted this? "Product managers and designers began writing code … engineers, in turn, spent more time reviewing, correcting, and guiding AI-generated or AI-assisted work produced by colleagues. These demands extended beyond formal code review. Engineers increasingly found themselves coaching colleagues who were “vibe-coding” and finishing partially complete pull requests"
https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it

One of AI's promises is that it can reduce workloads so employees can focus on higher-value, more interesting tasks. But according to new research, AI tools don't reduce work; they consistently intensify it: in the study, employees worked at a faster pace, took on a greater number of tasks, and extended their working hours, often without being asked to. That may sound like an upside, but it's not that simple. These changes can be unsustainable, leading to increased workload, cognitive fatigue, burnout, and weakened decision-making capacity. The productivity gains enjoyed at first can give way to lower-quality work, staff turnover, and other problems. To correct this, companies should adopt an "AI practice," meaning a set of norms and standards around AI use that can include intentional pauses, sequencing work, and adding more human grounding.
RE: https://mastodon.online/@tagir_valeev/116057271527521893
#AIIsGoingGreat: Who could have predicted that a machine which produces statistically pleasing text sequences without reasoning or real-time data could have trouble distinguishing between a real fire and a test?
(this is not a dunk on Tagir, see his followup post for details)
https://mastodon.social/@tagir_valeev@mastodon.online/116057271668356153
Hot take: If you accept "Correctness and rigor are paramount" most of the rest becomes irrelevant, because there is no indication anyone, anywhere has a coherent plan to get LLMs to reliably exhibit those characteristics
(In fairness, Hogg addresses this in the comments section, noting "I have made the strong—and possibly unrealistic—assumption that, over the next months and years, the LLMs will get far better" and that current LLMs produce slop)
Oh look, FT* says the AI payoff is starting to show up in economic data https://www.ft.com/content/4b51d0b4-bbfe-4f05-b50a-1d485d419dc5
* and by FT, I mean Erik Brynjolfsson, "director of Stanford University’s Digital Economy Lab and co-founder of Workhelix**"
** Workhelix is a startup with the tagline "Your partner for AI success"
So yeah, @arstechnica confirms the Benj Edwards and Kyle Orland piece with the fabricated quotes* was AI slop and retracted. Doesn't really explain what happened, but Aurich did initially say not to expect a response until after the weekend, so maybe a fuller response will follow. Certainly seems like they should explain how it came about, what they're doing to prevent it happening again, and whether other articles were affected
* https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me-part-2/

Sorry all, this is my fault; speculation has grown worse because I have been sick in bed with a high fever and unable to reliably address it (still am sick). I was told by management not to comment until they did. Here is my statement in images below https://arstechnica.com/staff/2026/02/editors-note-retraction-of-article-containing-fabricated-quotations/
RE: https://mstdn.social/@dazfuller/116080728819432902
Today's #AIIsGoingGreat … or should I say Al iz gong grate?
https://mastodon.social/@dazfuller@mstdn.social/116080728964221553
Also, the bottom of that Microsoft learn page has an "AI Disclaimer" link in the footer, which says, among other things "We're transparent about articles that contain AI-generated content. All articles that contain any AI-generated content include text acknowledging the role of AI. You'll see this text at the end of the article"
https://learn.microsoft.com/en-us/principles-for-ai-generated-content
RE: https://infosec.exchange/@josephcox/116086679934852156
#AIIsGoingGreat "Despite thoroughly documenting the AI-generated errors in its lesson plans, Alpha School relies on AI to test the quality of its AI-generated lessons, creating a situation where a faulty AI is tasked with fixing its own faulty generations" https://mastodon.social/@josephcox@infosec.exchange/116086680010329307
RE: https://flipboard.social/@newsguyusa/116092087598905246
Periodic reminder that when an LLM "explains" why it produced some nonsense, the "explanation" is just an explanation-shaped thing, without any particular insight into the actual workings of the model
https://mastodon.social/@newsguyusa@flipboard.social/116092088111798499
"If you don't adopt AI, you'll be left behind!" - specifically, your confidential data may be left behind, on your premises, where it belongs.
Also funny coincidence*, EU just banned** a bunch of AI shit from their IT systems for pretty much this reason
https://mastodon.social/@zackwhittaker/116092191816673310
* or "coincidence" 😉
** https://www.politico.eu/article/eu-parliament-blocks-ai-features-over-cyber-privacy-fears/
#AIIsGoingGreat "This AI agent ran within a GitHub Actions workflow and ran with broad privileges. You might be able to guess where this is heading…"
oooh I got this… AGI? The Singularity? To the store for ice cream?
Also: The #AI workflow was added to "automate first-response to reduce maintainer burden"
One might ask the maintainers how the burden of dealing with this shitshow compares with triaging issues
"Amazon said it was a coincidence that AI tools were involved in the outages, and that there was no evidence that such technology led to more errors than human engineers. “In both instances, this was user error, not AI error,” it said" - Ah yes, the engineers probably prompted it wrong
Original FT story (via @arstechnica, sans paywall) states "In these two cases, the engineers involved did not require a second person’s approval before making changes, as would normally be the case" and quotes Amazon saying it was "a user access control issue, not an AI autonomy issue" because the engineer involved had "broader permissions than expected" 🤨
Lot of different scenarios could be read between the lines there
https://arstechnica.com/ai/2026/02/an-ai-coding-bot-took-down-amazon-web-services/
RE: https://infosec.exchange/@josephcox/116120627475419783
Today's #AIIsGoingGreat aligns perfectly with expectations https://mastodon.social/@josephcox@infosec.exchange/116120627585647890
"Anthropic said the jailbreaking technique used in the Stanford and Yale research was impractical for normal users and would require more effort to extract the text than just purchasing the content" - Ah yes, the old "it's not infringement because I stored it in an inconvenient format" defense, just like my punch-card MP3 collection
RE: https://vmst.io/@jalefkowit/116162471325793098
Turns out, you can lose your job to AI https://mastodon.social/@jalefkowit@vmst.io/116162471447876131
#AIIsGoingGreat 'An AP reporter followed prompts for Spanish-language options and was met with a voice speaking accented English that used Spanish only for numbers. “Your estimated wait time is less than ‘tres’ minutes,” the voice said' - Of course, the real problem isn't so much the AI system per se, but the fact such a monumental screwup made it to production and went unfixed for months
https://apnews.com/article/washington-dol-spanish-accent-ai-3a1b8438a5674c07242a8d48c057d5a3

Callers to Washington state’s driver’s license agency who select automated service in Spanish have instead been hearing an AI voice speaking English with a strong Spanish accent. The voice slipped Spanish numbers into key phrases. A recording of the odd-sounding accent drew attention on social media. And one person described the experience as “hilarious,” “absurd” and like a scene out of “Parks and Recreation.” The Department of Licensing has apologized and says it fixed the problem.
I am less concerned over alleged violations of Anthropic's TOS or Trump's retaliatory ban and more concerned the DOD is reportedly using spicy autocomplete for "intelligence purposes, as well as to help select targets"
https://www.theguardian.com/technology/2026/mar/01/claude-anthropic-iran-strikes-us-military
Fedi nerds (including yours truly): I wouldn't let that slop anywhere near my hobby open source project
DOD: As planning for a potential strike in Iran was underway, Maven, powered by Claude, suggested hundreds of targets, issued precise location coordinates, and prioritized those targets according to importance … The AI tools also evaluate a strike after it is initiated
😬
https://wapo.st/4rcM6dC
(email walled)
Three years into the AI hype wave, it keeps happening:
1) AI vendor tries to make BS machine look more reliable by having it link sources
2) BS machine BSes the sources
"…its “sources” linked to spammy copies of legit websites, or other archived copies that aren’t the actual source page. Some sources even went to completely unrelated links that weren’t written by the person whose work they were supposedly an example of"
https://www.theverge.com/ai-artificial-intelligence/890921/grammarly-ai-expert-reviews
With a little help from a Ouija board, even the dead ones can opt out, I suppose. I mean, it's as much their voice as the Grammarly version
"you can create a DLP policy to help protect against the use of sensitive information types (SIT), such as credit card numbers, passport identification, or social security numbers in Microsoft Copilot 365 prompts" - So hypothetically, if one were to include random but formally valid SSN or CC values hidden in your emails or documents, would it stop users of this feature from using their microslop on it? 🤔
https://learn.microsoft.com/en-us/purview/dlp-microsoft365-copilot-location-learn-about

You can use Microsoft Purview Data Loss Prevention (DLP) targeted at the Microsoft 365 Copilot and Copilot Chat location to help prevent the use of sensitive information types in prompts and files and emails that have sensitivity labels in Microsoft 365 Copilot and Copilot Chat prompts.
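For anyone wondering what "formally valid" means for a card number: the pattern detectors key on the Luhn checksum, so a decoy only needs a correct check digit to look like a real card number. A minimal sketch of the generic Luhn math (illustrative only; not Purview's actual SIT detection logic, which may also weigh surrounding keywords):

```python
def luhn_check_digit(partial: str) -> str:
    """Check digit that makes `partial` + digit pass the Luhn test."""
    total = 0
    for i, ch in enumerate(reversed(partial)):
        d = int(ch)
        if i % 2 == 0:  # these positions get doubled once the check digit is appended
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def luhn_valid(number: str) -> bool:
    """True if `number` passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

decoy = "411111111111111"         # 15 arbitrary digits with a Visa-style prefix
decoy += luhn_check_digit(decoy)  # append the check digit -> Luhn-valid decoy
print(decoy, luhn_valid(decoy))   # 4111111111111111 True
```

(4111111111111111 is the classic Visa test number, which is exactly the point: checksum-valid numbers are trivial to mass-produce.)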
In today's #AIIsGoingGreat (ht @platypus*), Microsoft security does a nice writeup of SEO bros abusing "summarize with AI" buttons to inject "memories" into AI assistants… and then goes on to offer mitigations like "be sure to hover links before you click them" and "regularly check your AI memories" … because if the last 30 years of infosec has taught us anything, it's that user vigilance is the first and best line of defense, right?
https://www.microsoft.com/en-us/security/blog/2026/02/10/ai-recommendation-poisoning/

That helpful “Summarize with AI” button? It might be secretly manipulating what your AI recommends. Microsoft security researchers have discovered a growing trend of AI memory poisoning attacks used for promotional purposes, a technique we call AI Recommendation Poisoning.
Bonus #AIIsGoingGreat: After assuring us recent incidents were only coincidentally connected to AI*, Amazon "summoned a large group of engineers to a meeting on Tuesday for a “deep dive” into a spate of outages, including incidents tied to the use of AI coding tools"
RE: https://mstdn.social/@rysiek/116211625230754185
Not that humans are immune to screwing up TZ/DST logic of course, but I feel like the odds the offending logic was Claude vomit are pretty high, and the fact the RFO doesn't address this is pretty telling
Also this is a good illustration of why the "buT It WRiTes WorKInG CodE" argument is fairly unpersuasive on its own
https://mastodon.social/@rysiek@mstdn.social/116211625348719630
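We don't know what the offending code actually looked like, but for a sense of how easy TZ/DST logic is to get wrong even with a real stdlib in hand, here's a classic Python pitfall: arithmetic on timezone-aware datetimes is wall-clock arithmetic, so "add an hour" across a spring-forward boundary doesn't add an hour of real time (hypothetical example, not the incident's code):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

tz = ZoneInfo("America/New_York")
# US spring-forward in 2026: 2:00 a.m. local on March 8 jumps to 3:00 a.m.
start = datetime(2026, 3, 8, 1, 30, tzinfo=tz)
end = start + timedelta(hours=3)  # wall-clock math: lands on 4:30 local

# Gotcha #1: subtracting two datetimes with the SAME tzinfo ignores the
# offset change entirely, so this claims 3 hours elapsed.
print(end - start)  # 3:00:00

# Real elapsed time, measured by converting both to UTC first:
real = end.astimezone(timezone.utc) - start.astimezone(timezone.utc)
print(real)  # 2:00:00 — the clock sprang forward, eating an hour
```

Whether the elapsed time should be 2 or 3 hours depends on what you're scheduling (a billing interval vs. "every day at 4:30 local"), which is exactly why this class of bug survives review so often.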
LOL. But will there be any reflection on how it got this far? Did no one stand up and point out the many obvious reasons this was likely to be a total shit-show, or were they ignored?
I mean, one of their examples is "where is the closest public bathroom that isn’t completely disgusting" and what are the odds Google's LLM has accurate, up to date information about this? (and if, in fact, google does have realtime surveillance of public restrooms, I may have a privacy-related followup)
https://www.theverge.com/tech/893262/google-maps-gemini-ai-ask-maps-immersive-navigation
RE: https://mathstodon.xyz/@mjd/116224397839379268
This will be an interesting test of the AI companies' fine-print "don't use this great amazing world transforming genius machine for anything serious, lol" disclaimers.
My totally uneducated IANAL guess is OpenAI will win, if they don't settle to make it all go away. As much as the disclaimers are obvious CYA, OpenAI hasn't (AFAIK) explicitly promoted ChatGPT for litigation
https://mastodon.social/@mjd@mathstodon.xyz/116224398083939471
Original pro se case https://www.courtlistener.com/docket/69634076/dela-torre-v-nippon-life-insurance-company-of-america/ which as far as I can tell seems to have effectively ended with the plaintiff agreeing to arbitration and not being sanctioned into oblivion
Case against OpenAI
https://www.courtlistener.com/docket/72365583/nippon-life-insurance-company-of-america-v-openai-foundation/
RE: https://infosec.exchange/@josephcox/116256386324754543
Shot: "Kantor told 404 Media that artificial intelligence is writing more than half the app’s code these days"
Chaser:
https://mastodon.social/@josephcox@infosec.exchange/116256386410352613
I do wonder, though: does anyone involved actually want it or think it will work, or does it exist purely so management can have an ✨AI story?
https://www.404media.co/tinder-plans-to-let-ai-scan-your-camera-roll/
"The leak, which Meta confirmed, happened when an employee asked for guidance on an engineering problem on an internal forum. An AI agent responded with a solution, which the employee implemented – causing a large amount of sensitive user and company data to be exposed to its engineers for two hours" - I would like to see a description of what happened not filtered through generalist press…
#AI takes another journalism job: https://www.theguardian.com/technology/2026/mar/20/mediahuis-suspends-senior-journalist-over-ai-generated-quotes