DOT general counsel Gregory Zerzan, on using spicy autocomplete to generate transport regulations: "We don’t need the perfect rule on XYZ. We don’t even need a very good rule on XYZ. We want good enough. We’re flooding the zone" - I'm sure the skeptics will come up with all kinds of objections about how this will go terribly wrong, but on the bright side, it should be a gold mine of obscure loopholes and hilarious litigation

https://www.propublica.org/article/trump-artificial-intelligence-google-gemini-transportation-regulations

#AIIsGoingGreat

Government by AI? Trump Administration Plans to Write Regulations Using Artificial Intelligence

The Transportation Department, which oversees the safety of airplanes, cars and pipelines, plans to use Google Gemini to draft new regulations. “We don’t need the perfect rule,” said DOT’s top lawyer. “We want good enough.”

ProPublica

RE: https://tech.lgbt/@JadedBlueEyes/115968835396049874

"Revise README for clarity on project status and purpose" =
s/Production ready/Proof of concept/ 🤨
Gotta wonder how often this kind of thing is happening in corporate settings without the immediate blowback. Valley management types love their "minimum viable product" so it's easy to see them being really impressed with a slopped-together demo that superficially appears to work, even if the code is an unmaintainable dead end

https://mastodon.social/@JadedBlueEyes@tech.lgbt/115968835523075743

#AIIsGoingGreat

Kevin Weil, vice president of OpenAI for Science: "I think 2026 will be for AI and science what 2025 was for AI in software engineering" - Drowning the practitioners in slop?

https://arstechnica.com/ai/2026/01/new-openai-tool-renews-fears-that-ai-slop-will-overwhelm-scientific-research/

#AIIsGoingGreat

New OpenAI tool renews fears that “AI slop” will overwhelm scientific research

New "Prism" workspace launches just as studies show AI-assisted papers are flooding journals with diminished quality.

Ars Technica

#AIIsGoingGreat "Other doctors described chatbots flattering the grandiose tendencies of patients with personality disorders, or advising patients with autism to put themselves in dangerous social situations. Others said they saw patients’ interactions with chatbots as an addiction" - Who could have predicted that an obsequious bullshit machine would do such things?
https://www.nytimes.com/2026/01/26/us/chatgpt-delusions-psychosis.html?unlocked_article_code=1.IlA.gSBg.pTvMJekxwEk7&smid=url-share

#GiftArticle #GiftLink

How Bad Are A.I. Delusions? We Asked People Treating Them.

Dozens of doctors and therapists said chatbots had led their patients to psychosis, isolation and unhealthy habits.

The New York Times

"According to O’Reilly, Moltbook is built on a simple open source database software that wasn’t configured correctly and left the API keys of every agent registered on the site exposed in a public database"

Who could have predicted that vibe coding enthusiasts would make such trivial yet catastrophic errors?
¯\_(ツ)_/¯

https://www.404media.co/exposed-moltbook-database-let-anyone-take-control-of-any-ai-agent-on-the-site/

Exposed Moltbook Database Let Anyone Take Control of Any AI Agent on the Site

'It exploded before anyone thought to check whether the database was properly secured.'

404 Media

#AIIsGoingGreat "We honestly don’t know if it’s possible to build an LLM, where trusted commands and untrusted inputs are processed through the same channel, which is immune to prompt injection attacks" - I think Schneier and Raghavan undersell the problem (there are at least reasonable grounds to believe it's impossible) but in any case it seems like it might be unwise to set trillions on fire shoving LLMs into everything before figuring that out
¯\_(ツ)_/¯

https://spectrum.ieee.org/prompt-injection-attack

Why AI Keeps Falling for Prompt Injection Attacks

Why AI falls for scams that wouldn't trick a fast-food worker—and what that reveals about AI security.

IEEE Spectrum

#AIIsGoingGreat shot: "I didn’t write a single line of code for @ moltbook. I just had a vision for the technical architecture, and AI made it a reality"

Chaser: "…what we discovered tells a different story - and provides a fascinating look into what happens when applications are vibe-coded into existence without proper security controls"

https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys

Hacking Moltbook: AI Social Network Reveals 1.5M API Keys | Wiz Blog

Learn how a misconfigured Supabase database at Moltbook exposed 1.5M API keys, private messages, and user emails, enabling full AI agent takeover.

wiz.io

One might wonder how this relates to the earlier 404 Media story* … Oh "Security researcher Jameson O'Reilly also discovered the underlying Supabase misconfiguration, which has been reported by 404 Media. Wiz's post shares our experience independently finding the issue, the full -- unreported -- scope of impact, and how we worked with Moltbook's maintainer to improve security" that's right, multiple people discovered it independently within days

* https://mastodon.social/@reedmideke/115994694484628029

You won: Microsoft is walking back Windows 11’s AI overload — scaling down Copilot and rethinking Recall in a major shift

People familiar with Microsoft's plans say that the company is moving to streamline or remove certain Copilot integrations across in-box apps like Notepad and Paint in 2026, after pushback from users.

Windows Central

An extremely weird take which doesn't engage at all with the possibility that Wikipedians rejected AI summaries because they're obviously garbage and completely antithetical to everything Wikipedia stands for

https://spectrum.ieee.org/wikipedia-at-25

Wikipedia Faces a Generational Disconnect Crisis

Wikipedia's 25th anniversary sparks a debate: Can it adapt to the needs of Gen Z and beyond?

IEEE Spectrum
He acknowledges "contributors raising legitimate concerns about AI hallucinations and the erosion of editorial oversight" and then just goes on his merry way to blame the community for being close-minded and out of touch with the youngs

Former Dropbox CTO Aditya Agarwal: "It was very clear that we will never ever write code by hand again"

I was gonna say, if you have anything you value on Dropbox, you might wanna fix that, but apparently he left in 2017

https://www.ft.com/content/fd134065-c2c6-4a99-99df-404d658127e6


Today's #AIIsGoingGreat: PwC says "AI not paying off? Keep throwing money on the bonfire!" https://mastodon.social/@reedmideke/116027153048078552

Haven't had a #ChatGPTLawyer on here for a while but here's a new achievement unlocked: Steven Feldman prompt-engineered his client's case all the way to a default judgment against them

https://arstechnica.com/tech-policy/2026/02/randomly-quoting-ray-bradbury-did-not-save-lawyer-from-losing-case-over-ai-errors/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social

Lawyer sets new standard for abuse of AI; judge tosses case

Behold the most overwrought AI legal filings you will ever gaze upon.

Ars Technica

Anthropic's C compiler brings to mind a submarine made of cheese* - Sure, it's not a *good* compiler, but even with the absurd amount of compute, the not-insignificant human babysitting, and the fact that it had every open source compiler ever written to plagiarize from, it's still incredibly impressive that it works to the extent it does

https://www.anthropic.com/engineering/building-c-compiler

* https://www.thetimes.com/uk/politics/article/now-brexit-makes-sense-thanks-to-my-cheese-submarine-cbh8gmljd

Building a C compiler with a team of parallel Claudes

Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.

I still doubt this is really the future of software, let alone a desirable one, but it's certainly not nothing
It's not hard to imagine a future where it becomes very easy to generate code that is not as good as typical human-written applications today (as low a bar as that is) but functional enough that people ship it anyway. Sure, it might be flaky, inefficient, insecure and unmaintainable, but it more or less does the thing, for far less time and money than writing it the old-fashioned way…
One might say, well, the market will sort it out: slop will thrive in low-stakes applications and professional stuff will win where it matters. And maybe that's so, but I fear there's another path where large parts of the ecosystem just get worse. Kinda like SSDs (or SD cards, USB chargers, etc.) on AliExpress, where it's virtually impossible to find (or presumably, make a living selling) a decent one for a decent price because it's totally drowned out by garbage
A more optimistic possibility is that a system which can spit out a more or less working app can also incrementally improve it, and so one can just throw compute at it until it reaches the desired cost / quality balance. But it's not obvious to me this should be true for acceptable values of cost and quality in the general case, and at the very least, rigorously defining the desired "quality" and how to verify it is going to involve a lot more than telling the bot "make cool app plz"

WaPo on the scale of the #AI cash bonfire, and how it's distorting markets far outside the immediate tech industry: "There are not enough skilled electricians and other specialized trade workers for both data center projects and other complex construction … such as apartment buildings, factories and health care facilities. AI data centers tend to be more lucrative for construction firms, which relegates anything else to a lower priority"

https://wapo.st/3ZkaI8N

#GiftArticle #GiftLink

The AI boom is so huge it’s causing shortages everywhere else

The hundreds of billions of dollars being spent by tech companies on AI projects are diverting resources from other parts of the economy.

The Washington Post

Roger McNamee: "It is possible that the amount invested in AI in the U.S. since the middle of 2022 exceeds all prior investments in the entire tech industry. That alone should give everyone pause."

https://wapo.st/3ZkaI8N

The AI boom is so huge it’s causing shortages everywhere else

The hundreds of billions of dollars being spent by tech companies on AI projects are diverting resources from other parts of the economy.

The Washington Post
There have been a number of think pieces "debunking" AI data center water use concerns because it's relatively small in global terms, but that doesn't mean it can't be a very big deal at a regional level. Particularly in regions that are already stressed
https://www.texasobserver.org/texas-ai-data-centers-water-usage-regulation/
The Texas AI Boom is Outpacing Water Regulations

Each data center can “drink” as much as an entire community. Yet, Texas does not require these tech firms to disclose projected or actual water consumption.

The Texas Observer

Who could have predicted this? "Product managers and designers began writing code … engineers, in turn, spent more time reviewing, correcting, and guiding AI-generated or AI-assisted work produced by colleagues. These demands extended beyond formal code review. Engineers increasingly found themselves coaching colleagues who were “vibe-coding” and finishing partially complete pull requests"

https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it

#AIIsGoingGreat

AI Doesn't Reduce Work, It Intensifies It

One of AI's promises is that it can reduce workloads so employees can focus on higher-value, more interesting tasks. But according to new research, AI tools don't reduce work; they steadily intensify it: in the study, employees worked at a faster pace, took on a greater number of tasks, and extended their working day, often without being asked to. That may sound like a win, but it isn't so simple. These changes can be unsustainable, leading to increased workloads, cognitive fatigue, burnout, and weakened decision-making. The productivity gains enjoyed at first can give way to lower-quality work, staff turnover, and other problems. To correct this, companies should adopt an "AI practice": a set of norms and standards around AI use that can include deliberate pauses, sequencing work, and adding more human grounding.

Harvard Business Review
Lemme guess, the three benchmarks are a Ouija board, a magic 8 ball, and the entrails of a freshly sacrificed goat? https://mastodon.social/@ieeespectrum/116048225933397508
In fairness, the researchers appear not to be complete shills: "The results were bleak, with all three models obtaining low accuracy scores. Although they excelled in information extraction and image recognition, the LLMs sometimes hallucinated and struggled with counting objects precisely and measuring specific distances" https://spectrum.ieee.org/ai-agent-benchmarks
AI Agent Benchmark: New Safety Standards Revealed

Can AI agents safely run business operations without human oversight? Not yet, but new benchmarks can help.

IEEE Spectrum

RE: https://mastodon.online/@tagir_valeev/116057271527521893

#AIIsGoingGreat: Who could have predicted that a machine which produces statistically pleasing text sequences without reasoning or real-time data could have trouble distinguishing between a real fire and a test?

(this is not a dunk on Tagir, see his followup post for details)

https://mastodon.social/@tagir_valeev@mastodon.online/116057271668356153

Hot take: If you accept "Correctness and rigor are paramount" most of the rest becomes irrelevant, because there is no indication anyone, anywhere has a coherent plan to get LLMs to reliably exhibit those characteristics

(In fairness, Hogg addresses this in the comments section, noting "I have made the strong—and possibly unrealistic—assumption that, over the next months and years, the LLMs will get far better" and that current LLMs produce slop)

https://arxiv.org/html/2602.10181v1

Why do we do astrophysics?

Oh look, FT* says the AI payoff is starting to show up in economic data https://www.ft.com/content/4b51d0b4-bbfe-4f05-b50a-1d485d419dc5

* and by FT, I mean Erik Brynjolfsson, "director of Stanford University’s Digital Economy Lab and co-founder of Workhelix**"
** Workhelix is a startup with the tagline "Your partner for AI success"


So yeah, @arstechnica confirms the Benj Edwards and Kyle Orland piece with the fabricated quotes* was AI slop and has been retracted. It doesn't really explain what happened, but Aurich did initially say not to expect a response until after the weekend, so maybe a fuller response will follow. Certainly seems like they should explain how it came about, what they're doing to prevent it happening again, and whether other articles were affected

https://arstechnica.com/staff/2026/02/editors-note-retraction-of-article-containing-fabricated-quotations/

* https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me-part-2/

Editor’s Note: Retraction of article containing fabricated quotations

We are reinforcing our editorial standards following this incident.

Ars Technica
Benj Edwards did offer his own explanation https://bsky.app/profile/benjedwards.com/post/3mewgow6ch22p
Benj Edwards (@benjedwards.com)

Sorry all this is my fault; and speculation has grown worse because I have been sick in bed with a high fever and unable to reliably address it (still am sick) I was told by management not to comment until they did. Here is my statement in images below https://arstechnica.com/staff/2026/02/editors-note-retraction-of-article-containing-fabricated-quotations/

Bluesky Social

Also, the bottom of that Microsoft Learn page has an "AI Disclaimer" link in the footer, which says, among other things, "We're transparent about articles that contain AI-generated content. All articles that contain any AI-generated content include text acknowledging the role of AI. You'll see this text at the end of the article"

https://learn.microsoft.com/en-us/principles-for-ai-generated-content

Principles for AI generated content

Describes Microsoft's approach for using AI-generated content on Microsoft Learn

RE: https://infosec.exchange/@josephcox/116086679934852156

#AIIsGoingGreat "Despite thoroughly documenting the AI-generated errors in its lesson plans, Alpha School relies on AI to test the quality of its AI-generated lessons, creating a situation where a faulty AI is tasked with fixing its own faulty generations" https://mastodon.social/@josephcox@infosec.exchange/116086680010329307

RE: https://flipboard.social/@newsguyusa/116092087598905246

Periodic reminder that when an LLM "explains" why it produced some nonsense, the "explanation" is just an explanation shaped thing, without any particular insight into the actual workings of the model

https://mastodon.social/@newsguyusa@flipboard.social/116092088111798499

"If you don't adopt AI, you'll be left behind!" - specifically, your confidential data may be left behind, on your premises, where it belongs.

Also funny coincidence*, EU just banned** a bunch of AI shit from their IT systems for pretty much this reason

https://mastodon.social/@zackwhittaker/116092191816673310
* or "coincidence" 😉
** https://www.politico.eu/article/eu-parliament-blocks-ai-features-over-cyber-privacy-fears/

#AIIsGoingGreat "This AI agent ran within a GitHub Actions workflow and ran with broad privileges. You might be able to guess where this is heading…"
oooh I got this… AGI? The Singularity? To the store for ice cream?

https://adnanthekhan.com/posts/clinejection/

Clinejection — Compromising Cline's Production Releases just by Prompting an Issue Triager | Adnan Khan - Security Research

Clinejection — Compromising Cline's Production Releases just by Prompting an Issue Triager - Security research by adnanthekhan

Adnan Khan - Security Research

Also: The #AI workflow was added to "automate first-response to reduce maintainer burden"

One might ask the maintainers how the burden of dealing with this shitshow compares with triaging issues

https://adnanthekhan.com/posts/clinejection/

Clinejection — Compromising Cline's Production Releases just by Prompting an Issue Triager | Adnan Khan - Security Research

Clinejection — Compromising Cline's Production Releases just by Prompting an Issue Triager - Security research by adnanthekhan

Adnan Khan - Security Research

"Amazon said it was a coincidence that AI tools were involved in the outages, and that there was no evidence that such technology led to more errors than human engineers. “In both instances, this was user error, not AI error,” it said" - Ah yes, the engineers probably prompted it wrong

https://www.theguardian.com/technology/2026/feb/20/amazon-cloud-outages-ai-tools-amazon-web-services-aws

Amazon’s cloud ‘hit by two outages caused by AI tools last year’

Reported issues at Amazon Web Services raise questions about firm’s use of artificial intelligence as it cuts staff

The Guardian

Original FT story (via @arstechnica, sans paywall) states "In these two cases, the engineers involved did not require a second person’s approval before making changes, as would normally be the case" and quotes Amazon saying it was "a user access control issue, not an AI autonomy issue" because the engineer involved had "broader permissions than expected" 🤨
Lots of different scenarios could be read between the lines there

https://arstechnica.com/ai/2026/02/an-ai-coding-bot-took-down-amazon-web-services/

An AI coding bot took down Amazon Web Services

Blames "user error, not AI error" for incident in December involving its Kiro tool.

Ars Technica

"Anthropic said the jailbreaking technique used in the Stanford and Yale research was impractical for normal users and would require more effort to extract the text than just purchasing the content" - Ah yes, the old "it's not infringement because I stored it in an inconvenient format" defense, just like my punch-card MP3 collection

https://arstechnica.com/ai/2026/02/ais-can-generate-near-verbatim-copies-of-novels-from-training-data/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social

AIs can generate near-verbatim copies of novels from training data

LLMs memorize more training data than previously thought.

Ars Technica
Superintelligence™

#AIIsGoingGreat 'An AP reporter followed prompts for Spanish-language options and was met with a voice speaking accented English that used Spanish only for numbers. “Your estimated wait time is less than ‘tres’ minutes,” the voice said' - Of course, the real problem isn't so much the AI system per se, but the fact such a monumental screwup made it to production and went unfixed for months

https://apnews.com/article/washington-dol-spanish-accent-ai-3a1b8438a5674c07242a8d48c057d5a3

Washington state hotline callers hear AI voice with Spanish accent

Callers to Washington state’s driver’s license agency who select automated service in Spanish have instead been hearing an AI voice speaking English with a strong Spanish accent. The voice slipped Spanish numbers into key phrases. A recording of the odd-sounding accent drew attention on social media. And one person described the experience as “hilarious,” “absurd” and like a scene out of “Parks and Recreation.” The Department of Licensing has apologized and says it fixed the problem.

AP News

I am less concerned about alleged violations of Anthropic's TOS or Trump's retaliatory ban and more concerned that the DOD is reportedly using spicy autocomplete for "intelligence purposes, as well as to help select targets"

https://www.theguardian.com/technology/2026/mar/01/claude-anthropic-iran-strikes-us-military

US military reportedly used Claude in Iran strikes despite Trump’s ban

Trump calls Anthropic a ‘Radical Left AI company run by people who have no idea what the real World is all about’

The Guardian

Fedi nerds (including yours truly): I wouldn't let that slop anywhere near my hobby open source project

DOD: As planning for a potential strike in Iran was underway, Maven, powered by Claude, suggested hundreds of targets, issued precise location coordinates, and prioritized those targets according to importance … The AI tools also evaluate a strike after it is initiated
😬

https://wapo.st/4rcM6dC
(email walled)

#GiftArticle #GiftLink #AIIsGoingGreat

Anthropic’s AI tool Claude central to U.S. campaign in Iran, amid a bitter feud

Anthropic’s AI tool Claude is playing a key role in the U.S. military’s campaign in Iran, amid a bitter fight with the Pentagon over the terms of its use in war.

The Washington Post
Once again begging reporters to focus less on the EULA fight and more on digging into how the most powerful military in the world is using spicy autocomplete to decide who to bomb

Three years into the AI hype wave, it keeps happening:
1) AI vendor tries to make BS machine look more reliable by having it link sources
2) BS machine BSes the sources
"…its “sources” linked to spammy copies of legit websites, or other archived copies that aren’t the actual source page. Some sources even went to completely unrelated links that weren’t written by the person whose work they were supposedly an example of"

https://www.theverge.com/ai-artificial-intelligence/890921/grammarly-ai-expert-reviews

Grammarly is using our identities without permission

An AI feature in Grammarly called “expert review” has been using the names of staff members at The Verge in AI-generated comments without their knowledge or permission.

The Verge

With a little help from a Ouija board, even the dead ones can opt out, I suppose. I mean, it's as much their voice as the Grammarly version

https://mastodon.social/@caseynewton/116201968304541983

"you can create a DLP policy to help protect against the use of sensitive information types (SIT), such as credit card numbers, passport identification, or social security numbers in Microsoft Copilot 365 prompts" - So hypothetically, if one were to hide random but formally valid SSN or CC values in one's emails or documents, would it stop users of this feature from using their microslop on them? 🤔
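For the curious, "formally valid" here just means passing the checksum such classifiers typically look for; credit card numbers, for instance, carry a Luhn check digit. A quick illustrative Python sketch (the generated numbers are random and not tied to any real account):

```python
import random

def luhn_ok(number: str) -> bool:
    """True if the digit string passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def fake_card_number(length: int = 16) -> str:
    """Random digit string with a valid Luhn check digit appended."""
    body = [random.randint(1, 9)] + [random.randint(0, 9) for _ in range(length - 2)]
    total = 0
    # The check digit will sit at position 0 from the right, so body
    # digits occupy positions 1..length-1 (doubling starts at position 1).
    for i, d in enumerate(reversed(body), start=1):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    check = (10 - total % 10) % 10
    return "".join(map(str, body)) + str(check)
```

Every output of `fake_card_number()` satisfies `luhn_ok()`, which is the same structural test a pattern-matching DLP classifier applies.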

https://learn.microsoft.com/en-us/purview/dlp-microsoft365-copilot-location-learn-about

Learn about using Microsoft Purview Data Loss Prevention to protect interactions with Microsoft 365 Copilot and Copilot Chat

You can use Microsoft Purview Data Loss Prevention (DLP) targeted at the Microsoft 365 Copilot and Copilot Chat location to help prevent the use of sensitive information types in prompts and files and emails that have sensitivity labels in Microsoft 365 Copilot and Copilot Chat prompts.

In today's #AIIsGoingGreat (ht @platypus*), Microsoft security does a nice writeup of SEO bros abusing "summarize with AI" buttons to inject "memories" into AI assistants… and then goes on to offer mitigations like "be sure to hover links before you click them" and "regularly check your AI memories" … because if the last 30 years of infosec has taught us anything, it's that user vigilance is the first and best line of defense, right?

https://www.microsoft.com/en-us/security/blog/2026/02/10/ai-recommendation-poisoning/

* https://glammr.us/@platypus/116206592321083176

Manipulating AI memory for profit: The rise of AI Recommendation Poisoning | Microsoft Security Blog

That helpful “Summarize with AI” button? It might be secretly manipulating what your AI recommends.  Microsoft security researchers have discovered a growing trend of AI memory poisoning attacks used for promotional purposes, a technique we call AI Recommendation Poisoning.

Microsoft Security Blog

Bonus #AIIsGoingGreat: After assuring us recent incidents were only coincidentally connected to AI*, Amazon "summoned a large group of engineers to a meeting on Tuesday for a “deep dive” into a spate of outages, including incidents tied to the use of AI coding tools"

https://arstechnica.com/ai/2026/03/after-outages-amazon-to-make-senior-engineers-sign-off-on-ai-assisted-changes/

* https://mastodon.social/@reedmideke/116107554568232236

After outages, Amazon to make senior engineers sign off on AI-assisted changes

AWS has suffered at least two incidents linked to the use of AI coding assistants.

Ars Technica

RE: https://mstdn.social/@rysiek/116211625230754185

Not that humans are immune to screwing up TZ/DST logic, of course, but I feel like the odds that the offending logic was Claude vomit are pretty high, and the fact that the RFO doesn't address this is pretty telling

Also this is a good illustration of why the "buT It WRiTes WorKInG CodE" argument is fairly unpersuasive on its own

https://mastodon.social/@rysiek@mstdn.social/116211625348719630

LOL. But will there be any reflection on how it got this far? Did no one stand up and point out the many obvious reasons it was likely to be a total shit-show, or were they ignored?

https://mastodon.social/@verge/116212039751696511

… and receive complex, real-sounding but bullshit answers? https://mastodon.social/@verge/116216247214442701

I mean, one of their examples is "where is the closest public bathroom that isn’t completely disgusting" and what are the odds Google's LLM has accurate, up-to-date information about this? (and if, in fact, Google does have realtime surveillance of public restrooms, I may have a privacy-related followup)

https://www.theverge.com/tech/893262/google-maps-gemini-ai-ask-maps-immersive-navigation

You can now ask Google Maps ‘complex, real-world questions’ — and Gemini will answer

Google Maps announced two new AI-powered features: Ask Map and Immersive Navigation, letting users ask conversational questions and upgrading the navigation setting to include more details.

The Verge

RE: https://mathstodon.xyz/@mjd/116224397839379268

This will be an interesting test of the AI companies' fine print "don't use this great amazing world transforming genius machine for anything serious, lol" disclaimer.

My totally uneducated IANAL guess is OpenAI will win, if they don't settle to make it all go away. As much as the disclaimers are obvious CYA, OpenAI hasn't (AFAIK) explicitly promoted ChatGPT for litigation

https://mastodon.social/@mjd@mathstodon.xyz/116224398083939471

Original pro se case https://www.courtlistener.com/docket/69634076/dela-torre-v-nippon-life-insurance-company-of-america/ which as far as I can tell seems to have effectively ended with the plaintiff agreeing to arbitration and not being sanctioned into oblivion

Case against OpenAI
https://www.courtlistener.com/docket/72365583/nippon-life-insurance-company-of-america-v-openai-foundation/

RE: https://infosec.exchange/@josephcox/116256386324754543

Shot: "Kantor told 404 Media that artificial intelligence is writing more than half the app’s code these days"
Chaser:
https://mastodon.social/@josephcox@infosec.exchange/116256386410352613

I do wonder, though: does anyone involved actually want it or think it will work, or does it exist purely so management can have an ✨AI story?

https://www.404media.co/tinder-plans-to-let-ai-scan-your-camera-roll/

Tinder Plans to Let AI Scan Your Camera Roll

In a feature the dating app says is set to roll out in the U.S. later this spring, Tinder plans to access users' camera rolls to pick photos and determine what they're into.

404 Media