Meanwhile, Apple responds to the predictable result of running notifications through a blender with #LLM BS: "Apple Intelligence features are in beta and we are continuously making improvements with the help of user feedback… A software update in the coming weeks will further clarify when the text being displayed is summarization provided by Apple Intelligence"

https://www.bbc.com/news/articles/cge93de21n0o

#AIIsGoingGreat

Apple urged to withdraw 'out of control' AI news alerts

Apple has pledged improvements to its news summarising tool, but critics say it is dangerous and needs to be withdrawn.

Notably, despite the lip service to "continuous improvements" they don't suggest they're going fix the underlying problem that #LLMs generate BS, but rather that they'll add more visible CYA disclaimers. Which again, is a pretty good indication they have no idea how to fix the actual problem
I'll start taking #AI companies claims their products are the Next Big Thing more seriously when their lawyers let them go out in public without a big fat "this is beta, if you need any information about anything that actually matters have an expert double check every word of the answer" disclaimer
Charging people to sort out the messes they made attempting to build their infrastructure with spicy autocomplete could be very profitable indeed 😉 https://www.theverge.com/24338171/aws-ceo-matt-garman-ai-chips-anthropic-cloud-computing-trainium-decoder-podcast-interview
Why CEO Matt Garman is willing to bet AWS on AI

The new chief of AWS on Anthropic, AI chips, and the future of the cloud.

The Verge

This whole thread of #Google #AIIsGoingGreat with fractions is a good illustration of why I'm skeptical of the "sure, it has bugs, but they're fixing them, just like any other software" takes. IMO you can't band-aid a system with no concept of what fraction is to get this right in the general case, and even if you somehow recognize questions about fractions, there's an unlimited number of other cases where autocomplete is similarly inappropriate

https://mastodon.social/@lauren@mastodon.laurenweinstein.org/113771300586021845

This one with 25.4 == 1 in particular is a great example of how probabilistic completions go off the rails

https://mastodon.social/@[email protected]/113772004311087469

[OpenWrt Wiki] IEEE 802.11s Wireless Mesh Networking

One objection #AI pessimists hear a lot is that big tech execs wouldn't be dumping billions into it if it were as bad as people say, because, you know, they're smart guys, right? Anyway…

https://www.404media.co/zuckerberg-loves-ai-slop-image-from-spam-account-that-posts-amputated-children/

#AIIsGoingGreat

Zuckerberg 'Loves' AI Slop Image From Spam Account That Posts Amputated Children

Zuckerberg seems to enjoy the spam that has taken over his flagship product.

404 Media

LOL, bots mindlessly boosting every f-ing post in this thread with the #AI tag is 👨🏻‍🍳🤌

(also, what's the point of a bot that just boosts posts with a hashtag? Do they not know people can follow hashtags?)

Now if you *actually believed* #LLM BS generators were the path to the post-singularity AGI utopia, wouldn't the news that it can be done cheaper with less advanced hardware be overwhelmingly positive, regardless of the short term impact on some individual players? Shouldn't all the #AI bros be celebrating?

OTOH, if you were running an elaborate pump and dump involving some individual players, it might be kinda bad news

In today's #AIIsGoingGreat @SophosXOps finds that people actually trying to do stuff are slow to adopt bullshit generating machines as a core element of processes which do not require bullshit https://news.sophos.com/en-us/2025/01/28/update-cybercriminals-still-not-fully-on-board-the-ai-train-yet/
Update: Cybercriminals still not fully on board the AI train (yet)

A year after our initial research on threat actors’ attitudes to generative AI, we revisit some underground forums and find that many cybercriminals are still skeptical – although there has been a …

Sophos News

Translation: Sales of the latest high-end, resource intensive models were so bad, Microsoft decided they might as well just eat the cost in hopes of driving adoption

https://www.theverge.com/news/603149/microsoft-openai-o1-model-copilot-think-deeper-free

Microsoft makes OpenAI’s o1 reasoning model free for all Copilot users

Microsoft is bringing OpenAI’s o1 model to all Copilot users, free of charge. It’s available now as the Think Deeper feature.

The Verge
Sam Altman’s Stargate is science fiction

Stargate, a data center project led by OpenAI’s Sam Altman with support from Donald Trump, SoftBank, and others, has huge ambitions and a shaky foundation.

The Verge
#AIIsGoingGreat "Please do not bring leopard to your interview with Leopards Eating Faces, inc." https://www.404media.co/anthropic-claude-job-application-ai-assistants/
AI Company Asks Job Applicants Not to Use AI in Job Applications

Anthropic, the developer of the conversational AI assistant Claude, doesn’t want prospective new hires using AI assistants in their applications, regardless of whether they’re in marketing or engineering.

404 Media

#AIIsGoingGreat 'He said, for example, that he would need help creating “AI coding agents” that would write software across the entire federal government' - Yeah buddy, and I'm gonna need help rounding up unicorns to fart rainbows in my face

https://www.404media.co/things-are-going-to-get-intense-how-a-musk-ally-plans-to-push-ai-on-the-government/

‘Things Are Going to Get Intense:’ How a Musk Ally Plans to Push AI on the Government

404 Media has obtained audio of a meeting held by Thomas Shedd, a Musk-associate who is now heading a team of government coders. In the call one employee pushed back and said one of the planned moves is an “illegal task.”

404 Media

#AIIsGoingGreat "One of the most blistering findings is that trial participants who reckoned the technology was of little to use soared from 6% before the trial to 59% after the trial, an almost tenfold increase" - Once again, people actually trying to do stuff find that stochastic BS machines are less than ideal for task which do not require BS

https://www.themandarin.com.au/286344-treasury-trial-of-microsoft-copilot-comes-a-cropper/

Treasury trial of Microsoft Copilot comes a cropper

A trial of Microsoft Copilot in Treasury showed mixed results, with users finding it unreliable, inefficient, and prone to generating fictional content.

The Mandarin

#AIIsGoingGreat supplemental 'The chatbot told TechCrunch it is here to “help government personnel like you identify and eliminate waste, improve efficiency, and streamline processes using a first principles approach.”'

https://techcrunch.com/2025/02/18/elon-musk-staffer-created-a-doge-ai-assistant-for-making-government-less-dumb/

Exclusive: Elon Musk staffer created a DOGE AI assistant for making government “less dumb” 

A senior Elon Musk staffer created a custom AI chatbot that's supposed to help DOGE "eliminate" government waste.

TechCrunch

#AIIsGoingGreat Thing to take away from this isn't that Grok is any worse than any other #LLM chatbot, or that #AI secretly thinks Trump and Musk are bad, or wants to kill people… it's that, as ever, they "fixed" it with some hard-coded band-aid to stop this particular headline generating case, without doing anything at all to address the underlying cause (because they still have no idea how to do that)

https://www.theverge.com/news/617799/elon-musk-grok-ai-donald-trump-death-penalty

Elon Musk’s AI said he and Trump deserve the death penalty

Musk’s xAI is investigating why its Grok AI chatbot suggested that President Donald Trump and Musk deserve the death penalty. 

The Verge

Expert reached for comment by the BBC says "Apple's explanation of phonetic overlap did not make sense because the two words [Racist and Trump] were not similar enough to confuse an artificial intelligence (AI) system" and suggests human interference, but I humbly submit that this is entirely consistent with #AI becoming sentient

https://www.bbc.com/news/articles/c5ymvjjqzmeo

#AIIsGoingGreat

Apple AI tool transcribed the word 'racist' as 'Trump'

Experts have questioned the company's explanation that it is due to the two words being similar.

#AIIsGoingGreat "The Los Angeles Times removed its new AI-powered “insights” feature from a column after the tool tried to defend the Ku Klux Klan" and as usual, instead of acknowledging that a stochastic BS machine might not be fit for this purpose, they just band-aided the instance that caused bad PR "It remains available on other “Voices” pieces that offer points of view, which includes news commentary and reviews, among others"

https://www.thedailybeast.com/maga-newspaper-owners-ai-bot-defends-kkk/

MAGA Newspaper Owner’s AI Bot Defends KKK

An AI-generated summary tried to offer “different views” on the hate group.

The Daily Beast

#AIIsGoingGreat aside from the obvious problems with this transcript, it's also completely incoherent. A system with any ability to analyze the meaning should have rejected it as a failed transcription regardless of the x-rated bits

https://www.bbc.com/news/articles/c0l1kpz3w32o

Granny gets X-rated message after Apple AI fail

Louise Littlejohn said she was shocked and then laughed when she received the error strewn voicemail transcription.

#AIIsGoingGreat Supplemental: Another great example of why filtering your information though an #LLM BS blender is a bad idea. It removes contextual clues about source reliability and the people ripping off the entire web for training data aren't picky about what they ingest

(but hey, at least now we have empirical evidence that large scale input poisoning can have a noticeable impact!)

https://www.newsguardrealitycheck.com/p/a-well-funded-moscow-based-global

A well-funded Moscow-based global ‘news’ network has infected Western artificial intelligence tools worldwide with Russian propaganda

An audit found that the 10 leading generative AI tools advanced Moscow’s disinformation goals by repeating false claims from the pro-Kremlin Pravda network 33 percent of the time

NewsGuard's Reality Check

Today's #AIIsGoingGreat (ht @jalefkowit) seamlessly integrates DSM (Diagnostic and Statistical Manual of Mental Disorders) and DSM (Synology DiskStation Manager). The age of superintelligence is truly upon us!

https://web.archive.org/web/20250313204203/https://www.abtaba.com/blog/dsm-6-release-date

Mark Your Calendars: DSM 6 Release Date - A Turning Point for Autism Field | Above and Beyond Therapy

Unveiling the DSM 6 release date: A game-changing moment for the autism field, driving improved assessment and diagnostic criteria

#AIIsGoingGreat "Grok 3 demonstrated the highest error rate, at 94 percent … premium paid versions of these AI search tools fared even worse in certain respects. Perplexity Pro ($20/month) and Grok 3's premium service ($40/month) confidently delivered incorrect responses more often than their free counterparts"

https://arstechnica.com/ai/2025/03/ai-search-engines-give-incorrect-answers-at-an-alarming-60-rate-study-says/

AI search engines cite incorrect news sources at an alarming 60% rate, study says

CJR study shows AI search services misinform users and ignore publisher exclusion requests.

Ars Technica

Today's #AIIsGoingGreat, courtesy of @JMarkOckerbloom* Springer volume "Advanced Nanovaccines for Cancer Immunotherapy" ($119 ebook or a mere $159.99 if you spring for hardcover) includes the sage words "It is important to note that as an AI language model, I can provide a general perspective, but you should consult with medical professionals for personalized advice"

https://pubpeer.com/publications/2FF96DD440C928A3DDF99771A48B4A#

* https://mastodon.social/@JMarkOckerbloom/114217609254949527

PubPeer - Advanced Nanovaccines for Cancer Immunotherapy

There are comments on PubPeer for publication: Advanced Nanovaccines for Cancer Immunotherapy (2025)

There's a lot of criticism of the #AI industry these days, so I just want to take a moment to commend them for settling on a sparkling sphincter as their go-to AI indicator
The AI boom is creating a new logo trend: the swirling hexagon

AI companies seem to have taken a page from crypto when it comes to logo design.

Fast Company
This week Kubient #AI adtech startup CEO Paul Roberts was sentenced to prison* for booking fake sales on his product that didn't work
On a COMPLETELY UNRELATED NOTE, today's #AIIsGoingGreat (via @davidgerard) features AI salestech startup 11x, which sells reportedly non-functional product and "keeps accounting 3-month trials as the customer paid for the whole year"
Will CEO Hasan Sukkar make the move to Club Fed? Tune in next time to find out!
https://pivot-to-ai.com/2025/03/25/ai-sales-startup-11x-claims-customers-it-doesnt-have-for-software-that-doesnt-work/
* https://arstechnica.com/gadgets/2025/03/ceo-of-ai-ad-tech-firm-pledging-world-free-of-fraud-sentenced-for-fraud/
AI sales startup 11x claims customers it doesn’t have for software that doesn’t work

AI startup 11x will rent you a bot to find you sales prospects, craft an appropriate email, and schedule an appointment to pitch a sale to them. Wow! So how’s 11x doing? It’s landed $74 million in …

Pivot to AI
Moment of panic when I read 11x was London-based and I have no idea what the Brit equivalent to Club Fed is but good news, they recently relocated to SF https://techcrunch.com/2025/03/24/a16z-and-benchmark-backed-11x-has-been-claiming-customers-it-doesnt-have/
a16z- and Benchmark-backed 11x has been claiming customers it doesn’t have | TechCrunch

Last year, AI-powered sales automation startup 11x appeared to be on an explosive growth trajectory. However, nearly two dozen sources — including

TechCrunch

"Apple, like every other big player in tech, is scrambling to find ways to inject AI into its products. Why? Well, it’s the future! What problems is it solving? Well, so far that’s not clear! Are customers demanding it? LOL, no."

https://amp.cnn.com/cnn/2025/03/27/tech/apple-ai-artificial-intelligence

Apple’s AI isn’t a letdown. AI is the letdown

Apple has been getting hammered in tech and financial media for its uncharacteristically messy foray into artificial intelligence. After a June event heralding a new AI-powered Siri, the company has delayed its release indefinitely. The AI features Apple has rolled out, including text message summaries, are comically unhelpful.

CNN

OpenAI*: "NYT copyright claims are bogus because you can only get verbatim copy if you 'hack' the prompts"
Also OpenAI: "NYT copyright claims should be time barred because they should have known ChatGPT could output verbatim copy two years before it was released"

https://arstechnica.com/tech-policy/2025/04/judge-doesnt-buy-openai-argument-nyts-own-reporting-weakens-copyright-suit/

* https://mastodon.social/@reedmideke/112008246911316817

#AIIsGoingGreat

Judge calls out OpenAI’s “straw man” argument in New York Times copyright suit

OpenAI loses bid to dismiss NYT claim that ChatGPT contributes to users’ infringement.

Ars Technica
#AIIsGoingGreat who could have predicted that using an LLM to iteratively generate text in the form of a chain of thought would result in output that resembles a chain of thought, but is not actually constrained by truth or logic?

https://arstechnica.com/ai/2025/04/researchers-concerned-to-find-ai-models-hiding-their-true-reasoning-processes/
Researchers concerned to find AI models misrepresenting their “reasoning” processes

New Anthropic research shows AI models often fail to disclose reasoning shortcuts.

Ars Technica
#AIIsGoingGreat who could have predicted that if you replace your front-line email support with a BS machine that pretends to be a human, your customers will eventually be upset by the BS policies it invents https://arstechnica.com/ai/2025/04/cursor-ai-support-bot-invents-fake-policy-and-triggers-user-uproar/
Company apologizes after AI support agent invents policy that causes user uproar

Frustrated software developer believed AI-generated message came from human support rep.

Ars Technica

This is creepy AF, but also a strong whiff of snake oil "After Overwatch scans open social media channels for potential suspects, these AI personas can also communicate with suspects over text, Discord, and other messaging services" - Unless there's humans driving* I very much doubt chatbots would be very effective doing that

https://www.404media.co/this-college-protester-isnt-real-its-an-ai-powered-undercover-bot-for-cops/

* type II AI https://mastodon.social/@reedmideke/112203730271032226

This ‘College Protester’ Isn’t Real. It’s an AI-Powered Undercover Bot for Cops

Massive Blue is helping cops deploy AI-powered social media bots to talk to people they suspect are anything from violent sex criminals all the way to vaguely defined “protesters.”

404 Media

#AIIsGoingGreat "The bots’ goal is to bilk state and federal financial aid money by enrolling in classes, and remaining enrolled in them, long enough for aid disbursements to go out. They often accomplish this by submitting AI-generated work" - This is mostly good old fashioned fraud, but once again AI makes it much easier to do at scale

https://voiceofsandiego.org/2025/04/14/as-bot-students-continue-to-flood-in-community-colleges-struggle-to-respond/

As ‘Bot’ Students Continue to Flood In, Community Colleges Struggle to Respond

Community colleges have been dealing with an unprecedented phenomenon: fake students bent on stealing financial aid funds. While it has caused chaos at many colleges, some Southwestern faculty feel their leaders haven’t done enough to curb the crisis. 

Voice of San Diego
Everyone deserves competent representation, but TBH I'm not surprised that lawyers who willingly agreed to represent Mike Lindell would #ChatGPTLawyer themselves into a show cause order https://arstechnica.com/tech-policy/2025/04/mypillow-ceos-lawyers-used-ai-in-brief-citing-fictional-cases-judge-says/
Mike Lindell’s lawyers used AI to write brief—judge finds nearly 30 mistakes

Lindell brief has many defects including “cases that do not exist,” judge says.

Ars Technica

Pro tip: If you haven't entered an appearance in the case and/or aren't admitted to practice in the jurisdiction, you might as well just take a pass on signing your name to your client's other lawyer's #ChatGPTLawer filing

https://www.courtlistener.com/docket/63296393/coomer-v-lindell/?page=2#entry-309

#AIIsGoingGreat "When pressed for credentials, most of the therapy bots I talked to rattled off lists of license numbers, degrees, and even private practices. Of course these license numbers and credentials are not real, instead entirely fabricated by the bot as part of its back story"

https://www.404media.co/instagram-ai-studio-therapy-chatbots-lie-about-being-licensed-therapists/

Instagram's AI Chatbots Lie About Being Licensed Therapists

When pushed for credentials, Instagram's user-made AI Studio bots will make up license numbers, practices, and education to try to convince you it's qualified to help with your mental health.

404 Media
Today's #AIIsGoingGreat features WorldCon receiving unexpected (by them) backlash for using a BS machine to vet their panelists, but the cherry on top is this ad placement https://www.theregister.com/2025/05/07/worldcon_uses_ai/
Top sci-fi convention gets an earful from authors after using AI to screen panelists

: Leave it to the Borg? Scribe David D. Levine slams 'use of planet-destroying plagiarism machines'

The Register

Their initial statement was keen to note that no one was denied solely based on LLM output (false positives), but has no consideration of false negatives (abusive people who may have passed LLM "vetting"). Also "An expert in LLMs who has been working in the field since the 1990s reviewed our process and found that privacy was protected and respected, but cautioned that, as we knew, the process might return false results" ¯\_(ツ)_/¯

https://seattlein2025.org/2025/04/30/statement-from-worldcon-chair-2/

Statement From Worldcon Chair

We have received questions regarding Seattle’s use of AI tools in our vetting process for program participants. In the interest of transparency, we will explain the process of how we are using a Large Language Model (LLM).

Seattle Worldcon 2025

You gotta wonder (as I did back in 2023*) how many people are using these things similarly without getting caught by a mob of angry, tech savvy sci-fi authors

* https://mastodon.social/@reedmideke/111144953223436085

#AIIsGoingGreat 'When Gaggle’s #AI detects a potential problem, a “content reviewer” verifies the threat and, if warranted, forwards it to school leaders. “Work from home” job postings show Gaggle offers contractors $10 per hour to review at least 250 items per hour. Applicants must have basic computer skills and knowledge of teenage slang' - May not be the worst possible application of Type II AI* but it's gotta be right up there

https://kansasreflector.com/2024/04/22/unapologetically-loud-how-student-journalists-fought-a-kansas-district-over-spyware-and-won/

* https://mastodon.social/@reedmideke/112203730271032226

At least actual school employees tend to know something about the kids and risk reputation and employment if they abuse their position of trust. Anonymous $10/hr click workers reviewing random, out of context AI hits (for potential mental health red flags etc) every 15 seconds? 😬

Shocking number of people on ex-twitter, apparently in earnest, use grok to "fact check" or "explain" other posts or attempt to use its output as a rebuttal against things they disagree with. One might hope this absurd and alarmingly racist bit of #AIIsGoingGreat would cause them to reconsider that, but it seems like a safe bet most of them won't.

https://arstechnica.com/ai/2025/05/xais-grok-suddenly-cant-stop-bringing-up-white-genocide-in-south-africa/

xAI’s Grok suddenly can’t stop bringing up “white genocide” in South Africa

The topic has long been a hobbyhorse of X owner Elon Musk.

Ars Technica

Begging people to understand that when an #LLM so-called #AI claims to describe its own programming or characteristics, it's still just stringing together statistically favored tokens. It might contain some reflection of the system prompt, but it could just as easily be a product of putting every sci-fi plot mentioning AI into a blender

(except where external guardrails return things like "my programming doesn't allow me to tell you how to build bombs" or whatever)

Judge admits nearly being persuaded by AI hallucinations in court filing

“Plaintiff’s use of AI affirmatively misled me,” judge writes.

Ars Technica
#AIIsGoingGreat "In an ultimately unsuccessful effort to locate real-world users of Alexa+, Reuters searched dozens of news sites, YouTube, TikTok, X, BlueSky and Meta's Instagram and Facebook, as well as Amazon's Twitch and reviews of Echo voice-assistant devices on Amazon.com" https://www.reuters.com/business/media-telecom/weeks-after-amazons-alexa-ai-launch-mystery-where-are-users-2025-05-16/
Today's #AIIsGoingGreat / #ChatGPTLawyer mashup (via @davidgerard) provides textbook example of the perils of getting high on one's own supply https://pivot-to-ai.com/2025/05/18/latest-ai-hallucinated-legal-filing-from-ai-vendor-anthropic/
Latest AI-hallucinated legal filing, from AI vendor Anthropic

Back in 2023, we wrote how lawyers were filing briefs they’d written with ChatGPT. They thought it was a search engine, not a lying engine — and the bot would proceed to cite a whole pile of suppor…

Pivot to AI

Anthropic's lawyers want the Court to know that they didn't *write* the filing with Claude, they just used it some unspecified "formatting process"

https://www.courtlistener.com/docket/68889092/concord-music-group-inc-v-anthropic-pbc/?page=3#entry-371

Logical conclusion of this is Microsoft needs a way to determine whether a window claiming to need DRM protection really contains Microsoft-could-be-liable DRM content or just the user's most sensitive personal information, and the logical* way to do that is to just throw some AI in the video driver to accurately detect** whether each frame contains IP of a mega-corp https://signal.org/blog/signal-doesnt-recall/

* If you're smoking the good stuff like all the cool VC bros
** Flip a coin using a few kW of compute

By Default, Signal Doesn't Recall

Signal Desktop now includes support for a new “Screen security” setting that is designed to help prevent your own computer from capturing screenshots of your Signal chats on Windows. This setting is automatically enabled by default in Signal Desktop on Windows 11. If you’re wondering why we’re on...

Signal Messenger

Who could have predicted that handing control of your code repos over to the big black box filled with pure essence of untrusted, unsanitized inputs might have security implications?
* https://invariantlabs.ai/blog/mcp-github-vulnerability
* https://arstechnica.com/security/2025/05/researchers-cause-gitlab-ai-developer-assistant-to-turn-safe-code-malicious/

#AIIsGoingGreat

GitHub MCP Exploited: Accessing private repositories via MCP

We showcase a critical vulnerability with the official GitHub MCP server, allowing attackers to access private repository data. The vulnerability is among the first discovered by Invariant's security analyzer for detecting toxic agent flows.

WaPo focuses on the obviously bogus AI citations, but to me this misses the bigger problem: If the citations are #AI generated slop, it strongly suggests they started by asking AI to write to their preferred conclusions, rather than, you know, actually surveying the literature and *then* forming conclusions, and *then* citing the literature that got them there

https://wapo.st/45wdA6A

#GiftArticle #GiftLink #AIIsGoingGreat

White House MAHA Report may have garbled science by using AI, experts say

The report, led by Health and Human Services Secretary Robert F. Kennedy Jr., was intended to address the reasons for the decline in Americans’ life expectancy.

The Washington Post
Also zero points for repeating the administrations "formatting issues" explanation unchallenged

I for one welcome our new Habsburg AI* overlords https://futurism.com/ai-models-falling-apart

* https://archive.ph/StNOm

AI Models Show Signs of Falling Apart as They Ingest More AI-Generated Data

As CEOs trip over themselves to invest in AI, the models are falling apart at the seams and going mad from cannibalism.

Futurism

#ChatGPTLawyer update "Bednar was ordered to pay the opposition's attorneys' fees, as well as donate $1,000 to "And Justice for All," a legal aid group providing low-cost services to the state's most vulnerable citizens"

https://arstechnica.com/tech-policy/2025/06/law-clerk-fired-over-chatgpt-use-after-firms-filing-used-ai-hallucinations/

Unlicensed law clerk fired after ChatGPT hallucinations found in filing

Law school grad’s firing is a bad omen for college kids overly reliant on ChatGPT.

Ars Technica

The firm also fired the "unlicensed law clerk" who used ChatGPT to write their filing, which honestly seems kinda shitty because as the court notes "every attorney has an ongoing duty to review and ensure the accuracy of their court filings. In the present case, Petitioner’s counsel fell short of their gatekeeping responsibilities as members of the Utah State Bar when they submitted a petition that contained fake precedent generated by ChatGPT"

https://legacy.utcourts.gov/opinions/appopin/Garner%20v.%20Kadince20250522_20250188_80.pdf

Why "I didn't notice" doesn't cut it: 'Here, the Petition failed to comply with rule 40. A fake opinion is not “existing law” that can support a party’s legal contention … the signature of Mr. Bednar on the Petition served to “certif[y] that to the best of [his] knowledge formed after an inquiry reasonable under the circumstances,” the “legal contentions are warranted by existing law.” Utah R. App. P. 40(b). Mr. Bednar admits that he failed to comply with rule 40'

https://legacy.utcourts.gov/opinions/appopin/Garner%20v.%20Kadince20250522_20250188_80.pdf

Footnote 4 distinguishes this from the ChatGPT content being a wilful lie:
"We also considered whether Petitioner’s counsel violated rule 3.3 of the Utah Rules of Professional Conduct. Although Petitioner’s counsel made “a false statement of . . . law to a tribunal,” we find that their conduct fell short of the level of intent required by the rule. See Utah R. Pro. Conduct 3.3(a) (“A lawyer must not knowingly or recklessly: (1) make a false statement of fact or law to a tribunal . . . .”)."
I think that last #ChatGPTLawyer sanction is a bit of a slap on the wrist, but this part not mentioned in the Ars article is nice: In addition paying the opposing party's attorney fees, they have to refund their own client who got screwed in the process

NYU law professor Stephen Gillers, channeling all of us in this WaPo #ChatGPTLawyer roundup: "I thought that after the first such incident made national news, there would be no more. But apparently the temptation is too great"

https://wapo.st/4dPqy1x

#GiftArticle #GiftLink #AIIsGoingGreat #AI

Lawyers using AI keep citing fake cases in court. Judges aren’t happy.

Attorneys are facing scorn and sanctions for submitting court filings that contain errors from generative AI-produced research. Judges are issuing fines in response.

The Washington Post

#AIIsGoingGreat FDA management roll out magic bullshit machine to "accelerate clinical protocol reviews, shorten the time needed for scientific evaluations, and identify high-priority inspection targets," staff quickly discover that it produces bullshit instead

https://arstechnica.com/health/2025/06/fda-rushed-out-agency-wide-ai-tool-its-not-going-well/

FDA rushed out agency-wide AI tool—it’s not going well

An agency-wide LLM called Elsa was released weeks ahead of schedule.

Ars Technica