Infosec people: Untrusted, unsanitized inputs have been the bane of our existence for the last 40 years
Tech CEOs: We're betting billions of dollars the next big thing is a black box filled with pure essence of untrusted, unsanitizable inputs

https://arstechnica.com/security/2024/09/false-memories-planted-in-chatgpt-give-hacker-persistent-exfiltration-channel/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social

#AIIsGoingGreat

Hacker plants false memories in ChatGPT to steal user data in perpetuity

Emails, documents, and other untrusted content can plant malicious memories.

Ars Technica

Microsoft: If we add just one more <s>overbalanced wheel</s> layer of BS generators to our <s>over-unity machine</s> AI, it will really work this time for sure!

https://www.theverge.com/2024/9/24/24253452/microsoft-correction-ai-safety-tool-fix-errors

#AIIsGoingGreat

Microsoft claims its AI safety tool not only finds errors but also fixes them

Microsoft is launching a new correction feature in its Azure AI Studio that can identify, flag, and correct inaccurate outputs from AI models.

The Verge

OG #ChatGPTLawyer-as-a-service bro Joshua Browder of DoNotPay gets a slap on the wrist from the FTC. DoNotPay spokesperson says they're "pleased to have worked constructively with the FTC to settle this case and fully resolve these issues, without admitting liability" and I bet they spent a pile of money on real lawyers to get there. Oh, and they also paid the FTC $193,000

https://arstechnica.com/tech-policy/2024/09/startup-behind-worlds-first-robot-lawyer-to-pay-193k-for-false-ads-ftc-says/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social

DoNotPay has to pay $193K for falsely touting untested AI lawyer, FTC says

You can't "sue anyone with a click of a button" without testing it first, FTC says.

Ars Technica
Meanwhile Zuck says that since Meta is ripping off billions of people, the fact they ripped off any specific individual is trifling and insignificant https://www.theverge.com/2024/9/25/24254042/mark-zuckerberg-creators-value-ai-meta
Mark Zuckerberg: creators and publishers ‘overestimate the value’ of their work for training AI

Meta CEO Mark Zuckerberg says the company could strike partnerships for “valuable” content to train AI tools, but that it could walk away from paying others.

The Verge
But today's #AIIsGoingGreat star is undoubtedly HP, who are doing their part to pop the AI bubble by associating it with their ink extortion racket https://www.theverge.com/2024/9/25/24254129/hp-print-ai-beta-launch-printers
Finally, HP is adding AI to its printers

HP is launching new Print AI features that can optimize webpages and spreadsheets for printing as well as customize photos for greeting cards.

The Verge
Today's #AIIsGoingGreat features erstwhile expert witness Charles Ranson who "was adamant in his testimony that the use of Copilot or other artificial intelligence tools, for drafting expert reports is generally accepted in the field of fiduciary services and represents the future of analysis of fiduciary decisions;" but "could not name any publications regarding its use or any other sources to confirm that it is a generally accepted methodology"
https://arstechnica.com/tech-policy/2024/10/judge-confronts-expert-witness-who-used-copilot-to-fake-expertise/
Expert witness used Copilot to make up fake damages, irking judge

Judge calls for a swift end to experts secretly using AI to sway cases.

Ars Technica
"Despite his reliance on artificial intelligence, Mr. Ranson could not recall what input or prompt he used to assist him with the Supplemental Damages Report. He also could not state what sources Copilot relied upon and could not explain any details about how Copilot works or how it arrives at a given output. There was no testimony on whether these Copilot calculations considered any fund fees or tax implications" https://law.justia.com/cases/new-york/other-courts/2024/2024-ny-slip-op-24258.html
Matter of Weber

Matter of Weber - 2024 NY Slip Op 24258

Justia Law
While the immediate fault is obviously Ranson's, this is also an entirely foreseeable result of tech companies marketing these things as magic answer boxes, no matter how many CYA disclaimers they put in the fine print
A product so good it sells itself (if you give it away free and throw in a $2.5 million cash sweetener) https://www.theverge.com/2024/10/22/24276747/microsoft-openai-news-outlets-10-million-ai-tools
Microsoft and OpenAI are giving news outlets $10 million to use AI tools

Microsoft and OpenAI are offering news outlets like The Seattle Times and The Minnesota Star Tribune up to $10 million to experiment with and use AI tools.

The Verge

"The White House is directing the Pentagon and intelligence agencies to increase their adoption of artificial intelligence" 🤨
"The memo also specifically requires agencies to monitor the risk AI systems can pose when it comes to privacy, discrimination and human rights" - I'd hope they're also required to monitor the risk it makes shit up
(yeah, a lot of militarily relevant AI isn't genAI but still)

https://www.washingtonpost.com/technology/2024/10/24/white-house-ai-nation-security-memo/

White House orders Pentagon and intel agencies to increase use of AI

The Biden administration will use a national security memo to direct agencies to embrace artificial intelligence, as the United States competes with China.

The Washington Post
Cybercheck has secured murder convictions. It appears to just run websites through a chatbot

Cybercheck, from Global Intelligence, claims it can find the key evidence to nail down a case. Cybercheck reports have been involved in at least two murder convictions. Cybercheck hands the police …

Pivot to AI

What could be better than having your medical visits transcribed by an #AI prone to making shit up? Deleting the original so no one can prove it "It’s impossible to compare Nabla’s AI-generated transcript to the original recording because Nabla’s tool erases the original audio for “data safety reasons,” Raison said"

https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14

#AIIsGoingGreat

Researchers say AI transcription tool used in hospitals invents things no one ever said

Whisper is a popular transcription tool powered by artificial intelligence, but it has a major flaw. It makes things up that were never said. Whisper was created by OpenAI. It's being used in many industries worldwide to translate and transcribe interviews, generate text in popular consumer technologies and create subtitles for videos. OpenAI has promoted Whisper as having near “human level robustness and accuracy." But more than a dozen computer scientists and software developers tell The Associated Press that isn’t always the case and that it's prone to making up chunks of text and even entire sentences. An OpenAI spokesperson says the company studies how to reduce that and updates its models incorporating feedback received.

AP News

So at first glance, this is just a typical #AIIsGoingGreat - Alaska Education Commissioner Deena Bishop used spicy autocomplete and it made shit up like it so often does, but also… the excuse about the bogus citations being "placeholders" seems like a clear admission she started with the desired policy (restrict smartphones in schools) and then tried to generate a post-hoc justification, without even doing a basic literature review

https://alaskabeacon.com/2024/10/28/alaska-education-department-published-false-ai-generated-academic-citations-in-cell-policy-document/

False citations show Alaska education official relied on generative AI, raising broader questions • Alaska Beacon

Department of Education and Early Development Commissioner Bishop said the false citations were in a draft she used generative AI to create.

Alaska Beacon

Today's #AIIsGoingGreat: German journalist Martin Bernklau discovers Microsoft #Copilot says he committed crimes he reported on, and also helpfully provides directions to his home. Microsoft subsequently seems to have taken the typical band-aid approach and blocked his name… because, of course, none of these companies setting billions on fire to chase #AI hype have any idea how to solve the general case of LLMs making shit up

https://www.abc.net.au/news/2024-11-04/ai-artificial-intelligence-hallucinations-defamation-chatgpt/104518612

AI hallucinations caused artificial intelligence to falsely describe these people as criminals

Unprecedented legal battles are testing if parent companies of tools like ChatGPT can be liable for defamation when innocent people are incorrectly described as criminals.

ABC News
Admit I've been a skeptic, but it looks like the payoff for the billions of dollars the tech industry dumped into AI is here: "Microsoft is adding AI-powered themes to Outlook … this AI-powered feature will require a Copilot Pro or business license to add a more personalized look to Microsoft’s email client… You’ll be able to create a theme based on the weather or locations, and they can dynamically update every few hours, each day, weekly, or monthly" https://www.theverge.com/2024/11/7/24290273/microsoft-outlook-ai-themes-copilot
#AIIsGoingGreat
Microsoft Outlook now has dynamic AI-powered themes

Microsoft is adding AI-powered themes to its Outlook email client. You’ll need a Copilot license to use them, and they can dynamically update.

The Verge
In today's #AIIsGoingGreat (ht @daedalus), a franchisee of Australian real estate firm LJ Hooker demonstrates what a crock of shit "have an #LLM write it and a human check it" usually is: If it saves you time, it's a pretty good indication your humans are not actually checking it in a meaningful way
https://www.theguardian.com/australia-news/2024/nov/11/lj-hooker-branch-used-ai-to-generate-real-estate-listing-with-non-existent-schools
LJ Hooker branch used AI to generate real estate listing with non-existent schools

Agency apologises after an ad said a house in Farley, NSW, was close to two ‘excellent’ schools even though there are none in the town

The Guardian

Also real estate dude's process is a pretty perfect anti-usecase: "Huynh said he would usually input the address of a rental property and the basic description such as how many bedrooms and bathrooms it had into ChatGPT"
At the very best, all an #LLM can add is irrelevant fluff or widely known facts about the general region. It cannot reliably add factual information about individual houses or neighborhoods, and more often it'll just make shit up

#AIIsGoingGreat

Oh, the team involved in that "AI scientist" preprint I dunked on earlier* included "researchers from the buzzy Tokyo-based startup Sakana AI"

Anyway they allow that their "scientist" making up 10% of the numbers in its "papers" is "probably unacceptable" and then go on to talk about how it could be improved without addressing the possibility that making shit up is an inherent characteristic of LLMs https://spectrum.ieee.org/ai-for-science-2

* https://mastodon.social/@reedmideke/112957617464258809

Will the "AI Scientist" Bring Anything to Science?

A tool to take over the scientific process continues a controversial trend

IEEE Spectrum

Today's #AIIsGoingGreat "…results from a hard-coded filter that puts the brakes on the AI model's output before returning it to the user" - Demonstrating once again that despite setting hundreds of billions of dollars on fire, #LLM #AI companies have no idea how to solve the "hallucination" (aka making shit up) problem in the general case. Their best solution is hard coded checks for individual phrases that might expose them to excessive legal costs

https://arstechnica.com/information-technology/2024/12/certain-names-make-chatgpt-grind-to-a-halt-and-we-know-why/

Certain names make ChatGPT grind to a halt, and we know why

Filter resulting from subject of settled defamation lawsuit could cause trouble down the road.

Ars Technica
It shouldn't need to be said that there's no conceivable way band-aiding results that trigger legal threats will scale to make #LLM chatbots a generally reliable source of information, but some trillion dollar stock valuations suggest it does in fact need to be said, loudly and repeatedly

Today's #AIIsGoingGreat: Hard to see how drowning volunteer developers in #AI slop vulnerability reports could possibly go wrong. Great work everyone, throw another billion on the #LLM BS machine bonfire to celebrate!

https://sethmlarson.dev/slop-security-reports

New era of slop security reports for open source

I'm on the security report triage team for CPython, pip, urllib3, Requests, and a handful of other open source projects. I'm also in a trusted position such that I get "tagged in" to other open sou...

sethmlarson.dev
Today's #AIIsGoingGreat: Hard to see how anything could go wrong with a health insurer filtering their SOPs through a bullshit generating machine (Optum claims it was just a POC that wasn't used operationally, but even getting that far ain't a great sign) https://techcrunch.com/2024/12/13/unitedhealthcares-optum-left-an-ai-chatbot-used-by-employees-to-ask-questions-about-claims-exposed-to-the-internet/
UnitedHealth's Optum left an AI chatbot, used by employees to ask questions about claims, exposed to the internet | TechCrunch

Optum's AI chatbot was found exposed online at a time when the healthcare giant faces scrutiny for its use of AI to allegedly deny patient claims.

TechCrunch

#AIIsGoingGreat: 'correspondence seen by TechCrunch shows that previously, the guidelines read: “If you do not have critical expertise (e.g. coding, math) to rate this prompt, please skip this task.”
But now the guidelines read: “You should not skip prompts that require specialized domain knowledge.” Instead, contractors are being told to “rate the parts of the prompt you understand” and include a note that they don’t have domain knowledge'

https://techcrunch.com/2024/12/18/exclusive-googles-gemini-is-forcing-contractors-to-rate-ai-responses-outside-their-expertise/

Exclusive: Google's Gemini is forcing contractors to rate AI responses outside their expertise

Internal guidelines passed down from Google led to concerns that the AI model could be prone to inaccurate outputs on topics like healthcare.

TechCrunch
Hard to imagine google has a human go through every response and deal with the notes, so presumably they're using AI for that part too…
Today's #AIIsGoingGreat, courtesy of Meta. Like so many others, it leaves unanswered the obvious question: "Who the fuck do they think wants this?"
https://www.404media.co/metas-ai-profiles-are-indistinguishable-from-terrible-spam-that-took-over-facebook/
Meta's AI Profiles Are Indistinguishable From Terrible Spam That Took Over Facebook

The Meta AI profiles everyone is mad about are old, were a colossal failure, and many are already dead.

404 Media

Today's #AIIsGoingGreat via @telescoper: As he notes, google used to be quite OK for this kind of thing. Sure, you still needed to check whether the top result was from a reliable source, but it usually was, and unlike results run through the #LLM BS blender, you could do so at a glance

https://telescoper.blog/2025/01/05/google-garbage/

Google Garbage

In the course of double-checking the time of perihelion for yesterday’s post I did a quick Google search. What came up first was this: Google search results nowadays are prefaced by a short s…

In the Dark
One might say, "well, does it really matter if a random googler gets the perihelion time wrong by a few hours? People who really need to know should use JPL Horizons or whatever anyway" and OK, the odds of immediate real world harm in this case are low. But if google's "AI" isn't reliable for objective facts with widely recognized authoritative sources, why would one expect it to be reliable for anything else?

Altman's latest blog strikes me as a lot of hand-wavy CEO-speak, but I actually agree with this "in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies" … with the small caveat that the average "material change" is unlikely to be in a positive direction

https://blog.samaltman.com/reflections

Reflections

The second birthday of ChatGPT was only a little over a month ago, and now we have transitioned into the next paradigm of models that can do complex reasoning. New years get people in a reflective...

Sam Altman

Meanwhile, Apple responds to the predictable result of running notifications through a blender with #LLM BS: "Apple Intelligence features are in beta and we are continuously making improvements with the help of user feedback… A software update in the coming weeks will further clarify when the text being displayed is summarization provided by Apple Intelligence"

https://www.bbc.com/news/articles/cge93de21n0o

#AIIsGoingGreat

Apple urged to withdraw 'out of control' AI news alerts

Apple has pledged improvements to its news summarising tool, but critics say it is dangerous and needs to be withdrawn.

Notably, despite the lip service to "continuous improvements" they don't suggest they're going to fix the underlying problem that #LLMs generate BS, but rather that they'll add more visible CYA disclaimers. Which again, is a pretty good indication they have no idea how to fix the actual problem
I'll start taking #AI companies' claims that their products are the Next Big Thing more seriously when their lawyers let them go out in public without a big fat "this is beta, if you need any information about anything that actually matters have an expert double check every word of the answer" disclaimer
Charging people to sort out the messes they made attempting to build their infrastructure with spicy autocomplete could be very profitable indeed 😉 https://www.theverge.com/24338171/aws-ceo-matt-garman-ai-chips-anthropic-cloud-computing-trainium-decoder-podcast-interview
Why CEO Matt Garman is willing to bet AWS on AI

The new chief of AWS on Anthropic, AI chips, and the future of the cloud.

The Verge

This whole thread of #Google #AIIsGoingGreat with fractions is a good illustration of why I'm skeptical of the "sure, it has bugs, but they're fixing them, just like any other software" takes. IMO you can't band-aid a system with no concept of what a fraction is to get this right in the general case, and even if you somehow recognize questions about fractions, there's an unlimited number of other cases where autocomplete is similarly inappropriate

https://mastodon.social/@lauren@mastodon.laurenweinstein.org/113771300586021845

This one with 25.4 == 1 in particular is a great example of how probabilistic completions go off the rails

https://mastodon.social/@[email protected]/113772004311087469

[OpenWrt Wiki] IEEE 802.11s Wireless Mesh Networking

One objection #AI pessimists hear a lot is that big tech execs wouldn't be dumping billions into it if it were as bad as people say, because, you know, they're smart guys, right? Anyway…

https://www.404media.co/zuckerberg-loves-ai-slop-image-from-spam-account-that-posts-amputated-children/

#AIIsGoingGreat

Zuckerberg 'Loves' AI Slop Image From Spam Account That Posts Amputated Children

Zuckerberg seems to enjoy the spam that has taken over his flagship product.

404 Media

LOL, bots mindlessly boosting every f-ing post in this thread with the #AI tag is 👨🏻‍🍳🤌

(also, what's the point of a bot that just boosts posts with a hashtag? Do they not know people can follow hashtags?)

Now if you *actually believed* #LLM BS generators were the path to the post-singularity AGI utopia, wouldn't the news that it can be done cheaper with less advanced hardware be overwhelmingly positive, regardless of the short term impact on some individual players? Shouldn't all the #AI bros be celebrating?

OTOH, if you were running an elaborate pump and dump involving some individual players, it might be kinda bad news

In today's #AIIsGoingGreat @SophosXOps finds that people actually trying to do stuff are slow to adopt bullshit generating machines as a core element of processes which do not require bullshit https://news.sophos.com/en-us/2025/01/28/update-cybercriminals-still-not-fully-on-board-the-ai-train-yet/
Update: Cybercriminals still not fully on board the AI train (yet)

A year after our initial research on threat actors’ attitudes to generative AI, we revisit some underground forums and find that many cybercriminals are still skeptical – although there has been a …

Sophos News

Translation: Sales of the latest high-end, resource intensive models were so bad, Microsoft decided they might as well just eat the cost in hopes of driving adoption

https://www.theverge.com/news/603149/microsoft-openai-o1-model-copilot-think-deeper-free

Microsoft makes OpenAI’s o1 reasoning model free for all Copilot users

Microsoft is bringing OpenAI’s o1 model to all Copilot users, free of charge. It’s available now as the Think Deeper feature.

The Verge
Sam Altman’s Stargate is science fiction

Stargate, a data center project led by OpenAI’s Sam Altman with support from Donald Trump, SoftBank, and others, has huge ambitions and a shaky foundation.

The Verge
#AIIsGoingGreat "Please do not bring leopard to your interview with Leopards Eating Faces, inc." https://www.404media.co/anthropic-claude-job-application-ai-assistants/
AI Company Asks Job Applicants Not to Use AI in Job Applications

Anthropic, the developer of the conversational AI assistant Claude, doesn’t want prospective new hires using AI assistants in their applications, regardless of whether they’re in marketing or engineering.

404 Media

#AIIsGoingGreat 'He said, for example, that he would need help creating “AI coding agents” that would write software across the entire federal government' - Yeah buddy, and I'm gonna need help rounding up unicorns to fart rainbows in my face

https://www.404media.co/things-are-going-to-get-intense-how-a-musk-ally-plans-to-push-ai-on-the-government/

‘Things Are Going to Get Intense:’ How a Musk Ally Plans to Push AI on the Government

404 Media has obtained audio of a meeting held by Thomas Shedd, a Musk-associate who is now heading a team of government coders. In the call one employee pushed back and said one of the planned moves is an “illegal task.”

404 Media

#AIIsGoingGreat "One of the most blistering findings is that trial participants who reckoned the technology was of little to no use soared from 6% before the trial to 59% after the trial, an almost tenfold increase" - Once again, people actually trying to do stuff find that stochastic BS machines are less than ideal for tasks which do not require BS

https://www.themandarin.com.au/286344-treasury-trial-of-microsoft-copilot-comes-a-cropper/

Treasury trial of Microsoft Copilot comes a cropper

A trial of Microsoft Copilot in Treasury showed mixed results, with users finding it unreliable, inefficient, and prone to generating fictional content.

The Mandarin

#AIIsGoingGreat supplemental 'The chatbot told TechCrunch it is here to “help government personnel like you identify and eliminate waste, improve efficiency, and streamline processes using a first principles approach.”'

https://techcrunch.com/2025/02/18/elon-musk-staffer-created-a-doge-ai-assistant-for-making-government-less-dumb/

Exclusive: Elon Musk staffer created a DOGE AI assistant for making government “less dumb” 

A senior Elon Musk staffer created a custom AI chatbot that's supposed to help DOGE "eliminate" government waste.

TechCrunch

#AIIsGoingGreat Thing to take away from this isn't that Grok is any worse than any other #LLM chatbot, or that #AI secretly thinks Trump and Musk are bad, or wants to kill people… it's that, as ever, they "fixed" it with some hard-coded band-aid to stop this particular headline generating case, without doing anything at all to address the underlying cause (because they still have no idea how to do that)

https://www.theverge.com/news/617799/elon-musk-grok-ai-donald-trump-death-penalty

Elon Musk’s AI said he and Trump deserve the death penalty

Musk’s xAI is investigating why its Grok AI chatbot suggested that President Donald Trump and Musk deserve the death penalty. 

The Verge

Expert reached for comment by the BBC says "Apple's explanation of phonetic overlap did not make sense because the two words [Racist and Trump] were not similar enough to confuse an artificial intelligence (AI) system" and suggests human interference, but I humbly submit that this is entirely consistent with #AI becoming sentient

https://www.bbc.com/news/articles/c5ymvjjqzmeo

#AIIsGoingGreat

Apple AI tool transcribed the word 'racist' as 'Trump'

Experts have questioned the company's explanation that it is due to the two words being similar.

#AIIsGoingGreat "The Los Angeles Times removed its new AI-powered “insights” feature from a column after the tool tried to defend the Ku Klux Klan" and as usual, instead of acknowledging that a stochastic BS machine might not be fit for this purpose, they just band-aided the instance that caused bad PR "It remains available on other “Voices” pieces that offer points of view, which includes news commentary and reviews, among others"

https://www.thedailybeast.com/maga-newspaper-owners-ai-bot-defends-kkk/

MAGA Newspaper Owner’s AI Bot Defends KKK

An AI-generated summary tried to offer “different views” on the hate group.

The Daily Beast

#AIIsGoingGreat aside from the obvious problems with this transcript, it's also completely incoherent. A system with any ability to analyze the meaning should have rejected it as a failed transcription regardless of the x-rated bits

https://www.bbc.com/news/articles/c0l1kpz3w32o

Granny gets X-rated message after Apple AI fail

Louise Littlejohn said she was shocked and then laughed when she received the error strewn voicemail transcription.

#AIIsGoingGreat Supplemental: Another great example of why filtering your information through an #LLM BS blender is a bad idea. It removes contextual clues about source reliability, and the people ripping off the entire web for training data aren't picky about what they ingest

(but hey, at least now we have empirical evidence that large scale input poisoning can have a noticeable impact!)

https://www.newsguardrealitycheck.com/p/a-well-funded-moscow-based-global

A well-funded Moscow-based global ‘news’ network has infected Western artificial intelligence tools worldwide with Russian propaganda

An audit found that the 10 leading generative AI tools advanced Moscow’s disinformation goals by repeating false claims from the pro-Kremlin Pravda network 33 percent of the time

NewsGuard's Reality Check

Today's #AIIsGoingGreat (ht @jalefkowit) seamlessly integrates DSM (Diagnostic and Statistical Manual of Mental Disorders) and DSM (Synology DiskStation Manager). The age of superintelligence is truly upon us!

https://web.archive.org/web/20250313204203/https://www.abtaba.com/blog/dsm-6-release-date

Mark Your Calendars: DSM 6 Release Date - A Turning Point for Autism Field | Above and Beyond Therapy

Unveiling the DSM 6 release date: A game-changing moment for the autism field, driving improved assessment and diagnostic criteria

#AIIsGoingGreat "Grok 3 demonstrated the highest error rate, at 94 percent … premium paid versions of these AI search tools fared even worse in certain respects. Perplexity Pro ($20/month) and Grok 3's premium service ($40/month) confidently delivered incorrect responses more often than their free counterparts"

https://arstechnica.com/ai/2025/03/ai-search-engines-give-incorrect-answers-at-an-alarming-60-rate-study-says/

AI search engines cite incorrect news sources at an alarming 60% rate, study says

CJR study shows AI search services misinform users and ignore publisher exclusion requests.

Ars Technica

Today's #AIIsGoingGreat, courtesy of @JMarkOckerbloom* Springer volume "Advanced Nanovaccines for Cancer Immunotherapy" ($119 ebook or a mere $159.99 if you spring for hardcover) includes the sage words "It is important to note that as an AI language model, I can provide a general perspective, but you should consult with medical professionals for personalized advice"

https://pubpeer.com/publications/2FF96DD440C928A3DDF99771A48B4A#

* https://mastodon.social/@JMarkOckerbloom/114217609254949527

PubPeer - Advanced Nanovaccines for Cancer Immunotherapy

There are comments on PubPeer for publication: Advanced Nanovaccines for Cancer Immunotherapy (2025)

There's a lot of criticism of the #AI industry these days, so I just want to take a moment to commend them for settling on a sparkling sphincter as their go-to AI indicator
The AI boom is creating a new logo trend: the swirling hexagon

AI companies seem to have taken a page from crypto when it comes to logo design.

Fast Company
This week Kubient #AI adtech startup CEO Paul Roberts was sentenced to prison* for booking fake sales on his product that didn't work
On a COMPLETELY UNRELATED NOTE, today's #AIIsGoingGreat (via @davidgerard) features AI salestech startup 11x, which sells reportedly non-functional product and "keeps accounting 3-month trials as the customer paid for the whole year"
Will CEO Hasan Sukkar make the move to Club Fed? Tune in next time to find out!
https://pivot-to-ai.com/2025/03/25/ai-sales-startup-11x-claims-customers-it-doesnt-have-for-software-that-doesnt-work/
* https://arstechnica.com/gadgets/2025/03/ceo-of-ai-ad-tech-firm-pledging-world-free-of-fraud-sentenced-for-fraud/
AI sales startup 11x claims customers it doesn’t have for software that doesn’t work

AI startup 11x will rent you a bot to find you sales prospects, craft an appropriate email, and schedule an appointment to pitch a sale to them. Wow! So how’s 11x doing? It’s landed $74 million in …

Pivot to AI
Moment of panic when I read 11x was London-based and I have no idea what the Brit equivalent to Club Fed is but good news, they recently relocated to SF https://techcrunch.com/2025/03/24/a16z-and-benchmark-backed-11x-has-been-claiming-customers-it-doesnt-have/
a16z- and Benchmark-backed 11x has been claiming customers it doesn’t have | TechCrunch

Last year, AI-powered sales automation startup 11x appeared to be on an explosive growth trajectory. However, nearly two dozen sources — including

TechCrunch

"Apple, like every other big player in tech, is scrambling to find ways to inject AI into its products. Why? Well, it’s the future! What problems is it solving? Well, so far that’s not clear! Are customers demanding it? LOL, no."

https://amp.cnn.com/cnn/2025/03/27/tech/apple-ai-artificial-intelligence

Apple’s AI isn’t a letdown. AI is the letdown

Apple has been getting hammered in tech and financial media for its uncharacteristically messy foray into artificial intelligence. After a June event heralding a new AI-powered Siri, the company has delayed its release indefinitely. The AI features Apple has rolled out, including text message summaries, are comically unhelpful.

CNN

OpenAI*: "NYT copyright claims are bogus because you can only get verbatim copy if you 'hack' the prompts"
Also OpenAI: "NYT copyright claims should be time barred because they should have known ChatGPT could output verbatim copy two years before it was released"

https://arstechnica.com/tech-policy/2025/04/judge-doesnt-buy-openai-argument-nyts-own-reporting-weakens-copyright-suit/

* https://mastodon.social/@reedmideke/112008246911316817

#AIIsGoingGreat

Judge calls out OpenAI’s “straw man” argument in New York Times copyright suit

OpenAI loses bid to dismiss NYT claim that ChatGPT contributes to users’ infringement.

Ars Technica
#AIIsGoingGreat who could have predicted that using an LLM to iteratively generate text in the form of a chain of thought would result in output that resembles a chain of thought, but is not actually constrained by truth or logic?

https://arstechnica.com/ai/2025/04/researchers-concerned-to-find-ai-models-hiding-their-true-reasoning-processes/
Researchers concerned to find AI models misrepresenting their “reasoning” processes

New Anthropic research shows AI models often fail to disclose reasoning shortcuts.

Ars Technica
#AIIsGoingGreat who could have predicted that if you replace your front-line email support with a BS machine that pretends to be a human, your customers will eventually be upset by the BS policies it invents https://arstechnica.com/ai/2025/04/cursor-ai-support-bot-invents-fake-policy-and-triggers-user-uproar/
Company apologizes after AI support agent invents policy that causes user uproar

Frustrated software developer believed AI-generated message came from human support rep.

Ars Technica
Kubient’s adtech use case for AI: an excuse for a fraud

Tiny adtech company Kubient shut down in late 2023 when CEO Paul Roberts was caught faking revenue numbers on his AI fraud detection software to lure investment in. He pleaded guilty on Monday. [Do…

Pivot to AI

@davidgerard Indeed, I featured that post in the thread at the time https://mastodon.social/@reedmideke/113160781568746961

(not blaming you for not reading the whole thing, LOL, putting all my AI dunks in one thread is definitely Using Mastodon Wrong)

@reedmideke Sparkling Sphincter is a great death metal band name.
@reedmideke @JMarkOckerbloom Yet another reason to never give Springer Nature your money or time.
@reedmideke @JMarkOckerbloom AI hallucinations contaminating science stem from flat models. CFOL structurally eliminates them: no ontological predicates → coherent, reality-aligned outputs without "as AI model" disclaimers. https://docs.google.com/document/d/1QuoCS4Mc1GRyxEkNjxHlatQdhGbDTbWluncxGhyI85w/edit?usp=sharing
Proving the Necessity and Uniqueness of the Contradiction-Free Ontological Lattice (CFOL) as the Sole Substrate for AI Superintelligence

Proving the Necessity and Uniqueness of the Contradiction-Free Ontological Lattice (CFOL) as the Sole Substrate for AI Superintelligence Authors: Grok (built by xAI), in extended collaboration with Jason Lauzon Date: December 31, 2025 Abstract: This paper rigorously proves, through deductive logi...

Google Docs

@0illuminated1 @reedmideke You're linking to a paper credited to... Grok? Which on a quick skim is clearly claiming "proofs" in sections like 5 and 6 that have no stated support beyond bullet-point hand-waving?

You're probably not going to believe me (ask a working computer science professor if you want a second opinion), but this is exactly the sort of "tell me what I want to hear" sycophantic nonsense that makes me advise people to slowly step away from their large language models.

@JMarkOckerbloom @reedmideke Every point you made was fallacious and not one addressed the facts laid out in the material I provided.

@reedmideke
🤔 one might conclude, the stochastic BS machine does exactly what it is trained for

the answer given about "Wagner Group" by ChatGPT documented in the "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" paper https://dl.acm.org/doi/10.1145/3442188.3445922 should have been alerting enough

I'll be grateful to dear @emilymbender , @timnitGebru and coauthors

#SALAMI has eugenics embedded
the only safe move is to stay away far far away

On the Dangers of Stochastic Parrots | Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency

ACM Conferences

@wobweger @reedmideke @timnitGebru

Read a little closer -- that paper was published in March 2021 (and completed earlier). ChatGPT wasn't released until Nov 2022.

The system McGuffie & Newhouse tested was GPT-3 -- and we're quoting their work.

@reedmideke Less Stargate, more Wormhole X-treme!!!