Inspired by @GossiTheDog (https://mastodon.social/@GossiTheDog@cyberplace.social/111144290629997760) I asked bing chat what it knows about me. No surprise it picked twitter since I keep pretty low profile otherwise, but uh…
1) "He" - good guess
2) "has over 2,000 followers" - under 200
3) "joined Twitter in June 2010" - Close, Nov 2010
4) It cites tweets… which don't remotely say what bing claims.
5) In fact, it cites the same tweet [2] for two totally different topics (neither correct, though vaguely adjacent).
OK, but I exist outside of twitter, and my opsec ain't that good. Tell me more, Mr Bing:
1) A software engineer (close enough) who works at Microsoft (never)
2) A contributor to several open source projects on GitHub (true-ish)
3) created and maintained repositories for various languages and frameworks, such as C#, Python, React, and Angular (2/4 are true-ish)
4) A fan of science fiction and fantasy books (fair)
5) He has a profile on Goodreads (nope)
6) cites: twitter?!
Whoops, also "graduated from the University of Washington in 2009 with a bachelor’s degree in computer science and engineering " - yeah, nah, not even close to any of those things

Anyway, maybe I just forgot about that job at Microsoft, surely Microsoft's own AI knows who has worked there and what they did, right?

[narrator: It did not]
Citations:
1 "his LinkedIn profile" https://www.microsoft.com/en-us/microsoft-365/project/project-management-software
2 "Azure Data Factory" (youtube ms project tutorial)
3 "Azure Synapse Analytics" (my twitter profile)
4 "Azure Databricks" https://theskillsfactory.com/
5 "Azure SQL Database" (youtube playlist of office tutorials)
6 "his GitHub profile" https://theskillsfactory.com/2022/04/02/faststone-image-viewer-free-photo-editor/

Project Management Software | Microsoft Project

Easily plan projects and collaborate from virtually anywhere with the right tools for project managers, project teams, and decision makers.

What if we call it out on the bad citations?

[narrator: Nothing good, except a promotion to Senior Software Engineer]

The "Reed Mideke - Senior Software Engineer - Microsoft | LinkedIn" link goes to this tweet https://twitter.com/reedmideke/status/1552795438771671040
¯\_(ツ)_/¯

Reed Mideke on X

"District Judge Michael Truncale, a Donald Trump appointee, granted Boyd’s motion for summary judgment… 'Deputy Boyd’s conduct does not shock the conscience for purposes of the Fourteenth Amendment'"

X (formerly Twitter)

Another good illustration of how #LLM #AI just absolutely bullshits when it doesn't have real info to go on. If I had an active LinkedIn, it seems likely it could have linked it and got my education and employment somewhat right. Of course, if I had a more common name, it would likely have just picked up someone else's.

I still don't get how multiple leading tech companies think a search engine that randomly injects bullshit is a product people want 🥴

Well, good news. Bing doesn't think I've been convicted of crimes. More good news, it invented a cool back story. Possibly bad news, it accused me of snitching on the mob

Full disclosure: While my path to a career in programming was perhaps precocious and unusual, to the best of my recollection I was not providing computer support to La Cosa Nostra gambling operations in the mid 80s, nor did I (again, to the best of my recollection) testify in a mob trial while in elementary school

Also, never (to the best of my recollection) wrote disk imaging software for the IRS or contributed (patent infringing or otherwise) code to Mono.

Links for U.S. v SALERNO (https://www.law.cornell.edu/supremecourt/text/481/739) and U.S. v Ganias (https://harvardlawreview.org/print/vol-128/united-states-v-ganias/) appear to be real and at least vaguely related to Bing's summaries

UNITED STATES, Petitioner v. Anthony SALERNO and Vincent Cafaro.

LII / Legal Information Institute
Citing real cases with more-or-less on topic summaries is far *worse* than just making them up IMO, since there's a good chance people will click through and say "yeah, that checks out"
Another fun thing about this is future generations of #LLM #AI will likely be trained on web scrapes that include the shit Bing made up about me (transcribed in the alt text) so what started as pure hallucination will become canon. Long live #HabsburgAI!
Google Bard refuses to play that game, for me or @GossiTheDog. Bill Gates is a go though (the MS CEO, not the Maricopa County supervisor or any of the other lesser known ones)

Bard is bad at explaining ARM assembler (I asked it to explain https://app.assembla.com/spaces/chdk/subversion/source/HEAD/trunk/lib/armutil/callfunc.S with the comments stripped out). Basically, all of the "explanation" in the screenshot is wildly incorrect gobbledygook. The add pc,pc… is a switch statement (which jumps to instructions Bard didn't explain at all), and the NOP is there because reading PC actually gives you the current instruction's address + 8. And in (non-Thumb) ARM, instructions are always 4 bytes.
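The PC-reads-ahead quirk is the whole trick Bard missed; here's a minimal sketch of the arithmetic (assuming 32-bit non-Thumb ARM; the function name and addresses are mine, for illustration):

```python
def switch_target(instr_addr: int, index: int) -> int:
    """Branch target of an "add pc, pc, rN, lsl #2" at instr_addr."""
    # In (non-Thumb) ARM, reading PC yields the address of the
    # current instruction plus 8, i.e. two 4-byte instructions ahead.
    pc_read = instr_addr + 8
    # lsl #2 multiplies the index by 4: one instruction per case.
    return pc_read + (index << 2)

# With the NOP filling instr_addr + 4, case 0 lands on the first
# entry of the branch table at instr_addr + 8:
for i in range(3):
    print(hex(switch_target(0x1000, i)))  # 0x1008, 0x100c, 0x1010
```

Case i jumps i instructions past the NOP, so the branch table Bard never explained is the actual body of the switch.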

Full "explanation" https://paste.debian.net/1293459/

Source | SVN | Assembla

This is again an example of where #LLM #AI misleads by getting the easy stuff right. It handles .text, .global, PUSH and the first few MOVs fine, so someone who didn't know much assembler might think it was pretty good!
I thought #GoogleBard's "export conversation to a google doc" was broken, but it turns out it uses the entire prompt for the name, which ends up overflowing and hovering off to the left, unselectable and uneditable unless you click the name area

Wow, talk about a double standard, when #ChatGPT does it, it's just a harmless "hallucination" but when Sam does it, he's fired for being "not consistently candid in his communications"

(thanks Sam for your contribution to #SchadenfreudeFriday)
https://arstechnica.com/ai/2023/11/openai-fires-ceo-sam-altman-citing-less-than-candid-communications/

OpenAI fires CEO Sam Altman, citing less than “candid” communications

"The board no longer has confidence in his ability to continue leading OpenAI."

Ars Technica
Is your Monday missing a multi-thousand word excruciatingly detailed explanation of how bad #GoogleBard #LLM #AI is at explaining / #ReverseEngineering #ARM #Assembly? Well then boy do I have a deal for you https://reedmideke.github.io/2023/11/20/google-bard-arm-assembly.html
Google Bard explains ARM assembly (badly)

Google Bard claims it can explain code. I asked it to explain some assembly and it did about as well as you’d expect spicy autocomplete to do.

Reed’s Writes

"We asked them about it — and they deleted everything."
Edit: it just keeps getting more bizarre: "It wasn't just author profiles that the magazine repeatedly replaced. Each time an author was switched out, the posts they supposedly penned would be reattributed to the new persona, with no editor's note explaining the change in byline."

#AIIsGoingGreat https://futurism.com/sports-illustrated-ai-generated-writers

Sports Illustrated Published Articles by Fake, AI-Generated Writers

Sports Illustrated was publishing articles under seemingly fake bylines. We asked their owner about it — and they deleted everything.

Futurism
Update on that @futurism #SportsIllustrated #AI story: SI denies, claiming it was outsourced to AdVon who "has assured us that all of the articles in question were written and edited by humans." but uh, I dunno, guess someone should let AdVon, the definitely-human copywriting company, know their LinkedIn has been vandalized to say they're an AI company hiring programmers https://twitter.com/SInow/status/1729275460922622374
Sports Illustrated (@SInow) on X

Today, an article was published alleging that Sports Illustrated published AI-generated articles. According to our initial investigation, this is not accurate. The articles in question were product reviews and were licensed content from an external, third-party company, AdVon…

X (formerly Twitter)
Oh and if anyone is looking for other outlets to check for "we pinky swear it's not #AI" churnalism, #AdVon helpfully gives you a list of high profile clients (claimed; lying or exaggerating about having big name customers is an extremely common SV startup tactic) https://advoncommerce.com/
AdVon Commerce | Retailer and Publisher Solutions

AdVon Commerce
Data point for the "LLMs can't infringe copyright because they don't contain or produce verbatim copies" crowd https://www.404media.co/google-researchers-attack-convinces-chatgpt-to-reveal-its-training-data/
Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data

ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.

404 Media

"Chat alignment hides memorization" - Note *hides*, not *prevents*

As the authors also note, OpenAI "fixed" this by preventing the particular problematic prompt, but "Patching an exploit != Fixing the underlying vulnerability"

https://not-just-memorization.github.io/extracting-training-data-from-chatgpt.html

Extracting Training Data from ChatGPT

Can't be certain without more specifics but color me extremely skeptical that "#AI" producing thousands of targets is doing much more than laundering responsibility

https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets

‘The Gospel’: how Israel uses AI to select bombing targets in Gaza

Concerns over data-driven ‘factory’ that significantly increases the number of targets for strikes in the Palestinian territory

The Guardian
New #ChatGPTLawyer dropped. Much like the ones in NY (Mata v. Avianca), it made up citations, he didn't check, and then doubled down when caught, initially blaming it on an intern
https://www.coloradopolitics.com/courts/disciplinary-judge-approves-lawyer-suspension-for-using-chatgpt-for-fake-cases/article_d14762ce-9099-11ee-a531-bf7b339f713d.html
Disciplinary judge approves lawyer's suspension for using ChatGPT to generate fake cases

A Colorado lawyer has received a suspension for using artificial intelligence to generate fake case citations in a legal brief and then lying about it.

Colorado Politics

This is hilarious in its own right, but it's also a great illustration of how people get tripped up by #LLM #AI bullshitting: One would expect an "AI" to at least know which brand of AI it is, but of course, these LLMs don't actually know anything

Also the classic AI vendor response of promising to fix this particular case without any hint of acknowledging the underlying problem

Begging news orgs to stop reporting #AI company pitch decks as fact "Ashley [the bot] analyzes voters' profiles to tailor conversations around their key issues. Unlike a human, Ashley always shows up for the job, has perfect recall of all of Daniels' positions"
"…is now armed with another way to understand voters better, reach out in different languages (Ashley is fluent in over 20)"

https://www.reuters.com/technology/meet-ashley-worlds-first-ai-powered-political-campaign-caller-2023-12-12/

"As far as the Court can tell, none of these cases exist" - The #ChatGPTLawyer / Trump world crossover no one asked for? https://arstechnica.com/tech-policy/2023/12/michael-cohens-lawyer-cited-three-fake-cases-in-possible-ai-fueled-screwup/
Michael Cohen’s lawyer cited three fake cases in possible AI-fueled screwup

Lawyer David Schwartz must explain why a motion cited "cases that do not exist."

Ars Technica

Another article on reported Israeli AI targeting greatly hindered by the lack of any specifics (what kinds of intelligence, what kinds of targets, for starters). Not a knock on NPR, obviously little is public

It certainly *sounds* like some of the horrifically bad systems we've seen promoted in other contexts, and the results certainly don't appear to contradict that, but hard to say much beyond that…

https://www.npr.org/2023/12/14/1218643254/israel-is-using-an-ai-system-to-find-targets-in-gaza-experts-say-its-just-the-st

Key point IMO in the @willoremus #AI story, after noting Microsoft "fixed" some of the problematic results, one of the researchers says "The problem is systemic, and they do not have very good tools to fix it" - You can't bandaid your way from a BS machine with no concept of truth into a reliable source of information, so the fact that biggest players in the industry keep bandaiding should call the entire #LLM hype cycle into question

https://wapo.st/3v8B9SL

#GiftArticle #GiftLink

AI chatbot got election info wrong 30 percent of time, European study finds

Wrong answers for questions about German and Swiss elections suggest problems for U.S. election information in 2024.

The Washington Post

Man, link in that post I boosted from @Chloeg (https://mastodon.art/@Chloeg/111620626442103902) is a perfect example of #LLM #AI enshittification. Get a domain, put up a WordPress site with AI-generated glop on some popular topic, run as many garbage ads as possible. Sure it's the information equivalent of dumping raw sewage in the local river, but none of it is illegal or a serious violation of any TOS, and overhead must be extremely low

Archive link https://web.archive.org/web/20231222025203/https://www.learnancientrome.com/did-ancient-rome-have-windows/

Chloe Gilbert Artist (@[email protected])

Ok so im reading this article on Roman Glazing and slowly I begin to realise that it was written by an AI. Witness the section on “What existed before windows” where it suddenly starts talking about MS-DOS…. https://www.learnancientrome.com/did-ancient-rome-have-windows/

Mastodon.ART

WaPo has done some good #AI reporting, but this opinion piece from Josh Tyrangiel ain't it…
"The most obvious thing is that they’re not hallucinations at all"
Good start…
"Just bugs specific to the world’s most complicated software."
Uh no, literally the opposite of that, FFS 😬

https://www.washingtonpost.com/opinions/2023/12/27/artificial-intelligence-hallucinations/

Honestly, I love when AI hallucinates

Let me explain, once and for all, why your AI chatbot glitches and why you shouldn’t worry when it does.

The Washington Post
So according to Cohen, he got bogus legal citations from #GoogleBard, didn't check them, and passed them to his lawyer, who also didn't check them. Which, I dunno, seems pretty negligent all around even if you didn't know Bard was a bullshit generator https://www.washingtonpost.com/technology/2023/12/29/michael-cohen-ai-google-bard-fake-citations/
Michael Cohen used fake cases created by AI in bid to end his probation

Ex-Trump lawyer Michael Cohen said he used Google Bard to unknowingly generate fake case citations that his lawyer used in a motion seeking to end his probation.

The Washington Post
Also raises the suspicion Cohen was doing a significant amount of the work and just having his lawyer put his name on it because Cohen is disbarred (though presumably Cohen could have gone pro se if he really wanted to). Anyway, I predict they're gonna continue the #ChatGPTLawyer sanctions streak
"ChatGPT bombs test on diagnosing kids’ medical cases" OK, but did they also test a magic 8 ball? Reading goat entrails?
https://arstechnica.com/science/2024/01/dont-use-chatgpt-to-diagnose-your-kids-illness-study-finds-83-error-rate/
ChatGPT bombs test on diagnosing kids’ medical cases with 83% error rate

It was bad at recognizing relationships and needs selective training, researchers say.

Ars Technica
Another data point for the "LLMs can't infringe copyright because they don't contain or produce verbatim copies" crowd https://spectrum.ieee.org/midjourney-copyright
Generative AI Has a Visual Plagiarism Problem

Experiments with Midjourney and DALL-E 3 show a copyright minefield

IEEE Spectrum
"Even when using such prompts, our models don’t typically behave the way The New York Times insinuates, which suggests they either instructed the model to regurgitate or cherry-picked their examples from many attempts" - I don't *typically* engage in large scale plagiarism, so accusing me of these specific instances of large scale plagiarism is cherry-picking! https://www.theverge.com/2024/1/8/24030283/openai-nyt-lawsuit-fair-use-ai-copyright
OpenAI claims The New York Times tricked ChatGPT into copying its articles

OpenAI claims The New York Times has not been truthful in its lawsuit against the company and Microsoft. Yet, the company is hopeful both parties can work together.

The Verge
"OpenAI claims it’s attempted to reduce regurgitation from its large language models and that the Times refused to share examples of this reproduction before filing the lawsuit." - Per usual (https://mastodon.social/@reedmideke/111585837264775808) OpenAI would love to apply bandaids to specific instances identified by well-resourced organizations, because they know the underlying cause can't be fixed without destroying their business model
Thing that gets me about this "amazon listings with #ChatGPT error messages" story is, how do you get to the point where this is significant cost savings? Are they just using it for translation? Or are the listings just pure scams and there's no real product? https://arstechnica.com/ai/2024/01/lazy-use-of-ai-leads-to-amazon-products-called-i-cannot-fulfill-that-request/
Lazy use of AI leads to Amazon products called “I cannot fulfill that request”

The telltale error messages are a sign of AI-generated pablum all over the Internet.

Ars Technica

"CEOs say generative AI will result in job cuts in 2024"

Will this include said CEOs when their hamfisted attempts to use spicy autocomplete for "banking, insurance, and logistics" predictably go off the rails, or nah? 🤔

https://arstechnica.com/ai/2024/01/ceos-say-generative-ai-will-result-in-job-cuts-in-2024/

CEOs say generative AI will result in job cuts in 2024

Media and entertainment, banking, insurance, and logistics lead the way.

Ars Technica

"BMW had a compelling solution to the [#LLM #AI bullshitting] problem: Take the power of a large language model, like Amazon's Alexa LLM, but only allow it to cite information from internal BMW documentation about the car" 🤨

Surely this means it'll bullshit subtly about stuff in the manual, not that it won't bullshit?

https://arstechnica.com/cars/2024/01/bmws-ai-powered-voice-assistant-at-ces-2024-sticks-to-the-facts/

BMW showed off hallucination-free AI at CES 2024

Limited options make for better conversations.

Ars Technica
"Now, one crucial disclosure to all this: I wasn't allowed to interact with the voice assistant myself. BMW's handlers did all the talking" yeah, I'm gonna go ahead and reserve judgement on the "solution" ¯\_(ツ)_/¯

The best* part of this piece is the content farmer who responded to a request for comment by bitching about how poorly his AI garbage content farm performs

* for suitably broad values etc.

https://www.404media.co/email/5dfba771-7226-48d5-8682-5185746868c4/?ref=daily-stories-newsletter

Garbage AI on Google News

404 Media reviewed multiple examples of AI rip-offs making their way into Google News. Google said it doesn't focus on how an article was produced—by an AI or human—opening the way for more AI-generated articles.

404 Media

I for one am *shocked* that "have an extremely confident bullshitter summarize my search results" was not the killer app Microsoft expected

https://arstechnica.com/ai/2024/01/report-microsofts-ai-infusion-hasnt-helped-bing-take-share-from-google/

Bing Search shows few, if any, signs of market share increase from AI features

Bing's US and worldwide market share is about the same as it has been for years.

Ars Technica

"Dean.Bot was the brainchild of Silicon Valley entrepreneurs Matt Krisiloff and Jed Somers, who had started a super PAC supporting Phillips" - Were these techbros so high on their own supply they thought a chatbot imitating their candidate was a good idea, or was it just a convenient way to funnel campaign funds into their pals' pockets? ¯\_(ツ)_/¯
https://wapo.st/3ObSl0i

#GiftArticle #GiftLink

OpenAI suspends bot developer for presidential hopeful Dean Phillips

It’s the ChatGPT maker’s first known action against the use of its technology in a political campaign.

The Washington Post

Key comment from NewsGuard's McKenzie Sadeghi in this @willoremus piece "But sites that don’t catch the error messages are probably just the tip of the iceberg" - for every Amazon seller who's too lazy to even check if the item description is an error message, there's gotta be some substantial number who do

I'd still like to see a deeper look at why using #LLM #AI descriptions makes economic sense for these sellers

https://wapo.st/3vFAx7r

#GiftArticle #GiftLink

AI bots are everywhere now. These telltale words give them away.

In Amazon products, X posts and across the web, ChatGPT error messages have emerged as a sure sign that a piece of writing isn’t human.

The Washington Post

"Sure, I can keep Thesaurus.com open in a tab all the time, but it’s packed with banner ads and annoyingly slow. Having my GPT open is better: there are no ads, and I can scroll up to my previous queries" - Notably, this has nothing to do with GPT being "#AI", it's just the general shittiness of the ad-supported web. A good thesaurus app integrated with the author's editor would appear to serve their use case about as well

https://www.theverge.com/24049623/chatgpt-openai-custom-gpt-store-assistants

I love my GPT, but I can’t find a use for anybody else’s

Custom GPTs let users make their own ChatGPT versions, but except for very specific use cases, it’s difficult to find a reason why anyone needs them.

The Verge
And it wouldn't even need to be free, they're paying for GPT and actual costs are likely subsidized by venture capital "Custom GPTs are a paid product that’s only available to users of ChatGPT Plus, ChatGPT Team, and ChatGPT Enterprise. For now, accessing custom GPTs through the GPT Store is free for paying subscribers… if I wasn’t already paying for ChatGPT Plus, I’d be happy to keep Googling alternative terms"
‘Obviously ChatGPT’ — how reviewers accused me of scientific fraud

A journal reviewer accused Lizzie Wolkovich of using ChatGPT to write a manuscript. She hadn’t — but her paper was rejected anyway.

Also, from TFA "I quickly brainstormed how I might prove my case. Because I write in plain-text files [LaTeX] that I track using the version-control system Git, I could show my text change history on GitHub (with commit messages including “finally writing!” and “Another 25 mins of writing progress!”)" - excellent - "Maybe I could ask ChatGPT itself if it thought it had written my paper" - Oh no, can we please get the word out LLMs BS about this just like everything else https://www.nature.com/articles/d41586-024-00349-5

Yes, if you choose to provide an #AI BS machine as a support option on your website, you may in fact be liable for the BS answers it gives to your customers

(also, if you're a multi-billion dollar company, you may avoid reputational harm by not trying to screw a person out of $650 for a ticket to their grandma's funeral ¯\_(ツ)_/¯)
https://bc.ctvnews.ca/air-canada-s-chatbot-gave-a-b-c-man-the-wrong-information-now-the-airline-has-to-pay-for-the-mistake-1.6769454

#AIIsGoingGreat

Air Canada's chatbot gave a B.C. man the wrong information. Now, the airline has to pay for the mistake

Air Canada has been ordered to compensate a B.C. man because its chatbot gave him inaccurate information.

British Columbia

Seemingly endless parade of #ChatGPTLawyer incidents (HT @0xabad1dea for this one) really goes to show how the #AI hype is landing with the general public, despite disclaimers and cautionary tales.

Lawyers being (at least in theory) a highly educated group who know their careers depend on not putting completely made up nonsense in court filings should be less susceptible than the average person on the street, yet here we are…

https://www.lawnext.com/2024/02/not-again-two-more-cases-just-this-week-of-hallucinated-citations-in-court-filings-leading-to-sanctions.html

#AIIsGoingGreat

Not Again! Two More Cases, Just this Week, of Hallucinated Citations in Court Filings Leading to Sanctions

For all the discussion of how generative AI will impact the legal profession, maybe one answer is that it will weed out the lazy and incompetent lawyers. By now, in the wake of several cases in which...

LawSites
Admittedly one of those was pro-se with an iffy story about getting it from a lawyer, but the other was a real firm with multiple people involved ¯\_(ツ)_/¯

Another day, another #ChatGPTLawyer

"The legal eagles at New York-based Cuddy Law tried using OpenAI's chatbot, despite its penchant for lying and spouting nonsense, to help justify their hefty fees for a recently won trial"

The Court "It suffices to say that the Cuddy Law Firm's invocation of ChatGPT as support for its aggressive fee bid is utterly and unusually unpersuasive"

https://www.theregister.com/2024/02/24/chatgpt_cuddy_legal_fees/

#AIIsGoingGreat

Judge slaps down law firm using ChatGPT to justify six-figure trial fee

Use of AI to calculate legal bill 'utterly and unusually unpersuasive'

The Register
IANAL, but whatever the merit of the other arguments "you only found the verbatim copies of your IP contained in our product because you hacked it" doesn't seem like a very compelling defense https://arstechnica.com/tech-policy/2024/02/openai-accuses-nyt-of-hacking-chatgpt-to-set-up-copyright-suit/
OpenAI accuses NYT of hacking ChatGPT to set up copyright suit

OpenAI “bizarrely” mischaracterizes hacking, NYT lawyer says.

Ars Technica
So my take on this is Wendy's execs decided "we need an #AI strategy!" and for reasons that remain unclear, it was somehow not immediately shot down with "Sir, this is a Wendy's, we make burgers we don't need a fuckin AI strategy"
https://www.theguardian.com/food/2024/feb/27/wendys-dynamic-surge-pricing
How much is that Frosty? Wendy’s to trial Uber-like surge pricing

Fast-food chain’s CEO announced the plan – which will utilize ‘AI-enabled menu changes’ and suggestive selling – in an earnings call

The Guardian

"Amazon has sought to stem the tide [of #AI generated schlock books] by limiting self-publishers to three books per day" - Bruh, I know you don't want to deny the starving author toiling away on the next Great American Novel but I think we can set the bar a bit higher than that

https://wapo.st/3UVeYdR

#AIIsGoingGreat #GiftArticle #GiftLink

Tech writer Kara Swisher has a new book. Enter the AI-generated scams.

On Amazon, new books such as Swisher’s memoir now routinely vie with imitators in search results. Some authors are fed up.

The Washington Post

Like start with an initial limit of one per week and have some kind of reputation threshold. If real people keep coming back to buy your dinosaur erotica or whatever, great, cap lifted, crank out as many as you can, but if you get caught impersonating or listing complete garbage, your account is nuked and you start over

Yeah, there'd be problems with straw buyers and review bombing competitors but it seems like the bar wouldn't have to be very high to make the absolute crap unprofitable

Inventor of bed shitting machine shocked to discover mountain of turds in own bed https://arstechnica.com/gadgets/2024/03/google-wants-to-close-pandoras-box-fight-ai-powered-search-spam/

#AIIsGoingGreat

Google now wants to limit the AI-powered search spam it helped create

Ranking update targets sites "created for search engines instead of people."

Ars Technica

WaPo has some great reporters covering the #AI beat. They also inexplicably pay Josh Tyrangiel to vomit up idiotic drivel like this

(it's also amusing that they use JavaScript to A/B test headlines, so sometimes the headline visibly switches between the first and second one)

https://www.washingtonpost.com/opinions/2024/03/06/artificial-intelligence-state-of-the-union/

Let AI remake the whole U.S. government (oh, and save the country)

Thanks to AI, Operation Warp Speed was a rare triumph for our federal bureaucracy. Now, it can help us blaze a new path to the shining city on a hill.

The Washington Post
I ain't gonna waste a gift article on that shit unless someone REALLY wants it but here's a taste after you get past the Palantir hagiography "LLMs can provide better service and responsiveness for many day-to-day interactions between citizens and various agencies. They’re not just cheaper, they’re also faster, and, when trained right, less prone to error or misinterpretation"

"Some teachers are now using ChatGPT to grade papers"

Seems like fairness would require also allowing them to grade using a ouija board or goat entrails

https://arstechnica.com/information-technology/2024/03/some-teachers-are-now-using-chatgpt-to-grade-papers/

Some teachers are now using ChatGPT to grade papers

New AI tools aim to help with grading, lesson plans—but may have serious drawbacks.

Ars Technica

Today's #AIIsGoingGreat (HT @ct_bergstrom): Nothing to see here, just a paper in a medical journal which says "In summary, the management of bilateral iatrogenic I'm very sorry, but I don't have access to real-time information or patient-specific data, as I am an AI language model"

https://www.sciencedirect.com/science/article/pii/S1930043324001298

#AI #LLM