just called ai "the mediocrity machine" in a meeting and a tech bro is twitching so hard he can't even plug this into the mediocrity machine so it can tell him how to respond

@ElleGray

IT at work wasn't happy when I replied to their "AI is now available!" announcement reminding them that AI suggested glue as a pizza topping.

Google still recommends glue for your pizza

After news stories were written about Google AI Overviews telling people to put glue on pizza, now AI Overviews cites those stories to tell people how much glue to put on pizza.

The Verge

@jmccyoung @ScottSoCal @ElleGray

Again, as funny as this is, the glue story isn't one of the dangerous failures. But these systems are clearly dangerous and faulty.

Gemini confidently reports slightly wrong payroll tax rates (see attached screenshots; it even claims they are current for this year, and provides a roughly correct, but merely top-level, URL).

Notice the little whopper in the pension part: 10.25% is the actual employee share, according to a more detailed PDF here: https://www.sozialversicherung.at/cdscontent/load?contentid=10008.784719&version=1703166731

@jmccyoung @ScottSoCal @ElleGray
Note:
Gemini answered a German question about Austrian payroll taxes with an elegant, convincing answer that seemingly contained all the relevant information (it left out half a dozen small items in the <1% range), but with wrong data: every percentage is wrong, though most are off by only a small amount.

And to lend the big lie more believability, it added a genuinely valid date for when the annual update of social insurance thresholds etc. usually happens.

@yacc143 @ScottSoCal @ElleGray I appreciated the more accurate term proposed here: https://undark.org/2023/04/06/chatgpt-isnt-hallucinating-its-bullshitting/ to replace "hallucination." The whole mechanism of LLMs is focused on plausibility; truth isn't a factor at all.
ChatGPT Isn’t ‘Hallucinating.’ It’s Bullshitting.

Opinion | Artificial Intelligence models will make mistakes. We need more accurate language to describe them.

Undark Magazine

@jmccyoung @ScottSoCal @ElleGray
As I like to point out, truth is a hard concept to nail down. Yes, the easy questions sound easy, but even these can have surprising twists.

Now you might want to discuss sharks and batteries with some MAGA acolytes. The insights you'll gain might surprise you.

But without some way for a computer to measure truth and use it to guide the training of the network involved, it's unfair of us to expect these models to be truthful.

@jmccyoung @ScottSoCal @ElleGray
And technically, it's not even “plausibility”: training is generally evaluated against a test data set, which is normally a held-out split of the training corpus, i.e. by how well the generated text matches the test part of the corpus. So what's really optimized is “similarity to the training data”, which at present is assumed to be mostly human-written text.

(But you might see how this can turn ugly once the Internet fills up with AI-generated bullshit.)
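The "similarity to the training data" point above can be sketched with a toy model. This is a minimal illustration, not how production LLMs are trained: the corpus, function names, and the smoothed bigram model are all invented for the example. The point it demonstrates is that the held-out loss only measures how well the model predicts text resembling its corpus; no notion of truth appears anywhere in the objective.

```python
import math
from collections import Counter

def bigram_model(tokens):
    # Count bigrams and unigram contexts from the training tokens.
    bigrams = Counter(zip(tokens, tokens[1:]))
    unigrams = Counter(tokens)
    def prob(prev, nxt, vocab_size, alpha=1.0):
        # Add-one (Laplace) smoothed estimate of P(next | prev).
        return (bigrams[(prev, nxt)] + alpha) / (unigrams[prev] + alpha * vocab_size)
    return prob

def held_out_nll(prob, tokens, vocab_size):
    # Average negative log-likelihood of held-out text: a low value means
    # "looks like the training corpus", nothing more.
    nlls = [-math.log(prob(p, n, vocab_size)) for p, n in zip(tokens, tokens[1:])]
    return sum(nlls) / len(nlls)

# Toy corpus standing in for "mostly human-written text".
corpus = ("the cat sat on the mat the dog sat on the rug "
          "the cat saw the dog the dog saw the cat").split()
split = int(len(corpus) * 0.8)          # the usual train/test split
train, test = corpus[:split], corpus[split:]
vocab_size = len(set(corpus))

prob = bigram_model(train)
print(f"held-out NLL: {held_out_nll(prob, test, vocab_size):.3f}")
```

A model that fluently continued "the dog saw the" with a falsehood would score exactly as well as one continuing it with a truth, as long as both look like the corpus; that is the asymmetry the whole thread is about.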