Jeff Watson

@jeffwatson
6 Followers
64 Following
423 Posts
I like how any Mr. Beast contest can be described as "What if a bad person was forced to give away large amounts of money but found a way to still be a total dick while doing so?"
Tesla dealerships don't want you to know this one weird trick of using saltwater and lemon juice to preserve and maintain the finish on their cybertrucks.
Woz: ‘I Am the Happiest Person Ever’
https://daringfireball.net/linked/2025/08/15/woz-on-slashdot
Link to: https://yro.slashdot.org/comments.pl?sid=23765914&cid=65583466

Daring Fireball
We have all worshipped at the feet of the wrong Steve…
https://mastodon.social/@daringfireball/115033598998035149
macOS UI design had a good run, but macOS 26 will be remembered and studied in books as a case study in how not to design a user interface.

It is impossible to conceive that anyone at Apple could think this was an improvement (either aesthetically or functionally) and so the only conclusion is that they simply don't care.

Whatever requirements drove this icon change; whatever process led to its approval; it is clear that at no point did the question "is this good?" factor in.

Once respected as design leaders, Apple has now transcended the very notion of design quality as a concept.

From: @BasicAppleGuy
https://mastodon.social/@BasicAppleGuy/115016185421357323

Choose One and Boost Please

#polls #nerlingersjokes

Dilly-Dally
52.2%
Lollygag
47.8%
Every time I look at anything Liquid Glass, this is all I’ll ever see
It’s remarkable, especially this summer, to see just how much Apple got right in the first decade of the Mac. The NeXT acquisition may have saved the company, sure, but it was the durable and thoughtful design of Mac OS that persuaded users to tolerate a decaying foundation until something better came along.

@b_rain This also ties into how the way we design things influences how people perceive them.

Before ChatGPT, there was "OpenAI Playground," a paragraph-sized box where you would type words, and the GPT-2 model would respond or *continue* the prompt, highlighted in green.

Then ChatGPT came along, but it was formatted as a chat. Less an authoritative source, more a conversational tool.

Now the ChatGPT home page is formatted like a search engine. A tagline, search bar, and suggested prompts.

@boltx @b_rain you probably know this, but I only learned a few weeks ago that under the hood, the interface has to write "user:" before your prompt and "agent:" after it before handing it to the LLM; otherwise, the model would just continue writing your prompt.

@jollysea Technically the models can vary a bit in how they handle that (e.g. they could be using an XML format with <user> and <llm> for example) but yeah, that's the structure essentially all conversational LLMs have to follow.

In the end, LLMs are just word prediction machines. They predict the most likely next word based on the prior context, and that's it. If nothing delineated the original prompt from the LLM's response, the model would naturally just continue the prompt.
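A minimal sketch of the wrapping described above. The exact marker strings differ per model family ("user:"/"assistant:", XML-ish tags, or dedicated special tokens); the ones here are purely illustrative:

```python
def to_chat_prompt(user_text: str) -> str:
    """Wrap raw user text in role markers so that the most likely
    continuation is a *reply*, not an extension of the text.
    Marker names are illustrative, not any specific model's format."""
    return f"user: {user_text}\nassistant:"

# Without the wrapper, a completion model treats the input as a
# document to extend; with it, the natural continuation is an answer.
print(to_chat_prompt("Why is the sky blue?"))
# user: Why is the sky blue?
# assistant:
```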

@jollysea That was actually one of the most fun parts about the original interface. If you wanted it to continue some code, just paste in your code and it'll add on to it. Have a random idea for a poem? Write the first line, and it'll write a poem that continues from that starting line in a more cohesive manner.

Now any time you ask an LLM to do something, it won't just do the thing you wanted; it'll throw in a few paragraphs of extra text, pleasantries, and reiteration you didn't ask for.

@boltx @jollysea @LordCaramac but also it was hard to project another mind into that interface so they had to change it for marketing reasons 🤷
@lechimp @boltx @jollysea GPT2 was a lot of fun, but for some reason, I find GPT3 and later versions rather boring.

@LordCaramac I'd assume that has something to do with how GPT2 was a lot more loosely fine-tuned than GPT3 and subsequent models.

GPT2 was more of an attempt at simply mimicking text, rather than mimicking text *in an explicitly conversational, upbeat, helpful tone designed to produce mass-market acceptable language*

Like how GPT2 would usually just do the thing you asked for, whereas GPT3 and others now all start with "Certainly! Here's a..." or something similar.

@boltx GPT2 was often quite unhinged and produced text that was quite surreal and like a weird hallucination or the ramblings of a madman. I liked it.

@jollysea @boltx @b_rain yeah. OAI and friends are desperately trying to pile as much abstraction and as many features on top of LLMs as they can, to hide the fact that they're just the predictive text on your phone, but overgrown.

It's just that the training data always had a specific 'token' to delineate user input from expected output, so the LLM behaves like a chatbot.
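"Predictive text on your phone but overgrown" can be sketched with a toy bigram model: it just picks the most frequent word it has seen follow the current word. Real LLMs use neural networks over subword tokens and far more context, but the interface is the same — context in, most likely next token out:

```python
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Count, for each word, which words follow it in the corpus."""
    words = corpus.split()
    nxt = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        nxt[a][b] += 1
    return nxt

def continue_text(model: dict, start: str, n: int = 5) -> str:
    """Greedily extend `start` with the most likely next word, n times."""
    out = start.split()
    for _ in range(n):
        counts = model.get(out[-1])
        if not counts:  # word never seen mid-corpus; nothing to predict
            break
        out.append(counts.most_common(1)[0][0])
    return " ".join(out)

model = train("the cat sat on the mat and the cat ran off")
print(continue_text(model, "the", 4))
# the cat sat on the
```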

Teach kids (and adults) to check sources. Where does ChatGPT get this info? Learning to check sources is a useful skill in many situations. Note that Wikipedia lists its sources. ChatGPT makes them up.

Also teach them that ChatGPT is the ultimate bullshitter. It's designed to always produce an answer, regardless of whether it's true or false. It has no concept of truth. It just makes stuff up based on the content it's trained on, which means it's sometimes correct, but mostly by accident. It can also be very wrong.

No matter what you use it for, always, always double check the output of these LLMs. Because it's just as likely to be bullshit as true.

@mcv The biggest problem, in my view: most of the time, the answers at least work. If you ask for something that is reasonably well covered by the training data, you'll get valid answers.
But how do you know when it's wrong? You don't.
Checking sources is a good sentiment, but given that these things churn out vast amounts of text, you can't realistically check it all; doing so defeats the purpose of these AIs.
So then, don't use ChatGPT!
But there's no escape; you'll just get another AI, from another vendor.

We’re toast.