"each individual kid is now hooked into a Nonsense Machine"
Edit: I got those screenshots from imgur. It might be from Xitter with the account deleted, or maybe Threads, where accounts aren't visible without login? 🤷
2nd Edit: @edgeofeurope found this https://threadreaderapp.com/thread/1809325125159825649.html
#school #AI #KI #meme #misinformation #desinformation

@b_rain This also ties into how the way we design things influences how people perceive them.

Before ChatGPT, there was "OpenAI Playground," a paragraph-sized box where you would type words, and the GPT-2 model would respond or *continue* the prompt, highlighted in green.

Then ChatGPT came along, but it was formatted as a chat. Less an authoritative source, more a conversational tool.

Now the ChatGPT home page is formatted like a search engine. A tagline, search bar, and suggested prompts.

@boltx @b_rain you probably know this, but I only learned a few weeks ago that under the hood, they have to write "user:" before your prompt and "agent:" after it before the interface hands it to the LLM; otherwise it would just continue writing your prompt.

@jollysea Technically the models can vary a bit in how they handle that (e.g. an XML-style format with <user> and <llm> tags), but yeah, that's the structure essentially all conversational LLMs have to follow.

In the end, LLMs are just word prediction machines. They predict the most likely next word based on the prior context, and that's it. If nothing delineated the original prompt from the LLM's response, it would naturally just continue the prompt.
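To make that concrete, here's a tiny sketch of how a chat interface might flatten a conversation into one flat prompt before handing it to the word predictor. The role labels here are my own assumption; real models each use their own template (colon-prefixed labels, XML-ish tags, or special tokens).

```python
# Minimal sketch of the "role markers" idea, with assumed labels.
def build_prompt(messages):
    """Flatten (role, text) pairs into one text prompt for the model."""
    lines = [f"{role}: {text}" for role, text in messages]
    # Ending with a bare role marker steers the next-word prediction
    # toward writing a reply, rather than continuing the user's text.
    lines.append("assistant:")
    return "\n".join(lines)

print(build_prompt([("user", "Write a haiku about rain.")]))
# user: Write a haiku about rain.
# assistant:
```

Without that trailing marker, the most likely continuation of "Write a haiku about rain." is just more of the user's own sentence, which is exactly the continuation behaviour the old Playground exposed directly.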

@jollysea That was actually one of the most fun parts about the original interface. If you wanted it to continue some code, just paste in your code and it'll add on to it. Have a random idea for a poem? Write the first line, and it'll write a poem that continues from that starting line in a more cohesive manner.

Now any time you ask an LLM to do something, it won't just do the thing you wanted, it'll throw in a few paragraphs of extra text/pleasantries/reiteration you didn't ask for.

@boltx @jollysea @LordCaramac but also it was hard to project another mind into that interface so they had to change it for marketing reasons 🤷
@lechimp @boltx @jollysea GPT2 was a lot of fun, but for some reason, I find GPT3 and later versions rather boring.

@LordCaramac I'd assume that has something to do with how GPT2 was a lot more loosely fine-tuned than GPT3 and subsequent models.

GPT2 was more of an attempt at simply mimicking text, rather than mimicking text *in an explicitly conversational, upbeat, helpful tone designed to produce mass-market acceptable language*

Like how GPT2 would usually just do the thing you asked for, whereas GPT3 and others now all start with "Certainly! Here's a..." or something similar.

@boltx GPT2 was often quite unhinged and produced text that was quite surreal and like a weird hallucination or the ramblings of a madman. I liked it.