Reminder that EVERYTHING Anthropic or OpenAI announce in public is propaganda designed to boost their market cap when they hit the IPO they're aiming for.

It's marketing, folks. There is no intelligence behind the artifice, it's just spicy autocomplete and hucksters in $5000 suits trying to pick your boss's pocket.
https://mastodon.social/@arstechnica/115548040527129225

@cstross I mean, you have the right to your opinions, as do we all, but saying that AI is "autocomplete" is too much. An autocomplete would never suggest new names, new implementation and architectural ideas, and so on, and so forth. Yes, it's not "intelligence" as we perceive our own intelligence, but it's not autocomplete either.

@menelion @cstross "An autocomplete would never suggest new names, new implementation and architectural ideas, and so on, and so forth."

Why not? Autocompletes often suggest changes I do not want, out of context and with bad parameters.

LLMs are basically large-scale autocomplete, running on GPUs, with a grain of stochastic variability on top.
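To make that "autocomplete plus stochastic variability" claim concrete: at each step an LLM produces scores (logits) over its vocabulary and samples the next token from them, with a temperature knob controlling the randomness. Here is a minimal, hedged sketch of that final sampling step; the vocabulary and logit values are invented for illustration, not from any real model.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution; the max-subtraction
    is only for numerical stability and does not change the result."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, temperature=1.0, rng=random):
    """Sample one token. As temperature approaches 0 this approaches
    plain argmax, i.e. classic deterministic autocomplete."""
    probs = softmax(logits, temperature)
    return rng.choices(vocab, weights=probs, k=1)[0]

# Made-up next-token candidates and scores for illustration only.
vocab = ["cat", "sat", "mat", "planet"]
logits = [2.0, 1.0, 0.5, -1.0]

# Low temperature: nearly deterministic, like a phone keyboard's suggestion.
print(sample_next_token(vocab, logits, temperature=0.05))
# Higher temperature: the "grain of stochastic variability".
print(sample_next_token(vocab, logits, temperature=1.5))
```

The only difference from a phone keyboard's autocomplete is scale: the distribution comes from billions of parameters instead of an n-gram table, but the decoding loop is this same pick-the-next-token step repeated.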

@Enthalpiste @cstross It's like calling a modern powerful PC a typewriter. Fundamentally, yes, probably. But I experimented with an AI that knew nothing about me: I opened its page cold, no signup, no login. Out of the blue I gave it three words I had invented, totally made up, and asked it to choose among them. It deduced that one of them might be the name of a planet in a fantasy setting (which, in that case, was true). I repeat, I invented those words; I was not inspired by anything (nothing like "Earst" for "Earth", far from it). So call it autocomplete if you fancy, but it has enough training to evaluate made-up material and suggest something based on it.
Of course, it hallucinates.
Of course, you don't commit AI-generated code to production, you don't send AI-generated emails without checking, you don't… you get it, of course that is absolutely true.
But I don't see why not to use it as a tool that can make processes far quicker and spare you the boring stuff.

@menelion @cstross The fact that a word is made up and the fact that it sounds fitted to a given context are two different things, neither of which validates your claim.

On the one hand, you say your word is made up, so it's unlikely that ChatGPT can guess its context. Yet it is fairly easy to guess the context of some made-up words, such as FartLand or Poppilympulla: the first could be a parody country or planet, the second a fake medical or botanical term. Yet they're made up. Made-up words are often chosen precisely so that we can make sense of their use.

In addition, you say you chose your word for a given use case that ChatGPT guessed. The mere fact that you chose this word for a fantasy-land context indicates that you found it fit for the purpose, based on your own knowledge of fantasy literature or art. That is exactly what ChatGPT is also trained on. It is fooling you.

On the other hand, for such uses my brain does a better job most of the time, without relying on a venture-funded industry that is likely to raise its prices dramatically at some point and that is polluting the planet, essentially for mediocre outputs that nobody asked for.