Just in case anyone forgot, Sam Altman of OpenAI is reminding us that they believe they are actually building "AGI" ("AI systems that are generally smarter than humans") and that ChatGPT et al. are steps toward that:

https://openai.com/blog/planning-for-agi-and-beyond/

>>

Planning for AGI and beyond

Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.

That is, the very people in charge of building #ChatGPT want to believe SO BADLY that they are gods, creating thinking entities, that they have lost all perspective about what a text synthesis machine actually is.

I wish I could just laugh at this, but it's problematic: these people living in a fantasy world are influencing policy decisions while stirring up the current #AIhype frenzy, which makes it more difficult to design and pass effective policy.

@emilymbender could there be a thing where people who spend years training their minds to emulate computers are more inclined to look into the matrices and see themselves?
@milesmcbain @emilymbender shouldn’t be, because once you know what’s going on it’s pretty obvious it isn’t general AI, unless you really want to believe it is.
@Colman @milesmcbain @emilymbender almost no one working on this thinks it's general AI right now, but it's clearly a stepping stone on the path to general AI
@Techronic9876 @Colman @milesmcbain @emilymbender I see no evidence that this is clear. In the 70s/80s folks claimed lisp was a clear stepping stone to "AI". 🤪
@dalias @Colman @milesmcbain @emilymbender what kind of evidence would you find convincing?
@Techronic9876 @Colman @milesmcbain @emilymbender Short of actual demonstration of it working, I guess maybe research showing correspondence to human brain processes. However I think that's missing the point. X isn't really worth calling a stepping stone to Y if X is the ridiculously easy part.
@dalias @Techronic9876 @Colman @milesmcbain @emilymbender I'd say maybe people committing to serious attempts to make systems that reason *about* their data instead of mostly detecting patterns and blindly extrapolating new data points. And no, I have no idea how exactly that would work or what it would even look like. (and yes, I've done my share of amateur Prolog, it's not the same)
@Techronic9876 @Colman @milesmcbain @emilymbender Nothing LLMs are doing is revolutionary. Most of the concepts were known decades ago. What's new is the scale of scraping of data and the scale of resource burning to use that data for training. This is strikingly unlike humans: we don't need to have read the entire indexed internet to bullshit. Our corpora of textual learning are much smaller, with much better results.
@Techronic9876 @Colman @milesmcbain @emilymbender As such this makes me doubt that the current direction has much value as a step towards intelligence. It is what it is: a statistical model for making text you can convince humans to believe in a selected context. I.e. for making bullshit.
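The "statistical model for making text" claim can be made concrete with a toy sketch. Here is a minimal bigram sampler (the corpus, seed, and function names are invented for illustration, and real LLMs are vastly more sophisticated): it emits locally plausible word sequences purely from observed co-occurrence statistics, with no notion of whether the output is true.

```python
import random
from collections import defaultdict

# Tiny corpus; every generated word will have been seen here.
corpus = (
    "the model predicts the next word "
    "the model predicts plausible text "
    "plausible text is not true text"
).split()

# Count bigram successors: word -> list of words that followed it.
successors = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    successors[a].append(b)

def generate(start, n, seed=0):
    """Sample n words by repeatedly picking a random observed successor."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        nxt = successors.get(out[-1])
        if not nxt:  # dead end: no observed successor
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

print(generate("the", 8))
```

Every adjacent word pair in the output is a bigram that actually occurred in the corpus, so the text "sounds right" locally, yet the sampler has no representation of the claims the words make.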

@dalias @Colman @milesmcbain @emilymbender “a statistical model for making text” defines most human communication (if you start saying statistically improbable things ppl will think you’re crazy)

LLMs don’t know whether they’re making up bullshit (neither do a lot of humans for that matter), but they will soon

One-shot and zero-shot prompting took scientists completely by surprise, as did chain-of-thought reasoning: these were emergent, previously unknown capabilities of LLMs

@Techronic9876 @Colman @milesmcbain @emilymbender Your first paragraph mixes up necessary & sufficient conditions (equivalently, proposition and its converse).

Of course the majority of communication is statistically probable in some probability distribution. But that doesn't mean probability in such a distribution "defines" it. Being probable given textual context does not make a statement non-batshit-wrong.