Just in case anyone forgot, Altman of OpenAI is reminding us that they believe they are actually building "AGI" ("AI systems that are generally smarter than humans") and that ChatGPT et al. are steps towards that:

https://openai.com/blog/planning-for-agi-and-beyond/

>>

Planning for AGI and beyond

Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.

That is, the very people in charge of building #ChatGPT want to believe SO BADLY that they are gods, creating thinking entities, that they have lost all perspective about what a text synthesis machine actually is.

I wish I could just laugh at this, but it's a problem: these people living in a fantasy world are influencing policy decisions while stirring up the current #AIhype frenzy, which makes it even harder to design and pass effective policy.

@emilymbender could there be a thing where people who spend years training their minds to emulate computers are more inclined to look into the matrices and see themselves?
@milesmcbain @emilymbender shouldn’t be, because once you know what’s going on it’s pretty obvious it isn’t general AI, unless you really want to believe it is.
@milesmcbain @emilymbender but that was true of expert systems and they got the same sales pitch and the same religious fervour, so 🤷🏻‍♂️.
@milesmcbain @emilymbender in fact it reminds me of a phenomenon I noticed when working on a PhD many decades ago: newly developed formal methods tended to be ready for industrial application at just about the time the lead researcher started needing to settle down, buy a house, and have some kids.
@Colman @emilymbender I’m suggesting they may have a more reductive view of what general intelligence actually is, influenced by the training of their brains to work in mathematical approximations. And that this may be an unconscious bias.
@milesmcbain @Colman @emilymbender Strong disagree. Folks who actually train their brains to do intense math see no magic here. It's the ones who see coding as "writing 100000 lines of boilerplate class interface definitions" who think LLMs are going to replace humans.

@dalias I tend to view that kind of coding as a combination of bad framework design & bad language design.

A few macros can typically reduce such boilerplate to a minimum.

Yes this is again my #Lisp is superior argument.

No need for AI, just tools that aren't completely useless.
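(For a runnable illustration of the point above: the thread's example is Lisp macros, but the same boilerplate-elimination idea can be sketched in Python with a class decorator. All names here are hypothetical; it is an analogy, not anyone's actual framework.)

```python
# Minimal sketch (hypothetical names): one decorator line generates the
# Java-style getter/setter pairs that would otherwise be hand-written
# boilerplate -- the kind of repetition an LLM is often asked to churn out.
def with_accessors(*fields):
    def decorate(cls):
        for f in fields:
            private = "_" + f
            # bind the field name via a default argument so each closure
            # keeps its own field rather than the loop's last value
            def getter(self, _p=private):
                return getattr(self, _p)
            def setter(self, value, _p=private):
                setattr(self, _p, value)
            setattr(cls, "get_" + f, getter)
            setattr(cls, "set_" + f, setter)
        return cls
    return decorate

@with_accessors("x", "y")
class Point:
    def __init__(self, x, y):
        self._x, self._y = x, y

p = Point(1, 2)
p.set_x(10)
print(p.get_x())  # four method definitions replaced by one decorator line
```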

@lispi314 It's absolutely bad framework, language, and programming idiom design. If any part of the programming task is so idiotically repetitive an LLM could do it, it shouldn't have been there to begin with. That code should be (rigorously, not AI junk) generated automatically as part of the build process (not by an IDE then checked in and editable, which is just awful).
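(A deterministic generator of the kind described above can be tiny. This is a hypothetical sketch, not any real build tool: a declarative spec is expanded into repetitive class code at build time, so the boilerplate never lives in the repository or gets hand-edited.)

```python
# Hypothetical build-step sketch: generate repetitive interface code
# rigorously from a small declarative spec, instead of hand-writing
# (or LLM-writing) it and checking it in.
SPEC = {"User": ["id", "name", "email"]}

def generate(spec):
    lines = []
    for cls, fields in spec.items():
        lines.append(f"class {cls}:")
        args = ", ".join(fields)
        lines.append(f"    def __init__(self, {args}):")
        for f in fields:
            lines.append(f"        self.{f} = {f}")
        lines.append("    def to_dict(self):")
        pairs = ", ".join(f"'{f}': self.{f}" for f in fields)
        lines.append(f"        return {{{pairs}}}")
    return "\n".join(lines)

generated = generate(SPEC)
namespace = {}
exec(generated, namespace)  # in a real build this would be written to a file
u = namespace["User"](1, "ada", "ada@example.com")
print(u.to_dict())
```

The output is fully determined by the spec, so regenerating it is reproducible and reviewable, unlike checked-in IDE or LLM output.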
@dalias Oh dear, the IDE-generated code gives me a lot of #Java flashbacks.
@dalias @milesmcbain @Colman @emilymbender The ones who haven't written their own template files, anyway.
@dalias @milesmcbain @Colman @emilymbender Seeing no magic in LLMs does not imply thinking of human thought as somehow magically above that
@anoreon @milesmcbain @Colman @emilymbender It's not "magically above that". It's different from that. I hesitate to even say "above" because they're not levels on a ladder, they're different things. Just like an ALU and human brain activity are different things.
@dalias @milesmcbain @Colman @emilymbender I mean I agree to a certain extent; an ALU and a human brain have important differences, and "above" maybe wasn't the right word. I was just pointing out that viewing the prospect of AGI downstream of LLMs as possible can also come from viewing the human brain as fundamentally unmagical and thus comparable, which I think is also the point Miles was trying to make.
@Colman @milesmcbain @emilymbender almost no one working on this thinks it’s general AI right now, but it’s clearly a stepping stone on the path to general AI
@Techronic9876 @Colman @milesmcbain @emilymbender I see no evidence that this is clear. In the 70s/80s folks claimed lisp was a clear stepping stone to "AI". 🤪
@dalias @Colman @milesmcbain @emilymbender what kind of evidence would you find convincing?
@Techronic9876 @Colman @milesmcbain @emilymbender Short of actual demonstration of it working, I guess maybe research showing correspondence to human brain processes. However I think that's missing the point. X isn't really worth calling a stepping stone to Y if X is the ridiculously easy part.
@dalias @Techronic9876 @Colman @milesmcbain @emilymbender I'd say maybe people committing to serious attempts to make systems that reason *about* their data instead of mostly detecting patterns and blindly extrapolating new data points. And no, I have no idea how exactly that would work or what it would even look like. (and yes, I've done my share of amateur Prolog, it's not the same)
@Techronic9876 @Colman @milesmcbain @emilymbender Nothing LLMs are doing is revolutionary. Most of the concepts were known decades ago. What's new is the scale of scraping of data and the scale of resource burning to use that data for training. This is particularly unlike humans. We don't need to have read the entire indexed internet to bullshit. Our corpora of textual learning are much smaller but with much better results.
@Techronic9876 @Colman @milesmcbain @emilymbender As such this makes me doubt that the current direction has much value as a step towards intelligence. It is what it is: a statistical model for making text you can convince humans to believe in a selected context. I.e. for making bullshit.

@dalias @Colman @milesmcbain @emilymbender “a statistical model for making text” defines most human communication (if you start saying statistically improbable things ppl will think you’re crazy)

LLMs don’t know whether they’re making up bullshit (neither do a lot of humans for that matter), but they will soon

One-shot and zero-shot prompting took scientists completely by surprise, as did chain-of-thought reasoning; these were emergent, unknown capabilities of LLMs

@Techronic9876 @Colman @milesmcbain @emilymbender Your first paragraph mixes up necessary & sufficient conditions (equivalently, proposition and its converse).

Of course the majority of communication is statistically probable in some probability distribution. But that doesn't mean probability in such a distribution "defines" it. Being probable given textual context does not make a statement non-batshit-wrong.

@Techronic9876 @dalias @Colman @milesmcbain @[email protected]

Anything that shows these products can actually achieve excellence sometimes, and not just average mediocrity at the absolute best, and that's if you ignore that they have no ontologies whatsoever

@milesmcbain @Techronic9876 @dalias @Colman @emilymbender

Mincing words here. The advent of networked computing and mass data collection is what “started it” in my opinion.

@Colman @milesmcbain @emilymbender People lie best when they lie to themselves.
@milesmcbain @emilymbender Exactly the opposite. If you understand any of this at all on a technical level, it has no such magic and obviously has no relationship with intelligence.