Just in case anyone forgot, Altman of OpenAI is reminding us that they believe they are actually building "AGI" ("AI systems that are generally smarter than humans") and that ChatGPT et al. are steps towards that:

https://openai.com/blog/planning-for-agi-and-beyond/

>>

Planning for AGI and beyond

Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.

That is, the very people in charge of building #ChatGPT want to believe SO BADLY that they are gods, creating thinking entities, that they have lost all perspective about what a text synthesis machine actually is.

I wish I could just laugh at this, but it's a real problem: the same people living in this fantasy world are influencing policy decisions and stirring up the current #AIhype frenzy, which makes it harder to design and pass effective policy.

@emilymbender could there be a thing where people who spend years training their minds to emulate computers are more inclined to look into the matrices and see themselves?
@milesmcbain @emilymbender shouldn’t be, because once you know what’s going on it’s pretty obvious it isn’t general AI, unless you really want to believe it is.
@Colman @emilymbender I’m suggesting they may have a more reductive view of what general intelligence actually is, influenced by the training of their brains to work in mathematical approximations. And that this may be an unconscious bias.
@milesmcbain @Colman @emilymbender Strong disagree. Folks who actually train their brains to do intense math see no magic here. It's the ones who see coding as "writing 100000 lines of boilerplate class interface definitions" who think LLMs are going to replace humans.

@dalias I tend to view that kind of coding as a combination of bad framework design & bad language design.

A few macros can typically reduce such boilerplate to a minimum.

Yes, this is again my "#Lisp is superior" argument.

No need for AI, just tools that aren't completely useless.
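[The post above makes a concrete claim: a few macros can collapse repetitive interface boilerplate. The thread's context is Lisp, but as a rough, hypothetical analogue in Python, a small class decorator can generate per-field accessor code that would otherwise be written out by hand for every field. The `with_accessors` name and the `Config` class are made up for illustration.]

```python
def with_accessors(*fields):
    """Class decorator that generates getter/setter boilerplate
    for each named field, instead of writing it out by hand."""
    def decorate(cls):
        for name in fields:
            attr = "_" + name
            # Default args bind the attribute name per iteration.
            def getter(self, _attr=attr):
                return getattr(self, _attr)
            def setter(self, value, _attr=attr):
                setattr(self, _attr, value)
            setattr(cls, name, property(getter, setter))
        return cls
    return decorate

@with_accessors("host", "port")
class Config:
    def __init__(self, host, port):
        self._host = host
        self._port = port

cfg = Config("localhost", 8080)
cfg.port = 9090  # generated setter, no hand-written boilerplate
```

The point stands regardless of language: when the repetitive pattern is expressed once as metaprogramming, there is nothing rote left for an LLM to "help" with.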

@lispi314 It's absolutely bad framework, language, and programming idiom design. If any part of the programming task is so idiotically repetitive that an LLM could do it, it shouldn't have been there to begin with. That code should be generated automatically (rigorously, not AI junk) as part of the build process, not by an IDE and then checked in as editable source, which is just awful.
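[The post above argues for rigorous, build-time code generation instead of IDE-generated checked-in code. As a minimal, hypothetical sketch: a deterministic generator turns a declarative field spec into source text, which a build system would write to a generated-sources directory rather than version control. The spec and `generate_class` helper are invented for illustration.]

```python
# Declarative spec: the single source of truth the build consumes.
FIELDS = [("host", "str"), ("port", "int")]

def generate_class(name, fields):
    """Deterministically emit a class definition from the spec."""
    args = ", ".join(f"{f}: {t}" for f, t in fields)
    lines = [f"class {name}:",
             f"    def __init__(self, {args}):"]
    for f, _ in fields:
        lines.append(f"        self.{f} = {f}")
    return "\n".join(lines) + "\n"

source = generate_class("Config", FIELDS)
# In a real build this would be written to a generated-sources dir,
# never committed or hand-edited. Here we just exec it to show the
# output is valid, working code.
namespace = {}
exec(source, namespace)
cfg = namespace["Config"]("localhost", 8080)
```

Because the generator is deterministic, regenerating from the spec always yields identical output, which is exactly the property probabilistic LLM generation cannot guarantee.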
@dalias Oh dear, the IDE-generated code gives me a lot of #Java flashbacks.