Just in case anyone forgot, Altman of OpenAI is reminding us that they believe they are actually building "AGI" ("AI systems that are generally smarter than humans") and that ChatGPT et al. are steps towards that:

https://openai.com/blog/planning-for-agi-and-beyond/

>>

Planning for AGI and beyond

Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.

That is, the very people in charge of building #ChatGPT want to believe SO BADLY that they are gods, creating thinking entities, that they have lost all perspective about what a text synthesis machine actually is.

I wish I could just laugh at this, but it's problematic because these people living in a fantasy world are influencing policy decisions while also stirring up the current #AIhype frenzy, which makes it even more difficult to design and pass effective policy.

@emilymbender could there be a thing where people who spend years training their minds to emulate computers are more inclined to look into the matrices and see themselves?
@milesmcbain @emilymbender It shouldn’t be, because once you know what’s going on, it’s pretty obvious it isn’t general AI—unless you really want to believe it is.
@Colman @emilymbender I’m suggesting they may have a more reductive view of what general intelligence actually is, influenced by training their brains to work in mathematical approximations. And that this may be an unconscious bias.
@milesmcbain @Colman @emilymbender Strong disagree. Folks who actually train their brains to do intense math see no magic here. It's the ones who see coding as "writing 100000 lines of boilerplate class interface definitions" who think LLMs are going to replace humans.
@dalias @milesmcbain @Colman @emilymbender Seeing no magic in LLMs does not imply thinking of human thought as somehow magically above that
@anoreon @milesmcbain @Colman @emilymbender It's not "magically above that". It's different from that. I hesitate to even say "above" because they're not levels on a ladder, they're different things. Just like an ALU and human brain activity are different things.
@dalias @milesmcbain @Colman @emilymbender I mean, I agree to a certain extent: an ALU and a human brain have important differences, and "above" maybe wasn’t the right word. I was just pointing out that viewing AGI downstream of LLMs as possible can also come from viewing the human brain as fundamentally unmagical and thus comparable, which I think is also the point Miles was trying to make.