Just in case anyone forgot, Altman of OpenAI is reminding us that they believe they are actually building "AGI" ("AI systems that are generally smarter than humans") and that ChatGPT et al. are steps towards that:

https://openai.com/blog/planning-for-agi-and-beyond/

>>

Planning for AGI and beyond

Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.

That is, the very people in charge of building #ChatGPT want to believe SO BADLY that they are gods, creating thinking entities, that they have lost all perspective about what a text synthesis machine actually is.

I wish I could just laugh at this, but it's problematic: these people living in a fantasy world are influencing policy decisions while simultaneously stirring up the current #AIhype frenzy, which makes it even more difficult to design and pass effective policy.

@emilymbender Is there really a necessary equivalence between AGI and thinking entities? I've mostly seen AGI defined as "general problem solving" rather than as anything about thought or sentience, which is a totally separate concept. You can demonstrate general problem solving, but proving thinking or sentience is a much harder task. Non-thinking general problem solvers are enough to create societal-level problems and power imbalances, which I think OpenAI acknowledges.