Just in case anyone forgot, Altman of OpenAI is reminding us that they believe they are actually building "AGI" ("AI systems that are generally smarter than humans") and that ChatGPT et al are steps towards that:

https://openai.com/blog/planning-for-agi-and-beyond/

>>

Planning for AGI and beyond

Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.

That is, the very people in charge of building #ChatGPT want to believe SO BADLY that they are gods, creating thinking entities, that they have lost all perspective about what a text synthesis machine actually is.

I wish I could just laugh at this, but it's problematic: these people living in a fantasy world are influencing policy decisions while also stirring up the current #AIhype frenzy, which makes it more difficult to design and pass effective policy.

@emilymbender unlike us who aren't text synthesis machines at all

@lritter there are a lot of machines that can do things we also do, and we don't claim they are thinking or doing anything like general intelligence

there are plenty of text synthesis systems we don't claim that about either, many not particularly different architecturally from the latest and greatest except in size

and even the very simplest text synthesizers, if presented in chatbot form, can make us think there's a person writing the text, but that says more about us than about the text generators