Just in case anyone forgot, Altman of OpenAI is reminding us that they believe they are actually building "AGI" ("AI systems that are generally smarter than humans") and that ChatGPT et al. are steps towards that:

https://openai.com/blog/planning-for-agi-and-beyond/

>>

Planning for AGI and beyond

Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.

That is, the very people in charge of building #ChatGPT want to believe SO BADLY that they are gods, creating thinking entities, that they have lost all perspective about what a text synthesis machine actually is.

I wish I could just laugh at this, but it's a real problem: the people living in this fantasy world are also influencing policy decisions and stirring up the current #AIhype frenzy, which makes it harder to design and pass effective policy.

@emilymbender

So far ChatGPT has been tried in:

1. Customer call centers (customers *loathe* chatbots)
2. News articles (paying subscribers start canceling their subscriptions)
3. Recommendation systems (users start harassing authors & libraries for non-existent books & articles)
4. Content farms (followers start unfollowing)

Are there any examples of "successful" launches of ChatGPT aside from dating & porn sites, Twitter, Facebook, & Instagram chatbots?

@Npars01 @emilymbender
It is still early to say, but companies are exploring all kinds of applications.
I heard of an interesting use case: humans would be trained to perform a series of specific commands on an old system in order to fulfill a customer request, and they used to spend months teaching people the usage of these commands.

ChatGPT was able to generate the list of commands from the customer's request in plain words.
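For what it's worth, the pattern described above is easy to sketch. The snippet below is a minimal illustration, not from the thread itself: the command names and request phrases are invented, and the LLM call is replaced by a stub, since in practice you would send the request text to a model and then validate its reply against an allow-list before executing anything.

```python
# Hypothetical sketch: map a plain-language customer request to a fixed
# sequence of legacy-system commands. The commands and phrases here are
# invented for illustration; `complete` stands in for a real LLM call.

# Allow-list of known request types and their legacy command sequences.
LEGACY_COMMANDS = {
    "change my address": ["LOOKUP_ACCOUNT", "EDIT_ADDRESS", "SAVE_RECORD"],
    "cancel my order": ["LOOKUP_ORDER", "SET_STATUS CANCELLED", "SAVE_RECORD"],
}

def complete(request: str) -> list[str]:
    """Stand-in for the model: return the command list for a request.

    A real implementation would call an LLM API with the request text,
    parse the reply, and check every returned command against the
    allow-list -- models can and do emit commands that don't exist.
    """
    for phrase, commands in LEGACY_COMMANDS.items():
        if phrase in request.lower():
            return list(commands)
    return []  # unrecognized request: escalate to a human instead

print(complete("Hi, I'd like to change my address please"))
# → ['LOOKUP_ACCOUNT', 'EDIT_ADDRESS', 'SAVE_RECORD']
```

The validation step is the important part: without checking the model's output against commands the old system actually accepts, the same failure mode as the "non-existent books & articles" example above shows up here too.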