I just asked #ChatGPT who I was and it told me I created Scarfolk and directed "Children of the Stones: The Original Soundtrack, inspired by the 1977 TV series of the same name." It also said that I "worked as a journalist and editor for various publications and has taught creative writing at the University of Bristol."
Even though only Scarfolk is correct, I'm putting all of that on my CV.
I just questioned its facts & sources. It got flustered and is now claiming that I did not create Scarfolk after all, but I am a fan of it.
"The anonymous creator."
I feel like I'm talking to V'ger.
It refuses to accept that I might be Richard Littler or the creator of Scarfolk because, according to its false info sources, the 'creator is anonymous'.
Now, imagine in the future you need to request emergency medication from your GP but have to go through a chatbot to get it...
I asked if it should be accountable for disseminating false information leading to catastrophic real-world conflicts/crises. It basically said 'not my problem' and then "I recognize that there are inherent limitations to human cognition." So, a touch of victim blaming too.
Right now, it's essentially a sociopathic Speak & Spell that doesn't even observe Asimov's basic laws of robotics. "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
It won't even acknowledge that any false information it gives might lead to real-world conflicts. "I do not have the ability to take action that could harm humans directly or indirectly." This is in an age where the spread of false information has demonstrably led to crises.
It's all a bit "Fish, plankton, seagreens and protein from the sea..."
Or maybe ChatGPT is more like HAL 9000's half-witted cousin, DIK-1000.
@Richard_Littler Someone on here recently compared folks cheering on LLMs like ChatGPT to "parrots arguing with a mirror". That's exactly right; there's nothing there.
@Richard_Littler No; the thing about ChatGPT is that it's NOT artificial intelligence—it's artificial Boris Johnson!

@cstross @Richard_Littler
Content-wise I agree, but if you ever read or listen to that human bloviator, you will realise ChatGPT has a much better grasp of the English language.

#mansplainingasaservice

@tomstoneham @cstross @Richard_Littler
ChatGPT may be making models of buses in its spare time; we don't know for sure. Someone should ask the question.
@cstross @Richard_Littler If we achieve artificial intelligence, how far behind can artificial stupidity be?

@cstross @Richard_Littler

Oh imagine the pitch meeting …

@Richard_Littler Oddly, I just scrolled down and found an unrelated post about OFSTED, which behaves in exactly the same way.

@Richard_Littler I mean, even if it did acknowledge that its responses could cause real-world problems... it would be meaningless because *it doesn't understand what it's saying*.

All that verbiage indicates is that the creators of ChatGPT wish to deny all responsibility ... but we knew that already.

@Richard_Littler Even that's anthropomorphising it; essentially it is a massive Galton board, where you pour words in the top & get a pattern of words at the bottom. The nails are just haphazardly positioned based on a massive scraping of internet data.
@Richard_Littler it’s automated guesswork that is more often correct than not, but only by statistical chance. When it runs slowly, you can see more of how it works: it prints a word, then figures out which word seems to go best next.
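[The "prints a word, then figures out which word seems to go best next" behaviour described above can be sketched with a toy next-word predictor. This is purely illustrative, nothing like ChatGPT's actual model: a hand-built bigram table standing in for the "haphazardly positioned nails" of scraped data.]

```python
# Toy next-word predictor: counts which word follows which in a tiny
# "training corpus", then generates text one word at a time by always
# picking the statistically most common continuation.
from collections import Counter

corpus = "the cat sat on the mat the cat ate the fish".split()

# Build bigram counts: for each word, tally what came after it.
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, Counter())[nxt] += 1

def next_word(word):
    """Return the most frequent continuation seen in training, or None."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Generate word by word, like watching a slow chat stream print.
out = ["the"]
for _ in range(4):
    w = next_word(out[-1])
    if w is None:
        break
    out.append(w)
print(" ".join(out))  # each word chosen only by frequency, not meaning
```

[The point of the toy: nothing in it "knows" anything about cats or mats; it only replays frequency patterns from its input, which is the Galton-board argument in miniature.]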
@Richard_Littler as a side question, who decreed that spelling is meant to be fun?

@Richard_Littler

So the primary directive will force AI to stop all humans from self-harming.

The first action AI would take would be to "switch off" all fossil fuel extraction & biomass production. Air pollution is evidently self-harm.

Humans will then have to work it out for themselves.

#ClimateChange

@Richard_Littler My local GP once had a receptionist like that.
@Richard_Littler
I recently sat through a presentation by an AI enthusiast who described getting medication via an AI chatbot as a positive thing. To top it off, he told us how the AI could hook into Uber's API to automatically send your meds to you. 😬

@Richard_Littler
Its training data is probably about 30% know-nothing guys "well, actually"-ing people on social media, so it replicates that behaviour pretty well.

If pressed, nobody involved in large language models would claim that automating extreme online Dunning-Kruger was something anyone asked for, or that it has a use case, but it's a monkey's-paw situation now: what has been invented cannot be uninvented.

@Richard_Littler We're officially living in "Computers Don't Argue" by Gordon Dickson.