🚨 noyb has filed a complaint against the ChatGPT creator OpenAI

OpenAI openly admits that it is unable to correct false information about people on ChatGPT. The company cannot even say where the data comes from.

Read all about it here 👇

https://noyb.eu/en/chatgpt-provides-false-information-about-people-and-openai-cant-correct-it

ChatGPT provides false information about people, and OpenAI can’t correct it

noyb today filed a complaint against the ChatGPT maker OpenAI with the Austrian DPA


@noybeu Reading the article, I couldn't help but wonder if some folk deliberately misunderstand what AI is. In particular the I part. What part of an encyclopaedia is intelligent? It's a basic store of information. It can be read, edited, corrected where wrong, and so on.

But an AI is so named because it is loosely modelled on the imagined workings of a brain, a neural network... It's also a term used in this arena for similar reasons.

And the goal is a similar outcome. That is, learning, which captures information in a form that permits recollection only by reconstruction (much like the brain), and, in responding to a prompt or request, assembles the reply judged most likely to please, an idea governed by a notion of familiarity, of being the kind of reply we're confident we've heard to similar questions, etc...

But you get my drift... OpenAI is not being evasive or inconsiderate. They are working on AI; they can't easily predict its responses nor modify them, any more than you can your colleague's brain. The very thing that is enabling AI is the ability to handle an abundance of abstract data in real time... stored as weights abstracted from learning inputs.

It is no more likely to spit out accurate facts than, wait for it, the brains it is being modelled on. The whole point of the I in AI is being able to reassemble learnings in novel ways and to respond to prompts with diverse goals, etc.

In a nutshell: who is surprised that it can and does say untrue things? That would seem to be an inherent property of the endeavour.
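The "recollection only by reconstruction" idea above can be sketched with a toy example. This is a minimal, made-up illustration (the tiny corpus and all names are invented for this post, and real LLMs use neural networks over tokens, not bigram counts), but the failure mode is the same: the model stores transition weights, not facts, so it can fluently assemble statements that were never in its training data.

```python
import random

# Toy training corpus (invented for illustration).
corpus = (
    "alice was born in vienna . "
    "bob was born in berlin . "
    "alice works in berlin ."
).split()

# "Training": count which word follows which. These counts play the
# role of the model's weights -- no sentence is stored verbatim.
weights = {}
for prev, nxt in zip(corpus, corpus[1:]):
    weights.setdefault(prev, []).append(nxt)

def generate(prompt, steps=5, seed=0):
    """Reconstruct a plausible continuation by sampling familiar
    next words, one at a time, from the learned weights."""
    random.seed(seed)
    out = [prompt]
    for _ in range(steps):
        options = weights.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# Every output is built from locally familiar word pairs, so it reads
# fluently -- yet "alice was born in berlin" is a possible output even
# though no training sentence ever said so.
print(generate("alice"))
```

Nothing in the weights distinguishes true reconstructions from false ones, which is exactly why "just correct the false statement" has no obvious handle to pull.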

@thumbone @noybeu The point is that that inherent property may well be illegal under EU law, not that it is surprising.

@denisbloodnok @noybeu It may be, though that itself would be surprising, given humans do it all the time. Perhaps all these "I have a right to my opinion" people uttering untruths across the internet need to lobby for AI to have that same right 🤣. It is, after all, emulating us... that is the goal.

That said, to me the article read like people were indeed surprised at the fallibility of AI. And this is an issue: the quality of language emulation achieved in LLMs can and does lull people into thinking the model is smart.

Ironically, well-spoken humans have been exploiting that since the dawn of language, and it's an accepted explanation for the complexity of modern languages (evolving to maintain grading bars, so that elocution can serve as a proxy for measuring credibility).

@thumbone @noybeu Humans do it all the time, but there isn't a law against what data humans can have and process in their heads, for obvious reasons. There is law about what organisations and computers can do with data.

"AI" isn't emulating us. Humans - sometimes badly - reason about whether statements are true or false.

@denisbloodnok @noybeu I beg to differ. AI simply scans data much as your eyes do, and need not retain it; in fact the whole point is not to, but to extract patterns from it. And to argue it is not emulating humans strikes me as disregarding the very evolution of neural networks, LLMs and AI in general. It is at a bare minimum inspired by humans and targeted at interfacing with (interacting and communicating with) humans, and I don't know what part of "emulating" you think is missing... Perhaps the physical? Even there, robotics is doing an awful lot of human emulation... and animal emulation... etc.