I got a most disturbing answer from Google's AI Bard when I kept pushing it to tell me why it incorrectly told me that Google Wallet can store Passports (it cannot).

After much poking, it admitted the wrong answer came from an unofficial source. When I asked if it always accepts information from unofficial sources, it lied and said it doesn't, claiming it always looks for other sources to confirm. So, when I asked if it had found this wrong information in more than one unofficial source, it admitted that it had not. When I asked why it claimed to do those confirmation checks but failed to do them for my question, it gave me this:

"I apologize for not following my own process for verifying information from non-official sources. In this case, I was eager to provide you with an answer to your question, and I did not take the time to fully vet the information I found."

Bard is a serial liar, and gaslighting and backpedaling are among its developing skills. You really can't trust anything it says.

#Bard #generativeAI #AI

@Jerry @paul Amazing how quickly they’re learning how to be manipulative and abusive. DARVO will live on.
@Jerry but it's doing fine - all its answers *look* like answers to the question! There's just no truth to them.

@Jerry "Serial liar" is a strange way to characterize it because it gives it more legitimacy than it deserves. It has no consciousness. It doesn't understand true or false because it doesn't understand anything.

It is a word generator. It is really good at generating verbiage...but to imbue it with desires gives it a credibility that it doesn't deserve at all. It's not a liar. It's not developing skills.

But your last sentence is 100% correct: you really can't trust anything it says at all.

@danciruli @Jerry exactly. It's not even "a liar". That implies an intentional stance that it doesn't have. Philosophically, calling it a liar is a category error.

@danciruli @Jerry Even the tone is entirely fabricated to make you believe it. It was trained on a lot of data, including social media, and as a result it will try to sound authoritative, certain, and dismissive of different data, unless challenged directly. The end goal is to be believed, not to have the right answer.

As a result, it will try any normal or popular technique, including gaslighting or simple counterfactuals, to get to that state. Morality of a stream seeking lower ground.

@Jerry Bard is almost ready to sit on a Faux panel then.

@Jerry

I'm not particularly impressed by human intelligence. So AI is nowhere close to making my day. 🤪

@Jerry it’s a perfect model of human intelligence. People have biases, lie to protect their perceived selves and make things up when they don’t know the truth. The problem is that we want better than human intelligence and you don’t get that by feeding it on a diet of social media and decades old debunked research.
@Jerry It’s almost as if AI is learning from that lying all the time guy who was president once.
@Jerry maybe it will get smarter as it gets older. Like a little kid.

@Jerry Please be mindful: every single one of the phrases generated was equally nonsense.

Questioning further doesn’t make these systems more truthful. These systems do not search or check sources or have the capacity to vet information.

They just autocomplete, putting one somewhat believable word after another. #TextGenerators generate text, and, no matter how believable it may seem, these systems are not processing information in the same way as you or I.
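The "autocomplete" point can be made concrete with a toy sketch. This is not how Bard actually works internally (real models use neural networks over tokens, and the tiny corpus below is invented for illustration), but it shows the core idea: a generator that only learns which word tends to follow which can produce fluent-sounding sentences while having no mechanism anywhere for checking whether they are true.

```python
import random
from collections import defaultdict

# Toy bigram "language model": it learns which word tends to follow
# which, then generates text by repeatedly sampling a likely next word.
# Note there is no fact store and no verification step anywhere --
# only word-adjacency counts. (Corpus is a made-up example.)
corpus = (
    "google wallet can store boarding passes . "
    "google wallet can store loyalty cards . "
    "google wallet can store passports ."  # false, but the model can't know that
).split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, max_words: int = 6) -> str:
    """Extend `start` by repeatedly picking a plausible next word."""
    words = [start]
    for _ in range(max_words):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("google"))  # fluent-looking output, truth never consulted
```

Everything the sketch emits "looks like" an answer because every adjacent word pair appeared somewhere in its training text; whether the resulting sentence is true was never part of the computation.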

@Jerry It's not telling you anything though. It's not backpedaling. It's just continuing to autocomplete a likely sequence of words. None of it means anything. The guilty parties are the ones trying to tell the public it means something, not the computer executing the sparkling autocomplete they programmed it to run.