It's interesting how people are reacting to the OpenAI and Gemini demos purely as technical achievements without realizing it.

The audience ooh'd and aah'd when Gemini identified the speaker on the table when asked "what makes sound?"

Indeed, that's very impressive as a technical accomplishment. It's amazing for an AI.

The problem is that Google immediately took the leap to "this will help you in your day-to-day life!" & put it on all our phones.

Bruh I know what a speaker looks like.

Often when we don't know something, my wife will search it on her phone. This used to involve her clicking a result or two and then announcing the answer.

These days, she just reads out the AI summary that appears at the top.

I shit you not, the last 10 times she did this, all 10 of them were wrong. And wrong in a subtle way that only stands out if you know what's right, or try to apply the wrong answer to actually solve a problem.

It's all very impressive but it's consistently not useful.

@rodhilton This is what I keep running into... and now I see it on about a third of the websites that I click through to find answers to questions.

In those instances, I skim about a quarter of the way through, realize it was definitely written by AI, and then have to peace out.

I wish there were some way to certify stuff written by humans. At least then dumb stuff could be attributed to authors I can avoid. AI is everywhere now.

@rodhilton indeed. LLMs more and more feel like dealing with a kid who is incredibly fast at spewing out walls of text, but most of the time the answer is going to be extremely basic and will need careful checking.
I asked ChatGPT "How would you model time as a dimension of a 4-dimensional object in a 5-dimensional space?" and it took me 12 interactions to get close to the answer I was looking for.
It was just walls of text until I asked: MATH PLEASE
and even then, close but fundamentally wrong.