Been noodling a lot about AI lately, and I just keep shaking my head at how willing so many people seem to be to trust this technology.

My problem w/ the idea of AI chatbots being asked to do anything consequential is that we seem to want them to be ever more human, while at the same time expecting them not to make mistakes.

Probably what we really want is for them to also learn from their mistakes. But that requires admitting when you're wrong -- changing your mind, if you will -- and letting those affected know that you were in the wrong. On some level, that seems incompatible with what many expect out of AI today.

@briankrebs I think a lot of these “AI” outputs are simply reflections of their creators and an oversimplification of the status quo. Meaning, they’ll be confidently wrong and then try to wiggle their way out of it or insist that their information is valid. (Because they said so.)

To me, most generative text results are nothing more than makeshift stories in which nearly every word is chosen on the basis of assumed prevalence, given the input or prompt.
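That "assumed prevalence" idea can be made concrete with a deliberately toy sketch: a bigram model that counts which word most often follows each word in a tiny corpus, then generates by always emitting the most prevalent follower. (This is an illustration of the statistical principle, not how modern LLMs actually work -- they use learned neural networks over huge corpora, not raw bigram counts.)

```python
from collections import Counter, defaultdict

# Toy illustration (NOT a real language model): count which word
# follows which in a tiny corpus, then generate text by always
# picking the most prevalent follower -- "assumed prevalence"
# in miniature.
corpus = "the cat sat on the mat and the cat ran".split()

# bigrams["the"] ends up as Counter({"cat": 2, "mat": 1})
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(seed, length=5):
    """Greedily append the most common follower of the last word."""
    out = [seed]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:  # dead end: no observed follower
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))
```

Even at this scale the point holds: the output is fluent-looking word salad assembled from observed frequencies, with no notion of whether any of it is true.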

I agree -- it’s weird how some put their full faith in these things…

@briankrebs It’s interesting that so many people are trying to portray all of this as “the response seems so human, but can’t be trusted because it’s not,” when the process is literally contingent on gathering its knowledge base from a combination of already-established human sentiments, regardless of their validity.

And I haven’t even touched on the fact that it’s all very obviously profit-driven, and that eliminating the need for human intervention in some areas is a pipe dream, despite how many people try to argue otherwise. (Those who pay to have it developed aren’t paying for it to be beneficial to everyone.)