Been noodling a lot about AI lately, and I just keep SMH at how willing so many people seem to be to trust this technology.

My problem w/ the idea of AI chatbots being asked to do anything consequential is that we seem to want them to be ever more human, while at the same time expecting them not to make mistakes.

Probably what we really want is for them to also learn from their mistakes. But that requires admitting when you're wrong -- changing your mind, if you will -- and letting others impacted know that you were in the wrong. On some levels, that seems incompatible with what many expect out of AI today.

@briankrebs My experience has been that ChatGPT will “admit” that it was wrong when I correct it. But it won’t learn from the correction.
@Kalka2 @briankrebs And worse, you have to explicitly correct it! And then it is ever so sorry: how could anyone have thought that the answer it just gave could be correct?