Been noodling a lot about AI stuff lately, and I just keep SMH at how willing so many people seem to be to trust this technology.

My problem w/ asking AI chatbots to do anything consequential is that we seem to want them to be ever-more human, while at the same time expecting them not to make mistakes.

Probably what we really want is for them to also learn from their mistakes. But that requires admitting when you're wrong -- changing your mind, if you will -- and letting those affected know you were in the wrong. On some levels, that seems incompatible with what many expect out of AI today.

@briankrebs I think quick trust in technology also comes from wanting convenience (a faster end result and/or less input required of us). Complex tools give complex results, and the verification burden scales with them -- but verifying erodes some of that convenience. Taking shortcuts to preserve convenience is also a human tendency. Looks like correctness took a backseat to convenience.