Been noodling a lot about AI stuff lately, and just keep SMH about how willing so many people seem to be to trust this technology.

My problem w/ the idea of asking AI chatbots to do anything consequential is that we seem to want them to be ever more human, while at the same time expecting them never to make mistakes.

Probably what we really want is for them to also learn from their mistakes. But that requires admitting when you're wrong -- changing your mind, if you will -- and letting those affected know you were in the wrong. On some level, that seems incompatible with what many expect from AI today.

@briankrebs This attitude makes sense when you consider that for the last 30 years, companies have been turning humans into bots with scripts. People see AI as just the next logical step, not realizing that AI bots lack the tethering to reality that humans have.