Been noodling a lot about AI lately, and I just keep SMH at how willing so many people seem to be to trust this technology.

My problem w/ the idea of AI chatbots being asked to do anything consequential is that we seem to want them to be ever-more human, while at the same time expecting them not to make mistakes.

Probably what we really want is for them to also learn from their mistakes. But that requires admitting when you're wrong -- changing your mind, if you will -- and letting others impacted know that you were in the wrong. On some levels, that seems incompatible with what many expect out of AI today.

@briankrebs I have been working on ML seriously again for four years, after a 20-year hiatus to do #swsec. See https://berryvilleiml.com

Happy to chat anytime, esp on the porch by the river.
