Been noodling a lot about AI stuff lately, and I just keep shaking my head at how willing so many people seem to be to trust this technology.

My problem w/ the idea of AI chatbots being asked to do anything consequential is that we seem to want them to be ever-more human, while at the same time expecting them never to make mistakes.

Probably what we really want is for them to also learn from their mistakes. But that requires admitting when you're wrong -- changing your mind, if you will -- and letting those affected know that you were in the wrong. On some level, that seems incompatible with what many expect out of AI today.

@briankrebs You can simplify this to "executives want to replace their expensive human staff". The problem is that this is less like replacing an auto worker with a robotic system that repeats the same exact task over and over again, and more like offshoring intelligence work to another, cheaper country and discovering that there are a bunch of downsides to it.
@dbendit @briankrebs Sure, but the executives aren't doing the building, and that means that if this is the end goal (not an unreasonable assumption) the people doing the work are just building up the leopard that's going to eventually eat them.
@foxxtrot @briankrebs How is this any different from all the folks who got flown to India to train the teams that ended up replacing them? None of this is new.
@dbendit @briankrebs Well, the Indian workers were much more reliable than these LLMs are likely to be.
@foxxtrot @briankrebs Eh. Having worked with folks overseas, I'm not sure I'd agree. Everyone talking about having LLMs write code for them and then editing and fixing it afterwards is doing the same thing I was doing with folks from Jakarta ten years ago.