Yesterday, I had an argument with an AI booster. I'm not going to link to it, both because I don't want to platform them and because I don't want anyone to go harass them. But what I found very interesting was that when I asked point-blank whether there was any degree to which ethical problems with LLMs could make them not want to use AI, they told me no, there was not, and implied that they evaluated AI purely on the basis of its efficacy.
@xgranade AI (currently Automated Incompetence) can certainly work if it's configured and fed correctly. Fed tons of poorly curated stolen data from all over the internet, which is 90% garbage, it mostly spews garbage (GIGO applies). The idea that such an LLM could properly feed a front end that would then magically give correct answers was flawed from the beginning. That unethical back end is both the ethical problem and the technical problem with what they're pushing. There are solutions for that in the long run, but in the short run, the only correct step is to stop using LLMs, or never to start.