@elilla Something I've noticed lately to add to this list of Bads:
5) Translate results are more superficially fluent and plausible
that's the output smoothing from the LLM. earlier versions of machine translation would fail visibly (whether you knew the target language or not, you could see syntax or word choices that didn't make sense) and informatively (you could tell something was off, and if you knew the target language even a little you'd probably see where). you could never count on translation results being suitable for copy-paste, and so, the inevitable corollary, you knew NOT TO COUNT ON THEM
now, however, translate results look "good," i.e. plausible, and unless you know the target language *better* than a little, that veneer of plausibility effaces your ability to gut-check the results (and not just in the moment, but by way of steady erosion the more you engage with the software. in effect you're developing a muscle memory that tells you you CAN and SHOULD count on the results)
machine translation is every bit as unreliable as it always has been and *will* be: anyone who's actually done translation knows that it's not a solvable problem (nor in fact a "problem" at all in that compsci-ish sense), any more than "intelligence" is. but you can make it LOOK solved, and if you've bamboozled enough of your user base into playing along then who's to say it isn't?
meanwhile you've boosted your AI numbers and given yourself a whole nother category of deskilled labor you can show the axe to; and the money's very happy with you