We used to have working spelling and grammar checkers. Why does everybody in tech pretend you need a whole-ass LLM to check for typos?
@baldur
And translations
And text to speech was working well in most cases
@wikiyu @baldur no offense, but LLMs are really really good at translations, compared to the state of the art before. (and e.g. Google Translate was a lot more LLM-style AI for years than people think)
@wikiyu @baldur (I'd argue that's probably the thing their design lends itself to rather well – analyzing which tokens in which context. Certainly will never reach human translator qualities, but saying "machine translation was good before", um, no, it really really wasn't.)
@funkylab @wikiyu I'm Icelandic and I know a bit of Danish and French, and I can tell you right now that for the languages I'm familiar with, LLM translators are worse, less accurate, and far more prone to fabricating nonsense than the non-LLMs they are replacing. Maybe they're great for other languages, but they're horrible for the ones I know.
@baldur @funkylab @wikiyu The trouble is that so much language is as much about what you don't say, and the words you don't use, as what you do. And LLMs are very bad at spotting sarcasm, innuendo, slang, dialect, and specific turns of phrase. For example, there is a world of difference in (American) English between a butt dial and a booty call. Even as an Englishman, I know that.

@UkeleleEric @baldur @wikiyu don't know whether that's a good example, because the difference is clear even devoid of context, PLUS existing LLMs have no problem with that difference at all. The two phrases are only similar to a human reader. You're projecting mistakes that are easy for humans to make onto machine translation! (see attached DeepL)

I'm also not sure rule-based & Bayesian translation makes much of a difference when it comes to sarcasm. That's sentiment detection!