We used to have working spelling and grammar checkers. Why does everybody in tech pretend you need a whole-ass LLM to check for typos?
@baldur
And translations
And text-to-speech worked well in most cases
@wikiyu @baldur no offense, but LLMs are really, really good at translation compared to the prior state of the art. (And Google Translate was a lot more LLM-style AI for years than people realize.)
@[email protected] They are not. This is a commonly held view that, unfortunately, is ultimately chauvinistic and does not hold up to scrutiny. These Google-style translators may have achieved state-of-the-art performance on benchmarks translating between English and other dominant Latinate languages, but outside of that they are fairly poor. Furthermore, LLM use gets in the way of learning the detailed linguistic features that would allow someone to design a significantly more performant (in every sense of the word) non-LLM translator of general use. So LLM-based translators are poor in this respect as well. @[email protected] @[email protected]

@abucci @wikiyu @baldur I feel like we're arguing based on perceptions here. I certainly am, and can only vaguely remember the press echo when neural (not LLM) translators came out. So I might need to shut up here and admit that I don't have enough data to back my claims. Do you?

Is there any qualitative analysis in the literature that I could read? So far we have four people claiming things, and that's not a great discussion :)

@[email protected] I am not going to do your homework for you on Mastodon. I do teach computer science classes for pay from time to time and would be happy to consider helping you in that capacity. A fair-minded (i.e., not biased toward supporting one's prior assumptions) scan through the Association for Computational Linguistics publications isn't a bad place to start. @[email protected] @[email protected]