Machine translations are often brought up as a gotcha whenever I criticize LLMs. It's worth pointing out two things: machine translations existed decades before LLMs, and yes, machine translations are useful. However: I would never in my life read a machine translated book. Understanding what a social media post is talking about in rough terms? Sure. Literature? Absolutely not. Hell, have you ever seen machine translated subtitles? They're absolute garbage.
I have the impression that primarily anglophone people don't read as much translated literature, because so much good literature already exists in their language, so this issue may not be as familiar within that demographic. As someone who did not grow up anglophone, I can tell you there is a world of difference between a good and a bad translation even when done by humans. Machine translations are not even on the scale.
From what I've observed, people who claim that LLMs can replace artists don't understand art, people who claim that they can replace musicians don't understand music, people who claim that they can replace writers don't understand literature, and people who claim they can replace translators don't rely on translations. If I had a button that would erase LLMs from the world but also take machine translations away with them (which is a false dichotomy anyway), I would absolutely still press it.
@Gargron But it seems that LLMs are here to stay. This time, it doesn't seem to be just a passing fad. There is a lot of investment involved.
@df @Gargron That or the people who invested find out that it's not a profitable venture, no matter how much they are trying to force the issue.
@ainmosni @df @Gargron my take is: Investors will figure out it's too expensive to be a viable business, so big AI providers will fail, especially those who try to achieve "general knowledge" AI like OpenAI.

Small models will then be the focus, along with integrating them on-device for AI assistance. The latest models, like Qwen 3.5-9b, already show promising results and performance locally.

The question is who will invest in training small models to deploy on-device, and whether those models will be open sourced. I hope they will be.

@kevin @df @Gargron small models are well and good, and hopefully they will be focused on actually useful things. I'm personally still not convinced that LLMs are really that useful at all, and they're taking the wind out of the sails of other AI avenues that have been very useful, things that we would classify as machine learning.

But if we want general models... those might just take too many resources to build, and I honestly think society will be better off with no new ones anyway, while letting stuff like ollama collect enough bitrot that it loses most of its damaging potential.

@kevin @df @Gargron Note that with useful I mean "something we couldn't have done without LLMs".
@ainmosni @df @Gargron I agree. Focusing on machine learning would be a better way of spending all that money, and I sincerely hope the LLM market crashes to make space for _real_ AI products and companies trying to solve problems.