Machine translations are often brought up as a gotcha whenever I criticize LLMs. It's worth pointing out two things: machine translation existed decades before LLMs, and yes, machine translations are useful. However: I would never in my life read a machine-translated book. Understanding what a social media post is talking about in rough terms? Sure. Literature? Absolutely not. Hell, have you ever seen machine-translated subtitles? They're absolute garbage.
I have the impression that primarily anglophone people don't read as much translated literature, because so much good literature already exists in their language, so this issue may not be as familiar within that demographic. As someone who did not grow up anglophone, I can tell you there is a world of difference between a good and a bad translation even when both are done by humans. Machine translations are not even on that scale.
From what I've observed, people who claim that LLMs can replace artists don't understand art, people who claim that they can replace musicians don't understand music, people who claim that they can replace writers don't understand literature, and people who claim they can replace translators don't rely on translations. If I had a button that would erase LLMs from the world, even if it also took machine translations away (which is a false dichotomy anyway), I would absolutely still press it.
Technology is not inevitable. We've decided not to have asbestos in our walls, lead in our pipes, or carcinogenic chemicals in our food. (And if you're going to argue that those bans aren't universal: where would you rather live?) We could just not do LLMs. It's allowed.
@Gargron It is a technology that humanity has been seeking for a long time. At least since the 1950s, with Turing and his colleagues.

@df @Gargron

Transformers are neural networks.

LLMs are transformers wrapped in some Python scripting.

Every neural network can be accurately represented as an Excel sheet, even if it ends up having billions of cells.

Since it's just addition, multiplication, and fixed nonlinear functions, the model is fully deterministic. Same input, same output. Not intelligent.

It's Python code that does probabilistic sampling of the output. It's just a few lines of well-understood math plus a dice roll. Again, not intelligent.
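The split described above can be sketched in a few lines. This is a hypothetical toy, not a real transformer: the "model" is just a fixed weight table, but it shows the same division of labor — a deterministic forward pass, then a softmax-plus-dice-roll sampling step bolted on in plain Python.

```python
import math
import random

# Toy "model": a fixed table mapping one context word to logits over a
# tiny vocabulary. Illustrative only -- a real transformer is vastly
# larger, but its forward pass is likewise just fixed arithmetic.
VOCAB = ["cat", "sat", "mat"]
WEIGHTS = {
    "the": [2.0, 0.5, 1.0],  # logits for each word in VOCAB
    "cat": [0.1, 3.0, 0.2],
}

def forward(context):
    """Deterministic part: same input always yields the same logits."""
    return WEIGHTS[context]

def sample(logits, temperature=1.0, rng=random):
    """The 'dice roll': softmax over the logits, then a weighted draw."""
    scaled = [l / temperature for l in logits]
    peak = max(scaled)                      # subtract max for stability
    exps = [math.exp(l - peak) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for word, p in zip(VOCAB, probs):
        cum += p
        if r < cum:
            return word
    return VOCAB[-1]

# Same input, same logits -- the randomness lives only in sample().
assert forward("the") == forward("the")
print(sample(forward("the"), temperature=0.8))
```

With a seeded generator (`random.Random(0)`) even the sampling step repeats exactly, which is the point being made: the only non-determinism is a well-understood dice roll layered on top.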

@df @Gargron To be clear, “Python” is a placeholder language, it can be Rust, or it can be a GPU shader, and it changes nothing.

@patrys @df @Gargron does determinism imply non-intelligence?
If you hooked up the computer to a Geiger counter for true random noise and used that to modulate the output, would that have any bearing on its intelligence?

Or from the other side, what makes you think our brains are non-deterministic, and why would that make us more intelligent than if the exact same history and sense-data always produced the same response?

@FishFace @df @Gargron If it’s deterministic, it can be unrolled into a giant lookup table. Did we kill phone books because they were on the verge of achieving AGI?

To me, intelligence implies a lot of things, like being able to form higher-order abstractions, learn, and thus remember things (no, being passed your “memories” as part of every prompt does not count). It also implies being curious.

@patrys @df @Gargron given that the lookup table would generally be infinite, I don't even see what that would have to do with anything. What about the Geiger counter?

I don't think those things are really needed for human-like intelligence, and something like curiosity can easily be simulated by a rules-based system.

@FishFace @df @Gargron No, you got it wrong. The model itself can be unrolled into a finite lookup table. The only random part is which word you take from the few options in the resulting row.
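The unrolling argument can be made concrete with a toy sketch. The `model` function below is a made-up deterministic rule, not an actual network; the point is only that any deterministic function over a finite input space (here, 2-token contexts over a 3-word vocabulary) can be precomputed into a table that answers identically.

```python
from itertools import product

VOCAB = ["the", "cat", "sat"]

def model(ctx):
    """Deterministic toy 'model': ranks next words by a fixed rule.
    Stand-in for a frozen neural network's forward pass."""
    score = sum(len(w) for w in ctx)
    return sorted(VOCAB, key=lambda w: (score + len(w)) % 3)

# Unroll: one row per possible context. 3^2 = 9 rows here; a real LLM's
# table would be astronomically large, but still finite, because the
# context window and vocabulary are both bounded.
TABLE = {ctx: model(ctx) for ctx in product(VOCAB, repeat=2)}

# The table is now interchangeable with the model.
assert all(TABLE[ctx] == model(ctx) for ctx in TABLE)
```

Sampling would then just pick a word from the precomputed row, which is the "only random part" the post refers to.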
@patrys a computable function can generally produce infinitely many different outputs. You're still not saying why a non-deterministic part affects intelligence.
@FishFace It generates one token at a time, which makes it impossible to formulate higher-order abstractions that are not already baked into the weight matrix. I said it in another answer, not being able to learn disqualifies it as intelligence.

@patrys LLMs are intelligent only in the sense of pattern recognition; that is, they possess logical intelligence. However, some psychologists argue that there are multiple intelligences that cannot be reduced to logic and that LLMs are not capable of possessing. See the psychologist Howard Gardner.


@df @FishFace @Gargron This pattern recognition is an artifact of the training process, not something that occurs at inference time. It’s like having termites dig some tunnels in an earth mound, then removing the termites, pouring aluminum into the mound, and attributing the resulting intricate shapes to the intelligence of the mound. The patterns it carries are from human artifacts used as input for the model before its weights settled.

@patrys Undoubtedly, LLMs in this regard end up being mirrors of who we are, reflecting our biases, our prejudices, and our worldviews. That is why they are not innocuous tools and why #ethics and regulation of #AI are necessary.
@FishFace @patrys @df @Gargron

"Or from the other side, what makes you think our brains are non deterministic"

Us having free will/being non-deterministic is pretty much the base assumption we all operate on just to be able to function as humans. That of course doesn't automatically make it true, but it makes the question of why you think your brain is non-deterministic a no-brainer to answer: because we can't help but perceive ourselves that way.

@frog_reborn @FishFace @df @Gargron The very fact that you can read this and, mid-sentence, learn something that changes your perception of the world means you have a brain plasticity that no neural network possesses. It's deterministic AND rigid, because training and inference happen separately.
@patrys you're talking about differences between brains and neural networks that exist, but still not arguing the philosophical point about why that is relevant to intelligence.