Researchers Dropped 1,000 AIs in Minecraft and Watched a Civilization Form
Holy shit. This is the craziest article to write about one of the shittiest videos I have ever seen.
That video is glazing the fuck out of LLMs, and the creator knows jack shit about how AIs, or even computers, work. What a fucking moron.
So, like, the point of the experiment is that LLMs generate outputs based on their inputs, and those outputs are interpreted by an intermediary program that performs actions in the game. And the video is trying to pretend that this is LITERALLY a new intelligent species emerging, because you never told it to do anything other than its initial goal! Which… isn't impressive? LLMs generate outputs based on their training data; like, that's not in question. It isn't intelligence, because it is just one giant mathematics problem.
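To be concrete about what that intermediary program does, here's a minimal sketch of the usual LLM-agent loop (an illustration, not the project's actual code; query_llm and execute_in_game are made-up stand-ins for a real API call and a real game bridge):

```python
import json

def query_llm(prompt: str) -> str:
    """Stand-in for a real LLM API call (hypothetical for illustration).
    A real agent would send `prompt` to a model and get text back; here
    we return a canned response so the loop is runnable."""
    return '{"action": "mine", "target": "oak_log"}'

def execute_in_game(action: dict) -> str:
    """Stand-in for the bridge that turns a parsed action into actual
    game commands (e.g. via some Minecraft bot library)."""
    return f"performed {action.get('action', 'idle')} on {action.get('target', 'nothing')}"

def agent_step(goal: str, observation: str) -> str:
    # The LLM never "does" anything: it only emits text that looks like
    # an action. All real behavior lives in the hand-written interpreter.
    prompt = (
        f"Goal: {goal}\nObservation: {observation}\n"
        'Reply with JSON like {"action": "mine", "target": "oak_log"}.'
    )
    raw = query_llm(prompt)
    try:
        action = json.loads(raw)       # parse the text into an action
    except json.JSONDecodeError:
        action = {"action": "idle"}    # garbage text -> do nothing
    return execute_in_game(action)

print(agent_step("build a shelter", "you are in a forest"))
```

Every ounce of "behavior" in that loop was written by a person; the model only fills in the text.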
This article is a giant pile of shit.
A brain is not a giant statistics problem, and a giant statistics problem is not a brain: LLMs are not intelligent. An LLM is basically one huge math function that takes what you put into it and calculates the most probable continuation. That isn't emergent behavior. That isn't intelligence at all.
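Concretely, the giant math problem is next-token prediction: the model assigns a score (logit) to every token in its vocabulary, softmaxes the scores into probabilities, and samples one. A toy version of just that final step (in a real LLM the logits come from a function with billions of learned parameters, and the numbers below are made up, but the sampling really is this):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "mat", "."]

# Pretend these are the logits a trained model produced for the next
# token given some prompt; in a real LLM they come from a huge learned
# function of the input text.
logits = np.array([1.2, 0.3, 2.5, 0.1, -1.0])

# Softmax: turn scores into a probability distribution.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# "Generation" is sampling from that distribution, one token at a time.
next_token = rng.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```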
If I type 20*20 into a calculator and it gives me 400, is that a sign of intelligence, that the calculator can do math? I never programmed it to know what 20 or 400 were; I only made it know what digits are and what multiplication is, so it totally created that answer on its own after that!!!
When you type a sentence into an LLM and it returns an approximation of what a response sounds like, you should treat it the same way. People programmed these things to do the things that they are doing, so what behavior is fucking emergent?
Integrated Information Theory (IIT) would suggest that phi (Φ) can be used to measure the degree to which a system generates irreducible, integrated cause–effect structure. The irreducible part is exactly what you postulate: it cannot be fully captured by a mathematical model, because if it could, it would by definition be reducible to smaller parts.
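For reference, the schematic shape of Φ (this is the general form of the definition, glossing over the full IIT 3.0 machinery):

```latex
\Phi(S) \;=\; \min_{P \,\in\, \mathcal{P}(S)} D\!\left( C(S),\; C\!\left(S^{P}\right) \right)
```

Here C(S) is the cause–effect structure of the whole system, S^P is the system cut along a partition P, and D is a distance between those structures (IIT 3.0 uses an earth mover's distance). Φ = 0 means some cut loses nothing, i.e. the system is reducible; Φ > 0 means every possible cut destroys something.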
You can describe the function of the human brain mathematically, of course… For example, some low-hanging fruit might be:
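say, the textbook leaky integrate-and-fire neuron, one differential equation for a cell's membrane voltage (picked here purely as an illustration):

```latex
\tau_m \frac{dV}{dt} = -\big(V(t) - V_{\mathrm{rest}}\big) + R\,I(t),
\qquad V(t) \to V_{\mathrm{reset}} \ \text{when} \ V(t) \ge V_{\mathrm{th}}
```

That will predict spike timing just fine.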
But that's not going to model human experience. The experience isn't reducible. At best, that models something closer to the qualities of experience than the experience itself. Human rationality is derived downstream of human experience, so it's just not a fair argument to say that a tool mimicking only the downstream patterns of human experience will somehow also possess the upstream capacity for experience, or even a relatable sense of rationality at all.
I don't think we're ever going to get a deterministic explanation for human behavior, only statistical truths, unless you can somehow mathematically model the entire universe as well. Good luck, because now the endeavor sounds god-like.
I'm no academic, so apologies for the lack of substance. I mostly just get stuck in rabbit holes reading about philosophy and consciousness when I should be working.
Check out theories like IIT for some interesting ideas.
My summarized take is that modeling consciousness is akin to modeling the three-body problem or the double pendulum. Even if the system is deterministic and capable of being modeled, you'll forever be bottlenecked by the finite precision of your model, and this is a system where errors grow exponentially. For example, tiny differences in a double pendulum's initial angle (like 0.000001°) rapidly amplify over time into wildly different trajectories. Without unlimited precision it's computationally intractable, which is why I said you'd need to model the entire universe. This is deterministic chaos, and we have no reason to think human brains aren't heavily dependent on it.
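You can watch that amplification happen in a few dozen lines. A minimal sketch (the standard double-pendulum equations of motion integrated with RK4; the masses, lengths, and starting angles are arbitrary picks): two pendulums start 0.000001° apart and their trajectories tear apart within simulated seconds.

```python
import numpy as np

G, L1, L2, M1, M2 = 9.81, 1.0, 1.0, 1.0, 1.0  # gravity, arm lengths, masses

def deriv(s):
    """s = [theta1, omega1, theta2, omega2]; standard double-pendulum
    equations of motion for point masses on rigid massless arms."""
    t1, w1, t2, w2 = s
    d = t1 - t2
    den = 2 * M1 + M2 - M2 * np.cos(2 * d)
    a1 = (-G * (2 * M1 + M2) * np.sin(t1)
          - M2 * G * np.sin(t1 - 2 * t2)
          - 2 * np.sin(d) * M2 * (w2**2 * L2 + w1**2 * L1 * np.cos(d))
          ) / (L1 * den)
    a2 = (2 * np.sin(d) * (w1**2 * L1 * (M1 + M2)
                           + G * (M1 + M2) * np.cos(t1)
                           + w2**2 * L2 * M2 * np.cos(d))
          ) / (L2 * den)
    return np.array([w1, a1, w2, a2])

def rk4_step(s, dt):
    # Classic fourth-order Runge-Kutta integration step.
    k1 = deriv(s)
    k2 = deriv(s + dt / 2 * k1)
    k3 = deriv(s + dt / 2 * k2)
    k4 = deriv(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Two pendulums, identical except for 0.000001 degrees in one angle.
eps = np.radians(1e-6)
a = np.array([np.radians(120.0), 0.0, np.radians(-10.0), 0.0])
b = a + np.array([eps, 0.0, 0.0, 0.0])

dt = 0.001
for step in range(1, 30001):
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    if step % 5000 == 0:
        gap = np.degrees(abs(a[0] - b[0]))
        print(f"t = {step * dt:5.1f} s   angle gap = {gap:.6g} deg")
```

Run it and the printed gap climbs by orders of magnitude from microscopic to macroscopic. That blow-up is the finite-precision bottleneck in action: no fixed number of decimal places keeps the two trajectories together forever.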