Announcing ARC-AGI-3 - A benchmark that tests if AI can explore, learn, and adapt in unfamiliar situations. Humans score 100%. Frontier AI scores 0.26%.

https://lemmy.ca/post/62420870


The ARC Prize organization designs benchmarks which are specifically crafted to demonstrate tasks that humans complete easily, but are difficult for AIs like LLMs, “Reasoning” models, and Agentic frameworks.

> ARC-AGI-3 is the first fully interactive benchmark in the ARC-AGI series. ARC-AGI-3 represents hundreds of original turn-based environments, each handcrafted by a team of human game designers. There are no instructions, no rules, and no stated goals. To succeed, an AI agent must explore each environment on its own, figure out how it works, discover what winning looks like, and carry what it learns forward across increasingly difficult levels.
>
> Previous ARC-AGI benchmarks predicted and tracked major AI breakthroughs, from reasoning models to coding agents. ARC-AGI-3 points to what’s next: the gap between AI that can follow instructions and AI that can genuinely explore, learn, and adapt in unfamiliar situations.

You can try the tasks yourself here: https://arcprize.org/arc-agi/3

Here is the current leaderboard for ARC-AGI-3, using state-of-the-art models:

- OpenAI GPT-5.4 High - 0.3% success rate at $5.2K
- Google Gemini 3.1 Pro - 0.2% success rate at $2.2K
- Anthropic Opus 4.6 Max - 0.2% success rate at $8.9K
- xAI Grok 4.20 Reasoning - 0.0% success rate at $3.8K

ARC-AGI-3 Leaderboard chart (logarithmic cost on the horizontal axis): https://lemmy.ca/pictrs/image/c7521941-7eac-46f4-98de-876bcf99220c.png

https://arcprize.org/leaderboard
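For anyone wondering what “fully interactive” means in practice, here is a minimal sketch of the kind of agent loop such a benchmark implies: the environment never states its rules or goal, and the agent only discovers the win condition by acting. Everything here is a made-up stand-in for illustration; `ToyEnv`, `reset`, and `step` are hypothetical and not the actual ARC-AGI-3 API.

```python
import random

class ToyEnv:
    """Made-up stand-in for one turn-based environment (NOT the real
    ARC-AGI-3 API): the win condition (reach cell 9) is hidden from
    the agent and can only be discovered by acting."""

    def __init__(self):
        self.pos = 0
        self.actions = ["left", "right"]

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        # Move along a line, reflected at 0; the env never explains this.
        self.pos = max(0, self.pos + (1 if action == "right" else -1))
        done = self.pos == 9  # hidden goal: the env only signals success
        return self.pos, done

def explore(env, max_turns=1000):
    """Random-exploration baseline: act, observe, and stop when the
    environment signals 'done' -- the goal is never stated up front."""
    obs = env.reset()
    visited = {obs}  # novel observations are a crude exploration signal
    for turn in range(max_turns):
        obs, done = env.step(random.choice(env.actions))
        visited.add(obs)
        if done:
            return turn + 1  # turns taken to stumble onto the win condition
    return None  # ran out of turns without discovering the goal

print("solved after", explore(ToyEnv()), "turns")
```

Real frontier agents of course do far more than random exploration, but even this baseline makes the benchmark’s point: with no stated rules or goals, all an agent has to go on is novelty and trial and error.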

Biased study. Take any average person off the street and shove this thing in their face. That 100% notion will go down fast.

ARC-AGI-3 launch event - held live and shared publicly on March 25 in San Francisco at Y Combinator HQ, featuring a fireside conversation between François Chollet (creator, ARC-AGI) and Sam Altman (CEO, OpenAI) on measuring intelligence on the path to AGI.

François Chollet is a software engineer, artificial intelligence (AI) researcher, and former Senior Staff Engineer at Google. Chollet is the creator of the Keras deep-learning library released in 2015.

They didn’t say “100% of humans can solve this benchmark”, they said “humans can solve 100% of this benchmark”.
I couldn’t get past the second level :(
feelsbadman. You need more RAM!
Guy, I found the bot!

I see by your lack of pluralization that you’ve realized there’s only one person here and everyone else is bots. However, through inference and deduction, you are therefore also a bot. I have good reason to believe I am the non-bot, though I wonder if I could know for certain…

That was a lot of effort for a typo joke…

My programming tells me I’m not a bot.
Cogito, ergo sum

I finished one of the tasks. And, I imagine I could finish at least some of the others. But, I wasn’t being paid, and it wasn’t very entertaining, so I stopped.

They should add a “global” and “friends-only” leaderboard (like the Zachtronics games, etc.) and really see the competition (at least the human competition) heat up.

Of the first task? Yikes.

“Humans score 100%. Frontier AI scores 0.26%.”

The title deals in absolutes.

Those are high scores.
🤔 So this is a visual comparison between peak performance of some humans and peak performance of current LLMs in a controlled environment?
Is this a gotcha? Not sure where you got the “visual” from, but yes it is best human performance vs best LLM performance
I don’t know why you assume there has to be a gotcha, maybe it’s the competitive background… Anyway, it’s visual because you look at it to see it. And it’s not the best human performance vs best LLM performance, it’s best controlled performance because the testing is limited to a set of parameters.
That’s what games are? I really don’t see how it is an unfair comparison to you. How would you change it?
Stress test it. Low, average, high, impairment conditions, safeguards off, order, chaos and everything in between.
I haven’t read all of their Benchmark introduction and Technical Documentation. I assume you have and didn’t find any of the tests you’re asking for?
ARC Prize - What is ARC-AGI?

The only AI benchmark that measures AGI progress.

Pretty defensive there. It’s not even a study
If it studies something, it’s a study. If you feel defensiveness, you consider aggression. If you feel bias in one way, someone can feel bias in another way. If there’s an action, there’s a reaction.

If you feel defensiveness, you consider aggression.

Aggression as in calling something biased without providing evidence?

As in assuming you are starting with an unbiased point of view.
Of course we all have our biases. But what to do with that lesson? It can be a convenient response whenever someone disagrees with us. But it can also serve as a powerful motivation to find some common ground against all odds. The universe is chaotic. Language is illogical. Yet sometimes we find stuff we can agree on. Isn’t that beautiful?

If there’s an action, there’s a reaction.

Sort of like how when people outsource all their critical thinking to AI, their ability for critical thinking atrophies?

I’m studying these comments, now I am a study
I salute your dedication to science. 🫡