Thanks, I mostly agree with your approach except for one thing: eyesight feels like a "harness" that humans get to use and LLMs do not.
I'm guessing you did not pass the human testers JSON blobs to work with, and I suspect they would also score 0% without the eyesight-and-visual-cortex harness for their reasoning ability.
I'm all for testing humans and AI on a fair basis; how about we restrict testing to robots physically coming to our testing center to solve the environments via keyboard / mouse / screen like our human testers? ;-)
(This version of the benchmark would be several orders of magnitude harder wrt current capabilities...)
This counterpoint doesn't address the issue, and I would argue that it is partially bad faith.
Yes, making it to the test center is significantly harder, but in fact the humans could have solved it from their home PCs and performed exactly the same. However, if they were given the same test as the LLMs, forbidden from any input beyond JSON, they would have failed. And while buying robots to take the test is unfeasible, giving LLMs a screenshot is easy.
Without visual input for LLMs in a benchmark that humans are asked to solve visually, you are not comparing apples to apples. In fact, LLMs are given a different and significantly harder task, and in a benchmark that is so heavily weighted against the top human baseline, the benchmark starts to mean something extremely different. Essentially, if LLMs eventually match human performance on this benchmark, this will mean that they in fact exceed human performance by some unknown factor, seeing as human JSON performance is not measured.
Personally, this hugely decreased my enthusiasm for the benchmark. If your benchmark is to be a North star to AGI, labs should not be steered towards optimizing superhuman JSON parsing skills. It is much more interesting to steer them towards visual understanding, which is what will actually lead the models out into the world.
I just realized that this also means that the benchmark is in practice unverified by third parties, as not all tasks are verified to be solvable through the JSON interface. Essentially there is no guarantee that it is even possible to understand how to complete every task optimally through the JSON interface alone.
I assume you did not develop the puzzles by visualizing JSON yourselves, so there might be non-obvious information that is lost in translation to JSON. Until humans optimally solve all the puzzles without ever having seen the visual version, there is no guarantee that this is even possible to do.
I think the only viable solution here is to release a version of the benchmark with a vision only harness. Otherwise it is impossible to interpret what LLM progress on this benchmark actually means.
Oookay. I actually tried the harness myself, and there was a visual option. It is unclear to me if that is what the models are using on the official benchmark, but it probably is. This probably means that much of my critique is invalid. However, in the process of fiddling with the harness, building a live viewer to see what was happening, and playing through the agent API myself, I might have found 3-4 bugs with the default harness/API. Dunno where to post it, so of all places I am documenting the process on HN.
Bug 1: The visual mode "diff" image is always black, even when the model clicked an interactive element and something changed. Codex fixed it in one shot; the problem was in the main session loop in agent.py (line 458).
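For reference, here's a minimal sketch of the kind of diff logic involved (hypothetical; I'm not reproducing the actual agent.py code, and numpy frames are assumed). The classic way to end up with an always-black diff is snapshotting the "previous" frame after the state has already been updated, so you diff a frame against itself:

```python
import numpy as np

def diff_image(prev_frame: np.ndarray, curr_frame: np.ndarray) -> np.ndarray:
    """Show only the pixels that changed between two RGB frames.

    Caller must snapshot prev_frame BEFORE applying the action;
    diffing after the state variable is updated compares the frame
    against itself and yields an all-black image.
    """
    changed = np.any(prev_frame != curr_frame, axis=-1)  # per-pixel change mask
    out = np.zeros_like(curr_frame)                      # start from black
    out[changed] = curr_frame[changed]                   # keep only changed pixels
    return out
```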
Bug 2: Claude and ChatGPT can't see the 128x128 pixel images clearly, and cannot accurately place clicks on them either. Scaling the images up to 1024x1024 pixels gave the best results; Claude dropped off hard at 2048 for some reason. Here are the full test results when models were asked to hit specific (manually labeled) elements on "vc 33" level 1 (upper blue square, lower blue square, upper yellow rectangle, lower yellow rectangle):
Model | 128 | 256 | 512 | 1024 | 2048
claude-opus-4-6 | 1/10 | 1/10 | 9/10 | 10/10 | 0/10
gemini-3-1-pro-preview | 10/10 | 10/10 | 10/10 | 10/10 | 10/10
gpt-5.4-medium | 4/10 | 8/10 | 9/10 | 10/10 | 8/10
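For anyone reproducing this: the upscaling itself is trivial. A sketch (assumed function names, numpy frames) using nearest-neighbour upscaling, which keeps the blocky game cells crisp instead of blurring them, plus the inverse mapping for converting the model's click back to native coordinates:

```python
import numpy as np

def upscale_frame(frame: np.ndarray, factor: int) -> np.ndarray:
    # Nearest-neighbour upscale: each native pixel becomes a factor x factor
    # block, so cell edges stay sharp for the model to target.
    return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)

def map_click(x: int, y: int, factor: int) -> tuple[int, int]:
    # Convert a click in upscaled coordinates back to native pixels.
    return x // factor, y // factor
```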
Bug 3: "vc 33" level 4 is impossible to complete via the API. At least it was when I made a web-viewer to navigate the games from the API side. The "canal lock" required two clicks instead of one to transfer the "boat" when water level were equilibriated, and after that any action whatsoever would spontaneously pop the boat back to the first column, so you could never progress.
"Bug" 4: This is more of a complaint on the models behalf. A major issue is that the models never get to know where they clicked. This is truly a bit unfair since humans get a live update of the position of their cursor at no extra cost (even a preview of the square their cursor highlights in the human version), but models if models fuck up on the coordinates they often think they hit their intended targets even though they whiffed the coordinates. So if that happens they note down "I hit the blue square but I guess nothing happened", and for the rest of the run they are fucked because they conclude the element is not interactive even though they got it right on the first try. The combination of an intermediary harness layer that let the models "preview" their cursor position before the "confirmed" their action and the 1024x1024 resolution caused a major improvement in their intended action "I want to click the blue square" actually resulting in that action. However, even then unintended miss-clicks often spell the end of a run (Claude 4.6 made it the furthest, which means level 2 of the "vc 33" stages, and got stuck when it missed a button and spent too much time hitting other things)
After I tried to fix all of the above issues and set up an optimal environment to give the models a fair shake, they still mostly did very badly even when they identified the right interactive elements... except for Claude 4.6 Opus! Claude had at least one run where it made it to level 4 on "vc 33", but then got stuck because the blue squares it had to hit became too small, and it just couldn't get the cursor in the right spot even with the cursor preview functionality (the guiding pixel likely became too small for it to see clearly). Reading through its reasoning for the previous stages, though, it never fully understood the underlying logic of the game, although it was almost there.
Well, yes, and that would hand even more of an advantage to humans. My point is that designing a test around human advantages seems odd and orthogonal to measuring AGI.
The whole point of AGI is "general" intelligence, and for that intelligence to be broadly useful it needs to exist within the context of a human-centric world.
General intelligence does not require owning retinas.
Denying a proper eyesight harness is like trying to build a speech-to-text model that makes transcripts from raw air-pressure values measured 16k times per second, while the human ear gets frequency-power measurement and frequency binning for free due to its physical construction.
Does this mean blind people are not intelligent?
Blind people do function within the context of a human-centric world, though, so they would qualify as intelligent.
Yes, but they use various "harnesses" to do so (dog guides, text to speech software, assistance of other humans when needed..). Why can't AI?
Assistance of other humans? You do realise we're talking about an intelligence test, right? At that point, what are you even testing for? I'm sure you've taken exams where you couldn't bring your own notes, use Google, or get help from someone, even though real life doesn't have those constraints.
Then why deny it a harness it can also use in a human centric world?
There is no general purpose harness.
The human testers were provided with their customary inputs, as were the LLMs. I don't see the issue.
I guess it could be interesting to provide alternative versions that made available various representations of the same data. Still, I'd expect any AGI to be capable of ingesting more or less any plaintext representation interchangeably.
The issue is that ARC AGI 3 specifically forbids harnesses that humans get to use.
So what? Are you suggesting that an agent exhibiting genuine AGI will be tripped up by having to ingest json rather than rgb pixels? LLMs are largely trained on textual data so json is going to be much closer to whatever native is for them.
But by all means, give the agents access to an API that returns pixel data. However I fully expect that would reduce performance rather than increase it.
Because it is. Opus 4.6 jumps from 0.0% to 97.1% when given visual input.
That's impressive. I'm also a bit surprised - I wouldn't have expected it to be trained much at all on that sort of visual input task. I think I'd be similarly surprised to learn that a frontier model was particularly good at playing retro videogames or actuating a robot for example.
However, if it can't figure out to render the json to a visual on its own does it really qualify as AGI? I'd still say the benchmark is doing its job here. Granted it's not a perfectly even playing field in that case but I think the goal is to test for progress towards AGI as opposed to hosting a fair tournament.
> However, if it can't figure out to render the json to a visual on its own does it really qualify as AGI? I'd still say the benchmark is doing its job here.
Can you render a serialized JSON text blob to a visual with your brain only? The model can't do anything better than this: no harness means no tools at all, no way to e.g. implement a visualizer in whatever programming language and run it.
Why don't human testers receive the same JSON text blob and no visualizer? It's like giving human testers a harness (a playable visualizer) but deliberately crippling it for the model.
Huh. I thought it wasn't supposed to receive any instructions tailored to the task but I didn't understand it to be restricted from accessing truly general tools such as programming languages. To do otherwise is to require pointless hoop jumping as frontier models inevitably get retrained to play games using a json (or other arbitrary) representation at which point it will be natural for them and the real test will begin.
This is my understanding as well; I thought tools were allowed.
Source? I haven't seen anything like that for ARC-AGI performance.
Also, if it makes that big of a difference, then make a renderer for your agent that looks like the web page, have it solve them in the graphical interface, and funnel the results to the API. I guarantee you won't get better performance, because the AGI is going to have to "understand" that the raw data can be represented as a 2D matrix regardless of whether it gets a 2D matrix of pixels or a 2D matrix of enumerations in JSON. If anything, that makes it a more difficult problem for an AI system that "speaks" in tokens.
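Such a renderer really is a thin shim. A minimal sketch, assuming the API returns a JSON-encoded 2D matrix of small integers (the actual ARC-AGI-3 value-to-colour mapping is a placeholder here):

```python
import json

# Placeholder palette: the real value-to-colour mapping is an assumption.
PALETTE = {0: (0, 0, 0), 1: (0, 0, 255), 2: (255, 255, 0)}

def render_grid(blob: str) -> list[list[tuple[int, int, int]]]:
    """Turn a JSON 2D matrix of enumerations into an RGB pixel grid;
    unknown values fall back to grey."""
    grid = json.loads(blob)
    return [[PALETTE.get(cell, (128, 128, 128)) for cell in row] for row in grid]
```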
That score is in the arc technical paper [1]. It's the full benchmark score using this harness [2] (which is just open code with read, grep, bash tools).
This is already a solved benchmark. That's why the scoring is so convoluted and a self-proclaimed agent benchmark won't allow basic agent tools. ARC has always been a bit of a nothingburger of a benchmark, but this takes the cake.
[1] https://arcprize.org/media/ARC_AGI_3_Technical_Report.pdf
[2] https://blog.alexisfox.dev/arcagi3
> For example, in a variant of environment TR87, Opus 4.6 scores 0.0% with no harness and 97.1% with the Duke harness (12), yet in environment BP35, Opus 4.6 scores 0.0% under both configurations
This is with a harness that has been designed to tackle "a small set of public environments: ls20, ft09, and vc33" (of the arc-agi-3 challenge), yet it looks like it does not solve the full arc-agi-3 benchmark, just some of them.
The harness was designed with the preview, but no, it was still tested on the full public set in that environment. You can run the benchmark in different 'environments', though it's unclear what the difference between them is.
>We then tested the harnesses on the full public set (which researchers did not have access to at the time)
It may have been tested on the full set, but the score you quote is for a single game environment, not the full public set. That fact is verbatim in what you responded to and what vbarrielle quoted: it scored 97% in one game and 0% in another. The full prelude to what vbarrielle quoted, the last sentence of which you left out, was:
> We then tested the harnesses on the full public set (which researchers did not have access to at the time). We found extreme bimodal performance across the two sets, controlling for the same frontier model...
The harness only transfers to like-environments and the intelligence for those specific games is baked into the harness by the humans who coded it for this specific challenge.
The point of ARC-AGI is to test the intelligence of AI systems in novel, but simple, environments. Having a human give it more powerful tools in a harness defeats the purpose. You should go back and read the original ARC-AGI paper to see what this is about+. Are you upset about the benchmark because frontier LLM models do so poorly exhibiting the ability to generalize when the benchmarks are released?
+ https://arxiv.org/abs/1911.01547
> intelligence for those specific games is baked into the harness
This is your claim but the other commenter claims the harness consists only of generic tools. What's the reality?
I also encountered confusion about this exact issue in another subthread. I had thought that generic tooling was allowed but others believed the benchmark to be limited to ingesting the raw text directly from the API without access to any agent environment however generic it might be.
1) Pointing out which tools to use is part of the intelligence that LLMs aren't great at.
2) One of the tools is a path-finding algorithm: a big improvement/crutch over a regular LLM, which has no such capability.
You'd think if LLMs are intelligent they'd be able to determine that a path finding algorithm is necessary and have a sub agent code it up real quick. But apparently they just can't do that without humans stepping in to make it a standard tool for them.
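For context on how small a "tool" this is: a generic grid path-finder is about a dozen lines of BFS. A sketch (my own, not the harness's code; assumes a boolean walkability grid):

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path between two cells on a grid of walkable (True) cells,
    moving in the four cardinal directions. Returns None if unreachable."""
    h, w = len(grid), len(grid[0])
    prev = {start: None}          # also serves as the visited set
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:           # reconstruct path by walking back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] and (nr, nc) not in prev:
                prev[(nr, nc)] = cur
                q.append((nr, nc))
    return None
```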
Here's the paper on what they did for the Duke Harness:
https://blog.alexisfox.dev/arcagi3
>You'd think if LLMs are intelligent they'd be able to determine that a path finding algorithm is necessary and have a sub agent code it up real quick.
ARC 3 doesn't allow that so.
>Here's the paper on what they did for the Duke Harness: https://blog.alexisfox.dev/arcagi3
Yeah, and the tools are general, not 'baked into the harness by the humans who coded it for this specific challenge.'
Adding a path-finding algorithm and environment-transform tools to a supposed "AGI" sure does seem like cheating to me. The sad part is, it's a cheat that only works on environments where pathfinding is a major part. When it doesn't have those clues, it bombs on everything.
I guess you really want to love the current SOTA LLMs. It's a shame they're dumb af.
Have a great day.
>Adding a path finding algorithm and environment transform tools to a supposed "AGI", sure does seem like cheating to me.
You would need all that if you, a human, wanted any chance of solving this benchmark in the format the LLMs are given. The funny thing about this benchmark is that we don't even know how solvable it is, because the baseline was tested with radically different inputs.
>I guess you really want to love the current SOTA LLMs. It's a shame they're dumb af.
I guess you really don't want to think critically. Yeah good day lol.
Really tired of you making things up about this. The baseline and entire benchmark evaluation are clearly defined, with a statistically sound number of participants for the baseline, using the same consistent deterministic environments for evaluation. The fact that you don't like where the "human performance" line was drawn, or how the scale is derived, is not the same as the benchmark being tested with "radically different inputs". Clearly you would rather hype AI than critically advance it, so I won't waste time with someone who is clearly not posting in good faith.
Byebye now.
Humans and LLMs are not seeing the benchmark in the same format. What's made up about that? Can you solve this in the JSON format?
Look man, don't reply if you don't want to.
>The point of ARC-AGI is to test the intelligence of AI systems in novel, but simple, environments.
The point is whatever Francois wants it to be.
>Having a human give it more powerful tools in a harness defeats the purpose.
Why does it defeat the purpose? Restricting the available tools is an arbitrary constraint. The Duke harness is a few basic tools. What's the problem? In what universe would any AI agent worth its salt not have access to read, grep, and bash? If his benchmark were as great, and the difference as wide, as he claimed, then it simply wouldn't matter whether those tools were available. Francois removed access to tools because his benchmark falls apart with them. Simple as.
>You should go back and read the original ARC-AGI paper to see what this is about+.
>Are you upset about the benchmark because frontier LLM models do so poorly exhibiting the ability to generalize when the benchmarks are released?
I’m not upset about anything. I do not care about ARC, and I never have. I think it is a nothingburger of a benchmark: lots of grand claims about AGI, but very little predictive power or practical utility.
When models started climbing FrontierMath, that benchmark actually told us something useful: their mathematical capabilities were becoming materially stronger. And now state-of-the-art systems have helped with real research and even contributed to solving open problems. That is what a good benchmark is supposed to do.
ARC? It has zero utility on its own and manages to tell you nothing at the same time.
Unsaturated benchmarks matter because they help show where the state of the art actually is. The value is not "look, the score is low," but whether the benchmark tells you something real and useful about capability. ARC has always struggled on that front, but 3 has taken that to a new level of uselessness.
My sense is that a powerful enough AI would have the sense to think something like "ah, this sounds like a video game! Let me code up an interactive GUI, test it for myself, then use it to solve these puzzles..." and essentially self-harness (the way you would if you were reading a geometry problem, by drawing it out on paper).
Yeah, but that's literally above ASI, let alone AGI. The average human scores <1% on this bench, while Opus scores 97.1% when given actual vision access, which means AGI was achieved long ago.
> opus scores 97.1% when given an actual vision access
Do you have a source for this? I would be very curious to see how top models do with vision.
https://news.ycombinator.com/item?id=47532483
No, there is no source for this. Opus scores around 1%, just like all the other frontier models. It would be fairly trivial to add a renderer intermediary, and if it improved the score to 97+%... then you would get a huge cut of the $2 million prize. The assertion that Opus gets 97% if you just give it a GUI is completely bogus.