I'm making an LLM agent that can play DS games. The biggest blocker is getting it to click the right spot to move things around in space, not its reasoning ability.

ARC-AGI seems to test that as well. Every game is a rectangular grid to make it as easy as possible, yet the AIs still fail.

I'm fairly certain the way forward isn't through agents directly interfacing with UIs but through agents using scripts and other tools to interact with the interface. That's why harnesses are so critical to performance on tasks like this.
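To make the harness idea concrete, here's a minimal sketch of what I mean: instead of asking the model to guess raw pixel coordinates, the agent writes (or is given) a small tool that maps grid cells to click targets. All names here (GRID_ORIGIN, CELL_SIZE, click_cell) are illustrative, not any real API.

```python
# Hypothetical grid-click harness. The agent reasons in grid coordinates
# ("5 columns right, 3 rows down") and the harness handles the pixels.

GRID_ORIGIN = (40, 60)   # top-left pixel of the game grid (assumed values)
CELL_SIZE = 32           # pixels per cell (assumed value)

def cell_to_pixel(col: int, row: int) -> tuple[int, int]:
    """Map a (col, row) grid cell to the pixel at its center."""
    x = GRID_ORIGIN[0] + col * CELL_SIZE + CELL_SIZE // 2
    y = GRID_ORIGIN[1] + row * CELL_SIZE + CELL_SIZE // 2
    return (x, y)

def click_cell(col: int, row: int) -> tuple[int, int]:
    """The tool the agent actually calls instead of guessing pixels."""
    x, y = cell_to_pixel(col, row)
    # In a real harness this would dispatch a mouse event
    # (e.g. via an input-automation library); here we just
    # return the computed click target.
    return (x, y)
```

The point is that the spatial-precision problem becomes a one-time calibration problem (finding the origin and cell size), which is exactly the kind of thing a script is good at and a token-by-token model is bad at.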

I would like a version of ARC-AGI that tests the agent's ability to dynamically create these harnesses.

The whole point of ARC-AGI 3 is that if models are AGI, then they should be able to solve the same tasks humans do, given the same information — but they can't. Allowing scripts and harnesses and whatnot completely defeats the purpose.

But humans aren't just a "reasoning component"; our nervous system (and body in general) provides us with significant capabilities that would count as a "harness" for our frontal lobe. It just seems silly to me to try to solve all of this in a single leap. But I guess they feel burned by how relatively quickly ARC-AGI 2 was solved.

Humans haven't interacted with computers by typing in "5 columns right, 3 rows down" since before I was born. They use a mouse and keyboard.

Meanwhile, AI agents are expected to guess pixel coordinates, and they fail every time.