We're excited to release v0.3.0 of the Factorio Learning Environment (FLE), an open-source environment for evaluating AI agents on long-horizon planning, spatial reasoning, and automation tasks.

== What is FLE? ==

FLE uses the game Factorio to test whether AI can handle complex, open-ended engineering challenges. Agents write Python code to build automated factories, progressing from simple resource extraction (~30 units/min) to sophisticated production chains (millions of units/sec).
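
To make "agents write Python code" concrete, here is an illustrative sketch of the kind of program an agent might emit. The API below (`find_ore`, `place_drill`, `connect_belt`) is hypothetical shorthand stubbed out for this example, not the real FLE tool names.

```python
# Illustrative sketch only: the functions below are stand-in stubs,
# not the actual FLE agent API.

def find_ore(kind):
    """Stub: return the position of the nearest ore patch."""
    return (10, 4) if kind == "iron-ore" else (20, 7)

placed = []

def place_drill(position):
    """Stub: record a burner mining drill placed at the position."""
    placed.append(("burner-mining-drill", position))
    return position

def connect_belt(src, dst):
    """Stub: record a transport belt run between two positions."""
    placed.append(("transport-belt", src, dst))

# A minimal "factory": mine iron and belt it toward a furnace at (0, 0).
ore = find_ore("iron-ore")
drill = place_drill(ore)
connect_belt(drill, (0, 0))

print(len(placed))  # 2 entities laid down
```

The point is the workflow, not the names: the agent inspects the world, places entities, and wires them together, all through ordinary Python.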

== What's new in 0.3.0 ==

- Headless scaling: No longer needs the game client, enabling massive parallelization!

- OpenAI Gym compatibility: Standard interface for RL research

- Claude Code integration: We're livestreaming Claude playing Factorio [on Twitch](http://twitch.tv/playsfactorio)

- Better tooling and SDK: 1-line CLI commands to run evaluations (with W&B logging)
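
For the Gym compatibility point above, the interaction follows the standard Gym-style `reset`/`step` loop. The environment below is a tiny stub standing in for the real FLE env (the actual registration name, observation format, and reward shaping are not shown here); it just demonstrates the interface shape RL code can target.

```python
# Sketch of the Gym-style loop; StubFactorioEnv is a stand-in for
# the real FLE environment, whose observations/rewards differ.

class StubFactorioEnv:
    """Minimal Gym-shaped environment: reset() and step() only."""

    def reset(self, seed=None):
        self.throughput = 0
        return {"throughput": self.throughput}, {}  # observation, info

    def step(self, action):
        # In FLE an action is Python code the agent wants to run;
        # here we just pretend each action adds one drill's output.
        self.throughput += 30  # ~30 units/min per early-game drill
        obs = {"throughput": self.throughput}
        reward = 30.0
        terminated = self.throughput >= 90
        # Gym step contract: obs, reward, terminated, truncated, info
        return obs, reward, terminated, False, {}

env = StubFactorioEnv()
obs, info = env.reset(seed=0)
total, done = 0.0, False
while not done:
    action = "place_entity(...)"  # an agent policy would emit code here
    obs, reward, terminated, truncated, info = env.step(action)
    total += reward
    done = terminated or truncated

print(total)  # 90.0
```

Anything that speaks this five-tuple `step` contract (PPO loops, evaluation harnesses, wrappers) can drive the environment without FLE-specific glue.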

== Key findings ==

We evaluated frontier models (Claude Opus 4.1, GPT-5, Gemini 2.5 Pro, Grok 4) on 24 production automation tasks of increasing complexity.

Even the best models struggle:

- Most models still rely on semi-manual strategies rather than true automation

- Agents rarely define helper functions or abstractions, limiting their ability to scale

- Error recovery remains difficult – agents often get stuck in repetitive failure loops

The performance gap between models on FLE correlates more closely with real-world task benchmarks (like GDPVal) than with traditional coding/reasoning evals.

== Why this matters ==

Unlike benchmarks based on exams that saturate quickly, Factorio's exponential complexity scaling means there's effectively no performance ceiling. The skills needed - system debugging, constraint satisfaction, logistics optimization - transfer directly to real challenges.

== Try it yourself ==

>>> uv add factorio-learning-environment            # core package

>>> uv add "factorio-learning-environment[eval]"    # extras for running evaluations

>>> fle cluster start                               # launch the headless Factorio server cluster

>>> fle eval --config configs/gym_run_config.json   # run an evaluation

We're looking for researchers, engineers, and modders interested in pushing the boundaries of agent capabilities. Join our Discord if you want to contribute. We look forward to meeting you and seeing what you can build!

-- FLE Team

haha, I am sure somewhere, some PhD student told their supervisor: “No, seriously, I have to play 600 hours of Factorio… for science.”

This is dope. When is it appropriate to start enabling multiple agents for one player to see if they can collaborate and divide up roles?

Loving the ‘Claude plays’ integration. Great work

Thank you!

Related. Others?

Multi-Agent Coordination in Factorio: FLE v0.2.0 - https://news.ycombinator.com/item?id=43926829 - May 2025 (5 comments)

Show HN: Factorio Learning Environment – Agents Build Factories - https://news.ycombinator.com/item?id=43331582 - March 2025 (209 comments)

This is our earlier work. Since May we've made it really easy for the community to build their own agents to play the game: you can now hook up your terminal to get Claude Code to play the game.

That's great!

(just for clarity: links to past threads in no way imply that the new post isn't welcome! They're just because some readers enjoy poking back through past related discussions as well)

Is there going to be some kind of plugin support for other games?

I'd love to see Claude play Age of Empires.

Claude plays command and conquer.

I already know there's a huge AI StarCraft 2 scene, but I don't think those are LLM AIs.

I am really keen on plugging into Age of Empires 2 - although practically I think we need a couple of years of improvements before LLMs would be smart/fast enough to react to the game in realtime. Currently they can't react fast enough - although specially trained networks could be viable.

OpenAI tried to create a Dota 2 AI with reinforcement learning. Some of its best people worked on that.

They had to dumb down the game and keep the bot playing on the same patch, and even then it could not win against a professional team.

I'm pretty sure that AI did take at least a few games off of the pros. IIRC the professional team only had one win, the last match.

I do agree that the game was terribly dumbed down to make it tractable. I keep hoping they'll revisit Dota 2 to see if they can find meaningful improvements and tackle the full game.

The last time they deployed it... it beat the current world champions.

Yes, the OpenAI Five bots won a best of three in their custom format, back in 2019. The bots won the first two games, then a third game was played which the humans won, which is the point I was trying to make (I'm not the GP).

Unless you know of another time the bots were deployed formally against a pro team more recently, which I'd love to hear about.

[0] https://web.archive.org/web/20190413210513/https://venturebe...

Are biters and cliffs disabled?

Biters are disabled, but cliffs are not

Live-stream is epic
