>The goal was to test our structured-generation algorithms and their open-source counterparts, replacing the naive “does it accept this string?” with something closer to the real problem: “does it produce the right token distribution?” The experiment kept coming up in conversation, then returning to the roadmap. Last month, I spent half an hour explaining the method to Codex. A few hours later, it had produced a working first version. That’s all it took.
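(For readers wondering what "the right token distribution" means here: a correct structured-generation implementation should match the distribution you get by masking the tokens the grammar forbids and renormalizing what remains. Below is a minimal sketch of such a check; `get_constrained_distribution` is a hypothetical stand-in for whatever library is under test, not any real API.)

```python
import numpy as np

def reference_distribution(logits: np.ndarray, allowed: np.ndarray) -> np.ndarray:
    """What a correct implementation should produce: mask the tokens the
    grammar forbids, then softmax over what remains."""
    masked = np.where(allowed, logits, -np.inf)
    exp = np.exp(masked - masked[allowed].max())  # numerically stable softmax
    return exp / exp.sum()

def total_variation(p: np.ndarray, q: np.ndarray) -> float:
    """Total variation distance between two distributions over the vocabulary."""
    return 0.5 * float(np.abs(p - q).sum())

# Hypothetical harness: `get_constrained_distribution` stands in for the
# structured-generation backend under test.
def check_step(get_constrained_distribution, logits, allowed, tol=1e-6):
    p = get_constrained_distribution(logits)      # distribution under test
    q = reference_distribution(logits, allowed)   # ground truth
    return total_variation(p, q) <= tol
```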

Proving that the bottleneck was, in fact, the code. It's just that the AI writes it now.

The person who thought "the bottleneck wasn't the code" had already discussed the goal and made it coherent in their mind.

"Code as bottleneck" doesn't have to mean "I wanted this feature, but it took me many months to finally code it." It can also mean "I wanted this feature for 2 years, but the friction of sitting down to put it in code, spending 5-10 days on it, and so on, put me off."

If the code wasn't the bottleneck, they could just sit down and write it themselves. But they didn't want to go through the effort and time of coding it by hand, since they knew it wouldn't be as quick as with the LLM.

(And even when you don't have a clear final spec in mind, the exploratory code-check-discard-retry-new-design loop is also faster with an LLM, precisely because the "code" part is.)

In other words, the code was the bottleneck.

The post itself appears AI-generated, just with instructions to avoid the obvious constructions, which still makes for tedious reading.

Author here. Sorry my writing is tedious. Next time I’ll use AI to make it more readable.