This is an underappreciated point. I work across a lot of codebases and the difference in how well AI coding tools handle Rust vs JavaScript vs Python is striking — and syntax ambiguity is a big part of it.

The `type name` vs `let name: type` distinction matters more than it seems. When the grammar is unambiguous, the LLM can parse intent from a partial file without needing the full compilation context that a human expert carries in their head. Rust and Go are notably easier for LLMs to work with than C or C++ partly because the syntax encodes more structure.
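
To make that concrete, here's a minimal C++ sketch (the names are hypothetical) of the classic declaration-vs-expression ambiguity: the token shape `T * x` only resolves once you know whether `T` names a type, which is exactly the surrounding context an LLM has to reconstruct. Rust's `let x: *const T;` settles the same question locally with the `let` keyword.

```cpp
#include <iostream>

struct T {};  // `T` names a type from here on

int main() {
    // Declaration: because `T` is a type, `T * x` declares a pointer.
    T * x = nullptr;

    // Expression: the identical token shape is multiplication when the
    // left-hand identifier names a value instead of a type.
    int t = 6, y = 7;
    int product = t * y;

    std::cout << (x == nullptr) << ' ' << product << '\n';  // prints "1 42"
    return 0;
}
```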

The flip side: syntax that is too terse becomes opaque to LLMs for the same reason it becomes opaque to humans. Point-free Haskell, APL-family languages, heavy operator overloading — these rely on the reader holding a lot of context that does not exist in the immediate token window.
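
For the operator-overloading case, a small self-contained C++ example (using the real `std::filesystem` overload of `/`): the same token means division or path concatenation depending on operand types declared elsewhere, so the meaning isn't recoverable from the immediate tokens.

```cpp
// Requires C++17 (compile with -std=c++17).
#include <filesystem>
#include <iostream>

int main() {
    namespace fs = std::filesystem;

    // `/` is arithmetic division here: both operands are numbers.
    double ratio = 22.0 / 7.0;

    // `/` is path concatenation here: std::filesystem::path overloads it.
    // The operator's meaning hinges on types declared away from this line.
    fs::path bin = fs::path("usr") / "local" / "bin";

    std::cout << ratio << ' ' << bin << '\n';  // e.g. 3.14286 "usr/local/bin"
    return 0;
}
```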

I wonder if we will see new languages designed with LLM-parseability as an explicit goal, the way some languages were designed for easy compilation.

Fine-tuning is likely a bigger part of it.

I've worked on fine-tuning projects. At several model providers, for example, there's a massive bias towards fine-tuning for Python, followed by JS.