Example I chose at random:

> Convert a hexadecimal number, represented as a string (e.g. "10af8c"), to its decimal equivalent using first principles (i.e. no, you may not use built-in or external libraries to accomplish the conversion).

So it's fairly synthetic. It's also the sort of thing LLMs should be great at, since I'm sure there's tons of training data on it online.
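For reference, the "first principles" solution is just positional notation: each hex digit contributes its value times a power of 16. A minimal sketch in Python (the prompt doesn't specify a language, so this is one possible take), avoiding the built-in `int(s, 16)`:

```python
def hex_to_decimal(hex_string):
    """Convert a hex string (e.g. "10af8c") to an int without int(s, 16)."""
    digits = "0123456789abcdef"
    value = 0
    for ch in hex_string.lower():
        if ch not in digits:
            raise ValueError(f"invalid hex digit: {ch!r}")
        # Shift the accumulated value one hex place left, then add this digit.
        value = value * 16 + digits.index(ch)
    return value

print(hex_to_decimal("10af8c"))  # 1093516
```

The left-to-right accumulation (`value * 16 + digit`) is equivalent to summing `digit * 16**position` from the right, but avoids tracking positions explicitly.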

Yeah, but programming isn't about solving problems that have already been solved millions of times. I mean, web dev kind of is, but that's not the point. Once a problem is solved, it's just a matter of implementing the solution, and anyone can do that given the proper instructions (even without understanding how or why it solves the problem).

I've formalized a lot of stuff I didn't understand just by copying the formulas from Wikipedia.

As long as LLMs are not capable of proper reasoning, they will remain a gimmick in the context of programming.

They should really just focus on refactoring benchmarks across many languages. If an AI can refactor my complex code properly without changing its semantics, that's good enough for me. But unfortunately that requires such a high-level understanding of the codebase that, with current tech, it's just impossible to get a half-decent result in any real-world scenario.