> I don't know but it seems like LLMs are just adding small incremental improvements in the polish/iterating phase.

A couple of days ago I was searching my hard drive for <something> and found a Java file I downloaded a long time ago: a long-lost project idea from days gone by that I never got around to doing anything with, though I do remember making an attempt.

So, I got the robots to work. They analyzed the code and answered a whole slew of questions about how it compares to 'modern' implementations. They converted it to C, made improvements (which they also suggested) and added functionality (which I suggested) to the original code, then wrapped it all up into a GIMP plugin. If I ever get around to installing gimp-devel (or whatever the Fedora package is) I can get the bugs shaken out and upload it to the GIMP plugin registry -- as was also suggested by the robots.
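(For anyone curious what the wrapping part involves: a GIMP 2.x plugin in C is mostly boilerplate that registers a procedure with GIMP's procedural database and then does its actual work in a run callback. A minimal sketch is below; the procedure name, menu path, and parameters are placeholders I made up, not the actual plugin.)

```c
/* Minimal GIMP 2.x plugin skeleton in C. The procedure name, menu path,
 * and metadata strings below are illustrative placeholders.
 * Typically built/installed with: gimptool-2.0 --install skeleton.c */
#include <libgimp/gimp.h>

static void query (void);
static void run   (const gchar      *name,
                   gint              nparams,
                   const GimpParam  *param,
                   gint             *nreturn_vals,
                   GimpParam       **return_vals);

/* GIMP looks for this symbol to find the plugin's entry points. */
GimpPlugInInfo PLUG_IN_INFO = { NULL, NULL, query, run };

MAIN ()

static void
query (void)
{
  static GimpParamDef args[] = {
    { GIMP_PDB_INT32,    "run-mode", "Run mode"       },
    { GIMP_PDB_IMAGE,    "image",    "Input image"    },
    { GIMP_PDB_DRAWABLE, "drawable", "Input drawable" }
  };

  /* Register the procedure with GIMP's procedural database (PDB). */
  gimp_install_procedure ("plug-in-skeleton",        /* hypothetical name */
                          "Skeleton plugin",
                          "Placeholder for the converted filter",
                          "Author", "Copyright", "Date",
                          "_Skeleton...",
                          "RGB*, GRAY*",
                          GIMP_PLUGIN,
                          G_N_ELEMENTS (args), 0,
                          args, NULL);

  gimp_plugin_menu_register ("plug-in-skeleton", "<Image>/Filters/Misc");
}

static void
run (const gchar      *name,
     gint              nparams,
     const GimpParam  *param,
     gint             *nreturn_vals,
     GimpParam       **return_vals)
{
  static GimpParam values[1];

  values[0].type          = GIMP_PDB_STATUS;
  values[0].data.d_status = GIMP_PDB_SUCCESS;
  *nreturn_vals = 1;
  *return_vals  = values;

  /* The real image-processing code (e.g. the converted Java algorithm)
   * would operate on the passed-in drawable here. */
}
```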

And this was all over the course of an hour or two, until, quite honestly, I got hungry and went foraging for food. I would never have spent the days to learn how GIMP plugins work and, most likely, would have just let the file sit on my hard drive for another decade before even looking at it again.

I have to disagree that they are only good at small, incremental improvements. There's a bunch of papers I've collected over the years that only have the 'algorithmic code', which I plan on letting the robots loose on; I either tried (and failed) to convert them into actual code myself or didn't even try at all.

You're probably picking one of the best use cases for LLMs, though.

At this point you'd think anything related to language would be a solved problem, and yet my wife is a translator who spends way too much time correcting machine-translated text.

And what about other realms outside of language, like self-driving? How long until that is generally available?