When you argued that it saved you time by a factor of 10, did you actually measure that properly? I initially also had the feeling that LLMs save me time, but in the end they didn't. I roughly compared my performance to past performance by the number of stories completed, and LLMs made me slower even though I thought I was saving time...

From several months of deep work with LLMs, I think they are amazing pattern matchers, but not problem solvers. They suggest a solution pattern based on their trained weights. This can even result in real solutions, e.g. when programming Tetris or the like, but not when working on somewhat unique problems...

I am pretty confident. The last similar LVGL thing I did took me 10-12 hours, and that was with a quicker iteration loop (running locally instead of on the test hardware). Here I spent a little more than an hour, testing on real hardware, and the last 20 minutes of that was nitpicking.

Writing front-end display code and instantiating components to look right is very much playing to the model's strengths, though. A carefully written sentence plus context becomes 40 lines of detail-dense but formulaic code (something like the sketch below).

(I have also had a lot of luck asking it to make a first pass at typesetting things in TeX, for similar reasons.)
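
To give a rough idea of what "formulaic" means here, a minimal sketch of the kind of widget setup involved. This is illustrative only, not code from my project: the panel, labels, and values are made up, but the calls are the standard LVGL v8 API.

    #include "lvgl.h"

    /* Illustrative only: a made-up status panel showing the
     * boilerplate-heavy widget setup typical of LVGL UIs. */
    static void create_status_panel(void)
    {
        /* Container on the active screen */
        lv_obj_t *panel = lv_obj_create(lv_scr_act());
        lv_obj_set_size(panel, 200, 120);
        lv_obj_align(panel, LV_ALIGN_TOP_MID, 0, 10);
        lv_obj_set_style_bg_color(panel, lv_color_hex(0x1e1e2e), LV_PART_MAIN);

        /* Title label */
        lv_obj_t *title = lv_label_create(panel);
        lv_label_set_text(title, "Battery");
        lv_obj_align(title, LV_ALIGN_TOP_LEFT, 0, 0);
        lv_obj_set_style_text_color(title, lv_color_hex(0xffffff), LV_PART_MAIN);

        /* Charge bar */
        lv_obj_t *bar = lv_bar_create(panel);
        lv_obj_set_size(bar, 160, 12);
        lv_obj_align(bar, LV_ALIGN_BOTTOM_MID, 0, -8);
        lv_bar_set_range(bar, 0, 100);
        lv_bar_set_value(bar, 72, LV_ANIM_OFF);
    }

Multiply that by every screen, and it adds up fast, while each individual line is trivial.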

There was a recent study that found that LLM users in general tend to feel more productive with AI while actually being less productive.

Presumably the study this very HN discussion is responding to.

Heh, yep. Guess I sometimes forget to read the content before commenting too.