AI saves you an insane amount of typing, but adds an insane amount of reading, which is strictly harder than typing (at least for me).

Hmm, that is interesting; reading is harder? You have to read a lot of code anyway, right? From team members, examples, third-party code and libraries? Over decades of programming I have at least become very proficient at rapidly spotting 'fishy' code and generally understanding code written by others. AI coding is nice because, for me, it is the opposite of what you describe: reading the code it generates is much faster than writing it myself, even though I am fast at writing it; just not that fast.

I have said it here before: I would love to see some videos from HNers who complain that AI gives them crap, as we are getting amazing results on large and complex projects. We treat AI code the same as human code: we read it and recommend or implement fixes.

> Hmm, that is interesting; reading is harder?

Much, much harder. Sure, you can skim large volumes of code very quickly. But the type of close reading that is required to spot logic bugs in the small is quite taxing - which is the reason that we generally don't expect code review processes to catch trivial errors, and instead invest in testing.
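As a toy illustration of the kind of thing close reading misses (entirely hypothetical, not from the thread): a one-character slip in a comparison reads fine at a skim, while a trivial test exposes it immediately:

```python
# Hypothetical example: a one-character logic bug that skims as correct
# in review but is caught at once by a test.

def clamp(value, low, high):
    """Clamp value into [low, high]."""
    if value < low:
        return low
    if value > low:   # BUG: should compare against `high`, not `low`
        return high
    return value

def clamp_fixed(value, low, high):
    """Correct version: compare the upper bound against `high`."""
    return max(low, min(value, high))

assert clamp_fixed(5, 0, 10) == 5
assert clamp(5, 0, 10) == 10  # wrong result; review skimmed right past it
```

This is why teams tend to lean on tests for trivial correctness and reserve review effort for design and intent.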

But we are not talking about large volumes of code here; we are talking about a loop where the LLM generates something, you check it and close-read it to spot logic bugs, and then either fix it yourself, ask the LLM, or approve it. It is very puzzling to me how this is more work/taxing than writing it yourself, except in very specific cases:

Examples from everyday reality at my company: writing 1000s of lines of react frontend code is all LLM (in very little time), and reviews catch all the issues, while on the database implementation we are working on we sometimes spend an hour on a few lines and the LLM's suggestions never help. Reviewing such a small piece of code has no use, as it is the result of testing a LOT of scenarios to get the most performance out in the real world (across different environments/settings). However, almost everyone in the world is working on (something similar to) the former, not the latter, so...

> writing 1000s of lines of react frontend code

Maybe we just located the actual problem in this scenario.

Shame we cannot combine the two threads we are talking about, but our company/client structure does not allow us to do this differently. Briefly: our clients have existing systems with different frontend tech; they are all large corps with many external and internal devs who have built some 'framework' on top of whatever frontend they are using, so we cannot abstract it into a library to reuse across clients. I would if I could. And this is actually not a problem (other than being a waste, which I agree it is), as we have never delivered more, for happier clients, in our roughly 25 years of existence than in 2024, because of that. Clients see the frontend, and being able to over-deliver there is excellent.

You should use testing and a debugger for this. Don’t just read code, run it and step through it and observe code as it mutates state.
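In practice that means something like `python -m pdb script.py`, stepping with `n` and inspecting with `p`. As a runnable sketch of the same idea (names hypothetical), a trace function can record locals at every line, which is roughly what stepping in a debugger shows you, instead of simulating the loop in your head:

```python
import sys

def running_total(xs):
    total = 0
    for x in xs:
        total += x
    return total

# Record a snapshot of local variables at each line executed in
# running_total -- i.e. watch the state mutate instead of imagining it.
states = []

def tracer(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "running_total":
        states.append(dict(frame.f_locals))
    return tracer

sys.settrace(tracer)
result = running_total([1, 2, 3])
sys.settrace(None)

print(result)  # 6
# The intermediate state total == 3 (after adding 1 and 2) was observed:
print(any(s.get("total") == 3 for s in states))  # True
```

The interactive debugger gives you the same visibility on demand, without writing a tracer.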

I think this is great advice for folks who work on software that is well enough contained that you can run the entire thing on your dev machine, and that happens to be written in the same language/runtime throughout.

Unfortunately I've made some career choices that mean I've very rarely been in that position: weird mobile hardware dependencies and/or massive clouds of microservices both render this technique pretty complicated to employ in practice.

Yeah, it's not really career choices. The code should be properly structured so that it can hit multiple automated test targets.

It’s a symptom less of the career choice and more of poor coding practices.

Oh, we have automated tests out the wazoo. Mostly unit tests, or single-service tests with mocked dependencies.

Due to unfortunate constraints of operating in the real world, one can only run integration tests in-situ, as it were (on real hardware, with real dependencies).
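For the unit-test half of that split, a minimal sketch (all names hypothetical) of how a hardware dependency gets mocked so the surrounding logic is testable on a dev machine, while the integration test against the real device stays in-situ:

```python
from unittest import mock

def read_sensor():
    # Hypothetical hardware call: only works on the real device.
    raise RuntimeError("no hardware attached")

def average_reading(samples=3):
    # Pure logic around the hardware call; this is what we unit-test.
    return sum(read_sensor() for _ in range(samples)) / samples

# Unit test on a dev machine: patch the hardware call with canned values.
with mock.patch(f"{__name__}.read_sensor", side_effect=[10, 20, 30]):
    result = average_reading()

assert result == 20.0
```

The unmocked path (real `read_sensor`, real timing, real failure modes) is exactly the part that can only be exercised on the device itself.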

When you type the code, you definitely think about it, deepening your mental model of the problem, stopping and going back and changing things.

Reading is massively passive, and in fact much more mentally tiring if the whole read is done in detective mode: 'now where the f*ck are the hidden issues this time?' Sure, if your codebase is 90% massive boilerplate then I can see quickly generated code saving a lot of time, but such scenarios were usually easy to tackle before LLMs came along. Or at least the ones I've encountered in the past few decades were.

Do you like debugging by just tracing the code with your eyes, or by actually working on it with data and test code? I've never seen the former used effectively, regardless of seniority. But I have seen, in the past few months, wild claims about the magic of LLMs that were mostly unreproducible by others; when folks were asked for details, they went silent.

It depends, of course, on the complexity of the area, but... reading someone's code feels to me a bit like being handed a single 2D photo of a machine from one projection and having to piece together a 3D model of it in my head, then figuring out whether the machine will work.

When I write code, the hard part is already done: the mental model behind the program is already in my head and I simply dump it to the keyboard. (At least for me, typing speed has never been the limiting factor.)

But when I read code, I have to reassemble the mental model "behind" it in my head from the output artifact of someone else's thought process.

Of course one needs to read the code of co-workers and libraries, but it is more draining, at least for me. Skimming it is fast, but reading it thoroughly enough to find bugs requires building the full mental model of the code, which takes more mental effort.

There is a huge difference, though, in how I read code from trusted, experienced coworkers versus code from juniors. AI falls into the latter category.

(AI is still saving me a lot of time. Just saying I agree a lot that writing is easier than reading still.)

Running code in your head is another issue that AI won't solve (yet). Different people/scientists have worked on this, the most famous being Bret Victor, but also Jonathan Edwards [0] and Chris Granger (Light Table). I find the example in [0] the best: you sit there with your very logically weak brain trying to work out wtf this code will do, while there is a very powerful computer next to you that could just tell you. But doesn't. And yet we are mostly restricted to thinking the code through to at least some extent before we can see it in action; the same goes for the AI.

[0] https://vimeo.com/140738254

Don’t run code in your head. Run it in reality and step through it with a debugger.

You mean like a blueprint of a machine? Because that is exactly how machines are usually presented in official documentation. To me, the skill of understanding how "2D/code" translates to "3D/program execution" is exactly what sets amateurs apart from pros. That said, I consider myself an amateur in code and a professional in mechanical design.

"In the small", it's easy to read code. This code computes this value, and writes it there, etc. The harder part is answering why it does what it does, which is harder for code someone else wrote. I think it is worthwhile expending this effort for code review, design review, or understanding a library. Not for code that I allegedly wrote. Especially weeks removed, loading code I wrote into "working memory" to fix issues or add features is much much easier than code I didn't write.

> The harder part is answering why it does what it does, which is harder for code someone else wrote.

That's a vital part of writing software though.

True. I will save effort by only expending it when needed (when I need to review my coworkers' code, legacy code, or libraries).

Here is a chat transcript from today, I don't know if it'll be interesting to you. You can't see the canvas it's building the code in: https://chatgpt.com/share/67a07afe-3e10-8004-a5ea-cc78676fb6...

Yes, I have to read what it writes, and towards the end it gets slow and starts making dumb mistakes (there's some magic bad length at which it reliably starts to fumble), but I feel like I got the advantages of pairing out of it without actually needing to sit next to another human? I'll finish the script off myself and review it.

I don't know if I've saved actual _time_ here, but I've definitely saved some mental effort on a menial script I didn't actually want to write, which I can now spend on some of the considerably more difficult problems I'll need to solve later today. I wouldn't let it near anything where I didn't understand what every single line of code it wrote was doing, because it does make odd choices, but I'll probably use it again to do something tomorrow. If it needs to be part of a bigger codebase, I'll give it the type defs from elsewhere in the codebase to start with, or tell it that it can assume a certain function exists.