I directionally disagree with this:

> It's common for engineers to end up working on projects which they don't have an accurate mental model of. Projects built by people who have long since left the company for pastures new. It's equally common for developers to work in environments where little value is placed on understanding systems, but a lot of value is placed on quickly delivering changes that mostly work. In this context, I think that AI tools have more of an advantage. They can ingest the unfamiliar codebase faster than any human can, and can often generate changes that will essentially work.

Reason: you cannot evaluate the work accurately if you have no mental model. If there's a bug given the system's unwritten assumptions, you may not catch it.

Having said that, it also depends on how important it is to be writing bug-free code in the given domain, I guess.

I like AI particularly for greenfield stuff and one-off scripts, as it lets you go faster there. Basically you build up the mental model as you're coding with the AI.

Not sure about whether this breaks down at a certain codebase size though.

Just anecdotally - I think your reason for disagreeing is a valid statement, but not a valid counterpoint to the argument being made.

So

> Reason: you cannot evaluate the work accurately if you have no mental model. If there's a bug given the system's unwritten assumptions, you may not catch it.

This is completely correct. It's a very fair statement. The problem is that a developer coming into a large legacy project is in this spot regardless of the existence of AI.

I've found that asking AI tools to generate a changeset in this case is actually a pretty solid way of starting to learn the mental model.

I want to see where it tries to make changes, what files it wants to touch, what libraries and patterns it uses, etc.

It's a poor man's proxy for having a subject matter expert in the code give you pointers. But it doesn't take anyone else's time, and as long as you're not just trying to dump output into a PR, it can actually be a pretty good resource.

The key is not letting it dump out a lot of code; instead, use it for directional signaling.

e.g. prompts like "Which files should I edit to implement a feature which does [detailed description of feature]?" or "Where is [specific functionality] implemented in this codebase?" have been real timesavers for me.

The actual code generation has probably been a net time loss.

> I've found that asking AI tools to generate a changeset in this case is actually a pretty solid way of starting to learn the mental model.

This. Leveraging the AI to start to develop the mental model is an advantage. But, using the AI is a non-trivial skill set that needs to be learned. Skepticism of what it's saying is important. AI can be really useful just like a 747 can be useful, but you don't want someone picked off the street at random flying it.

> This. Leveraging the AI to start to develop the mental model is an advantage

Is there any evidence that AI helps you build the mental model of an unfamiliar codebase more quickly?

In my experience, trying to use AI for this often leads me into the weeds.

Yeah, fair points. Particularly for larger codebases, I could see this being a huge time saver.