> Yes, TDD on production code is nice in theory, but it doesn't work in my case...

Parent said something more along the lines of "they don't work in every case, and trying to force it in every case is misguided".

I agree that too big is more common than too small with respect to PR size, but you aren't putting forward much of an argument against the parent's "there are no absolutes" argument by strawmanning them.

I think "doesn't work in every case" is true for basically every rule of course. But 99% of people in the industry are not qualified to make that call because they will always choose "not" out of laziness rather than because it actually wasn't a good idea

Have we stopped celebrating laziness as a virtue in software development? Discipline doesn't and will never scale, and the pressures of business mean that processes that put up walls to shipping will always crumble.

Real example: we do PR reviews because they're required for our audit, and I'm of the opinion that they're mostly theater. It's vanishingly rare that someone actually wants a review rather than just hitting the approve button, and they'll call it out specifically if they do. Cool. This means you can't count on code review to catch problems; discipline doesn't scale after all. So instead we rely on a suite of end-to-end tests that are developed independently by our QA team.

Laziness is a virtue when it leads to automation, not when it leads to shoddy work.

Give me one example then. One is all it takes to disprove a rule.

I'm fairly sure that I could explain how to break up any long PR in a sensible way. Parent thinks it couldn't be done, and so do you - what is an example?

The only exception I can think of is something where 99.9% of the changes are autogenerated (where I wouldn't really be reading it carefully anyway, so the length is immaterial...).

> I'm fairly sure that I could explain how to break up any long PR in a sensible way. Parent thinks it couldn't be done, and so do you - what is an example?

Not couldn't - but shouldn't, such as when there's tight coupling across many files/modules. As an example, changing the CSS classes and rules affecting 20+ components to follow updated branding should be in one big PR[1] for most branching strategies.

Sometimes it's easier to split this into smaller chunks and front-load reviews for PRs into the feature branch, and then merge the big change with no further reviews, which may go against some ham-fisted rule about merging to main. Knowing when to break rules and why, ownership, and caring for the spirit of the law and not just the letter are what separates mid-levels from seniors.

1. Or changeset, if your version control system allows stacking.

You can and should break that up because I'm probably going to want to see screenshots to ensure that the branding changes make sense in context and everything looks consistent.

How would you do this? You'd either:

1. Create N pull requests, then merge all of them together into a big PR that gets merged into mainline at once, or
2. Do the same thing but with a bit of octopus merging, since git merge can take multiple branches as arguments.

Since most source control strategies are locked down, this isn't usually something that I can tell my juniors to do.

The point of breaking things down like this is to minimize reviewer context. With bigger PRs there's a human tendency to try and hold the whole thing in your head at once, even if parts of the pull request are independent of the others.

> The point of breaking things down like this is to minimize reviewer context.

This principle is much more important than some rule that says "Merges to main should not be more than 150 lines long". Sticklers for hard-and-fast rules usually haven't gained the experience to know that adhering to fundamental principles will occasionally direct you to break the rules.

> Merges to main should not be more than 150 lines long

This can be done by allowing a flag in the commit message that bypasses the 150-line rule (or whatever the example is) in the CI that enforces it. Then the reviewers and the submitter can agree on whether or not it makes sense to bypass the rule for this specific case.
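To make that concrete, here's a rough sketch of such a CI check (the line budget, the `[size-check: skip]` marker, and the base branch are all made up for illustration; use whatever your team agrees on):

```python
# Hypothetical CI step: fail oversized diffs unless the commit message
# carries an agreed-upon override marker.
import subprocess
import sys

LINE_BUDGET = 150                        # the "150 lines" rule from the thread
OVERRIDE_MARKER = "[size-check: skip]"   # invented marker; pick your own

def changed_lines(base_ref: str = "origin/main") -> int:
    # `git diff --numstat` prints "added<TAB>deleted<TAB>path" per file.
    out = subprocess.run(
        ["git", "diff", "--numstat", base_ref, "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added != "-":                 # binary files show "-"
            total += int(added) + int(deleted)
    return total

def commit_message() -> str:
    return subprocess.run(
        ["git", "log", "-1", "--format=%B"],
        capture_output=True, text=True, check=True,
    ).stdout

if __name__ == "__main__":
    lines = changed_lines()
    if lines > LINE_BUDGET and OVERRIDE_MARKER not in commit_message():
        print(f"Diff is {lines} lines (limit {LINE_BUDGET}); add "
              f"'{OVERRIDE_MARKER}' to the commit message if reviewers agree.")
        sys.exit(1)
    print(f"Size check passed ({lines} lines changed).")
```

The useful property is that the override is visible in history, so "we agreed to skip the rule here" leaves a paper trail.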

In many cases like this, it's okay to override a rule if the people in charge of keeping the codebase healthy agree it's a special case.

Minimizing reviewer context is one thing a PR can try to do, but it's not some universal, most-important metric that every PR needs to optimize for. In fact, minimizing reviewer context is very often in direct tension with making changes that are holistic and coherent.

Code review is meant to take time.

> Not couldn't - but shouldn't, such as when there's tight coupling across many files/modules.

No, this is a pretty classic example of where you can break up the work by first refactoring out the tight coupling in one PR before making the actual (now simpler/smaller) change in a second PR.
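A minimal illustration of that split, using the earlier branding example with invented names (the real change would be CSS, but the shape is the same):

```python
# State before: many components each hard-code the brand values, so a
# rebrand means touching 20+ call sites at once (the tight coupling).
def header_style_before() -> dict:
    return {"color": "#0a6cff", "font": "Inter"}

def footer_style_before() -> dict:
    return {"color": "#0a6cff", "font": "Inter"}

# PR 1 (pure refactor, no visible change): introduce one source of truth
# and point every component at it. Big, but mechanical to review.
BRAND = {"color": "#0a6cff", "font": "Inter"}

def header_style() -> dict:
    return dict(BRAND)

def footer_style() -> dict:
    return dict(BRAND)

# PR 2 (the actual rebrand, now tiny): only the shared values change.
# BRAND = {"color": "#ff4f00", "font": "Inter"}
```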

I agreed with you initially.

> I'm fairly sure that I could explain how to break up any long PR in a sensible way. Parent thinks it couldn't be done, and so do you - what is an example?

To me, when I meet experts in any field, the quality that stands out isn't that they do everything to an expert level; it's that they get everything done as they said they would. Sometimes that means big PRs, because that's the environment created, and the expert finds the way to get the job done.

I'm not doubting you _could_ break up any PR into a shorter one. But that's kind of the point of an expert: they recognise what makes sense to do in reality, rather than just doing something because it's best practice and expecting everyone else to do the same.

They ultimately get the thing done how they said they would.

In my experience, this one is the correct one. Make a commitment, keep the commitment, stay responsible for the commitment afterwards.

This whole chain is like arguing about how tidy your desk should be. Some people like it fastidious to the nth degree. Some people prefer a little mess.

In neither case does that preference really matter much compared to all the other things a real job entails.

> I'm not doubting you _could_ break up any PR into a shorter one. But that's kind of the point of an expert: they recognise what makes sense to do in reality

I have seen plenty of huge PRs which were more trouble than they were worth to break up after discovery. At some point it becomes like unbaking a cake. It's a trade-off.

I've just never thought, when I saw any of them, that there wasn't a more practical way to get there with a bunch of smaller PRs.

Unlike dealing with an already existing large PR, this isn't really a trade-off thing - there are basically no circumstances in which it is preferable to review one 1000-line code change instead of four self-contained 200-line changes.

> they get everything done as they said they would

This. It saves everyone's time.

Let's say you have an API bug where you are allowing clients to ask for data with a very expensive query, and you want to disallow that behavior. I think it makes more sense to change both the API spec (remove the endpoint) and the backend (disallow queries of that kind in the future) in one go. That way you can note in the one PR exactly why both changes are made, referencing whatever bug/postmortem. Making two PRs that separately look like fixes to the issue can be confusing later, and doesn't really buy you any clarity in the PR itself.

Is it possible to break them up? Sure. Is it better to do so? I don't think so.

Also, for clarity, neither I nor the OP ever said it couldn't be done.

A widely used class has a bad API. You refactor it to make a cleaner API, but your change isn't backwards compatible. Your options are:

- Change everything all at once. This creates a large PR.
- Split it up into multiple small PRs. Now your individual PRs don't compile, and make less sense on their own.
- Create a new class, and then put up multiple PRs that transition code to use the new class, and then finally a PR to remove the old class. This is more work for both the author and the reviewer, and the individual PRs are harder to understand on their own.
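For the third option, here's a minimal sketch of the intermediate state (the class names and the delegate-plus-deprecation-warning pattern are my assumptions, not something from the thread):

```python
import warnings

# New class with the cleaner API (hypothetical example).
class ConfigStoreV2:
    def __init__(self) -> None:
        self._values: dict = {}

    def get(self, key: str, default=None):
        return self._values.get(key, default)

    def set(self, key: str, value) -> None:
        self._values[key] = value

# Old class kept temporarily: it delegates to the new one and warns, so
# follow-up PRs can migrate a handful of call sites at a time, and the
# final PR simply deletes this class.
class ConfigStore:
    def __init__(self) -> None:
        self._impl = ConfigStoreV2()

    def lookup(self, key: str):          # the old, awkward API
        warnings.warn("ConfigStore is deprecated; use ConfigStoreV2",
                      DeprecationWarning, stacklevel=2)
        return self._impl.get(key)

    def store(self, key: str, value) -> None:
        warnings.warn("ConfigStore is deprecated; use ConfigStoreV2",
                      DeprecationWarning, stacklevel=2)
        self._impl.set(key, value)
```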

This is interesting. I believe one way to deal with such breaking changes is to have multiple PRs, where the breaking change in each is hidden (protected) behind a feature flag and tested by unit tests. Once all the PRs are committed, end-to-end testing can be done by enabling the flag. Any problems in production can be quickly reverted by disabling the flag. Eventually, a final PR removes the now-useless flag.
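A bare-bones sketch of that flag pattern (the flag name and the environment-variable lookup are placeholders; real projects typically read flags from a flag service or config system):

```python
import os

# Hypothetical flag; in practice this would come from a flag service or
# config system rather than an environment variable.
def use_new_client_api() -> bool:
    return os.environ.get("USE_NEW_CLIENT_API", "false").lower() == "true"

def fetch_user(user_id: int) -> dict:
    # Each PR lands its piece of the new code path behind the flag; the
    # old path keeps running in production until the flag is flipped.
    if use_new_client_api():
        return _fetch_user_v2(user_id)   # new, non-backwards-compatible path
    return _fetch_user_v1(user_id)       # existing behavior, unchanged

def _fetch_user_v1(user_id: int) -> dict:
    return {"id": user_id, "api": "v1"}

def _fetch_user_v2(user_id: int) -> dict:
    return {"id": user_id, "api": "v2"}

if __name__ == "__main__":
    # Flip the flag to exercise the new path end to end; a later PR
    # deletes the flag and the v1 path.
    print(fetch_user(42))
```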

Of course, your mileage may vary; this technique is certainly not suitable for all breaking changes or all workflows.

API changes that break backwards compatibility feel like they should be using versioning. I don't see enough people putting versioning support into their API stuff at the outset. I've been chastised for doing that with "YAGNI". And then... one day, we do need it, and trying to introduce versioning support becomes... that much harder.

The context here was a class API, not a REST API, so versioning is not relevant.

You're right... misread that.

Does seem like you could namespace classes into versions. Then it's much clearer which version of a class a caller is using.
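Sketching that idea in a single file (in a real codebase the versions would be separate packages, e.g. something like payments/v1/client.py and payments/v2/client.py; all names here are invented):

```python
# Versions as namespaces, so every call site says which API it depends on.
class v1:
    class Client:
        def charge(self, amount_cents: int) -> str:
            # old behavior: callers pass cents
            return f"charged {amount_cents} cents (v1)"

class v2:
    class Client:
        def charge(self, amount: float, currency: str = "USD") -> str:
            # new, non-backwards-compatible signature
            return f"charged {amount:.2f} {currency} (v2)"

if __name__ == "__main__":
    # At the call site it's obvious which version is in use.
    print(v1.Client().charge(500))
    print(v2.Client().charge(5.00, "USD"))
```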

A new feature that fundamentally changes the way a lot of code is structured.

A group of features that only combined produce a measurable output, but each one does not work without the others.

A feature that will break a lot of things but needs to be merged now, so that everyone has time to work on fixing the actual problems before deadline X. Left unmerged, it conflicts constantly every day, and we need to spend that time fixing the actual issues, not fixing conflicts.