I actually prefer not to be dogmatic about one approach or the other, and I think the answer will be different for different dev/CI/deployment workflows.
Here's my approach, admittedly from experience limited to my (past) workplace. We had the usual CI setup, where each merged PR triggered a build followed by a deploy to staging.
This means that what goes into a PR is decided by which sub-functionality of the feature at hand has to be tested first[0], whereas what goes into individual commits is decided by what is easiest for reviewers to read, for the PRs where such a split makes sense[1]; for a lot of other cases it simply doesn't matter much, like you said.
That is the way I like to think about it.
I know, I know, git bisect etc... but IME, in the rare cases we used it, we just ran bisect on the master branch, which had squashed PR-level commits, and once we found the offending PR it was fairly straightforward to narrow it down manually from there.
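For what it's worth, that flow is just vanilla bisect over the squashed commits; a rough sketch below, where the known-good tag and the test script are hypothetical stand-ins:

    git bisect start
    git bisect bad HEAD             # current master is broken
    git bisect good v1.4.0          # last known-good tag (made up here)
    git bisect run ./run_tests.sh   # any script that exits non-zero on failure
    # bisect stops on a single squashed PR-level commit
    git bisect reset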
In more systems-level projects there actually tend to be clear layers for different parts of your code (let's be honest, business logic apps are not structured that well, especially as time goes on), and there the individual-commits-should-work approach works well.
[0] Ideally, the whole feature is one PR and you config-gate each behaviour so it can be tested individually, but that's not always possible.
[1] For example, say we have a Kafka producer and consumer in the same repo. They each call a myriad of internal business-logic functions, and modifications were required there too. It is much easier to review commits that separate the changes made as part of the producer flow from those made as part of the consumer flow, even when they touch the same function.
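To make that concrete, a hypothetical history for such a change might look something like this (the hashes, class names and function names are all made up):

    $ git log --oneline
    d4e5f6a consumer: handle OrderCreated in apply_discount()
    c3d4e5f consumer: add OrderEventsConsumer wiring
    b2c3d4e producer: emit OrderCreated from the apply_discount() path
    a1b2c3d producer: add OrderEventsProducer config

versus one big commit mixing producer, consumer and shared business logic, which leaves the reviewer to untangle the two flows themselves.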