There’s in fact an in-between, which is humans enforcing rules. It’s what’s in place now, and it has a real effect; it’s not imaginary and it isn’t doing nothing. Some of the rules aren’t practically enforceable by software alone, at least so far (things like “don’t try to fingerprint the user or device in unauthorized ways”).
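To illustrate why that rule resists automated enforcement: a fingerprint can be assembled entirely from public APIs that each have legitimate uses, so no single call is flaggable; only the combination and intent make it fingerprinting. A rough, hypothetical sketch (not from any real app):

```swift
import UIKit

// Hypothetical sketch: a device fingerprint built entirely from public
// APIs. Each read has a legitimate purpose (layout, diagnostics,
// localization), so automated analysis can't flag any single call.
func deviceFingerprint() -> String {
    let device = UIDevice.current
    let screen = UIScreen.main
    let parts = [
        device.model,                  // e.g. "iPhone"
        device.systemVersion,          // e.g. "17.4"
        "\(screen.nativeBounds.size)", // physical pixel dimensions
        "\(screen.scale)",             // display scale factor
        Locale.current.identifier,     // language/region
        TimeZone.current.identifier,   // coarse location signal
    ]
    // Real fingerprinting code would hash this deterministically
    // (e.g. SHA-256) into a stable identifier.
    return parts.joined(separator: "|")
}
```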
Those rules are even less enforceable by human reviewers, because Apple doesn't employ people to reverse engineer your app, never mind every subsequent update.
Your contention is that the review process entirely fails at enforcing privacy and security rules that can’t be enforced through automation alone, or fails at such a high rate that it may as well fail entirely?
That doesn’t reflect my experience submitting apps, nor my experience as a user of Apple devices. It’s certainly imperfect, but it achieves a lot more than if they simply stopped doing it.
[edit] And in fact, some of the automated checks wouldn’t be practical to run on a user’s device; are those also totally ineffective?
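For example, one kind of check that’s widely understood to run server-side is scanning the submitted binary for private-API symbols; the denylist is proprietary and the decrypted binary only exists on the review side. A hypothetical, simplified sketch (the symbol names are illustrative, and real tooling would parse Mach-O symbol tables rather than raw bytes):

```swift
import Foundation

// Hypothetical sketch of a server-side static check: scan a submitted
// binary for denylisted private-API symbol names. Illustrative list;
// the real denylist is proprietary and frequently updated, which is
// part of why this check can't reasonably run on a user's device.
let denylist = ["uniqueIdentifier", "_MGCopyAnswer"]

func privateSymbolsFound(binaryPath: String, denylist: [String]) -> [String] {
    guard let data = FileManager.default.contents(atPath: binaryPath) else {
        return []
    }
    // Naive byte-level search over the whole binary; real tooling
    // would walk load commands and symbol tables instead.
    return denylist.filter { symbol in
        data.range(of: Data(symbol.utf8)) != nil
    }
}
```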