agree on the code side. for decisions i'd push it a bit further though. the trap isn't really that the model does your thinking for you, it's that it agrees with the thinking you already brought to it. you can't verify a decision the way you verify code, there's no test that fails when "this answer is wrong because you wanted to hear it." so even if you don't outsource the thinking, you can walk away with a plausible-sounding agreement that feels like understanding. that's the failure mode that worries me more. replacement is honestly the easier one to spot.

the agreement problem is the one that worries me more too. you can catch replacement, it's visible. you can't catch a model that confidently validates your existing blind spots back at you.

the deeper version of this: the reps that build judgment used to happen naturally in production. you shipped something wrong, it broke, you learned. that feedback loop is now compressed or gone entirely. the question isn't really "is AI replacing your thinking," it's "where are the reps happening now." if the answer is nowhere, the judgment debt is accumulating invisibly, and the AI agreement problem you're describing is exactly how it stays invisible.