I can't tell from the issue whether they're asserting a problem with the Claude model itself or with Claude Code, i.e., with how Claude Code specifically calls the model. I've been using Roo Code with Claude 4.6 and haven't noticed any differences, though my coworkers using Claude Code have complained about it getting "dumber". Roo Code has its own settings controlling thinking-token use.

(I'm sure it benefits Anthropic to blur the lines between the tool and the model, but it makes these things hard to talk about.)

I also haven't noticed the degradation, and I'm not on Claude Code. I'm on week 4 of a continuous, large engineering project (C, in a massive industrial semiconductor codebase) with Opus. While it's the biggest engagement I've had, it's a single-agent flow, and it's tiny on the scale of the use case in the post, so I wonder if they're just stressing the system to the point of failure.