> GPT 5.5 is the first model good enough for me to just let rip.

You know this is the exact same thing said during Opus 4.6, right?

That makes it hard to believe, because it's the same "last week's model was so far behind you can't even comprehend it" meme that's been going on throughout the last year.

More info dumped into tickets and projects is great for understanding, for both people and LLMs. Hopefully it's not LLM-generated, though.

> You know this is the exact same thing said during Opus 4.6, right?

Yeah, and for Sonnet 3.5, or even GPT-4o. Because it was true for many. Different people reach the acceptance stage at different times.

It's just cope. I'm close to never coming back to HN, because the quality of thought has gone through the floor. Anything whatsoever to hedge one's way to fellating a phallusless chatbot.

> You know this is the exact same thing said during Opus 4.6, right?

spicyusername said this exact same thing about Opus 4.6?

or is there more than one person on HN, and perhaps they have different opinions?

There wasn't any personal mention in my post. It was a snarky remark about the fact that this cycle keeps continuing: every new release is a game changer, except in the benchmarks, where there's generally only a slight couple-percent change.

You're missing the point that it's (conceivably, and probably) different people making the comments. Each model release has a few new converts, which is expected if the models are in fact getting better at agentic coding.

You're implying it's a hype train when in fact it's an adoption curve.

> which is expected if the models are in fact getting better at agentic coding

Is it? Or could it equally be that the models aren't getting better, but people are adopting them anyway?

If the models were getting better, we'd be seeing mobile apps ship new features at 10x the previous rate, or websites with 4x the number of features. But we're not.