A few thoughts on this: it's not gambling if the most expected outcome actually occurs.

It also depends on what you're coding with:

- If you're coding with opus4.6, then it's not gambling for a while.

- If you're coding with gemini3-flash, then yeah.

One thing I have noticed, though: you have to spend a lot of tokens to keep the error/hallucination rate low as your codebase grows. The math of this makes sense; as the codebase grows, there's physically more surface where something could go wrong. To avoid that, you have to consistently and efficiently make that surface and all its features visible to the model. If you have coded with a model for a week and it has produced some code, the model is not more intelligent after that week; it still has the same layers and parameters. So keeping the context relevant is a moving target as the codebase grows (and that's probably why it feels like gambling to some people).

> it's not gambling if the most expected outcome actually occurs.

> you have to spend a lot of tokens to keep the error/hallucination rate low

Ironically, I find your comment more effective at convincing me AI coding is gambling than the original article. You're talking about it the exact same way that gamblers do about their games.

So your whole argument is that you're convinced AI coding is gambling because, according to you, I'm talking about it the way gamblers talk about gambling?

- Was there any more intelligence you wanted to add to your argument?

lol, that's interesting. Care to explain why?

I mean, the most expected outcome does mostly happen. When gambling, you are expected to lose money, and you do. I'm not quite convinced the same isn't true for vibecoding.