I would always prefer something local. By definition it's more secure, as you are not sending your code over the wire to a third-party server and hoping that they honor their "We will not train our models with your data" promise.
That's a fair point - you're talking about data security (not sending code to third parties) and I was talking about output quality security (what the model generates). Two different dimensions of "secure" and honestly both matter.
For side projects I'd probably agree with you. For anything touching production with customer data, I want both - local execution AND a model that won't silently produce insecure patterns.
I think you are deluded if you think the latter does not happen with hosted models.
Oh, it absolutely does - I never said otherwise. Hosted models produce plenty of insecure code too; the Moltbook thing from like a week ago was Claude Opus, and it still shipped with wide-open auth.
My point was narrower than it came across: when you swap from a bigger model to a smaller local one mid-session, you lose whatever safety checks the bigger one happened to catch. Not that the bigger one catches everything - clearly it doesn't.