Not in my experience with Gemini Pro and coding. It hallucinates APIs that aren't there. Claude does not do that.
Gemini has flashes of brilliance, but I regard it as unpolished: some things work amazingly, some basics don't work.
It's very hard to tell the difference between bad models and stinginess with compute.
I subscribe to both Gemini ($20/mo) and ChatGPT Pro ($200/mo).
If I give the same question to "Gemini 3.0 Pro" and "ChatGPT 5.2 Thinking + Heavy thinking", the latter is 4x slower but gives smarter answers.
I shouldn't have to enumerate all the plausible explanations for this observation. Anything from Gemini nerfing the reasoning effort to save compute, to TPUs simply being faster, to Gemini being worse, to this being my idiosyncratic experience fits the same data, and all of it is plausible.
You nailed it. Gemini 3 Pro seems very "lazy": it rarely reasons for more than 30 seconds, which significantly impacts the quality of its outputs.