I have the same experience: Gemini thinks dramatically less than ChatGPT (or Claude) while achieving 90%-95% of the answer on its first go. I'm surprised this isn't talked about more, because the difference is stark, usually around a factor of 5. This shows up in benchmarks too, where Gemini consistently uses far fewer tokens per solve.
So while ChatGPT produces a correct and/or thorough result after 10 minutes, Gemini gets most of the way there in 2 minutes. The downside is that you need to prompt again to reach the same level as ChatGPT, but you can also get ~5 prompts in within the same amount of time.
I have Claude too, but I use it the least because it hits its limits so quickly. Its thinking time, however, seems to be on par with ChatGPT's.
Probably because Gemini has access to Google's Knowledge Graph, which has been around since 2012. It's one of the many major advantages Google has over other players, and one I also think is underdiscussed.