> It's too shallow. The deeper I go, the less useful it seems to be. This happens quickly for me.

Are you using a free-tier model like GPT-4o (or the equivalent from another provider)?

I find that o3 consistently goes deeper than I can in anything I'm not an expert in, and can usually keep up with me in the areas where I am.

If that's not the case for you, I'd be very curious to see a full conversation transcript (in ChatGPT you can share these directly from the UI).

I have access to the highest-tier paid versions of ChatGPT and Google Gemini. I've tried different models and tuned things like context window size.

I know it has nothing to do with this. I simply hit a wall eventually.

I'm unfortunately not at liberty to share the chats, though. They're work-related (I very recently ended up at a place where we do thorny research).

A simple example, though, is researching Israel-Palestine relations since 1948. It usually starts off okay, but it eventually goes off the rails with bad sourcing, fictitious sourcing, and/or hallucinations. Sometimes I hit a wall where it repeats itself over and over, and I suspect it's because the information is simply not captured by the model.

FWIW, if these models had live and historical access to Reuters and Bloomberg terminals, I think they might be better at a range of tasks I currently find them inadequate for.

> I'm unfortunately not at liberty to share the chats, though.

I have bad news for you. If you shared it with ChatGPT (which you most likely did), then whatever you're trying to keep hidden or private isn't actually hidden or private anymore: it's stored on their servers, and that chat may well end up in training data. Use local models instead for cases like that.
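For example, here's a minimal sketch of keeping a chat fully local using Ollama's OpenAI-compatible endpoint. This assumes you've installed Ollama, that `ollama serve` is running, and that you've pulled a model; the model name below is illustrative, not a recommendation:

```python
# Minimal sketch: chat with a locally hosted model via Ollama's
# OpenAI-compatible API, so the conversation never leaves your machine.
# Assumes `ollama serve` is running and a model has been pulled,
# e.g. `ollama pull llama3.1` (model name is illustrative).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's default local endpoint
    api_key="ollama",  # any non-empty string works; no real key is needed locally
)

response = client.chat.completions.create(
    model="llama3.1",  # whichever local model you've pulled
    messages=[
        {"role": "user", "content": "Summarize the attached notes without sending them anywhere."},
    ],
)
print(response.choices[0].message.content)
```

The upside of the OpenAI-compatible route is that existing tooling works unchanged; you just point `base_url` at localhost instead of OpenAI's servers.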