I have access to the highest-tier paid versions of ChatGPT and Google Gemini, and I've tried different models and tuned things like context window size.
I know it has nothing to do with this. I simply hit a wall eventually.
I unfortunately am not at liberty to share the chats, though. They're work-related (I very recently ended up at a place where we do thorny research).
A simple one, though, is researching Israel-Palestine relations since 1948. It starts off okay (usually) but eventually goes off the rails with bad sourcing, fictitious sourcing, and/or hallucinations. Sometimes I actually hit a wall where it repeats itself over and over, and I suspect it's because the information is simply not captured by the model.
FWIW, if these models had live and historical access to Reuters and Bloomberg terminals, I think they might be better at a range of tasks I currently find them inadequate for.
> I unfortunately am not at liberty to share the chats though.
I have bad news for you. If you shared it with ChatGPT (which you most likely did), then whatever you are trying to keep hidden or private isn't actually hidden or private anymore: it's stored on their servers and will most likely be used as training data. Use local models instead in cases like that.
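Getting started with a local model is a small lift these days. Here's a minimal sketch using Hugging Face transformers; the model name is just an example, substitute whatever local chat model fits your hardware. Once the weights are downloaded, nothing leaves your machine:

    # Minimal local-inference sketch (assumes `pip install transformers torch`).
    # Model name is an example; pick any local model your RAM/GPU can hold.
    from transformers import pipeline

    gen = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

    # Prompt runs entirely on your own box; no chat log on anyone's server.
    out = gen("Summarize Israel-Palestine relations since 1948.",
              max_new_tokens=300)
    print(out[0]["generated_text"])

A 0.5B-parameter model won't match the frontier models on quality, but for work-sensitive chats the tradeoff is privacy, and you can scale up to larger local models as your hardware allows.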