That’s not context decay, that’s training data ambiguity. There is so much misinformation, and so many nerfs, buffs, and changes, that an LLM cannot keep up given the training time required. Try it with a game that has been stable and it knows its stuff.

It didn’t only give outdated information; in some cases it did, and after two tries telling it to search for updated information it got it right (shouldn’t need to do that, though). But it also gave wrong information about sockets (support skills) that never existed, or were never able to be socketed together in the first place. (OK, maybe in 0.1, but that’s what web search is for...) If it can’t even handle easily versioned information from a game, how is it supposed to handle anything related to time, dates, news, science, etc.?

Like any human would: 75% certain with 99% confidence. That’s what you fail to realize. They aren’t “god-mode” machines. They are “human-mode” machines, and humans make mistakes in their thinking just like you do. Some might say asking a powerful LLM for gaming tips is a waste of compute. Others might say it gives you early knowledge of an emerging meta. Either way, you’re both going to get trained.

Please don’t pop the AI bubble, bro. Stop asking questions, bro. Believe the hype, bro.