AI apologists need to decide whether they are claiming LLMs are almost-AGI, or not.

The backlash of people pointing out LLM failures is a reaction to the overblown hype: nobody expects a statistical-language-processing-gadget to do math well, but then people need to stop claiming LLMs are something other than statistical-language-processing-gadgets.