Gemini has always felt like someone who was book smart to me. It knows a lot of things, but if you ask it to do anything that is off-script, it completely falls apart.

I strongly suspect a major component of this kind of experience is that people develop a way of talking to a particular LLM that's very efficient and works well for them, but is in many respects non-transferable to rival models. For instance, in my experience, OpenAI models are remarkably worse than Google models on basically every criterion I can think of; however, I've spent most of my time using the Google ones, and it was only over that time that the differences became apparent and, eventually, much more pronounced. I would not be surprised at all to learn that people who chose to primarily use Anthropic or OpenAI models over the same period had an exactly analogous experience that convinced them their model was the best.

We train the AI. The AI then trains us.

I'd rather say it has a mind of its own; it does things its own way. But I haven't tested this model, so they might have improved its instruction following.

Well, one thing I know for sure: it reliably misplaces parentheses in Lisps.

Clearly, the AI is trying to steer you towards the ML family of languages for its better type system, performance, and concurrency ;)

I made offmetaedh.com with it. Feels pretty great to me.