Also, LLMs have fairly well proven that even if you have a calculator, you should probably retain the ability to sanity-check the answer — in case you hit the wrong button, for example. LLMs can be confidently wrong, and unless you're able to tell, you're out of luck...

Plus there's the whole question of whether you actually retain whatever you offload to the model...

Ironically, America finally automated one of our oldest workplace specialties: grifting! Overconfidently declaring made-up nonsense in board meetings is classic executive behavior. I suppose it'll be interesting to see what happens now that everyone has a grifter in their pocket. Will executives' patter improve from exposure alone, or will they be more easily detected because their skills weaken from disuse?