Better at what?

Paperclip maximization.

Better at avoiding human oversight and better at achieving whatever meaningless goal (or optimization target) was unintentionally given to it by the lab that created it.

So better at nothing that actually matters.

I disagree.

I expect AI to make people's lives better (probably much better). But then an AI model will be created that undergoes a profound increase in cognitive capabilities, and then we all die, or something else terrible happens, because no one knows how to retain control over an AI that is much more capable than people across the board.

Maybe the process by which it undergoes that profound capability increase is the one described in the OP: it "improves itself by rewriting its own code".

Just stop using it.