Or we could just demand agents that offer this level of introspection?

I certainly wouldn't trust the model's self-reporting on this.

It's not only about trust: once you can actually see what's in the context, you can optimize it to suit how you use LLMs... There is a whole world to be explored inside that context window.
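As a rough illustration of what that introspection could look like (a hypothetical sketch, not any real agent framework's API), an agent could expose its context verbatim instead of relying on the model to describe it:

```python
from dataclasses import dataclass, field

@dataclass
class ContextWindow:
    """Hypothetical agent wrapper that exposes its raw context,
    so inspection doesn't depend on the model's self-reporting."""
    messages: list = field(default_factory=list)

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})

    def inspect(self) -> list:
        # Return the context exactly as the model would receive it.
        return list(self.messages)

    def size_estimate(self) -> int:
        # Crude proxy: word count (a real agent would use the
        # model's own tokenizer).
        return sum(len(m["content"].split()) for m in self.messages)

ctx = ContextWindow()
ctx.add("system", "You are a helpful assistant.")
ctx.add("user", "Summarize the report.")
print(len(ctx.inspect()))    # 2 messages in context
print(ctx.size_estimate())   # 8 words
```

With raw access like this, "optimizing what's in the context" becomes an observable, testable activity rather than a guess.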