If they really need to guard the thinking output, they could encrypt it and hand it to the client as an opaque blob; on a later request the client sends it back and it gets decrypted on their server.
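A minimal sketch of that round-trip, using only the Python stdlib. This is a toy encrypt-then-MAC construction just to show the seal/unseal flow (a real service would use a proper AEAD cipher like AES-GCM, and the function names and payload here are made up for illustration):

```python
import base64
import hashlib
import hmac
import os

# Server-side secrets; the client only ever sees opaque blobs.
ENC_KEY = os.urandom(32)
MAC_KEY = os.urandom(32)

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # HMAC-SHA256 in counter mode as a stand-in stream cipher.
    out = b""
    counter = 0
    while len(out) < length:
        block = nonce + counter.to_bytes(8, "big")
        out += hmac.new(key, block, hashlib.sha256).digest()
        counter += 1
    return out[:length]

def seal(thinking: str) -> str:
    """Encrypt thinking text into an opaque blob the client can hold."""
    plaintext = thinking.encode()
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in
               zip(plaintext, _keystream(ENC_KEY, nonce, len(plaintext))))
    tag = hmac.new(MAC_KEY, nonce + ct, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(nonce + ct + tag).decode()

def unseal(blob: str) -> str:
    """Verify and decrypt a blob the client sent back."""
    raw = base64.urlsafe_b64decode(blob)
    nonce, ct, tag = raw[:16], raw[16:-32], raw[-32:]
    expected = hmac.new(MAC_KEY, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("tampered blob")
    return bytes(a ^ b for a, b in
                 zip(ct, _keystream(ENC_KEY, nonce, len(ct)))).decode()

blob = seal("chain-of-thought the provider wants to hide")
assert unseal(blob) == "chain-of-thought the provider wants to hide"
```

The point is that the client can carry the reasoning state across turns without ever being able to read it, while the server stays stateless.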

But they used to return thinking output directly in the API, and that was _the_ reason I liked Claude over OpenAI's reasoning models.