> But chat control will have "master keys" to all communication.

That's not my understanding. My understanding is that ChatControl will run client-side scanning and report what is deemed illegal.

This is not a master key to all communications.

Last time I checked, the preferred method was to simplify known CSAM material enough that you can hash the result, then repeat the same process on the client end and hope nothing else produces the same hash.
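To make that concrete, here is a rough sketch of that "simplify, hash, compare" idea in Python. It uses a naive average hash rather than whatever (proprietary) perceptual hash a real system would use, and `KNOWN_HASHES`, the threshold and the file path are made-up placeholders:

```python
# Toy sketch of hash-based matching against a list of known fingerprints.
# This is NOT the actual algorithm any real system uses; it is only meant
# to illustrate the "simplify, then hash, then compare" idea.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink and grayscale the image, then encode each pixel as one bit
    (above/below mean brightness) to get a small fingerprint."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of bits in which two fingerprints differ."""
    return bin(a ^ b).count("1")

# Hypothetical database of fingerprints of known material.
KNOWN_HASHES = {0x8F3C00FF8F3C00FF}

def matches_known(path: str, threshold: int = 5) -> bool:
    """Flag the image if its fingerprint is within a few bits of a known one."""
    h = average_hash(path)
    return any(hamming_distance(h, k) <= threshold for k in KNOWN_HASHES)
```

The hope is that near-duplicates of known material land within a few bits of a stored fingerprint while unrelated images do not, which is exactly the "hope nothing else has the same hash" part.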

Which makes it even less of a problem than people say. I see that as an argument in favour of ChatControl: if it really "just" compares hashes locally, then the claim that it breaks encryption is even more wrong.

But how is this supposed to protect children if, say, an abuser takes new pictures or videos of the victim? That material would not match any known hash.

I don't see this as a valid argument. You can't say "I can find a situation where ChatControl does not help, therefore ChatControl is always useless".

On the contrary, it is an argument in favour of extending ChatControl to use machine learning to detect such cases.

The problem, again, is that we don't want an opaque system that can be extended to surveil new things, because it is very difficult to audit and to make sure it is not abused.