> Yes, identity for agents is a real problem.

I don't agree that bot identity is a problem, or that we'd be better off if it were solved. I'd rather have a web where adversarial interoperability is possible than one where service operators have a say in what tools you can use to access their websites.

The main problems we see today are bad actors that

a) completely ignore and/or sidestep copyright and licensing to use the work of others for their own benefit without contributing anything back

b) send an unreasonable number of requests

Neither will be solved by identifying bots; that will at best get rid of the small players and hand Google, Meta, etc. even more power. The unsustainable, parasitic theft of open content needs to be dealt with legally, and nothing else will solve it. DRM never works. If enough people block Gemini, Google will just feed it with Googlebot crawls, and no one can afford to block that. And then they will sell the data to other players. Or someone will make a browser extension to do the same.

The second issue should also be solved via legislation and enforcement thereof. It can also be addressed by disconnecting and/or throttling abusive networks wholesale - whole countries if need be. You know, like we have been handling abusive network participants forever. Trying to detect "bots" is a fool's errand that will only catch the laziest crawlers. You cannot win the bot-blocking game when the bots can afford to spend more resources per request than real users are willing to.
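For what it's worth, network-level throttling doesn't require identifying anyone. A minimal sketch of the idea, assuming a simple token bucket keyed by network prefix (all names and limits here are made up for illustration, not from any real deployment):

```python
# Hypothetical sketch: throttle by network prefix instead of guessing "bot or not".
# An abusive network burns a shared budget, so the whole prefix slows down.
import ipaddress
import time
from collections import defaultdict


class PrefixThrottle:
    """Token-bucket rate limiter keyed by /24 (IPv4) or /48 (IPv6) prefix."""

    def __init__(self, rate_per_sec: float = 10.0, burst: float = 50.0):
        self.rate = rate_per_sec
        self.burst = burst
        # prefix -> (tokens remaining, timestamp of last refill)
        self.buckets: dict[str, tuple[float, float]] = defaultdict(
            lambda: (burst, time.monotonic())
        )

    def _prefix(self, ip: str) -> str:
        # Collapse individual client IPs into their surrounding network.
        addr = ipaddress.ip_address(ip)
        bits = 24 if addr.version == 4 else 48
        return str(ipaddress.ip_network(f"{ip}/{bits}", strict=False))

    def allow(self, ip: str) -> bool:
        key = self._prefix(ip)
        tokens, last = self.buckets[key]
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens < 1.0:
            self.buckets[key] = (tokens, now)
            return False  # budget exhausted: throttle the whole prefix
        self.buckets[key] = (tokens - 1.0, now)
        return True


throttle = PrefixThrottle(rate_per_sec=5, burst=20)
print(throttle.allow("203.0.113.7"))  # True until the whole /24 exhausts its budget
```

The point being: this kind of wholesale throttling needs no identity scheme at all, just the same per-network accounting operators have always done.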

So yes, the web must remain open. But to keep it that way we must not have bot identity checks at all - whether or not they are managed by a single company that has inserted itself as a gatekeeper.