It doesn't matter if you brute forced their crappy login with commonly-used credentials. You think it's OK for someone to rummage around in your garage just because they correctly guessed your keycode was 12345? Of course not.
I'm more focused on the assertion that "The CFAA isn't super complicated."
Which raises sincere doubts about the commenter's credibility to make such a claim.
How does "you're not allowed to guess credentials" mean it's complicated?
I think that's a massive oversimplification of how the CFAA has been applied.
Doesn't this posture also criminalize white-hat hackers, whose disclosures would protect you from the people who actually want to do damage?
> Doesn't this posture also criminalize white-hat hackers, whose disclosures would protect you from the people who actually want to do damage?
There is no law for "white-hat hackers". You don't get to break into a system because of the color of your hat.
"White-hat hackers" have contracts, or very specific rules of engagement. Having run many a bug bounty, if someone was malicious, we would absolutely work to prosecute.
You can also find bugs in software freely, as long as you don't obtain unauthorized access to other people's systems.
This isn't true: depending on the jurisdiction, and I think also on DOJ norms, there is a broad exception for good-faith white-hat vulnerability research that would otherwise violate the CFAA. Like I said, the CFAA is very complicated in practice.
(I don't know enough about the CFAA to know whether this is true so I'll assume it is.)
To continue the garage door analogy, you wouldn't walk up to any random garage door and try code 12345 to help protect the owner's stuff, would you?
To stick with this analogy: I think a white hat equivalent would be more like driving down the street with a garage door remote set to a default code and then notifying anyone whose door opens in response that they should change their code. I don't think that should be illegal.
You think walking through an unlocked door should result in federal charges?
Walking through an unlocked door that has a sign "private property, do not enter", searching for sensitive information, finding it and exposing it surely could.
Or not, depending on how the party who owns what's inside that door feels. But if they feel he should be prosecuted, then hell yes, the state should do that. My 2c.
So what about using rakes or bump keys? Very low tech, very easy. Can defeat some poor quality locks.
Still sounds like petty crime that doesn't need the FBI to roll in.
The point is that in the physical world there is some notion of proportionality in the response to trespassing, depending on the actual damage done and the sophistication and premeditation of the act. We don't generally lock people up because they accidentally walked into an area they shouldn't have. But once computers are involved, we have laws that automatically turn even minor infractions into a big scary issue that allows the government to essentially destroy someone's life.
So now the door is unlocked?? Where are the goal posts?
Don't mess with people's stuff if they don't want you to. This seems very simple to me. But I'm aware that you're trying to find some fringy gray area where you think it will be OK to mess with people's stuff even though they don't want you to.
If we're making an analogy to the Weev case then yes the door was unlocked, with the explicit intent that the general public could come through that door and access some of the documents.