I've said before that we need strong legal protections for white-hat and even grey-hat security researchers or hackers. As long as they report what they have found and follow certain rules, they need to be protected from any prosecution or legal consequences. We need to give them the benefit of the doubt.

The problem is that this is literally a matter of national security, and currently we sacrifice national security for the convenience of wealthy companies.

Also, our private data gets leaked multiple times per month. Millions of people have their private information exposed by these companies, and there are zero consequences. Currently, the companies say, "Well, it's our code, it's our responsibility; nobody is allowed to research or test the security of our code because it is our code and it is our responsibility." But then, when they leak the entire nation's private data, it's suddenly no longer their responsibility. They're not liable.

As security issues continue to become a bigger and bigger societal problem, remember that we are choosing to hamstring our security researchers. We can make a different choice and decide we want to utilize our security researchers instead, for the benefit of all and for better national security. It might cause some embarrassment for companies though, so I'm not holding my breath.

> we need strong legal protections for white-hat and even grey-hat security researchers or hackers.

I have a radical idea which goes even further: we should have legally mandated bug bounties. A law which says that if someone makes a proper disclosure of an actual exploitable security problem, then the company has to pay out. Ideally we could scale the payout based on the importance of the infrastructure in question. Vulnerabilities with little lasting consequence would pay little. Serious vulnerabilities with the potential for society-wide physical harm could pay out a few percent of the company's yearly revenue. For example, hacking the high score in a game would pay only a little, while a vulnerability which can collapse the electric grid or remotely commandeer a car would pay a king's ransom. Enough to incentivise a cottage industry dedicated to finding problems. Hopefully the result would be that the companies in question find it more profitable to find and fix the problems themselves.
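To make the scaling concrete, here's a minimal sketch of what such a payout schedule could look like. Everything in it is hypothetical: the severity tiers, the revenue fractions, and the floor amounts are invented for illustration, not drawn from any real bounty programme or proposed statute.

```python
from enum import Enum

class Severity(Enum):
    # Hypothetical tiers, ordered by societal impact
    COSMETIC = "cosmetic"        # e.g. hacking a game's high score
    DATA_LEAK = "data_leak"      # e.g. exposing customer records
    CRITICAL_INFRA = "critical"  # e.g. grid collapse, remote control of a car

# Illustrative numbers only: (fraction of yearly revenue, minimum payout in USD)
PAYOUT_SCHEDULE = {
    Severity.COSMETIC: (0.0, 500),
    Severity.DATA_LEAK: (0.001, 50_000),         # 0.1% of revenue
    Severity.CRITICAL_INFRA: (0.02, 1_000_000),  # 2% of revenue
}

def bounty(yearly_revenue: float, severity: Severity) -> float:
    """Mandated payout: a revenue fraction with a floor, so that a
    small company still owes a meaningful amount."""
    fraction, floor = PAYOUT_SCHEDULE[severity]
    return max(yearly_revenue * fraction, floor)

# For a company with $5B in yearly revenue:
print(bounty(5e9, Severity.COSMETIC))        # 500.0
print(bounty(5e9, Severity.DATA_LEAK))       # 5,000,000.0
print(bounty(5e9, Severity.CRITICAL_INFRA))  # 100,000,000.0
```

The floor keeps the high-score case from rounding to zero, while the percentage is what turns a grid-scale vulnerability into a king's ransom for a large company.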

I’m sure there is potential for a lot of unintended consequences. For example, I’m not sure how we could handle insider threats. On the one hand, insider threats are real and companies should be protecting against them as best they can. On the other hand, it would be perverse to force companies to pay developers for vulnerabilities the developers themselves intentionally created.