> Their initial reply from the CEO: "I would love to hear what the vulnerability is, but I assume you want to get paid for it. Is that the play?"
Well that’s pretty damning.
Should have been handled better, but some context is necessary:
If your name is associated with a startup in a visible leadership position you will get mass-spammed from people claiming to have discovered critical vulnerabilities in your system. When you engage with them, the conversation will turn into requests to hire them for their services.
So the CEO handled it poorly, but it's also not a great choice to withhold the details of the vulnerability in the initial contact. If the goal was to get something fixed, the details should have been included in an easy-to-forward email that could be sent on to someone who could act on them.
Anyone who works with security or bug bounties can tell you that the volume of bad reports was a problem before LLMs. Now that everyone thinks they're going to use LLMs to get gigs as pentesters the volume of reports is completely out of control.
The number of spam "I found a vulnerability" emails you get as a SaaS operator is ridiculous. They never offer any proof of a vuln; they just want you to confirm you have a bug bounty program (in which case they'll start scanning afterwards), or to be paid up front for the information, with a threat to release it otherwise.
Their response isn't damning to me. It sounds like they just assumed the reporter was one of these spammers.
I keep getting emails with content like: "I found a critical bypass vulnerability in your app. What is the appropriate channel to disclose it, and do you have a bounty program?"
I tried engaging and replying to them, and it inevitably turns into: "Yeah, we don't actually have a vulnerability, but you are totally vulnerable, so just let us do a security audit for you."
I have a pre-written reply for these kinds of messages now.
Yeah, the signal-to-noise ratio on vulnerability reports is very poor, especially when the initial report withholds all detail.
I get tons of these messages too, and the ones that do include details are the kind of junk you get from free "website vulnerability scanners": garbage that means nothing -- "missing headers" for things I didn't set on purpose, "information disclosure vulnerabilities" for things that are intentionally there, etc. You can put google.com into these things and get dozens of results.
I run the bug bounty for a fairly large OSS project, and the amount of shitty/bad-actor spam, beg bounties, etc. we get is huge. Like 95% of the emails to security@ are straight garbage.
From the looks of it, they actually asked for a way to report.
email security@company
Sure, that is perhaps a good way to inquire about the appropriate channel for disclosing a security vulnerability, but email is not a secure communication method for sending the details of one.
It's kind of insane to think that the state of email encryption is still so bad in The Future Year 2026.
No flying cars? Okay. Nobody traveled much beyond the orbit of the Moon? Dang. But email? We didn't even get reliable privacy separate from identity?
> Nobody traveled much beyond the orbit of the Moon?
Oh, don't think that outer space will let you escape the misery of email:
> "I have two Microsoft Outlooks and neither one is working": Artemis II astronauts
start there and handle everything once you get in contact with appropriate people
Yeah. I'm just saying how it could have been overlooked. Doesn't excuse it, though.
I have even more damning ones.
When the "good Samaritan" does not go to the vendor, they go to the client (i.e., they do not contact the DIB company, they contact the Gov agency).
I have seen government contractors get pilloried and lose their livelihood when this happened. And yes, there is always a "quick fix offer" from the "good Samaritan" to the vendor, and promised reassurance to the Gov agency, if only the misguided vendor would go with their solution.
It is also not unusual to find out later that the identification, or even the resource reported on, was wrong - but by that time the Gov agency has already punished the contractor and the reporting "good Samaritan" is laughing (sometimes all the way to the bank).
They can get away with unethical vulnerability disclosure because of "think of the children," "the threat to the nation," "grandma off the cliff," and <insert your favorite cliché justification of malfeasance>.
Yes, sore subject.
That just sounds like good old business to me. When outside of public view, good businessmen are extremely cut-throat and unethical.
They could sell the next one to an adversary for a lot more money if they're going to act like that.
Yes, there are also many other lucrative illegal activities.
How is it illegal? It’s information available to the public.
If you sell something to someone and they commit computer crimes, you're going to have to prove that you couldn't have known they were a computer criminal.
It's the same thing with selling general offensive security tools. You have to proactively make it clear that it's for testing and not criminal use. Otherwise, cops are going to assume you're complicit and make things shitty.
Isn't it also illegal to withhold knowledge of a vulnerability for payment? It sounds like it should fall under some variety of blackmail.
That would be even worse than our already bad system.
The system is already pretty bad because vendors underinvest in security, and then to fix it, researchers have to volunteer their time to investigate with no guarantee of payment. If the vendor could force researchers to hand over findings for free, nobody would want to do security research except hobbyists having fun. They're basically signing up for hours of tedious forced labor to explain vulnerabilities to the vendor.
I wish there was legislation that allowed the government to fine vendors for security vulnerabilities like this where the amount scales based on how much user data they leaked. And it could function like other whistleblower systems where a researcher who spots a leak can report it to the government and collect 50%. That way, if the vendor says, "We're not paying you," the researcher can turn around and collect the money from fines.
Vendors routinely get researchers arrested for breaking into their computers as well.
Legality aside, there is no real market for this.
Data breaches of average people sell for quite a bit of money, often for phishing. I find it hard to believe no one would be interested in this.
Or any other dataset with a hyper targeted demographic.