This has been a very long time coming, and the crackup we're starting to see was predicted long before anyone knew what an LLM was.

The catalyst is the shift towards software transparency: both the radically increased adoption of open source and source-available software, and the radically improved capabilities of reversing and decompilation tools. It has been over a decade since any ordinary off-the-shelf closed-source software was meaningfully obscured from serious adversaries.

This has been playing out in slow motion ever since BinDiff: you can't patch software without disclosing vulnerabilities. We've been operating in a state of denial about this, because there was some domain expertise involved in becoming a practitioner for whom patches were transparently vulnerability disclosures. But AIs have vaporized the pretense.
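The mechanics are simple enough to sketch. A real tool like BinDiff matches disassembled control-flow graphs, but even a toy function-level comparison (hypothetical function names and bytes below) shows why a patch is a disclosure: diff two builds and the changed function, i.e. the fix, stands out.

```python
import hashlib

def function_hashes(build):
    """Map each function name to a hash of its (toy) machine-code bytes."""
    return {name: hashlib.sha256(body).hexdigest() for name, body in build.items()}

def changed_functions(old_build, new_build):
    """Names of functions whose bytes differ between the two builds."""
    old, new = function_hashes(old_build), function_hashes(new_build)
    return sorted(n for n in old.keys() & new.keys() if old[n] != new[n])

# Hypothetical builds: function name -> raw bytes.
v1 = {"parse_header": b"\x55\x48\x89\xe5\xc3", "auth_check": b"\x31\xc0\xc3"}
v2 = {"parse_header": b"\x55\x48\x89\xe5\xc3", "auth_check": b"\x31\xc0\x85\xf6\xc3"}

print(changed_functions(v1, v2))  # ['auth_check'] -- the patch points straight at the fix
```

An attacker who sees `auth_check` changed knows exactly where to look in the pre-patch binary.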

It is now the case that any time something gets merged into mainline Linux, several different organizations are feeding the diffs through LLM prompts aggressively evaluating whether they fix a vulnerability and generating exploit guidance. That will be the case for most major open source projects (nginx, OpenSSL, Postgres, &c) sooner rather than later.
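A minimal sketch of what such a pipeline might look like (the keyword filter and prompt wording are entirely made up, not any real organization's tooling): cheaply pre-filter merged diffs for security-smelling changes, then wrap the survivors in an analysis prompt for an LLM.

```python
# Hypothetical triage step: filter diffs before spending LLM tokens,
# then build the analysis prompt. All heuristics here are illustrative.
SUSPICIOUS = ("memcpy", "kmalloc", "overflow", "bounds", "refcount",
              "use-after-free", "len", "size")

def looks_security_relevant(diff_text: str) -> bool:
    """Cheap keyword filter so not every diff goes to the model."""
    return any(tok in diff_text for tok in SUSPICIOUS)

def build_triage_prompt(diff_text: str) -> str:
    """Wrap the diff in a vulnerability-triage prompt (wording is made up)."""
    return (
        "You are reviewing a just-merged patch.\n"
        "1. Does this diff fix a memory-safety or logic vulnerability?\n"
        "2. If so, describe the pre-patch bug and how it might be triggered.\n\n"
        "--- DIFF ---\n" + diff_text
    )

diff = ("-    memcpy(dst, src, user_len);\n"
        "+    memcpy(dst, src, min(user_len, sizeof(dst)));")
prompt = build_triage_prompt(diff) if looks_security_relevant(diff) else None
```

The point is the economics: the filter is nearly free, the prompt is boilerplate, and the expensive part (reading the diff like a vulnerability researcher) is exactly what LLMs have commoditized.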

The norms of coordinated disclosure are not calibrated for this environment. They really haven't been for the last decade.

I'm weirdly comfortable with this, because I think coordinated disclosure norms have always been blinkered, based on the unquestioned premise that delaying disclosure for the operational convenience of system administrators is a good thing. There are reasons to question that premise! The delay also keeps information out of the hands of system operators who have options other than applying patches.

> It has been over a decade since any ordinary off-the-shelf closed-source software was meaningfully obscured from serious adversaries.

Probably goes without saying, but the last line of defense is not deploying your software publicly at all, and instead relying on server-client architectures to do anything. Maybe this will become more common as vulnerabilities get easier to detect and exploit. Of course it's not always feasible.

It has been annoying seeing my (proguard obfuscated) game client binaries decompiled and published on github many times over the last 11 years. Only the undeployed server code has remained private.

Interestingly I didn't have a problem with adversaries reverse engineering my network protocols until I was updating them less frequently than weekly. LLM assisted adversaries could probably keep up with that now too.

>Only the undeployed server code has remained private.

How easy do you think it would be for an LLM to build a decent emulator of the server in question just by observing what you send and what you get back as a response?

not sure why downvoted. server emulators will become faster to make. protocol analysis will become faster as well.

Because while you could get something that drives a dumb interface, by moving the work and data to the server it's not available for the emulation software to use.
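A toy record-and-replay emulator (all names hypothetical) makes the limitation concrete: built purely from observed traffic, it can only answer requests it has already seen, because the actual computation and data never left the server.

```python
class ReplayEmulator:
    """Toy server emulator built purely from observed request/response pairs.

    Illustrates the limit of traffic-based emulation: anything the real
    server computes from private data can't be reproduced, only replayed.
    """

    def __init__(self):
        self.observed = {}

    def record(self, request: bytes, response: bytes) -> None:
        """Store one observed exchange."""
        self.observed[request] = response

    def handle(self, request: bytes):
        """Replay a known response, or give up on anything unseen."""
        return self.observed.get(request)

emu = ReplayEmulator()
emu.record(b"GET /price?item=7", b'{"price": 499}')

known = emu.handle(b"GET /price?item=7")    # replayed from observation
unknown = emu.handle(b"GET /price?item=8")  # None: that logic stayed server-side
```

Filling in the `None` cases is where the real reverse-engineering work (or LLM-assisted inference of the contract) comes in.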

If the contract is well defined, the LLM can infer what its purpose is, its implementation, possibly even your secret sauce. There is no software moat anymore.

yes, this is what i was trying to say. it's quite common on older client-server games to do this sort of thing. powerful ai models will just make the work to recreate/emulate servers faster.

Except that emulating what is seen is surprisingly useful to find attack vectors. As a single deeper datapoint, one can look at more than just baseline behavior and delve into timing details to further refine implementation guesses.

> BinDiff: you can't patch software without disclosing vulnerabilities

That’s why Microsoft has been obfuscating its binary builds for at least the last two decades, so that even two builds from the same source would produce very different blobs.

Sounds dubious, do you have a citation? The disassembly looks very straightforward for a lot of Windows code.

They're not encoded, but the code blocks are shuffled. That's why disassembly does look straightforward, but it used to thwart BinDiff at the time.
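A sketch of the idea, with made-up "basic blocks": shuffling block layout between builds of identical source makes a naive byte-level comparison see changes everywhere, while an order-insensitive comparison still shows the builds are the same. (Real diffing tools match control-flow graphs, which is more robust than either extreme here.)

```python
import hashlib

def layout(blocks, order):
    """One build: the same basic blocks, emitted in a given layout order."""
    return [blocks[i] for i in order]

# Hypothetical basic blocks of one function.
blocks = [b"push rbp", b"cmp eax,0", b"jz fail", b"call check", b"ret"]

build_a = layout(blocks, [0, 1, 2, 3, 4])  # original layout
build_b = layout(blocks, [0, 3, 1, 4, 2])  # shuffled layout, identical source

# Byte-level view: two builds of the same source look completely different.
raw_differs = b"".join(build_a) != b"".join(build_b)

# Order-insensitive view: the multiset of block hashes is unchanged.
same_blocks = (sorted(hashlib.sha256(b).hexdigest() for b in build_a)
               == sorted(hashlib.sha256(b).hexdigest() for b in build_b))
```

That gap is why layout shuffling raised the bar for byte-oriented diffing without actually hiding anything from a tool that normalizes away layout.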

That sounds a lot like US9116712, but I don't think it's ever been publicly said that Windows does this.

If I understand correctly, that's just randomness that comes from parallel compiling and linking.

If you're saying there is a whole build step just for scrambling blobs, I would be very surprised.

What made you believe this is the case? Any examples/links/etc.?

It was part of our Windows build process when I was at Microsoft. I only assumed that they would keep doing it, but they might well have dropped the practice since.

How are they obfuscated?

See my sibling comment.

So what materializes now is basically tech debt returns on the "move fast and break things" paradigm?

> based on the unquestioned premise that delaying disclosure for the operational convenience of system administrators is a good thing. There are reasons to question that premise!

Care to mention these reasons?

With "convenience of system administrators", I'm guessing you mean that there's a patch available that sysadmins can install, ideally before the vulnerability is disclosed? What else are sysadmins supposed to do, in your opinion? Fix the vulnerability themselves? Or simply shut down the servers?

With the various recent copyfails, it was at least possible to block the affected modules. If that were not the case, what would you have done, as a sysadmin?

Many vulnerabilities seem to be in code paths for rarely used features. They can often be disabled.

I believe the premise that the cost of identifying vulnerabilities via diffs is going down over time raises the question: what do our processes need to look like if simply making the patch public is the disclosure?

Current coordinated disclosure practices have a dependency on patching and disclosure being separate, but the gap between them seems to be asymptotically approaching zero.

Right, all I'm saying is that we were asymptotically close many years ago; all that's changed is that nobody can kid themselves about it anymore.

The actual policy responses to it, I couldn't say! I've always believed, even when there was a meaningful gap between patching and disclosing, that coordinated disclosure norms were a bad default.

What process or mechanism would you prefer to use instead of coordinated disclosure?

I always understood the business reasons that brought about coordinated vulnerability disclosure & I've been forced to toe this line at employers, but I've always been firmly in the full disclosure camp. I am so ready for this.

You’re obviously one of the most knowledgeable people on this topic around here.

What would the best solution be? And where do you believe the industry is headed (which may very well be something other than the best solution)?

I can’t think of anything other than improving operations, but given the state of the industry, this seems like a pipe dream.
