I can't quite make out if this is new or not. The attack vector here seems congruent with a similar exploit from a couple of months ago [1].
But it still might be an open threat. On the email thread Jens seems to think that this is already patched and in stable, and he also points out that for this exploit to work (as written in the article) you already need escalated privileges [2]. Catchy title, though.
[1] https://snailsploit.com/security-research/general/io-uring-z...
[2] https://seclists.org/oss-sec/2026/q2/448
> “and is writable with CAP_SYS_ADMIN”
Am I reading this wrong or is this just a way of executing an arbitrary binary with uid=0 if you have both CAP_NET_ADMIN and CAP_SYS_ADMIN?
If you can write modprobe_path, is it really news that you can find a way to execute code?
No, you can grant yourself this inside an unprivileged user namespace. `unshare -Ur capsh --print` lists the capabilities inside a user namespace and demonstrates that it has both CAP_SYS_ADMIN and CAP_NET_ADMIN.
Almost all distros allow unprivileged user namespaces, and in my opinion this is the right decision, because they're important for browser sandboxing which I think is more important than LPEs.
I don't think namespace CAP_SYS_ADMIN grants you access to write non-namespaced sysctls like modprobe_path.
You're probably right, but that seems like the less important part of this. At that point you've already got an out-of-bounds write. Another comment speculated that you could use PageJack as an alternative exploit path once you have that primitive: https://news.ycombinator.com/item?id=48069623
Right. `CAP_SYS_ADMIN` is for all intents and purposes equivalent to root.
What is happening? I see multiple outages and CVEs being reported on HN's front page. I've never seen this many security/incident-related posts on HN's front page.
Some combination of reporting bias, given concerns about LLM security capabilities, and actual new vulnerabilities found with LLM assistance. Even if the exploits and outages are unrelated to LLMs, I'm certainly thinking about whether Claude could build these things (or if actors already have).
> What is happening?
Slowly at first, and then suddenly. AI-assisted anything follows this trend. As capabilities improve, new avenues become "good enough" to automate. Today it's security.
I believe a good portion of the CVEs hitting the front page are there more because they're AI-related (found partially or wholly by AI) and make for quick upvotes.
In some sense, I wonder if non-open-source is "safer" since LLMs can't mass scan the code for exploits.
Maybe for a while, but there's nothing stopping LLMs from examining disassembler output.
It's actually the perfect evergreen content to discuss on HN in an age where so much else is AI generated.
AI is happening.
In each recent case?
AI assistance was explicitly disclosed on yesterday's. Today's lists Claude as one of two contributors on the GitHub Pages site, at least, so it's also very likely.
Agents are capable of finding this kind of stuff now and people are having a field day using them to find high-profile CVEs for fun or profit.
I wonder where the Rust naysayers are hiding now
C code is broken - period
Automated vulnerability discovery via LLM.
Anyone care to share which models and which prompts actually lead to finding these kinds of vulnerabilities? Or the narrowing-down workflow that gets an LLM to discover them? Surely just telling Claude "Find all vulnerabilities in this project LOL" isn't enough? I hope?
The Anthropic researchers have said their flow is as simple as:
1. Pick a file to seed as a starting place.
2. Ask the LLM (in an agent harness) to find a vulnerability by starting there.
3. If it claims to have found something, ask another one to create an exploit/verify it/prove it or whatever.
4. If both conclude there is a vuln, then with the latest models you almost certainly found something real.
Just run it against every file in a repo, or select a subset, or have an LLM select files with a simple "what X files look likely to have vulns?".
So basically yes, it is that simple. It's just a matter of having the money to pay for the tokens.
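For illustration, here's a minimal sketch of that two-pass loop. Everything in it is hypothetical scaffolding: `ask_agent` stands in for whatever agent harness you use and is not a real API.

```rust
// Hypothetical sketch of the "find, then independently verify" workflow
// described above. `ask_agent` is a stand-in for a real agent harness.
fn ask_agent(prompt: &str) -> Option<String> {
    // Placeholder: run an LLM agent with tool access on `prompt` and
    // return its report, or None if it found nothing.
    unimplemented!()
}

fn scan_repo(files: &[&str]) -> Vec<(String, String)> {
    let mut findings = Vec::new();
    for file in files {
        // Pass 1: one agent instance hunts for a vuln, seeded at `file`.
        if let Some(claim) = ask_agent(&format!(
            "Starting from {file}, look for a memory-safety vulnerability."
        )) {
            // Pass 2: a fresh instance must reproduce/prove the claim.
            if let Some(proof) = ask_agent(&format!(
                "Independently verify this claimed vulnerability by \
                 producing a crash or working exploit:\n{claim}"
            )) {
                findings.push((claim, proof));
            }
        }
    }
    findings
}
```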
Thanks for the reply. Pretty remarkable.
Everyone was talking about how Mythos was overblown marketing, and while it may be, they missed the forest for the trees. Capabilities have been escalating for a year now and we're at the point of widespread impact. I don't suspect we'll see a slowdown for a long time.
I agree. It is not like Mythos or other LLMs are insanely smart/superhuman. Many of these vulnerabilities could be discovered fairly easily by trained human experts as well. The problem is more that it requires an insane amount of attention and time of highly-paid experts to shake out these issues vs. an LLM that never gets tired and can analyze a large amount of code at low cost.
Linus' law was wrong because there were never enough (qualified) eyeballs to check the code. LLMs provide an ample supply of eyeballs (though it's not a benefit to open source, since proprietary developers can use the same LLMs).
Same applies to them being good enough to program, but many are so focused on source code generation that they don't get the whole picture.
Thanks to agents and tool calling, there are now business cases that can be fully handled by AI tooling; the next step after microservices, serverless, and whatnot.
Naturally with a much smaller team than was required previously.
A mix of AI and hybrid warfare.
... there's also a bit of a frequency illusion factor.
Perhaps it was the prior quiescent period that was the anomaly.
Desktop and server vulnerabilities are one thing. At least many are actively maintained and will get patched. My concern is all the common, cheap internet firewalls and routers out there running old software and kernels. Many or most will never be patched. I have some Ubiquiti boxes that are long out of support and run old kernels, for instance. The only hope is that they don't expose anything that gets hit.
This kind of post really shouldn't require client-side JS (from a third-party domain) to read...
static markdown version: https://raw.githubusercontent.com/ze3tar/ze3tar.github.io/9d...
big ups pimp
CAP_NET_ADMIN / CAP_SYS_ADMIN is required for this, so this would be "not as bad" as the others.
Also "The page pool is only created on a real ZCRX-capable NIC (mlx5 ConnectX-6+, Intel E800, NFP)"
It could work for container escape?
Containers, even with a root user, are often stripped of these capabilities unless run with --privileged.
Let's see... That's 4 Linux LPEs in the last 10 days?
Copy Fail [1]
Copy Fail 2: Electric Boogaloo [2]
Dirty Frag [3]
And now this...
[1]: https://copy.fail
[2]: https://github.com/0xdeadbeefnetwork/Copy_Fail2-Electric_Boo...
[3]: https://github.com/V4bel/dirtyfrag
Aren't CF2 and DF the same exploit?
io-uring is a security nightmare. Constant privescs and a powerful primitive for syscall smuggling. Worth considering disabling it outright (already the case for most containers afaik).
At one point, Google disabled io_uring on its production servers (https://security.googleblog.com/2023/06/learnings-from-kctf-...) - I don't know whether this is still true, though. Perhaps a Googler can confirm.
Super curious about this one as well; last I heard they've been enabling it slowly.
Interesting, I haven't tested this myself but intuitively I think that a 4 byte OOB write is plenty for a data-only attack like [PageJack](https://i.blackhat.com/BH-US-24/Presentations/US24-Qian-Page...), so I don't think hardening against the KASLR leaks discussed in OP would necessarily save you from this attack.
How many systems have the relevant NICs, and followed the non-automatic setup steps in https://docs.kernel.org/networking/iou-zcrx.html, and are not running within a VM/container disabling io_uring?
This seems on the low impact end of the numerous historical io_uring issues.
Interesting and important all the same.
> "No way to prevent this", Says Only Language Where This Regularly Happens
also see lib0xc etc.: https://news.ycombinator.com/item?id=47978834
NOTE: This is a design document and the feature is not available for users yet.
https://clang.llvm.org/docs/BoundsSafety.html
Obviously the way to prevent this is by bounds checking, which is literally in the `770594e` patch. It's just a bug and they happen routinely in all languages. Since this is doing pointer arithmetic, it could just as easily happen in unsafe Rust, for example.
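To make that concrete, here is a minimal sketch (not the kernel code) of how the same class of bug looks in unsafe Rust: raw-pointer arithmetic bypasses the bounds checks exactly as in C.

```rust
fn main() {
    let mut ring = [0u32; 8];

    // Safe Rust: an out-of-range index panics instead of corrupting memory.
    // ring[8] = 1; // would panic: index out of bounds

    // Unsafe Rust: pointer arithmetic has no such net.
    let idx = 8usize; // off by one past the end
    unsafe {
        // UB: writes one element past the buffer, just like the C bug.
        ring.as_mut_ptr().add(idx).write(0xdeadbeef);
    }
}
```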
Like they said, "no way to prevent this" (kind of bug from happening again).
Static analysis and other tools can find this, but they're expensive; wonder what the kernel team has access to?
If static analysis could actually find these issues with a reasonable false positive rate, the companies behind them would be running them on Linux to get the publicity of having found the issues like all the AI companies are doing now. Imo the good static analysis heuristics are already built into compilers or in open source linters.
The cheap, low-hanging-fruit lint rules have been added to today's C/C++ compilers. But these rules can be fragile, depending on the level at which the static analysis operates: source-level textual pattern matching versus an AST/parse tree.
Possible problems within a function should be discoverable.
This particular bug would be hard to discover for a typical linter unless they knew/remembered that there are two execution paths for cleanup of a given element.
If not static analysis, what would AI tools be considered? They're operating off the same source code.
Also, nice Onion reference by OP.
"static analysis" is usually deterministic rules you can e.g. put in CI. AI is also somewhat dynamic in that it can execute commands to try stuff out. The best AI vuln finding harnesses work that way, by essentially putting the AI inside of a fuzzer-like environment and telling it to produce a crash.
It's a reference to Xe Iaso's blog (e.g. https://xeiaso.net/shitposts/no-way-to-prevent-this/CVE-2025...), which is itself a reference to The Onion.
It's possible I had seen that blog post and not remembered! I was intending to reference the Onion though (and even googled to make sure I had the wording right), but seeing someone else make the same joke and forgetting is certainly something I would do
Coverity scans several open source projects for free. see https://scan.coverity.com/faq and https://scan.coverity.com/projects
see https://scan.coverity.com/projects/linux for the linux-specific scan results - you need to create an account to view the reported defects.
The past couple of weeks isn't a good look for them, given the disclosures of defects found in Linux and Firefox.
Linus himself wrote a static analyzer. https://en.wikipedia.org/wiki/Sparse
There are other free ones, I don't know if they're run as a matter of course.
Technically, the kernel team is sufficiently competent to design and build bespoke tools for themselves. It's probably a question of risk assessment and priorities.
Sure, but with unsafe Rust you have a very clear marking for the section of code that requires additional care and attention. It is also customary to include a "SAFETY" comment outlining why using unsafe is OK there.
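A toy example of that convention (not from any real codebase): the unsafe block is small, clearly marked, and carries a justification a reviewer can check.

```rust
fn first_half(buf: &[u8]) -> &[u8] {
    let mid = buf.len() / 2;
    // SAFETY: `mid` is computed as len / 2, so 0 <= mid <= buf.len(),
    // which satisfies the precondition of `get_unchecked`.
    unsafe { buf.get_unchecked(..mid) }
}

fn main() {
    assert_eq!(first_half(&[1, 2, 3, 4]), &[1, 2]);
}
```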
You actually kind of don't, I use like a zillion crates which have unsafe Rust in them and it's not like I'm sitting here reading every single line of their code. I like Rust for various reasons, but its memory safety is (imo) overstated, especially when doing low-level stuff.
Almost all Rust (95%) is safe Rust. You can opt out of array bounds checks with unsafe { array.get_unchecked(idx) } instead of just typing array[idx]. But I can't remember the last time I saw anyone actually do that in the wild. It's not common practice, even in most low-level code.
Rust is bounds checked by default. C is not. Defaults matter because, without a convincing reason, most people program in the default way.
But one would have to explicitly choose to use unsafe Rust for this instead of ordinary safe Rust. And safe Rust has no particular difficulty writing to slots in an array or slice or vector specified by their index.
except nearly everyone uses unsafe rust
No, they really don't. 95% of Rust is safe Rust [1].
Also, unsafe Rust doesn't remove bounds checks; arr[idx] is bounds checked in every context.
You can opt out of array bounds checking by writing unsafe { arr.get_unchecked(idx) }. But that's incredibly rare in practice.
[1] https://cs.stanford.edu/~aozdemir/blog/unsafe-rust-syntax/
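A quick sketch of the distinction: the `unsafe` keyword doesn't change how indexing behaves; only the explicit `*_unchecked` APIs skip the check.

```rust
fn main() {
    let arr = [10, 20, 30];
    let idx = 2;

    let a = arr[idx];            // bounds checked
    let b = unsafe { arr[idx] }; // still bounds checked (the block is redundant)
    let c = unsafe { *arr.get_unchecked(idx) }; // only this skips the check

    assert_eq!((a, b, c), (30, 30, 30));
}
```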
> 95% of rust is safe rust.
Based on the raw number of assorted crates, which has no bearing on kernel code. The more relevant question is, can a performant, cross-architecture, kernel ring-buffer be written in safe Rust?
Hubris, an embedded RTOS-like used in production by Oxide, has ~4% unsafe code in the kernel last I checked. There’s a ring buffer implementation that has one unsafe, for unchecked indexing: https://github.com/oxidecomputer/hubris/blob/master/lib/ring... (this of course does not mean that it is the one ring buffer to rule them all, but it’s to demonstrate that yes, it is at least possible to have one with minimum unsafe.)
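As a rough illustration (not Hubris's code), the basic shape is expressible in entirely safe Rust; the single unchecked index is a performance choice, not a necessity:

```rust
/// Fixed-capacity, overwrite-on-wrap ring buffer in safe Rust.
struct RingBuf<T, const N: usize> {
    slots: [Option<T>; N],
    next: usize, // index of the next slot to write
}

impl<T, const N: usize> RingBuf<T, N> {
    fn new() -> Self {
        Self { slots: std::array::from_fn(|_| None), next: 0 }
    }

    fn push(&mut self, item: T) {
        // `next` is always kept < N, so this index never panics.
        self.slots[self.next] = Some(item);
        self.next = (self.next + 1) % N;
    }
}

fn main() {
    let mut rb: RingBuf<u32, 4> = RingBuf::new();
    for i in 0..6 {
        rb.push(i); // wraps and overwrites the oldest entries
    }
}
```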
It’s always a way lower number than folks assume. Even in spaces that have higher than average usage.
I doubt it, but you can probably get pretty close.
This is something a lot of people misunderstand about unsafe rust. The safe / unsafe distinction isn't at the crate level. You don't say "this entire module opts out of safety checks". Unsafe is a granular thing. The unsafe keyword doesn't turn off the borrow checker. It just lets you dereference pointers (and do a few other tricks).
Systems code written in rust often has a few unsafe functions which interact with the actual hardware. But all the high level logic - which is usually most of the code by volume - can be written using safe, higher level abstractions.
"Can all of io_uring be written in safe rust?" - probably not, no. But could you write the vast majority of io_uring in safe rust? Almost certainly. This bug is a great example. In this case, the problematic function was this one:
At a glance, this function absolutely could have been written in safe rust. And even if it was unsafe, array lookups in rust are still bounds checked.
"unsafe Rust" is not a binary; you don't opt into it for every single line of code. Given that the entire premise behind the idea that using C instead of Rust is fine is that people should be able to pay close attention and not make mistakes like this, having the number of places you need to look be a tiny fraction of the overall code that's explicitly marked as unsafe is a massive difference from C, where literally every line of the code could be hiding stuff like this.
> except nearly everyone uses unsafe rust
Really? Why? I've not used Rust outside of some fairly small efforts, but I've never found a reason to reach for unsafe. So why is "nearly everyone" else using it?
Let's say you want to call win32 (or Mac) OS functions: all of a sudden you're doing all kinds of wonky pointer stuff, because that's how these operating systems have been architected. Doing unsafe stuff is pretty inevitable if you want to do anything non-hello-world-ish.
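A small sketch of why FFI drags you into unsafe: any extern call is unsafe because the compiler can't verify the foreign function's contract. (`getpid` here is a POSIX libc function; the point generalizes to win32/Mac APIs.)

```rust
extern "C" {
    // Declared by us, resolved against libc at link time.
    fn getpid() -> i32;
}

fn main() {
    // SAFETY: getpid takes no arguments and has no preconditions.
    let pid = unsafe { getpid() };
    println!("pid = {pid}");
}
```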
> Doing unsafe stuff is pretty inevitable if you want to do anything non-hello-world-ish.
So the vast majority of Rust projects involve writing at least one unsafe block? Is that really your claim?
And even if you do end up writing an unsafe block, that should be a massive flag that the code in said block should deserve extra comments on why it is safe, and extra unit tests on verifying that it does not blow up.
How do you know the unsafe operation is safe? What are the preconditions the code block has? Write it down, review it, test it.
Exactly; I feel like a lot of people seem to misunderstand what Rust is trying to solve. It's fundamentally not trying to make unsafe code impossible; it's making the number of places you need to audit it a tiny fraction of your codebase compared to needing to audit the entirety of a C or C++ codebase. When I'm doing code reviews, you'd better believe I'm going to spend some extra time on any unsafe block I see to figure out if it's necessary and if so, if it's actually safe safe (with the default assumption for both of those being that they're not until I can convince myself otherwise).
The thing is, you can actually write quite good C code (see the OpenBSD project). The power of C is that it's pragmatic. It lets you write code while taking full responsibility for being a responsible person. To err is human, but we've developed a set of practices to handle this (like making sure the gun is unloaded and the safety is on before storing it, to avoid putting holes in feet).
I like type checking and other compile-time checks, but sometimes they feel very ceremonial. And all of them are inference-based, so they still rely on the axioms being right and on the chain of rules not being broken somewhere. And in the end they are annotations, not the runtime algorithm.
> To err is human
Yes, which is precisely why I write in Rust, because the compiler errs less than I do.
It may, but it still requires careful annotations. So you should hope that you have not made an error there and described the wrong structure for the code.
A tiny fraction of programs need to use win32 or Mac OS functions beyond the standard library or other safe wrappers for said functions.
Making use of win32 functions doesn't turn off bounds checking in your rust code.
So what? Just because you used the keyword `unsafe` to call an unsafe API does not mean that you are going to use unsafe pointer access to write to a vector.
That's not prevention. That's remediation.
Surely nobody could create a better language in 50 years. Surely we can't fix these issues.
And you see a lot of other languages being used to create operating systems with complicated multiprocessor and locking semantics?
> Affected: Linux 6.15 – 6.19 [...] Fix: commit 770594e (not yet in any stable branch at time of writing).
Is it considered good practice to publish a vulnerability not yet patched in any stable branch?
I first read this from the author's posting to oss-security. Turns out that the author did agree to revise the blog post for the "admin cap for root shell" part [^0]. [^1] would probably tell more.
The title looks like clickbait to me.
[^0]: https://www.openwall.com/lists/oss-security/2026/05/08/10
[^1]: https://www.openwall.com/lists/oss-security/2026/05/08/14
Do most servers need this? Or can most of us just `sysctl -w kernel.io_uring_disabled=2`?
So this is another CVE? Or am I misreading this one? "Copy‑fail", "DirtyFrag", now "IUrinegOnYou :)"?
Joke aside, we'll see more CVEs in the coming months, and in a sense that's good: it leaves less maneuvering room for bad actors (especially those selling them to the highest bidder).
If this many are public right now, what does that say about the dark matter of private ones? What's the typical public-private rate for this sort of thing/can someone help me calibrate my base rate expectations?
What’s our prior for p(doom) today…?
High-privilege access required (CAP_NET_ADMIN / CAP_SYS_ADMIN); containers/sandboxing win once again.
Can we make sandboxing the new default now? Flatpak does a good job, but we're still pretty far away for apt/yum/pacman installed packages. AppArmor was a decent step forward, but clearly not enough.
Yes on Android, iDevices, macOS, Windows (UWP, Win32 boxing), Qubes OS, but it remains a controversial topic in GNU/Linux land.
Another one.
Linux is falling apart faster than it can assign these CVEs.
Falling apart? You mean getting stronger? Every single one of these is an existing hole being patched. It isn't making new holes.
As other people said in this thread: so many devices won't be patched. And that can easily lead to users and manufacturers moving away from Linux. Linux is in a glass house.
Government agencies probably already have half of these exploits in their private toolbox for years now. Finding and patching them is good, but there probably needs to be some systematic change to prevent them rather than just patching bugs when they get found.
Something something microkernels + capability-based security.
> for years now
ZCRX is less than half a year old. I'm so tired, boss.
I remember when people used to joke about Windows security and say something like that would never happen on Linux. Well...
Linux is "falling apart" because it's the highest-profile open source project people can point LLM agents at to find CVEs. It'll come out the other end of this hardened by all of the attention it's getting, but the next few months/years will be... bumpy.
perhaps this will lead to better AppArmor and SELinux defaults?
People will just turn SELinux off rather than have to go through the horrible tooling when it breaks a regular use case.
It is enabled by default on Android, and only developers can change it temporarily via an ADB session.
I do think SELinux is a good example of how robust software with poor UX/DX gets undermined by that poor UX/DX. Although I do wonder if AI can help with it?
There is also the Android way: this is how it goes, fix your apps.
How's BSD doing? How about Amazon Linux?
Amazon Linux is a Linux distro? Though, yes, I would like to know how the BSDs are doing.
Yes, it's a fork of Fedora. https://docs.aws.amazon.com/linux/al2023/ug/what-is-amazon-l...
FreeBSD is getting piles of security updates lately too. Not sure about the other BSDs.
And Windows?
Pray to God no one ever lets an AI agent run loose on the various leaked Windows source code dumps.
Given Windows' absurd amount of backwards compatibility, chances are pretty high that there are a lot of sleeping dragons buried inside even the modern Windows 10/11 kernel and userland that date back to code and issues from the 90s - code where half the people who worked on it have probably not just departed Microsoft but departed the living in the meantime.
While true, since MinWin and OneCore most of that code has been moved around.
Also, contrary to Linux, Windows 11 (optionally on W10) uses sandboxing for the kernel and drivers.
Windows has kept getting mitigations since Windows XP SP2, and Microsoft has security teams whose day job is to attack Windows.
They have also been promoting Copilot for C and C++ code review for some time now.
While it won't stop all attacks, it is better than the whole "UNIX is safer than Windows" attitude from the '90s; it turns out it is a matter of how much money is put into it.
If you want real safety above anything else, look into Qubes OS with its sandboxing over everything, or mainframe systems like Unisys ClearPath MCP, with NEWP as the systems language and managed environments.