Half of the files contain the 'unsafe' keyword? That doesn't seem like a good rewrite. What is the point of rewriting into Rust if ~half of your code is still unsafe?

Bun is fundamentally a boundary-heavy system, and it also rolls its own versions of a lot of things that people typically use via libraries, where the unsafe is hidden (no async, memory arenas, etc.). It also uses FFI heavily, which requires unsafe.

It also looks like the top 2 maintainers are currently actively working on getting the amount of unsafe down and it's going down quickly.

If the unsafe can be iteratively removed and the final code is of reasonable quality that seems like a sane strategy. Any large migration just needs to be doable incrementally so progress can be made.

1. Rewrite from zig to rust in as close to zig as you can.

2. Turn into idiomatic rust.

1. Get hired into a company where you have a solid bet on making multi-century lasting generational wealth (>$50,000,000).

2. Every waking moment do everything in your power to boost the company that might give you the ability to define the direction of technology for the rest of your life.

3. Use the only thing you have (bun) to help push you in this direction and do things to help boost LLM marketing (a technology that already deeply struggles to find customers and has to rely on welfare (lucrative government contracts) to make sales).

---

Honestly, I think this generation of tech workers in SF is more evil than those who worked at Google + Facebook in the early 10s.

> a technology that already deeply struggles to find customers

As far as I know it's the opposite, Anthropic struggles to satisfy demand, they have tons of paying customers and their customer base is growing fast.

Wow as far as you know? That settles it then! Just ignore this:

https://www.flyingpenguin.com/wheres-ed-anthropic-told-court...

So, your link shows that they probably have like $1 billion in sales per month (but they publicly overstated this by 30%), and that's the struggle to find customers?

There are tons of posts and reporting about Anthropic's problems with meeting demand, usage limits (on paid plans, especially during peak hours), fast growth (your link confirms that), and problems with infrastructure.

Some links:

https://uk.finance.yahoo.com/news/anthropic-throttles-claude...

https://techcrunch.com/2026/03/28/anthropics-claude-populari...

So the takeaway here is that they scaled to just over $5bn instead of $6.6bn in revenue in just a few years…? Still sounds like plenty of demand exists?

What does that have to do with rewriting from zig to rust??? This thread is what's pushing LLM marketing, not the rewrite itself.

If the rewrite is just a stunt, it will crash and burn whether or not we spend our free (or work) time writing comments. If there is any hype around this particular topic, it's happening here, not in the GitHub repo.

This is exactly the case here.

The author of Bun is a Thiel Fellow, so he's already been trained in The Way.

People are trying to wash away the recklessness of this rewrite by applying engineering principles the author themselves didn't apply. It's like trying to make sense of a certain president's words. There is a lot of analysis missing before this rewrite, during it, and after it. And given that Zig and Rust can interoperate with each other via C, it makes a wholesale rewrite even more bizarre.

I’m honestly confused. What is it that you think makes these workers “more evil” than Google and Facebook workers from the early 2010s?

Google and Facebook workers just made a lot of cash and mostly made everyone's life harder with Leetcode and bad interview processes; they didn't threaten and actively work to put millions of software engineers on the street.

> they didn't threaten and actively work to put millions of SE on the street

Programmers in the 90s weren't less evil, nor did they have a stronger moral compass. They simply didn't have the opportunity to reduce the need for their fellow developers on a massive scale. They (we) would have, had we had the chance.

They (we) did it to tons of other industries. And we collectively patted ourselves on the back, saying that automation is a good thing and we're the good guys for doing it and people who lost their jobs will adapt and maybe they should just learn to code.

Now it's happening to (some of) us and suddenly it's evil?

No. The point is: programmers are whores. We like to act all righteous on forums, but very very few of us care enough about the consequences of our code to do something about it.

We either don't think about it ("what could go wrong?"), don't care about it (eh), justify it ("I need to eat!!!", "I'm just following orders"), or actively embrace it ("It's the future!").

> Programmers in the 90s weren't less evil, nor did they have a stronger moral compass. They simply didn't have the opportunity to reduce the need for their fellow developers on a massive scale. They (we) would have, had we had the chance.

Nah. The fact that such opportunity wasn't available attracted a different sort of person.

[flagged]

And definitely not more evil than the workers at current Meta.

> What is the point of rewrite

To win a news cycle.

For the foreseeable future, AI market competition is not about which product can provide the most valuable utility to users. It's about which product can hold the protective aura of social media and investment zeitgeist while competitors buckle under the strain of unfulfilled hype and over-leveraging.

Utility, engineering, efficiency... these are all menial details for the winners to reluctantly iron out in 2035.

Bannon’s ‘flood the zone’ strategy applied to AI.

unsafe just means that you take responsibility for the safety of the code contained within. Calling into non-Rust libraries has to be wrapped in unsafe. Making syscalls has to be wrapped in unsafe.

Bun needs to interact with FFI code. This gets wrapped in unsafe blocks.

There are many places where a JavaScript interpreter and library would need to make unsafe calls and operations.

It doesn't literally mean the code is unsafe. It means the code contained within is not something that can be checked by the compiler, so the writer takes responsibility for it.

There are many low-level data-munging and other benign operations that a human can demonstrate are safe, but that need to be wrapped in unsafe because they do things outside of what the compiler can check.
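As a minimal illustration (not from the Bun codebase): reconstructing slices from a raw pointer and length is not something the compiler can verify, so it requires `unsafe`, and idiomatic code documents the reasoning in a SAFETY comment.

```rust
// Sketch of a human-verifiable unsafe operation: splitting a slice
// via raw pointers (roughly how the standard library implements
// slice::split_at internally).
fn split_at_raw(bytes: &[u8], mid: usize) -> (&[u8], &[u8]) {
    assert!(mid <= bytes.len());
    let ptr = bytes.as_ptr();
    // SAFETY: `mid <= bytes.len()` was checked above, so both ranges
    // lie within the original allocation and inherit its lifetime.
    unsafe {
        (
            std::slice::from_raw_parts(ptr, mid),
            std::slice::from_raw_parts(ptr.add(mid), bytes.len() - mid),
        )
    }
}

fn main() {
    let (a, b) = split_at_raw(b"hello world", 5);
    assert_eq!(a, b"hello");
    assert_eq!(b, b" world");
    println!("ok");
}
```

The function itself is sound for any caller; the `unsafe` block only marks the spot where the human, not the compiler, carries the proof.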

There's actually a good example of this in the rewrite [1], in `PathString::slice`. It performs an unsafe operation to return a slice that could be a use-after-free if the caller had not already guaranteed that an invariant holds. Following proper idiomatic Rust practice, Claude has added a SAFETY comment to the unsafe block to explain why it's safe: "caller guarantees the borrowed memory outlives this".

Now, normally, you'd communicate this contract to your API users by marking the type's constructor (PathString::init) as "unsafe", and including the contract in its documentation. Unfortunately in this case, this invariant does not exist - it appears to have been fabricated out of thin air by the LLM [2]. So, not only does this particular codebase have UB problems caused by unsafe code, the SAFETY blocks for the unsafe code are also, well, lies.

[1] https://github.com/oven-sh/bun/blob/63035b3e37/src/bun_core/...

[2] https://github.com/oven-sh/bun/blob/63035b3e37/src/bun_core/...

`PathString` worked the exact same way in our Zig code, with less visibility from the compiler & type system. And yes, it will be refactored heavily (or deleted overall) in the next week or so.

One potential way to solve this in a principled manner is to turn at least some "unsafe" annotations into ghost capability tokens that are explicitly threaded through the code and consistently checked by the compiler. Manufacturing the capability could itself be left as an unsafe operation, or require a runtime check of some kind.

You already see this in some cases, for example the NonZero<T> generic type can be viewed as a T endowed with a capability or token that just says "this particular value of type T is nonzero, so the zero value is available for niche purposes". But this could be expanded a lot, especially with some AI assistance.
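As a small sketch of the NonZero case mentioned above: the type is effectively a u32 carrying a compiler-checked token that says "this value is nonzero", which downstream code can rely on without re-checking.

```rust
use std::num::NonZeroU32;

// A function that takes NonZeroU32 never needs to handle zero:
// the capability is encoded in the type itself.
fn halve(n: NonZeroU32) -> u32 {
    n.get() / 2
}

fn main() {
    // The checked constructor returns None for zero; the unchecked
    // variant (new_unchecked) is an unsafe fn, as described above.
    let n = NonZeroU32::new(10).expect("nonzero");
    assert_eq!(halve(n), 5);

    // The freed-up zero value is used as a niche, so Option<NonZeroU32>
    // is the same size as a plain u32.
    assert_eq!(
        std::mem::size_of::<Option<NonZeroU32>>(),
        std::mem::size_of::<u32>()
    );
    println!("ok");
}
```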

This already happens all the time in rust, including in the standard library. The typical pattern is to define your CheckedType to be

pub struct CheckedType(UncheckedType);

i.e. its inner field is private. Then you only present safe constructors that check your invariant, and only provide methods that maintain the invariant.

For a concrete example, String in rust is a Vec<u8> with the guarantee that the underlying bytes correspond to valid UTF8. Concretely, it is defined as

```rust
#[derive(PartialEq, PartialOrd, Eq, Ord)]
#[stable(feature = "rust1", since = "1.0.0")]
#[lang = "String"]
pub struct String {
    vec: Vec<u8>,
}
```

You can construct a string from a vec of bytes via

fn from_utf8(vec: Vec<u8>) -> Result<String, FromUtf8Error>;

as well as the unsafe method

unsafe fn from_utf8_unchecked(vec: Vec<u8>) -> String;

Note here that there isn't a separate capability/token, though. That is typically viewed as bad practice in Rust, as you can always ignore checking a capability/token. See for example Rust's mutexes: Mutex<T> carries inside itself the data (T) that you want access to, so to get at the data you must call .lock(). There is a similar philosophy behind Rust's `Result` type: to get the data underlying it, you must handle the possibility of an error somehow (which can include panicking upon detecting the error, of course).
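A minimal runnable sketch of the newtype pattern described above (the type name and invariant are illustrative, not from any real codebase):

```rust
// The private inner field means the invariant ("only ASCII bytes")
// can only be established through the constructors below.
pub struct AsciiString(String);

impl AsciiString {
    /// Safe constructor: checks the invariant at runtime.
    pub fn new(s: String) -> Result<AsciiString, String> {
        if s.is_ascii() { Ok(AsciiString(s)) } else { Err(s) }
    }

    /// # Safety
    /// The caller must guarantee `s` contains only ASCII bytes.
    pub unsafe fn new_unchecked(s: String) -> AsciiString {
        AsciiString(s)
    }

    /// Methods may rely on the invariant without re-checking it.
    pub fn to_upper(&self) -> String {
        self.0.to_ascii_uppercase()
    }
}

fn main() {
    let a = AsciiString::new("hello".to_string()).unwrap();
    assert_eq!(a.to_upper(), "HELLO");
    assert!(AsciiString::new("héllo".to_string()).is_err());
    println!("ok");
}
```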

Yes, or you could review the code.

It’d only take an hour if you reviewed a million lines per hour

[Sorry guys, I couldn't review this code because I generated it all]

Even before AI, deterministic checks by compilers are almost always better than "review the code"

"review the code" as a solution will eventually fail and cause a problem, even pre-AI.

The entire point of unsafe blocks and SAFETY comments is that they are easy for humans to find and audit, but not compiler checkable. If it can be compiler-checked by some clever token system, then ... it's just plain safe rust, and you don't need to document any special safety invariants in the first place

Even when you can review the code, it's good to have the compiler check for you, for similar reasons that it's better to have CI check correctness on each code change vs. testing the code thoroughly one time and then being careful going forward.

> unsafe just means that you take responsibility for the safety of the code contained within.

In this case it means you delegated the responsibility to a notably flaky heuristic.

> a JavaScript interpreter

Bun is not a Javascript interpreter. But I do see the point.

Someone correct me if I'm wrong, but it's unlikely they wrote this initial Rust version and will leave it unchanged as-is. What's there now is a step in a long process, not the final destination.

The point is to serve as marketing for Claude. Absolutely nothing else.

Rust has a ton of other features besides safety, like exhaustive checking of enum variants and the ability to avoid null by using Option and Result.

Zig has these modern language features too fwiw.

I think the goal was to do a massive rewrite for Anthropic (they acquired bun) and show that rewriting projects from lang -> lang with Claude can reduce security vulnerabilities to help with the hype for an IPO.

I don’t use/know Rust so I can’t comment on the quality, but there was a public security review that found issues with the new Rust code: https://x.com/SwivalAgent/status/2054468328119279923

This is an interesting experiment but I’m skeptical of any claims of success by Jarred/Anthropic due to the incentive to hype agents. There’s probably a trillion dollars at stake with the IPO. And Anthropic seems to be developing this part of their business with Mythos and the super review features.

But I’d like to see the same experiment done on a project without so much riding on the story being a success.

There's a reasonable request to run the same analysis for the Zig version of the code as a comparison.

In lieu of that, it seems the Swivel devs ran an analysis on Tigerbeetle, one of the other major Zig projects, and found only 7 medium/low priority issues:

https://xcancel.com/SwivalAgent/status/2054063291266113994

To clarify, those are things an LLM considers to be issues, and LLMs can make mistakes.

Some of those are clear false positives, others I need to revisit tomorrow to say one way or another.

That sounds like a starting point and an honest translation. If it was originally unsafe and suddenly became safe immediately after the rewrite, it would mean they broke existing behaviors.