Still writing the blog post about this. Will share more details.
For where this is coming from, skim the bugfixes in the Bun v1.3.14 and earlier release notes. Rust won’t catch all of these - leaks from holding references too long and anything that re-enters across the JS boundary are still on us. But a large % of that list is use-after-free, double-free, and forgot-to-free-on-error-path, which become compile errors or automatic cleanup.
You, nine days ago[0]:
> I work on Bun and this is my branch
> This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.
Maybe... it wasn't such an overreaction?
[0]: https://news.ycombinator.com/item?id=48019226
I'm really out of the loop here, so maybe you can help me answer a question: why is HN unhappy about this rewrite? Why are people writing here almost as if they feel betrayed by Bun being rewritten from Zig into Rust?
I genuinely don't get it. I've been following this Bun stuff a bit but I don't understand where the HN sentiment is coming from.
Because in the software world, especially before 2022, ownership and stability have been valued. People like using things that do not randomly start breaking more often after every new release, and if things break, there is a human who knows exactly why it broke and what's the best way to fix it. Businesses would not want their losses to be attributed to an AI rewriting an entire codebase. AI owns nothing, not even the bugs which it produces. I would not want my SaaS to have downtime because a JavaScript runtime it depends on decided that they had to market their LLM by rewriting years of code recklessly.
People are not betrayed by a rewrite. They are betrayed by an LLM rewriting with minimal supervision, fast-tracked to a merge within 9 days of commencement.
To the contrary, I do not understand how we have become so insensitive towards stability since the LLM era began. Why is unbreakable code no longer the goal, but a truckload of generated code is?
> Businesses would not want their losses to be attributed to an AI rewriting an entire codebase. AI owns nothing, not even the bugs which it produces. I would not want my SaaS to have downtime because a JavaScript runtime it depends on decided that they had to market their LLM by rewriting years of code recklessly.
I don't know how else to say this but "Tough Shit"? Businesses are building their entire enterprise on the volunteer work donated by the free software community (or given away for free by some other company solving its own problems).
If you don't want 'your' SaaS to have downtime based on somebody else's whims, then fucking pay for your own developers (or your own AI) to build your SaaS platform in house. That's what IBM did in the 1970s, and nothing except market pressure is stopping you from doing it today.
I'm sorry for the vulgarity but this entitled attitude of businesses toward FREE SOFTWARE GIVEN TO THEM FOR NO MONEY is infuriating. If the electric company decided to give your company free power on windy days, would you then get angry that they installed a new model of turbine?
> Because in the software world, especially before 2022, ownership and stability have been valued.
Stability in the JS ecosystem was never valued.
The unhappiness is primarily stemming from Bun’s ownership by Anthropic - HN sees this as Anthropic using an OSS project for reckless marketing stunts.
For the record I don’t believe it’s a stunt, it’s ridiculous to me - everyone’s just seeing what they want to see out of sheer hate for anything Anthropic does.
In any case if the rewrite is really as reckless as many in this thread claim, we will see Bun collapse in on itself with a 1M LOC codebase the core team doesn’t understand, or rollback to Zig. So we don’t need to have a flamewar over it, time will answer the question.
[flagged]
Vibe coding a Rust rewrite of a widely used tool is basically catnip for the HN crowd.
Not if you use that tool, then it's just scary.
I would think the Zig implementation with 500+ issues on Bun's GitHub tracker mentioning "segfault" would be even scarier.
My read is it's less the rewrite and more the messaging around the rewrite. Nine days between "you're over-reacting" and merge is surprising, to say the least. Sure will be interesting to see that blog post!
The context nobody is mentioning is this came shortly after Bun forked Zig in the name of optimization, but then a Zig maintainer came out and basically said they (Bun) don't know what they're doing, or else they would have known that wasn't an effective optimization.
It outwardly seemed like they forked Zig for a flashy headline, were called out, then immediately started moving to Rust. This, combined with being bought by Anthropic, and plugging vibe coding the whole way, just gives the impression of random and chaotic technical decisions, which is not what people want in software their business depends on.
https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...
My read is that it just seems a bit reckless doing a full rewrite so quickly.
My read: if the code has a comprehensive feature test suite, a performance test suite (how long a function takes), and a linter with readability guidelines (e.g. cyclomatic complexity; no code duplication), and the LLM rewrite passes all three, then it should be fine. But I think that in the real world only the first one (functional tests) exists.
Maybe Jarred can fill in here
posting my read (since it differs so much from the others')- there's a 'holy war' being waged by people that think LLMs shouldn't do full rewrites of software. There are various reasons people think this (think LLMs are parrots that make slop and are incapable of writing good code, have environmental concerns, or are angry that software licenses can be circumvented). I call it a 'holy war' because I think most see our current trajectory as a bit inevitable and have a strong urge to proselytize their views and chide maintainers that use LLMs in ways they don't like.
Very similar angry comments happened with the discussions of the Chardet rewrite, next.js/vinext, and JSONata/gnata if you want to look at this in context.
You're not alone in voicing this, another (now dead) comment did it earlier too with a bit more of an emotional response (https://news.ycombinator.com/item?id=48134229).
Still, do you folks never try something to see how you feel about it, then choose to go one way or another? I'm not sure why it's so hard to see that it was an overreaction at the time, because it was an experiment; then at one point it stopped being an experiment, and now they've chosen to actually run with it.
Is this not a common occurrence for other people? Personally I change my mind all the time, especially based on new evidence, which usually experiments like this surface, I'm not sure I understand the whole "You said X some days ago" outrage that seems to cause people's reaction here.
Yes, sure, it's OK to change your mind. But don't you think that, in retrospect, the people Jarred accused of "overreacting" weren't actually overreacting?
No, what we knew then is still what was known then. Today is different, and seemingly they've committed to the rewrite, so now it makes sense that people have strong feelings about it, as it's no longer just an experiment.
> so now it makes sense that people have strong feelings about it, as it's no longer just an experiment.
It also makes sense to have strong feelings when you're able to pattern match well enough to predict something will happen despite others trying to convince you that your predictions are incorrect.
It's not overreacting when you correctly predict the future just because others couldn't. In the same vein, the idea that "everyone is out to get you" is not called paranoia when there are people actually out to get you. That's better called being observant.
Some of those who predicted correctly might also have overreacted, but I believe that the majority understood that to be a blanket statement about prediction as a whole vs any specific individual reaction.
“Nobody could have seen this coming…”?
Well apparently a lot of people did. Maybe Jarred didn’t, maybe you didn’t, but most people correctly predicted what was coming.
See what coming?! I really don't understand what's going on here. Correctly predicted what, that Bun was being rewritten into Rust? I'm not sure anyone doubted that, all the work they did was public???
What on earth is going on here?
> I'm not sure anyone doubted that, all the work they did was public???
https://news.ycombinator.com/item?id=48019226
> This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.
> What on earth is going on here?
With the nearly complete PR containing the Rust port, a number of people predicted that it was going to happen. They were assured it was unlikely to happen, and then they were accused of overreacting over effectively nothing. When those same people, who were already upset about the rewrite, learned that their predictions, the same ones that were rudely dismissed, were in fact correct, they became upset again; this time about being lied to.
Correct or not, it's reasonable to conclude they were lied to. Especially given they correctly predicted the future.
>Correct or not, it's reasonable to conclude they were lied to.
No it's not. If we were 9 days away from a human-written version of this experiment, then yeah, it would be reasonable to conclude they were lied to, because a human-written version would progress so much slower and steadier that it's very unlikely you hadn't made up most of your mind a week before merge time.
But it's not human written. It's months, perhaps years of work compressed into a week, where the machine can go from 'nothing is working' to 'everything is working' in a few days. There is nothing reasonable about concluding you must have been lied to when such a delta in such a short time is possible. And if people fail to see that, then perhaps the initial assertions about an emotional meltdown were not so far off after all.
I might surprise you, but tech projects have a social side to them. Decisions like that are discussed with the community. It is completely fine to not give a single shit about the community, but then don't act surprised when the community doesn't give a shit about you.
Decisions like this are discussed however the maintainers of the project wish to discuss them. And a majority of the time, these decisions are made and discussed solely by the maintainers, so I really have no idea what you're talking about.
It's really simple.
9 days ago this is how the migration was described:
> I work on Bun and this is my branch
> This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.
> I’m curious to see what a working version of this looks, what it feels like, how it performs and if/how hard it’d be to get it to pass Bun’s test suite and be maintainable. I’d like to be able to compare a viable Rust version and a Zig version side by side.
9 days after that comment, the rewrite has been merged to master.
9 days after "this is my branch" "the code doesn't work" "I'm just curious" "high chance it's thrown out"... it's merged to master.
-
Some people saw the original as an attempt to downplay the importance of the branch in response to negative feedback, rather than accurately describing what the branch represented.
Those people essentially predicted that Bun's actions would shortly reflect much more conviction than was being let on.
Experiments graduate to production all the time, but given the timelines involved, their predictions were correct.
> Those people essentially predicted that Bun's actions would shortly reflect much more conviction than was being let on.
Ironically these people are displaying great confidence in AI’s abilities.
If that’s the case, what are they objecting to exactly?
> Ironically these people are displaying great confidence in AI’s abilities.
Maybe they were displaying high confidence in a marketing machine's ability to commit to dangerous stunts.
Stop thinking about '9 days' like it means the same thing in an era where machines can generate thousands of lines of code in a few hours.
There is no way a human rewrite like this would be at roughly the same stage after only a 9-day delta. In that case, some of these accusations would be reasonable to make. But that is not the case here.
That's fine if some Claude Code agent made the PR and committed it. No human involved, no human drama ensues.
People here are pointing out the problem because the Anthropic dude claimed it was an experiment, tests were still failing, it might go nowhere... blah, blah.
Yes, because it was an experiment and tests were indeed failing at that point in time, but guess what? When an experiment succeeds, you probably don't throw away the results.
You know, we used to look down on engineers who didn't realize there's more to software than the raw lines of code.
You're free to look down on whoever you want. I'm free to tell you I couldn't care less, and that both replies so far just confirm how much of an emotional meltdown the reactions here really are. Your comment has managed to have nothing to do with the point I was making.
You're getting the responses you earned by intentionally being as flippant as possible.
If you had presented your point more thoughtfully, maybe I'd have spoon fed the point of my response, which 100% relates to what you said: your model of time compression is describing the speed of creating code.
But Bun is more than lines of code and serves as core infrastructure for lots of other projects. It's a terrible look in terms of governance to approach this migration as they have, especially the initial denial.
That shouldn't be contentious.
There's no reason to think there was an 'initial denial'. That's the point. Everyone here is saying there was denial because all of this happened in 9 days, and again, that's a silly assertion to make when humans did not create or review the code. Someone can have a swift turn in opinion when an incredible amount of change happens in a short time. The LoC comment I made was simply to serve as an illustration to how fast things can change with LLM generated code.
I'm being flippant because this should be incredibly easy to understand.
Maybe it might be easier to understand if I was a really terrible engineer.
AI gives me a 750k LoC PR that's mostly broken and unusable on Monday.
AI then fixing it by adding another 250k LoC is not going to convince me, a competent maintainer of a major JS runtime with years of contributions, plenty of downstream dependents, and an understanding of the AI zeitgeist, to merge it all in by the next Wednesday.
Just because the machines can generate code that quickly doesn't mean that human thought has changed to moving faster. Everyone's had a problem they were working on, and the solution doesn't come sitting at the desk staring at the code, but three days later in the shower, eureka! hits. Just because machines are writing code hasn't changed the underlying human thought speed substrate. That's why people see nine days as too fast, even in this sped up AI era.
Human-speed thought doesn't matter here because it's not human-reviewed. The code was generated. It exists and it (now) works, to the extent they're satisfied with going through with a canary release. Going on about "9 days" is working with a mental model that simply does not apply here. That is my point.
If you think there should be human review or that there should have been a lot more human collaboration, that's one thing but accusing Jarred of lying about his intentions is another thing entirely, and one where '9 days' is not remotely the proof people think it is in this situation.
I'm not sure where I accused Jarred of lying. All I'm saying is that 9 days is not very long.
The chain we're on and the comments I originally responded to have such concerns. And I mean, if it's not going to be reviewed by humans, then really, what makes 9 days too soon? Should the code just sit there collecting dust until everyone agrees an arbitrary amount of time has passed?
[flagged]
Making a factual statement is drinking Kool-Aid? Okay.
> What on earth is going on here?
Irrational armchair quarterbacking driven by emotional reactions to change and perceived threats. It’s not worth worrying about this specific instance, but the overall trends could get messy. This is just a taste of that.
Maybe the people who "were overreacting" just happened to have more foresight than you and me? Perhaps they saw where this was heading, and that led to their "overreaction"?
In what way? Foresight about what? It was an experiment before, regardless of people's reaction at the time doesn't make it less of an experiment back then. I feel like I'm misunderstanding this entire conversation right now.
> It was an experiment before, regardless of people's reaction at the time doesn't make it less of an experiment back then. I feel like I'm misunderstanding this entire conversation right now.
Yes - I think I didn't explain my feelings well. But, now I understood them finally! So:
It was an experiment back then. Now, nine days and a million lines later, it suddenly isn't an experiment anymore? I understand there's a comprehensive test suite (yay!) but still... a million-line diff in nine days still sounds like an experiment to me.
The difference is an assumption of good faith, for the most part, and that is to some extent modulated by how reasonable people believe a large-scale LLM and/or Rust rewrite is.
Why are you defending them so much, lol. It's no longer an underdog open source project fighting for survival, it's a freaking Anthropic subsidiary that has been bought for hundreds of millions of dollars.
The top comment at that link points out how many of the sibling comments are delirious and emotional, kneejerk responding to the news rather than giving any sort of sober analysis.
That people were overreacting with emotional meltdowns (common in AI-related threads) is perfectly compatible with the branch making enough progress to get merged.
Anyone who disagrees with me is having an emotional meltdown and obviously they're delirious AI-haters.
I'm not in a cult, you are in a cult and delusional!
This seems dishonest.
I'm reading through the top comments next to his and don't see that. You can always find delirious and emotional takes, but those didn't dominate the discussion
https://news.ycombinator.com/item?id=48017005
> [...] Time will tell how this will turn out. Would be nice if the Bun maintainers could give some clarification about what they’re doing here, and why they’re doing this.
https://news.ycombinator.com/item?id=48017358
Compares this to Go runtime's C to Go migration
https://news.ycombinator.com/item?id=48017309
Link to Github diff view
https://news.ycombinator.com/item?id=48017505
> I wonder if a successful, albeit slower, approach would be to walk the git commit history in lockstep, applying the behavioral intent behind each commit. If they did this, I would be interested in knowing if they were able to skip certain bug fix commits because the Rust implementation sidestepped the problem.
Who cares? Go see a therapist
It's a high profile open source project. While Bun/Jarred don't owe anything to anyone, nobody should be surprised when decisions like these result in strong backlash.
Imagine if Guido or Linus said a couple of days ago that they're just experimenting and then submitted and merged complete machine-assisted rewrite of CPython or Linux in Rust.
This actually happened to me a couple months ago. Started a Rust rewrite of a project as an experiment, then a few weeks later it was presented to the team and promoted to mainline.
Although in that case the language change was almost incidental — the rewrite was very much not a straight 1:1 port, but more of a substantive architectural overhaul and longstanding tech debt cleanup; Rust was just one of many tools and design decisions that helped get the best possible end result. There were also various reasons it made sense to attempt a rewrite within that particular window of time.
The upshot is we've ended up with a substantially stronger QA posture, a much higher-quality and more maintainable codebase, and an extremely positive audit report by a group that was brought in to review the project. There were some early kinks to work out, but the longer we've lived in this version of code the more it's proven itself to be a stronger foundation than its predecessor.
Of course, Bun is its own thing and all circumstances are unique. I have no idea how that rewrite was approached, whether it was the right decision, or how it will ultimately prove itself. Just saying the shift from "experiment" to "official new direction" is normal and credible, and that I'd give it some time to see how it handles contact with reality before passing judgement. If it's truly a disaster, nothing's stopping them from reversing course and backporting any new changes to the old Zig codebase.
The author discussed this here four days ago
https://news.ycombinator.com/item?id=48077663
I was downvoted pretty hard for calling this comment out. I would say I'm surprised, but honestly? Completely predictable.
Yea, what the heck.
I bet the blog post will make no mention of pressure from anthropic to do this and instead will celebrate the fact that “it passes all tests”, of course omitting how many tests were modified to forcibly pass
Do you have any proof Anthropic pushed for this? Because the author has been clear this was an experiment they wanted to test out on their own, only when it seemed to be in a working state did they consider, okay maybe this might work for us.
Does it take a PhD in psychoanalysis to see that the company that has been marketing the fuck out of lame publicity stunts would take advantage of another publicity stunt? Good lord, no wonder the public hates tech workers.
I refuse to blindly hate something because someone tells me to with no evidence, if you want to hate me for that, so be it, that sounds like a personal problem.
Show me the incentive and I'll show you the outcome.
I don’t have proof, but I can offer you that dexter suspicious meme instead
Was there pressure to do this, or freedom to do this? If I had an unlimited token budget I'd probably try all sorts of crazy things. Also you (one) can read the tests and see that they weren't modified to forcibly pass.
Looking forward to the blog post. Do you plan to run both the Zig and Rust binaries side-by-side across a wide range of real applications (potentially shadowing in production) to weed out bugs?
That's way too smart, safe and sensible.
They have a PR (~~closed by a GitHub bot as AI slop, ironically~~ this was wrong info; it was apparently closed by Jarred himself as it missed conversion of some 20 Zig files to Rust) to remove the Zig code.
I guess the answer is "no".
I'm curious how much this would cost a paying customer. Can you please give us an estimate?
Great question and I'd love the answer.
I bet the answer is industry changing even if the token cost is high.
This work was impossibly expensive in terms of people-hours and time before: architectural planning, engineering alignment and politics, phased engineering that gets interrupted by changing priorities.
That it's possible to do R&D, the port, and get 99.X% of tests passing in less than 2 weeks is so much more efficient for the humans.
Any plans to issue a CVE for this HTTP request smuggling attack vector fixed in the latest bun release?
https://github.com/oven-sh/bun/issues/29732
https://github.com/oven-sh/bun/security
Surprisingly, they appear to have not disclosed any vulnerabilities whatsoever. It's likely there have been numerous vulnerabilities in the past, but they are all being ignored.
https://x.com/DavidSherret/status/2031432509301428644
This is really poor form given that Anthropic is going around getting all kinds of public goodwill for finding CVEs in other people’s products.
Yeah! Why would the company that stands to make themselves look better in front of an IPO do such a thing?! Next thing you're going to tell me is that this whole rewrite was another marketing ploy to help potentially turn themselves into multi-millionaires!
Yes, it is helpful for a company to be very clear that in a choice between the safety and integrity of their customers, and profit, they are choosing profit.
maybe you should ask on the issue directly?
Did you (or will you) implement some kind of e2e (fuzz?) testing comparing the two binaries? Do you have particular plans regarding the release of this (for example, to not break users' workflows or things like that)?
Will this likely fix stability issues in the Bun Workers API? https://bun.com/docs/runtime/workers
Is writing the blog post taking longer than the rewrite?
almost
> The codebase is otherwise largely the same. The same architecture, the same data structures.
How can you possibly verify this, if a 1M line patch was written over 7 days? It's at best a hunch (vibes?), and at worst a lie.
Because it passes the existing test suite? And he knows what's in the test suite?
The test suite explicitly verifies the architecture and the data structures used? Depends on the suite, I suppose.
I can hope this will lead to little to no memory issues when using Bun as a web server.
I'd be surprised if they could eliminate memory issues completely, especially considering the amount of `unsafe` the codebase seems to contain.
On the other hand - now it should be possible to tackle some of those one by one?
Oh yes, I don't doubt they'd eventually be able to seriously reduce that number, probably to a handful of places. I don't doubt the strategy employed here: rewriting it while keeping it similar, then slowly changing it. I do still doubt they'd be able to completely eliminate memory issues in the end regardless.
Doesn't that count anything that has 'unsafe' in it, not just the keyword?
It does, see the sibling comment made about an hour before yours, fixing that issue has marginal difference.
That's picking up all the "bunsafety" references in there :P
When I read what you wrote, I was like "of course, duh, I'm stupid" but running `ag "unsafe" src | grep -i "bunsafety"` it doesn't seem to be the case actually, I see zero bunsafety mentions from it.
However, `ag unsafe` does over-count anyways, just in a different way, matching stuff like SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION and _unsafe_ptr_do_not_use and others.
A better command against the same commit, `ag -w unsafe src | wc -l`, reports 13914 "unsafe" usages now; slightly better, but pretty awful still.
My understanding is that that's because they were trying to do a structurally homologous port from Zig to Rust, precisely to keep their mental model and not change "too much" at once, and then they plan to refactor to make it safe Rust later.
it's clear that as of the time of this merge, no human has read any appreciable fraction of current mainline Bun, so it's not particularly clear how much of a "mental model" exists anymore.
Does that mean that from now your coding agents working on the Bun codebase are themselves running on that rust-Bun runtime?
So a question you should answer: Couldn't you just train the super SOTA model on fixing those issues instead of porting it?
[dead]
[flagged]
Coming on a bit strong, no? Isn't it possible one could do an experiment almost two weeks ago, then by today the experiment has concluded and now you've made a choice?
Did you think "experiment" meant 100% this will be thrown away? Wouldn't make much sense to experiment with something you know you'll throw away, unless you have some specific reason for it.
You don’t speak for most of us.