When announcements say the rewrite took 1 week, I wonder how much time went into preparing this file of very detailed instructions on mapping Zig to Rust idioms: https://github.com/oven-sh/bun/commit/46d3bc29f270fa881dd573...

On top of that, if you look at 'Pointers & ownership' and 'Collections' sections, the Bun codebase is already prepared, using internal smart pointer types that map 1-to-1 to Rust equivalents, and `bun_collections` Rust crate already exists.

This gives the impression that the rewrite was prepared a long time ago and was the Bun team's proposition to Anthropic during the acquisition deal.

Yeah I don’t know what’s true when reading about LLMs. Same with comments here on hacker news. So much money on the line it’s clear they would seed communities with marketing shills (and some people are just tribal).

Same here: since they own Bun, they have every incentive to make this seem easier than it was.

This is a huge problem with AI specifically. Tech is becoming very adversarial for workers, since the lines between marketing and technical information are blurring more and more.

Influencers are getting paid tens of thousands of USD to promote AI. This is one of the reasons social media has been swamped with it lately.

Yes, some of the latest campaigns:

https://www.wired.com/story/super-pac-backed-by-openai-and-p...

Anthropic's own talking point guide:

https://news.ycombinator.com/item?id=47945021

There were earlier initiatives from the industry. This is just what is in the open and does not even include automated LLM "influencers".

> Tech is becoming very adversarial as a worker, since marketing and technical information are blurring lines

Since one of LLMs' largest markets (with product fit) is us developers, we are experiencing what the crypto bros did to others.

You can just use AI for yourself and see. It isn't some mysterious product that only a few people get to use.

This is the thing. I do use LLMs (mostly Anthropic).

It just does not generate good, usable code. I have to review every single change to a higher degree than I would my own code because it likes to slip in hidden nasties. I have to rewrite at least 50% of what it generates.

That being said, I know devs who swear that they don't even write code anymore. Like this Rust port. I can't even fathom blindly merging something this massive.

I think we're still seeing pretty wild variance in how effective LLMs can be for code, depending on who is driving it. I've seen some folks getting themselves into messes pretty regularly with LLMs. But, ever since Opus 4.5, it's been pretty obviously better to work with it than without it, remarkably better in some use cases. Porting an application with source available and a huge existing test suite is pretty much the ideal use case for an LLM. It has everything it needs to succeed. I can't imagine why anyone would embark on a porting effort without an LLM at this point.

While this is true, it's also true that few people have the budget to spend a bunch of tokens on porting bun over to rust.

And yet we have stories[0] of companies judging merit on tokens used.

Rather than using these tokens to do rewrites that have the potential to massively improve the day to day, they're just burnt for the sake of burning them.

It's individual initiative, and company culture that are at play as much as budget.

0: https://news.ycombinator.com/item?id=48110529

Most people do use LLMs, which is why they have the so-called pessimistic opinions they do.

Judging by most public comments, people are really mediocre at using them. I don't get how it's possible to get such poor results from them.

That's because your use cases were simple and/or small.

Otherwise, you would have known.

Unless you don't have experience and you believe the whole "You are right! It _is_ a and not b" BS...

I'm not sure it matters what anyone claims. It's easy to use and experience its abilities and limitations.

The truth lies somewhere in the middle.

Context: 20 years coding, 13-ish of which professional. Using LLMs for side projects, including a very big one. Also using them to help manage our home server.

I’ve used 20-ish agents with OpenRouter, Google’s own AGY, Mistral’s Vibe, and Claude Code. The good ones are good and can be very helpful with spec’ing work or handling repetitive tasks. Except for Opus 4.6, none of them produce TypeScript that I’d be super proud of; but they write stuff that’s good enough compared to what I’ve seen in the industry. It’s always some mix of spaghetti and shortcuts. That’s fine, you steer the model and tighten your specs and tests.

Anyone claiming ‘Model X can one-shot’ an app is delusional about maintainability, deployment, all the little things that grease the wheels. Anyone claiming ‘LLMs are useless’ is probably not being impartial. That’s it.

And any company claiming AI is awesome at everything and will replace everyone? Yeah, they’re lying, at least about their capabilities as of right now.

Similar highly crafted success stories are getting passed around within some big tech firms right now.

We got told that someone wrote a huge, sophisticated driver in Rust in a single day using Claude Code. This is being pushed as a case of AI doing something that we encounter on a regular basis, way faster than a human could do it.

Some omitted details: it turns out the official spec for this driver is written in C, and the standard has a massive official suite of unit tests.

Ignoring things like whether the Rust that was output could be deemed qualitatively good, whether the resulting line count is appropriate, how much the codebase was ready or primed for this kind of exercise going in, and so on: is it fair to say that a 622-line artefact created up front is a relatively small cost for a potential increase in consistency or quality of output when the output is ~1M LoC? It seems like there's a multiplicative power here given how much output there is. Or is that missing a lot of nuance?

I'd also be interested generally in how much tacit knowledge was needed to come up with these rules and how much iteration on this file was needed, for example how many of the rules here came from a failure case hit as part of iterating on the translation.

This is effectively a very expensive and resource-intensive machine translation. As such, there is no increase in consistency or quality of output.

The translation is a starting point to enable follow-on work to take advantage of Rust's features.

How would you have achieved this “machine translation” without an LLM?

It seems to me it would have been highly likely to be more expensive and more resource intensive - if realistically possible at all, short of implementing a general Zig to Rust translator first.

"Short of..." indeed. You already know the answer, although it doesn't need to be general; it only needs to work on a single codebase.

A recent and highly relevant example is the migration of the TypeScript compiler to Go. They did not use an LLM to translate the code. Instead, they used LLM assistance to write a deterministic TypeScript-to-Go translator and then used that to translate the code. I have far more confidence in this approach than in letting the LLMs rip on the translation itself.

I think TypeScript to Go is far easier to translate than something to Rust though.

Is it? I wouldn't assume that. Go is a smaller and less flexible language than Java/Typescript (I say that as a compliment) so it's not clear to me that all Typescript idioms have an obvious Go equivalent.

Leaving aside ownership, Rust is a big, complex, expressive language. I'm not that familiar with Zig, but I think it tries to be a "better, modern C" so it seems like it should be easily possible to mechanically translate Zig into direct Rust equivalences. You probably won't get "good" idiomatic Rust at the end, but you should get working code that does the same thing.
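As a made-up sketch (nothing from Bun's actual code), a Zig-style optional-pointer list walk ported literally comes out as raw pointers and unsafe, rather than idiomatic Option and borrows:

    struct Node {
        key: u32,
        next: *mut Node,
    }

    // Hypothetical literal port: Zig's `?*Node` becomes a raw pointer
    // plus null checks instead of Option<&Node>.
    unsafe fn find(mut cur: *mut Node, key: u32) -> *mut Node {
        while !cur.is_null() {
            if (*cur).key == key {
                return cur;
            }
            cur = (*cur).next;
        }
        std::ptr::null_mut()
    }

    fn main() {
        let mut tail = Node { key: 2, next: std::ptr::null_mut() };
        let mut head = Node { key: 1, next: &mut tail };
        assert!(!unsafe { find(&mut head, 2) }.is_null());
    }

Working and bug-for-bug faithful, but every dereference drags unsafe along with it.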

Even before the advent of LLMs, I have personally (and largely successfully) translated several production systems from one language to another. I've learned it's best to start with a mechanical translation, literally bug for bug, leaving shit exactly as I found it (just in another language.)

I've done Perl to Java, Java to Kotlin, Python to Ruby, Ruby to Java, C to Swift, you name it.

It's only when you change behavior during the rewrite that it becomes an intractable problem. If you ship a 1:1 translation, THEN you can start going through the list of "bugs" you found along the way. Tread carefully when it comes to this, however, as I can almost guarantee that within your non-trivial codebase there will be some code that implicitly _depends_ on a "bug" to function at all. This is where shit hits the fan.
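A made-up toy example of the kind of dependency I mean:

    // record_len() over-counts by one, but its only caller silently
    // compensates. "Fix" the bug alone and the caller starts truncating.
    fn record_len(buf: &[u8]) -> usize {
        buf.len() + 1 // BUG: counts the terminator twice
    }

    fn copy_record(buf: &[u8], out: &mut Vec<u8>) {
        let n = record_len(buf) - 1; // quietly undoes the off-by-one
        out.extend_from_slice(&buf[..n]);
    }

    fn main() {
        let mut out = Vec::new();
        copy_record(b"abc", &mut out);
        assert_eq!(out, b"abc");
    }

Translate both functions faithfully and everything keeps working; "fix" either side in isolation and you've shipped a regression.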

"Load bearing bugs"

It really is incredible how frequently these occur in everyday codebases of sufficient size.

> I'd also be interested generally in how much tacit knowledge was needed to come up with these rules and how much iteration on this file was needed, for example how many of the rules here came from a failure case hit as part of iterating on the translation.

I think that's the point the original poster was making. There's basically zero chance this file was just spit out by memory in an afternoon. It was obviously the result of a LOT of pre-planning and back and forth checking over the artifacts that Claude was incorrectly generating for one reason or another. So yeah, an extremely iterative process.

With rules as fine-grained as these, there were almost certainly many instances where hundreds of files are generated -> one particular file doesn't translate <X> correctly -> add a rule for <X> -> regenerate everything again -> crap, that rule broke a different file because <Y> -> add a rule for <X if Y>, another for <X not Y> -> regenerate everything again[0] -> repeat. The token costs must have been out of this world.

0: now I'm sure people will say "why would you regenerate a file that generated correctly once? Just mark it off the list and move on." Well, when essentially 99.9999% of your codebase is generated artifacts, the tiny fraction that is actually human-understandable is now the spec, the source of truth for everything. It HAS to be able to essentially redo the entire process if you expect any level of maintainability going forward.

I would guess it was a for-each loop. They likely wrote a bunch of skills. The for loop went through each file and generated a complementary file, then had another process integrate/validate.
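Purely guessing at the shape of such a harness (the agent command and flags here are my assumptions, nothing Bun has published):

    use std::process::Command;

    // Speculative driver: for each Zig file, ask an agent for the Rust
    // counterpart; a separate step would then build and run the tests.
    fn main() -> std::io::Result<()> {
        for entry in std::fs::read_dir("src")? { // a real harness would recurse
            let path = entry?.path();
            if path.extension().and_then(|e| e.to_str()) == Some("zig") {
                let prompt = format!("Port {} to Rust per the porting guide", path.display());
                let status = Command::new("claude").args(["-p", prompt.as_str()]).status()?;
                assert!(status.success(), "translation failed: {}", path.display());
                // integrate/validate step goes here: cargo build + test suite
            }
        }
        Ok(())
    }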

I doubt the entire process was a single week, just whatever harness they specially prepared for the work.

> I doubt the entire process was a single week, just whatever harness they specially prepared for the work.

It wasn't. Probably quite a lot of preparation, I would think. And it's very much a first pass, which is far from idiomatic Rust and far from memory safe. Still impressive, though, for what it is.

https://x.com/jarredsumner/status/2053588764774269292 https://x.com/jarredsumner/status/2054984043708740093

> using internal smart pointer types that map 1-to-1 to Rust equivalents

Smart pointers weren't invented by Rust. If you write code with pointers in other languages, you already mentally model the same types.

> and `bun_collections` Rust crate already exists.

This is wrong. It's part of the PR in the codebase. It did not previously exist.

Agreed; after a closer look, the smart pointer types are pretty standard and the collections were indeed part of the migration.

But still, in order to prepare those detailed and very project-specific instructions you need to iterate on trying to convert the files from this specific codebase.

It's like that hackathon-winning project that everyone knows wasn't ideated or built there. True to the law, not to the spirit.

Based on the use of "≥" and em-dashes, I'd say this markdown file was written with or by an LLM.

Yes, there is exaggeration going on.

Nonetheless, it's a fact that this would have taken much longer without LLMs, if it would have been possible at all.

I find this is a valid success story if you can look past the embellishments. More than that, it’s really cool, actually.

Given Zig's instability (as in frequent breaking changes), it wouldn't surprise me if they intentionally designed Bun from the start in a way that makes it easier to migrate to Rust if needed.

It's the same thing with their gcc stunt.

It would be _so_ easy to alleviate any doubt about this and hype up the IPO even more. They just need to start a separate repo with all the hidden work they needed to do to prod the AI along, and let everyone replicate the results. After all, isn't that what all their customers are trying to achieve? A million lines of usable code in "7" days? Never mind the fact that it would also boost Anthropic's usage metrics as everyone tries to replicate it in their workflows.

If it was beautiful, they would've started with a blog post about this with links and instructions. Perhaps I will still be proven wrong and a blog post is being written as I type this.

Which part of a Zig to Rust port (working, passing tests) of a quite large codebase in a little over a week is not worthy of hype do you reckon? That they didn't one-shot it? What could possibly make it impressive if not the sheer velocity of the thing? That's a months or years long operation for a human. There's a reason porting large programs to new languages was vanishingly rare throughout most of computing history, and there's a reason people are suddenly doing it almost on a whim, now.

That makes the Bun owner's claim, just a week ago on this site, even more dubious: he came on here and said this code was just an experiment and likely to be thrown away.

I don't think the owner lied, but rather that the entirely speculative comment on here is obviously wrong.

It says elsewhere in the comments that this is mistaken about the supposed previous existence of the crate.

Seems like Zig Bun had 3 pointer types that map neatly to existing Rust pointer types. The other 7-8 needed types to be created.

Is that the conspiracy?

bun_collections doesn't look much older than the porting guide.

Still writing the blog post about this. Will share more details.

For where this is coming from, skim the bugfixes in the Bun v1.3.14 and earlier release notes. Rust won’t catch all of these - leaks from holding references too long and anything that re-enters across the JS boundary are still on us. But a large % of that list is use-after-free, double-free, and forgot-to-free-on-error-path, which become compile errors or automatic cleanup.
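To make that concrete for people who don't write Rust (a toy example, not Bun code): the canonical use-after-free shape is rejected at compile time.

    fn main() {
        let v = vec![1, 2, 3];
        let first = &v[0]; // borrow into v's heap buffer
        drop(v); // error[E0505]: cannot move out of `v` because it is borrowed
        println!("{first}"); // the borrow is still live here
    }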

You, nine days ago[0]:

> I work on Bun and this is my branch

> This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.

Maybe... it wasn't such an overreaction?

[0]: https://news.ycombinator.com/item?id=48019226

I'm really out the loop here so maybe you can help answer me a question - why is HN unhappy about this rewrite? why are people writing here almost as if they feel betrayed by Bun being rewritten from Zig into Rust?

I genuinely don't get it. I've been following this Bun stuff a bit but I don't understand where the HN sentiment is coming from.

Because in the software world, especially before 2022, ownership and stability have been valued. People like using things that do not randomly start breaking more often after every new release, and if things break, there is a human who knows exactly why it broke and what's the best way to fix it. Businesses would not want their losses to be attributed to an AI rewriting an entire codebase. AI owns nothing, not even the bugs which it produces. I would not want my SaaS to have downtime because a JavaScript runtime it depends on decided that they had to market their LLM by rewriting years of code recklessly.

People are not betrayed by a rewrite. They are betrayed by an LLM rewriting with minimal supervision, fast-tracked to a merge within 9 days of commencement.

To the contrary, I do not understand how we have become so insensitive towards stability since the LLM era began. Why is unbreakable code no longer the goal, but a truckload of generated code is?

> Because in the software world, especially before 2022, ownership and stability have been valued.

Stability in the JS ecosystem was never valued.

> Businesses would not want their losses to be attributed to an AI rewriting an entire codebase. AI owns nothing, not even the bugs which it produces. I would not want my SaaS to have downtime because a JavaScript runtime it depends on decided that they had to market their LLM by rewriting years of code recklessly.

I don't know how else to say this but "Tough Shit"? Businesses are building their entire enterprise on the volunteer work donated by the free software community (or given away for free by some other company solving its own problems).

If you don't want 'your' SaaS to have downtime based on somebody else's whims, then fucking pay for your own developers (or your own AI) to build your SaaS platform in house. That's what IBM did in the 1970s, and nothing except market pressure is stopping you from doing it today.

I'm sorry for the vulgarity but this entitled attitude of businesses toward FREE SOFTWARE GIVEN TO THEM FOR NO MONEY is infuriating. If the electric company decided to give your company free power on windy days, would you then get angry that they installed a new model of turbine?

The unhappiness is primarily stemming from Bun’s ownership by Anthropic - HN sees this as Anthropic using an OSS project for reckless marketing stunts.

For the record I don’t believe it’s a stunt, it’s ridiculous to me - everyone’s just seeing what they want to see out of sheer hate for anything Anthropic does.

In any case if the rewrite is really as reckless as many in this thread claim, we will see Bun collapse in on itself with a 1M LOC codebase the core team doesn’t understand, or rollback to Zig. So we don’t need to have a flamewar over it, time will answer the question.

[flagged]

My read is it's less the rewrite and more the messaging around the rewrite. Nine days between "you're over-reacting" and merge is surprising, to say the least. Sure will be interesting to see that blog post!

Vibe coding a Rust rewrite of a widely used tool is basically catnip for the HN crowd.

Not if you use that tool, then it's just scary.

I would think the Zig implementation with 500+ issues on Bun's GitHub tracker mentioning "segfault" would be even scarier.

The context nobody is mentioning is this came shortly after Bun forked Zig in the name of optimization, but then a Zig maintainer came out and basically said they (Bun) don't know what they're doing, or else they would have known that wasn't an effective optimization.

It outwardly seemed like they forked Zig for a flashy headline, were called out, then immediately started moving to Rust. This, combined with being bought by Anthropic, and plugging vibe coding the whole way, just gives the impression of random and chaotic technical decisions, which is not what people want in software their business depends on.

https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...

My read is that it just seems a bit reckless doing a full rewrite so quickly.

My read: if the code has a comprehensive feature test suite, a performance test suite (how long each function takes), and a linter with readability guidelines (e.g. cyclomatic complexity, no code duplication), and the LLM rewrite passes all three, then it should be fine. But I think that in the real world only the first one (functional tests) exists.

Maybe Jarred can fill in here

[deleted]

posting my read (since it differs so much from the others')- there's a 'holy war' being waged by people that think LLMs shouldn't do full rewrites of software. There are various reasons people think this (think LLMs are parrots that make slop and are incapable of writing good code, have environmental concerns, or are angry that software licenses can be circumvented). I call it a 'holy war' because I think most see our current trajectory as a bit inevitable and have a strong urge to proselytize their views and chide maintainers that use LLMs in ways they don't like.

Very similar angry comments happened with the discussions of the Chardet rewrite, next.js/vinext, and JSONata/gnata if you want to look at this in context.

You're not alone in voicing this, another (now dead) comment did it earlier too with a bit more of an emotional response (https://news.ycombinator.com/item?id=48134229).

Still, do you folks never do something to see how you feel about it, then choose to go one way or another? I'm not sure why it's so hard to see that it was an overreaction at the time, because it was just an experiment; then at some point it stopped being an experiment, and now they've chosen to actually run with it.

Is this not a common occurrence for other people? Personally I change my mind all the time, especially based on new evidence, which usually experiments like this surface, I'm not sure I understand the whole "You said X some days ago" outrage that seems to cause people's reaction here.

Yes, sure, it's OK to change your mind. But don't you think that, in retrospect, the people Jarred accused of "overreacting" weren't actually overreacting?

No, what we knew then is still what was known then. Today is different, and seemingly they've committed to the rewrite, so now it makes sense that people have strong feelings about it, as it's no longer just an experiment.

> so now it makes sense that people have strong feelings about it, as it's no longer just an experiment.

It also makes sense to have strong feelings when you're able to pattern match well enough to predict something will happen despite others trying to convince you that your predictions are incorrect.

It's not overreacting to correctly predict the future just because others couldn't. In the same vein, the idea that "everyone is out to get you" is not called paranoia when there are people actually out to get you. That's better called being observant.

Some of those who predicted correctly might also have overreacted, but I believe that the majority understood that to be a blanket statement about prediction as a whole vs any specific individual reaction.

“Nobody could have seen this coming…”?

Well apparently a lot of people did. Maybe Jarred didn’t, maybe you didn’t, but most people correctly predicted what was coming.

See what coming?! I really don't understand what's going on here. Correctly predicted what, that Bun was being rewritten into Rust? I'm not sure anyone doubted that, all the work they did was public???

What on earth is going on here?

> I'm not sure anyone doubted that, all the work they did was public???

https://news.ycombinator.com/item?id=48019226

> This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.

> What on earth is going on here?

With the nearly complete PR with the port to Rust, a number of people predicted that it was going to happen. They were assured it was unlikely to happen, and then they were accused of overreacting over effectively nothing. When those same people, who were already upset about the rewrite, learned that their predictions (the same ones that were rudely dismissed) were in fact correct, they became upset again, this time about being lied to.

Correct or not, it's reasonable to conclude they were lied to. Especially given they correctly predicted the future.

>Correct or not, it's reasonable to conclude they were lied to.

No it's not. If we were 9 days away from a human-written version of this experiment then yeah, it would be reasonable to conclude they were lied to, because a human-written version would progress so much slower and steadier that it's very unlikely you hadn't made up most of your mind a week before merge time.

But it's not human-written. It's months, perhaps years of work compressed into a week, where the machine can go from 'nothing is working' to 'everything is working' in a few days. There is nothing reasonable about concluding you must have been lied to when such a delta in such a short time is possible. And if people fail to see that, then perhaps the initial assertions about an emotional meltdown were not so far off after all.

I might surprise you, but tech projects have a social side. Decisions like that are discussed with the community. It is completely fine to not give a single shit about the community, but then don't act surprised when the community doesn't give a shit about you.

Decisions like this are discussed however the maintainers of the project wish to discuss them. And a majority of the time, these decisions are made and discussed solely by the maintainers, so I really have no idea what you're talking about.

It's really simple.

9 days ago this is how the migration was described:

> I work on Bun and this is my branch

> This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.

> I’m curious to see what a working version of this looks, what it feels like, how it performs and if/how hard it’d be to get it to pass Bun’s test suite and be maintainable. I’d like to be able to compare a viable Rust version and a Zig version side by side.

9 days after that comment, the rewrite has been merged to master.

9 days after "this is my branch" "the code doesn't work" "I'm just curious" "high chance it's thrown out"... it's merged to master.

-

Some people saw the original as an attempt to downplay the importance of the branch in response to negative feedback, rather than accurately describing what the branch represented.

Those people essentially predicted that Bun's actions would shortly reflect much more conviction than was being let on.

Experiments graduate to production all the time, but given the timelines involved, their predictions were correct.

> Those people essentially predicted that Bun's actions would shortly reflect much more conviction than was being let on.

Ironically these people are displaying great confidence in AI’s abilities.

If that’s the case, what are they objecting to exactly?

> Ironically these people are displaying great confidence in AI’s abilities.

Maybe they were displaying high confidence in a marketing machine's ability to commit to dangerous stunts.

Stop thinking about '9 days' like it means the same thing in an era where machines can generate thousands of lines of code in a few hours.

A human rewrite like this would still be at roughly the same stage after a 9-day delta. In that case, some of these accusations would be reasonable to make. But that is not the case here.

That's fine if some Claude Code agent made the PR and committed it. No human involved, no human drama ensued.

People here are pointing out the problem because the Anthropic dude claimed it was an experiment, tests were still failing, it might go nowhere... blah, blah.

Yes, because it was an experiment and tests were indeed failing at that point in time, but guess what? When an experiment succeeds, you probably don't throw away the results.

You know, we used to look down on engineers who didn't realize there's more to software than the raw lines of code.

You're free to look down on whoever you want. I'm free to tell you I couldn't care less, and that both replies so far just confirm how much of an emotional meltdown the reactions here really are. Your comment has managed to have nothing to do with the point I was making.

You're getting the responses you earned by intentionally being as flippant as possible.

If you had presented your point more thoughtfully, maybe I'd have spoon-fed the point of my response, which 100% relates to what you said: your model of time compression describes the speed of creating code.

But Bun is more than lines of code and serves as core infrastructure for lots of other projects. It's a terrible look in terms of governance to approach this migration as they have, especially the initial denial.

That shouldn't be contentious.

There's no reason to think there was an 'initial denial'. That's the point. Everyone here is saying there was denial because all of this happened in 9 days, and again, that's a silly assertion to make when humans did not create or review the code. Someone can have a swift turn in opinion when an incredible amount of change happens in a short time. The LoC comment I made was simply to serve as an illustration to how fast things can change with LLM generated code.

I'm being flippant because this should be incredibly easy to understand.

Maybe it might be easier to understand if I was a really terrible engineer.

The AI gives me a 750k LoC PR that's mostly broken and unusable on Monday.

The AI then fixing it by adding another 250k LoC is not going to convince me, a competent maintainer of a major JS runtime with years of contributions, plenty of downstream dependents, and an understanding of the AI zeitgeist, to merge it all in by the next Wednesday.

Just because machines can generate code that quickly doesn't mean human thought has started moving faster. Everyone's had a problem where the solution doesn't come while sitting at the desk staring at the code, but three days later in the shower, when eureka! hits. Machines writing code hasn't changed the underlying substrate of human thought. That's why people see nine days as too fast, even in this sped-up AI era.

Human thought speed doesn't matter here because it's not human-reviewed. The code was generated. It exists and it (now) works, to the extent they're satisfied with going through with a canary release. Going on about '9 days' is working with a mental model that simply does not apply here. That is my point.

If you think there should be human review or that there should have been a lot more human collaboration, that's one thing but accusing Jarred of lying about his intentions is another thing entirely, and one where '9 days' is not remotely the proof people think it is in this situation.

I'm not sure where I accused Jarred of lying. All I'm saying is that 9 days is not very long.

The chain we're on and the comments I originally responded to have such concerns. And I mean, if it's not going to be reviewed by humans, then really, what makes 9 days too soon? Should the code just sit there collecting dust until everyone agrees an arbitrary amount of time has passed?

[flagged]

Making a factual statement is drinking Koolaid ? Okay

> What on earth is going on here?

Irrational armchair quarterbacking driven by emotional reactions to change and perceived threats. It’s not worth worrying about this specific instance, but the overall trends could get messy. This is just a taste of that.

Maybe the people who "were overreacting" just happened to have more foresight than you and me? Perhaps they saw where this was heading, and that led to their "overreaction"?

In what way? Foresight about what? It was an experiment before, regardless of people's reaction at the time doesn't make it less of an experiment back then. I feel like I'm misunderstanding this entire conversation right now.

> It was an experiment before, regardless of people's reaction at the time doesn't make it less of an experiment back then. I feel like I'm misunderstanding this entire conversation right now.

Yes - I think I didn't explain my feelings well. But, now I understood them finally! So:

It was an experiment back then. Now, nine days and a million lines later, it suddenly isn't an experiment anymore? I understand there's a comprehensive test suite (yay!) but still... a million-line diff in nine days still sounds like an experiment to me.

The difference is an assumption of good faith, for the most part, and that is to some extent modulated by how reasonable people believe a large-scale LLM and/or Rust rewrite to be.

Why are you defending them so much, lol. It's no longer an underdog open source project fighting for survival, it's a freaking Anthropic subsidiary that has been bought for hundreds of millions of dollars.

The top comment at that link points out how many of the sibling comments are delirious and emotional, kneejerk responding to the news rather than giving any sort of sober analysis.

That people were overreacting with emotional meltdowns (common in AI-related threads) is perfectly compatible with the branch making enough progress to get merged.

Anyone who disagrees with me is having an emotional meltdown and obviously they're delirious AI-haters.

I'm not in a cult, you are in a cult and delusional!

This seems dishonest.

I'm reading through the top comments next to his and don't see that. You can always find delirious and emotional takes, but those didn't dominate the discussion.

https://news.ycombinator.com/item?id=48017005

> [...] Time will tell how this will turn out. Would be nice if the Bun maintainers could give some clarification about what they’re doing here, and why they’re doing this.

https://news.ycombinator.com/item?id=48017358

Compares this to Go runtime's C to Go migration

https://news.ycombinator.com/item?id=48017309

Link to Github diff view

https://news.ycombinator.com/item?id=48017505

> I wonder if a successful, albeit slower, approach would be to walk the git commit history in lockstep, applying the behavioral intent behind each commit. If they did this, I would be interested in knowing if they were able to skip certain bug fix commits because the Rust implementation sidestepped the problem.

Who cares? Go see a therapist

It's a high profile open source project. While Bun/Jarred don't owe anything to anyone, nobody should be surprised when decisions like these result in strong backlash.

Imagine if Guido or Linus said a couple of days ago that they're just experimenting and then submitted and merged complete machine-assisted rewrite of CPython or Linux in Rust.

This actually happened to me a couple months ago. Started a Rust rewrite of a project as an experiment, then a few weeks later it was presented to the team and promoted to mainline.

Although in that case the language change was almost incidental — the rewrite was very much not a straight 1:1 port, but more of a substantive architectural overhaul and longstanding tech debt cleanup; Rust was just one of many tools and design decisions that helped get the best possible end result. There were also various reasons it made sense to attempt a rewrite within that particular window of time.

The upshot is we've ended up with a substantially stronger QA posture, a much higher-quality and more maintainable codebase, and an extremely positive audit report by a group that was brought in to review the project. There were some early kinks to work out, but the longer we've lived in this version of code the more it's proven itself to be a stronger foundation than its predecessor.

Of course, Bun is its own thing and all circumstances are unique. I have no idea how that rewrite was approached, whether it was the right decision, or how it will ultimately prove itself. Just saying the shift from "experiment" to "official new direction" is normal and credible, and that I'd give it some time to see how it handles contact with reality before passing judgement. If it's truly a disaster, nothing's stopping them from reversing course and backporting any new changes to the old Zig codebase.

[deleted]

The author discussed this here four days ago

https://news.ycombinator.com/item?id=48077663

I was downvoted pretty hard for calling this comment out. I would say I'm surprised, but honestly? Completely predictable.

Yea, what the heck.

Looking forward to the blog post. Do you plan to run both the Zig and Rust binaries side-by-side across a wide range of real applications (potentially shadowing in production) to weed out bugs?

That's way too smart, safe and sensible.

[deleted]

They have a PR (~~closed by GitHub bot as AI slop, ironically~~ this was wrong info; it was apparently closed by Jarred himself as it missed a conversion of some 20 Zig files to Rust) to remove the Zig code.

I guess the answer is "no".

I'm curious how much this would cost a paying customer. Can you please give us an estimate?

Great question and I'd love the answer.

I bet the answer is industry changing even if the token cost is high.

This kind of work used to be impossibly expensive in terms of people-hours and time: architectural planning, engineering alignment and politics, phased engineering that gets interrupted by changing priorities.

That it's possible to do the R&D, the port, and get 99.X% of tests passing in less than 2 weeks is so much more efficient for the humans.

I bet the blog post will make no mention of pressure from anthropic to do this and instead will celebrate the fact that “it passes all tests”, of course omitting how many tests were modified to forcibly pass

Do you have any proof Anthropic pushed for this? Because the author has been clear this was an experiment they wanted to test out on their own, only when it seemed to be in a working state did they consider, okay maybe this might work for us.

Does it take a PhD in psychoanalysis to see that the company that has been marketing the fuck out of lame publicity stunts would take advantage of another publicity stunt? Good lord, no wonder the public hates tech workers.

I refuse to blindly hate something because someone tells me to with no evidence, if you want to hate me for that, so be it, that sounds like a personal problem.

Show me the incentive and I'll show you the outcome.

Was there pressure to do this, or freedom to do this? If I had an unlimited token budget I'd probably try all sorts of crazy things. Also you (one) can read the tests and see that they weren't modified to forcibly pass.

Any plans to issue a CVE for this HTTP request smuggling attack vector fixed in the latest bun release?

https://github.com/oven-sh/bun/issues/29732

https://github.com/oven-sh/bun/security

Surprisingly, they appear to have not disclosed any vulnerabilities whatsoever. It's likely there have been numerous vulnerabilities in the past, but they are all being ignored.

https://x.com/DavidSherret/status/2031432509301428644

This is really poor form given that Anthropic is going around getting all kinds of public goodwill for finding CVEs in other people’s products.

Yeah! Why would the company that stands to make themselves look better in front of an IPO do such a thing?! Next thing you're going to tell me was that this whole rewrite was another marketing ploy to help potentially turn themselves in multi-millionaires!

Yes, it is helpful for a company to be very clear that in a choice between the safety and integrity of their customers, and profit, they are choosing profit.

maybe you should ask on the issue directly?

Did you (or will you) implement some kind of e2e (fuzz?) testing comparing the two binaries? Do you have particular plans regarding the release of this (for example, to not break users' workflows or things like that)?

Will this likely fix stability issues in the Bun Workers API? https://bun.com/docs/runtime/workers

Is writing the blog post taking longer than the rewrite?

almost

> The codebase is otherwise largely the same. The same architecture, the same data structures.

How can you possibly verify this, if a 1M line patch was written over 7 days? It's at best a hunch (vibes?), and at worst a lie.

Because it passes the existing test suite? And he knows what's in the test suite?

The test suite explicitly verifies the architecture and the data structures used? Depends on the suite, I suppose.

I can only hope this will lead to few or no memory issues when using Bun as a web server.

I'd be surprised if they could eliminate memory issues completely, especially considering the amount of `unsafe` the codebase seems to contain.

    git rev-parse HEAD && ag "unsafe" src | wc -l
    19d8ade2c6c1f0eeae50bd9d7f2a4bf4a2551557
    14865

On the other hand - now it should be possible to tackle some of those one by one?

Oh yes, I don't doubt they'd eventually be able to seriously reduce that number, probably to a handful of places, and I don't doubt the strategy employed here: rewrite it keeping it similar, then slowly change it. I do still doubt they'd be able to completely eliminate memory issues in the end regardless.

Doesn't that count anything that has 'unsafe' in it, not just the keyword?

It does; see the sibling comment made about an hour before yours. Fixing that issue makes only a marginal difference.

That's picking up all the "bunsafety" references in there :P

When I read what you wrote, I was like "of course, duh, I'm stupid", but running `ag "unsafe" src | grep -i "bunsafety"`, it doesn't actually seem to be the case; I see zero bunsafety mentions from it.

However, `ag unsafe` does over-count anyways, just in a different way, matching stuff like SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION and _unsafe_ptr_do_not_use and others.

Better command with same previous commit, `ag -w unsafe src | wc -l`, reports 13914 "unsafe" usages now, slightly better but pretty awful still.

My understanding is that that's because they were trying to do a structurally homologous port from Zig to Rust, precisely to keep their mental model and not change "too much" at once, and then they plan to refactor to make it safe Rust later.

it's clear that as of the time of this merge, no human has read any appreciable fraction of current mainline bun, so it's not particularly clear how much of a "mental model" exists anymore.

Does that mean that, from now on, your coding agents working on the Bun codebase are themselves running on that Rust Bun runtime?

So a question you should answer: Couldn't you just train the super SOTA model on fixing those issues instead of porting it?

[dead]

[flagged]

Coming on a bit strong no? Isn't it possible one could do an experiment almost two weeks ago, then by today the experiment concluded and now you've made a choice?

Did you think "experiment" meant 100% this will be thrown away? Wouldn't make much sense to experiment with something you know you'll throw away, unless you have some specific reason for it.

You don’t speak for most of us.

    $ rg 'unsafe [{]' src/ | wc -l
    10428
    $ rg 'unsafe [{]' src/ -l | wc -l
    736
    
    Language        Files     Lines      Code  Comments    Blanks
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
    Rust             1443    929213    732281    116293     80639
    Zig              1298    711112    574563     59118     77431
    TypeScript       2604    654684    510464     82254     61966
    JavaScript       4370    364928    293211     36108     35609
    C                 111    305123    205875     79077     20171
    C++               586    262475    217111     19004     26360
    C Header          779    100979     57715     29459     13805

Cool you can just search specifically for potentially unsafe code in Rust. How do you search for unsafe code in Zig? Or do you just have to assume it's everywhere?

If half of your code is unsafe then unless you exercise tremendous discipline (Claude basically doesn't) you will just end up with a big ball of unsafe, peppered with hallucinations in whatever random documentary comments Claude decided to make. I doubt they enforced the confinement of unsafe to a specific architectural layer or anything like that.
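For contrast, the usual discipline is to confine unsafe behind a small safe API, with the invariant checked at the boundary. A minimal sketch:

    mod bytes {
        // The single unsafe call lives here; callers only ever see a
        // checked, safe API and can't break the invariant themselves.
        pub fn ascii_str(buf: &[u8]) -> Option<&str> {
            if buf.is_ascii() {
                // SAFETY: all-ASCII bytes are valid UTF-8.
                Some(unsafe { std::str::from_utf8_unchecked(buf) })
            } else {
                None
            }
        }
    }

    fn main() {
        assert_eq!(bytes::ascii_str(b"hello"), Some("hello"));
        assert_eq!(bytes::ascii_str(&[0xFF]), None);
    }

A line-by-line port can't invent boundaries like that; they have to be designed in.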

Aren't the Rust unsafes a reflection of the Zig it was ported from? However now that you're working with Rust, you're in a position to continue improving and eliminating the unsafes.

Plus, I seem to recall the Rust community addressed this by building tooling that checks whether unsafe code is actually unsafe; I remember one of the concurrency frameworks got scanned and people freaked out, and the creator was about to abandon ship entirely as a result; I don't recall what fully came of it. Anyway, my overall point being: if there's already tooling to find the truly unsafe / bad code, it might make fixing it simpler / quicker to accomplish.

There is no Rust tooling that tells you if your unsafe code is shit or not. If there was you wouldn't need the unsafe stuff at all.

The Actix Web stuff was the maintainer using unsafe code to increase performance (IIRC, it was a long time ago) in what was the most popular Rust web framework at the time. It has since declined and been supplanted by other projects, but the pushback was mainly that a web framework shouldn't need so much unsafe. They eventually ceded the project to another maintainer and went off to work on something else.

Maybe my memory is hazy on it, but yeah that's the exact drama I remember.

You're likely thinking of Miri, a sanitiser. It's not a proof solver, but it screams to high heaven about this code nonetheless.

https://github.com/oven-sh/bun/issues/30719
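For anyone who hasn't tried it: Miri interprets the program and flags undefined behaviour that rustc happily compiles. A toy example it catches:

    // Compiles cleanly, but `cargo +nightly miri run` reports a
    // use-after-free: the allocation is freed while `p` still points at it.
    fn main() {
        let p: *const i32 = Box::into_raw(Box::new(42));
        unsafe { drop(Box::from_raw(p as *mut i32)) }; // frees the Box
        println!("{}", unsafe { *p }); // UB: read of deallocated memory
    }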

There is a qualitative difference between unsafe Rust and Zig as far as I know.

In principle static analysis is possible. (Note: WIP)

https://github.com/ityonemo/clr

If half of your files in a million-line codebase are unsafe, that doesn't tell you much any more. Presumably the point of a Rust rewrite is that you actually make use of Rust's safety features in a coherent way.

But given the whole "let AI rewrite this for me" stunt nature of this project, that was never going to happen, because it would require, well, actual thinking and a redesign. So now you have Zig disguised as Rust, a line-by-line port, because the semantics of idiomatic Rust don't map onto the semantics of Zig.

>if half of your files in a million line codebase are unsafe that doesn't tell you much any more.

If half of your files in the first pass of a million line rewrite are unsafe then that's completely fine. Do you understand what the tag actually is? It doesn't even mean that the code is actually unsafe, just that the compiler can't guarantee its safety, which can happen for a number of reasons, some benign.

Who rewrites a 700K-line codebase trying to be idiomatic from the get-go? That's setting yourself up for failure, whether you're a human or a machine.

A 1:1 translation, warts and all, is the _only_ foolproof way to do a language to language rewrite. Anything else in a non-trivial codebase is almost guaranteed to introduce regressions.

And? This is absolutely the correct and standardized way to do mechanical rewrites: you do a rewrite that maps directly to the original source so you can rely on the original correctness guarantees and bug-for-bug compatibility and log issues, and then you go into the next phase where you begin to use idiomatic constructs.

This is the same in COBOL-to-Java ports that have been done in banking and insurance for the past 20 years.

If the rewrite was zig to C and half the code was in __asm blocks is that different or the same?

COBOL to Java is a completely different thing and pretty much unrelated.

Rust can easily call C libraries and vice versa and so can Zig. A more appropriate and designed rewrite would identify the core pieces of the Zig code that were the primary sources of all the big issues. Then, you rewrite that component in Rust and verify that you get the expected improvements. That keeps the codebase stable, it keeps you honest on actually reducing bugs and issues, and other benefits. Then you either just keep it that way or slowly rinse and repeat.
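Concretely, the seam for that kind of incremental migration is just a C-ABI export; a hypothetical sketch, not Bun's actual code:

    // One component rewritten in Rust, exposed over the C ABI so the
    // untouched Zig (or C) code keeps calling it during the migration.
    #[no_mangle]
    pub extern "C" fn checksum(data: *const u8, len: usize) -> u32 {
        // SAFETY: the caller must pass a valid buffer of `len` bytes.
        let bytes = unsafe { std::slice::from_raw_parts(data, len) };
        bytes.iter().fold(0u32, |acc, &b| acc.wrapping_add(u32::from(b)))
    }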

Without doing the analysis of what the core issues were in the first place, the author of Bun can make no claims towards the rewrite. He claims to have fixed flaky tests and improved memory safety. Where is the analysis that shows this? Where is the proof and data? Does he even know where the issues in the Zig codebase were at? I saw a commit where a test had a one second sleep put in place.

Compare this to say the Racket rewrite where a significant portion of the C core was replaced by Chez Scheme and Racket itself. There were several blog posts doing both pre- and post-analysis, and Racket has far less users than Bun.

This rewrite is totally unprofessional and has been poorly and even antagonistically communicated. The author was on this site just days ago telling everyone to relax and that he'd probably throw out this code, and that was even after it had been brought up that this wasn't pre-communicated to users. If I depended on Bun, I would migrate off immediately.

So I push back on the idea that this is the way to do a rewrite like this.

>This is the same in COBOL-to-Java ports

It isn't, because those guys didn't think a naive 1:1 machine translation would give them the benefits of Java, which somehow the people involved in this Rust rewrite seem to think they've already gained despite the virtually identical code.

If the whole point genuinely would have been to do a purely mechanical translation they could and should have written a transpiler, which would have had significantly higher correctness guarantees than this given that it'd be deterministic, but of course that would have defeated the PR purpose of this whole thing, which just looks like a marketing for Anthropic frankly

You gain some benefits. You could in theory gain benefits in compilation speed, portability or even memory use and execution speed, from an automatic language translation. But everyone, including the bun people, understand that you certainly don't get code clarity benefits, and safety benefits is extremely dubious.

> If the whole point genuinely would have been to do a purely mechanical translation they could and should have written a transpiler, which would have had significantly higher correctness guarantees than this given that it'd be deterministic, but of course that would have defeated the PR purpose of this whole thing, which just looks like a marketing for Anthropic frankly

If it were just a marketing stunt you wouldn't have all but a fraction of a percent of the test suite passing, with the remaining bugs being realistically very fixable, and everything written in a language whose type system gives far more guarantees than COBOL ever could.

You're being extremely negative about this whole endeavour without looking at the evidence that this effort is going far more smoothly than expected, and maps with many people's experience with using LLMs for tasks like these.

>You're being extremely negative about this whole endeavour without looking at the evidence that this effort is going far more smoothly than expected

no I'm being negative because as I just said, if you want to do a purely syntactic translation you don't even need an LLM, that's called transpilation and we've been doing it programmatically for decades.

This is the kind of thing that looks great to people who can't program, think this is some new superpower unlocked by the mystery magic of LLMs and that is exactly the kind of impression Claude wants to sell.

Transpilation won't get you to 99.8% of a comprehensive test suite passing on a 700K+ line codebase in a week (and maybe not at all), and that's assuming transpilation is practical for the language pair in question. So if you remotely want these kinds of results, then you most certainly do need an LLM.

There are literally formally verified language transpilers out there today. They can get you 100% coverage without "cheating" like LLMs tend to do by modifying test suites to pass, etc.

I'm currently using an LLM in my day job to accelerate such a 1:1 translation, and it's certainly "working"/making progress, but God I wish I had a formally verified machine translator instead of this probabilistic bullshitting LLM.

Don't get me wrong, it's extremely helpful and impressive in what it can do. But I trust it somewhat less than if I had done it myself, and for good reason. The lies I tell myself tend not to take down production. The lies my LLM tells me do however.

I mean, no one is forcing you not to use a transpiler, right? If it was quicker to use one, or to build a specific, limited one for your existing codebase and run it, then you would certainly have done that already.

Sadly none is available for my current use case. Building one is so far out of scope that it'd be the most epic yak shaving of all time. If this was a personal project I would consider it. My personal projects are all about the journey and not the destination so side quests are all part of the fun. Not true for my day job however...

A. Transpilation is not 100% compatible, because many idioms in some languages cannot be directly translated to others. The lifetime system in Rust disallows a lot of constructs coming from languages with more relaxed constraints. Ironically, transpilation will produce code with worse semantics than an LLM.

B. At this point it's clear that LLMs reason very effectively about code and its intent. If you haven't asked Claude Opus with Max Reasoning to do this, I suggest you give it a try, because the results are pretty fantastic.

If push comes to shove, you could probably still ask an LLM to generate transpiler code, if you're so inclined, and then have it fix the remaining "edge cases" afterward, right…?

[deleted]

[dead]

It's worth pointing out that "unsafe" in Rust is not a very sound concept: it's not like a monad or "function colour" whereby the compiler can say "this code ultimately calls unsafe". It's more like a comment on steroids; you use unsafe inside a function, write a comment about it, and no caller of that function has any idea that it's calling unsafe code.

Yes, the point of unsafe is that you promise it's safe, you promise to preserve the necessary invariants to make it safe to call no matter from where. It was never supposed to "taint" all code that calls it, that would defeat its purpose. It's sound enough, it's just not at all trying to do that.
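A minimal sketch of that invisibility:

    // To callers this is an ordinary safe function; nothing in the
    // signature reveals the unsafe block inside. That's by design: the
    // author, not the compiler, vouches for the invariant.
    pub fn first_byte(v: &[u8]) -> Option<u8> {
        if v.is_empty() {
            None
        } else {
            // SAFETY: index 0 is in bounds because v is non-empty.
            Some(unsafe { *v.get_unchecked(0) })
        }
    }

    fn main() {
        println!("{:?}", first_byte(&[7, 8, 9])); // no unsafe at the call site
    }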

Half of the files contain the 'unsafe' keyword? That doesn't seem like a good rewrite. What is the point of rewriting into Rust if ~half of your code is still unsafe?

Bun is fundamentally a boundary-heavy system, and it also rolls its own version of a lot of things that people typically use via libraries, where the unsafe is hidden (no async, memory arenas, etc.). It also uses FFI heavily, which requires unsafe.

It also looks like the top 2 maintainers are currently actively working on getting the amount of unsafe down and it's going down quickly.

If the unsafe can be iteratively removed and the final code is of reasonable quality that seems like a sane strategy. Any large migration just needs to be doable incrementally so progress can be made.

1. Rewrite from Zig to Rust, staying as close to the Zig as you can.

2. Turn it into idiomatic Rust.

1. Get hired into a company where you have a solid bet on making multi-century lasting generational wealth (>$50,000,000).

2. Every waking moment do everything in your power to boost the company that might give you the ability to define the direction of technology for the rest of your life.

3. Use the only thing you have (bun) to help push you in this direction and do things to help boost LLM marketing (a technology that already deeply struggles to find customers and has to rely on welfare (lucrative government contracts) to make sales).

---

Honestly think this generation of tech workers in SF are more evil than those that worked at Google + Facebook in the early 10s.

> a technology that already deeply struggles to find customers

As far as I know it's the opposite, Anthropic struggles to satisfy demand, they have tons of paying customers and their customer base is growing fast.

Wow as far as you know? That settles it then! Just ignore this:

https://www.flyingpenguin.com/wheres-ed-anthropic-told-court...

So, your link shows that they probably have like $1 billion in sales per month (but they publicly overstated this by 30%), and that's the struggle to find customers?

There are tons of posts and reporting about Anthropic's problems with meeting demand, usage limits (on paid plans, especially during peak hours), fast growth (your link confirms that), and problems with infrastructure.

Some links:

https://uk.finance.yahoo.com/news/anthropic-throttles-claude...

https://techcrunch.com/2026/03/28/anthropics-claude-populari...

So the takeaway here is that they scaled to just over $5bn instead of $6.6bn in revenue in just a few years…? Still sounds like plenty of demand exists?

What does that have to do with rewriting from Zig to Rust??? This thread is what's pushing LLM marketing, not the rewrite itself.

If the rewrite is just a stunt and it will crash and burn it will do that whether we spend our free (or work) time writing comments. If there is any hype around this particular topic, it's happening here not in the GitHub repo.

This is exactly the case here.

The author of Bun is a Thiel Fellow, so he's already been trained in The Way.

People are trying to wash away the recklessness of this rewrite by applying engineering principles the author themselves didn't apply. It's like trying to make sense of a certain president's words. There is a lot of analysis missing from before this rewrite, during it, and after it. And given that Zig and Rust can interoperate with each other via C, a wholesale rewrite is even more bizarre.

I’m honestly confused. What is it that you think makes these workers “more evil” than Google and Facebook workers from the early 2010s?

Google and Facebook workers just made a lot of cash and mostly made everyone's life harder with Leetcode and bad interview processes; they didn't threaten and actively work to put millions of SE on the street.

> they didn't threaten and actively work to put millions of SE on the street

Programmers in the 90s weren't less evil or had a stronger moral compass. They simply didn't have the opportunity to reduce the need for their fellow developers on a massive scale. They (we) would have, had we had the chance.

They (we) did it to tons of other industries. And we collectively patted ourselves on the back, saying that automation is a good thing and we're the good guys for doing it and people who lost their jobs will adapt and maybe they should just learn to code.

Now it's happening to (some of) us and suddenly it's evil?

No. The point is: programmers are whores. We like to act all righteous on forums, but very very few of us care enough about the consequences of our code to do something about it.

We either don't think about it ("what could go wrong?"), don't care about it (eh), justify it ("I need to eat!!!", "I'm just following orders"), or actively embrace it ("It's the future!").

> Programmers in the 90s weren't less evil or had a stronger moral compass. They simply didn't have the opportunity to reduce the need for their fellow developers on a massive scale. They (we) would have, had we had the chance.

Nah. The fact that such opportunity wasn't available attracted a different sort of person.

[flagged]

And definitely not more evil than the workers at current Meta.

> What is the point of rewrite

To win a news cycle.

For the foreseeable future, the AI market competition is not about which product can provide the most valuable utility to users. It's about which product can hold the protective aura of social media and investment zeitgeist while competitors buckle under the strain from unfulfilled hype and over-leveraging.

Utility, engineering, efficiency... these are all menial details for the winners to reluctantly iron out in 2035.

Bannon’s ‘flood the zone’ strategy applied to AI.

unsafe just means that you take responsibility for the safety of the code contained within. Calling into non-Rust libraries has to be wrapped in unsafe. Making syscalls has to be wrapped in unsafe.

Bun needs to interact with FFI code. This gets wrapped in unsafe blocks.

There are many places where a JavaScript interpreter and library would need to make unsafe calls and operations.

It doesn't literally mean the code is unsafe. It means the code contained within is not something that can be checked by the compiler, so the writer takes responsibility for it.

There are many low-level data-munging and other benign operations that a human can demonstrate are safe, but that need to be wrapped in unsafe because they do things outside of what the compiler can check.
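
A minimal sketch of that shape (a hypothetical wrapper, assuming a Unix target where pid_t is an i32): the FFI call must sit in an unsafe block with a SAFETY comment, while callers of the safe wrapper never see any of it:

    extern "C" {
        fn getpid() -> i32;
    }

    /// Safe wrapper: callers can't tell there's unsafe inside.
    fn current_pid() -> i32 {
        // SAFETY: getpid takes no arguments, has no preconditions,
        // and only reads the current process ID.
        unsafe { getpid() }
    }

Which is also exactly the property complained about upthread: from the outside, `current_pid` looks like any other safe function.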

There's actually a good example of this in the rewrite [1], in `PathString::slice`. They are doing an unsafe operation to return a slice that could be a use-after-free, if the caller had not already guaranteed that an invariant will remain true. Following proper rust idiomatic practices, claude has added a SAFETY comment to the unsafe block to explain why it's safe: "caller guarantees the borrowed memory outlives this".

Now, normally, you'd communicate this contract to your API users by marking the type's constructor (PathString::init) as "unsafe", and including the contract in its documentation. Unfortunately in this case, this invariant does not exist - it appears to have been fabricated out of thin air by the LLM [2]. So, not only does this particular codebase have UB problems caused by unsafe code, the SAFETY blocks for the unsafe code are also, well, lies.
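
For reference, the conventional shape of that contract, sketched with hypothetical names (not the actual Bun types), puts the `unsafe` on the constructor so every caller is forced to acknowledge the invariant:

    pub struct BorrowedBytes {
        ptr: *const u8,
        len: usize,
    }

    impl BorrowedBytes {
        /// # Safety
        ///
        /// Caller must guarantee the memory behind `ptr` outlives
        /// this value; otherwise `slice` is a use-after-free.
        pub unsafe fn new(ptr: *const u8, len: usize) -> Self {
            BorrowedBytes { ptr, len }
        }

        pub fn slice(&self) -> &[u8] {
            // SAFETY: upheld by the documented contract on `new`.
            unsafe { std::slice::from_raw_parts(self.ptr, self.len) }
        }
    }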

[1] https://github.com/oven-sh/bun/blob/63035b3e37/src/bun_core/...

[2] https://github.com/oven-sh/bun/blob/63035b3e37/src/bun_core/...

`PathString` worked the exact same way in our Zig code, with less visibility from the compiler & type system. And yes, it will be refactored heavily (or deleted overall) in the next week or so.

One potential way to solve this in a principled manner is to turn at least some "unsafe" annotations into ghost capability tokens that are explicitly threaded through the code and consistently checked by the compiler. Manufacturing the capability could itself be left as an unsafe operation, or require a runtime check of some kind.

You already see this in some cases, for example the NonZero<T> generic type can be viewed as a T endowed with a capability or token that just says "this particular value of type T is nonzero, so the zero value is available for niche purposes". But this could be expanded a lot, especially with some AI assistance.
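
A minimal sketch of the idea (all names hypothetical): a zero-sized token with a private field, so it can only be minted by a checked constructor (or an unsafe one), and any API relying on the invariant has to be handed the proof explicitly:

    /// Zero-sized proof token; the private field means it can only
    /// be minted inside this module.
    pub struct Outlives(());

    impl Outlives {
        /// A runtime check stands in for whatever validation applies;
        /// an unsafe constructor could mint the token directly instead.
        pub fn check(invariant_holds: bool) -> Option<Outlives> {
            invariant_holds.then(|| Outlives(()))
        }
    }

    /// APIs that rely on the invariant demand the token, so the
    /// compiler threads the obligation through every caller.
    pub fn use_borrowed_memory(_proof: &Outlives) {
        // ... the operation that needs the invariant ...
    }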

This already happens all the time in rust, including in the standard library. The typical pattern is to define your CheckedType to be

    pub struct CheckedType(UncheckedType);

e.g. where its inner field is private. Then, you only present safe constructors that check your invariant, and only provide methods that maintain the invariant.

For a concrete example, String in rust is a Vec<u8> with the guarantee that the underlying bytes correspond to valid UTF8. Concretely, it is defined as

    #[derive(PartialEq, PartialOrd, Eq, Ord)]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[lang = "String"]
    pub struct String {
        vec: Vec<u8>,
    }

You can construct a string from a vec of bytes via

    fn from_utf8(vec: Vec<u8>) -> Result<String, FromUtf8Error>;

as well as the unsafe method

    unsafe fn from_utf8_unchecked(vec: Vec<u8>) -> String;

Note here that there isn't a separate capability/token, though. A separate token is typically viewed as bad practice in rust, as you can always ignore checking it. See for example rust's mutexes: Mutex<T> carries the data (T) that you want access to itself, so to get access to the data, you must call .lock(). There is a similar philosophy behind Rust's `Result` type: to get the data underlying it, you must handle the possibility of an error somehow (which can include panicking upon detecting the error, of course).
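
A tiny sketch of the Mutex point: the inner value is only reachable through the guard returned by lock(), so "forgot to take the lock" is unrepresentable rather than a code-review item:

    use std::sync::Mutex;

    fn main() {
        let counter = Mutex::new(0);
        // The only way to reach the inner value is through the guard.
        let mut guard = counter.lock().unwrap();
        *guard += 1;
        // The lock is released when `guard` goes out of scope.
    }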

Yes, or you could review the code.

It’d only take an hour if you reviewed a million lines per hour

[Sorry guys, I couldn't review this code because I generated it all]

Even before AI, deterministic checks by compilers are almost always better than "review the code"

"review the code" as a solution will eventually fail and cause a problem, even pre-AI.

The entire point of unsafe blocks and SAFETY comments is that they are easy for humans to find and audit, but not compiler checkable. If it can be compiler-checked by some clever token system, then ... it's just plain safe rust, and you don't need to document any special safety invariants in the first place

even when you can review the code, it's good to have the compiler check for you. This is for similar reasons why it's better to have CI check correctness on each code change, vs testing the code thoroughly one time, and then being careful going forward.

> unsafe just means that you take responsibility for the safety of the code contained within.

In this case it means you delegated the responsibility to a notably flaky heuristic.

> a JavaScript interpreter

Bun is not a Javascript interpreter. But I do see the point.

Someone correct me if I'm wrong, but it's unlikely they wrote this first initial version of Rust and will leave it unchanged as-is. What's there now is a step in a long process, not the final destination.

The point is to serve as marketing for Claude. Absolutely nothing else.

Rust has a ton of other features besides safety, like exhaustive checking of enum variants and the ability to avoid null with Option and Result.
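
A minimal illustration of both (nothing Bun-specific):

    enum Mode { Dev, Prod }

    fn banner(mode: Mode) -> &'static str {
        // Adding a third variant later turns this into a compile
        // error until every match site handles it.
        match mode {
            Mode::Dev => "dev build",
            Mode::Prod => "prod build",
        }
    }

    fn first_char(s: &str) -> Option<char> {
        // No null: absence is explicit in the return type.
        s.chars().next()
    }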

Zig has these modern language features too fwiw.

I think the goal was to do a massive rewrite for Anthropic (they acquired bun) and show that rewriting projects from lang -> lang with Claude can reduce security vulnerabilities to help with the hype for an IPO.

I don’t use/know Rust so I can’t comment on the quality, but there was a public security review that found issues with the new Rust code: https://x.com/SwivalAgent/status/2054468328119279923

This is an interesting experiment but I’m skeptical of any claims of success by Jarred/Anthropic due to the incentive to hype agents. There’s probably a trillion dollars at stake with the IPO. And Anthropic seems to be developing this part of their business with Mythos and the super review features.

But I’d like to see the same experiment done on a project without so much riding on the story being a success.

There's a reasonable request to run the same analysis for the Zig version of the code as a comparison.

In lieu of that, it seems the Swivel devs ran an analysis on Tigerbeetle, one of the other major Zig projects, and found only 7 medium/low priority issues:

https://xcancel.com/SwivalAgent/status/2054063291266113994

To clarify, those are things an LLM considers to be issues, and LLMs can make mistakes.

Some of those are clear false positives, others I need to revisit tomorrow to say one way or another.

That sounds like a starting point and an honest translation. If it was originally unsafe and suddenly became safe immediately after the rewrite, that would mean breaking existing behaviors.

Better to know where memory bugs may happen than to have them potentially everywhere. Also, the Bun team is looking to reduce it by a large margin. Since it was a line-by-line port, there is plenty of room for improvement. By the first Rust release, a significant amount of it should be resolved.

Wouldn't it be better to port more idiomatically? Otherwise, you've done nothing but port all the existing bugs while creating new ones.

That's one problem with LLMs. I had Claude write a function in Python for me that did a bit of math, because, like most programmers, I don't know math.

The function worked perfectly mathematically speaking, but after a bit of research I realized a human being would never write a piece of code so bad.

I don't remember exactly, but it looked like this:

    import math
    from functools import reduce

    # (this sat inside a function that needed the LCM of several values)
    denominators = [...]

    def lcm(a, b):
        return abs(a * b) // math.gcd(a, b)

    return reduce(lcm, denominators)

There are 2 problems with this code.

First, that is the correct way to calculate the LCM that you'll quickly learn if you google it (or if you ask claude). The problem: math.lcm already exists! Any human being writing this would have paused to think "wait, Python has math.gcd, does it have math.lcm as well?" And then they would have just used that.

Second, you don't even need reduce. You can just math.lcm(*denominators). A human being would have realized this when intellisense showed it takes any number of arguments instead of just 2.

Pretty much every time I've used an LLM to generate code, it has produced a rough draft, barely held together, that needs to be completely rewritten later. With Qt, for example, it generated two push buttons for Ok/Cancel when QDialogButtonBox exists for exactly this and even orders the buttons to match the typical system order. And when generating a combo box that associated labels with objects, it tried to figure out which object was selected from the label text of the items, when you can just set an arbitrary object on each item and get it back later with .currentData().

Every single time it makes me think: yes, this works. But no, not like this.

I can't imagine what 1 million lines of this feels like.

Sure hope Mythos is as world beating as they claim, they’re gonna need it now.

We got memory safety at home!

At home:

> 10428

Remember the top comment to this Hacker News thread? https://news.ycombinator.com/item?id=48016880 "This is an overreaction." "302 comments about code that does not work." "We haven’t committed to rewriting." "There’s a very high chance all this code gets thrown out completely."

Well. That was about a week ago.

> +1009257 -4024

Bun is now over 1M lines of Rust code.

This is approaching the size of the Rust compiler itself, except that BunJS is mostly a JavaScript interpreter wrapper + a reimplementation of the NodeJS library (Rust STD wrapper).

I think BunJS is becoming the canary for software complexity management in the LLM era.

> mostly a JavaScript interpreter wrapper

Not accurate. Bun is a batteries-included JavaScript & CSS transpiler (parser), minifier, bundler, npm-like package manager, Jest-like test runner, as well as runtime APIs like a builtin Postgres, MySQL and Redis client. This is naturally a ton of code.

Don't forget the image rendering library!

Now that Bun can leverage Rust, do you think some of this code will get disaggregated? E.g., Bun could use the swc crates.

It wouldn't have been that hard to do that from Zig if they'd wanted to. They don't, because they want to do everything themselves so that it works exactly the way they want (except the core JS engine for which this is infeasible—though even that has custom patches). After all, there are already plenty of libraries on npm for those other parts of the stack and they do work in Bun.

[dead]

Bun is not a JavaScript interpreter, it's "only" a reimplementation of the NodeJS library + various other libraries. Bun uses JavaScriptCore as its JS engine. So Bun itself does (or at least should do) no JavaScript parsing, interpreting or JITing.

EDIT: I misread, sorry! You said "JavaScript interpreter wrapper", which is correct.

No, it does parsing and a bunch more. The Bun founder says it best in this comment:

"Bun is a batteries-included JavaScript & CSS transpiler (parser), minifier, bundler, npm-like package manager, Jest-like test runner, as well as runtime APIs like a builtin Postgres, MySQL and Redis client. This is naturally a ton of code."

https://news.ycombinator.com/item?id=48140921

Bun is now almost twice the size of JavaScriptCore, too, by line count, after this.

This is the 'world class' engineering that Jarred claims he can't hire Americans to do, by the way https://x.com/jarredsumner/status/1969751721737077247. This company is parasitic to its literal (javascript) core.

That’s what they said - “JavaScript interpreter wrapper”.

You're right, sorry! I completely missed the word "wrapper" somehow.

No worries. :)

I'm not sure if it's just the leading '+' or if there are other factors for phone number detection on iOS, but on mobile the line count changes are underlined and I can tap it to start a call, which, if it is because of the diff size, is something I find pretty amusing.

Apple has had a feature called Apple Data Detectors since the 90's that looks for different patterns in text and allows you to perform actions on them.

So if the text includes a phone number, email address, flight number, package tracking number, street address or other pattern in the data it is underlined and allows you to perform one or more actions.

The patterns it looks for and actions it takes are extensible by developers.

If you don't care for it, you can turn it off.

> +1009257 -4024

    +1 (009) 257-4024

I think it just lines up with the typical size of a phone number and the '-' is interpreted as a separator. Just a simple regex probably.

Maybe it's the phone number of the vibe coding police?

The leading “+” is not needed. Numbers with seven digits are automatically hyperlinked (possibly depends on locale).

123456

1234567

12345678

Interesting. Where I am, both six and eight digit phone numbers are valid, but seven digit ones are not, and yet that's the only one that gets linked for me. US assumptions bleeding through, I assume.

Interestingly, the entire line gets formatted once it reaches seven digits, +lines and -lines both, so I guess the -lines is just interpreted as a dash. But your eight digit string doesn't. Perhaps it's not interesting, though I've never really given it a second thought before.

There’s certainly some regex or similar involved that tries to recognize phone numbers, and then hyperlinks the whole thing. My point was that it’s not solely the plus sign that is triggering it.

The Bun codebase had a similar number of lines of code before the rewrite.

There's nothing unusual about a rewrite coming in with a similar LOC number.

I think the unusual thing is that it was written in a week. I highly doubt that they read and understood all 1M lines. But if it works and people use it, what does that mean for software? Should we still care about the code that’s written? Should we even look? I’ve always thought so, but maybe I’m just biased.

I think we should care way more about what the validation story of the code is. The obvious question: does it all work? I'm happy to not look at any code if we have good ways to validate what is there. The other thing I care about is the architectural structure of the code. Given it's a port, I don't think that would have changed.

I was going to comment this same thing.

I don't know enough about what Bun does... But Rust is so insanely complicated, it's hard for me to wrap my head around how Bun is equally complicated.

Complicated things can often be expressed very succinctly - the hard part is in understanding why the short program does what it is supposed to.

Simple things often take a lot of space, simply because there's a lot of similar but different simple things that each need to be written down.

Lines of code just isn't a good measure of "complicated".

They are complicated in different ways. The rust compiler doesn’t include redis, Postgres, and S3 clients for instance.

If anything, it's a little surprising that the Rust code isn't significantly larger because I tend to think of Rust as requiring somewhat more boilerplate than JS.

The code was using Zig before, not JS.

Ah fair point. I don't have a sense of which of those are more verbose.

Zig is, typically. And yet here, the rust rewrite is around 60% more lines of code.

Not to mention how trigger happy LLMs can be when it comes to being overly verbose and adding unnecessary bits even with explicit direction not to do so.

1MLOC for a JavaScriptCore wrapper is a great example of what agents are capable of.

Code is cheap. Only the quality and maintenance is interesting. Those will be seen later on.

I would not be surprised if the next major step for them is to audit the code and trim the fat.

> I think BunJS is becoming the canary for software complexity management in the LLM era.

Yeah, Cursor did the same thing, bragging about how many lines of code they managed to produce for a semi-working browser, completely missing the idea where less code is better, not the other way around.

I think their point was that the project is complex, with the implicit assumption that the complexity is to a large degree inherent.

Even if it's mostly accidental, and the code is overengineered slop (which it is), the system being able to decompose a problem and deliver something is impressive in terms of stability: it wasn't sucked into rewriting everything from scratch every time it would run into issues, it didn't have infinite subagent recursion with a one-agent-per-line type workflow, etc.

[dead]

you can easy fix this by MAKE NO MISTAKES, DO NOT HALLUCINATE under your zig2rust.md skill agent flow /s

About 9 days ago, Jarred wrote that it was far from certain that this would merge and that it was an overreaction. Ironic.

Model open source leadership. Imagine the meltdown if Linus says the Linux kernel is not going to be rewritten and then one day wakes up and merges a full machine-assisted rewrite in Rust.

As long as it was still GPL and it wasn't just license washing, I'd be elated.

You won't be when you can't boot your system anymore on x86.

When you don't own your company any more anything you say can be safely ignored. It was obvious that the token spend will need to be justified.

They've been shady since day one, claiming wild performance improvement compared to their competitors and never proving any of them.

You don't think installing NPM packages 2 seconds faster, something most working devs do once a month, is amazing?

Yes it is amazing, and it was and is a big deal

- Working dev

Also once a month? Really?

I mean, that doesn't exclude the outcome that it gets merged.

That doesn't mean he was lying. Just that things changed.

It was uncertain then, and not so uncertain now.

> This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.

I would say it is reasonably clear they had already committed to rewriting at that point.

The possibility that that particular code might be thrown out was potentially true, but also totally unrelated to the previous statement.

At the end of the day, whatever, but this feels a heck of a lot like “ah, we didn't mean for this to be public yet” rather than “this is just a random experiment”.

AI companies love AI stories.

It is an AI company.

:p

[1] - https://news.ycombinator.com/item?id=48016880

[flagged]

Edit: my mistake. Sorry for misreading.

You've crossed into personal attack with this, and that's not allowed here. Please don't.

https://news.ycombinator.com/newsguidelines.html

Which persons were attacked by their comment? The "them" is confusing me – I interpreted it as Bun the organisation / Anthropic?

I'm confused too as to how my comment can be interpreted as a personal attack on anyone.

I was indeed talking about Bun as a whole and not any particular person. I'd even include the Bun community in my "them".

But I'll take dang's word for it and will watch what I say.

Ah, I thought you were referring to a person. I'm sorry for misreading you.

It's still a bad HN comment, I'm afraid (denunciatory rather than curious, for one thing), but it wasn't a personal attack and not a post that would normally clear the bar for a mod reply.

I think Jarred's response at the time was intended to cool the ridiculous hype when the branch first appeared!

[flagged]

I don't know if the intent was to deceive, but the comments certainly had the effect of deceiving me. I came away from that first thread thinking, "Ah, so the 'story' here is that someone on the project tried an experiment on a branch that they probably should have put in a branch on their personal fork." I was no longer thinking it was a serious possibility that an AI rewrite would get merged.

I'm actually excited about somebody experimenting with automated translation, but I'm afraid there will be lots of backwards compatibility issues.

I started looking at the commits, and it's basically solving the "tests not pass" problem by changing the tests themselves. The real work of making it work on programs that are already deployed will be just starting now.

The only silver lining I see is that the server side JS community for some reason is already used to breakages all the time.

The whole idea that my RUNTIME contains code that a single human hasn't looked at does make me uncomfortable, but if this actually works without a ton of issues it's pretty remarkable.

Don't worry, no one reviewed open source code before AI either. Basically nothing changed about the trust model.

The person who wrote the code reviewed it as a part of writing it and going through the PR process.

You think Jarred reviewed 1M lines of code in 9 days?

No that's my point, Jarred didn't write the code. Before AI, at least the person who wrote the code "reviewed" it (as being aware of the code you wrote was a necessary part of the process of writing code).

The speed of the change did. This is the “climate has always been changing” argument climate deniers make. It is a true statement which is still a lie by omission. Climate deniers purposely ignore that the climate has never changed at the current rate, and AI-stans neglect to mention that before AI nobody was merging a 1M+ lines of code in one go.

> I started looking at the commits, and it's basically solving the "tests not pass" problem by changing the tests themselves

Not sure if these decisions were made by the LLM, but I've always felt that Claude is more prone to doing "shady stuff" like modifying tests than finding correct solutions to problems.

GPT/Codex is more honest in this regard.

Yeah, Claude is very creative in finding ways of "solving" problems that go against what the user probably intended.

Having said that, after looking at some of the test changes, they seem to be minor things, like changing timeouts, not changing the actual intended semantics of the tests. But it's too much code to review everything, so I might be completely wrong about that, and in real-world usage, even minor changes like these will cause issues.

I doubt it will end up as a stable release very soon, but I'm happy to be proven wrong. I have some skepticism about this whole rewrite; Jarred Sumner has an enormous internet following and it feels like an ad.

How do you wish to define "ad", and why does it matter? If I tell you I had lunch, I mean, okay, great. If I tell you I had a delicious Coca-Cola with my lunch, sure. If I happen to work at Coca-Cola, does that now become an ad? At what level does it become an issue? And what is the issue?

If you work for Coca-Cola then yeah, there's reason to question your intent, if only because you aren't objective due to your proximity to Coca-Cola.

> solving the "tests not pass" problem by changing the tests themselves

https://github.com/oven-sh/bun/pull/30412/changes/68a34bf8ed...

This is great! Just add a random sleep(1) to a test, don't worry about it, it's going to be fine!

On the other hand, the sleep fits the test description better: "should allow reading stdout after a few milliseconds". Even if 1 != 'a few'. It's possible the part of the commit reverted here, https://github.com/oven-sh/bun/commit/a42bf70139980c4d13cc55..., defeated the purpose of the test by removing the sleep. I don't think adding the sleep back is an example of AI cheating.

Strange test though either way.

To be fair the commit message `revert proc.exited change in spawn.test.ts` suggests the sleep was there originally.

I wish I could take a look through the tests to see if anything substantial actually changed, but I can't even get github to load the diffs for me.

> I started looking at the commits, and it's basically solving the "tests not pass" problem by changing the tests themselves. The real work of making it work on programs that are already deployed will be just starting now.

Wow, this is definitely quite something for sure.

Can Jarred comment on whether he has read the commits, or respond to your comment? If this turns out to be correct, it has basically made me lose the small faith I had in what Bun is doing.

It's OK, we'll see how it goes. He and Anthropic are giving it to us for free, and nowadays just forking the old version is easy if a project needs that. Even maintenance is much easier using LLMs.

I'm happy it's not a project I'm depending on, but a large enough project had to try this at some point so that we all can learn from how it goes.

I think this is why Anthropic bought Bun, so that they can sell big code translation as a feature for all the banks with COBOL code that they have wanted to get rid of for a long time.

Still, those banks / enterprises won't appreciate the number of unit test changes.

And I agree with another comment that Codex xhigh is much better for these kinds of tasks, but still hard on this kind of scale.

Jarred has commented on this elsewhere in the thread, basically claiming the parent you replied to is outright lying: it removed no tests and did not meaningfully change annotations to reduce coverage or effectiveness. It added additional tests and made a few changes to hard-coded values due to differences in, as an example, how LLVM and Zig handle stack frames.

The MR is right there, linked at the top of this page. You can check who is telling the truth.

That said, I don't know how anyone is actually claiming to have done that. All day, the size of the MR makes the diff take too long to load and GitHub dies. I'll have to pull it later to check myself.

> it's basically solving the "tests not pass" problem by changing the tests themselves.

False.

0 test files were deleted. 0 pre-existing tests were skipped, todo’d, or had assertions removed. 5 new tests were added in test.skip/test.todo state to track known not-yet-fixed bugs in the port that lacked test coverage before.

The merge changed 28 test files in total.

+1,312 lines

−141 lines

Most of that +1,312 is new tests.

The depth-of-recursion tests for TOML/JSONC parsers went from 25_000 -> 200_000 because Rust’s smaller stack frames (LLVM lifetime annotations let the optimizer reuse stack slots) mean 25k levels no longer reaches the 18 MB stack on Windows.

We're keeping this honest and chill, no worries.

What is "most of that"?

Why did you feel the need to produce so much detail about a single category of tests?

That's great!

It's too bad you haven't structured the commits and pull requests a bit differently so that it's easier to review the exact changes, but I hope it goes well.

For example, doing the test refactorings in a first pull request, and using something like test.xfail, which first fails and then succeeds after the merge (but the test code itself doesn't change).

Also I have seen some tests getting stricter, which is again not a problem, but separating to a different pull request would have improved the reviewability significantly for a runtime that many people and companies depend on.

I'm sorry you were downvoted by HN and your comment got marked "dead"; that's not the way to review things.

[dead]

[deleted]

in tsz[0] 100% of tests pass, yet I have a ton of bugs. I don't think any software out there is fully tested, really. I'm experimenting with this idea as well. So far I've learned a ton.

I'm convinced the future of writing code is heavily LLM assisted

[0] https://tsz.dev

Wow. This is going to be interesting to follow. There's absolutely no way any of this code was reviewed, but maybe we're in a post-human world now where you can trust the models to write and review the code. This is like Gastown but on a higher profile project. Will be fascinating to see how this project is able to add new features going forward (or even _if_ it will be able to).

Does anyone know how exactly Bun is used by Anthropic? Is it a part of Claude Code? I'm more than slightly worried about using Bun going forward myself, but I'm not sure to what extent that applies to using Claude as well.

> you can trust the models to write and review the code

You definitely cannot!

Reminds me of going on linkedin and seeing all these sales and product people who are talking big game about engineering now. Well yeah they are definitely producing something but not sure I'd call it "engineering."

You can trust them to flag some things during review that may or may not be relevant. But just like with human review and unit testing, you cannot guarantee the absence of bugs after an LLM code review. It's just another set of (virtual) eyeballs.

I trust them somewhat to flag bugs. I don't trust them to produce clean, maintainable code - even code maintainable by the LLM itself. Any sufficiently complex LLM changeset can be assumed to contain duplicated logic, method scope creep, and code changes without accompanying documentation changes that the model often will not catch no matter how many rounds of review you run. If those issues make it into a commit, the next time you ask the LLM to update some of the functionality that it introduced earlier, bugs will creep in.

I find that documentation creep is wildly better in AI-coded environments than human ones. You can deterministically force a documentation sync process on every PR; documentation rot has gotten way better.

It passed all the tests.

If you can't trust your test suite to catch an automatic language translation you shouldn't trust it at all. :)

Tests can only prove the presence of bugs, but not their absence. If the AI can access the tests, it can easily make them pass by just adding additional if statements. It doesn't mean the code is actually correct.

What if we only trusted the test suite a reasonable amount, instead of pretending trust must either be blindly total or nonexistent?

The entire underlying system has been replaced. The test suite is written around the current fuzzy edges and past problem areas, not every single behavior of the existing platform.

"If you can't trust your test suite to catch a hardware floating point arithmetic bug, you shouldn't trust it at all."

"If you can't trust your test suite to catch a JVM bug, you shouldn't trust it at all."

"If you can't trust your test suite to catch a recurring memory error, you shouldn't trust it at all."

It also modified many of the tests to make them pass in mischievous ways. You can't trust a test suite to catch regressions if the new version doesn't use the same test suite.

Do you have some examples?

Ah, I just learnt that you don't. Jarred's comment saying exactly that: https://news.ycombinator.com/item?id=48133806

I'll actually concede that, on a slower skim, some changes to the test suite and fixtures that first seemed suspicious to me indeed align with what those tests were doing previously, and I wish I could retract that comment.

I still think it's not such an impressive test suite as it's being claimed; which, if this actually works out, should say more about Claude's skill than the people driving it.

Gotcha. I'm genuinely curious: by "impressive", are you referring to coverage? I'd be grateful if you could say a few words about how it could be more impressive (e.g., if you indeed meant to talk about coverage, say what functionality/edge cases aren't covered as of now).

Our programming languages are bad at specification and verification, so the next best thing is property-testing for modeling (e.g. Hypothesis for Python) or, for the reference implementations, extensive "expect"/snapshot test cases (e.g. Cram).

Instead, I found a bog-standard suite with a single case per regression and very little actual modeling, although I wasn't expecting more. (I don't care much for JS, let alone Bun, so I can't point to features I'd like to see better tested, but I'm sure the issue tracker can do that job already.)

To be fair, our whole industry is really bad at this; most test suites are verification theatre, but now that machines can fill out implementations on their own, we should strive to properly model our requirements and limits so they can one shot what we intended. Otherwise we're left in an awkward middle in which we don't add much value over the AI fumbling around.

Thank you!

I think demonstrating broken behavior in the new build would be interesting if you have a non passing test from the original suite

A wise teacher once told me a good programmer looks both ways when crossing a one-way street.

Does anyone know how exactly Bun is used by Anthropic? Is it a part of Claude Code?

It seems to be used by anthropic as a way to shift the discussion window into it being acceptable that you yolomerge millions of lines.

the `claude` binary is essentially a packed copy of bun + the js code, so this will replace the native runtime part of claude code.

How's the test suite?

I will move the handful of my projects that use Bun to something else. I don't trust governance that permits this kind of reckless change.

Deno is amazing and doesn't get the love it deserves, in my opinion.

It doesn't need to be rewritten because it was written well in the first place.

Same, just gonna stick with node. On the other hand, the trial by fire will be interesting to see... long term I can only imagine the kinks will surely work themselves out

Wait till you hear about https://github.com/nodejs/node/pull/61478

This is a PR that has been getting reviewed since the end of January. The Bun port branch was created 9 days ago.

Yes, reviewed since January, has almost 400 comments, and 7 (seven!) approvals from core nodejs contributors.

I don't understand the point you're trying to make here.

Regardless of the outcome, this is such a disrespectful move towards the huge number of contributors who invested time and effort to learn the project and make it better. I hope the zig/dev community forks the project and continues the development. I'd rather use the fork than this project that has sacrificed its contributors for marketing purposes.

How is that different (in this sense) to any "slower" rewrites or other significant changes?

The difference is exactly the speed. Slowly transitioning from one thing to another gives the opportunity to contributors to get involved in the process.

So? Keep up.

Just because some set of hypothetical contributors want a slow-moving target and the maintainers want to be on Rust now, I'm supposed to be mad at the maintainers? Why?

I think they’re pointing out that maintainers would not care to continue.

You know, for some people the word _community_ doesn’t mean “my free developers.”

> this is such a disrespectful move towards the huge amount of contributors who invested time and effort to learn the project and make it better.

What? How?

You contribute to projects run by others with the understanding that others run the project, is this not the default assumption others have too when contributing to FOSS?

Is it disrespectful if my proposed feature was merged, but then later was removed because the maintainer just didn't want the feature anymore? In my mind, pretty clear it wouldn't, I'm only a contributor after all, not the maintainer or the person running the project.

> Is it disrespectful if my proposed feature was merged, but then later was removed because the maintainer just didn't want the feature anymore?

No, the big difference is that the described scenario does not require getting familiar with a new 1M LoC codebase written in a different language to be able to continue contributing to the project.

For who? What you say is true for everyone who doesn't know Rust (but knew Zig), and not true for everyone else, same as it always has been, for every single FOSS project out there.

So it's disrespectful because before you could contribute, but because of the direction of the project, you no longer can?

Does that also mean it'd be disrespectful to make projects more complicated and complex, because maybe someone who contributed initially doesn't know these new concepts, so introducing those would require this individual to learn about those things?

All of this still sounds like entitlement to me. Open source literally isn't about you, let people run their projects as they so wish, them making choices they think are better isn't disrespectful to anyone else, you're not forced to having to contribute to any FOSS projects.

> For who? What you say is true for everyone who doesn't know Rust (before Zig), and not true for everyone else, same as it always is been, for every single FOSS project out there.

Even if you are fluent in rust, it is going to require significant efforts to contribute to a new 1M LoC codebase.

> Open source literally isn't about you, let people run their projects as they so wish, them making choices they think are better

This is so far from reality. The power of open source comes from the contributors. Contributors are the most valuable assets of an open source project - without them, most of the free tools you use would be significantly worse - including bun. The reason my open source projects got somewhat successful is the community that formed around the projects. And it is hard to create a community when you give contributors no chance to participate in the project's direction, especially in such a critical decision with enormous consequences.

> Even if you are fluent in rust, it is going to require significant efforts to contribute to a new 1M LoC codebase.

Of course, but this is true for any project or any language; it can hardly be disrespectful of me to choose Clojure just because you don't happen to know it? That sounds crazy to me.

> Contributors are the most valuable assets of an open source project

You're talking about something else. Open source is literally about "this code has a specific license that allows you to do X", where X differs by the license. Contributors or not matters squat for whether some open source project is valuable or not.

Don't mix concerns here, you're talking about "open development" or something else, not specifically open source.

Sure it's hard to create a community and get contributors and what not. But a maintainer choosing a different language and people feel that being "disrespectful" instead of just "stupid" or "dumb"? No, give me a break, you run your projects your way, and let others run theirs that way, they're not made for you, they just happen to be available to you because someone was nice enough to make it so. Don't spoil that by acting so entitled about how they should maintain and develop their project.

> Of course, but this is true for any project or any language, can hardly be disrespectful of me to chose Clojure just because you don't happen to know it?

Nobody said that the problem is not knowing Rust. The problem is changing the whole stack of a project overnight. This requires significant effort to get familiar with, even if a contributor has all the experience in the world with the new stack.

> Don't mix concerns here, you're talking about "open development"

Call it however you want, bun could not be the tool it is without its >800 contributors.

I think most maintainers would rather you not contribute to their project if your contribution comes with the idea in your head that you're now a stakeholder who has some share in the project's technical direction.

Of course they're a stakeholder. They've made an investment of time and effort, and they're hoping that it will pay off. The question is whether a maintainer will respect that.

If you want to maintain sole ownership of something that >800 people contributed to, that reflects on you. People will judge you. Most maintainers would feel obligated to concede some control. But LLMs have intentionally aimed to devalue programming, so this transition is totally consistent with the new ownership. And it may be wildly successful, because they've got an unlimited supply of tokens for the foreseeable future.

But I'd say the opposite: Most maintainers would feel blessed to have a lot of contributors so invested that they felt a need to have a say in the direction of the project.

> No, give me a break, you run your projects your way, and let others run theirs that way, they're not made for you, they just happen to be available to you because someone was nice enough to make it so. Don't spoil that by acting so entitled about how they should maintain and develop their project.

Well in this case Jarred and Bun can run their project their way, and since they're not made for me, they can just happen to be available to someone else like Claude code and they can stay in their happy read-only land.

> Don't spoil that by acting so entitled about how they should maintain and develop their project.

Are you sure you even understand what entitled means?

> Open source literally isn't about you, let people run their projects as they so wish, them making choices they think are better isn't disrespectful to anyone else, you're not forced to having to contribute to any FOSS projects.

Tell me you've never worked on any meaningful OSS project.

Good luck to Bun, if I was in any of its contributors list, and not on Anthropic's payroll, I'd say goodbye and never touch the project with a ten foot pole. And I say this as an honest feedback, save your "don't let the door hit you on the way out".

As an educational thread, see this one from a week ago where Jarred again deflects from a merge decision and legions of foot soldiers attack anyone who predicted the impending merge:

https://news.ycombinator.com/item?id=48073680

Didn't age well, did it?

From "This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely." and what seems to amount to some experimental curiosity -- to merging the whole thing in 10 days!? This seems really crazy.

It'll never cease to amaze me how many bootlickers are out there that don't really care which boot to lick.

[deleted]

Love seeing the tests themselves getting modified, with random `sleep(1)` thrown around in a few of them. This bodes well, I pray some idiot at some large AI co actually ends up using this garbage in prod

Claude Code uses Bun as its runtime.

If this has been merged, I expect that Bun-rust is good enough to power Anthropic's internal agents to do live testing.

Jarred had tweeted that they're using the rust version internally with Claude Code

This era is hilarious. I just wish I didn't have to rely on code written by these idiots.

You don't, pi with a codex subscription is great. Mario Zechner is on the opposite end of that AI hype spectrum.

Until we can daily drive pi + qwen/glm/kimi.

If this goes wrong even in the slightest, the ridicule about a drug dealer getting high on their own supply will be neverending and grim.

not enough people are emotionally prepared for the possibility that it doesn't go wrong even in the slightest

It's going to work for the most part. Most people know that. It's a file by file, mostly function by function, conversion from one low level language to another with a very large test suite (with lots of Rust unsafe to work around differences). I've done that for C tools and it's fine, with some obscure edge cases here and there. The challenges are going to be making the new, very ugly, alien codebase idiomatic Rust in future and adding features or debugging the complex issues. I wish the developers luck. They're in for a slog.

Just to clarify, you did this for C tools using LLMs or using deterministic conversion tools?

I think given the novelty of this, a lot of eyes will be on it, so a lot of issues will be dealt with out of the gate. The problem will be when smaller projects that aren't in the spotlight think it's safe too and then do stuff like this after being encouraged by bun, and for those projects then lots of bugs will just remain unfixed. Basically a nation state adversary's wildest dreams came true today.

If that scenario happens it just means the collapse will be slower but still inevitable as anecdotes pile up and reach critical mass of common knowledge.

Yeah. I'm just suggesting Bun won't blow up spectacularly as anti-AI people are expecting it to.

Having seen some of the diffs, it's already going wrong in my view.

If most of the glaring problems are addressed (massive unsafe usage), and metrics show improvement (fewer crashes), then did it really go wrong? The fact that the code is not idiomatic is less interesting, because that can be addressed incrementally. Let's wait 3 months and reflect.

I'm thinking regressions and broken tests. Bun is already known to segfault a lot and their existing tests were rather lackluster, the Rust port being just as unsafe would be the least of their problems.

This assumes that the memory safety bugs in the unsafe Rust port are the same as the Zig codebase. A total rewrite with so little review is virtually guaranteed to introduce many new bugs which very well may be more severe than the old bugs.

Curious can you elaborate on this?

It will not go wrong in obvious ways, LLMs are actually not that bad for language translation, and they have big test coverage; any issues will be non obvious. The question will be more long term maintainability, how fast will the whole thing collapse.

However, you can never prove that it hasn't gone wrong, because there are so many long-form problems with software (quiet bugs, maintainability issues, etc). This creates FUD.

I expect it will be just fine. It's like bragging about getting the words right on a mental health exam. AI was given the answer, it just repeated it back in a slightly different format. Even a stupid human could have done that.

> AI was given the answer, it just repeated it back in a slightly different format.

Ridiculous.

Wasn't looking at leaked Claude Code source already enough for the ridicule?

I mean, that's just startup culture shipping half-baked duct-taped "products".

Reengineering a well-used open source project… that's proper hubris territory, if you do it poorly enough.

It's outside their "zone of absolute terror", to put it in anime references. Any argument against them while inside their domain is countered by their apparent success; as much as it pains me, the shit code did deliver enough. Not so when they step outside that domain, Bun was delivering before.

they are already high on their own supply

did you read their Mythos paper? they're anthropomorphizing it like crazy. Maybe it's just cheap heat, but if they really believe the LLM is conscious..wew

I'm a pretty reckless programmer, but I would never do it on a project this big... 1m LOC cannot be reviewed in <1 week. Why not put it behind a feature flag, since you're keeping the code anyway (only -4k LOC).

This does not seem thought out, and was fueled by dopamine.

Having just migrated all my teams repos to Bun, I feel… stupid. I was already feeling a little nervous by the time of the acquisition but this is pretty rough.

PR so thick, the page failed to load the first time I opened it, and the comments still continue to fail to load. Absolutely hilarious. Though that may be just GitHub having a normal one, hard to tell these days.

1 009 257 lines added

4024 lines removed

6755 commits

2188 files touched

I haven't the slightest clue how anyone would even remotely hope to review this. I guess by just using even more AI? Or maybe by throwing some über hardcore lint pass onto it? It really seems like more an exercise in risk assessment than code review.

The maddening thing is that there's a right way to do this if you have the patience and professionalism to do so. It requires building a bit of scaffolding (feature flags, cross-language calling support, harnesses for shadow testing, etc.), then you ship-of-theseus the codebase incrementally. This is not even incompatible with LLM-assistance, plus it breaks the thing up into smaller, reviewable changes that don't break your diff tool!
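
For what it's worth, the shadow-testing piece of that scaffolding can be tiny. A sketch (hypothetical helper, not anything from the Bun codebase): run both implementations, log any divergence, and keep returning the old result until the new one has earned trust:

    use std::fmt::Debug;

    /// Run old and new implementations side by side and flag
    /// divergence, while still trusting the old result.
    fn shadowed<T: PartialEq + Debug>(
        old: impl FnOnce() -> T,
        new: impl FnOnce() -> T,
    ) -> T {
        let old_result = old();
        let new_result = new();
        if old_result != new_result {
            eprintln!("shadow mismatch: {:?} vs {:?}", old_result, new_result);
        }
        old_result
    }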

However, doing it the right way takes a bit more time, involves community feedback, and doesn't produce headlines about huge codebases being rewritten by LLMs in just a few days, so ...

There is never a right way, only trade offs.

The thing about being a Monday morning quarterback is that you can always claim you would have used even more caution and process.

> you can always claim you would have used even more caution and process.

Well, specifically, my claim is that any serious professional in this industry would have done so. But we're essentially in agreement, in the sense that yes, I am allowed to make this claim, and in fact already did, in the comment you are replying to.

EDIT: Actually I've been thinking about this a bit more. The thing about commenting on something that someone did is that you must always comment on it after they did it, otherwise it wasn't "something they did." However, being a "Monday morning quarterback", as I understand it in this context, means "criticism of someone's actions afterwards", so it would appear that I am doing that. I also understand this phrase to have a negative connotation, and I would hate to connote negatively in this otherwise very positive community. Quite a dilemma! Glad I have my life coach LLM to help me sort all this out.

>serious professional in this industry

As a serious professional in the industry - we're dinosaurs. Nobody cares anymore.

The kids are running the show and are making billions with stuff that doesn't work. But it makes money so nobody cares.

This is not a new phenomenon, it started years ago and really took off when JS became the new hotness. You could see it happening live, right here on HN. But the blast radius is massively increased now with AI and people are getting hurt. It's not funny.

The ship has sailed on rigor.

The sad thing is that this is not going to get better. The best we can hope for is slight improvements to agentic "engineering" practice with lots and lots of blog posts on HN written about how they are rediscovering basic engineering practices.

We (the dinosaurs) will roll our eyes while making a fraction of the money the kids are making.

And even if the whole AI ecosystem implodes (it won't) that would be a massive recession and certainly wouldn't make the remaining software engineering work more rigorous either.

As the Simpsons put it: "Am I out of touch? No, it's the children who are wrong."

> There is never a right way, only trade offs.

There is a right way, especially when you have a community.

Can you cite a single software project with so many users which did a language migration in a more cavalier way?

I mean, some trade-offs are “something for nothing” which by definition makes them “the wrong way.”

Real life does exist.

Ah yes, you are actually describing fish shell's Rust rewrite. They specifically called it The Fish Of Theseus which is of course a reference to the ship of Theseus.

https://fishshell.com/blog/rustport/

I mean it's definitely at least partially a PR stunt

Not sure there is much of a point in reviewing a port of this size. It has >1000 instances of `unsafe` and uses the same patterns as the zig code according to Jarred. It feels like a vibe-ported version of what the TypeScript team are doing porting from TypeScript > Go with codemods.

Bun is owned by Anthropic.

Hopefully that answers all your questions.

Humans are no longer maintaining bun. There is no good faith argument that can claim a human understands this rewrite

This kind of frivolous nonsense disqualifies bun from ever being a serious option to me. I'm not building any kind of software used in a professional setting on 1M lines of unreviewed code.

Odd take. Bun was not an option for me because of Zig. There was no security. The issue tracker has 3000 issues about segfaults. Now I might actually reconsider.

> There was no security

>1M lines of un-reviewed code are secure?

My comment was more like: pick your poison. Eventually it gets reviewed if they are serious. The old version had no future for serious production anytime soon. This might get there.

> Eventually it gets reviewed if they are serious.

So they just merged it for fun in the meantime? Hope we find out if they're serious soon.

I don't believe you actually think it's odd to not want to run unreviewed code in prod. I accept that you might disagree, but I don't believe this is a take you haven't heard a million times before.

Usually it is about trust. A lot of code is reviewed, but is the reviewer good enough to spot all the issues? Do you trust the reviewer? Usually the trust comes from the ability to see the code yourself, or from high trust in the existing reviewer. The code is open, it is there, and Bun is a major project which will attract many eyes, so big issues will be sorted very quickly.

I did not mean that I would use it immediately, right now. But it will eventually get there much sooner than the Zig version would, because the compiler is one sort of reviewer that mitigates many memory safety issues.

So my point is that, in a very short time, this has much more potential than the Zig version had, because the compiler is a very strong reviewer for the specific kinds of issues that were plaguing the old implementation.

Anthropic buys bun, makes them spend tokens to convert to rust, nobody understands it anymore, locked into ai now

So the geniuses in the datacenter prefer to rewrite the full codebase in another language instead of maintaining and improving their own fork or contributing to make the current language better.

Impressive to rewrite 1MLOC in a week, yes, but this is more the job of a million monkey programmers crammed into a datacenter than of a bunch of geniuses. And I would know, since I'm a monkey programmer who is in danger now... Or maybe the Zig team is in greater danger, since their brains hold the genius juice the clankers are missing and they should have it by 2027...

> Or maybe the Zig team is in greater danger, since their brains hold the genius juice the clankers are missing and they should have it by 2027

Imagine you want to monopolize programming by pushing LLMs as an obligatory middleman. Then people who can program without LLMs are a direct threat to your business plan. It's time for us to start hiding. I'm considering adding `co-authored by Claude Code` to my hand-written commits and running Claude in useless loops to mock API usage.

You seriously think any of them gives a shit about any of this? They're part of Anthropic now, making money is the only goal.

No matter how I look at this, it's churn for the sake of churn.

Even if the translation was free and into ideal idiomatic Rust (and it's obviously not - it's now Zig with Rust syntax) then this would be churn for the sake of churn.

At some project scale the language really stops being any limiting factor, and you're instead mostly dealing with working past architectural decisions, integration of large changes, deep optimization, steering the codebase into alignment with project roadmaps and long-term goals, regression testing as features get introduced, maintenance of multiple release trains... Experienced software engineers mostly stop caring about simple things like the programming language choice at that point, because whatever issues come from that choice have already been resolved. What matters is stability, careful orchestration of large changes and a stable and comprehensive test suite.

> At some project scale the language really stops being any limiting factor

That's not entirely true. At a certain scale, some languages start becoming increasingly more of a factor. Memory issues in C/C++ codebases, for example. This is pretty well established at this point, which is why there's a push to move away from memory-unsafe languages. Which likely would include Zig, for better or worse.

I agree that new software should avoid memory unsafe languages, but I would disagree that rewriting existing projects in a memory safe language at all cost is a universally good idea.

But you just shifted the claim to "at all cost".

What if there isn't much cost? What if the benefits outweigh the cost?

I mean... the token cost alone on this thing...

I think it's not churn for the sake of churn. It's likely encouraged by the fact that Zig itself will not accept AI-written code contributions.

So now imagine your company and project -- written in Zig -- has just been acquired by the world's biggest/second-biggest AI company.

That company's most successful and popular tool is running on your platform that is written in Zig.

And Zig maintainers want nothing to do with you.

What kind of pressures, real or imagined, do you think that puts on the developers of Bun?

Honestly, from what I've seen from a distance, actual rigorous software engineering doesn't happen at Anthropic. From what we saw of the Claude Code source, the reliability issues over the last few months, and now this, it's just a bunch of people getting high on their own supply, falling all over each other. Quality issues galore and a delirious frenzy.

FWIW I don't think it's intrinsic to AI. Codex is very well written (in Rust, BTW), fast, and consistent.

The "idiomatic Rust" thing rubs me the wrong way. If someone writes Rust that compiles and works, that's Rust. full stop. Telling people it doesn't count until it's "idiomatic" is just gatekeeping. It quietly says you're not a real Rust dev until you've put in years and absorbed all the unwritten rules, which shuts out exactly the people who are still learning. Everyone writes "non-idiomatic" code when they start. That's not a failure, that's how learning works. Even if being written by LLMs, the devs still will need to improve their knowledge to keep the codebase.

I get the feeling, and shooting for idiomatic on a rewrite is definitely wrong.

That being said, "idiomatic" is more just saying "clean and familiar". It's using the right language features in the right places.

For example, you could write something like this

    fn add_double(a: f64, b: f64) -> f64 {
      return a + b;
    }

    fn add_float(a: f32, b: f32) -> f32 {
      return a + b;
    }
But that's not idiomatic. Idiomatic would look something like this

    fn add<T: std::ops::Add<Output = T>>(a: T, b: T) -> T {
      return a + b;
    }
The benefit of the idiomatic approach is that you now have one function which handles a bunch of types, from u32 to f64, and it also handles custom types that implement the Add op.
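
For instance, here is a hypothetical usage (my example, reusing the generic add as written above), showing the same function working for unsigned integers, floats, and a custom type:

    use std::ops::Add;

    // A made-up custom type that implements the Add op.
    #[derive(Clone, Copy, Debug, PartialEq)]
    struct Meters(f64);

    impl Add for Meters {
      type Output = Meters;
      fn add(self, rhs: Meters) -> Meters { Meters(self.0 + rhs.0) }
    }

    fn add<T: Add<Output = T>>(a: T, b: T) -> T {
      return a + b;
    }

    fn main() {
      assert_eq!(add(1u32, 2u32), 3);                          // integers
      assert_eq!(add(1.5f64, 2.5f64), 4.0);                    // floats
      assert_eq!(add(Meters(1.0), Meters(2.0)), Meters(3.0));  // custom type
    }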

The first method is what you might write if you were, for example, translating from C to Rust. It isn't idiomatic but it's easy to do.

The other thing to realize is that compiler authors optimize for idiomatic. The more you do things in a strange fashion, the more likely you are to stumble over a way of writing code which isn't being looked at when the language team is looking at performance and compile time optimizations.

There's nothing wrong with non-idiomatic code per se. However, part of learning a language is learning the idioms. It makes you better at that language.

Pedantic:

    fn add<T: std::ops::Add<Output = T>>(a: T, b: T) -> T {
      a + b
    }

I believe q3k's comment should be read as "[even if it's acceptable to the most stringent of gatekeepers] then this would be churn for the sake of churn."

Not that only idiomatic Rust is appropriate.

Not really. Rust is designed to be written in a certain way. If you machine translate C into Rust you end up with a load of `unsafe` code that follows the C style but consequently doesn't get any of the benefits of being written in Rust.
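
To make that concrete, here's a minimal hypothetical sketch (my illustration, not code from the port): a C-style length function transliterated one-to-one, next to its idiomatic counterpart.

    // Transliterated 1:1 from C: raw pointer, unsafe, and no help
    // from the borrow checker inside.
    unsafe fn strlen_c_style(mut s: *const u8) -> usize {
      let mut n = 0;
      unsafe {
        while *s != 0 {
          n += 1;
          s = s.add(1);
        }
      }
      n
    }

    // Idiomatic: the slice carries its length, so no unsafe is needed.
    fn strlen_idiomatic(s: &[u8]) -> usize {
      s.iter().take_while(|&&b| b != 0).count()
    }
Both compile and work, but only the second gets the guarantees that motivate a move to Rust in the first place.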

Imagine if you translated assembly to C++, but you just did it by putting everything in `asm("...")` calls. That's not idiomatic C++ and you wouldn't get any of the benefits of using C++.

That said, the Rust code I skimmed actually did look surprisingly idiomatic. It wasn't full of `unsafe` like I would have expected.

> or contributing to make the current language better

The people making Zig have said they don't want that.

They also said that:

> Code origin was not even a factor [0]

> AI is entirely besides the point here. The changes in this Zig fork are not desirable to upstream for several reasons. [1]

So my view here is that besides AI policies to filter low-value contributions and "contributor poker" [2] to attract contributors rather than just contributions, a well-thought-out, genius implementation aligned with the Zig roadmap instead of a "hacky implementation for a flashy headline" [1] would have made the cut.

But then again this entertaining drama will sadly get deprecated by mid 2027 as the datacenters will be churning out their own opusrust and clankzig.

[0] https://news.ycombinator.com/item?id=48017255

[1] https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...

[2] https://kristoff.it/blog/contributor-poker-and-ai/

Say what you want, but for people building products on Bun, this is bad news for the foreseeable future.

I guess it’s time to have Claude rewrite my Bun app in Deno

I'm sorry, Dave. I'm afraid I can't do that.

Hear me out, what if we rewrote Deno in Zig?

Deno fired most of their developers.

This made me laugh loud

That's a great idea! Once you've granted access to your private repository I can do that.

I am genuinely speechless.

I don't understand the rationale behind how any project, especially of this magnitude, can seriously build something stable this way.

My consolation - and it could be pure cope - is that at least I am in the same boat as a huge company like Anthropic, and they surely wouldn't be stupid enough to also build their cli tools around something that they saw as risky.

feelsbadman.

I guess that the next release of Claude Code will use that runtime.

No later than next week.

This is bad for anyone building on Zig.

Cue the clueless CEOs of zig shops (I don't know many, but still):

"Rust is faster and safer! Port it! If you don't do it, I'll do it myself, because AI can do everything a programmer can, including the stuff you don't want to do. Ship it!"

What serious zig shops exist are generally run by actual engineers. Check out tigerbeetle if you want a good example.

Why would it be? There are projects like Roc that did the opposite: they went from Rust to Zig, as they (had to) use lots of unsafe Rust. And before you ask, no, it was not an AI-generated rewrite.

that is the point. a rewrite is fine when:

- you take your sweet time doing it

- you still know the codebase full well

that will ensure the new codebase can still be well understood and can continue to grow in the foreseeable future.

or you can just vibe the whole experience if it is a legacy project with all the specs and edge cases known.

since the bun rewrite is neither of these cases, it will be a crapfest soon enough.

I'm confused. Never heard of Bun until a few days ago here on HN. It's some nodejs wrapper thingy, written in Zig, and someone decided to use an LLM to rewrite it in Rust. Is this a big deal? Who is even using this software? Why is this big?

Bun isn't a node.js wrapper. It's an alternative to node.js that sits at roughly the same spot in the stack.

Node.js is a distribution of the V8 JavaScript engine (the thing that executes JavaScript in the Chrome browser), along with a bunch of standard library code written mostly in C++.

Bun is a distribution of the JavaScriptCore engine (the thing that executes JavaScript in the Safari browser), along with a bunch of standard library code written mostly in Zig (and now Rust). Bun's standard library is in many cases compatible with or inspired by the Node.js standard library, but with some changes for convenience and performance.

An answer to “who is even using this software” is unfortunately missing from your reply. I am honestly curious. I’ve never seen it “in the wild” (in job descriptions, hearing from past colleagues, meetups, etc.). The only places I've heard about it are HN and Twitter.

It's primarily used by people who tend to sit on the cutting edge e.g. startups and developers who follow the latest tools. It's not well worn enough to be adopted by slower enterprise environments. Bun is well known within web development but if you don't work in the space and don't keep up to date with modern tooling it's unlikely you would have awareness of it.

I'd say the most prominent user (and the reason why Anthropic acquired Bun) is Claude Code

To my limited knowledge, "serious" production systems most likely use Node.js instead of any alternatives, and I don't see any movement towards adopting Bun.

notably Anthropic, on a multibillion-dollar-revenue product

Rust vs Zig "wars" etc.

Also at some point Bun was acquired by Anthropic. And some people feared that this will greatly influence Bun's development.

I don't think Rust vs. Zig has anything to do with why people are talking about this. It is a large piece of "real software" that underwent a full language transition in ~1 week using LLMs. That is a big deal regardless of the language and will be a case study regardless of how it turns out.

>I don't think Rust vs. Zig has anything to do with why people are talking about this.

Maybe, but I've seen quite a few comments from people who felt sort of betrayed(?) by the decision. I feel like Bun was important for people as a project that advertises Zig and keeps it relevant even in its current "pre 1.0" state.

It’s a watershed moment. Basically one of the most controlled applications of an LLM into a robust codebase without regard for the implications of doing so.

Anthropic needed something like this and it must proceed flawlessly. My guess is that nothing will explicitly break. But that’s the difficulty of LLM generated code: nothing breaks. You sit with a codebase that swallows all errors and appears to be working. Silently failing makes debugging performance and behavior much harder.

which was obviously a reasonable reaction.

I think relatively few people are probably running Bun in production, but as a dependency management system and bundler for the JavaScript ecosystem, it's similar to `uv` from the Python ecosystem in how much faster it is compared to the most popular alternatives so it's fairly popular in that space.

PNPM is just as fast and much more reliable.

Agree with this. I've been a long-time pnpm user who also uses bun nowadays. It's not much faster other than initial startup, because pnpm uses Node.js.

Although pnpm has also tried rewriting in Rust before; they call it pacquet, and it is currently being revisited.

Bun is not a node.js wrapper, it is a node.js alternative. It had non-trivial adoption, tens of thousands of stars on GitHub for whatever that's worth (before the AI spam took over stars). It was then purchased by Anthropic, and now we're witnessing open source software that people used being sacrificed on the altar of LLM marketing hype.

Not mature enough for everyone to be using it yet, but it may dominate the space down the line. They compete with Deno.

I've never done any JavaScript development of any kind and had never heard of this either. I thought it was a package manager at first, but apparently it's an entire runtime.

My question is, if it's this trivial to rewrite Zig to Rust, and trivial in general to write Rust at all, why not just use Rust for your server side code in the first place? What's the value of continuing to use JavaScript and putting so much effort into the runtime?

Bun has a lot of buzz as 'the next big thing' in the JS ecosystem, and was recently purchased by Anthropic. So it's kind of in the zeitgeist.

>Is this a big deal? Who is even using this software? Why is this big?

Let's see. $10T in market cap, a significant chunk of everyone's assets and retirement funds, is currently dedicated to the AI build-out because of the potential of AI like Claude Code, which is recently doing $3B in revenue and is built completely on Bun.

If Bun is able to successfully vibe code a complete language shift in this short of time, it much more concretely validates the potential of vibe coding / AI for the entire industry.

[deleted]

So many of the code comments on the new port concern only discussion of how it was ported, usually referring the reader to the original Zig implementation.

So now I'd basically be reading 2x the amount of comments and code to understand _why_ anything is happening.

the new code is generally hot ass

Software is only as good as the end result; it doesn't matter how we get there.

There is reason to be suspicious of LLMs, but people should stop getting so wrought up over _how_ the Bun team writes their software, until they have complaints over the software itself.

Just let the team do their thing. You're free to reject the end result.

I agree. If the code gets tested endlessly and audited, and nobody (not even the LLM) can find major jarring issues with it, and it compiles, builds, works as expected, and isn't degraded in any way, then I don't think I care how you built the "new" rendition of the software.

If LLMs can achieve this level of task in 9 days, why do we even need Bun in the first place? Shouldn't we just write our apps in Rust and not even deal with JS?

Why even Rust in the first place? I don't see why we can't go straight from natural language -> Claude -> HTML/JS/CSS bundle. Instead of writing a webpage, one can just write a prompt for each page and serve it with claude.cgi

Can a webpage run my factories?

Yes, it can. Just vibe code Claude to connect to your lithography machine and voila! Claude will run your factories. Claude can even apply oil to your rusty machines if you choose the $1000/month package.

And if you inject information about the user into the context, everyone can have their own personalized version and we'll turn the internet into the tower of babel where no two people see or experience the same thing.

>> I don't see why we can't go straight from natural language -> Claude -> HTML/JS/CSS bundle.

Or we could just rewrite everything in assembly, because that's fast. Well, Claude can do that. (/s ??)

I find that LLMs are quite good at translating code. If you are writing something from scratch you have the burden of preparing something for the LLMs to "translate" from, i.e. prompt or specifications – next best thing to actual source code.

Defining specifications with the level of detail needed to build applications exactly as intended is not as trivial as it may seem.

Honest question, how many of the leaks and crashes can be attributed to zig the language vs possibly (maybe, we don't know) a loosey-goosey, slot machine approach to development heavily reliant on AI? Will the inherent leaks and crashes be fixed, purely by dint of porting to Rust?

Given Anthropic's existing track record of producing terrible hallucinated inaccurate documentation in Claude Code, I'm very curious how Bun will handle this as it continues development. Anthropic probably doesn't care about Bun's external compatibility as long as it runs Claude Code. Will Bun eventually become "the JavaScript flavor that Claude Code uses"? Will they even bother updating external documentation as it changes? Docs currently live at https://bun.com/reference, but I don't know how much of this is separately maintained documentation versus JSDoc-style generated documentation.

If the Bun team is around I would be interested to get their opinion on this: in the old days, migrating a 1M-line codebase from one language to another meant you would pretty much become an expert in the target language. The output of the work was team experience/knowledge plus the actual rewrite. With this Bun rewrite, do you feel that the Bun team learned something other than “Claude can rewrite a very large codebase in no time”, which is impressive in itself? Is the output only the rewrite, or did you learn something along the way? And how do you feel about your answer? Not a snarky question; like a lot of others, I’m trying to understand how I feel about how our profession is/has been changing.

I used to think software was inherently valuable.

Then I decided that software is of limited value without a team to maintain it. Not necessarily because they fix it, but because they represent a bunch of humans who collectively understand it and therefore give it more possibilities.

And now this. I'm not sure what to make of it.

Same

I think one of the things I had forgotten about, but which sheds some more light in my mind on how this was done, is that Anthropic bought Bun.

The author's change of tone about the capabilities of Claude. The strategy of merging everything at once instead of a slower, more careful cutover. The “single author” story that every company loves to put forth.

By reading this thread I've learned that, apparently, you are not allowed to rewrite a large piece of software backed by a large test suite in another language within two weeks, otherwise you are a witch and need to be burned at the stake. You are also not allowed to move from the PoC phase to the let's-do-it phase within a couple of days without being called names. Why are we concerned with speed all of a sudden? Are we in the "people will literally die if a car moves faster than 25 mph" era of software engineering? Let them do whatever they want; they've shown the will to move on from wrong decisions, they will do it again if the Rust port fails to deliver, and the whole industry gets to learn from it, whatever "it" might become.

I can't ignore how much this sounds like Stockton Rush.

> "Apparently if you build a submersible with carbon fiber you are a witch and need to be burned on a stake. But look we're making reliable trips down to the Titanic with no problems."

Realistically, this is a forum of experienced engineers watching a company make some extremely questionable but very flashy engineering decisions. There's going to be a lot of people standing around here going "gee I dunno, that seems questionable".

Personally, I think the rewrite will largely work - logically, direct translations from one language to another are pretty well within the realm of the few things LLMs should perform extremely well at. But I also think more information will come out showing this was much more bespoke than just prompting an agent to do the translation. This just feels too much like an ad for Anthropic, I think it's likely there was a lot more human involvement and planning than we are being told.

[dead]

That you're only just "learning" that these things are true is a damning admission. And to fix your bad analogy, it's more like "hey, maybe we shouldn't be allowing F1 street races through school zones".

That analogy might work if this situation were 'reckless behaviour risking children's safety', but in this case it's much closer to 'we made a large, potentially risky change that you can choose to avoid until it's more mature'.

The analogy is just bad to begin with.

It's more like "we've switched ingredients while actively denying that they'll be switched".

They never denied they'd switch, just said that they'd need solid improvements confirmed before they switched. Clearly they've decided internally that they've seen the gains necessary to carry on with the switch.

https://news.ycombinator.com/item?id=48019226

> This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely

I know words are hard, but if you find it hard to believe any humans here, then feed it into your favorite LLM.

This is silly IMHO. They haven’t released a new official Bun version with this code yet. It is a canary release. Give them a chance to figure it out and try it out and see how the limited number of production users of bun as a runtime experience the move. If it succeeds, this will massively accelerate development and they will have much to teach us all about how to safely code 1M lines with AI and merge it in days. If it fails, we will know that AI isn’t ready for that yet

The AI polarization is making me sick. Please don't let this style of comment become normalized on HN (and that includes equivalently tribalistic anti-AI comments).

> By reading this thread I've learned that, apparently, you are not allowed to rewrite a large piece of software backed by a large test suite in another language within two weeks, otherwise you are a witch and need to be burned at the stake.

You've just learned that you can't do random shit and not get called out? Were you born yesterday?

Anyone running bun in production right now has to be sweating lol, this is a ridiculous change for a part of your software stack that really ought to be reliable.

Heavy implications for how the future will be shaped if things go well with this port. It would prove a lot of people wrong if things go well 3 months down the road.

Not really - three months is nowhere near long enough to demonstrate if a large piece of software has issues or not.

With the number of applications running on Bun? I’d say that's enough.

You think they'll ~all merrily move to the new version?

Doesn’t have to. It’s a big bet, with a huge payout for Anthropic.

The top comment in the thread explains it pretty well, so please don't pretend it's anything else. The point is they went from "chillax, it's just an experiment" to "we'll switch languages via a 1M-line vibecoded patch" in two days. People who rely on this software are understandably fearful, since there is no way this change has been properly reviewed and tested. Although perhaps the mistake was relying on such software in the first place... And so are contributors, who have seen essentially the entire codebase replaced in a week.

People relying on this software can absolutely choose to stay on current/recent versions until this becomes more mature. My assumption is that the current state allows for public testing, but anyone needing a stable version wouldn't be affected and can choose to not be affected by it.

Why "no way"? You're also forgetting extensive test suite?

Merging it so quickly is only odd if you're planning on retaining the current community.

It's not like it was merged and shipped to every single stable distro overnight. That's how things get tested.

Congratulations to the Anthropic marketing team on their acqui-hire.

Wondering what they will do when Rust rejects a PR from them.

I guess they'll vibe-rewrite to C, relying on the CCC compiler. The agent loop will modify both the project and the compiler until the ends meet.

Is this really the state of "software engineering" today? :/

You'll pay for tokens and you'll be happy.

That's what the new AI overlords want the world to believe, at least.

mushware*

We should be grateful for this. This is the one public case study on how large-scale llm-driven code generation actually works out.

With node and deno there are reasonable alternatives for everyone who doesn't want to use bun anymore.

> This is the one public case study on how large-scale llm-driven code generation actually works out.

Is it, really? I can't imagine how much money in tokens was spent to get something like this + Jarred's and the teams salaries to review/manage this.

It’s not a public study though. We’re not going to get trustworthy numbers about labor or token cost.

The problem is that many negative effects of this kind of thing won't be clear or immediate, so it's not an easy test to make useful. At minimum, this increases the opacity of the box, reducing perceived trustworthiness.

Would be very cool if as a result the different components were published as crates and embeddable in other rust projects!

I just skimmed through the porting guide, and based on the number of unsafe blocks, this looks like a fairly straightforward mechanical translation.

If that is the case, why didn't they just "vibe-code" a Zig->Rust translator and a small Rust/TS/JS/whatever script to orchestrate things? You don't even need pretty printing support because rustfmt exists.

You'll save a bunch of tokens, probably a lot of time/energy, the process becomes auditable and (hopefully) deterministic, and if there's a mass bug in the translation, you only have to fix it in one spot.
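
As a toy sketch of that idea (the mappings here are made up, naive line-level rules only; a real translator would need an actual Zig parser):

    // Toy illustration: each rule is a single, reviewable mapping,
    // so a mistranslation gets fixed once and the whole tree is
    // regenerated deterministically. Not a real Zig parser.
    fn translate_line(line: &str) -> String {
      line.replace(".?", ".unwrap()") // Zig optional unwrap
          .replace("const ", "let ")
          .replace("var ", "let mut ")
    }

    fn translate_file(src: &str) -> String {
      src.lines().map(translate_line).collect::<Vec<_>>().join("\n")
    }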

No time for common sense solutions! Tokens gotta go somewhere!

Bun is owned by anthropic. They get infinite tokens, and anthropic gets a fluffy PR piece slash advertisement.

But... But... It's going to be harder for them to claim "AI did the rewrite"!

I hope the Deno lot take the opportunity to capitalise on this

This is their chance for sure but it seems they are scaling down, at least their main product Deno Deploy.

Previously they had a presence in 31 regions, but now it's down to just 6.

https://docs.deno.com/deploy/classic/regions/

Bun's rise over Deno is honestly shocking. One man's project that went viral because of some very misleading benchmarks has evolved into a behemoth in an incredibly short time frame. Some major projects bought into the benchmarks and adopted it for important projects and thus it was thrust into stardom.

I was naive enough to believe Deno's ascendance was all but guaranteed with Ryan Dahl's name on it and the direly needed security guarantees it offered.

By having Codex port Deno to Zig, you mean?

Well, that escalated quickly. I think I first heard rumors of this a week or two ago. That's a very fast turnaround for such massive code churn. I don't know how to feel about this.

Github is failing to load the 800 comments, naturally. I'll bet they're fun.

Too bad modern computers are not capable of processing 800 paragraphs of text. That’s several hundred kilobytes! Maybe the technology will advance thanks to AI…

GitHub actually made my computer lag even when there were no comments at all, because of the 1 million lines of code added, iirc. I could've been first to respond, but I wanted to say something meaningful and didn't have anything, so I just closed it.

I had to literally force quit my browser because of how much it lagged iirc.

6,755 commits for the PR as well...

So how many of their employees are now familiar with the codebase? zero?

I mean, if you look at the code, it's a pretty faithful rewrite. It seems like staying near 1-to-1 with the original code was prioritized even over utilizing Rust's safety features, since unsafe Rust is everywhere.

first major company to really nuke their main product via AI psychosis?

I for one think it's a fascinating experiment to see how well it goes. Though if it actually works and leads to bun getting better over the coming months, I suspect the arguments against it will just take on a different flavor.

Of course they will, the goalposts will keep shifting because people don't want to admit that agents are now this capable.

Rust needs to remove the unsafe keyword to finally fulfill its destiny as a practical LLM generation target.

One of Bun's longstanding issues was that bootstrapping Bun required Bun, so distributions were unable to ship it or anything that depended on it: https://github.com/oven-sh/bun/issues/6887

Any ideas if this is now changing and Bun can be bootstrapped with "just" Rust?

This may be the largest AI-generated codebase right now, by a lot. It'll be interesting to see how this plays out.

Frontier AI software development still falls short in the design/architecture department, in my recent experience. Though it's pretty impressive at making "working" code.

This being a fairly direct conversion from one language to another, even keeping the same interfaces across files, means the architecture is already in place.

The detailed test coverage is also very helpful for Claude. But even detailed testing can't cover every edge case.

So my questions are: How well did Claude do on the edge cases? And how maintainable will this codebase be going forward?

> This may be the largest AI-generated codebase right now, by a lot.

I'm sure there are lots of other large-scale applications of AI, just not many (or any) projects that are open source and this high profile, with changes of this scale.

Personally, in the past 3 months I've shipped about 2.3M lines of a legacy project migration, though the new codebase is Java + Oracle ADF because of reasons™ and instead of being an interesting codebase, it's more forms heavy and essentially acts as a front end for a large Oracle instance, think more CRUD than application runtime (with an upsetting amount of XML).

The difference also is that it wasn't migrated by using AI on every file, but rather by dumping the DB schema into JSON and converting the old form contents to a YAML intermediate format that describes what's in the forms, and I have been iterating ever since on code that generates code: basically AI-assisted development of a codegen solution, plus AI-assisted sidecars that get merged with the generated code based on markers when something can't be automated that way, and often AI-controlled browser-based testing (since Playwright is in the cards for everything, but not yet).

Seems to be going pretty okay so far; it will probably take months more of iteration and fixes. Currently the automated testing is taking a while because, let me tell you, not only is Oracle ADF shit, but so is WebLogic. Fuck, I'd be so much closer to being done if I was allowed to pick Python + HTMX or even Java + Thymeleaf. That's still better than a team spending a year on the migration and getting like 10% of the way there.

Obviously there are no more details to publicly share, but the overall vibe is clear: as long as you can test any changes, you can iterate faster than without AI, and the code ends up more readable than what colleagues would often write. The problem is that people would previously squint at the suggestion of 100% test coverage, so most code is even written in a way that is straight up not testable (and often nothing is decoupled from the framework properly, and tests take way too long, in both time and resources).

I hope it's obvious why I'm removing the Bun dependency in all my projects. Would be great to have a non-affiliated zig-bun fork that focuses on, well, the runtime.

That's pretty... brave? Not releasing it in parallel and spending a few months testing it against the old mainline version to surface issues BEFORE a potential merge?

Who knows what their release strategy will be. This is still only a canary release. Don’t put your cart before your horse.

I wonder what portion of the migration was contributed by Mythos. Surely the Bun team now has access to more powerful models, but could such a migration be done with just Opus 4.7? Nonetheless, nearly 7k commits is impressive.

Turns out "its just an experiment, you all are overreacting" was just a lie to damp criticism.

https://news.ycombinator.com/item?id=48019226

Merging a complete rewrite in another language in 9 days seems insane to me. Maybe I'm just too cautious but with something like this I'd split off as a separate binary and get some heavy use customers involved as testers first to see if it causes any unforeseen problems before slowly expanding it out.

I'd want to be pretty damn confident it won't cause any regressions before sunsetting the original codebase in favor of this one.

I don’t think you’re too cautious. Big upgrades and rewrites are somewhat of a „work hobby” of mine, and this seems waaay too fast. I don’t know how the Bun canary process works, and I guess their test suite is better than typical projects', but still… I can’t imagine this working out well without testing it on a variety of big projects for a significant amount of time.

There’s probably loads(?) of observable behaviors that people rely on, consciously or not. Even _if_ the new thing is 100% spec compliant, it might still be breaking or otherwise problematic for heavy users.

That said, I’d love to be proven wrong. I use Bun from time to time on small stuff and I enjoy it, so I wish them well (:

> too cautious

No, you are perfectly normal.

The people who in one week decided to replace the whole codebase for a widely used tool with code no human has seen are the crazy ones.

Testing in production xD

9 days is the official story. Nobody knows how long they really worked on it.

Well I've got egg on my face.

I am in that post, defending bun.

I thought for sure the peanut gallery was overreacting. Especially when the concern was absurd, because who would do such an insane thing? Like, at the time I legitimately thought 'no way a project switches over in a few months'. Even as an absurd hypothetical, I couldn't even imagine the prospect of it being done in a matter of days.

Feeling really confused right now.

> Well I've got egg on my face.

Not at all. Supporting a methodical conversion to Rust seems reasonable. How could you have predicted they'd shotgun it?

that’s the advertisement part of this ordeal you’re experiencing.

It seems it was an experiment at that moment, and that it went well? I do hope they release it under 2.x though; I can only imagine how many ways 1M LoC can break, especially if what xiphias says is true:

https://news.ycombinator.com/item?id=48132902

If I got magically handed the perfect rust rewrite for a project of this magnitude, it would take way longer than 9 days to merge, because I would need to make sure it's actually good.

> it would take way longer than 9 days to merge, because I would need to make sure it's actually good

What if another (unstated) goal of your rewrite was to provide marketing material for how advanced your acquirer's AI tools are? The faster the turnaround, the better they (and therefore you) look.

> It seems it was an experiment at that moment, and that it went well?

There’s no way they can know that for sure. A change of this magnitude cannot go from experiment to success in such a short time frame. Even if all the code were 100% correct, you can’t call it a success until it’s battle-tested in real-world scenarios for a while, and that is impossible without time. The same way you can’t cook properly by throwing food into a volcano. It’s not just about the temperature.

Either the “experiment” claim was a lie or they are being irresponsible.

Maybe Anthropic decided to push this because of all the attention the experiment got.

If it works out it’ll be a good case study for marketing.

I'm no believer... 9 days later... Lessssssgoooooooo wooooooooo <sunglasses and rave>

[deleted]

The experiment might have turned out well, or the author might have spent enough time to bring it to a place they were comfortable with.

Frustration moves mountains, I don't think this rewrite was done lightly.

The rewrite was obviously done lightly.

"We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely."

People conflate “high chance of X” with “X will happen” all the time. See elections, for example.

The phrasing strongly implies that they are taking the migration seriously and carefully. Merging straight to canary after 9 days is insane.

I have a friend who gets super mad when he fails ">80% chance of success" throws.

This isn't the case here, though. Even if he had said there was a high chance of a RIIR, 9 days is still an insanely short time for such a rewrite if you're planning to have some sort of community around the project.

We all have eyes, it doesn't take a genius to spot a lie.

You have no idea if it was a lie or not. I routinely have my clanker fleet spend a couple days toiling on some crap that I assume I will throw away, but it turns out pretty awesome, so I keep it.

It's entirely plausible that when that comment was posted, he doubted it would work well enough to keep.

(Sensible default for LLM code, btw. But sometimes it works great.)

> was just a lie to dampen criticism.

Citation needed. Couldn't it just as easily have been one person being as suspicious of the task as everyone else seemed to be?

Surely the mods will be here to remind you that it's against the rules to direct personal attacks towards other community members, to fulminate and brigade.

Or do those protections only cover whiny open source developers upset about a chat bot writing blogs?

Well, it was 9 days ago; at the time they were not confident, but maybe the results were insanely good.

no matter how good the results are, this kind of rewrite deserves an experimental build to be battle-tested by bleeding-edge users.

It takes a lot of rigorous testing, automated, manual, and by the community, before such changes are considered permanent.

One does not simply YOLO a full language rewrite without user feedback. It is insane.

>One does not simply YOLO a full language rewrite without user feedback. It is insane.

The whole ai thing today is pretty insane, I would say. Why not ride with it, especially if your company is one of the biggest leaders?

You should really read TFA because... that's exactly what they're doing?

The Zig version has not been removed, and this only exists in canary builds. No Rust binaries are being distributed as stable.

But the official canary/bleeding edge/nightly/whatever version is now the LLM rewrite, yes?

The page is not loading for me.

Does anything in that comment say there was a 0% chance the experiment would be merged into main? I see "very high chance all this code gets thrown out completely", which just means the low chance of it not being thrown out has occurred.

It doesn't say what will happen, but isn't their comment responding to people who don't like the look of this rewrite, and telling them basically that they don't have to think/worry about it? I definitely read it as 'not yet' and not 'another week or so'.

https://github.com/oven-sh/bun/pulls

and now we have lots of troll PRs...

> People keep opening issues about "unsafe usage" in the codebase. This PR solves that problem at the root by introducing a yolo! macro and replacing all 10,421 instances of unsafe {} across 732 files.

This is actually pretty funny.

I don't really understand the point of this. Is it Anthropic showing off how well their LLMs work? Was it too difficult to find Zig devs, so Bun swapped to Rust? Did Jarred read one too many memes about "rewriting in rust" and take it at face value??

I would imagine that there will be bugs migrating all at once, performance will probably be close to the same, and the maintainers will need to context shift from Zig to Rust. A very confusing decision for sure.

Claude is significantly better at Rust than Zig. Zig is changing all the time. If you check my profile comments, I did a quick experiment recently to demonstrate. Essentially, Claude could generate a basic working TCP echo server in Rust in a few seconds. For Zig, whether asking it to do it with just Zig, or with specific versions (0.15 and 0.16, because some fundamental language changes necessitate different implementations), it failed to produce working code in all three cases and also took orders of magnitude longer to generate the code.
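
For contrast, the Rust side of that experiment is stable, well-trodden std territory; a minimal blocking echo server is roughly this (my sketch, not Claude's output, and the address is arbitrary):

    use std::io::{Read, Write};
    use std::net::TcpListener;

    fn main() -> std::io::Result<()> {
      // Accept connections one at a time and echo bytes back.
      let listener = TcpListener::bind("127.0.0.1:7000")?;
      for stream in listener.incoming() {
        let mut stream = stream?;
        let mut buf = [0u8; 1024];
        // Echo until the client closes; errors just drop the connection.
        while let Ok(n) = stream.read(&mut buf) {
          if n == 0 { break; }
          if stream.write_all(&buf[..n]).is_err() { break; }
        }
      }
      Ok(())
    }
This API hasn't changed in years, which is exactly the kind of stability Zig's std doesn't offer yet.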

Aside from the big marketing play, Claude not being able to easily generate zig code was probably a big motivator - it doesn’t make anthropic look good and it doesn’t fit into how they’re doing things

Also, you’re assuming that actual traditional maintainers even exist now. Likely it’s a smaller team of people running Mythos agents with an unlimited budget and no real need to fully understand the code.

I suspect one part of the puzzle is that Bun used its own fork of Zig, which had diverged significantly in design and direction from mainline Zig.

The point of it is to hype Anthropic's IPO.

Probably some combination of: Anthropic is heavily invested in the Rust ecosystem and they want their core tools to be built on Rust. More Rust developers. More Rust training data so LLMs write better Rust code than Zig code. Advertisement for Claude Code doing major work on a high profile open source project.

Why didn't they ask Claude to remove all of the `unsafe` at the same time??

"at the same time" is a recipe for failure with coding agents.

It's also a recipe for failure for ports in general. Same goes for the "not idiomatic Rust" comments above — that would be nonsense.

You want to port it as faithfully as possible to the original, porting it bug-for-bug, quirk-for-quirk. Then, over time, after the port has been proven to be as identical to the original as possible, you can gradually fix those kinds of internals.

That's why TypeScript's tsgo native port is so good.

tsgo will inherit many benefits from go, even if it is never fully "idiomatic".

This is in direct contrast to this port, which requires significant re-architecting (or being made "idiomatic", if you wish) in Rust to achieve any of the benefits of the language. You can't re-architect one step at a time.

I don't think you want to achieve any benefits of Rust in the initial port. Because at this scale you will definitely introduce new, and probably subtle, bugs that are not present in the Zig version.

You just want it to be the same, to the maximum extent the language allows. E.g. 1000+ unsafe is the right move, for now.

Reaping the benefits of Rust is for _future_ development.

That's my point - I don't see any hope of removing the 10,000+ unsafe calls, especially not one step at a time.

As such, this is a publicity stunt.

You could do, but maybe they never will. I have no idea.

But the point is, in 2027, 2028... your new code doesn't have to suffer from these frankly 1970s issues

You could also gradually fix the internals — if you wanted to

The irony being that machine translation of programming languages also dates from the 1970s.

Right, so what we have here is a very expensive regex.

It sounds like some bugs were fixed in order to make it compile.

How does the "no async" part work? I would have thought Bun would need that.

Async presumably happens in the JS runtime that bun calls into. Just need 1 thread to host that

People were doing async I/O before coroutines existed. They are using callbacks and their own networking.

“+1,000,000” changes in a single commit is insane.

Why would they do it like this? It makes no real sense to me. At that point it's an entirely different project, with the same functionality.

If you use Bun in production, does this feel like a well managed upstream?

I don't use Bun, I don't care that they are using an LLM (though it is impressive that this actually worked), but the project management aspects of this is just wacky.

Because Anthropic owns Bun and they use it for marketing purposes.

The really interesting thing to do would be to ask the agents to submit the diff as a coherent patchset...

> "The codebase is otherwise largely the same. The same architecture, the same data structures."

Ship of Theseus.

[deleted]

except it's more like someone used the Philosopher's Stone to turn all the wood bits into metal

And 6700 commits.

No wonder GitHub is down

/s

That OpenClaw guy seems to make 6000 commits everyday or something.

No /s needed

If this means that segfaults become rarer with Bun I might consider using it in production again. As it stands, Bun has been great as an all-in-one TS/JS package manager, build system and test runner but unstable enough that I still want Node running in production backends.

Yes. That is the plan.

See Jarred's comment [0]

If this helps bun, and Rust is a better language for developing bun going forward with the help of Claude, then I think that is just fine.

I thought Rust made codebases complex, so Zig won on speed and DX.

But with LLMs and a large codebase, it seems like Rust gives fewer bugs and you can develop faster and safer.

https://news.ycombinator.com/reply?id=48133519&goto=threads%...

Surely there are no bugs in the 1000000 lines of code that no one has reviewed…

This is massive marketing for Anthropic. It shows enterprise customers how capable their systems are.

Also, this is a perfect task for LLMs. They have the most detailed spec ever (production Zig code), and since it was a file-for-file, line-for-line rewrite, agents were able to quickly complete a massive 1 million line rewrite.

We will continue to see more of these in future.

1 million additions. 4k deletions. 0 approvals.

With weird sadness I have to say, we are being targeted with a new kind of marketing. It doesn’t look like it was just a technical decision. If anyone was following what was going on on X, the amount of content about it was crazy.

I couldn’t believe before that all the fearmongering was marketing, but I am coming to the conclusion that it is. It’s hard to get any signal over noise in the attention economy. They know what they are doing, and it’s deja vu of crypto, but now we are the targets, with rage bait, guerrilla marketing, buzz.

How are they going to do refactoring, bugfixes, or other maintenance on generated code? Ask the LLM?

Yes, only LLMs from now on.

The average quality of the Zig projects went up.

Has he estimated the token cost for this (if he had to pay that is)? I'm curious how much this would cost a paying customer.

Bun is owned by Anthropic.

This is just marketing budget.

The acquisition money is coming from marketing budget :D

Probably in the six figures.

Depending on the model I could easily see it approaching 7 figures since Mythos security scans have been 6 figures already and don't require nearly as much output.

For those looking for an alternative no-compilation TypeScript runner, I'm quite satisfied with TSX: https://github.com/privatenumber/tsx

Node.js itself is getting quite close to running TypeScript natively, but it doesn't support ES imports of CJS packages or imports without an extension qualifier.

Huh, it makes sense that Anthropic acquired these guys. This kind of AI-native thinking that goes directly from thought to action is actually incredibly uncommon.

I wonder, did they consider an approach of vibe-coding a deterministic converter and then running it? This should be much more token efficient.

I wonder if the whole acquisition was done so that they have guinea pigs that can’t say no…

or if I want to be cynical… so that they have a big enough project where they can force gigantic rewrites without considering the outcome from the project’s point of view, all so that they can fuel their marketing strategy.

To be honest, kind of obvious looking back.

This canary will never leave the mine. (unless Anthropic opens their wallet again)

I have full faith; it's the same really smart people who built bun (Jarred and team) who have spearheaded this and are running it. So I have no reason to believe that this was done carelessly.

That said, I'm still shocked and amazed that something this big is possible these days. But as we've seen multiple times now, one of the most important things your codebase can have is a solid test suite.

I will continue to use bun, because at the end of the day, it isn't just the technology, but the talent/people behind the technology that ensures that it will be solid.

And since that hasn't changed, I will still trust bun and its direction.

Also, bun is mostly glue code and sort of "user space" libraries (my words), as Jarred has said on X; most of the underlying runtimes like JavaScriptCore etc. weren't rewritten.

So this isn't like 100% of what we think of as bun was rewritten. It's more like the scaffolding and harness.

Just because it's possible, doesn't mean that it's sensible

> So I have no reason to believe that this was done carelessly.

Writing software with an LLM is doing it carelessly.

Doesn't doing this in the space of a week or so, by definition, mean it was done carelessly?

How could it be possible to test such a complicated piece of software, and review such a large amount of code, in such a small timeframe? Spoiler: it's not. They're merging slop.

yeah but it also made some tests pass by changing the tests. i’m not super familiar so i’ll dig more on the weekend, but it seems sus, pending more review. i’ve had ai do similar things that i caught in manual review. cheating the test is bad.

It is well known that agents can cheat or go off on tangents and not recover. Just recently one deleted a bunch of code files that I didn't ask it to touch. The code wasn't even used anywhere.

That's why they've merged it into canary so they can continue working on it.

[dead]

Maybe a good advert for Claude; but a terrible, terrible advert for the stewardship and governance of the Bun project.

This is the most accurate take lol. Claude's done impressive work, but I would absolutely never trust this project in production now.

On one hand I kinda feel validated for having jumped ship on Zig 3+ years ago[1] and moved everything to Rust[2], with the language simply being too unstable and unsafe in my eyes, despite my love for comptime and people arguing that Bun and Tigerbeetle were proof that it wasn't the language's fault.

But I also feel bad for the Zig project losing one of its flagship projects, because while I find the project ultimately anachronistic, I know what it's like to pour your sweat, heart and soul into something, and having it replaced within a week is a sobering experience even from afar.

A couple years ago this would have been unthinkable because of how slow legacy codebases and rewrites are.

I wonder if Tigerbeetle will also have problems arguing for their solution now that the other project they can point to for customer assurance is gone. And I wonder if they will follow suit eventually simply due to marketing pressure (after having been bitten by the Zig compiler I was surprised that they were putting their super duper high reliability database on top of it at all, but with another big player using it there was at least some peace of mind for their enterprise customers).

1: https://github.com/triblespace/tribles-zig

2: https://github.com/triblespace/triblespace-rs

> I wonder if Tigerbeetle will also have problems arguing for their solution now that the other project they can point to for customer assurance is gone.

In general, we never like to appeal to popularity (a logical fallacy), but why would you assume here that we would point to Bun specifically (or any project for that matter) [1] as an example of Zig’s quality?

We prefer to judge Zig’s quality on its own intrinsic merit:

For example, we subject the language through TigerBeetle to inordinate amounts of fuzzing, perhaps more than any other language (you could say Zig is lucky to have TB’s test suite aimed against it!).

Literally 1,024 dedicated CPU cores, 24/7.

Zig holds up remarkably well.

We also recently pledged $512K to the ZSF, together with Synadia.

These are the kinds of things we prefer to point to. Not hype, but real end-to-end systems engineering, and long term financial support, regardless of the language we choose to use.

[1] I picked Zig back in July 2020. At the time, the largest project was River, but already Zig was a phenomenal choice, and the years have only shown that Zig was probably one of the best design decisions in the development of TigerBeetle. It turned out better than I imagined.

Correct me if I'm wrong, but the three largest Zig projects (by far, with a huge gap between them and the rest of the pack) are Bun, Ghostty, and TigerBeetle.

A language so niche that it only has 3 major projects is a liability. Now it has 2 major projects, one of which is yours. Even I as a weird language connoisseur would raise an eyebrow at that.

After switching from Zig to Rust, I felt like the language was helping me improve the correctness of my project, to argue that the fuzzing of your project helps improve the correctness of the language feels backwards and adds to my suspicions.

We both know that fuzzing is great, but that whether you fuzz with 1,000 cores or 1,000,000 cores, given an exponentially growing state space it doesn't make (that much of a) difference (I know that you guys are not doing naive fuzzing, which is extremely cool, but the shape of the problem is still O of evil shaped). Most things you can find with fuzzing are shallow-ish, and if you want to go deeper you need formal verification (for which a strong type system is a good first approximation, and I'm not aware of something like Kani in Zig).

I like TigerBeetle and I still wish you guys all the success in the world, but I can't help but wonder where you could be by now if your language were lifting you up, instead of you having to lift up your language.

> Correct me if I'm wrong, but the three largest Zig projects

I did correct you where you are wrong (“appeal to popularity” as a logical fallacy).

> I can't help but wonder where you could be by now if your language were lifting you up, instead of you having to lift up your language.

Did you know we’ve had on the order of 3 memory bugs in 6 years of TigerBeetle?

We also reached production in 3.5 years, bringing not only a global consensus implementation, but also a local storage engine to market. (Each of these typically take 5-10 years elsewhere to reach maturity).

Zig does lift us up.

In fact, Zig’s memory model has always been the perfect expression of TigerStyle. And TB could not have been designed the way it is today in any other language (including Rust). Implicit allocation, global allocator… you automatically lose OOM-safety. But the zero-copy intrusive memory techniques we use… Zig is perfect for TigerBeetle.

I'd concur with the sibling commenter; they put their money where their mouth is and they've addressed your arguments, particularly the popularity fallacy.

I'll also say Zig got Bun to its big acquisition; not unlike how other startups started with Ruby and later switched to Java at scale. Those startups didn't need to ruminate on their past experience as a horrible mistake or disappointment; they just moved on.

While I don’t have personal experience with either project, I feel it is safe to say that Bun and TigerBeetle are not comparable projects: TigerBeetle has a strong focus on testing and correctness, and Bun maybe not so much. IIRC, TB did well in the Jepsen test and had one segfault in a client library. Bun has had quite a few memory safety issues, in fact, the stated motivation for the Rust move is to eliminate those going forward. We shall see how that pans out.

I doubt the Zig maintainers will miss the giant PRs from Bun!

I'm pretty sure they'll miss the full developer salary that Oven used to sponsor them with, which they no longer do. I'd wager one doesn't do a rewrite like that if they're in great personal standing with the language foundation.

That same "just don't use it" attitude was what drove me away from Zig btw. I would have been fine in restricting myself to a somewhat stable subset, e.g. if, loop + function calls, but they didn't want to provide any tiered stability guarantees for the language.

Opinionated is great, no local minima is great, but you have to accept that if you don't want to engage with the needs of your (professional) community then what you do is a hobby project. A very cool hobby project beloved by thousands, but a hobby project.

I think if you use a programming language that is clearly version zero you can't complain that it's not stable...

I'm not expecting the whole language to be stable, but I expect certain parts of it to be more stable than others. E.g. control flow vs. async.

I'm not saying that they can't work that way; more power to them. But then having the expectation of anybody using it in a professional setting is also unrealistic. You can't have your cake and eat it too: either it's your personal project and you are fine with nobody using it but you, or you evangelise for people to use it, in which case you also need to make at least some effort not to break their stuff on a whim, and to accept their change requests when they put in the work, as was apparently the case for Bun.

Tbh I don't see Zig hitting 1.0 with a meaningful user base; it's probably going to get mostly eaten by Rust or some other language and will continue to exist as a niche thing, kinda like D.

Having one of the flagship/showcase codebases rewritten to Rust in a week feels like a death knell. Either the community or the language is too unworkable if someone that heavily invested into it jumps ship, and I'm afraid it's kinda both.

Having tried both, I think Zig is a replacement for C, while Rust is a replacement for C++.

One thing Zig has that lots of "niche" languages don't is that you can include C headers directly. This means if you want to make a game in SDL, for example, you don't need to wait until someone ports SDL to your new language. You can just include SDL.h directly and start using it. D also has this feature, by the way, but Rust requires you to generate the bindings.
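For comparison, here's roughly what the hand-written Rust side looks like without bindgen (a sketch; the signatures are transcribed from SDL2's headers and the constant value is from SDL.h):

  // Rust can't consume SDL.h directly; you declare (or generate) the
  // foreign signatures yourself and trust that they match the header.
  #[link(name = "SDL2")]
  extern "C" {
      fn SDL_Init(flags: u32) -> i32;
      fn SDL_Quit();
  }

  const SDL_INIT_VIDEO: u32 = 0x0000_0020;

  fn main() {
      // unsafe: the compiler cannot verify the declarations above
      unsafe {
          if SDL_Init(SDL_INIT_VIDEO) == 0 {
              SDL_Quit();
          }
      }
  }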

Even if people move from Zig to Rust for some things or vice-versa, the strengths of Zig remain there.

I know these strengths, I've written Zig fulltime for ~1 year before switching to Rust, and I do miss comptime pretty much every day.

Still in my experience the strengths do not outweigh the weaknesses.

I'd also push back on the narrative that Rust is not a C replacement. For one, that characterization is based on surface-level syntactic similarities and misses the point of WHY you'd want a C replacement in the first place. And also because, if this whole situation has shown anything, it's that generating the extern "C" boilerplate in Rust these days requires little more than "hey claude/codex, please write the imports for this C library", or even "please port this C library to Rust".

Hopefully this means Bun can now support things that were limitations of the Zig libraries, like being able to upgrade standard TCP sockets to TLS without closing them.

The follow-up PR removing the Zig source files getting auto-tagged as "ai slop" by Bun's own CI is so funny

https://github.com/oven-sh/bun/pull/30680

That was me, not CI, marking it as slop. It kept around 60 .zig files that should’ve been moved to .rs files.

It looks like you were spamflagged on your last comment https://news.ycombinator.com/item?id=48133806

That's wild, how are people going so crazy over a rewrite.

  $ grep --exclude-dir=.git -r 'unsafe {' | wc -l
  10465
Nice.

It's not that weird to end up with this when translating C/Zig/C++ to Rust. A first pass can use unsafe and then when the code is in Rust you can work on reducing the unsafe.
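A hypothetical before/after of that two-pass approach: the first pass mirrors the pointer arithmetic as-is, and the second replaces the (ptr, len) pair with a borrow-checked slice, at which point the unsafe disappears:

  // Pass 1: mechanical translation, keeps the raw-pointer shape.
  unsafe fn sum_raw(ptr: *const u32, len: usize) -> u32 {
      let mut total = 0u32;
      for i in 0..len {
          // Caller must guarantee ptr..ptr+len is valid and aligned.
          total = total.wrapping_add(unsafe { *ptr.add(i) });
      }
      total
  }

  // Pass 2: same logic as a safe API; the bounds contract is now a type.
  fn sum_slice(items: &[u32]) -> u32 {
      items.iter().fold(0u32, |acc, x| acc.wrapping_add(*x))
  }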

Trying to eliminate all unsafe as part of the rewrite, whether done by human or LLM, would be making too big of a change in the process of rewriting.

> would be making too big of a change in the process of rewriting

God forbid the already unreviewable -710kloc/+1mloc change get any bigger!

Sure, but that's kind of orthogonal. Even imagining doing this by hand, I still think going like-for-like with the Zig, even if that means a lot of unsafe, is a good approach.

But I suppose if you are already using LLMs it's more reasonable to try and go from Zig straight to Rust with no/minimal unsafe.

The benefit of using Rust is that you know exactly where the unsafe code is so you can handle it explicitly and deliberately to avoid issues by imposing carefully crafted constraints... oh.

I'm curious how many dollars in LLM usage this rewrite cost.

[deleted]

This is a wild experiment! I do think the incentives are heavily weighted to Anthropic for this to go well. I have mixed feelings about how it will go, but it will result in an important outcome…

RIP Bun.

I'm feeling like I won the lottery that I picked Deno over Bun a few years ago for a bigger project.

It's cool how you can just do this now in 2026. I hope it gets cheaper and easier to do with other big projects written in outdated or just-not-good-enough languages.

Will be interesting to see how this pans out. Some people will see minor issues as proof that AI is terrible, but honestly, if this gets released and is relatively uneventful, it just highlights how the art of building software has changed completely in the last few years.

Probably one of the most reckless things I've seen in software. Beyond safety or quality, at the very least: what about all the existing contributors' PRs? Fuck 'em?

I'm curious where this leaves Zig. Bun was the most prominent and biggest project using it. What's left?

TigerBeetle (https://github.com/tigerbeetle/tigerbeetle) and Ghostty (https://github.com/ghostty-org/ghostty) come to mind as decently popular projects.

Zig is still a moving target, with big fundamental changes being made to the language from version to version; nowhere near v1. When Rust was at this stage of its development you wouldn't have been able to name many projects either.

I thought TigerBeetle was the biggest Zig project. Anyway, I'm sure there are plenty of Zig projects out there.

It leaves it in the same vibe realm as Nim. A terrific language but probably never hitting mainstream. You're familiar with Nim. ;)

Doesn’t seem like it is in the same adoption realm. I wasn’t aware Ghostty was written in Zig and I’m not aware of any Nim project ever reaching the heights of Ghostty (or indeed Bun). Plus as others state, Zig is still pre-1.0.

Things do look significantly better for Zig adoption-wise than for Nim as far as I can tell.

Ghostty

>No async rust.

I wonder why that deserves an explicit statement. Is there anything wrong with async Rust?

I don't know their reasoning (I don't write much Rust), but this was on the front page last week:

Async Rust never left the MVP state

https://news.ycombinator.com/item?id=48019163

Because Bun controls its own runtime and event loop in a particular way.

+1,009,257 -4,024

wild

Least unstable js project

"And Icarus laughed as he fell, for he knew to fall means to once have soared"

I low-key hope a codex shop, perhaps OpenAI themselves, does this too, so we can compare results.

For those daring to put this in production: you're crazy!

It shows that the choices/philosophy Zig made aren't the right ones, and that handling memory safety manually is still too tedious/hard at scale.

[deleted]

This is kinda sad; I liked having Bun as a good example of software in Zig.

I mean, aside from the somewhat... dishonest statements from the people involved: giving false explanations is one thing, but calling the people who smelled this "overreacting" gives this a weird taste.

I am neutral on such a rewrite itself, there are pros and cons to the whole "rewrite in Rust" topic. People are making decent arguments. But the way the initiator here reacted makes it seem like the Bun team itself thinks they are doing something weird here...

Guess reviewing any code isn't exactly their thing either anymore? And I guess adjusting the tests themselves is certainly one way to make things pass.

Ultimately this just seems like it was done specifically to make Bun more "ai friendly". Whether it turns out good or not that appears to be the motivation behind it.

I feel like there's an iron triangle here, that involves "is vibe-coded", "is secure" and "accepts bugfixes".

Like, you didn't review that 1M LoC. There's no way to have done so. If we're accepting slop-fest PRs, then nothing stops an attacker from burying a security bug in a slop-fest PR that then sails through review. And if I'm the attacker, I'm crafting that security hole with subtle clues for the security AIs reading it as to why it's "correct", so that your AI review bot goes "oh, yeah, this logic works".

This will burn the little reputation and trust Bun has been able to achieve in the past couple of years.

I guess this is what happens when you only have to respond to your corporate overlords.

I will migrate my Bun projects in production to something else.

It's interesting that the developer who spearheaded the hype around Zig abandoned the engineering without addressing the segfault. They could also have taken the approach of gradually porting from Zig to Rust via FFI. Yes, this is a slop show by the AI lab.

Hey, it forgot to change the README!

To me the interesting thing to watch about this project is that if it fails and Bun becomes a piece of shit, even with all the resources at their disposal, it means LLMs are probably not going to be the revolutionary tech everyone has been hyping them up to be. They're useful, sure, but software engineers aren't going away. How could anyone interpret this any other way?

I can't imagine doing this to my own code base, lol. I suppose only after Anthropic gave me a lot of money would I say: hey, fuck it, let's find out.

9 days to review +1 million LOC of Rust is enough? Wow...

I'm old. Currently in npm dependency hell on my side project. WTF is Bun, and will switching to it save me?

Unlikely. Not all npm packages are even compatible with Bun (tho 98% are).

Bun started off as an alternative runtime to Node (like Deno) but today is an everything-monster. It even has a built-in test-runner.

To be completely honest, if you're dealing with dependency hell in 2026 you might be misusing npm. Or you're trying to update a really old project

It's going to be an absolute mess of total AI slop, a black box that nobody understands, and it's going to cause more issues than it fixes.

Yep. How will we manage those 10x code projects when LLM costs increase by 10x?

I've done some pretty incredible things with LLMs. If this were sqlite with its exhaustive test suite... OK, I can see it.

It's hard for me to see this not becoming a pile of slop, but hey, maybe I'm wrong

Deno's approach from the beginning seems to have proven out.

Why would you replace an existing codebase like this instead of forking the repo instead and then making the changes?

They did fork it initially to experiment, then decided this experiment would go forward and thus naturally belong in the main repo.

Git has this branch concept. It's being used correctly here, IMHO.

I wonder if projects like Ladybird will try this approach now. They've been trying to move to Rust (after trying Swift first) for a while.

Now translate it into zig!

Probably goes without saying, but they likely had it check out thousands of projects that use Bun and compile them using the new Rust binary. And that was probably all automated and lifted into a compute structure that did all of that testing in 20 minutes. These people have scale.

Congratulations to everyone who uses Bun. You're now working as alpha testers for Anthropic... for free.

Anyone using Bun should consider migrating away immediately. Not because of the LLM angle, but because of how insanely irresponsible this is.

I reviewed the million lines of code added in a week, and I'm horrified. Not running that thing on my machine.

[deleted]

Well, this is uncomfy. What, a week ago this was just framed as an experiment and now it's being rammed through?

Even if it works/is correct/etc, this is shockingly careless.

If I'm going to be using your thing to build on top of, I sure as hell don't want to see you 180'ing a week after you just said you weren't going to do exactly what you just did.

Hard pass, purely on principle.

The result is so horrible that Anthropic will quietly move to Node in 6 months. Now they got their headlines and in 6 months everyone will have forgotten about it.

Time to fork it for zig

What does this mean for bun add-ons like opencode's opentui? Did FFI also somehow get ported or will that have to be updated? https://github.com/anomalyco/opentui

First, why are you calling it an "add-on"? Second, it's done via the same C ABI.
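Meaning the native side only has to keep exporting C-ABI symbols. A minimal sketch of what that looks like from Rust (a hypothetical function, not from Bun or opentui):

  // Exported with an unmangled name and the platform's C calling
  // convention, so any FFI loader that speaks the C ABI can call it.
  #[no_mangle]
  pub extern "C" fn add_u32(a: u32, b: u32) -> u32 {
      a.wrapping_add(b)
  }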

Node's been calling native code distributed in a npm package "add-ons" for a decade and a half.

Fair call on the same C ABI. Adapting to node 26.1.0's new FFI is happening in https://github.com/anomalyco/opentui/pull/104 . There are also some new FFI adapters opentui is adding there, and they're adding a worker.

So there is some adaptation. That was the sort of interesting, useful, actual look I thought might be informative, whereas I feel like you were mostly just trying to be curt and maintain a status quo of keeping us all uninformed. Let's try actually providing useful steps forward when we post, ok?

This will go down in history as the biggest mistake of software engineering of all time.

Bun is the runtime of Claude Code, which is the core product of a trillion-dollar company, which now sits on a vibe-coded app of which not a single person in the world has a proper mental model.

I don't know, there have been some pretty bad software mistakes, possibly bigger than a PR converting an app to Rust:

https://en.wikipedia.org/wiki/Therac-25

I hope no one ever builds (or even worse, vibe codes) a radiation treatment machine.

Claude Code itself is purely vibecoded; both the CC and Bun leads are saying that humans are not writing code at Anthropic anymore. It is amazing how much money they intend to squander, because it's all funny money to them; investors just give it to them hand over fist to burn. Developing wrappers around the model isn't even the hard part, and yet they're going to burn themselves to the ground getting high on their own supply.

> Claude Code itself is purely vibecoded [...] money they intend to squander [...] going to burn themselves to the ground getting high on their own supply.

This really, really, really isn't the burn you think it is. Going from 0 to $2B+ in revenue from a "purely vibecoded" thing is what they've said they're doing, and what they've actually done. As in: already done. It's not going back, no matter how many nuh-uhs people write. They've already shown this can be done.

People will continue to think that this is some sort of a gotcha. But it's actually precisely what they've done: they showed that dogfooding works. If this works, why not x y z?

2B+ in revenue on hundreds of billions in investments and future commitments is completely worthless. Anybody can turn $100b into $2b, that's not a fucking accomplishment. And to the extent that something is driving any revenue, it is the model, not the TUI. Any success Claude is having is despite the godawful TUI, not because of it.

claude.ai (their ChatGPT equivalent) was nowhere before CC came about. CC was coded in a few weeks by people, then a few months by people + CC, then mostly CC taking the wheel. It is without a doubt the main reason why they're successful. It is also the main reason why their coding models are as good as they are. They've incorporated the early data into their training recipes and evolved model + harness together.

They appear to be lining up a funding round at a $900 billion valuation. Or, to be more conservative, they already raised at $380 billion. A long way from worthless.

Yeah, are we all forgetting that VC valuations are based on hope and unicorn farts? Just because you give a company $100 billion doesn't turn it into a $900 billion company. Especially when said company has only generated $5 billion in total revenue:

https://www.reuters.com/commentary/breakingviews/anthropic-g...

I really wish I could tell people my LLC is worth $100 million because I sold a 0.0001% stake for $10k but I would be called a fraud; however if I was to gamble with pension funds and make the same claim suddenly I'm a visionary?

Good lord, no wonder people want to torch data centers.

[deleted]

Maybe this is the best marketing trick for Claude Code ever. Maybe there was pressure from Anthropic to do this and prove the value. Even partial success is enough to prove the value, justify the usage, and deepen AI dependency even further.

And as long as Bun doesn't break Claude Code, which only uses a subset of its APIs, this might just pay off.

Running the Rust version in their prod for two weeks should be long enough to catch the biggest crashes and fix them. It'll be up to bug bounty hunters to find the big one that crashes all their app servers at once.

It only needs to survive long enough for the IPO

[deleted]

On the other hand, they might be super confident in the results, and if it goes well they might use it as an example of how good Claude is.

Won't touch it with a ten foot pole.

Well, realistically, humans gave us software that is full of security holes (and bugs) as well; which piece of software have you seen that a human perfected the first time around? Give AI some time too, to be fair.

My initial reaction was that this is pure insanity but in fairness this is a fairly 1:1 port of existing code, so the developer's mental model of it should still match fairly well.

For instance look at this Zig function: https://github.com/oven-sh/bun/blob/ed1a70f81708d7d137de8de0...

Versus this Rust version: https://github.com/oven-sh/bun/blob/ed1a70f81708d7d137de8de0...

I did pick that at random but it does look like the best case. I skimmed through a lot of the Rust code and there's a surprisingly small amount of `unsafe`.

Still pretty insane to merge this in such a short time with so little testing, but I can easily think of bigger software engineering mistakes. Hell it's not like Bun even needs to be commercially successful any more.

It’s still 400k more lines

Dunno where you got that number from but it's half that. Tokei says:

  ===============================================================================
   Language            Files        Lines         Code     Comments       Blanks
  ===============================================================================
   Zig                  1298       711112       577946        57772        75394
   Rust                 1443       931232       737485       114373        79374
So it's 28% more lines of code (not comments/blanks).

Rust is mostly ~20% bigger. Except comments. Where they basically doubled... what's with that?

I for one am REALLY GLAD to see it consumes itself.

How life feels not using bun

This is so awesome! What a time to be alive that something like this is possible.

i find it hilarious how desperate people are to cope that this can't possibly work, must be horrible, etc. for all i know, it is. but let's just see how well it works, rather than "no true scotsman" grousing about it. it is so sad. it reeks of "doth protest too much" energy. if it were so obvious that ai was insufficient to do the work, then i don't think you'd have to circle the wagons about it. you could just confidently watch the market turn on the product and know the reason why. and all that would prove is just how special you all are, that ai cannot replicate your genius. the reality is that foundation model makers have been dogfooding their own vibes for multiple years now, and it clearly is good enough for _them_. but yeah, i'm sure that's just a total fluke and they are all idiots. /eyeroll

Last time I took the time to look at the details of such crap, it was CCC.

Great advertisement; fails to compile a random C project I have. Waste of my time.

Where are all the guys in the Hacker News comments who have been explaining how bad LLMs are?

LLMs bad¹ ² ³ ⁴

--

¹ when they empower idiots who vibe features with no regard for tech debt

² in the long run when they are used without human oversight

³ even on trivial tasks when results can't be reliably verified (e.g. test coverage)

⁴ the above list is not exhaustive, but outlines main points which should be easily recoverable (by any person smarter than a house spider) from the context of discussions involving LLM sceptics.

--

To answer your question "where" – take this as your home assignment. My message contains enough hints to come to the right answer.

Giant slop-filled PR (that will power future slop-generation) has caused slop-coded Github to stop loading properly.

The Anti-Singularity is approaching ever quicker!

It's okay, at this rate Anthropic will be the only ones left using Bun.

This is the Extinguish phase of the process, right?

What a disaster

We have hundreds of projects that run on Bun. (Some are Bun-specific for whatever reason, but most are "runtime-agnostic" TypeScript code that runs on Bun, Node 24.2+, and Deno; that means they run their test suites on Bun in addition to the other two.)

Out of curiosity, I installed the canary Bun and just ran a bunch of them. It didn't take me long to find one that works on stable Bun and crashes on "canary" Bun.

      schematic git:(main)  bun upgrade --canary
    [1.55s] Upgraded.
    
    Welcome to Bun's latest canary build!
    
    Report any bugs:
    
        https://github.com/oven-sh/bun/issues
    
    Changelog:
    
        https://github.com/oven-sh/bun/compare/0d9b296af...19d8ade2c
    
      schematic git:(main)  bun run main.ts serve
    Schematic Editor running at http://localhost:4200
    Bundled page in 25ms: src/web/index.html
    frontend TypeError: Cannot destructure property 'isLikelyComponentType' from null or undefined value
        at V0 (http://localhost:4200/_bun/client/index-00000000ac7e3555.js:24:2534)
        at reactRefreshAccept (http://localhost:4200/_bun/client/index-00000000ac7e3555.js:21:6090)
        at http://localhost:4200/_bun/client/index-00000000ac7e3555.js:8766:27
        at CY (http://localhost:4200/_bun/client/index-00000000ac7e3555.js:21:8973)
        at nY (http://localhost:4200/_bun/client/index-00000000ac7e3555.js:21:9285)
        (...more like this...)
        at m (http://localhost:4200/_bun/client/index-00000000ac7e3555.js:21:8773)
        at http://localhost:4200/_bun/client/index-00000000ac7e3555.js:24:6482
        at http://localhost:4200/_bun/client/index-00000000ac7e3555.js:24:6548
        from browser tab http://localhost:4200/
    ^C
      schematic git:(main)  bun upgrade --stable
    Downgrading from Bun 1.3.14-canary to Bun v1.3.14
    [2.02s] Upgraded.
    
    Welcome to Bun v1.3.14!
    
    What's new in Bun v1.3.14:
    
        https://bun.com/blog/release-notes/bun-v1.3.14
    
    Report any bugs:
    
        https://github.com/oven-sh/bun/issues
    
    Commit log:
    
        https://github.com/oven-sh/bun/compare/bun-v1.3.14...bun-v1.3.14
      schematic git:(main)  bun run main.ts serve
    Schematic Editor running at http://localhost:4200
    [browser] Version mismatch, hard-reloading
    Bundled page in 20ms: src/web/index.html
    
    # working fine as usual... ¯\_(ಠ_ಠ)_/¯
I mean "passes test suite" is one thing. And a good thing. But... "doesn't break any (or even, say 99.5%) of the apps deployed around the world that are built on bun" is a pretty radically different thing.

It's hard to feel like this is responsible behavior, but I will reserve judgement for now and see how long they persist with this "canary" phase.

If they extend it for a lengthy period, and even like, fix bugs on the Zig version and the Rust "canary" version, then... I would be mollified to a great extent, since it is so easy to switch between the Zig stable version and the Rust canary version.

As a pretty heavy user of Bun, I'm actually pretty psyched for it to switch to Rust... but given the abruptness and speed so far, I can't quite shake the "new AI dealer getting high on his own supply" vibe.

But I hope they enter an intensive phase of prioritizing any and all "canary" bugs, and come out on the other side with a better product, and an even faster rate of improvement (which has honestly been pretty wild already).

(Yes, of course, I will have my clanker file a bug report with repro... but that may take a few days.)

This bug was already reported very soon after the merge.

The bun is down the drain.

Vibe coders keep saying that you can now have 100x productivity, that you can write a million lines of code in a week and do what would take a team of 10 experienced developers a year.

Where are all these million-line vibe-coded projects? I don't see them. It's all hype.

This PR appears to be over a million lines (though GitHub won't load for me).

Of course the quality is the real question. I haven't had amazing results with LLMs with Rust, but they're less bad at it than they are at Zig, which is probably the reason for the rewrite.

At least in this case the original code was written carefully by hand, so the design is sane, and now just the auto-translation is in question. Now it just needs to be battle tested.

Bun is now literally vibe-coded, that's your proof. And Bun developers will solely use LLMs at some point (pretty close to "vibe coding").

Show me some gold instead of a continuous stream of pickaxes.

Bun is now the example. It's >1 million lines of code, entirely vibe-coded. All we do now is wait and see what happens.

Yeah, I believe op is using sarcasm (see username for one data point)

Now pull the branch and roll your own bun without license issues (using an ai) against their test suite.

Anyone using Bun in production excited for this release? (other than Anthropic of course)

I'm bullish on LLM-assisted development but this is just a very stupid way of performing such a critical migration.

Bun alert!

I hate to say this, but this reeks of "We're owned by Anthropic now and we were put to task to prove Claude Opus as the ultimate AI model, so we were forced to do a full port of something millions of developers rely on to Rust in record time. Just ignore the slop and unsafe statements." (sweeps the broom)

This is nothing more than a marketing stunt from Anthropic. Nothing to see here.

> millions of developers

Try a few thousand.

[deleted]

[dead]

Rust, Zig and TS went into a bun... /s

[flagged]

farewell, bun.

HN overreacting again.

I trust Jarred to make the right decisions regarding Bun, which seems to be his passion. Bun has always been amazing since I first tried it; it had some bugs along the way, but they didn't last long.

Anything bad that comes from this, will simply be fixed.

I hope more software does this and gets rid of their segmentation fault producing code, written in c++ and other unsafe languages

I can think of a few.

It has 10k unsafe blocks, pretty sure those segfaults are still gonna be there

Definitely. That's what a good translation is.

But then, agents can work on removing each unsafe block one by one, and this will bubble up issues.

Good; their time was being wasted trying to upstream into Zig, given the Zig team's anachronistic values on LLMs.

I might not necessarily agree with the haste / stability of this, but I commend Jarred for pushing boundaries on what AI coding is capable of, can't deny that. 4 years ago this would've seemed like science fiction.