And? This is absolutely the correct, standardized way to do a mechanical rewrite: first you produce a translation that maps directly onto the original source, so you can lean on the original's correctness guarantees, stay bug-for-bug compatible, and log issues as you find them. Then you move into the next phase, where you begin introducing idiomatic constructs.
This is the same in COBOL-to-Java ports that have been done in banking and insurance for the past 20 years.
If the rewrite were Zig to C and half the code ended up in __asm blocks, would that be different or the same?
COBOL to Java is a completely different thing and pretty much unrelated.
Rust can easily call C libraries and vice versa, and so can Zig. A more deliberate, well-designed rewrite would identify the core pieces of the Zig code that were the primary sources of the big issues. Then you rewrite that component in Rust and verify that you get the expected improvements. That keeps the codebase stable, keeps you honest about actually reducing bugs and issues, and has other benefits besides. Then you either keep it that way or slowly rinse and repeat.
Without first analyzing what the core issues actually were, the author of Bun can make no claims about the rewrite. He claims to have fixed flaky tests and improved memory safety. Where is the analysis that shows this? Where is the proof and data? Does he even know where the issues in the Zig codebase were? I saw a commit where a test had a one-second sleep put in place.
Compare this to, say, the Racket rewrite, where a significant portion of the C core was replaced by Chez Scheme and Racket itself. There were several blog posts doing both pre- and post-analysis, and Racket has far fewer users than Bun.
This rewrite is totally unprofessional and has been poorly, even antagonistically, communicated. The author was on this site just days ago telling everyone to relax and that he'd probably throw out this code, and that was after it had been pointed out that none of this was pre-communicated to users. If I depended on Bun, I would migrate off immediately.
So I push back on the idea that this is the way to do a rewrite like this.
>This is the same in COBOL-to-Java ports
It isn't, because those teams didn't think a naive 1:1 machine translation would give them the benefits of Java, which the people involved in this Rust rewrite somehow seem to think they've already gained despite the virtually identical code.
If the whole point genuinely had been a purely mechanical translation, they could and should have written a transpiler, which would have had significantly stronger correctness guarantees given that it would be deterministic. But of course that would have defeated the PR purpose of this whole thing, which frankly just looks like marketing for Anthropic.
You gain some benefits. In theory, an automatic language translation could improve compilation speed, portability, or even memory use and execution speed. But everyone, including the Bun people, understands that you certainly don't get code clarity benefits, and the safety benefits are extremely dubious.
> If the whole point genuinely would have been to do a purely mechanical translation they could and should have written a transpiler, which would have had significantly higher correctness guarantees than this given that it'd be deterministic, but of course that would have defeated the PR purpose of this whole thing, which just looks like a marketing for Anthropic frankly
If it were just a marketing stunt, you wouldn't have all but a fraction of a percent of the test suite passing, with the remaining bugs realistically very fixable, and everything written in languages whose type systems give far more guarantees than COBOL ever could.
You're being extremely negative about this whole endeavour without looking at the evidence that the effort is going far more smoothly than expected, which matches many people's experience using LLMs for tasks like these.
>You're being extremely negative about this whole endeavour without looking at the evidence that this effort is going far more smoothly than expected
No, I'm being negative because, as I just said, if you want a purely syntactic translation you don't even need an LLM: that's called transpilation, and we've been doing it programmatically for decades.
This is the kind of thing that looks great to people who can't program and think this is some new superpower unlocked by the mystery magic of LLMs, and that is exactly the impression Claude wants to sell.
Transpilation won't get you to 99.8% of a comprehensive test suite passing on a 700K+ line codebase in a week (it may get you nothing at all), and that's assuming transpilation is even practical for the language pair in question. So if you want results remotely like these, you most certainly do need an LLM.
There are literally formally verified language transpilers out there today. They can get you 100% coverage without "cheating" like LLMs tend to do by modifying test suites to pass, etc.
I'm currently using an LLM in my day job to accelerate exactly such a 1:1 translation, and it's certainly "working"/making progress, but God, I wish I had a formally verified machine translator instead of this probabilistic bullshitting LLM.
Don't get me wrong, it's extremely helpful and impressive in what it can do. But I trust it somewhat less than if I had done the work myself, and for good reason. The lies I tell myself tend not to take down production. The lies my LLM tells me, however, do.
I mean, no one is forcing you not to use a transpiler, right? If it were quicker to use one, or to build a specific, limited one for your existing codebase and run it, you would certainly have done that already.
Sadly none is available for my current use case. Building one is so far out of scope that it'd be the most epic yak shaving of all time. If this was a personal project I would consider it. My personal projects are all about the journey and not the destination so side quests are all part of the fun. Not true for my day job however...
A. Transpilation is not 100% compatible, because many idioms in some languages cannot be directly translated to others. Rust's lifetime system disallows a lot of constructs coming from languages with more relaxed constraints. Ironically, transpilation will produce code with worse semantics than an LLM. B. At this point it's clear that LLMs reason very effectively about code and its intent. If you haven't asked Claude Opus with Max Reasoning to do this, I suggest you give it a try, because the results are pretty fantastic.
If push comes to shove, you could probably still ask an LLM to generate the transpiler code, if you're so inclined, and then have it fix the remaining "edge cases" afterward, right…?