I'll be very interested in how this AI port turns out. I am involved in a number of active projects that are being held back by their language or framework, but where a rewrite would be too big a project to undertake with human power alone.

I've had more success vibe coding Rust than I have in more dynamic languages. I suspect the strictness of the Rust compiler forces the AI agent to produce better code. Not sure. It could be just that I am less familiar with Rust so it feels like it's doing a better job.

Rust is a good choice for letting LLMs run without a ton of supervision, though in my experience you still need to monitor progress closely and take ownership of the design of the thing you're building or porting. A test harness is a must. Each iteration should run the tests and ensure it doesn't break things in other places.

I am in the middle of porting TypeScript to Rust and learned a ton doing this. You can check out the work in progress here: https://github.com/mohsen1/tsz/

Happy to share my learnings on this.

I've been targeting Go instead of Rust for a few things. But same deal: I'm not really a Go programmer and it seems to work well enough. I do have a few decades of experience engineering all sorts of code bases, so I'm not coming at this completely naively.

My way of compensating for my own inability to do detailed code reviews is making sure the unit, integration, and end-to-end tests cover everything I care about. Without that, you can't be sure it isn't skipping detail work. I've also made it do some benchmarking and stress testing and then analyze the code base for potential bottlenecks; after it found and fixed a few issues, it got better. Finally, prompting it to do critical reviews, look for refactoring opportunities, etc. can give you a nice list of stuff to fix next. Having it run memory leak checkers and static analysis tools is also a good strategy. Once you start running low on issues you find this way, the code is probably not horrible. Or at least you've hit some sort of local optimum.

The lack of code reviews sounds pretty horrible. But code review is now quickly becoming the biggest bottleneck in AI-assisted coding. Eliminating that bottleneck is scary, but it enables a few step changes in the volume of code that becomes possible. Using strict compilers and strict memory management helps eliminate a few categories of bugs and issues.

I was previously doing this with languages I do understand. Once you start routinely dealing with larger and larger commits, reviews become a problem.

I expect working with larger code bases like this will get a lot easier and better over time. The main headaches I face with this type of engineering are the models' tendency to deliberately cut corners, test only the happy path, or defer essential work for later. I suspect a lot of the models are simply biased toward conserving token usage. Pretty annoying, but also easy to compensate for with follow-up prompts and testing, and probably something that becomes less of an issue as models get tuned to behave better without additional prompting.

> It could be just that I am less familiar with Rust so it feels like it's doing a better job.

Dunning-Kruger effect. At least you admit it.

This is pretty much the opposite of the Dunning-Kruger effect.

Yes, it generates trash Rust code.

> Not sure. It could be just that I am less familiar with Rust so it feels like it's doing a better job.

Ya think?

Doy!