This series was in response to another thread [1] which wanted to make rust mandatory in an upcoming release.
The author's proposal was instead to take the middle ground: use Rust as an optional dependency until a later point in time, when it becomes mandatory.
That later point was chosen based on when Rust support lands in GCC, which would make the transition smoother, since platforms that GCC supports would also be covered.
GCC has been hit and miss with frontends, though. Nobody uses gcj, for example. I rather doubt that they'll be able to implement a good compiler for a language that doesn't even have a standard without that implementation going wildly out of date in the future, just like what happened with Java.
There are two different methods by which Rust support can be added to GCC: adding a Rust frontend to GCC, or adding a GCC backend to the Rust compiler (rustc_codegen_gcc). The latter approach would not be as susceptible to implementation divergence as an independent frontend.
I am curious, what is the reason behind introducing Rust in Git?
I am not familiar with Git development, I am just a user. But my impression is that it is already a complete tool that won't require much new code to be written. Fixes and improvements here and there, sure, but that does not seem like a good reason to start using a new language. In contrast, I understand why adding it to e.g. Linux development makes sense, since new drivers will always need to be written.
Git is constantly gaining features, even if for the most part it seems like the core functionality is unchanged.
If you'd like to review the changelog, the Git repo has RelNotes but I've found GitHub's blog's Git category to be a more digestible resource on the matter: https://github.blog/open-source/git/
can you elaborate please?
Why jj is more feature complete for you than git?
I tried jj, and for now it looks too raw. The problem is also its git backend: I really don't want to care about two states of the repo at the same time, one being my local jj state and the other the remote git repo.
I think jj just has different concepts compared to git. E.g. in git you probably will not change history much (especially if it's been pushed to a remote), while in jj simple editing of commits is a headline feature. So comparing them on feature completeness looks strange to me.
After some experience with jj, I understand that jj is a user-oriented, user-friendly tool with batteries included, while git is a double-edged knife that is also highly customizable.
https://lore.kernel.org/git/ZZ9K1CVBKdij4tG0@tapette.crustyt... has a couple dozen replies and would be a useful place to start reading about it; beyond that, search that list for Rust. (Note, I’m only responding the opening question, not evaluating the arguments pro/con here or on the list; in any case, someone else surely will.)
Developers who work on git think it will help them do their jobs better. Do you need any more reasons beyond that? They don't need to justify it to users necessarily.
There's also the fact that if you want to recruit systems programmers for a project like git, the 19-year-old catgirls who are likely to be interested in that sort of work all work in Rust. Ask one to hack a legacy C code base and she might nyao at you angrily >:3
I'm not even a Rust or C developer and know this take is BS, Rust pretty clearly has major maintainability and code reliability/safety/stability benefits over C.
The whole point of Rust is that C, and all the code written therein (or as much as is feasible), eventually be replaced and abandoned. The potential costs of continuing to use C, with all the memory and concurrency bugs that come with it, run in the billions worldwide, if not more.
Besides which, in 2025 all the real ones are using jj, which is 100% Rust, not git—so if git wishes to remain competitive it needs to catch up.
not changing working code to prevent issues is unsafe.
we can go in circles all day with blanket statements that are all true. but we have ample evidence that even if we think some real-world C code is safe, it is often not because humans are extremely bad at writing safe C.
sometimes it's worth preventing that more strongly, sometimes it's not, evidently they think that software that a truly gigantic amount of humans and machines use is an area where it's worth the cost.
believing that rewriting to rust will make code safe is unsafe)
Of course it will be safer, but not safe. Safety is a marketing feature of Rust and nothing more. But a lot of people really believe in it and will zealously try to prove that Rust is safe.
A test will never catch every bug (otherwise it would be a proof), and any change has some probability of introducing a new bug, regardless of how careful you are. Thus, changing correct code will eventually result in incorrect code.
I honestly can't tell if this is meant as serious reply to my question (in that case: let's say I agree that Rust is 100% better than C; my question still stands) or as a way to mock Rust people's eagerness to rewrite everything in Rust (in that case: are you sure this is the reason behind this? They are not rewriting Git from scratch...)
Everyone on hackernews is well aware that C makes it relatively easy to create buffer overflows, and what buffer overflows are. You're still not responding to the GP's question.
I'm not involved in the initiative, so I can't answer the question definitively. I provided one of the major reasons that projects get switched from C, and I think it's likely to be a major part of the motivation.
Right, I never mentioned that I am a decently experienced C developer, so of course I got my fair share of buffer overflows and race conditions :)
I have also learned some Rust recently; I find it a nice language and quite pleasant to work with. I understand its benefits.
But still, Git is already a mature tool (one might say "finished"). Lots of bugs have been found and fixed. And if more are found, surely it will be easier to fix them in the C code than to rewrite in Rust? Unless the end goal is to rewrite the whole thing in Rust piece by piece, solving hidden memory bugs along the way.
> though I'm curious whether Rust's integer overflow behavior in release builds would have definitely fared better?
Based on the descriptions it's not the integer overflows that are issues themselves, it's that the overflows can lead to later buffer overflows. Rust's default release behavior is indeed to wrap on overflow, but buffer overflow checks will remain by default, so barring the use of unsafe I don't think there would have been corresponding vulnerabilities in Rust.
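To make that concrete, here's a minimal sketch (the `bogus_total` helper is made up for illustration): even when an integer computation wraps, Rust's slice accesses remain bounds-checked, so the wrapped value can't silently turn into an out-of-bounds read.

```rust
// A length computation that wraps. Release builds wrap on overflow by
// default; wrapping_mul makes that explicit so the behavior is the same
// in debug builds too.
fn bogus_total(count: usize, elem_size: usize) -> usize {
    count.wrapping_mul(elem_size)
}

fn main() {
    // (usize::MAX / 2 + 1) * 2 overflows and wraps to 0.
    let total = bogus_total(usize::MAX / 2 + 1, 2);
    assert_eq!(total, 0);

    // Even with a bogus length, slice access stays bounds-checked:
    // an out-of-range access returns None (or panics), it never reads
    // past the buffer.
    let buf = [0u8; 16];
    assert!(buf.get(total..).is_some()); // 0.. is in range
    assert!(buf.get(17..).is_none());    // out of range is caught
}
```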
This doesn't matter at all for programs like Git. Any non-freestanding program running on a modern OS on modern hardware that tries to access memory it's not supposed to will be killed by the OS. That seems like a more reasonable security boundary than relying on the language implementation to simply not emit code that does illegal things.
Yeah sure, memory safety is nice for debuggability and for being more confident in the program's correctness, but it is not more than that. It is neither security nor proven correctness.
Not quite the best example: since Git usually has unrestricted file access, plus network access through HTTP/SSH, any kind of RCE would be disastrous if used for data exfiltration, for instance.
If you want a better example, take distributed database software: behind DMZ, and the interesting code paths require auth.
Git already runs "foreign" code e.g. in filters. The ability to write code that reacts unexpectedly on crafted user input isn't restricted to languages providing unchecked array/pointer access.
I think bugs in the MMU hardware, or the kernel accidentally configuring the MMU to allow cross-process access that isn't supposed to happen, are quite rare.
Maybe I'm just old and moany, and I need to step aside for bigger and better things such as Rust.
But.
Now rather than needing to understand just C to work on Git/kernel, you now need to also know Rust. The toolchain complexity is increasing, and the mix of these languages increases the barrier to entry.
I'm highly invested into Git, having learned the tooling and having a significant number of projects constructed within it. I've written my own Git clients and have built a web server around Git repositories. I don't want to lose the hack-ability of Git.
> I'm just old and moany, and I need to step aside for bigger and better things such as Rust.
You are. This is firm "I don't want to have to learn new things" territory, which isn't a viable attitude in this industry.
In any case Rust is usually easier than C (excluding buggy C which is very easy to write), and certainly easier than actually learning the Git or Linux codebases.
We might also have different priorities. I do not care too much that Google and Apple want to lock down their smartphone spyware and sales platforms. The supply chain risks and maintenance burden imposed on me by the Rust ecosystem are much more of a concern.
I don't know what this has to do with locking down phones, but I do appreciate not getting compromised just for cloning a repo or opening my laptop at a coffee shop.
This is not what I said, but memory safety is certainly not anything which is a high priority for my own security. I still think memory safety is important and I also think Rust is an interesting language, but... the hype is exaggerated and driven by certain industry interests.
Rust isn't popular just because of memory safety though. I think the memory safety message is maybe a little too loud.
It's also a modern language with fantastic tooling, very high quality library ecosystem and a strong type system that reduces the chance of all kinds of bugs.
It's obviously not perfect: compile time is ... ok, there aren't any mature GUI toolkits (though that's true of many languages), async Rust has way too many footguns. But it's still waaaaay better than C or C++. In a different league.
Rust is a nice language, but it was pushed too aggressively with the argument of "memory safety" at all costs, ignoring other considerations. And Cargo is certainly a disaster, even if it may be considered "fantastic tooling" by some. In any case, I do not think it is funny that I now depend on packages without timely security updates in my distribution. This makes me less secure.
I guess this depends on what you consider good tooling. I am relatively happy with C tooling. If you want to quickly assemble something from existing libraries, then language-level package managers like npm, cargo, and pip are certainly super convenient. But that convenience comes at a high cost. We now have worms again; I thought those times were long over... IMHO package management belongs in a distribution with quality control, and dependencies should be minimized and carefully selected.
Fair enough. I just find it mind-boggling how much money flows into completely new language ecosystems compared to improvements for C/C++ tooling, which would clearly be much more effective if you really cared about the overall security of the free software world.
The issue with investing similar levels of effort into making C++ safer is the C++ standards committee doesn't want to adopt those kinds of improvements.
Which is also the reason why we don't have #pragma once and many other extensions like it. Except we do. Compilers can add rust-like static analyzers without the standard committee mandating it.
I am not interested in C++, it is also far too complex. In my opinion software needs to become simpler and not more complicated, and I fear Rust might be a step into the wrong direction.
Personally, I use Rust (and have been using it for close to 9 years) because I've been part of multiple teams that have delivered reliable, performant systems software in it, within a budget that would clearly be impossible in any other language. Rust acts as a step change in getting things done.
While I really really want devices I can own, I don't want to compromise security to do it. We need to do two things:
1. Lobby politicians to write laws that allow us to actually own the devices we bought.
2. Stop the FUD that a device that can be jailbroken is insecure. I heard this from our frigging CSO, of all people, and it's patently false, just FUD by Apple and Google who want you to be afraid of owning your device.
I want a device that's as secure as possible, but that I can own. I don't want to hack my own self just to get what I paid for.
It is a sad thing, but I do root against secure boot initiatives, because they almost entirely work to limit users' freedom instead of improving their security.
> You are. This is firm "I don't want to have to learn new things" territory, which isn't a viable attitude in this industry.
It's viable, but limiting. Sometimes you have to do things you don't want to, which is why it's called work. But if you can choose what platforms you work on, you can orient towards things where things change less, and then you don't need to learn new things as often.
Chances are, if you get into the weeds in a lot of C programs, Rust is in your future, but it's viable to not want that, and to moan about it while doing it when you need to.
As someone with experience in this specific niche, yes they absolutely are. There are no longer ten thousand retail chains asking for COBOL-based counterpoint PoS mods on a yearly basis.
The COBOL market is basically tenured experts in existing systems or polyglots helping migrate the systems to VB or C# at this point. The market has plummeted and now it's in the final deflationary shrink before death.
It's not "having to learn something new", but "having to be good at two things, both of which are full languages with their own specifics, problems, and ways to solve them, two sets of compilers, and some duct tape to hold them together".
It's like putting steak on a pizza... pizza is good, steak is good, pizza on a steak might be good too, but to actually do that in production, you now need two prep stations and you can't mess up either one.
Rust is over 10 years old now. It has a track record of delivering what it promises, and a very satisfied growing userbase.
OTOH static analyzers for C have been around for longer than Rust, and we're still waiting for them to disprove Rice's theorem.
AI tools so far are famous for generating low-quality code, and generating bogus vulnerability reports. They may eventually get better and end up being used to make C code secure - see DARPA's TRACTOR program.
The applicability of Rice's theorem with respect to static analysis or abstract interpretation is more complex than you implied. First, static analysis tools are largely pattern-oriented. Pattern matching is how they sidestep undecidability. These tools have their place, but they aren't trying to be the tooling you or the parent claim. Instead, they are more useful to enforce coding style. This can be used to help with secure software development practices, but only by enforcing idiomatic style.
Bounded model checkers, on the other hand, are this tooling. They don't have to disprove Rice's theorem to work. In fact, they work directly with this theorem. They transform code into state equations that are run through an SMT solver. They are looking for logic errors, use-after-free, buffer overruns, etc. But, they also fail code for unterminated execution within the constraints of the simulation. If abstract interpretation through SMT states does not complete in a certain number of steps, then this is also considered a failure. The function or subset of the program only passes if the SMT solver can't find a satisfactory state that triggers one of these issues, through any possible input or external state.
These model checkers also provide the ability for user-defined assertions, making it possible to build and verify function contracts. This allows proof engineers to tie in proofs about higher level properties of code without having to build constructive proofs of all of this code.
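As a toy illustration of the idea (not real model-checker usage; tools like CBMC or Kani explore the state space symbolically through an SMT solver rather than by enumeration, and `saturating_double` is a made-up example function), a bounded check of a user-defined contract over a small input domain might look like:

```rust
// Contract under test: doubling saturates at the max instead of overflowing.
fn saturating_double(x: u8) -> u8 {
    x.checked_mul(2).unwrap_or(u8::MAX)
}

fn main() {
    // "Bounded" verification by brute force: the contract is checked for
    // every value in the (small) input domain, so passing here rules out
    // the whole class of violations, not just a few sampled cases.
    for x in 0..=u8::MAX {
        let y = saturating_double(x);
        // Contract: the result is exactly 2*x, or saturated at the max.
        assert!(u16::from(y) == u16::from(x) * 2 || y == u8::MAX);
    }
}
```

A real bounded model checker does the same kind of exhaustive reasoning over much larger domains by solving constraints instead of enumerating inputs.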
Rust has its own issues. For instance, its core library is unsafe, because it has to use unsafe operations to interface with the OS, or to build containers or memory management models that simply can't be described with the borrow checker. This has led to its own CVEs. To strengthen the core library, core Rust developers have started using Kani -- a bounded model checker like those available for C or other languages.
Bounded model checking works. This tooling can be used to make either C or Rust safer. It can be used to augment proofs of theorems built in a proof assistant to extend this to implementation. The overhead of model checking is about that of unit testing, once you understand how to use it.
It is significantly less expensive to teach C developers how to model check their software using CBMC than it is to teach them Rust and then have them port code to Rust. Using CBMC properly, one can get better security guarantees than using vanilla Rust. Overall, an Ada + Spark, CBMC + C, Kani + Rust strategy coupled with constructive theory and proofs regarding overall architectural guarantees will yield equivalent safety and security. I'd trust such pairings of process and tooling -- regardless of language choice -- over any LLM derived solutions.
Sure it's possible in theory, but how many C codebases actually use formal verification? I don't think I've seen a single one. Git certainly doesn't do anything like that.
I have occasionally used CBMC for isolated functions, but that must already put me in the top 0.1% of formal verification users.
It's not used more because it is unknown, not because it is difficult to use or that it is impractical.
I've written several libraries and several services now that have 100% coverage via CBMC. I'm quite experienced with C development and with secure development, and reaching this point always finds a handful of potentially exploitable errors I would have missed. The development overhead of reaching this point is about the same as the overhead of getting to 80% unit test coverage using traditional test automation.
You're describing cases in which static analyzers/model checkers give up, and can't provide a definitive answer. To me this isn't side-stepping the undecidability problem, this is hitting the problem.
C's semantics create dead-ends for non-local reasoning about programs, so you get inconclusive/best-effort results propped up by heuristics. This is of course better than nothing, and still very useful for C, but it's weak and limited compared to the guarantees that safe Rust gives.
The bar set for Rust's static analysis and checks is to detect and prevent every UB in safe Rust code. If there's a false positive, people file it as a soundness bug or a CVE. If you can make Rust's libstd crash from safe Rust code, even if it requires deliberately invalid inputs, it's still a CVE for Rust. There is no comparable expectation of having anything reliably checkable in C. You can crash stdlib by feeding it invalid inputs, and it's not a CVE, just don't do that. Static analyzers are allowed to have false negatives, and it's normal.
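A small sketch of that difference in practice: invalid inputs to Rust's stdlib produce an `Err` or a deterministic, catchable panic, never silent memory corruption.

```rust
fn main() {
    // Invalid input to a stdlib API is rejected with an Err
    // (defined behavior), not an out-of-bounds read:
    assert!(String::from_utf8(vec![0xff, 0xfe]).is_err());

    // An out-of-range index panics deterministically; the panic can
    // even be caught. It is never silent UB.
    std::panic::set_hook(Box::new(|_| {})); // quiet the panic message
    let v = vec![1, 2, 3];
    let result = std::panic::catch_unwind(|| v[10]);
    assert!(result.is_err());
}
```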
You can get better guarantees for C if you restrict semantics of the language, add annotations/contracts for gaps in its type system, add assertions for things it can't check, and replace all the C code that the checker fails on with alternative idioms that fit the restricted model. But at that point it's not a silver bullet of "keep your C codebase, and just use a static analyzer", but it starts looking like a rewrite of C in a more restrictive dialect, and the more guarantees you want, the more code you need to annotate and adapt to the checks.
And this is basically Rust's approach. The unsafe Rust is pretty close to the semantics of C (with UB and all), but by default the code is restricted to a subset designed to be easy for static analysis to be able to guarantee it can't cause UB. Rust has a model checker for pointer aliasing and sharing of data across threads. It has a built-in static analyzer for memory management. It makes programmers specify contracts necessary for the analysis, and verifies that the declarations are logically consistent. It injects assertions for things it can't check at compile time, and gives an option to selectively bypass the checkers for code that doesn't fit their model. It also has a bunch of less rigorous static analyzers detecting certain patterns of logic errors, missing error handling, and flagging suspicious and unidiomatic code.
It would be amazing if C had a static analyzer that could reliably assure with a high level of certainty, out of the box, that a heavily multi-threaded complex code doesn't contain any UB, doesn't corrupt memory, and won't have use-after-free, even if the code is full of dynamic memory (de)allocations, callbacks, thread-locals, on-stack data of one thread shared with another, objects moved between threads, while mixing objects and code from multiple 3rd party libraries. Rust does that across millions lines of code, and it's not even a separate static analyzer with specially-written proofs, it's just how it works.
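A minimal sketch of what that looks like from the programmer's side (the `parallel_count` helper is hypothetical): shared mutable state across threads only compiles once it is expressed through types the checker can verify, here `Arc` plus `Mutex`; handing a plain `&mut` to another thread is rejected at compile time.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Have `threads` threads each perform `per_thread` increments on shared
// state. The compiler accepts this only because the counter is wrapped
// in Arc (shared ownership) + Mutex (synchronized access).
fn parallel_count(threads: usize, per_thread: u64) -> u64 {
    let total = Arc::new(Mutex::new(0u64));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let total = Arc::clone(&total);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    *total.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let n = *total.lock().unwrap();
    n
}

fn main() {
    assert_eq!(parallel_count(4, 1000), 4000);
}
```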
Such analysis requires code with sufficient annotations and restricted to design patterns that obviously conform to the checkable model. Rust had a luxury of having this from the start, and already has a whole ecosystem built around it.
C doesn't have that. You start from a much worse position (with mutable aliasing, const that barely does anything, and a type system without ownership or any thread safety information) and need to add checks and refactor code just to catch up to the baseline. And in the end, with all that effort, you end up with a C dialect peppered with macros, and merely fix one problem in C, without getting additional benefits of a modern language.
CBMC+C has a higher ceiling than vanilla Rust, and SMT solvers are more powerful, but the choice isn't limited to C+analyzers vs only plain Rust. You still can run additional checkers/solvers on top of everything Rust has built-in, and further proofs are easier thanks to being on top of stronger baseline guarantees and a stricter type system.
If we mark any case that might be undecidable as a failure case, and require that code be written that can be verified, then this is very much sidestepping undecidability by definition. Rust's borrow checker does the same exact thing. Write code that the borrow checker can't verify, and you'll get an error, even if it might be perfectly valid. That's by design, and it's absolutely a design meant to sidestep undecidability.
Yes, CBMC + C provides a higher ceiling. Coupling Kani with Rust results in the exact same ceiling as CBMC + C. Not a higher one. Kani compiles Rust to the same goto-C that CBMC compiles C to. Not a better one. The abstract model and theory that Kani provides is far more strict than what Rust provides with its borrow checker and static analysis. It's also more universal, which is why Kani works on both safe and unsafe Rust.
If you like Rust, great. Use it. But, at the point of coupling Kani and Rust, it's reaching safety parity with model checked C, and not surpassing it. That's fine. Similar safety parity can be reached with Ada + Spark, C++ and ESBMC, Java and JBMC, etc. There are many ways of reaching the same goal.
There's no need to pepper C with macros or to require a stronger type system with C to use CBMC and to get similar guarantees. Strong type systems do provide some structure -- and there's nothing wrong with using one -- but unless we are talking about building a dependent type system, such as what is provided with Lean 4, Coq, Agda, etc., it's not enough to add equivalent safety. A dependent type system also adds undecidability, requiring proofs and tactics to verify the types. That's great, but it's also a much more involved proposition than using a model checker. Rust's H-M type system, while certainly nice for what it is, is limited in what safety guarantees it can make. At that point, choosing a language with a stronger type system or not is a style choice. Arguably, it lets you organize software in a better way that would require manual work in other languages. Maybe this makes sense for your team, and maybe it doesn't. Plenty of people write software in Lisp, Python, Ruby, or similar languages with dynamic and duck typing. They can build highly organized and safe software. In fact, such software can be made safe, much as C can be made safe with the appropriate application of process and tooling.
I'm not defending C or attacking Rust here. I'm pointing out that model checking makes both safer than either can be on their own. As with my original reply, model checking is something different than static analysis, and it's something greater than what either vanilla C or vanilla Rust can provide on their own. Does safe vanilla Rust have better memory safety than vanilla C? Of course. Is it automatically safe against the two dozen other classes of attacks by default and without careful software development? No. Is it automatically safe against these attacks with model checking? Also no. However, we can use model checking to demonstrate the absence of entire classes of bugs -- each of these classes of bugs -- whether we model check software written in C or in Rust.
If I had to choose between model checking an existing codebase (git or the Linux kernel), or slowly rewriting it in another language, I'd choose the former every time. It provides, by far, the largest gain for the least amount of work.
In my experience current AI is still far from reasoning about the kind of hard-to-spot bugs in C that lead to the worst exploits. Rust solves most of these by design. It isn't about adding a second language - it is about slowly phasing out a language that is being misused in areas it shouldn't be in.
C will at some point be relegated to being an educational language, incredibly valuable for its few but good abstractions over assembly. It will continue to exist for decades in most systems, but hopefully it won't be used outside the maintenance of legacy systems.
Believing that this will be the case forever is naive. At some point there will be extensions. Then those extensions will become all but mandatory for interacting with other git users.
Perl, TCL and Python are all written in C, as well as many shells, so despite their interdependency the total complexity can be satisfied with a C11 compiler.
I did check this out. The shell, perl and python are likely for scripting and not used during runtime. TCL is likely some form of dynamic scripting.
I think we also have to be honest about what the project here is: it's not to have both C and Rust together, but to replace all C with Rust. In which case, it probably makes sense to just clone the repo and work on a fork like they did with SSH.
> The shell, perl and python are likely for scripting and not used during runtime.
Some git subcommands are implemented in these. git filter-branch is a shell script, git cvsimport is a Perl script, and git p4 (perforce interop) is a Python script. There are not too many left these days (git add -p/-i also used to call a Perl script), but they exist.
I'm sure you are aware why, reading between the lines of what you said, but for others who aren't aware of the history of git: it was originally about 50% C and 50% Perl; the performance-critical parts were written in C, and various git commands were written in Perl. Over time almost all the Perl was removed, because there were fewer Perl monks than C devs.
Now the logic seems reversed: even though there are fewer Rust devs than C devs, Rust is going to replace C. Maybe now that git is large enough and entrenched enough, such a move can be forced through.
> it was originally about 50% C and 50% Perl, the performance critical parts were written in C and then various git commands were written in Perl.
IIRC, it was mostly shell, not Perl, and looking at the proportion is misleading: the low-level commands (the "plumbing") like git-cat-file or git-commit-tree were all in C, while the more user-friendly commands (the "porcelain") like git-log or git-commit were all shell scripts calling the low-level commands. Yes, even things we consider fundamental today like "git commit" were shell scripts.
I believe gitk and git-gui are written in tcl. Those are definitely things that get shipped to the user, so (at least for those parts) you wouldn't need to have a toolchain on the build server.
A number of the git commands were implemented in perl and shell. Now I see only git-svn is perl here for me and there's still a few shell scripts in /usr/libexec/git.
Agreed. And if someone is interested in contributing to the Linux kernel, a new programming language is far from the hardest thing that they need to learn...
Rust will, in fact, make it significantly easier to contribute.
In C, you have to remember lots of rules of when what is safe and what locks to hold when. In Rust, APIs are structured to make unsafe use impossible without explicitly saying `unsafe`.
Concrete example: in Rust, locking a mutex returns a handle that lets you access the data protected by the mutex, and the mutex is unlocked when the handle is dropped.
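A minimal sketch of that pattern with `std::sync::Mutex` (the `increment` helper is made up for illustration):

```rust
use std::sync::Mutex;

// Bump the value behind the mutex. The only way to reach the data is
// through the guard returned by lock(); the mutex is unlocked when the
// guard is dropped, so there is no unlock call to forget.
fn increment(counter: &Mutex<i32>) {
    let mut guard = counter.lock().unwrap();
    *guard += 1;
} // guard dropped here => mutex unlocked

fn main() {
    let counter = Mutex::new(0);
    increment(&counter);
    increment(&counter);
    assert_eq!(*counter.lock().unwrap(), 2);
}
```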
> Concrete example: in Rust, locking a mutex returns a handle that lets you access the data protected by the mutex, and the mutex is unlocked when the handle is dropped.
This is how it works in the kernel on the C side, too. Usually by using guard/scoped_guard which wrap the generic mutexes with some RAII.
Interestingly enough, this is the only mention of scoped_guard in Documentation/. I will definitely argue that (that part of) Rust is way more approachable.
Using device-managed and cleanup.h constructs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Netdev remains skeptical about promises of all "auto-cleanup" APIs,
including even ``devm_`` helpers, historically. They are not the preferred
style of implementation, merely an acceptable one.
Use of ``guard()`` is discouraged within any function longer than 20 lines,
``scoped_guard()`` is considered more readable. Using normal lock/unlock is
still (weakly) preferred.
Low level cleanup constructs (such as ``__free()``) can be used when building
APIs and helpers, especially scoped iterators. However, direct use of
``__free()`` within networking core and drivers is discouraged.
Similar guidance applies to declaring variables mid-function.
Except now these software engineers have to code switch between languages.
Could you software engineers stop making things harder for yourselves and playing this meaningless flex of a status game, and you know, focus on something tangible, meaningful, instead of adding more bureaucracy?
I'm guessing you aren't a software engineer based on this comment, but the difference between programming languages is tangible and meaningful. It isn't like human languages where they're mostly basically the same and achieve the same thing.
And code switching between languages is not hard at all.
It's hilarious that you can assume such a thing from a couple of words on the internet. Or maybe I'm not a 'software engineer' by your standards because, unlike your closed group of SWEs, I'm a lot less focused on resume padding and a lot more on keeping my codebase sane and not exploding in complexity.
I should specify: it's hard in that it's troublesome to have to code switch and do a bunch of recall before working on the thing.
Say you haven't worked in this secondary language for a long time, which absolutely happens, and have to spend hours of effort recalling it. That's time you shouldn't have to spend; it's just how memory works.
I didn’t make the assumption but it sounded like a reasonable assumption based on the pronouns you used. You said “could you software engineers stop making things harder for yourselves.” A reasonable interpretation of this is that you aren’t a software engineer.
Reinforced softly by the rest of your comment not being technically sound. Adding a second language that is meaningfully different in its strengths and weaknesses isn’t “bureaucracy”. Bureaucracy is more like “sign a CLA before you can contribute”.
Okay then, how about another interpretation: I'm a software engineer questioning the broader group of SWEs on what they're trying. (Somehow I have to show you another interpretation; I can't believe how tunnel-visioned people can be.)
Also, bureaucracy is added friction, usually introduced by humans. It can be found everywhere you're working with humans, from leetcode interviews to code styles and practices. It's not just a bunch of signed papers.
Sure, you can add the second language if it adds value, but let's not pretend that the added friction isn't there. If you could solve your problems without the friction of a second language, that would be better.
> I should specify - it's hard in that it's troublesome to have to code switch and do a bunch of recall before working on the thing.
You don't sound like you have any experience working on software projects. I can tell you it's not hard to switch between programming languages. If anything, the difficulty lies in onboarding onto projects you are not familiar with, and the programming language in use is far from being a relevant factor if you are already familiar with it.
Even if it's 'not hard', your brain has to compensate for switching to another realm/space, and that takes energy and time, especially if you haven't used that particular space for a long time.
This is backed by science. Go read up on short-term working memory and crystallized memory.
All this will add to the maintenance costs, so it had better be a good trade-off.
Look at it from the other angle: there are many developers (myself included), especially younger developers, who would much prefer developing in rust to c, and at least some of them don't want to learn how to write c (including how to avoid undefined behavior).
> I've written my own Git clients and have built a web server around Git repositories. I don't want to lose the hack-ability of Git.
How does the git project using rust inhibit your ability to do any of that?
I've also sent some patches git's way and I can't say I'm thrilled about being forced to (finally) learn Rust if I want to contribute again in the future. I guess I'm outdated...
They're proposing porting over one small piece that has no dependencies and exposing it to the rest of git via a C interface. Yes, they'll presumably port more over in the future if it goes well, but it's a gross exaggeration to characterize this as somehow making it impossible to contribute without knowing Rust.
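For readers unfamiliar with how a piece of Rust can be exposed "to the rest of git via a C interface", here is a minimal sketch of the general technique. The function name and signature are invented for illustration; this is not code from the actual Git series.

```rust
// Hypothetical sketch: a small Rust routine exposed to a C codebase
// through a plain C ABI. `#[no_mangle]` keeps the symbol name stable,
// and `extern "C"` selects the C calling convention.

/// Count newline characters in a byte buffer handed over from C.
#[no_mangle]
pub extern "C" fn rust_count_lines(buf: *const u8, len: usize) -> usize {
    if buf.is_null() {
        return 0;
    }
    // SAFETY: the C caller promises `buf` points to `len` readable bytes.
    let bytes = unsafe { std::slice::from_raw_parts(buf, len) };
    bytes.iter().filter(|&&b| b == b'\n').count()
}

fn main() {
    // Exercising the function from Rust itself; a C caller would declare
    // `size_t rust_count_lines(const unsigned char *buf, size_t len);`
    // and link against the compiled Rust static library.
    let text = b"one\ntwo\nthree\n";
    assert_eq!(rust_count_lines(text.as_ptr(), text.len()), 3);
}
```

The point is that the rest of the C codebase only ever sees an ordinary C function, which is what keeps the blast radius of such an experiment small.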
I know that it is a "slippery slope" argument, but in the future, it will become more difficult to contribute without knowing Rust. That's the entire point of introducing it.
I guess in a certain sense, yes, the total number of lines of code in C will go down, so the difficulty of finding a place to contribute will go down by that metric. On the other hand, I'd argue that it seems rather unlikely that literally all of the C code will be gone from git at least over the next couple of decades (and that's assuming that there's even a desire to rewrite it entirely, which doesn't seem like it's anywhere close to even being possible to discuss seriously any time soon), so it seems like the amount of difficulty will be so small that it's a bit silly to worry about it. Keep in mind that there's still not anything stopping new code from being written in C just because new code might also be possible to write in Rust. Right now, it's literally impossible to contribute Rust code to git, so if it becomes infinitesimally harder to contribute C code to make contributing Rust code possible, that's still arguably a much larger increase in the net "contributability" of the git codebase, for lack of a better term.
I understand that it's a minor change in its current state. However, it is a fact that the long term goal is to port everything to rust. Once that goal is accomplished, rust will be required. So it is not at all a gross exaggeration. It's a prediction of the future.
I don't even disagree with that goal, I think it's desirable that things be written in rust, it's a really good language that provides a lot of benefits. I think I've just been infected with the C virus too long. I can't even tolerate C++.
> I understand that it's a minor change in its current state. However, it is a fact that the long term goal is to port everything to rust. Once that goal is accomplished, rust will be required. So it is not at all a gross exaggeration. It's a prediction of the future.
Whose goal is this? I know that there's a perception of there being a loud, vocal contingent of people who have this goal in general, but is there anyone who actually is involved in git maintenance who has stated this intent? The proposal linked above states the following:
> As said, the entire goal is for us to have an easy playground that we can experiment on and develop the infrastructure incrementally without yet having to commit to anything.
> I'm mostly splitting out the topic of introducing Rust from the larger series that introduce it into xdiff so that we can focus more on the actual process of introducing Rust into Git and less on the potential features that we want to build on top of it.
My reading of this is that there are specific features that they at least want to consider using Rust for, and that having support for it in the build process is a prerequisite for that. That doesn't imply at all to me that they would want to rewrite all existing features in it, or to prevent new C code from being written for git after some point in the future. Even if there are some people involved with that goal, it hardly seems like that goal is shared by everyone who might be involved in that type of decision, and I'd argue that people wouldn't even have to be in agreement about that goal to be in favor of this step. I don't find it that hard to believe someone might want to allow using Rust for new features but generally be against the idea of rewriting all features in Rust.
Having written Rust professionally for six years and used it for around a decade, my experience is that there are surprisingly few prolific Rust programmers who seem to devote much time to thinking about trying to get existing projects to rewrite all of their codebase into Rust. It's much more likely that they'd just start an entirely new project that overlaps substantially with an existing one, although even then it's rare for the new project to ever get anywhere close to fully replacing the existing one (if that's even the goal); ripgrep might have wide adoption alongside grep, but grep isn't going anywhere, and I suspect that burntsushi would be one of the last people to suggest it would.
There's also a lot of significant work spent on improving Rust's ability to interoperate with other languages. Libraries made with bindgen (and cbindgen in the other direction) probably have done far more to acclimate Rust programmers to using existing libraries in other languages than to expedite those libraries being rewritten, and there are some popular wrappers that try to go beyond that and provide even more idiomatic bindings for specific languages, like pyo3 for Python, neon for NodeJS, and cxx for C++ (which was written by the same person who basically single-handedly created the current proc macro ecosystem in Rust, alongside specific libraries utilizing it like serde and thiserror, so hardly someone who would have no motivation to try to have more code rewritten in Rust). If there is an effort being made to tell everyone to rewrite everything in Rust, there's just as much effort going on from people writing Rust to actively work with existing code in other languages, and their work is having far more impact than the first group's.
I honestly can't help but wonder if the only reason the debate about rewriting stuff in Rust is still going on is that the people against it engage with it rather than just ignoring it as empty bluster. My hot take is that there's never been anywhere close to the critical mass of people with the skill and desire to put in the work that would be required to make it happen, and there likely never will be, so the debate has been sustained on one side by a range from armchair quarterbacking to intentional ragebait, and on the other side by a range from misguided attempts to engage seriously with what's essentially always been just a meme to pearl-clutching at the idea that someone would dare question the status quo. Maybe there was an interesting philosophical debate to be had about the hypothetical merits of rewriting the world in Rust in the early days, but we're long past the point where there's anything useful left to say on the topic, so we'd all be better off just collectively moving on and figuring out how things will play out in the real world. C and C++ are definitely not going anywhere in our lifetimes, and Rust has sufficiently proved that it can be used successfully in professional contexts, so the remaining questions are all going to be about tradeoffs between legitimate choices rather than jockeying to see who sticks around in a "winner-takes-all" ecosystem.
I think it is really as simple as this: change is hard and a lot of people struggle with it to varying degrees for different reasons. Just look around at the people in your life and how they react to changes. It's really the same sort of pattern that plays out with Rust.
I distinctly remember reading the comments in the thread here about the initial release of ripgrep, and I remember coming away with a strong impression not just of your technical skill (which was apparent even before reading the thread), but just how pragmatic your viewpoint was. I didn't get the feeling you had any desire to displace anything, but just to solve a specific problem for people who wanted it, and if some people preferred not to use it, that was fine too! As someone who was fairly early on in my software career then, it was an extremely valuable lesson in humility from someone with a pedigree that I presumably wouldn't ever match.
Your reappearance here after my mention is probably another useful lesson for me to have a bit more empathy for those who are reacting more strongly to this announcement than I'd otherwise understand.
Thanks for the kind words! And I'm not perfect either. I find the resistance to change to be extremely frustrating at points. And especially so when it involves misinformation of some sort.
I'm in the same boat :) But no worries. You can always build and use an older git without rust. Of course, it will work for a while, until those kids change the proto for the "better". And being old and grumpy also means you can slowly care less and less about all that moot :)
Kids: now downvote it into oblivion :) Like I give a shit...
Rust suffers from the same problems that functional programming languages suffer from: a steep learning curve and high complexity. The high complexity is intended to push more runtime errors back to compile time, but boy, does the language pay for it. Rust is a tire fire of complexity.
For these reasons I believe it is not a good idea. The kernel also sort of rejected Rust. The kernel is complex enough without adding a Haskell-style type system and a lisp-level macro system capable of obfuscating what code calls what code. serde code is so hard to spelunk for this reason. Contrast this with Go's Unmarshal, which is much easier to follow.
I personally find functional programming languages, including Rust, much clearer than C or Go, in particular because you can offload much information onto the compiler. The example of Serde feels a bit weird, because I don't think I've ever encountered issues with Serde code, while almost 100% of the times I've used Go in production, I've needed to debug through Go's Unmarshal and its... interesting implementation.
Also, last time I checked, the kernel didn't reject Rust. There was a conflict between two specific developers on the best place to store some headers, which is slightly different.
Yes, but :) Rust isn't complex because it has functional traits, but rather because of its other design choices. Complex, nonetheless, but, I'd also say, looks "scarier" from the outside. I recently gave in and learned it, and it's much easier to handle than I thought before.
I actually think Rust is pretty easy to pick up for anyone that’s written Typescript and can use their linter to understand references and unwrapping a Result and catching an error.
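For instance, the "unwrapping a Result and catching an error" pattern mentioned above looks roughly like this (a minimal sketch; the function names are invented for illustration):

```rust
use std::num::ParseIntError;

/// Parse two numbers and divide them, propagating parse errors with `?`.
fn parse_and_divide(a: &str, b: &str) -> Result<i64, ParseIntError> {
    let x: i64 = a.trim().parse()?; // `?` returns early on Err
    let y: i64 = b.trim().parse()?;
    Ok(x / y)
}

fn main() {
    // "Catching" the error by pattern matching instead of panicking:
    match parse_and_divide("84", "2") {
        Ok(v) => println!("result = {v}"),
        Err(e) => eprintln!("parse failed: {e}"),
    }
    assert_eq!(parse_and_divide("84", "2"), Ok(42));
    assert!(parse_and_divide("84", "oops").is_err());
}
```

The compiler (and any linter on top of it) forces you to handle both arms, which is much the same discipline a strict Typescript setup imposes.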
> Though I would argue the absurd amount of undefined behavior makes it not even simple by design.
What? UB is the simplest thing you can do when you just don't want to specify behavior. Any specified behavior can't be simpler than unspecified behavior, because that's comparing something against nothing.
Ferrocene has donated their specification to the project, so there absolutely is a specification now. What you can argue is that the memory model isn't fully defined, but it's almost certainly going to land somewhere around stacked borrows or tree borrows. Arguably C doesn't fare much better in that regard, though, as it doesn't even properly define its pointer provenance model either, and Rust is much closer to defining its own.
Note that, in compiler lingo, unspecified and undefined are two different things. C++ is specified to death, but full of undefined behavior (and also some unspecified behavior).
Rust is largely not specified, but aims to have no undefined behavior (outside of unsafe blocks).
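To make that distinction concrete, here is a small sketch: two operations that are undefined behavior in C (signed integer overflow and an out-of-bounds read) have fully specified outcomes in safe Rust.

```rust
fn main() {
    // Signed overflow is UB in C. In Rust, the checked form returns None,
    // and the wrapping form is defined two's-complement wraparound.
    assert_eq!(i32::MAX.checked_add(1), None);
    assert_eq!(i32::MAX.wrapping_add(1), i32::MIN);

    // An out-of-bounds read is UB in C. Safe Rust defines it as either a
    // panic (the `[]` indexing operator) or a None (the `get` accessor).
    let v = [10, 20, 30];
    assert_eq!(v.get(7), None);
}
```

None of these outcomes depends on what the optimizer chooses to do, which is exactly the property a specification would pin down.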
I am aware, but without a spec we don't know which is which. You can't say it has no undefined behavior, because what tends to happen is that you try to specify everything and find gaps or challenges.
In C, undefined is used primarily when there is no reliable and efficient mechanism for detecting that a problem is happening. For example, a C implementation may check every single invalid pointer deref, but more realistically it only detects extreme out-of-range accesses. So what happens is undefined.
That being said, at least in C++, undefined has been used largely as a joker for compiler optimizations. In Rust, if my memory serves, having the same code produce different results depending on the optimization level would be considered a pretty serious bug. In C++, it's par for the course.
I was going to roll my eyes at "Rust is a tire fire of complexity". Because it's not. Especially compared to C++. But then you just go on to outright lie in your second paragraph.
Dear Rust haters, lying about Rust in the Linux kernel is not effective for your cause, and in fact just makes it look even more like you're throwing a tantrum. Downvoting me doesn't change the fact that more and more Rust is merged into the kernel, and new, serious drivers are being written in Rust. It also doesn't change the fact that Firefox, Chrome, Microsoft, the US Government and others are recommending and writing new code in Rust. It's over, qq. It's absurd.
I really wish I could find the Lobsters comment the other day from someone that broke down the incredible list of nuanced, spec-level detail you needed to know about C++ to actually use it at scale in large projects. It's laughably, absurdly complex compared to Rust in huge code bases.
What's the point of trying to introduce Rust everywhere? Git is a mature piece of software and I doubt a lot of new code needs to be written. Also, Rust is very complex relative to C. If you really need classes, templates, etc, you can stick to C++ 98 and get something that is still clean and understandable relative to recent C++ standards and Rust.
I bet someone could have easily said the same thing a year ago or even 5 years ago. It's easy to forget the progress and there are a lot of things happening under the hood that are not obvious to casual users. Just take a look at the git log of the git repository itself. It has a steady rate of close to 100 commits per week for the past year.
That says nothing of potential new features either. There is so much more to unlock in the VCS space; look at new tools like jj for example. Additionally, the security landscape is getting more aggressive every day, and keeping up with and being proactive against vulnerabilities will be needed even more than today.
Is this a bit of chickens coming home to roost as far as developer culture forgetting how to work with cross-compiling toolchains? When I started my career, it was common understanding that the developer may be manipulating sourcecode on a different system and/or platform than where it will be executed.
Our source control, editing, compilation, and execution was understood to happen in different computational spaces, with possible copy/staging steps in between. You were doing something very naive if you assumed you could execute the built program on the same system where the sourcecode files existed and the editor/IDE was running.
This was a significant fraction of the build rules we used to manage. E.g. configuration steps had to understand that the target platform being measured/characterized is not the same as the platform executing the build tools. And to actually execute a built object may require remote file copies and remote program invocation.
Actually, the Rust toolchain makes cross-compiling way easier than any other fully-compiled language I've ever used. There are like 100 different platforms you can target by just setting the `--target` flag, and they all pretty much just work on any host platform.
Sounds like the real issue is that some Git developers have ancient, rigid requirements for their own development machines.
The way Zig solves this problem "better" than Rust is by shipping the target libraries as part of its distribution and building them on demand. It makes for a really excellent cross-building experience.
Rust might have a harder time if it wanted a corresponding feature, because it doesn't natively build C like Zig does (using libclang). Either it would have to start using libclang or ship Rust re-implementations of the C library. AFAIK it's impossible to reimplement the C++ standard library in Rust, though.
That has not been my experience. I develop on Windows and need to compile for Linux. After spending several hours trying to get cross-compilation working, I gave up and do it via WSL now.
I switched from Go and I feel like Go was much better at this than Rust.
(I tried “cross” but it was very slow and I found it faster to rsync the files inside the container and then run the build scripts)
I'd bet the difference is that Go has a default assumption that everything is reimplemented in Go and calling C is awkward and slow, meanwhile lots of low-level Rust libraries are actually just type-safety wrappers over C libraries.
But, my point is you shouldn't even have to cross-compile Git to a platform like NonStop in order to develop NonStop apps. So the portability of Rust shouldn't even matter here. The app developer should be able to run their Git commands on a supported platform and cross-compile their own app to NonStop.
I haven't double checked, but my recollection of that story was that they were using Git as part of the operations at runtime, not (just) as a development dependency.
I suspect the majority of developers never even learnt as such. Cross-compilation is almost always a second-class citizen and I never expect it to work correctly on an external project. Linux distros have given up, with fedora even insisting on running compilation on the real target hardware for platforms like the raspberry pi, which is kind of insane, and as a result basically no-one puts in the effort to make it work.
> Is this a bit of chickens coming home to roost as far as developer culture forgetting how to work with cross-compiling toolchains?
I don't understand your comment. Completely ignoring Rust, the modern state of cross-compilation is an unmitigated disaster.
Linux is especially bad because glibc is a badly architected pile of garbage stuck in the 80s. It should be trivially possible to target any minimum glibc version for any possible Linux hardware environment, but glibc and Linux distros don't even attempt to make this possible. Linux toolchains make it nearly impossible not to use the default system libraries, which is the opposite of correct for cross-compiling.
Zig moves mountains to make cross-compiling possible. But almost no projects actually attempt to support cross-compile.
You mostly understand my comment, but not my graybeard perspective.
The modern disaster is exactly that developer culture has forgotten how to do this for the most part.
But, you're focusing on Rust compiling when I don't think it is relevant. If those weird financial platform developers were aware of cross-compiling, they wouldn't think that a developer tool like Git has to be built to run on the target financial server platform. They would be capable of cross-compiling or otherwise staging their build into that platform while still using Git on a supported workstation platform to manage the sources.
Definitely agree the world has utterly lost the principle of cross-compiling. Support for cross-compile really should be a first-class and sacrosanct principle.
There's at least one proprietary platform that supports Git, built via a vendor-provided C compiler, but for which no public documentation exists and therefore no LLVM support is possible.
Shouldn't these platforms work on getting Rust to support it rather than have our tools limited by what they can consume? https://github.com/Rust-GCC/gccrs
A maintainer for that specific platform was more into the line of thinking that Git should bend over backwards to support them because "loss of support could have societal impact [...] Leaving debit or credit card authorizers without a supported git would be, let's say, "bad"."
To me it looks like big corps enjoying the idea of having free service so they can avoid maintaining their own stuff, and trying the "too big to fail" fiddle on open source maintainers, with little effect.
It's additionally ridiculous because git is a code management tool. Maybe they are using it for something much wilder than that (why?), but I assume this is mostly just a complaint that they can't do `git pull` from the wonky architecture they are building on. They could literally have a network mount and manage the git repository externally if they still need it.
It's not like older versions of git won't work perfectly fine. Git has great backwards compatibility. And if there is a break, seems like a good opportunity for them to fork and fix the break.
And let's be perfectly clear: these are very often systems built on top of a mountain of open source software. These companies will even have custom-patched tools like gcc that they aren't willing to upstream because some manager decided they couldn't just give away the code they paid an engineer to write. I may feel bad for the situation it puts the engineers in, but I feel absolutely no remorse for the companies, because their greed put them in these situations in the first place.
Yes. It benefits them to have ubiquitous tools supported on their system. The vendors should put in the work to make that possible.
I don’t maintain any tools as popular as git or you’d know me by name, but darned if I’m going to put in more than about 2 minutes per year supporting non-Unix.
(This said as someone who was once paid to improve Ansible’s AIX support for an employer. Life’s too short to do that nonsense for free.)
As you're someone very familiar with Ansible, what are your thoughts on it in regards to IBM's imminent complete absorption of RedHat? I can't imagine Ansible, or any other RedHat product, doing well with that.
I wouldn’t say I’m very familiar. I don’t use it extensively anymore, and not at all at work. But in general, I can’t imagine a way in which IBM’s own corporate culture could contribute positively to any FOSS projects if they removed the RedHat veneer. Not saying it’s impossible, just that my imagination is more limited than the idea requires.
IBM has been, and still is, a big contributor to a bunch of Eclipse projects, as their own tools build on those.
The people there were really skilled, friendly, and professional.
Different divisions and departments can have huge differences in culture and priorities, obviously, but “IBM” doesn’t automatically mean bad for OSS projects.
On the other hand: why should the entire open-source world screech to a halt just because some new development is incompatible with the ecosystem of a proprietary niche system developed by a billion-dollar freeloader?
HPE NonStop doesn't need to do anything with Rust, and nobody is forcing them to. They have voluntarily chosen to use an obscure proprietary toolchain instead of contributing to GCC or LLVM like everyone else: they could have gotten Rust support for free, but they believed staying proprietary was more important.
Then they chose to make a third-party project (Git) a crucial part of that ecosystem, without contributing time and effort into maintaining it. It's open source, so this is perfectly fine to do. On the other hand, it also means they don't get a say in how the project is developed, and what direction it will take in the future. But hey, they believed saving a few bucks was more important.
And now it has blown up in their face, and they are trying to control the direction the third-party project is heading by playing the "mission-critical infrastructure" card and claiming that the needs of their handful of users is more important than the millions of non-HPE users.
Right now there are three options available to HPE NonStop users:
1. Fork git. Don't like the direction it is heading? Then just do it yourself. Cheapest option short-term, but it of course requires investing serious developer effort long-term to stay up-to-date, rather than just sending the occasional patch upstream.
2. Port GCC / LLVM. That's usually the direction obscure platforms go. You bite the bullet once, but get to reap the benefits afterwards. From the perspective of the open-source community, if your platform doesn't have GCC support it might as well not exist. If you want to keep freeloading off of it, it's best to stop fighting this part. However, it requires investing developer effort - especially when you want to maintain a proprietary fork due to Business Reasons rather than upstreaming your changes like everyone else.
3. Write your own proprietary snowflake Rust compiler. You get to keep full control, but it'll require a significant developer effort. And you have to "muck around" with Rust, of course.
HPE NonStop and its ecosystem can do whatever it wants, but it doesn't get to make demands just because their myopic short-term business vision suddenly leaves them having to spend effort on maintaining it. This time it is caused by Git adopting Rust, but it will happen again. Next week it'll be something like libxml or openssl or ssh or who-knows-what. Either accept that breakage is inevitable when depending on third-party components, or invest time into staying compatible with the ecosystem.
> Everything should use one compiler, one run-time and one package manager.
If you think that calling out closed C compilers is somehow an argument for a single toolchain for all things, I doubt there's anything I can do to help educate you about why this isn't the case. If you do understand and are choosing to purposely misinterpret what I said, there are a lot of much stronger arguments you could make to support your point than that.
Even ignoring all of that, there's a much larger point that you've kind of glossed over here by:
> The shitheads who insist on using alternative compilers and platforms don't deserve tools
There's frequently discussion around the expectations between open source project maintainers and users, and in the same way that users are under no obligation to provide compensation for projects they use, projects don't have any obligation to provide support indefinitely for any arbitrary set of circumstances, even if they happen to for a while. Maintainers sometimes will make decisions weighing tradeoffs between supporting a minority of users or making a technical change they feel will help them maintain the project better in the long term differently than the users will. It's totally valid to criticize those decisions on technical grounds, but it's worth recognizing that these types of choices are inevitable, and there's nothing specific about C or Rust that will change that in the long run. Even with a single programming language within a single platform, the choice of what features to implement or not implement could make or break whether a tool works for someone's specific use case. At the end of the day, there's a finite amount of work people spend on a given project, and there needs to be a decision about what to spend it on.
For various libs, you provide a way to build without it. If it's not auto-detected, or explicitly disabled via the configure command line, then don't try to use it. Then whatever depends on it just doesn't work. If for some insane reason git integrates XML and uses libxml for some feature, let it build without the feature for someone who doesn't want to provide libxml.
> At the end of the day, there's a finite amount of work people spend on a given project
Integrating Rust shows you have too much time on your hands; the people who are affected by that, not necessarily so.
> Integrating Rust shows you have too much time on your hands; the people who are affected by that, not necessarily so.
As cited elsewhere in this thread, the person making this proposal on the mailing list has been involved in significant contributions to git in the past, so I'd be inclined to trust their judgment about whether it's a worthwhile use of their time in the absence of evidence to the contrary. If you have something that would indicate this proposal was made in bad faith, I'd certainly be interested to see it, but otherwise, I don't see how you can make this claim other than as your own subjective opinion. That's fine, but I can't say I'm shocked that the people actually making the decisions on how to maintain git don't find it convincing.
Rust has an experimental C backend of its own as part of rustc_codegen_clr https://github.com/FractalFir/rustc_codegen_clr . Would probably work better than trying to transpile C from general LLVM IR.
Given that the maintainer previously said they had tried to pay to get GCC and LLVM ported multiple times, all of which failed, money doesn’t seem to have helped.
> There's at least one proprietary platform that supports Git, built via a vendor-provided C compiler, but for which no public documentation exists and therefore no LLVM support is possible.
That's fine. The only impact is that they won't be able to use the latest and greatest release of Git.
Once those platforms work on their support for Rust they will be able to jump back to the latest and greatest.
It's sad to see people be so nonchalant about potentially killing off smaller platforms like this. As more barriers to entry are added, competition is going to decrease, and the software ecosystem is going to keep getting worse. First you need a libc, now you need a libc and Rust, ...
But no doubt it's a great way for the big companies funding Rust development to undermine smaller players...
It's kind of funny to see f-ing HPE with 60k employees somehow being labeled as the poor underdog that should be supported by the open-source community for free and can't be expected to take care of software running on their premium hardware for banks etc by themselves.
I think you misread my comment because I didn't say anything like that.
In any case, HPE may have 60k employees, but NonStop itself is still a smaller platform.
It actually demonstrates the point I was making. If a company with 60k employees can't keep up then what chance do startups and smaller companies have?
HP made nearly $60b last year. They can fund the development of the tools they need for their 50 year old system that apparently powers lots of financial institutions. It's absurd to blame volunteer developers for not wanting to bend over backwards, just to ensure these institutions have the absolute latest git release, which they certainly do not need.
Oh they absolutely can, they just choose not to. To just make some tools work again there's also many slightly odd workarounds one could choose over porting the Rust compiler.
> It's sad to see people be so nonchalant about potentially killing off smaller platforms like this.
Your comment is needlessly dramatic. The only hypothetical impact this has is that whoever uses these platforms won't have upgrades until they do something about it, and the latest and greatest releases will only run if the companies behind these platforms invest in their maintenance.
This is not a good enough reason to prevent the whole world from benefiting from better tooling. This is not a lowest common denominator thing. Those platforms went out of their way to lag in interoperability, and this is the natural consequence of those decisions.
Seriously, I guess they just have to live without git if they're not willing to take on support for its tool chain. Nobody cares about NonStop but the very small number of people who use it... who are, by the way, very well capable of paying for it.
I strongly agree. I read some of the counterarguments, like this will make it too hard for NonStop devs to use git, and maybe make them not use it at all. Those don't resonate with me at all. So what? What value does them using git provide to the git developers? I couldn't care less if NonStop devs can use my own software at all. And since they're exclusively at giant, well-financed corporations, they can crack open that wallet and pay someone to do the hard work if it means that much to them.
"You have to backport security fixes for your own tiny platform because your build environment doesn't support our codebase or make your build environment support our codebase" seems like a 100% reasonable stance to me
> your build environment doesn't support our codebase
If that is due to the build environment deviating from the standard, then I agree with you. However, when it's due to the codebase deviating from the standard, why blame the build environment developers for expecting codebases to adhere to standards? That's the whole point of standards.
Is there a standard that all software must be developed in ANSI C that I missed, or something? The git developers are saying - we want to use Rust because we think it will save us development effort. NonStop people are saying we can't run this on our platform. It seems to me someone at git made the calculus: the amount that NonStop is contributing is less than what we save going to Rust. Unless NonStop has a support contract with git developers that they would be violating, it seems to me the NonStop people want to have their cake and eat it too.
An important point of using C is to write software that adheres to a decades old very widespread standard. Of course developers are free to not do that, but any tiny bit of Rust in the core or even in popular optional code amounts to the same as not using C at all, i.e. only using Rust, as far as portability is concerned.
If your codebase used to conform to a standard and the build environment relies on that standard, and now your codebase doesn't anymore, then it's not the build environment that deviates from the standard; it's the codebase that breaks it.
They enjoy being portable and like things to stay that way, so when they introduce a new toolchain dependency which will make it harder for some people to compile Git, they point it out in their changelog?
I am curious, does anyone know what is the use case that mandates the use of git on NonStop? Do people actually commit code from this platform? Seems wild.
because the rust compiler just doesn't support some platforms (os / architecture combination)?
RESF members tend to say it the other way around as in the platform doesn't support rust, but the reality is that it's the compiler that needs to support a platform, not the other way around.
Rust can't support a platform when that platform's vendors just provide a proprietary C compiler and nothing else (no LLVM, no GCC). Perhaps someone could reverse-engineer it, but ultimately a platform with zero support from any FOSS toolchain is unlikely to get Rust support anytime soon.
Furthermore, how could it, without the donation of hardware, licenses and so forth?! This is a problem entirely of the proprietary platform's making, and it should be their customers' problem for having made a poor decision.
HPE's customers are big-pocketed enough that they absolutely could manage a Rust port themselves, or pay HPE however much money they need to get them to do it if they're going to play games with ABI documentation. NonStop isn't some kind of weird hobbyist or retrocomputing platform.
Actually, I'm surprised HPE doesn't already ship a Rust fork, given how NonStop is supposed to be a "reliable" OS...
It's unclear what point you're trying to make here.
Proprietary platforms with proprietary-only toolchains are bad, for a wide variety of reasons. Open toolchains are a good thing, for many reasons, including that they can support many different programming languages.
Me, who wrote a long, long comment and then accidentally hit the close-tab shortcut.
!!@@!
I would recommend building some packages containing Rust, especially on older hardware - and then realizing that, because of static linking, you will need to rebuild very, very often - and don't forget that you are building clean.
Because it is expected that you will use the required shared libraries to make life easier.
I think that Rust people should maybe sometimes just consider that Rust, if pushed in such a way, will be more hated than C.
Maybe you should not try to deflect criticism about stable ABIs and shared libraries - Linux OSes REQUIRE them - nobody will change their OS architecture because you want it.
And maybe we should be more conservative architecturally, especially in the most critical pieces of software.
It earns Rust hate from many people.
And once someone hates language it sticks.
Also add to that the Rust zealots who sometimes behave like political preachers.
"We are future, you are backwards" - says every ideologue.
But conveniently does not say "in direction I want".
When rust started political fight instead of language one they should expect that every rust porting will become political quagmire.
Also you are incorrect - because you are already making a wrong assumption:
> crates are not libraries
"never provided a shared library interface" - it doesn't need to, it just need to USE library - distros will convert static one to shared one if that what is reasonable.
Now we have a C library connected by C headers to a (future) Rust application. Sure, this somehow works - at the cost of memory safety. So someone WILL suggest using a Rust crate instead of the C library, and the problem will inevitably pop up.
You could only say it works correctly as the platform stipulates if you did not use any Rust crates, or used only ones that your app/lib alone uses, or trivial finished ones - and I do not see people using Rust like that. Even then, from most Linux distributions' perspective, it is the distribution's job to decide whether linking should be static or shared, NOT the app developer's.
SSL is a prime example of something that would be best written in a memory-safe language, with safe headers, provided that language offers stable ABI connections, so we can ship a 0-day fix without waiting for the app developer.
Rust fails spectacularly at that last point unless the library uses C headers.
But at least it seems that OpenSSL is dynamically loaded after start so they are not changing that too soon.
When I decide to patch some library for my use case, I may want that patched library used in every instance, in every program on the system. A Rust crate makes this impossible - now I need to rebuild everything, even where the same patch in C code could not reasonably have touched the ABI boundary.
Ultimately I think many Linux Rust critics see it, correctly, as a company-first/app-centered/containerized language that is not for development-aware users (i.e. users who can patch software for their specific needs and actively want to inspect every dependency in ONE way), and they prefer the known pro-community/pro-distro/pro-user-developer C/C++ paradigm instead.
(At least the fact that much of the criticism starts immediately when a GPL project gets a BSD-licensed Rust rewrite does point to the free-software/open-source, i.e. pro-community/pro-company, schism)
Many Linux users, especially development-aware users, have just had enough of pip, cargo and all the 'modern' stuff - they just want good old apt or pacman.
Then you have people who think slow development and no revolutionary changes should be IT priorities in modern times.
Then you have people who believe that any alternative should be better, easier and simpler than the old stuff before it is treated as even an alternative.
Thanks for the specifics, really fascinating list! I'm sure I'm being a bit flippant, but it's pretty funny that a list including the Playstation 1, N64, and Apple Watches is in the same conversation as systems that need to compile git from source.
Anyone know of anything on that list with more than a thousand SWE-coded users? Presumably there's at least one or two for those in the know?
What I like about seeing a project support a long list of totally irrelevant old obscure platforms (like Free Pascal does, and probably GCC) is that it gives some hope that they will support some future obscure platform that I may care about. It shows a sign of good engineering culture. If a project supports only 64-bit arm+x86 on the three currently most popular operating systems that is a red flag for future compatibility risks.
The problem is that "support" usually isn't quite the right word. In practice for obscure platforms it is often closer to "isn't known to be horribly broken". Rust at least states this explicitly with their Tier 1/2/3 system, but the same will apply to every project.
Platform support needs to be maintained. There is no way around that. Any change in the codebase has the possibility of introducing subtle platform-specific bugs. When platform support means that some teenager a decade ago got it to compile during the summer holiday and upstreamed her patches, that's not worth a lot. Proper platform support means having people actively contributing to the codebase, regularly running test suites, and making sure that the project stays functional on that platform.
On top of this, it's important to remember that platform support isn't free either. Those platform-specific patches and workarounds can and will hold back development for all the other platforms. And if a platform doesn't have a maintainer willing to contribute to keeping those up-to-date, it probably also doesn't have a developer who's doing the basic testing and bug fixing, so its support is broken anyways.
In the end, is it really such a big deal to scrap support for something which is already broken and unlikely to ever be fixed? At a certain point you're just lying to yourself about the platform being supported - isn't it better to accept reality and formally deprecate it?
In theory I agree with you, and code written in a platform-agnostic way is definitely something we should strive for, but in practice: can keeping broken code around really be called "good engineering culture"?
I don't think the concern is whether a user can compile git from source on said platform, but rather whether the rust standard lib is well supported on said platform, which is required for cross compiling.
Rust doesn't run on all of their platforms so this is a good example of where git may not be viable for OpenBSD long-term (if they were to switch from CVS one day, which is a big IF)
You’re chasing after the meaning of “impossible.” Easy. There are two categories of developers:
> I like programming
> I program to make money
If you belong to the second category - I’m going to be super charitable, it sounds like I’m not going to be charitable and I am, so keep reading - such as by being paid by a giant bank to make applications on Nonstop, there might be some policy that’s like
“You have to vet all open source code that runs on the computer.”
So in order to have Rust on NonStop to build git, which this guy likes, he’d need to port LLVM, which isn’t impossible. What’s impossible is to get the LLVM code reviewed by legal, or whatever, which they’re not going to do; they’re going to say “No. No LLVM. HP, who makes NonStop, can do it, and it can be their legal problem.”
I’m not saying it’s impossible. The other guy is saying it’s impossible, and I’m trying to show how, in a Rube Goldberg way, it looks impossible to him.
You and I like programming, and I’m sure we’re both gainfully employed, though probably not making as much money as that guy, but he doesn’t like programming. You are allowed to mock someone’s sincerity if they’re part of a system that’s sort of nakedly about making lots of money. But if you just like programming, you’d never work for a bank, it’s really fucking boring, so basically nobody who likes programming would ever say porting Rust or whatever is impossible. Do you see?
It’s tough because, the Jane Street people and the Two Sigma people, they’re literally kids, they’re nice people, and they haven’t been there for very long, they still like programming! They feel like they need to mook for the bank, when they could just say that living in New York and having cocktails every night is fun and sincere. So this forum has the same problem as the mailing list, where it sounds like it’s about one thing - being able to use fucking hashmaps in git - and it’s really about another - bankers. Everywhere they turn, the bankers run into people who make their lifestyle possible, whether it’s the git developers who volunteer their time or the parents of the baristas at the bars they’re going to paying the baristas’ rent - and the bankers keep hating on these people. And then they go and say, well everyone is the problem but me. They don’t get it yet.
Rust is generally a much better tool for building software than C. When your software is built with better tools, you will most likely get better software (at least eventually / long term, sometimes a transition period can be temporarily worse or at least not better).
I'm not sure exactly what you mean but of course people are facing implementation deficiencies in Git. Last I checked submodules were still "experimental" and extremely buggy, and don't work at all with worktrees. (And yeah submodules suck but sometimes I don't have a choice.)
Your reply seems to imply that using rust would make submodules better. Since that's not the case, maybe you can provide an alternative where rust would address an actual issue git users have.
If we're talking about feelings, I find it "not likely" unless, perhaps, as a side effect of rethinking the whole feature altogether. Or do you have some actual indicators that the issues with how submodules are likely to break your work directory are related to problems that Rust avoids?
Yes I do. Rust's strong type system makes logic bugs less likely, because you can encode more invariants into the type system.
This also makes it easier to refactor and add features without risk of breaking things.
The borrow checker also encourages ownership structures that are less error-prone.
Finally the more modern tooling makes it easier to write tests.
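A minimal sketch of the "encode invariants in the type system" point (all names here are hypothetical, not Git's actual code): a newtype that can only be constructed through a validating function, so every value of that type is known to satisfy the invariant, where C would typically rely on runtime checks or convention.

```rust
// Hypothetical sketch: a branch name that is guaranteed non-empty
// and whitespace-free, because the only way to construct one is
// through the validating `new` function.
#[derive(Debug, Clone, PartialEq)]
struct BranchName(String);

impl BranchName {
    fn new(raw: &str) -> Result<Self, String> {
        if raw.is_empty() || raw.contains(char::is_whitespace) {
            return Err(format!("invalid branch name: {raw:?}"));
        }
        Ok(BranchName(raw.to_string()))
    }
}

// Any function taking `BranchName` can rely on the invariant;
// there is no code path that produces an unvalidated value.
fn delete_branch(name: &BranchName) -> String {
    format!("deleted {}", name.0)
}

fn main() {
    let ok = BranchName::new("feature/rust").expect("valid name");
    println!("{}", delete_branch(&ok));
    assert!(BranchName::new("bad name").is_err());
}
```

Refactoring then becomes safer: if the invariant changes, it changes in one place, and every caller is rechecked by the compiler.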
If you're thinking "where is the peer reviewed study that proves this?" then there isn't one, because it's virtually impossible to prove even simple things like that comments are useful. I doubt there's even a study showing that e.g. it's easier to write Python than assembly (although that one probably isn't too hard to prove).
That doesn't mean you get to dismiss everything you disagree with simply because it hasn't been scientifically proven.
The things I'm talking about have been noted many times by many people.
OK, but I'm not convinced for this specific case. And it wouldn't take a peer reviewed study to convince me. Issues in the git submodules handling that you could link to C's lack of safety would suffice.
However what you're doing is to reply with the same platitudes and generalities that all rust aficionados seem to have ready on demand. Sure, rust is better at those things, but I don't see how that would make a rewrite of an existing feature better by default. I don't doubt that new features of git that would be written in rust will be safer and more ergonomic, but for existing code to be rewritten, which is what I understand to be your stance, I remain skeptical.
You missed "IMO". We get it, you love Rust and/or hate C, and if so, I wonder why. Try Ada + SPARK though if you really want REAL safety. Its track record speaks for itself.
The developers of git will continue to be motivated to contribute to it. (This isn’t specific to Rust, but rather the technical choices of OSS probably aren’t generally putting the user at the top of the priority list.)
And the reason this is a problem is because of the me-first attitude of language developers these days. It feels like every language nowadays feels the need to implement its own package manager. These package managers then encourage pinning dependencies, which encourages library authors to be less careful about API stability (though obviously this varies from library to library) and makes it hard on distro maintainers to make all the packages work together. It also encourages program authors to use more libraries, as we see in the Javascript world with NPM, but also in the Rust world.
Now, Rust in Git and Linux probably won't head in these directions, so Debian might actually be able to support these two in particular, but the general attitude of Rustaceans toward libraries is really off-putting to me.
IMHO the reason is that these languages are industry-funded efforts. And they are not funded to help the free software community. Step-by-step this reshapes the open-source world to serve other interests.
Semantic versioning is culturally widespread in Rust, so the problem of library authors being "less careful about API stability" rarely happens in practice. If pinned packages were the problem, I'd imagine they would have been called out as such in the Debian page linked by parent.
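For context, this is what version requirements look like in a Cargo manifest (crate names hypothetical); the default is a semver "caret" requirement rather than an exact pin:

```toml
[dependencies]
# Default "caret" requirement: accepts any semver-compatible upgrade,
# i.e. >=1.4.0 and <2.0.0 here, so bugfix and feature releases flow
# in via `cargo update`.
somelib = "1.4"
# Exact pin: opt-in, used only when you deliberately freeze a version.
otherlib = "=2.0.1"
```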
Semantic versioning is a way to communicate how much a new version breaks shit, not a way to encourage not breaking shit. If anything, having a standardized way to communicate that you are breaking shit kind of implies that you are already planning to break shit often enough for that to make sense.
While we are on Hacker News, this is still an enormously obtuse way to communicate.
Are you saying that as users of git we will be negatively affected by deps being added and build times going up? Do you have evidence of that from past projects adding rust?
We will see how larger the binary will become, we will see how many more (if any) shared libraries it will depend on, and we will see how long it will take to compile.
Clear enough for you? It is a note to myself, and for others who care. You might not care, I do, and some other people do, too.
I would safely bet that the pool of C developers willing to work on a C Git going forward is much closer to exhaustion than the pool of Rust developers willing to work on a Rust(-ish) Git.
Well it would probably take at least 5 years to rewrite all of Git in Rust (gitoxide is 5 years old and far from finished). Then another few years to see novel features, then a year or two to actually get the release.
Btw 10 lines of code per day is a typical velocity for full time work, given it's volunteers 1 line per day might not be as crazy as you think.
It's not a "test balloon" if you have a plan to mandate it and will be announcing that. Unless, I suppose, enough backlash would cause you to cancel the plan.
It's literally a test of how people will react, so yes, finding out if people will react negatively would be exactly the point of doing the test in the first place. Would you prefer that they don't publicize what their follow-up plans would be to try to make it harder to criticize the plans? If you're against the plan, I'm pretty sure that's the exact type of feedback they're looking for, so it would make more sense to tell them that directly if it actually affects you rather than making a passive-aggressive comment they'll likely never read on an unrelated forum.
What's there to test? It was obvious that the reaction would be overwhelmingly negative, so that's definitely not something they would care about. What else?
Is the reaction overwhelmingly negative? I haven’t read all of the emails but they seemed basically neutral or positive to me. Could you link me to some extremely negative ones, I’m curious.
Ah, so the people whose opinions they care about is going to be git contributors, not random Twitter users (some of whom can literally make money from outrage farming). The folks who actually do the work.
If they’re running the project with a Linus-type approach, they won’t consider backlash to be interesting or relevant, unless it is accompanied by specific statements of impact. Generic examples for any language to explain why:
> How dare you! I’m going to boycott git!!
Self-identified as irrelevant (objector will not be using git); no reply necessary, expect a permaban.
> I don’t want to install language X to build and run git.
Most users do not build git from source. Since no case is made why this is relevant beyond personal preference, it will likely be ignored.
> Adopting language X might inhibit community participation.
This argument has almost certainly already been considered. Without a specific reason beyond the possibility, such unsupported objections will not lead to new considerations, especially if raised by someone who is not a regular contributor.
> Language X isn’t fully-featured on platform Y.
Response will depend on whether the Git project decides to support platform Y or not, whether the missing features are likely to affect Git users, etc. Since no case is provided about platform Y’s usage, it’ll be up to the Git team to investigate (or not) before deciding.
> Language X will prevent Git from being deployed on platform Z, which affects W installations based on telemetry and recent package downloads, due to incompatibility Y.
This would be guaranteed to be evaluated, but the outcome could be anywhere from “X will be dropped” to “Y will be patched” to “Z will not be supported”.
If you're looking for reasons to ignore criticism like this then you were never interested in anything other than an affirmative nod and pat on the back in the first place.
I suggest waiting till the GCC side matures, with at minimum a working GCC frontend, before making Rust a non-optional dependency. Optional dependencies with rustc_codegen_gcc might be okay. Git is pretty core to a lot of projects, and this change is risky; it's on a fairly short time frame to make it a core dep (6 months?).
Does anyone with insight into Git development know if we should care about this? Is this just a proposal out of nowhere from some rando or is this an idea that a good portion of Git contributors have wanted?
You can perhaps learn more about their involvement in the community from this year’s summit panel interview: https://youtu.be/vKsOFHNSb4Q
In a brief search, they’re engineering manager for GitLab, appear to be a frequent contributor of high-difficulty patches to Git in general, and are listed as a possible mentor for new contributors.
Given the recent summit, it seems likely that this plan was discussed there; I hadn’t dug into that possibility further but you could if desired.
Looking at the comment thread, at least one person I recognize as a core maintainer seems to be acting as if this is an official plan that they've already agreed on the outline of, if not the exact timing. And they seem to acknowledge that this breaks some of the more obscure platforms out there.
Interesting! I'd certainly say that's worth something. Definitely didn't expect it though given how poorly some people have reacted to Rust being introduced as an optional part of the Linux kernel.
It's a lot more understandable for developer tooling like Git to more quickly adopt newer system requirements. Something like the Linux kernel needs to be conservative because it's part of many people's bootstrapping process.
rustc_codegen_gcc is close to becoming stable, and conversely the Linux kernel is dropping more esoteric architectures. Once the supported sets of architectures fully overlap, and once the Linux kernel no longer needs unstable (nightly-only) Rust features, it'd be more reasonable for Linux to depend on Rust for more than just optional drivers.
I would also say that it’s a lot easier to learn to write rust when you’re writing something that runs sequentially on a single core in userspace as opposed to something like the Linux kernel. Having dipped my toes in rust that seems very approachable. When you start doing async concurrency is when the learning curve becomes steep.
Those footguns still exist in C, they’re just invisible bugs in your code. The Rust compiler is correct to point them out as bad architecture, even if it’s annoying to keep fighting the compiler.
You could read "Rust will become mandatory" as "all contributors will need to be able to code Rust" or even "all new code has to be written in Rust" or similar variations
I see. No, I understood it the way it is, as introducing it as a new hard dependency in git 3. I suppose it is a pilot for making it mandatory for contributions / incrementally replacing the existing code in the future, though.
Git is pretty modular, and it already includes multiple languages. I guess that significant parts of it will remain in C for a long time, including incremental improvements to those parts. Though it wouldn't surprise me if some parts of git did become all-Rust over time.
My last company used Jenkins, so our build infrastructure depended on Java. We used zero code outside of supporting Jenkins. So Java was required to build our stuff, but not to write or run it.
Edit: nope, I’m wrong. On reading the link, they’re setting up the build infrastructure to support Rust in the Git code itself.
One argument from the git devs is that it’s very hard to implement smarter algorithms in C, though. For example, it uses arrays in places where a higher level language would use a hash, because the C version of that is harder to write, maintain, and debug. It’s also much easier to write correct threaded code in Rust than C. Between those 2 alone, using a more robust language could make it straightforward to add performance gains that benefit everyone.
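As an illustration of the "hash instead of arrays" point (this is not Git's actual code, just a toy example): counting occurrences is a few lines with Rust's standard-library hash map, whereas an equivalent C version needs a hand-rolled table or an external library.

```rust
use std::collections::HashMap;

// Count how often each path appears in a (hypothetical) list of
// changed files. The standard-library HashMap handles hashing,
// resizing, and collision handling for us.
fn count_paths<'a>(paths: &[&'a str]) -> HashMap<&'a str, u32> {
    let mut counts = HashMap::new();
    for &path in paths {
        *counts.entry(path).or_insert(0) += 1;
    }
    counts
}

fn main() {
    let counts = count_paths(&["src/main.c", "README.md", "src/main.c"]);
    assert_eq!(counts["src/main.c"], 2);
    println!("{counts:?}");
}
```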
That's a one time gain though. There's no reason for every platform to check the validity of some hash table implementation when that implementation is identical on all of them.
In my opinion, the verification of the implementation should be separate from the task of translating that implementation to bytecode. This leaves you with a simple compiler that is easy to implement but still with a strong verifier that is harder to implement, but optional.
Nobody needs to change a language standard for 9 lines of code. When you really want to use a hash map, it's likely that you care about performance, so you don't want to use a generic implementation anyway.
> or a at least a community consensus about which one you pick
There is a hash table API in POSIX:
GNU libc: https://sourceware.org/glibc/manual/latest/html_node/Hash-Search-Function.html
Linux hsearch(3): https://man7.org/linux/man-pages/man3/hsearch.3.html
hsearch(3posix): https://www.man7.org/linux/man-pages/man3/hcreate.3p.html
And who’s volunteering for that verification using the existing toolchain? I don’t think that’s been overlooked just because the git devs are too dumb or lazy or unmotivated.
That came across more harshly than I meant, but I stand by the gist of it: this stuff is too hard to do in C or someone would’ve done it. It can be done, clearly, but there’s not the return on investment in this specific use case. But with better tooling, and more ergonomic languages, those are achievable goals by a larger pool of devs — if not today, because Rust isn’t as common as C yet, then soon.
As a practical example, the latest Git version can be compiled by an extremely simple (8K lines of C) C compiler[1] without modification and pass the entire test suite. Gonna miss the ability to make this claim.
In theory you should be able to use TCC to build git currently [1] [2]. If you have a lightweight system or you're building something experimental, it's a lot easier to get TCC up and running over GCC. I note that it supports arm, arm64, i386, riscv64 and x86_64.
The nature of considering the future is that our actions _now_ affect the answer _then_. If we tie our foundational tools to LLVM, then it's very unlikely a new platform can exist without support for it. If we don't tie ourselves to it, then it's more likely we can exist without it. It's not a matter of if LLVM will be supported. We ensure that by making it impossible not to be the case. It's a self-fulfilling prophecy.
I prefer to ask another question: "Is this useful?" Would it be useful, if we were to spin up a new platform in the future, to be able to do so without LLVM? I think the answer to that is a resounding yes.
That doesn't leave Rust stranded. A _useful_ path for Rust to pursue would be to define a minimal subset of the compiler that you'd need to implement to compile all valid programs. The type checker, borrow checker, unused-variable tracking, and all other safety features would be optional extensions to a minimal portable core. This way, a Rust compiler could feasibly be as simple as the simplest C compiler while still supporting all the complicated validation on platforms with deep support.
rustc is only loosely tied to LLVM. Other code generation backends exist in various states of production-readiness. There are also two other compilers, mrustc and GCC-rs.
mrustc is a bootstrap Rust compiler that doesn't implement a borrow checker but can compile valid programs, so it's similar to your proposed subset. Rust minus verification is still a very large and complex language though, just like C++ is large and complex.
A core language that's as simple to implement as C would have to be very different and many people (I suspect most) would like it less than the Rust that exists.
Would anyone know how to view the patch in question (as opposed to the `--stat`-like view in the thread) without pulling down source or Googling around?
Given that Rust has only recently started working on e.g. Cygwin (and still doesn't build many crates: I tried to compile Jujutsu and failed), this is a big blow to portability IMHO. While I try to like Rust, I think making it mandatory for builds of essential tools like git is really too early.
As a Windows user, I find random Rust projects work on Windows far more often than random C ones, even if the authors didn’t make a specific attempt to support Windows.
My colleague Bryan Cantrill, famously a huge Unix guy, once said to me “if you had told me that projects I write would just work on Windows at the rate they do, I wouldn’t have believed you.” When I started at Oxide I had to submit like one or two patches to use Path instead of appending strings and that was it, for my (at the time) main work project.
As said before, I wasn't complaining about Windows, but rather about less common POSIX layers like Cygwin [0]. Most POSIX-compliant C code compiles in my experience.
Right, but Rust makes it so you don't have to use Cygwin. It's one of the great portability advantages of Rust that you can write real Windows programs with it.
I am not really sure I can follow here. How could a Rust-compiled program like git honor my cygwin-emulated mount points in paths, which I need when working with other POSIX-compliant software?
I thought that if you invoke a native Windows binary with Cygwin, it translates Unix-looking paths into Windows ones. But it's been a long time since I used Cygwin so I could be wrong.
I want it to be cygwin native, i.e. passing calls through the cygwin posix layer and not use the windows binary. Sure I can use the windows binary, but that is a different thing.
Ironically, its original use was in political parlance.
From wiki it's "information sent out to the media in order to observe the reaction of an audience. It is used by companies sending out press releases to judge customer reaction, and by politicians who deliberately leak information on a policy change."
Yup I have no doubt that there's a Rust 'evangelist' group somewhere aiming for inorganic growth of the language.
You can't really count "dependencies" in the Rust ecosystem by counting the number of crates. Gix itself has 65 crates but if you depended on it that would only really be one dependency.
Your average Rust project will have more dependencies than your average C project, but it's not as dramatic as you might think.
Okay, but when I compile a Rust project and I see "0/2000" that gets pulled and built, I panic.
> You can't really count "dependencies" in the Rust ecosystem by counting the number of crates.
Can you elaborate as to why? I have far fewer packages (many of them not even C libraries) installed by my operating system than what a typical Rust project pulls and builds.
Because Rust crates are the "compilation unit" as well as the "publishing unit". So if you are a largish library then you'll likely want to split your library across several crates (to enable things like parallelism in the build process). Then you'll end up with several crates from the same git repo, same developers, that will show up individually in the raw crate count.
It's not a perfect analogy (because crates are generally multiple files), but imagine if in a C project you counted each header file as a separate dependency, it's kinda like that.
---
There is a culture in the Rust ecosystem of preferring shared crates for functionality rather than writing custom versions of data structures or putting too much in the standard library (although it's not nearly so extreme as in the JavaScript ecosystem). And I do think the concern around supply-chain attacks is not entirely unwarranted. But at the same time, the quality standards for these crates are excellent, and in practice many of them are maintained by a relatively small group of people that as a Rust developer I know and trust.
And are these dependencies that get pulled and built general-purpose? I presume they are, since they are published, but I have no idea whether they are indeed general-purpose or something like "internal/*/*" in Go, where the code is not supposed to be used by any other codebase.
This is four crates, so it shows up as 4/2000. But last week, it would have been 3/2000, because serde_core was extracted very recently: https://github.com/serde-rs/serde/pull/2608
As a serde user, this reorganization doesn't change the amount of code you've been depending on, or who authors that code, but it did add one more crate. Not one more actual dependency, though.
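As a concrete sketch of how the counting works (the exact crate list varies by serde version and enabled features, so treat the names as illustrative): one logical dependency is a single line in Cargo.toml...

```toml
# One line in Cargo.toml declares the single logical dependency...
[dependencies]
serde = { version = "1", features = ["derive"] }
```

...but `cargo build` will report several crates being compiled for it (serde, serde_derive, now serde_core, plus the proc-macro support crates they use), largely maintained by the same small group in the same repositories. That's why raw crate counts overstate how many independent parties you're actually trusting.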
> Normal users who have to install the Rust toolchain to build a previously simple piece of software do not count.
"Normal users" would just install the same way they already do today without bothering about the toolchain.
"Normal users" who want to build by theirselves probably won't find it too difficult. Given the size of Git it's incredibly easy to build: just install the dependencies and run `make`.
This series was in response to another thread [1] which wanted to make rust mandatory in an upcoming release.
The author's proposal was instead to take the middle ground and use rust as an optional dependency until a later point in time when it becomes mandatory.
That later point in time was tied to when rust support lands in gcc, which would make things smoother, since platforms which support gcc would then also be covered.
[1]: https://lore.kernel.org/git/pull.1980.git.git.1752784344.git...
The GNU Compiler Collection has been hit and miss, though. Nobody uses gcj, for example. I somewhat doubt that they'll be able to implement a good compiler for a language that doesn't even have a standard without that implementation going wildly out of date, just like what happened with Java.
Since OpenJDK was released there isn't much point maintaining GCJ.
There are two different methods by which Rust support can be added to GCC: adding a Rust frontend to GCC and adding a GCC backend to the Rust compiler (rustc_codegen_gcc). The latter approach would not be as susceptible to implementation divergence as an independent frontend.
yep, if git is content with rustc_codegen_gcc, then it's quite plausible they could require rust in the next few years
I am curious, what is the reason behind introducing Rust in Git?
I am not familiar with Git development, I am just a user. But my impression is that it is already a complete tool that won't require much new code to be written. Fixes and improvements here and there, sure, but that does not seem like a good reason to start using a new language. In contrast, I understand why adding it to e.g. Linux development makes sense, since new drivers will always need to be written.
Can anyone explain what I might be missing?
Git is constantly gaining features, even if for the most part it seems like the core functionality is unchanged.
If you'd like to review the changelog, the Git repo has RelNotes but I've found GitHub's blog's Git category to be a more digestible resource on the matter: https://github.blog/open-source/git/
git feels complete until you use a tool like jj or git-branchless (the latter of which has things like in-memory merges in rust)
can you elaborate please? Why is jj more feature-complete for you than git? I tried jj, and for now it looks too raw. A problem is also that it's git-backed: I really don't want to care about two states of the repo at the same time, my local jj state and the remote git repo.
I think jj just has different concepts compared to git. E.g. in git you probably will not change history much (especially if it's pushed to a remote), while in jj easy editing of commits is a headline feature. So comparing their feature completeness looks strange to me.
After some experience with jj, I see jj as a user-oriented, user-friendly tool with batteries included, while git is a double-edged knife that is also highly customizable.
Or if you use its predecessor, bitkeeper.
https://lore.kernel.org/git/ZZ9K1CVBKdij4tG0@tapette.crustyt... has a couple dozen replies and would be a useful place to start reading about it; beyond that, search that list for Rust. (Note, I'm only responding to the opening question, not evaluating the arguments pro/con here or on the list; in any case, someone else surely will.)
Developers who work on git think it will help them do their jobs better. Do you need any more reasons beyond that? They don't need to justify it to users necessarily.
There's also the fact that if you want to recruit systems programmers for a project like git, the 19-year-old catgirls who are likely to be interested in that sort of work all work in Rust. Ask one to hack a legacy C code base and she might nyao at you angrily >:3
Why not zig tho? Keep the C, compile with `zig cc`, and write new code in zig. Best of both worlds.
uwu but zig doesn't give you memory and concurrency safety guarantees, oniichan!
Idk if it's funny or sad, cause it's true.
To capture existing status for Rust promoters.
> I am curious, what is the reason behind introducing Rust in Git?
More developers. Old C projects simply don't have enough incoming developers anymore.
No one is clamoring to join the Git project and write C code.
The Rewrite It In Rust(tm) brigade, on the other hand, will be happy, for now, to join and spread the gospel of Rust.
I'm not even a Rust or C developer and know this take is BS, Rust pretty clearly has major maintainability and code reliability/safety/stability benefits over C.
The whole point of Rust is that C, and all the code written therein (or as much as is feasible), be eventually replaced and abandoned. The potential costs of continuing to use C, with all the memory and concurrency bugs that come with it, run into the billions worldwide, if not more.
Besides which, in 2025 all the real ones are using jj, which is 100% Rust, not git—so if git wishes to remain competitive it needs to catch up.
I don't know even one developer who uses Jujutsu.
[dead]
C is unsafe.
Changing well-tested code is unsafe.
not changing working code to prevent issues is unsafe.
we can go in circles all day with blanket statements that are all true. but we have ample evidence that even if we think some real-world C code is safe, it is often not because humans are extremely bad at writing safe C.
sometimes it's worth preventing that more strongly, sometimes it's not, evidently they think that software that a truly gigantic amount of humans and machines use is an area where it's worth the cost.
believing that rewriting in rust will make code safe is unsafe. Of course it will be safer, but not safe. Safety is a marketing feature of rust and no more. But a lot of people really believe in it and will zealously try to prove that rust is safe.
If the code is brittle to change, it must not have been particularly safe in the first place, right?
And if it's well-tested, maybe that condition is achieved by the use of a test suite which could verify the changes are safe too?
A test will never catch every bug (otherwise it would be a proof), and any change has some probability of introducing a new bug, regardless of how careful you are. Thus, changing correct code will eventually result in incorrect code.
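The parent's argument can be sketched numerically. This is a hedged toy model, assuming each change independently has a hypothetical probability p of introducing a bug; then the chance that n changes leave the code still correct is (1 - p)^n, which decays toward zero as n grows:

```rust
// Toy model of the claim above: the probability that code stays
// correct after n independent changes, each with bug probability p.
fn prob_still_correct(p: f64, n: i32) -> f64 {
    (1.0 - p).powi(n)
}

fn main() {
    let p = 0.01; // assumed for illustration: 1% chance per change
    for n in [1, 10, 100, 1000] {
        println!("after {n:>4} changes: {:.5}", prob_still_correct(p, n));
    }
}
```

With these assumed numbers, correctness is near certain after one change but vanishingly unlikely after a thousand, which is the "eventually" in the comment above.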
I'm not sure if that's how probability works.
I mean if you want Git to never change you're free to stick with the current version forever. I'm sure that will work well.
I obviously don’t think that is wise, but Git is literally designed with this in mind: https://git-scm.com/docs/repository-version/2.39.0
Just like SQLite has an explicit compatibility guarantee through 2050. You literally do not have to update if you do not want to.
And it’s still a choice you can make regardless of Git moving to Rust or not, so what’s the problem?
This is the repo format version.
It's pretty different from the git version, which receives new releases all the time for things like security patches, improvements, and new features.
https://github.com/Speykious/cve-rs
Rust is not perfect, but perfect C is nearly impossible.
I honestly can't tell if this is meant as serious reply to my question (in that case: let's say I agree that Rust is 100% better than C; my question still stands) or as a way to mock Rust people's eagerness to rewrite everything in Rust (in that case: are you sure this is the reason behind this? They are not rewriting Git from scratch...)
As a user, you may not be aware that C makes it relatively easy to create buffer overflows (https://en.m.wikipedia.org/wiki/Buffer_overflow), which are a major source of security vulnerabilities.
This is one of the best reasons to rewrite software in Rust or any other more safe by default language.
Everyone on hackernews is well aware that C makes it relatively easy to create buffer overflows, and what buffer overflows are. You're still not responding to GP question.
I'm not involved in the initiative so I can't answer the question definitively. I provided one of the major reasons that projects get switched from C. I think it's likely to be a major part of the motivation.
I didn't know that C makes it easy.
Right, I never mentioned that I am a decently experienced C developer, so of course I got my fair share of buffer overflows and race conditions :)
I have also learned some Rust recently, I find a nice language and quite pleasant to work with. I understand its benefits.
But still, Git is already a mature tool (one may say "finished"). Lots of bugs have been found and fixed. And if more are found, surely it will be easier to fix them in the C code than to rewrite in Rust? Unless the end goal is to rewrite the whole thing in Rust piece by piece, solving hidden memory bugs along the way.
https://access.redhat.com/articles/2201201 and https://github.com/git/git/security/advisories/GHSA-4v56-3xv... are interesting examples to consider (though I'm curious whether Rust's integer overflow behavior in release builds would have definitely fared better?).
> Unless the end goal is to rewrite the whole thing in Rust piece by piece, solving hidden memory bugs along the way.
I would assume that's the case.
> though I'm curious whether Rust's integer overflow behavior in release builds would have definitely fared better?
Based on the descriptions it's not the integer overflows that are issues themselves, it's that the overflows can lead to later buffer overflows. Rust's default release behavior is indeed to wrap on overflow, but buffer overflow checks will remain by default, so barring the use of unsafe I don't think there would have been corresponding vulnerabilities in Rust.
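Those two defaults can be sketched in a few lines of plain Rust (nothing project-specific assumed): release-mode arithmetic wraps instead of invoking UB, and a wrapped index still cannot escape the bounds check.

```rust
// Sketch of the two defaults discussed above: in release builds integer
// arithmetic wraps (debug builds panic on overflow), but slice bounds
// checks are never removed, so a wrapped index cannot silently read out
// of bounds the way it can in C.
fn main() {
    let offset: u32 = u32::MAX;
    let idx = offset.wrapping_add(11) as usize; // wraps to 10, no UB

    let buf = [1u8, 2, 3];
    // `buf[idx]` would panic here rather than overflow; the non-panicking
    // accessor turns the failure into an explicit Option instead.
    match buf.get(idx) {
        Some(b) => println!("buf[{idx}] = {b}"),
        None => println!("index {idx} is out of bounds; access refused"),
    }
}
```

So the overflow itself is tolerated, but the downstream memory corruption it enables in C has no safe-Rust equivalent.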
This doesn't matter much for programs like Git. Any non-freestanding program running on a modern OS on modern hardware that tries to access memory it's not supposed to will be killed by the OS. That seems a more reasonable security boundary than relying on the language implementation to simply not emit code that does illegal things.
Yeah, sure, memory safety is nice for debuggability and for being more confident in the program's correctness, but it is not more than that. It is neither security nor proven correctness.
Not quite the best example, since Git usually has unrestricted file access and network access through HTTP/SSH, any kind of RCE would be disastrous if used for data exfiltration, for instance.
If you want a better example, take distributed database software: behind DMZ, and the interesting code paths require auth.
Git already runs "foreign" code e.g. in filters. The ability to write code that reacts unexpectedly on crafted user input isn't restricted to languages providing unchecked array/pointer access.
Unintentional bugs that caused data destruction would also be disastrous for a tool like git
Which are more likely to be introduced by a full rewrite.
> Any non-free standing program running on a modern OS on modern hardware trying to access memory its not supposed to will be killed by the OS.
This seems like a rather strong statement to me. Do you mind elaborating further?
I think bugs in the MMU hardware or the kernel accidentally configuring the MMU to allow access across processes that isn't supposed to be are quite rare.
[dead]
Maybe I'm just old and moany, and I need to step aside for bigger and better things such as Rust.
But.
Rather than needing to understand just C to work on Git/the kernel, you now also need to know Rust. The toolchain complexity is increasing, and the mix of these languages raises the barrier to entry.
I'm highly invested into Git, having learned the tooling and having a significant number of projects constructed within it. I've written my own Git clients and have built a web server around Git repositories. I don't want to lose the hack-ability of Git.
> I'm just old and moany, and I need to step aside for bigger and better things such as Rust.
You are. This is firm "I don't want to have to learn new things" territory, which isn't a viable attitude in this industry.
In any case Rust is usually easier than C (excluding buggy C which is very easy to write), and certainly easier than actually learning the Git or Linux codebases.
I think it is often underappreciated by people who haven't worked in security how hard writing high-quality C is in practice.
We might also have different priorities. I do not care too much that Google and Apple want to lock down their smartphone spyware and sales platforms. The supply-chain risks and maintenance burden imposed on me by the Rust ecosystem are much more of a concern.
I don't know what this has to do with locking down phones, but I do appreciate not getting compromised just for cloning a repo or opening my laptop at a coffee shop.
(There is a persistent idea that the lack of memory safety in C is good because it allows people to jailbreak their phones.)
This is not what I said, but memory safety is certainly not anything which is a high priority for my own security. I still think memory safety is important and I also think Rust is an interesting language, but... the hype is exaggerated and driven by certain industry interests.
Rust isn't popular just because of memory safety though. I think the memory safety message is maybe a little too loud.
It's also a modern language with fantastic tooling, very high quality library ecosystem and a strong type system that reduces the chance of all kinds of bugs.
It's obviously not perfect: compile time is ... ok, there aren't any mature GUI toolkits (though that's true of many languages), async Rust has way too many footguns. But it's still waaaaay better than C or C++. In a different league.
Rust is a nice language, but it was pushed too aggressively with the argument of "memory safety" at all costs, ignoring other considerations. And Cargo is certainly a disaster, even though it may be considered "fantastic tooling" by some. In any case, I do not think it is funny that I now depend on packages without timely security updates in my distribution. This makes me less secure.
Is there better tooling in C/C++? No snark intended?
I guess this depends on what you consider good tooling. I am relatively happy with C tooling. But if you want to quickly assemble something from existing libraries, then language-level package managers like npm, cargo, and pip are certainly super convenient. But then, I think this convenience comes at a high cost. We now have worms again; I thought those times were long over... IMHO package management belongs in a distribution with quality control, and dependencies should be minimized and carefully selected.
It can have supply-chain attacks like npm... That high-quality library ecosystem is also a liability.
I'm an industry interest, in the sense that I work in the software industry and I have an interest in Rust.
Fair enough. I just find it mind-boggling how much money flows into completely new language ecosystems compared to improvements for C/C++ tooling, which would clearly be much more effective if you really cared about the overall security of the free software world.
The issue with investing similar levels of effort into making C++ safer is the C++ standards committee doesn't want to adopt those kinds of improvements.
Which is also the reason why we don't have #pragma once and many other extensions like it. Except we do. Compilers can add rust-like static analyzers without the standard committee mandating it.
I am not interested in C++, it is also far too complex. In my opinion software needs to become simpler and not more complicated, and I fear Rust might be a step into the wrong direction.
Personally, I use Rust (and have been using it for close to 9 years) because I've been part of multiple teams that have delivered reliable, performant systems software in it, within a budget that would clearly be impossible in any other language. Rust acts as a step change in getting things done.
While I really really want devices I can own, I don't want to compromise security to do it. We need to do two things:
1. Lobby politicians to write laws that allow us to actually own the devices we bought.
2. Stop the FUD that a device that can be jailbroken is insecure. I heard this from our frigging CSO, of all people, and it's patently false, just FUD by Apple and Google who want you to be afraid of owning your device.
I want a device that's as secure as possible, but that I can own. I don't want to hack my own self just to get what I paid for.
It is a sad thing, but I do root against secure boot initiatives because they almost entirely work to limit users' freedom instead of improving their security.
Thanks, that take is... Something. I'm all for user-controllable hardware but I think that's a regulatory problem not a technical one.
How often do you clone a repo and don't immediately run build commands that execute scripts provided by the repo?
Who says you do not? :)
Oh, I think it's a real problem, that's why I'm in favor of improved tools.
> You are. This is firm "I don't want to have to learn new things" territory, which isn't a viable attitude in this industry.
It's viable, but limiting. Sometimes you have to do things you don't want to, which is why it's called work. But if you can choose what platforms you work on, you can orient towards things where things change less, and then you don't need to learn new things as often.
Chances are, if you get into the weeds in a lot of C programs, Rust is in your future, but it's viable to not want that, and to moan about it while doing it when you need to.
No one’s laying off COBOL programmers. Specialization has its upsides once the market isn’t saturated!
Well only because 99% of the world's COBOL developers were laid off decades ago (or switched to another language).
The more things change,
As someone with experience in this specific niche, yes they absolutely are. There are no longer ten thousand retail chains asking for COBOL-based counterpoint PoS mods on a yearly basis.
The COBOL market is basically tenured experts in existing systems or polyglots helping migrate the systems to VB or C# at this point. The market has plummeted and now it's in the final deflationary shrink before death.
Ah, damn, I’m sad to hear that. Always respected the language. :/
Technical debt is real tho and the rust-c interop is not the best ever.
Why not rewrite the entire git in rust and have two compatible versions?
It's not "having to learn something new", but "having to be good at two things, both of which are full languages with their own specifics, problems and ways to solve them, two sets of compilers and some duct tape to hold them together.
It's like putting steak on a pizza... pizza is good, steak is good, steak on a pizza might be good too, but to actually do that in production, you now need two prep stations and you can't mess up either one.
[flagged]
Rust is over 10 years old now. It has a track record of delivering what it promises, and a very satisfied growing userbase.
OTOH static analyzers for C have been around for longer than Rust, and we're still waiting for them to disprove Rice's theorem.
AI tools so far are famous for generating low-quality code, and generating bogus vulnerability reports. They may eventually get better and end up being used to make C code secure - see DARPA's TRACTOR program.
The applicability of Rice's theorem with respect to static analysis or abstract interpretation is more complex than you implied. First, static analysis tools are largely pattern-oriented. Pattern matching is how they sidestep undecidability. These tools have their place, but they aren't trying to be the tooling you or the parent claim. Instead, they are more useful to enforce coding style. This can be used to help with secure software development practices, but only by enforcing idiomatic style.
Bounded model checkers, on the other hand, are this tooling. They don't have to disprove Rice's theorem to work. In fact, they work directly with this theorem. They transform code into state equations that are run through an SMT solver. They are looking for logic errors, use-after-free, buffer overruns, etc. But they also fail code for non-terminating execution within the constraints of the simulation. If abstract interpretation through SMT states does not complete in a certain number of steps, then this is also considered a failure. The function or subset of the program only passes if the SMT solver can't find a satisfactory state that triggers one of these issues, through any possible input or external state.
These model checkers also provide the ability for user-defined assertions, making it possible to build and verify function contracts. This allows proof engineers to tie in proofs about higher level properties of code without having to build constructive proofs of all of this code.
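As a hedged sketch of what such a user-defined contract looks like (written here as plain runtime assertions in Rust; a model checker such as CBMC or Kani would discharge the same assertions for every possible input rather than a sampled few):

```rust
// Hypothetical example: an overflow-safe average with its contract
// written as assertions. A bounded model checker explores every input
// symbolically; the loop in main() only samples a handful.
fn average(a: u32, b: u32) -> u32 {
    // avoids the (a + b) / 2 overflow trap
    a / 2 + b / 2 + (a % 2 + b % 2) / 2
}

fn check_contract(a: u32, b: u32) {
    let m = average(a, b);
    // postcondition: the average lies between the two inputs
    assert!(m >= a.min(b) && m <= a.max(b));
}

fn main() {
    // sampled inputs; a model checker would cover all 2^64 pairs
    for &(a, b) in &[(0, 0), (1, 2), (7, u32::MAX), (u32::MAX, u32::MAX)] {
        check_contract(a, b);
    }
    println!("sampled contract checks passed");
}
```

The point of the tooling is exactly the gap between the loop above and "all possible inputs": a unit test exercises the contract, a bounded model checker proves it within the bounds of the model.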
Rust has its own issues. For instance, its core library is unsafe, because it has to use unsafe operations to interface with the OS, or to build containers or memory management models that simply can't be described with the borrow checker. This has led to its own CVEs. To strengthen the core library, core Rust developers have started using Kani -- a bounded model checker like those available for C or other languages.
Bounded model checking works. This tooling can be used to make either C or Rust safer. It can be used to augment proofs of theorems built in a proof assistant to extend this to implementation. The overhead of model checking is about that of unit testing, once you understand how to use it.
It is significantly less expensive to teach C developers how to model check their software using CBMC than it is to teach them Rust and then have them port code to Rust. Using CBMC properly, one can get better security guarantees than using vanilla Rust. Overall, an Ada + Spark, CBMC + C, Kani + Rust strategy coupled with constructive theory and proofs regarding overall architectural guarantees will yield equivalent safety and security. I'd trust such pairings of process and tooling -- regardless of language choice -- over any LLM derived solutions.
Sure it's possible in theory, but how many C codebases actually use formal verification? I don't think I've seen a single one. Git certainly doesn't do anything like that.
I have occasionally used CBMC for isolated functions, but that must already put me in the top 0.1% of formal verification users.
It's not used more because it is unknown, not because it is difficult to use or impractical.
I've written several libraries and several services now that have 100% coverage via CBMC. I'm quite experienced with C development and with secure development, and reaching this point always finds a handful of potentially exploitable errors I would have missed. The development overhead of reaching this point is about the same as the overhead of getting to 80% unit test coverage using traditional test automation.
You're describing cases in which static analyzers/model checkers give up, and can't provide a definitive answer. To me this isn't side-stepping the undecidability problem, this is hitting the problem.
C's semantics create dead-ends for non-local reasoning about programs, so you get inconclusive/best-effort results propped up by heuristics. This is of course better than nothing, and still very useful for C, but it's weak and limited compared to the guarantees that safe Rust gives.
The bar set for Rust's static analysis and checks is to detect and prevent every UB in safe Rust code. If there's a false positive, people file it as a soundness bug or a CVE. If you can make Rust's libstd crash from safe Rust code, even if it requires deliberately invalid inputs, it's still a CVE for Rust. There is no comparable expectation of having anything reliably checkable in C. You can crash stdlib by feeding it invalid inputs, and it's not a CVE, just don't do that. Static analyzers are allowed to have false negatives, and it's normal.
You can get better guarantees for C if you restrict semantics of the language, add annotations/contracts for gaps in its type system, add assertions for things it can't check, and replace all the C code that the checker fails on with alternative idioms that fit the restricted model. But at that point it's not a silver bullet of "keep your C codebase, and just use a static analyzer", but it starts looking like a rewrite of C in a more restrictive dialect, and the more guarantees you want, the more code you need to annotate and adapt to the checks.
And this is basically Rust's approach. The unsafe Rust is pretty close to the semantics of C (with UB and all), but by default the code is restricted to a subset designed to be easy for static analysis to be able to guarantee it can't cause UB. Rust has a model checker for pointer aliasing and sharing of data across threads. It has a built-in static analyzer for memory management. It makes programmers specify contracts necessary for the analysis, and verifies that the declarations are logically consistent. It injects assertions for things it can't check at compile time, and gives an option to selectively bypass the checkers for code that doesn't fit their model. It also has a bunch of less rigorous static analyzers detecting certain patterns of logic errors, missing error handling, and flagging suspicious and unidiomatic code.
It would be amazing if C had a static analyzer that could reliably assure with a high level of certainty, out of the box, that heavily multi-threaded complex code doesn't contain any UB, doesn't corrupt memory, and won't have use-after-free, even if the code is full of dynamic memory (de)allocations, callbacks, thread-locals, on-stack data of one thread shared with another, objects moved between threads, while mixing objects and code from multiple 3rd party libraries. Rust does that across millions of lines of code, and it's not even a separate static analyzer with specially-written proofs, it's just how it works.
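A minimal sketch of that built-in thread-safety checking (nothing project-specific assumed): the only way this compiles is with types that carry the evidence of safe sharing.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Sketch of Rust's built-in analysis: data may only cross a thread
// boundary if its type proves that's safe (the Send/Sync traits).
// A bare `&mut u32` or `Rc<u32>` here would be rejected at compile
// time; Arc<Mutex<_>> carries the shared-ownership and locking
// evidence the checker needs, so a data race can't be expressed.
fn parallel_count(threads: u32, per_thread: u32) -> u32 {
    let counter = Arc::new(Mutex::new(0u32));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    *c.lock().unwrap() += 1; // exclusive access, enforced by type
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let n = *counter.lock().unwrap();
    n
}

fn main() {
    println!("{}", parallel_count(4, 1000)); // deterministic result
}
```

Removing the Mutex or the Arc doesn't produce a racy binary; it produces a compile error, which is the "it's just how it works" part.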
Such analysis requires code with sufficient annotations and restricted to design patterns that obviously conform to the checkable model. Rust had a luxury of having this from the start, and already has a whole ecosystem built around it.
C doesn't have that. You start from a much worse position (with mutable aliasing, const that barely does anything, and a type system without ownership or any thread safety information) and need to add checks and refactor code just to catch up to the baseline. And in the end, with all that effort, you end up with a C dialect peppered with macros, and merely fix one problem in C, without getting additional benefits of a modern language.
CBMC+C has a higher ceiling than vanilla Rust, and SMT solvers are more powerful, but the choice isn't limited to C+analyzers vs only plain Rust. You still can run additional checkers/solvers on top of everything Rust has built-in, and further proofs are easier thanks to being on top of stronger baseline guarantees and a stricter type system.
If we mark any case that might be undecidable as a failure case, and require that code be written that can be verified, then this is very much sidestepping undecidability by definition. Rust's borrow checker does the same exact thing. Write code that the borrow checker can't verify, and you'll get an error, even if it might be perfectly valid. That's by design, and it's absolutely a design meant to sidestep undecidability.
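A small illustration of that "reject what can't be verified" trade-off (a classic borrow-checker limitation; the rejected form is shown only in the comment): a naive "look up, else insert" that returns a reference does not pass the checker even though it is arguably fine at runtime, so idiomatic Rust rewrites it into a form the checker can verify.

```rust
use std::collections::HashMap;

// The borrow checker sidesteps undecidability by rejecting anything it
// can't prove safe, including some programs that are in fact fine. The
// naive version below is rejected (the borrow from get() is
// conservatively extended over the insert()):
//
//     if let Some(v) = map.get(&k) { return v; }
//     map.insert(k, String::from("default"));
//     map.get(&k).unwrap()
//
// so you restructure it into a shape the checker accepts:
fn get_or_default(map: &mut HashMap<u32, String>, k: u32) -> &String {
    map.entry(k).or_insert_with(|| String::from("default"))
}

fn main() {
    let mut m = HashMap::new();
    m.insert(1, String::from("one"));
    println!("{}", get_or_default(&mut m, 1));
    println!("{}", get_or_default(&mut m, 2));
}
```

Same behavior, but only one of the two shapes fits the decidable model the checker enforces, which is exactly the design choice being described.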
Yes, CBMC + C provides a higher ceiling. Coupling Kani with Rust results in the exact same ceiling as CBMC + C. Not a higher one. Kani compiles Rust to the same goto-C that CBMC compiles C to. Not a better one. The abstract model and theory that Kani provides is far more strict than what Rust provides with its borrow checker and static analysis. It's also more universal, which is why Kani works on both safe and unsafe Rust.
If you like Rust, great. Use it. But, at the point of coupling Kani and Rust, it's reaching safety parity with model checked C, and not surpassing it. That's fine. Similar safety parity can be reached with Ada + Spark, C++ and ESBMC, Java and JBMC, etc. There are many ways of reaching the same goal.
There's no need to pepper C with macros or to require a stronger type system with C to use CBMC and to get similar guarantees. Strong type systems do provide some structure -- and there's nothing wrong with using one -- but unless we are talking about building a dependent type system, such as what is provided with Lean 4, Coq, Agda, etc., it's not enough to add equivalent safety. A dependent type system also adds undecidability, requiring proofs and tactics to verify the types. That's great, but it's also a much more involved proposition than using a model checker. Rust's H-M type system, while certainly nice for what it is, is limited in what safety guarantees it can make. At that point, choosing a language with a stronger type system or not is a style choice. Arguably, it lets you organize software in a better way that would require manual work in other languages. Maybe this makes sense for your team, and maybe it doesn't. Plenty of people write software in Lisp, Python, Ruby, or similar languages with dynamic and duck typing. They can build highly organized and safe software. In fact, such software can be made safe, much as C can be made safe with the appropriate application of process and tooling.
I'm not defending C or attacking Rust here. I'm pointing out that model checking makes both safer than either can be on their own. As with my original reply, model checking is something different than static analysis, and it's something greater than what either vanilla C or vanilla Rust can provide on their own. Does safe vanilla Rust have better memory safety than vanilla C? Of course. Is it automatically safe against the two dozen other classes of attacks by default and without careful software development? No. Is it automatically safe against these attacks with model checking? Also no. However, we can use model checking to demonstrate the absence of entire classes of bugs -- each of these classes of bugs -- whether we model check software written in C or in Rust.
If I had to choose between model checking an existing codebase (git or the Linux kernel), or slowly rewriting it in another language, I'd choose the former every time. It provides, by far, the largest gain for the least amount of work.
In my experience current AI is still far from reasoning about the kind of hard-to-spot bugs in C that lead to the worst exploits. Rust solves most of these by design. It isn't about adding a second language - it is about slowly phasing out a language that is being misused in areas it shouldn't be in.
C will at some point be relegated to being an educational language, incredibly valuable due to few but good abstractions over assembly. It will continue to exist for decades in most systems, but hopefully it won't be used outside of the maintenance of legacy systems.
> I've written my own Git clients and have built a web server around Git repositories. I don't want to lose the hack-ability of Git.
And they will keep working because the repository format isn't affected by the language git is written in.
Believing that this will be the case forever is naive. At some point there will be extensions. Then those extensions will become all but mandatory for interacting with other git users.
AFAIK git already uses multiple languages; GitHub says it's 50% C, 38% shell, 4% Perl, 4% TCL, and 1% Python.
So "another language" here probably does not weigh as much, especially considering Perl/TCL are the weirder ones there.
But for big projects like linux and git, this could actually be a consolidation step: you spent decades growing, hacking things on top of each other.
You have mostly figured out what this project is and where it is going, it's time to think about safety, performance and remove old hacks.
Rust feels like a good fit, imho.
Perl, TCL and Python are all written in C, as well as many shells, so despite their interdependency the total complexity can be satisfied with a C11 compiler.
Oh no, we need Rust all the way to the core. /s
I did check this out. The shell, perl and python are likely for scripting and not used during runtime. TCL is likely some form of dynamic scripting.
I think we also have to be honest about what the project here is too: it's not to have both C and Rust together, but to replace all C with Rust. In which case, it probably makes sense to just clone the repo and work on a fork like they did with SSH.
> The shell, perl and python are likely for scripting and not used during runtime.
Some git subcommands are implemented in these. git filter-branch is a shell script, git cvsimport is a Perl script, and git p4 (perforce interop) is a Python script. There are not too many left these days (git add -p/-i also used to call a Perl script), but they exist.
I'm sure you are aware why, reading between the lines of what you said, but for others who aren't aware of the history of git: it was originally about 50% C and 50% Perl; the performance-critical parts were written in C and the various git commands were written in Perl. Over time almost all the Perl was removed because there were fewer Perl monks than C devs.
Now it would seem the logic is reversed; even though there are fewer Rust devs than C devs, Rust is going to replace C. Maybe now that git is large enough and entrenched enough, such a move can be forced through.
> it was originally about 50% C and 50% Perl, the performance critical parts were written in C and then various git commands were written in Perl.
IIRC, it was mostly shell, not Perl, and looking at the proportion is misleading: the low-level commands (the "plumbing") like git-cat-file or git-commit-tree were all in C, while the more user-friendly commands (the "porcelain") like git-log or git-commit were all shell scripts calling the low-level commands. Yes, even things we consider fundamental today like "git commit" were shell scripts.
I believe gitk and git-gui are written in tcl. Those are definitely things that get shipped to the user, so (at least for those parts) you wouldn't need to have a toolchain on the build server.
A number of the git commands were implemented in perl and shell. Now I see only git-svn is perl here for me and there's still a few shell scripts in /usr/libexec/git.
> Now rather than needing to understand just C to work on Git/kernel, you now need to also know Rust.
I have yet to meet a single software engineer who isn't well versed in multiple programming languages. This is not a problem.
Agreed. And if someone is interested in contributing to the Linux kernel, a new programming language is far from the hardest thing that they need to learn...
Rust will, in fact, make it significantly easier to contribute.
In C, you have to remember lots of rules of when what is safe and what locks to hold when. In Rust, APIs are structured to make unsafe use impossible without explicitly saying `unsafe`.
Concrete example: in Rust, locking a mutex returns a handle that lets you access the data protected by the mutex, and the mutex is unlocked when the handle is dropped.
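With the standard library's `Mutex`, that pattern looks roughly like this (a plain userspace sketch, not kernel code):

```rust
use std::sync::Mutex;

fn main() {
    // The data lives *inside* the mutex; there is no way to reach it
    // without locking first.
    let counter = Mutex::new(0_i32);

    {
        let mut guard = counter.lock().unwrap(); // locking returns a handle
        *guard += 1;                             // access goes through the handle
    } // `guard` dropped here => mutex unlocked automatically

    assert_eq!(*counter.lock().unwrap(), 1);
}
```

Forgetting to unlock, or touching the data without holding the lock, simply doesn't compile; the `unwrap()` handles lock poisoning, which is a separate concern from locking itself.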
> Concrete example: in Rust, locking a mutex returns a handle that lets you access the data protected by the mutex, and the mutex is unlocked when the handle is dropped.
This is how it works in the kernel on the C side, too. Usually by using guard/scoped_guard which wrap the generic mutexes with some RAII.
Interestingly enough, this is the only mention of scoped_guard in Documentation/. I will definitely argue that (that part of) Rust is way more approachable.
Except now these software engineers have to code switch between languages.
Could you software engineers stop making things harder for yourselves and playing this meaningless flex of a status game, and you know, focus on something tangible, meaningful, instead of adding more bureaucracy?
I'm guessing you aren't a software engineer based on this comment, but the difference between programming languages is tangible and meaningful. It isn't like human languages where they're mostly basically the same and achieve the same thing.
And code switching between languages is not hard at all.
It's hilarious that you can assume such a thing just from a couple of words on the internet. Or maybe I'm not a 'software engineer' by your standards because, unlike your closed group of SWEs, I'm a lot less focused on resume padding and a lot more focused on keeping my codebase sane and not exploding in complexity.
I should specify - it's hard in that it's troublesome to have to code switch and do a bunch of recall before working on the thing.
Say you haven't worked in this secondary language for a long time, which absolutely happens, and have to spend hours of effort recalling it. That's time you needn't have spent; it's just how memory works.
I didn’t make the assumption but it sounded like a reasonable assumption based on the pronouns you used. You said “could you software engineers stop making things harder for yourselves.” A reasonable interpretation of this is that you aren’t a software engineer.
Reinforced softly by the rest of your comment not being technically sound. Adding a second language that is meaningfully different in its strengths and weaknesses isn’t “bureaucracy”. Bureaucracy is more like “sign a CLA before you can contribute”.
Okay then, how about another interpretation: I'm a software engineer questioning the broader group of SWEs on what they're trying. (Somehow I have to spell out another interpretation; I can't believe how tunnel-visioned people can be.)
Also, bureaucracy is added friction, usually introduced by humans. It can be found anywhere you're working with humans, from leetcode interviews to code styles and practices. It's not just a bunch of signed papers.
Sure, you can add the second language if it adds value, but let's not pretend that the added friction isn't there. If you could solve your problems without the friction of a second language, it would be better.
> I should specify - it's hard in that it's troublesome to have to code switch and do a bunch of recall before working on the thing.
You don't sound like you have any experience working on software projects. I can tell you it's not hard to switch between programming languages. If anything, the difficulty level is placed on onboarding onto projects you are not familiar with, but the programming language in use is far from being a relevant factor if you already are familiar with it.
you're completely missing the point.
Even if it's 'not hard' your brain has to compensate for switching to another realm/space and that takes energy and time especially if you haven't used that particular space for a long time.
This is backed by science. Go read up on short-term working memory and crystallized memory.
All this will add to the maintenance costs, so it had better be a good trade-off.
Dude you said "Could you software engineers stop..."
In normal English that means you aren't a software engineer.
[flagged]
[flagged]
Look at it from the other angle: there are many developers (myself included), especially younger developers, who would much prefer developing in Rust to C, and at least some of them don't want to learn how to write C (including how to avoid undefined behavior).
> I've written my own Git clients and have built a web server around Git repositories. I don't want to lose the hack-ability of Git.
How does the git project using rust inhibit your ability to do any of that?
I've also sent some patches git's way and I can't say I'm thrilled about being forced to (finally) learn Rust if I want to contribute again in the future. I guess I'm outdated...
They're proposing porting over one small piece that has no dependencies and exposing it to the rest of git via a C interface. Yes, they'll presumably port more over in the future if it goes well, but it's a gross exaggeration to characterize this as somehow making it impossible to contribute without knowing Rust.
I know that it is a "slippery slope" argument, but in the future, it will become more difficult to contribute without knowing Rust. That's the entire point of introducing it.
I guess in a certain sense, yes, the total number of lines of code in C will go down, so the difficulty of finding a place to contribute will go down by that metric. On the other hand, I'd argue that it seems rather unlikely that literally all of the C code will be gone from git at least over the next couple of decades (and that's assuming that there's even a desire to rewrite it entirely, which doesn't seem like it's anywhere close to even being possible to discuss seriously any time soon), so it seems like the amount of difficulty will be so small that it's a bit silly to worry about it. Keep in mind that there's still not anything stopping new code from being written in C just because new code might also be possible to write in Rust. Right now, it's literally impossible to contribute Rust code to git, so if it becomes infinitesimally harder to contribute C code to make contributing Rust code possible, that's arguably still a much larger increase in the net "contributability" of the git codebase, for lack of a better term.
And also, a lot of people who hate C, or who never learned it well, will be able to contribute to more and more areas of the Linux kernel.
I understand that it's a minor change in its current state. However, it is a fact that the long term goal is to port everything to rust. Once that goal is accomplished, rust will be required. So it is not at all a gross exaggeration. It's a prediction of the future.
I don't even disagree with that goal, I think it's desirable that things be written in rust, it's a really good language that provides a lot of benefits. I think I've just been infected with the C virus too long. I can't even tolerate C++.
> I understand that it's a minor change in its current state. However, it is a fact that the long term goal is to port everything to rust. Once that goal is accomplished, rust will be required. So it is not at all a gross exaggeration. It's a prediction of the future.
Whose goal is this? I know that there's a perception of there being a loud, vocal contingent of people who have this goal in general, but is there anyone who actually is involved in git maintenance who has stated this intent? The proposal linked above states the following:
> As said, the entire goal is for us to have an easy playground that we can experiment on and develop the infrastructure incrementally without yet having to commit to anything.
> I'm mostly splitting out the topic of introducing Rust from the larger series that introduce it into xdiff so that we can focus more on the actual process of introducing Rust into Git and less on the potential features that we want to build on top of it.
My reading of this is that there are specific features that they at least want to consider using Rust for, and that having support for it in the build process is a prerequisite for that. That doesn't imply at all to me that they would want to rewrite all existing features in it, or to prevent new C code from being written for git after some point in the future. Even if there are some people involved with that goal, it hardly seems like that goal is shared by everyone who might be involved in that type of decision, and I'd argue that people wouldn't even have to be in agreement about that goal to be in favor of this step. I don't find it that hard to believe someone might want to allow using Rust for new features but generally be against the idea of rewriting all features in Rust.
Having written Rust professionally for six years and used it for around a decade, my experience is that there are surprisingly few prolific Rust programmers who seem to devote much time to thinking about trying to get existing projects to rewrite all of their codebase into Rust. It's much more likely that they'd just start an entirely new project that overlaps substantially with an existing one, although even then it's rare for the new project to ever get anywhere close to fully replacing the existing one (if that's even the goal); ripgrep might have wide adoption alongside grep, but grep isn't going anywhere, and I suspect that burntsushi would be one of the last people to suggest it would.
There's also a lot of significant work spent on improving Rust's ability to interoperate with other languages. Libraries made with bindgen (and cbindgen in the other direction) have probably done far more to acclimate Rust programmers to using existing libraries in other languages than to expedite those libraries being rewritten, and there are some popular wrappers that go beyond that and provide even more idiomatic bindings for specific languages, like pyo3 for Python, neon for NodeJS, and cxx for C++ (which was written by the same person who basically single-handedly created the current proc macro ecosystem in Rust alongside specific libraries utilizing it like serde and thiserror, so hardly someone who would have no motivation to have more code rewritten in Rust). If there is an effort being made to tell everyone to rewrite everything in Rust, there's just as much effort from people writing Rust who actively try to work with existing code in other languages, and their work is having far more impact than the first group.
I honestly can't help but wonder if the only reason the debate about rewriting stuff in Rust is still going on is that the people against it engage with it rather than just ignoring it as empty bluster. My hot take is that there's never been anywhere close to the critical mass of people with the skill and desire to put in the work that would be required to make it happen, and there likely never will be, so the debate has been sustained on one side by a range from armchair quarterbacking to intentional ragebait, and on the other side by a range from misguided attempts to engage seriously with what's essentially always been just a meme to pearl-clutching at the idea that someone would dare question the status quo. Maybe there was an interesting philosophical debate to be had about the hypothetical merits of rewriting the world in Rust in the early days, but we're long past the point where there's anything useful left to say on the topic, so we'd all be better off by just collectively moving on and figuring out how things will play out in the real world. C and C++ are definitely not going anywhere in our lifetimes, and Rust has sufficiently proved that it can be used successfully in professional contexts, so the remaining questions are all going to be about tradeoffs between legitimate choices rather than jockeying to see who sticks around in a "winner-takes-all" ecosystem.
Yes, you're correct about me. :-)
I think it is really as simple as this: change is hard and a lot of people struggle with it to varying degrees for different reasons. Just look around at the people in your life and how they react to changes. It's really the same sort of pattern that plays out with Rust.
I distinctly remember reading the comments in the thread here about the initial release of ripgrep, and I remember coming away with a strong impression not just of your technical skill (which was apparent even before reading the thread), but just how pragmatic your viewpoint was. I didn't get the feeling you had any desire to displace anything, but just to solve a specific problem for people who wanted it, and if some people preferred not to use it, that was fine too! As someone who was fairly early on in my software career then, it was an extremely valuable lesson in humility from someone with a pedigree that I presumably wouldn't ever match.
Your reappearance here after my mention is probably another useful lesson for me to have a bit more empathy for those who are reacting more strongly to this announcement than I'd otherwise understand.
Thanks for the kind words! And I'm not perfect either. I find the resistance to change to be extremely frustrating at points. And especially so when it involves misinformation of some sort.
I feel the same way about C code though. I don't think C gets the right to be the one true programming language that everyone must know forever.
Once the C evangelism strike force pushed C code into rust projects you might have an argument.
I'm in the same boat :) But no worries. You can always build and use older git without rust. Of course, it will work for a while until those kids change the proto for the "better". And being old and grumpy also means you can slowly care less and less about all that moot :)
Kids: now downvote it into oblivion :) Like I give a shit...
Removing Perl and adding Rust instead is probably reducing complexity rather than increasing it.
Rust suffers from the same problems that functional programming languages suffer from: a steep learning curve and high complexity. The high complexity is intended to push more runtime errors back to compile time, but boy does the language pay for it. Rust is a tire fire of complexity.
For these reasons I believe it is not a good idea. The kernel also sort of rejected Rust. The kernel is complex enough without adding a Haskell-style type system and a lisp-level macro system capable of obfuscating what code calls what code. serde code is so hard to spelunk for this reason. Contrast this with Go's Unmarshal, which is much easier to follow.
That's... an interesting point of view.
I personally find functional programming languages, including Rust, much clearer than C or Go, in particular because you can offload much information onto the compiler. The example of Serde feels a bit weird, because I don't think I've ever encountered issues with Serde code, while almost 100% of the times I've used Go in production, I've needed to debug through Go's Unmarshal and its... interesting implementation.
Also, last time I checked, the kernel didn't reject Rust. There was a conflict between two specific developers on the best place to store some headers, which is slightly different.
C is simple. Good, fast, secure C is complex.
Rust has a higher initial learning curve than C. But the gap between bare-minimum Rust and fast and secure Rust is much smaller than with C.
> The high complexity is intended to push more runtime errors back to compile time
I would almost say that the ergonomics of allowing this is almost as important as the borrow checker!
Yes, but :) Rust isn't complex because it has functional traits, but rather because of its other design choices. Complex, nonetheless, but, I'd also say, looks "scarier" from the outside. I recently gave in and learned it, and it's much easier to handle than I thought before.
I actually think Rust is pretty easy to pick up for anyone that’s written Typescript and can use their linter to understand references and unwrapping a Result and catching an error.
Beyond that, Rust has pretty forgiving syntax.
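For instance, `Result` makes failure part of a function's type, and the compiler pushes the caller to deal with it. A minimal sketch (the `parse_port` function is a made-up example, not from any real codebase):

```rust
use std::num::ParseIntError;

// The signature alone says this can fail; the caller can't silently
// reach the number without deciding what to do about the error.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.trim().parse::<u16>()
}

fn main() {
    match parse_port("8080") {
        Ok(p) => println!("port {p}"),
        Err(e) => eprintln!("bad port: {e}"),
    }
    // Inside another Result-returning function you would usually just
    // propagate the error with the `?` operator instead of matching.
}
```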
No Linux did not reject Rust from the kernel.
> Rust is a tire fire of complexity.
And C isn't?
It really isn't. C is very simple.
Yep, love the simplicity of the strict aliasing rule.
We are talking about two axis here:
- Complex by design
- Complex to use
C is complex to use because it is simple by design.
Though I would argue the absurd amount of undefined behavior makes it not even simple by design.
> Though I would argue the absurd amount of undefined behavior makes it not even simple by design.
What? UB is the simplest thing you can do when you just don't want to specify behavior. Any specified behavior can't be simpler than unspecified, because that's comparing something with nothing.
Every part of rust is undefined because there is no spec. It’s whatever their compiler does.
Ferrocene has donated their specification to the project, so there absolutely is a specification now. What you can argue is that the memory model isn't fully defined, but it's almost certainly going to land somewhere around stacked borrows or tree borrows. Arguably C doesn't fare much better in that regard though, as it doesn't even properly define its pointer provenance model either, and Rust is much closer to defining its.
Oh something has changed in the last 6 months? glad they are making progress on the spec.
Note that, in compiler lingo, unspecified and undefined are two different things. C++ is specified to death, but full of undefined behavior (and also some unspecified behavior).
Rust is largely not specified, but aims to have no undefined behavior (outside of unsafe blocks).
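Integer overflow is a handy illustration of the split: signed overflow is undefined behavior in C, while Rust specifies the outcome (a panic in debug builds, two's-complement wrap in release) and provides explicit methods to pick the semantics you want. A quick sketch:

```rust
fn main() {
    // In C, `INT_MAX + 1` on a signed int is UB and the optimizer may
    // assume it never happens. In Rust the behavior is defined, and the
    // explicit methods make the chosen semantics visible in the code:
    assert_eq!(i32::MAX.checked_add(1), None);        // detect overflow
    assert_eq!(i32::MAX.wrapping_add(1), i32::MIN);   // two's-complement wrap
    assert_eq!(i32::MAX.saturating_add(1), i32::MAX); // clamp at the bound
}
```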
I am aware but without a spec we don’t know which is which. You can’t say it has no undefined behavior because what happens is you try to specify it and find gaps or challenges.
In C, undefined is used primarily when there is no reliable and efficient mechanism for detecting that a problem is happening. For example, a C implementation may check every single invalid pointer deref, but more realistically it only detects extreme out-of-range accesses. So it's undefined what happens.
Good point.
That being said, at least in C++, undefined has been used largely as a joker for compiler optimizations. In Rust, if my memory serves, having the same code produce different results depending on the optimization level would be considered a pretty serious bug. In C++, it's par for the course.
I was going to roll my eyes at "Rust is a tire fire of complexity". Because it's not. Especially compared to C++. But then you just go on to outright lie in your second paragraph.
Dear Rust haters, lying about Rust in the Linux kernel is not effective for your cause, and in fact just makes it further look like you're throwing a tantrum. Downvoting me doesn't change the fact that more and more Rust is merged into the kernel, new, serious drivers are being written in Rust. It also doesn't change Firefox, Chrome, Microsoft, the US Government and others are recommending and writing new code in Rust. It's over, qq. It's absurd.
I really wish I could find the Lobsters comment the other day from someone that broke down the incredible list of nuanced, spec-level detail you needed to know about C++ to actually use it at scale in large projects. It's laughably, absurdly complex compared to Rust in huge code bases.
The title is a bit of a misnomer. Rust will become mandatory in the build system, not mandatory for future patches.
What does that mean? Is it mandatory for building the build system or also for building the application?
If you want to build the project, you will need a Rust compiler in your toolchain.
Thanks! I've added that to the title above. If it's somehow inaccurate, we can change it again.
What's the point of trying to introduce Rust everywhere? Git is a mature piece of software and I doubt a lot of new code needs to be written. Also, Rust is very complex relative to C. If you really need classes, templates, etc, you can stick to C++ 98 and get something that is still clean and understandable relative to recent C++ standards and Rust.
> I doubt a lot of new code needs to be written
I bet someone could have easily said the same thing a year ago or even 5 years ago. It's easy to forget the progress and there are a lot of things happening under the hood that are not obvious to casual users. Just take a look at the git log of the git repository itself. It has a steady rate of close to 100 commits per week for the past year.
That says nothing of potential new features either. There is so much more to unlock in the VCS space; look at new tools like jj, for example. Additionally, the security landscape is getting more aggressive every day, and keeping up with and being proactive against vulnerabilities will be needed even more than today.
Is this a bit of chickens coming home to roost as far as developer culture forgetting how to work with cross-compiling toolchains? When I started my career, it was common understanding that the developer may be manipulating sourcecode on a different system and/or platform than where it will be executed.
Our source control, editing, compilation, and execution was understood to happen in different computational spaces, with possible copy/staging steps in between. You were doing something very naive if you assumed you could execute the built program on the same system where the sourcecode files existed and the editor/IDE was running.
This was a significant fraction of the build rules we used to manage. E.g. configuration steps had to understand that the target platform being measured/characterized is not the same as the platform executing the build tools. And to actually execute a built object may require remote file copies and remote program invocation.
Actually, the Rust toolchain makes cross-compiling way easier than any other fully-compiled language I've ever used. There are like 100 different platforms you can target by just setting the `--target` flag, and they all pretty much just work on any host platform.
Sounds like the real issue is that some Git developers have ancient, rigid requirements for their own development machines.
> Actually, the Rust toolchain makes cross-compiling way easier than any other fully-compiled language I've ever used
Zig takes the crown on that one, to the point that some people use Zig to cross-compile Go projects with CGo dependencies.
The way Zig solves this problem "better" than Rust is by claiming the target libraries as part of its distribution and building those on demand. It makes for a really excellent experience cross-building.
Rust might have a harder time if it wanted a corresponding feature because it doesn't natively build C like Zig does (using libclang). Either it would have to start using libclang or ship with rust re-implementations of the C library. AFAIK it's impossible to write the C++ library in Rust though.
That has not been my experience. I develop on Windows and need to compile for Linux. After spending several hours trying to get cross-compilation working, I gave up and do it via WSL now.
I switched from Go and I feel like Go was much better at this than Rust.
(I tried “cross” but it was very slow and I found it faster to rsync the files inside the container and then run the build scripts)
I'd bet the difference is that Go has a default assumption that everything is reimplemented in Go and calling C is awkward and slow, meanwhile lots of low-level Rust libraries are actually just type-safety wrappers over C libraries.
Others have said Rust does not support NonStop.
But, my point is you shouldn't even have to cross-compile Git to a platform like NonStop in order to develop NonStop apps. So the portability of Rust shouldn't even matter here. The app developer should be able to run their Git commands on a supported platform and cross-compile their own app to NonStop.
I haven't double checked, but my recollection of that story was that they were using Git as part of the operations at runtime, not (just) as a development dependency.
Ah, I see Tom the Genius has moved on from using Subversion for his enterprise JSON DSL
A good example of it is how easy it is to do WASM from rust. WASM is even one of the harder platforms to target with rust.
I suspect the majority of developers never even learnt as such. Cross-compilation is almost always a second-class citizen and I never expect it to work correctly on an external project. Linux distros have given up, with fedora even insisting on running compilation on the real target hardware for platforms like the raspberry pi, which is kind of insane, and as a result basically no-one puts in the effort to make it work.
> Is this a bit of chickens coming home to roost as far as developer culture forgetting how to work with cross-compiling toolchains?
I don't understand your comment. Completely ignorning Rust the modern state of cross-compilation is an unmitigated disaster.
Linux is especially bad because glibc is a badly architected pile of garbage stuck in the 80s. It should be trivially possible to target any minimum glibc version for any possible Linux hardware environment. But glibc and Linux distros don't even attempt to make this possible. Linux toolchains make it nearly impossible not to use the default system libraries, which is the opposite of correct for cross-compiling.
Zig moves mountains to make cross-compiling possible. But almost no projects actually attempt to support cross-compile.
You mostly understand my comment, but not my graybeard perspective.
The modern disaster is exactly that developer culture has forgotten how to do this for the most part.
But, you're focusing on Rust compiling when I don't think it is relevant. If those weird financial platform developers were aware of cross-compiling, they wouldn't think that a developer tool like Git has to be built to run on the target financial server platform. They would be capable of cross-compiling or otherwise staging their build into that platform while still using Git on a supported workstation platform to manage the sources.
Definitely agree the world has utterly lost the principle of cross-compiling. Support for cross-compile really should be a first-class and sacrosanct principle.
> Introducing Rust is impossible for some platforms and hard for others.
Please could someone elaborate on this.
There's at least one proprietary platform that supports Git built via a vendor-provided C compiler, but for which no public documentation exists and therefore no LLVM support is possible.
Ctrl+F for "NonStop" in https://lwn.net/Articles/998115/
Shouldn't these platforms work on getting Rust to support it rather than have our tools limited by what they can consume? https://github.com/Rust-GCC/gccrs
A maintainer for that specific platform was more into the line of thinking that Git should bend over backwards to support them because "loss of support could have societal impact [...] Leaving debit or credit card authorizers without a supported git would be, let's say, "bad"."
To me it looks like big corps enjoying the idea of having free service so they can avoid maintaining their own stuff, and trying the "too big to fail" fiddle on open source maintainers, with little effect.
It's additionally ridiculous because git is a code management tool. Maybe they are using it for something much more wild than that (why?) but I assume this is mostly just a complaint that they can't do `git pull` from their wonky architecture that they are building on. They could literally have a network mount and externally manage the git if they still need it.
It's not like older versions of git won't work perfectly fine. Git has great backwards compatibility. And if there is a break, seems like a good opportunity for them to fork and fix the break.
And let's be perfectly clear. These are very often systems built on top of a mountain of open source software. These companies will even have custom-patched tools like gcc that they aren't willing to upstream because some manager decided they couldn't just give away the code they paid an engineer to write. I may feel bad for the situation it puts the engineers in, but I feel absolutely no remorse for the companies, because their greed put them in these situations in the first place.
> Leaving debit or credit card authorizers without a supported git would be, let's say, "bad".
Oh no, if only these massive companies that print money could do something as unthinkable as pay for a support contract!
Yes. It benefits them to have ubiquitous tools supported on their system. The vendors should put in the work to make that possible.
I don’t maintain any tools as popular as git or you’d know me by name, but darned if I’m going to put in more than about 2 minutes per year supporting non-Unix.
(This said as someone who was once paid to improve Ansible’s AIX support for an employer. Life’s too short to do that nonsense for free.)
As you're someone very familiar with Ansible, what are your thoughts on it in regards to IBM's imminent complete absorption of RedHat? I can't imagine Ansible, or any other RedHat product, doing well with that.
I wouldn’t say I’m very familiar. I don’t use it extensively anymore, and not at all at work. But in general, I can’t imagine a way in which IBM’s own corporate culture could contribute positively to any FOSS projects if they removed the RedHat veneer. Not saying it’s impossible, just that my imagination is more limited than the idea requires.
IBM has been, and still is, a big contributor to a bunch of Eclipse projects, as their own tools build on those. The people there were both really skilled, friendly and professional. Different divisions and departments can have huge cultural differences and priorities, obviously, but “IBM” doesn’t automatically mean bad for OSS projects.
I'm sure some of RedHat stuff will end up in the Apache Foundation once IBM realizes it has no interest in them.
There isn't even a Nonstop port of GCC yet. Today, Nonstop is big-endian x86-64, so tacking this onto the existing backend is going to be interesting.
That platform doesn’t support GCC either.
Isn’t that what’s happening? The post says they’re moving forward.
[flagged]
On the other hand: why should the entire open-source world screech to a halt just because some new development is incompatible with the ecosystem of a proprietary niche system developed by a billion-dollar freeloader?
HPE NonStop doesn't need to do anything with Rust, and nobody is forcing them to. They have voluntarily chosen to use an obscure proprietary toolchain instead of contributing to GCC or LLVM like everyone else: they could have gotten Rust support for free, but they believed staying proprietary was more important.
Then they chose to make a third-party project (Git) a crucial part of that ecosystem, without contributing time and effort into maintaining it. It's open source, so this is perfectly fine to do. On the other hand, it also means they don't get a say in how the project is developed, and what direction it will take in the future. But hey, they believed saving a few bucks was more important.
And now it has blown up in their face, and they are trying to control the direction the third-party project is heading by playing the "mission-critical infrastructure" card and claiming that the needs of their handful of users is more important than the millions of non-HPE users.
Right now there are three options available to HPE NonStop users:
1. Fork git. Don't like the direction it is heading? Then just do it yourself. Cheapest option short-term, but it of course requires investing serious developer effort long-term to stay up-to-date, rather than just sending the occasional patch upstream.
2. Port GCC / LLVM. That's usually the direction obscure platforms go. You bite the bullet once, but get to reap the benefits afterwards. From the perspective of the open-source community, if your platform doesn't have GCC support it might as well not exist. If you want to keep freeloading off of it, it's best to stop fighting this part. However, it requires investing developer effort - especially when you want to maintain a proprietary fork due to Business Reasons rather than upstreaming your changes like everyone else.
3. Write your own proprietary snowflake Rust compiler. You get to keep full control, but it'll require a significant developer effort. And you have to "muck around" with Rust, of course.
HPE NonStop and its ecosystem can do whatever it wants, but it doesn't get to make demands just because their myopic short-term business vision suddenly leaves them having to spend effort on maintaining it. This time it is caused by Git adopting Rust, but it will happen again. Next week it'll be something like libxml or openssl or ssh or who-knows-what. Either accept that breakage is inevitable when depending on third-party components, or invest time into staying compatible with the ecosystem.
At this point maybe it's time to let them solve the problem they've created for themselves by insisting on a closed C compiler in 2025.
[flagged]
>> insisting on a closed C compiler in 2025.
> Everything should use one compiler, one run-time and one package manager.
If you think that calling out closed C compilers is somehow an argument for a single toolchain for all things, I doubt there's anything I can do to help educate you about why this isn't the case. If you do understand and are choosing to purposely misinterpret what I said, there are a lot of much stronger arguments you could make to support your point than that.
Even ignoring all of that, there's a much larger point that you've kind of glossed over here by:
> The shitheads who insist on using alternative compilers and platforms don't deserve tools
There's frequently discussion around the expectations between open source project maintainers and users. In the same way that users are under no obligation to provide compensation for projects they use, projects don't have any obligation to provide support indefinitely for any arbitrary set of circumstances, even if they happen to for a while. Maintainers will sometimes weigh the tradeoff between supporting a minority of users and making a technical change they feel will help them maintain the project long-term differently than those users will. It's totally valid to criticize those decisions on technical grounds, but it's worth recognizing that these types of choices are inevitable, and there's nothing specific about C or Rust that will change that in the long run. Even with a single programming language on a single platform, the choice of what features to implement or not implement can make or break whether a tool works for someone's specific use case. At the end of the day, there's a finite amount of work people can spend on a given project, and there needs to be a decision about what to spend it on.
For various libs, you provide a way to build without it. If it's not auto-detected, or explicitly disabled via the configure command line, then don't try to use it. Then whatever depends on it just doesn't work. If for some insane reason git integrates XML and uses libxml for some feature, let it build without the feature for someone who doesn't want to provide libxml.
> At the end of the day, there's a finite amount of work people spend on a given project
Integrating Rust shows you have too much time on your hands; the people who are affected by that, not necessarily so.
> Integrating Rust shows you have too much time on your hands; the people who are affected by that, not necessarily so.
As cited elsewhere in this thread, the person making this proposal on the mailing list has been involved in significant contributions to git in the past, so I'd be inclined to trust their judgment about whether it's a worthwhile use of their time in the absence of evidence to the contrary. If you have something that would indicate this proposal was made in bad faith, I'd certainly be interested to see it, but otherwise, I don't see how you can make this claim other than as your own subjective opinion. That's fine, but I can't say I'm shocked that the people actually making the decisions on how to maintain git don't find it convincing.
Weighted by user count for a developer tool like Git, Rust is a more portable language than the combination of C and bash currently in use.
Maybe they can resurrect the C backend for LLVM and run that through their proprietary compilers?
It's probably not straightforward but the users of NonStop hardware have a lot of money so I'm sure they could find a way.
Rust has an experimental C backend of its own as part of rustc_codegen_clr https://github.com/FractalFir/rustc_codegen_clr . Would probably work better than trying to transpile C from general LLVM IR.
Some people have demonstrated portability using the WASM target, translating that to C89 via w2c2, and then compiling _that_ for the final target.
Given that the maintainer previously said they had tried to pay to get GCC and LLVM ported multiple times, all of which failed, money doesn’t seem to have helped.
Surely the question is how much they tried to pay? Clearly the answer is "not enough".
I mean at one point I had LLVM targeting Xbox 360, PS3, and Wii so I'm sure it's possible, it just needs some imagination and elbow grease :)
> There's at least one proprietary platform that supports Git built by via a vendor-provided C compiler, but for which no public documentation exists and therefore no LLVM support is possible.
That's fine. The only impact is that they won't be able to use the latest and greatest release of Git.
Once those platforms work on their support for Rust they will be able to jump back to the latest and greatest.
It's sad to see people be so nonchalant about potentially killing off smaller platforms like this. As more barriers to entry are added, competition is going to decrease, and the software ecosystem is going to keep getting worse. First you need a lib C, now you need lib C and Rust, ...
But no doubt it's a great way for the big companies funding Rust development to undermine smaller players...
It's kind of funny to see f-ing HPE with 60k employees somehow being labeled as the poor underdog that should be supported by the open-source community for free and can't be expected to take care of software running on their premium hardware for banks etc by themselves.
I think you misread my comment because I didn't say anything like that.
In any case, HPE may have 60k employees, but NonStop is still a smaller platform.
It actually demonstrates the point I was making. If a company with 60k employees can't keep up then what chance do startups and smaller companies have?
> If a company with 60k employees can't keep up then what chance do startups and smaller companies have?
They build on open source infrastructure like LLVM, which a smaller company will probably be doing anyway.
Sure, but let's not pretend that doesn't kill diversity and entrench a few big players.
The alternative is killing diversity of programming languages, so it's hard to win either way.
HP made nearly $60b last year. They can fund the development of the tools they need for their 50 year old system that apparently powers lots of financial institutions. It's absurd to blame volunteer developers for not wanting to bend over backwards, just to ensure these institutions have the absolute latest git release, which they certainly do not need.
Oh they absolutely can, they just choose not to. To just make some tools work again there's also many slightly odd workarounds one could choose over porting the Rust compiler.
> It's sad to see people be so nonchalant about potentially killing off smaller platforms like this.
Your comment is needlessly dramatic. The only hypothetical impact this has is that whoever uses these platforms won't have upgrades until they do something about it, and the latest and greatest releases will only run if the companies behind these platforms invest in their maintenance.
This is not a good enough reason to prevent the whole world from benefiting from better tooling. This is not a lowest-common-denominator thing. Those platforms went out of their way to lag in interoperability, and this is the natural consequence of those decisions.
Why should free software projects bend over backwards to support obscure proprietary platforms? Sounds absurd to me
Won't someone think of the financial sector
Reminds me of a conversation about TLS and how a certain bank wanted to insert a backdoor into all of TLS for their convenience.
Sucks to be that platform?
Seriously, I guess they just have to live without git if they're not willing to take on support for its tool chain. Nobody cares about NonStop but the very small number of people who use it... who are, by the way, very well capable of paying for it.
I strongly agree. I read some of the counterarguments, like that this will make it too hard for NonStop devs to use git, and maybe make them not use it at all. Those don’t resonate with me at all. So what? What value does them using git provide to the git developers? I couldn’t care less whether NonStop devs can use my own software at all. And since they’re exclusively at giant, well-financed corporations, they can crack open that wallet and pay someone to do the hard work if it means that much to them.
"You have to backport security fixes for your own tiny platform because your build environment doesn't support our codebase or make your build environment support our codebase" seems like a 100% reasonable stance to me
> your build environment doesn't support our codebase
If that is due to the build environment deviating from the standard, then I agree with you. However, when it's due to the codebase deviating from the standard, why blame the build environment developers for expecting codebases to adhere to standards? That's the whole point of standards.
Is there a standard that all software must be developed in ANSI C that I missed, or something? The git developers are saying - we want to use Rust because we think it will save us development effort. NonStop people are saying we can't run this on our platform. It seems to me someone at git made the calculus: the amount that NonStop is contributing is less than what we save going to Rust. Unless NonStop has a support contract with git developers that they would be violating, it seems to me the NonStop people want to have their cake and eat it too.
According to git docs they seem to try to make a best effort to stick to POSIX but without any strong guarantees, which this change seems to be entirely in line with: https://github.com/git/git/blob/master/Documentation/CodingG...
An important point of using C is to write software that adheres to a decades old very widespread standard. Of course developers are free to not do that, but any tiny bit of Rust in the core or even in popular optional code amounts to the same as not using C at all, i.e. only using Rust, as far as portability is concerned.
If your codebase used to conform to a standard and the build environment relies on that standard, and now your codebase doesn't anymore, then it's not the build environment that deviates from the standard, it's the codebase that breaks it.
Had you been under the impression that any of these niche platforms conform to any common standard other than their own?
Because they don’t. For instance, if they were fully POSIX compliant, they’d probably already have LLVM.
I expect them to conform to the C standard or to deal with the deviation. I don't think POSIX compliance is of much use on an embedded target.
I’m sold.
How is this git's concern?
They enjoy being portable and like things to stay that way so when they introduce a new toolchain dependency which will make it harder for some people to compile git, they point it out in their change log?
I don't think "NonStop" is a good gauge of portability.
But, I wasn't arguing against noting changes in a changelog, I'm arguing against putting portability to abstruse platforms before quality.
I don’t think staying portable means you have to make concessions on quality. It merely limits your ability to introduce less portable dependencies.
But even then, Git doesn’t mind losing some platforms when it wants to move forward on something.
Git's main concern should, of course, be getting Rust in, in some shape or form.
I am curious, does anyone know what is the use case that mandates the use of git on NonStop? Do people actually commit code from this platform? Seems wild.
Nonstop is still supported? :o
because the rust compiler just doesn't support some platforms (os / architecture combination)?
RESF members tend to say it the other way around, as in "the platform doesn't support Rust", but the reality is that it's the compiler that needs to support a platform, not the other way around.
Rust can't support a platform when that platform's vendors just provide a proprietary C compiler and nothing else (no LLVM, no GCC). Perhaps someone could reverse-engineer it, but ultimately a platform with zero support from any FOSS toolchain is unlikely to get Rust support anytime soon.
Furthermore, how could it without the donation of hardware, licenses and so forth?! This is a problem entirely of the proprietary platforms making, and it should be their customers problem for having made a poor decision.
HPE's customers are big-pocketed enough that they absolutely could manage a Rust port themselves, or pay HPE however much money they need to get them to do it if they're going to play games with ABI documentation. NonStop isn't some kind of weird hobbyist or retrocomputing platform.
Actually, I'm surprised HPE doesn't already ship a Rust fork, given how NonStop is supposed to be a "reliable" OS...
Reverse that: "C can't support a platform when that platform's vendors just provide a proprietary Rust compiler and nothing else".
Seems to me that that is equally true and doesn't remove any validity from the argument.
[flagged]
It's unclear what point you're trying to make here.
Proprietary platforms with proprietary-only toolchains are bad, for a wide variety of reasons. Open toolchains are a good thing, for many reasons, including that they can support many different programming languages.
My understanding: As Rust is built on LLVM and not GCC, it is also limited to operating systems supporting LLVM.
GCC simply supports more platforms.
Rust has a GCC backend as well, rustc_codegen_gcc. However, the NonStop platform just has a proprietary C compiler.
MSVC is also proprietary. However LLVM is supported by Microsoft. The developer of Nonstop is apparently not doing that.
To put it simply:
Linux systems: any library is a system library, because otherwise there would be no real OS libraries or APIs.
Rust: no stable ABI = no (real) shared libraries.
Debian: https://www.debian.org/releases/trixie/release-notes/issues....
Me who wrote long, long comment and then accidentally pushed close tab shortcut. !!@@!
I would recommend building some packages containing Rust, especially on older hardware, and then realizing that because of static linking you will need to rebuild them very, very often. And don't forget that you are building clean, because it is expected that you will use the required shared libraries to make life easier.
I think that Rust people should maybe sometimes just consider that Rust, if pushed in such a way, will end up more hated than C.
Maybe you should not try to deflect criticism about a stable ABI and shared libraries: Linux OSes REQUIRE IT, and nobody will change the OS architecture because you want it. And maybe we should be more conservative architecturally, especially in the most critical pieces of software architecture.
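For what it's worth, the "no ABI" point is about Rust-to-Rust linkage: Rust code can still export the platform's stable C ABI, which is how distros can ship Rust code as shared objects today. A minimal sketch (the `hash_len` function is hypothetical, not part of any real Git interface):

```rust
// Rust has no stable Rust-to-Rust ABI, but it can export the C ABI.
// `#[no_mangle]` keeps the symbol name predictable, and `extern "C"`
// uses the platform calling convention, so this function is callable
// from C or loadable from a shared object built as a `cdylib`.
#[no_mangle]
pub extern "C" fn hash_len(use_sha256: bool) -> u32 {
    if use_sha256 { 32 } else { 20 }
}

fn main() {
    // Calling it directly from Rust just to demonstrate the behavior.
    assert_eq!(hash_len(false), 20);
    assert_eq!(hash_len(true), 32);
    println!("ok");
}
```

The catch, as the parent comment notes, is that this only works at C-shaped boundaries; a plain Rust crate dependency still gets statically linked.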
Not especially relevant for Git which has never provided a shared library interface.
It gives Rust hate from many people. And once someone hates a language, it sticks. Also add to that the Rust zealots who sometimes behave like political preachers. "We are the future, you are backwards", says every ideologue, while conveniently not saying "in the direction I want". When Rust started a political fight instead of a language one, they should have expected that every Rust port would become a political quagmire.
Also, you are incorrect, because you are already making a wrong assumption:
> crates are not libraries
"never provided a shared library interface" - it doesn't need to, it just need to USE library - distros will convert static one to shared one if that what is reasonable.
Now we have to have a C library connected by C headers to (in the future) a Rust application. Sure, this somehow works, at the cost of memory safety. So someone WILL suggest using a Rust crate instead of the C library, and the problem will inevitably pop up.
You could only say it works correctly as the platform stipulates if you did not use any Rust crates, or used only ones that your app/lib alone uses, or trivial finished ones, and I do not see people using Rust like that. Even then, from most Linux distributions' perspective it is the distribution's job to decide whether something should be statically or dynamically linked, NOT the app developer's.
SSL is a prime example of something that would be best written in a memory-safe language, with safe headers, provided that the language offers stable ABI connections, so we can ship a 0-day fix without waiting for the app developer.
Rust fails spectacularly at that last point unless the library exposes C headers.
But at least it seems that OpenSSL is dynamically loaded after start so they are not changing that too soon.
When I decide to patch some library for my use case, I may want that patched library used in every instance in every program on the system. A Rust crate makes this impossible: now I need to rebuild everything, even where equivalent C code would not have touched the ABI boundary at all.
Ultimately I think many Linux Rust critics see it, correctly, as a company-first, app-centered, containerized language that is not aimed at development-aware users (i.e. users who patch software for their specific needs and actively want to inspect every dependency in ONE way), and they prefer the known pro-community/pro-distro/pro-user-developer C/C++ paradigm instead. (At least the fact that much of the criticism starts immediately when a GPL project gets a BSD-licensed Rust rewrite points to a free-software vs. open-source, i.e. pro-community vs. pro-company, schism.)
Many Linux users, especially development-aware users, have just had enough of pip, cargo and every 'modern' thing; they just want good old apt or pacman.
Then you have people who think slow development and no revolutionary changes should be the priority in IT these days.
Then you have people who believe that any alternative should be better, easier and simpler than the old stuff before it is even treated as an alternative.
And then you have contrarians.
See this page [1], particularly the 'Tier 3' platforms.
[1] https://doc.rust-lang.org/beta/rustc/platform-support.html
Thanks for the specifics, really fascinating list! I'm sure I'm being a bit flippant, but it's pretty funny that a list including the Playstation 1, N64, and Apple Watches is in the same conversation as systems that need to compile git from source.
Anyone know of anything on that list with more than a thousand SWE-coded users? Presumably there's at least one or two for those in the know?
What I like about seeing a project support a long list of totally irrelevant old obscure platforms (like Free Pascal does, and probably GCC) is that it gives some hope that they will support some future obscure platform that I may care about. It shows a sign of good engineering culture. If a project supports only 64-bit arm+x86 on the three currently most popular operating systems that is a red flag for future compatibility risks.
The problem is that "support" usually isn't quite the right word. In practice for obscure platforms it is often closer to "isn't known to be horribly broken". Rust at least states this explicitly with their Tier 1/2/3 system, but the same will apply to every project.
Platform support needs to be maintained. There is no way around that. Any change in the codebase has the possibility of introducing subtle platform-specific bugs. When platform support means that some teenager a decade ago got it to compile during the summer holiday and upstreamed her patches, that's not worth a lot. Proper platform support means having people actively contributing to the codebase, regularly running test suites, and making sure that the project stays functional on that platform.
On top of this, it's important to remember that platform support isn't free either. Those platform-specific patches and workarounds can and will hold back development for all the other platforms. And if a platform doesn't have a maintainer willing to contribute to keeping those up-to-date, it probably also doesn't have a developer who's doing the basic testing and bug fixing, so its support is broken anyways.
In the end, is it really such a big deal to scrap support for something which is already broken and unlikely to ever be fixed? At a certain point you're just lying to yourself about the platform being supported - isn't it better to accept reality and formally deprecate it?
In theory I agree with you, and code written in a platform-agnostic way is definitely something we should strive for, but in practice: can keeping broken code around really be called "good engineering culture"?
I don't think the concern is whether a user can compile git from source on said platform, but rather whether the Rust standard library is well supported on said platform, which is required for cross-compiling.
In practice, the only systems any significant number of people care about running Git on are arm64 and x86-64, and those are very well supported.
More precise link
https://doc.rust-lang.org/beta/rustc/platform-support.html#t...
Rust doesn't support as many CPU architectures as C does (SH4 for example, though there's likely many more better examples.)
This might make a much more interesting case for GOT than before https://www.gameoftrees.org/
got is a waste of time, imo.
they could just port the multiprocess pledge stuff to git (and benefit Linux too, with namespaces);
then all the user-facing changes (i.e. working on a bare repo instead of a working copy) are things I've been doing for the last decade with a couple of lines in my gitconfig file.
Rust doesn't run on all of their platforms so this is a good example of where git may not be viable for OpenBSD long-term (if they were to switch from CVS one day, which is a big IF)
You’re chasing after the meaning of “impossible.” Easy. There are two categories of developers:
> I like programming
> I program to make money
If you belong to the second category - I’m going to be super charitable, it sounds like I’m not going to be charitable and I am, so keep reading - such as by being paid by a giant bank to make applications on Nonstop, there might be some policy that’s like
“You have to vet all open source code that runs on the computer.”
So in order to have Rust, on Nonstop, to build git, which this guy likes, he’d need to port llvm, which isn’t impossible. What’s impossible is to get llvm code reviewed by legal, or whatever, which they’re not going to do, they’re going to say “No. No llvm. HP who makes Nonstop can do it, and it can be their legal problem.”
I’m not saying it’s impossible. The other guy is saying it’s impossible, and I’m trying to show how, in a Rube Goldberg way, it looks impossible to him.
You and I like programming, and I’m sure we’re both gainfully employed, though probably not making as much money as that guy, but he doesn’t like programming. You are allowed to mock someone’s sincerity if they’re part of a system that’s sort of nakedly about making lots of money. But if you just like programming, you’d never work for a bank, it’s really fucking boring, so basically nobody who likes programming would ever say porting Rust or whatever is impossible. Do you see?
It’s tough because, the Jane Street people and the Two Sigma people, they’re literally kids, they’re nice people, and they haven’t been there for very long, they still like programming! They feel like they need to mook for the bank, when they could just say that living in New York and having cocktails every night is fun and sincere. So this forum has the same problem as the mailing list, where it sounds like it’s about one thing - being able to use fucking hashmaps in git - and it’s really about another - bankers. Everywhere they turn, the bankers run into people who make their lifestyle possible, whether it’s the git developers who volunteer their time or the parents of the baristas at the bars they’re going to paying the baristas’ rent - and the bankers keep hating on these people. And then they go and say, well everyone is the problem but me. They don’t get it yet.
What are you on about?
I'm wondering what's on the horizon with git 3.0?
From my (very limited) perspective, I just kind of thought git had settled in to 2.x and there wasn't any reason to break compatibility.
See https://git-scm.com/docs/BreakingChanges#_git_3_0
SHA-256 will become the default hash.
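For anyone who wants to try it ahead of 3.0: the SHA-256 object format has been available as an opt-in since Git 2.29 (assuming your Git build includes SHA-256 support). A quick sketch:

```shell
# Create a repo using the SHA-256 object format explicitly
# (slated to become the default in Git 3.0).
git init --object-format=sha256 demo-sha256

# Confirm which hash function the repo uses.
git -C demo-sha256 rev-parse --show-object-format
# prints: sha256

rm -rf demo-sha256
```

Note that SHA-256 repos don't yet interoperate with SHA-1 repos, which is part of why the transition has been so slow.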
How does this help me as a user of git?
Rust is generally a much better tool for building software than C. When your software is built with better tools, you will most likely get better software (at least eventually / long term, sometimes a transition period can be temporarily worse or at least not better).
That would be a stronger argument if people were facing implementation deficiencies in git
I'm not sure exactly what you mean but of course people are facing implementation deficiencies in Git. Last I checked submodules were still "experimental" and extremely buggy, and don't work at all with worktrees. (And yeah submodules suck but sometimes I don't have a choice.)
Your reply seems to imply that using rust would make submodules better. Since that's not the case, maybe you can provide an alternative where rust would address an actual issue git users have.
No, I'm implying that it would make Git's implementation of submodules less buggy. That is likely the case.
If we're talking about feelings, I find it "not likely" unless, perhaps, as a side-effect of rethinking the whole feature altogether. Or do you have some actual indicators that the issues with how submodules are likely to break your working directory are related to problems that Rust avoids?
Yes I do. Rust's strong type system makes logic bugs less likely, because you can encode more invariants into the type system.
This also makes it easier to refactor and add features without risk of breaking things.
The borrow checker also encourages ownership structures that are less error-prone.
Finally the more modern tooling makes it easier to write tests.
If you're thinking "where is the peer reviewed study that proves this?" then there isn't one, because it's virtually impossible to prove even simple things like that comments are useful. I doubt there's even a study showing that e.g. it's easier to write Python than assembly (although that one probably isn't too hard to prove).
That doesn't mean you get to dismiss everything you disagree with simply because it hasn't been scientifically proven.
The things I'm talking about have been noted many times by many people.
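To make the "encode invariants into the type system" point concrete, here's a minimal sketch. The `CommitHash` type below is hypothetical, not from Git's codebase; it just illustrates the newtype-with-validated-constructor pattern the parent is describing:

```rust
// Hypothetical sketch: a commit hash that can only be constructed
// through validation. Any function receiving a `CommitHash` then
// knows it holds exactly 40 hex characters -- the compiler enforces
// that no unvalidated string slips through. In C, the equivalent is
// usually a bare `char *` that every call site must re-validate (or
// trust) by convention.
struct CommitHash(String);

impl CommitHash {
    fn parse(s: &str) -> Option<CommitHash> {
        if s.len() == 40 && s.chars().all(|c| c.is_ascii_hexdigit()) {
            Some(CommitHash(s.to_string()))
        } else {
            None
        }
    }

    fn as_str(&self) -> &str {
        &self.0
    }
}

fn main() {
    // A well-formed 40-character hex string is accepted...
    assert!(CommitHash::parse(&"a".repeat(40)).is_some());
    // ...and malformed input is rejected at the boundary, once.
    assert!(CommitHash::parse("not-a-hash").is_none());
}
```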
OK, but I'm not convinced for this specific case. And it wouldn't take a peer reviewed study to convince me. Issues in the git submodules handling that you could link to C's lack of safety would suffice.
However, what you're doing is replying with the same platitudes and generalities that all Rust aficionados seem to have ready on demand. Sure, Rust is better at those things, but I don't see how that would make a rewrite of an existing feature better by default. I don't doubt that new features of Git written in Rust will be safer and more ergonomic, but for existing code to be rewritten, which is what I understand to be your stance, I remain skeptical.
I mean I don’t encounter bugs when I use the program. So telling me rust is going to fix bugs is meh. A web browser is more interesting.
> Rust is generally a much better tool for building software than C.
This is an extremely strong statement. And factually incorrect.
You missed "IMO". We get it, you love Rust and/or hate C, and if so, I wonder why. Try Ada + SPARK though if you really want REAL safety. Its track record speaks for itself.
The developers of git will continue to be motivated to contribute to it. (This isn’t specific to Rust, but rather the technical choices of OSS probably aren’t generally putting the user at the top of the priority list.)
I am pretty sure that developers motivated to contribute code benefits end users plenty.
By not getting timely security updates:
https://www.debian.org/releases/trixie/release-notes/issues....
And the reason this is a problem is because of the me-first attitude of language developers these days. It feels like every language nowadays feels the need to implement its own package manager. These package managers then encourage pinning dependencies, which encourages library authors to be less careful about API stability (though obviously this varies from library to library) and makes it hard on distro maintainers to make all the packages work together. It also encourages program authors to use more libraries, as we see in the Javascript world with NPM, but also in the Rust world.
Now, Rust in Git and Linux probably won't head in these directions, so Debian might actually be able to support these two in particular, but the general attitude of Rustaceans toward libraries is really off-putting to me.
IMHO the reason is that these languages are industry-funded efforts. And they are not funded to help the free software community. Step-by-step this reshapes the open-source world to serve other interests.
Semantic versioning is culturally widespread in Rust, so the problem of library authors being "less careful about API stability" rarely happens in practice. If pinned packages were the problem, I'd imagine they would have been called out as such in the Debian page linked by parent.
Semantic versioning is a way to communicate how much a new version breaks shit, not a way to encourage not breaking shit. If anything, having a standardized way to communicate that you are breaking shit kind of implies that you are already planning to break shit often enough for that to make sense.
Only one number communicates breaking shit, two do not.
It doesn't, it hurts you by limiting the number of platforms Git is available on.
If it works on mac & linux I've got nothing to worry about
[flagged]
My guess is that we (I am also a user of git) won't even notice.
I will leave this here for the future:
I did not measure, but it does not take long on my old hardware to compile git from scratch either, for now.
Ok, I'll bite.
While we are on Hacker News, this is still an enormously obtuse way to communicate.
Are you saying that as users of git we will be negatively affected by deps being added and build times going up? Do you have evidence of that from past projects adding rust?
Why not just say that??
Git is already an uncomfortably large binary for embedded applications. Rust binaries tend to be even more bloated.
Why would you want to run a VCS in an embedded application? Any halfway usable development platform (even VIM) will be much bigger anyways.
It is sure convenient to be able to use git (and vim!) on embedded Linux. You can get by without them of course...
No need to bite. :P
We will see!
See what? Why are you vague-posting?
We will see how much larger the binary becomes, we will see how many more (if any) shared libraries it will depend on, and we will see how long it will take to compile.
Clear enough for you? It is a note to myself, and for others who care. You might not care, I do, and some other people do, too.
In future it might be more reliable and faster, maybe with more features.
But we probably won't see any effect for 10 years or so.
Except there are far fewer Rust developers than C developers, so contributions will start to drop as Rust usage expands in git.
I would safely bet that the pool of C developers willing to work on a C Git going forward is much closer to exhaustion than the pool of Rust developers willing to work on a Rust(-ish) Git.
10 years? are they going to contribute 1 line of a code a day or something?
Well, it would probably take at least 5 years to rewrite all of Git in Rust (gitoxide is 5 years old and far from finished). Then another few years to see novel features, then a year or two to actually get the release.
Btw, 10 lines of code per day is a typical velocity for full-time work; given it's volunteers, 1 line per day might not be as crazy as you think.
That git development will be modern, secure and fast?
It's not a "test balloon" if you have a plan to mandate it and will be announcing that. Unless, I suppose, enough backlash will cause you to cancel the plan.
It's literally a test of how people will react, so yes, finding out if people will react negatively would be exactly the point of doing the test in the first place. Would you prefer that they don't publicize what their follow-up plans would be to try to make it harder to criticize the plans? If you're against the plan, I'm pretty sure that's the exact type of feedback they're looking for, so it would make more sense to tell them that directly if it actually affects you rather than making a passive-aggressive comment they'll likely never read on an unrelated forum.
> It's literally a test of how people will react
What's there to test? It was obvious that the reaction would be overwhelmingly negative, so that's definitely not something they would care about. What else?
Is the reaction overwhelmingly negative? I haven’t read all of the emails but they seemed basically neutral or positive to me. Could you link me to some extremely negative ones, I’m curious.
The only reactions I was seeing were overwhelmingly negative. Just random people on Twitter.
While I love Rust, I can't imagine being both sane and positive about that change.
Ah, so the people whose opinions they care about are going to be git contributors, not random Twitter users (some of whom can literally make money from outrage farming). The folks who actually do the work.
If they’re running the project with a Linus-type approach, they won’t consider backlash to be interesting or relevant, unless it is accompanied by specific statements of impact. Generic examples for any language to explain why:
> How dare you! I’m going to boycott git!!
Self-identified as irrelevant (objector will not be using git); no reply necessary, expect a permaban.
> I don’t want to install language X to build and run git.
Most users do not build git from source. Since no case is made why this is relevant beyond personal preference, it will likely be ignored.
> Adopting language X might inhibit community participation.
This argument has almost certainly already been considered. Without a specific reason beyond the possibility, such unsupported objections will not lead to new considerations, especially if raised by someone who is not a regular contributor.
> Language X isn’t fully-featured on platform Y.
Response will depend on whether the Git project decides to support platform Y or not, whether the missing features are likely to affect Git users, etc. Since no case is provided about platform Y's usage, it'll be up to the Git team to investigate (or not) before deciding.
> Language X will prevent Git from being deployed on platform Z, which affects W installations based on telemetry and recent package downloads, due to incompatibility Y.
This would be guaranteed to be evaluated, but the outcome could be anywhere from “X will be dropped” to “Y will be patched” to “Z will not be supported”.
If you're looking for reasons to ignore criticism like this then you were never interested in anything other than an affirmative nod and pat on the back in the first place.
That's fair, but I also don't think that nuance somehow makes it less of a "test balloon".
They did expect backlash, so I believe no amount will cause them to cancel. Rust fanboys(*) thrive off backlash.
(*) am myself. Love rust. Hate rust rewrites.
I suggest waiting till the GCC side matures, with at minimum a working GCC frontend before Rust becomes a non-optional dependency. Optional dependencies via rustc_codegen_gcc might be okay. Git is pretty core to a lot of projects, and this change is risky; it's on a fairly short time frame to make it a core dep (6 months?).
Does anyone with insight into Git development know if we should care about this? Is this just a proposal out of nowhere from some rando or is this an idea that a good portion of Git contributors have wanted?
You can perhaps learn more about their involvement in the community from this year’s summit panel interview: https://youtu.be/vKsOFHNSb4Q
In a brief search, they’re engineering manager for GitLab, appear to be a frequent contributor of high-difficulty patches to Git in general, and are listed as a possible mentor for new contributors.
Given the recent summit, it seems likely that this plan was discussed there; I hadn’t dug into that possibility further but you could if desired.
For whatever it might be worth...
Looking at the comment thread, at least one person I recognize as a core maintainer seems to be acting as if this is an official plan that they've already agreed on the outline of, if not the exact timing. And they seem to acknowledge that this breaks some of the more obscure platforms out there.
Interesting! I'd certainly say that's worth something. Definitely didn't expect it though given how poorly some people have reacted to Rust being introduced as an optional part of the Linux kernel.
It's a lot more understandable for developer tooling like Git to more quickly adopt newer system requirements. Something like the Linux kernel needs to be conservative because it's part of many people's bootstrapping process.
rustc_codegen_gcc is close to becoming stable, and conversely the Linux kernel is dropping more esoteric architectures. Once the supported sets of architectures fully overlap, and once the Linux kernel no longer needs unstable (nightly-only) Rust features, it'd be more reasonable for Linux to depend on Rust for more than just optional drivers.
I would also say that it's a lot easier to learn to write Rust when you're writing something that runs sequentially on a single core in userspace, as opposed to something like the Linux kernel. Having dipped my toes in Rust, that seems very approachable. It's when you start doing async concurrency that the learning curve becomes steep.
I've found that when you're doing concurrency, Rust makes things easier, and it becomes simpler to get right.
However, adapting the conventions and mechanisms of a large complex C system like the Linux kernel to Rust is taking time.
Those footguns still exist in C, they’re just invisible bugs in your code. The Rust compiler is correct to point them out as bad architecture, even if it’s annoying to keep fighting the compiler.
"Announce that Git 3.0 will make Rust a mandatory part of our build infrastructure."
Sounds like it will be mandatory to use Rust to build all of Git. The title implies Rust itself will be mandatory.
how is that not the same thing?
You could read "Rust will become mandatory" as "all contributors will need to be able to code Rust" or even "all new code has to be written in Rust" or similar variations
It's still effectively the same thing. You don't take on a huge dependency like that without planning to use it extensively.
One phrasing implies contributions will have to be in Rust, the other doesn’t.
I was confused in the same way after reading the submission title. Mandating Rust would be a far more radical change.
I see. No, I understood it the way it is, as introducing it as a new hard dependency in git 3. I suppose it is a pilot for making it mandatory for contributions / incrementally replacing the existing code in the future, though.
Git is pretty modular, and it already includes multiple languages. I guess that significant parts of it will remain in C for a long time, including incremental improvements to those parts. Though it wouldn't surprise me if some parts of git did become all-Rust over time.
My last company used Jenkins, so our build infrastructure depended on Java. We used zero code outside of supporting Jenkins. So Java was required to build our stuff, but not to write or run it.
Edit: nope, I’m wrong. On reading the link, they’re setting up the build infrastructure to support Rust in the Git code itself.
So they are adding technical debt to everything? It's either in the form of C code that will need to be rewritten in Rust,
or Rust code that the C devs will now need to learn in order to understand the entire system.
Doesn't matter which way you look at it.
Will they introduce Ada and announce that it will become mandatory?
Only if safety really is their concern... so no.
It seems unwise, to me, to tie the life of a project as fundamental, and conceptually simple, as git to a compiler and runtime as complicated as rust.
The beauty of the unsafety of C is partially that it's pretty easy to spin up a compiler on a new platform. The same cannot be said of Rust.
One argument from the git devs is that it’s very hard to implement smarter algorithms in C, though. For example, it uses arrays in places where a higher level language would use a hash, because the C version of that is harder to write, maintain, and debug. It’s also much easier to write correct threaded code in Rust than C. Between those 2 alone, using a more robust language could make it straightforward to add performance gains that benefit everyone.
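For a concrete sense of that gap: the kind of bookkeeping where C code often falls back to sorted arrays and linear or binary searches is a few lines with Rust's standard library hash map. A minimal sketch (the function here is illustrative, not from Git):

```rust
use std::collections::HashMap;

// Count how many times each path appears in a list. The `entry` API
// handles the insert-or-update case in one expression, with no
// hand-rolled hashing, resizing, or collision handling.
fn count_paths<'a>(paths: &[&'a str]) -> HashMap<&'a str, usize> {
    let mut counts = HashMap::new();
    for p in paths {
        *counts.entry(*p).or_insert(0) += 1;
    }
    counts
}

fn main() {
    let counts = count_paths(&["a.c", "b.c", "a.c"]);
    assert_eq!(counts["a.c"], 2);
    assert_eq!(counts["b.c"], 1);
}
```

In C, getting the same behavior means either pulling in a library, reusing one of Git's internal hashmap implementations, or writing and debugging your own.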
That's a one time gain though. There's no reason for every platform to check the validity of some hash table implementation when that implementation is identical on all of them.
In my opinion, the verification of the implementation should be separate from the task of translating that implementation to bytecode. This leaves you with a simple compiler that is easy to implement but still with a strong verifier that is harder to implement, but optional.
C is 50 years old or something like that, and it still doesn't have a standard hash map.
Sure its not impossible for C to get that, but at the same time, they are trying to write git not fix C.
* My point is that hash maps and data structures like that are clearly not the priority of C or they would **exist by now.
** by exist I mean either in C standard, or a at least a community consensus about which one you pick, unless you need something specific.
> or they would *exist by now.
See: https://news.ycombinator.com/item?id=45120171
Nobody needs to change a language standard for 9 lines of code. When you really want to use a hash map, it's likely that you care about performance, so you don't want to use a generic implementation anyway.
> or a at least a community consensus about which one you pick
There is a hash table API in POSIX:
And who’s volunteering for that verification using the existing toolchain? I don’t think that’s been overlooked just because the git devs are too dumb or lazy or unmotivated.
> just because the git devs are too dumb or lazy or unmotivated.
That's a very unkind assumption of my argument.
I ask that you read https://news.ycombinator.com/item?id=45314707 to hopefully better understand my actual argument. It doesn't involve calling anybody stupid or lazy.
That came across more harshly than I meant, but I stand by the gist of it: this stuff is too hard to do in C or someone would’ve done it. It can be done, clearly, but there’s not the return on investment in this specific use case. But with better tooling, and more ergonomic languages, those are achievable goals by a larger pool of devs — if not today, because Rust isn’t as common as C yet, then soon.
As a practical example, the latest Git version can be compiled by an extremely simple (8K lines of C) C compiler[1] without modification and pass the entire test suite. Gonna miss the ability to make this claim.
[1] https://github.com/fuhsnn/widcc
Do you think any new, Git-relevant platform is going to gain C compiler support via anything other than Clang/LLVM?
In theory you should be able to use TCC to build git currently [1] [2]. If you have a lightweight system or you're building something experimental, it's a lot easier to get TCC up and running over GCC. I note that it supports arm, arm64, i386, riscv64 and x86_64.
[1] https://bellard.org/tcc/
[2] https://github.com/TinyCC/tinycc
> I note that it supports arm, arm64, i386, riscv64 and x86_64.
But like, so does LLVM.
Code doesn't need to "gain C compiler support", that's the point of having a language standard.
Someone has to write the platform-specific backend. A language standard doesn't help you if nothing implements it for your new platform.
Which Rust still does not have. If serious projects like Git and Linux are adopting Rust, the Rust team might want to consider writing a spec.
https://blog.rust-lang.org/2025/03/26/adopting-the-fls/
The nature of considering the future is that our actions _now_ affect the answer _then_. If we tie our foundational tools to LLVM, then it's very unlikely a new platform can exist without support for it. If we don't tie ourselves to it, then it's more likely we can exist without it. It's not a matter of whether LLVM will be supported. We ensure that by making it impossible not to be the case. It's a self-fulfilling prophecy.
I prefer to ask another question: "Is this useful?" Would it be useful, if we were to spin up a different platform in the future, to be able to do so without LLVM? I think the answer to that is a resounding yes.
That doesn't leave Rust stranded. A _useful_ path for Rust to pursue would be to define a minimal subset of the compiler that you'd need to implement to compile all valid programs. The type checker, borrow checker, unused-variable tracker, and all other safety features should be optional extensions to a core of a minimal portable compiler. This way, the Rust compiler could feasibly be as simple as the simplest C compiler while still supporting all the complicated validation on platforms with deep support.
rustc is only loosely tied to LLVM. Other code generation backends exist in various states of production-readiness. There are also two other compilers, mrustc and gccrs.
mrustc is a bootstrap Rust compiler that doesn't implement a borrow checker but can compile valid programs, so it's similar to your proposed subset. Rust minus verification is still a very large and complex language though, just like C++ is large and complex.
A core language that's as simple to implement as C would have to be very different and many people (I suspect most) would like it less than the Rust that exists.
Would anyone know how to view the patch in question (as opposed to the `--stat`-like view in the thread) without pulling down source or Googling around?
Curious what this means for libgit2.
Ideally upstream git would become better as a library as part of being rewritten in Rust.
[flagged]
Given that Rust only recently works on e.g. Cygwin (and still does not build many crates: I tried to compile Jujutsu and failed), this is a big blow to portability IMHO. While I try to like Rust, I think making it mandatory for builds of essential tools like git is really too early.
?? I build Jujutsu and many other Rust programs from source on Windows.
Rust has a much better Windows story than C and bash do, due to its heritage as a language built by Mozilla for Firefox.
> Rust has a much better Windows story than C
This is an extremely strong statement. Which is so obviously factually incorrect that I tend to think you might have meant something else.
As a Windows user, I find random Rust projects work on Windows far more often than random C ones, even if the authors didn’t make a specific attempt to support Windows.
"work" as in "build"? I would agree with that.
And run.
My colleague Bryan Cantrill, famously a huge Unix guy, once said to me “if you had told me that projects I write would just work on Windows at the rate they do, I wouldn’t have believed you.” When I started at Oxide I had to submit like one or two patches to use Path instead of appending strings and that was it, for my (at the time) main work project.
I meant exactly what I said.
As I said before, I wasn't complaining about Windows, but rather about less common POSIX layers like Cygwin [0]. Most POSIX-compliant C stuff compiles in my experience.
[0] https://github.com/rust-lang/rust/issues/137819
Right, but Rust makes it so you don't have to use Cygwin. It's one of the great portability advantages of Rust that you can write real Windows programs with it.
I am not really sure if I can follow here. How could a rust compiled program like git honor my cygwin emulated mount points in paths, which I need, when working with other posix compliant software.
I thought that if you invoke a native Windows binary with Cygwin, it translates Unix-looking paths into Windows ones. But it's been a long time since I used Cygwin so I could be wrong.
Git works only on cygwin too?
No, it doesn't. OP meant that the Rust support on Cygwin is bad; it is better with the native Windows API.
I don't quite understand. Why use a janky, lossy Linux emulation layer when you can just target Windows natively?
Cygwin is an ugly hack anyway.
jj has MSVC builds and is still a tier 1 target; maybe it's something particular about your configuration?
I want it to be cygwin native, i.e. passing calls through the cygwin posix layer and not use the windows binary. Sure I can use the windows binary, but that is a different thing.
Maybe rewrite or create a new SCM called `grit`, etc
As long as binary sizes don't explode...
what's a 'test balloon'?
A viral commit
Ironically, its original use was in political parlance.
From wiki it's "information sent out to the media in order to observe the reaction of an audience. It is used by companies sending out press releases to judge customer reaction, and by politicians who deliberately leak information on a policy change."
Yup I have no doubt that there's a Rust 'evangelist' group somewhere aiming for inorganic growth of the language.
> Yup I have no doubt that there's a Rust 'evangelist' group somewhere aiming for inorganic growth of the language.
So anything using Rust now must be the ‘evangelists’ work right?
Feel like there’s a ton of interesting things ahead for SCM — want to see more of those proposals.
For example…had to build my own tool to extend git blame and track the AI generated code in our repository and save prompts:
https://github.com/acunniffe/git-ai
See also: https://github.com/GitoxideLabs/gitoxide which is a full rewrite of git in Rust.
So, I have been complaining about how Rust projects have over hundreds and often thousands of dependencies. I gave this random Rust project a try.
The results:
No thank you.
Can you Rust people stop doing this? Hundreds of dependencies is the norm in Rust culture. I swear humans will never learn.
You can't really count "dependencies" in the Rust ecosystem by counting the number of crates. Gix itself has 65 crates but if you depended on it that would only really be one dependency.
Your average Rust project will have more dependencies than your average C project, but it's not as dramatic as you might think.
Okay, but when I compile a Rust project and I see "0/2000" that gets pulled and built, I panic.
> You can't really count "dependencies" in the Rust ecosystem by counting the number of crates.
Can you elaborate as to why? I have far fewer packages (many of them are not even C libraries) installed by my operating system than what a typical Rust project pulls and builds.
> Can you elaborate as to why?
Because Rust crates are the "compilation unit" as well as the "publishing unit". So if you are a largish library then you'll likely want to split your library across several crates (to enable things like parallelism in the build process). Then you'll end up with several crates from the same git repo, same developers, that will show up individually in the raw crate count.
It's not a perfect analogy (because crates are generally multiple files), but imagine if in a C project you counted each header file as a separate dependency, it's kinda like that.
---
There is a culture in the Rust ecosystem of preferring shared crates for functionality rather than writing custom versions of data structures or putting too much in the standard library (although it's not nearly so extreme as in the JavaScript ecosystem). And I do think the concern around supply-chain attacks is not entirely unwarranted. But at the same time, the quality standards for these crates are excellent, and in practice many of them are maintained by a relatively small group of people that as a Rust developer I know and trust.
And are these dependencies that get pulled and built general-purpose? I presume it is since it is published, but I have no idea if it is indeed general-purpose, or something like "internal/*/*" in Go where the code is not supposed to be used by any other codebase.
Lots of projects break themselves up into multiple crates for various reasons, but they’re still maintained as a whole by the same people.
Take serde, for example: https://github.com/serde-rs/serde
This is four crates, so it shows up as 4/2000. But last week, it would have been 3/2000, because serde_core was extracted very recently: https://github.com/serde-rs/serde/pull/2608
As a serde user, this reorganization doesn’t change the amount of code you’ve been depending on, or who authors that code, but it did add one more crate. But not more actual dependency.
mandatorty: best new word of 2025
mandatorty (adj.): Simultaneously required and a civil offense.
I need you to fill out this TPS report. Unfortunately it's mandatorty to fudge section 15A.
[flagged]
> Normal users who have to install the Rust toolchain to build a previously simple piece of software do not count.
"Normal users" would just install the same way they already do today without bothering about the toolchain.
"Normal users" who want to build it themselves probably won't find it too difficult. Given its size, Git is incredibly easy to build: just install the dependencies and run `make`.
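For what it's worth, the short version of that build really is just this (a sketch assuming a Debian/Ubuntu-style system; package names will differ elsewhere, and Git's INSTALL file lists the full set of optional dependencies):

```shell
# Install the usual build dependencies (names assume Debian/Ubuntu).
sudo apt-get install build-essential libcurl4-openssl-dev libssl-dev \
    libexpat1-dev gettext zlib1g-dev

# From a Git source checkout: build, then install under /usr/local.
make prefix=/usr/local all
sudo make prefix=/usr/local install
```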
bruh what is that goofy ass captcha protection??
That's Anubis. A proof-of-work based protection against AI-Crawlers.
https://github.com/TecharoHQ/anubis