This is an issue that you would face in any language with strong typing. It only rears its head in Rust because Rust tries to give you both low-level control and strong types.
For example, in something like Go (which has a weaker type system than Rust), you wouldn't think twice about paying for the re-allocation in the buffer-reuse example.
Of course, in something like C or C++ you could do these things via simple pointer casts, but then you run the risk of invoking undefined behaviour.
In C I wouldn't use such a fluffy high-level approach in the first place. I wouldn't use contiguous unbounded vec-slices. And no, I wouldn't attempt trickery with overwriting input buffers. That's a bad, inflexible approach that will bite at the next refactor. Instead, I would first make sure there's a way to cheaply allocate fixed-size buffers (like 4 K buffers or whatever) and stream into those. Memory should be used in an allocate/write-once/release fashion whenever possible. This approach leads to straightforward, efficient architecture and bug-free code. It's also much better for concurrency/parallelism.
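For concreteness, a minimal sketch of what I mean by allocate/write-once/release (shown in Rust to match the rest of the thread; the sink callback is a stand-in for whatever consumes each chunk):

    use std::io::{self, Read};

    // Stream input through one fixed-size buffer instead of growing a big Vec.
    fn process_stream<R: Read>(mut input: R, mut sink: impl FnMut(&[u8])) -> io::Result<()> {
        let mut buf = [0u8; 4096]; // fixed 4 K buffer, reused for every chunk
        loop {
            let n = input.read(&mut buf)?;
            if n == 0 {
                return Ok(()); // EOF
            }
            sink(&buf[..n]); // write once, then the buffer is reused for the next chunk
        }
    }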
> In C I wouldn't use such a fluffy high-level approach in the first place.
Sure, though that's because C has abstraction like Mars has a breathable atmosphere.
> This approach leads to straightforward, efficient architecture and bug-free code. It's also much better for concurrency/parallelism.
This claim is wild considering that Rust code is more bug-free than C code while being just as efficient, while keeping in mind that Rust makes parallelism so much easier than C that it stops being funny and starts being tragic.
I'm not even sure what it means for a language to "have" abstractions. Abstractions are created by competent software engineers, according to the requirements. A language can have features that make creating certain kinds of abstractions easier -- for example, type abstractions. I've stopped thinking that type abstractions are all that important. Somehow creating those always leads to decision paralysis and scope creep, and using those always leads to layers of bloat and less than straightforward programs.
& and &mut are pretty fundamental Rust abstractions.
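For the unfamiliar, a tiny illustration of the aliasing-XOR-mutability rule those two reference types encode:

    fn main() {
        let mut v = vec![1, 2, 3];
        let r = &v;        // any number of shared (&) borrows may coexist
        println!("{:?}", r);
        let m = &mut v;    // an exclusive (&mut) borrow requires the shared ones to have ended
        m.push(4);
    }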
I can tell you that there were plenty of libraries doing that in the 2000s, back when it was common to write enterprise software in C.
Plenty of abstraction is possible in C by using translation units (TUs) as modules, applying Abstract Data Type design, and following the Yourdon structured method.
> straightforward, efficient architecture and bug-free code
The grace with which C handles projects of high complexity disagrees.
You get a simple implementation only by ignoring edge cases or improvements that increase complexity.
The idea that a language can handle any complexity for you is an illusion. A language can automate a lot of the boring and repetitive small-scale work. And it can have some of the things you would have otherwise coded yourself as built-ins. However, you still have to deal with the complexity caused by buying into these built-ins. The larger a project gets, the more likely the built-ins are to get in the way, and the more likely you are to rewrite these features or sub-systems yourself.
I'd say more fully-featured languages are most useful on the simpler side of projects (granted, some of them can scale quite a way up with proficient use).
Now go research how some of the most complex, flexible, and efficient pieces of software are written.
> The idea that a language can handle any complexity for you is an illusion
I think this is wrong on its face. If it were true, we wouldn't see any correlation between the language used and the highest-complexity programs achieved in it.
As recently mentioned on HN, it takes huge amounts of assembly to achieve anything at all, and to say that C doesn't handle any of the complexity you have to deal with when writing assembly to achieve the same result is absurd.
EDIT: > Now go research how some of the most complex, flexible, and efficient pieces of software are written.
I'm quite aware. To say that the choice of, say, C++ in the LLVM or Chromium codebase doesn't help deal with the complexities they operate over, and that C would do just as well at their scale... well, I don't think history bears that out.
No, C doesn't actually handle the _complexity_ of writing assembly. It abstracts and automates a lot of the repetitive work of register allocation etc. -- sure. But these are very local issues -- I think it's fair to say that the complexity of a C program isn't really much lower than that of the equivalent program hand-coded in assembler.
I'm not sure that LLVM would be my first pick for complex, flexible, and efficient? It's quite certainly not fast; linking in particular isn't. I'm not sure about Chromium; it would be interesting to look at some of the more interesting components, like V8, the rendering engine, the OS interfacing, the multimedia stack... and how they're actually written. I'd suspect the code isn't slinging shared_ptrs and unique_ptrs and lambdas, and keeps use of templates minimal.
I would have thought of the Linux kernel first and foremost. It's a truly massive architecture, built by a huge number of developers in a distributed fashion, with many intricate and highly optimized parts, impressive concurrency, scaling from very small machines to the biggest machines on the planet.
> I would have thought of the Linux kernel first and foremost. It's a truly massive architecture, built by a huge number of developers in a distributed fashion, with many intricate and highly optimized parts, impressive concurrency, scaling from very small machines to the biggest machines on the planet.
This is what the Linux kernel achieved, and when it started C was definitely the right choice for its (primary) implementation language. That doesn't mean C made the job easier, or that if Linux started today it would follow the same trajectory.
I would say Linux succeeds and has good abstractions due to strong discipline, in spite of what C provides it.
Have you written a linker before? It sounds like you’re describing I/O but that isn’t the work being done here by the linker.
I've considered writing one. Why do you think what I'm describing here only applies to I/O (like syscalls)? And, more abstractly speaking -- isn't everything an I/O (like input/output) problem?
Because the data structures for a linker aren’t dealing with byte buffers. I’m pretty sure you’ve got symbolic graph structures that you’re manipulating. And as they mention, sometimes you have fairly large object collections that you’re allocating and freeing, and fast 4 K allocations don’t help you with that kind of stuff. Consider that the link phase for something like Chrome can easily use 1 GiB of active memory, and you see why your idea will probably fall flat on its face for what Wild is trying to accomplish in terms of being a super-high-performance, state-of-the-art linker.
No, without more context I don't immediately see how it would fall flat. If you're dealing with larger data, like gigabytes of it, it might be better to use chunks of 1 MB or something like that. My point is that stream processing is probably best done in chunks. Otherwise, you have to fit all the transformed data in RAM at once -- even when it's ephemeral. Wasting gigabytes without need isn't great.
Because you don’t know a priori whether any particular piece of data is still needed, and linking is not a stream-processing problem - you have to access any part of the “stream” multiple times, randomly. Almost any part of the program can end up linking to another. So you do end up needing to keep it all in memory. And modern linkers need to do LTO, which ends up needing to load all compilation units into memory at once to do that optimization.
But sure, if you’re that confident, go for it. Writing a linker that can use a fraction of the memory, especially during LTO, would be a groundbreaking achievement.
From a skim, my impression was that there was a specific stream-processing problem being discussed, but I've probably gotten that wrong, or a comment was edited. If I understand better now, in the OP, Rust streaming features are being used to achieve essentially a typecast, and the streaming transformations (looking at each individual element) are supposed to be optimized away. And I restate that this is essentially solving a language-inflicted problem, not some interesting performance problem.
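If I've understood the pattern, it's roughly this shape (element types made up for illustration): a map/collect between two types of identical size and alignment, which rustc's in-place collect specialization can turn into a reuse of the original allocation:

    // Hypothetical element types with identical size and alignment.
    struct Raw(u32);
    struct Parsed(u32);

    fn reuse_buffer(input: Vec<Raw>) -> Vec<Parsed> {
        // Semantically a per-element transformation; in practice the standard
        // library's in-place iteration specialization can reuse `input`'s
        // allocation, so no new buffer is allocated and nothing is copied.
        input.into_iter().map(|Raw(x)| Parsed(x)).collect()
    }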
For sure, linking "graphs" generally have random connections so obviously I'm not saying that linking in general is a trivial stream processing problem.
Thinking about it though, linking graphs are _almost_ free of cyclic dependencies in practice (even when looking at entire chunks of code or whole object files as units), so there are some basic tricks we can use to avoid loading all compilation units into RAM at once, as I think you claimed we need to.
The way I understand it, linking is essentially concatenating object files (or rather the sections contained in them) into a final executable. But while doing so, relocations must be applied, so object code must not be written before all symbols referenced by that code are known. This can be done in a granular fashion, based on fixed-size blocks: you collect all relocations in per-block lists (unsorted; think std::vector, but it can be optimized). When a symbol or code location becomes known (e.g., when looking at the library that has the required symbol), that information gets appended to the list. When a list is complete (i.e., all relocations are known), that block can be committed to the file while applying all fixups. Applying fixups is still random writes, but only within fixed-size blocks.
That way the problem is almost reduced to streaming fixed-size blocks in practice. Typically, all symbols that some object file depends on are resolved by the object files that were already added before, so most new chunks that we're looking at can immediately be fixed up and streamed to the output file. (In fact, some linkers are kind of strict about the order of objects that you pass on the command line.) What does grow more-or-less linearly, though, is the set of accumulated symbols (e.g., name + offset) as we add more and more object files to the output executable.
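To make that concrete, a rough sketch of the data structure I have in mind (entirely hypothetical; not how Wild or any real linker is organized):

    // Hypothetical per-block relocation bookkeeping for the scheme above.
    struct Relocation {
        offset: usize,  // where in the block the fixup is applied
        symbol: String, // which symbol it waits for
    }

    struct Block {
        data: Vec<u8>,            // fixed-size chunk of output
        pending: Vec<Relocation>, // relocations not yet resolved
    }

    impl Block {
        // Called whenever `symbol` gets resolved to an address; applies
        // the fixup and drops the relocation from the pending list.
        fn resolve(&mut self, symbol: &str, addr: u64) {
            let data = &mut self.data;
            self.pending.retain(|r| {
                if r.symbol == symbol {
                    data[r.offset..r.offset + 8].copy_from_slice(&addr.to_le_bytes());
                    false // resolved, drop it
                } else {
                    true // still waiting
                }
            });
        }

        // Once no relocations remain, the block can be committed to the
        // output file and its memory released.
        fn is_ready(&self) -> bool {
            self.pending.is_empty()
        }
    }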
I don't know a lot about LTO, but it seems like just an extension of the same problem. Balancing memory consumption and link times with output performance is partly the user's job. Most projects should probably just disable LTO even for release builds. But making LTO fast in a linker is probably all about using the right approach: doing things in the right order and grouping work to avoid reads and writes (both RAM and disk).
> And I restate that this is essentially solving a language-inflicted problem, not some interesting performance problem.
Sort of yes, sort of no. Yes in that the main alternative (std::mem::transmute) is unsafe, so if you want to avoid dealing with that (e.g., "I personally like the challenge of doing things without unsafe if I can, especially if I can do so without loss of performance") then this can indeed be described as a language-inflicted problem. No in that this is arguably not an "inherent" language-inflicted problem, and there is work towards ways for the Rust compiler to guarantee that certain transmutes will work as expected in safe code, and I want to say that this would be one of those cases.
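For context, the unsafe alternative looks something like this (types chosen purely for illustration; for this particular pair a safe equivalent, u32::from_ne_bytes, already exists -- the compiler-guaranteed-transmute work aims at cases where it doesn't):

    // `transmute` only compiles when the sizes match, but the compiler can't
    // check that every bit pattern of the source is a valid value of the
    // target type -- hence the `unsafe` block.
    fn bytes_to_word(b: [u8; 4]) -> u32 {
        unsafe { std::mem::transmute::<[u8; 4], u32>(b) }
    }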
> in something like C or C++ you could do these things via simple pointer casts
No, you don't. You explicitly start a new object lifetime at the address, either of the same type or of a different type. There are standard mechanisms for this.
Developers that can't be bothered to do things correctly are why languages like Rust exist.
And that is safer... how?
For implicit-lifetime types (IIRC, types with a trivial default constructor (or that are aggregates) plus a trivial destructor), you can use memcpy, bit_cast, and soon std::start_lifetime_as (to get a pointer) once it is implemented.
If I'm not mistaken, in C, the lifetime rules are more or less equivalent to implicitly using C++'s start_lifetime_as
Ironically, Rust doesn't need any of that, you literally can just cast to a different pointer type between arbitrary types and start using it without it inherently being UB (you know, as long as your access patterns are more generally valid).
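A small example of what that looks like (the read still needs an unsafe block, but the cast itself isn't the strict-aliasing minefield it is in C/C++):

    fn main() {
        let x: u32 = 0x0102_0304;
        // Cast between arbitrary pointer types. In Rust this is fine as long
        // as the eventual access is in-bounds, aligned, and yields a valid
        // value of the target type -- all true for [u8; 4] here.
        let p = &x as *const u32 as *const [u8; 4];
        let bytes = unsafe { *p };
        println!("{:?}", bytes);
    }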
That's because Rust doesn't have constructors/assignment operators, is it not? Because of that, all objects are trivially relocatable (sort of, because Rust allows destructors for these objects, I guess).
And strict aliasing is not a concern due to Rust's aliasing model, thus the combination of the two makes it safe to type-pun like that. But Rust's model has its downsides/is a tradeoff, so...
I don't particularly mind the C++ object model (since C++20), it makes sense after all: construct your object if the type needs it, or materialize it through memcpy/bit_cast. std::start_lifetime_as should fix the last remaining usability issue with the model.
You're right that Rust doesn't have constructors or assignment operators, but its abstract machine also views memory as simply being a bag of bytes, rather than inherently typed like C/C++'s model.