This is a mindblower. To quote Bruce Dubbs:
''As a personal note, I do not like this decision. To me LFS is about learning how a system works. Understanding the boot process is a big part of that. systemd is about 1678 "C" files plus many data files. System V is "22" C files plus about 50 short bash scripts and data files. Yes, systemd provides a lot of capabilities, but we will be losing some things I consider important.
However, the decision needs to be made.''
Runit is 5474 SLOCs. Most source files are shorter than 100 lines. Works like a charm. Implements an init system; does not replace DNS, syslog, inetd, or anything else.
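For anyone who hasn't seen it, a runit service is just a directory with an executable run script, symlinked into the supervision directory (the paths below follow the Void Linux convention, and the daemon is made up):

    # /etc/sv/mydaemon/run
    #!/bin/sh
    exec mydaemon --no-daemon 2>&1

    # enable and check it:
    ln -s /etc/sv/mydaemon /var/service/
    sv status mydaemon

That's the entire service definition format.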
Systemd, by construction, is a set of Unix-replacing daemons. An ideal embedded system setup is kernel, systemd, and the containers it runs (even without podman). This makes sense, especially given Red Hat's line of business, but it has little relation to the Unix design, or to learning how to do things from scratch.
I love how people worship UNIX design in Linux circles, especially when complaining about decisions where Linux is catching up with commercial UNIXes, as with the init system replacements.
UNIX design was so great that its authors went on to build two other operating systems trying to do UNIX right.
One of the few times I agree with Rob Pike,
> We really are using a 1970s era operating system well past its sell-by date. We get a lot done, and we have fun, but let's face it, the fundamental design of Unix is older than many of the readers of Slashdot, while lots of different, great ideas about computing and networks have been developed in the last 30 years. Using Unix is the computing equivalent of listening only to music by David Cassidy.
This is not about mindless worship, but about the fact that the UNIX design has stood the test of time for this long, and is still a solid base compared to most other operating systems. Sure, there are more modern designs that improve on security and capability (seL4/Genode/Sculpt, Fuchsia), but none are as usable or accessible as UNIX.
So when it comes to projects that teach the fundamentals of GNU/Linux, such as LFS, overwhelming the user with a large amount of user space complexity is counterproductive to that goal. I would argue that having GNOME and KDE in BLFS is largely unnecessary and distracting as well, but systemd is core to this issue. There are many other simpler alternatives to all of this software that would be more conducive to learning. Users can continue their journey with any mainstream distro if they want to get familiar with other tooling. LFS is not the right framework for building a distribution, nor should it cover all software in the ecosystem.
The first version of UNIX was released in 1971 and the first version of Windows NT in 1993. So UNIX is only about 60% older than NT. Both OSes have "stood the test of time", though one passed it with a dominant market share, whereas the other didn't. And systemd is heavily inspired by NT.
Time flies fast, faster than recycled arguments. :)
I'm confused as to which OS is the one that passed the other with dominant market share. Last I checked, Linux is everywhere, and Windows just keeps getting worse with every iteration.
I'm not sure I'd be smugly pronouncing anything about the superiority of Windows if I were a Microsoft guy today.
It's not surprising that systemd was heavily inspired by NT. That's exactly what Poettering was paid to create, by his employer Microsoft. (Oh, sorry--RedHat, and then "later" Microsoft.)
Linux is "everywhere" only if you count Android, which is not very Unix-like.
Except that it didn't: Linux has nothing to do with UNIX design; it isn't UNIX System V in 2026.
> Linux has nothing to do with UNIX design
Respectfully, that's nonsense. Linux is directly inspired by Unix (note: lowercase) and Minix, shares many of their traits (process and user model, system calls, shells, filesystem, small tools that do "one thing well", etc.), and closely follows the POSIX standard. The fact that it's not a direct descendant of commercial Unices is irrelevant.
In fact, what you're saying here contradicts that Rob Pike quote you agree with, since Linux is from the 1990s.
But all of this is irrelevant to the main topic, which is whether systemd should be part of a project that teaches the fundamentals of GNU/Linux. I'll reiterate that it's only a distraction to this goal.
Yet UNIX and proper Unix descendants replaced or complemented their init systems with systemd-like approaches before systemd came to be.
So is UNIX design only great when it serves the message?
I'm not familiar with what UNIX or its modern descendants have or have not implemented. But why should Linux mimic them? Linux is a Unix-like, and a standalone implementation of the POSIX standard. The init system is implementation-specific, just like other features. There has been some cross-system influence, in all directions (similar implementations of FUSE, eBPF, containers, etc.), but there's no requirement that Linux must follow what other Unices do.
If you're going to argue that Linux implementing systemd is a good idea because it's following the trend in "proper" UNIX descendants, then the same argument can be made for it following the trend of BSD-style init systems. It ultimately boils down to which direction you think is better. I'm of the opinion that simple init systems, of which there are plenty to choose from, are a better fit for the Linux ecosystem than a suite of tightly coupled components that take over the entire system. If we disagree on that, then we'll never be on the same page.
A project which is intended to be a learning experience in building a Unix variant (in this case, Linux) is kind of the right place for sticking to the Unix philosophy and design, for illustrative purposes.
Mr Pike has indeed constructed a better OS than Unix; too bad AT&T neither knew how to achieve viral popularity, nor that Free Software (as in GPL) was going to dominate the world. By about 1995, it was already too late. (Something similar happened to Inferno vs Java.)
Still, the Unix principles of modularity, composability, doing one thing well, and unified interfaces are widely considered very sane, and adopted.
Not as much as people in the Linux community think, especially those who never used commercial UNIX offerings.
The GPL is on its way out; a good example is that of all the Linux competitors in the embedded space, including the Linux Foundation's Zephyr, none has adopted the GPL.
GPL-based software is now a minority; almost everything uses licenses that businesses would rather reach for.
I suspect that GPL2 was instrumental in guaranteeing that the work sacrificed into the common pot of the Linux kernel is not going to be taken by a competitor while it's still unpolished, then closed off and used to achieve market domination.
FreeBSD came before Linux (as 386BSD), and is also actively used by the industry. How much code did Sony or Raytheon share back to FreeBSD? (LLVM is not FreeBSD proper.)
See Android for how much that is working in practice, outside the kernel.
Or the Linux distros used by NVidia.
I find Zephyr to be a somewhat poor example. It's typically used on MMU-less microcontrollers where the application is linked into the same binary as the OS. I'm sure you'll point out that it's not strictly necessary to use it in that manner, but that's how most people use it and that's how they expect it to work. Licensing it as GPL would mean that basically nobody would use it, because it would require releasing your entire firmware source code, especially when there are other permissively licensed alternatives in that space like RTEMS, ThreadX, and FreeRTOS.
Exactly; there are no other FOSS kernels using the GPL nowadays. The Linux kernel was the first and last one with commercial success.
I will be honest: mentioning Zephyr in a discussion about how outdated the Unix design philosophy is strikes me as a bit funny, since Zephyr (as eCos kind of did once) tries to be POSIX-like in its APIs (but doesn't really end up improving on the other embedded OSes, TBH).
I am talking about Zephyr in the context of GPL, nothing else.
I think the main problem of Unix today is that it's not Unix-style enough. Too many namespaces with too many non-composable separate APIs on them instead of "everything is a file". Plan9 is more Unix than Unix and that's indeed better. Redox OS, too.
The Unix security model is mostly useless today, but it seems like something better is possible as an incremental change, and there are projects that do that, like RSBAC.
Yes, "everything is a file" but the mouse on Rio is written in stone.
Aside from that, plan9 wins on the theoretical side, since it was a research OS, but on the practical one... it's opinionated.
And we've all heard of the linux people, as opposed to whoever is pushing these post-Cassidy OSes. Linux isn't where it is because of some imperial decree; it has been winning out in a slow, protracted war for what OS programmers choose when they want to get work done.
Pike is more than entitled to an opinion, but I think there is some cause-effect reversal at work here. The linux circles aren't people driving the UNIX-love. The UNIX-love is effective in practice - especially the blend of principle and pragmatism that the linux community settled on - so the linux circles happen to be bigger than the most similar alternatives. Better alternatives are going to have to fight through the same slog as linux if they want recognition.
Compared to plan9, past its sell-by date. Compared to redhat poetteringware, I will continue to attend services.
> We really are using a 1970s era
1970 Anno Domini no less
Making it even more so of a religion.
UNIX is only an OS with some good ideas, and also plenty of bad ones.
No reason to stick with it ad eternum as some kind of holy scriptures.
The article is not about UNIX and what's good or bad about it, but about what's better for understanding Linux. And replacing SysVInit with systemd is, objectively, bad for understanding the core of Linux. And this is the core of LFS.
Discussing whether UNIX is good or bad seems narrow-minded, as there is no solution to that. It's like discussing whether iOS is better than Android. We can always isolate some specific parts and discuss that, but just slashing the whole concept doesn't help anyone and rarely yields any meaningful results.
> And replacing SysVInit with systemd is, objectively, bad for understanding the core of Linux.
I know there are strong opinions on this, but isn’t systemd part of the core of most Linux desktops nowadays?
My Debian has systemd out of the box. On Gentoo it's OpenRC, which I find easier. But! There are some work-around packages that implement stubs of systemd things, because other packages are designed for a systemd-only world (one such stub is elogind).
Unfortunately yes
Its "problem", unfortunately, is that it happens to be the only major FOSS OS. If there were other FOSS OSes with good support and "better" models I'd gladly try them out. I know I personally would never switch to any non-FOSS OS after the user friendliness I have experienced. I would say that's the main reason many stick to it, including game-theoretic arguments for commercial players too. Not because people like to stick to ancient models. It's not an ideal system, obviously, but going back to locked-down crap is a no-go for me and perhaps many others. BSDs are OK too, but the suicidal licensing makes me less inclined.
What’s suicidal about the BSD license? BSD code is everywhere
Yes, and much of it is sealed off and proprietary. The BSD OSes got macOS for all their hard work: a closed-off system that they can't read or port anything back from. Someone will say Linux or GPL projects have also been fucked over this way. I suppose if your house has been burgled, such a person would argue we must remove all protections rather than add more.
MIT et al are winning over GPL for a reason.
I'm not a big corporation. I prefer MIT, or better yet, public domain.
Are they winning as in more people are picking the license, or are they winning as in we are getting an overall more enriched FOSS community?
I don't understand why people have such difficulties with the Golden Rule; it sounds like a simple and fair enough concept.
We are winning as in "we have more freedom to do as we like without a bunch of lawyers breathing down our necks."
Freedom and liberty are what I value. There is no harm occurring to software as a result of more freedom or more liberty. Quite to the contrary.
Is your Golden Rule "you will use 'my' software exactly how I dictate, or else I'll call my dogs to attack you"? That's not the one I was taught.
I release all my code in the public domain.
Why are you acting so strange and making up misinterpretations of what I wrote? You depend on lawyers either way, whichever license you use; I fail to see how copyright law can be implemented and defended without lawyers. The Golden Rule is simple, anyone can look it up; I really don't understand what difficulty you have that made you make up such a strange "not even wrong" theory about it. "Do to others what you would have them do to you": here it means you have benefited from countless man-hours of work by other people, so you too should pass on any improvements you made, just like they did for you.
Regarding freedoms, let us take this scenario. Your small company depends on a complex BSD library that's hard to replicate. It gets the attention of a much larger company; they fork it, make various changes to make it much better, and keep it closed; their product kills yours. While, if it was GPL (or AGPL, as is needed today), the company either has to redevelop it in-house if they wish to serve it as a product to the public without releasing its sources, or they do the same thing as in the BSD case, they make a much improved version... and you have equal access to the same sources; you can take that and pivot upon it instead of your company dying. It's very simple, more or less mathematical game theory. Nobody can force anyone to choose a license; it's your choice.

Again, macOS is not a very encouraging example of the overall outcome of BSD licensing. No freebsd/openbsd/whatever person is permitted to read or use Apple's "fork" now. Apple took the hard work of others instead of paying it back in kind. And it doesn't cost a single cent of money: "paying back" here simply means doing the same as the others did; they generously provided you their work as FOSS, so you pass your delta back as FOSS. Thus raising the high-water mark of the entire ecosystem.

Think academic research. It's usually released in the open, so any improvements made by one team are available for others to use and further improve upon. That's it. Nothing more. Nothing less. How does the GPL "force" anyone to do anything? They can either choose to follow the license, or choose another library, or home-grow an alternative if they dislike the terms.
> You depend on lawyers either way, whichever license you use
No, in fact I don't. Indeed, I go far out of my way to avoid these parasites entirely, and anyone who depends on them. I don't give a damn what anyone does with my software. I don't need the attorneys to do anything.
For accusing me of "misinterpreting" what you wrote, you seem to be quite confused yourself. What part of "public domain" don't you understand? That means I don't give a shit what you do with the code. You can decide for yourself. You know, the mature, unselfish approach. Busybodies and control freaks hate this one simple trick.
> Regarding freedoms, let us take this scenario.
Here we go, lol. We're headed down the rabbit hole straight to the juicy caramel center of your flawed thinking.
> Your small company depends on a complex BSD library that's hard to replicate. It gets the attention of a much larger company; they fork it, make various changes to make it much better, and keep it closed; their product kills yours.
Sounds like you had a very poor business model. Probably because you have no idea what you're doing. Your monetization strategy failed. Pick yourself back up and try again.
The solution is not Big Brother and his machine guns to force others to comply with your dictates. (i.e. the lawyers and legal system, if I have to spell it out for you.)
> While, if it was GPL (or AGPL, as is needed today)
AGPL is strongly avoided by almost everyone, for good reason. It's even more of a cancer than the GPL.
> the company either has to redevelop it in-house if they wish to serve it as a product to the public without releasing its sources, or they do the same thing as in the BSD case, they make a much improved version... and you have equal access to the same sources; you can take that and pivot upon it instead of your company dying.
...or they just decide to develop their own version from scratch instead, keep it closed source from day one, and you get nothing at all. Happens all the time.
If you were truly a shit-hot developer you would not be concerned about anyone ripping you off. You'd know you're so creative and putting out so much quality effort on a consistent basis that you'd never worry about being surpassed by anyone.
Big company thought of a good idea to add to your big pile of good ideas? No problem. Copy that and come out with another good idea or two for him to steal. If they're always imitating you, then that means you're the industry leader, doesn't it?
If you're not the industry leader however, because you really only had one good idea and Big Company has more, then what right do you have to try and Stop Progress just for your own selfish ends? That's what this all boils down to: selfishness, due to insecurity.
Your mentality is completely foreign to a true winner, but oh-so-common among the insecure midwits. They're always deathly afraid that their One Thing will get ripped off and they will be left with nothing.
It's a scarcity mentality. That's the problem. It's all in your head.
You're a squirrel with one little nut that you cling to desperately, in hopes that nobody else will grab it. You make all your life about protecting that nut at all costs. You're so glad that Big Brother offers you his machine guns to help you protect it. You don't care about the harm that comes from bringing thugs with guns into the picture to push people around. You're just desperate to protect Your Thing, so you will accept anything that you believe will help this end. It's the same broken mentality that manifests itself everywhere else besides software also. Nothing new under the sun.
Do not pretend that I don't understand you far better than you know yourself, or that I am misinterpreting you in some way. I've seen ten thousand of your type if I've seen one. You're everywhere, especially on HN. I'm well aware of what your mentality is. The root of the problem is your insecurity.
> Its very simple, more or less mathematical game theory.
You don't have a clue about how economics actually works--which is typical of those with your loudly expressed opinions. But you think of yourself as some enlightened game theorist. Not quite.
The bottom line is, you can't FORCE people to behave how you want through your favorite legal fiction or any other, and you damn sure should never try, as it's a fool's errand that only leads to tears. One of the basic laws of the universe.
The people who created the GPL knew this from day 1. That's exactly why they created it to be the way it is. It was an act of sabotage. This knowledge is currently far above your level however, and is likely to remain so for a long time to come; probably forever.
The world is not falling and BSD is winning the license war for good reason. End of discussion. It's all over but your crying.
>If you're not the industry leader however, because you really only had one good idea and Big Company has more, then what right do you have to try and Stop Progress just for your own selfish ends? That's what this all boils down to: selfishness, due to insecurity.
How the fuck is a GPL library stopping progress? Why does Big Company feel tied up due to a library being GPL? You said it yourself, they could redo it in-house. If they were such hot shit they'd do it and continue the march of progress anyway.
It's very simple, it's so simple I am not even sure I am talking to a functional level of IQ: do you think more progress is made from fewer eyes on an idea? If the changes made by Big Co were available to the public, that's a much larger pool of engineers to take it in all sorts of directions. You are so fucking dumb it's beyond words.
>...or they just decide to develop their own version from scratch instead, keep it closed source from day one, and you get nothing at all. Happens all the time.
You again seem very confused. It's exactly the same as them closing up a BSD fork. So how is the outcome or incentive any different? With BSD they can do that without any effort; with GPL at least they have the friction, and may deem it too much friction. Google's Fuchsia attempt failed despite its behemoth size; Android is still Linux.
How exactly is it a sabotage? You are again making up utter absolute fucking crap out of thin air and acting retarded making up an entire fantasy universe in your head.
Since you are such a smartass wanker, tell me this: how is the other company being forced to release their changes making the market less competitive? On the contrary, this makes it more competitive, since everyone is forced to compete at this level now; they themselves will have to keep developing something better, and again pay the in-house cost if they wanna be jealous.
AGPL is strongly avoided by...yes companies who live off of turning existing libraries into websites...who'd have thought, hardly a surprise why and who avoids it.
Big company can add a good idea, but big company has big resources. I am telling you that you can now pivot on their changes and put them under the pressure cooker again: more competition. Competition is nice.
How the fuck is a simple license that nobody is forced to use a "sabotage"? Are you really even thinking? How fucked in the head can someone be to think a completely legal and simple license is a "sabotage"? A "sabotage" against what or whom? If it's a sabotage, then protest whatever legal framework allowed it. Do you disagree with copyright, is that what you are saying?
If you disagree with copyright, then I hope you have no problem with taking the source code of competitors by any means. After all, if licenses are bad, and government enforcement of copyright is bad, why should copying and releasing a company's internal sources be bad?
If this is a "scarcity" mentality, then the entire history of Mathematics for the past few centuries is a scarcity mentality. If you are man enough to follow through, then say it out right in the reply that you believe Mathematics is a scarcity mentality.
I mean if I wanted to win at all costs, why shouldn't I steal your code and release it and make life harder for you. Or if I wanted to be a real winner, why don't I go and shoot you.
Tell me again moron, how the fuck does a license "force" you, who the fuck is "forcing" you to use gpl if you dislike gpl? I don't even know how deep a level of mental illness one can have to imagine someone with guns is coming out to kill you and rape you and force you to use GPL programs and libraries. Are you even thinking man? This is literal violent paranoid psychosis level of insanity. You are fucked in the head beyond repair.
You seem to be unaware of the basic fact that government--laws and legal systems--is men with guns.
"Government is not reason, it is not eloquence, it is force! Like fire, it is a dangerous servant and a fearful master." - George Washington
It's difficult to have a conversation with someone so profoundly ignorant of reality. Do some research and stop wasting our time with your angry rantings.
I had a feeling you'd be too fucking stupid to be able to respond to any of the points and duly vanish just as I expected from your sackless kind.
You also still failed to have the balls to say out loud you think Mathematics is a scarcity mentality. Be a man.
You are the one who chimped out with a long rant at a perfectly calm explanation, man. You are still acting mentally ill. Yeah, duh, government is men with guns. And where in all this do you see this supposed "sabotage" of being forced to use the GPL? If you don't like the GPL, don't use GPL software. Very simple. Nobody pressured you to. Since government enforcement of copyright is not something you like, I think it's a perfectly fine sentiment to have; I hope you don't mind people releasing all the materials of competitors to the public then. Tell me again (I am not sure there's any point trying to reach the skull of someone this mentally ill, but I'll still try): if I release something as GPL, who's coming into your house with guns blazing forcing you to use my library? You can choose not to use it; it's that simple. The government will use violent force to enforce any law, but in this case it's easy, since you already dislike the GPL: just don't use it, and the government will have no interest in killing you. What a fucking brain-dead moron, man.
That's true, but some of the arguably worst ideas are the ones that make it the most approachable, hackable, and understandable.
Hindsight is an interesting thing. Makes mistakes more visible while making Chesterton's Fences invisible.
We shouldn't forget these. Those fences are there for a reason. Yes, fences can be revised, but they shall not be ignored.
My point was that there are plenty of ancient things we plod along with, even though they're not perfect. Many have tried to improve upon them, but few have stuck.
You are so vague in your attack on the Unix approach that it's borderline trolling. What are your problems with it? Modularity and minimalism have been working perfectly, and the fact that systemd does not follow them is a bad thing.
There is a book on that, gets posted every now and then on HN.
In case you never read it, https://web.mit.edu/~simsong/www/ugh.pdf
Hardly the piece of OS beauty that gets praised in FOSS circles.
I'm not talking about the OS though but about the approach.
It goes for both; otherwise the UNIX authors would not have tried to improve on their creations, working on successors to both UNIX and C.
I love that book but isn’t it nearly 30 years old?
And yet many of the pain points are still kind of relevant, go figure.
But that book is a waste. It is just MIT dunning-krugerites who were salty that LISP machines never took off. When it comes to real life, the bell labs approach won, and for several good reasons. Not "worse is better" (another dunning-krugerite cope), but "less is more."
Turns out free beer is great, even when it is warm.
From your perspective, what would be an "OS done right"? I have a running list of things I would change in Unix, but replacing sysvinit with systemd's one-ring-to-rule-them-all would not be on it.
The only good beer is warm beer. If the beer tastes like shit when it's warm, it's not good beer.
But your comment is a waste. It is just HN dunning-krugerites who were salty that the UNIX way never took off. When it comes to real life, the Poettering approach won, and for several good reasons.
The UNIX way is still doing fine on OpenBSD, NetBSD, FreeBSD, Alpine, Gentoo... Poetteringware only won on the distros selling support contracts. "Fixing" what wasn't broken is great for those businesses.
> for several good reasons
Such as money from M$?
I use runit on my production workstation and don't think about it; it just works.
Same with systemd.
Except for all those who've accidentally blown their legs off with it, of course.
Just ask the guy who bricked his motherboard due to a systemd bug where his firmware wasn't write protected and got destroyed by a 'rm -rf' command. lol
No software is perfect. Not sysvinit (and its bash scripts from different vendors), nor systemd. Errors happen. At least for me systemd is a net positive.
> No software is perfect
Especially when it's a giant blob of buggy C code written by a known hack who has multiple decades' worth of history of foisting shit code upon a less than enthused public.
> At least for me systemd is a net positive
For the moment. Just wait until it finds a way to fuck you. It's plotting and scheming behind your back to do so as we speak.
Systemd for some reason seems to uniquely be the epicenter of giant facepalm bugs like LEAVING THE SYSTEM FIRMWARE VULNERABLE TO AN RM -RF COMMAND, a situation which causes alarm to none of the systemd crowd. They just shrug it off. "What's the big deal? I don't get it," they say.
I used to see the same mentality from Microsoft people back in the day. "Why would you use Linux? I don't get it. Windows is fine."
It's because you lack standards. You're completely used to being surrounded by software and hardware that is Fucking Garbage. Everything is like that in your world. You're happier than a pig in shit, oblivious.
> Systemd for some reason seems to uniquely be the epicenter of giant facepalm bugs like LEAVING THE SYSTEM FIRMWARE VULNERABLE TO AN RM -RF COMMAND
I am very sorry to inform you but efivarfs is something coming from the Linux kernel. Being able to rm -rf it is squarely something that is entirely on the kernel implementation, WHICH THE AUTHOR OF EFIVARFS EVEN ADMITS[0]
[0]: https://lwn.net/Articles/978640/
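(If this class of bug worries you on a system where the firmware is writable, one blunt mitigation, assuming the usual efivarfs mount point, is to keep it read-only until you actually need to write EFI variables:

    mount -o remount,ro /sys/firmware/efi/efivars

Newer kernels also mark most EFI variables immutable by default, which is what ultimately defused the rm -rf scenario.)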
Thanks for the correction. Yes, I have my bone to pick with the Linux kernel too on many different fronts.
#facepalm
> Implements an init system; does not replace DNS, syslog, inetd, or anything else.
You're confusing systemd the init manager and systemd the project. systemd as an init system only "replaces" initd, syslog and udev.
All other components under the systemd project are optional and not required, nor is there any push to make them required.
>"All other components under the systemd project are optional and not required"
Name two major distros that use 'systemd init system' but doesn't use the other parts.
> Implements an init system; does not replace DNS, syslog, inetd, or anything else
Neither does systemd's init.
Unknowledgeable people keep confusing systemd the init and systemd the daemon / utility suite. You can use just the init system without pulling in resolved or networkd or whatever.
Systemd is the Unix philosophy of lots of modularity. But because all the systemd daemons come from the same shop, you get a lot of creature comforts if you use them together. Nothing bad about that.
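For what it's worth, if you build systemd yourself (as LFS does), the optional daemons are separate meson switches. The option names and accepted values have shifted between releases, so treat this as a sketch and check meson_options.txt for your version:

    meson setup build \
        -Dresolve=false \
        -Dnetworkd=false \
        -Dtimesyncd=false
    ninja -C build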
> because all the systemd daemons come from the same shop, you get a lot of creature comforts if you use them together. Nothing bad about that.
That's how vendor lock-in works, in which a myth is propagated that having it all come from under one roof is best. In fact, it is a guarantee that best-of-breed alternative solutions cannot be used. Interoperability is thwarted. This is why sensible Unix admins historically knew to keep options open for mixed-vendor sourcing as long as the bosses didn't get roped in to a single vendor or source.
Okay, so you code the features that dnsmasq is missing that resolved has. Or pay someone to do it. I promise you systemd does not have special verification protocols that stop you from interfacing with certain features. This isn't Apple.
Think about it: you can't obligate the systemd folks to maintain codebases that aren't theirs... that would be madness.
> but it has little relation to the Unix design
It's more like Windows! /duck
I have been saying for years that Microsoft would eventually deprecate WinNT and switch Windows over to a Linux foundation. Things seem to be slowly but continually moving in that direction.
Makes no sense to dump a superior kernel and executive for Linux.
The Win32 layer is the issue, not the underbelly.
I’ve had more hard crashes and BSODs on Windows than any other OS. And I use Linux & Mac more than Windows. Not sure how it’s superior.
More advanced APIs which allow more fine-grained interaction between system and application, IF you can figure out how to use them.
My favorite example of this is how Windows NT has had async IO forever, while also being notorious for having slower storage performance than Linux. And when Linux finally got an async API worth using, Microsoft immediately set about cloning it for Windows.
Theoretical or aesthetic advantages are no guarantee that the software in question will actually be superior in practice.
Async I/O isn't limited to just storage, though. It's /all/ I/O.
And yes, the layered storage stack does have a performance penalty to it. But it's also infinitely more flexible, if that is what you need. Linux still lacks IOCP (which io_uring is not a replacement for).
Windows' VMM and OOM handling are also generally much better.
> this is how Windows NT has had async IO
Pretty much what I was thinking of. My understanding from reading some commentary in this area is that the Linux implementation is still a little botched due to how it handles waiting threads.
The windows NT kernel is in many ways a better design. However they allow third party device drivers, and run on all kinds of really terrible hardware. Both of them will cause the system to be unstable through no fault of the system.
Don't get me wrong, NT also has its share of questionable design decisions. However overall the technical design of the kernel is great.
They might use the NT kernel and their own version of the Linux userland.
I'd be open to the idea, if the kernel were open sourced (MIT licensed?) so I could play with it too.
Why do that when Win32 is what everyone wants?
We’ve already had NT + Linux userland; that was WSLv1.
I think if we're talking about "what everyone wants", Windows 11 obviously isn't it, so that's not necessarily the driving force here.
As I said, everyone wants Win32. What flavor is up to debate, everyone has their own incorrect opinions.
It would be much unlike Microsoft if they didn't bring Win32/Win64 compatibility along for the ride somehow, and very stupid also, because as you say that is the real core of Windows in a lot of ways.
I have no idea what they're planning or why, just guessing, as they seem to be bringing Linux and Windows closer together all the time.
> It would be much unlike Microsoft if they didn't bring Win32/Win64 compatibility along for the ride somehow, and very stupid also, because as you say that is the real core of Windows in a lot of ways.
This requires NT API compatibility, because applications use the NT API. Despite Microsoft telling devs not to use the NT API, devs use it anyway, and Microsoft makes adjustments to ensure compatibility.
> I have no idea what they're planning or why
Clearly, because the whole idea not only makes no engineering sense, it makes no financial sense. They need to build the NT kernel anyway -- it runs the entirety of Azure services!
> Makes no sense to dump a superior kernel and executive for Linux.
At this point in time, having programmed deep in the internals of both Linux and Windows, I think it is probably incorrect to call either kernel an inferior or superior one.
I mean, it was true for both of them at some point (Overlapped IO was great on Windows and missing on Linux, for example) but today, in 2026, the only differentiating factor is the userland experience.
For me, Windows loses this hands down.
> switch Windows over to a Linux foundation.
Though it seems to be sneaking in through application space on a WinNT foundation
Hackers design hacker-friendly systems, which are easy to learn and extend. Corporation$ design ops-friendly systems, which are cheap to operate.
We need both.
> We need both
Both can devolve into empire building. We need both to be transparent and open.
What we need is actual, proper, mass-education about how computers work, with the goal of increasing their freedom of interaction. Not towards creating more working class peasants using a tool for work, but creating chaotically creative tinkerers using a tool to create whatever they want, more tools included.
Kids and their Parents learned it in the 80s and they had nothing but a manual. Either these people were massively more intelligent, or the same approach, using modern methods, would work again and again and again.
Considering the 1% rule of the internet (it's about the ratios, not the numbers!), shifting more people from the 90% to, at least, the 9%, seems to be one of the better courses of actions to take.
What we, MY FELLOW HUMANS [1], absolutely do not need is more people being optimized towards using a computer solely as a tool for someone else ... especially because AI can replace 99%+ of them anyway.
[1] https://old.reddit.com/r/totallynotrobots/
Yes, this times a thousand. If we treat people like slaves, they become slaves. Treat them as if they are smart, and they will become smart. It's as simple as that.
One might almost say a Hird of Unix-Replacing Daemons.
I don't mind the inevitable death of System V. It's an archaic relic of the Linux era.
Going systemd-only is not necessarily a good choice (though I do understand it from a practical point of view). There are other, better alternatives to System V that are smaller and more modular, so you still get the Unix "feel" without the absurd complexity of interlinked shell scripts that System V relies on.
I'd like to see OpenRC getting adopted in System V's place. Upstart seems to be dead (outside of ChromeOS) but it would also have sufficed. Alas, I'm not someone with the time or knowledge to maintain these tools for LFS, and unless someone else steps up to do all the hard work, we'll probably see LFS go systemd-only.
That said, there's no reason to go full-fat systemd, of course.
It's an archaic relic of the Unix era.
The reason it is being removed is precisely because now we are in the Linux era, no longer in the Unix era.
Have another vote in favour of OpenRC, and even Upstart, if it somehow revives.
I think systemd is the one to learn now if you want to learn Linux. Maybe someone can make a Unix from Scratch for people more interested in the Unix philosophy than Linux per se.
SysVInit on Linux isn’t true Unix though as the way it abuses runlevels to start daemons was never intended by the original designers of init.
Yeah, people forget the degree to which sysvinit was hated at the time - "why are you forcing me to deal with an impenetrable forest of symlinks rather than simply hand-edit a couple of basic rc scripts?!?".
If the intention is to create a system that users can reason about, then sysvinit offers the worst of all possible worlds.
> why are you forcing me to deal with an impenetrable forest of symlinks rather than simply hand-edit a couple of basic rc scripts?
Run levels. That's it, sysvinit is about run levels. Each run level starts or kills off its own specific list of runnable things like applications, daemons, capabilities, etc.
Run levels were a desirable feature back in the day amongst System V Unix vendors, so each run level required its own kill and start scripts for each item. Run levels, for example, could take a running system from single user (root admin) mode to multi-user, multi-login, NFS sharing, full X11 mode in one command immediately as the scripts ran. This allowed rapid reconfiguration of a system, such as from a GIS workstation to a headless file server, etc. etc. as needed. Each system could be configured to boot to a specific run level. Rather than duplicate some or all such scripts across some or all run levels, symlinks were the solution.
For example, Solaris had run levels 0 through 6. Zero was a blunt force system halt; 1 was single root user admin mode; 2 was multi-user headless mode with NFS; 3 was multi-user X11 windows mode with NFS; 4 was unspecified and therefore kept for purely local configuration as desired; 5 was a planned, orderly system shutdown; and 6 was a planned, orderly system reboot. The root user could implement their choice of run level directly with the init command.
Each run level had its own run control directory (rc.d) under /etc/rc.d for its appropriate kill and start scripts, which were run in order of their K or S number, so dependencies had to be kept in mind when numbering, and curing a dependency failure was as simple as changing a script's number to rearrange the list. So, why copy S04blahblah from rc2.d to rc3.d when a symlink is far better?
It's not hard to understand when you get the big picture, and it wasn't hard to administer if you had the proper overview of it all. Admittedly, admins coming in cold would have to sort through it all, which is partly why it gained a reputation for murkiness when not properly documented by/for local admins. Keep in mind it was the era of administering sendmail macros and NIS tables by hand and you get the picture.
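To make that concrete, here is roughly what a run level directory looked like (Red Hat-style paths; Solaris put the rcN.d directories directly under /etc, and the script names here are made up):

    ls -l /etc/rc.d/rc3.d
    # K20sendmail -> ../init.d/sendmail   run with "stop" when entering run level 3
    # S15nfs      -> ../init.d/nfs        run with "start" when entering run level 3

    # add a start link for a new daemon, ordered after nfs:
    ln -s ../init.d/mydaemon /etc/rc.d/rc3.d/S20mydaemon

    # switch the running system to run level 3:
    init 3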
NOTE: edited for clarity
systemd is most certainly the most pragmatic one to learn, but if you're doing LFS to "learn" how a Linux system gets brought up, something lower-level may be a better idea to pick up.
All this stuff is versioned anyway, so if the point is learning, you can still read an old version of the book and use old versions of the repos.
With limited resources, sometimes practicality needs to win. Kudos to Bruce for putting aside his (valid) feelings on the subject and doing what is best for the team and community overall.
I disagree.
I will soon be releasing a distro that is free of systemd, wayland, dbus, and other troublesome software. It is built starting from LFS in 2019, and now consists of over 1,500 packages, cross compiling to x86-32/64, powerpc32/64, and others if I had hardware to test. It's built entirely from shell scripts which are clean, organized, and easy to read.
I need help to get the system ready for release in 60-90 days. In particular, I need a fast build system, as my current 12+ year old workstation is too slow. Alpha/beta testers are welcome too. Anyone who wants to help in some way or hear more details, please get in touch:
domain: killthe.net
user: dave
> I will soon be releasing a distro that is free of systemd, wayland, dbus, and other troublesome software.
What makes you decide that these are troublesome pieces of software? The usual argument against systemd is that it is monolithic and breaks the Unix paradigm.
But then you are going for X over Wayland? X is a monolithic application that breaks the Unix paradigms.
Are you just picking things because they are old, or is there a reason you decided to go with this setup?
The difference is that the people who designed X11 were honest in their intentions. The authors of systemd, wayland, etc are not. I'll just leave it at that.
(I recommend staying far away from "X11libre" also, for the same reason, with no further comment.)
Monolithic stuff is OK too, where it makes sense. The kernel is monolithic. ZFS is monolithic.
(Yes, this system has ZFS support. The module is built in to the kernel. In time it will support booting from ZFS also, when I finish the initrd code.)
There is a clear, solid reason for everything this system is or does. I'm not a contrarian or a purist, just someone with opinions gained from long experience who is not happy with the direction mainstream Linux is headed. My system is a zen garden of bliss compared to buggy garbage like Ubuntu.
Really, it's like someone added a turbo button. Ubuntu and friends are so bloated, laggy, and slow. I regularly use this system on 15-20+ year old hardware. The default window manager is Enlightenment e16. It's snappy and responsive everywhere.
KDE, Xfce, etc are supported also and are noticeably peppier than on mainstream distros, just due to the lack of bloat, gazillions of daemons running in the background, etc. Out of the box, nothing runs by default. You enable only what you want.
Another inviolable principle is that no application is allowed to originate or receive network traffic unless the user specifically requests it. There is ZERO network activity going on in the background. None of this steady stream of who knows what contacting who knows where that goes on with other systems. No auto update etc. No internet required or used during the system build. Python module installs do not consult the central repository or download anything. Meson or cmake does not download anything. Etc. All that's patched out and disabled.
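(For the curious, the kinds of switches involved look like the following; these are standard upstream options, not necessarily the exact changes in my build scripts:

    pip install --no-index --find-links /srv/wheels somepackage
    meson setup build --wrap-mode=nodownload
    cmake -S . -B build -DFETCHCONTENT_FULLY_DISCONNECTED=ON

plus patches for anything that tries to phone home regardless.)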
It's a distro that is meant to be forked. It's very easily done. It's a blank slate, a vanilla Linux system with subtle and tasteful improvements that is the ideal starting point to customize to your exact specifications. If you want to add in systemd and wayland, fine, I don't care, it's your system and you can build it according to your desires. People can use this platform to build their own custom OS and save themselves a ton of work vs. starting completely from scratch.
It's a system that can be audited. Everything is built with shell scripts, starting with source archives and patches that are applied during the build process. It's all inspectable and the process can be understood step by step.
It's a way to hit the ground running with a full featured, working system, while learning in the process. This distro will teach you what LFS would teach you, but with less of a "sheer cliff face" learning curve, letting you focus more on higher aspects of building the system while still learning the low level details in time.
The build is actually overall simpler than LFS despite being way more featured, with things like Ada support. (Yes, it has GNAT.) I just found a way to do it better, and kept iterating countless times to simplify and improve to the max.
Existing systems did not satisfy my requirements or standards of quality, so I just had to create a new one.
> The difference is that the people who designed X11 were honest in their intentions. The authors of systemd, wayland, etc are not. I'll just leave it at that.
Leave it at what? How is Wayland not honest about its intentions? It is completely transparent about the motivation behind the project. Whether you agree with the motivations is different, and it's fine to disagree with a project.
However, there hasn't been a scenario where Wayland hasn't been honest.
Yes, I am ignoring your side comments about systemd because I was asking about Wayland, and mixing the two together implies that you are just complaining about the new, rather than technical/architectural reasons.
(Plus I have to ask as "killthe.net" doesn't come up with anything)
Send me an email and I'll be happy to explain further, to whoever asks. I don't want to clutter up this thread with a bunch of arguing that will surely result, as the focus here is just on "going our separate ways" rather than throwing barbs at anyone, or causing more hard feelings.
People who like software that I don't personally like may continue to use it of course, with this system also even, it's just that it won't be in the official repository is all. But as the whole thing is designed and encouraged to be forked, that shouldn't be too much of a burden if someone likes other aspects of the system and wants to maintain their own 'systemd/wayland' version.
How did you get GTK3/4 to work without dbus?
I got rid of dbus in GTK3 by patching the code so that the "accessibility bridge" (to ATK) can be disabled. GTK4 is beneath contempt and will not be supported.
The system uses GTK2 wherever possible, or GTK3 when not. I will either port everything to GTK2 later or create some kind of shim library. Help wanted here. Porting back to GTK2 isn't hard, I just don't have time to work on any of that at the moment.
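(Side note, in case someone only needs a runtime workaround rather than a rebuild: as far as I know the at-spi bridge also honors an environment variable,

    NO_AT_BRIDGE=1 some-gtk3-app

which stops it from trying to reach the accessibility bus, though it obviously doesn't remove the dbus dependency from the build.)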
I'm running Gentoo without dbus and I'm stuck at gtk 3.24.34. I would love to see those patches. Your site appears to be down.
It's just HTTP only (no SSL) and there's nothing there. ... until now!
Here's some nice GTK3 patches for you:
http://killthe.net/patches/gtk-3.24.43-allow-disabling-atk-b...
http://killthe.net/patches/gtk-3.24.43-allow-transparent-win...
http://killthe.net/patches/gtk-3.24.43-allow-wheel-scrolling...
http://killthe.net/patches/gtk-3.24.43-appearance-tweaks-and...
http://killthe.net/patches/gtk-3.24.43-disable-mnemonics-del...
http://killthe.net/patches/gtk-3.24.43-file-chooser-tweaks.p...
http://killthe.net/patches/gtk-3.24.43-remove-dead-key-under...
http://killthe.net/patches/gtk-3.24.43-restore-old-context-m...
http://killthe.net/patches/gtk-3.24.43-set-default-settings....
http://killthe.net/patches/gtk-3.24.43-show-alternating-row-...
Note that GTK 3.24.43 is the last version of GTK3.
My system is full of patches like this to tweak, improve, and adjust things. The point is to get off the "upgrade" treadmill and focus on making things work right.
Thanks for your work! Getting off the "upgrade" treadmill really resonates with me.
Just to be clear, I did not write these patches, but have collected many like this via scouring the net. I think I did make the ATK one though.
If you'd like to be an alpha/beta/release tester of this system, hit me up via email please. I'll start with an initial closed alpha release here in a month or so, if there's interest.
Now for the donation drive: I have plenty of time and a stable situation to work on this system, but the one drawback is I have little funds--and unfortunately my workstation is getting pretty long in the tooth. (AMD FX. It's been a good system, but I'm getting Left Behind here.) The main thing holding me back is compile speed, especially doing work on Chromium and WebKit. It's 12+ hour compile times for either of those, with the latest C++ standards they're using. The system as a whole builds in about 48 hours on my computer.
So I'm hoping to bump into an "angel investor" who either has some old Xeon (Broadwell or newer?) hardware laying around they would donate, something with lots of cores and memory, or who can make a cash donation for me to buy the gear I'm looking at on Ebay. $400-500 is enough for a nice 5x upgrade. It amazes me how cheap this stuff is. We're talking $5000+ hardware when it was new, for peanuts. Still quite powerful.
(A better video card would be great too, if you're feeling generous. Mine is a GTX570. I'd love to have a GTX7xx or newer, or equivalent AMD. That's more of a want than a need however.)
I'm very interested in ppc64 gear too. I want this system to have first-class PPC support. Anyone got an old POWER8 or POWER9 system laying around, or 32-bit stuff? I've got this system building OK in Qemu for ppc64le but it is SLOW, as you can imagine. Like 5 seconds per line in configure scripts, lol.
If anyone out there is in a position where they can help this project in some way, email me please! Thank you.
By the way--I did not want to disable ATK to get rid of dbus, but only did so temporarily. Ultimately a better solution is to create a UNIX socket just for the ATK<>GTK bridge.
Accessibility should be something that the system fully supports. There is speech synthesis and other useful bits installed so far. Maybe someone would like to work on this project. Email me if interested.
I'm sorry to tell you, but I'm not really interested in a new distribution. I appreciate the effort of what you are trying to do, but I think you are wasting time maintaining a distribution instead of maintaining patches (or a fork). If you have the know-how to patch those cancers out, then do only that and let other people do the packaging. Just make them known and available - a github repo maybe?
So, I'm not going to test your distro or switch from my Gentoo. I like Gentoo a lot, most of all because it's so very-very easy to patch any official package. Just put the patch in /etc/portage/patches/<package> and that's it. It gets automatically applied on the next install.
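For example, with a hypothetical patch for GTK3 (x11-libs/gtk+ on Gentoo):

    mkdir -p /etc/portage/patches/x11-libs/gtk+
    cp my-fix.patch /etc/portage/patches/x11-libs/gtk+/
    emerge --oneshot x11-libs/gtk+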
I'm using a Phenom II x6 1100 on a Gigabyte 880G. Firefox compiles in about 3-4 hours I think, not really sure. I do all Gentoo updates over night and it's usually ready in the morning. I can't say about Chromium or webkit - never used them - but 12h seems waaay too long.
Sorry dude, it's about 7 years too late to tell me to stop.
If you like Gentoo, more power to you! It's not for me.
This isn't just another run of the mill distro. It's like nothing else that's out there.
I forgot to mention that I have PaleMoon on the system also, and it compiles in a much more reasonable time. Like two hours or so, I think.
Chromium and WebKit are ginormous, and worse, they are compiled with the latest C++ standards which are slow as hell to compile. Nothing wrong with my system, it just takes forever to compile this giant bloated crap. I need more CPU cores, to blast my way through the pile of work that needs to be done.
Look at the size of the Chromium source code archives these days. It's fucking outrageous. 15 compressed GB (and growing rapidly!) of third-party code vendored inside third-party code vendored inside third-party code, three or possibly even four levels deep! ("Yo Dawg...") Let's just have 5 complete copies of the LLVM suite in random places in there, because why not? Google has lost its marbles.
Yeah, I'm working to fix Chromium's little red wagon too. I'm on version 3 of my custom Chromium build. The binary of version 2 was slimmed down to 186 mb in size (compare to Google's version), with a 300 mb source tree (same) when I quit on it to start version 3. There was plenty more to take out. This latest version is going to be the best yet.
Personally I've been boycotting all chromium forks for about a decade now. Could consider dropping chromium altogether :P
I guess there are alternative "forks" like QtWebEngine that just try to bring in only the blink engine part.
Who's the guy with Firefox 147 32-bit x86 who downloaded a patch? Nice to see there's still at least a few 32-bit users left out there. My system cross compiles to i686, and builds as multilib (both 32-bit and 64-bit libraries) for x86-64 as well, FYI.
Some of these User Agents have to be fake. Android 6.0.1 with Chrome 144, really? lol
Some wise guy has a "Linux/SystemD" user agent. lol
This fella would like to have a word with you:
https://www.youtube.com/watch?v=XfELJU1mRMg
https://m.youtube.com/watch?v=atcqMWqB3hw
So, devuan?
No, not even close. Totally different projects. This one is for experts only, or those who want to become experts. The type of person who has been toying with the idea of building a LFS system but doesn't really want to go through all the work and headache (and it's a ton, to build a full system.) It also supports cross compiling to other architectures, which LFS does not.
This system has many powerful features like built in ccache/distcc support for the build, support for building in QEMU, etc. Eventually it will be fully sandboxed.
There is a heavy emphasis on Doing Things Right according to an old school way of thinking. Everything is kept as simple as possible, yet as full featured as is practical. A major goal is to have everything documented and explained, starting with the shell scripts which build the system step by step in an easy to follow manner.
No package manager currently, though a simple one is in the works which is integrated into the build scripts. It's not really needed. You just build a complete system with all packages you want installed in a single run, with your own configuration pre-loaded. This gets compressed to a tarball. Then to install, create a partition, extract the tarball, edit a few files, install the bootloader, set passwords, and go.
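Roughly, the install flow looks like this (device names and filesystem are placeholders, not a literal script from the distro):

    mkfs.ext4 /dev/sdX1
    mount /dev/sdX1 /mnt
    tar -xpf mydistro-x86_64.tar.xz -C /mnt
    # edit /mnt/etc/fstab, hostname, network config, ...
    grub-install --boot-directory=/mnt/boot /dev/sdX
    chroot /mnt /bin/sh -c 'passwd root'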
How is this best? It defeats the whole point. I’m going to stop recommending LFS to people wanting to learn about this stuff.
Learn about what stuff? Linux? System V UNIX?
I haven't done LFS since my tweens (and I'm almost 30 now), but I remember the sysvinit portion amounted to, past building and installing the init binary, downloading and extracting a bunch of shell scripts into the target directory and following some instructions for creating the right symlinks.
Obviously, you can go and check out the init scripts (or any other individual part of LFS) as closely as you wish, and it is easier to "see" than systemd. But I strongly dispute both that sysvinit is "Linux" (in that it constitutes a critical part of "understanding Linux") and that it's really all that understandable.
But setting aside all of that, and even setting aside the practical reasons given (maintenance burden), when the majority of "Linux" in the wild is based on systemd, if one wanted to do "Linux From Scratch" and get an idea of how an OS like Debian or Fedora works, you would want to build and install systemd from source.
For me, Linux From Scratch is not about compiling linux from scratch, but on building up an entire Linux distro from the ground up, understanding how every piece fits together.
Doing it via systemd is like drawing a big black box, writing LINUX on the side, and calling it a day.
You are necessarily working with very big blocks when you're doing this, anyway. You don't do a deep dive on a whole bunch of other topics in LFS, because otherwise the scope would become too big.
That's what I was trying to get at -- yes, you can say that sysvinit is easier to understand than systemd, and less of a black box. But, even still, a "real Linux distribution" is full of these black boxes, especially the closer you get to being able to run "real applications". I'd argue that once you get into full desktop seat management, you add so much complexity on top of sysvinit that the difference narrows...
Which is why I asked "learn about what stuff". I think if the goal is to learn about "Unix" or OS design/ideas, you're better off with a leaner, "pedagogical" OS, like xv6. If the goal is to piece together an OS and really understand each piece, I don't think you really want sysvinit. You want something closer to an /etc/rc.local that just kicks off a few daemons and hopes for the best.
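To be concrete, by the /etc/rc.local approach I mean nothing more than a single boot script that starts a handful of daemons, roughly like the sketch below; the daemon choices are only examples.

    #!/bin/sh
    # minimal "kick off a few daemons and hope for the best" boot script
    /usr/sbin/syslogd
    /usr/sbin/sshd
    /usr/sbin/crond
    exit 0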
You can argue that sysvinit makes a better "compromise" between usability and clarity, and I'd entertain that idea, but then I think dinit is far easier to understand than sysvinit. And of course, at that point you can shave yaks till you fill the bike shed with wool.
Realistically, as much as people may hate it, if you have to pick a single init to standardize on for clarity and "building an entire Linux distro from the ground up, understanding how every piece fits together", systemd is the most rational choice. It's the most representative of the ecosystem, and requires the least "extra layers" to make the "desktop layer" work.
"best" meaning the best decision the LFS team can make given their limited, unpaid time and resources. They feel maintaining guides for two parallel init systems is unsustainable even though they would prefer not to have systemd as the only option.
The actual best decision would be to stick with his principles and make LFS be sysvinit-only instead, with zero fucks given about Gnome/KDE if they refuse to play ball.
I for one will not be strong armed into systemd or any other tech. If KDE makes it impossible for me to run without systemd, it goes into the trash bin. I will just install Trinity (KDE3) and be done with it. (Gnome deserves no consideration whatsoever.)
> To me LFS is about learning how a system works.
I'd just like to interject for a moment. What you're referring to as Linux, is in fact, Linux plus systemd, or as I've recently taken to calling it, Linux/systemd.
Linux is not an operating system unto itself, but rather another free component of a fully functioning systemd system made useful by the systemd corelibs, systemd daemons, and vital systemd components comprising a full OS as defined by Poettering.
-- https://mastodon.social/@fribbledom/116002799114521341
This, of course, is tongue in cheek.
8/10
https://github.com/systemd/systemd/tree/main/src/core doesn't look like 1678 C files to me.
GitHub says 2.8k files when filtering for C (including headers...) https://github.com/search?q=repo%3Asystemd%2Fsystemd++langua...
If the project is even split into different parts that you need to understand... that already makes the point.
Well to be fair, you don't need to understand how SystemD is built to know how to use it. Unit files are pretty easy to wrap your head around, it took me a while to adjust but I dig it now.
To make an analogy: another part of LFS is building a compiler toolchain. You don't need to understand GCC internals to know how to do that.
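To illustrate the earlier point about unit files being easy to wrap your head around: a minimal service unit is only a handful of lines. The service name and binary path here are made up.

    # /etc/systemd/system/mydaemon.service  (hypothetical example)
    [Unit]
    Description=Example daemon
    After=network.target

    [Service]
    ExecStart=/usr/sbin/mydaemon --foreground
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

You'd then enable and start it with "systemctl enable --now mydaemon.service".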
> Well to be fair, you don't need to understand how SystemD is built to know how to use it.
The attitude that you don't need to learn what is inside the magic black box is exactly the kind of thing LFS is pushing against. UNIX traditionally was a "worse is better" system, where it's seen as better design to have a simple system whose internals you can understand, even if that simplicity leads to bugs. Simple systems that fit the needs of users can evolve into complex systems that fit the needs of users. But you (arguably) can't start with a complex system that people don't use and expect to get users.
If anyone hasn't read the full Worse Is Better article before, it's your lucky day:
https://www.dreamsongs.com/RiseOfWorseIsBetter.html
LFS is full of packages that fit your description of a black box. It shows you how to compile and configure packages, but I don't remember them diving into the code internals of a single one.
I understand not wanting to shift from something that is wholly explainable to something that isn't, but it's not the end of the world.
No, it's not the end of the world. And I agree, LFS isn't going to be the best resource for learning how a compiler works, or cron, or ntp. But the init process and systemd are so core to Linux. I can certainly see the argument that they should be part of the "from scratch" parts.
You still build it from scratch (meaning you compile from source)... they don't dive into Linux code internals either.
They still explain what an init system is for and how to use it.
The problem is ultimately that by choosing one, the other gets left out. So whatever is left out just has one more nail in its coffin. With LFS being the "more or less official how-to guide for building a Linux system", sysvinit is now essentially "officially" deprecated by Linux. This is what is upsetting people here.
I'm OK with that in the end because my system is a better LFS anyhow. The only part that bothers me is that the change was made with reservations, rather than him saying no and putting his foot down, insisting that sysvinit stay in regardless of Gnome/KDE. But I do understand the desire to get away from having to maintain two separate versions of the book.
Ultimately I just have to part ways with LFS for good, sadly. I'm thankful for these people teaching me how to build a Linux system. It would have been 100x harder trying to do it without them.
Linux is just a kernel, that does not ship with any sort of init system.. so I don't see how anything is being deprecated by Linux.
The LFS project is free to make any decisions that they want about what packages they're going to include in their docs. If anyone is truly that upset about this then they should volunteer their time to the project instead of commenting here about what they think the project should do IMO.
The whole point of LFS is to understand how the thing works.
nothing is actually stopping people from understanding systemd-init except a constant poorly justified flame war. it's better documented than pretty much everything that came before it.
In what way was Bruce incorrect, your one link excepted?
he is counting every c file in the systemd _repository_ which houses multiple projects, libraries and daemons. he equates that to the c file count for a single init. it's a disingenuous comparison. systemd-init is a small slice of the code in the systemd repository.
I'm guessing he shares my belief that systemd-init cannot exist in the wild on its own, correct? When you want a teacup, you have to get the whole 12 place dinner set.
IIRC the mandatory components are the init system, udev, dbus, and journald. Journald is probably the most otherwise-optional-feeling one (udev and dbus are both pretty critical for anything linux regardless), though you can put it into a passthrough mode so you don't have to deal with its log format if you don't want to. Everything else is optional.
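For reference, the passthrough mode mentioned above is roughly the following in journald.conf, assuming a traditional syslog daemon is also running to receive the forwarded messages:

    # /etc/systemd/journald.conf
    [Journal]
    Storage=none           # keep no journal of its own
    ForwardToSyslog=yes    # hand everything to the classic syslog daemon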
> ... dbus [is] pretty critical for anything linux regardless
Weird. If I weren't a sicko with OBS Studio installed on my multipurpose box [0], I'd not have dbus installed on it.
dbus is generally optional; not that many packages require it. [1]
[0] Two of its several purposes are video transcoding and file serving.
[1] This is another area where Gentoo Linux is (sadly) one of the absolute best Linux distros out there.
> he is counting every c file in the systemd _repository_ which houses multiple projects, libraries and daemons. he equates that to the c file count for a single init. it's a disingenuous comparison.
See, this is why when I refer to the Systemd Project, I spell it as "SystemD", and when I'm referring to systemd(1), I spell it "systemd". I understand that some folks who only wish to shit on the Systemd Project also spell it that way, but I ain't one of them.
> systemd-init is a small slice of the code in the systemd repository.
Given the context:
I'd say that the topic of discussion was SystemD, rather than systemd. systemd doesn't provide you with all that many capabilities; it's really not much more than what you get with OpenRC + a supervisor (either supervise-daemon or s6).
SysVInit is abusing runlevel scripts for starting daemons, which has always been a hack to be able to resolve dependencies between daemons.
Learning Linux or Unix from scratch shouldn’t include using crude hacks.
I'm with you on this. SysVinit is better than systemd, but far from perfect. I don't enjoy tediously maintaining all of those symlinks, and prefer the BSD approach myself.
One project on my distro is a new init that will be much, much simpler than SysV, more like BSD and without all the years of cruft.
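For anyone who hasn't lived with it, the symlink juggling being complained about above looks roughly like this; the service name, numbers, and paths are illustrative and vary between distros:

    # one real script, plus per-runlevel start/kill symlinks you maintain by hand
    /etc/init.d/sshd                        # the actual start/stop script
    /etc/rc3.d/S30sshd -> ../init.d/sshd    # S = start in runlevel 3, 30 = ordering
    /etc/rc0.d/K70sshd -> ../init.d/sshd    # K = kill when entering runlevel 0

    # adding a service to a runlevel means creating yet another symlink
    ln -s ../init.d/sshd /etc/rc3.d/S30sshd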
Linux is literally 62k C files. The amount of time you'll spend understanding how Linux works will dwarf the time you'd spend on systemd. At least when studying systemd you will be learning a more modern approach to init systems.
Most of those files are device/fs/network drivers and various arch support. The core you need to comprehend is not that much larger than systemd.
Or to put it another way, systemd has grown to become so large and complex it's like a kernel unto itself.
I am looking forward to UnixFromScratch and Year of Unix on the desktop as Linux more and more sells itself out to the overstuffed software virus that is System D.
I know this is a bit tongue in cheek, but the systemd hate is so old and tiresome at this point.
I need my systems to work. Not once in my career have I experienced a showstopping issue with systemd. I cannot say the same for sysV.
I can absolutely say that I've never had a showstopping problem with sysv. That is about 30 years as a unix & linux admin and developer.
The whole point of sysv is that the components are too small and too simple to produce "showstoppers". Each component, including init, does so little that there is no room for it to do something wrong that you, as the end user at run time, don't have the final power to both diagnose and address. And you can do so in approximately infinite different ways that the original authors never had to try to think up and account for ahead of time.
You have god power to see into the workings, and modify them, 50 years later in some crazy new context that the original authors never imagined. Which is exactly why they did it that way, not by accident nor because it was cave man times and they would invent fancier wheels later.
You're tired of hearing complaints? People still complain because the problem did not go away. I'm tired of still having to live with the fact that all the major distros bought in to this crap and by now a lot of individual packages don't even pretend to support any other option, and my choices are now to eat this crap or go off and live in some totally unsupported hut in the wilderness.
You can just go on suffering the intolerably boring complaints, as far as I'm concerned, until you grow some consideration for anyone else and thereby earn some for yourself.
The original authors went on to design Plan 9 and Inferno, and did not by any means consider UNIX perfect.
Also, Linux is trailing Solaris, OS X, AIX, ... here.
Your points are well taken. Linux is far from perfect and people shouldn't worship it. sysvinit is inferior to BSD init in my view and there are other questionable design decisions.
The biggest problem is that people are being railroaded into one thing or the other by the strong arm of corporations instead of being given options. My system helps with that.
I won't support systemd/wayland/etc, but others easily can add that in to their version of the distro if they like and support it themselves without too much work, as it's designed to be forked by anyone.
Equally tiring are the "it works for me so stop complaining" replies, which do nothing to stop the complaints but do increase the probability of arguments. Want the complaint posts to stop? Suggesting that they're in some way invalid is not the way.
Yeah, it's so tiresome that other people have a philosophy different from mine, one which seems to have prevailed for now. Like, OK, so sorry. Systemd on Linux is the worst of both worlds IMHO, an opinion which, according to GP, I'm apparently less and less entitled to. I like NetBSD and its rc init and config system. Oh no, systemd sore winners incoming!
> Not once in my career have I experienced a showstopping issue with systemd.
Like clockwork, we'd have a SystemD edge case cause a production-down incident at a (single!) customer site once per year. Inevitably, we'd burn anywhere from a half day to a week attempting to figure out WTF, and end up in some Github Issue where Systemd Project heavyweights go "Wow. Yeah, that looks bad. Maybe we should document it. Or fix it? IDK." and they'd do neither.
The project is full of accidental complexity that its maintainers can't be bothered to fix when unplanned interactions cause problems and are brought to their attention. I certainly don't blame them; that sort of work is only interesting to a very specific sort of personality, and that sort of personality doesn't tend to thrive in a typical software company.
I can also absolutely say that I've never had a showstopping problem with OpenRC in the nearly twenty-five years I've been using it. It's remarkable how reliable it is.
> and end up in some Github Issue where Systemd Project heavyweights go "Wow. Yeah, that looks bad. Maybe we should document it. Or fix it? IDK." and they'd do neither.
Do you have a reference? Not that I don't believe you, but I hated this behaviour from Poettering (although he seemed to more often blame the user) and we should totally raise issues like this. It's a mature product that shouldn't have sharp edges any more.
I'm afraid I don't have a reference. The combination of the bugs always being damn obscure, the sheer number of Github Issues filed against systemd/systemd, $DAYJOB keeping me busy with a huge variety of tasks, and the anger caused by the inappropriate lack of giveashit demonstrated by the project maintainers means that the details just get blown out of my head.
> ...we should totally raise issues like this. It's a mature product that shouldn't have sharp edges any more.
To whom would these issues be raised? Based on my personal and professional experience, the SystemD maintainers (and -for those who are paid to work on the project- those who manage them) seem to disagree that "eliminating sharp edges" is a big priority!
Imagine that, people on the internet disagreeing. I've had both sysv and sysd crap in my cheerios. The thing I appreciated about sysv was that it stayed in its lane and didn't want to keep branching out into new parts of the system. Sysvinit never proposed something like homed.
My experience, and the common experience I’ve read, is the exact opposite. Run scripts worked. They always worked. They were simple. I’ve run into so many difficulties with systemd, on the other hand. I gave up managing my own server as a result.
> Not once in my career have I experienced a showstopping issue with systemd. I cannot say the same for sysV.
I have had both ruin days for me. In particular the "hold down" when it detects service flapping has caused issues in both.
I use runit now. It's been rock solid on dozens of systems for more than a decade.
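For comparison, a runit service is just a directory (e.g. /etc/sv/sshd) containing a run script; the sshd example below is only illustrative:

    #!/bin/sh
    # runit runs this script and supervises the process it exec's;
    # -D keeps sshd in the foreground so runit can watch and restart it
    exec /usr/sbin/sshd -D

You enable the service by symlinking its directory into the service scan directory (/var/service or /etc/service, depending on the setup).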
I understand where you’re coming from but early systemd with both ubuntu and centos was a fucking mess. It’s good now but goddamn it was painful and the hate is 100% justified.
Might be good for some, I'm still having issues!
Funny you should mention CentOS, which it outlived.
OP here. I was hoping we could avoid the interminable, infernal discussion of systemd vis-a-vis emotional states.
What about Windows hate? Is that so old and tiresome now too?
I need my system to work!
While I'll ignore the System D hyperbole, your point about Unix has merit.
I think the *BSDs are also good, at least from an educational standpoint, with their relative simplicity and low system requirements. Since there is a lot of integration, making a from-scratch distro might take less material, but it could be supplemented with more in-depth sysadmin exploration.
From an education standpoint for those who really, really want to understand, the *BSD init and SysVinit systems require direct human administration. You break it, you fix it. Then, and only then, does learning systemd's ''then something happens behind the curtain'' type of automation make sense. If the student decides that one is more suitable than the other(s), they've done so from an enlightened vantage point.
I thought systemd was fairly straightforward, even if it does too many different things for my tastes. What's an example of it doing too much magic behind the curtain?
Bear in mind that the entire purpose of systemd is to replace a huge amount of previous system administration solutions in a fashion that is centralized and automated, and not in need of as much human intervention as previous init systems. For copious examples, look through these comments and the huge number of previous HN threads on this huge topic. That is my answer.
If there are so many such instances, surely you must have one that comes readily to mind, then.
I can do gainsaying too: surely you didn't look through these comments and the huge number of previous HN threads on this huge topic. Do your own work.
https://hn.algolia.com/?dateRange=last24h&page=0&prefix=fals...
I have read the first couple of pages of these results. Would you like to highlight one of these in particular?
Here's an example:
When I was building the initial version of my distro starting from a Linux Mint computer, one time I accidentally double-mounted the virtual filesystems (/tmp, /run, /proc, etc), on the target volume as my script was too primitive and didn't check the mounts first.
Exactly 60 seconds later, the whole system crashed.
Later I accidentally did this again, except this time immediately caught the problem and undid it. No matter--systemd still crashed 60 seconds later anyhow.
Or like the bug that was revealed a while back where the firmware EEPROM was writable by default in /sys or wherever it was, resulting in somebody's firmware getting overwritten and the system bricked. lol
That's the systemd life for you, in a nutshell. That sort of thing times a thousand. Not all at once, mind you--it will just take a nibble out of you here and there on and off until the end of time. After a while it will straight up fuck you, guaranteed. Which is exactly what it was designed to do.
Same with anything "Linux Puttering" touches. The guy who is now officially a Microsoft employee, as people were saying he really was all along.