There are many, many more such issues with that code. The person who posted it is new to C and had an AI help them write it. That's a recipe for disaster: it means the OP does not actually understand what they wrote. It looks nice but is full of footguns, and even though it is a useful learning exercise, it is also a great example of why it is better to run battle-tested frameworks than to inexpertly roll your own.
As a learning exercise it is useful, but it should never see production use. What is interesting is that the apparent cleanliness of the code (it reads very well) is obscuring the fact that the quality is actually quite low.
If anything, I think the conclusion should be that AI+novice does not create anything that is usable without expert review, and that probably adds up to a net negative, other than that the novice will (hopefully) learn something. It would be great if someone could put in the time to do a full review of the code; I have just read through it casually and already picked up a couple of problems, and I'm pretty sure that a thorough job would turn up many more.
> What is interesting is that the apparent cleanliness of the code (it reads very well) is obscuring the fact that the quality is actually quite low.
I think this is a general feature and one of the greatest advantages of C. It's simple, and it reads well. Modern C++ and Rust are just horrible to look at.
I slightly unironically believe that one of the biggest hindrances to rust's growth is that it adopted the :: syntax from C++ rather than just using a single . for namespacing.
I believe that the fanatics in the rust community were the biggest factor. They turned me off what eventually became a decent language. There are some language particulars that were strange choices, but I get that if you want to start over you will try to get it all right this time around. But where the Go authors tried to make the step easy and kept their ego out of it, it feels as if the rust people aimed at creating a new temple rather than just making a new tool. This created a massive chicken-and-egg problem that did not help adoption at all. Oh, and toolchain speed: for the longest time, the rust toolchain was terribly slow for non-trivial projects.
I don't remember any other language's proponents actively attacking the users of other programming languages.
> But where the Go authors tried to make the step easy and kept their ego out of it
That is very different to my memories of the past decade+ of working on Go.
Almost every single language decision they eventually caved on that I can think of (internal packages, vendoring, error wrapping, versioning, generics) was preceded by months if not years of arguing that it wasn't necessary, often followed by an implementation attempt that seems to be ever so slightly off just out of spite.
Let's not forget that the original Go 1.0 expected every project's main branch to maintain backward compatibility forever or else downstreams would break, and that without hacks (which eventually became vendoring) you could not build anything without an internet connection.
To be clear, I use Go (and C... and Rust) and I do like it on the whole (despite and for its flaws) but I don't think the Go authors are that different to the Rust authors. There are (unfortunately) more fanatics in the Rust community but I think there's also a degree to which some people see anything Rust-related as being an attack on other projects regardless of whether the Rust authors intended it to be that way.
Fair enough.
> I believe that the fanatics in the rust community were the biggest factor.
I second this; for a few years it was impossible to have any sort of discussion on various programming places when the topic was C: the conversation would get quickly derailed with accusations of "dinosaur", etc.
Things have gone quiet recently (the last three years or so) and there have been far fewer derailments.
As an outsider, I don't really see Rust having done anything recently that they weren't already doing from the start.
What seems to have changed in recent years is the buy-in from corporations that seemingly see value in its promises of safety. This seems to be paired with a general pulling back of corporate support from the C++ world as well as a general recession of fresh faces, a change that at least from the sidelines seems to be mostly down to a series of standards committee own-goals.
I'm not sure that there is a recession of corporate support from C++. Just that the proportion of companies that need C++ is smaller than it once was.
I like the safety promise of Rust. But the complicated interop story with C and C++ hurt it a lot. I mean, in a typical codebase, what proportion of bugs will be memory-safety related vs other reasons? Ideally, we could just wrap the safety-critical bits in a memory-safe wrapper and continue to use C and C++ for everything else.
Being a C++ developer and trafficking mostly in C++ spaces, there is a phenomenon I've noticed that I've taken to calling Rust Derangement Syndrome. It's where C and C++ developers basically make Rust the butt of every joke, and make fun of it in a way that is completely out of proportion to how much they interact with Rust developers in the wild.
It's very strange to witness. Annoying advocacy of languages is nothing new. C++ was at one point one of those languages, then it was Java, then Python, then Node.js. I feel like if anything, Rust was a victim of a period of increased polarization on social media, which blew what might have been previously seen as simple microaggressions completely out of proportion.
I don't think Rust will ever be as big as C++ because there were fewer options back then.
These days Go/Zig/Nim/C#/Java/Python/JS and other languages are fast enough for most use cases.
And Rust's learning curve doesn't help either. C++ was basically C with OOP on steroids; Rust is very different.
I say that because I wouldn't group Rust opposition with any of those languages you cited. It's different for mostly different reasons and magnitudes.
As someone who was there: a few things helped C++ adoption, and even then it wasn't without the C vs C++ flamewars that endure to this day.
- At the time, with a few minor differences, C++ was TypeScript for C, thus very easy to adopt into existing projects
- Being born in the same birthplace as C and UNIX meant all C compiler vendors saw it as added value to have C++ as part of their offering, and it was natural that every UNIX SDK also had C++ support available alongside C.
- Apple, Metrowerks, IBM, Borland and Microsoft helped to push C++ adoption, by making it the official way to use application frameworks. MacApp (originally in Object Pascal), PowerPlant, CSet++, Turbo Vision/OWL/VCL, and MFC respectively.
This kept C++ as the language to go for performance in enterprise computing, while Delphi and VB got the "easy" development role, until Java and .NET took over all those frameworks.
Rust doesn't have this kind of industry wide push, even in OSes where it is being embraced like Windows and Android, note that it isn't being pushed as yet another way to write userspace applications, rather low level OS services.
> Rust doesn't have this kind of industry wide push, even in OSes where it is being embraced like Windows and Android, note that it isn't being pushed as yet another way to write userspace applications, rather low level OS services.
This seems apropos in a world where C++ has been bleeding userspace buy-in for longer than I've been professionally programming.
I started learning Rust a few months ago in an attempt to teach an old dog new tricks, and while it's quite pleasant as far as it went, I can think of several classes of programs that I would be reluctant to use the language for. But I wouldn't dream of using C++ for those types of programs either.
There are rumors floating around that Microsoft is rolling their own rustc-codegen-gcc equivalent paired with their C2 codegen backend. I don't know what to make of those rumors, but they helped reassure me that the time I've invested thus far hasn't been wasted.
Not sure about the new backend, but they are indeed quite invested.
"From Blue Screens to Orange Crabs: Microsoft's Rusty Revolution"
https://www.youtube.com/watch?v=uDtMuS7BExE
Software vulnerabilities are an implicit form of harassment.
I'm hoping that's meant to satirise the rust community, because it's horseshit like this that makes a sizeable subset of rust evangelists unbearable.
> I don't remember any other language's proponents actively attacking the users of other programming languages.
I just saw someone on Hacker News saying that Rust was a bad language because of its users.
Yawn. Really, if you have nothing to say don't do it here.
Gotcha hypocrisy might be a really cheap thing to point out, but they're not wrong.
I have noticed my fair share of Rust Derangement Syndrome in C++ spaces that seems completely out of proportion to the series of microaggressions they eventually point to when asked "Why?"
It’s interesting: over the past 15 years I’ve had occasion to work with other C/C++ devs on various contracts at probably 50 distinct companies. Not once has Rust even come up in casual conversation.
I'm probably a little younger than you, so it's likely a generational thing. I also notice it's a lot more pervasive in internet-driven watercoolers than face to face.
The safer the C code, the more horrible it starts looking though... e.g.
I don't understand why people think this is safer, it's the complete opposite.
With that `char msg[static 1]` you're telling the compiler that `msg` can't possibly be NULL, which means it will optimize away any NULL check you put in the function. But it will still happily call it with a pointer that could be NULL, with no warnings whatsoever.
The end result is that with an "unsafe" `char *msg`, you can at least handle the case of `msg` being NULL. With the "safe" `char msg[static 1]` there's nothing you can do -- if you receive NULL, you're screwed, no way of guarding against it.
For a demonstration, see[1]. Both gcc and clang are passed `-Wall -Wextra`. Note that the NULL check is removed in the "safe" version (check the assembly). See also the gcc warning about the "useless" NULL check ("'nonnull' argument 'p' compared to NULL"), and worse, the lack of warnings in clang. And finally, note that neither gcc or clang warn about the call to the "safe" function with a pointer that could be NULL.
[1] https://godbolt.org/z/qz6cYPY73
> I don't understand why people think this is safer, it's the complete opposite.
Yup, and I don't even need to check your godbolt link - I've had this happen to me once. It's the implicit casting that makes it a problem. You cannot even typedef it away as a new type (the casting still happens).
The real solution is to create and use opaque types. In this case, wrapping the `char[1]` in a struct would almost certainly generate compilation errors if any caller passed the wrong thing in the `char[1]` field.
Compared to other languages, this is still nice.
It is - like everything else - nice because you, me and lots of others are used to it. But I remember starting out with C and thinking 'holy crap, this is ugly'. After 40+ years looking at a particular language it no longer looks ugly simply because of familiarity. But to a newcomer C would still look quite strange and intimidating.
And this goes for almost all programming languages. Each and every one of them has warts and issues with syntax and expressiveness. That holds true even for the most advanced languages in the field (Haskell, Erlang, Lisp) and more so for languages that were originally designed for 'readability'. Programming is by its very nature more akin to solving a puzzle than to describing something. The puzzle is how to get the machine to do something, to do it correctly, to do it safely and to do it efficiently, all while satisfying the constraint of how much time you are prepared (or allowed) to spend on it. Picking the 'right' language will always be a compromise on some of these; there is no programming language that is perfect (or even just 'the best' or 'suitable') for all tasks, and there are no programming languages that are better than all others for any subset of tasks unless that subset is very small.
I agree that the first reaction usually is only about what one is used to. I have seen this many times. Still, of course, not all syntax is equally good.
For example, the problem with Vec<Vec<T>> for a 2D array is not that one is not used to it, but that the syntax is just badly designed. Not that C would not have problematic syntax, but I still think it is fairly good in comparison.
C has one massive advantage over many other languages: it is just a slight level above assembler and it is just about as minimal as a language can be. It doesn't force you into an eco-system, plays nice with lots of other tools and languages and gets out of the way. 'modern' languages, such as Java, Rust, Python, Javascript (Node) and so on all require you to buy in to the whole menu, they're not 'just a language' (even if some of them started out like that).
Not forcing you into an eco-system is what makes C special, unique and powerful, and this aspect is not well understood by most critics. Stephen Kell wrote a great essay about it.
Meanwhile, in Modula-2 from 1978, that would be
Now you can use LOW() and HIGH() to get the lower and upper bounds, naturally bounds-checked unless you disabled the checks, locally or globally.

This should not be downvoted; it is both factually correct and a perfect illustration that these problems were already solved, and ages ago at that.
It is as if just pointing this out already antagonizes people.
A certain group of people likes to pretend before C there were no other systems programming languages, other than BCPL.
They ignore everything that happened since 1958 (JOVIAL being a first attempt), and thus all of C's failings are excused because it was supposedly discovering the world.
I think the main reason you see this happening over and over again is because we're teaching this whole discipline wrong. By 1960 most of the problems in software development were known and had one or more solutions. Knuth spent decades just cataloging what was mostly already known (and moved the field forward in quite a few occasions as well).
And yet, you can't go a day without someone declaring that now is the time to do it right, this time it will be different. And then they proceed to do one thing after another for which the outcome is already known, just not to them. I think the best way to teach would be to start off with a fairly detailed history of what had gone before, just to give people a map and some basic awareness of the degree to which things have already been done, rather than to find new and interesting ways to shoot themselves in the foot (again).
Unlike actual engineering, software "engineering" as a field has decided to reinvent itself every generation - worse, with every turn of the trends, even with every project and person. The majority of practitioners are in it for superficial reasons, unaware of the field's rich history and culture.
With ignorance comes arrogance of an individualist intellectual, thinking their unique revolutionary contribution will wow the public and move the field forward. Except inevitably they're not only reinventing the wheel but the entire automobile, without knowing basic principles and the work of predecessors. It has a lot in common with modern art.
> we're teaching this whole discipline wrong
I sometimes think languages after C, like C++ and Java, were misguided in some ways. Sure they provided business value, brought new ideas, and the software worked - but their popularity came at a cost of leaving countless great thoughts behind in history, and resulted in a poverty of software culture, education and imagination.
There are optimistic signs of people returning to the roots, re-learning the lessons and re-discovering ideas. I think many are coming to realize the need for a reformation of sorts.
> should never see production use.
I have an issue with strongly worded opinions like this. I wrote plenty of crappy Delphi code while learning the language that saw production use, and I made a living from it.
Sure, it wasn't the best experience for users, it took years to iron out all the bugs and there was plenty of frustration during the support phase (mostly null pointer exceptions and db locks in gui).
But nobody would be better off now if that code never saw production use. A lot of business was built around it.
Buggy code that just crashes or produces incorrect results is a whole different category. In C, a bug can compromise a server and your users. See the OpenSSL Heartbleed vulnerability as a prime example.
Once upon a time, you could put up a relatively vulnerable server, and unless you got a ton of traffic, there weren't too many things that would attack it. Nowadays, pretty much anything Internet facing will get a constant stream of probes. Putting up a server requires a stricter mindset than it used to.
There are minimum standards for deployment to the open web. I think - and you're of course entirely free to have a different opinion - that those are not met with this code.
Yes, I have lots of opinions!
I guess the question at spotlight is: At what point would your custom server's buffer overflow when reading a header matter and would that bug even exist at that point?
Could a determined hacker get to your server without even knowing what weird software you cooked up and how to exploit your binary?
We have a lot of success stories born from bad code. I mean look at Micro$oft.
Look at all the big players like discord leaking user credentials. Why would you still call out the little fish?
Maybe I should create a form for all these ahah.
> Could a determined hacker get to your server without even knowing what weird software you cooked up and how to exploit your binary?
Yes.
Yes but how? After the overflow they still have to know the address of the next call site and the server would be in a UB state.
The code is on github. Figure out a way to get a shell through that code and you're hosed if someone recognizes it in active use.
I mean the hacker won't know what software is running on the server, unless the server announces itself, which can be traced to the repo, but then, why? Who cares about this guy's VPS? This whole thread makes no sense to me and I'm the only one questioning.
> This whole thread makes no sense to me and I'm the only one questioning.
That may well be because this isn't your field?
Or maybe well thought out, intelligent responses are a rare thing. Occam's razor suggests the latter.
UB state doesn’t mean totally uncontrollable or opaque.
There are lots of ways the server could leak information about its internal state, and exploits have absolutely been implemented in the past based only on what was visible remotely.
Yeah, I recently wrote a moderate amount of C code [1] entirely with Gemini, and while it was much better than what I initially expected, it needed constant steering to avoid inefficient or less safe code. It needed extensive fuzzing to get the minimal amount of confidence, which caught at least two serious problems---seriously, it's much better than most C programmers, but still.
[1] https://github.com/lifthrasiir/wah/blob/main/wah.h
I've been doing this the better part of a lifetime and I still need to be careful so don't feel bad about it. Just like rust has an 'unsafe' keyword I realize all of my code is potentially unsafe. Guarding against UB, use-after-free, array overruns and so on is a lot of extra work and you only need to slip up once to have a bug, and if you're unlucky something exploitable. You get better at this over the years. But if I know something needs to be bullet proof the C compiler would not be my first tool of choice.
One good defense is to reduce your scope continuously. The smaller you make your scope the smaller the chances of something escaping your attention. Stay away from globals and global data structures. Make it impossible to inspect the contents of a box without going through a well defined interface. Use assertions liberally. Avoid fault propagation, abort immediately when something is out of the expected range.
A strategy that helps me is to not use open-coded pointer arithmetic or string manipulation but to encapsulate those behind safe bounds-checked interfaces. Then essentially only lifetime issues remain, and for those I usually have a simple policy and clearly document any exception. I also use signed integers and the sanitizer in trapping mode, which turns any such issue I may have missed into a run-time trap.
This is why I love C. You can build these guard rails at exactly the right level for you. You can build them all the way up to CPython and do garbage collection and constant bounds checking. Or keep them at just raw pointer math. And everywhere in between. I like your approach. The downside being that there are probably 100,000+ bespoke implementations of similar guard rails where python users for example all get them for free.
It definitely is a lot of freedom.
But the lack of a good string library is by itself responsible for a very large number of production issues, as is the lack of foresight regarding de-referencing pointers that are no longer valid. Lack of guardrails seems to translate into 'do what you want', not necessarily 'build guard rails at the right level for you'; most projects simply don't bother with guardrails at all.
Rust tries to address a lot of these issues, but it does so by tossing out a lot of the good stuff as well and introducing a whole pile of new issues and concepts that I'm not sure are an improvement over what was there before. This creates a take-it-or-leave it situation, and a barrier to entry. I would have loved to see that guard rails concept extended to the tooling in the form of compile time flags resulting in either compile time flagging of risky practices (there is some of this now, but I still think it is too little) and runtime errors for clear violations.
The temptation to 'start over' is always there. I think C, with all of its warts and shortcomings, is not the best language for a new programmer to start with if they want to do low level work. At the same time, I would - still, maybe that will change - hesitate to advocate for rust; it is a massive learning curve compared to the kind of appeal that C has for a novice. I'd probably recommend Go or Java over both C and rust if you're into imperative code and want to do low level work. For functional programming I'd recommend Erlang (if only because of the very long term view of the people who built it) or Clojure, though the latter seems to be on the decline.
I think the C standard should provide some good libraries, e.g. a string library. But in any case the problem with 100000+ bespoke implementations in C is not fixed by designing new programming languages and also adding them to the mix. Entropy is a bitch.
> A strategy that helps me [...]
In another comment recently I opined that C projects, initiated in 2025, are likely to be much more secure than the same project written in Python/PHP (etc).
This is because the only people choosing C in 2025 are those who have been using it already for decades, have internalised the handful of footguns via actual experience and have a set of strategies for minimising those footguns, all shaped with decades of experience working around that tiny handful of footguns.[1]
Sadly, this project has rendered my opinion wrong - it's a project initiated in 2025, in C, that was obviously done by an LLM, and thus is filled with footguns and over-engineering.
============
[1] I also have a set of strategies for dealing with the footguns; I would guess that if we sat down together and compared notes, our strategies would have more in common than they would differ.
If you want something fool-proof, where a statistical code generator will not generate issues, then C is certainly not a good choice. But other languages will cause issues too. I think for vibe-coding a network server you might want something sandboxed with all security boundaries outside, in which case it does not really matter anymore.
This is exactly my problem with LLM C code: lack of confidence. On the other hand, when my projects get big enough that I cannot keep the code base loaded into my brain's cache, my confidence eventually has to come from extensive testing regardless. So maybe it's not such a bad approach.
I do think that LLM C code if made with great testing tooling in concert has great promise.
That generalizes to anything LLM related.
> It needed extensive fuzzing to get the minimal amount of confidence, which caught at least two serious problems---seriously, it's much better than most C programmers, but still.
How are you doing your fuzzing? You need either valgrind (or compiler sanitiser flags) in the loop for a decent level of confidence.
The "minimal" amount of confidence, not a decent level of confidence. You are completely right that I need much more to establish anything higher than that.
I agree that it reads really well which is why I was also surprised the quality is not high when I looked deeper. The author claims to have only used AI for the json code, so your conclusion may be off, it could just be a novice doing novice things.
I suppose I was just surprised to find this code promoted in my feed when it's not up to snuff. And I'm not hating, I do in fact love the project idea.
The irony is that AI could also have been used to audit the code and find these issues. All the author had to do was ask.