A hobby audio and text analysis application I've written, with no particular attention to low-level performance beyond algorithmic choices, runs 4x as fast on .NET 10 vs .NET 8. Pretty much every optimization discussed here applies to that app. Great work, kudos to the dotnet team. C# is, IMO, the best cross-platform GC language. I really can't think of anything that comes close in terms of performance, features, ecosystem, developer experience.
C# will be a force to reckon with if/when discriminated unions finally land as a language feature.
I think people who last looked at C# 10 years ago or haven't adapted to new language features seriously don't know how good C# is these days.
Switch expressions with pattern matching are absolutely killer[0] for their terseness (a small sketch follows below).
Also, it is possible to use OneOf[1] and Dunet[2] to get access to DUs today.
[0] https://timdeschryver.dev/blog/pattern-matching-examples-in-...
[1] https://github.com/mcintyre321/OneOf
[2] https://github.com/domn1995/dunet
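For anyone who hasn't seen them, a minimal, self-contained sketch of the switch-expression style being praised (the `Shape` hierarchy is invented for illustration):

    using System;

    // Hypothetical shape hierarchy, purely to show the syntax.
    public abstract record Shape;
    public sealed record Circle(double Radius) : Shape;
    public sealed record Rectangle(double Width, double Height) : Shape;

    public static class Geometry
    {
        // One expression covers type tests, property tests, and the fallback.
        public static double Area(Shape shape) => shape switch
        {
            Circle { Radius: var r } => Math.PI * r * r,
            Rectangle { Width: 0 } or Rectangle { Height: 0 } => 0,
            Rectangle { Width: var w, Height: var h } => w * h,
            _ => throw new ArgumentOutOfRangeException(nameof(shape)),
        };
    }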
I write C# and Rust full-time. Native discriminated unions (and their integration throughout the ecosystem) are often the deciding factor when choosing Rust over C#.
Very hard to imagine teams cross-shopping C# and Rust with DUs being the deciding factor. The tool chains, workflows, and use cases are just so different, IMO. What heuristics was your team using to decide between the two?
This surprises me.
If you want the .NET ecosystem and GC conveniences, there is already F#. If you want no GC and RAII-style control, then you would already pick Rust.
> OneOf
I do like/respect C#, but come on now. I know they're fixing it, but the rest of the language was designed the same way and thus still has this vestigial layer of OOP hubris.
It's up to each team to decide how they want to write their code. TypeScript is the same, with JS having a "vestigial" `class` (you can argue that "it's not the same", but nevertheless it is possible to write OOP-style code in JS/TS, and in fact it is the norm in many packages like Nest.js).
The language is a tool; teams decide how to use the tool.
For me, it will be if they ever get checked errors of some sort. I don’t want to use a language with unchecked exceptions flying about everywhere. This isn't saying I want checked exceptions either, but I think if they get proper unions and then have some sort of error union type it would go a long way.
You can get an error union now: https://github.com/amantinband/error-or
The issue is the ecosystem and standard library. They will still be throwing unchecked exceptions everywhere.
> C# is, imo, the best cross platform GC language. I really can't think of anything that comes close
How about F#? Isn't F# mostly C# with better ergonomics?
Personally I love F#, but I feel the community is probably even smaller than OCaml's...
I once got a temporary F# role without any F# experience simply by having 7 YoE with C# and the knowledge that F# exists.
As much as I'd like to do more with it, the "just use F#" idea touted in this thread is a distant pipe dream for the vast majority of teams.
He means the runtime, the .NET CLR. They share the same runtime.
It is, but in practice it's very hard to find programmers for it.
Lmao, functional programming is far from ergonomic
F# is hardly modern functional programming. It's more like a better Python with types. And that's much more ergonomic than C#.
Python and F# are not very similar. A better comparison is OCaml. F# and OCaml are similar. They're both ML-style functional languages.
I'd much rather code F# than Python, it's more principled, at least at the small scale. But F# is in many ways closer to modern mainstream languages than a modern pure functional language. There's nothing scary about it. You can write F# mostly like Python if you want, i.e. pervasive mutation and side effects, if that's your thing.
If Python is the only language you have to compare other languages to, all other programming languages are going to look like "Python with X and Y differences". It makes no sense to compare Python to F# when OCaml exists and is a far closer relative. F# isn't quite "OCaml on .NET" but it's pretty close.
It absolutely does make sense to compare it to the world's most popular programming language, especially when it's dismissed as "functional programming". Who benefits from an OCaml comparison? You think F# should be marketed to OCaml users who might want to try dotnet? That's a pretty small market.
Python is the world's most used scripting language, but for application programming languages there are other languages that are widely used and better to compare to F#. For example, C# and Java.
F# was pitched by Microsoft to be used in areas where Python dominates, especially for scripting in the finance domain and "rapid application development". So it doesn't make sense at all that C# and Java are a "better comparison".
> F# was pitched by Microsoft to be used in areas where Python dominates
Haha, no. Microsoft barely talks about F# at all, and has largely left the evolution of the language up to the open source community that supports it. Furthermore, you shouldn't take your cues about what a language is best suited for from marketing types, you should evaluate it based on its strengths as a language and broader ecosystem. If you seriously doubt that C# is a better comparison to F# than Python, then I suspect you haven't used either C# or F# and you're basing your views on marketing fluff.
Less of the personal attacks please; you know nothing about me. I actually think it is you who is missing context here. Don Syme personally visited and presented at a variety of investment banks. He was the creator, not a marketing type. I was present at one of his pitches and met him. One bank, Credit Suisse, ended up adopting it. Any comparisons he made to C# were based on readability and time to market (C# is very verbose and boilerplate-heavy compared to both Python and F#). This was all in the 2010-2015 timeframe. Python ended up winning in these markets. My point has always been that this now puts F# in a difficult position: it's simply not radical enough to disrupt, but it still carries the perceived "functional programming" barrier to entry.
It's so weird to describe F# as "Python with Types." First of all, Python is Python with Types. And C# is much more similar to Python than F# is.
It all depends on the lens one chooses to view them through. None of them are really "functional programming" in the truly modern sense, even F#. As more and more mainstream languages (such as Python) gain pattern matching and algebraic data types, and feature lambdas and immutable values, these languages converge. However, you don't really get the promises of functional programming, such as guaranteed correct composition and easier reasoning/analysis; for that, one needs at least purity and perhaps even totality. That carries the burden of proof, which means things get harder and perhaps too hard for some (e.g. the parent poster).
If purity is a requirement for "real" functional programming, then OCaml or Clojure aren't functional. Regarding totality, even Haskell has partial functions and exceptions.
Both OCaml and Clojure are principled and well designed languages, but they are mostly evolutions of Lisp and ML from the 70s. That's not where functional programming is today. Both encourage a functional style, which is good. And maybe that's your definition of a "functional language". But I think that definition will get increasingly less useful over time.
What is an example of a real functional language for you?
Haskell. But there are other examples of "pure functional programming". And the state of the art is dependently typed languages, which are essentially theorem provers but can be used to extract working code.
Like Lean 4?
I, too, am curious and keep checking back for a reply!
Sure, Python has types as part of the syntax, but Python doesn't have types like Java, C#, etc. have types. They are not pervasive and the semantics are not locked down.
Exactly what I've observed in practice because most devs have no background in writing functional code and will complain when asked to do so.
Passing or returning a function seems a foreign concept to many devs. They know how to use lambda expressions, but rarely write code that works this way.
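For readers unfamiliar with the jargon, "passing or returning a function" in C# looks like the following (a minimal, self-contained illustration; the names are invented):

    using System;

    class Demo
    {
        // Passing a function: ApplyTwice takes another function as input.
        static int ApplyTwice(Func<int, int> f, int x) => f(f(x));

        // Returning a function: MakeScaler builds and returns a closure.
        static Func<int, int> MakeScaler(int factor) => n => n * factor;

        static void Main()
        {
            var triple = MakeScaler(3);
            Console.WriteLine(ApplyTwice(triple, 2)); // prints 18
        }
    }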
We adopted ErrorOr[0] and have a rule that core code must return ErrorOr<T>. Devs have struggled with this and continue to misunderstand how to use the result type.
[0] https://github.com/amantinband/error-or
That really depends on your preferred coding style.
Honestly, this sounds like you've never really done it. FP is much better for ergonomics, developer productivity, correctness. All the important things when writing code.
I like FP, but your claim is just as baseless as the parent’s.
If FP was really better at “all the important things”, why is there such a wide range of opinions, good but also bad? Why is it still a niche paradigm?
Having worked with C# professionally for a decade, going through the changes with LINQ, async/await, Roslyn, and the rise of .NET Core, to .NET Core becoming .NET, I disagree. I certainly think that C# is a great tool and that it's the best it has ever been. It also relies on very implicit behaviour, and it is built upon OOP design principles and a bunch of "needless" abstraction. Things I personally have come to view as anti-patterns over the years. This isn't because I specifically dislike C#; you could find me saying something similar about Java.
I suspect that the hidden indirection and runtime magic may be part of why you love the language. In my experience, however, it leads to poor observability, opaque control flow, and difficult debugging sessions in every organisation and company I've ever worked for. It's fair to argue that this is because the people working with C# are bad at software engineering. Similar to how Uncle Bob will always be correct when he calls teams out for getting his principles wrong. To me that means the language itself has a poor design fit for software development in 2025. Which is probably why we see more and more Go adoption, due to its explicit philosophies. Though to be fair, Python seems to be "winning" as far as adoption goes in the cross-platform GC language space. Having worked with Django-Ninja I can certainly see why. It's so productive, and with stuff like Pyrefly, UV and Ruff it's very easy to make it a YAGNI experience with decent type control.
I am happy you enjoy C# though, and it's great to see that it is evolving. If they did more to enhance the developer experience, so that people were less inclined to do bad engineering on a Thursday afternoon after a long day of useless meetings, then I would probably agree with you. I'm not sure any of the changes going toward .NET 10 are going in that direction though.
You are missing the forest for the trees.
C# has increasingly become more terse (e.g. switch expressions, collection initializers, object initializers, etc.) and, IMO, is a good balance between OOP and functional[0].
Functions are first-class objects in C# and teams can write functional-style C# if they want. But I suspect that this doesn't scale well in human terms, as we've encountered a LOT of trouble trying to get TypeScript devs to adopt more functional programming techniques. We've found that many devs like to use functional code (e.g. LINQ, `.filter()`, `.map()`) but dislike writing functional code, because most devs are not wired this way and do not have any training in how to write functional code or in understanding monads. Asking these devs to use a monad has been like asking kids to eat their carrots.
Across much of our backend TS codebase, there are very, very few cases where developers accept a function as input or return a function as output (almost all of it written by 1 dev out of a team of ~20).
Having been working with Nest.js for a while, it's clear to me that most of these abstractions are not "needless" but actually "necessary" to manage the complexity of apps beyond a certain scale, and the reasons are less technical and more about scaling teams and communicating concepts. Anyone that looks at Nest.js will immediately see the similarities to Spring Boot or .NET Web APIs, because it fills the same niche. Whether you call a `*Factory` a "factory" or something else, the core concept of what the thing does still exists whether you're writing C#, Java, Go, or JS: you need a thing that creates instances of things.
You can say "I never use a factory in Go", but if you have a function that creates other things or other functions, that's a factory...you're just not using the nomenclature. Good for you? Or maybe you misunderstand why there is standard nomenclature for common patterns in the first place and are associating these patterns with OOP when, in reality, they are almost universal and are really human-language abstractions for programming patterns.
[0] https://medium.com/itnext/getting-functional-with-c-6c74bf27...
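To make the nomenclature point concrete, here is the same "factory" concept spelled both ways (an illustrative sketch, not from any real codebase):

    using System;

    // OOP nomenclature: the factory is a named type.
    interface IConnectionFactory
    {
        Connection Create();
    }

    // "No factories here" style: a function that creates things
    // is still a factory, just without the name.
    static class Factories
    {
        public static Func<Connection> ForHost(string host) =>
            () => new Connection(host);
    }

    class Connection
    {
        public Connection(string host) { /* connect to host */ }
    }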
I notice that none of the examples in your blog entry on functional C# deal with error handling. I know that is not the point of your article, but that is actually one of my key issues with C# and its reliance on the implicit: like so many other parts of C#, you'd probably hand errors over to an exception handler. I'd much rather you dealt with them explicitly right where they happen, and I would prefer if you were actually forced to do so for examples like yours. This is because implicit error handling is hard. I have no doubt you do it well, but it is frankly rare to meet a C# developer who has as much of an understanding of the language as you clearly have.
I think this is an excellent blog post by the way. My issues with C# (and this applies to a lot of other GC languages) is that most developers would learn a lot from your article. Because none of it is an intuitive part of the language philosophy.
I don't think you should never use OOP or abstractions. I don't think there is a golden rule for when you should use either. I do think you need to understand why you are doing it, though, and C# sort of makes people go to abstractions first, not last, in my experience. I don't think these changes to the GC are going to help people who write C# without understanding C#, which is frankly most C# developers around here. Because Go is opinionated and explicit, it's simply an issue I have to deal with less in Go teams. It's not an issue I have to deal with less in Python teams, but then, everyone who loves Python knows it sucks.
My team just recently made the switch from a TS backend to a C# backend for net new work. When we made this switch, we also introduced `ErrorOr`[0] which is a monadic result type.
I would not have imagined this to be controversial nor difficult, but it turns out that developers really prefer and understand exceptions. That's because for a backend CRUD API it's really easy to just throw and catch at a global HTTP pipeline exception filter, and for 95% of cases this is OK and good enough; you're not really going to be able to handle the error, nor is it worth it to handle it.
We'll stick with ErrorOr, but developers aren't using it as a monad and are simply unwrapping the value and the error because, as it turns out, most devs just have a preference for and greater familiarity with imperative try-catch handling of errors. And practically, in an HTTP backend, there's nothing wrong in most cases with just having a global exception filter do the heavy lifting unless the code path has a clear recovery path.
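For readers who haven't used ErrorOr, the two styles look roughly like this (a sketch based on the library's README; `GetUser` and the messages are invented):

    using System;
    using ErrorOr;

    class UserService
    {
        // Implicit conversions let you return either a value or an Error.
        public ErrorOr<string> GetUser(int id) =>
            id > 0 ? $"user-{id}" : Error.NotFound(description: "No such user.");
    }

    class Handler
    {
        // Monadic style: both outcomes handled in one Match expression.
        public string Monadic(UserService svc, int id) =>
            svc.GetUser(id).Match(
                user => $"Hello, {user}",
                errors => $"Failed: {errors[0].Description}");

        // The unwrapping style devs tend to fall back to.
        public string Unwrapped(UserService svc, int id)
        {
            var result = svc.GetUser(id);
            if (result.IsError)
                return $"Failed: {result.FirstError.Description}";
            return $"Hello, {result.Value}";
        }
    }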
I do think there is a "silver rule": OOP when you need structural scaffolding, functional when you have "small contracts" over big ones. An interface or abstract class is a "big contract": to understand how to use it, you often have to understand a larger surface area. A function signature is still a contract, but a micro-contract. Depending on what you're building, having structural scaffolding and "big contracts" makes more sense than having lots of micro-contracts (functions). Case in point: REST web APIs make a lot more sense with structural scaffolding. If you write one without the structural scaffolding of OOP, it ends up with a lot of repetition and even worse opaqueness, with functions wrapping other functions.
The silver rule for OOP vs FP for me: OOP for structural templating for otherwise repetitive code and "big contracts"; FP for "small contracts" and algorithmically complex code. I encourage devs on the team to write both styles depending on what they are building and the nature of complexity in their code. I think this is also why TS and C# are a sweet spot, IMO, because they straddle both OOP and have just enough FP when needed.
[0] https://github.com/amantinband/error-or
You introduced a pattern that is simply different from what's usual in C#. It's also not clearly better; it's different. In languages designed around result types like this, the ergonomics of such a type are usually better.
All the libraries you use and all methods from the standard library use exceptions. So you have to deal with exceptions in any case.
There are also a million or so libraries that implement types like this. There is no standard, so no interoperability. And people have to learn the peculiarities of the chosen library.
I like result types like this, but I'd never try to introduce them in C# (unless at some point they get more language support).
These are developers that had never written C# before, so there's no difference between whether it's language-supported or not. It was in the core codebase on day 1 when they onboarded, so it may as well have been native.
But what I take away from this is that Go's approach to error handling is "also not clearly better, it's different".
Even if C# had core language support for result types, you would be surprised how many developers would struggle with it (that is my takeaway from this experience).
If you're not coming from a strongly typed functional language, it's still a pattern you're not used to. Which might be a bit of a roundabout way to say that I agree with your last part: developers without exposure to that kind of language will struggle at first with a pattern like this.
I know how to use this pattern, but the C# version still feels weird and cumbersome. Usually you combine this with pattern matching and other functional features, and the whole thing becomes convenient in the end. That part is missing in C#. And I think it makes a difference in understanding, as you would usually build on your experience with pattern matching to understand how to handle this case of Result|Error.
[0] https://timdeschryver.dev/blog/pattern-matching-examples-in-...
None of your examples use native C# pattern matching. And without language support like e.g. discriminated unions you can't have exhaustive pattern matching in C#. So you'll have to silence the warnings about the missing default case or always add one, which is annoying.
I mean, it's not a stretch to see how you can use native pattern matching with ErrorOr result types.
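E.g., something along these lines (my sketch, assuming the ErrorOr package; note that switch arms are tested in order, so `FirstError` is only read once we know the result is an error):

    using System;
    using ErrorOr;

    ErrorOr<string> result = Lookup(42);

    // Native C# property patterns over the ErrorOr result.
    var message = result switch
    {
        { IsError: false } ok => $"Found: {ok.Value}",
        { FirstError.Type: ErrorType.NotFound } => "Not found",
        { FirstError.Type: ErrorType.Validation } => "Bad input",
        _ => "Something else went wrong",
    };

    Console.WriteLine(message);

    static ErrorOr<string> Lookup(int id) =>
        id > 0 ? $"user-{id}" : Error.NotFound();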
(Fully contained program, BTW.)
Here's the OCaml version:
Still not functional enough? Or you just don't like C#? No point moving goalposts.
What sort of issues do you get debugging?
My experience of .NET, even from version 1, is that it has the best debugging experience of any modern language, from the Visual Studio debugger to debugging crash dumps with sos.dll.
I am paid to work in Java and C# among Go, Rust, Kotlin, Scala and I wholeheartedly agree.
I hate the implicitness of Spring Boot, Quarkus etc. as much as the one in C# projects.
All these magic annotations that save you a few lines of code until they don't, because you get runtime errors due to incompatible annotations.
And then it takes digging through pages of docs or even reporting bugs on repos instead of just fixing a few explicit lines.
Explicitness and verbosity are mostly orthogonal concepts!
I disagree on this.
I am at a (YC, series C) startup that just recently made the switch from TS backend on Nest.js to C# .NET Web API[0]. It's been a progression from Express -> Nest.js -> C#.
What we find is that having attributes in both Nest.js (decorators) and C# allows one part of the team to move faster and another smaller part of the team to isolate complexity.
The indirection and abstraction are explicit decisions to reduce verbosity for 90% of the team for 90% of the use cases because otherwise, there's a lot of repetitive boilerplate.
The use of attributes, reflection, and source generation make the code more "templatized" (true both in our Nest.js codebase as well as the new C# codebase) so that 90% of the devs simply need to "follow the pattern" and 10% of the devs can focus on more complex logic backing those attributes and decorators.
Having the option to dip into source generation, for example, is really powerful in allowing the team to reduce boilerplate.
[0] We are hiring, BTW! Seeking experienced C# engineers; very, very competitive comp and all greenfield work with modern C# in a mixed Linux and macOS environment.
> so that 90% of the devs simply need to "follow the pattern" and 10% of the devs can focus on more complex logic backing those attributes and decorators.
Works well until the 10% that understand the behind the scenes leave and you are left with a bunch of developers copy and pasting magic patterns that they don't understand.
I love express because things are very explicit. This is the JSON schema being added to this route. This route is taking in JSON parameters. This is the function that handles this POST request endpoint.
I joined a team using Spring Boot and the staff engineer there couldn't tell me if each request was handled by its own thread or not, he couldn't tell me what variables were shared across requests vs what was uniquely instantiated per request. Absolute insanity, not understanding the very basics of one's own runtime.
Meanwhile in Express, the threading model is stupid simple (there isn't one) and what is shared between requests is obvious (everything declared in an outer scope).
The trade-off, though, is that patterns and behind-the-scenes source code generation are another layer that the devs who follow them need to deal with when debugging and understanding why something isn't working. They either spend more time understanding the bespoke things or are bottlenecked relying on a team or person to help them get through those moments. It's a trade-off, and one that has bitten me and others before.
I am not talking about C# specifically either, and I agree.
Implicit and magic looks nice at first but sometimes it can be annoying. I remember the first time I tried Ruby On Rails and I was looking for a piece of config.
Yes, "convention over configuration". Namely, ungreppsble and magic.
This kind of stuff must be used with a lot of care.
I usually favor explicit and, for config, plain data (usually toml).
This can be extended to hidden or non-obvious allocations and other stuff (when I work with C++).
It is better to know what is going on when you need to, and burying it in a couple of layers can make things unnecessarily difficult.
Would you rather a team move faster and be more productive or be a purist and disallow abstractions to avoid some potential runtime tracing challenges which can be mitigated with good use of OTEL and logging? I don't know about you, but I'm going to bias towards productivity and use integration tests + observability to safeguard code.
Disallow bespoke abstractions and use the industry standard ones instead. People who make abstractions inflate how productive they’re making everyone else. Your user base is much smaller than popular libs, so your docs and abstractions are not as battle tested and easy to use as much as you think.
This is raw OpenFGA code:
This is an abstraction we wrote on top of it:
You would make the case that the former is better than the latter?
In the first example, I have to learn and understand OpenFGA; in the second example, I have to learn and understand OpenFGA and your abstractions.
Well the point of using abstractions is that you don't need to know the things that it is abstracting. I think the abstraction here is self explaining what it does and you can certainly understand and use it without needing to understand all the specifics behind it.
More importantly: it prevents "usr:alice_123" instead of "user:alice_123" by using the type constraint to generate the prefix for the identifier.
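Presumably something in this spirit (a hypothetical sketch, not the poster's actual code), where the prefix is derived from a type constraint rather than hand-typed:

    // Typed identifiers: the FGA prefix comes from the type,
    // so "usr:alice_123" is simply unrepresentable.
    public interface IFgaType
    {
        static abstract string Prefix { get; }
    }

    public readonly record struct FgaId<T>(string Raw) where T : IFgaType
    {
        public override string ToString() => $"{T.Prefix}:{Raw}";
    }

    public struct User : IFgaType
    {
        public static string Prefix => "user";
    }

    // new FgaId<User>("alice_123").ToString() == "user:alice_123"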
How much faster are we talking? Because you'd have to account for the time lost debugging annotations.
What are you working on that you're debugging annotations everyday? I'd say you've made a big mistake if you're doing that/you didn't read the docs and don't understand how to use the attribute.
(Of course you are also free to write C# without any of the built in frameworks and write purely explicit handling and routing)
On the other hand, we write CRUD every day so anything that saves repetition with CRUD is a gain.
I don't debug them every day, but when I do, it takes days for a nasty bug to be worked out.
Yes, they make CRUD stuff very easy and convenient.
It has been worth the abstraction in my organization with many teams. Thinking 1000+ engineers, at minimum. It helps to abstract as necessary for new teammates that want to simply add a new endpoint yet follow all the legal, security, and data enforcement rules.
Better than no magic abstractions imo. In our large monorepo, LSP feedback can often be so slow that I can’t even rely on it to be productive. I just intuit and pattern match, and these magical abstractions do help. If I get stuck, then I’ll wade into the docs and code myself, and then ask the owning team if I need more help.
That's the deal with all metaprogramming.
People were so afraid of macros they ended up with something even worse.
At least with macros I don't need to consider the whole of the codebase and every library when determining what is happening. Instead I can just... Go to the macro.
C# source generators are...just macros?
They are not. They are generators. Macros tend to be local and explicit, as the other commenters have said. They are more like templates. Generators can be fairly involved and feel like a mini language, one that is not as observable as macros.
Isn't this just a string template? https://github.com/CharlieDigital/SKPromptGenerator/blob/mai...
Maybe you're confusing `System.Reflection.Emit` and source generators? Source generators are just a source tree walker + string templates to write source files.
> Generators can be fairly involved and feels like a mini language, one that is not as observable as macros.
I agree the syntax is awkward, but all it boils down to is concatenating code in strings and adding it as a file to your codebase.
And the syntax will 100% get cleaner (it's already happening with stuff like ForAttributeWithMetadataName).
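To make "concatenating code in strings" concrete, a pared-down incremental generator looks roughly like this (assuming a recent Roslyn that has ForAttributeWithMetadataName; `HelloAttribute` and the emitted code are invented):

    using System.Text;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.Text;

    [Generator]
    public sealed class HelloGenerator : IIncrementalGenerator
    {
        public void Initialize(IncrementalGeneratorInitializationContext context)
        {
            // Find every type annotated with our (made-up) [Hello] attribute...
            var names = context.SyntaxProvider.ForAttributeWithMetadataName(
                "Demo.HelloAttribute",
                predicate: static (node, _) => true,
                transform: static (ctx, _) => ctx.TargetSymbol.Name);

            // ...then concatenate source into a string and add it as a file.
            context.RegisterSourceOutput(names, static (spc, name) =>
            {
                var code = $$"""
                    namespace Demo;
                    public partial class {{name}}
                    {
                        public string SayHello() => "Hello from {{name}}";
                    }
                    """;
                spc.AddSource($"{name}.g.cs", SourceText.From(code, Encoding.UTF8));
            });
        }
    }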
What are those magic annotations you are talking about? Attributes? Not many of those are left in modern .NET.
Attributes and reflection are still used in C# for source generators, JSON serialization, ASP.NET routing, dependency injection... The amount of code that can fail at runtime because of reflection has probably increased in modern C#. (Not from C# source generators of course, but those only made interop even worse for F#-ers).
Aye, I was involved in some really messed-up outages from New Relic's agent libraries generating bogus byte code at runtime. An absolute nightmare for the teams trying to debug it, because none of the code causing the crashes existed anywhere you could easily inspect. We replaced the opaque magic from New Relic with simpler OTEL; no more outages.
That's likely the old emit approach. Newer source gen will actually generate source that is included in the compilation.
Don't we have automated tests for catching this kind of thing, or is everyone only YOLOing it nowadays? Serialization, routing, etc. can fail at runtime regardless of whether you use attributes or reflection.
Ease of comprehension is more important than tests for preventing bugs. A highly testable DI nightmare will have more bugs than a simple system that people can understand just by looking at it.
If the argument is that most developers can't understand what a DI system does, I don't know if I buy that. Or is the argument that it's hard to track down dependencies? Because if that's the case, idiomatic C# has the dependencies declared right in the ctor.
But the "simple" system will be full of repetition and boilerplate, meaning the same bugs are scattered around the code base, and obscured by masses of boilerplate.
Isn't a GC also magic? Or anything above assembly? While I also understand the reluctance to use too much magic, in my experience it's not the magic, it's how well the magic is tested and developed.
I used to work with Play framework, a web framework built around Akka, an async bundle of libraries. Because it wasn't too popular, only the most common issues were well documented. I thought I hated magic.
Then, I started using Spring Boot, and I loved magic. Spring has so much documentation that you can also become the magician, if you need to.
I haven't experienced a DI 'nightmare' myself yet, but then again, we have integration tests to cover for that.
Try Nest.js and you'll know true DI "nightmares".
OK, let's break this down:
- Code generators: I think I've seen them only in regex. Logging can be done via `LoggerMessage.Define` too, so attributes are optional. Also, code generators have access to the full tokenized structure of the code, which means attributes are just a design choice of the particular generator you are using. And finally, code generators do not produce runtime errors unless the code they generated is invalid.
- JSON serialization: sure, but you can use your own converters. Attributes are not necessary.
- ASP.NET routing: yes, but those are in controllers. My impression is that minimal APIs are now the go-to solution, where you have `app.MapGet(path)` and no attributes; you can inject services into minimal APIs and this does not require attributes either. Most of the time, minimal APIs do not require attributes at all.
- Dependency injection: requires attributes when you inject services into controller endpoints, which I never liked nor understood why people do. What is the use case over injecting through the controller constructor? It is not as if the controller is a singleton, long-lived object; it is constructed during the ASP.NET HTTP pipeline and discarded when no longer necessary.
So occasional usage may still occur from time to time, in endpoints and DTOs (`[JsonIgnore]`, for example), but you have other means to do the same things. It is done via attributes because it is easier and faster to develop.
Also, your team should invest some time into testing, in my opinion. Integration testing helps a lot with catching those runtime errors.
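For comparison, the attribute-free minimal API shape being described (a minimal sketch; `TodoService` is hypothetical, and this assumes the ASP.NET Core web SDK with implicit usings):

    var builder = WebApplication.CreateBuilder(args);
    builder.Services.AddSingleton<TodoService>();
    var app = builder.Build();

    // Route parameter and DI dependency are both inferred;
    // no [Route], [HttpGet], or [FromServices] needed.
    app.MapGet("/todos/{id}", (int id, TodoService todos) =>
        todos.Find(id) is { } todo ? Results.Ok(todo) : Results.NotFound());

    app.Run();

    record Todo(int Id, string Title);

    class TodoService
    {
        public Todo? Find(int id) => id == 1 ? new Todo(1, "Ship it") : null;
    }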
> JSON serialization: sure, but you can use your own converters
And going through converters is (was?) significantly slower for some reason than the built-in serialisation.
> my impression is that minimal APIs are now the go-to solution, where you have `app.MapGet(path)` and no attributes
Minimal APIs use attributes to explicitly configure how parameters are mapped to the path, query, header fields, body content, or DI dependencies. These can't always be implicit, which, BTW, means you're stuck if you need them from F#, because the codegen still doesn't match what the reflection code expects.
I haven't touched .NET during work hours in ages; these are mostly my pains from hobbyist use of modern .NET from F#. Although the changes I've seen in C#'s ecosystem over the last decade don't make me eager to use .NET for web backends again; they somehow kept going with the worst aspects.
I'm fed up by the increasing use of reflection in C#, not the attributes themselves, as it requires testing to ensure even the simplest plumbing will attempt to work as written (same argument we make for static types against dynamic, isn't it?), and makes interop from F# much, much harder; and by the abuse of extension methods, which were the main driver for implicit usings in C#: no one knows which ASP.NET namespaces they need to open anymore.
I am working on an entirely new hobby project written with minimal APIs, and I checked today before writing an answer to your comment: I did not use any attributes there, besides one `[FromBody]`, and that one only because otherwise it tries to map the model from everywhere, so you could in theory pass it via the query string. Which was extremely weird.
Where did you see all of those attributes in minimal APIs? I'm honestly curious, because in my experience it is very forgiving and works mostly without them.
As a polyglot developer, I also disagree.
If I wanted explicitness for every little detail I would keep writing in Assembly like in the Z80, 80x86, 68000 days.
Unfortunately we never got Lisp or Smalltalk mainstream, so we got to metaprogramming with what is available, and it is quite powerful when taken advantage of.
Some people avoid wizard jobs, others avoid jobs where magic is looked down upon.
I would also add that in the age of LLMs and AI-generated applications, discussing programming-language explicitness is kind of irrelevant.
Explicitness is different from verbosity. Often annotations and the like are abused to create a lot of accidental complexity just to avoid writing a few keywords. In almost every Lisp project you'll find that macros are not intended for reducing verbosity; they are there to define common patterns. You can have something like
You can then easily inspect the generated code. But in Java and others, you'll have something like an annotation, and there's a whole system hidden behind it that you have to carefully understand, as every annotation's implementation is different. This is the trade-off with macros versus annotation/code-generation systems.
I tend to do obvious things when I use this kind of tool. In fact, I try to avoid macros.
Even if configurability is not important, I favor simplification over reuse. In case I need reuse, I go for higher-order functions if I can. Macros are the last bullet.
In some circumstances, like JSON or serialization, maybe they can be slightly abused to mark fields and such. But whole-code generation can take things so far into magic that it is not worth it in many circumstances IMHO, though every tool has its use cases, even macros and annotations.
IMO, macros and such should be to improve coding UX. But using it for abstractions and the like is very much not worth it. So something like JSX (or the loop system in Common Lisp) is good. But using it for DI is often a code smell for me.
> IMO, macros and such should be to improve coding UX
Coding UX critically leans on familiarity and spread of knowledge. By definition, a non-obvious macro not known by others makes the UX just worse, for a definition of "worse" that means "less manageable by anyone that looks at it without previous knowledge".
That is also the reason why standard libraries always have an advantage in usability just because people know them or the language constructs themselves.
Only if those Lisp projects are done by newbies. Clojure is quite known for having a community that takes that approach to macros, versus everyone else on Lisp since its early days.
Using macros for DSLs has been common for decades, and is how frameworks like CLOS were initially implemented.
I have been coding in C# for 16 years and I have no idea what you mean by "hidden indirection and runtime magic". Maybe it's just invisible to me at this point, but GC is literally the only "invisible magic" I can think of that's core to the language. And I agree that college-level OOP principles are an anti-pattern; stop doing them. C# does not force you to do that at all, except very lightly in some frameworks where you extend a Controller class if you have to (annoying but avoidable). Other than that, I have not used class inheritance a single time in years, and 98% of my classes and structs are immutable. Just don't write bad code; the language doesn't force you to at all.
> Just don't write bad code;
If we're writing good code then why do we even need a GC? Heh.
In decades of experience I've never once worked in an organisation where "don't write bad code" applied. I have seen people with decades of experience with C# who don't know that IQueryable and IEnumerable load things into memory differently. I don't necessarily disagree with you that people should "just write good code", but the fact is that most of us don't do that all the time. I guess you could also argue that principles like "four eyes" would help, but they don't, even when they are enforced by legislation with actual risk of punishment like DORA or NIS2.
This is the reason I favour Go as a cross-platform GC language over C#: with Go you are given fewer opportunities to fuck up. There is still plenty of chance to do it, but fewer than in other GC languages. At least on the plus side, for .NET 10 they're going to improve IEnumerable with their devirtualization work.
> hidden indirection and runtime magic
Maybe not in C# itself, but C# is .NET, and I don't think it's entirely fair to decouple C# from .NET and its many frameworks. Then again, I could have made that more clear.
Hidden indirection & runtime magic almost always refer to DI frameworks.
Reflection is what makes DI feel like "magic". Type signatures don't mean much in reflection-heavy code. Newcomers won't know many of a DI framework's implicit behaviors and conventions until they either shoot themselves in the foot or get RTFM'd.
My pet theory is this kind of "magic" is what makes some people like Golang, which favors explicit wiring over implicit DI framework magic.
Reminds me of the C advice: "Just don't write memory leaks & UAFs!"
Some examples:
- Attributes can do a lot of magic that is not always obvious or well documented.
- ASP.NET pipeline.
- Source generators.
I love C#, but I have to admit we could have done with less “magic” in cases like these.
Attributes do nothing at all on their own. It's someone else's code that does magic by reflecting on your types and looking for those attributes. That may seem like a trivial distinction, but there's a big difference between "the language is doing magic" and "some poorly documented library I'm using is doing magic". I rarely use and generally dislike attributes. I sometimes wonder if C# would be better off without them, but there are some legitimate usages like interop with unmanaged code that would be really awkward any other way. They are OK if you think of them as a weakly enforced part of the type system, and relegate their use to when a C# code object is representing something external like an API endpoint or an unmanaged call. Even this is often over-done.
Yes, the ASP.NET pipeline is a bit of a mess. My strategy is to plug in a couple adapters that allow me to otherwise avoid it. I rolled my own DI framework, for instance.
Source generators are present in all languages and terrible in all languages, so that certainly is not a criticism of C#. It would be a valid criticism if a language required you to use source generators to work efficiently (e.g. limited languages like VB6/VBA). But I haven't used source generators in C# in at least 10 years, and I honestly don't know why anyone would at this point.
Maybe it sounds like I'm dodging by saying C# is great even though the big official frameworks Microsoft pushes (not to mention many of their tutorials) are kinda bad. I'd be more amenable to that argument if it took more than an afternoon to plug in the few adapters you need to escape their bonds and just do it all your own way with the full joy of pure, clean C#. You can write bad code in any language.
That's not to say there's nothing wrong with C#. There are some features I'd still like to see added (e.g. co-/contra-variance on classes & structs), some that will never be added but I miss sometimes (e.g. higher-kinded types), and some that are wonderful but lagging behind (e.g. Expressions supporting newer language features).
A common way to work around this is to provide an `IsSet` boolean:
Now you can check if the value is set. However, you can see how tedious this can get without a source generator. With a source generator, we simply take nullable partial properties and generate the stub automatically.
Now a single marker attribute will generate as many `Is*Set` properties as needed. Of course, the other use case is AOT: avoiding runtime reflection by generating the source at compile time.
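The manual version being described looks something like this (my reconstruction of the pattern, not the original snippet):

    public class PatchRequest
    {
        private string? _name;

        // Distinguishes "explicitly set to null" from "never set at all".
        public bool IsNameSet { get; private set; }

        public string? Name
        {
            get => _name;
            set { _name = value; IsNameSet = true; }
        }

        // ...and the same boilerplate again for every property; this is
        // the tedium the source generator stamps out from partial properties.
    }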
> But I haven't used source generators in C# in at least 10 years
Source generators didn't exist in C# 10 years ago. You probably had something else in mind?
I don't really consider any of these magic, particularly source generators.
It's just code that generates code. Some of the syntax is awkward, but it's not magic imo.
> Similar to how Uncle Bob will always be correct when he calls teams out for getting his principles wrong.
Is this sarcasm?
I think this is fair criticism. OOP advocates like Uncle Bob always try to sell you 100 often contradictory and ill-defined rules and guidelines for how to “use it right”. Stuff like
* objects should model a single concept, or
* every domain concept should be an object.
These two alone are already contradictory. And what do they even mean? Concretely?
Then, when OOP invariably breaks down, they can always point to any of the 100 rules that you supposedly violated, and blame the failure on that. "Yes, it did not work out because you did not do it right." It's the no-true-Scotsman fallacy.
It’s like communism. It would work out if somebody just finally did it properly.
Maybe a system that requires 100 hard to follow rules to have even a chance at success just isn’t a great one.
> I really can't think of anything that comes close in terms of performance, features, ecosystem, developer experience.
This is also why I prefer to use Unity over all other engines by an incredible margin. The productivity gains of using a mature C# scripting ecosystem in a tight editor loop are shocking compared to the competition. Other engines do offer C# support, but if I had to make something profitable to save my own standard of living the choice is obvious.
There are only two vendors that offer built-in, SIMD-accelerated linear math libraries capable of generating projection matrices out of the box. One is Microsoft and the other is Apple. The benefits of having stuff like this baked into your ecosystem are very hard to overstate. The amount of time you could waste looking for and troubleshooting primitives like this can easily kill an otherwise perfect vision.
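On the .NET side this is `System.Numerics`, which ships in the base class library (a small example of what "out of the box" means here):

    using System;
    using System.Numerics;

    class Program
    {
        static void Main()
        {
            // SIMD-accelerated matrix math, no third-party package needed.
            var projection = Matrix4x4.CreatePerspectiveFieldOfView(
                fieldOfView: MathF.PI / 3f, // 60 degrees
                aspectRatio: 16f / 9f,
                nearPlaneDistance: 0.1f,
                farPlaneDistance: 100f);

            var point = Vector4.Transform(new Vector4(1f, 2f, -5f, 1f), projection);
            Console.WriteLine(point);
        }
    }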
Java?
It is supported on more platforms, has more developers, more jobs, more OSS projects, and is more widely used (TIOBE 2024). Performance was historically better, but C# caught up.
Reified generics, value types, and LINQ are just a few things that you would miss when going to Java. Also, Java and .NET are both big; that's not a real argument here. Not that I would trust the TIOBE index too much, but as of September 2025, C# is right behind Java in 5th place.
My experience has been that .NET programs are typically more tunable for greater perf than Java, and have been for many years now, even if it doesn't come free out of the box, which is generally what matters with performance. The ability to further optimise what needs optimising means that you generally end up faster for your business domain than the alternative; with Java code it is generally harder and/or less ergonomic to do this.
For example, just having value types and reified generics in combination meant you could write generic code against value types, which for hot algorithmic loops or certain data structures usually meant a big win w.r.t. memory and CPU consumption. For a collection type critical to an app I wrote many years ago, the use of value types almost halved the memory footprint compared to the best Java one I could find, and it was somewhat faster with fewer cache misses. The Java alternative wasn't an amateur one either, but they couldn't get the perf out of it even with significant effort.
Java also, last time I checked, doesn't have a value decimal type for financial math, which IMO can be a significant performance loss for financial/money-based systems. Anything with math and lots of processing/data structures I would find significantly faster in .NET after doing the optimisation work. If I had to choose between the two targets these days, I would find .NET in general an easier target w.r.t. performance. Of course, perf isn't everything, depending on the domain.
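A simplified sketch of the value-type-plus-reified-generics win being described (invented example):

    using System;

    // A small value type: array elements are stored inline,
    // with no per-element heap allocation.
    readonly struct Point
    {
        public readonly double X, Y;
        public Point(double x, double y) { X = x; Y = y; }
    }

    class Program
    {
        // .NET generics are reified: the runtime specializes this method
        // for Point and works on the structs directly, with no boxing into
        // wrapper objects as erased generics would force.
        static T First<T>(T[] items) where T : struct => items[0];

        static void Main()
        {
            var points = new Point[] { new(1, 2), new(3, 4) };
            Console.WriteLine(First(points).X); // prints 1
        }
    }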
Or you can use the "C# without the line noise", which goes under the name of F#.
Yeah, but if you use F#, then you'll have all the features C# has been working on for years, only in complete and mature versions, and also an opinionated language encouraging similar styles between teams instead of wild dialects of kinda-sorta immutability and kinda-sorta DUs, and everything in between, requiring constant vigilance and training... ;)
I’m a fan of all three languages, but C# spent the first years relearning why Visual Basic was very productive and the last many years learning why OCaml was chosen to model F# after. It’s landed in a place where I can make beautiful code the way I need to, but the mature libraries I’ve crafted to make it so simply aren’t recreate-able by most .Net devs, and the level of micro-managing it takes to maintain across groups is a line-by-line slog against orthodoxy and seeming ‘shortcuts’, draining the productivity those high level guarantees should provide. And then there’s the impact of EF combined with sloppy Linq which makes every junior/consultant LOC a potentially ticking time bomb without more line-by-line, function-by-function slog.
Compiler guarantees mean a lot.
>runs 4x as fast on .NET 10 vs .NET 8.
Is this open source? Do you have numbers? I've been coding in .NET since it was "a thing" and frankly I'm having trouble mapping the optimizations to a local application at that magnitude.
The optimizations are seen at scale, they really won't mean much for your local application. Not a 3x+ improvement at least.
Except for F#, which also gets all the .NET 10 cross-platform GC improvements for free and is a better programming language than C#.
+1 F# is criminally under-used
I've used it, and am still using it, to generate lots of value in a very large org. Having a language where I can bring Go, Node, etc. developers over and get relatively better performance, without having to teach OOP and all the implicit conventions on the C# side, is a bit like a cheat code. With modern .NET you get better-than-Java perf with a better GC, and the ability to write generic Python/JS-looking code whilst still having type checking (HM inference). There are C# libraries we do use, but with standard templates for those few, plus patterns to interface to the mostly-F# layers, you can get very far in a style of code more fitting of a higher-level, more dynamic language. Ease of use vs perf: it's kind of in the middle. And it has also benefited from C# features (e.g. spans recently).
It's not one feature with F#, IMO; it's little things that add up, which is generally why it is hard to convince someone to use it. To the point that when the developers (under my directive) had to write two products in C#, they argued with me to switch back.
I used it for many years but ended up switching to C#. The language needs better refactoring tools. And for that it needs something like Roslyn. The existing compiler library is too slow.
That would be nice, but refactoring canonical F# is still far easier than C# due to its referential transparency.
https://dev.to/ruizb/function-purity-and-referential-transpa...
No, it is not. Referential transparency <<< tooling.
Plus F# as a functional language has significant gaps that prevent effective refactoring, such as lack of support for named arguments to curried functions.
Can you give me an example where lack of support for named arguments to curried functions makes refactoring difficult? I'm having trouble understanding how that would happen.
For one, there's no way to add a curried parameter without doThing4-style naming, and the lack of named arguments implies you can't have a default value for the new parameter.
Another one is if you want to add a curried parameter to the end of the parameter list, and you have code like
You can't just say instead, you have to rewrite the whole pipe.
OK, I think I see what you mean. I certainly agree that named arguments with default values can be useful, but are not supported by curried functions.
> I really can't think of anything that comes close in terms of [...] developer experience.
Of all the languages that I have to touch professionally, C# feels by far the most opaque and unusable.
Documentation tends to be somewhere between nonexistent and useless, and MSDN's navigation feels like it was designed by a sadist. (My gold standard would be Rustdoc or Scala 2.13-era Scaladoc, but even Javadoc has been.. fine for basically forever.) For third-party libraries it tends to be even more dire and inconsistent.
The Roslyn language server crashes all the time, and when it does work.. it doesn't do anything useful? Like cross-project "go-to-definition" takes me to either a list of members or a decompiled listing of source code, even when I have the actual source code right there! (I know there's this thing called "SourceLink" which is.. supposed to solve this? I think? But I've never seen it actually used in practice.)
Even finding where something comes from is ~impossible without the language server, because `using` statements don't mention.. what they're even importing. (Assuming that you have them at all. Because this is also the company that thought project-scoped imports were a good idea!)
And then there's the dependency injection, where I guess someone thought it would be cute if every library just had an opaque extension method on the god object, that didn't tell you anything about what it actually did. So good luck finding where the actual implementation of anything is.
I almost exclusively work in C# and have never experienced the Roslyn crashes you mentioned. I am using either Rider or Visual Studio though.
> Like cross-project "go-to-definition" takes me to either a list of members or a decompiled listing of source code, even when I have the actual source code right there!
If these are projects you have in the same solution then it should never do this. I would only expect this to happen if either symbol files or source files are missing.
I use VS Code on macOS for all of my C# code over the last 5 years and also never experienced Roslyn crashes.
Try "go to implementation" in place of go to definition.