I am paid to work in Java and C#, among Go, Rust, Kotlin, and Scala, and I wholeheartedly agree.

I hate the implicitness of Spring Boot, Quarkus etc. as much as the one in C# projects.

All these magic annotations that save you a few lines of code until they don't, because you get runtime errors due to incompatible annotations.

And then it takes digging through pages of docs or even reporting bugs on repos instead of just fixing a few explicit lines.

Explicitness and verbosity are mostly orthogonal concepts!

I disagree on this.

I am at a (YC, series C) startup that just recently made the switch from TS backend on Nest.js to C# .NET Web API[0]. It's been a progression from Express -> Nest.js -> C#.

What we find is that having attributes in both Nest.js (decorators) and C# allows one part of the team to move faster and another smaller part of the team to isolate complexity.

The indirection and abstraction are explicit decisions to reduce verbosity for 90% of the team for 90% of the use cases because otherwise, there's a lot of repetitive boilerplate.

The use of attributes, reflection, and source generation make the code more "templatized" (true both in our Nest.js codebase as well as the new C# codebase) so that 90% of the devs simply need to "follow the pattern" and 10% of the devs can focus on more complex logic backing those attributes and decorators.

Having the option to dip into source generation, for example, is really powerful in allowing the team to reduce boilerplate.

[0] We are hiring, BTW! Seeking experienced C# engineers; very, very competitive comp and all greenfield work with modern C# in a mixed Linux and macOS environment.

> so that 90% of the devs simply need to "follow the pattern" and 10% of the devs can focus on more complex logic backing those attributes and decorators.

Works well until the 10% who understand what's behind the scenes leave, and you are left with a bunch of developers copying and pasting magic patterns they don't understand.

I love Express because things are very explicit. This is the JSON schema being added to this route. This route is taking in JSON parameters. This is the function that handles this POST request endpoint.

I joined a team using Spring Boot and the staff engineer there couldn't tell me if each request was handled by its own thread or not, he couldn't tell me what variables were shared across requests vs what was uniquely instantiated per request. Absolute insanity, not understanding the very basics of one's own runtime.

Meanwhile in Express, the threading model is stupid simple (there isn't one) and what is shared between requests is obvious (everything declared in an outer scope).
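To make the "what is shared" point concrete, here is a minimal sketch (plain TypeScript, no framework; the names are illustrative, not from any real codebase): in a single-threaded Node server, shared vs per-request state is just lexical scope.

```typescript
// No threading model: anything in the outer scope is shared by every
// request; anything inside the handler is created fresh per request.
let totalRequests = 0; // outer scope: one copy for the whole process

export function handleRequest(body: string): { count: number; echoed: string } {
  const echoed = body.toUpperCase(); // inner scope: new value per call
  totalRequests += 1;                // mutates the shared counter
  return { count: totalRequests, echoed };
}
```

Reading the code tells you everything: `totalRequests` is shared, `echoed` is not.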

The trade-off, though, is that patterns and behind-the-scenes source code generation are another layer that the devs following them have to deal with when debugging and understanding why something isn't working. They either spend more time understanding the bespoke machinery or are bottlenecked, relying on a team or person to help them get through those moments. It's a trade-off, and one that has bitten me and others before.

I am not talking about C# specifically, but about this in general too, and I agree.

Implicit and magic looks nice at first but sometimes it can be annoying. I remember the first time I tried Ruby On Rails and I was looking for a piece of config.

Yes, "convention over configuration". Namely, ungreppable and magic.

This kind of stuff must be used with a lot of care.

I usually favor explicit and, for config, plain data (usually toml).

This can be extended to hidden or non-obvious allocations and other stuff (when I work with C++).

It is better to know what is going on when you need to, and burying it under a couple of layers can make things unnecessarily difficult.

Would you rather a team move faster and be more productive or be a purist and disallow abstractions to avoid some potential runtime tracing challenges which can be mitigated with good use of OTEL and logging? I don't know about you, but I'm going to bias towards productivity and use integration tests + observability to safeguard code.

Disallow bespoke abstractions and use the industry standard ones instead. People who make abstractions inflate how productive they’re making everyone else. Your user base is much smaller than popular libs, so your docs and abstractions are not as battle tested and easy to use as much as you think.

This is raw OpenFGA code:

    await client.Write(
        new ClientWriteRequest(
            [
                // Avery is an editor of form 124
                new()
                {
                    Object = "form:124",
                    Relation = "editor",
                    User = "user:avery",
                },
            ]
        )
    );

    var checkResponse = await client.Check(
        new ClientCheckRequest
        {
            Object = "form:124",
            Relation = "editor",
            User = "user:avery",
        }
    );

    var checkResponse2 = await client.Check(
        new ClientCheckRequest
        {
            Object = "form:125",
            Relation = "editor",
            User = "user:avery",
        }
    );
This is an abstraction we wrote on top of it:

    await Permissions
        .WithClient(client)
        .ToMutate()
        .Add<User, Form>("alice", "editor", "226")
        .Add<User, Team>("alice", "member", "motion")
        .SaveChangesAsync();

    var allAllowed = await Permissions
        .WithClient(client)
        .ToValidate()
        .Can<User, Form>("alice", "edit", "226")
        .Has<User, Team>("alice", "member", "motion")
        .ValidateAllAsync();
You would make the case that the former is better than the latter?

In the first example, I have to learn and understand OpenFGA, in the second example I have to learn and understand OpenFGA and your abstractions.

Well, the point of using abstractions is that you don't need to know the things they are abstracting. I think the abstraction here is self-explanatory, and you can certainly understand and use it without needing to understand all the specifics behind it.

More importantly: it prevents "usr:alice_123" instead of "user:alice_123" by using the type constraint to generate the prefix for the identifier.
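That idea can be sketched in a few lines (hypothetical names, TypeScript standing in for the C# above): the prefix lives with the entity type, so no call site ever spells it by hand.

```typescript
// Each entity type owns its identifier prefix. Because "user" is written
// exactly once, a "usr:" typo cannot occur at any call site.
class EntityType {
  constructor(private readonly prefix: string) {}

  id(raw: string): string {
    return `${this.prefix}:${raw}`;
  }
}

export const User = new EntityType("user");
export const Form = new EntityType("form");
```

E.g. `User.id("alice_123")` always yields `"user:alice_123"`.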

How much faster are we talking? Because you'd have to account for the time lost debugging annotations.

What are you working on that you're debugging annotations every day? I'd say you've made a big mistake if you're doing that, or you didn't read the docs and don't understand how to use the attribute.

(Of course you are also free to write C# without any of the built in frameworks and write purely explicit handling and routing)

On the other hand, we write CRUD every day so anything that saves repetition with CRUD is a gain.

I don't debug them every day, but when I do, it takes days for a nasty bug to be worked out.

Yes, they make CRUD stuff very easy and convenient.

It has been worth the abstraction in my organization with many teams. Thinking 1000+ engineers, at minimum. It helps to abstract as necessary for new teammates that want to simply add a new endpoint yet follow all the legal, security, and data enforcement rules.

Better than no magic abstractions imo. In our large monorepo, LSP feedback can often be so slow that I can’t even rely on it to be productive. I just intuit and pattern match, and these magical abstractions do help. If I get stuck, then I’ll wade into the docs and code myself, and then ask the owning team if I need more help.

That's the deal with all metaprogramming.

People were so afraid of macros they ended up with something even worse.

At least with macros I don't need to consider the whole of the codebase and every library when determining what is happening. Instead I can just... Go to the macro.

C# source generators are...just macros?

They are not. They are generators. Macros tend to be local and explicit, as the other commenters have said. They are more like templates. Generators can be fairly involved and feel like a mini language, one that is not as observable as macros.

Isn't this just a string template? https://github.com/CharlieDigital/SKPromptGenerator/blob/mai...

Maybe you're confusing `System.Reflection.Emit` and source generators? Source generators are just a source tree walker + string templates to write source files.

> Generators can be fairly involved and feels like a mini language, one that is not as observable as macros.

I agree the syntax is awkward, but all it boils down to is concatenating code in strings and adding it as a file to your codebase.
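To make the "just string concatenation" point concrete, here is a hedged sketch (TypeScript standing in for a C# generator; all names are made up): the "generator" walks some discovered metadata and emits source text.

```typescript
// A source generator, boiled down: metadata in, source text out.
interface PromptMethod {
  name: string;     // method name discovered from an attribute
  template: string; // prompt template attached to it
}

export function generateSource(methods: PromptMethod[]): string {
  const bodies = methods
    .map(m => `  static ${m.name}(): string { return ${JSON.stringify(m.template)}; }`)
    .join("\n");
  return `export class Prompts {\n${bodies}\n}\n`;
}
```

The real APIs add a tree-walking front end and caching, but the output side really is string building.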

And the syntax will 100% get cleaner (it's already happening with stuff like `ForAttributeWithMetadataName`).

What are those magic annotations you are talking about? Attributes? Not much of those are left in modern .net.

Attributes and reflection are still used in C# for source generators, JSON serialization, ASP.NET routing, dependency injection... The amount of code that can fail at runtime because of reflection has probably increased in modern C#. (Not from C# source generators of course, but those only made interop even worse for F#-ers).

Aye, I was involved in some really messed-up outages from New Relic's agent libraries generating bogus bytecode at runtime. It was an absolute nightmare for the teams trying to debug it, because none of the code causing the crashes existed anywhere you could easily inspect it. We replaced the opaque magic from New Relic with simpler OTEL; no more outages.

That's likely the old emit approach. Newer source gen will actually generate source that is included in the compilation.

Don't we have automated tests for catching this kind of thing, or is everyone only YOLOing it in nowadays? Serialization, routing, etc. can fail at runtime regardless of whether you use attributes or reflection.

Ease of comprehension is more important than tests for preventing bugs. A highly testable DI nightmare will have more bugs than a simple system that people can understand just by looking at it.

If the argument is that most developers can't understand what a DI system does, I don't know if I buy that. Or is the argument that it's hard to track down dependencies? Because if that's the case, idiomatic C# has the dependencies declared right in the ctor.

But the "simple" system will be full of repetition and boilerplate, meaning the same bugs are scattered around the code base, and obscured by masses of boilerplate.

Isn't a GC also magic? Or anything above assembly? While I also understand the reluctance to use too much magic, in my experience it's not the magic, it's how well the magic is tested and developed.

I used to work with Play framework, a web framework built around Akka, an async bundle of libraries. Because it wasn't too popular, only the most common issues were well documented. I thought I hated magic.

Then, I started using Spring Boot, and I loved magic. Spring has so much documentation that you can also become the magician, if you need to.

I haven't experienced a DI 'nightmare' myself yet, but then again, we have integration tests to cover for that.

Try Nest.js and you'll know true DI "nightmares".

OK, let's break this down:

- Code generators: I think I have only seen them used for regex. Logging can be done via `LoggerMessage.Define` too, so attributes are optional. Also, code generators have access to the full tokenized structure of the code, which means attributes are just a design choice of the particular generator you are using. And finally, code generators do not produce runtime errors unless the code they generated is invalid.

- JSON serialization: sure, but you can use your own converters. Attributes are not necessary.

- ASP.NET routing: yes, but those are in controllers. My impression is that minimal APIs are now the go-to solution, and you have `app.MapGet(path)`, so no attributes; you can inject services into minimal APIs, and this does not require attributes either. Most of the time, minimal APIs do not require attributes at all.

- Dependency injection: requires attributes when you inject services into controller endpoints, which I never liked, nor understood why people do it. What is the use case over injecting through the controller constructor? It is not like the controller is a singleton, long-lived object. It is constructed during the ASP.NET HTTP pipeline and discarded when no longer necessary.

So occasional usage may still occur from time to time, in endpoints and DTOs (`[JsonIgnore]`, for example), but you have other means to do the same things. It is done via attributes because it is easier and faster to develop.
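The constructor-injection preference above can be sketched in a few lines (TypeScript rather than C#, with hypothetical service names): dependencies are declared once, at the constructor, with no per-endpoint attributes.

```typescript
// Dependencies are visible in one place: the constructor signature.
interface Mailer {
  send(to: string, msg: string): void;
}

export class SignupService {
  constructor(private readonly mailer: Mailer) {}

  signUp(email: string): string {
    this.mailer.send(email, "welcome"); // uses the injected dependency
    return `created:${email}`;
  }
}
```

Swapping in a fake `Mailer` for tests requires no framework support at all.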

Also, your team should invest some time into testing, in my opinion. Integration testing helps a lot with catching those runtime errors.

> Json serialization, sure but you can use your own converters

And going through converters is (was?) significantly slower for some reason than the built-in serialisation.

> my impression is that minimal APIs are now the go to solution and you have `app.MapGet(path)` so no attribute

Minimal APIs use attributes to explicitly configure how parameters are mapped to the path, query, header fields, body content or for DI dependencies. These can't always be implicit, which BTW means you're stuck in F# if you ever need them, because the codegen still doesn't match what the reflection code expects.

I haven't touched .NET during work hours in ages, these are mostly my pains from hobbyist use of modern .NET from F#. Although the changes I've seen in C#'s ecosystem the last decade don't make me eager to use .NET for web backends again, they somehow kept going with the worst aspects.

I'm fed up by the increasing use of reflection in C#, not the attributes themselves, as it requires testing to ensure even the simplest plumbing will attempt to work as written (same argument we make for static types against dynamic, isn't it?), and makes interop from F# much, much harder; and by the abuse of extension methods, which were the main driver for implicit usings in C#: no one knows which ASP.NET namespaces they need to open anymore.

I am working on an entirely new hobby project written with minimal APIs, and I checked today before writing an answer to your comment: I did not use any attributes there, besides one `[FromBody]`, and that one only because otherwise it tries to map the model from everywhere, so you could in theory pass it via the query string. Which was extremely weird.

Where did you see all of those attributes in minimal APIs? I'm honestly curious, because in my experience it is very forgiving and works mostly without them.

As polyglot developer, I also disagree.

If I wanted explicitness for every little detail I would keep writing in Assembly like in the Z80, 80x86, 68000 days.

Unfortunately we never got Lisp or Smalltalk mainstream, so we got to metaprogramming with what is available, and it is quite powerful when taken advantage of.

Some people avoid wizard jobs, others avoid jobs where magic is looked down upon.

I would also add that in the age of LLMs and AI-generated applications, discussing programming-language explicitness is kind of irrelevant.


Explicitness is different than verbosity. Often annotations and the like are abused to create a lot of accidental complexity just to not write a few keywords. In almost every lisp project you'll find that macros are not intended for reducing verbosity, they are there to define common patterns. You can have something like

  (define-route METHOD PATH BODY)
You can then easily inspect the generated code. But in Java and others, you'll have something like

  @GET(path=PATH)
And there's a whole system hidden behind this, that you have to carefully understand as every annotation implementation is different.

This is the trade-off with macros and annotation/code-generation systems.

I tend to do obvious things when I use these kinds of tools. In fact, I try to avoid macros.

Even if configurability is not important, I favor simplification over reuse. In case I need reuse, I go for higher-order functions if I can. A macro is the last bullet.

In some circumstances, like JSON or serialization, maybe they can be slightly abused to mark fields and such. But whole-code generation can take things so far into magic that it is not worth it in many circumstances IMHO, though every tool has its use cases, even macros and annotations.

IMO, macros and such should be to improve coding UX. But using it for abstractions and the like is very much not worth it. So something like JSX (or the loop system in Common Lisp) is good. But using it for DI is often a code smell for me.

> IMO, macros and such should be to improve coding UX

Coding UX critically leans on familiarity and the spread of knowledge. By definition, writing a non-obvious macro not known by others makes the UX worse, for a definition of "worse" that means "less manageable by anyone who looks at it without prior knowledge".

That is also the reason why standard libraries always have an advantage in usability just because people know them or the language constructs themselves.

Only if those Lisp projects are done by newbies. Clojure is well known for having a community that takes that approach to macros, versus everyone else in Lisp since its early days.

Using macros for DSLs has been common for decades, and is how frameworks like CLOS were initially implemented.