More concise? Yes.

More readable? I'm less convinced on that one.

Some of those edge cases and their effects can get pretty nuanced. I fear this will get overused exactly as the article warns, and I'm going to see bloody question marks all over codebases. I hope in time the mental overhead of interpreting exactly what they're doing will become muscle memory...

Hi there! Lang designer here.

> More concise? Yes.

Note: being more concise is not really the goal of the `?` features. The goal is actually to be more correct and clear. A core problem these features help avoid is the unfortunate situation people find themselves in with null checks, where they either write:

    if (some_expr != null)
        some_expr...
Or, the more correct, but much more unwieldy:

    var temp = some_expr;
    if (temp != null)
        temp...
`?` allows collapsing all of these concepts together. The computation is performed only once, and the check and the subsequent operation happen only when the result is non-null.
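
For example, a minimal sketch of that collapse (GetCustomer and Notify are hypothetical names, not from the thread):

    // GetCustomer() is evaluated once; Notify() runs only when the result is non-null
    GetCustomer()?.Notify();

    // roughly equivalent to the verbose-but-correct form above:
    var temp = GetCustomer();
    if (temp != null)
        temp.Notify();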

Note that this is not a speculative concern. Codebases have shipped with real bugs because people opted for the former form versus the latter.

Our goal is to make the most correct form feel the nicest to write and maintain. Some languages opt to make the user write out verbose patterns over and over again to accomplish this, but we actually view that as a negative (you are welcome to disagree, of course). We think forcing users into unwieldy patterns everywhere ends up increasing the noise of the program and decreasing the signal. Having common patterns fade away, while becoming more correct (and often more performant), is what we see as a primary purpose of the language in the first place.

Thanks!

As a really long-term C# engineer, I feel quite strongly that C# has become a harder and harder language over time, with a massive overabundance of ways to do the same thing and tons of new syntactic sugar, so five different devs can write the same simple thing in five different ways!

At this point, even though I've been doing .NET since version 2, I get confused about which null checks I should be doing and what the new "right" and best syntax is. It's kind of becoming a huge fucking mess, in my opinion anyway.

If you want a kind of proof of this, see this documentation, which requires thousands of words to try to explain how to do null/nullable: https://learn.microsoft.com/en-us/dotnet/csharp/nullable-ref...

Do you think most C# devs really understand and follow this entire (complex and verbose) article?

The issues with null checks are easily avoided, though: just don't declare values as nullable.
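
A minimal sketch of what that looks like with nullable reference types enabled (Order and its members are hypothetical):

    #nullable enable

    class Order
    {
        // non-nullable: the compiler warns wherever this could be assigned null,
        // so readers never need ?. on it
        public string Id { get; set; } = "";

        // nullable only where null is actually meaningful
        public string? Note { get; set; }
    }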

C# grows because they add improvements but cannot remove older ways of doing things due to backwards compatibility. If you want a language without so much cruft, I recommend F#.

Thanks for stopping by to comment!

I'd love to see some good examples of those bugs you referred to, in order to get some more context.

Is the intent of the second form to evaluate only once, and cache that answer to avoid re-evaluating some_expr?

When some_expr is a simple variable, I didn't think there was any difference between the two forms, and always thought the first form was canonical. It's what I've seen in codebases forever, going all the way back to C, and it's always been very clear.

When some_expr is more complex, i.e. difficult to compute or mutable in my timeframe of interest, I'm naturally inclined to the second form. I've personally found that case less common (e.g. how exactly are you using nulls such that you have to bury them so deep down, and is it possible you're overusing nullable types?).
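
For concreteness, a sketch of the case where I assume the two forms actually differ: some_expr is a field that another thread can null out between the check and the use (Handler and Invoke are hypothetical names):

    // form 1: the field is read twice and can become null in between,
    // so the second read can throw NullReferenceException
    if (this.Handler != null)
        this.Handler.Invoke();

    // form 2: read once into a local, then check and use that snapshot
    var handler = this.Handler;
    if (handler != null)
        handler.Invoke();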

I appreciate what you're saying about nudging developers to the most correct pattern and letting the noise fade away. I always felt C# struck a good balance with that, although as the language evolved it feels like there's been a growing risk of "too many different right ways" to do things.

Btw, while you're here: I understand why increment/decrement could get complicated and why it isn't supported, but being forced to write car.Wheel?.Skids += 1 instead of car.Wheel?.Skids++ also feels odd.

When the first wave of null-check operators came out, our codebases filled up with ? operators. Luckily, I had used the operator in Swift and Rust, so I had some idea of what it can and can't do. Worse is the fact that, unlike in Rust, the ? operator only works on null. So people started to use null as an optional value, and I think that is the core problem of the feature; C# itself is not advertising or using it this way.

I think the nullable checks etc. are a great way to keep null-reference exceptions under control, but they can promote lazy programming as well. In code reviews, more often than not the question comes up when somebody uses ? either as an operator or as a nullable type like 'string?': are you sure the value should be nullable? And why are you hiding a bug with a conditional access when the value should never be null in the first place?
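
A minimal sketch of that review smell (customer and Name are hypothetical):

    // if Name should never be null, ?. here silently hides the bug
    var length = customer?.Name?.Length ?? 0;

    // better: let an impossible null fail loudly at the source
    var length2 = customer.Name.Length;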

And better? I'm not sure either.

In all these examples I feel something must be very wrong with the data model if you're conditionally assigning 3 levels down.

At least with the previous syntax, the annoyance of writing it might prompt you to fix it, and it's clear when you're reading it that something ain't right. Now there's a cute syntax to cover it up and pretend everything is okay.

If you start seeing question marks all over the codebase, most of us are going to stop transpiling them in our heads, start subconsciously filtering them out, and miss a lot of stupid mistakes too.

This is something I see in newbie or extremely lazy code. You have some nested object without a sane constructor and you have to conditionally construct a list three levels down.

This is a fantastic way to make such nasty behavior easier.

And agreed on the question-mark fatigue. This happened to a project at my last job. Because nullable reference types were disabled, everything had question marks, because you can't just wish away null values. So we all became blind, and several nullref exceptions persisted for far too long.

I'm not convinced this is any better.

Swift has had this from the beginning, and it doesn’t seem to have been a problem.

What?.could?.possibly?.go?.wrong?.

    if (This) {
        if (is) {
            if (much) {
                if (better) {
                    println("I get paid by the brace")
                }
            }
        }
    }

    if (Actually 
        && Actually.you
        && Actually.you.would
        && Actually.you.would.write
        && Actually.you.would.write.it
        && Actually.you.would.write.it.like) {
            return this;
    }

Routinely dealing with that is enough to put you off programming altogether.

False dichotomy. The problem is that the syntax implements a solution that is likely wrong in many situations and pairs with a bad program design. Maybe when we have this:

  what?.could?.possibly.go?.wrong = important_value()
Maybe we want code like this:

  if (what == null)
    what = new What(); // default-construct representative instance (What/Could/Go are stand-in types)

  if (what.could == null)
    what.could = new Could();

  if (what.could.possibly.go == null)
    what.could.possibly.go = new Go();

  // now assignment can take place and actually retain the stored value
  // since we may have allocated what, we have to be sure
  // we propagate it out of here.

  what.could.possibly.go.wrong = important_value();

and not code which throws away the value (and possibly its calculation).

Why would you ever write an assignment, but not expect that it "sticks"? Assignments are pretty important.

What if someone doesn't notice the question marks and proceeds to read the rest of the code thinking that the assignment always takes effect? Is that still readable?

> Maybe we want code like this

It should be clear enough that this operator isn't going to run 'new' on your behalf. For layers you want to leave missing, use "?.". For layers you want to construct, use "??=".
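
A minimal sketch of that split, reusing the hypothetical What/Could/Go stand-in types from the snippet above (plus a Possibly type for completeness):

  // leave missing layers missing: the assignment is silently skipped
  // when any ?-marked link is null
  what?.could?.possibly?.go?.wrong = important_value();

  // construct the missing layers instead, then assign unconditionally
  what ??= new What();
  what.could ??= new Could();
  what.could.possibly ??= new Possibly();
  what.could.possibly.go ??= new Go();
  what.could.possibly.go.wrong = important_value();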

> Why would you ever write an assignment, but not expect that it "sticks"? Assignments are pretty important.

If you start with the assignment, then it's important and you want it to go somewhere.

If you start with the variable, then if that variable doesn't have a home you don't need to assign it anything.

So whether you want to skip it depends on the situation.

> What if someone doesn't notice the question marks and proceeds to read the rest of the code thinking that the assignment always takes effect? Is that still readable?

Do you have the same objection with the existing null-conditional operators? Looking at the operators is important and I don't think this makes the "I didn't notice that operator" problem worse in a significant way.

Just because you can't do assignments like that, it doesn't mean you shouldn't use null-conditional access for reads. What exactly could go wrong?

Paranoid null checking of every property dereference everywhere (much?.like?.in?.my?.joke), whether each is ever possibly null or not, usually combined with not thinking through what the code behavior should be for each null case.

(Gets a lot better if you enable nullable references and upgrade the nullable reference warnings to errors.)
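
For reference, a sketch of that setup in the project file (standard MSBuild properties):

  <PropertyGroup>
    <Nullable>enable</Nullable>
    <WarningsAsErrors>nullable</WarningsAsErrors>
  </PropertyGroup>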

NullReferenceException, in line 7.

you didn't null check possibly.go.

Rather, what I didn't null check is "possibly". That's because it doesn't have a question mark in the original expression that I'm starting from.

I wonder: does the important_value function get called and the value discarded, or is it never called at all? Looks like a footgun if it has side effects.

Not calling the function would be evidence of further derangement in the design.

Such a thing has been perpetrated in C. In C, you can repeat designated initializers, like:

  foo f = { .a = x(), .a = y() };
The order in which the expressions are called is unspecified, just like function arguments (though the order of initialization /is/ specified; it follows the order of the initializer list).

The implementation is allowed to realize that since .a is being initialized again by y(), the earlier initialization is discarded, and it is permitted not to emit a call to x().

That's just like permitting x() not to be called in x() * 0 because we know the answer is zero.

Only certain operators in C short-circuit. And they do so with a strict left-to-right evaluation discipline: like 0 && b will not evaluate b, but b && 0 will evaluate b.

The initializer expressions are not sequenced, yet they can be short-circuited-out in left-to-right order.

Consistency? What's that ...

  if (!same) {
    return;
  }

  if (!number) {
    return;
  }

  if (!of_braces) {
    return;
  }

  println("but easier to read")

Yes, you should definitely unnest functions and exit early. But the null-conditional version is shorter still.

Nothing to worry about:

  What?.could?.possibly?.go?.wrong?
Not so convinced:

  What?.could?.possibly?.go?.wrong = important_value()
Maybe the design is wrong if the code is asked to store values into an incomplete skeleton, and it's just okay to discard them in that case.

Oh come on, just learn it properly. It's not a big deal to read.