This reminds me of an article posted on HN only a few days ago: Uncertain<T> [1]. I think a causality graph like this necessarily needs a concept of uncertainty to preserve nuance. I don't know whether that would be practical in terms of compute, but combining traditional NLP techniques with LLM analysis might make it so (rough sketch of what I mean below).

[1] https://github.com/mattt/Uncertain
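
As a very rough sketch of the idea (plain Python, not the library's actual Swift API; the class names and numbers here are made up), edges in the causal graph could carry a probability instead of a hard yes/no, and queries get answered by sampling, which is, as I understand it, the core trick behind Uncertain<T>:

    # Hypothetical sketch: causal edges as uncertain links, queried by
    # Monte Carlo sampling instead of asserting a hard cause -> effect.
    import random

    class UncertainEdge:
        """Made-up edge type: cause -> effect with an uncertain strength."""
        def __init__(self, cause, effect, strength):
            self.cause, self.effect, self.strength = cause, effect, strength

        def sample(self):
            # One Bernoulli draw: does the effect fire given the cause?
            return random.random() < self.strength

    def probability(edge, trials=10_000):
        # Estimate P(effect | cause) empirically from samples.
        return sum(edge.sample() for _ in range(trials)) / trials

    flu_to_fever = UncertainEdge("flu", "fever", strength=0.8)
    print("P(fever | flu) ~", round(probability(flu_to_fever), 2))

The point being that downstream nodes would inherit the uncertainty instead of collapsing every edge into a binary "causes".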

I get some vibes of fuzzy logic from this project.

A lot of current research goes in the direction of distinguishing "data uncertainty" from "measurement uncertainty", or "aleatoric" from "epistemic" uncertainty.

I found this tutorial (aimed at computer vision, but the ideas transfer) to be very intuitive; it gives a good understanding of how to use those concepts in other fields: https://arxiv.org/abs/1703.04977
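
To make the split concrete, here's a minimal sketch loosely following that paper's recipe (PyTorch, my own simplified code, not the authors'): the network predicts a mean plus a log-variance for the aleatoric (data) uncertainty, and Monte Carlo dropout at test time gives the epistemic (model) uncertainty as the spread across stochastic forward passes.

    import torch
    import torch.nn as nn

    class HeteroscedasticNet(nn.Module):
        def __init__(self, d_in=8, d_hidden=64):
            super().__init__()
            self.body = nn.Sequential(
                nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Dropout(p=0.2),
                nn.Linear(d_hidden, d_hidden), nn.ReLU(), nn.Dropout(p=0.2),
            )
            self.mean_head = nn.Linear(d_hidden, 1)
            self.log_var_head = nn.Linear(d_hidden, 1)  # predicted data noise (aleatoric)

        def forward(self, x):
            h = self.body(x)
            return self.mean_head(h), self.log_var_head(h)

    def heteroscedastic_loss(mean, log_var, target):
        # Negative Gaussian log-likelihood: a large predicted variance
        # down-weights the squared error but is itself penalised.
        return (0.5 * torch.exp(-log_var) * (target - mean) ** 2 + 0.5 * log_var).mean()

    @torch.no_grad()
    def mc_dropout_predict(model, x, samples=50):
        model.train()  # keep dropout active at test time
        means = torch.stack([model(x)[0] for _ in range(samples)])
        epistemic = means.var(dim=0)  # spread across stochastic forward passes
        return means.mean(dim=0), epistemic

    x = torch.randn(16, 8)
    mean, epistemic_var = mc_dropout_predict(HeteroscedasticNet(), x)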

Right. The first example on the site shows disease as a cause and death as an effect. This is wrong on several levels: there is no clean line between healthy and sick (you're always fighting something off; it just becomes obvious sometimes), and a disease doesn't necessarily lead to death, obviously.

Since you're always going to die, the problem solves itself - the implication is true because the right-hand side is always true, so the left-hand side doesn't matter.
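
In plain propositional-logic terms: p -> q is just (not p) or q, so a consequent that is always true makes the implication hold regardless of the antecedent.

    # Material implication: p -> q  ==  (not p) or q
    def implies(p, q):
        return (not p) or q

    for has_disease in (False, True):
        print(has_disease, "-> dies:", implies(has_disease, True))
    # Both rows print True: the statement is trivially satisfied,
    # which is why it carries no causal information.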

Then it’s correlation instead of causation, and the entire premise of a causation graph is moot.