Is the mapping of use cases to software functionality not a power-law distribution? Meaning there are a few use cases that have a disproportionate effect on the desired outcome if the software provides them?

It probably applies better to users of software, e.g. 80% of users use just 20% of the features in Postgres (or MS Word). This probably only holds, roughly, when both the number of features and the number of users are very large, and even then it's very rough, kinda obviously. (It could well be 80% / 5% in these cases!)
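
To make that concrete, here's a rough sketch of how sensitive the "top X% of features cover 80% of usage" number is to the shape of the usage distribution. The usage counts are entirely made up (an idealized Zipf-style popularity curve with a skew exponent s), so treat it as an illustration, not data:

```python
import numpy as np

def features_covering(usage_counts, target_share=0.80):
    """Fraction of features (most-used first) needed to cover `target_share` of all usage."""
    counts = np.sort(np.asarray(usage_counts, dtype=float))[::-1]  # most-used features first
    cumulative = np.cumsum(counts) / counts.sum()                  # cumulative share of usage
    needed = int(np.searchsorted(cumulative, target_share)) + 1    # features needed to hit target
    return needed / len(counts)

# Made-up example: 1,000 features whose popularity follows an idealized Zipf curve;
# the exponent s controls how skewed usage is.
ranks = np.arange(1, 1001)
for s in (0.8, 1.1, 1.5):
    share = features_covering(ranks ** -s)
    print(f"s={s}: top {share:.0%} of features account for 80% of usage")
```

With a mildly skewed curve you need a third or more of the features to reach 80% of usage; with a strongly skewed one you need only a percent or two. So "80/20" is just one possible outcome, not a law.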

For very simple software, most users use all the features. For very specialized software, there are very few users, and they use all the features.

> The claim is that it handles 80%+ of their use cases with 20% of the development effort. (Pareto Principle)

These are entirely different units! Development effort? How is this the Pareto Principle at all?

(To the GP's point, would "ls" cover 80% of the use cases of "cut" with 20% of the effort? Or would MS Word cover 80% of the use cases of PostgreSQL with 20% of the effort? Because the scientific Pareto Principle tells us so?)

Hey, it's really not important, just an idea that with Postgres you can cover a lot of use cases with a lot less effort than configuring/maintaining a Kafka cluster on the side, and that's plausible. It's just that some "nerds" who care about being "technically correct" object to the term "Pareto Principle" being used here to sound scientific; that bit is just nonsense.
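
For concreteness, this is roughly the kind of thing people mean by "just use Postgres" for queue-ish workloads: a plain table plus FOR UPDATE SKIP LOCKED (a real Postgres feature since 9.5) instead of running a broker on the side. The table name, columns, and connection string below are made up, so read it as a sketch rather than a recommendation:

```python
import psycopg2

# Hypothetical table:
#   CREATE TABLE jobs (id bigserial PRIMARY KEY, payload jsonb, done boolean DEFAULT false);
conn = psycopg2.connect("dbname=app")  # hypothetical connection string

def claim_one_job():
    """Atomically claim the oldest unprocessed job; concurrent workers skip locked rows."""
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT id, payload FROM jobs
            WHERE NOT done
            ORDER BY id
            LIMIT 1
            FOR UPDATE SKIP LOCKED
            """
        )
        row = cur.fetchone()
        if row is None:
            return None  # queue is empty, or every pending job is claimed by another worker
        job_id, payload = row
        # ... process payload here ...
        cur.execute("UPDATE jobs SET done = true WHERE id = %s", (job_id,))
        return job_id
```

Whether that actually covers "80% of the use cases" of a Kafka cluster is exactly the question being argued about; the point is only that the effort involved is clearly much smaller.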

You might be right, but does anyone have data to support that hypothesis?