You don't get sued for using a service as it's meant to be used (using an RSS reader on their feed endpoint; cloning repositories that it's their mission to host). First, it doesn't anger anyone, so they wouldn't bother trying to enforce a rule; second, it would be a fruitless case because the judge would say the claim they're making isn't reasonable.
Robots.txt is meant for crawlers, not user agents such as a feed reader or git client.
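(As a concrete illustration of how those rules are keyed to a crawler's identity rather than to an end-user tool, here's a minimal sketch using Python's standard `urllib.robotparser`; the bot names and URLs are made up:)

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: each rule group is keyed to a crawler's User-agent token.
robots_txt = """\
User-agent: BadBot
Disallow: /

User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("BadBot", "https://example.com/feed.xml"))        # False: blocked entirely
print(rp.can_fetch("SomeCrawler", "https://example.com/feed.xml"))   # True: the feed is not disallowed
print(rp.can_fetch("SomeCrawler", "https://example.com/private/x"))  # False: /private/ is disallowed
```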
I agree with you; generally you can expect good faith to be returned with good faith (though I want to stress that I only agree on the judge part iff good faith can be assumed and the judge is informed enough to actually make an informed decision).
But not everyone thinks that's the purpose of robots.txt. For example, quoting Wikipedia[1] (emphasis mine):
> indicate to visiting web crawlers *and other web robots* which portions of the website they are allowed to visit.
Quoting the linked `web robots` page[2]:
> An Internet bot, web robot, robot, or simply bot, is a software application that runs automated tasks (scripts) on the Internet, usually with the intent to imitate human activity, such as messaging, on a large scale. [...] The most extensive use of bots is for web crawling, [...]
("usually" implying that's not always the case; "most extensive use" implying it's not the only use.)
Also, a quick HN search for "automated robots.txt"[3] shows that a few people disagree that it's only for crawlers. They seem to be a minority, but the search results are obviously biased towards HN users, so it could be different outside HN.
Besides all this, there's also the question of whether web scraping (as opposed to crawling) should be subject to robots.txt, where "web scraping" includes any project like "this site has useful info but it's so unusable that I made a script so I can search it from my terminal, and I cache the results locally to avoid unnecessary requests".
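A minimal sketch of the kind of script I mean (the URL, cache location, and search logic are all made up for illustration):

```python
#!/usr/bin/env python3
"""Hypothetical "search the site from my terminal" script that caches each
fetched page locally so repeated searches don't re-hit the site."""
import pathlib
import sys
import urllib.parse
import urllib.request

CACHE_DIR = pathlib.Path.home() / ".cache" / "site-search"   # made-up cache location
PAGE_URL = "https://example.com/docs/index.html"             # made-up page

def fetch_cached(url: str) -> str:
    """Return the page body, fetching it only if it isn't cached yet."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    cached = CACHE_DIR / urllib.parse.quote(url, safe="")
    if cached.exists():
        return cached.read_text()              # cache hit: no request is made
    with urllib.request.urlopen(url) as resp:  # cache miss: one request, then stored
        body = resp.read().decode("utf-8", errors="replace")
    cached.write_text(body)
    return body

def search(term: str) -> None:
    """Print every line of the cached page that contains the term."""
    for line in fetch_cached(PAGE_URL).splitlines():
        if term.lower() in line.lower():
            print(line.strip())

if __name__ == "__main__":
    search(sys.argv[1] if len(sys.argv) > 1 else "example")
```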
The behavior of alternative viewers like Nitter could also be considered web scraping if they don't get their info from an API[4], and I don't know if I'd consider Nitter the bad actor here.
But yeah, like I said, I agree with your comment and your interpretation; it's just not the only interpretation of what robots.txt is meant for.
[1]: https://en.wikipedia.org/wiki/Robots.txt
[2]: https://en.wikipedia.org/wiki/Internet_bot
[3]: https://hn.algolia.com/?dateRange=all&query=automated%20robo...
[4]: I don't know how Nitter actually works or where it gets its data from; I just mention it so it's easier to explain what I mean by "alternative viewer".