> Just consider how IP address spoofing is still possible today and you'll begin to realize how broken the internet has always been long before you even get into dirt cheap residential smart toaster botnets.

I know I'm going to regret asking but, ok, I'll bite... why does IP address spoofing prove the Internet is broken? Especially considering that a) the point of internet routing is to route packets whenever possible, especially around damage, and b) by volume, the internet is TCP and you can't complete a handshake with a spoofed IP.

With spoofed src addr:

* you can do all sorts of udp amplification attacks (e.g. dns - send a small query with a large answer (classically ANY; zone transfers actually run over TCP) in a single packet with a spoofed source IP, and the IP you spoofed gets a lot of traffic in response.)

* you can do tcp syn or ack floods with a spoofed IP; these eat resources on the target machine. syn floods cause the os to allocate connection state and timers while it waits for the handshake's final ack.

* you can send lots of bad packets from a spoofed ip so that automated systems lock out those IPs as a response to attack traffic. If those lockouts block IPs that should be allowed, that's a denial of service in itself.

And plenty more.
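The amplification point in the first bullet is really just arithmetic. A back-of-envelope sketch (all byte counts and rates here are illustrative assumptions, not measurements):

```python
# Back-of-envelope sketch of UDP amplification with a spoofed source.
# The byte counts are illustrative assumptions, not measurements.

def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Response traffic delivered to the victim per byte the attacker sends."""
    return response_bytes / request_bytes

# A small DNS query (~60 bytes on the wire) can elicit a multi-kB answer.
factor = amplification_factor(request_bytes=60, response_bytes=3000)

# With spoofing, the attacker's upstream cost is only the queries; the
# reflector pays for (and aims) the responses at the victim.
attacker_bps = 10_000_000            # 10 Mbit/s of spoofed queries (assumed)
victim_bps = attacker_bps * factor

print(f"amplification factor: {factor:.0f}x")
print(f"victim sees ~{victim_bps / 1e6:.0f} Mbit/s")
```

The spoofed source address is what makes this work at all: without it, the responses would come straight back to the attacker.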

I'm not terribly impressed... this reads like a response from an LLM. Yeah I know what kind of packets you can send with a spoofed source IP... But the question was, given these are all decades old, how does that prove the Internet is broken?

The point of the internet is to provide a robust communications platform. If fundamental infrastructure of that communications platform can be abused to deny communications, and further the abuse can continue with the root cause unaddressed for decades, then the platform is broken.

The fact that routing is designed to go around damage is orthogonal to this, and has no bearing on the fact that the communications platform can be used against itself to prevent communications (via spoofed IPs).

For literal decades partial solutions to the spoofing problem have been known - reverse-path (rp) filtering would eliminate a lot of problems, yet still isn't close to universal.
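For what it's worth, on a Linux edge router strict reverse-path filtering is a one-line sysctl. A sketch of what that looks like (settings shown are real Linux knobs; whether strict mode is safe depends on your routing being symmetric):

```
# /etc/sysctl.d/ fragment: reverse-path filtering per RFC 3704.
# 1 = strict (drop packets whose source address isn't routed back out the
#     interface they arrived on), 2 = loose. Strict mode can break
#     asymmetric-routing setups, which is one reason it isn't universal.
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
```

The catch the thread gets at: this only helps when deployed near the edge, where the router actually knows which sources are legitimate.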

BGP has been vulnerable to all sorts of simple human mistakes for decades, and decades-old solutions like the IRRs are only slowly being adopted because many of the people that run the internet are too busy pretending they are important and good at building systems to actually make the systems good. When those same simple mistakes are made intentionally, all sorts of IP traffic can be spoofed, including full TCP connections.
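For readers unfamiliar with the IRRs: they're databases of RPSL objects that let networks declare which AS is allowed to originate a prefix, so peers can filter bogus announcements. A sketch of a route object (documentation prefix and private ASN, maintainer name made up):

```
route:      192.0.2.0/24
origin:     AS64500
descr:      illustrative route object (RFC 5737 prefix, private ASN)
mnt-by:     MAINT-EXAMPLE
source:     RADB
```

If everyone registered objects like this and everyone built filters from them, an accidental (or intentional) announcement of someone else's prefix would simply be rejected at the peer's edge.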

The fact that there isn't a widely supported way for the consequences of spoofing to be mitigated without paying through the nose for a 3rd party service is pretty broken too. Allowing destinations to be overwhelmed without any sort of backpressure or load shedding is a fundamental flaw in "get packets to the destination no matter what". An AS should be able to say "I no longer want packets from this subnet", and have it honored along the entire path. This should be a core feature, not an add-on from some providers.
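The closest thing that exists today is remotely-triggered blackholing: the victim announces the attacked prefix tagged with the well-known BLACKHOLE community (65535:666, RFC 7999), asking upstreams that honor it to discard that traffic at their edge. A sketch in Cisco-style syntax (prefix-list and route-map names are made up, and support varies by provider):

```
! RTBH sketch: tag the attacked prefix so upstreams drop traffic for it.
route-map RTBH permit 10
 match ip address prefix-list ATTACKED-HOST
 set community 65535:666
```

Note the limitation, which supports the "missing feature" complaint: this blackholes traffic *to* the victim prefix (completing the DoS for that host), rather than letting the victim refuse traffic *from* a source subnet along the whole path.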

The internet does work as designed. However, it's folly to think that the first attempt at building something so different from anything that came before it is the best way to do it, and refusing to revisit those design decisions is itself fundamentally broken.

First, I want to say thanks for the interesting reply! it's refreshing to read good arguments on HN again :)

> The fact that routing is designed to go around damage is orthogonal to this, and has no bearing on the fact that the communications platform can be used against itself to prevent communications (via spoofed IPs).

> For literal decades partial solutions to the spoofing problem have been known - reverse-path (rp) filtering would eliminate a lot of problems, yet still isn't close to universal.

It's orthogonal, and yet in the places where it would actually matter or have the strongest effect, it's not used? I wonder why... 'cept not really. The Internet seems to still be functioning pretty well for something fundamentally broken. For the vast majority of internet routers it's entirely reasonable to accept any source IP from any peer, because it is impossible to prove that the peer can't reach somebody else. The exception is the huge number of endpoint ISPs who shouldn't be sending these packets, and it's on them to filter them. I would love a way to identify and punish these actors for their malfeasance, but I'm not willing to add a bunch of complexity to do so.

> because many of the people that run the internet are too busy pretending they are important and good at building systems to actually make the systems good.

wow, that's a super toxic comment... and I'm an asshole saying that.

> Allowing destinations to be overwhelmed without any sort of backpressure or load shedding is a fundamental flaw in "get packets to the destination no matter what".

one man's fundamental flaw is another's design trade-off... every single system that has ever seen widespread adoption has defaulted to open and permissive. every. single. one. it's only after widespread adoption that anything ever adds in restrictions and rules, and most often when it does it's seen as the enshittification of something (most often because exerting control allows you to vampirically extract more value). but dropping packets when one system is overloaded is exactly what the internet does do; what you're describing sounds more like TCP working around it. (poorly, admittedly)
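To make that last point concrete: an overloaded router's only "backpressure" is tail drop, silently discarding what doesn't fit; it's the sender's transport (TCP) that has to notice the loss and slow down. A toy sketch, all names and sizes mine:

```python
# Toy tail-drop queue: the router gives no signal to the sender, it just
# drops what doesn't fit. Purely illustrative; capacity is made up.
from collections import deque

class TailDropQueue:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.queue: deque[int] = deque()
        self.dropped = 0

    def enqueue(self, packet: int) -> bool:
        if len(self.queue) >= self.capacity:
            self.dropped += 1      # silent loss; the sender must infer it
            return False
        self.queue.append(packet)
        return True

    def drain(self, n: int) -> None:
        """Forward up to n queued packets out the egress link."""
        for _ in range(min(n, len(self.queue))):
            self.queue.popleft()

q = TailDropQueue(capacity=10)
for pkt in range(25):              # burst of 25 packets into a 10-slot queue
    q.enqueue(pkt)
print(f"queued={len(q.queue)} dropped={q.dropped}")
```

Spoofed-source floods break the feedback half of this loop: the "sender" never sees the loss and never backs off, which is why drop-based congestion control alone can't shed an attack.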

> An AS should be able to say "I no longer want packets from this subnet", and have it honored along the entire path. This should be a core feature, not an add-on from some providers.

I could not agree more. but this is a missing feature, not a fundamental flaw. the internet still works for the vast, vast majority of users, and as I've said in a different thread the use of or dependency on cloudflare is often a skill issue, not a requirement.

You're 100% correct, core internet routing has many fixable defects, and many ISPs are moving slower than could reasonably be considered ethical or competent. But for infrastructure this core I would actually prefer slow and careful over break-everything-on-a-whim because of the "move fast" mind virus that has overtaken CS.

> The internet does work as designed. However, it's folly to think that the first attempt at building something so different from anything that came before it is the best way to do it, and refusing to revisit those design decisions is itself fundamentally broken.

It's also needlessly absolutist to say every defect is a fundamental design decision. The Internet was built to support trusted peering relationships, where if someone was being abusive, you'd call your buddy and say "fix your broken script". The core need the internet is now supporting is wildly different, and this "fundamental design flaw" is actually just user error. If you strap a rocket engine on a budget sedan, it's not a design flaw when the whole thing explodes. If you're going to add untrustworthy peers to your network you also have to add a way to deal with them. that's a missing feature, not a design flaw.