> It's the entire reason "internet" standards won over "telco" (in this case ITU) standards - the latter could only be deployed by big coordinated efforts,
Anyone remember the promise of ATM networking in the 90s? It was telecom-grade networking that used circuit switching to handle voice, video and data down one pipe. Instead of carelessly flinging packets into the ether like a savage, you had a deterministic network of pipes. You called a computer as if it were a telephone (or maybe that was Datakit?) and ATM handed the user a byte stream like TCP. Imagine never needing an IP stack or setting traffic priority because the network already handles the QoS. Was it simple to deploy? No. Was it cheap? Nooohooohooohooo. Was Ethernet any of those? YES AND YES. ATM was superior but lost to the simpler and cheaper Ethernet, which was pretty crappy in its early days (thinnet, thicknet, terminators, vampire taps, AUI, etc.) but good enough.
The funny part is this has the unintended consequence of needing to reinvent the wheel once you get to the point where you need telecom-scale infrastructure. Ethernet had to adapt to deterministic real-time needs, so various hacks and standards have been developed to paper over these deficiencies, which is what TSN is: reinventing ATM's determinism. In addition we now also have OTN, yet another protocol to paper over the various other protocols and mux everything down a big fat pipe to the other end, which allows Ethernet (and IP/ATM/etc.) to ride deterministically between data centers.
> Ethernet had to adapt to deterministic real-time needs
Without being able to get too into the telco detail, I think the lesson was that hard realtime is both much harder to achieve and not actually needed. People will happily chat over nondeterministic Zoom and Discord.
It's both psychological and slightly paradoxical. Once you let go of saying "the system MUST GUARANTEE this property", you get a much cheaper, better, more versatile and higher bandwidth system that ends up meeting the property anyway.
> not actually needed
What you need is more than enough bandwidth.
Think of the difference between a highway with few cars versus a highway filled to the brim with cars. In the latter case traffic slows to a crawl even for ambulances.
It seems like it was just cheaper and easier to build more bandwidth than it was to add traffic priority handling to internet connectivity.
And that cheaper bit I think just came from reduced complexity. Things like ATM circuits and other similarly reliable and predictable methods needed a lot of hand-holding. You needed to provision an ATM circuit, you needed to make sure across the network that the path was there, capable, maintained, and configured, and you had visibility end to end.
That was a selling point, because "hey we guarantee this circuit" but it was also very expensive and labor intensive
Whereas just dumping your bits into the internet and letting the network figure it out outsourced a lot of that complexity to every hop along the network you didn't own. But because they care about their networks, everyone would (in theory) make sure each hop was healthy, so you didn't need to hand-hold your circuit or route completely end to end.
I saw a story once, which may well be completely made up, about why AT&T got out of the cell phone business. They had a research project, but reliability was an issue. They couldn't see a way to do better than 1 dropped call in 10,000. Their standard for POTS at the time was 1 in 2 billion.
Seeing that the tech would never be good enough, they sold off the whole thing for cheap. Years later, they bought it back for way, way more money because they desperately needed to get into the cell phone business that was clearly headed to the moon.
I totally understand the pride they had in the reliability of their system, but it turns out that dropped calls just aren't that big of a deal when you can quickly redial and reconnect.
Seems a little sus. AT&T basically created the cellular mobile phone, and built up an analog, then digital system (D-AMPS/TDMA). AT&T sort of sold out the mobile business in 2004 to Cingular (BellSouth) because TDMA was a dead end. They then bought BellSouth back in 2006 and carried on with CDMA.
Those old phones had a long range. It was hard to make small ones because the old AT&T towers were much farther apart, up to 40km. Meanwhile, their competitors focused on smaller coverage areas (e.g. 2km or less for PCS) and better tech (CDMA), and it seemed to pay off.
Early cell phones were so limited it's sort of amazing they gained adoption. They were big (literally the size of a brick), heavy, and expensive. Battery life was poor. The EM radiation was possibly harmful. Due to all of that, most mobile phones were permanently installed and could only be used in a car. Plans were either pay-by-the-minute or had pre-paid minutes with expensive charges if you went over. Roaming off your local network was crazy expensive... something like dollars per minute. Texting wasn't even a thing at the time; most phones only had a 10-character display. Voice quality was poor and calls often dropped or would not connect.
It wasn't until phones shrank and service got cheaper that consumer adoption took off. Businesses and early adopters will pay even if the product is inconvenient and costly to use, as long as the benefit exceeds the cost.
This is a minor detail, but the "AT&T" that bought BellSouth in 2006 was the AT&T formerly known as SBC which bought the husk of Ma Bell and rebranded itself, i.e. the AT&T we have today.
Yes, AT&T was hollowed out because Long Lines was the family jewel and it became commoditized in the late 90s/early 2000s by overinvestment in competitors.
Yeah, big differences between an absolute guarantee and "we'll take as much as we can get"
> People will happily chat over nondeterministic Zoom and Discord.
Well, not "happily". (Doesn't every video conference do the "hold on, can you hear me? I have wifi issues" dance every other day?) But it works on a good day.
At work it became mostly flawless. Everybody is used to it and people can jump into calls quickly when chat discussions etc. don't suffice. The glitches are on a comparable level to physical meetings, where somebody comes in late and disturbs everyone while getting settled, or somebody speaks too quietly for the room.
In my club, however, where people don't have frequent video meetings, there is always somebody with trouble whenever there's a virtual club meeting ... often the same person.
I was there for ATM, and I'm so freaking glad it lost. It's a prime example of "a camel is a horse designed by committee". A 53 byte cell with a 48 byte payload? Of course! What an excellent idea! We definitely want a 10% overhead on a ludicrously small packet, just so it has tolerable voice latencies if you scale it down to run on a 64Kb DS0, never mind that literally everything in the industry was scaling up to fatter pipes.
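To put rough numbers on the cell-size complaint, here's a back-of-the-envelope sketch (plain Python; the 5-byte header and the 8,000 bytes/s of a 64Kb DS0 are the standard figures, the rest is just arithmetic):

    # ATM cell math: 53-byte cell, 5-byte header, 48-byte payload
    CELL = 53
    HEADER = 5
    PAYLOAD = CELL - HEADER          # 48 bytes of voice per cell

    overhead = HEADER / CELL         # ~9.4% of every cell is header
    # time to fill one payload from a 64 kbit/s (8,000 bytes/s) voice channel:
    fill_ms = PAYLOAD / 8000 * 1000  # 6 ms of packetization delay per cell

    print(f"overhead: {overhead:.1%}, fill delay: {fill_ms:.0f} ms")

So the tiny cell buys you about 6 ms of packetization delay on a single DS0, at the cost of nearly 10% header overhead on every link, fat or thin.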
ATM was nifty if you had a requirement of establishing voice-style, i.e. billable, connections. No thanks. It was an interesting technology but hopelessly hobbled by the desire to emulate a voice call that fit into a standard invoice line.
If you’re primarily concerned with shuffling low latency voice around the place, and you want to do hardware forwarding on relatively inexpensive silicon, then that cell size is entirely sensible.
That approach of course didn’t age well when voice almost became a niche application.
Thus its acronym, A Technical Mistake. Or, from the telco side, A Tariffing Mechanism.
note that it was 'tolerable latency without echo cancellation in France'; most other places had long enough latency that they needed it anyway. and of course now everything needs echo cancellation.
I think standards are important, and I'm sad that no one bothers anymore, but stuff like this, and the inclusion of interlace in digital video for that little 3-year window when it might have mattered, really does sour one on the process.
I'd forgotten about the French connection here.
BTW, I searched Kagi for "tolerable latency without echo cancellation in France" and saw your comment. Wow. I didn't realize web crawlers were that current these days.
Kagi is a Google frontend, and Google appears to index Hacker News in real time. I bet they have written site-specific code to scrape /comments continuously.
Not The Silliest Contrivance to happen to video standards :P
ATM was superior in the context of a bill-by-the-byte, telco-style network where oversubscribed links could be carefully planned. The "impedance mismatch" of IP's unreliable datagram delivery with ATM's guaranteed cell delivery created situations where ATM switches could effectively need unlimited buffer RAM to make their delivery guarantees, even if the cells contained IP datagrams that could just be discarded with no ill consequences.
There's likely an element of the "layering TCP on TCP" problem going on, too.
The classic popular treatment of the subject is: https://www.wired.com/1996/10/atm-3/
It was designed by people who were trying to digitally emulate 1920s copper-wire circuits at a time when the entire world was moving to packet-switched digital data. I remember visiting a large telco at the time and having to tell them about this new thing called ADSL that was going to steamroller them if they weren't careful. "Nooo... no, that's not real, you can't do that over a phone line, not possible. And even if it was it'll never take off, if anyone really wants a digital link they can go with our X.25 or ISDN offerings".
I pointed out in a previous post how much X.400 sucked, but even that never got anywhere near X.25. X.25 is the absolute zero on any networking scale: the scale starts with X.25 at -273°C and goes up from there.
The ironic thing is many telephone companies ended up using ATM to serve ADSL. My childhood home in rural Alabama still only has one terrestrial internet option, and that is a 6-megabit ADSL line from AT&T (originally BellSouth) served out of a remote terminal that is fed by an ATM circuit from the Central Office. My brother lives in that house and is prepared to get a notice from AT&T any day now that they're finally discontinuing it.
These days I think all of AT&T's flavors of DSL, including their IPTV-supporting VDSL, are considered 'legacy', but for the longest time their "IP-DSL" was the future, and for 15+ years they've been trying to shed this ATM-based DSL.
atm did not have cell delivery guarantees. it did have per-connection qos negotiation that could include the loss probability as one of the many metrics that were supported. the only way to provide 'zero loss' is to implement hop-by-hop error detection and retransmission, which is only really done in HPC networks, and some satellite transport schemes where the loss is high and bursty and the latency is high.
however, actually building a functional routing infrastructure that supported QoS was pretty intractable. that was one of several nails in ATM's coffin (I worked a little on the PNNI routing proposal).
edit: I should have admitted that yes, loss does have a relationship to queue depth, but that doesn't result in infinite queues here. it does mean that we have to know the link delay and the target bandwidth and have per-flow queue accounting, which isn't a whole lot better really. some work was done with statistical queue methods that had simpler hardware controllers - but the whole thing was indeed a mess.
My college went all-in on ATM-over-fiber and wired all the dorm rooms with it. It was a PITA. Of course no computers came with ATM support and the cards cost $400+ each, so the school had hundreds of cards that they would “lease” out to students each year. There would be a huge “install depot” at the start of the year where students brought in their (desktop) computers and volunteers would open them up, install the cards, install drivers and configure them for our network.
For Linux heads, it was doubly annoying, as ATM was not directly supported in the kernel. You had to download a separate patch to compile the necessary modules, then install and run three separate system daemons, all with the correct arguments for our network, just to get a working network device. And of course you had to download all the necessary packages with another computer, since you couldn’t get online yet. This was the early 2000s, so WiFi was not really common yet.
Even once you got online, one of the daemons would randomly crash every so often and you'd have to restart to get back online. It was such a pain.
Wait, they did ATM to the desktop?
What year was that? I could see a college/university network department dominated by old school telecom guys deciding to use ATM to connect buildings, but it's kind of insane to think anyone at any point in time thought it was reasonable to push to individual endpoints
Pretty sure TSN is unrelated to ATM determinism, and comes from a completely separate area (replacing custom fieldbuses where timing and contention are more important than bandwidth). Some of ATM's complexity came from wanting to deliver the same quality of experience as plesiochronous networks provided for voice (that's how it got the weird cell size).
Once those requirements dropped away (partially because people just started to accept weird echo), the replacement became MPLS and whatever you can send IP over, where Ethernet sometimes shows up as framing around the IP packet but has little relation to classic Ethernet otherwise.
Not directly related but a consequence.
ATM semantics and TSN semantics are quite different, the closest overlap would be in AFDX (avionics full duplex ethernet) except AFDX creates static circuits
Was it actually superior though? The usual treatment is that packet switching works better at the scale of the internet. With voice, hogging a whole line works, but for the internet it makes more sense to slow everybody down when congestion occurs rather than preventing some people from connecting at all. I get why the telecoms would have you waste your bandwidth reserving a connection you don't need, and I get why they would try and sell that as a superior solution because of some nonsense about reliability, but I don't see it as providing much benefit to the user.
One reason I heard the internet works as well as it does is that it inverts the Bell System: where the Bell System is a smart network with dumb edge devices, the internet is a dumb network with smart edge devices. The reason this is supposed to be better is that it is much, much easier to upgrade the network.
And this sort of checks out: most of the complaints about the internet architecture come when someone starts putting smart middle boxes in a load-bearing capacity, and now it becomes hard to deploy new edge devices.
> Instead of carelessly flinging packets into the ether like a savage, you had a deterministic network of pipes
I love this. Ethernet is such shit. What do you mean the only way to handle a high-speed to lower-speed link transition is to just drop a bunch of packets? Or to send PAUSE frames, which work so poorly that everyone disables flow control.
Wait, are you serious? This is how it works?
Yes: https://fasterdata.es.net/performance-testing/troubleshootin.... A simplistic TCP server will blast packets on the link as fast as it can, up to the size of the TCP receive window. At that point it’ll stop transmitting and wait for an ACK from the client before sending another window’s worth of packets.
To handle a speed transition without dropping packets, the switch or router at the congestion point needs to be able to buffer the whole receive window. It can hold the packets and then dribble them out over the lower speed link. The server won’t send more packets until the client consumes the window and sends an ACK.
But in practice the receive window for an Internet scale link (say 1 gigabit at 20 ms latency) is several megabytes. If the receive window was smaller than that, the server would spend too much time waiting for ACKs to be able to saturate the link. It’s impractical to have several MB of buffer in front of every speed transition.
Instead what happens is that some switch or router buffer will overflow and drop packets. The packet loss will cause the receive window, and transfer rate, to collapse. The server will then send packets with a small window so it goes through. Then the window will slowly grow until there’s packet loss again. Rinse and repeat. That’s what causes the saw-tooth pattern you see on the linked page.
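If you want to see where "several megabytes" comes from, the bandwidth-delay product for the example numbers above works out like this (a quick Python sketch, nothing more):

    # bandwidth-delay product: roughly how much data is "in flight" on a
    # saturated link, i.e. how big the receive window (and any buffer at a
    # speed transition) would have to be to keep the pipe full
    link_bps = 1_000_000_000    # 1 Gbit/s, the example above
    rtt_s = 0.020               # 20 ms round trip

    bdp_bytes = link_bps / 8 * rtt_s
    print(f"{bdp_bytes / 1e6:.1f} MB in flight")   # ~2.5 MB

That ~2.5 MB is what a switch at the speed transition would need to buffer per flow to never drop a packet, which is why in practice nobody does it.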
I experienced this with a VDI project when we mistakenly got 25Gb links delivered to the hosts.
We were expecting to get some sort of unbelievably fast internet experience, but it was awful as the internet gateway was 1 Gb or something similar.
Heh heh. If that shocks you, search engine for "bufferbloat" and prepare to be horrified.
This is how old-school TCP figures out how fast it can send data, regardless of the underlying transport. It ramps up the speed until it starts seeing packet loss, then backs off. It will try increasing speed again after a bit, in case there's now more capacity, and back off again if there's loss.
You can gain a bit of performance here by tuning it so it will never exceed the true speed of the link, which is only really useful when you know what that is and can guarantee it.
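For anyone who hasn't seen it spelled out, that probe-and-back-off loop is the classic additive-increase/multiplicative-decrease idea. A toy sketch (purely illustrative, not any real TCP stack):

    # toy AIMD loop: grow the congestion window until the link drops
    # packets, then halve it and start probing again
    def aimd(capacity=10, rounds=30):
        cwnd = 1.0                   # congestion window, in segments
        for _ in range(rounds):
            if cwnd > capacity:      # sent more than the link can carry: loss
                cwnd /= 2            # multiplicative decrease (back off)
            else:
                cwnd += 1            # additive increase (probe for headroom)
            yield cwnd               # plot this and you get the sawtooth

    print([round(w, 1) for w in aimd()])

Plotting the output gives exactly the sawtooth pattern mentioned upthread.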
Anyone remember the incredible disrepute of the phone company in the 80s?
We just wanted our own stuff. We did not want to coordinate with a proprietary vendor to network or be charged by the byte to do so.
And for a while, telco engineers tried to retrofit Internet to their purposes.
I worked on a network that used RSVP ( https://en.wikipedia.org/wiki/Resource_Reservation_Protocol ) to emulate the old circuit-switched topology. It was kinda amazing to see how it could carve guaranteed-bandwidth paths through the network fabric.
Of course, it also never really worked with dynamic routing and brought in tons of complexity with stuck states. In our network, it eventually was just removed entirely in favor of 1gbit links with VLANs for priority/normal traffic.
I started my career at France Telecom's R&D lab in Caen, Normandy. They had their own home-grown X.400 email client, and even though they could have set up an SMTP server for free, they deliberately chose to MX to a paid SMTP-to-X.400 gateway out of OSI ideology.
It was complete garbage.
Another lab of theirs made a Winsock that would use ATM SVCs instead of TCP, and proudly put out a brochure extolling their achievement: "Web protocol without having to use TCP". Because clearly it was TCP hindering adoption of the Web /s
The Bellhead vs. Nethead was a real thing back then. To paraphrase an old saying about IBM, Telcos think if they piss on something, it improves the flavor.
One of the jobs I applied for out of college was to lead Schengen's central police database (think stolen car reports, arrest warrants, etc.), which would federate national databases. For some unfathomable reason, they chose X.400 as the messaging bus for that replication, and endured massive delays and cost overruns for that reason. I guess I dodged a bullet by not going there.