ATM was superior in the context of a bill-by-the-byte telco-style network where oversubscribed links could be carefully planned. The "impedance mismatch" of IP's unreliable datagram delivery with ATM's guaranteed cell delivery created situations where ATM switches could effectively need unlimited buffer RAM to make their delivery guarantees, even when the cells contained IP datagrams that could simply have been discarded with no ill consequences.
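As a back-of-the-envelope illustration of that mismatch (a sketch with made-up numbers, assuming classical IP-over-ATM with AAL5, where losing any single cell means the whole reassembled frame is junk):

    # Illustrative only: how per-cell loss gets amplified into per-packet loss
    # when a 1500-byte IP datagram is carved into 48-byte AAL5 cell payloads.
    cells_per_packet = (1500 + 8 + 47) // 48        # payload + 8-byte AAL5 trailer, rounded up -> 32 cells
    cell_loss = 1e-4                                # assumed per-cell drop probability
    packet_loss = 1 - (1 - cell_loss) ** cells_per_packet
    print(cells_per_packet, round(packet_loss, 5))  # 32 cells, ~0.0032 effective packet loss

So a switch either buffers deeply enough to keep cell loss tiny, or every dropped cell costs it a whole packet's worth of other cells it carried for nothing (which is what the later early/partial packet discard schemes tried to mitigate).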
There's likely an element of the "layering TCP on TCP" problem going on, too.
The classic popular treatment of the subject is: https://www.wired.com/1996/10/atm-3/
It was designed by people who were trying to digitally emulate 1920s copper-wire circuits at a time when the entire world was moving to packet-switched digital data. I remember visiting a large telco at the time and having to tell them about this new thing called ADSL that was going to steamroller them if they weren't careful. "Nooo... no, that's not real, you can't do that over a phone line, not possible. And even if it was it'll never take off, if anyone really wants a digital link they can go with our X.25 or ISDN offerings".
When I pointed out in a previous post how much X.400 sucked, even that never got anywhere near X.25. X.25 is the absolute zero of any networking scale: the scale starts with X.25 at -273°C and goes up from there.
The ironic thing is that many telephone companies ended up using ATM to serve ADSL. My childhood home in rural Alabama still has only one terrestrial internet option: a 6-megabit ADSL line from AT&T (originally BellSouth), served out of a remote terminal that is fed by an ATM circuit from the Central Office. My brother lives in that house and is prepared to get a notice from AT&T any day that they're finally discontinuing it.
These days I think all of AT&T's flavors of DSL, including their IPTV-supporting VDSL, are considered 'legacy', but for the longest time their "IP-DSL" was the future, and for 15+ years they've been trying to shed this ATM-based DSL.
ATM did not have cell delivery guarantees. It did have per-connection QoS negotiation that could include the loss probability as one of the many metrics that were supported. The only way to provide 'zero loss' is to implement hop-by-hop error detection and retransmission, which is only really done in HPC networks and in some satellite transport schemes where the loss is high and bursty and the latency is high.
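For context, the negotiated contract looked something like this (a toy sketch using the ATM Forum traffic-management parameter names, not any real switch API):

    # Toy illustration: loss (CLR) was just one negotiated target among several
    # in a per-VC traffic contract, not a promise of zero loss.
    from dataclasses import dataclass

    @dataclass
    class TrafficContract:
        service_category: str    # e.g. "CBR", "rt-VBR", "UBR"
        pcr: float               # peak cell rate (cells/s)
        scr: float               # sustainable cell rate (cells/s)
        mbs: int                 # maximum burst size (cells)
        clr: float               # cell loss ratio the network targets
        max_ctd: float           # maximum cell transfer delay (s)

    vc = TrafficContract("rt-VBR", pcr=10_000, scr=4_000, mbs=200, clr=1e-7, max_ctd=0.01)

Admission control then had to decide whether it could take on that VC and still hit the loss and delay numbers it had already promised everyone else, which is where the per-hop queueing math comes in.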
However, actually building a functional routing infrastructure that supported QoS was pretty intractable. That was one of several nails in ATM's coffin (I worked a little on the PNNI routing proposal).
Edit: I should have admitted that yes, loss does have a relationship to queue depth, but that doesn't result in infinite queues here. It does mean that we have to know the link delay and the target bandwidth and keep per-flow queue accounting, which isn't a whole lot better, really. Some work was done with statistical queueing methods that had simpler hardware controllers, but the whole thing was indeed a mess.
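To make that per-flow accounting concrete (an illustrative sketch with assumed numbers, just the bandwidth-delay arithmetic, not any real scheduler):

    # Illustrative: per-VC buffer needed to ride out a burst without loss,
    # once you know the link delay and the negotiated rate for that flow.
    link_delay_s = 0.005          # assumed one-way link delay
    flow_rate_bps = 155e6         # assume this flow is allowed a full OC-3's worth
    cell_bits = 53 * 8            # one ATM cell

    buffer_cells = flow_rate_bps * link_delay_s / cell_bits
    print(round(buffer_cells))    # ~1828 cells of dedicated buffer for one flow

Multiply that by thousands of VCs, each tracked separately, and it's clear why the statistical shortcuts looked attractive and why the hardware still ended up messy.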