> It...handles 50,000 transactions per second with sub-100ms latency on hardware that costs a fraction of an equivalent cloud footprint. It has been doing this for 60 years.
Eat that, Bitcoin.
50,000 transactions a second is a bunch for humans.
It’s nothing for even an ancient CPU, let alone our modern marvels that make a Cray-1 cry.
The key is an extremely well-thought-out and tested design.
It was mostly written in COBOL. This is how I got into XENIX/UNIX. A training machine went down, and replacing it required XENIX on a PC, because the system ran on RM/COBOL on XENIX, because all the screens and the encoding already written were in COBOL. RM/COBOL had ancient compatibility, but the code was extremely simple, having been ironed out many decades earlier. (I got it in 1985, but all the creation dates were 1982. The original files must have been from the mid-1960s. I pointed this out, and someone called me on it. I found a training manual from 1966, and the screens were exactly the same, except of course for the 3270 status line.)
It was fast on a 4.77 MHz IBM PC, and much faster on a 10 MHz V20.
50,000 transactions per second was pretty standard for an IBM mainframe. Now? The zSeries is still about the same, but it scales up to 32 processors. (Excuse me: billions per day.)
Most of the systems used for GDSs were in fact written in Assembler. I know, as I have worked with a number of them. This is one reason, though not the only one, that the different systems have remained on TPF machines.
And there are people who think virtualization was invented on the 386 :)
I mean, you can easily do 100K TPS on an M1 with SQLite and a dynamic language, with sub-100ms latency.
People don't do it because it's not fashionable (the cool kids are all on AWS with hundreds of containers hosting thousands of microservices, because that's web scale).
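A rough sketch of that claim in Python, assuming batched inserts into SQLite in WAL mode (the table and pragmas here are illustrative, not a real benchmark):

```python
import sqlite3, time

conn = sqlite3.connect("bookings.db")
conn.execute("PRAGMA journal_mode=WAL")    # readers don't block the writer
conn.execute("PRAGMA synchronous=NORMAL")  # fsync at checkpoints, not per commit
conn.execute("CREATE TABLE IF NOT EXISTS txns (id INTEGER PRIMARY KEY, amount REAL)")

N = 100_000
start = time.perf_counter()
with conn:  # one enclosing transaction, committed once at the end
    conn.executemany(
        "INSERT INTO txns (amount) VALUES (?)",
        ((float(i),) for i in range(N)),
    )
print(f"{N / (time.perf_counter() - start):,.0f} inserts/sec")
```

The throughput comes from batching: committing per statement would be orders of magnitude slower, since each commit forces a sync to disk.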
Well, transactions in this context are business transactions, which may involve one or N remote calls. Imagine checks against no-fly lists, fraud detection, flight delays, and so on (see the sketch below). The speed of light is also a concern. So it’s not as simple as doing 35k TPS on a local SQL database.
But yes, you don’t always need cool technologies.
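To illustrate the fan-out, here’s a hypothetical sketch of one business transaction running its checks concurrently (the check functions and their latencies are invented for illustration):

```python
import asyncio

async def check_no_fly(passenger):
    await asyncio.sleep(0.02)  # stand-in for a ~20 ms remote call
    return True

async def check_fraud(passenger):
    await asyncio.sleep(0.03)  # stand-in for a ~30 ms remote call
    return True

async def book(passenger):
    # Fan the checks out concurrently: the transaction's latency is bounded
    # by the slowest call rather than the sum of all of them.
    results = await asyncio.gather(check_no_fly(passenger), check_fraud(passenger))
    return all(results)

print(asyncio.run(book("A. Traveler")))
```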
True, but a lot of those checks will be against local snapshots of the data.
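A minimal sketch of that pattern, assuming a background thread that periodically swaps in a fresh in-memory copy (load_snapshot is a made-up placeholder):

```python
import threading, time

no_fly = frozenset()

def load_snapshot():
    return frozenset()  # placeholder: would fetch the latest published list

def refresh_loop(interval_s=60.0):
    global no_fly
    while True:
        no_fly = load_snapshot()  # atomic rebind; readers never see a partial update
        time.sleep(interval_s)

threading.Thread(target=refresh_loop, daemon=True).start()

def is_blocked(passenger):
    return passenger in no_fly  # pure in-memory lookup, no remote call
```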
> But yes, you don’t always need cool technologies.
That's kinda the irony: mainframes are incredibly cool pieces of tech, just not fashionable. They have insane consistency guarantees at the instruction level, hot-swapping features, etc. Features you'd struggle to replicate with the dumpster fire that is modern microservice-based cloud computing.
Ah yes, a completely centralized system that scales, who would have thought.
(For the pedantic: it's not exactly centralized nor federated, since each airline treats its view of the world as absolutely correct.)
It’s not decentralized either, at least not in the Bitcoin sense of the word. Interactions between participants may be automated, but they can ultimately fall back on legal contracts and people. IATA is one of those participants, but everyone in the airline industry has to trust IATA because of its role. A decentralized airline system built to avoid trust in a central authority would be pretty different (actually, the booking part may be the least of their problems there).
It probably doesn’t require consensus among all participants (pairwise consensus at every step should be fine), so there is very likely no voting.
It’s not even permissionless. It’s not like a random company could join this “chain” simply because they can generate a keypair.
It’s a fundamentally different problem, and it makes sense that the architecture is different.