Why would you think Eliezer's argument, which he's been articulating since the late 2000s or even earlier, is specifically about Large Language Models?

It's about Artificial General Intelligences, which don't exist yet. LLMs are relevant because of the money: if you had tried to raise funding to build an AGI in 2010, only eccentrics would have backed you and you'd have been lucky to get $10M, whereas now investors are handing LLM companies $100B or more. That money is bending a generation of talented people toward exploring the space of AI designs, many with the explicit goal of finding an architecture that leads to AGI. That architecture may be based on transformers like today's LLMs, or it may not, but either way, Eliezer wants to remind these people that if anyone builds it, everyone dies.

Artificial General Intelligence, as classically defined by Yud and Bostrom, was invented in 2022.

They didn't coin the term, and there is nothing "classical" about their interpretation of the terminology.