Something that deeply frustrates me, as someone who did R&D on model architectures, is how similar modern LLM architectures are to GPT-2.

(This is a bit disingenuous, as much, if not most, of the work is spent on the scaling and training side of things.)
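For concreteness, here's a minimal sketch of a GPT-2-style decoder block in PyTorch (illustrative only, not any specific model's code; hyperparameters like `d_model=768` are just GPT-2-small defaults). The comments mark roughly where the popular open models of today differ, and it's striking how little that list is:

```python
import torch
import torch.nn as nn

class GPT2Block(nn.Module):
    """One pre-norm decoder block, essentially as in GPT-2 (2019).

    Modern models (Llama, Mistral, etc.) keep this skeleton and mostly
    swap components: LayerNorm -> RMSNorm, learned positional embeddings
    -> RoPE, GELU MLP -> SwiGLU, full multi-head attention ->
    grouped-query attention.
    """
    def __init__(self, d_model: int = 768, n_heads: int = 12):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)           # modern: RMSNorm
        self.attn = nn.MultiheadAttention(         # modern: GQA + RoPE
            d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(                  # modern: SwiGLU
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        T = x.size(1)
        # Causal mask: each position attends only to itself and the past.
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool,
                                     device=x.device), diagonal=1)
        h = self.ln1(x)
        a, _ = self.attn(h, h, h, attn_mask=mask, need_weights=False)
        x = x + a                                  # residual connection
        x = x + self.mlp(self.ln2(x))              # residual connection
        return x

x = torch.randn(2, 16, 768)                        # (batch, seq, d_model)
print(GPT2Block()(x).shape)                        # torch.Size([2, 16, 768])
```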
