Not all transformers are LLMs.

Yes, that is not in contention. Not all transformers are LLMs, not all neural networks are transformers, not all machine learning methods are neural networks, not all statistical methods are machine learning.

I'm not saying this is an LLM, and neither is margalabargala. They only said they hoped an LLM hadn't been integrated into the weather model, which is a reasonable and informed concern to have.

Sigmar is correctly pointing out that they're using a transformer model, and that transformers are effective for modeling things other than language. (And, implicitly, that this _isn't_ adding a step where they ask ChatGPT to vibe check the forecast.)
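For anyone who wants the distinction made concrete, here's a minimal sketch, assuming PyTorch, of a transformer whose inputs are gridded atmospheric variables rather than words. The class name, shapes, and variable count are invented for illustration; this is not the actual NOAA model.

```python
# A minimal sketch, not the actual NOAA architecture: a transformer that
# models gridded weather fields instead of text. Names, shapes, and the
# variable count are made up for illustration.
import torch
import torch.nn as nn

class WeatherTransformer(nn.Module):
    def __init__(self, n_vars: int = 6, d_model: int = 128,
                 n_heads: int = 4, n_layers: int = 4):
        super().__init__()
        # Each "token" is one grid cell's vector of physical variables
        # (temperature, pressure, wind components, ...), not a word.
        self.embed = nn.Linear(n_vars, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_vars)  # predict the next-step fields

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, grid_cells, n_vars) -> prediction of the same shape
        return self.head(self.encoder(self.embed(x)))

model = WeatherTransformer()
state = torch.randn(2, 1024, 6)   # 2 samples, 1024 grid cells, 6 variables
next_state = model(state)         # (2, 1024, 6): no vocabulary, no prompt, no chat
```

Same attention machinery an LLM uses, but nothing about it is a language model: there's no text, no token vocabulary, and nowhere to ask ChatGPT anything.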

“I hope these experts who have worked in the field for years didn’t do something stupid that I imagine a novice would do” is a reasonable concern?

A simple explanation would be: orders from the top to integrate an LLM. The people at the top often aren't experts who have worked in the field for years.

Yes, it is a very reasonable concern.

The quoted NOAA Administrator, Neil Jacobs, published at least one falsified report during the first Trump administration to save face for Trump after he claimed Hurricane Dorian would hit Alabama.

It's about as stupid as replacing magnetic storage tapes with SSDs or HDDs, or using a commercial messaging app for war communications and adding a journalist to it.

It's about as stupid as using .unwrap() in production software impacting billions, or releasing a buggy and poorly-performing UX overhaul, or deploying a kernel-level antivirus update to every endpoint at once without a rolling release.

But especially, it's about as stupid as putting a language model into a keyboard, or an LLM in place of search results, or an LLM to mediate deals and sales in a storefront, or an LLM in a $700 box that is supported for less than a year.

Sometimes people make stupid decisions even when they have fancy titles, and we've seen myriad LLMs inserted where they don't belong. And some of those decisions aren't merely stupid; they're intentionally malicious.