Hey now! I've got a half terabyte of RAM at my disposal! I mean, it's DDR4 but... it's RAM!
And it's paired with 48 processor cores! I mean, they don't even support AVX512 but they can do math!
I could totally train an LLM! Or at least my family could... might need my kid to pick up and carry on the project.
But in all seriousness... you either missed the point, are being needlessly pedantic, or are... wrong?
This is about learning concepts, and the rest of this is mostly moot.
On the pedantic or wrong notes: what is the documented cut-off for a "large" language model? GPT-2 was, and still is, described as a "large" language model, and it had 1.5B parameters. These days you can just about get a consumer GPU capable of training that for about $400.
Yeah, it's just a semantic pet peeve. Let me ask you this: what is a plain "Language Model", if this is a "Large Language Model"? Conversely, if a 1.5B model is "Large", then what are the recent 1T-param models? "Superlarge"?
In my own very humble opinion, it becomes "Large" when it's out of reach of non-specialized hardware. So currently, a model that requires more than 32GB of VRAM is large (as that's roughly where the high-end gaming GPUs top out).
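To put rough numbers on that cutoff (a back-of-envelope sketch; the bytes-per-parameter figures are approximations for fp16 inference and naive mixed-precision Adam training, not exact costs):

    # Rough VRAM-per-parameter estimates (approximate assumptions, activations ignored)
    BYTES_INFERENCE = 2     # fp16 weights only
    BYTES_TRAINING = 16     # weights + grads + fp32 Adam moments, roughly
    budget = 32 * 1024**3   # 32 GB card

    print(f"inference: ~{budget / BYTES_INFERENCE / 1e9:.0f}B params")  # ~17B
    print(f"training:  ~{budget / BYTES_TRAINING / 1e9:.0f}B params")   # ~2B

So by that yardstick a 32 GB card keeps you to roughly 2B parameters for straightforward training, even though much bigger models fit for inference.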
And btw, there is no way you can train a language model on a CPU, even with DDR5, unless you're happy to wait a week or more for a single training run. Give it a go! I know I did; it's an order of magnitude away from being feasible.
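For anyone who wants to sanity-check that, here's the back-of-envelope version (a sketch; the 6 * params * tokens rule of thumb and the throughput numbers are rough assumptions, not measurements):

    # Rule of thumb: training compute ~= 6 * params * tokens
    params = 1.5e9             # GPT-2 sized
    tokens = 10e9              # a modest training run
    flops = 6 * params * tokens

    cpu_flops = 1e12           # optimistic sustained matmul rate for a many-core CPU
    gpu_flops = 100e12         # rough sustained fp16 rate for one modern GPU

    print(f"CPU: ~{flops / cpu_flops / 86400:.0f} days")   # ~1000 days
    print(f"GPU: ~{flops / gpu_flops / 86400:.0f} days")   # ~10 days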
> Yeah, it's just a semantic pet peeve.

I'm not sure. Microsoft calls Phi-4 a small language model, so the distinction is considered meaningful to some people working in the space. My own view is that the term "LLM" implies something about the capabilities of the model in 2026. Maybe there's no hard definition of the term, but whatever the definition is, the model in the article wouldn't make it.
Calling anything "large" in computing is problematic, since hardware keeps improving. GPT-1 was an LLM in 2018 and had 117M parameters; when did it stop being large?
GPT would have been a better term than LLM, but unfortunately it became too associated with OpenAI. And then, what about non-transformer LLMs? And multimodal LLMs?
Maybe we should just give up, shrug and call it "AI".
If you have a credit card with a "normal" ceiling, you can probably rent enough compute on neocloud providers like HuggingFace or Mistral Forge.
I'm not saying it's worth it but you don't need to buy a GPU yourself to be able to train.
This is the whole point of Karpathy's nanochat, which the OP refers to: training a GPT-2-level LLM for under $100 by renting an 8xH100 VM.
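The arithmetic behind that price tag is simple enough (a sketch; the hourly rate is an assumed rental price, which varies a lot by provider):

    # Rough cost of a nanochat-style speed run on rented GPUs
    gpus = 8
    usd_per_gpu_hour = 3.0   # assumed H100 rental rate, not a quote
    run_hours = 4            # roughly the length of the cheapest nanochat run

    print(f"~${gpus * usd_per_gpu_hour * run_hours:.0f}")  # ~$96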
You can fully train a 1.6B model on a single 3090. That's a reasonably big model.
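Whether "fully" fits in 24 GB depends on the optimizer and precision; here's a rough sketch of where the memory goes (assumed byte counts, activations excluded):

    # Optimizer-state memory for a 1.6B-parameter model (assumptions, activations excluded)
    params = 1.6e9
    weights_fp16 = params * 2         # ~3.2 GB
    grads_fp16 = params * 2           # ~3.2 GB
    adam_moments_fp32 = params * 8    # ~12.8 GB (two fp32 moments)
    master_weights_fp32 = params * 4  # ~6.4 GB, if kept

    total = weights_fp16 + grads_fp16 + adam_moments_fp32 + master_weights_fp32
    print(f"~{total / 1e9:.0f} GB before activations")  # ~26 GB vs a 24 GB card

So it's tight without tricks like an 8-bit optimizer or gradient checkpointing.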
you can train it, but not fully
Then rewrite the title and call it "learn how to build a non-usable LLM from scratch".
Opus 4.7 is non-usable for the tasks I have, but it's still considered an LLM.
And no one is stopping anyone from tweaking a few parameters in this repo to go above 10M parameters.
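If you want a feel for what those knobs do, a rough parameter count for a GPT-style model can be estimated like this (a sketch; the config names are hypothetical and may not match the repo's):

    # Rough parameter count for a GPT-style transformer (hypothetical config names)
    def count_params(n_layer, d_model, vocab_size):
        per_block = 12 * d_model ** 2      # attention + MLP weights, biases ignored
        embeddings = vocab_size * d_model  # token embeddings, assumed tied with the output head
        return n_layer * per_block + embeddings

    print(count_params(n_layer=6, d_model=384, vocab_size=50257))    # ~30M
    print(count_params(n_layer=12, d_model=768, vocab_size=50257))   # ~124M, GPT-2-small territory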
What tasks is it non-usable for?