The best way to really understand how something works is to build it yourself. So I'm wondering whether there are any good tutorials on building your own LLM from scratch, i.e. implementing tokenisation, embeddings, attention and so on. I'm not suggesting one could replicate ChatGPT; I mean a toy model that implements the core features but is trained on a much smaller corpus.
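To make the scope concrete, the pieces I mean are individually quite small. Here is a minimal sketch of embeddings plus one causal self-attention head, assuming PyTorch; every name and size here is illustrative, not taken from any particular tutorial:

```python
# Minimal sketch: tokenisation -> embeddings -> one causal attention head.
# Character-level and untrained; purely to show the moving parts.
import torch
import torch.nn.functional as F

text = "hello world"
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}   # char -> id ("tokenisation")
ids = torch.tensor([stoi[ch] for ch in text])  # (T,)

d = 16
emb = torch.nn.Embedding(len(vocab), d)        # learned token embeddings
x = emb(ids).unsqueeze(0)                      # (1, T, d)

# Single-head causal self-attention, the heart of a GPT block.
Wq, Wk, Wv = (torch.nn.Linear(d, d, bias=False) for _ in range(3))
q, k, v = Wq(x), Wk(x), Wv(x)
scores = q @ k.transpose(-2, -1) / d ** 0.5    # (1, T, T) similarity scores
T = ids.shape[0]
mask = torch.tril(torch.ones(T, T, dtype=torch.bool))
scores = scores.masked_fill(~mask, float("-inf"))  # no attending to the future
out = F.softmax(scores, dim=-1) @ v            # (1, T, d) weighted mix of values
print(out.shape)
```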
Since you're posting here, you're looking for the shortcut.
The shortcut is Karpathy's "Let's Build GPT: from scratch, in code, spelled out" video:
https://www.youtube.com/watch?v=kCc8FmEb1nY
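For a taste of what the video covers: it opens with character-level tokenisation on the tiny Shakespeare text before building up to attention. A sketch of that opening step, assuming an `input.txt` containing the training text; the shapes follow the video, but treat this as an approximation rather than a transcript:

```python
# Character-level tokeniser in the spirit of the video's opening.
# Assumes input.txt (e.g. tiny Shakespeare) sits in the working directory.
with open("input.txt", "r", encoding="utf-8") as f:
    text = f.read()

chars = sorted(set(text))                       # the vocabulary
stoi = {ch: i for i, ch in enumerate(chars)}
itos = {i: ch for i, ch in enumerate(chars)}

def encode(s: str) -> list[int]:
    """String -> list of token ids (one id per character)."""
    return [stoi[c] for c in s]

def decode(ids: list[int]) -> str:
    """List of token ids -> string."""
    return "".join(itos[i] for i in ids)

assert decode(encode("hii there")) == "hii there"
```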
There's also a good video of his that dives into how LLMs work and is quite approachable:
https://www.youtube.com/watch?v=7xTGNNLPyMI
From there, flesh out your knowledge with his other videos, which range from extremely light to extremely deep:
https://www.youtube.com/@AndrejKarpathy/videos
Anyway, I really like Karpathy's videos because he's very good at explaining LLMs at every level.
Andrej Karpathy: Let's build GPT: from scratch, in code, spelled out. https://www.youtube.com/watch?v=kCc8FmEb1nY
Andrej Karpathy's nanoGPT is reasonably accessible and easy to run.
https://github.com/karpathy/nanoGPT
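If you'd rather poke at the model class directly than go through the training script, something like this should work from a clone of the repo. The `GPTConfig` fields and the `generate` signature are taken from the repo's `model.py`, but check against the current code; the hyperparameters are arbitrary small values chosen to run quickly on CPU:

```python
# Hedged sketch of instantiating nanoGPT's model directly.
# Run from the repo root so that model.py is importable.
import torch
from model import GPT, GPTConfig  # nanoGPT's model.py

config = GPTConfig(
    block_size=64,    # maximum context length
    vocab_size=65,    # e.g. character-level Shakespeare
    n_layer=4,
    n_head=4,
    n_embd=128,
    dropout=0.0,
    bias=False,
)
model = GPT(config)

# Sample from the (untrained) model just to prove the plumbing works.
idx = torch.zeros((1, 1), dtype=torch.long)  # a single start token
out = model.generate(idx, max_new_tokens=20)
print(out.shape)  # torch.Size([1, 21])
```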
How about this?
https://mathstodon.xyz/@empty/115088095028020763
thanks
Build a Large Language Model (From Scratch) by Sebastian Raschka: https://www.amazon.com/Build-Large-Language-Model-Scratch/dp...
I'd get it straight from Manning, cut out the middleman, and save a few bucks: https://www.manning.com/books/build-a-large-language-model-f...
thanks, looks promising