Interactive Visualisations

Neural Networks:
Zero to Hero

Eight interactive lessons based on Andrej Karpathy's Neural Networks: Zero to Hero series. Each lesson lets you step through the concepts hands-on — no setup required.

Lecture 1
Micrograd: Backpropagation from Scratch
Build a tiny autograd engine. Watch how gradients flow through a compute graph one operation at a time.
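The core of such an engine fits in a few dozen lines. A minimal sketch in the spirit of micrograd (the `Value` class and `.backward()` follow the micrograd API; restricting the operations to `+` and `*` is a simplification for illustration):

```python
class Value:
    """A scalar node in the computation graph, tracking data and gradient."""
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(_children)

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad        # d(a+b)/da = 1
            other.grad += out.grad       # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad   # d(a*b)/da = b
            other.grad += self.data * out.grad   # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

a, b = Value(2.0), Value(-3.0)
c = a * b + a
c.backward()
print(a.grad, b.grad)  # dc/da = b + 1 = -2.0, dc/db = a = 2.0
```

Each operation records a local backward rule; calling `backward()` on the output replays those rules in reverse topological order — exactly the gradient flow the visualisation animates.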
Lecture 2
Makemore: Bigram Language Model
Predict the next character using bigram counts. Visualise the 27×27 probability matrix and sample names.
Lecture 3
Makemore: MLP Language Model
Replace the bigram table with a multi-layer perceptron. Explore embeddings, hidden layers, and softmax.
Lecture 4
Activations, Gradients & BatchNorm
Diagnose dead neurons and vanishing gradients, and see how batch normalisation stabilises training.
Lecture 5
Backprop Ninja Playground
Derive gradients by hand for every operation in the MLP. Interactive calculus — no shortcuts.
Lecture 6
WaveNet-Style Architecture
Stack hierarchical convolutional blocks to build a deeper character-level model inspired by WaveNet.
Lecture 7
GPT from Scratch
Build a decoder-only transformer: token + position embeddings, multi-head self-attention, and a language model head.
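The heart of the block is scaled dot-product attention with a causal mask. A minimal single-head sketch in plain Python lists (the lecture uses PyTorch tensors; the division by √d and the mask follow the standard transformer formulation, the tiny example input is made up):

```python
import math

def causal_attention(q, k, v):
    # q, k, v: lists of T vectors of dimension d.
    T, d = len(q), len(q[0])
    out = []
    for t in range(T):
        # Scores against positions <= t only: the causal mask.
        scores = [sum(q[t][i] * k[s][i] for i in range(d)) / math.sqrt(d)
                  for s in range(t + 1)]
        # Numerically stable softmax over the visible positions.
        m = max(scores)
        exps = [math.exp(x - m) for x in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # Output at t is the attention-weighted average of the values.
        out.append([sum(w * v[s][i] for s, w in enumerate(weights))
                    for i in range(d)])
    return out

x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = causal_attention(x, x, x)
print(y[0])  # position 0 can only attend to itself
```

Multi-head attention runs several of these in parallel on projected slices of the input and concatenates the results; token and position embeddings supply the `q`, `k`, `v` inputs.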
Lecture 8
Tokenisation
Understand byte-pair encoding (BPE). See how raw text is split into tokens and why tokenisation decisions matter.
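The training loop behind BPE is short: repeatedly replace the most frequent adjacent pair with a new token. A toy sketch (real tokenisers such as GPT-2's operate on UTF-8 bytes with a large learned merge table; the input string and the choice of three merges here are for illustration only):

```python
from collections import Counter

def most_common_pair(ids):
    # The most frequent adjacent pair in the sequence.
    return Counter(zip(ids, ids[1:])).most_common(1)[0][0]

def merge(ids, pair, new_id):
    # Replace every non-overlapping occurrence of `pair` with `new_id`.
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

text = "aaabdaaabac"
ids = [ord(c) for c in text]
merges = {}
for step in range(3):            # learn 3 merge rules
    pair = most_common_pair(ids)
    new_id = 256 + step          # new token ids start past the byte range
    ids = merge(ids, pair, new_id)
    merges[pair] = new_id

print(len(text), "->", len(ids))
```

Each merge shortens the sequence, trading a bigger vocabulary for fewer tokens — the core tension that makes tokenisation decisions matter downstream.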