
Learn neural networks by watching one think.
GradVex is a self-explaining AI lab: draw a digit, see every layer fire, inspect the live math, and experiment with the parts that make neural networks work.
Model
784 → 128 → 64 → 10
Parameters
109,386
Runtime
Browser inference
Dataset
MNIST digits
Input
784
H1
128·ReLU
H2
64·ReLU
Out
10·Softmax
What happens inside the black box.
Input
28×28 pixel image flattened into a 784-dimensional vector. Each pixel is a feature — brightness normalized to [0, 1]. No spatial structure — the network sees pure numbers.
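The flatten-and-normalize step above can be sketched in a few lines. This is an illustrative sketch, not GradVex's actual preprocessing code; the function name is made up for the example.

```python
# Input stage sketch: a 28x28 grayscale image (pixel values 0..255)
# becomes a flat 784-dimensional vector with brightness in [0, 1].
def flatten_and_normalize(image):
    return [pixel / 255.0 for row in image for pixel in row]

blank = [[0] * 28 for _ in range(28)]
x = flatten_and_normalize(blank)
assert len(x) == 784  # 28 * 28 features, no spatial structure kept
```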
Hidden 1
128 neurons each compute a weighted sum of all 784 inputs, then ReLU zeroes negatives. These neurons learn to detect strokes, edges, and pen directions.
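A single hidden neuron as described above, as a minimal sketch (the weights here are illustrative, not the trained model's):

```python
def relu(z):
    # ReLU: pass positives through, zero out negatives
    return max(0.0, z)

def neuron(x, w, b):
    # Weighted sum of all inputs plus a bias, then the ReLU gate
    z = sum(xi * wi for xi, wi in zip(x, w)) + b
    return relu(z)

# A neuron whose weighted sum is negative outputs exactly 0
assert neuron([1.0], [-2.0], 0.0) == 0.0
```

In the real H1, each of the 128 neurons runs this computation over all 784 inputs with its own learned weight vector and bias.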
Hidden 2
64 neurons combine H1 stroke detectors into higher abstractions — curves, loops, corners. This is where "digit parts" emerge as recognizable patterns.
Output
10 neurons — one per digit. Softmax converts raw scores to probabilities. The highest probability is the prediction. All 10 values always sum to 1.
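Softmax as described above can be sketched directly; subtracting the max score before exponentiating is the standard numerical-stability trick and does not change the result:

```python
import math

def softmax(scores):
    m = max(scores)                          # stability shift
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]         # always sums to 1

probs = softmax([2.0, 1.0, 0.1])
assert abs(sum(probs) - 1.0) < 1e-9
prediction = probs.index(max(probs))         # highest probability wins
```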
Live neural playground
Draw a digit and watch every layer respond — pixel activations, weighted sums, ReLU gates, and final probabilities, all updating in real time.
Math without hand-waving
Forward pass, weights, biases, ReLU, Softmax, and gradients are all explained as the model runs — no vague metaphors.
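The forward pass on display follows the same dense → ReLU → dense → Softmax pattern sketched below, shown here at toy layer sizes with random weights rather than the trained 784→128→64→10 model:

```python
import math, random

random.seed(0)

def dense(x, W, b):
    # One weighted sum per output neuron (row of W), plus its bias
    return [sum(xi * wij for xi, wij in zip(x, row)) + bj
            for row, bj in zip(W, b)]

def relu_vec(v):
    return [max(0.0, z) for z in v]

def softmax(v):
    m = max(v)
    exps = [math.exp(z - m) for z in v]
    s = sum(exps)
    return [e / s for e in exps]

def rand_layer(n_in, n_out):
    W = [[random.uniform(-0.5, 0.5) for _ in range(n_in)]
         for _ in range(n_out)]
    return W, [0.0] * n_out

# Same shape as 784 -> 128 -> 64 -> 10, scaled down to 8 -> 4 -> 3
l1, l2 = rand_layer(8, 4), rand_layer(4, 3)
x = [random.random() for _ in range(8)]
probs = softmax(dense(relu_vec(dense(x, *l1)), *l2))
assert abs(sum(probs) - 1.0) < 1e-9
```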
Break It Mode
Disable biases, inject weight noise, or zero entire layers. See exactly why each component matters by watching the model fail without it.
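The ablations above can be sketched as transformations on a layer's (weights, biases) pair. These function names are illustrative, not GradVex APIs:

```python
import random

random.seed(1)

def zero_biases(layer):
    # "Disable biases": keep weights, force every bias to 0
    W, b = layer
    return W, [0.0] * len(b)

def inject_weight_noise(layer, scale=0.5):
    # Perturb every weight with Gaussian noise
    W, b = layer
    noisy = [[w + random.gauss(0.0, scale) for w in row] for row in W]
    return noisy, b

def zero_layer(layer):
    # Kill the layer entirely: all weights and biases become 0
    W, b = layer
    return [[0.0] * len(row) for row in W], [0.0] * len(b)
```

Running the forward pass with each broken variant and comparing the output probabilities is what makes the failure visible.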
2D and 3D visualization
Switch between focused 2D inspection and an immersive 3D network where you can orbit the full 784→128→64→10 architecture.