Neural Lab
Draw, infer, inspect, and experiment in one workspace.
Network Architecture
784 → 128 ReLU → 64 ReLU → 10 Softmax
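One way to get a feel for this architecture's size is to count its parameters (weights plus biases per layer). A minimal sketch, assuming the 784 → 128 → 64 → 10 layout above:

```python
# Parameter count for the 784 -> 128 -> 64 -> 10 MLP.
# Each layer contributes (n_in * n_out) weights + n_out biases.
layers = [784, 128, 64, 10]
params = sum(n_in * n_out + n_out for n_in, n_out in zip(layers, layers[1:]))
print(params)  # → 109386
```

Almost all of the capacity (100,480 of the 109,386 parameters) sits in the first layer, which is why zeroing W1 is the most destructive ablation below.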
Draw a digit to fire the network
Disable biases
Sets every b term to zero. Neurons lose their learned thresholds, so off-center or faint strokes become harder to classify.
Weight noise
Adds random perturbations to the learned weights, simulating weight corruption or model drift in production systems.
Zero layer weights
Cuts an entire information path. W1 removes pixel-to-feature learning, W2 removes shape composition, W3 removes final class mapping.
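The three ablations above are simple tensor operations. A minimal sketch in NumPy, using randomly initialized stand-ins for the learned first-layer parameters (the names `W1`, `b1`, and `sigma` are illustrative, not the app's actual variables):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for learned first-layer parameters (128 neurons x 784 pixels).
W1 = rng.normal(scale=0.05, size=(128, 784))
b1 = rng.normal(scale=0.05, size=128)

# Disable biases: every b term becomes zero, erasing learned thresholds.
b1_ablated = np.zeros_like(b1)

# Weight noise: add Gaussian perturbation with standard deviation sigma.
sigma = 0.1
W1_noisy = W1 + rng.normal(scale=sigma, size=W1.shape)

# Zero layer weights: cut the pixel-to-feature path entirely.
W1_zeroed = np.zeros_like(W1)
```

Note that zeroing W1 kills all information flow (every hidden activation becomes ReLU(b1), independent of the drawing), while weight noise degrades the mapping gradually as sigma grows.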
What each layer does
Input
784 pixels
Your drawing as numbers
The canvas is resized to 28×28 px and each pixel becomes one number in [0, 1]. No spatial structure — just 784 raw values.
x ∈ ℝ⁷⁸⁴
Hidden 1
128 · ReLU
Stroke & edge detectors
Each of the 128 neurons computes a weighted sum of all 784 pixels, then ReLU zeroes negatives. These neurons learn to fire on specific pen strokes.
a₁ = ReLU(W₁x + b₁)
Hidden 2
64 · ReLU
Shape & part detectors
64 neurons combine H1 stroke evidence into higher-level features — loops, curves, vertical segments. "Digit parts" emerge here.
a₂ = ReLU(W₂a₁ + b₂)
Output
10 · Softmax
Digit probabilities
One neuron per digit class. Softmax converts raw scores to probabilities summing to 1. The highest becomes the prediction.
ŷ = Softmax(W₃a₂ + b₃)
Draw a digit to activate the computation trace.
Each matrix multiply, ReLU activation, Softmax probability, weight distribution, and gradient will appear here as the model runs.
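The three formulas above compose into a single forward pass. A minimal NumPy sketch with randomly initialized stand-in parameters (the real app uses trained weights; the random input here stands in for a flattened 28×28 drawing):

```python
import numpy as np

def relu(z):
    # Zero out negatives, pass positives through unchanged.
    return np.maximum(0.0, z)

def softmax(z):
    # Subtract the max before exponentiating for numerical stability.
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, params):
    W1, b1, W2, b2, W3, b3 = params
    a1 = relu(W1 @ x + b1)        # 784 -> 128: stroke & edge features
    a2 = relu(W2 @ a1 + b2)       # 128 -> 64: shape & part features
    return softmax(W3 @ a2 + b3)  # 64 -> 10: digit probabilities

rng = np.random.default_rng(1)
params = (rng.normal(scale=0.05, size=(128, 784)), np.zeros(128),
          rng.normal(scale=0.05, size=(64, 128)), np.zeros(64),
          rng.normal(scale=0.05, size=(10, 64)), np.zeros(10))

x = rng.random(784)   # stand-in for a flattened 28x28 drawing in [0, 1]
y_hat = forward(x, params)
prediction = int(np.argmax(y_hat))
```

Because Softmax normalizes the 10 output scores, `y_hat` always sums to 1, and `argmax` picks the predicted digit.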