Neural Lab

Draw, infer, inspect, and experiment in one workspace.

Offline
Draw one large, centered digit. Multi-digit strokes are detected separately when spacing is clear.

Network Architecture

784 → 128 ReLU → 64 ReLU → 10 Softmax

Hover nodes
[Network diagram: Input (784 px, 28×28) → Hidden 1 (128 · ReLU) → Hidden 2 (64 · ReLU) → Output (10 · Softmax, digits 0–9)]

Draw a digit to fire the network

Legend: positive weight · negative weight · top prediction
Break It Mode: damage the network and watch it fail

Disable biases

Sets every b term to zero. Neurons lose their learned thresholds, so off-center or faint strokes become harder to classify.

Weight noise

Adds random perturbation to learned weights. This simulates corrupted training or model drift in production systems.

Zero layer weights

Cuts an entire information path. Zeroing W₁ removes pixel-to-feature learning, W₂ removes shape composition, and W₃ removes the final class mapping.
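The three ablations above can be sketched in NumPy. The parameter names (W1, b1, …) and the random stand-in weights are assumptions for illustration; the real trained parameters live in the app.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the trained parameters (shapes from the architecture).
W1, b1 = rng.normal(size=(128, 784)), rng.normal(size=128)
W2, b2 = rng.normal(size=(64, 128)), rng.normal(size=64)
W3, b3 = rng.normal(size=(10, 64)), rng.normal(size=10)

def disable_biases(biases):
    """Break It: set every b term to zero, erasing learned thresholds."""
    return [np.zeros_like(b) for b in biases]

def add_weight_noise(weights, scale=0.5):
    """Break It: perturb learned weights with Gaussian noise."""
    return [W + rng.normal(scale=scale, size=W.shape) for W in weights]

def zero_layer(weights, layer_index):
    """Break It: cut one information path by zeroing a whole weight matrix."""
    return [np.zeros_like(W) if i == layer_index else W
            for i, W in enumerate(weights)]

b1z, b2z, b3z = disable_biases([b1, b2, b3])
W1n, W2n, W3n = add_weight_noise([W1, W2, W3])
W1c, W2c, W3c = zero_layer([W1, W2, W3], 0)  # kill pixel-to-feature learning
```

Each helper returns new arrays rather than mutating in place, so the original network can be restored after an experiment.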

What each layer does

Input

784 pixels

Your drawing as numbers

The canvas is resized to 28×28 px and each pixel becomes one number in [0, 1]. No spatial structure — just 784 raw values.

x ∈ ℝ⁷⁸⁴
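A minimal preprocessing sketch, assuming a square grayscale canvas with 0–255 pixel values. Block averaging is an assumption for illustration; the app may use a different resampler.

```python
import numpy as np

def preprocess(canvas):
    """Resize a square grayscale canvas to 28x28 by block averaging,
    then flatten to a 784-vector of values in [0, 1]."""
    side = canvas.shape[0]
    assert canvas.shape == (side, side) and side % 28 == 0
    k = side // 28
    # Group pixels into 28x28 blocks of size k x k and average each block.
    small = canvas.reshape(28, k, 28, k).mean(axis=(1, 3))
    x = small.flatten().astype(np.float64)
    return x / 255.0  # pixel values assumed to be 0..255

# A fully white 280x280 canvas maps to 784 ones.
x = preprocess(np.full((280, 280), 255, dtype=np.uint8))
```

The flatten step is why spatial structure is lost: the network sees 784 independent numbers, not a 2-D image.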

Hidden 1

128 · ReLU

Stroke & edge detectors

Each of the 128 neurons computes a weighted sum of all 784 pixels plus a bias, then ReLU zeroes the negatives. These neurons learn to fire on specific pen strokes.

a₁ = ReLU(W₁x + b₁)
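The equation above can be sketched directly. The names W1 and b1 and the random values are placeholders; only the shapes (128 × 784 and 128) come from the architecture.

```python
import numpy as np

def relu(z):
    # ReLU zeroes every negative pre-activation.
    return np.maximum(z, 0.0)

def hidden1(x, W1, b1):
    """a1 = ReLU(W1 @ x + b1): each of the 128 rows of W1 holds one
    neuron's weights over the 784 pixels."""
    return relu(W1 @ x + b1)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(128, 784)), np.zeros(128)  # stand-in parameters
a1 = hidden1(rng.random(784), W1, b1)
```

Hidden 2 is the same computation with a 64 × 128 matrix applied to a₁.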

Hidden 2

64 · ReLU

Shape & part detectors

64 neurons combine H1 stroke evidence into higher-level features — loops, curves, vertical segments. "Digit parts" emerge here.

a₂ = ReLU(W₂a₁ + b₂)

Output

10 · Softmax

Digit probabilities

One neuron per digit class. Softmax converts raw scores to probabilities summing to 1. The highest becomes the prediction.

ŷ = Softmax(W₃a₂ + b₃)
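A minimal softmax sketch. Subtracting the maximum score before exponentiating is a standard numerical-stability trick and does not change the result.

```python
import numpy as np

def softmax(z):
    """Convert raw scores to probabilities summing to 1."""
    e = np.exp(z - z.max())  # shift by max(z) to avoid overflow in exp
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))
# p sums to 1, and the largest score gets the largest probability
```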

Live Computation

Draw a digit to activate the computation trace.

Each matrix multiply, ReLU activation, Softmax probability, weight distribution, and gradient will appear here as the model runs.

Forward pass: x ∈ ℝ⁷⁸⁴ → z₁ = W₁x + b₁ → a₁ = ReLU(z₁) → z₂ = W₂a₁ + b₂ → a₂ = ReLU(z₂) → z₃ = W₃a₂ + b₃ → ŷ = Softmax(z₃)
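The full forward pass traced above fits in a few lines of NumPy. The parameters here are small random stand-ins with the architecture's shapes; the app supplies the trained values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomly initialized stand-ins for the trained parameters.
W1, b1 = rng.normal(size=(128, 784)) * 0.05, np.zeros(128)
W2, b2 = rng.normal(size=(64, 128)) * 0.05, np.zeros(64)
W3, b3 = rng.normal(size=(10, 64)) * 0.05, np.zeros(10)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x):
    """x in R^784 -> z1 -> a1 -> z2 -> a2 -> z3 -> y_hat."""
    a1 = relu(W1 @ x + b1)         # Hidden 1: 128 stroke/edge detectors
    a2 = relu(W2 @ a1 + b2)        # Hidden 2: 64 shape/part detectors
    y_hat = softmax(W3 @ a2 + b3)  # Output: 10 digit probabilities
    return y_hat

y = forward(rng.random(784))
prediction = int(y.argmax())  # the highest probability becomes the prediction
```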