
Backprop Stepper

Step through all 13 operations of a 2-layer MLP's forward and backward passes: gradients, weight updates, and vanishing/exploding-gradient warnings.
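As a sketch of how a vanishing/exploding-gradient warning could be raised, the check below compares a layer's gradient norm against two thresholds. The function name and threshold values are illustrative assumptions, not the tool's actual implementation.

```python
import numpy as np

def gradient_warning(grad, vanish_tol=1e-7, explode_tol=1e3):
    """Classify a gradient by its L2 norm (thresholds are illustrative)."""
    norm = float(np.linalg.norm(grad))
    if norm < vanish_tol:
        return "vanishing"   # gradient too small to drive learning
    if norm > explode_tol:
        return "exploding"   # gradient large enough to destabilize updates
    return "ok"
```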

Network Config
Input size: 2
Hidden units: 3
Loss:
Training Example
Weights
W₁ (3×2)
b₁ (3)
W₂ (1×3)
b₂ (1)
Step 1 of 13 (forward): x = input vector
x = [1, 0.5]
y = [1]
Forward: z₁=W₁x+b₁, a₁=σ(z₁), z₂=W₂a₁+b₂, a₂=σ(z₂)
Backward: δ₂=∂L/∂z₂, δ₁=W₂ᵀδ₂⊙σ′(z₁)
Update: W ← W − η·∂L/∂W, b ← b − η·∂L/∂b
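The three update lines above can be sketched end-to-end in NumPy. Shapes follow the stepper's config (input 2, hidden 3, output 1) and the training example x = [1, 0.5], y = [1]. The loss and learning rate are assumptions, since the page leaves them blank: squared error L = ½(a₂ − y)² and η = 0.5.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Parameter shapes from the config: W1 (3x2), b1 (3,), W2 (1x3), b2 (1,)
W1 = rng.normal(scale=0.5, size=(3, 2))
b1 = np.zeros(3)
W2 = rng.normal(scale=0.5, size=(1, 3))
b2 = np.zeros(1)

x = np.array([1.0, 0.5])   # training input from the example
y = np.array([1.0])        # target

def step(W1, b1, W2, b2, eta=0.5):
    # Forward: z1 = W1 x + b1, a1 = sigma(z1), z2 = W2 a1 + b2, a2 = sigma(z2)
    z1 = W1 @ x + b1
    a1 = sigmoid(z1)
    z2 = W2 @ a1 + b2
    a2 = sigmoid(z2)
    loss = 0.5 * float(np.sum((a2 - y) ** 2))  # assumed squared-error loss

    # Backward: delta2 = dL/dz2, delta1 = W2^T delta2 * sigma'(z1),
    # using sigma'(z) = sigma(z)(1 - sigma(z))
    d2 = (a2 - y) * a2 * (1 - a2)
    d1 = (W2.T @ d2) * a1 * (1 - a1)

    # Update: W <- W - eta * dL/dW, b <- b - eta * dL/db
    W2 = W2 - eta * np.outer(d2, a1)
    b2 = b2 - eta * d2
    W1 = W1 - eta * np.outer(d1, x)
    b1 = b1 - eta * d1
    return W1, b1, W2, b2, loss
```

One call to `step` performs all thirteen operations once; repeated calls drive the loss down on this single training example.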