One-Line Summary: SGD, momentum, RMSProp, Adam, and AdamW -- the update rules that steer training, from vanilla gradient descent to adaptive methods that navigate loss landscapes faster.
Prerequisites: Gradient descent, partial derivatives, backpropagation, loss functions, learning rate concept.
What Are Optimizers?
Imagine hiking down a mountain in dense fog. Vanilla gradient descent tells you to always step in the steepest downhill direction. Momentum is like being a heavy ball that accumulates speed and rolls through small bumps. Adaptive methods are like having a different step size for every direction -- taking large steps across flat plains and small steps along steep ravines. Optimizers are the algorithms that use gradient information to update network parameters toward a loss minimum.
Formally, given a loss function $L(\theta)$ and its gradient $g_t = \nabla_\theta L(\theta_t)$ at step $t$, an optimizer defines the update rule $\theta_{t+1} = \theta_t + \Delta\theta_t$, where $\Delta\theta_t$ is a function of the current and past gradients. The design of this update rule determines convergence speed, stability, and the quality of the final solution.
How It Works
Stochastic Gradient Descent (SGD)
The simplest optimizer computes gradients on a mini-batch and updates directly:

$$\theta_{t+1} = \theta_t - \eta \, g_t$$

where $\eta$ is the learning rate and $g_t = \nabla_\theta L(\theta_t)$ is the mini-batch gradient. SGD is noisy due to mini-batch sampling, but this noise can help escape shallow local minima. The learning rate is the most critical hyperparameter -- too large causes divergence, too small causes painfully slow convergence.
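A minimal NumPy sketch of this update (the function name, the toy quadratic loss, and the step count are illustrative, not from the text):

```python
import numpy as np

def sgd_step(params, grads, lr=0.1):
    """One vanilla SGD update: step against the gradient, scaled by the learning rate."""
    return params - lr * grads

# Toy quadratic loss L(theta) = 0.5 * ||theta||^2, whose gradient is theta itself.
theta = np.array([1.0, -2.0])
for _ in range(100):
    theta = sgd_step(theta, grads=theta, lr=0.1)
print(theta)  # approaches the minimum at [0, 0]
```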
SGD with Momentum
Momentum accumulates a running average of past gradients, smoothing updates and accelerating convergence along consistent gradient directions:

$$v_t = \beta v_{t-1} + g_t, \qquad \theta_{t+1} = \theta_t - \eta \, v_t$$

where $\beta$ is the momentum coefficient (typically 0.9). Momentum helps in two ways: (1) it accelerates progress along low-curvature directions where the gradient is consistent, and (2) it dampens oscillations along high-curvature directions where the gradient alternates sign.

The effective step size in a consistent direction grows to $\eta / (1 - \beta)$ -- with $\beta = 0.9$, this is a 10x amplification.
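A hedged sketch of the same toy setup with a velocity buffer (names and hyperparameter values are illustrative):

```python
import numpy as np

def momentum_step(params, grads, velocity, lr=0.1, beta=0.9):
    """Heavy-ball momentum: accumulate gradients in a velocity buffer, then step along it."""
    velocity = beta * velocity + grads
    params = params - lr * velocity
    return params, velocity

theta = np.array([1.0, -2.0])
v = np.zeros_like(theta)
for _ in range(100):
    theta, v = momentum_step(theta, grads=theta, velocity=v)  # gradient of 0.5*||theta||^2 is theta
```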
Nesterov Accelerated Gradient (NAG)
Nesterov momentum evaluates the gradient at the "lookahead" position rather than the current position:

$$v_t = \beta v_{t-1} + \nabla_\theta L(\theta_t - \eta \beta v_{t-1}), \qquad \theta_{t+1} = \theta_t - \eta \, v_t$$
This "look before you leap" approach provides a corrective factor that reduces overshooting. NAG has provably better convergence rates than standard momentum for convex functions.
Adagrad
Adagrad adapts the learning rate per-parameter based on the history of squared gradients:

$$G_t = G_{t-1} + g_t^2, \qquad \theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{G_t} + \epsilon} \, g_t$$

(all operations element-wise). Parameters with large cumulative gradients get smaller learning rates; parameters with small gradients get larger ones. This is excellent for sparse features (NLP, recommendations) but problematic for deep learning because $G_t$ only grows, eventually driving the effective learning rate to zero.
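A sketch of the per-parameter scaling (names and defaults are illustrative); note that the accumulator only ever grows:

```python
import numpy as np

def adagrad_step(params, grads, accum, lr=0.1, eps=1e-8):
    """Adagrad: scale each parameter's step by the root of its running SUM of squared gradients."""
    accum = accum + grads ** 2  # monotonically increasing, so the effective learning rate decays forever
    params = params - lr * grads / (np.sqrt(accum) + eps)
    return params, accum
```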
RMSProp
RMSProp (Hinton, unpublished lecture notes, 2012) fixes Adagrad's decaying learning rate by using an exponential moving average of squared gradients:

$$E[g^2]_t = \rho \, E[g^2]_{t-1} + (1 - \rho) \, g_t^2, \qquad \theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{E[g^2]_t} + \epsilon} \, g_t$$

where $\rho$ is the decay rate (typically 0.99). This forgets old gradient information, keeping the effective learning rate from vanishing.
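The same sketch with an exponential moving average in place of Adagrad's running sum (illustrative names and defaults):

```python
import numpy as np

def rmsprop_step(params, grads, sq_avg, lr=1e-3, rho=0.99, eps=1e-8):
    """RMSProp: an exponential moving average of squared gradients keeps the denominator bounded."""
    sq_avg = rho * sq_avg + (1 - rho) * grads ** 2  # old gradients decay away instead of accumulating
    params = params - lr * grads / (np.sqrt(sq_avg) + eps)
    return params, sq_avg
```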
Adam (Adaptive Moment Estimation)
Adam (Kingma and Ba, 2014) combines momentum (first moment) with RMSProp (second moment), plus bias correction:

$$m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t, \qquad v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2$$

$$\hat{m}_t = \frac{m_t}{1 - \beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1 - \beta_2^t}, \qquad \theta_{t+1} = \theta_t - \frac{\eta \, \hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}$$

Default hyperparameters: $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$. The bias correction terms compensate for the zero-initialization of $m_0$ and $v_0$, which would otherwise bias early estimates toward zero.
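A minimal sketch of one Adam step with both moments and bias correction (the function name and driver loop are illustrative):

```python
import numpy as np

def adam_step(params, grads, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: momentum-style first moment, RMSProp-style second moment, bias correction."""
    m = beta1 * m + (1 - beta1) * grads
    v = beta2 * v + (1 - beta2) * grads ** 2
    m_hat = m / (1 - beta1 ** t)   # t starts at 1; corrects the zero-initialization bias
    v_hat = v / (1 - beta2 ** t)
    params = params - lr * m_hat / (np.sqrt(v_hat) + eps)
    return params, m, v

theta = np.array([1.0, -2.0])
m, v = np.zeros_like(theta), np.zeros_like(theta)
for t in range(1, 1001):
    theta, m, v = adam_step(theta, grads=theta, m=m, v=v, t=t)  # gradient of 0.5*||theta||^2
```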
AdamW (Decoupled Weight Decay)
Loshchilov and Hutter (2017) showed that Adam's handling of L2 regularization is flawed. In standard Adam, the L2 penalty (weight decay) is added to the gradient before adaptive scaling, which means parameters with large gradient histories receive less effective regularization -- the opposite of the intended effect.
AdamW decouples weight decay from the gradient update:

$$\theta_{t+1} = \theta_t - \eta \left( \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon} + \lambda \theta_t \right)$$

where $\lambda$ is the weight decay coefficient. AdamW is now the default optimizer for training transformers and large language models.
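A sketch highlighting what changes relative to plain Adam: the decay term bypasses the adaptive scaling (names and the default decay value are illustrative):

```python
import numpy as np

def adamw_step(params, grads, m, v, t,
               lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8, weight_decay=0.01):
    """AdamW: weight decay is applied directly to the parameters, outside the adaptive scaling.
    Plain Adam with L2 would instead add weight_decay * params to grads before the moment updates."""
    m = beta1 * m + (1 - beta1) * grads
    v = beta2 * v + (1 - beta2) * grads ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    params = params - lr * (m_hat / (np.sqrt(v_hat) + eps) + weight_decay * params)
    return params, m, v
```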
Learning Rate Warmup
Starting training with a large learning rate can cause divergence because the adaptive moment estimates in Adam are inaccurate initially. Warmup linearly increases the learning rate from a small value to the target value over the first few hundred to few thousand steps:

$$\eta_t = \eta_{\text{target}} \cdot \min\!\left(1, \frac{t}{t_{\text{warmup}}}\right)$$
Warmup is especially important for large-batch training and transformer architectures.
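A minimal sketch of a linear warmup schedule (the constant tail and all values are illustrative; in practice a decay schedule usually follows warmup):

```python
def lr_with_warmup(step, target_lr=3e-4, warmup_steps=1000):
    """Linearly ramp the learning rate from near zero to target_lr over warmup_steps, then hold it."""
    if step < warmup_steps:
        return target_lr * (step + 1) / warmup_steps
    return target_lr

# Example: pass lr_with_warmup(step) as the lr argument of the optimizer step above.
```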
Why It Matters
The optimizer determines how efficiently a model navigates its loss landscape. SGD with momentum remains competitive for computer vision tasks (often finding flatter minima that generalize better), while Adam/AdamW dominates in NLP and transformer training. Choosing the right optimizer and tuning its hyperparameters can reduce training time by orders of magnitude.
Key Technical Details
- SGD + momentum: Fewer hyperparameters, often better generalization, but requires careful learning rate scheduling.
- Adam: Faster convergence, less sensitive to learning rate, but can generalize slightly worse without weight decay.
- AdamW: The standard for transformers. Typical settings: learning rate on the order of $10^{-4}$ to $10^{-3}$, weight decay in the range $0.01$ to $0.1$.
- Memory: Adam/AdamW store two additional tensors per parameter (first and second moments), tripling memory versus SGD.
- Learning rate schedules (cosine decay, linear decay, step decay) are as important as the optimizer choice itself.
Common Misconceptions
- "Adam always converges faster than SGD." Adam converges faster initially but SGD with momentum and proper scheduling often reaches better final performance in computer vision. The choice is task-dependent.
- "Adaptive learning rates mean you don't need to tune the learning rate." Adam still requires tuning . The adaptation is per-parameter relative scaling, not automatic global tuning.
- "Weight decay and regularization are the same thing." They are equivalent for SGD but not for adaptive optimizers like Adam, which is precisely why AdamW was developed.
Connections to Other Concepts
- backpropagation.md: Computes the gradients that all optimizers consume.
- weight-initialization.md: Good initialization reduces the burden on the optimizer by starting in a favorable region of the loss landscape.
- batch-normalization.md: Smooths the loss landscape, making optimization easier for any optimizer.
- dropout-and-regularization.md: Weight decay in AdamW is a form of regularization; the optimizer and regularizer are deeply intertwined.
Further Reading
- Kingma and Ba, "Adam: A Method for Stochastic Optimization" (2014) -- The original Adam paper.
- Loshchilov and Hutter, "Decoupled Weight Decay Regularization" (2017) -- The AdamW paper, now standard for transformers.
- Ruder, "An Overview of Gradient Descent Optimization Algorithms" (2016) -- Excellent survey of the optimizer landscape.
- Zhang et al., "Which Algorithmic Choices Matter at Scale?" (2019) -- Empirical comparison of optimizers on large-scale tasks.