One-Line Summary: Iteratively adjusting parameters to minimize a loss function -- the engine that drives model training.
Prerequisites: Derivatives and Gradients, Vectors and Matrices.
What Is Optimization in ML?
Training a machine learning model is, at its core, an optimization problem. You have a loss function $L(\theta)$ that quantifies how poorly the model (parameterized by $\theta$) fits the data, and your goal is to find the parameter values that minimize it:

$$\theta^* = \arg\min_{\theta} L(\theta)$$
Imagine you are blindfolded on a mountain range and can only feel the slope beneath your feet. You take a step downhill, feel the slope again, step again. This is gradient descent: a simple, iterative algorithm that uses local slope information to navigate toward a minimum.
How It Works
Convex vs. Non-Convex Optimization
A function $f$ is convex if the line segment between any two points on its graph lies above or on the graph:

$$f(\lambda x + (1-\lambda)y) \le \lambda f(x) + (1-\lambda) f(y) \quad \text{for all } \lambda \in [0, 1]$$
For convex functions, every local minimum is a global minimum, and gradient descent is guaranteed to converge to it. Linear regression and logistic regression have convex loss surfaces.
Non-convex functions have multiple local minima, saddle points, and plateaus. Deep neural network loss surfaces are highly non-convex. Despite this, gradient-based methods work surprisingly well in practice, partly because saddle points are more common than poor local minima in high dimensions.
The Gradient Descent Update Rule
The basic update at iteration $t$ is:

$$\theta_{t+1} = \theta_t - \eta \nabla L(\theta_t)$$

where $\eta > 0$ is the learning rate. The negative gradient $-\nabla L(\theta_t)$ is the direction of steepest descent.
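As a minimal illustration, here is the update rule in NumPy applied to a toy quadratic loss; the loss, starting point, and learning rate below are arbitrary choices for the example, not recommendations:

```python
import numpy as np

# Toy quadratic loss L(theta) = 0.5 * theta^T A theta, minimized at theta = 0.
A = np.array([[3.0, 0.0],
              [0.0, 1.0]])

def grad(theta):
    return A @ theta            # gradient of the quadratic loss

theta = np.array([4.0, -2.0])   # arbitrary starting point
eta = 0.1                       # learning rate

for t in range(100):
    theta = theta - eta * grad(theta)   # theta_{t+1} = theta_t - eta * grad L(theta_t)

print(theta)  # close to the minimizer [0, 0]
```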
The Learning Rate
The learning rate is arguably the most important hyperparameter:
- Too large: The updates overshoot, oscillate, or diverge.
- Too small: Convergence is painfully slow and the optimizer may get stuck in shallow local minima.
- Just right: Rapid, stable convergence.
For a quadratic loss $L(\theta) = \tfrac{1}{2}\theta^\top H \theta$ with Hessian eigenvalues $\lambda_{\min} \le \dots \le \lambda_{\max}$, convergence requires $\eta < 2/\lambda_{\max}$, and the optimal fixed learning rate is $\eta^* = 2/(\lambda_{\max} + \lambda_{\min})$.
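A quick sketch of the threshold on a one-dimensional quadratic (the curvature, step count, and learning rates are arbitrary): below $2/\lambda$ the iterates contract toward the minimum, above it they blow up.

```python
# 1-D quadratic L(theta) = 0.5 * lam * theta^2, so the update is
# theta <- (1 - eta * lam) * theta, which contracts only if |1 - eta*lam| < 1.
lam = 2.0

def run_gd(eta, steps=50, theta0=1.0):
    theta = theta0
    for _ in range(steps):
        theta -= eta * lam * theta
    return theta

print(run_gd(eta=0.5))   # eta < 2/lam = 1.0: converges toward 0
print(run_gd(eta=1.2))   # eta > 2/lam: the iterates diverge
```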
Variants: Batch, Stochastic, and Mini-Batch
Batch gradient descent computes the gradient over the entire dataset of $N$ examples, where $\ell_i$ denotes the loss on example $i$:

$$\nabla L(\theta) = \frac{1}{N} \sum_{i=1}^{N} \nabla \ell_i(\theta)$$

This is exact but expensive for large $N$.
Stochastic gradient descent (SGD) uses a single randomly chosen sample $i$:

$$\theta_{t+1} = \theta_t - \eta \nabla \ell_i(\theta_t)$$
This is an unbiased estimator of the true gradient but has high variance, introducing noise that can help escape shallow local minima.
Mini-batch SGD compromises by using a batch of $B$ samples, reducing variance while maintaining computational efficiency. Typical batch sizes range from 32 to 512.
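A sketch of mini-batch SGD on synthetic linear-regression data; the dataset, batch size, and learning rate here are illustrative assumptions rather than prescriptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression problem: y = X @ w_true + noise (illustrative only).
N, d, B = 1000, 5, 32
X = rng.normal(size=(N, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=N)

w = np.zeros(d)
eta = 0.05

for epoch in range(20):
    perm = rng.permutation(N)                      # reshuffle the data each epoch
    for start in range(0, N, B):
        idx = perm[start:start + B]                # indices of one mini-batch
        Xb, yb = X[idx], y[idx]
        g = 2.0 / len(idx) * Xb.T @ (Xb @ w - yb)  # mini-batch gradient of the MSE
        w -= eta * g

print(np.abs(w - w_true).max())  # small: w has nearly recovered w_true
```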
Momentum
Vanilla SGD oscillates in ravines (directions with high curvature) and progresses slowly along the bottom (low curvature). Momentum adds a velocity term that accumulates past gradients:

$$v_{t+1} = \beta v_t - \eta \nabla L(\theta_t), \qquad \theta_{t+1} = \theta_t + v_{t+1}$$

where $\beta \in [0, 1)$ (typically 0.9). This dampens oscillations and accelerates progress along consistent gradient directions, much like a ball rolling downhill gains speed.
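A minimal sketch of this update on an ill-conditioned quadratic "ravine"; the curvatures and hyperparameters are arbitrary choices for the example:

```python
import numpy as np

# Ill-conditioned quadratic "ravine": steep along x (curvature 10), shallow along y.
A = np.diag([10.0, 1.0])

theta = np.array([1.0, 1.0])
v = np.zeros(2)
eta, beta = 0.05, 0.9

for t in range(200):
    v = beta * v - eta * (A @ theta)  # accumulate a velocity from past gradients
    theta = theta + v                 # step along the velocity, not the raw gradient

print(theta)  # approaches the minimum at [0, 0]
```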
Nesterov accelerated gradient evaluates the gradient at a "look-ahead" position $\theta_t + \beta v_t$, yielding better convergence rates.
Adaptive Learning Rate Methods
Different parameters may need different learning rates. Adaptive methods adjust the effective learning rate per parameter:
- AdaGrad: Scales learning rate by inverse square root of accumulated squared gradients. Good for sparse data but can decay too aggressively.
- RMSProp: Uses exponential moving average of squared gradients, preventing premature decay.
- Adam: Combines momentum (first moment) and RMSProp (second moment) with bias correction:

$$m_t = \beta_1 m_{t-1} + (1-\beta_1) g_t, \qquad v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2$$

$$\hat{m}_t = \frac{m_t}{1-\beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1-\beta_2^t}, \qquad \theta_{t+1} = \theta_t - \eta\, \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}$$

Adam (with defaults $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$) is the most widely used optimizer in deep learning.
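A minimal NumPy sketch of a single Adam step using the default hyperparameters above; the toy quadratic loss and step count are purely for illustration:

```python
import numpy as np

def adam_step(theta, g, m, v, t, eta=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; t is the 1-based step index, g the current gradient."""
    m = beta1 * m + (1 - beta1) * g         # first moment: momentum-style average
    v = beta2 * v + (1 - beta2) * g**2      # second moment: RMSProp-style average
    m_hat = m / (1 - beta1**t)              # bias correction for the zero init
    v_hat = v / (1 - beta2**t)
    theta = theta - eta * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy use: minimize L(theta) = 0.5 * ||theta||^2, whose gradient is theta itself.
theta = np.array([2.0, -3.0])
m, v = np.zeros_like(theta), np.zeros_like(theta)
for t in range(1, 5001):
    theta, m, v = adam_step(theta, g=theta, m=m, v=v, t=t)
print(theta)  # hovers within roughly eta of [0, 0]
```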
Learning Rate Schedules
Rather than a fixed $\eta$, schedules systematically reduce it during training:
- Step decay: Reduce $\eta$ by a constant factor every fixed number of epochs.
- Cosine annealing: $\eta_t = \eta_{\min} + \tfrac{1}{2}(\eta_{\max} - \eta_{\min})\bigl(1 + \cos(\pi t / T)\bigr)$ over a cycle of $T$ steps.
- Warmup: Start with a small $\eta$ and linearly increase it over the first few thousand steps, then decay. Critical for training Transformers.
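The sketch below combines linear warmup with cosine decay as a pure function of the step count; the warmup length, total steps, and peak/minimum learning rates are placeholder values, not recommendations:

```python
import math

def lr_at_step(step, warmup_steps=4000, total_steps=100_000,
               peak_lr=3e-4, min_lr=3e-5):
    """Linear warmup to peak_lr, then cosine decay down to min_lr."""
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps             # linear warmup
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))

print(lr_at_step(0))        # tiny, at the start of warmup
print(lr_at_step(4000))     # the peak learning rate
print(lr_at_step(100_000))  # fully decayed to min_lr
```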
Local Minima and Saddle Points
In high-dimensional non-convex optimization, saddle points (where some eigenvalues of the Hessian are positive and others negative) vastly outnumber local minima. Gradient descent naturally escapes saddle points because the negative curvature directions provide descent. Stochastic noise further helps escape.
Why It Matters
Optimization is the bridge between a model's mathematical formulation and its practical usefulness. A perfectly specified model is worthless if you cannot find good parameters. The choice of optimizer, learning rate, and schedule directly determines whether training converges, how fast, and to what quality of solution.
Key Technical Details
- Convergence rate for convex, smooth functions with batch GD: $O(1/t)$ for general convex, $O\bigl((1 - 1/\kappa)^t\bigr)$ for strongly convex (linear convergence), where $\kappa = L/\mu$ is the condition number.
- Gradient clipping caps the norm of the gradient to prevent exploding updates: $g \leftarrow g \cdot \min\!\bigl(1, \tfrac{c}{\|g\|}\bigr)$ for a threshold $c$ (see the sketch after this list).
- Weight decay adds $\tfrac{\lambda}{2}\|\theta\|_2^2$ to the loss, equivalent to L2 regularization in SGD but subtly different in Adam (decoupled weight decay, or AdamW).
- Second-order methods (Newton, L-BFGS) use Hessian information and converge faster per step but are impractical for very large models due to memory and computation costs.
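A minimal sketch of gradient clipping by global norm, as referenced in the list above; the threshold value is arbitrary:

```python
import numpy as np

def clip_by_norm(g, max_norm=1.0):
    """Rescale gradient g so its L2 norm never exceeds max_norm."""
    norm = np.linalg.norm(g)
    if norm > max_norm:
        g = g * (max_norm / norm)   # g <- g * c / ||g||
    return g

print(clip_by_norm(np.array([3.0, 4.0])))   # norm 5 -> rescaled to [0.6, 0.8]
print(clip_by_norm(np.array([0.1, 0.2])))   # already within the threshold, unchanged
```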
Common Misconceptions
- "SGD is worse than Adam." SGD with momentum often generalizes better than Adam in practice, especially for image classification. Adam converges faster but can converge to sharper, less generalizable minima.
- "Non-convex means gradient descent fails." In high-dimensional spaces, the loss landscape of overparameterized models is often benign: most local minima have loss values close to the global minimum.
- "Learning rate is just a small number." The learning rate interacts with batch size, architecture, and initialization. Scaling rules (e.g., linear scaling with batch size) and warmup schedules are essential for stable training.
Connections to Other Concepts
- derivatives-and-gradients.md: The gradient is the input to every optimization step. The Hessian determines curvature and convergence speed.
- matrix-decompositions.md: The condition number of the Hessian (obtained via eigendecomposition) determines how elongated the loss contours are and thus how fast GD converges.
- maximum-likelihood-estimation.md: MLE is an optimization problem -- find the $\theta$ maximizing the likelihood (or equivalently minimizing the negative log-likelihood).
- norms-and-distance-metrics.md: L1 and L2 norms appear in regularization terms added to the loss, changing the geometry of the optimization landscape.
Further Reading
- Boyd & Vandenberghe, Convex Optimization (2004) -- The standard reference for convex optimization theory, freely available online.
- Bottou, Curtis & Nocedal, "Optimization Methods for Large-Scale Machine Learning" (2018) -- Comprehensive survey of SGD variants and their convergence properties.
- Kingma & Ba, "Adam: A Method for Stochastic Optimization" (2015) -- The original Adam paper with derivations and empirical results.