One-Line Summary: Eigendecomposition, SVD, and Cholesky -- factoring matrices to reveal structure, compress data, and solve systems efficiently.

Prerequisites: Vectors and Matrices, basic linear algebra (rank, inverse, transpose).

What Are Matrix Decompositions?

Think of factoring the integer 60 into $2^2 \cdot 3 \cdot 5$. The factorization reveals structure -- which primes compose it and in what proportions. Matrix decompositions do the same for matrices: they break a matrix into a product of simpler, interpretable factors. These factors expose the principal directions of variation in data, enable dramatic compression, and transform hard computational problems into easy ones.

Formally, a matrix decomposition expresses a matrix $A$ as a product of structured matrices. The three workhorses for ML are eigendecomposition ($A = Q \Lambda Q^{-1}$), singular value decomposition ($A = U \Sigma V^\top$), and Cholesky factorization ($A = LL^\top$).

How It Works

Eigendecomposition

An eigenvector $v$ of a square matrix $A$ satisfies:

$$Av = \lambda v$$

where $\lambda$ is the corresponding eigenvalue. Geometrically, $A$ stretches $v$ by a factor of $\lambda$ without changing its direction. If $A$ is a covariance matrix, its eigenvectors point along the axes of maximum variance and its eigenvalues quantify the variance along each axis.

For a diagonalizable $n \times n$ matrix $A$ with $n$ linearly independent eigenvectors:

$$A = Q \Lambda Q^{-1}$$

where $Q = [v_1 \mid v_2 \mid \dots \mid v_n]$ stacks the eigenvectors as columns and $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$. When $A$ is symmetric (and real), $Q$ is orthogonal ($Q^{-1} = Q^\top$), which simplifies computation and guarantees all eigenvalues are real.
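
The following NumPy sketch verifies these properties for the symmetric case, using a small made-up 2x2 matrix; `np.linalg.eigh` is the routine specialized for symmetric/Hermitian input.

```python
import numpy as np

# A small symmetric (covariance-like) matrix -- values chosen arbitrarily.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# eigh returns real eigenvalues (ascending) and orthonormal eigenvectors as columns.
eigvals, Q = np.linalg.eigh(A)

# Reconstruct A = Q diag(lambda) Q^T and confirm Q is orthogonal (Q^T Q = I).
assert np.allclose(A, Q @ np.diag(eigvals) @ Q.T)
assert np.allclose(Q.T @ Q, np.eye(2))

# Each eigenvector is only stretched by its eigenvalue: A v = lambda v.
for lam, v in zip(eigvals, Q.T):
    assert np.allclose(A @ v, lam * v)
```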

Singular Value Decomposition (SVD)

SVD generalizes eigendecomposition to any $m \times n$ matrix $A$, not just square ones:

$$A = U \Sigma V^\top$$

where $U \in \mathbb{R}^{m \times m}$ is orthogonal (left singular vectors), $\Sigma \in \mathbb{R}^{m \times n}$ is diagonal with singular values $\sigma_1 \ge \sigma_2 \ge \dots \ge 0$ on its diagonal, and $V \in \mathbb{R}^{n \times n}$ is orthogonal (right singular vectors).

Geometric interpretation: any linear transformation $x \mapsto Ax$ can be decomposed into three steps: (1) rotate/reflect by $V^\top$, (2) scale along the coordinate axes by $\Sigma$, (3) rotate/reflect by $U$.

Key relationships:

  • Singular values: $\sigma_i = \sqrt{\lambda_i(A^\top A)}$, the square roots of the eigenvalues of $A^\top A$ (equivalently of $AA^\top$)
  • $\mathrm{rank}(A)$ = number of nonzero singular values
  • $\|A\|_F = \sqrt{\sum_i \sigma_i^2}$ (Frobenius norm); all three relationships are checked in the sketch below
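
A minimal NumPy check of these relationships, using an arbitrary random matrix as a stand-in for real data:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))                  # any rectangular matrix works

# full_matrices=False gives the "thin" SVD: U is 5x3, s has 3 entries, Vt is 3x3.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Rotate, scale, rotate: A = U @ diag(s) @ Vt.
assert np.allclose(A, U @ np.diag(s) @ Vt)

# Singular values are the square roots of the eigenvalues of A^T A.
eigvals = np.linalg.eigvalsh(A.T @ A)[::-1]  # sort descending to match s
assert np.allclose(s, np.sqrt(np.clip(eigvals, 0, None)))

# rank(A) = number of nonzero singular values; ||A||_F = sqrt(sum of sigma_i^2).
assert np.linalg.matrix_rank(A) == np.sum(s > 1e-12)
assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(np.sum(s**2)))
```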

Truncated SVD for Compression

The Eckart-Young theorem states that the best rank-$k$ approximation (in both Frobenius and spectral norm) is obtained by keeping only the top $k$ singular values:

$$A_k = U_k \Sigma_k V_k^\top$$

where $U_k$ has the first $k$ columns of $U$, $\Sigma_k$ is the leading $k \times k$ block of $\Sigma$, and $V_k$ has the first $k$ columns of $V$. The approximation error is:

$$\|A - A_k\|_2 = \sigma_{k+1}, \qquad \|A - A_k\|_F = \sqrt{\sum_{i > k} \sigma_i^2}$$

This is the mathematical foundation of latent semantic analysis in NLP, image compression, and recommendation systems (matrix factorization).
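
As an illustration, the sketch below compresses a synthetic, approximately rank-5 matrix (the sizes, rank, and noise level are made up for demonstration) and confirms the Eckart-Young error formulas numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic data: a 100x80 matrix that is "mostly" rank 5, plus small noise.
A = (rng.normal(size=(100, 5)) @ rng.normal(size=(5, 80))
     + 0.01 * rng.normal(size=(100, 80)))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 5
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]       # best rank-k approximation

# Spectral-norm error equals the first discarded singular value, sigma_{k+1}.
assert np.isclose(np.linalg.norm(A - A_k, 2), s[k])

# Frobenius error is the root-sum-of-squares of the discarded singular values.
assert np.isclose(np.linalg.norm(A - A_k, 'fro'), np.sqrt(np.sum(s[k:] ** 2)))

# Storage: 100*80 = 8000 numbers for A vs. k*(100 + 80 + 1) = 905 for the factors.
```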

Cholesky Decomposition

If $A$ is symmetric positive definite (all eigenvalues $> 0$), then it has a unique factorization:

$$A = LL^\top$$

where $L$ is lower triangular with positive diagonal entries. Cholesky is roughly twice as fast as LU decomposition and numerically more stable. It is used to:

  • Solve $Ax = b$ via forward/backward substitution: solve $Ly = b$, then $L^\top x = y$
  • Sample from multivariate Gaussians: if $z \sim \mathcal{N}(0, I)$, then $\mu + Lz \sim \mathcal{N}(\mu, LL^\top)$
  • Compute log-determinants efficiently: $\log\det A = 2 \sum_i \log L_{ii}$ (all three uses appear in the sketch below)
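
A short sketch of these three uses, assuming NumPy and SciPy are available; the 4x4 matrix is constructed arbitrarily just to be positive definite.

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(2)

# Build a symmetric positive definite matrix (covariance-like): B B^T + I.
B = rng.normal(size=(4, 4))
A = B @ B.T + np.eye(4)

L = np.linalg.cholesky(A)                  # lower triangular, A = L L^T
assert np.allclose(A, L @ L.T)

# Solve A x = b with two triangular solves instead of forming A^{-1}.
b = rng.normal(size=4)
y = solve_triangular(L, b, lower=True)     # L y = b    (forward substitution)
x = solve_triangular(L.T, y, lower=False)  # L^T x = y  (backward substitution)
assert np.allclose(A @ x, b)

# Sample from N(mu, A): transform a standard normal draw with L.
mu = np.zeros(4)
sample = mu + L @ rng.normal(size=4)

# log det A = 2 * sum(log(diag(L))) -- avoids overflow/underflow of det(A).
logdet = 2.0 * np.sum(np.log(np.diag(L)))
assert np.isclose(logdet, np.linalg.slogdet(A)[1])
```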

Connection to PCA

Principal Component Analysis is eigendecomposition of the covariance matrix $C = \frac{1}{n-1} X^\top X$ (for centered data $X \in \mathbb{R}^{n \times d}$). Equivalently, PCA can be computed via SVD of $X$ itself: if $X = U \Sigma V^\top$, then the principal components are the columns of $V$, and the variance explained by each component is $\sigma_i^2 / (n - 1)$.
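
The sketch below, on made-up anisotropic data, confirms that the two routes (eigendecomposition of the covariance vs. SVD of the centered data matrix) agree.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy data with very different variances along each axis (values are illustrative).
X = rng.normal(size=(200, 3)) * np.array([3.0, 1.0, 0.2])
X = X - X.mean(axis=0)                     # PCA assumes centered data
n = X.shape[0]

# Route 1: eigendecomposition of the covariance matrix C = X^T X / (n - 1).
C = (X.T @ X) / (n - 1)
eigvals, eigvecs = np.linalg.eigh(C)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]   # sort descending

# Route 2: SVD of the centered data matrix itself.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Same explained variances and the same principal axes (up to sign).
assert np.allclose(eigvals, s**2 / (n - 1))
assert np.allclose(np.abs(eigvecs), np.abs(Vt.T))

# Project the data onto the top two principal components.
X_reduced = X @ Vt[:2].T
```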

Why It Matters

Matrix decompositions are the computational backbone of ML. PCA (via eigendecomposition or SVD) is the most widely used dimensionality reduction technique. SVD powers recommender systems and low-rank matrix completion. Cholesky enables efficient Gaussian process inference and sampling. Without these tools, many models would be computationally intractable.

Key Technical Details

  • Eigendecomposition applies only to square matrices; SVD applies to any matrix.
  • Not all matrices are diagonalizable. Defective matrices require the Jordan normal form, but this rarely arises in ML where matrices of interest (covariance, kernel) are symmetric.
  • Computing the full SVD of an $m \times n$ matrix costs $O(\min(m^2 n, m n^2))$. Randomized SVD algorithms reduce this for the truncated rank-$k$ case.
  • The condition number $\kappa(A) = \sigma_{\max} / \sigma_{\min}$ measures sensitivity to perturbation. Ill-conditioned matrices ($\kappa \gg 1$) cause numerical instability.
  • For positive semi-definite matrices (eigenvalues $\ge 0$, not strictly $> 0$), Cholesky can be applied with pivoting or a small diagonal shift (Tikhonov regularization: $A + \epsilon I$), as sketched after this list.
  • Eigenvalue decomposition of symmetric matrices is guaranteed to yield real eigenvalues and orthogonal eigenvectors -- a fact that underpins PCA, spectral clustering, and Laplacian-based methods.
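
A minimal sketch of the diagonal-shift workaround, assuming NumPy; the jitter value 1e-8 is an illustrative choice, not a universal constant.

```python
import numpy as np

# A rank-deficient Gram matrix: positive semi-definite but not positive definite.
v = np.array([[1.0, 2.0, 3.0]])
A = v.T @ v                                    # rank 1, two eigenvalues are exactly 0

try:
    np.linalg.cholesky(A)                      # raises: A is singular, not strictly PD
except np.linalg.LinAlgError:
    pass

# Tikhonov-style jitter: add a small multiple of the identity to the diagonal.
jitter = 1e-8                                  # illustrative value
L = np.linalg.cholesky(A + jitter * np.eye(3))
assert np.allclose(L @ L.T, A, atol=1e-6)      # factor reproduces A up to the jitter
```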

Common Misconceptions

  • "SVD and eigendecomposition are the same thing." Eigendecomposition requires a square matrix and may not exist. SVD exists for every matrix and decomposes it into orthogonal bases for both the domain and codomain.
  • "Keeping more singular values is always better." Truncated SVD acts as a regularizer. Retaining noise-dominated components can hurt generalization -- this is the bias-variance tradeoff applied to representation.
  • "PCA always works." PCA finds linear directions of maximum variance. If the data lies on a nonlinear manifold, PCA may miss its structure entirely. Kernel PCA or autoencoders may be more appropriate.

Connections to Other Concepts

  • vectors-and-matrices.md: Decompositions require fluency with rank, column space, and matrix operations.
  • cost-latency-optimization.md: The condition number from SVD determines how fast gradient descent converges. Preconditioning uses decompositions to accelerate optimization.
  • probability-fundamentals.md: The covariance matrix is the object being decomposed in PCA; its eigenvalues are variances along principal axes.
  • norms-and-distance-metrics.md: The spectral norm and Frobenius norm are defined through singular values.
  • statistical-inference.md: Decompositions enable efficient computation of likelihoods involving multivariate Gaussians.

Further Reading

  • Strang, Linear Algebra and Its Applications (2006) -- Thorough treatment of eigendecomposition and SVD with geometric intuition.
  • Trefethen & Bau, Numerical Linear Algebra (1997) -- The definitive reference on the numerical aspects and stability of matrix decompositions.
  • Halko, Martinsson & Tropp, "Finding Structure with Randomness" (2011) -- Landmark paper on randomized algorithms for low-rank approximation.