---
title: "HDS 9. Principal component analysis (PCA) and singular value decomposition (SVD)"
author: "Matti Pirinen, University of Helsinki"
date: "17.2.2026"
output:
  html_document: default
  pdf_document: default
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```
Now we move from variable selection to dimension reduction.
The goal is to represent high dimensional data
in a low dimensional subspace in a useful way, for example,
to visualize the data in
human interpretable ways or to pre-process the data for downstream analyses
that benefit from reduced data size. The key question is what is
a *useful* way to reduce the dimension. Intuitively, given a target dimension,
we want to find such a mapping from the original data space to the target space that
maintains as much of the original information as possible.
PCA, first introduced in 1901 by Karl Pearson, is one of the most
widely used methods for this purpose,
with a clear statistical motivation and a linear algebraic implementation.
Thus, we will start from it.
#### An example in 2 dimensions
Let's consider a set of $n=50$ points in a 2-dimensional space ($p=2$):
```{r}
set.seed(18)
n = 50
x1 = rnorm(n)
x2 = 0.4 * x1 + rnorm(n, 0, 0.4)
X = cbind(x1 - mean(x1), x2 - mean(x2)) # Always mean-center data for PCA
plot(X, asp = 1, pch = 19, xlab = expression(x[1]), ylab = expression(x[2]))
grid()
var(X) # what is the variance in x1 and x2 directions and what is the covariance?
```
These variables are quite correlated, so they tell a partly redundant
story about the study samples. Do we really need both dimensions
to represent these data?
Or could only one dimension be enough for some purposes?
How could we project these points onto
one dimension in a useful way that would preserve the main structure
of the points?
Let's draw some lines through the origin, which is also the mean of the data points,
and project the data on those lines.
Then we will have a set of one-dimensional representations of the data.
The unit vector of a line that passes through the origin with slope $b$
is $\pmb v = (1,b)^T/\sqrt{1+b^2}$.
The projection of a point $\pmb x=(x_1, x_2)^T$
on that line is
$$
(\pmb v^T \pmb x) \pmb v = \left(\frac{(1,b)\cdot(x_1,x_2)}{\sqrt{1+b^2}}\right)\frac{(1,b)^T}{\sqrt{1+b^2}}=
\left(\frac{x_1+b x_2}{1+b^2}\right) (1,b)^T
$$
And the coordinate of the point along the line is
$$(\pmb v^T \pmb x) = \frac{x_1+b x_2}{\sqrt{1+b^2}}.$$
```{r}
plot(X, pch = 1, xlim = c(-3, 3), ylim = c(-3,3), asp = 1,
xlab = expression(x[1]), ylab = expression(x[2]))
grid()
b = c(-0.7, 0.2, 2) # example set of slopes of possible lines
cols = c("purple", "blue", "orange")
vars = rep(NA, length(b)) # variances of the projected point on each line
for(ii in 1:length(b)){
abline(0, b[ii], col = cols[ii], lty = 2, lwd = 0.7)
coord_on_line = (X[,1] + X[,2]*b[ii]) / sqrt(1 + b[ii]^2) # coordinate along each line
z = coord_on_line / sqrt(1 + b[ii]^2) # x1-coordinate of the projected point on the line
points(z, z*b[ii], col = cols[ii], pch = 1, cex = 0.7) # points on the line
# Show projection for point jj = 10 using arrows from the point to the lines
jj = 10
arrows(X[jj, 1], X[jj, 2], z[jj], z[jj]*b[ii], code = 2,
length = 0.1, lwd = 2, col = cols[ii])
vars[ii] = var(coord_on_line)
}
points(X, pch = 1) # redraw the original points on top
cbind(b, vars) # print the variance of the projected points on each line
```
Because the points are projected orthogonally onto each line, we can think of the
projections as preserving some of the original relationships between the points.
It seems that line 2, with slope 0.2, separates the points from each other the most,
in the sense that on that line the projected points have the largest empirical variance.
In this sense, it preserves more information about the
structure of the original data than the other two lines.
Think about the two extreme examples of possible projections on a line.
First, if every original point were projected
onto the same point on a line, then we would lose
all information about the differences between the original data points.
Second, if the original data already were
on a line, then we would not lose anything by simply recording the points'
coordinates on that line instead of keeping track of
the original multidimensional coordinates.
In reality, our data lies somewhere in between these extremes,
and in the PCA the defining criterion of the dimension reduction
transformation is that
in the target subspace the projections of the data points
preserve the highest possible variance.
What would be the line on which our example points had the largest variance?
Let's find the unit vector $\pmb v = (v_1,v_2)^T$ $(v_1^2 + v_2^2 =1)$ that
maximizes the empirical variance of the projections of $X$ on that line.
As the columns of $\pmb X$ are mean-centered, the
mean of $\pmb X \pmb v$ is also 0, and the variance is proportional to the
sum of squares of $\pmb X \pmb v$:
$$\widehat{\textrm{Var}}\left(\pmb X \pmb v \right) \propto
\pmb v^T \pmb X^T \pmb X \pmb v.$$
We can use the method of Lagrange multipliers to maximize this
quantity among all unit vectors $\pmb v$, that is, the vectors for which
$\pmb v^T \pmb v - 1 = 0$.
This method requires us to maximize the extended function
$f(\pmb v, \alpha) = \pmb v^T \pmb X^T \pmb X \pmb v + \alpha (\pmb v^T \pmb v - 1)$
with respect to $\pmb v$ and $\alpha$.
The derivative with respect to $\pmb v$ is
$$\frac{\partial f(\pmb v, \alpha)}{\partial \pmb v} =
2\pmb X^T \pmb X \pmb v + 2\alpha \pmb v,$$
and equals 0 when $\pmb X^T \pmb X \pmb v = -\alpha \pmb v,$
that is, when $\pmb v$ is an *eigenvector* of the matrix $\pmb X^T \pmb X$
and $-\alpha$ is the corresponding eigenvalue.
Among all eigenvectors, we thus choose the one that maximizes
$\pmb v^T \pmb X^T \pmb X \pmb v = -\alpha \pmb v^T\pmb v = -\alpha,$
which shows that $\pmb v$ should correspond to the largest eigenvalue of
$\pmb X^T \pmb X$.
Let's apply this result in practice.
```{r}
eig = eigen(t(X) %*% X)
eig$values
eig$vectors # eigenvectors are in columns, in decreasing order of eigenvalues
v = eig$vectors[,1] # corresponds to the largest eigenvalue.
plot(X, pch = 1, xlim = c(-3, 3), ylim = c(-3, 3), asp = 1,
xlab = expression(x[1]), ylab = expression(x[2])) # plot the data again
abline(0, v[2]/v[1], lty = 2, col = "red") # add the max variance line whose unit vector is v
# do projections on the max variance line
coord_on_line = (X %*% v) # coordinates of points in X projected on the line defined by v
points(v[1] * coord_on_line, v[2] * coord_on_line, col = "red", cex = 0.7)
var(coord_on_line)
```
The chosen line looks plausible.
Visually, the projections on this line preserve a lot of the
information about the original configuration of the points in the 2-dimensional plane.
For an interactive demo of the possible projections, see the top answer
to this [stackexchange question](https://stats.stackexchange.com/questions/2691/making-sense-of-principal-component-analysis-eigenvectors-eigenvalues).
Since the eigenvectors (of a symmetric matrix such as $\pmb X^T \pmb X$)
are orthogonal, they give a new coordinate system
in which the data have the maximum variance along the 1st axis,
and the 2nd axis is the orthogonal direction in which the data vary
less than along the 1st axis.
These are called the principal components (PCs)
of the data $\pmb X$. The idea is that since the PC 1 has been chosen to
maximize the variability between the points, it is a good candidate
to give a useful one-dimensional representation of the original data.
With PCA, we may
reduce the dimension of the data by storing only the points' coordinates
on the PC 1 and ignoring the PC 2.
We will lose some information, but,
as measured by the variance,
this loss is as small as possible
given that we are reducing the dimension.
You can study PCA interactively in 2D and 3D here:
.
#### PCA more generally
More generally, when $p>2$, the number of PCs is $\min\{p,n\}$.
The PC $k$ is defined by the unit vector $\pmb v_k = (v_{k1},\ldots,v_{kp})^T$,
$\left(||\pmb v_k||_2 = 1\right)$, and the **score** (previously called
"coordinate") of data point $i$ on the PC $k$ is
$\textrm{pc}_k(i) = \pmb x_i^T \pmb v_k$.
The coefficient $v_{kj}$, by which the variable $j$ contributes to the PC $k$,
is called the **loading** of the variable $j$ on the PC $k$.
PCs have the following properties:

* PC 1 is the one-dimensional subspace of $\mathbb{R}^p$ in which the projections
of the rows of the data $\pmb X$ have the maximum empirical variance.
* PC $k>1$ is the one-dimensional subspace of $\mathbb{R}^p$ that is orthogonal
to all PCs 1,...,$k-1$ and in which the projections
of the rows of the data $\pmb X$ have the maximum empirical variance among all
one-dimensional subspaces orthogonal to the previous PCs.
Each PC is a one-dimensional subspace (a line) in the $p$-dimensional
space of all variables. Equivalently, each PC score is
a linear combination of all $p$ variables. Suppose that we have a
new data point $\pmb x^*$ measured on the same $p$ variables on which we
have constructed the PCs earlier with the data matrix $\pmb X$.
We can project this new point on the existing PCs by the linear mapping
$\textrm{pc}_k(\pmb x^*) = {\pmb x^*}^T \pmb v_k = \sum_{j=1}^p x^*_j v_{kj}$.
The dimension reduction to $k\leq p$ dimensions with the PCA
happens by simply ignoring the PCs after the $k$ largest components.
For visualization of the data, we can only use 2 or 3 components in any one plot
but it is possible to make multiple plots each showing different pairs/triples of PCs.
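As a small sketch of both operations, here is a minimal example continuing the 2-dimensional data `X` and its eigendecomposition `eig` from above. The new point `x_star` is a hypothetical example and is assumed to be already expressed on the same mean-centered scale as `X`.
```{r}
V = eig$vectors # PC loadings of the 2-dimensional example (eigenvectors in columns)
x_star = c(1.0, 0.5) # a hypothetical new point on the mean-centered scale of X
scores = as.vector(t(V) %*% x_star) # scores of x_star on PC 1 and PC 2
scores
# Dimension reduction to k = 1: keep only the PC 1 score and
# map it back to the original 2-dimensional coordinates:
scores[1] * V[,1] # approximation of x_star using PC 1 only
```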
It can be shown that the first $k$ PCs minimize the total sum of the squared
distances between the original points and their projections on
any $k$ dimensional subspace.
Remember that linear regression minimizes the sum of squared differences
between a particular outcome variable $y$ and its prediction made as a
linear combination of a set of predictor variables $x_j$.
PCA, instead, treats all variables equally
(not separating predictors from an "outcome"),
and minimizes the total squared distance between its predictions,
which are the projections on the first $k$ PCs, and
the original points in the $p$-dimensional space.
#### Another derivation of PCA
Above we saw that the question of the maximum
variance line is solved by the leading eigenvector of the product
matrix of the data $\pmb X^T \pmb X$.
We can derive the PCA solution also by relying on a result
on the existence of an eigenvalue decomposition of a symmetric $p \times p$
matrix $\pmb S$ (here taken to be the covariance matrix
of the data vector $\pmb x$ when considered as a random variable).
This result says that $\pmb S$ can be written as
$\pmb S = \pmb V \pmb D \pmb V^T$,
where the $p \times p$ matrix $\pmb V$ is orthonormal
(that is, for any columns $i\neq j$, $\pmb v_i^T \pmb v_j = 0$
and $\pmb v_i^T \pmb v_i = 1$)
and where $\pmb D = (d_j)_{j=1}^p$ is a diagonal matrix.
Note that $\pmb V \pmb V^T = \pmb V^T \pmb V = \pmb I$.
Let's set as our goal to find an orthogonal $p\times p$ matrix $\pmb Q$ such that,
when the data are transformed by the linear mapping $\pmb Q$,
the covariances between the original variables are removed.
In other words,
we look for an orthogonal $\pmb Q$ that will transform our observed
$p$-dimensional vector $\pmb x$, with original covariance matrix $\pmb S_x$ (with
eigendecomposition $\pmb S_x = \pmb V_x \pmb D_x \pmb V_x^T$), into a new
$p$-dimensional vector $\pmb y=\pmb Q \pmb x$ whose covariance would be diagonal.
Since we require $\pmb Q$ to be orthogonal,
it can be seen as preserving all the structure in the data while
simply changing the coordinate system in which the variables are measured.
Since
$$\textrm{Var}(\pmb Q \pmb x) = \pmb Q \pmb S_x \pmb Q^T =
\pmb Q \pmb V_x \pmb D_x \pmb V_x^T \pmb Q^T,$$
we see that by a choice $\pmb Q = \pmb V_x^T$ we have that
$\textrm{Var}(\pmb Q \pmb x) = \pmb D_x$ is diagonal and hence
the transformed variables have zero covariances.
Thus, by transforming the variables by the eigenvectors of the
empirical covariance matrix of $\pmb x$, which is
$\widehat{\pmb S}_x = \frac{1}{n-1} \pmb X^T \pmb X$, we will rotate
the data into new coordinates
where different dimensions do not covary at all. This is the PCA
transformation, and the new variable with the largest variance corresponds
to the PC 1 etc.
Note that the constant $\frac{1}{n-1}$ does not affect
the eigenvectors (although it does affect the eigenvalues multiplicatively).
Thus, for deriving the PCA transformation, it does not matter whether
we compute the eigenvectors from $\pmb X^T \pmb X$ or
$\frac{1}{n-1} \pmb X^T \pmb X$.
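As a quick numerical check of the diagonalization, here is a small sketch reusing the 2-dimensional example data `X` and its eigendecomposition `eig` from above: rotating the data by the eigenvectors removes the covariances.
```{r}
Y = X %*% eig$vectors # rotate the data to the PC coordinate system
round(var(Y), 10) # covariance of the rotated data: off-diagonals are (numerically) zero
var(X) # compare to the original covariance matrix of X
```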
### PCA in practice
Let's use PCA on the `Boston` data set.
Let's apply both the R's standard function `prcomp()` and also check
that the above defined eigenvalue decomposition method agrees.
Note that the help of `prcomp` says that the calculation is done
by a singular value
decomposition of the (centered and possibly scaled) data matrix,
and not by applying `eigen()` on the data covariance matrix.
```{r}
library(MASS)
str(Boston) # remind what's in there
# We will do PCA WITHOUT lstat included.
keep = c(1:12, 14) # keep these columns for PCA
outcome = setdiff(1:14, keep) # leave this out from PCA and consider it as an outcome variable
# Do PCA with eigen decomposition:
Boston_scaled = scale(as.matrix(Boston)) # always mean center for PCA; scaling is not obligatory
eig = eigen(1 / (nrow(Boston) - 1) * t(Boston_scaled[,keep]) %*% Boston_scaled[,keep])
# Do PCA with prcomp:
prc = prcomp(Boston[,keep], center = TRUE, scale = TRUE)
str(prc)
# square roots of the eigenvalues = SD of PCs
cbind(prc$sdev[1:3], sqrt(eig$values[1:3]))
# matrix of PC loadings in columns = eigenvectors in columns
cbind(prc$rotation[1:3,1], eig$vectors[1:3,1])
# PC scores of sample 5 on PC 3, note that the sign is arbitrary!
c(prc$x[5,3], t(eig$vectors[,3]) %*% Boston_scaled[5,keep])
```
If we had a new Boston area with measured values for these 13
variables, how could we project it on the PCs?
Let's do the projection for area 15,
even though it was already included in the original PCA,
and compare to the scores produced by the original PCA.
```{r}
cbind(as.numeric(predict(prc, newdata = Boston[15, keep])), prc$x[15,])
```
Let's see how large a proportion of the total variance the PCs cumulatively capture.
```{r}
cumsum(prc$sdev^2) / sum(prc$sdev^2)
```
The first component alone explains 46% and the first 7 components together explain 90%.
Often the variance explained by the components is plotted,
and the resulting **scree plot** can be used to evaluate visually
how many components have clearly more weight than the rest.
```{r, fig.height = 4, fig.width = 5}
par(mar = c(5, 5, 1, 0.2))
plot(prc$sdev^2 / sum(prc$sdev^2), t = "b",
xlab = "PC", ylab = "proportion of total variance")
```
Let's see how the variables make up the leading PC, i.e., what are the loadings:
```{r}
prc$rotation[,1]
```
Let's visualize the first six PCs and color each point according to its `lstat` value
(values above the median in red, the rest in blue).
```{r, fig.width =9, fig.height = 3}
cols = rep("blue", nrow(Boston))
cols[Boston[,outcome] > median(Boston[,outcome])] = "red"
par(mfrow = c(1, 3))
for(ii in seq(1, 6, 2)){
plot(prc$x[, ii], prc$x[, ii + 1], pch = 3, col = cols, cex.lab = 1.5,
xlab = paste0("PC", ii), ylab = paste0("PC", ii + 1))}
```
We see some structure, at least in the 1st and 3rd plots,
where the scores of the PCs seem to associate
with the `lstat` value, which was not included in the PCA.
#### Principal components regression (PCR)
Let's do linear regression of `lstat ~ PC1` and compare it to
how the individual variables explain `lstat`.
```{r}
summary(lm(Boston[, outcome] ~ prc$x[,1]))
# How well do the individual variables explain lstat?
for(ii in keep){
cat(paste(names(Boston)[ii],":",
round(summary(lm(Boston[, outcome] ~ Boston[,ii]))$adj.r.squared,2)),"\n")}
```
The PC 1 explains `lstat` about as well as `medv` does, and much better
than any other single variable.
When many predictors in a regression model are highly correlated, one approach
for a robust regression analysis is first to compute the
leading PCs out of the correlated
predictors and then to do the regression on some of the leading PCs.
This is called *principal components regression (PCR)*.
For example, when regressing the outcome variable $y$ on the PC 1 and PC 2,
the model is
\begin{eqnarray*}
y_i &=& \mu + \textrm{pc}_1(i) \beta_1 + \textrm{pc}_2(i) \beta_2 + \varepsilon_i\\
&=& \mu + \beta_1 \sum_{j = 1}^p x_{ij}v_{1j} + \beta_2 \sum_{j = 1}^p x_{ij}v_{2j} + \varepsilon_i.
\end{eqnarray*}
Thus, the model has only two coefficients, but still each of the $p$ original predictors $x_j$
is contributing to the model by $x_{ij}(\beta_1 v_{1j} + \beta_2 v_{2j})$.
The idea is that by focusing on the leading PCs, we can capture more of the main structure
of the predictors than with an equal number of individual predictors, while we also
reduce the variance compared to the standard linear model with $p$ coefficients, which
is prone to overfitting when $p$ is high.
An advantage of PCR is that the PCs are uncorrelated with each other,
and still capture a large part of the joint information of the original
correlated predictors.
A disadvantage is that the regression coefficients are no longer
interpreted in terms of the original predictors, which is a problem in cases
where the original predictors have a clear meaning for the analyst but
the PCs do not.
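Here is a minimal PCR sketch on the Boston data from above (an illustration rather than a full analysis): we regress `lstat` on the first two PCs and then use the loadings to read off the implied coefficients of the original standardized predictors, following the formula above. The names `K`, `pcr_fit`, `beta_pc` and `beta_orig` are just for this example.
```{r}
K = 2
pcr_fit = lm(Boston[, outcome] ~ prc$x[, 1:K]) # lstat regressed on the first K PCs
summary(pcr_fit)$adj.r.squared
beta_pc = coef(pcr_fit)[-1] # coefficients of the PCs (intercept removed)
beta_orig = prc$rotation[, 1:K] %*% beta_pc # implied coefficients of the original scaled predictors
t(round(beta_orig, 3))
```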
#### PCA on the samples rather than on the variables
Let's then use the PCA the other way around.
We now consider the $p=14$
variables as the "samples" that have been measured on $n=506$ "variables"
which are the Boston areas.
We want to find the 2-dimensional description of the relationships
between these $p$ variables. Let's first show them with a heat map.
```{r}
heatmap(cor(Boston))
```
It seems that there are two main groups. Let's see what the two leading PCs give:
```{r, fig.width = 5, fig.height = 5}
prc = prcomp(t(Boston_scaled))
plot(prc$x[,1], prc$x[,2], col = "white", xlab = "PC1", ylab = "PC2")
text(prc$x[,1], prc$x[,2], labels = names(Boston))
```
So the leading PC also divides the variables into the same two main groups
as the correlation matrix.
In general, the correlation matrix of $p$ variables
has $p(p-1)/2$ unique elements,
whereas the first $k$ PCs have together $p\cdot k$
elements. Already when $p$ is in hundreds,
and we keep $k$ small, say $k\leq 10$,
there is a considerable difference
in favor of the PCs in the efficiency of the data
reduction, and this difference
becomes crucial in applications where $p$
is in thousands.
#### See examples of PCA in slides HDS9_slides.pdf
#### Read ISL section 12.2 PCA
* What does PCA aim to do?
* What are the *loadings* and what are the *scores*?
* PCA solution minimizes something and maximizes something else. What are these quantities?
* Should we scale variables before PCA?
#### Tutorial on PCA by Jon Shlens
https://arxiv.org/pdf/1404.1100.pdf
This is an excellent tutorial, but unfortunately
it uses the rows and columns of the data
matrix the other way around from how we use them, meaning that
the data matrix is given as a $p \times n$ matrix,
and $p$ (the number of variables) is called $m$.
### Singular value decomposition (SVD)
The SVD is the fundamental decomposition of a matrix
$\pmb X$ (of any size $n \times p$) into three components:
$$ \pmb X = \pmb U \pmb \Sigma \pmb V^T, \textrm{ where }$$
* $\pmb U$ is an $n \times n$ orthonormal matrix,
* $\pmb V$ is a $p \times p$ orthonormal matrix,
* $\pmb \Sigma$ is an $n \times p$ rectangular
diagonal matrix with elements $\sigma_1,\ldots,\sigma_m \geq 0$ on
the diagonal, where $m = \min\{n,p\}$, and
* $\pmb X \pmb v_k = \sigma_k \pmb u_k$, for all $k \leq m,$ where $\pmb v_k$ and
$\pmb u_k$ are the $k$th columns of $\pmb V$ and $\pmb U$ respectively.
We call the columns of $\pmb U$ the left-singular vectors of $\pmb X$,
the columns of $\pmb V$ the right-singular vectors of $\pmb X$, and the values
$\sigma_k \geq 0$ the singular values of $\pmb X$. We order the singular values
in decreasing order $\sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_m \geq 0$
(and we define $\sigma_j=0$ for all $j > m$).
SVD exposes the basis $\pmb V$ for $\mathbb{R}^p$ and
basis $\pmb U$ for $\mathbb{R}^n$ that are particularly suitable
for understanding the structure of the original matrix $\pmb X$.
#### Interpretation of SVD
**Linear transformation.**
Let's think what the matrix $\pmb X$ does when it is used as the matrix
of a linear mapping from $\mathbb{R}^p$ to $\mathbb{R}^n$.
For example, consider how the parameter vector $\widehat{\pmb \beta}$
turns into the predicted values $\widehat{\pmb y}$ of the linear regression model
through the linear mapping
$\widehat{\pmb y} = \pmb X \widehat{\pmb \beta}$ defined by the data matrix
$\pmb X$.
The SVD says that the transformation $\widehat{\pmb \beta} \mapsto \widehat{\pmb y}$
between the standard bases of $\mathbb{R}^p$ and $\mathbb{R}^n$
happens in 3 basic steps.
* Rotate the standard coordinate system of $\mathbb{R}^p$ so that the
(orthonormal) columns of $\pmb V$ become the basis vectors,
and express $\widehat{\pmb \beta}$ in
this orthonormal basis, after which the new
coordinates of $\widehat{\pmb \beta}$
are $\widehat{\pmb \beta}^{(V)} = \pmb V^T \widehat{\pmb \beta}$.
* Scale the $p$ elements of $\widehat{\pmb \beta}^{(V)}$
by matrix $\pmb \Sigma$, i.e., get the scaled coordinates
$$ \widehat{\pmb \beta}^{(V,\Sigma)} = \pmb \Sigma \widehat{\pmb \beta}^{(V)}
= \left( \sigma_1 \widehat{\beta}_1^{(V)}, \ldots, \sigma_m \widehat{\beta}_m^{(V)},0,\ldots,0\right)^T.$$
(Note that if $m = p < n$, then the last $n-p$ elements of the vector are 0. Otherwise $m=n$ and there are no extra zeros at the end of the vector after the element $m$.)
It is important that this scaling happens by the non-negative
constants $\sigma_k \geq 0$ along each coordinate axis of the basis $\pmb V$
in the input space $\mathbb{R}^p$ and ends up in the output space $\mathbb{R}^n$.
* Rotate the scaled coordinates $\widehat{\pmb \beta}^{(V,\Sigma)}$, which are given
in the orthonormal basis $\pmb U$, to the standard basis of $\mathbb{R}^n$
to get the final
$$\widehat{\pmb y} = \pmb U \widehat{\pmb \beta}^{(V,\Sigma)} =
\pmb U \pmb \Sigma \pmb V^T \widehat{\pmb \beta} = \pmb X \widehat{\pmb \beta}.$$
It is important that the output space coordinate $\sigma_k \widehat{\beta}_k^{(V)}$
along the basis vector $\pmb u_k$ depends
only on the input space coordinate along the basis vector $\pmb v_k$ and not
on any other direction of the input space.
Thus, the SVD tells that the linear transformation defined by **any** matrix $\pmb X$
is simply a (non-negative) scaling along certain characteristic directions,
combined with two suitable rotations that reveal those characteristic directions
with respect to the standard bases of the input and output spaces of the matrix.
https://en.wikipedia.org/wiki/Singular-value_decomposition#/media/File:Singular-Value-Decomposition.svg
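Below is a small numerical check of these three steps on a randomly generated matrix; the names `X_demo`, `beta_demo` and `S_demo` are just for this illustration.
```{r}
set.seed(3)
X_demo = matrix(rnorm(4 * 3), nrow = 4) # a random 4 x 3 matrix (n = 4, p = 3)
beta_demo = rnorm(3) # a vector in the input space R^3
S_demo = svd(X_demo)
step1 = t(S_demo$v) %*% beta_demo # 1) coordinates of beta_demo in the basis V
step2 = S_demo$d * step1 # 2) scale each coordinate by the corresponding singular value
step3 = S_demo$u %*% step2 # 3) rotate to the standard basis of the output space R^4
cbind(step3, X_demo %*% beta_demo) # the two columns agree
```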
In the special case where $\pmb X = \pmb X^T$ is a symmetric and positive semi-definite square matrix,
the SVD coincides with the eigenvalue decomposition: $\pmb U = \pmb V$ is the
basis formed by the eigenvectors of $\pmb X$ and the singular
values are the eigenvalues of $\pmb X$. (For a general symmetric matrix,
the singular values are the absolute values of the eigenvalues.)
**A low rank approximation to $\pmb X$.**
Let's get back to our main topic of dimension reduction.
Since the matrix $\pmb \Sigma$ is diagonal in the SVD
$\pmb X = \pmb U \pmb \Sigma \pmb V^T$,
we can use the SVD to decompose $\pmb X$ into a sum of
simple $n \times p$ matrices, each of which has a rank of 1
$$\pmb X = \sum_{k=1}^m \sigma_k \pmb u_k \, \pmb v_k^T.$$
This tells how the structure of $\pmb X$ piles up from the
individual pieces determined by
the pairs of left and right singular vectors weighted by the singular
values. Each rank 1 matrix $\pmb u_k\, \pmb v_k^T$ can only capture
one quantitative pattern shared by all rows (encoded by $\pmb v_k$)
and one pattern shared by all columns (encoded by $\pmb u_k$),
and the components are given in the decreasing order of contribution
to the final matrix.
It can be shown (*Eckart-Young theorem*) that truncating
this sum of rank 1 matrices after the first $K$ components
leads to the rank $K$ approximation to $\pmb X$ that minimizes
the elementwise squared error (*Frobenius norm*)
from $\pmb X$ among all rank $K$ matrices.
Thus, by keeping only the leading $K$ components of SVD,
we reduce the amount of information
from $np$ elements (original size of $\pmb X$) to $K(n+p+1)$ elements
in the optimal way in terms of the MSE.
**Example.** Let's consider a $2\times 3$ matrix $X$ and see how it is formed
as a sum of two rank 1 matrices determined from its SVD.
```{r}
X = matrix(c(1, 0, -1,
2, 1, 1), ncol = 3, byrow = TRUE)
X
# make SVD
S = svd(X)
S
# Two rank 1 matrices rk = u_k %*% v_k^T, for k = 1,2:
r1 = S$u[,1] %*% t(S$v[,1])
r2 = S$u[,2] %*% t(S$v[,2])
r1
r2
# when they are weighted by singular values, they will make up the original X:
S$d[1] * r1 + S$d[2] * r2
# and the best rank 1 approximation to X (measured by MSE) is given by
S$d[1] * r1
```
A concrete way to visualize the rank $K$ approximations is to consider an
image as a rectangular matrix and then compare the different
approximations to the original one.
In particular, notice how the complexity of the image has a large
effect on how many components are needed to represent it well.
http://timbaumann.info/svd-image-compression-demo/
#### Connection between SVD and PCA
We determined earlier that the PCs were the eigenvectors of
$\pmb X^T \pmb X$.
If we write this crossproduct matrix using the SVD of
$\pmb X = \pmb U \pmb \Sigma \pmb V^T$, we have that
$$\pmb X^T \pmb X = \pmb V \pmb \Sigma^T \pmb U^T \pmb U \pmb \Sigma \pmb V^T
= \pmb V \pmb \Sigma^T \pmb \Sigma \pmb V^T = \pmb V \pmb D \pmb V^T,$$
where the $p \times p$ diagonal matrix $\pmb D = \pmb \Sigma^T \pmb \Sigma$.
Thus, the eigenvectors of $\pmb X^T \pmb X$ can be read from the columns of
$\pmb V$ and the corresponding eigenvalues are the
squares of the singular values, that is, $\lambda_k = \sigma_k^2$.
Furthermore, the scores of the samples on the PCs are
$$ \pmb X \pmb V = \pmb U \pmb \Sigma,$$
and can be formed from the columns of $\pmb U$ by multiplying each column
by the corresponding singular value.
Hence, by doing the SVD of $\pmb X$ we also get
both the loadings and the scores of the PCs of the matrix $\pmb X$.
The function `prcomp` actually uses the SVD for computing the PCA.
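We can verify this connection numerically on the scaled Boston data from above (remembering that the signs of the singular vectors and of the PCs are arbitrary); the names `S_B` and `prc_B` are just for this check.
```{r}
S_B = svd(Boston_scaled[, keep]) # SVD of the centered and scaled data matrix
prc_B = prcomp(Boston_scaled[, keep]) # PCA of the same matrix
# Squared singular values = eigenvalues of X^T X = (n-1) * variances of the PCs:
cbind(S_B$d[1:3]^2, (nrow(Boston) - 1) * prc_B$sdev[1:3]^2)
# Scores X V = U Sigma (shown for sample 1 on the first 3 PCs, up to the arbitrary signs):
cbind(prc_B$x[1, 1:3], (S_B$u %*% diag(S_B$d))[1, 1:3])
```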
### Computing the PCA
What is the quickest way to do the PCA?
If $p \leq n$, then
the eigenvalue decomposition of the $p \times p$ matrix $\pmb X^T \pmb X$
is a cubic procedure, that is, it requires a number of basic operations (such
as multiplications and additions) that grows as $p^3$ as a function of $p$.
We denote this by saying that its time complexity is $\mathcal{O}(p^3)$,
("of the order of $p^3$").
The eigenvalue decomposition gives us the
loadings and the computation of all PC scores takes
an additional $\mathcal{O}(np^2)$ time.
Computation of the crossproduct
$\pmb X^T \pmb X$ to start with takes yet another $\mathcal{O}(np^2)$ time.
Thus the whole process is $\mathcal{O}(2np^2 + p^3)$.
If $p>n$, then we will have only $n$ PCs and we can instead consider
the eigendecomposition of $n \times n$ matrix
$$\pmb X \pmb X^T = \pmb U \pmb \Sigma \pmb V^T \pmb V \pmb \Sigma^T \pmb U^T =
\pmb U \pmb \Sigma \pmb\Sigma^T \pmb U^T,$$
that gives us the scores on the PCs (in columns of $\pmb U \pmb \Sigma$)
and the eigenvalues of each PC (the diagonal of $\pmb \Sigma \pmb \Sigma^T$).
From there, we can compute the loadings $\pmb V$ from the relationship
$\pmb V \pmb \Sigma^T = \pmb X^T \pmb U$ which is an $\mathcal{O}(pn^2)$
operation. Computing the $\pmb X \pmb X^T$ takes $\mathcal{O}(pn^2)$.
Thus, the whole process is $\mathcal{O}(2pn^2 + n^3)$.
All in all, PCA through an eigendecomposition
takes $\mathcal{O}\left(\min\{p^3+np^2,\, n^3+pn^2\}\right)$ time.
The algorithms for the SVD of $\pmb{X}$ have similar theoretical complexity
(depending on the relationship between $n$ and $p$).
```{r eval = FALSE,echo = FALSE}
Let's see how they compare in practice.
{r, eval = FALSE, warning = FALSE}
set.seed(67)
n = 1000
ps = c(500, 1000, 2000, 3000, 4000)
times = matrix(NA, ncol = 3, nrow = length(ps))
filename = "rprof.out" # time the algorithms
for(ii in 1:length(ps)){
p = ps[ii]
X = scale(matrix(rnorm(n * p), ncol = p))
times[ii,1] = p
# prcomp is using the SVD
time_res = system.time({prc = prcomp(X)})
times[ii,2] = as.numeric(time_res[3])
if(n > p){
# use the pxp matrix: t(X) %*% X
time_res = system.time({
eig = eigen(1 / (n - 1) * crossprod(X)) # has eig. values and loadings
scores = X %*% eig$vectors}) # compute scores
}else{
# use the nxn matrix: X %*% t(X)
time_res = system.time({
eig = eigen(tcrossprod(X)) # has eig. values and scores
loadings = crossprod(X, eig$vectors) %*% diag(1 / sqrt(eig$values))}) # compute loadings
}
times[ii,3] = as.numeric(time_res[3])
}
plot(times[,2:3], xlab = "prcomp/SVD (in sec)", ylab = "eigen (in sec)", t = "b",
xlim = c(0, max(times)), ylim = c(-1, max(times)), main = paste0("n=",n,"; p varies"))
text(times[,2], times[,3] - 1.2, labels = times[,1])
abline(0, 1, lty = 2)
grid()
Seems quite similar for smaller $p$ but seems to
favor the eigendecomposition for larger $p$.
This could be because when $n=1000$ remains constant,
the eigendecomposition of $\pmb{X}\pmb{X}^T$
has the time complexity dominated by $\mathcal{O}(n^3)$
even for large $p\,(>n)$,
and hence the computing time does not grow that much with $p$ because
the pre-processing time grows only linearly in $p$.
```
In practice, when both $n$ and $p$ are large, the PCA is done
by finding only a few of the leading components.
For example, `prcomp()` can take in a parameter `rank.`
that specifies the number of components.
There are efficient methods that extract the leading components
one-by-one without ever computing the SVD or an eigendecomposition.
These methods are based on the idea of *power iteration*, that is,
they iterate sequential multiplication of a random vector $\pmb z$
by the target (symmetric) square matrix $\pmb A$ to get a sequence
$\pmb z, \pmb A \pmb z, \pmb A^2 \pmb z, \ldots$, which
converges towards the
leading eigenvector of $\pmb A$, if a suitable normalization
is carried out at each step.
Intuitively, this happens because when we write the symmetric
square matrix $\pmb A$ using its eigendecomposition as
$\pmb A = \pmb V \pmb D \pmb V^T$, then
$\pmb{A}^k= \pmb V \pmb D^k \pmb V^T$ for $k\geq 1$, meaning
that the eigenvalue $i$ of $\pmb{A}^k$ is $d_i^k$ where
$d_i$ is the eigenvalue $i$ of $\pmb A$.
Thus, the ratio between the largest and the second largest eigenvalue,
$\frac{d_1^k}{d_2^k} = \left(\frac{d_1}{d_2}\right)^k \to \infty$ as $k$ grows
(as long as $d_1 > d_2$) and the leading eigenvalue becomes more and more dominant
relative to the other eigenvalues.
It follows that the direction of vector
$\pmb A^k \pmb z$ turns more and more towards the leading eigenvector
as $k$ grows.
Let's see the power iteration in practice. To get the first PC,
we set $\pmb A = \pmb X^T \pmb X$.
```{r}
set.seed(14)
n = 100
p = 10
X = scale(matrix(rnorm(n*p), ncol = p)) # random matrix
A = (t(X) %*% X) # A = X^T X, whose 1st eigenv vector we want
eig = eigen(A)
eig$values[1:3] # check that the 1st is clearly larger than others
A_eig1 = eig$vectors[,1] # this is our target eigenvector
z = rnorm(p) # a random vector in p-space
niter = 50
res = rep(NA, niter) # distance between the 1st eig.vector and the power iterated vector
for(ii in 1:niter){
z = z / sqrt(sum(z^2)) # normalize
res[ii] = sqrt(sum((A_eig1 - z)^2 )) # distance to target
z = A %*% z
}
plot(res, t = "l", xlab = "iteration", ylab = "distance from 1st eigenvec of A")
abline(h = 0,lty = 2, col = "red")
```
To get subsequent PCs, we would run the algorithm again but now
with an additional step where, at each iteration, the vector
$\pmb z$ would be projected to the orthogonal complement of the
leading PCs that had been already found before. This way we would be looking
for the leading eigenvector that is orthogonal to the eigenvectors
already discovered.
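A minimal sketch of this deflation idea, continuing the example above (and assuming the 2nd and 3rd eigenvalues of `A` are well separated), projects out the already found eigenvector `v1` at every iteration:
```{r}
v1 = eig$vectors[,1] # the leading eigenvector found above
z = rnorm(p) # a new random starting vector
for (ii in 1:50) {
  z = z - sum(z * v1) * v1 # project z to the orthogonal complement of v1
  z = z / sqrt(sum(z^2)) # normalize
  z = A %*% z
}
z = z / sqrt(sum(z^2))
cbind(as.vector(z), eig$vectors[,2]) # approximately the 2nd eigenvector (up to an arbitrary sign)
```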
The actual methods in use, such as the Implicitly Restarted Arnoldi Method,
are more sophisticated and robust than the simple power iteration above.
See the slides for an example analysis of *FastPCA* on the UK Biobank data.
Another method is *FlashPCA*.
### A connection between the PCA and ridge regression
Consider a regression problem of estimating $\pmb \beta \in \mathbb{R}^p$
in $\pmb y = \pmb X \pmb \beta + \pmb \varepsilon,$ where
$\pmb y$ is mean centered and the size of $\pmb X$ is $n \times p$.
Let's apply the SVD of $\pmb X = \pmb U \pmb \Sigma \pmb V^T$ to
the predicted outcome values from the OLS estimate
$$\widehat{\pmb y}_\textrm{OLS} = \pmb X \widehat{\pmb \beta}_\textrm{OLS} =
\pmb X \left( \pmb X^T \pmb X\right)^{-1} \pmb X^T \pmb y=
\pmb U \pmb \Sigma \pmb V^T
\left( \pmb V \pmb \Sigma^T \pmb U^T \pmb U \pmb \Sigma \pmb V^T \right)^{-1}
\pmb V \pmb \Sigma^T \pmb U^T \pmb y = \pmb U \pmb U^T \pmb y.
$$
We have assumed in this SVD that
* $\pmb U$ is an $n \times r$ matrix whose columns span the column space of $\pmb X$,
* $\pmb \Sigma$ is an $r \times r$ diagonal matrix with positive diagonal elements
$\sigma_1\geq \ldots \geq \sigma_r>0$,
* $\pmb V$ is a $p \times r$ matrix whose columns span the row space of $\pmb X$, and
* $r\leq \min\{n,p\}$ is the rank of $\pmb X$.
Thus, if $r < n$, the OLS prediction $\widehat{\pmb y}_\textrm{OLS} = \pmb U \pmb U^T \pmb y$
is the projection of $\pmb y$ onto the $r$-dimensional column space of $\pmb X$.
The corresponding calculation for the ridge regression predictions gives
$$\widehat{\pmb y}_\lambda = \pmb X \left( \pmb X^T \pmb X + \lambda \pmb I \right)^{-1} \pmb X^T \pmb y
= \pmb U \pmb \Sigma \left( \pmb \Sigma^2 + \lambda \pmb I \right)^{-1} \pmb \Sigma \pmb U^T \pmb y
= \sum_{k=1}^r \pmb u_k \frac{\sigma_k^2}{\sigma_k^2 + \lambda} \pmb u_k^T \pmb y.$$
Compared to the OLS prediction, ridge regression thus shrinks (when $\lambda>0$)
the coordinates along each $\pmb u_k$ direction by the coefficient
$\frac{\sigma_k^2}{\sigma_k^2 + \lambda}$, which means that the directions
with the smallest singular values will be shrunk the most.
These are the directions
in the column space, where the data have the smallest variance.
We see that the ridge regression penalty induces relatively stronger
shrinkage in the directions where the data vary the least.
This sounds like a good approach if those directions are dominated by noise
rather than by the true signal.
The idea is similar to that of PCA:
we remove or downweight the directions in which the data vary
little and get both dimension reduction and more robust analyses.
As a consequence, we will also lose some of the original information.
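As a small illustration of the shrinkage factors $\sigma_k^2/(\sigma_k^2+\lambda)$, here is a sketch using the scaled Boston predictors from above and an arbitrary example value $\lambda = 10$:
```{r}
lambda = 10 # an arbitrary example penalty value
sigma = svd(Boston_scaled[, keep])$d # singular values of the predictor matrix
round(rbind(sigma = sigma, shrinkage = sigma^2 / (sigma^2 + lambda)), 3)
```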