This document is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
The slide set referred to in this document is “GWAS 5”.
Several relatedness estimation methods considered above were based on the assumption of a homogeneous population, i.e., a population without genetic structure. Genetic population structure is present in the sample if the sample can be divided into groups in such a way that individuals from one group are, on average, more genetically similar among themselves than they are to individuals from the other groups.
Most common variants among humans are shared across the world in the sense that those common alleles are present all around the world. Still, even the common variants carry information about the geographic location of their carrier, since their allele frequencies differ across the world. (Slides 19-20.)
Suppose that a population with \(100\) individuals is living in a particular area. Suppose that half of the population migrates to a new area and the two subpopulations do not have any contact with each other anymore, and in particular, do not have any genetic exchange in the following generations. Let’s visualize how the allele frequency of a particular variant evolves in the two populations as a function of the generations they have been separated. The assumption is that at time 0 the allele frequency is the same in both populations, and, in each of the following generations, the offspring alleles are randomly sampled from the existing alleles with replacement (one individual can have none, one or multiple offspring). This sampling means that Hardy-Weinberg equilibrium holds in each subpopulation. There are 50 individuals in each population, so 100 alleles in each population. The population size is assumed constant.
n = 100 #alleles in each generation of each subpopulation
f0 = 0.5 #starting allele frequency
K = 100 #number of generations in simulation
npop = 10 #Let's make 10 rather than just 2 pops to illustrate more general behavior
f = matrix(NA, ncol = K+1, nrow = npop) #results of allele freqs across pops and generations
for(pop in 1:npop){
a = rbinom(n,1,f0) #starting configuration of alleles
f[pop, 1] = mean(a) #allele frequency at generation 0
for(ii in 1:K){
a = sample(a, size = n, replace = TRUE) #resample generation ii
f[pop, 1 + ii] = mean(a) #allele frequency at generation ii (index ii+1 since generation 0 is at index 1)
}
}
plot(NULL, xlab = "generation", ylab = "allele frequency",
main = paste("n =",n), xlim = c(0, K), ylim = c(0, 1))
for(pop in 1:npop){lines(0:K, f[pop,], lwd = 1.4, col = topo.colors(npop)[pop])}
The figure shows ten possible evolutions for our subpopulations. We see that each population starts near the common allele frequency value (here 0.5) but, as time goes by, some populations come to differ strongly in their allele frequency. If we were to genotype individuals from generation 80 at this variant, we could tell, for example, that heterozygotes are definitely from one of the 5 populations not yet fixed for either allele, and we could tell which populations are possible for each homozygous individual. Thus, pure random variation in allele frequencies between generations generates marked allele frequency differences between populations over time. This is called genetic drift.
The amount of genetic drift depends strongly on the population size. A statistician might think of it like this: each new generation tries to “estimate” its parents’ generation’s allele frequency by sampling alleles from the parents’ generation and computing their frequency, and the precision of this estimate decreases as the sample size decreases.
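To put a number on this analogy (a small aside, not part of the original simulation): under binomial sampling of \(n\) alleles, the standard error of the offspring generation’s allele frequency around the parents’ frequency \(f\) is \(\sqrt{f(1-f)/n}\).
f.par = 0.5 #parents' allele frequency
sqrt(f.par * (1 - f.par) / 100) #n = 100 alleles: SE is 0.05 per generation
sqrt(f.par * (1 - f.par) / 1000) #n = 1000 alleles: SE is only about 0.016 per generation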
Let’s repeat a similar analysis but with a 10 times larger population size. (And let’s not repeat the whole code in the output, just the updated figure.)
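For reference, the only change to the code above is the allele count; the simulation and plotting loops are then rerun unchanged:
n = 1000 #10x larger population: 500 individuals = 1000 alleles per subpopulation
#rerun the simulation and plotting code above with this n to get the figure below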
Within the same time span, the larger populations were much less affected by genetic drift. This phenomenon also explains why some human populations have diverged more strongly from their neighbours than other, geographically equally distant populations. The Finnish genetic ancestry is an example of a population with relatively strong genetic isolation among the European populations, to a large part due to a relatively small historical “founder population” size in Finland. Consequently, the overall genetic background among individuals with Finnish genetic ancestry is less heterogeneous than in many other European countries (which makes some genetic analyses simpler in Finland), and, due to a strong historical genetic drift effect in Finland, some functional alleles have survived in Finland with much higher frequency than elsewhere (say, e.g., 1% in Finland and 0.01% elsewhere) even though selection might act against them. We already know what a huge difference such a frequency difference makes in terms of statistical power to discover phenotype associations! These are reasons why the international research community has put much focus on Finland when it comes to genetics research.
Above we saw that even a single variant shows informative genetic population structure between population groups that have been separated for a while. When we combine information across 100,000s of variants across the genome, we can expect to see quite a clear structure, if we can just find suitable tools to pick it up.
Suppose we have an \(n \times p\) matrix \(\mathbf{X}\) of genotypes of \(n\) individuals measured on \(p\) SNPs. We would like to summarize the main structure of these data with respect to the individuals by using many fewer than \(p\) dimensions. Such a dimension reduction task is most intuitive if it results in only one or two dimensions, since then we can easily visualize the results by simply plotting the individuals in their new coordinates on a line or a plane.
Imagine each individual originally as a point in the \(p\)-dimensional space where each dimension corresponds to one SNP and the individual’s coordinate on that axis is the genotype at the SNP. If we draw one line through the \(p\)-dimensional space and project each individual onto that line, then we can represent each individual by only one value, the individual’s coordinate on the chosen line. Now each individual is represented by one value instead of the original \(p\) values, so we have completed a dimension reduction from \(p\) dimensions to 1 dimension. Is this useful? Only if we can choose that one line in such a way that it captures a useful amount of the information in the data. In principal component analysis (PCA), first defined in 1901 by Karl Pearson, our criterion for choosing the line is that the individuals’ projections on the line should have the largest variance possible among all possible lines that we could draw through the \(p\)-dimensional space.
Let’s make example data of 2 SNPs where we have 50 individuals from each of two populations. In the blue population, the allele 1 frequencies are 5% and 95% at the two SNPs, and in the red population they are the opposite: 95% and 5%. We don’t expect that either SNP alone would completely separate the two populations, but maybe PCA is able to combine the information into one dimension in a way that shows the population structure in the data better than either of the SNPs alone. Let’s draw a picture.
set.seed(20)
npop = 2
cols = c("blue","red")
f = matrix(c(0.05, 0.95, 0.95, 0.05), byrow = TRUE, ncol = npop)
p = nrow(f) #number of SNPs
n = rep(50, npop) #number of samples from each population
pop = rep(1:npop, n) #from which population each individual comes
X = c() #empty genotype data matrix
for(ii in 1:p){
x = c() #empty genotype vector for SNP ii
for(jj in 1:npop){
x = c(x, rbinom(n[jj], size = 2, f[ii,jj]) ) #add genotypes of pop jj to x
}
X = cbind(X,x) #add SNP x as a new column to genotype matrix X
}
jitt.1 = runif(n[1], -0.03, 0.03) #add some noise to coordinates in plot to avoid overlap
jitt.2 = runif(n[2], -0.03, 0.03)
plot(X[,1] + jitt.1, X[,2] + jitt.2,
col = cols[pop], xlab = "SNP1", ylab = "SNP2")
We have clear patterns in the original two-dimensional SNP space: the individuals from the red population are mainly in the lower right corner and those from the blue population in the upper left corner. But neither SNP alone can separate the two populations, and if we colored all dots black, we would not see a clear separation of the points into two populations in this plot. Let’s see which is the line on which the projections of these points have the largest variance. We get that from PCA computed by prcomp().
X = scale(X) #always standardize each variant before PCA
pca = prcomp(X) #do PCA; we look later what this function returns
plot(X[,1] + jitt.1, X[,2] + jitt.2, asp = 1, #Plot the points, now after scaling
col = cols[pop], xlab = "SNP1 (stnd'd)",
ylab = "SNP2 (stnd'd)")
abline(a = 0, b = pca$rotation[2,1]/pca$rotation[1,1]) #add the PC1 line
If we now project all points on the line determined by PC1, as has been done in vector pca$x[,1], and use just random y-coordinates for visualization, we have:
plot(pca$x[, 1], runif(sum(n), -1, 1), col = cols[pop],
yaxt = "n", xlab = "PC1", ylab = "")
The direction picked by PC1 also happens to separate the two populations. So even with only two SNPs, neither of which alone separates the populations, PCA can combine their information to capture the main structure of the data, which here matches the existence of the two populations.
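As a numerical sanity check of PCA’s defining property (a minimal sketch using the scaled matrix X and the pca object from above), no random direction should yield a larger projection variance than PC1:
v1 = var(pca$x[,1]) #variance of the scores on PC1
rand.vars = replicate(1000, { #variances of projections on 1000 random unit vectors
 u = rnorm(ncol(X)) #random direction in the SNP space
 u = u / sqrt(sum(u^2)) #normalize to unit length
 var(drop(X %*% u)) #variance of the projections on this direction
})
max(rand.vars) <= v1 #TRUE: PC1 has the maximum projection variance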
Although the 2-SNP-example above is just a toy demonstration, it gives us a good reason to expect that when we use 100,000s of SNPs, the PCs, i.e., the directions of the largest genetic variation in the data, will capture population structure, if such exists.
Terms:
Principal components of the genetic data are the one-dimensional linear subspaces of the original genotype space that have the following properties: The first PC explains the maximum variance possible by any one-dimensional linear subspace. For any \(k>1\), the \(k^{th}\) PC explains the maximum variance possible by any one-dimensional linear subspace conditional on being orthogonal to all preceding PCs. The number of PCs is \(\min\{n,p\}.\)
Scores or principal component scores are the coordinates of the individuals when they have been projected on the PCs. They are computed using the PC loadings and the individuals’ (standardized) genotypes. In the object returned by prcomp(), the scores are in matrix x, so, e.g., the scores on PC 6 are in vector pca$x[,6].
Loadings are the coefficients that determine how the scores are computed from the (standardized) genotypes. Each PC \(k\) has a loading vector \(\mathbf{l}_k = (l_{k1},\ldots,l_{kp})^\intercal\) and the score of individual \(i\) on PC \(k\) is \[\textrm{score}_{ik} = \mathbf{l}_k^\intercal \mathbf{x}_i = \sum_{m=1}^p l_{km}x_{im},\] where \(x_{im}\) is the standardized SNP genotype of \(i\) at SNP \(m\). Note that when we have the loadings at hand, we can also project any external individual on the existing PCs generated by our reference data set (see the sketch below). Loadings from the prcomp() object pca are in matrix pca$rotation.
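To make these definitions concrete, here is a minimal numerical check (assuming the scaled matrix X and the pca object from the 2-SNP example above) that the scores are exactly the standardized genotypes multiplied by the loadings:
manual.scores = X %*% pca$rotation #scores = standardized genotypes times loadings
max(abs(manual.scores - pca$x)) #~0; prcomp()'s centering is a no-op here since X was already standardized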
Example 5.3. Let’s generate data from three populations P1, P2, P3, of which P1 and P2 are more closely related to each other and more distant from P3. A standard measure of differentiation is the Fst value that describes how large a proportion of genetic variation is present between populations compared to within populations. For example, we have Fst ~ 0.003 between the ancestries of Eastern and Western Finland and ~ 0.10 between genetic ancestries of different continents. The Balding-Nichols model to generate allele frequencies for two populations with an Fst value of \(F\) first picks a background allele frequency \(f\), for example, from a Uniform distribution, and then samples the subpopulation allele frequencies as independent draws from the distribution \[\textrm{Beta}\left(\frac{1-F}{F}f, \frac{1-F}{F}(1-f)\right ).\] Let’s generate data so that Fst between P1 and P2 is 0.003 and Fst between P3 and the shared ancestral population of P1 and P2 is 0.05. (This is a level of differentiation that we might observe between Eastern (P1) and Western Finnish ancestries (P2) and Northern African ancestry (P3).) Let’s sample \(n=100\) individuals from each population using \(p = 3000\) SNPs and show PCs 1-2.
n = 100 #per population
p = 3000 #SNPs
fst.12 = 0.003
fst.12.3 = 0.05
f = runif(p, 0.1, 0.5) #common SNPs in background population
f.3 = rbeta(p, (1-fst.12.3)/fst.12.3*f, (1-fst.12.3)/fst.12.3*(1-f))
f.12 = rbeta(p, (1-fst.12.3)/fst.12.3*f, (1-fst.12.3)/fst.12.3*(1-f)) #P1&P2's shared ancestral population
f.1 = rbeta(p, (1-fst.12)/fst.12*f.12, (1-fst.12)/fst.12*(1-f.12))
f.2 = rbeta(p, (1-fst.12)/fst.12*f.12, (1-fst.12)/fst.12*(1-f.12))
#Let's check that f.1 and f.2 look similar compared to f.1 and f.3 or f.2 and f.3
par(mfrow = c(1,3))
plot(f.1,f.2, main = paste("Fst", fst.12), xlim = c(0,1), ylim = c(0,1), pch = 3)
plot(f.1,f.3, main = paste("Fst >", fst.12.3), xlim = c(0,1), ylim = c(0,1), pch = 3)
plot(f.2,f.3, main = paste("Fst >", fst.12.3), xlim = c(0,1), ylim = c(0,1), pch = 3)
Let’s then generate the genotype data and do PCA.
x = cbind(
replicate(n, rbinom(p, size = 2, prob = f.1)), #generate n inds from P1
replicate(n, rbinom(p, size = 2, prob = f.2)), # from P2
replicate(n, rbinom(p, size = 2, prob = f.3))) # from P3
x = t(x) #each replicate (=ind) is now in a column, but we want inds to rows and SNPs to cols
pca = prcomp(x, scale = TRUE) #do PCA
cols = rep( c("cyan","skyblue","magenta"), each = n) #color for each ind according to pop
plot(pca$x[,1], pca$x[,2], col = cols, xlab = "PC1", ylab = "PC2")
We see that the first PC separates P3 from P1 & P2 but does not separate P1 and P2. The second PC starts to separate P1 and P2. The separation between P1 and P2 would become clearer if we added a few thousand more SNPs or if we increased the Fst value between P1 and P2 (left as an exercise).
Example 5.4. Let’s do PCA with real allele frequency data from the 1000 Genomes project. Let’s read in a frequency file (that is based on the files from the IMPUTE2 webpage) and see what’s in it.
af = read.table("http://www.mv.helsinki.fi/home/mjxpirin/GWAS_course/material/afreq_1000G_phase1_chr15-22.txt",
as.is = TRUE, header = TRUE)
dim(af)
## [1] 5266 19
af[1,]
## chr id position a0 a1 ASW CEU CHB CHS CLM FIN GBR
## 1 15 rs11248847 20101049 G A 0.2377 0.1882 0.4278 0.335 0.1917 0.1613 0.1461
## IBS JPT LWK MXL PUR TSI YRI
## 1 0.2143 0.3876 0.2165 0.25 0.2091 0.1888 0.09659
We have allele 1 frequency info for 5,266 SNPs in 14 populations. The SNPs have been chosen from chromosomes 15-22 and are at least 100,000 bps apart to avoid highly correlated SNPs. Additionally, the global MAFs of all these SNPs are > 5%, but some of them may be very rare in some of the populations.
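We can verify that last claim directly from the frequency table (a quick sketch; the 14 population columns are columns 6-19 of af, as seen in the printout above):
pop.maf = pmin(as.matrix(af[, 6:19]), 1 - as.matrix(af[, 6:19])) #per-population MAFs
sum(apply(pop.maf, 1, min) < 0.01) #count of SNPs with MAF < 1% in at least one population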
Note that the sample sizes for some populations (IBS in particular) are small, so we won’t use them. Let’s demonstrate a European PCA using GBR, TSI, CEU and FIN. We simply simulate some number of individuals (\(n=50\)) from each population and do PCA. (We could use the original individual-level 1000 Genomes data as well, but since that requires large files, here we just work with the allele frequencies and simulate our individuals from them.)
p = nrow(af) #number of SNPs
n = 50 #samples per population
pop.labs = c("GBR","TSI","CEU","FIN")
pop = rep(1:length(pop.labs), each = n)
x = c()
for(ii in 1:length(pop.labs)){
x = rbind(x, t(replicate(n, rbinom(p, size = 2, prob = af[,pop.labs[ii]]) ) ) )
}
x = x[, apply(x, 2, var) > 0 ] #keep only SNPs that have variation in data
pca = prcomp(x, scale = TRUE)
plot(pca$x[,1], pca$x[,2], col = pop, pch = 19, xlab = "PC1", ylab= "PC2",
main = "Simulation from 1000 Genomes Phase 1")
legend("topright", leg = pop.labs, col = 1:length(pop.labs), pch = 19, cex = 1)
It looks like the PC1 is determined by the North-South direction and the PC2 separates Central European and British ancestry from the Finnish and Italian ancestries.
plot(pca$x[,1], pca$x[,3], col = pop, pch = 19, xlab = "PC1", ylab= "PC3",
main = "Simulation from 1000 Genomes Phase 1")
And the PC3 then separates GBR and CEU.
Let’s see some colorful pictures that PCA has produced with real data (slides 21-25).
Technically, PCA is the eigenvalue decomposition of the correlation-based genetic relatedness matrix (GRM-cor) that we discussed earlier: the eigenvectors of the GRM-cor are (up to scaling) the scores on the PCs (the first eigenvector corresponds to the first PC etc.). This means that the GRM-cor can be seen as being built up from the PCs, one by one, as shown on slide 27.
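We can check this relationship numerically (a minimal sketch, assuming a standardized genotype matrix X, e.g., the scaled 2-SNP matrix from above):
grm = tcrossprod(X) / ncol(X) #GRM-cor = X %*% t(X) / p for standardized X
eig = eigen(grm) #eigenvalue decomposition of the GRM
abs(cor(eig$vectors[,1], prcomp(X)$x[,1])) #~1: 1st eigenvector is proportional to the scores on PC1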
Tutorial on PCA by Jon Shlens is an excellent tutorial, but unfortunately it uses the rows and columns of the data matrix the other way around from how we use them, so the data matrix is given as a \(p \times n\) matrix, and \(p\) (the number of variables) is called \(m\).
Most GWAS software packages can do PCA. Typically, the steps are as in Kerminen 2015:
Identify a suitable set of individuals by excluding one individual from each pair of closely related individuals. (Close relatives can drive some leading PCs and hence mix up the analysis when the purpose is to find population structure, see an example below.)
Identify a suitable set of SNPs, typically MAF>5% and strict LD-pruning is applied (we’ll talk about pruning in the next section). Remove also the known regions of high-LD (Price et al. 2008).
Make sure that the method is using mean-centered genotypes; often the SNPs are further standardized to have variance 1 in the sample. Note: Also the GRM-cor uses the standardized genotypes, as it is based on the genotype correlation.
Plot the loadings along the genome to observe whether there are some genomic regions with much larger contributions than the genome average (see the sketch after this list). Spikes somewhere in the genome indicate that those regions are driving the PC, and that is most likely because of inadequate pruning of the SNPs from that region (See Figure 9 in Kerminen 2015).
Visualize the PCs, for example by plotting two PCs against each other. Observe if there are outliers that seem to drive some of the leading (say first 20) PCs, and possibly remove those outliers, and redo PCA. Such individuals could have some quality problems or they may have genetic ancestry that is different from the rest of the sample, which can be a problem in a GWAS as we’ll discuss soon. If you have geographic location info about your samples, color each PC-plot with the expected geographic populations of the samples. If the PC plot seems to separate the expected populations, it is likely to be correctly done, otherwise there is need to recheck the steps.
Project the excluded relatives on the PCs. Some relatives may have been excluded while generating the PCs, but they will have valid scores on the PCs when projected using the loadings that were computed when those relatives were excluded.
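For the loadings diagnostic mentioned above, a minimal sketch (assuming a prcomp object pca whose SNP columns are in genomic order):
plot(abs(pca$rotation[,1]), pch = 20, cex = 0.5,
 xlab = "SNP index (genomic order)", ylab = "|loading| on PC1") #isolated spikes suggest one region is driving PC1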
Let’s make a data set that has 10 individuals from each of populations 1 and 2. The Fst between the populations is 0.01 and the data include one pair of full sibs from population 1, while the other individuals are unrelated within each population. Let’s see how the first and the second PC behave.
set.seed(20)
p = 10000 #SNPs
fst.12 = 0.01
cols = c("black","limegreen")
f = runif(p, 0.2,0.5) #common SNPs in background population
f.1 = rbeta(p, (1 - fst.12)/fst.12*f, (1 - fst.12)/fst.12*(1 - f))
f.2 = rbeta(p, (1 - fst.12)/fst.12*f, (1 - fst.12)/fst.12*(1 - f))
X = rbind(offspring.geno(n.families = 1, n.snps = p, fs = f.1, n.shared.parents = 2), #full-sibs from 1
offspring.geno(n.families = 4, n.snps = p, fs = f.1, n.shared.parents = 0), #unrel from 1
offspring.geno(n.families = 5, n.snps = p, fs = f.2, n.shared.parents = 0)) #unrel from 2
pop = c(rep(1,2*5), rep(2,2*5)) #population labels: 10x Pop1 and 10x Pop2.
X = X[, apply(X, 2, var) > 0] #remove SNPs without variation before scaling
pca = prcomp(X, scale = T) #do PCA
plot(pca$x[,1], pca$x[,2], asp = 1, col = cols[pop], xlab = "PC1", ylab = "PC2", pch = 19)
Thus, the first PC picks up the relative pair and not the population structure. Only the second PC picks up the population structure. This is not what we want in GWAS, where we want to adjust for relatedness in other ways and use PCA to get an idea of the population structure. Let’s redo the PCA by first removing one of the sibs, and then projecting him/her back among the others in the PC plot.
X.unrel = X[-1,] #remove row 1
pca = prcomp(X.unrel, scale = TRUE)
plot(pca$x[,1], pca$x[,2], asp = 1, col = cols[pop[-1]], xlab = "PC1", ylab = "PC2",
main = "Unrelated PCA", pch = 19)
#project back individual 1 to the PCs spanned by the 19 other individuals
pca.1 = predict(pca, newdata = matrix(X[1,], nrow = 1)) #projects newdata to PCs defined in 'pca'
points(pca.1[1], pca.1[2], col = "violet", pch = 19, cex = 1.2) #ind 1 on PCs
We see that the excluded sibling (in violet) is projected among his/her population, and now the PCs are not affected by any close relationships.
Note that when projecting samples using predict() on an object from prcomp(), the new data must be in the same units as the original data that was used to create the PCA with prcomp(). If the original data for prcomp() were given as unscaled genotypes, then the new data must also be given as unscaled genotypes with the same allele coding. And if the input data were scaled outside the prcomp() function call, then the new data must also be scaled using the same mean and SD values before the projection with the predict() function.
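To make this concrete, a minimal check (using the pca object and the genotype matrix X from the sibling example above, where the scaling happened inside prcomp()): manual projection with the stored centers and scales reproduces predict().
new.x = X[1, , drop = FALSE] #the excluded sibling, as unscaled genotypes
manual = scale(new.x, center = pca$center, scale = pca$scale) %*% pca$rotation #scale, then apply loadings
max(abs(manual - predict(pca, newdata = new.x))) #should be ~0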
Those interested in the level of fine-scale genetic structure that can be extracted from genetic data can see results for:
Bantu-speaking African populations (Fortes-Lima et al. 2023)
Central Asia (Mezzavilla et al. 2014)
Estonia (Pankratov et al. 2020)
Finland (Kerminen et al. 2017)
France (Saint Pierre et al. 2020)
Ireland (Gilbert et al. 2017)
Japan (Sakaue et al. 2020)
Netherlands (Byrne et al. 2020)
Scotland (Gilbert et al. 2019)
South and Southeast Asia (Tagore et al. 2021)
Spain (Bycroft et al. 2019)