Now that we have most of the fundamentals covered in the previous chapters, we can delve into the first data analytic method, dimension reduction, which reduces the number of features when dealing with a very large number of variables. Dimension reduction can help us extract a set of “uncorrelated” principal variables and reduce the complexity of the data. We are not simply picking some of the original variables. Rather, we are constructing new “uncorrelated” variables as functions of the old features.
Dimensionality reduction techniques enable exploratory data analyses by reducing the complexity of the dataset, still approximately preserving important properties, such as retaining the distances between cases or subjects. If we are able to reduce the complexity down to a few dimensions, we can then plot the data and untangle its intrinsic characteristics.
All dimensionality reduction techniques carry some resemblance to variable selection, which we will see in Chapter 16. The continuous process of dimensionality reduction yields lower variability compared to the discrete feature selection process, e.g., see Wolfgang M. Hartmann’s paper.
We will (1) start with a synthetic example demonstrating the reduction of 2D data to 1D, (2) explain the notion of rotation matrices, (3) show examples of principal component analysis (PCA), singular value decomposition (SVD), independent component analysis (ICA), factor analysis (FA), t-distributed Stochastic Neighbor Embedding (t-SNE), and Uniform Manifold Approximation and Projection (UMAP), and (4) present a Parkinson’s disease case-study at the end.
We consider an example looking at twin heights. Suppose we simulate 1,000 2D points representing normalized individual heights, i.e., the number of standard deviations from the mean height. Each 2D point represents a pair of twins. We will simulate this scenario using the Bivariate Normal Distribution.
library(MASS)
set.seed(1234)
n <- 1000
y = t(mvrnorm(n, c(0, 0), matrix(c(1, 0.95, 0.95, 1), 2, 2)))
\(Twin1_{Height}\) | \(Twin2_{Height}\) |
---|---|
\(y[1,1]\) | \(y[1,2]\) |
\(y[2,1]\) | \(y[2,2]\) |
\(y[3,1]\) | \(y[3,2]\) |
\(\cdots\) | \(\cdots\) |
\(y[1000,1]\) | \(y[1000,2]\) |
\[ y^T_{2\times 1000} = \begin{bmatrix} y[1, ]=Twin1_{Height} \\ y[2, ]=Twin2_{Height} \end{bmatrix} \sim BVN \left ( \mu= \begin{bmatrix} Twin1_{Height} \\ Twin2_{Height} \end{bmatrix} , \Sigma=\begin{bmatrix} 1 & 0.95 \\ 0.95 & 1 \end{bmatrix} \right ) .\]
# plot(y[1, ], y[2, ], xlab="Twin 1 (standardized height)",
# ylab="Twin 2 (standardized height)", xlim=c(-3, 3), ylim=c(-3, 3))
# points(y[1, 1:2], y[2, 1:2], col=2, pch=16)
library(plotly)
plot_ly() %>%
add_markers(x = ~y[1, ], y = ~y[2, ], name="Data Scatter") %>%
add_markers(x = y[1, 1], y = y[2, 1], marker=list(color = 'red', size = 20,
line = list(color = 'yellow', width = 2)), name="Twin-pair 1") %>%
add_markers(x = y[1, 2], y = y[2, 2], marker=list(color = 'green', size = 20,
line = list(color = 'orange', width = 2)), name="Twin-pair 2")%>%
layout(title='Scatter Plot Simulated Twin Data (Y)',
xaxis = list(title="Twin 1 (standardized height)"), yaxis = list(title="Twin 2 (standardized height)"),
legend = list(orientation = 'h'))
These data may represent a fraction of the information included in a high-throughput neuroimaging genetics study of twins. You can see one example of such a pediatric study here.
Tracking the distances between any two samples can be accomplished using the dist function. stats::dist() computes the distance matrix (according to a user-specified distance metric, default=Euclidean) representing the distances between the rows of the data matrix. For example, here is the distance between the two RED points in the figure above:
d = dist(t(y))
as.matrix(d)[1, 2]
## [1] 2.100187
To reduce the 2D data to a simpler 1D plot, we can transform the data into a 1D matrix (vector) that (approximately) preserves the distances between the 2D points.
The 2D plot shows the Euclidean distance between the pair of RED points; the length of this line segment is the distance between the 2 points. In 2D, these lines tend to run along the direction of the diagonal. If we rotate the plot so that the diagonal is aligned with the x-axis, we get the following raw and transformed plots:
z1 = (y[1, ]+y[2, ])/2 # the sum (actually average)
z2 = (y[1, ]-y[2, ]) # the difference

z = rbind( z1, z2) # matrix z has the same dimension as y

thelim <- c(-3, 3)
# par(mar=c(1, 2))
# par(mfrow=c(2,1))
# plot(y[1, ], y[2, ], xlab="Twin 1 (standardized height)",
# ylab="Twin 2 (standardized height)",
# xlim=thelim, ylim=thelim)
# points(y[1, 1:2], y[2, 1:2], col=2, pch=16)
# plot(z[1, ], z[2, ], xlim=thelim, ylim=thelim, xlab="Average height", ylab="Difference in height")
# points(z[1, 1:2], z[2, 1:2], col=2, pch=16)
# par(mfrow=c(1,1))
plot_ly() %>%
add_markers(x = ~z[1, ], y = ~z[2, ], name="Data Scatter") %>%
add_markers(x = z[1, 1], y = z[2, 1], marker=list(color = 'red', size = 20,
line = list(color = 'yellow', width = 2)), name="Twin-pair 1") %>%
add_markers(x = z[1, 2], y = z[2, 2], marker=list(color = 'green', size = 20,
line = list(color = 'orange', width = 2)), name="Twin-pair 2") %>%
layout(title='Scatter Plot Transformed Twin Data (Z)',
xaxis = list(title="Average height", scaleanchor = "y", scaleratio = 2),
yaxis = list(title="Difference in height", scaleanchor = "x", scaleratio = 0.5),
legend = list(orientation = 'h'))
Of course, matrix linear algebra notation can be used to represent this affine transformation of the data. Here we can see that to get the result z, we multiplied y by the matrix \(A\):
\[ A = \begin{pmatrix} 1/2&1/2\\ 1&-1\\ \end{pmatrix} \implies z = A \times y .\]
We can invert this transform by multiplying the result by the inverse matrix \(A^{-1}\) (which, as we will see below, is not a rotation matrix):
\[ A^{-1} = \begin{pmatrix} 1&1/2\\ 1&-1/2\\ \end{pmatrix} \implies y = A^{-1} \times z \]
You can try this in R:
A <- matrix(c(1/2, 1, 1/2, -1), nrow=2, ncol=2); A # define a matrix A
## [,1] [,2]
## [1,] 0.5 0.5
## [2,] 1.0 -1.0
A_inv <- solve(A); A_inv # inverse
## [,1] [,2]
## [1,] 1 0.5
## [2,] 1 -0.5
A %*% A_inv # Verify result
## [,1] [,2]
## [1,] 1 0
## [2,] 0 1
Note that this matrix transformation did not preserve distances, i.e., it’s not a simple rotation in 2D:
d = dist(t(y)); as.matrix(d)[1, 2] # distance between first two points in Y
## [1] 2.100187
d1 = dist(t(z)); as.matrix(d1)[1, 2] # distance between first two points in Z=A*Y
## [1] 1.541323
One important question is how to identify transformations that preserve distances. In mathematics, transformations between metric spaces that are distance-preserving are called isometries
(or congruences
or congruent transformations).
First, let’s test the MA transformation we used above: \[A=\frac{Y_1+Y_2}{2} \\ M=Y_1 - Y_2.\]
MA <- matrix(c(1/2, 1, 1/2, -1), 2, 2)

MA_z <- MA %*% y
d <- dist(t(y))
d_MA <- dist(t(MA_z))
# plot(as.numeric(d), as.numeric(d_MA))
# abline(0, 1, col=2)
plot_ly() %>%
add_markers(x = ~as.numeric(d)[1:5000], y = ~as.numeric(d_MA)[1:5000], name="Transformed Twin Distances") %>%
add_markers(x = ~as.numeric(d)[1], y = ~as.numeric(d_MA)[1], marker=list(color = 'red', size = 20,
line = list(color = 'yellow', width = 2)), name="Twin-pair 1") %>%
add_markers(x = ~as.numeric(d)[2], y = ~as.numeric(d_MA)[2], marker=list(color = 'green', size = 20,
line = list(color = 'orange', width = 2)), name="Twin-pair 2") %>%
add_trace(x = ~c(0,8), y = ~c(0,8), mode="lines",
line = list(color = "red", width = 4), name="Preserved Distances") %>%
layout(title='Preservation of Distances Between Twins (Transform=MA_Z)',
xaxis = list(title="Original Twin Distances", range = c(0, 8)),
yaxis = list(title="Transformed Twin Distances", range = c(0, 8)),
legend = list(orientation = 'h'))
Observe that this MA transformation is not an isometry - the distances are not preserved. Here is one example with \(v1=\begin{bmatrix} v1_x=0 \\ v1_y=1 \end{bmatrix}\), \(v2=\begin{bmatrix} v2_x=1 \\ v2_y=0 \end{bmatrix}\), which are distance \(\sqrt{2}\) apart in their native space, but separated further by the transformation \(MA\), \(d(MA(v1),MA(v2))=2\).
MA; t(MA); solve(MA); t(MA) - solve(MA)
## [,1] [,2]
## [1,] 0.5 0.5
## [2,] 1.0 -1.0
## [,1] [,2]
## [1,] 0.5 1
## [2,] 0.5 -1
## [,1] [,2]
## [1,] 1 0.5
## [2,] 1 -0.5
## [,1] [,2]
## [1,] -0.5 0.5
## [2,] -0.5 -0.5
v1 <- c(0,1); v2 <- c(1,0); rbind(v1,v2)
## [,1] [,2]
## v1 0 1
## v2 1 0
euc.dist <- function(x1, x2) sqrt(sum((x1 - x2) ^ 2))
euc.dist(v1,v2)
## [1] 1.414214
v1_t <- MA %*% v1; v2_t <- MA %*% v2
euc.dist(v1_t,v2_t)
## [1] 2
More generally, if \[ \begin{pmatrix} Y_1\\ Y_2\\ \end{pmatrix} \sim BVN \left( \begin{pmatrix} \mu_1\\ \mu_2\\ \end{pmatrix}, \begin{pmatrix} \sigma_1^2&\sigma_{12}\\ \sigma_{12}&\sigma_2^2\\ \end{pmatrix} \right)\] Then, \[ Z = AY + \eta \sim BVN(\eta + A\mu,A\Sigma A^{T}).\] Where BVN denotes bivariate normal distribution, \(A = \begin{pmatrix}a&b\\c&d\\ \end{pmatrix}\), \(Y=(Y_1,Y_2)^T\), \(\mu = (\mu_1,\mu_2)\), \(\Sigma = \begin{pmatrix} \sigma_1^2&\sigma_{12}\\ \sigma_{12}&\sigma_2^2\\ \end{pmatrix}\).
You can verify this by using the change of variable theorem. Thus, affine transformations preserve bivariate normality. However, in general, the linear transformation (\(A\)) is not guaranteed to be an isometry.
The question now is: under what additional conditions on the transformation matrix \(A\) can we guarantee an isometry?
Notice that, \[ d^2(P_i,P_j) =\sum_{k=1}^{n} (P_{jk}-P_{ik})^2 = ||P||^2 = P^TP,\]
where \(P = (P_{j,1}-P_{i,1},...,P_{j,n}-P_{i,n})^T\), \(P_i,P_j \in \mathbb{R}^n\) are any two points in \(n\) dimensions.
Thus, the only requirement we need for \(A\) to be an isometry is \((AY)^T(AY)=Y^TY\), i.e., \(A^TA=I\), which implies that \(A\) is an orthogonal (rotational) matrix.
Let’s use a two-dimensional orthogonal matrix to illustrate this.
Set \(A = \frac{1}{\sqrt{2}} \begin{pmatrix}1&1\\1&-1\\ \end{pmatrix}\). It is easy to verify that \(A\) is an orthogonal (2D rotation) matrix.
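Before applying the transformation to the data, we can verify the orthogonality condition \(A^TA=I\) numerically; this is a small added check, not part of the original analysis:
A <- 1/sqrt(2)*matrix(c(1, 1, 1, -1), 2, 2) # the candidate rotation matrix
round(t(A) %*% A, 10)                       # should be the 2x2 identity matrix
abs(det(A))                                 # orthogonal matrices satisfy |det(A)| = 1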
The simplest way to test the isometry property is to perform the linear transformation directly, as follows.
A <- 1/sqrt(2)*matrix(c(1, 1, 1, -1), 2, 2)
z <- A %*% y
d <- dist(t(y))
d2 <- dist(t(z))
# plot(as.numeric(d), as.numeric(d2))
# abline(0, 1, col=2)
plot_ly() %>%
add_markers(x = ~as.numeric(d)[1:5000], y = ~as.numeric(d2)[1:5000], name="Transformed Twin Distances") %>%
add_trace(x = ~c(0,8), y = ~c(0,8), mode="lines",
line = list(color = "red", width = 4), name="Preserved Distances") %>%
add_markers(x = ~as.numeric(d)[1], y = ~as.numeric(d2)[1], marker=list(color = 'red', size = 20,
line = list(color = 'yellow', width = 2)), name="Twin-pair 1") %>%
add_markers(x = ~as.numeric(d)[2], y = ~as.numeric(d2)[2], marker=list(color = 'green', size = 20,
line = list(color = 'orange', width = 2)), name="Twin-pair 2") %>%
layout(title='Preservation of Distances Between Twins (Transform=A)',
xaxis = list(title="Original Twin Distances", range = c(0, 8)),
yaxis = list(title="Transformed Twin Distances", range = c(0, 8)),
legend = list(orientation = 'h'))
We can observe that the distances between points computed from the original data and from the transformed data are the same. Thus, the transformation \(A\) is a rotation (isometry) of \(y\).
An alternative method is to simulate directly from the joint distribution of \(Z = (Z_1,Z_2)^T\).
As we have mentioned above: \(Z = AY + \eta \sim BVN(\eta + A\mu,A\Sigma A^{T})\).
where \(\eta = (0,0)^T\), \(\Sigma = \begin{pmatrix} 1&0.95\\0.95&1\\ \end{pmatrix}\), \(A = \frac{1}{\sqrt{2}} \begin{pmatrix}1&1\\1&-1\\ \end{pmatrix}\).
We can compute \(A\Sigma A^{T}\) by hand or using matrix multiplication in R:
sig <- matrix(c(1,0.95,0.95,1), nrow=2)
A %*% sig %*% t(A)
## [,1] [,2]
## [1,] 1.95 0.00
## [2,] 0.00 0.05
\(A\Sigma A^{T}\) represents the variance-covariance matrix, \(cov(z_1,z_2)\). We can simulate \(z_1\) and \(z_2\) independently from \(z_1\sim N(0,1.95)\) and \(z_2 \sim N(0,0.05)\) (here 1.95 and 0.05 are variances). Note that independence and uncorrelatedness are equivalent for the bivariate normal distribution.
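As a quick sketch (added here for illustration), we can confirm empirically that the covariance of the rotated data matches \(A\Sigma A^{T}\), and that independently simulated \(z_1, z_2\) values are indeed uncorrelated:
z <- A %*% y
round(cov(t(z)), 2)          # empirical covariance of the rotated twin data
round(A %*% sig %*% t(A), 2) # theoretical covariance, approximately diag(1.95, 0.05)

z1.sim <- rnorm(1000, mean = 0, sd = sqrt(1.95)) # recall 1.95 and 0.05 are variances
z2.sim <- rnorm(1000, mean = 0, sd = sqrt(0.05))
round(cor(z1.sim, z2.sim), 2)                    # near zero, as expected under independence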
Let’s demonstrate that the rotation transform \(A = \frac{1}{\sqrt{2}} \begin{pmatrix}1&1\\1&-1\\ \end{pmatrix}: y\to z\equiv Ay\) preserves twin-pair distances by plotting the raw and transformed distances and computing the pre- and post-transform distances between two pair of twins.
# thelim <- c(-3, 3)
# #par(mfrow=c(2,1))
# plot(y[1, ], y[2, ], xlab="Twin 1 (standardized height)",
# ylab="Twin 2 (standardized height)",
# xlim=thelim, ylim=thelim)
# points(y[1, 1:2], y[2, 1:2], col=2, pch=16)
# plot(z[1, ], z[2, ], xlim=thelim, ylim=thelim, xlab="Average height", ylab="Difference in height")
# points(z[1, 1:2], z[2, 1:2], col=2, pch=16)
# par(mfrow=c(1,1))
# Original (pre-transform)
euc.dist <- function(x1, x2) sqrt(sum((x1 - x2) ^ 2))
plot_ly() %>%
add_markers(x = ~y[1,], y = ~y[2,], name="Twin Distances") %>%
add_markers(x = ~y[1, 1], y = ~y[2, 1], marker=list(color = 'red', size = 20,
line = list(color = 'yellow', width = 2)), name="Twin-pair 1") %>%
add_markers(x = ~y[1, 2], y = ~y[2, 2], marker=list(color = 'green', size = 20,
line = list(color = 'orange', width = 2)), name="Twin-pair 2") %>%
layout(title=paste0('(Pre-Transform) Original Twin Heights (standardized): Twin-Pair Distance = ',
round(euc.dist(y[, 1], y[, 2]),3)),
xaxis = list(title="Twin 1"),
yaxis = list(title="Twin 2"),
legend = list(orientation = 'h'))
# Transform
plot_ly() %>%
add_markers(x = ~z[1,], y = ~z[2,], name="Transformed Twin Distances") %>%
add_markers(x = ~z[1, 1], y = ~z[2, 1], marker=list(color = 'red', size = 20,
line = list(color = 'yellow', width = 2)), name="Twin-pair 1") %>%
add_markers(x = ~z[1, 2], y = ~z[2, 2], marker=list(color = 'green', size = 20,
line = list(color = 'orange', width = 2)), name="Twin-pair 2") %>%
layout(title=paste0('(Transform) Twin Heights: Twin-Pair Distance = ',
round(euc.dist(z[, 1], z[, 2]),3)),
xaxis = list(title="Twin 1", scaleanchor = "y", scaleratio = 2),
yaxis = list(title="Twin 2", scaleanchor = "x", scaleratio = 0.5),
legend = list(orientation = 'h'))
We applied this transformation and observed that the distances between points were unchanged after the rotation \(A\). This rotation preserves the pairwise distances while concentrating most of the variability of the data in the first (average height) dimension, which is what makes the dimension reduction possible.
If we remove the second dimension and recompute the distances using only the first dimension, we get:
d4 = dist(z[1, ]) ## distance computed using just the first dimension
# plot(as.numeric(d), as.numeric(d4))
# abline(0, 1)
# take a smaller sample size to expedite the viz
ind <- sample(1:length(d4), 10000, replace = FALSE)
x1 <- d[ind]
y1 <- d4[ind]
plot_ly() %>%
add_markers(x = ~x1, y = ~y1, name="Transformed Distances") %>%
add_trace(x = ~c(0,8), y = ~c(0,8), mode="lines",
line = list(color = "red", width = 4), name="Preserved Distances") %>%
layout(title='Approximate Distance Preservation in 1D',
xaxis = list(title="(1D) Original Distances", range = c(0, 8)),
yaxis = list(title="(1D) A-Transformed Distances", range = c(0, 8)),
legend = list(orientation = 'h'))
## No trace type specified:
## Based on info supplied, a 'scatter' trace seems appropriate.
## Read more about this trace type -> https://plotly.com/r/reference/#scatter
# add_markers(x = ~x1[1], y = ~y1[1], marker=list(color = 'red', size = 20,
# line = list(color = 'yellow', width = 2)), name="Pair 1") %>%
# add_markers(x = ~x1[2], y = ~y1[2], marker=list(color = 'green', size = 20,
# line = list(color = 'orange', width = 2)), name="Pair 2") %>%
# layout(title=paste0('Approximate Distance Estimated in 1D(Transform): Twin-Pair Distance = ',
# round(euc.dist(x1[1], y1[1]),3)),
# xaxis = list(title="Original Distances", range = c(0, 8)),
# yaxis = list(title="Transformed Distances", range = c(0, 8)),
# legend = list(orientation = 'h'))
The 1D distance provides a very good approximation to the actual 2D distance. This first dimension of the transformed data is called the first principal component. In general, this idea motivates the use of principal component analysis (PCA) and the singular value decomposition (SVD) to achieve dimension reduction.
In the notation above, the rows represent variables and the columns represent cases. More commonly, rows represent cases and columns represent variables; in that convention, \(Y\) would be transposed into an \(N \times 2\) matrix (individuals in the rows, features in the columns). In genomics, however, it is common to place the features (e.g., genes or SNPs) in the rows and the samples in the columns. The sample covariance matrix is usually computed (after centering and rescaling) as \(\mathbf{X}^\top\mathbf{X}\), whose cells represent covariances between pairs of features; for this to hold, the rows of \(\mathbf{X}\) must represent the cases. Since the rows of \(Y\) here are the variables, we would compute \(\mathbf{Y}\mathbf{Y}^\top\) (with the appropriate rescaling) instead.
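As a small sketch (not part of the original analysis), we can let prcomp() recover this rotation directly from the simulated twin data; note the transpose, since prcomp() expects cases in the rows. The first loading vector is approximately \((1/\sqrt{2}, 1/\sqrt{2})\), i.e., the “average height” direction, and PC1 captures roughly 97% of the variance:
twin.pca <- prcomp(t(y), center = TRUE) # t(y) is 1000 x 2: cases in rows, features in columns
twin.pca$rotation                       # first column ~ (0.707, 0.707), the diagonal direction
summary(twin.pca)                       # PC1 explains approximately 97% of the total variance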
Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Factor Analysis (FA) are similar strategies that seek a new basis (vectors representing the principal directions) onto which the data are projected so as to maximize a technique-specific objective function. These basis vectors are simply linear combinations of the original features in the data/signal.
The singular value decomposition (SVD), discussed later in this chapter, provides a specific matrix factorization algorithm that can be employed in various techniques to decompose a data matrix \(X_{m\times n}\) as \({U\Sigma V^{T}}\), where \({U}\) is an \(m \times m\) real or complex unitary matrix (\({U^TU=UU^T=I}\), i.e., \(|\det(U)|=1\)), \({\Sigma }\) is an \(m\times n\) rectangular diagonal matrix with non-negative singular values on the diagonal, and \({V}\) is an \(n\times n\) unitary matrix.
Method | Assumptions | Cost Function Optimization | Applications |
---|---|---|---|
PCA | Gaussian signals, linear bivariate relations | Aims to explain the variance in the original signal. Minimizes the covariance of the data and yields high-energy orthogonal vectors in terms of the signal variance. PCA looks for an orthogonal linear transformation that maximizes the variance of the variables | Relies on \(1^{st}\) and \(2^{nd}\) moments of the measured data, which makes it useful when data features are close to Gaussian |
ICA | No Gaussian signal assumptions | Minimizes higher-order statistics (e.g., \(3^{rd}\) and \(4^{th}\) order skewness and kurtosis), effectively minimizing the mutual information of the transformed output. ICA seeks a linear transformation where the basis vectors are statistically independent, but neither Gaussian, orthogonal or ranked in order | Applicable for non-Gaussian, very noisy, or mixture processes composed of simultaneous input from multiple sources |
FA | Approximately Gaussian data | Objective function relies on second order moments to compute likelihood ratios. FA factors are linear combinations that maximize the shared portion of the variance underlying latent variables, which may use a variety of optimization strategies (e.g., maximum likelihood) | PCA-generalization used to test a theoretical model of latent factors causing the observed features |
PCA (principal component analysis) is a mathematical procedure that transforms a number of possibly correlated variables into a smaller number of uncorrelated variables through a process known as orthogonal transformation.
Principal Components
Let’s consider the simplest situation, where we have \(n\) observations \(\{p_1, p_2, ..., p_n\}\), each with 2 features \(p_i=(x_i, y_i)\). When we draw them on a plot, we use the x-axis and y-axis for positioning. However, we can define our own coordinate system based on the principal components.
ex <- data.frame(x=c(1, 3, 5, 6, 10, 16, 50), y=c(4, 6, 5, 7, 10, 13, 12))
# reg1 <- lm(y~x, data=ex)
# plot(ex)
# abline(reg1, col='red', lwd=4)
# text(40, 10.5, "pc1")
# segments(10.5, 11, 15, 7, lwd=4)
# text(11, 7, "pc2")

yLM <- lm(y ~ x, data=ex)
perpSlope = - 1/(yLM$coefficients[2]) # slope of perpendicular line
newX <- data.frame(x = mean(ex$x))
newY <- predict.lm(yLM, newX)

point0 <- c(x=newX, y=newY) # (x,y) coordinates of point0 on LM line
point1 <- c(x=newX[1]-1, y=newY-perpSlope) # (x,y) coordinates of point1 on perpendicular line
point2 <- c(x=newX[1]+1, y=newY+perpSlope) # (x,y) coordinates of point2 on perpendicular line

modelLabels <- c('PC 1 (LM)', 'PC 2 (Perp_LM)')
modelLabels.x <- c(40, 20)
modelLabels.y <- c(10, 6)
modelLabels.color <- c("blue", "green")
plot_ly(ex) %>%
add_lines(x = ~x, y = ~yLM$fitted, name="First PC, Linear Model lm(Y ~ X)",
line = list(width = 4)) %>%
add_markers(x = ~x, y = ~y, name="Sample Simulated Data") %>%
add_lines(x = ~c(point1$x,point2$x), y = ~c(point1$y,point2$y), name="Second PC, Orthogonal to lm(Y ~ X)",
line = list(width = 4)) %>%
add_markers(x = ~newX[1]$x, y = ~newY, name="Center (avg(x),avg(y))", marker = list(size = 20,
color = 'green', line = list(color = 'yellow', width = 2))) %>%
layout(xaxis = list(title="X", scaleanchor = "y"), # control the y:x axes aspect ratio
yaxis = list(title="Y", scaleanchor = "x"), legend = list(orientation = 'h'),
annotations = list(text=modelLabels, x=modelLabels.x, y=modelLabels.y,
color = modelLabels.color, showarrow=FALSE ))
As illustrated in the graph, the first PC, \(pc_1\), is a minimum-distance fit in the feature space. The second PC is a minimum-distance fit to a line perpendicular to the first PC. Similarly, the third PC would be a minimum-distance fit in the subspace perpendicular to all previous PCs. In our 2D case, two PCs are the most we can have. In higher-dimensional spaces, we need to decide how many PCs to retain to achieve the best performance.
In general, the formula for the first PC is \(pc_1=a_1^TX=\sum_{i=1}^N a_{i, 1}X_i\), where \(X_i\) is an \(n\times 1\) vector representing a column of the matrix \(X\) (with a total of \(n\) observations and \(N\) features). The weights \(a_1=\{a_{1, 1}, a_{2, 1}, ..., a_{N, 1}\}\) are chosen to maximize the variance of \(pc_1\). Following the same rule, the \(k^{th}\) PC is \(pc_k=a_k^TX=\sum_{i=1}^N a_{i, k}X_i\), where the weights \(a_k=\{a_{1, k}, a_{2, k}, ..., a_{N, k}\}\) are subject to additional constraints: \(a_k^T a_k=1\) and \(pc_k\) must be uncorrelated with (orthogonal to) all of the preceding components \(pc_1, ..., pc_{k-1}\).
Let’s figure out how to find \(a_1\). First we need to express the variance of our first principal component using the variance covariance matrix of \(X\): \[Var(pc_1)=E(pc_1^2)-(E(pc_1))^2=\] \[\sum_{i, j=1}^N a_{i, 1} a_{j, 1} E(x_i x_j)-\sum_{i, j=1}^N a_{i, 1} a_{j, 1} E(x_i)E(x_j)=\] \[\sum_{i, j=1}^N a_{i, 1} a_{j, 1} S_{i, j}.\]
Where \(S_{i, j}=E(x_i x_j)-E(x_i)E(x_j)\).
This implies \(Var(pc_1)=a_1^TS a_1\), where \(S=(S_{i, j})\) is the covariance matrix of \(X=\{X_1, ..., X_N\}\). Since \(a_1\) maximizes \(Var(pc_1)\) subject to the constraint \(a_1^T a_1=1\), we can introduce a Lagrange multiplier \(\lambda\) and write \[a_1=\arg\max_{a_1}\left (a_1^TS a_1-\lambda (a_1^T a_1-1)\right ),\] where the penalty term after the minus sign vanishes when the constraint is satisfied. To maximize this quadratic expression, we take the derivative with respect to \(a_1\) and set it to 0. This yields \((S-\lambda I_N)a_1=0\).
In Chapter 4 we showed that \(a_1\) corresponds to the eigenvector associated with the largest eigenvalue of \(S\), the variance-covariance matrix of \(X\). Hence, \(pc_1\) retains the largest amount of variation in the sample. Likewise, \(a_k\) is the eigenvector associated with the \(k^{th}\) largest eigenvalue of \(S\).
PCA requires the mean for each column in the data matrix to be zero. That is, the sample mean of each column is shifted to zero.
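In R, this column-centering is typically accomplished with the scale() function; here is a brief sketch using the simulated twin data:
y.centered <- scale(t(y), center = TRUE, scale = FALSE) # subtract each column mean
round(colMeans(y.centered), 10)                         # column means are now ~ 0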
Let’s use a subset (N=33) of Parkinson’s Progression Markers Initiative (PPMI) data to demonstrate the relationship between \(S\) and PC loadings. First, we need to import the dataset into R and delete the patient ID column.
library(rvest)
wiki_url <- read_html("https://wiki.socr.umich.edu/index.php/SMHS_PCA_ICA_FA")
html_nodes(wiki_url, "#content")
## {xml_nodeset (1)}
## [1] <div id="content" class="mw-body" role="main">\n\t\t\t<a id="top"></a>\n\ ...
pd.sub <- html_table(html_nodes(wiki_url, "table")[[1]])
summary(pd.sub)
## Patient_ID Top_of_SN_Voxel_Intensity_Ratio
## Min. :3001 Min. :1.058
## 1st Qu.:3012 1st Qu.:1.334
## Median :3029 Median :1.485
## Mean :3204 Mean :1.532
## 3rd Qu.:3314 3rd Qu.:1.755
## Max. :3808 Max. :2.149
## Side_of_SN_Voxel_Intensity_Ratio Part_IA Part_IB
## Min. :0.9306 Min. :0.000 Min. : 0.000
## 1st Qu.:0.9958 1st Qu.:0.000 1st Qu.: 2.000
## Median :1.1110 Median :1.000 Median : 5.000
## Mean :1.1065 Mean :1.242 Mean : 4.909
## 3rd Qu.:1.1978 3rd Qu.:2.000 3rd Qu.: 7.000
## Max. :1.3811 Max. :6.000 Max. :13.000
## Part_II Part_III
## Min. : 0.000 Min. : 0.00
## 1st Qu.: 0.000 1st Qu.: 2.00
## Median : 2.000 Median :12.00
## Mean : 4.091 Mean :13.39
## 3rd Qu.: 6.000 3rd Qu.:20.00
## Max. :17.000 Max. :36.00
pd.sub <- pd.sub[, -1]
Then, we need to center pd.sub by subtracting the average of all column means from each element in each column. Next, we cast pd.sub as a matrix and compute its variance-covariance matrix, \(S\). Finally, we can calculate the corresponding eigenvalues and eigenvectors of \(S\).
mu <- apply(pd.sub, 2, mean)
mean(mu)
## [1] 4.379068
pd.center <- as.matrix(pd.sub)-mean(mu)
S <- cov(pd.center)
eigen(S)
## eigen() decomposition
## $values
## [1] 1.315073e+02 1.178340e+01 6.096920e+00 1.424351e+00 6.094592e-02
## [6] 8.035403e-03
##
## $vectors
## [,1] [,2] [,3] [,4] [,5]
## [1,] -0.007460885 -0.0182022093 0.016893318 0.02071859 0.97198980
## [2,] -0.005800877 0.0006155246 0.004186177 0.01552971 0.23234862
## [3,] 0.080839361 -0.0600389904 -0.027351225 0.99421646 -0.02352324
## [4,] 0.229718933 -0.2817718053 -0.929463536 -0.06088782 0.01466136
## [5,] 0.282109618 -0.8926329596 0.344508308 -0.06772403 -0.01764367
## [6,] 0.927911126 0.3462292153 0.127908417 -0.05068855 0.01305167
## [,6]
## [1,] -0.232667561
## [2,] 0.972482080
## [3,] -0.009618592
## [4,] 0.003019008
## [5,] 0.006061772
## [6,] 0.002456374
The next step is to calculate the PCs using the prcomp() function in R. Note that we will use the raw (uncentered) version of the data and specify the center=TRUE option to ensure the column means are zero. We save the model information into pca1, where pca1$rotation provides the loadings for each PC.
pca1 <- prcomp(as.matrix(pd.sub), center = T)
summary(pca1)
## Importance of components:
## PC1 PC2 PC3 PC4 PC5 PC6
## Standard deviation 11.4677 3.4327 2.46919 1.19346 0.2469 0.08964
## Proportion of Variance 0.8716 0.0781 0.04041 0.00944 0.0004 0.00005
## Cumulative Proportion 0.8716 0.9497 0.99010 0.99954 1.0000 1.00000
pca1$rotation
## PC1 PC2 PC3
## Top_of_SN_Voxel_Intensity_Ratio 0.007460885 -0.0182022093 0.016893318
## Side_of_SN_Voxel_Intensity_Ratio 0.005800877 0.0006155246 0.004186177
## Part_IA -0.080839361 -0.0600389904 -0.027351225
## Part_IB -0.229718933 -0.2817718053 -0.929463536
## Part_II -0.282109618 -0.8926329596 0.344508308
## Part_III -0.927911126 0.3462292153 0.127908417
## PC4 PC5 PC6
## Top_of_SN_Voxel_Intensity_Ratio 0.02071859 -0.97198980 -0.232667561
## Side_of_SN_Voxel_Intensity_Ratio 0.01552971 -0.23234862 0.972482080
## Part_IA 0.99421646 0.02352324 -0.009618592
## Part_IB -0.06088782 -0.01466136 0.003019008
## Part_II -0.06772403 0.01764367 0.006061772
## Part_III -0.05068855 -0.01305167 0.002456374
We notice that the loadings coincide with the eigenvectors of \(S\) up to sign, i.e., some columns are multiplied by -1. These loadings represent vectors in 6D space (we have 6 columns in the original data). A scale factor of -1 simply reverses the direction of the eigenvector and does not change the component it defines. We can also load the factoextra package and compute the eigenvalues of each PC.
# install.packages("factoextra")
library("factoextra")
eigen <- get_eigenvalue(pca1)
eigen
## eigenvalue variance.percent cumulative.variance.percent
## Dim.1 1.315073e+02 87.159638589 87.15964
## Dim.2 1.178340e+01 7.809737384 94.96938
## Dim.3 6.096920e+00 4.040881920 99.01026
## Dim.4 1.424351e+00 0.944023059 99.95428
## Dim.5 6.094592e-02 0.040393390 99.99467
## Dim.6 8.035403e-03 0.005325659 100.00000
The eigenvalues correspond to the amount of the variation explained by each principal component (PC), which is the same as the eigenvalues of the \(S\) matrix.
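As a quick numerical check (a sketch added here), we can confirm that the prcomp() results agree with the eigen-decomposition of \(S\): the loadings match the eigenvectors up to column-wise sign flips, and the PC variances match the eigenvalues:
max(abs(abs(pca1$rotation) - abs(eigen(S)$vectors))) # ~ 0 (loadings = eigenvectors up to sign)
round(pca1$sdev^2 - eigen(S)$values, 6)              # ~ 0 (PC variances = eigenvalues of S)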
To see detailed information about the variance explained by each PC relative to the corresponding PC loadings, examine the scree plot and the interactive loading plots below.
In the 3D loadings interactive plot below, you may need to zoom in to see the smaller projections (original features that have lower impact after the linear PC rotation).
# plot(pca1)
# library(graphics)
# biplot(pca1, choices = 1:2, scale = 1, pc.biplot = F)
plot_ly(x = c(1:length(pca1$sdev)), y = pca1$sdev*pca1$sdev, name = "Scree", type = "bar") %>%
layout(title="Scree Plot", xaxis = list(title="PC's"), yaxis = list(title="Variances (SD^2)"))
# Scores
scores <- pca1$x

# Loadings
loadings <- pca1$rotation

# Visualization scale factor for loadings
scaleLoad <- 10

p <- plot_ly() %>%
  add_trace(x=scores[,1], y=scores[,2], z=scores[,3], type="scatter3d", mode="markers", name="",
            marker = list(color=scores[,2], colorscale = c("#FFE1A1", "#683531"), opacity = 0.7))

for (k in 1:ncol(loadings)) {
  x <- c(0, loadings[6, k])*scaleLoad # Project PCAs only on the last 3 original data dimensions (6,5,4)
  y <- c(0, loadings[5, k])*scaleLoad
  z <- c(0, loadings[4, k])*scaleLoad
  p <- p %>% add_trace(x=x, y=y, z=z, type="scatter3d", mode="lines",
                       name=paste0("Loading PC ", k, " ", colnames(pd.sub)[k]), line=list(width=8), opacity=1)
# %>%
# add_annotations( x = 0, y = 0, z = 0,
# xref = "x", yref = "y", zref = "z",
# # axref = "x", ayref = "y", azref = "z",
# text = "", showarrow = T,
# ax = c(0, loadings[k,1])*scaleLoad, ay = c(0, loadings[k,2])*scaleLoad, az = c(0,
# loadings[k,3])*scaleLoad)
}
p <- p %>%
  layout(legend = list(orientation = 'h'), title="3D Projection of 6D Data along First 3 PCs",
scene = list ( xaxis = list(title = rownames(loadings)[6]),
yaxis = list(title = rownames(loadings)[5]),
zaxis = list(title = rownames(loadings)[4])))
p
# scene = list(
# dragmode = "turntable",
# annotations = list(
# list(showarrow = T, x = c(0, loadings[1,1])*scaleLoad, y = c(0, loadings[1,2])*scaleLoad,
# z = c(0, loadings[1,3])*scaleLoad, text = "Point 1", xanchor = "left", xshift = 2, opacity=0.7),
# list(showarrow = T, x = c(0, loadings[2,1])*scaleLoad, y = c(0, loadings[2,2])*scaleLoad,
# z = c(0, loadings[2,3])*scaleLoad, text="Point 2", textangle=0, ax = 0, ay = -1, font = list(
# color = "black", size = 12), arrowcolor = "black", arrowsize = 3, arrowwidth = 1, arrowhead = 1),
# list(showarrow = T, x = c(0, loadings[3,1])*scaleLoad, y = c(0, loadings[3,2])*scaleLoad,
# z = c(0, loadings[3,3])*scaleLoad, text = "Point 3", arrowhead = 1,
# xanchor = "left", yanchor = "bottom")
# )))
# library("factoextra")
# # Data for the supplementary qualitative variables
# qualit_vars <- as.factor(pd.sub$Part_IA)
# head(qualit_vars)
# # for plots of individuals
# # fviz_pca_ind(pca1, habillage = qualit_vars, addEllipses = TRUE, ellipse.level = 0.68) +
# # theme_minimal()
# # for Biplot of individuals and variables
# fviz_pca_biplot(pca1, axes = c(1, 2), geom = c("point", "text"),
# col.ind = "black", col.var = "steelblue", label = "all",
# invisible = "none", repel = T, habillage = qualit_vars,
# palette = NULL, addEllipses = TRUE, title = "PCA - Biplot")
# A slightly more representative 3D plot of the original data and the new PC's
a <- as.matrix(pd.sub[, c(4:6)])

p <- plot_ly() %>%
  add_trace(x=a[,1], y=a[,2], z=a[,3], type="scatter3d", mode="markers", name="",
            marker = list(color=a[,2], colorscale = c("#FFE1A1", "#683531"), opacity = 0.7))

for (k in 1:ncol(loadings)) {
  x <- c(0, -loadings[6, k])*scaleLoad # Project PCAs only on the last 3 original data dimensions (6,5,4)
  y <- c(0, -loadings[5, k])*scaleLoad
  z <- c(0, -loadings[4, k])*scaleLoad
  p <- p %>% add_trace(x=x, y=y, z=z, type="scatter3d", mode="lines",
                       name=paste0("Loading PC ", k, " ", colnames(pd.sub)[k]), line=list(width=8), opacity=1)
}

p <- p %>%
  layout(legend = list(orientation = 'h'), title="Alternative 3D Projection of 3D Data along with First 3 PCs",
scene = list ( xaxis = list(title = rownames(loadings)[6]),
yaxis = list(title = rownames(loadings)[5]),
zaxis = list(title = rownames(loadings)[4])))
p
The scree-plot has a clear “elbow” point at the second PC, suggesting that the first two PCs explain about 95% of the variation in the original dataset. Thus, we say we can use the first 2 PCs to represent the data. In this case, the dimension of the data is substantially reduced.
The dynamic 3D plot_ly graph uses PC1, PC2, and PC3 as the coordinate axes to represent the new variables, and the lines radiating from the origin show the loadings on the original features. This triplot helps us visualize how the loadings are used to rearrange the structure of the data.
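For instance, the reduced 2D representation of the 33 cases is just the first two columns of the PC score matrix (a small sketch using the pca1 object computed above):
pd.sub.2D <- pca1$x[, 1:2] # scores on the first two PCs
dim(pd.sub.2D)             # 33 x 2
head(pd.sub.2D)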
Next, let’s try to obtain a bootstrap test for the confidence interval of the explained variance.
set.seed(12)
num_boot = 1000
bootstrap_it = function(i) {
  data_resample = pd.sub[sample(1:nrow(pd.sub), nrow(pd.sub), replace=TRUE), ]
  p_resample = princomp(data_resample, cor = T)
  return(sum(p_resample$sdev[1:3]^2)/sum(p_resample$sdev^2))
}

pco = data.frame(per=sapply(1:num_boot, bootstrap_it))
quantile(pco$per, probs = c(0.025,0.975)) # specify 95-th % Confidence Interval
## 2.5% 97.5%
## 0.8124611 0.8985318
corpp = sum(pca1$sdev[1:3]^2)/sum(pca1$sdev^2)
# require(ggplot2)
# plot = ggplot(pco, aes(x=pco$per)) +
# geom_histogram() + geom_vline(xintercept=corpp, color='yellow')+
# labs(title = "Percent Var Explained by the first 3 PCs") +
# theme(plot.title = element_text(hjust = 0.5))+
# labs(x='perc of var')
# show(plot)
plot_ly(x = pco$per, type = "histogram", name = "Data Histogram") %>%
layout(title='Histogram of a Bootstrap Simulation <br /> Percent of Data Variability Captured by first 3 PCs',
xaxis = list(title = "Percent of Variability"), yaxis = list(title = "Frequency Count"), bargap=0.1)
Suppose we want to fit a linear model Top_of_SN_Voxel_Intensity_Ratio ~ Side_of_SN_Voxel_Intensity_Ratio + Part_IA. We can use plot_ly to show a 3D scatterplot together with the fitted linear model (drawn as a line in 3D).
library(scatterplot3d)
#Fit linear model
lm.fit <- lm(Top_of_SN_Voxel_Intensity_Ratio ~ Side_of_SN_Voxel_Intensity_Ratio + Part_IA, data = pd.sub)
# Get the ranges of the variable.names
summary(pd.sub$Side_of_SN_Voxel_Intensity_Ratio)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.9306 0.9958 1.1110 1.1065 1.1978 1.3811
summary(pd.sub$Part_IA)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.000 0.000 1.000 1.242 2.000 6.000
summary(pd.sub$Top_of_SN_Voxel_Intensity_Ratio)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 1.058 1.334 1.485 1.532 1.755 2.149
# #plot results
# myPlot <- scatterplot3d(pd.sub$Side_of_SN_Voxel_Intensity_Ratio, pd.sub$Part_IA,
# pd.sub$Top_of_SN_Voxel_Intensity_Ratio)
# # Plot the linear model (line in 3D)
# myCoef <- lm.fit$coefficients
# plotX <- seq(0.93, 1.4,length.out = 100)
# plotY <- seq(0,6,length.out = 100)
# plotZ <- myCoef[1] + myCoef[2]*plotX + myCoef[3]*plotY # linear model
# #Add the linear model to the 3D scatterplot
# myPlot$points3d(plotX,plotY,plotZ, type = "l", lwd=2, col = "red")
cf = lm.fit$coefficients
pltx = seq(summary(pd.sub$Side_of_SN_Voxel_Intensity_Ratio)[1],
           summary(pd.sub$Side_of_SN_Voxel_Intensity_Ratio)[6],
           length.out = length(pd.sub$Side_of_SN_Voxel_Intensity_Ratio))
plty = seq(summary(pd.sub$Part_IA)[1], summary(pd.sub$Part_IA)[6], length.out = length(pd.sub$Part_IA))
pltz = cf[1] + cf[2]*pltx + cf[3]*plty
# Plot Scatter and add the LM line to the plot
plot_ly() %>%
add_trace(x = ~pltx, y = ~plty, z = ~pltz, type="scatter3d", mode="lines",
line = list(color = "red", width = 4),
name="lm(Top_of_SN_Voxel_Intensity_Ratio ~ Side_of_SN_Voxel_Intensity_Ratio + Part_IA)") %>%
add_markers(x = ~pd.sub$Side_of_SN_Voxel_Intensity_Ratio, y = ~pd.sub$Part_IA,
z = ~pd.sub$Top_of_SN_Voxel_Intensity_Ratio, color = ~pd.sub$Part_II, mode="markers") %>%
layout(title="lm(Top_of_SN_Voxel_Intensity_Ratio ~ Side_of_SN_Voxel_Intensity_Ratio + Part_IA)",
legend=list(orientation = 'h'), showlegend = F,
scene = list(xaxis = list(title = 'Side_of_SN_Voxel_Intensity_Ratio'),
yaxis = list(title = 'Part_IA'),
zaxis = list(title = 'Top_of_SN_Voxel_Intensity_Ratio'))) %>%
hide_colorbar()
We can also plot in 3D a bivariate 2D plane model (e.g., lm.fit or pca1) for the 3D scatter (Top_of_SN_Voxel_Intensity_Ratio, Side_of_SN_Voxel_Intensity_Ratio, Part_IA). Below is an example using the linear model.
myPlot <- scatterplot3d(pd.sub$Side_of_SN_Voxel_Intensity_Ratio, pd.sub$Part_IA, pd.sub$Top_of_SN_Voxel_Intensity_Ratio)

# Static Plot
myPlot$plane3d(lm.fit, lty.box = "solid")
# planes3d(a, b, c, d, alpha = 0.5)
# planes3d draws planes using the parametrization a*x + b*y + c*z + d = 0.
# Multiple planes may be specified by giving multiple values for the normal
# vector (a, b, c) and the offset parameter d
Next are examples of plotting the 3D scatter along with the 2D PCA model using either rgl
or plot_ly
.
pca1 <- prcomp(as.matrix(cbind(pd.sub$Side_of_SN_Voxel_Intensity_Ratio, pd.sub$Part_IA, pd.sub$Top_of_SN_Voxel_Intensity_Ratio)), center = T); summary(pca1)
## Importance of components:
## PC1 PC2 PC3
## Standard deviation 1.5218 0.27556 0.09958
## Proportion of Variance 0.9643 0.03162 0.00413
## Cumulative Proportion 0.9643 0.99587 1.00000
# Given two vectors PCA1 and PCA2, the cross product V = PCA1 x PCA2
# is orthogonal to both A and to B, and a normal vector to the
# plane containing PCA1 and PCA2
# If PCA1 = (a,b,c) and PCA2 = (d, e, f), then the cross product is
# PCA1 x PCA2 = (bf - ce, cd - af, ae - bd)
# PCA1 = pca1$rotation[,1] and PCAS2=pca1$rotation[,2]
# https://en.wikipedia.org/wiki/Cross_product#Names
#normVec = c(pca1$rotation[,1][2]*pca1$rotation[,2][3]-
# pca1$rotation[,1][3]*pca1$rotation[,2][2],
# pca1$rotation[,1][3]*pca1$rotation[,2][1]-
# pca1$rotation[,1][1]*pca1$rotation[,2][3],
# pca1$rotation[,1][1]*pca1$rotation[,2][2]-
# pca1$rotation[,1][2]*pca1$rotation[,2][1]
# )
normVec = c(pca1$rotation[2,1]*pca1$rotation[3,2] -
              pca1$rotation[3,1]*pca1$rotation[2,2],
            pca1$rotation[3,1]*pca1$rotation[1,2] -
              pca1$rotation[1,1]*pca1$rotation[3,2],
            pca1$rotation[1,1]*pca1$rotation[2,2] -
              pca1$rotation[2,1]*pca1$rotation[1,2]
)
# Interactive RGL 3D plot with PCA Plane
library(rgl)
# Compute the 3D point representing the gravitational balance
dMean <- apply(cbind(pd.sub$Side_of_SN_Voxel_Intensity_Ratio, pd.sub$Top_of_SN_Voxel_Intensity_Ratio, pd.sub$Part_IA), 2, mean)
# then the offset plane parameter is (d):
d <- as.numeric((-1)*normVec %*% dMean) # force the plane to go through the mean
# Plot the PCA Plane
plot3d(pd.sub$Side_of_SN_Voxel_Intensity_Ratio, pd.sub$Part_IA, pd.sub$Top_of_SN_Voxel_Intensity_Ratio, type = "s", col = "red", size = 1)
planes3d(normVec[1], normVec[2], normVec[3], d, alpha = 0.5)
Another alternative is to use plot_ly
for the interactive 3D visualization. First, we demonstrate displaying the lm()
derived plane modeling superimposed on the 3D scatterplot (using pd.sub$Side_of_SN_Voxel_Intensity_Ratio
, pd.sub$Top_of_SN_Voxel_Intensity_Ratio
, and pd.sub$Part_IA
).
# Define the 3D features
x <- pd.sub$Side_of_SN_Voxel_Intensity_Ratio
y <- pd.sub$Top_of_SN_Voxel_Intensity_Ratio
z <- pd.sub$Part_IA
myDF <- data.frame(x, y, z)

### Fit a (bivariate-predictor) linear regression model
lm.fit <- lm(z ~ x+y)
coef.lm.fit <- coef(lm.fit)

### Reparameterize the 2D (x,y) grid, and define the corresponding model values z on the grid
x.seq <- seq(min(x),max(x),length.out=100)
y.seq <- seq(min(y),max(y),length.out=100)
z.seq <- function(x,y) coef.lm.fit[1]+coef.lm.fit[2]*x+coef.lm.fit[3]*y
# define the values of z = z(x.seq, y.seq), as a Matrix of dimension c(dim(x.seq), dim(y.seq))
z <- t(outer(x.seq, y.seq, z.seq))
# First draw the 2D plane embedded in 3D, and then add points with "add_trace"
# library(plotly)
# myPlotly <- plot_ly(x=~x.seq, y=~y.seq, z=~z,
# colors = c("blue", "red"),type="surface", opacity=0.7) %>%
#
# add_trace(data=myDF, x=x, y=y, z=pd.sub$Part_IA, mode="markers",
# type="scatter3d", marker = list(color="green", opacity=0.9,
# symbol=105)) %>%
#
# layout(scene = list(
# aspectmode = "manual", aspectratio = list(x=1, y=1, z=1),
# xaxis = list(title = "Side_of_SN_Voxel_Intensity_Ratio"),
# yaxis = list(title = "Top_of_SN_Voxel_Intensity_Ratio"),
# zaxis = list(title = "Part_IA"))
# )
# # print(myPlotly)
# myPlotly
# Setup Axis
axis_x <- seq(min(x), max(x), length.out=100)
axis_y <- seq(min(y), max(y), length.out=100)

# Sample points
library(reshape2)
lm_surface <- expand.grid(x = axis_x, y = axis_y, KEEP.OUT.ATTRS = F)
lm_surface$z <- predict.lm(lm.fit, newdata = lm_surface)
lm_surface <- acast(lm_surface, x ~ y, value.var = "z") # z ~ 0 + x + y
plot_ly(myDF, x = ~x, y = ~y, z = ~z,
text = paste0("Part_II: ", pd.sub$Part_II), type="scatter3d", mode="markers", color=pd.sub$Part_II) %>%
add_trace(x=~axis_x, y=~axis_y, z=~lm_surface, type="surface", color="gray", name="LM model", opacity=0.3) %>%
layout(title="3D Plane Regression (Part_IA ~ Side_of_SN_Voxel + Top_of_SN_Voxel); Color=Part II", showlegend = F,
scene = list ( xaxis = list(title = "Side_of_SN_Voxel_Intensity_Ratio"),
yaxis = list(title = "Top_of_SN_Voxel_Intensity_Ratio"),
zaxis = list(title = "Part_IA"))) %>%
hide_colorbar()
Second, we show the PCA-derived 2D plane model superimposed on the 3D scatterplot.
# define the original 3D coordinates
x <- pd.sub$Side_of_SN_Voxel_Intensity_Ratio
y <- pd.sub$Top_of_SN_Voxel_Intensity_Ratio
z <- pd.sub$Part_IA
myDF <- data.frame(x, y, z)

### Fit (compute) the 2D PCA space (dimensionality reduction)
pca1 <- prcomp(as.matrix(cbind(pd.sub$Side_of_SN_Voxel_Intensity_Ratio, pd.sub$Top_of_SN_Voxel_Intensity_Ratio, pd.sub$Part_IA)), center = T); summary(pca1)
## Importance of components:
## PC1 PC2 PC3
## Standard deviation 1.5218 0.27556 0.09958
## Proportion of Variance 0.9643 0.03162 0.00413
## Cumulative Proportion 0.9643 0.99587 1.00000
# Compute the Normal to the 2D PC plane
normVec = c(pca1$rotation[2,1]*pca1$rotation[3,2] -
              pca1$rotation[3,1]*pca1$rotation[2,2],
            pca1$rotation[3,1]*pca1$rotation[1,2] -
              pca1$rotation[1,1]*pca1$rotation[3,2],
            pca1$rotation[1,1]*pca1$rotation[2,2] -
              pca1$rotation[2,1]*pca1$rotation[1,2]
)

# Compute the 3D point of gravitational balance (Plane has to go through it)
dMean <- apply(cbind(pd.sub$Side_of_SN_Voxel_Intensity_Ratio, pd.sub$Top_of_SN_Voxel_Intensity_Ratio, pd.sub$Part_IA), 2, mean)

d <- as.numeric((-1)*normVec %*% dMean) # force the plane to go through the mean

# Reparameterize the 2D (x,y) grid, and define the corresponding model values z on the grid. Recall z=-(d + ax+by)/c, where normVec=(a,b,c)
x.seq <- seq(min(x),max(x),length.out=100)
y.seq <- seq(min(y),max(y),length.out=100)
z.seq <- function(x,y) -(d + normVec[1]*x + normVec[2]*y)/normVec[3]
# define the values of z = z(x.seq, y.seq), as a Matrix of dimension c(dim(x.seq), dim(y.seq))
# z <- t(outer(x.seq, y.seq, z.seq))/10; range(z) # we need to check this 10 correction, to ensure the range of z is appropriate!!!
z <- t(outer(x.seq, y.seq, z.seq)); range(z)
## [1] -31.19195 29.19934
# Draw the 2D plane embedded in 3D, and then add points with "add_trace"
# library(plotly)
myPlotly <- plot_ly(x=~x.seq, y=~y.seq, z=~z,
                    colors = c("blue", "red"), type="surface", opacity=0.7) %>%
add_trace(data=myDF, x=x, y=y, z=(pd.sub$Part_IA-mean(pd.sub$Part_IA))*10, mode="markers",
showlegend=F, type="scatter3d", marker=list(color="green", opacity=0.9, symbol=105)) %>%
layout(scene = list(
aspectmode = "manual", aspectratio = list(x=1, y=1, z=1),
xaxis = list(title = "Side_of_SN_Voxel_Intensity_Ratio"),
yaxis = list(title = "Top_of_SN_Voxel_Intensity_Ratio"),
zaxis = list(title = "Part_IA")) ) %>%
hide_colorbar()
# print(myPlotly)
myPlotly
As stated in the summary table, classical PCA assumes that the bivariate relations are linear in nature. Non-linear PCA is a generalization that allows us to incorporate nominal and ordinal variables, as well as to handle and identify nonlinear relationships between variables in the dataset; see Chapter 2 of the textbook Nonparametric inference in nonlinear principal components analysis: exploration and beyond.
Non-linear PCA assigns values to the categories representing the numeric variables that maximize the association (e.g., correlation) between the quantified variables (i.e., optimal scaling to quantify the variables according to their analysis levels). The Bioconductor pcaMethods package provides functionality for non-linear PCA.
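As a minimal, hedged sketch (assuming the Bioconductor pcaMethods package is installed; the method name "nlpca" refers to that package's neural-network based nonlinear PCA, and the pca()/scores() calls below are its generic interface), a 2-component nonlinear PCA of the scaled pd.sub data might look like:
# if (!requireNamespace("BiocManager", quietly = TRUE)) install.packages("BiocManager")
# BiocManager::install("pcaMethods")
library(pcaMethods)

nlpca.fit <- pca(scale(as.matrix(pd.sub)), method = "nlpca", nPcs = 2, maxSteps = 100)
summary(nlpca.fit)      # variance accounted for by the two nonlinear components
head(scores(nlpca.fit)) # the 2D nonlinear-PCA representation of the 33 cases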
ICA aims to find basis vectors representing independent components of the original data. For example, this may be achieved by maximizing the norm of the \(4^{th}\) order normalized kurtosis, which iteratively projects the signal on a new basis vector, computes the objective function (e.g., the norm of the kurtosis) of the result, slightly adjusts the basis vector (e.g., by gradient ascent), and recomputes the kurtosis again. At the end, this iterative process generates a basis vector corresponding to the highest (residual) kurtosis representing the next independent component.
The process of Independent Component Analysis is to maximize the statistical independence of the estimated components. Assume that each variable \(X_i\) is generated by a sum of n independent components. \[X_i=a_{i, 1}s_1+...+a_{i, n}s_n\] Here, \(X_i\) is generated by \(s_1, ..., s_n\) and \(a_{i,1}, ... a_{i,n}\) are the corresponding weights. Finally, we rewrite \(X\) as \[X=As,\] where \(X=(X_1, ..., X_n)^T\), \(A=(a_1, ..., a_n)^T\), \(a_i=(a_{i,1}, ..., a_{i,n})\) and \(s=(s_1, ..., s_n)^T\). Note that \(s\) is obtained by maximizing the independence of the components. This procedure is done by maximizing some independence objective function.
Unlike PCA, ICA does not assume that the components (\(s_i\)) are Gaussian; instead, it assumes that they are statistically independent and non-Gaussian (at most one source may be Gaussian).
We will utilize the fastICA function in R, whose main arguments are:
fastICA(X, n.comp, alg.typ, fun, row.norm, maxit, tol)
The independent components may be extracted simultaneously (alg.typ == "parallel") or one at a time (alg.typ == "deflation").
Now we can create a correlated matrix \(X\).
S <- matrix(runif(10000), 5000, 2)
S[1:10, ]
## [,1] [,2]
## [1,] 0.93838454 0.13831211
## [2,] 0.30706583 0.63010550
## [3,] 0.05744836 0.89403937
## [4,] 0.50544629 0.33889262
## [5,] 0.34662135 0.54377924
## [6,] 0.58039734 0.07880356
## [7,] 0.42619826 0.49714127
## [8,] 0.24873483 0.90388595
## [9,] 0.03529218 0.44250699
## [10,] 0.62405885 0.11611481
A <- matrix(c(1, 1, -1, 3), 2, 2, byrow = TRUE)
X <- S %*% A # In R, "*" and "%*%" indicate "scalar" and matrix multiplication, respectively!
cor(X)
## [,1] [,2]
## [1,] 1.0000000 -0.4500163
## [2,] -0.4500163 1.0000000
The correlation between the two variables is about -0.45. Now we can fit the ICA model.
# install.packages("fastICA")
library(fastICA)
a <- fastICA(X, 2, alg.typ = "parallel", fun = "logcosh", alpha = 1,
             method = "C", row.norm = FALSE, maxit = 200, tol = 0.0001)
To visualize the correlation of the original pre-processed data (\(X\)) and the independence of the corresponding ICA components \(S=fastICA(X)\$S\) we can draw the following composite scatter-plot.
# par(mfrow = c(1, 2))
# plot(a$X, main = "Pre-processed data")
# plot(a$S, main = "ICA components")
plot_ly() %>%
add_markers(x = a$X[ , 1], y =~a$X[ , 2], name="Pre-processed data",
marker = list(color="green", opacity=0.9, symbol=105)) %>%
add_markers(x = a$S[ , 1], y = a$S[ , 2], name="ICA components",
marker = list(color="blue", opacity=0.99, symbol=5)) %>%
layout(title='Scatter Plots of the Original (Pre-processed) Data and the corresponding ICA Transform',
xaxis = list(title="Twin 1 (standardized height)", scaleanchor = "y"),
yaxis = list(title="Twin 2 (standardized height)", scaleanchor = "x"),
legend = list(orientation = 'h'))
Finally we can confirm that the correlation of two components is nearly 0.
cor(a$S)
## [,1] [,2]
## [1,] 1.000000e+00 7.824658e-16
## [2,] 7.824658e-16 1.000000e+00
Let’s look at a more interesting example based on the pd.sub dataset. It has 6 variables and relatively high pairwise correlations. After fitting the ICA model, the resulting components are nearly independent.
cor(pd.sub)
## Top_of_SN_Voxel_Intensity_Ratio
## Top_of_SN_Voxel_Intensity_Ratio 1.00000000
## Side_of_SN_Voxel_Intensity_Ratio 0.54747225
## Part_IA -0.10144191
## Part_IB -0.26966299
## Part_II -0.04358545
## Part_III -0.33921790
## Side_of_SN_Voxel_Intensity_Ratio Part_IA
## Top_of_SN_Voxel_Intensity_Ratio 0.5474722 -0.1014419
## Side_of_SN_Voxel_Intensity_Ratio 1.0000000 -0.2157587
## Part_IA -0.2157587 1.0000000
## Part_IB -0.4438992 0.4913169
## Part_II -0.3766388 0.5037816
## Part_III -0.5226128 0.5845831
## Part_IB Part_II Part_III
## Top_of_SN_Voxel_Intensity_Ratio -0.2696630 -0.04358545 -0.3392179
## Side_of_SN_Voxel_Intensity_Ratio -0.4438992 -0.37663875 -0.5226128
## Part_IA 0.4913169 0.50378157 0.5845831
## Part_IB 1.0000000 0.57987562 0.6735584
## Part_II 0.5798756 1.00000000 0.6390134
## Part_III 0.6735584 0.63901337 1.0000000
a1 <- fastICA(pd.sub, 2, alg.typ = "parallel", fun = "logcosh", alpha = 1,
              method = "C", row.norm = FALSE, maxit = 200,
              tol = 0.0001)
par(mfrow = c(1, 2))
cor(a1$X)
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] 1.00000000 0.5474722 -0.1014419 -0.2696630 -0.04358545 -0.3392179
## [2,] 0.54747225 1.0000000 -0.2157587 -0.4438992 -0.37663875 -0.5226128
## [3,] -0.10144191 -0.2157587 1.0000000 0.4913169 0.50378157 0.5845831
## [4,] -0.26966299 -0.4438992 0.4913169 1.0000000 0.57987562 0.6735584
## [5,] -0.04358545 -0.3766388 0.5037816 0.5798756 1.00000000 0.6390134
## [6,] -0.33921790 -0.5226128 0.5845831 0.6735584 0.63901337 1.0000000
cor(a1$S)
## [,1] [,2]
## [1,] 1.000000e+00 -2.441106e-15
## [2,] -2.441106e-15 1.000000e+00
Notice that we only have 2 components instead of 6 variables. We successfully reduced the dimension of the data.
Similar to ICA and PCA, FA tries to find special principal components in data. As a generalization of PCA, FA requires that the number of components is smaller than the original number of variables (or columns of the data matrix). FA optimization relies on iterative perturbations with full-dimensional Gaussian noise and maximum-likelihood estimation where every observation in the data represents a sample point in a higher dimensional space. Whereas PCA assumes the noise is spherical, Factor Analysis allows the noise to have an arbitrary diagonal covariance matrix and estimates the subspace as well as the noise covariance matrix.
Under FA, the centered data can be expressed in the following form:
\[x_i-\mu_i=l_{i, 1}F_1+...+l_{i, k}F_k+\epsilon_i=LF+\epsilon_i,\]
where \(i\in {1, ..., p}\), \(j \in{1, ..., k}\), \(k<p\) and \(\epsilon_i\) are independently distributed error terms with zero mean and finite variance.
Let’s try FA in R using the function factanal(). According to the previous PCA, the first two principal components explain about 95% of the variance in the pd.sub dataset. This suggests that we might need 2 factors in FA. We can double-check that by examining the scree plot.
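The nScree report shown below is produced by the nFactors package; a sketch of the call that generates this type of summary (assuming nFactors is installed) is:
# install.packages("nFactors")
library(nFactors)
ev <- eigen(cor(pd.sub)) # eigenvalues of the correlation matrix
ap <- parallel(subject = nrow(pd.sub), var = ncol(pd.sub), rep = 100, cent = 0.05)
nS <- nScree(x = ev$values, aparallel = ap$eigen$qevpea)
summary(nS)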
## Report For a nScree Class
##
## Details: components
##
## Eigenvalues Prop Cumu Par.Analysis Pred.eig OC Acc.factor AF
## 1 3 1 1 1 1 NA (< AF)
## 2 1 0 1 1 1 (< OC) 1
## 3 1 0 1 1 0 1
## 4 0 0 1 1 0 0
## 5 0 0 1 1 NA 0
## 6 0 0 1 0 NA NA
##
##
## Number of factors retained by index
##
## noc naf nparallel nkaiser
## 1 2 1 2 2
Note that 3 out of the 4 Cattell scree test rules suggest we should use 2 factors. Thus, in the function factanal()
we can specify factors=2
. In addition, we can use varimax
rotation of the factor axes maximizing the variance of the squared loadings of factors (columns) on the original variables (rows), which effectively differentiates the original variables by the extracted factors. Oblique promax
and Procrustes rotation
(projecting the loadings to a target matrix with a simple structure) are two alternative and commonly used matrix rotations that may be specified.
fit <- factanal(pd.sub, factors=2, rotation="varimax")
# fit <- factanal(pd.sub, factors=2, rotation="promax") # the most popular oblique rotation, fitting a simple structure
fit
##
## Call:
## factanal(x = pd.sub, factors = 2, rotation = "varimax")
##
## Uniquenesses:
## Top_of_SN_Voxel_Intensity_Ratio Side_of_SN_Voxel_Intensity_Ratio
## 0.018 0.534
## Part_IA Part_IB
## 0.571 0.410
## Part_II Part_III
## 0.392 0.218
##
## Loadings:
## Factor1 Factor2
## Top_of_SN_Voxel_Intensity_Ratio 0.991
## Side_of_SN_Voxel_Intensity_Ratio -0.417 0.540
## Part_IA 0.650
## Part_IB 0.726 -0.251
## Part_II 0.779
## Part_III 0.825 -0.318
##
## Factor1 Factor2
## SS loadings 2.412 1.445
## Proportion Var 0.402 0.241
## Cumulative Var 0.402 0.643
##
## Test of the hypothesis that 2 factors are sufficient.
## The chi square statistic is 1.35 on 4 degrees of freedom.
## The p-value is 0.854
Here the p-value 0.854 is very large, suggesting that we failed to reject the null-hypothesis that 2 factors are sufficient. We can also visualize the loadings for all the variables.
load <- fit$loadings
# plot(load, type="n") # set up plot
# text(load, labels=colnames(pd.sub), cex=.7) # add variable names

df <- as.data.frame(load[])
Features <- rownames(df)
X <- df$Factor1
Y <- df$Factor2
df1 <- data.frame(Features, X, Y)
cols <- palette(rainbow(6)) # as.numeric(as.factor(Features))
cols <- cols[2:7] # this is necessary as cols has 8 rows (not 6, as df1 does!)
plot_ly(df1, x = ~X, y = ~Y, text = ~Features, color = cols) %>%
add_markers(marker = list(opacity=0.99, size=20, color=cols, symbol=~as.numeric(as.factor(Features)))) %>%
add_text(textfont = list(family= "Times", size= 20, color= cols), textposition="top right") %>%
layout(title = '2D FA', xaxis = list(title = 'Factor 1', zeroline = TRUE,range = c(-0.5, 1)),
yaxis = list(title = 'Factor 2'), showlegend = FALSE)
This plot displays factors 1 and 2 on the x-axis and y-axis, respectively.
SVD is a factorization of a real or complex matrix. If we have a data matrix \(X\) with \(n\) observations and \(p\) variables, it can be factorized into the following form: \[X=U D V^T,\] where \(U\) is an \(n \times p\) matrix with orthonormal columns (\(U^TU=I\)), \(D\) is a \(p \times p\) diagonal matrix, and \(V^T\) is the (conjugate) transpose of the \(p\times p\) unitary matrix \(V\). Thus, we have \(V^TV=I\).
SVD is closely linked to PCA (when the correlation matrix is used for the calculation): \(U\) contains the left singular vectors, \(D\) the singular values, the product U %*% D yields the PCA scores, and \(V\) holds the right singular vectors, i.e., the PCA loadings.
Using the pd.sub
dataset, we can compare the outputs from the svd()
function and the princomp()
function (another R function for PCA). Prior to the SVD, we need to scale
the data matrix.
# SVD output
df <- nrow(pd.sub)-1
zvars <- scale(pd.sub)
z.svd <- svd(zvars)
z.svd$d/sqrt(df)
## [1] 1.7878123 1.1053808 0.7550519 0.6475685 0.5688743 0.5184536
z.svd$v
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] 0.2555204 0.71258155 -0.37323594 0.10487773 -0.4773992 0.22073161
## [2,] 0.3855208 0.47213743 0.35665523 -0.43312945 0.5581867 0.04564469
## [3,] -0.3825033 0.37288211 0.70992668 0.31993403 -0.2379855 -0.22728693
## [4,] -0.4597352 0.09803466 -0.11166513 -0.79389290 -0.2915570 -0.22647775
## [5,] -0.4251107 0.34167997 -0.46424927 0.26165346 0.5341197 -0.36505061
## [6,] -0.4976933 0.06258370 0.03872473 -0.01769966 0.1832789 0.84438182
# PCA output
pca2 <- princomp(pd.sub, cor=T)
pca2
## Call:
## princomp(x = pd.sub, cor = T)
##
## Standard deviations:
## Comp.1 Comp.2 Comp.3 Comp.4 Comp.5 Comp.6
## 1.7878123 1.1053808 0.7550519 0.6475685 0.5688743 0.5184536
##
## 6 variables and 33 observations.
loadings(pca2)
##
## Loadings:
## Comp.1 Comp.2 Comp.3 Comp.4 Comp.5 Comp.6
## Top_of_SN_Voxel_Intensity_Ratio 0.256 0.713 0.373 0.105 0.477 0.221
## Side_of_SN_Voxel_Intensity_Ratio 0.386 0.472 -0.357 -0.433 -0.558
## Part_IA -0.383 0.373 -0.710 0.320 0.238 -0.227
## Part_IB -0.460 0.112 -0.794 0.292 -0.226
## Part_II -0.425 0.342 0.464 0.262 -0.534 -0.365
## Part_III -0.498 -0.183 0.844
##
## Comp.1 Comp.2 Comp.3 Comp.4 Comp.5 Comp.6
## SS loadings 1.000 1.000 1.000 1.000 1.000 1.000
## Proportion Var 0.167 0.167 0.167 0.167 0.167 0.167
## Cumulative Var 0.167 0.333 0.500 0.667 0.833 1.000
When the correlation matrix is used for the calculation (cor=T), the \(V\) matrix of the SVD of the scaled data contains the corresponding PCA loadings (up to the sign of each column).
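As a quick sketch (using prcomp(), which is itself based on the SVD of the centered and scaled data), we can also verify that the product \(U D\) reproduces the PC scores up to the sign of each column:
pca3 <- prcomp(pd.sub, center = TRUE, scale. = TRUE) # PCA of the standardized data
svd.scores <- z.svd$u %*% diag(z.svd$d)              # scores reconstructed from the SVD factors
max(abs(abs(svd.scores) - abs(pca3$x)))              # ~ 0, identical up to column signs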
Intuitively, the SVD approach \(X= UD V^T\) represents a decomposition of the (centered!) data into 3 geometrical transformations: a rotation or reflection (\(U\)), a scaling (\(D\)), and a rotation or reflection (\(V\)). Here we assume that the data \(X\) stores samples/cases in rows and variables/features in columns. If these are reversed, then the interpretations of the \(U\) and \(V\) matrices reverse as well.
The t-SNE technique represents a recent machine learning strategy for nonlinear dimensionality reduction that is useful for embedding (e.g., scatter-plotting) high-dimensional data into lower-dimensional (1D, 2D, 3D) spaces. For each object (point in the high-dimensional space), the method models similar objects as nearby points and dissimilar objects as distant points in the embedding. The two steps in t-SNE include (1) construction of a probability distribution over pairs of the original high-dimensional objects, where similar objects have a high probability of being paired and, correspondingly, dissimilar objects have a small probability of being selected; and (2) defining a similar probability distribution over the points in the derived low-dimensional embedding and minimizing the Kullback-Leibler divergence between the high- and low-dimensional distributions with respect to the locations of the points in the embedding map. Either Euclidean or non-Euclidean distance measures between objects may be used as similarity metrics.
Suppose we have high dimensional data (\(N\)D): \(x_1, x_2,..., x_N\). In step 1, for each pair (\(x_i, x_j\)), t-SNE estimates the probabilities \(p_{i,j}\) that are proportional to their corresponding similarities, \(p_{j | i}\):
\[p_{j | i} = \frac{\exp\left (\frac{-||x_i - x_j||^2}{2\sigma_i^2} \right )}{\sum_{k \neq i} \exp\left (\frac{-||x_i - x_k||^2}{2\sigma_i^2} \right )}.\]
The similarity between \(x_j\) and \(x_i\) may be thought of as the conditional probability, \(p_{j | i}\). That is, assuming \(N\)D Gaussian distributions centered at each point \(x_i\), neighbors are selected based on a probability distribution (proportion of their probability density), which represents the chance that \(x_i\) may select \(x_j\) as its neighbor, \(p_{i,j} = \frac{p_{j | i} + p_{i |j}}{2N}\).
The perplexity (\(perp\)) of a discrete probability distribution, \(p\), is defined as an exponential function of the entropy, \(H(p)\), over all discrete events: \(perp(x)=2^{H(p)}=2^{-\sum _{x}p(x)\log_{2}p(x)}\). t-SNE performs a binary search for the value \(\sigma_i\) that produces a predefined value \(perp\). The simple interpretation of the perplexity at a data point \(x_i\), \(2^{H(p_i)}\), is as a smooth measure of the effective number of points in the \(x_i\) neighborhood. The performance of t-SNE may vary with the perplexity value, which is typically specified by the user, e.g., between \(5\leq perp\leq 50\).
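To make the perplexity definition concrete, here is a minimal base-R sketch (the helper function perplexity() is a hypothetical illustration, not part of any t-SNE package) that evaluates \(2^{H(p)}\) for a discrete distribution \(p\).

# Minimal sketch: perplexity 2^H(p) of a discrete probability vector p
perplexity <- function(p) {
  p <- p[p > 0]                          # convention: 0*log2(0) = 0
  2^(-sum(p * log2(p)))
}
perplexity(rep(1/8, 8))                  # uniform over 8 outcomes -> perplexity = 8
perplexity(c(0.97, 0.01, 0.01, 0.01))    # highly peaked -> perplexity close to 1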
Then, the precision (variance, \(\sigma_i\)) of the local Gaussian kernels may be chosen to ensure that the perplexity of the conditional distribution equals a specified perplexity. This allows adapting the kernel bandwidth to the sample data density – smaller \(\sigma_i\) values are fitted in denser areas of the sample data space, and correspondingly, larger \(\sigma_i\) are fitted in sparser areas. A particular value of \(\sigma_i\) yields a probability distribution, \(p_i\), over all of other data points, which has an increasing entropy as \(\sigma_i\) increases.
t-SNE learns a mapping \(f: \{x_1, x_2, ..., x_N\} \longrightarrow \{y_1, y_2, ..., y_N\}\), where \(x_i\in \mathbb{R}^N\) and \(y_i \in \mathbb{R}^d\) (\(N\gg d\)), such that the derived similarities \(q_{i,j}\) between pairs of embedded points \(y_i,y_j\) closely resemble the original similarities \(p_{i,j}\); the embedding similarities are defined by:
\[q_{i,j} = \frac{(1 + ||y_i - y_j||^2)^{-1}}{\sum_{k \neq i} (1 + ||y_i - y_k||^2)^{-1}}.\]
The t-distributed reference in t-SNE refers to the heavy-tailed Student's t distribution with one degree of freedom (\(t_{df=1}\)), which coincides with the Cauchy distribution, \(f(z)\propto\frac{1}{1+z^2}\). It is used to model and measure similarities between points that map close together in the embedded low-dimensional space, as well as dissimilarities between objects that map far apart in the embedded space.
The rationale for using the Student's t distribution for mapping the points is based on the fact that the volume of an \(N\)D ball of radius \(r\), \(B^N\), is proportional to \(r^N\). Specifically, \(V_N(r) = \frac{\pi^\frac{N}{2}}{\Gamma\left(\frac{N}{2} + 1\right)}r^N\), where \(\Gamma()\) is Euler's gamma function, an extension of the factorial function to non-integer arguments. For large \(N\), when we select uniformly random points inside \(B^N\), most points are expected to be close to the ball surface (boundary), \(S^{N-1}\), and few are expected near the center of \(B^N\), since half the volume of \(B^N\) is contained in the thin shell between the boundary and the concentric ball of radius \(r_1=\frac{1}{\sqrt[N]{2}}\times r \approx r\). You can try this with \(N=2\), \(\{x\in \mathbb{R}^2 |\ ||x||\leq r\}\), representing a disk in the 2D plane.
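The following minimal simulation sketch illustrates this concentration-of-volume effect: for points sampled uniformly in the unit \(N\)-ball, the radius \(R\) has CDF \(P(R\leq r)=r^N\), so about half of the points fall in the thin outer shell between \(r_1=(1/2)^{1/N}\) and \(1\).

# Minimal sketch: half of the uniformly sampled points in the unit N-ball lie in the thin outer shell
set.seed(1234)
N <- 100; n <- 10000
r <- runif(n)^(1/N)      # radii of uniform points in the unit N-ball (CDF r^N)
r1 <- (1/2)^(1/N); r1    # inner radius of the shell, ~0.993 for N=100
mean(r > r1)             # empirical fraction in the outer shell, ~0.5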
When reducing the dimensionality of a dataset, if we used a Gaussian distribution for the embedding into the lower-dimensional space, there would be a distortion of the distribution of the distances between neighboring objects. This is simply because the distribution of distances differs substantially between the original (high-dimensional) space and the map-transformed low-dimensional space. t-SNE tries to (approximately) preserve the distances in the two spaces to avoid imbalances that may lead to biases due to excessive attraction-repulsion forces. Using the Student t distribution with \(df=1\) (aka Cauchy distribution) for mapping the points preserves (to some extent) the distance similarity distribution, because of the heavier tails of \(t\) compared to the Gaussian distribution. For a given similarity between a pair of data points, the two corresponding map points will need to be much further apart in order for their similarity to match the data similarity.
A minimization process with respect to the objects \(y_i\) using gradient descent of a (non-symmetric) objective function, Kullback-Leibler divergence between the distributions \(Q\) and \(P\) , is used to determine the object locations \(y_i\) in the map, i.e.,
\[KL(P || Q) = \sum_{i \neq j} p_{i,j} \log \frac{p_{i,j}}{q_{i,j}}.\]
The minimization of the KL objective function by gradient descent may be analytically represented by:
\[\frac{\partial {KL(P||Q)}}{\partial {y_i}}= \sum_{j}{(p_{i,j}-q_{i,j})\, f(||y_i-y_j||)\, u_{i,j}},\] where \(f(z)=\frac{z}{1+z^2}\) and \(u_{i,j}\) is a unit vector from \(y_j\) to \(y_i\). This gradient represents the aggregate sum of all spring forces applied to the map point \(y_i\).
This optimization leads to an embedding mapping that “preserves” the object (data point) similarities of the original high-dimensional inputs into the lower dimensional space. Note that the data similarity matrix (\(p_{i,j}\)) is fixed, whereas its counterpart, the map similarity matrix (\(q_{i,j}\)) depends on the embedding map. Of course, we want these two distance matrices to be as close as possible, implying that similar data points in the original space yield similar map-points in the reduced dimension.
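The following toy sketch (simulated data, a fixed bandwidth \(\sigma\) instead of a perplexity-calibrated \(\sigma_i\), and no optimization) simply evaluates the quantities defined above, \(p_{j|i}\), the symmetrized \(p_{i,j}\), the Student-t map affinities \(q_{i,j}\), and the resulting \(KL(P||Q)\), for a random candidate embedding; the actual gradient-descent optimization is carried out by the Rtsne() call below.

# Minimal sketch: compute P, Q and KL(P||Q) for a toy configuration (no optimization)
set.seed(1234)
X <- matrix(rnorm(10*5), nrow=10)   # 10 points in a 5D input space
Y <- matrix(rnorm(10*2), nrow=10)   # 10 points in a 2D candidate embedding
N <- nrow(X); sigma <- 1            # fixed kernel bandwidth (Rtsne tunes sigma_i via the perplexity)
D2x <- as.matrix(dist(X))^2; D2y <- as.matrix(dist(Y))^2
P.cond <- exp(-D2x/(2*sigma^2)); diag(P.cond) <- 0
P.cond <- P.cond / rowSums(P.cond)               # conditional similarities p_{j|i}
P <- (P.cond + t(P.cond)) / (2*N)                # symmetrized p_{i,j}, sums to 1
Q <- 1/(1 + D2y); diag(Q) <- 0; Q <- Q/sum(Q)    # Student-t (Cauchy) map affinities, normalized over all pairs
sum(P[P > 0] * log(P[P > 0]/Q[P > 0]))           # KL(P||Q), the t-SNE objective for this embedding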
Later, in Chapter 10 and Chapter 22, we will present the Optical Character Recognition (OCR) and analysis of hand-written notes (unstructured text).
Below, we show a simple example of generating a 2D embedding of the hand-written digits dataset using t-SNE.
# install.packages("tsne"); library (tsne)
# install.packages("Rtsne")
library(Rtsne)
# Download the hand-written digits data
pathToZip <- tempfile()
download.file("https://www.socr.umich.edu/people/dinov/2017/Spring/DSPA_HS650/data/DigitRecognizer_TrainingData.zip", pathToZip)
train <- read.csv(unzip(pathToZip))
dim(train)
## [1] 42000 785
unlink(pathToZip)
# identify the label-nomenclature - digits 0, 1, 2, ..., 9 - and map to diff colors
colMap <- function(x){
  cols <- rainbow(length(x))[order(order(x))] # reindexing by ranking the observed values
  cols
}
# Note on "order(order())": set.seed(123); x <- sample(10)
# This experiment shows that order(order()) = rank()
# set.seed(12345); data <- sample(6); data
# order(data); order(order(data)); rank(data)
# Ordering acts as its own inverse and returns a sequence starting with the index of the
# smallest data value (1). Nested odd "order" applications yield the same vector outputs.
# Order outputs an index vector useful for sorting the original data vector.
# The location of the smallest value is in the first position of the order-output.
# The index of the second smallest data value is next, etc.
# The last order output item represents the index of the largest data value.
# Double-order application yields an indexing vector where the first element is the index
# of the smallest first-order-index, etc., which corresponds to the data vector's rank.
train.labels <- train$label
train$label <- as.factor(train$label)
train.labels.colors <- colMap(train.labels)
names(train.labels.colors) <- train$label # unique(train$label)
# May need to check and increase the RAM allocation
memory.limit()
## [1] 32643
memory.limit(50000)
## [1] 50000
# Remove the labels (column 1) and Scale the image intensities to [0; 1]
train <- data.matrix(train[, -1]); dim(train)
## [1] 42000 784
train <- t(train/255)
# Visualize some of the images
library("imager")
# first convert the CSV data (one row per image, 42,000 rows)
array_3D <- array(train[ , ], c(28, 28, 42000))
mat_2D <- matrix(array_3D[,,1], nrow = 28, ncol = 28)
plot(as.cimg(mat_2D))
# We can also use plot_ly to display the image as heatmap
plot_ly(z=~t(mat_2D[, ncol(mat_2D):1]), type="heatmap", showscale = FALSE) %>%
  layout(xaxis=list(title="X", scaleanchor="y"), yaxis=list(title="Y", scaleanchor="x"), legend=list(orientation='h'))
N <- 42000
img_3D <- as.cimg(array_3D[,,], 28, 28, N)
img_3D
# plot the k-th image (1<=k<=N)
k <- 5; plot(img_3D, k)
k <- 6; plot(img_3D, k)
k <- 7; plot(img_3D, k)

pretitle <- function(index) bquote(bold("Image: "~.(index)))
#layout(t(1:2))
op <- par(mfrow = c(2,2), oma = c(5,4,0,0) + 0.1, mar = c(0,0,1,1) + 0.1)
for (k in 1:4) {
plot(img_3D, k, xlim = c(0,28), ylim = c(28,0), axes=F, ann=T, main=pretitle(k))
}
# Run the t-SNE, tracking the execution time (artificially reducing the sample-size to get reasonable calculation time)
execTime_tSNE <- system.time(tsne_digits <- Rtsne(t(train)[1:10000 , ], dims = 2, perplexity=30, verbose=TRUE, max_iter = 500)); execTime_tSNE
## Performing PCA
## Read the 10000 x 50 data matrix successfully!
## OpenMP is working. 1 threads.
## Using no_dims = 2, perplexity = 30.000000, and theta = 0.500000
## Computing input similarities...
## Building tree...
## - point 10000 of 10000
## Done in 9.78 seconds (sparsity = 0.012259)!
## Learning embedding...
## Iteration 50: error is 97.574505 (50 iterations in 2.20 seconds)
## Iteration 100: error is 91.594093 (50 iterations in 3.20 seconds)
## Iteration 150: error is 86.734216 (50 iterations in 1.79 seconds)
## Iteration 200: error is 86.286529 (50 iterations in 2.13 seconds)
## Iteration 250: error is 86.211699 (50 iterations in 2.05 seconds)
## Iteration 300: error is 3.153065 (50 iterations in 1.72 seconds)
## Iteration 350: error is 2.747677 (50 iterations in 1.58 seconds)
## Iteration 400: error is 2.528965 (50 iterations in 1.61 seconds)
## Iteration 450: error is 2.385505 (50 iterations in 1.63 seconds)
## Iteration 500: error is 2.282080 (50 iterations in 1.58 seconds)
## Fitting performed in 19.51 seconds.
## user system elapsed
## 47.80 1.30 49.47
# Full dataset(42K * 1K) execution may take over 5-mins
# execTime_tSNE <- system.time(tsne_digits <- Rtsne(train[ , ], dims = 2, perplexity=30, verbose=TRUE, max_iter = 500)); execTime_tSNE
# Plot only first 1,000 cases (to avoid clutter)
# plot(tsne_digits$Y[1:1000, ], t='n', main="t-SNE") # don't plot the points to avoid clutter
# text(tsne_digits$Y[1:1000, ], labels=names(train.labels.colors)[1:1000], col=train.labels.colors[1:1000])
# 2D t-SNE Plot
df <- data.frame(tsne_digits$Y[1:1000, ], train.labels.colors[1:1000])
plot_ly(df, x = ~X1, y = ~X2, mode = 'text') %>%
add_text(text = names(train.labels.colors)[1:1000], textfont = list(color = df$train.labels.colors.1.1000.)) %>%
layout(title = "t-SNE 2D Embedding", xaxis = list(title = ""), yaxis = list(title = ""))
# 3D t_SNE plot
execTime_tSNE <- system.time(tsne_digits3D <- Rtsne(t(train)[1:10000 , ], dims = 3, perplexity=30, verbose=TRUE, max_iter = 500)); execTime_tSNE
## Performing PCA
## Read the 10000 x 50 data matrix successfully!
## OpenMP is working. 1 threads.
## Using no_dims = 3, perplexity = 30.000000, and theta = 0.500000
## Computing input similarities...
## Building tree...
## - point 10000 of 10000
## Done in 11.06 seconds (sparsity = 0.012259)!
## Learning embedding...
## Iteration 50: error is 97.574504 (50 iterations in 5.20 seconds)
## Iteration 100: error is 90.399745 (50 iterations in 8.78 seconds)
## Iteration 150: error is 85.955834 (50 iterations in 3.58 seconds)
## Iteration 200: error is 85.675999 (50 iterations in 3.46 seconds)
## Iteration 250: error is 85.625954 (50 iterations in 4.01 seconds)
## Iteration 300: error is 2.868188 (50 iterations in 3.61 seconds)
## Iteration 350: error is 2.483727 (50 iterations in 3.48 seconds)
## Iteration 400: error is 2.283443 (50 iterations in 3.56 seconds)
## Iteration 450: error is 2.154290 (50 iterations in 3.67 seconds)
## Iteration 500: error is 2.063024 (50 iterations in 3.55 seconds)
## Fitting performed in 42.91 seconds.
## user system elapsed
## 72.79 1.12 75.00
df3D <- data.frame(tsne_digits3D$Y[1:1000, ], train.labels.colors[1:1000])
plot_ly(df3D, x = ~df3D[, 1], y = ~df3D[, 2], z= ~df3D[, 3], mode = 'markers+text') %>%
add_text(text = names(train.labels.colors)[1:1000], textfont = list(color = df$train.labels.colors.1.1000.)) %>%
layout(title = "t-SNE 3D Embedding",
scene = list(xaxis = list(title=""), yaxis=list(title=""), zaxis=list(title="")))
# Classic plot all cases as solid discs with colors corresponding to each of the 10 numbers
# plot(tsne_digits$Y, main="t-SNE Clusters", col=train.labels.colors, pch = 19)
# legend("topright", unique(names(train.labels.colors)), fill=unique(train.labels.colors), bg='gray90', cex=0.5)
plot_ly(df3D, x = ~df3D[, 1], y = ~df3D[, 2], z= ~df3D[, 3], mode = 'markers+text') %>%
add_text(text = names(train.labels.colors)[1:1000], textfont = list(color = df$train.labels.colors.1.1000.)) %>%
layout(title = "t-SNE 3D Embedding",
scene = list(xaxis = list(title=""), yaxis=list(title=""), zaxis=list(title="")))
cols <- palette(rainbow(10))
plot_ly(df3D, x=~df3D[, 1], y=~df3D[, 2], z=~df3D[, 3], color=train.labels.colors[1:1000],
colors=cols, name=names(train.labels.colors)[1:1000]) %>%
add_markers() %>%
layout(scene=list(xaxis=list(title=''), yaxis=list(title=''), zaxis=list(title='')), showlegend=F) %>%
hide_colorbar()
The hands-on interactive SOCR t-SNE Dimensionality Reduction Activity provides an interactive demonstration of t-SNE utilizing TensorBoard and the UK Biobank data.
In 2018, McInnes and Healy proposed the Uniform Manifold Approximation and Projection (UMAP) technique for dimensionality reduction.
Similar to t-SNE, UMAP first constructs a high-dimensional graph representation of the initial data and then employs graph-layout algorithms to project the original high-dimensional data into a lower-dimensional space. The iterative process aims to preserve the graph structure as much as possible. The initial high-dimensional graph representation uses simplicial complexes, i.e., weighted graphs where the edge weights represent the likelihood that two graph points are connected (belong to the same neighborhood). For a given point, the UMAP connectedness metric computes its distance to other points based on the overlap of their respective neighborhoods.
The parameter controlling the neighborhood size (i.e., radius) determines the tradeoff between preserving within-cluster and between-cluster structure. Choosing too small a radius may lead to small and rather isolated clusters, whereas selecting too large a radius may lump all points into a single group. For local radius selection, UMAP preprocessing utilizes the distance between each point and its n-th nearest neighbor.
A “fuzzy” simplicial complex (graph) is constructed by iterative minimization of the connectivity likelihood function as the radius increases. Assuming that each point is connected to at least one of its closest neighbors, UMAP ensures that local and global graph structure is (somewhat protected and) preserved during the optimization process (e.g., based on stochastic gradient descent).
UMAP relies on local approximations of patches on the manifold to construct local fuzzy simplicial complex (topological) representations of the high dimensional data. For each low dimensional representation of the projection of the data, UMAP tries to generate an analogous equivalent simplicial complex representation. The iterative UMAP optimization process aims to preserve the topological layout in the low dimensional space by minimizing the cross-entropy between the high- and low-dimensional topological representations.
The DSPA Appendix on Shape includes examples of how to generate some of these geometric and topological primitives, including 0-, 1-, and 2-simplicial complexes.
The figure below shows the first few such primitives - a point, line, triangle, and tetrahedron.
library(plotly)
p <- plot_ly(type = 'mesh3d',
  # Define all (4) zero-cells (points or vertices)
# P_i(x,y,z), 0<=i<4
x = c(0, 1/sqrt(3), -1/(2*sqrt(3)), -1/(2*sqrt(3))),
y = c(sqrt(2/3), 0, 0, 0),
z = c(0, 0, -1/2, 1/2),
# Next define all triples (i,j,k) of vertices that form a 2-cell face.
# All Tetrahedra have 4 faces
i = c(0, 0, 0, 1),
j = c(1, 2, 3, 2),
k = c(2, 3, 1, 3),
# Define the appearance of the 4 faces (2-cells)
facecolor = toRGB(viridisLite::viridis(4)),
showscale = TRUE,
opacity=0.8
)

traceEdge <- list(
  x1 = c(-1/(2*sqrt(3)), -1/(2*sqrt(3))),
y1 = c(0, 0),
z1 = c(-1/2, 1/2),
line = list(
color = "rgb(1, 1, 1)", #dark color for line traces
width = 20 # width of line
), mode = "lines",
opacity = 1,
type = "scatter3d"
)
# emphasize one of the faces by stressing the three 1-cells (edges)
p <- add_trace(p, x=~traceEdge$x1, y=~traceEdge$y1, z=~traceEdge$z1,
               type="scatter3d", mode=traceEdge$mode,
opacity=traceEdge$opacity,
line = list(color=traceEdge$line$color,
width=traceEdge$line$width), showlegend=F)
# add one 0-cell (point)
p <- add_trace(p, x=-1/(2*sqrt(3)), y=0, z=1/2, type="scatter3d",
               mode="markers", marker = list(size =16,
               color="blue", opacity = 1.0)) %>%
layout(title = "Simplicial Complexes (0,1,2,3) Cells", showlegend = FALSE,
scene = list(
xaxis = list(title = "X"),
yaxis = list(title = "Y"),
zaxis = list(title = "Z")
))
p
For a given finite set of data observations, we are trying to represent the underlying topological space the data is sampled from. This topological formulation can be approximated by patches of open covers modeled by simplicial complexes. Locally, the data may be assumed to lie in a metric space where distances between data points can be measured. This leads to neighborhood approximations that can locally be represented as \(nD\) balls centered at each data point. This covering may not represent a complete open cover, since all data samples are finite, and small, biased, or incomplete samples may lead to poor approximations of the topology of the problem state-space. Zero-cells (0-simplexes, points) are fit for each observed data point, 1-cells (lines) are fit for each pair of data points in the same neighborhood, and so on. More information about simplex trees is available in the article The Simplex Tree: An Efficient Data Structure for General Simplicial Complexes.
Example of simplicial tree decomposition of 10 data points using the R
package simplextree.
# install.packages("simplextree")
library(simplextree)
simplicialTreeExample <- simplex_tree(list(1:3, 2:5, 5:8, 7:8, c(7,9,10)))
plot(simplicialTreeExample, color_pal = rainbow(simplicialTreeExample$dimension + 1))
# Generate some data
n <- 30  # number of points to generate

# generate space of parameter
theta = seq(length.out=n, from=0, to=pi)

a <- 0.0; b <- 0.0; r <- 5.0

x = a + r*cos(theta) + rnorm(n, 0, 0.4)
y = b + r*sin(theta) + rnorm(n, 0, 0.4)

x1 = a + r*cos(theta)
y1 = b + r*sin(theta)

df <- as.data.frame(cbind(x,y))
# code to plot the circle for visualization
# plot(x,y)
plot_ly() %>%
add_trace(x=x, y=y, type="scatter", name = 'Simulated Data Sample',
mode = 'markers', marker = list(opacity=0.99, size=20, symbol=1)) %>%
add_trace(x=x1, y = y1, type="scatter",
name = '(Quadratic) Model',
mode = 'lines') %>%
  layout(title='Sample Data from Quadratic Model',
         xaxis = list(title="X", scaleanchor = "y"),
         yaxis = list(title="Y", scaleanchor = "x"),
legend = list(orientation = 'h'))
# install.packages("reticulate")
library(reticulate)
# specify the path of the Python version that you want to use
= "C:/Users/Dinov/Anaconda3/" # manual
py_path # py_path = Sys.which("python3") # automated
use_python(py_path, required = T)
Sys.setenv(RETICULATE_PYTHON = "C:/Users/Dinov/Anaconda3/")
Go into python to generate the simplicial complex for the data.
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import os
os.environ['QT_QPA_PLATFORM_PLUGIN_PATH'] = 'C:/Users/Dinov/Anaconda3/Library/plugins/platforms'
print(r.df[1:6])
#This set of functions allows for building a Vietoris-Rips simplicial complex from point data
## x y
## 1 4.871794 0.860027
## 2 4.410777 1.155552
## 3 3.605075 1.341127
## 4 4.781317 2.019405
## 5 4.247768 2.503801
def euclidianDist(a, b):
    return np.linalg.norm(a - b) # Euclidean distance metric
# Build neighborhood graph
def buildGraph(raw_data, epsilon = 3.1, metric=euclidianDist): # raw_data is a numpy array
    nodes = [x for x in range(raw_data.shape[0])] # initialize node set, reference indices from original data array
    edges = []   # initialize empty edge array
    weights = [] # initialize weight array, stores the weight (which in this case is the distance) for each edge
    for i in range(raw_data.shape[0]):       # iterate through each data point
        for j in range(raw_data.shape[0]-i): # inner loop to calculate pairwise point distances
            a = raw_data[i]
            b = raw_data[j+i] # each simplex is a set (no order), hence [0,1] = [1,0]; so only store one
            if (i != j+i):
                dist = metric(a, b)
                if dist <= epsilon:
                    edges.append({i, j+i})               # add edge
                    weights.append([len(edges)-1, dist]) # store index and weight
    return nodes, edges, weights
def lower_nbrs(nodeSet, edgeSet, node):
    return {x for x in nodeSet if {x,node} in edgeSet and node > x}
def rips(graph, k):
    nodes, edges = graph[0:2]
    VRcomplex = [{n} for n in nodes]
    for e in edges: # add 1-simplexes (edges)
        VRcomplex.append(e)
    for i in range(k):
        for simplex in [x for x in VRcomplex if len(x)==i+2]: # skip 0-simplexes
            # for each u in simplex
            nbrs = set.intersection(*[lower_nbrs(nodes, edges, z) for z in simplex])
            for nbr in nbrs:
                VRcomplex.append(set.union(simplex, {nbr}))
    return VRcomplex
def drawComplex(origData, ripsComplex, axes=[-6,8,-6,6]):
    plt.clf()
    plt.axis(axes)
    plt.scatter(origData[:,0], origData[:,1]) # plotting just for clarity
    for i, txt in enumerate(origData):
        plt.annotate(i, (origData[i][0]+0.05, origData[i][1])) # add labels

    # add lines for edges
    for edge in [e for e in ripsComplex if len(e)==2]:
        # print(edge)
        pt1, pt2 = [origData[pt] for pt in [n for n in edge]]
        # plt.gca().add_line(plt.Line2D(pt1, pt2))
        line = plt.Polygon([pt1, pt2], closed=None, fill=None, edgecolor='r')
        plt.gca().add_line(line)

    # add triangles
    for triangle in [t for t in ripsComplex if len(t)==3]:
        pt1, pt2, pt3 = [origData[pt] for pt in [n for n in triangle]]
        line = plt.Polygon([pt1, pt2, pt3], closed=False, color="blue", alpha=0.3, fill=True, edgecolor=None)
        plt.gca().add_line(line)
    plt.show()
Visualize the simplicial complex.
newData = np.array(r.df)
# print(newData)
graph = buildGraph(raw_data=newData, epsilon=1.7)
ripsComplex = rips(graph=graph, k=3)
drawComplex(origData=newData, ripsComplex=ripsComplex)
The 2D simplicial complex construction above illustrates a weighted graph representation of the 2D data using 0-, 1-, and 2-cells. In general, a similar decomposition using higher-order simplexes can be obtained, at increasing computational cost. This simplicial graph helps with the lower-dimensional projections in the UMAP algorithm, as the simplicial decomposition captures the topological structure of the data.
Our aim is to generate a low-dimensional representation of the data that has a similar (homologous) topological structure to the higher-dimensional simplicial decomposition. The UMAP projection is a low-dimensional representation of the data in the desired embedding space (e.g., \(\mathbb{R}^2\) or \(\mathbb{R}^3\)). We measure distances on the manifold using some metric, e.g., the standard Euclidean distance. In particular, we focus on distances to the nearest neighbors, where neighborhoods are defined as balls centered at each data point whose diameter is controlled by min_dist, a user-specified hyper-parameter.
The lower-dimensional projections are driven by iteratively solving an optimization problem: finding the low-dimensional simplicial graph that most closely resembles the fuzzy topological structure of the original simplicial decomposition. The simplexes are represented as graphs with weighted edges that can be thought of as probabilities comparing the homologous 0-simplexes, 1-simplexes, and so on. These probabilities can be modeled as Bernoulli processes, since the (corresponding) simplexes either exist or do not exist; the probability is the univariate parameter of the Bernoulli distribution, which can be estimated using the cross-entropy measure.
For example, if \(E\) is the set of all possible 1-simplexes, let's denote by \(\omega(e)\) and \(\omega'(e)\) the weight functions of the 1-simplex \(e\) in the high-dimensional space and in the corresponding lower-dimensional counterpart, respectively. Then, the cross-entropy measure for the 1-simplexes is:
\[\sum_{e\in E} {\left [ \underbrace{\omega(e)\log\left ( \frac{\omega(e)}{\omega'(e)}\right )}_\text{attractive force}+ \underbrace{(1-\omega(e))\log\left ( \frac{1-\omega(e)}{1-\omega'(e)}\right ) }_\text{repulsive force}\right ]} .\]
The iterative optimization process would minimize the objective function composed of all cross entropies for all simplicial complexes using a strategy like stochastic gradient descent.
The optimization process balances the push-pull between the attractive forces between points, favoring larger values of \(\omega'(e)\) (which correspond to small distances between the points), and the repulsive forces between the ends of \(e\) when \(\omega(e)\) is small (which correspond to small values of \(\omega'(e)\)).
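To illustrate the attraction-repulsion balance, here is a minimal sketch (the function umapCrossEntropy() and the edge weights are hypothetical illustrations, not part of the umap package) that evaluates the edge-wise cross-entropy for given high- and low-dimensional edge weights \(\omega\) and \(\omega'\).

# Minimal sketch: edge-wise cross-entropy between high-D weights w and low-D weights w1
umapCrossEntropy <- function(w, w1, eps=1e-12) {
  w1 <- pmin(pmax(w1, eps), 1-eps)                       # keep the logarithms finite
  attract <- ifelse(w > 0, w*log(w/w1), 0)               # attractive term
  repel   <- ifelse(w < 1, (1-w)*log((1-w)/(1-w1)), 0)   # repulsive term
  sum(attract + repel)
}
w <- c(0.9, 0.8, 0.1)                          # hypothetical high-dimensional edge weights
umapCrossEntropy(w, w1=c(0.85, 0.75, 0.15))    # well-matched embedding weights -> small penalty
umapCrossEntropy(w, w1=c(0.10, 0.10, 0.90))    # mismatched embedding weights -> large penalty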
Let’s go back to the MNIST digits dataset and apply UMAP projection from the original 784D space to 2D.
The R package umap provides functionality to use UMAP for dimensionality reduction of high-dimensional data. Let's demonstrate UMAP by projecting the hand-written digits dataset into 2D and then using the projection for prediction/forecasting on new testing/validation data.
# install.packages('plotly')
library(plotly)
# install.packages('umap')
library(umap)
#
# # Specify the input data
# umapData <- iris[ , 1:4]
# umapLabels <- iris[ , 5]
#
# # UMAP projection in 2D
# umapData.umap <- umap(umapData) # exclude the iris 3-class labels
#
# # Using Python through R-reticulate interface
# # library(reticulate)
# # iris.umap_learn <- umap(umapData, method="umap-learn")
#
# # Plot UMAP reduction in 2D
# plot(umapData.umap$layout, col=as.factor(umapLabels))
# https://rdrr.io/cran/umap/man/umap.defaults.html
custom.settings = umap.defaults
custom.settings$n_neighbors = 5
custom.settings$n_components = 3
# custom.settings

execTime_UMAP <- system.time(umap_digits3D <- umap(t(train)[1:10000 , ], config=custom.settings)); execTime_UMAP
## user system elapsed
## 71.79 0.50 73.03
cols <- palette(rainbow(10))
# 2D UMAP Plot
dfUMAP <- data.frame(umap_digits3D$layout[1:1000, ], train.labels.colors[1:1000])
plot_ly(dfUMAP, x = ~X1, y = ~X2, mode = 'text') %>%
  add_text(text=names(train.labels.colors)[1:1000], textfont=list(color=dfUMAP$train.labels.colors.1.1000.)) %>%
  layout(title="UMAP (784D->2D) Embedding", xaxis = list(title = ""), yaxis = list(title = ""))
# # 3D t_SNE plot
# execTime_UMAP <- system.time(umap_digits3D <- umap(t(train)[1:10000 , ], umap.config=custom.settings)); execTime_UMAP
#
# df3D <- data.frame(tsne_digits3D$Y[1:1000, ], train.labels.colors[1:1000])
# plot_ly(df3D, x = ~df3D[, 1], y = ~df3D[, 2], z= ~df3D[, 3], mode = 'markers+text') %>%
# add_text(text = names(train.labels.colors)[1:1000], textfont = list(color = df$train.labels.colors.1.1000.)) %>%
# layout(title = "t-SNE 3D Embedding",
# scene = list(xaxis = list(title=""), yaxis=list(title=""), zaxis=list(title="")))
# execTime_UMAP <- system.time(umapData.umap <-
# umap(t(train)[1:10000 , ]))
# execTime_UMAP
# # Full dataset(42K * 1K) execution may take much longer
#
# # Plot the result 2D UMAP embedding of the data with digits or solid discs
# par(mfrow=c(1,1)) # reset the plot canvas
#
# # Plot only first 1,000 cases (to avoid clutter)
# plot(umapData.umap$layout[1:200 , ], t='n', main="UMAP (748D->2D)") # don't plot the points to avoid clutter
# text(umapData.umap$layout[1:200,], labels=names(train.labels.colors)[1:200], col=train.labels.colors[1:200])
#
# # plot(umapData.umap$layout[1:200 , ],
# # col=as.factor(train.labels.colors[1:200]))
Using the MNIST hand-written digits dataset, or the simpler iris dataset, we will show how UMAP can be used for predicting the image-to-digit, or flower-to-species (taxa), mappings. For the MNIST data, we use 1,000 new 2D images for prediction; for the iris data, we simulate new iris flower features by adding random noise \(N(0,0.1)\) to the original iris data.
# for IRIS data
# umapData <- iris[ , 1:4]
# umapLabels <- iris[ , 5]
#
# # UMAP projection in 2D
# umapData.umap <- umap(umapData)
#
# # Generate a new random set (from the original iris data)
# umapDataNoise <- umapData + matrix(rnorm(150*40, 0, 0.1), ncol=4)
# colnames(umapDataNoise) <- colnames(umapData)
# head(umapDataNoise, 5)
#
# # Prediction/Forecasting
# umapDataNoisePred <- predict(umapData.umap, umapDataNoise)
#
# # Stack the original and noise-corrupted data into the same object and plot
# plot(umapData.umap$layout, col=umapLabels, pch = 1,
# xlab='UMAP Dim 1', ylab='UMAP Dim 2') # UMAP of Original data
# par(new=TRUE)
# plot(umapDataNoisePred, col=umapLabels, pch = 3,
# xaxt='n', yaxt='n', ann=FALSE) # UMAP of Predictions on Noisy data
# Stack the original and noise-corrupted data into the same object and plot
# plot(umap_digits3D$layout, col=umapLabels, pch = 1,
# xlab='UMAP Dim 1', ylab='UMAP Dim 2') # UMAP of Original data
# par(new=TRUE)
# plot(umapDataNoisePred, col=umapLabels, pch = 3,
# xaxt='n', yaxt='n', ann=FALSE) # UMAP of Predictions on Noisy data
# Use the next 1,000 images as prospective new data to classify into 0,1, ..., 9 labels
umapData_1000 <- train[ , 10001:11000]
str(umapData_1000)
## num [1:784, 1:1000] 0 0 0 0 0 0 0 0 0 0 ...
## - attr(*, "dimnames")=List of 2
## ..$ : chr [1:784] "pixel0" "pixel1" "pixel2" "pixel3" ...
## ..$ : NULL
# Prediction/Forecasting
umapData_1000_Pred <- predict(umap_digits3D, t(umapData_1000)) # mind transpose of data (case * Feature)
# train.labels.colors <- colMap(train.labels)

# 2D UMAP Plot
dfUMAP <- data.frame(x=umap_digits3D$layout[1:1000, 1], y=umap_digits3D$layout[1:1000, 2],
                     col=train.labels.colors[1:1000])
plot_ly(dfUMAP, x = ~x, y = ~y, mode = 'text') %>%
# Add training data UMAP projection scatter Digits
add_text(text=names(train.labels.colors)[1:1000], textfont=list(color=~col, size=15), showlegend=F) %>%
# Add 1,000 testing hand-written digit cases onto the 2D UMAP projection pane as colored scatter points
# add_markers(marker = list(opacity=0.99, size=10, color=train.labels.colors[10001:11000],
# symbol=~as.numeric(as.factor(train.labels.colors[10001:11000]))))
add_markers(x=umapData_1000_Pred[ , 1], y=umapData_1000_Pred[ , 2],
name =train.labels[10001:11000], marker = list(color=train.labels.colors[10001:11000])) %>%
layout(title="UMAP Prediction: Projection of 1,000 New Images in 2D",
xaxis = list(title = ""), yaxis = list(title = ""))
# #############################################################################
# # plot(range(umapData.umap$layout)[ , 1], range(umapData.umap$layout[ , 2]))
# # points(umapData.umap$layout[,1], umapData.umap$layout[,2],
# # col=umapLabels, cex=2, pch=1)
# # mtext(side=3, main="UMAP")
# # df <- as.data.frame(cbind(umapData.umap$layout[ , 1], umapData.umap$layout[ , 2]))
# library(plotly)
# plot_ly(x = umapData.umap$layout[ , 1] ,
# y = umapData.umap$layout[ , 2], z = umapData.umap$layout[ , 2],
# color = ~umapLabels,
# opacity = .5,
# colors = c('darkgreen', 'red'),
# type = "scatter3d",
# mode = "markers",
# marker = list(size = 5, width=2),
# text = ~umapLabels,
# hoverinfo = "text") %>%
# layout(
# title = "UMAP clusters"
# )
#
# # p <-plot_ly(
# # x = umapData.umap$layout[ , 1] ,
# # y = umapData.umap$layout[ , 2], z = umapData.umap$layout[ , 2],
# # type="scatter3d",
# # mode = "markers",
# # color = as.factor(umapLabels)) %>%
# # layout(
# # title = "UMAP clusters"
# # )
# # p
The UMAP configuration allows specifying the 21 parameters controlling the UMAP projection embedding computed by the function umap(). For example, the input setting determines whether the data supplied to umap() is treated as a data matrix or as a distance matrix, and the fitted embedding can subsequently be applied to new data via predict().
Reproducibility is a stronger condition that requires obtaining consistent computational results using a fixed protocol including input data, computational steps, scientific methods, software code, compiler, and other experimental conditions.
Replicability is a weaker condition that requires obtaining homologous results across multiple studies examining the same phenomenon aiming at answering the same scientific questions, each study using its own independent (yet homologous) input data.
As most UMAP implementations involve stochastic processes that employ random number generation, there may be variations in the output results from repeated runs of the algorithm. To stabilize these outputs and ensure result reproducibility, we can use seed-setting, e.g., umapData.umap.1234 = umap(umapData, random_state=1234).
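For instance, a minimal sketch (assuming the umap package and the transposed train matrix defined above; the object names run1/run2 are illustrative) of checking reproducibility under a fixed seed might look like this.

# Minimal sketch: fixing random_state in the configuration should make repeated UMAP runs agree
repro.settings <- umap.defaults
repro.settings$random_state <- 1234
run1 <- umap(t(train)[1:1000, ], config=repro.settings)
run2 <- umap(t(train)[1:1000, ], config=repro.settings)
all.equal(run1$layout, run2$layout)   # expected TRUE when the seed is fixed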
Below is a summary of key UMAP interpretation points:
The data we will be using in this case study is the Clinical, Genetic and Imaging Data for Parkinson's Disease available on the SOCR website. A detailed data description is available at the following link: PD data. Let's import the data into R.
# Loading required package: xml2
library(rvest)
wiki_url <- read_html("https://wiki.socr.umich.edu/index.php/SOCR_Data_PD_BiomedBigMetadata")
html_nodes(wiki_url, "#content")
## {xml_nodeset (1)}
## [1] <div id="content" class="mw-body" role="main">\n\t\t\t<a id="top"></a>\n\ ...
pd_data <- html_table(html_nodes(wiki_url, "table")[[1]])
head(pd_data); summary(pd_data)
## # A tibble: 6 x 33
## Cases L_caudate_ComputeArea L_caudate_Volume R_caudate_Compu~ R_caudate_Volume
## <int> <int> <int> <int> <int>
## 1 2 597 767 855 968
## 2 2 597 767 855 968
## 3 2 597 767 855 968
## 4 2 597 767 855 968
## 5 3 604 873 935 1043
## 6 3 604 873 935 1043
## # ... with 28 more variables: L_putamen_ComputeArea <int>,
## # L_putamen_Volume <int>, R_putamen_ComputeArea <int>,
## # R_putamen_Volume <int>, L_hippocampus_ComputeArea <int>,
## # L_hippocampus_Volume <int>, R_hippocampus_ComputeArea <int>,
## # R_hippocampus_Volume <int>, cerebellum_ComputeArea <int>,
## # cerebellum_Volume <int>, L_lingual_gyrus_ComputeArea <int>,
## # L_lingual_gyrus_Volume <int>, R_lingual_gyrus_ComputeArea <int>, ...
## Cases L_caudate_ComputeArea L_caudate_Volume R_caudate_ComputeArea
## Min. : 2.0 Min. :525.0 Min. :719.0 Min. :795.0
## 1st Qu.:158.0 1st Qu.:582.0 1st Qu.:784.0 1st Qu.:875.0
## Median :363.5 Median :600.0 Median :800.0 Median :897.0
## Mean :346.1 Mean :600.4 Mean :800.3 Mean :894.5
## 3rd Qu.:504.0 3rd Qu.:619.0 3rd Qu.:819.0 3rd Qu.:916.0
## Max. :692.0 Max. :667.0 Max. :890.0 Max. :977.0
## R_caudate_Volume L_putamen_ComputeArea L_putamen_Volume R_putamen_ComputeArea
## Min. : 916 Min. : 815.0 Min. :1298 Min. :1198
## 1st Qu.: 979 1st Qu.: 879.0 1st Qu.:1376 1st Qu.:1276
## Median : 998 Median : 897.5 Median :1400 Median :1302
## Mean :1001 Mean : 898.9 Mean :1400 Mean :1300
## 3rd Qu.:1022 3rd Qu.: 919.0 3rd Qu.:1427 3rd Qu.:1321
## Max. :1094 Max. :1003.0 Max. :1507 Max. :1392
## R_putamen_Volume L_hippocampus_ComputeArea L_hippocampus_Volume
## Min. :2846 Min. :1203 Min. :3036
## 1st Qu.:2959 1st Qu.:1277 1st Qu.:3165
## Median :3000 Median :1300 Median :3200
## Mean :3000 Mean :1302 Mean :3198
## 3rd Qu.:3039 3rd Qu.:1325 3rd Qu.:3228
## Max. :3148 Max. :1422 Max. :3381
## R_hippocampus_ComputeArea R_hippocampus_Volume cerebellum_ComputeArea
## Min. :1414 Min. :3634 Min. :16378
## 1st Qu.:1479 1st Qu.:3761 1st Qu.:16617
## Median :1504 Median :3802 Median :16699
## Mean :1504 Mean :3799 Mean :16700
## 3rd Qu.:1529 3rd Qu.:3833 3rd Qu.:16784
## Max. :1602 Max. :4013 Max. :17096
## cerebellum_Volume L_lingual_gyrus_ComputeArea L_lingual_gyrus_Volume
## Min. :13680 Min. :3136 Min. :10709
## 1st Qu.:13933 1st Qu.:3262 1st Qu.:10943
## Median :13996 Median :3299 Median :11007
## Mean :14002 Mean :3300 Mean :11010
## 3rd Qu.:14077 3rd Qu.:3333 3rd Qu.:11080
## Max. :14370 Max. :3469 Max. :11488
## R_lingual_gyrus_ComputeArea R_lingual_gyrus_Volume
## Min. :3135 Min. :11679
## 1st Qu.:3258 1st Qu.:11935
## Median :3294 Median :12001
## Mean :3296 Mean :12008
## 3rd Qu.:3338 3rd Qu.:12079
## Max. :3490 Max. :12324
## L_fusiform_gyrus_ComputeArea L_fusiform_gyrus_Volume
## Min. :3446 Min. :10682
## 1st Qu.:3554 1st Qu.:10947
## Median :3594 Median :11016
## Mean :3598 Mean :11011
## 3rd Qu.:3637 3rd Qu.:11087
## Max. :3763 Max. :11394
## R_fusiform_gyrus_ComputeArea R_fusiform_gyrus_Volume Sex
## Min. :3094 Min. : 9736 Min. :0.0000
## 1st Qu.:3260 1st Qu.: 9928 1st Qu.:0.0000
## Median :3296 Median : 9994 Median :1.0000
## Mean :3299 Mean : 9996 Mean :0.5851
## 3rd Qu.:3332 3rd Qu.:10058 3rd Qu.:1.0000
## Max. :3443 Max. :10235 Max. :1.0000
## Weight Age Dx chr12_rs34637584_GT
## Min. : 51.00 Min. :31.00 Length:1128 Min. :0.000
## 1st Qu.: 71.00 1st Qu.:54.00 Class :character 1st Qu.:0.000
## Median : 78.50 Median :61.00 Mode :character Median :1.000
## Mean : 78.45 Mean :60.64 Mean :0.539
## 3rd Qu.: 84.00 3rd Qu.:68.00 3rd Qu.:1.000
## Max. :109.00 Max. :87.00 Max. :1.000
## chr17_rs11868035_GT UPDRS_part_I UPDRS_part_II UPDRS_part_III
## Min. :0.0000 Min. :0.000 Min. : 1.000 Min. : 1.00
## 1st Qu.:0.0000 1st Qu.:0.000 1st Qu.: 5.000 1st Qu.: 6.00
## Median :0.0000 Median :1.000 Median : 9.000 Median :13.00
## Mean :0.4184 Mean :0.773 Mean : 8.879 Mean :13.02
## 3rd Qu.:1.0000 3rd Qu.:1.000 3rd Qu.:13.000 3rd Qu.:18.00
## Max. :1.0000 Max. :2.000 Max. :20.000 Max. :30.00
## Time
## Min. : 0.0
## 1st Qu.: 4.5
## Median : 9.0
## Mean : 9.0
## 3rd Qu.:13.5
## Max. :18.0
To make sure that the data is ready for further modeling, we need to fix a few things. First, the Dx variable (diagnosis) is categorical; we need to recode it as a numeric variable. Second, we don't need the patient ID and time variables in the dimension reduction procedures.
pd_data$Dx <- gsub("PD", 1, pd_data$Dx)
pd_data$Dx <- gsub("HC", 0, pd_data$Dx)
pd_data$Dx <- gsub("SWEDD", 0, pd_data$Dx)
pd_data$Dx <- as.numeric(pd_data$Dx)
attach(pd_data)
pd_data <- pd_data[, -c(1, 33)]
Now we start the process of fitting a PCA model. Here we will use the princomp() function with the correlation, rather than the covariance, matrix for the calculation. Note that earlier we used prcomp(), whereas here we employ princomp(); these are two different functions for computing the PCA.
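As a quick, optional sanity check (a sketch, not part of the main analysis), the two functions should report essentially identical standard deviations when both operate on the correlation scale, since prcomp() is SVD-based and princomp() is eigendecomposition-based.

# Minimal sketch: compare prcomp() (SVD-based) and princomp() (eigendecomposition-based) on the correlation scale
pca.prcomp   <- prcomp(pd_data, center=TRUE, scale.=TRUE)
pca.princomp <- princomp(pd_data, cor=TRUE)
head(cbind(prcomp=pca.prcomp$sdev, princomp=pca.princomp$sdev))  # should agree up to numerical precision
# (On the covariance scale, cor=FALSE, the two differ by a factor of sqrt((n-1)/n),
#  because princomp() uses the divisor n instead of n-1.)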
pca.model <- princomp(pd_data, cor=TRUE)
summary(pca.model) # pc loadings (i.e., eigenvector columns)
## Importance of components:
## Comp.1 Comp.2 Comp.3 Comp.4 Comp.5
## Standard deviation 1.39495952 1.28668145 1.28111293 1.2061402 1.18527282
## Proportion of Variance 0.06277136 0.05340481 0.05294356 0.0469282 0.04531844
## Cumulative Proportion 0.06277136 0.11617617 0.16911973 0.2160479 0.26136637
## Comp.6 Comp.7 Comp.8 Comp.9 Comp.10
## Standard deviation 1.15961464 1.135510 1.10882348 1.0761943 1.06687730
## Proportion of Variance 0.04337762 0.041593 0.03966095 0.0373611 0.03671701
## Cumulative Proportion 0.30474399 0.346337 0.38599794 0.4233590 0.46007604
## Comp.11 Comp.12 Comp.13 Comp.14 Comp.15
## Standard deviation 1.05784209 1.04026215 1.03067437 1.0259684 0.99422375
## Proportion of Variance 0.03609774 0.03490791 0.03426741 0.0339552 0.03188648
## Cumulative Proportion 0.49617378 0.53108169 0.56534910 0.5993043 0.63119078
## Comp.16 Comp.17 Comp.18 Comp.19 Comp.20
## Standard deviation 0.97385632 0.96688855 0.92687735 0.92376374 0.89853718
## Proportion of Variance 0.03059342 0.03015721 0.02771296 0.02752708 0.02604416
## Cumulative Proportion 0.66178421 0.69194141 0.71965437 0.74718145 0.77322561
## Comp.21 Comp.22 Comp.23 Comp.24 Comp.25
## Standard deviation 0.88924412 0.87005195 0.86433816 0.84794183 0.82232529
## Proportion of Variance 0.02550823 0.02441905 0.02409937 0.02319372 0.02181351
## Cumulative Proportion 0.79873384 0.82315289 0.84725226 0.87044598 0.89225949
## Comp.26 Comp.27 Comp.28 Comp.29 Comp.30
## Standard deviation 0.80703739 0.78546699 0.77505522 0.76624322 0.68806884
## Proportion of Variance 0.02100998 0.01990188 0.01937776 0.01893963 0.01527222
## Cumulative Proportion 0.91326947 0.93317135 0.95254911 0.97148875 0.98676096
## Comp.31
## Standard deviation 0.64063259
## Proportion of Variance 0.01323904
## Cumulative Proportion 1.00000000
plot(pca.model)
biplot(pca.model)
library(factoextra)
fviz_pca_biplot(pca.model, axes = c(1, 2), geom = "point",
col.ind = "black", col.var = "steelblue", label = "all",
invisible = "none", repel = F, habillage = pd_data$Sex,
palette = NULL, addEllipses = TRUE, title = "PCA - Biplot")
plot_ly(x = c(1:length(pca.model$sdev)), y = pca.model$sdev*pca.model$sdev, name = "Scree", type = "bar") %>%
layout(title="Scree Plot", xaxis = list(title="PC's"), yaxis = list(title="Variances (SD^2)"))
# Scores
scores <- pca.model$scores
# Loadings
loadings <- pca.model$loadings

# Visualization scale factor for loadings
scaleLoad <- 10

p <- plot_ly() %>%
  add_trace(x=scores[,1], y=scores[,2], z=scores[,3], type="scatter3d", mode="markers", name=pd_data$Dx,
            marker = list(color=pd_data$Dx, colorscale = c("gray", "red"), opacity = 0.7), showlegend=F)

for (k in 1:nrow(loadings)) {
  x <- c(0, loadings[k,1])*scaleLoad
  y <- c(0, loadings[k,2])*scaleLoad
  z <- c(0, loadings[k,3])*scaleLoad
  p <- p %>% add_trace(x=x, y=y, z=z, type="scatter3d", mode="lines",
                       name=paste0("Loading PC ", k, " ", colnames(pd_data)[k]), line=list(width=8), opacity=1)
}

p <- p %>%
  layout(legend = list(orientation = 'h'),
         title=paste0("3D Projection of ", length(pca.model$sdev),"D PD Data along First 3 PCs (Colored by Dx)"))
p
Although the two cohorts (normal controls and patients, \(Dx\), red and gray colored markers in the 3D scene) are slightly separated along the second principal direction, we can see in this real-world example that the PCs do not necessarily produce a definitive "elbow" plot suggesting an optimal number of components. In our PCA model, each PC explains about the same amount of variation. Thus, it is hard to tell how many PCs, or factors, we need to select. This would be an ad hoc decision in this case. We can understand this better after examining the following FA model.
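One simple way to quantify this (a small sketch using the fitted pca.model above) is to count how many components are needed to reach a given cumulative-variance threshold.

# Minimal sketch: number of PCs needed to reach a cumulative-variance threshold
varExplained <- pca.model$sdev^2 / sum(pca.model$sdev^2)
cumVar <- cumsum(varExplained)
min(which(cumVar >= 0.6))   # components needed for >= 60% of the variance
min(which(cumVar >= 0.8))   # components needed for >= 80% of the variance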
Let's set up Cattell's Scree test to determine the number of factors first.
library(nFactors)
ev <- eigen(cor(pd_data)) # get eigenvalues
ap <- parallel(subject=nrow(pd_data), var=ncol(pd_data), rep=100, cent=.05)
nS <- nScree(x=ev$values, aparallel=ap$eigen$qevpea)
summary(nS)
## Report For a nScree Class
##
## Details: components
##
## Eigenvalues Prop Cumu Par.Analysis Pred.eig OC Acc.factor AF
## 1 2 0 0 1 2 (< OC) NA (< AF)
## 2 2 0 0 1 2 0
## 3 2 0 0 1 1 0
## 4 1 0 0 1 1 0
## 5 1 0 0 1 1 0
## 6 1 0 0 1 1 0
## 7 1 0 0 1 1 0
## 8 1 0 0 1 1 0
## 9 1 0 0 1 1 0
## 10 1 0 0 1 1 0
## 11 1 0 0 1 1 0
## 12 1 0 1 1 1 0
## 13 1 0 1 1 1 0
## 14 1 0 1 1 1 0
## 15 1 0 1 1 1 0
## 16 1 0 1 1 1 0
## 17 1 0 1 1 1 0
## 18 1 0 1 1 1 0
## 19 1 0 1 1 1 0
## 20 1 0 1 1 1 0
## 21 1 0 1 1 1 0
## 22 1 0 1 1 1 0
## 23 1 0 1 1 1 0
## 24 1 0 1 1 1 0
## 25 1 0 1 1 1 0
## 26 1 0 1 1 1 0
## 27 1 0 1 1 1 0
## 28 1 0 1 1 1 0
## 29 1 0 1 1 1 0
## 30 0 0 1 1 NA 0
## 31 0 0 1 1 NA NA
##
##
## Number of factors retained by index
##
## noc naf nparallel nkaiser
## 1 1 1 14 14
Although Cattell's Scree test suggests that we should use 14 factors, the real fit shows that 14 is not enough. The previous PCA results indicate we need about 15 components to reach 60% cumulative variance (and about 20 to reach 75%). After a few trials, we find that 19 factors pass the chi-square test for a sufficient number of factors at the 0.05 level.
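The "few trials" can be automated with a small sketch like the one below, which screens a range of candidate factor counts by their factanal() chi-square p-values (note that factanal() may occasionally fail to converge for some counts, so this is only a rough screening device).

# Minimal sketch: screen candidate factor counts by the factanal() chi-square p-value
# (p-values above 0.05 indicate that the hypothesized number of factors is sufficient)
pvals <- sapply(14:20, function(k) factanal(pd_data, factors=k, rotation="varimax")$PVAL)
round(rbind(factors=14:20, p.value=pvals), 3)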
fa.model <- factanal(pd_data, 19, rotation="varimax")
fa.model
##
## Call:
## factanal(x = pd_data, factors = 19, rotation = "varimax")
##
## Uniquenesses:
## L_caudate_ComputeArea L_caudate_Volume
## 0.840 0.005
## R_caudate_ComputeArea R_caudate_Volume
## 0.868 0.849
## L_putamen_ComputeArea L_putamen_Volume
## 0.791 0.702
## R_putamen_ComputeArea R_putamen_Volume
## 0.615 0.438
## L_hippocampus_ComputeArea L_hippocampus_Volume
## 0.476 0.777
## R_hippocampus_ComputeArea R_hippocampus_Volume
## 0.798 0.522
## cerebellum_ComputeArea cerebellum_Volume
## 0.137 0.504
## L_lingual_gyrus_ComputeArea L_lingual_gyrus_Volume
## 0.780 0.698
## R_lingual_gyrus_ComputeArea R_lingual_gyrus_Volume
## 0.005 0.005
## L_fusiform_gyrus_ComputeArea L_fusiform_gyrus_Volume
## 0.718 0.559
## R_fusiform_gyrus_ComputeArea R_fusiform_gyrus_Volume
## 0.663 0.261
## Sex Weight
## 0.829 0.005
## Age Dx
## 0.005 0.005
## chr12_rs34637584_GT chr17_rs11868035_GT
## 0.638 0.721
## UPDRS_part_I UPDRS_part_II
## 0.767 0.826
## UPDRS_part_III
## 0.616
##
## Loadings:
## Factor1 Factor2 Factor3 Factor4 Factor5 Factor6
## L_caudate_ComputeArea
## L_caudate_Volume 0.980
## R_caudate_ComputeArea
## R_caudate_Volume
## L_putamen_ComputeArea
## L_putamen_Volume
## R_putamen_ComputeArea
## R_putamen_Volume
## L_hippocampus_ComputeArea
## L_hippocampus_Volume
## R_hippocampus_ComputeArea -0.102
## R_hippocampus_Volume
## cerebellum_ComputeArea
## cerebellum_Volume
## L_lingual_gyrus_ComputeArea 0.107 0.106
## L_lingual_gyrus_Volume
## R_lingual_gyrus_ComputeArea 0.989
## R_lingual_gyrus_Volume 0.983
## L_fusiform_gyrus_ComputeArea
## L_fusiform_gyrus_Volume
## R_fusiform_gyrus_ComputeArea
## R_fusiform_gyrus_Volume
## Sex -0.111
## Weight 0.983
## Age 0.984
## Dx 0.965
## chr12_rs34637584_GT 0.124
## chr17_rs11868035_GT -0.303
## UPDRS_part_I -0.260
## UPDRS_part_II
## UPDRS_part_III 0.332 0.104
## Factor7 Factor8 Factor9 Factor10 Factor11 Factor12
## L_caudate_ComputeArea -0.101
## L_caudate_Volume
## R_caudate_ComputeArea
## R_caudate_Volume -0.103 -0.107 -0.182 0.174
## L_putamen_ComputeArea 0.299 -0.147
## L_putamen_Volume -0.123
## R_putamen_ComputeArea 0.147 -0.175 0.225
## R_putamen_Volume 0.698
## L_hippocampus_ComputeArea 0.708
## L_hippocampus_Volume
## R_hippocampus_ComputeArea
## R_hippocampus_Volume 0.652
## cerebellum_ComputeArea 0.920
## cerebellum_Volume 0.690
## L_lingual_gyrus_ComputeArea 0.143 -0.126
## L_lingual_gyrus_Volume
## R_lingual_gyrus_ComputeArea
## R_lingual_gyrus_Volume
## L_fusiform_gyrus_ComputeArea
## L_fusiform_gyrus_Volume
## R_fusiform_gyrus_ComputeArea 0.121
## R_fusiform_gyrus_Volume 0.844
## Sex
## Weight
## Age
## Dx
## chr12_rs34637584_GT -0.195 -0.207 0.197
## chr17_rs11868035_GT -0.165
## UPDRS_part_I -0.209 0.212 0.122
## UPDRS_part_II
## UPDRS_part_III -0.161 0.104
## Factor13 Factor14 Factor15 Factor16 Factor17
## L_caudate_ComputeArea 0.113 -0.119 -0.165
## L_caudate_Volume
## R_caudate_ComputeArea 0.174 -0.164
## R_caudate_Volume 0.125 0.120
## L_putamen_ComputeArea -0.165
## L_putamen_Volume 0.128 -0.149 0.382 -0.187
## R_putamen_ComputeArea 0.260 -0.218
## R_putamen_Volume -0.128
## L_hippocampus_ComputeArea
## L_hippocampus_Volume -0.106
## R_hippocampus_ComputeArea 0.331 0.181
## R_hippocampus_Volume -0.114
## cerebellum_ComputeArea
## cerebellum_Volume
## L_lingual_gyrus_ComputeArea 0.136 0.137 0.256
## L_lingual_gyrus_Volume
## R_lingual_gyrus_ComputeArea
## R_lingual_gyrus_Volume
## L_fusiform_gyrus_ComputeArea 0.493 -0.113
## L_fusiform_gyrus_Volume 0.646
## R_fusiform_gyrus_ComputeArea -0.544
## R_fusiform_gyrus_Volume
## Sex -0.352 -0.111
## Weight 0.106
## Age
## Dx 0.210
## chr12_rs34637584_GT 0.227 -0.289 0.186
## chr17_rs11868035_GT 0.168 -0.113 0.206
## UPDRS_part_I -0.123
## UPDRS_part_II 0.378
## UPDRS_part_III -0.121 -0.282 0.311
## Factor18 Factor19
## L_caudate_ComputeArea 0.237
## L_caudate_Volume
## R_caudate_ComputeArea -0.112
## R_caudate_Volume 0.113
## L_putamen_ComputeArea 0.164
## L_putamen_Volume -0.131
## R_putamen_ComputeArea -0.109 0.341
## R_putamen_Volume 0.110
## L_hippocampus_ComputeArea
## L_hippocampus_Volume -0.435
## R_hippocampus_ComputeArea
## R_hippocampus_Volume
## cerebellum_ComputeArea
## cerebellum_Volume
## L_lingual_gyrus_ComputeArea 0.140
## L_lingual_gyrus_Volume 0.536
## R_lingual_gyrus_ComputeArea
## R_lingual_gyrus_Volume
## L_fusiform_gyrus_ComputeArea
## L_fusiform_gyrus_Volume
## R_fusiform_gyrus_ComputeArea
## R_fusiform_gyrus_Volume
## Sex
## Weight
## Age
## Dx
## chr12_rs34637584_GT -0.152
## chr17_rs11868035_GT -0.175
## UPDRS_part_I 0.127
## UPDRS_part_II
## UPDRS_part_III
##
## Factor1 Factor2 Factor3 Factor4 Factor5 Factor6 Factor7 Factor8
## SS loadings 1.282 1.029 1.026 1.019 1.013 1.011 0.921 0.838
## Proportion Var 0.041 0.033 0.033 0.033 0.033 0.033 0.030 0.027
## Cumulative Var 0.041 0.075 0.108 0.140 0.173 0.206 0.235 0.263
## Factor9 Factor10 Factor11 Factor12 Factor13 Factor14 Factor15
## SS loadings 0.782 0.687 0.647 0.615 0.587 0.569 0.566
## Proportion Var 0.025 0.022 0.021 0.020 0.019 0.018 0.018
## Cumulative Var 0.288 0.310 0.331 0.351 0.370 0.388 0.406
## Factor16 Factor17 Factor18 Factor19
## SS loadings 0.547 0.507 0.475 0.456
## Proportion Var 0.018 0.016 0.015 0.015
## Cumulative Var 0.424 0.440 0.455 0.470
##
## Test of the hypothesis that 19 factors are sufficient.
## The chi square statistic is 54.51 on 47 degrees of freedom.
## The p-value is 0.211
The pairwise correlations in this data matrix are relatively low; thus, the dataset is not well suited for ICA.
cor(pd_data)[1:10, 1:10]
## L_caudate_ComputeArea L_caudate_Volume
## L_caudate_ComputeArea 1.000000000 0.05794916
## L_caudate_Volume 0.057949162 1.00000000
## R_caudate_ComputeArea -0.060576361 0.01076372
## R_caudate_Volume 0.043994457 0.07245568
## L_putamen_ComputeArea 0.009640983 -0.06632813
## L_putamen_Volume -0.064299184 -0.11131525
## R_putamen_ComputeArea 0.040808105 0.04504867
## R_putamen_Volume 0.058552841 -0.11830387
## L_hippocampus_ComputeArea -0.037932760 -0.04443615
## L_hippocampus_Volume -0.042033469 -0.04680825
## R_caudate_ComputeArea R_caudate_Volume
## L_caudate_ComputeArea -0.060576361 0.043994457
## L_caudate_Volume 0.010763720 0.072455677
## R_caudate_ComputeArea 1.000000000 0.057441889
## R_caudate_Volume 0.057441889 1.000000000
## L_putamen_ComputeArea -0.015959528 -0.017003442
## L_putamen_Volume 0.063279351 0.021962691
## R_putamen_ComputeArea 0.078643479 0.054287467
## R_putamen_Volume 0.007022844 -0.094336376
## L_hippocampus_ComputeArea 0.051359613 0.006123355
## L_hippocampus_Volume 0.085788328 -0.077913614
## L_putamen_ComputeArea L_putamen_Volume
## L_caudate_ComputeArea 0.009640983 -0.06429918
## L_caudate_Volume -0.066328127 -0.11131525
## R_caudate_ComputeArea -0.015959528 0.06327935
## R_caudate_Volume -0.017003442 0.02196269
## L_putamen_ComputeArea 1.000000000 0.02228947
## L_putamen_Volume 0.022289469 1.00000000
## R_putamen_ComputeArea 0.090496109 0.09093926
## R_putamen_Volume 0.176353726 -0.05768765
## L_hippocampus_ComputeArea 0.094604791 0.02530330
## L_hippocampus_Volume -0.064425367 0.04041557
## R_putamen_ComputeArea R_putamen_Volume
## L_caudate_ComputeArea 0.04080810 0.058552841
## L_caudate_Volume 0.04504867 -0.118303868
## R_caudate_ComputeArea 0.07864348 0.007022844
## R_caudate_Volume 0.05428747 -0.094336376
## L_putamen_ComputeArea 0.09049611 0.176353726
## L_putamen_Volume 0.09093926 -0.057687648
## R_putamen_ComputeArea 1.00000000 0.052245264
## R_putamen_Volume 0.05224526 1.000000000
## L_hippocampus_ComputeArea -0.05508472 0.131800075
## L_hippocampus_Volume -0.08866344 -0.001133570
## L_hippocampus_ComputeArea L_hippocampus_Volume
## L_caudate_ComputeArea -0.037932760 -0.04203347
## L_caudate_Volume -0.044436146 -0.04680825
## R_caudate_ComputeArea 0.051359613 0.08578833
## R_caudate_Volume 0.006123355 -0.07791361
## L_putamen_ComputeArea 0.094604791 -0.06442537
## L_putamen_Volume 0.025303302 0.04041557
## R_putamen_ComputeArea -0.055084723 -0.08866344
## R_putamen_Volume 0.131800075 -0.00113357
## L_hippocampus_ComputeArea 1.000000000 -0.02633816
## L_hippocampus_Volume -0.026338163 1.00000000
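To quantify the "low correlation" observation beyond the 10×10 corner shown above, a one-line sketch summarizing the magnitudes of all pairwise (off-diagonal) correlations may be helpful.

# Minimal sketch: distribution of the absolute off-diagonal pairwise correlations
corPD <- cor(pd_data)
summary(abs(corPD[upper.tri(corPD)]))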
# use the diagnosis (Dx) as categorical labels for all observations
class <- pd_data$Dx
df <- as.data.frame(pd_data[1:5], class=class)
plot_ly(df) %>%
add_trace(type = 'splom', dimensions = list( list(label=colnames(pd_data)[1], values=~L_caudate_ComputeArea),
list(label=colnames(pd_data)[2], values=~L_caudate_Volume),
list(label=colnames(pd_data)[3], values=~R_caudate_ComputeArea),
list(label=colnames(pd_data)[4], values=~R_caudate_Volume),
list(label=colnames(pd_data)[5], values=~L_putamen_ComputeArea)),
text=~class, marker = list(line = list(width = 1, color = 'rgb(230,230,230)'))) %>%
layout(title= 'Parkinsons Disease (PD) Data Pairs Plot', hovermode='closest', dragmode= 'select',
plot_bgcolor='rgba(240,240,240, 0.95)')
Next, let’s try the t-Distributed Stochastic Neighbor Embedding method on the PD data.
# install.packages("Rtsne")
library(Rtsne)
# If working with post-processed PD data above: remove duplicates (after stripping time)
# pd_data <- unique(pd_data[,])
# If working with raw PD data: reload it
pd_data <- html_table(html_nodes(wiki_url, "table")[[1]])
# Run the t-SNE, tracking the execution time
execTime_tSNE <- system.time(tsne_PD <- Rtsne(pd_data, dims = 3, perplexity=30, verbose=TRUE, max_iter = 1000)); execTime_tSNE
## Performing PCA
## Read the 1128 x 35 data matrix successfully!
## OpenMP is working. 1 threads.
## Using no_dims = 3, perplexity = 30.000000, and theta = 0.500000
## Computing input similarities...
## Building tree...
## Done in 0.19 seconds (sparsity = 0.111894)!
## Learning embedding...
## Iteration 50: error is 71.952447 (50 iterations in 0.38 seconds)
## Iteration 100: error is 69.327842 (50 iterations in 0.27 seconds)
## Iteration 150: error is 69.327343 (50 iterations in 0.17 seconds)
## Iteration 200: error is 69.326984 (50 iterations in 0.17 seconds)
## Iteration 250: error is 69.327757 (50 iterations in 0.18 seconds)
## Iteration 300: error is 1.021073 (50 iterations in 0.35 seconds)
## Iteration 350: error is 0.740974 (50 iterations in 0.40 seconds)
## Iteration 400: error is 0.671042 (50 iterations in 0.44 seconds)
## Iteration 450: error is 0.642149 (50 iterations in 0.47 seconds)
## Iteration 500: error is 0.628549 (50 iterations in 0.51 seconds)
## Iteration 550: error is 0.621370 (50 iterations in 0.56 seconds)
## Iteration 600: error is 0.617776 (50 iterations in 0.55 seconds)
## Iteration 650: error is 0.613730 (50 iterations in 0.71 seconds)
## Iteration 700: error is 0.611010 (50 iterations in 0.57 seconds)
## Iteration 750: error is 0.609505 (50 iterations in 0.57 seconds)
## Iteration 800: error is 0.607603 (50 iterations in 0.57 seconds)
## Iteration 850: error is 0.605966 (50 iterations in 0.57 seconds)
## Iteration 900: error is 0.604546 (50 iterations in 0.56 seconds)
## Iteration 950: error is 0.601767 (50 iterations in 0.55 seconds)
## Iteration 1000: error is 0.598981 (50 iterations in 0.56 seconds)
## Fitting performed in 9.12 seconds.
## user system elapsed
## 9.46 0.05 9.66
# Plot the result 2D map embedding of the data
# table(pd_data$Sex)
# plot(tsne_PD$Y, main="t-SNE Clusters", col=rainbow(length(unique(pd_data$Sex))), pch = 1)
#legend("topright", c("Male", "Female"), fill=rainbow(length(unique(pd_data$Sex))), bg='gray90', cex=0.5)
table(pd_data$Dx)
##
## HC PD SWEDD
## 400 400 328
# Either use the DX label column to set the colors col = as.factor(pd_data$Dx)
#plot(tsne_PDs$Y, main="t-SNE Clusters", col=as.factor(pd_data$Dx), pch = 15)
#legend("topright", c("HC", "PD", "SWEDD"), fill=unique(as.factor(pd_data$Dx)), bg='gray90', cex=0.5)
# Or to set the colors explicitly
CharToColor = function(input_char){
  mapping = c("HC"="blue", "PD"="red", "SWEDD"="yellow")
  mapping[input_char]
}
pd_data$Dx.col = sapply(pd_data$Dx, CharToColor)
# plot(tsne_PD$Y, main="t-SNE Clusters", col=pd_data$Dx.col, pch = 15)
# legend("topright", c("HC", "PD", "SWEDD"), fill=unique(pd_data$Dx.col), bg='gray90', cex=0.5)
df3D <- data.frame(tsne_PD$Y, pd_data$Dx.col)
plot_ly(df3D, x = ~df3D[, 1], y = ~df3D[, 2], z= ~df3D[, 3], type="scatter3d", mode = 'markers',
color = pd_data$Dx.col, name=pd_data$Dx) %>%
layout(title = "PD t-SNE 3D Embedding",
scene = list(xaxis = list(title=""), yaxis=list(title=""), zaxis=list(title="")))
Similarly, we can try the UMAP method on the PD data.
execTime_UMAP <- system.time(umap_PD_3D <- umap(pd_data[, -c(27, 34)])) # remove the "Dx" & "Dx.col" columns
execTime_UMAP
## user system elapsed
## 3.14 0.00 3.17
cols <- palette(rainbow(3))

# 2D UMAP Plot
dfUMAP <- data.frame(umap_PD_3D$layout, df3D$pd_data.Dx.col)
plot_ly(dfUMAP, x = ~X1, y = ~X2, mode = 'text') %>%
add_text(text=pd_data$Dx, textfont=list(color=dfUMAP$df3D.pd_data.Dx.col)) %>%
layout(title="UMAP PD (32D->2D) Embedding", xaxis = list(title = ""), yaxis = list(title = ""))
The results of the PCA, ICA, FA, t-SNE, and UMAP methods on the PD data imply that the data is complex and intrinsically high-dimensional, which prevents explicit embeddings into a low-dimensional (e.g., 2D or 3D) space. More advanced methods to interrogate this dataset will be demonstrated later. The SOCR publications site provides additional examples of Parkinson’s disease studies.