In this chapter, we are going to cover several powerful black-box machine learning and artificial intelligence techniques. These techniques have complex mathematical formulations; however, efficient algorithms and reliable software packages have been developed to utilize them in various practical applications. We will (1) describe Neural Networks as analogues of biological neurons, (2) develop, hands-on, a neural network that can be trained to compute the square-root function, (3) describe support vector machine (SVM) classification, (4) present the random forest as an ensemble ML technique, and (5) analyze several case-studies, including optical character recognition (OCR), the Iris flowers, Google Trends and the Stock Market, and Quality of Life in chronic disease.
Later, in Chapter 14, we will provide more details and additional examples of deep neural network learning. For now, let’s start by exploring the mechanics inside black box machine learning approaches.
An Artificial Neural Network (ANN) model mimics the biological brain response to multisource (sensory-motor) stimuli (inputs). ANN simulates the brain using a network of interconnected neuron cells to create a massive parallel processor. Indeed, ANNs rely on graphs of artificial nodes, not brain cells, to model intrinsic process characteristics using observational data.
The basic ANN component is a cell node. Suppose we have an input \(x=\{x_i\}\) to the node, feeding information from upstream network nodes, and one output propagating the information downstream through the network. The first step in fitting an ANN involves estimating the weight coefficients for each input feature. These weights (\(w\)'s) correspond to the relative importance of each input. Then, the weighted signals are summed by the "neuron cell" and this sum is passed on through an activation function denoted by \(f(\cdot)\). The last step is generating an output \(y\) at the end of each node. A typical output has the following mathematical relationship to the inputs. The weights \(\{w_i\}_{i\ge 1}\) control the weighted averaging of the inputs, \(\{x_i\}\), used to evaluate the activation function, while the constant factor weight \(w_o\) and the corresponding bias term \(b\) allow us to shift or offset the entire activation function (left or right). \[\underbrace{y(x)}_{output}=f\left (w_o \underbrace{b}_{bias}+\sum_{i=1}^n \overbrace{w_i}^{weights} \underbrace{x_i}_{inputs}\right ).\]
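To make this formula concrete, here is a minimal R sketch of a single artificial neuron; the specific input, weight, and bias values below are arbitrary illustrations, not part of the original example.
# a single artificial neuron: weighted sum of inputs passed through a sigmoid activation
sigmoid <- function(z) 1/(1 + exp(-z))     # activation function f()
x  <- c(0.5, -1.2, 3.0)                    # inputs x_i (arbitrary illustrative values)
w  <- c(0.8, 0.1, -0.4)                    # weights w_i (arbitrary illustrative values)
w0 <- 0.3; b <- 1                          # bias weight w_o and bias term b
y  <- sigmoid(w0*b + sum(w*x))             # y = f(w_o*b + sum_i w_i*x_i)
y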
There are three important components for building a neural network: the activation function, which transforms a node's combined input into its output signal; the network topology, i.e., the number of layers and the number of nodes in each layer; and the training algorithm, which estimates the connection weights. Let's look at each of these components one by one.
There are many alternative activation functions. One example is a threshold activation function that results in an output signal only when a specified input threshold has been attained.
\[f(x)= \left\{ \begin{array}{ll} 0 & x<0 \\ 1 & x\geq 0 \\ \end{array} \right. .\]
This is the simplest form of an activation function, and it is rarely used in real-world situations. The most commonly used alternative is the sigmoid activation function, where \(f(x)=\frac{1}{1+e^{-x}}\). The Euler number \(e\) is defined by the limit \(e=\displaystyle\lim_{n\longrightarrow\infty}{\left ( 1+\frac{1}{n}\right )^n}\). The output signal is no longer binary; it can be any real number ranging from 0 to 1.
Other activation functions might also be useful, e.g., linear, hyperbolic tangent, and Gaussian. Depending on the specific problem, we can choose a proper activation function based on the required codomain (function range). For example, with the hyperbolic tangent activation function, the outputs always range from -1 to 1, regardless of the input. With linear activation functions, the output may range from \(-\infty\) to \(+\infty\). The Gaussian activation function yields an ANN model called a Radial Basis Function (RBF) network.
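As a quick visual aside (not part of the original text), the following plotly sketch contrasts several of these activation functions and their output ranges.
# compare common activation functions on a shared input grid
library(plotly)
z <- seq(-5, 5, length.out = 200)
plot_ly(x = ~z, y = ~1/(1 + exp(-z)), type="scatter", mode="lines", name="Sigmoid (0,1)") %>%
  add_trace(x = ~z, y = ~tanh(z), type="scatter", mode="lines", name="Hyperbolic Tangent (-1,1)") %>%
  add_trace(x = ~z, y = ~z, type="scatter", mode="lines", name="Linear (-Inf,Inf)") %>%
  add_trace(x = ~z, y = ~exp(-z^2/2), type="scatter", mode="lines", name="Gaussian (0,1]") %>%
  layout(title="Common Activation Functions", xaxis=list(title="Input (x)"),
         yaxis=list(title="f(x)"), legend=list(orientation='h'))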
The number of layers: The \(x\)’s, or features, in the dataset are called input nodes while the predicted values are called the output nodes. Multilayer networks include multiple hidden layers. The following graph represents a two-layer neural network:
(Figure: a two-layer neural network.)
When we have multiple layers, the information flow could be complicated.
The arrows in the last graph (with multiple layers) suggest a feed forward network. In such networks, we can also have multiple outcomes modeled simultaneously.
(Figure: a feed-forward network with multiple outputs.)
Alternatively, in a recurrent network (feedback network), information can also travel backwards in loops (or delays). This is illustrated in the following graph.
(Figure: a recurrent (feedback) network with a delay loop.)
This short-term memory increases the power of recurrent networks dramatically. However, in practice, recurrent networks are not commonly used.
In any network, the number of input nodes and output nodes are predetermined by the predictive variables in the dataset and the desired prediction outcome. Typically, researchers can specify the number of hidden layers and the number of nodes in each layer of the network model. Ideally, simpler networks with fewer nodes and lower number of hidden layers are preferred for simplicity, computational efficiency, and network model interpretability.
The back-propagation algorithm determines the weights in the model by optimizing (back-propagating) the prediction errors. First, we assign random weight values (all weights must be non-trivial, i.e., \(\not= 0\)). For example, we can use a normal distribution, or any other random process, to assign initial weights (priors). Then, we adjust the weights iteratively, repeating the process until a certain convergence or stopping criterion is met. Each iteration contains two phases: a forward phase, in which the input signals are propagated through the network to produce an output, and a backward phase, in which the discrepancy between the network output and the true target value is propagated backwards to update the weights.
In the end, we pick a set of weights minimizing the total aggregate error to be the weights of the specific neural network.
In this case study, we are going to use the Google trends and stock market dataset. A doc file with the meta-data and the CSV data are available on the supporting Case-Studies Canvas Site. These daily data (between 2008 and 2009) can be used to examine the associations between Google search trends and market conditions, e.g., real estate or stock market indices.
We’ll remove the first two columns since the goal now is to model and predict Real Estate or other economic indices.
First, we need to load the dataset into R.
google <- read.csv("https://umich.instructure.com/files/416274/download?download_frd=1", stringsAsFactors = F)
Let’s delete the first two columns, since the only goal is to predict Google Real Estate Index with other indexes and DJI.
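The column-removal step itself is not shown in this excerpt; a minimal sketch (assuming the first two columns are the index/date variables) that would produce the structure listed below:
google <- google[ , -c(1, 2)]   # drop the first two (non-predictive) columns
str(google)                     # inspect the remaining 24 features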
## 'data.frame': 731 obs. of 24 variables:
## $ Unemployment : num 1.54 1.56 1.59 1.62 1.64 1.64 1.71 1.85 1.82 1.78 ...
## $ Rental : num 0.88 0.9 0.92 0.92 0.94 0.96 0.99 1.02 1.02 1.01 ...
## $ RealEstate : num 0.79 0.81 0.82 0.82 0.83 0.84 0.86 0.89 0.89 0.89 ...
## $ Mortgage : num 1 1.05 1.07 1.08 1.1 1.11 1.15 1.22 1.23 1.24 ...
## $ Jobs : num 0.99 1.05 1.1 1.14 1.17 1.2 1.3 1.41 1.43 1.44 ...
## $ Investing : num 0.92 0.94 0.96 0.98 0.99 0.99 1.02 1.09 1.1 1.1 ...
## $ DJI_Index : num 13044 13044 13057 12800 12827 ...
## $ StdDJI : num 4.3 4.3 4.31 4.14 4.16 4.16 4.16 4 4.1 4.17 ...
## $ Unemployment_30MA : num 1.37 1.37 1.38 1.38 1.39 1.4 1.4 1.42 1.43 1.44 ...
## $ Rental_30MA : num 0.72 0.72 0.73 0.73 0.74 0.75 0.76 0.77 0.78 0.79 ...
## $ RealEstate_30MA : num 0.67 0.67 0.68 0.68 0.68 0.69 0.7 0.7 0.71 0.72 ...
## $ Mortgage_30MA : num 0.98 0.97 0.97 0.97 0.98 0.98 0.98 0.99 0.99 1 ...
## $ Jobs_30MA : num 1.06 1.06 1.05 1.05 1.05 1.05 1.05 1.06 1.07 1.08 ...
## $ Investing_30MA : num 0.99 0.98 0.98 0.98 0.98 0.97 0.97 0.97 0.98 0.98 ...
## $ DJI_Index_30MA : num 13405 13396 13390 13368 13342 ...
## $ StdDJI_30MA : num 4.54 4.54 4.53 4.52 4.5 4.48 4.46 4.44 4.41 4.4 ...
## $ Unemployment_180MA: num 1.44 1.44 1.44 1.44 1.44 1.44 1.44 1.44 1.44 1.44 ...
## $ Rental_180MA : num 0.87 0.87 0.87 0.87 0.87 0.87 0.86 0.86 0.86 0.86 ...
## $ RealEstate_180MA : num 0.89 0.89 0.88 0.88 0.88 0.88 0.88 0.88 0.88 0.87 ...
## $ Mortgage_180MA : num 1.18 1.18 1.18 1.18 1.17 1.17 1.17 1.17 1.17 1.17 ...
## $ Jobs_180MA : num 1.24 1.24 1.24 1.24 1.24 1.24 1.24 1.24 1.24 1.24 ...
## $ Investing_180MA : num 1.04 1.04 1.04 1.04 1.04 1.04 1.04 1.04 1.04 1.04 ...
## $ DJI_Index_180MA : num 13493 13492 13489 13486 13482 ...
## $ StdDJI_180MA : num 4.6 4.6 4.6 4.6 4.59 4.59 4.59 4.58 4.58 4.58 ...
As we can see from the structure of the data, these indexes and DJI have different ranges. We should rescale the data to make all features unitless and, therefore, comparable. In Chapter 5, we learned that normalizing these features using our own normalize() function could fix the problem of heterogeneity of measuring units across features. We can use lapply() to apply the normalize() function to each feature (column in the data frame).
normalize <- function(x) {
return((x - min(x)) / (max(x) - min(x)))
}
google_norm<-as.data.frame(lapply(google, normalize))
summary(google_norm$RealEstate)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.0000 0.4615 0.6731 0.6292 0.8077 1.0000
The last step clearly normalizes all feature vectors into the range \([0,1]\).
The next step would be to split the google dataset into training and testing sets. This time we will use the sample() and floor() functions to separate the training and testing datasets (75:25). The sample() function creates a set of random indicators for row indices, which we can use to subset the original dataset. The floor() function takes a number x and returns the largest integer that does not exceed x (i.e., it rounds down). The general pattern is:
sample(nrow(data), floor(nrow(data)*0.75))
sub <- sample(nrow(google_norm), floor(nrow(google_norm)*0.75))
google_train <- google_norm[sub, ]
google_test <- google_norm[-sub, ]
Following this data preprocessing, we can move forward with the neural network training phase.
Here, we use the function neuralnet::neuralnet(), which returns a fitted NN object that includes, among other components, the estimated connection weights (weights), the fitted values (net.result), and a summary of the training error and convergence steps (result.matrix). The basic calling syntax is
m <- neuralnet(target ~ predictors, data=mydata, hidden=1),
where target is the outcome variable to be modeled, predictors are the input features (separated by +), data is the training data frame, and hidden specifies the number of hidden nodes (a single hidden layer with one node by default).
# install.packages("neuralnet")
library(neuralnet)
google_model <- neuralnet(RealEstate~Unemployment+Rental+Mortgage+Jobs+Investing+DJI_Index+StdDJI, data=google_train)
plot(google_model) # neuralnet::plot.nn
The above graph shows that we have only one hidden node. Error represents the aggregate sum of squared errors and Steps indicates the number of iterations until the ANN model converged. Note that these outputs could be different when you re-run the exact same model-fitting optimization because the weights are stochastically estimated.
Also, bias nodes (blue singletons in the graph) may be added to feedforward neural networks acting like intermediate input nodes that produce constant values, e.g., 1. They are not connected to nodes in previous layers, yet they generate biased activation. Bias nodes are not required but are helpful in some neural networks as they allow network flexibility by offsetting the activation functions.
Similar to the predict() function that we have mentioned in previous chapters, compute() is an alternative method that could help us generate the model predictions, e.g.,
p <- compute(m, test)
Our model used Unemployment, Rental, Mortgage, Jobs, Investing, DJI_Index, and StdDJI as predictors. Therefore, we need to reference the corresponding column numbers in the test dataset, i.e., columns 1, 2, 4, 5, 6, 7, 8, respectively.
google_pred <- compute(google_model, google_test[, c(1:2, 4:8)])
pred_results <- google_pred$net.result
cor(pred_results, google_test$RealEstate)
## [,1]
## [1,] 0.9774052
As mentioned in Chapter 3, we can still use the correlation between predicted results and observed Real Estate Index to evaluate the algorithm. For real datasets, a correlation exceeding \(0.9\) is a very good indicator of the performance of the NN model. Could this be improved further?
This time we will include \(4\) hidden nodes in the single-layer NN model. Let’s see what results we can get from this more complicated model.
google_model2 <- neuralnet(RealEstate~Unemployment+Rental+Mortgage+Jobs+Investing+DJI_Index+StdDJI, data=google_train, hidden = 4)
plot(google_model2)
Although the graph looks more complicated than the previous neural network, we have a smaller Error, i.e., sum of squared errors. Neural network models may be used both for classification and regression, which we will see in the next part. Let's first try regression.
google_pred2 <- compute(google_model2, google_test[, c(1:2, 4:8)])
pred_results2 <- google_pred2$net.result
cor(pred_results2, google_test$RealEstate)
## [,1]
## [1,] 0.9873705
plot_ly() %>%
add_markers(x=pred_results2, y=google_test$RealEstate,
name="Data Scatter", type="scatter", mode="markers") %>%
add_trace(x = c(0,1), y = c(0,1), type="scatter", mode="lines",
line = list(width = 4), name="Ideal Agreement") %>%
layout(title=paste0('Scatterplot (Normalized) Observed vs. Predicted Real Estate Values, Cor(Obs,Pred)=',
round(cor(pred_results2, google_test$RealEstate), 2)),
xaxis = list(title="NN (hidden=4) Real Estate Predictions"),
yaxis = list(title="(Normalized) Observed Real Estate"),
legend = list(orientation = 'h'))
We get an even higher correlation. This is almost an ideal result! The predicted and observed RealEstate indices have a strong linear relationship. Nevertheless, too many hidden nodes might sometimes decrease the correlation between predicted and observed values, which will be examined in the practice problems later in this chapter.
We observe an even lower Error by using three hidden layers, with \(4, 3, 3\) nodes, respectively. However, this enhanced neural network may complicate the interpretation of the results (or may overfit the network to intrinsic noise in the data).
google_model2 <- neuralnet(RealEstate~Unemployment+Rental+Mortgage+Jobs+Investing+DJI_Index+StdDJI, data=google_train, hidden = c(4,3,3))
google_pred2<-compute(google_model2, google_test[, c(1:2, 4:8)])
pred_results2<-google_pred2$net.result
cor(pred_results2, google_test$RealEstate)
## [,1]
## [1,] 0.9905499
# plot(google_model2)
plot_ly() %>%
add_markers(x=pred_results2, y=google_test$RealEstate,
name="Data Scatter", type="scatter", mode="markers") %>%
add_trace(x = c(0,1), y = c(0,1), type="scatter", mode="lines",
line = list(width = 4), name="Ideal Agreement") %>%
layout(title=paste0('Scatterplot (Normalized) Observed vs. Predicted Real Estate Values, Cor(Obs,Pred)=',
round(cor(pred_results2, google_test$RealEstate), 2)),
xaxis = list(title="NN (hidden=(4,3,3)) Real Estate Predictions"),
yaxis = list(title="(Normalized) Observed Real Estate"),
legend = list(orientation = 'h'))
Neural networks can be used at the interface of experimental, theoretical, computational and data sciences. Here is one powerful example, data science applications to string theory.
We will demonstrate the foundation of the neural network prediction to estimate a basic mathematical function, square-root, \(\sqrt {\ \ \ }: \mathbb{R}^+ \longrightarrow \mathbb{R}^+\). First, let’s generate and plot the data.
# generate random training data: 1,000 |X_i|, where X_i ~ Uniform (0,100) or perhaps ~ N(0,1)
rand_data <- abs(runif(1000, 0, 100))
# create a 2 column data-frame (input=data, output=sqrt_data)
sqrt_df <- data.frame(rand_data, sqrt_data=sqrt(rand_data))
# plot(rand_data, sqrt_df$sqrt_data)
s <- seq(from=0, to=100, length.out=1000)
plot_ly(x = ~s, y = ~sqrt(s), type="scatter", mode = "lines") %>%
layout(title='Square-root Function',
xaxis = list(title="Input (x)", scaleanchor="y"),
yaxis = list(title="Output (y=sqrt(x))", scaleanchor="x"),
legend = list(orientation = 'h'))
Next, fit the NN model.
# Train the neural net
set.seed(1234)
net.sqrt <- neuralnet(sqrt_data ~ rand_data, sqrt_df, hidden=10, threshold=0.1)
Examine the NN-prediction results on testing data.
# report the NN
# print(net.sqrt)
# generate testing data seq(from=0.1, to=N, step=0.1)
N <- 200 # out of range [100: 200] is also included in the testing!
test_data <- seq(0, N, 0.1); test_data_sqrt <- sqrt(test_data)
test_data.df <- data.frame(rand_data=test_data, sqrt_data=sqrt(test_data));
# try to predict the square-root values using 10 hidden nodes
# Compute or predict for test data input, test_data.df
pred_sqrt <- predict(net.sqrt, test_data.df)
# compute uses the trained neural net (net.sqrt),
# to estimate the square-roots of the testing data
# compare real (test_data_sqrt) and NN-predicted (pred_sqrt) square roots of test_data
# plot(pred_sqrt, test_data_sqrt, xlim=c(0, 12), ylim=c(0, 12));
# abline(0,1, col="red", lty=2)
# legend("bottomright", c("Pred vs. Actual SQRT", "Pred=Actual Line"), cex=0.8, lty=c(1,2),
# lwd=c(2,2),col=c("black","red"))
plot_ly(x = ~pred_sqrt[,1], y = ~test_data_sqrt, type = "scatter", mode="markers", name="scatter") %>%
add_trace(x = c(0,14), y = c(0,14), mode="lines", line = list(width = 4), name="Ideal Agreement") %>%
layout(title='Scatter Plot Predicted vs. Actual SQRT',
xaxis = list(title="NN Predicted", scaleanchor="y"),
yaxis = list(title="Actual Value (y=sqrt(x))", scaleanchor="x"),
legend = list(orientation = 'h'))
compare_df <-data.frame(pred_sqrt, test_data_sqrt); # compare_df
# plot(test_data, test_data_sqrt)
# lines(test_data, pred_sqrt, pch=22, col="red", lty=2)
# legend("bottomright", c("Actual SQRT","Predicted SQRT"),
# lty=c(1,2),lwd=c(2,2),col=c("black","red"))
plot_ly(x = ~test_data, y = ~test_data_sqrt, type="scatter", mode="lines", name="SQRT") %>%
add_trace(x = ~test_data, y = ~pred_sqrt, mode="markers", name="NN Model Prediction") %>%
layout(title='Predicted vs. Actual SQRT',
xaxis = list(title="Inputs"),
yaxis = list(title="Outputs (y=sqrt(x))"),
legend = list(orientation = 'h'))
We observe that the NN, net.sqrt, actually learns and predicts the square-root function quite well. Of course, everyone's results may vary, as we randomly generate the training data (rand_data) and the NN construction (net.sqrt) is also stochastic.
DSPA Appendix 12 (Stochastic Neural Networks - Restricted Boltzmann Machine (RBM)) includes a description of stochastic neural networks, and more specifically, Restricted Boltzmann Machines (RBMs). RBMs are used for tasks such as dimensionality reduction, classification, regression, collaborative filtering, feature learning, and generative modeling. Their foundations span mathematics, statistics, and physics, with each field contributing essential principles to the structure and function of RBMs. The appendix provides an example of training an RBM to predict the function \(f(x)=x-\sin(x)\).
In practice, NNs may be more useful as classifiers. Let's demonstrate this by using the Stock Market data. We label the samples according to their RealEstate categorization: those above the \(75^{th}\) percentile are labeled \(0\), those below the \(25^{th}\) percentile are labeled \(1\), and those in between are labeled \(2\). Note that even in the classification setting, the responses still must be numeric.
google_class = google_norm
id1 = which(google_class$RealEstate>quantile(google_class$RealEstate,0.75))
id2 = which(google_class$RealEstate<quantile(google_class$RealEstate,0.25))
id3 = setdiff(1:nrow(google_class),union(id1,id2))
google_class$RealEstate[id1]=0
google_class$RealEstate[id2]=1
google_class$RealEstate[id3]=2
summary(as.factor(google_class$RealEstate))
## 0 1 2
## 179 178 374
We divide the data into training and testing sets and generate three derived one-hot-encoding features (dummy variables), which correspond to the three RealEstate outcome labels (categories).
set.seed(2017)
train = sample(1:nrow(google_class),0.7*nrow(google_class))
google_tr = google_class[train,]
google_ts = google_class[-train,]
train_x = google_tr[,c(1:2,4:8)]
train_y = google_tr[,3]
colnames(train_x)
## [1] "Unemployment" "Rental" "Mortgage" "Jobs" "Investing"
## [6] "DJI_Index" "StdDJI"
test_x = google_ts[,c(1:2,4:8)]
test_y = google_ts[3]
train_y_ind = model.matrix(~factor(train_y)-1)
colnames(train_y_ind) = c("High","Median","Low")
train = cbind(train_x, train_y_ind)
We specify non-linear output and report intermediate results every \(5,000\) iterations.
nn_single = neuralnet(High+Median+Low~Unemployment+Rental+Mortgage+Jobs+Investing+DJI_Index+StdDJI,
data = train,
hidden=4,
linear.output=FALSE,
lifesign='full', lifesign.step=5000)
Below is the prediction function using this neural network model to forecast the RealEstate class label.
pred = function(nn, dat) {
# compute uses the trained neural net (nn=nn_single), and
# new testing data (dat=google_ts) to generate predictions (y_hat)
# compute returns a list containing:
# (1) neurons: a list of the neurons' output for each layer of the neural network, and
# (2) net.result: a matrix containing the overall result of the neural network.
yhat = compute(nn, dat)$net.result
# find the maximum in each row (1) in the net.result matrix
# to determine the first occurrence of a specific element in each row (1)
# we can use the apply function with which.max
yhat = apply(yhat, 1, which.max)-1
return(yhat)
}
mean(pred(nn_single, google_ts[,c(1:2,4:8)]) != as.factor(google_ts[,3]))
## [1] 0.01818182
Finally, report the confusion matrix illustrating the agreement/disagreement between the 3 observed RealEstate class labels and their (NN) predicted counterparts.
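The cross-tabulation itself is not shown in this excerpt; it was presumably generated along these lines, reusing the pred() helper defined above (predicted labels in the rows, observed labels in the columns).
table(pred(nn_single, google_ts[, c(1:2, 4:8)]), google_ts[, 3])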
##
## 0 1 2
## 0 51 0 0
## 1 0 54 2
## 2 2 0 111
Now let’s inspect the structure of the resulting Neural Network.
Similarly, we can change hidden to utilize multiple hidden layers; however, a more complicated model won't necessarily guarantee improved performance.
nn_single = neuralnet(High+Median+Low~Unemployment+Rental+Mortgage+Jobs+Investing+DJI_Index+StdDJI,
data = train,
hidden=c(4,5),
linear.output=FALSE,
lifesign='full', lifesign.step=5000)
mean(pred(nn_single, google_ts[,c(1:2,4:8)]) != as.factor(google_ts[,3]))
## [1] 0.02727273
Recall that in Chapter 5 we presented lazy machine learning methods, which assign class labels using geometrical distances of different features. In multidimensional feature spaces, we can utilize spheres, centered according to the training dataset, to assign testing data labels. What kinds of shapes may be embedded in \(nD\) space to help with the classification process?
The easiest shape would be an \((n-1)D\) plane embedded in \(nD\), which splits the entire space into two parts. Support Vector Machines (SVM) can use hyperplanes to split the data into separate groups, or classes. Obviously, this may be useful for datasets that are linearly separable.
As an example, consider lines, i.e., \((n-1)D\) planes embedded in \(\mathbb{R}^2\). Assuming that we have only two features, would you choose line \(A\) or line \(B\) as the better hyperplane separating the data? Or even another plane \(C\)?
To answer the above question, we need to search for the Maximum Margin Hyperplane (MMH), that is, the hyperplane that creates the greatest separation between the two closest observations. We define support vectors as the points from each class that are closest to the MMH; each class must have at least one observation serving as a support vector. Identifying the support vectors alone is not sufficient to find the MMH. Although some mathematical calculations are involved, the fundamentals of the SVM process are fairly simple. Let's look at linearly separable data and non-linearly separable data separately.
If the dataset is linearly separable, we can find the outer boundaries of our two groups of data points. These boundaries are called convex hulls (red lines in the following graph). The MMH line (orange solid) is just the line that is perpendicular to the shortest line (green dash) between the two convex hulls.
Mind the difference between convex hull and concave hull of a set of points.
# install.packages("alphahull")
library(alphahull)
# Define a convex spline polygon function
# Input 'boundaryVertices' (n * 2 matrix) include the ordered X,Y coordinates of the boundary vertices
# 'vertexNumber' is the number of spline vertices to generate; it must be at least dim(boundaryVertices)[1], and the wrapped end vertices are clipped
# 'k' controls the smoothness of the periodic spline, i.e., the number of points to wrap around the ends
# Returns an array of points
convexSplinePolygon <- function(boundaryVertices, vertexNumber, k=3)
{
# Wrap k vertices around each end.
n <- dim(boundaryVertices)[1]
if (vertexNumber < n) {
stop("vertexNumber must be at least the number of boundary vertices")
}
if (k >= 1) {
data <- rbind(boundaryVertices[(n-k+1):n, ], boundaryVertices, boundaryVertices[1:k, ])
} else {
data <- boundaryVertices
}
# Spline-interpolate the x and y coordinates
data.spline <- spline(1:(n+2*k), data[ , 1], n=vertexNumber)
x <- data.spline$x
x1 <- data.spline$y
x2 <- spline(1:(n+2*k), data[,2], n=vertexNumber)$y
# Keep only the middle part
cbind(x1, x2)[k < x & x <= n+k, ]
}
# install.packages("alphahull")
# Concave hull (alpha-convex hull)
group1 <- list(x=A[6:9], y=B[6:9])
# if duplicate points are expected, remove them to prevent ahull() function errors
group2 <- lapply(group1, "[" ,which(!duplicated(as.matrix(as.data.frame(group1)))))
concaveHull1 <- ahull(group2, alpha=6)
# plot(concaveHull1, add=FALSE, col="blue", wpoints=FALSE, xlim=c(0,10),ylim=c(0,10))
# points(group2, pch=19)
library(alphahull)
# Convex hull
group3 <- list(x=A[1:5], y=B[1:5])
# points(group3, pch=19)
convHull2 <- lapply(group3, "[", chull(group3))
# polygon(convHull2, lty=2, border="gray", lwd=2)
# polygon(convexSplinePolygon(as.matrix(as.data.frame(convHull2)), 100),border="red",lwd=2)
# legend("topleft", c("Convex Hull", "Convex Spline Hull", "Concave Hull"), lty=c(2,1,1), lwd=c(2,2,2),col=c("gray","red", "blue"), cex=0.8)
# text(5,2, "group 2", col="red"); text(8,6, "group 1", col="blue")
library(sp)
SpP = SpatialPolygons(list(Polygons(list(Polygon(group2)),ID="s1")))
# plot(SpP)
# points(XY)
x1 <- SpP@polygons[[1]]@Polygons[[1]]@coords[,1]
y1 <- SpP@polygons[[1]]@Polygons[[1]]@coords[,2]
df1 <- convexSplinePolygon(as.matrix(as.data.frame(convHull2)), 100)
plot_ly() %>%
add_trace(x=df1[,1], y=df1[,2], type="scatter", mode="lines", name="Convex Hull", line=list(color="lightblue")) %>%
add_lines(x = x1, y = y1, type="scatter", mode="lines", name="Concave Region", line=list(color="orange")) %>%
add_segments(x = df1[1,1], xend=df1[dim(df1)[1],1], y = df1[1,2], yend = df1[dim(df1)[1],2], type="scatter",
mode="lines", name="", line=list(color="gray"), showlegend=F) %>%
add_segments(x = x1[3], xend=x1[4], y = y1[3], yend = y1[4], type="scatter",
mode="lines", name="Concave Region", line=list(color="orange"), showlegend=F) %>%
add_lines(x = c(6,4), y = c(8,5), name="Shortest Line Between the Convex Clusters (A)", line=list(dash='dash')) %>%
add_lines(x = c(10,2), y = c(3,8.7), mode="lines", name="MMH Line (B)") %>%
add_segments(x=1, xend=4, y=1, yend = 5, line=list(color="gray", dash='dash'), showlegend=F) %>%
add_segments(x=1, xend=4, y=1, yend = 3, line=list(color="gray", dash='dash'), showlegend=F) %>%
add_segments(x=4, xend=4, y=3, yend = 5, line=list(color="gray", dash='dash'), showlegend=F) %>%
add_segments(x=6, xend=10, y=8, yend = 7, line=list(color="gray", dash='dash'), showlegend=F) %>%
add_segments(x=6, xend=10, y=8, yend = 7, line=list(color="gray", dash='dash'), showlegend=F) %>%
add_segments(x=10, xend=9, y=7, yend = 10, line=list(color="gray", dash='dash'), showlegend=F) %>%
add_markers(x = A, y = B, type="scatter", mode="markers", name="Data", marker=list(color="blue")) %>%
add_markers(x = 4, y = 5, name="P1",
marker = list(size = 20, color = 'blue', line = list(color = 'yellow', width = 2))) %>%
add_segments(x=6, xend=9, y=8, yend = 10, line=list(color="gray", dash='dash'), showlegend=F) %>%
add_markers(x = 6, y = 8, name="P2",
marker = list(size = 20, color = 'blue', line = list(color = 'yellow', width = 2))) %>%
# add_lines(x = df1[,1], y = df1[,2], type="scatter", mode="lines", name="Convex Hull") %>%
layout(title="Illustration of Hyperplane (line) Separation of 2D Data",
xaxis=list(title="X", scaleanchor="y"), # control the y:x axes aspect ratio
yaxis = list(title="Y", scaleanchor = "x"), legend = list(orientation = 'h'),
annotations = list(text=modelLabels, x=modelLabels.x, y=modelLabels.y, textangle=c(-40,0),
font=list(size=15, color=modelLabels.col), showarrow=FALSE))
# # extract the row numbers of the boundary points, in convex order.
# indx=concaveHull1$arcs[,"end1"]
# points <- df[indx,2:3] # extract the boundary points from df
# points <- rbind(points,points[1,]) # add the closing point
# # create the SpatialPolygonsDataFrame
# SpP = SpatialPolygons(list(Polygons(list(Polygon(points)),ID="s1")))
# plot(SpP)
# plot(SpP@polygons[[1]]@Polygons[[1]]@coords[,1], SpP@polygons[[1]]@Polygons[[1]]@coords[,2])
An alternative way to linearly separate the data into (two) clusters is to find two parallel planes that can separate the data into two groups, and then increase the distance between the two planes as much as possible.
We can use vector notation to mathematically define planes. In \(n\)-dimensional space, a plane can be expressed by the following equation: \[\vec{w}\cdot\vec{x}+b=0,\] where \(\vec{w}\) (weights) is the plane's normal vector, \(\vec{x}\) is the vector of unknowns (both have \(n\) coordinates), and \(b\) is a constant scalar offset that, together with \(\vec{w}\), completely determines the plane.
To clarify this notation, let's look at the situation in 3D space, where we can express (embed) 2D Euclidean planes using a point \((x_o,y_o,z_o)\) and normal-vector \((a,b,c)\) form. This is just a linear equation, where \(d=-(ax_o + by_o + cz_o)\): \[ax + by + cz + d = 0,\] or equivalently \[w_1x_1+w_2x_2+w_3x_3+b=0.\] We can see that this is equivalent to the vector notation above.
Using the vector notation, we can specify two hyperplanes as follows: \[\vec{w}\cdot\vec{x}+b\geq+1\] and \[\vec{w}\cdot\vec{x}+b\leq-1\] We require that all of the observations in the first class fall above the first plane and all observations in the other class fall below the second plane.
The distance between two planes is calculated as: \[\frac{2}{\lVert \vec{w}\rVert}\] where \(\lVert \cdot \rVert\) is the Euclidean norm. To maximize the distance, we need to minimize the Euclidean norm.
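For completeness (this short justification is not in the original text), take any \(\vec{x}_1\) on the plane \(\vec{w}\cdot\vec{x}+b=+1\) and any \(\vec{x}_2\) on \(\vec{w}\cdot\vec{x}+b=-1\); projecting their difference onto the unit normal \(\vec{w}/\lVert\vec{w}\rVert\) gives the margin width:
\[\frac{\vec{w}}{\lVert \vec{w}\rVert}\cdot(\vec{x}_1-\vec{x}_2)=\frac{(\vec{w}\cdot\vec{x}_1+b)-(\vec{w}\cdot\vec{x}_2+b)}{\lVert \vec{w}\rVert}=\frac{1-(-1)}{\lVert \vec{w}\rVert}=\frac{2}{\lVert \vec{w}\rVert}.\]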
To sum up, we are going to find \(\min\frac{\lVert \vec{w}\rVert}{2}\) subject to the following constraint
\[y_i(\vec{w}\cdot\vec{x_i}-b)\geq 1, \ \ 1\leq i\leq n,\] where for each data point index \(i\), \(y_i= \pm1\) corresponds to \(\vec{w}\cdot \vec{x}_{i} - b \geq 1\) and \(\vec{w}\cdot \vec{x}_{i} - b \leq -1\), respectively.
We will see more about constrained and unconstrained optimization later in Chapter 13. For each nonlinear programming problem, the primal problem, there is a related nonlinear programming problem known as the Lagrangian dual problem. Under certain convexity assumptions and suitable constraint qualifications, the primal and dual problems have equal optimal objective values. Primal optimization problems are typically described as:
\[\begin{array}{rcl} \min_x{f(x)} \\ \text{subject to} \\ g_i(x) \leq 0\\ h_j(x) = 0 \\ \end{array}.\]
Then the Lagrangian dual problem is defined as a parallel nonlinear programming problem
\[\begin{array}{rcl} & \min_{u,v}{\theta(u,v)} & \\ & \text{subject to} & \\ & u \geq 0 & \\ \end{array},\] where \[ \theta(u,v)= \inf_{x}{ \left ( f(x)+\displaystyle\sum_i {u_i g_i(x)} +\displaystyle\sum_j {v_j h_j(x)} \right )}.\]
Chapter 13 provides additional technical details about optimization duality.
Suppose the Lagrange primal is \[L_p = \frac{1}{2}||w||^2-\sum_{i=1}^{n}\alpha_i[y_i(b+x_i^{t}w)-1],\ \text{where}\ \alpha_i\geq 0.\]
To optimize that objective function, we can set the partial derivatives equal to zero:
\[\frac{\partial}{\partial w}\ :\ w = \sum_{i=1}^{n}\alpha_iy_ix_i\]
\[\frac{\partial}{\partial b}\ :\ 0 = \sum_{i=1}^{n}\alpha_iy_i.\]
Substituting into the Lagrange primal, we obtain the Lagrange dual:
\[L_D = \sum _{i=1}^{n}\alpha_i-{\frac {1}{2}}\sum _{i=1}^{n}\sum _{j=1}^{n}\alpha_{i}\alpha_{j}y_{i}y_{j}(x_{i}\cdot x_{j}).\]
Then, we maximize \(L_D\) subject to \(\alpha_i \geq 0\) and \(\sum_{i=1}^{n}\alpha_iy_i =0\). For each \(i\in \{1,\,\cdots ,\,n\}\), this iterative optimization adjusts the coefficient \(\alpha_{i}\) in the direction of \(\frac{\partial L_D}{\partial \alpha_{i}}\). The resulting coefficient vector \((\alpha_{1},\cdots ,\alpha_{n})\) is then projected onto the nearest vector of coefficients that satisfies the given constraints. Repeating this process drives the coefficient vector to a local optimum.
By the Karush-Kuhn-Tucker (KKT) optimality conditions, we have \(\hat\alpha_i[y_i(\hat{b}+x_i^t\hat{w})-1]=0.\)
This implies that if \(y_i \hat{f}(x_i)>1\), then \(\hat{\alpha}_i=0\).
The support of a function (\(f(x_i)=\hat{b}+x_i^t\hat{w}\)) is the smallest subset of the domain containing only arguments (\(x\)) which are not mapped to zero (\(f(x)\not=0\)). In our case, the solution \(\hat{w}\) is defined in terms of a linear combination of the support points:
\[\hat{f}(x)=\hat{w}^t x, \ \ \text{where}\ \ \hat{w} = \sum_{i=1}^{n}\hat{\alpha}_i y_i x_i. \]
That’s where the name of Support Vector Machines (SVM) comes from.
For non-linearly separable data, we can employ the kernel trick to linearize the problem in a higher dimensional space (discussed below). Alternatively, we can still use a separating hyperplane but allow some of the points to be misclassified into the wrong class (a soft margin). To penalize misclassification, we add an error (slack) term to the norm (regularization) term and then minimize the additive mixture of the two terms.
Therefore, the solution will optimize the following regularized objective (cost) function: \[\min \left (\frac{\lVert \vec{w}\rVert}{2} +C\sum_{i=1}^{n} \xi_i \right )\] \[\text{subject to}\] \[y_i(\vec{w}\cdot\vec{x}_i-b)\geq 1-\xi_i, \, \forall\vec{x}_i, \, \xi_i\geq0,\] where the hyperparameter \(C\) controls the misclassification (error-term) penalty and \(\xi_i\) measures how far the misclassified observation \(i\) falls on the wrong side of its margin plane.
We have the following Lagrange primal problem: \[L_p = \frac{1}{2}||w||^2 + C\sum_{i=1}^{n}\xi_i-\sum_{i=1}^{n}\alpha_i[y_i(b+x_i^{t}w)-(1-\xi_i)] - \sum_{i=1}^{n}\gamma_i\xi_i,\] where \[\alpha_i,\gamma_i \geq 0.\]
Similar to what we did earlier in the linearly separable case, we can use the derivatives of the primal problem to solve the dual problem.
Notice the inner product in the final expression. We can replace this inner product with a kernel function that maps the feature space into a higher dimensional space (e.g., using a polynomial kernel) or an infinite dimensional space (e.g., using a Gaussian kernel).
An alternative way to handle non-linearly separable data is the kernel trick, i.e., adding new dimensions (derived features) so that the non-linearly separable data become separable in a higher dimensional space.
The solution of the quadratic optimization problem in this case involves regularized objective function: \[\min_{w, b} \left (\frac{\lVert \vec{w}\rVert}{2} +C\sum_{i=1}^{n} \xi_i \right ),\] \[\text{subject to}\] \[y_i(\vec{w}\cdot\phi (\vec{x}_i)-b)\geq 1 -\xi_i, \, \forall\vec{x}_i, \, \xi_i\geq 0.\] Again, the hyperparameter \(C\) controls the regularization penalty and \(\xi_i\) are the slack variables introduced by lifting the initial low-dimensional (non-linear) problem to a new higher dimensional linear problem. The quadratic optimization of this (primal) higher-dimensional problem is similarly transformed into a Lagrangian dual problem:
\[L_p = \max_{\alpha} \min_{w, b} \left \{\frac{1}{2}||w||^2 + \sum_{i=1}^{n}\alpha_i \left ( 1- y_i\left (w^T\phi(\vec{x}_i) +b \right )\right )\right \},\] where \[0\leq \alpha_i \leq C,\ \forall i.\]
The solution to the Lagrange dual problem provides estimates of \(\alpha_i\) and we can predict the class label of a new sample \(x_{test}\) via:
\[y_{test}={\text{sign}}\left (w^t \phi (x_{test})+b\right )= {\text{sign}}\left ( \sum_{i=1}^{n} \alpha_i y_i \ \underbrace{\phi(\vec{x}_{i})^t \phi(\vec{x}_{test})}_{kernel,\ K(\vec{x}_i,\vec{x}_{test})=\phi(\vec{x}_{i})\cdot \phi(\vec{x}_{test})} +b \right ).\]
Below is one example where the 2D data (mtcars, \(n=32, k=10\), cars fuel consumption) doesn't appear linearly separable in its native 2D (\(weight\times horsepower\)) space; the binary colors correspond to V-shaped or straight engine type.
library(plotly)
mtcars$vs[which(mtcars$vs == 0)] <- 'V-Shaped Engine'
mtcars$vs[which(mtcars$vs == 1)] <- 'Straight Engine'
mtcars$vs <- as.factor(mtcars$vs)
p_2D <- plot_ly(mtcars, x = ~wt, y = ~hp/10, color = ~vs, colors = c('blue', 'red'), name=~vs) %>%
add_markers() %>%
add_segments(x = 1, xend = 6, y = 8, yend = 18, colors="gray", opacity=0.2,
showlegend = FALSE) %>%
layout(xaxis = list(title = 'Weight'), yaxis = list(title = 'Horsepower'), legend = list(orientation = 'h'),
title="(mtcars) Automobile Weight vs. Horsepower Relation") %>% hide_colorbar()
p_2D
However, the data can be lifted in 3D where it is more clearly linearly separable (by engine type) via a 2D plane.
# library(plotly)
# p_3D <- plot_ly(mtcars, x = ~wt, y = ~hp, z = ~qsec, color = ~vs, colors = c('blue', 'red')) %>%
# add_markers() %>%
# layout(scene = list(xaxis = list(title = 'Weight'),
# yaxis = list(title = 'Horsepower'),
# zaxis = list(title = '1/4 mile time')))
#p_3D
# Compute the Normal to the 2D PC plane
normVec = c(1, 1.3, -3.0)
# Compute the 3D point of gravitational balance (Plane has to go through it)
dMean <- c(3.2, -280, 2)
d <- as.numeric((-1)*normVec %*% dMean) # force the plane to go through the mean
x=mtcars$wt; y=mtcars$hp; z=mtcars$qsec; w=mtcars$vs # define the x, y, z dimensions
w.col = ifelse(mtcars$vs=="Straight Engine", "blue", "red")
w.name = ifelse(mtcars$vs=="Straight Engine", "Straight", "V-shape")
# Reparametrize the 2D (x,y) grid, and define the corresponding model values z on the grid. Recall z=-(d + ax+by)/c, where normVec=(a,b,c)
x.seq <- seq(min(x),max(x),length.out=100)
y.seq <- seq(min(y),max(y),length.out=100)
z.seq <- function(x,y) -(d + normVec[1]*x + normVec[2]*y)/normVec[3]
# define the values of z = z(x.seq, y.seq), as a Matrix of dimension c(dim(x.seq), dim(y.seq))
z1 <- t(outer(x.seq, y.seq, z.seq))/10; range(z1) # we need to check this 10 correction, to ensure the range of z is appropriate!!!
## [1] 14.53043 26.92413
# Draw the 2D plane embedded in 3D, and then add points with "add_trace"
myPlotly <- plot_ly(x=~x.seq, y=~y.seq, z=~z1,
colors = "gray", type="surface", opacity=0.5, showlegend = FALSE) %>%
add_trace(data=mtcars, x=x, y=y, z=mtcars$qsec, mode="markers", type="scatter3d",
marker = list(color=w.col, opacity=0.9, symbol=105)) %>%
layout(showlegend = FALSE, scene = list(
aspectmode = "manual", aspectratio = list(x=1, y=1, z=1),
xaxis = list(title = "Weight", range = c(min(x),max(x))),
yaxis = list(title = "Horsepower", range = c(min(y),max(y))),
zaxis = list(title = "1/4 mile time", range = c(14, 23)))
) %>% hide_colorbar()
myPlotly
How can we do that in practice? We transform our data using kernel functions. A general form for kernel functions would be: \[K(\vec{x_i}, \vec{x_j})=\phi(\vec{x_i})\cdot\phi(\vec{x_j}),\] where \(\phi\) is a mapping of the data into another space.
The linear kernel is the simplest one; it is just the dot product of the features. \[K(\vec{x_i}, \vec{x_j})=\vec{x_i}\cdot\vec{x_j}.\] The polynomial kernel of degree d transforms the data by adding a simple non-linear transformation. \[K(\vec{x_i}, \vec{x_j})=(\vec{x_i}\cdot\vec{x_j}+1)^d.\] The sigmoid kernel is very similar to the neural network approach; it uses a sigmoid activation function. \[K(\vec{x_i}, \vec{x_j})=\tanh(k\vec{x_i}\cdot\vec{x_j}-\delta).\] The Gaussian radial basis function (RBF) kernel is similar to an RBF neural network and is, in general, a good place to start. \[K(\vec{x_i}, \vec{x_j})=\exp \left (\frac{-\lVert \vec{x_i}-\vec{x_j}\rVert^2}{2\sigma^2}\right ) .\]
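As a quick numeric illustration of these kernel formulas (the two points and the kernel parameters below are arbitrary choices, not taken from the original text):
x_i <- c(1, 2); x_j <- c(3, 0.5)                    # two arbitrary 2D feature vectors
k_linear  <- sum(x_i * x_j)                         # linear kernel: x_i . x_j
k_poly2   <- (sum(x_i * x_j) + 1)^2                 # polynomial kernel, degree d = 2
k_sigmoid <- tanh(0.5 * sum(x_i * x_j) - 1)         # sigmoid kernel, k = 0.5, delta = 1 (assumed)
k_rbf     <- exp(-sum((x_i - x_j)^2) / (2 * 1^2))   # Gaussian RBF kernel, sigma = 1 (assumed)
c(linear = k_linear, poly_d2 = k_poly2, sigmoid = k_sigmoid, rbf = k_rbf)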
In Chapter 4 we saw machine learning strategies for hand-written digit recognition. We now want to expand that to character recognition. The following example illustrates managing handwritten notes (unstructured image data) and converting them to typeset or printed text representing the characters in the original notes.
Protocol:
In this example, we use an optical document image (data) that has already been pre-partitioned into rectangular grid cells containing 1 character of the 26 English letters, A through Z.
The resulting gridded dataset is distributed by the UCI Machine Learning Data Repository. The dataset contains 20,000 examples of the 26 English capital letters printed using 20 different randomly reshaped and morphed fonts.
Load the data and split it into training and testing sets.
# read in data and examine its structure
hand_letters <- read.csv("https://umich.instructure.com/files/2837863/download?download_frd=1", header = T)
str(hand_letters)
## 'data.frame': 20000 obs. of 17 variables:
## $ letter: chr "T" "I" "D" "N" ...
## $ xbox : int 2 5 4 7 2 4 4 1 2 11 ...
## $ ybox : int 8 12 11 11 1 11 2 1 2 15 ...
## $ width : int 3 3 6 6 3 5 5 3 4 13 ...
## $ height: int 5 7 8 6 1 8 4 2 4 9 ...
## $ onpix : int 1 2 6 3 1 3 4 1 2 7 ...
## $ xbar : int 8 10 10 5 8 8 8 8 10 13 ...
## $ ybar : int 13 5 6 9 6 8 7 2 6 2 ...
## $ x2bar : int 0 5 2 4 6 6 6 2 2 6 ...
## $ y2bar : int 6 4 6 6 6 9 6 2 6 2 ...
## $ xybar : int 6 13 10 4 6 5 7 8 12 12 ...
## $ x2ybar: int 10 3 3 4 5 6 6 2 4 1 ...
## $ xy2bar: int 8 9 7 10 9 6 6 8 8 9 ...
## $ xedge : int 0 2 3 6 1 0 2 1 1 8 ...
## $ xedgey: int 8 8 7 10 7 8 8 6 6 1 ...
## $ yedge : int 0 4 3 2 5 9 7 2 1 1 ...
## $ yedgex: int 8 10 9 8 10 7 10 7 7 8 ...
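The training/testing split used below is not included in this excerpt. A plausible reconstruction (an assumption, based on the 5,000 test cases tallied later) keeps the first 15,000 letters for training and the last 5,000 for testing:
hand_letters_train <- hand_letters[1:15000, ]     # training set (assumed split)
hand_letters_test  <- hand_letters[15001:20000, ] # testing set (assumed split)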
We can specify vanilladot as a linear kernel, or alternatively:
rbfdot - Radial Basis kernel, i.e., "Gaussian"
polydot - Polynomial kernel
tanhdot - Hyperbolic tangent kernel
laplacedot - Laplacian kernel
besseldot - Bessel kernel
anovadot - ANOVA RBF kernel
splinedot - Spline kernel
stringdot - String kernel
# begin by training a simple linear SVM
library(kernlab)
set.seed(123)
hand_letter_classifier <- ksvm(as.factor(letter) ~ ., data = hand_letters_train, kernel = "vanilladot")
## Setting default kernel parameters
## Support Vector Machine object of class "ksvm"
##
## SV type: C-svc (classification)
## parameter : cost C = 1
##
## Linear (vanilla) kernel function.
##
## Number of Support Vectors : 6618
##
## Objective Function Value : -13.2947 -19.6051 -20.8982 -5.6651 -7.2092 -31.5151 -48.3253 -17.6236 -57.0476 -30.532 -15.7162 -31.49 -28.2706 -45.741 -11.7891 -33.3161 -28.2251 -16.5347 -13.2693 -30.88 -29.4259 -7.7099 -11.1685 -29.4289 -13.0857 -9.2631 -144.1105 -52.7747 -71.052 -109.7783 -158.3152 -51.2839 -39.6499 -67.0061 -23.8637 -27.6083 -26.3461 -35.2626 -38.6346 -116.8967 -173.8336 -214.2196 -20.7925 -10.3812 -53.1156 -12.228 -46.6132 -8.6867 -18.9108 -11.0535 -94.5751 -26.5689 -224.0215 -70.5714 -8.3232 -4.5265 -132.5431 -74.6876 -19.5742 -12.7352 -81.7894 -11.6983 -25.4835 -17.582 -23.934 -27.022 -50.7092 -10.9228 -4.3852 -13.7216 -3.8547 -3.5723 -8.419 -36.9773 -47.1418 -172.6874 -42.457 -44.0342 -42.7695 -13.0527 -16.7534 -78.7849 -101.8146 -32.1141 -30.3349 -104.0695 -32.1258 -24.6301 -32.6087 -17.0808 -5.1347 -40.5505 -6.684 -16.2962 -56.364 -147.3669 -49.0907 -37.8334 -32.8068 -73.248 -127.7819 -10.5342 -5.2495 -11.9568 -30.1631 -135.5915 -51.521 -176.2669 -99.0973 -10.295 -14.5906 -3.7822 -64.1452 -7.4813 -84.9109 -40.9146 -87.2437 -66.8629 -69.9932 -20.5294 -12.7577 -7.0328 -22.9219 -12.3975 -223.9411 -29.9969 -24.0552 -132.6252 -133.7033 -9.2959 -33.1873 -5.8016 -57.3392 -60.9046 -27.1766 -200.8554 -29.9334 -15.9359 -130.0183 -154.4587 -43.5779 -24.4852 -135.7896 -74.1531 -303.5043 -131.4741 -149.5403 -30.4917 -29.8086 -47.3454 -24.6204 -44.2792 -6.2064 -8.6708 -36.4412 -68.712 -179.7303 -44.7489 -84.8608 -136.6786 -569.3398 -113.0779 -138.435 -303.8556 -32.8011 -60.4546 -139.3525 -108.9841 -34.277 -64.9071 -38.6148 -7.5086 -204.222 -12.9572 -29.0252 -2.0352 -5.9916 -14.3706 -21.5773 -57.0064 -19.6546 -178.0543 -19.812 -4.145 -4.5318 -0.8101 -116.8649 -7.8269 -53.3445 -21.4812 -13.5066 -5.3881 -15.1061 -27.6061 -18.9239 -68.8104 -26.1223 -93.0231 -15.1693 -9.7999 -7.6137 -1.5301 -84.9531 -5.4551 -93.187 -93.4153 -43.8334 -23.6706 -59.1468 -22.0933 -47.8381 -219.9936 -39.5596 -47.2643 -34.0752 -20.2532 -11.239 -118.4152 -6.4126 -5.1846 -8.7272 -9.4584 -20.8522 -22.0878 -113.0806 -29.0912 -80.397 -29.6206 -13.7422 -8.9416 -3.0785 -79.842 -6.1869 -13.9663 -63.3665 -93.2067 -11.5593 -13.0449 -48.2558 -2.9343 -8.25 -76.4361 -33.5374 -109.112 -4.1731 -6.1978 -1.2664 -84.1287 -18.3054 -7.2209 -45.5509 -3.3567 -16.8612 -60.5094 -43.9956 -53.0592 -6.1407 -17.4499 -2.3741 -65.023 -102.1593 -103.4312 -23.1318 -17.3394 -50.6654 -31.4407 -57.6065 -19.6857 -5.2667 -4.1767 -55.8445 -30.92 -57.2396 -30.1101 -7.611 -47.7711 -12.1616 -19.1572 -53.5364 -3.8024 -53.124 -225.6075 -12.6791 -11.5852 -16.6614 -9.7186 -65.824 -16.3897 -42.3931 -50.513 -24.752 -14.513 -40.495 -16.5124 -57.1813 -4.7974 -5.2949 -81.7477 -3.272 -6.3448 -1.1259 -114.3256 -22.3232 -339.8619 -31.0491 -31.3872 -4.9625 -82.4936 -123.6225 -72.8463 -23.4836 -33.1608 -11.7133 -19.7607 -1.8599 -50.1148 -8.2868 -143.3592 -1.8508 -1.9699 -9.4175 -0.5202 -25.0654 -30.0489 -5.6248
## Training error : 0.129733
# predictions on testing dataset
hand_letter_predictions <- predict(hand_letter_classifier, hand_letters_test)
head(hand_letter_predictions)
## [1] C U K U E I
## Levels: A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
# table(hand_letter_predictions, hand_letters_test$letter)
# look only at agreements vs. disagreements
# construct a vector of TRUE/FALSE indicating correct/incorrect predictions
agreement <- hand_letter_predictions == hand_letters_test$letter # check if characters agree
table(agreement)
## agreement
## FALSE TRUE
## 780 4220
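The proportions reported next were presumably obtained with prop.table():
prop.table(table(agreement))   # proportion of incorrect/correct predictions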
## agreement
## FALSE TRUE
## 0.156 0.844
tab <- table(hand_letter_predictions, hand_letters_test$letter)
tab_df <- tidyr::spread(as.data.frame(tab), key = Var2, value = Freq)
sum(diag(table(hand_letter_predictions, hand_letters_test$letter)))
## [1] 4220
Replacing the vanilladot linear kernel with the rbfdot Radial Basis Function ("Gaussian") kernel may improve the OCR prediction.
hand_letter_classifier_rbf <- ksvm(as.factor(letter) ~ ., data = hand_letters_train, kernel = "rbfdot")
hand_letter_predictions_rbf <- predict(hand_letter_classifier_rbf, hand_letters_test)
agreement_rbf <- hand_letter_predictions_rbf == hand_letters_test$letter
table(agreement_rbf)
## agreement_rbf
## FALSE TRUE
## 361 4639
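Again, the proportions below were presumably computed with prop.table():
prop.table(table(agreement_rbf))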
## agreement_rbf
## FALSE TRUE
## 0.0722 0.9278
Note the improvement of the automated (SVM) classification accuracy (\(0.928\)) for rbfdot compared to the previous (vanilladot) result (\(0.844\)).
Let’s have another look at the iris data that we saw in Chapter 2.
SVM requires all features to be numeric and each feature has to be scaled into a relatively small interval. We are using Edgar Anderson’s Iris Data in R for this case study. This dataset measures the length and width of sepals and petals from three Iris flower species.
Let's load the data first. In this case study, we want to explore the variable Species.
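The loading and inspection code is omitted here; the output below is consistent with using the iris data frame built into R, e.g.:
data(iris)               # Edgar Anderson's Iris data (built into R)
str(iris)
table(iris$Species)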
## 'data.frame': 150 obs. of 5 variables:
## $ Sepal.Length: num 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ...
## $ Sepal.Width : num 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...
## $ Petal.Length: num 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...
## $ Petal.Width : num 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...
## $ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...
##
## setosa versicolor virginica
## 50 50 50
The data looks good. However, recall that we need fairly normalized data. We could normalize the data by hand. Luckily, the R package we are going to use will normalize the dataset automatically.
Now we can separate the training and testing datasets using the 75%-25% rule.
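The split itself is not displayed; a minimal sketch (the seed is an arbitrary assumption) that yields 112 training and 38 testing flowers, consistent with the confusion tables below:
set.seed(1234)                                         # arbitrary seed (assumption)
sub <- sample(nrow(iris), floor(nrow(iris) * 0.75))    # 75% random training indices
iris_train <- iris[sub, ]
iris_test  <- iris[-sub, ]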
Let’s first try a toy (iris data) example.
library(e1071)
iris.svm_1 <- svm(Species~Petal.Length+Petal.Width, data=iris_train,
kernel="linear", cost=1)
iris.svm_2 <- svm(Species~Petal.Length+Petal.Width, data=iris_train,
kernel="radial", cost=1)
par(mfrow=c(2,1))
plot(iris.svm_1, iris[,c(5,3,4)], symbolPalette = rainbow(4), color.palette = terrain.colors)
legend("center", "Linear")
We are going to use kernlab for this case study. However, other packages like e1071 and klaR are available if you are quite familiar with C++.
Let's break down the calling syntax of the ksvm() function:
m <- ksvm(target ~ predictors, data=mydata, kernel="rbfdot", C=1),
where target is the outcome, predictors are the features, data is the training data frame, kernel specifies the kernel function (the default is the Gaussian RBF kernel, rbfdot), and C is the cost-of-misclassification penalty. Let's install the package and test it with some real data.
# install.packages("kernlab")
library(kernlab)
iris_clas <- ksvm(Species~., data=iris_train, kernel="vanilladot")
## Setting default kernel parameters
## Support Vector Machine object of class "ksvm"
##
## SV type: C-svc (classification)
## parameter : cost C = 1
##
## Linear (vanilla) kernel function.
##
## Number of Support Vectors : 24
##
## Objective Function Value : -0.8784 -0.3342 -12.9635
## Training error : 0.026786
Here, we used all the variables other than Species in the dataset as predictors. In this model, we used the kernel vanilladot, a linear kernel, and obtained a training error of about 0.03.
Given any pre-fit model, we have already used the predict() function to make predictions. Here we have a factor outcome, so we need the command table() to show us how well the predictions and the actual data match.
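The prediction and cross-tabulation step is not shown in this excerpt; the table below is consistent with something like (the object name iris.pred is inferred from the printed table):
iris.pred <- predict(iris_clas, iris_test)    # predict Species on the test set
table(iris.pred, iris_test$Species)           # confusion matrix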
##
## iris.pred setosa versicolor virginica
## setosa 9 0 0
## versicolor 0 14 0
## virginica 0 3 12
We can see that only a few Iris versicolor flowers are misclassified as virginica; the species of the majority of the flowers are correctly identified.
To see the results more clearly, we can use the proportional table to show the agreements of the categories.
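The proportional table below was presumably generated along these lines:
agreement <- iris.pred == iris_test$Species   # TRUE when the prediction matches the observed species
prop.table(table(agreement))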
## agreement
## FALSE TRUE
## 0.07894737 0.92105263
Here == means "equal to". Over 90% of the predictions are correct. Nevertheless, is there any chance that we can improve the outcome? What if we try a Gaussian kernel?
The linear kernel is the simplest one, but usually not the best one. Let's try the RBF (Radial Basis "Gaussian" Function) kernel instead.
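The RBF model-fitting code is omitted in this excerpt; a sketch consistent with the output below (the object name iris_clas1 is an assumption; iris.pred1 is inferred from the printed table) is:
iris_clas1 <- ksvm(Species ~ ., data=iris_train, kernel="rbfdot")   # Gaussian RBF kernel
iris_clas1
iris.pred1 <- predict(iris_clas1, iris_test)
table(iris.pred1, iris_test$Species)
agreement <- iris.pred1 == iris_test$Species
prop.table(table(agreement))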
## Support Vector Machine object of class "ksvm"
##
## SV type: C-svc (classification)
## parameter : cost C = 1
##
## Gaussian Radial Basis kernel function.
## Hyperparameter : sigma = 0.812699787875342
##
## Number of Support Vectors : 51
##
## Objective Function Value : -4.2531 -4.7802 -16.0651
## Training error : 0.017857
##
## iris.pred1 setosa versicolor virginica
## setosa 9 0 0
## versicolor 0 15 0
## virginica 0 2 12
## agreement
## FALSE TRUE
## 0.05263158 0.94736842
The model performance did not drastically improve, compared to the previous linear kernel case (you might get slightly different results). This is because this Iris dataset has a mostly linear feature space separation. In practice, we could try alternative kernel functions and see which one fits the dataset the best.
We can tune the SVM using the tune.svm() function in the package e1071.
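The tuning call itself is not shown; it was presumably similar to the sketch below (the cost grid exp(-5:8) matches the one defined later in the CV-plot code, and the seed is an assumption):
library(e1071)
costs <- exp(-5:8)                                   # cost grid (reused in the CV plot below)
set.seed(2017)                                       # assumed seed
tune.svm(Species ~ ., kernel="radial", data=iris_train, cost=costs)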
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost
## 0.3678794
##
## - best performance: 0.03560606
Further, we can draw a cv plot to gauge the model performance:
# install.packages("sparsediscrim")
set.seed(2017)
# Install the Package sparsediscrim: https://cran.r-project.org/src/contrib/Archive/sparsediscrim/
# install.packages("corpcor", "bdsmatrix")
# install.packages("C:/Users/Dinov/Desktop/sparsediscrim_0.2.4.tar.gz", repos = NULL, type="source")
library(sparsediscrim)
library(reshape); library(ggplot2)
folds = cv_partition(iris$Species, num_folds = 5)
train_cv_error_svm = function(costC) {
#Train
ir.svm = svm(Species~., data=iris,
kernel="radial", cost=costC)
train_error = sum(ir.svm$fitted != iris$Species) / nrow(iris)
#Test
test_error = sum(predict(ir.svm, iris_test) != iris_test$Species) / nrow(iris_test)
#CV error
ire.cverr = sapply(folds, function(fold) {
svmcv = svm(Species~.,data = iris, kernel="radial", cost=costC, subset = fold$training)
svmpred = predict(svmcv, iris[fold$test,])
return(sum(svmpred != iris$Species[fold$test]) / length(fold$test))
})
cv_error = mean(ire.cverr)
return(c(train_error, cv_error, test_error))
}
costs = exp(-5:8)
ir_cost_errors = sapply(costs, function(cost) train_cv_error_svm(cost))
df_errs = data.frame(t(ir_cost_errors), costs)
colnames(df_errs) = c('Train', 'CV', 'Test', 'Logcost')
dataL <- melt(df_errs, id="Logcost")
# ggplot(dataL, aes_string(x="Logcost", y="value", colour="variable",
# group="variable", linetype="variable", shape="variable")) +
# geom_line(size=1) + labs(x = "Cost",
# y = "Classification error",
# colour="",group="",
# linetype="",shape="") + scale_x_log10()
plot_ly(dataL, x = ~log(Logcost), y = ~value, color = ~variable,
colors = c('blue', 'red', "green"), type="scatter", mode="lines") %>%
layout(xaxis = list(title = 'log(Cost)'), yaxis = list(title = 'Classifier Error'), legend = list(orientation = 'h'),
title="SVM CV-Plot of Model Performance (Iris Data)") %>% hide_colorbar()
Now, let’s attempt to improve the performance of a Gaussian kernel by tuning:
set.seed(2020)
gammas = exp(-5:5)
tune_g = tune.svm(Species~., kernel = "radial", data = iris_train, cost = costs, gamma = gammas)
tune_g
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## gamma cost
## 0.01831564 7.389056
##
## - best performance: 0.01818182
We observe that the model achieves a better prediction now.
iris.svm_g <- svm(Species~., data=iris_train,
kernel="radial", gamma=0.0183, cost=20)
table(iris_test$Species, predict(iris.svm_g, iris_test))
##
## setosa versicolor virginica
## setosa 9 0 0
## versicolor 0 15 2
## virginica 0 0 12
## agreement
## FALSE TRUE
## 0.05263158 0.94736842
Chapter 14 provides more details about neural networks and deep learning.
Meta-learning involves building and ensembling multiple learners relying either on a single or on multiple learning algorithms. Meta-learners combine the outputs of several techniques and report consensus results that are, in general, more reliable. For example, to decrease the variance (bagging) or the bias (boosting), random forest attempts, in two steps, to correct the general tendency of decision trees to overfit the training set: first, it builds many trees, each on a bootstrap sample of the data (typically also using a random subset of the features at each split); second, it aggregates the individual tree predictions by majority voting (classification) or averaging (regression).
Before stepping into the details, let’s briefly summarize:
Bagging (which stands for Bootstrap Aggregating) is a way to decrease the variance of a prediction by generating additional training data from the original dataset, sampling with replacement to produce multiple samples of the same cardinality/size as the original data. We can't expect to improve the model's predictive power simply by synthetically increasing the size of the training set; however, we may decrease the variance by narrowly tuning the prediction to the expected outcome.
Boosting is a two-step approach that aims to reduce bias in parameter estimation. First, we use subsets of the original data to produce a series of moderately performing models and then “boost” their performance by combining them together using a particular cost function (e.g., Accuracy). Unlike bagging, in classical boosting, the subset creation is not random and depends upon the performance of the previous models: every new subset contains the elements that were (likely to be) misclassified by previous models. Usually, when using boosting, we prefer weaker classifiers. For example, a prevalent choice is to use a stump (level-one decision tree) in AdaBoost (Adaptive Boosting).
One of the most well-known meta-learning methods is bootstrap aggregating, or bagging. It builds multiple models with bootstrap samples using a single algorithm. The models' predictions are combined with voting (for classification) or averaging (for numeric prediction). Voting means the bagging model's prediction is based on the majority of the learners' predictions for a class. Bagging is especially good with unstable learners like decision trees or SVM models.
To illustrate the Bagging method we will again use the Quality of Life and chronic disease dataset we saw earlier in Chapter 5. Just like we did in the second practice problem in this chapter, we will use CHARLSONSCORE as the class label, which has 11 different levels.
qol <- read.csv("https://umich.instructure.com/files/481332/download?download_frd=1")
qol <- qol[!qol$CHARLSONSCORE==-9 , -c(1, 2)]
qol$CHARLSONSCORE <- as.factor(qol$CHARLSONSCORE)
To apply bagging(), we need to install the ipred package first. After loading the package, we build a bagging model with CHARLSONSCORE as the class label and all other variables in the dataset as predictors. We can specify the number of voters (decision tree models we want to have), which defaults to 25.
# install.packages("ipred")
library(ipred)
set.seed(123)
mybag<-bagging(CHARLSONSCORE ~ ., data=qol, nbagg=25)
The result, mybag, is a complex class object that includes y (the vector of responses), X (the data frame of predictors), mtrees (a list of length nbagg containing the trees for each bootstrap sample), OOB (a logical indicating whether the out-of-bag estimate should be computed), err (if OOB=TRUE, the out-of-bag estimate of the misclassification error, the root mean squared error, or the Brier score for censored data), and comb (a logical indicating whether a combination of models was requested).
Now we will use the predict()
function to apply this
forecasting model. For evaluation purposes, we create a table to inspect
the re-substitution error.
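Although the exact call is not shown, the re-substitution table below can be generated along these lines (assuming the mybag model and the qol data frame defined above; the object name bt_pred is arbitrary):
bt_pred <- predict(mybag, qol)              # predict on the same data used for training
agreement <- bt_pred == qol$CHARLSONSCORE   # compare predictions to the true class labels
prop.table(table(agreement))                # proportion of (in)correctly re-labeled cases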
## agreement
## FALSE TRUE
## 0.00128866 0.99871134
This model works very well with its training data; it labeled 99.8% of the cases correctly. To evaluate its performance on testing data, we apply the caret train() function again, using 10 repeats of 10-fold cross-validation as the re-sampling method. In caret, the bagged trees method is called treebag.
library(caret)
## Loading required package: lattice
set.seed(123)
ctrl <- trainControl(method="repeatedcv", number = 10, repeats = 10)
train(CHARLSONSCORE ~ ., data=as.data.frame(qol), method="treebag", trControl=ctrl)
## Bagged CART
##
## 2328 samples
## 38 predictor
## 11 classes: '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10'
##
## No pre-processing
## Resampling: Cross-Validated (10 fold, repeated 10 times)
## Summary of sample sizes: 2094, 2097, 2096, 2098, 2096, 2095, ...
## Resampling results:
##
## Accuracy Kappa
## 0.5199428 0.2120838
Well, we obtained a rather marginal accuracy of 52% and a fair Kappa statistic (0.21). This result is better than the one we got earlier using the ksvm() function alone (\(\sim 50\%\)). Here we combined the predictions of multiple bagged decision trees (25 by default) to get this accuracy. It seems that we can't forecast CHARLSONSCORE very well; however, other QoL outcomes may have higher prediction accuracy. For instance, we may predict QOL_Q_01 with \(accuracy=0.6\) and \(\kappa=0.42\).
set.seed(123)
ctrl <- trainControl(method="repeatedcv", number = 10, repeats = 10)
train(as.factor(QOL_Q_01) ~ . , data=as.data.frame(qol), method="treebag", trControl=ctrl)
In addition to decision tree classification, caret allows us to explore alternative bag() functions. For instance, instead of bagging decision trees, we can bag SVM models. caret bundles the SVM training, prediction, and vote-aggregation functions in a list object called svmBag, which we can examine using the str() function.
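str(svmBag)   # list the three component functions: fit, pred, aggregate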
## List of 3
## $ fit :function (x, y, ...)
## $ pred :function (object, x)
## $ aggregate:function (x, type = "class")
Clearly, fit provides the training functionality, pred generates predictions and forecasts for new data, and aggregate combines the individual models to achieve a voting-based consensus. Using the member operator, the \(\$\) sign, we can explore each of these three elements of the svmBag object. For instance, the fit element may be extracted from the svmBag object by:
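svmBag$fit    # extract the model-fitting component (producing the function shown below)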
## function (x, y, ...)
## {
## loadNamespace("kernlab")
## out <- kernlab::ksvm(as.matrix(x), y, prob.model = is.factor(y),
## ...)
## out
## }
## <bytecode: 0x0000017e2c275f60>
## <environment: namespace:caret>
The SVM bag fit
relies on the
kernlab::ksvm()
function. The other two methods,
pred
and aggregate
, may be explored in a
similar way. They follow the SVM model building and testing process we
saw earlier.
This svmBag
object could be used as an optional setting
in the train()
function. However, this option requires that
all features are linearly independent, which may be rare in real world
data.
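As a rough illustration (not from the original text), a bagged SVM could be specified via caret's bagControl() and train(method="bag") interface; the number of bags B=10 and the 5-fold CV below are arbitrary illustrative choices, and the call is a sketch rather than a tuned analysis:
# Sketch: plug the svmBag fit/pred/aggregate functions into caret's bagging wrapper
library(caret)
svm_bag_ctrl <- bagControl(fit = svmBag$fit, predict = svmBag$pred, aggregate = svmBag$aggregate)
set.seed(123)
svm_bag_fit <- train(CHARLSONSCORE ~ ., data = qol, method = "bag",
                     B = 10, bagControl = svm_bag_ctrl,
                     trControl = trainControl(method = "cv", number = 5))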
Bagging uses equal weights for all learners we include in the model. Boosting is different as it employs non-uniform weights. Suppose we have the first learner correctly classifying 60% of the observations. This 60% of data may be less likely to be included in the training dataset for the next learner. So, we have more learners working on the remaining “hard-to-classify” observations.
Mathematically, the boosting technique uses a weighted sum of functions to predict the outcome class labels. We can try to approximate the true model by weighted additive modeling. We start with a simple learner that classifies some of the observations correctly, albeit with some errors.
\[\hat{y}_1=l_1.\]
This \(l_1\) is our first learner and \(\hat{y}_1\) denotes its predictions (this equation is in matrix form). Then, we can calculate the residuals of our first learner.
\[\epsilon_1=y-v_1\times\hat{y}_1,\] where
\(v_1\) is a shrinkage parameter to
avoid overfitting. Next, we fit the residual with another learner. This
learner minimizes the following objective function \(\sum_{i=1}^N||y_i-l_{k-1}-l_k||\). Here
k=2
. Then we obtain a second model \(l_2\) with:
\[\hat{y}_2=l_2.\]
After that, we can update the residuals:
\[\epsilon_2=\epsilon_1-v_2\times\hat{y}_2.\]
We repeat this residual fitting until adding another learner \(l_k\) results in updated residual \(\epsilon_k\) that is smaller than a small predefined threshold. In the end, we will have an additive model like:
\[L=v_1\times l_1+v_2\times l_2+...+v_k\times l_k,\]
where we ensemble k weak learners to generate a stronger meta model.
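To make the additive-model notation concrete, here is a minimal simulated example (not from the original text) of residual boosting for a numeric outcome, using shallow rpart trees as the weak learners \(l_k\) and a constant shrinkage \(v=0.1\); the data, shrinkage, and number of steps K are illustrative choices:
library(rpart)
set.seed(1)
x <- runif(200, 0, 10)
y <- sin(x) + rnorm(200, sd = 0.2)            # simulated data with a nonlinear signal
v <- 0.1; K <- 100                            # shrinkage and number of boosting steps
resid <- y                                    # initial residual: epsilon_0 = y
L_hat <- rep(0, length(y))                    # running additive prediction L
for (k in 1:K) {
  lk <- rpart(resid ~ x, data = data.frame(x = x, resid = resid),
              control = rpart.control(maxdepth = 2))    # weak learner l_k
  y_k <- predict(lk, newdata = data.frame(x = x))       # hat{y}_k
  L_hat <- L_hat + v * y_k                              # L <- L + v * l_k
  resid <- resid - v * y_k                              # epsilon_k = epsilon_{k-1} - v * hat{y}_k
}
cor(L_hat, y)                                 # fit of the boosted ensemble to the outcome
Adding more steps continues to shrink the residuals, mirroring the stopping rule described above.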
Schapire and Freund showed that, although the individual learners trained on the pilot observations may be very weak in isolation, boosting the collective power of all of them is expected to generate a model no worse than the best of the individual constituent models included in the boosting ensemble. Usually, the boosting results are considerably better than those of the top individual model. Although boosting can be used with almost any model, it is most commonly applied to decision trees.
Random forests, or decision tree forests, represent a class of bagging (ensemble) methods that focus on decision tree learners. Random forests (RF) are ensembles of a large number of (typically weak) classifiers, such as individual decision trees, that provide independent class predictions. For any testing set, the RF aggregates the predictions of all individual trees and pools the most likely class label, which becomes the RF prediction. If and when a large number of relatively independent tree models reach a consensus, i.e., a majority vote, this forecast is expected to outperform any specific individual decision tree model. Whereas many of the trees in the forest may yield wildly incorrect predictions, any consistent pattern emerging from the other trees suggests strong candidate predictions that pull the consensus in a uniform (correct) direction. The main assumptions of the random forest ensemble method are (1) the existence of an actual signal encoded in the data features that can guide the individual trees away from random guessing, and (2) that the individual trees in the forest are trained fairly independently, so their predictions have low correlations.
As RF relies on bagging to generate a meta ensemble for regression or classification, it averages a large number of weak (noisy), mostly independent, and approximately unbiased models. This average pooling naturally reduces the variance of the predictions. RF can be applied to any family of classifiers and is particularly useful with decision trees, which tend to capture complex high-dimensional interactions. When grown sufficiently deep, the trees have relatively low bias, but individual trees generate noisy predictions. The expectation of the RF average of \(B\) identically distributed trees is the same as the expectation of each of the constituent trees; hence, the bias of the bagged trees in the RF is the same as the bias of any one of the individual trees. However, averaging rapidly decreases the forecasting variance relative to any individual decision tree prediction. In boosting methods, by contrast, the decision trees are not independent; they are grown adaptively to reduce the bias.
Recall that the average of \(B\) independent random variables drawn from a distribution with variance \(\sigma^2\) has a much lower variance, \(\frac{\sigma^2}{B}\), which goes to zero as the sample size \(B\to\infty\). However, when the identically distributed variables are not independent, the variance of the arithmetic mean can be much larger. For instance, if the pairwise correlation of the variables is non-trivial, \(\rho>0\), the variance of the sampling distribution of the average remains of order \(O(\sigma^2)\). More specifically, in this case the variance is \[\left ( \rho +\frac{1-\rho}{B}\right ) \sigma^2\ \ \underbrace{\longrightarrow}_{B\to\infty}\ \ \rho\sigma^2 .\]
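A quick Monte Carlo check of this variance formula (an illustrative simulation with arbitrary choices \(B=50\), \(\rho=0.3\), \(\sigma=2\)):
# install.packages("MASS")  # if needed
B <- 50; rho <- 0.3; sigma <- 2
Sigma <- sigma^2 * (matrix(rho, B, B) + diag(1 - rho, B))    # equicorrelated covariance matrix
sims <- MASS::mvrnorm(n = 10000, mu = rep(0, B), Sigma = Sigma)
var(rowMeans(sims))                            # empirical variance of the average of B correlated variables
rho*sigma^2 + (1 - rho)*sigma^2/B              # theoretical value: (rho + (1-rho)/B) * sigma^2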
In other words, bagging dependent trees provides less benefit in reducing the variability of the average-pooled predictor. In RF, the prediction improvement is based on the variance reduction of bagging, which is enhanced by reducing the correlation between trees via a tree-growing process based on random selection of the input variables.
The core steps of a random forest approach, for regression or classification, are: draw a bootstrap sample of the data, grow a tree on it while restricting each split to a random subset of candidate features, and aggregate the trees' predictions.
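The following minimal R sketch (illustrative only, not the book's code) captures this loop; for brevity, the random feature subset is drawn once per tree rather than at every split as in a true RF, and simple_rf and predict_rf are hypothetical helper names:
library(rpart)
simple_rf <- function(formula, data, B = 100, m = floor(sqrt(ncol(data) - 1))) {
  response   <- all.vars(formula)[1]
  predictors <- setdiff(names(data), response)
  lapply(1:B, function(b) {
    boot <- data[sample(nrow(data), replace = TRUE), ]   # bootstrap sample of the rows
    vars <- sample(predictors, m)                        # random subset of m candidate features
    rpart(reformulate(vars, response), data = boot, method = "class")
  })
}
predict_rf <- function(trees, newdata) {                 # aggregate by majority vote
  votes <- sapply(trees, function(t) as.character(predict(t, newdata, type = "class")))
  apply(votes, 1, function(v) names(which.max(table(v))))
}
In practice, randomForest::randomForest(), used below, implements this loop (with feature sampling at every split) far more efficiently.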
The RF prediction given a new testing case \(x\) is given by:
\[\hat{C}_B^{RF}(x) = {{majority}\choose{vote}}\{ \hat{C}_b(x)\}. \]
At each iteration, the bootstrapped sample dataset affects the growth of each tree. Prior to deciding whether and how to split a leaf node, we randomly choose \(m\leq p\) features as candidate splitting variables. In general, \(1\leq m\sim \sqrt{p} \ll p\). Suppose the parameter vector \(\Omega_b\) characterizes the \(b\)-th random forest tree in terms of splitting variables, node cut-points, and terminal-node values. In the RF regression setting, growing \(B\) trees \(\{T(x|\Omega_b)\}_1^B\) yields the following random forest regression predictor
\[\hat{f}_B^{RF}(x)=\frac{1}{B}\sum_{b=1}^B {T(x|\Omega_b)} .\]
One approach to train and build random forests uses the randomForest::randomForest() method, which has the following invocation:
m <- randomForest(expression, data, ntree=500, mtry=sqrt(p))
Here, expression is the model formula, data is the training data frame, ntree is the number of trees to grow (500 by default), mtry is the number of candidate features considered at each split, and p stands for the total number of features in the data. Let's build a random forest using the Quality of Life dataset.
# install.packages("randomForest")
library(randomForest)
set.seed(123)
rf <- randomForest(as.factor(QOL_Q_01) ~ . , data=qol)
rf
##
## Call:
## randomForest(formula = as.factor(QOL_Q_01) ~ ., data = qol)
## Type of random forest: classification
## Number of trees: 500
## No. of variables tried at each split: 6
##
## OOB estimate of error rate: 38.4%
## Confusion matrix:
## 1 2 3 4 5 6 class.error
## 1 14 10 14 5 0 0 0.6744186
## 2 3 73 120 14 0 1 0.6540284
## 3 1 23 558 205 3 1 0.2945638
## 4 0 2 164 686 36 4 0.2309417
## 5 0 0 7 160 92 0 0.6447876
## 6 0 1 34 81 5 11 0.9166667
By default, the model contains 500 voting trees and tries 6 variables at each split. Its OOB (out-of-bag) error rate is about 38%, which corresponds to a moderate accuracy (62%). Note that the OOB error rate is not a re-substitution error. Next to the confusion matrix, we also see the class-specific error rates. All of these error rates are reasonable estimates of future performance on unseen data. So far, this is the best of all our models, although it is still not highly predictive of QOL_Q_01.
In addition to model building, the caret package also supports model evaluation and reports more detailed performance assessments. As usual, we need to specify a re-sampling method and a parameter grid. Let's use 10-fold CV re-sampling as an example. The grid for this model contains information about the mtry parameter (the only tuning parameter for random forest). Previously, we tried the default value \(\lfloor\sqrt{38}\rfloor=6\) (38 is the number of features). This time, we can compare multiple mtry values.
library(caret)
ctrl <- trainControl(method="cv", number=10)
grid_rf <- expand.grid(mtry=c(2, 4, 8, 16))
Next, we apply the train()
function with our
ctrl
and grid_rf
settings.
set.seed(123)
m_rf <- train(as.factor(QOL_Q_01) ~ ., data = qol, method = "rf",
metric = "Kappa", trControl = ctrl, tuneGrid = grid_rf)
m_rf
## Random Forest
##
## 2328 samples
## 38 predictor
## 6 classes: '1', '2', '3', '4', '5', '6'
##
## No pre-processing
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 2095, 2095, 2096, 2096, 2095, 2096, ...
## Resampling results across tuning parameters:
##
## mtry Accuracy Kappa
## 2 0.5760034 0.3465920
## 4 0.6039266 0.4004127
## 8 0.6125122 0.4197845
## 16 0.6180824 0.4372678
##
## Kappa was used to select the optimal model using the largest value.
## The final value used for the model was mtry = 16.
This call may take a while to complete. The result appears to be a good model; with mtry=16 we reach a moderately high accuracy (0.62) and a good kappa statistic (0.44). This is a good result for a meta-learner over 6 dispersed classes (table(as.factor(qol$QOL_Q_01))).
More examples of using randomForest()
and interpreting
its results are shown in Chapter 5.
We may achieve even higher accuracy using AdaBoost. Adaptive boosting (AdaBoost) can be used in conjunction with many other types of learning algorithms to improve their performance. The output of the other learning algorithms (‘weak learners’) is combined into a weighted sum that represents the final output of the boosted classifier. AdaBoost is adaptive in the sense that subsequent weak learners are tweaked in favor of those instances misclassified by the previous classifiers.
For binary outcomes, we can use the ada::ada() method, and for multiple classes (multinomial/polytomous outcomes) we can use the adabag package. The adabag::boosting() function lets us specify the boosting variant via the coeflearn argument. The two main types of adaptive boosting in common use are the AdaBoost.M1 algorithm, e.g., the Breiman and Freund variants, and Zhu's SAMME algorithm. The key parameter in the adabag::boosting() method is therefore coeflearn, which can be set to 'Breiman', 'Freund', or 'Zhu'.
The generalizations of AdaBoost for multiple classes (\(\geq 2\)) include AdaBoost.M1
(where individual trees are required to have an error \(\lt \frac{1}{2}\)) and SAMME
(where individual trees are required to have an error \(\lt 1-\frac{1}{nclasses}\)).
Let’s see some examples using these three alternative adaptive boosting methods:
# Prep the data
qol <- read.csv("https://umich.instructure.com/files/481332/download?download_frd=1")
qol <- qol[!qol$CHARLSONSCORE==-9 , -c(1, 2)]
qol$CHARLSONSCORE <- as.factor(qol$CHARLSONSCORE)
#qol$QOL_Q_01 <- as.factor(qol$QOL_Q_01)
qol <- qol[!qol$CHARLSONSCORE==-9 , -c(1, 2)]
qol$cd <- qol$CHRONICDISEASESCORE>1.497
qol$cd <- factor(qol$cd, levels=c(F, T), labels = c("minor_disease", "severe_disease"))
qol <- qol[!qol$CHRONICDISEASESCORE==-9, ]
# install.packages("ada"); install.packages("adabag")
library("ada"); library("adabag")
set.seed(123)
# qol_boost <- boosting(QOL_Q_01 ~ . , data=qol, mfinal = 100, coeflearn = 'Breiman')
# mean(qol_boost$class==qol$QOL_Q_01)
qol_boost <- boosting(cd ~ . , data=qol[, -37], mfinal = 100, coeflearn = 'Breiman')
mean(qol_boost$class==qol$cd)
## [1] 0.86621
set.seed(1234)
#qol_boost <- boosting(QOL_Q_01 ~ ., data=qol, mfinal = 100, coeflearn = 'Zhu')
#mean(qol_boost$class==qol$QOL_Q_01)
qol_boost <- boosting(cd ~ . , data=qol[, -37], mfinal = 100, coeflearn = 'Zhu')
mean(qol_boost$class==qol$cd)
## [1] 0.9378995
We observe that the Zhu (SAMME) approach achieves the best results, with average \(accuracy> 0.93\). Notice that the default method is AdaBoost.M1 with coeflearn='Breiman', and the number of boosting iterations is specified by the parameter mfinal.
Use the Google Trends data. Fit a neural network model with the Google Trends data we saw earlier, this time using Investing as the target and Unemployment, Rental, RealEstate, Mortgage, Jobs, DJI_Index, StdDJI as predictors, with 3 hidden nodes. Note: remember to change the columns you include in the test dataset when predicting. The number reported below is the correlation between the predicted and observed values.
google_model3 <- neuralnet(Investing~Unemployment+Rental+RealEstate+Mortgage+Jobs+DJI_Index+StdDJI, data=google_train, hidden = 3)
plot(google_model3)
google_pred3<-compute(google_model3, google_test[, c(1:5, 7:8)])
pred_results3<-google_pred3$net.result
cor(pred_results3, google_test$Investing)
## [,1]
## [1,] 0.8910658
You might get slightly different results since the weights are generated randomly.
Use the Quality of life and chronic disease dataset and the corresponding meta-data documentation, which we used in Chapter 5.
Let’s load the data first. In this case study, we want to use the
variable CHARLSONSCORE
as our target variable.
qol <- read.csv("https://umich.instructure.com/files/481332/download?download_frd=1")
featureLength <- dim(qol)[2]
str(qol[, c((featureLength-3):featureLength)])
## 'data.frame': 2356 obs. of 4 variables:
## $ TOS_Q_03 : int 4 4 4 4 4 4 4 4 4 4 ...
## $ TOS_Q_04 : int 5 5 5 5 5 5 5 5 5 5 ...
## $ CHARLSONSCORE : int 2 2 3 1 0 0 2 8 0 1 ...
## $ CHRONICDISEASESCORE: num 1.6 1.6 1.54 2.97 1.28 1.28 1.31 1.67 2.21 2.51 ...
Delete the first two columns (we don't need the ID variables) and the rows that have missing values in CHARLSONSCORE (where CHARLSONSCORE equals -9). The expression !qol$CHARLSONSCORE==-9 selects all rows with CHARLSONSCORE not equal to -9; the exclamation sign (!) indicates negation ("exclude"). Also, we need to convert the categorical variable CHARLSONSCORE into a factor.
qol <- qol[!qol$CHARLSONSCORE==-9 , -c(1, 2)]
qol$CHARLSONSCORE<-as.factor(qol$CHARLSONSCORE)
featureLength <- dim(qol)[2]
str(qol[, c((featureLength-3):featureLength)])
## 'data.frame': 2328 obs. of 4 variables:
## $ TOS_Q_03 : int 4 4 4 4 4 4 4 4 4 4 ...
## $ TOS_Q_04 : int 5 5 5 5 5 5 5 5 5 5 ...
## $ CHARLSONSCORE : Factor w/ 11 levels "0","1","2","3",..: 3 3 4 2 1 1 3 9 1 2 ...
## $ CHRONICDISEASESCORE: num 1.6 1.6 1.54 2.97 1.28 1.28 1.31 1.67 2.21 2.51 ...
Now the dataset is ready. First, separate the dataset into training and test sets using the 75%-25% rule. Then, build an SVM model using all other variables in the dataset as predictors. Try adding different costs of misclassification to the model: rather than the default C=1, we use C=2 and C=3 and observe how the model behaves. Here we utilize the radial basis kernel. Output for C=2.
sub <- sample(nrow(qol), floor(nrow(qol)*0.75))
qol_train <- qol[sub, ]
qol_test <- qol[-sub, ]
qol_clas2 <- ksvm(CHARLSONSCORE~., data=qol_train, kernel="rbfdot", C=2)
qol_clas2
## Support Vector Machine object of class "ksvm"
##
## SV type: C-svc (classification)
## parameter : cost C = 2
##
## Gaussian Radial Basis kernel function.
## Hyperparameter : sigma = 0.0181973085952074
##
## Number of Support Vectors : 1677
##
## Objective Function Value : -1740.813 -632.9186 -320.7258 -53.7773 -16.9526 -6.3754 -7.156 -30.4514 -23.0695 -6.6509 -674.9566 -339.5998 -56.4041 -17.214 -6.5786 -6.9729 -30.4611 -23.4473 -6.542 -295.8391 -52.2428 -16.705 -6.6857 -6.805 -28.8288 -22.4088 -6.374 -48.8078 -16.4169 -5.9778 -6.4675 -26.9683 -20.8392 -6.1615 -13.9802 -5.6421 -5.4961 -17.1575 -15.648 -5.8733 -4.7161 -4.8676 -10.2043 -8.3648 -4.5794 -3.0381 -4.5698 -4.6504 -2.947 -6.2255 -5.262 -4.0991 -12.7168 -5.4418 -4.9154
## Training error : 0.296678
qol.pred2 <- predict(qol_clas2, qol_test)
# table(qol.pred2, qol_test$CHARLSONSCORE)
agreement <- qol.pred2==qol_test$CHARLSONSCORE
prop.table(table(agreement))
## agreement
## FALSE TRUE
## 0.5223368 0.4776632
tab <- table(qol.pred2, qol_test$CHARLSONSCORE)
tab_df <- tidyr::spread(as.data.frame(tab), key = Var2, value = Freq)
Output for C=3
.
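The C=3 model is fit the same way as the C=2 case (its code is not shown in the text), e.g.:
qol_clas3 <- ksvm(CHARLSONSCORE ~ ., data=qol_train, kernel="rbfdot", C=3)
qol_clas3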
## Support Vector Machine object of class "ksvm"
##
## SV type: C-svc (classification)
## parameter : cost C = 3
##
## Gaussian Radial Basis kernel function.
## Hyperparameter : sigma = 0.0179889535817069
##
## Number of Support Vectors : 1672
##
## Objective Function Value : -2330.535 -855.864 -437.2106 -70.9967 -23.2139 -8.3772 -10.1266 -41.6795 -31.0578 -8.9981 -931.3888 -477.7364 -76.4402 -23.7998 -8.8366 -9.7188 -41.6991 -31.8914 -8.7547 -387.8451 -67.7297 -22.6842 -9.0726 -9.3431 -38.0991 -29.5964 -8.3788 -61.0287 -22.0583 -7.4899 -8.5863 -33.982 -26.2201 -7.8991 -16.808 -6.9375 -6.5236 -18.4361 -17.277 -7.2616 -5.6831 -5.6377 -10.5495 -8.4495 -5.0581 -3.0609 -5.2213 -4.9783 -2.9657 -8.0507 -5.9557 -4.3806 -13.7442 -6.3358 -5.3304
## Training error : 0.239977
qol.pred3 <- predict(qol_clas3, qol_test)
# table(qol.pred3, qol_test$CHARLSONSCORE)
agreement <- qol.pred3==qol_test$CHARLSONSCORE
prop.table(table(agreement))
## agreement
## FALSE TRUE
## 0.5120275 0.4879725
tab <- table(qol.pred3, qol_test$CHARLSONSCORE)
tab_df <- tidyr::spread(as.data.frame(tab), key = Var2, value = Freq)
Try to practice these techniques using other data from the list of our Case-Studies.