Assignment 2
Due Friday, October 21 at 11:59PM on Gradescope
Question 1
This problem relates to the QDA model, in which the observations within each class are drawn from a normal distribution with a class-specific mean vector and a class-specific covariance matrix. We consider the simple case where \(p=1\); i.e., there is only one feature. Suppose that we have \(K\) classes, and that if an observation belongs to the \(k\)-th class then \(X\) comes from a one-dimensional normal distribution, \(X \sim N\left(\mu_k, \sigma_k^2\right)\). Recall that the density function for the one-dimensional normal distribution is given in (4.16). Prove that in this case, the Bayes classifier is not linear. Argue that it is in fact quadratic.
Hint: For this problem, you should follow the arguments laid out in Section 4.4.1, but without making the assumption that \(\sigma_1^2=\ldots=\sigma_K^2\).
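For reference, (4.16) is the density of the one-dimensional normal distribution; with a class-specific variance it reads

\[
f_k(x)=\frac{1}{\sqrt{2 \pi} \sigma_k} \exp \left(-\frac{1}{2 \sigma_k^2}\left(x-\mu_k\right)^2\right).
\]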
Question 2
When the number of features \(p\) is large, there tends to be a deterioration in the performance of KNN and other local approaches that perform prediction using only observations that are near the test observation for which a prediction must be made. This phenomenon is known as the curse of dimensionality, and it ties into the fact that non-parametric approaches often perform poorly when \(p\) is large. We will now investigate this curse.
(a) Suppose that we have a set of observations, each with measurements on \(p=1\) feature, \(X\). We assume that \(X\) is uniformly (evenly) distributed on \([0,1]\). Associated with each observation is a response value. Suppose that we wish to predict a test observation’s response using only observations that are within \(10 \%\) of the range of \(X\) closest to that test observation. For instance, in order to predict the response for a test observation with \(X=0.6\), we will use observations in the range \([0.55,0.65]\). On average, what fraction of the available observations will we use to make the prediction?
(b) Now suppose that we have a set of observations, each with measurements on \(p=3\) features, \(X_1\), \(X_2\), and \(X_3\). We assume that \(\left(X_1, X_2, X_3\right)\) are uniformly distributed on \([0,1] \times[0,1] \times[0,1]\). We wish to predict a test observation’s response using only observations that are within \(10 \%\) of the range of \(X_1\), within \(10 \%\) of the range of \(X_2\), and within \(10 \%\) of the range of \(X_3\) closest to that test observation. For instance, in order to predict the response for a test observation with \(X_1=0.6\), \(X_2=0.35\), and \(X_3=0.15\), we will use observations in the range \([0.55,0.65]\) for \(X_1\), in the range \([0.3,0.4]\) for \(X_2\), and in the range \([0.1,0.2]\) for \(X_3\). On average, what fraction of the available observations will we use to make the prediction?
(c) Now suppose that we have a set of observations on \(p=200\) features. Again the observations are uniformly distributed on each feature, and again each feature ranges in value from 0 to 1. We wish to predict a test observation’s response using observations within the \(10 \%\) of each feature’s range that is closest to that test observation. What fraction of the available observations will we use to make the prediction?
(d) Using your answers to parts (a)-(c), argue that a drawback of KNN when \(p\) is large is that there are very few training observations “near” any given test observation.
(e) Now suppose that we wish to make a prediction for a test observation by creating a \(p\)-dimensional hypercube centered around the test observation that contains, on average, \(10 \%\) of the training observations. For \(p=1\), \(3\), and \(200\), what is the length of each side of the hypercube? Comment on your answer.
Note: A hypercube is a generalization of a cube to an arbitrary number of dimensions. When \(p=1\), a hypercube is simply a line segment, when \(p=2\) it is a square, when \(p=3\) a cube, and when \(p=200\) it is a 200-dimensional cube.
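If you would like to sanity-check your analytical answers to (a) through (c), a minimal Monte Carlo sketch in R is below. The function name mc.fraction is just for illustration, and the code assumes the \(10 \%\) window shifts inward at the edges of \([0,1]\), as in the examples above:

mc.fraction <- function(p, n = 1e5) {
  X  <- matrix(runif(n * p), ncol = p)   # n training observations in [0,1]^p
  x0 <- runif(p)                         # one test observation
  lo <- pmin(pmax(x0 - 0.05, 0), 0.9)    # shift the window inward at the edges
  used <- apply(X, 1, function(x) all(x >= lo & x <= lo + 0.1))
  mean(used)                             # fraction of observations used
}

mc.fraction(1)   # compare with your answer to (a)
mc.fraction(3)   # compare with your answer to (b)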
Question 3
Suppose we collect data for a group of students in a statistics class with variables \(X_1=\) hours studied, \(X_2=\) undergrad GPA, and \(Y=\) receive an A. We fit a logistic regression and produce estimated coefficients \(\hat{\beta}_0=-10\), \(\hat{\beta}_1=0.15\), \(\hat{\beta}_2=2\).
(a) Estimate the probability that a student who studies for 45 hours and has an undergrad GPA of 3.75 gets an A in the class.
(b) How many hours would the student in part (a) need to study to have a \(75 \%\) chance of getting an A in the class?
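Once you have written the model out, R can do the arithmetic; a minimal sketch, where the helper name p.hat is just for illustration and plogis() is the logistic function \(1/(1+e^{-q})\):

b0 <- -10; b1 <- 0.15; b2 <- 2
p.hat <- function(hours, gpa) plogis(b0 + b1 * hours + b2 * gpa)
p.hat(45, 3.75)   # part (a)
qlogis(0.75)      # part (b): the log odds corresponding to a 75% chance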
Question 4
Equation (4.32) derived an expression for \(\log \left(\frac{\operatorname{Pr}(Y=k \mid X=x)}{\operatorname{Pr}(Y=K \mid X=x)}\right)\) in the setting where \(p>1\), so that the mean for the \(k\)-th class, \(\mu_k\), is a \(p\)-dimensional vector, and the shared covariance \(\boldsymbol{\Sigma}\) is a \(p \times p\) matrix. However, in the setting with \(p=1\), (4.32) takes a simpler form, since the means \(\mu_1, \ldots, \mu_K\) and the variance \(\sigma^2\) are scalars. In this simpler setting, repeat the calculation in (4.32), and provide expressions for \(a_k\) and \(b_{kj}\) in terms of \(\pi_k\), \(\pi_K\), \(\mu_k\), \(\mu_K\), and \(\sigma^2\).
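For reference, (4.32) expresses the log odds as a linear function of \(x\),

\[
\log \left(\frac{\operatorname{Pr}(Y=k \mid X=x)}{\operatorname{Pr}(Y=K \mid X=x)}\right)=a_k+\sum_{j=1}^p b_{k j} x_j,
\]

so with \(p=1\) your answer should identify a single intercept \(a_k\) and slope \(b_{k1}\).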
Question 5
Suppose that you wish to classify an observation \(X \in \mathbb{R}\) into apples and oranges. You fit a logistic regression model and find that

\[
\widehat{\operatorname{Pr}}(Y=\text{orange} \mid X=x)=\frac{\exp \left(\hat{\beta}_0+\hat{\beta}_1 x\right)}{1+\exp \left(\hat{\beta}_0+\hat{\beta}_1 x\right)}.
\]
Your friend fits a logistic regression model to the same data using the softmax formulation in (4.13), and finds that

\[
\widehat{\operatorname{Pr}}(Y=\text{orange} \mid X=x)=\frac{\exp \left(\hat{\alpha}_{\text{orange}0}+\hat{\alpha}_{\text{orange}1} x\right)}{\exp \left(\hat{\alpha}_{\text{orange}0}+\hat{\alpha}_{\text{orange}1} x\right)+\exp \left(\hat{\alpha}_{\text{apple}0}+\hat{\alpha}_{\text{apple}1} x\right)}.
\]
(a) What is the log odds of orange versus apple in your model?
(b) What is the log odds of orange versus apple in your friend’s model?
(c) Suppose that in your model, \(\hat{\beta}_0=3\) and \(\hat{\beta}_1=-2\). What are the coefficient estimates in your friend’s model? Be as specific as possible.
(d) Now suppose that you and your friend fit the same two models on a different data set. This time, your friend gets the coefficient estimates \(\hat{\alpha}_{\text{orange}0}=1.5\), \(\hat{\alpha}_{\text{orange}1}=-2.4\), \(\hat{\alpha}_{\text{apple}0}=3.6\), \(\hat{\alpha}_{\text{apple}1}=0.8\). What are the coefficient estimates in your model?
(e) Finally, suppose you apply both models from (d) to a data set with 2,000 test observations. What fraction of the time do you expect the predicted class labels from your model to agree with those from your friend’s model? Explain your answer.
Question 6
This question should be answered using the Weekly data set, which is part of the ISLR2 package. This data is similar in nature to the Smarket data from this chapter’s lab, except that it contains 1,089 weekly returns for 21 years, from the beginning of 1990 to the end of 2010.
(a) Produce some numerical and graphical summaries of the Weekly data. Do there appear to be any patterns?
(b) Use the full data set to perform a logistic regression with Direction as the response and the first four lag variables plus Volume as predictors. Use the summary function to print the results. Do any of the predictors appear to be statistically significant? If so, which ones?
(c) Compute the confusion matrix and overall fraction of correct predictions. Explain what the confusion matrix is telling you about the types of mistakes made by logistic regression.
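A minimal sketch for parts (b) and (c), assuming the ISLR2 package is installed; calling predict() without new data returns the fitted probabilities for the full data set:

library(ISLR2)
glm.fit <- glm(Direction ~ Lag1 + Lag2 + Lag3 + Lag4 + Volume,
               data = Weekly, family = binomial)
summary(glm.fit)
glm.probs <- predict(glm.fit, type = "response")   # P(Direction = "Up")
glm.pred  <- ifelse(glm.probs > 0.5, "Up", "Down")
table(glm.pred, Weekly$Direction)                  # confusion matrix
mean(glm.pred == Weekly$Direction)                 # overall fraction correct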
(d) Now fit the logistic regression model using a training data period from 1990 to 2008, with Lag3 as the only predictor. Compute the confusion matrix and the overall fraction of correct predictions for the held out data (that is, the data from 2009 and 2010). A skeleton for parts (d) through (h) is sketched after part (j).
(e) Repeat (d) using LDA.
(f) Repeat (d) using QDA.
(g) Repeat (d) using KNN with \(K=1\).
(h) Repeat (d) using naive Bayes.
(i) Which of these methods appears to provide the best results on this data?
(j) Experiment with different combinations of predictors, including possible transformations and interactions, for each of the methods. Report the variables, method, and associated confusion matrix that appears to provide the best results on the held out data. Note that you should also experiment with values for \(K\) in the KNN classifier.
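One possible skeleton for parts (d) through (h): MASS supplies lda() and qda(), class supplies knn(), and e1071 supplies naiveBayes(), as in the chapter labs. Only the logistic regression, LDA, and KNN fits are spelled out; qda() and naiveBayes() follow the same formula interface as lda().

library(ISLR2)
library(MASS)     # lda(), qda()
library(class)    # knn()
library(e1071)    # naiveBayes()

train <- Weekly$Year <= 2008
held  <- Weekly[!train, ]

# (d) logistic regression with Lag3 as the only predictor
glm.fit  <- glm(Direction ~ Lag3, data = Weekly,
                subset = train, family = binomial)
glm.pred <- ifelse(predict(glm.fit, held, type = "response") > 0.5,
                   "Up", "Down")
table(glm.pred, held$Direction)
mean(glm.pred == held$Direction)

# (e) LDA; (f) qda() and (h) naiveBayes() are analogous
lda.fit <- lda(Direction ~ Lag3, data = Weekly, subset = train)
mean(predict(lda.fit, held)$class == held$Direction)

# (g) KNN with K = 1; knn() expects matrices of predictors
knn.pred <- knn(as.matrix(Weekly$Lag3[train]),
                as.matrix(held$Lag3),
                Weekly$Direction[train], k = 1)
mean(knn.pred == held$Direction)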
Question 7
This problem involves writing functions.
(a) Write a function, LogPower(), that prints out the result of raising \(\ln(2)\) to the 3rd power. In other words, your function should compute \(\ln(2)^3\) and print out the result. Hint: Recall that in R, x^a raises x to the power a, and log(x) computes the natural logarithm of x. Use the print() function to output the result.
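A minimal sketch of what part (a) asks for:

LogPower <- function() {
  print(log(2)^3)   # log() is the natural logarithm in R
}
LogPower()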
(b) Create a new function, LogPower2(), that allows you to pass any two numbers, \(x\) and \(a\), and prints out the value of \(\ln(x)^a\). You can do this by beginning your function with the line

LogPower2 <- function(x, a)
You should be able to call your function by entering, for instance,
LogPower2(3,8)
on the command line. This should output the value of \(\ln(3)^8\), namely, 2.122.
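One possible LogPower2(), following the line given above:

LogPower2 <- function(x, a) {
  print(log(x)^a)
}
LogPower2(3, 8)   # prints approximately 2.122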
(c) Using the LogPower2() function that you just wrote, compute \(\ln(10)^3\), \(\ln(8)^{17}\), and \(\ln(131)^3\).
(d) Now create a new function, LogPower3(), that actually returns the result \(\ln(x)^a\) as an R object, rather than simply printing it to the screen. That is, if you store the value of log(x)^a in an object called result within your function, then you can simply return() this result, using the following line:

return(result)
The line above should be the last line in your function, before the } symbol.
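Concretely, LogPower3() could look like this:

LogPower3 <- function(x, a) {
  result <- log(x)^a
  return(result)   # the last line before the closing }
}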
(e) Now using the LogPower3() function, create a plot of \(f(x)=\ln(x)^2\). The \(x\)-axis should display a range of integers from 1 to 10, and the \(y\)-axis should display \(\ln(x)^2\). Label the axes appropriately, and use an appropriate title for the figure. Consider displaying either the \(x\)-axis, the \(y\)-axis, or both on the log scale. You can do this by using log = "x", log = "y", or log = "xy" as arguments to the plot() function.
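A sketch of one such plot, here with the \(x\)-axis on the log scale:

x <- 1:10
plot(x, LogPower3(x, 2), type = "b",
     xlab = "x", ylab = "ln(x)^2",
     main = "f(x) = ln(x)^2", log = "x")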
(f) Create a function, PlotLogPower(), that allows you to create a plot of \(x\) against \(\ln(x)^a\) for a fixed \(a\) and for a range of values of \(x\). For instance, if you call PlotLogPower(1:10, 3), then a plot should be created with an \(x\)-axis taking on values \(1, 2, \ldots, 10\), and a \(y\)-axis taking on values \(\ln(1)^3, \ln(2)^3, \ldots, \ln(10)^3\).
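And a sketch of PlotLogPower():

PlotLogPower <- function(x, a) {
  plot(x, log(x)^a,
       xlab = "x", ylab = paste0("ln(x)^", a),
       main = paste0("Plot of ln(x)^", a, " against x"))
}
PlotLogPower(1:10, 3)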
Question 8
In Chapter 4, we used logistic regression to predict the probability of default using income and balance on the Default data set. We will now estimate the test error of this logistic regression model using the validation set approach. Do not forget to set a random seed before beginning your analysis.
(a) Fit a logistic regression model that uses income to predict default.
(b) Using the validation set approach, estimate the test error of this model. In order to do this, you must perform the following steps (a skeleton is sketched after this list):
1. Split the sample set into a training set and a validation set.
2. Fit the logistic regression model from (a) using only the training observations.
3. Obtain a prediction of default status for each individual in the validation set by computing the posterior probability of default for that individual, and classifying the individual to the default category if the posterior probability is greater than \(0.5\).
4. Compute the validation set error, which is the fraction of the observations in the validation set that are misclassified.
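A minimal skeleton of steps 1 through 4, assuming the ISLR2 package; the 50/50 split and the seed are arbitrary choices:

library(ISLR2)
set.seed(1)
n <- nrow(Default)
train <- sample(n, n / 2)                              # 1. split the observations
glm.fit <- glm(default ~ income, data = Default,
               subset = train, family = binomial)      # 2. fit on the training set
probs <- predict(glm.fit, Default[-train, ], type = "response")
pred  <- ifelse(probs > 0.5, "Yes", "No")              # 3. classify at 0.5
mean(pred != Default$default[-train])                  # 4. validation set error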
(c) Repeat the process in (b) three times, using three different splits of the observations into a training set and a validation set. Comment on the results obtained.
(d) Now consider a logistic regression model that predicts the probability of default using income and a dummy variable for student. Estimate the test error for this model using the validation set approach. Comment on whether or not including a dummy variable for student leads to a reduction in the test error rate.
Question 9
We continue to consider the use of a logistic regression model to predict the probability of default using income and student on the Default data set. In particular, we will now compute estimates for the standard errors of the income and student logistic regression coefficients in two different ways: (1) using the bootstrap, and (2) using the standard formula for computing the standard errors in the glm() function. Do not forget to set a random seed before beginning your analysis.
(a) Using the summary() and glm() functions, determine the estimated standard errors for the coefficients associated with income and student in a multiple logistic regression model that uses both predictors.
(b) Write a function, boot.fn(), that takes as input the Default data set as well as an index of the observations, and that outputs the coefficient estimates for income and student in the multiple logistic regression model.
(c) Use the boot() function together with your boot.fn() function to estimate the standard errors of the logistic regression coefficients for income and student.
(d) Comment on the estimated standard errors obtained using the glm() function and using your bootstrap function.
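A sketch for parts (b) and (c), using the boot package as in the chapter lab; R = 1000 bootstrap replicates is one common choice, and studentYes is the name R gives the student dummy variable:

library(ISLR2)
library(boot)
set.seed(1)
boot.fn <- function(data, index) {
  fit <- glm(default ~ income + student, data = data,
             subset = index, family = binomial)
  coef(fit)[c("income", "studentYes")]   # coefficient estimates, without the intercept
}
boot(Default, boot.fn, R = 1000)

The bootstrap standard errors reported by boot() can then be compared with those from summary() in part (a).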