Introduction
In this homework, we will explore Linear Regression and its regularized counterparts, LASSO and Ridge Regression, in more depth.
Question 1. Simple Linear Regression
(a) Consider a data set consisting of X values (features) X1,...,Xn and Y values (responses) Y1,...,Yn.
Let $\hat\beta_0, \hat\beta_1, \hat\sigma$ be the output of running ordinary least squares (OLS) regression on the data. Now define the transformation
$$\widetilde{X}_i = cX_i + d,$$
for each $i = 1,\dots,n$, where $c \neq 0$ and $d$ are arbitrary real constants. Let $\widetilde\beta_0, \widetilde\beta_1, \widetilde\sigma$ be the output of OLS on the data $\widetilde{X}_1,\dots,\widetilde{X}_n$ and $Y_1,\dots,Y_n$. Write equations for $\widetilde\beta_0, \widetilde\beta_1, \widetilde\sigma$ in terms of $\hat\beta_0, \hat\beta_1, \hat\sigma$ (and in terms of $c$ and $d$), and be sure to justify your answers. Note that the estimate of error in OLS is taken to be
$$\hat\sigma^2 = \frac{1}{n-p}\,\hat e^\top \hat e = \frac{1}{n-p}\sum_{i=1}^{n}\hat e_i^2,$$
where $\hat e$ is the vector of residuals, with $i$-th element $\hat e_i = Y_i - \hat Y_i$, where $\hat Y_i$ is the $i$-th prediction made by the model, and $p$ is the number of features (so in this case $p = 2$).
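A quick numerical experiment can help you check whatever formulas you derive. The sketch below (a sanity check only, not a substitute for the algebraic justification) fits OLS on the original and transformed features and prints both sets of estimates; the synthetic data and the particular values of c and d are arbitrary illustrative choices.

```python
# Hypothetical sanity check: compare OLS output on (X, Y) and on (c*X + d, Y).
import numpy as np

rng = np.random.default_rng(0)
n, c, d = 50, 3.0, -2.0                       # arbitrary illustrative choices
X = rng.normal(size=n)
Y = 1.5 + 0.8 * X + rng.normal(scale=0.3, size=n)

def ols(x, y, p=2):
    A = np.column_stack([np.ones_like(x), x])   # design matrix with intercept
    beta = np.linalg.lstsq(A, y, rcond=None)[0]
    resid = y - A @ beta
    sigma2 = resid @ resid / (len(y) - p)       # error estimate with p = 2
    return beta[0], beta[1], np.sqrt(sigma2)

print(ols(X, Y))            # (beta0_hat, beta1_hat, sigma_hat)
print(ols(c * X + d, Y))    # compare against your derived formulas
```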
(b) Suppose you have a dataset where X takes only two values while Y can take arbitrary real values. As a concrete example, consider a clinical trial where Xi = 1 indicates that the i-th patient receives a dose of a particular drug (the treatment), Xi = 0 indicates that they did not, and
Yi is the real-valued outcome for the i-th patient, e.g. blood pressure. Let $\bar{Y}_T$ and $\bar{Y}_P$ denote the sample mean outcomes for the treatment group and non-treatment (placebo) group, respectively.
What will be the value of the OLS coefficients $\hat\beta_0, \hat\beta_1$ in terms of the group means?
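As in part (a), a small numerical check can help you verify your answer before writing it up. The sketch below uses hypothetical synthetic trial data; the sample size, effect size and noise level are arbitrary illustrative choices.

```python
# Hypothetical sanity check: OLS with a binary feature vs. the two group means.
import numpy as np

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=100)                   # 1 = treatment, 0 = placebo
Y = 120 - 8 * X + rng.normal(scale=5, size=100)    # e.g. blood pressure

A = np.column_stack([np.ones_like(X), X])          # intercept + treatment column
beta0, beta1 = np.linalg.lstsq(A, Y, rcond=None)[0]

print("beta0, beta1:", beta0, beta1)
print("treatment mean:", Y[X == 1].mean())
print("placebo mean:  ", Y[X == 0].mean())
```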
What to submit: For both parts of the question, present your solution neatly; photos of handwritten work or answers written on a tablet are fine. Please include all working and circle your final answers.
Question 2. LASSO vs. Ridge Regression
In this problem we will consider the dataset provided in data.csv, with response variable Y and features X1,...,X8.
(a) Use a pairs plot to study the correlations between the features. In 3-4 sentences, describe what you see and how this might affect a linear regression model. What to submit: a single plot, some commentary.
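A minimal sketch of one way to produce the pairs plot, assuming data.csv has columns named X1,...,X8 and Y; pandas' scatter_matrix is used here purely for convenience (the package restriction in part (c) applies only to the coefficient-path plot).

```python
# Sketch of a pairs plot for the 8 features, assuming columns X1..X8 in data.csv.
import pandas as pd
from pandas.plotting import scatter_matrix
import matplotlib.pyplot as plt

df = pd.read_csv("data.csv")
features = [f"X{j}" for j in range(1, 9)]          # assumed column names
scatter_matrix(df[features], figsize=(10, 10), diagonal="hist")
plt.show()
```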
(b) In order for LASSO and Ridge to be run properly, we often rescale the features in the dataset. First, rescale each feature so that it has zero mean, and then rescale it so that
$$\frac{1}{n}\sum_{i=1}^{n} X_{ij}^2 = 1,$$
where $n$ denotes the total number of observations. What to submit: print out the sum of squared observations of each of the 8 (transformed) features, i.e. $\sum_i X_{ij}^2$ for $j = 1,\dots,8$.
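A minimal sketch of the rescaling step, assuming the column names above and that the target of the second rescaling is $\frac{1}{n}\sum_i X_{ij}^2 = 1$ for every feature $j$ (so the printed sums of squares each equal $n$).

```python
# Sketch: center each feature, then rescale so (1/n) * sum_i X_ij^2 = 1.
import numpy as np
import pandas as pd

df = pd.read_csv("data.csv")
X = df[[f"X{j}" for j in range(1, 9)]].to_numpy(dtype=float)  # assumed columns
n = X.shape[0]

X = X - X.mean(axis=0)                        # zero mean per feature
X = X / np.sqrt((X ** 2).sum(axis=0) / n)     # (1/n) * sum of squares = 1

print((X ** 2).sum(axis=0))                   # each entry should equal n
```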
(c) Now we will apply ridge regression to this dataset. Recall that ridge regression is defined as the solution to the optimisation problem:
$$\hat\beta = \arg\min_{\beta}\; \|Y - X\beta\|_2^2 + \lambda\|\beta\|_2^2.$$
Run ridge regression with λ = {0.01, 0.1, 0.5, 1, 1.5, 2, 5, 10, 20, 30, 50, 100, 200, 300}. Create a plot with the x-axis representing log(λ) and the y-axis representing the value of the coefficient for each feature in each of the fitted ridge models. In other words, the plot should describe what happens to each of the coefficients in your model for the different choices of λ. For this problem you are permitted to use the sklearn implementation of Ridge regression to run the models and extract the coefficients, and base matplotlib/numpy to create the plots, but no other packages are to be used to generate this plot. In a few lines, comment on what you see; in particular, what do you observe for features 3, 4 and 5?
What to submit: a single plot, some commentary, and a screen shot of the code used for this section. Your plot must have a legend, and you must use the following colors: ['red', 'brown', 'green', 'blue', 'orange', 'pink', 'purple', 'grey'] for the features X1,...,X8 in the plot.
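A minimal sketch of the coefficient-path plot, assuming the rescaled feature matrix X and response y from part (b) are already in memory as numpy arrays, and assuming sklearn's Ridge alpha parameter is used directly as λ (adjust if your objective scales the squared-error term differently).

```python
# Sketch: ridge coefficient paths over the lambda grid (X, y assumed in memory).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge

lambdas = [0.01, 0.1, 0.5, 1, 1.5, 2, 5, 10, 20, 30, 50, 100, 200, 300]
colors = ['red', 'brown', 'green', 'blue', 'orange', 'pink', 'purple', 'grey']

# one fit per lambda; alpha is treated as lambda here (an assumption)
coefs = np.array([Ridge(alpha=lam).fit(X, y).coef_ for lam in lambdas])

for j in range(8):
    plt.plot(np.log(lambdas), coefs[:, j], color=colors[j], label=f"X{j + 1}")
plt.xlabel("log(lambda)")
plt.ylabel("coefficient value")
plt.legend()
plt.show()
```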
(d) In this part, we will use Leave-One-Out Cross Validation (LOOCV) to find a good value of λ for the ridge problem. Create a fine grid of λ values running from 0 to 50 in increments of 0.1, so the grid is 0, 0.1, 0.2, ..., 50. For each data point i = 1,...,n, run ridge with each λ value on the dataset with point i removed, find $\hat\beta$, and then compute the leave-one-out error for predicting Yi. Average the squared error over all n choices of i. Plot the leave-one-out error against λ and find the best λ. Compare your results to standard Ordinary Least Squares (OLS): does ridge seem to give better prediction error based on your analysis? Note that for this question you are not permitted to use any existing packages that implement cross validation; you must write the code yourself from scratch. You must also create the plot yourself from scratch using basic matplotlib functionality.
What to submit: a single plot, some commentary, a screen shot of any code used for this section.
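A minimal from-scratch LOOCV sketch under the same assumptions as the part (c) sketch (X and y in memory, alpha used as λ); no cross-validation utilities are used. Note that the λ = 0 entry of the grid coincides with OLS, which provides the comparison point the question asks for.

```python
# Sketch: leave-one-out CV for ridge, written from scratch (X, y assumed in memory).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge

lambdas = np.arange(0, 50.1, 0.1)      # grid 0, 0.1, ..., 50
n = X.shape[0]
loo_errors = []

for lam in lambdas:
    se = 0.0
    for i in range(n):
        mask = np.arange(n) != i                       # leave observation i out
        model = Ridge(alpha=lam).fit(X[mask], y[mask])
        se += (y[i] - model.predict(X[i:i + 1])[0]) ** 2
    loo_errors.append(se / n)                          # average squared error

best_lam = lambdas[int(np.argmin(loo_errors))]
plt.plot(lambdas, loo_errors)
plt.xlabel("lambda")
plt.ylabel("LOOCV squared error")
plt.show()
print("best lambda:", best_lam)       # lambda = 0 corresponds to plain OLS
```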
(e) Recall the LASSO problem:
$$\hat\beta = \arg\min_{\beta}\; \|Y - X\beta\|_2^2 + \lambda\|\beta\|_1.$$
Repeat part (c) for the LASSO. What to submit: a single plot, some commentary, a screen shot of the code used for this section. You must use the same color scheme as in part (c).
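The part (c) sketch carries over with the estimator swapped, again assuming X and y are in memory. One caveat: sklearn's Lasso minimises $\frac{1}{2n}\|Y - X\beta\|_2^2 + \alpha\|\beta\|_1$, so the mapping between its alpha and the λ in the objective above may need adjusting depending on the convention you adopt.

```python
# Sketch: LASSO coefficient paths; plotting proceeds exactly as in part (c).
import numpy as np
from sklearn.linear_model import Lasso

lambdas = [0.01, 0.1, 0.5, 1, 1.5, 2, 5, 10, 20, 30, 50, 100, 200, 300]
# alpha treated as lambda here (an assumption); raise max_iter to aid convergence
coefs = np.array([Lasso(alpha=lam, max_iter=10000).fit(X, y).coef_ for lam in lambdas])
```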
(f) Repeat the leave-one-out analysis of part (d) for the LASSO, using a grid of λ values 0, 0.1, ..., 20. Note that sklearn will throw some warnings for the λ = 0 case, which can be safely ignored for our purposes. What to submit: a single plot, some commentary, a screen shot of the code used for this section.
(g) Briefly comment on the differences you observed between the LASSO and Ridge. Which model do you prefer, and why? Provide reasonable justification here for full marks. What to submit: some commentary, and potentially plots if your discussion requires it.
Question 3. Sparse Solutions with LASSO
In this question, we will try to understand why LASSO regression yields sparse solutions. Sparse means that the solution of the LASSO optimisation problem
$$\hat\beta = \arg\min_{\beta}\; \ell(\beta), \qquad \ell(\beta) = \frac{1}{2}\|Y - X\beta\|_2^2 + \lambda\|\beta\|_1,$$
has most of its entries $\hat\beta_j = 0$, which you may have observed empirically in the previous question. To study this from a theoretical perspective, we will consider a somewhat extreme case in which we take the penalty term $\lambda$ to be very large, and show that the optimal LASSO solution is $\hat\beta = 0_p$, the vector of all zeroes. Assume that $X \in \mathbb{R}^{n \times p}$, $Y \in \mathbb{R}^n$, and that the optimisation is over $\beta \in \mathbb{R}^p$.
(a) Consider the quantity $|\langle Y, X\beta\rangle|$. Show that
$$|\langle Y, X\beta\rangle| \le \|\beta\|_1 \max_{j}\,|X_j^\top Y|,$$
where $X_j$ denotes the $j$-th column of $X$.
(b) We will now assume that $\lambda$ is very large, such that it satisfies:
$$\lambda \ge \max_{j}\,|X_j^\top Y|.$$
Using the result of part (a) and the assumption on $\lambda$, prove that $\hat\beta = 0_p$ is a solution of the LASSO problem.
(c) In the previous part, we showed that $\hat\beta = 0_p$ is a minimizer of $\ell(\beta)$. Prove that $\hat\beta$ is the unique minimizer of $\ell(\beta)$, i.e. if $\beta \neq 0_p$, then $\ell(\beta) > \ell(0_p)$. Hint: consider the two cases $\|X\beta\|_2 = 0$ and $\|X\beta\|_2 > 0$.
What to submit: For all parts of the question, present your solution neatly; photos of handwritten work or answers written on a tablet are fine. Please include all working and circle your final answers. Note that if you cannot do part (a), you are still free to use the result of part (a) to complete parts (b) and (c).