IE343 Final Project

1            Preliminary and Binary Kernel Logistic Regression
We implemented logistic regression for binary and multi-label classification in the midterm project. In the final project, we extend logistic regression to kernel logistic regression.

Let $N$ and $d$ denote the total number of training data points and the dimension of each data input, respectively.

Let $\{(X^{(i)}, y^{(i)})\}_{i=1,2,\cdots,N}$ be the training data set. In binary logistic regression, we trained the parameter $\theta = (\theta_0, \theta_1, \cdots, \theta_d)$, where the model was

$$P(y = 1 \mid X = (X_1, X_2, \cdots, X_d)) = \frac{1}{1 + \exp\left(-\langle \theta, \mathrm{concatenate}(1, X) \rangle\right)}. \tag{2.1}$$

Here, $\mathrm{concatenate}(1, X) := (1, X_1, X_2, \cdots, X_d)$ and $\langle \cdot, \cdot \rangle$ denotes the inner product.
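For concreteness, the model in (2.1) can be evaluated in a few lines of NumPy. This is only an illustrative sketch with names of my own choosing (`sigmoid`, `predict_proba`), not code from the project skeleton:

```python
import numpy as np

def sigmoid(z):
    # logistic function 1 / (1 + exp(-z))
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(theta, X):
    """P(y = 1 | X) under (2.1).

    theta : shape (d+1,), the parameter (theta_0, theta_1, ..., theta_d)
    X     : shape (N, d), one data point per row
    """
    X_tilde = np.hstack([np.ones((X.shape[0], 1)), X])  # concatenate(1, X), row-wise
    return sigmoid(X_tilde @ theta)
```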

Let's review the SVM. To handle the non-linearly separable case, a mapping function $\varphi$ was defined which increases the dimension of the data's input space. For the hyperplane $\beta_0 + \langle \vec{\beta}, \varphi(X) \rangle$, the optimal parameter is $\vec{\beta}^* = \sum_{j=1}^{N} \alpha_j^* y^{(j)} \varphi(X^{(j)})$, where the optimal dual variables $\{\alpha_j^*\}_{j=1,2,\cdots,N}$ are obtained by solving a quadratic program.

And from that mapping function $\varphi$, the kernel function $K$ was defined by

$$K(X^{(i)}, X^{(j)}) = \langle \varphi(X^{(i)}), \varphi(X^{(j)}) \rangle. \tag{2.2}$$
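The project provides './App/rbf.py' for the RBF kernel. The sketch below is my own illustration of one common per-dimension parameterization, $K(x, x') = \exp(-\sum_k \gamma_k (x_k - x'_k)^2)$, and may differ from the provided file:

```python
import numpy as np

def rbf_kernel(x_i, x_j, gamma):
    # K(x_i, x_j) = exp(-sum_k gamma_k * (x_ik - x_jk)^2)
    diff = x_i - x_j
    return np.exp(-np.sum(gamma * diff * diff))

def gram_matrix(X_train, X, gamma):
    """G[j, n] = K(X_train[j], X[n]); explicit loops for clarity."""
    G = np.zeros((X_train.shape[0], X.shape[0]))
    for j, x_j in enumerate(X_train):
        for n, x_n in enumerate(X):
            G[j, n] = rbf_kernel(x_j, x_n, gamma)
    return G
```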

Now, let's construct kernel logistic regression from this background. For binary kernel logistic regression, the model can be derived as follows:

$$P(y = 1 \mid X = (X_1, X_2, \cdots, X_d)) = \frac{1}{1 + \exp\left(-\langle (\beta_0, \vec{\beta}),\, \mathrm{concatenate}(1, \varphi(X)) \rangle\right)} \tag{2.3}$$

$$= \frac{1}{1 + \exp\left(-\left(\beta_0 + \sum_{j=1}^{N} \alpha_j y^{(j)} \langle \varphi(X^{(j)}), \varphi(X) \rangle\right)\right)} \tag{2.4}$$

$$= \frac{1}{1 + \exp\left(-\left(\alpha_0 + \sum_{j=1}^{N} \alpha_j y^{(j)} \langle \varphi(X^{(j)}), \varphi(X) \rangle\right)\right)} \tag{2.5}$$

$$= \frac{1}{1 + \exp\left(-\left(\alpha_0 + \sum_{j=1}^{N} \alpha_j y^{(j)} K(X^{(j)}, X)\right)\right)} \tag{2.6}$$

(2.4) was derived by substituting $\vec{\beta} = \sum_{j=1}^{N} \alpha_j y^{(j)} \varphi(X^{(j)})$, in (2.5) we redefine $\beta_0$ as $\alpha_0$, and (2.6) follows from the kernel definition (2.2).

In short, you need to implement the binary kernel logistic regression model

$$P(y = 1 \mid X) = \frac{1}{1 + \exp\left(-\left(\alpha_0 + \sum_{j=1}^{N} \alpha_j y^{(j)} K(X^{(j)}, X)\right)\right)} \tag{2.7}$$

by training the parameter $\alpha = (\alpha_0, \alpha_1, \cdots, \alpha_N)$.
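As a minimal sketch of evaluating (2.7) (function and argument names are hypothetical; y_train is assumed to carry the labels $y^{(j)}$ as signs in {-1, +1}, though they can equally be absorbed into $\alpha_j$):

```python
import numpy as np

def klr_predict_proba(alpha, y_train, K_col):
    """P(y = 1 | X) under (2.7) for one input X.

    alpha  : shape (N+1,), the parameter (alpha_0, alpha_1, ..., alpha_N)
    y_train: shape (N,), training labels y^(j) as signs in {-1, +1}
    K_col  : shape (N,), the kernel values K(X^(j), X) for j = 1, ..., N
    """
    score = alpha[0] + np.sum(alpha[1:] * y_train * K_col)
    return 1.0 / (1.0 + np.exp(-score))
```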

2         TODO for Code Implementation
There are two tasks for the code implementation: 'task1' and 'task2' are for binary logistic regression and binary kernel logistic regression, respectively. The 'task1' and 'task2' files are attached, and they are almost the same as the midterm project's 'mid_task2_titanic' folder. The difference is that I provide two additional files, './App/Pre_processing/kernel.py' and './App/rbf.py'. These are from assignment 4 and will be helpful for the final project. Unlike the midterm project, I have already filled in the 'getData()' method in './main.py'. You need to complete the following for the code implementation.

(1)   (task1)

It is similar to the midterm project. You need to fill out './App/logistic_regressor.py'. The code should be implemented with the minus-log-likelihood loss function and the settings epoch_num=5000 and lr=0.00005. 'main.py' needs to print 50 train accuracy results (print the train accuracy every 100 epochs) and one test accuracy. Lastly, you need to use the bias term $\theta_0$ of (2.1). A hedged sketch of such a training loop is given after item (2).

(2)   (task2)

You need to fill out './App/logistic_regressor.py' and './main.py'. The code should be implemented with the minus-log-likelihood loss function, RBF kernels, and the settings epoch_num=5000 and lr=0.005. The RBF kernels should use the hyperparameter (1,1,1,1,...,1). 'main.py' needs to print 50 train accuracy results (print the train accuracy every 100 epochs) and one test accuracy. Lastly, you need to use the bias term $\alpha_0$ of (2.7). A matching sketch follows below.
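For reference, here is a hedged sketch of the task 1 training loop: plain gradient descent on the minus-log-likelihood of (2.1) with the required settings. The function name and structure are my own; the actual skeleton in './App/logistic_regressor.py' may organize this differently (e.g. as a class with fit/predict methods):

```python
import numpy as np

def train_logistic(X, y, epoch_num=5000, lr=0.00005):
    """Gradient descent on the minus-log-likelihood of (2.1).

    X : shape (N, d); y : shape (N,), labels in {0, 1}.
    """
    X_tilde = np.hstack([np.ones((X.shape[0], 1)), X])  # bias term theta_0
    theta = np.zeros(X_tilde.shape[1])
    for epoch in range(1, epoch_num + 1):
        p = 1.0 / (1.0 + np.exp(-(X_tilde @ theta)))    # P(y = 1 | X^(i)) for all i
        theta -= lr * (X_tilde.T @ (p - y))             # gradient of minus log-likelihood
        if epoch % 100 == 0:                            # 5000 / 100 = 50 printouts
            print(f"epoch {epoch}: train accuracy = {np.mean((p >= 0.5) == y):.4f}")
    return theta
```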
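And a matching sketch for task 2, training $\alpha$ of (2.7) on a precomputed RBF Gram matrix with hyperparameter $\gamma = (1, \cdots, 1)$; again, the names and layout are my own assumptions, not the skeleton's API:

```python
import numpy as np

def train_kernel_logistic(X, y, gamma, epoch_num=5000, lr=0.005):
    """Gradient descent on the minus-log-likelihood of (2.7).

    X : shape (N, d); y : shape (N,), labels in {0, 1}; gamma : shape (d,), e.g. all ones.
    """
    N = X.shape[0]
    diff = X[:, None, :] - X[None, :, :]                 # pairwise differences, (N, N, d)
    G = np.exp(-np.einsum('jnk,k->jn', diff**2, gamma))  # G[j, n] = K(X^(j), X^(n))
    s = 2.0 * y - 1.0                                    # y^(j) as signs in {-1, +1}
    F = np.vstack([np.ones(N), s[:, None] * G])          # F[:, n] maps alpha to the score of X^(n)
    alpha = np.zeros(N + 1)                              # (alpha_0, alpha_1, ..., alpha_N)
    for epoch in range(1, epoch_num + 1):
        p = 1.0 / (1.0 + np.exp(-(F.T @ alpha)))         # P(y = 1 | X^(n)) under (2.7)
        alpha -= lr * (F @ (p - y))                      # gradient of minus log-likelihood w.r.t. alpha
        if epoch % 100 == 0:
            print(f"epoch {epoch}: train accuracy = {np.mean((p >= 0.5) == y):.4f}")
    return alpha
```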

3         TODO for Report
(1)   Write the kernel logistic regression model and the logistic regression model for binary-label data. For those models, which parameters do we have to train? State the parameters' dimensions precisely. ()

(2)   What is the need for kernel logistic regression? That is, what issue that the original logistic regression cannot solve can we recover with kernel logistic regression? State the reason precisely. ()

(3)   State the advantages and disadvantages of kernel logistic regression and the original logistic regression. ()

(4)   State how the weight update should happen for the original logistic regression and the kernel logistic regression in the binary version. You also need to write precisely how the gradient is derived (a reference formula is sketched after this list). ()

(5)   Construct the models for the original logistic regression and the kernel logistic regression for multi-label data ($K$: the number of labels, $K > 2$). You need to clarify the parameters. ()

(6)   For the models you constructed in (5) and the minus-log-likelihood loss function, write pseudocode for each weight update. You also need to state the reason and the input values of each method function precisely. ()

(7)   Describe the methods of your logistic regression model, line by line, with the code from task1. ()

(8)   Describe the methods of your kernel logistic regression model and main.py, line by line, with the code from task2. ()
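As a reference point for question (4), a standard derivation under model (2.1) with labels $y \in \{0, 1\}$ gives the following gradient of the minus-log-likelihood (a hedged reminder, writing $\sigma$ for the logistic function and $\tilde{X}^{(i)} = \mathrm{concatenate}(1, X^{(i)})$; the kernel case is analogous with $\alpha$ in place of $\theta$):

$$\nabla_\theta \mathcal{L}(\theta) = \sum_{i=1}^{N} \left( \sigma\big(\langle \theta, \tilde{X}^{(i)} \rangle\big) - y^{(i)} \right) \tilde{X}^{(i)}, \qquad \theta \leftarrow \theta - \eta \, \nabla_\theta \mathcal{L}(\theta).$$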
