CSE 417T - hw1

  Run the perceptron learning algorithm, starting with the zero weight vector, on the training set you just generated, and keep track of the number of iterations it takes to learn a hypothesis that correctly separates the training data.

Write code in Matlab to perform the above experiment and then repeat it 1000 times (note that you’re generating a new w∗ and a new training set each time). We have provided, in your SVN repository, two stub files that you should complete for this purpose. The files have comments that explain their inputs and outputs. You need to commit your final versions along with your homework writeup.
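As a rough guide, here is a minimal Matlab sketch of a single run of this experiment. The function name, the sizes N and d, and the way w∗ is generated are placeholders (the data-generation step is specified earlier in the assignment and by the stub files in your repository); this is not the interface of the provided stubs.

```matlab
% Sketch of one run: generate a target w*, a separable training set,
% and run PLA from the zero vector, counting updates until separation.
% N, d, and the generation of w* are placeholders, not the stub interface.
function iterations = run_pla_once(N, d)
    w_star = randn(d+1, 1);                      % random target weights (with bias term)
    X = [ones(N, 1), 2*rand(N, d) - 1];          % points in [-1,1]^d, bias column prepended
    y = sign(X * w_star);                        % labels induced by w*

    w = zeros(d+1, 1);                           % start from the zero weight vector
    iterations = 0;
    while true
        misclassified = find(sign(X * w) ~= y);
        if isempty(misclassified)
            break;                               % all training points correctly separated
        end
        i = misclassified(randi(numel(misclassified)));
        w = w + y(i) * X(i, :)';                 % PLA update on one misclassified point
        iterations = iterations + 1;
    end
end
```

Repeating this 1000 times and collecting the returned counts gives the data for the histogram, e.g. `iters = arrayfun(@(k) run_pla_once(100, 10), 1:1000);` (the values 100 and 10 here are illustrative, not the assignment's required sizes).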

Once you have your code working, plot a histogram of the number of iterations the algorithm takes to learn a linear separator (you should submit this with your writeup). How does the number of iterations compare with the bound on the number of errors we derived in class? Note that this bound will be different for each instantiation of w∗ and the training set, so in order to answer this question, you should analyze the distribution of differences between the bound and the number of iterations. Plot and submit a histogram of the log of this difference, and discuss your interpretation of these results.
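One way to set up the comparison is sketched below. It assumes the standard PLA convergence bound (as in LFD Problem 1.3), t ≤ R²‖w∗‖²/ρ², with R the largest training-point norm and ρ the smallest margin under w∗; if the bound derived in your class has a different form, substitute it. The vectors `iteration_counts` and `bound_values` are placeholders for the per-run quantities you collect over the 1000 runs.

```matlab
% Bound for one run, assuming t <= R^2 * ||w*||^2 / rho^2 (LFD Problem 1.3).
R     = max(sqrt(sum(X.^2, 2)));       % largest training-point norm
rho   = min(y .* (X * w_star));        % smallest margin under the target w*
bound = R^2 * (w_star' * w_star) / rho^2;

% After collecting both quantities over 1000 runs:
histogram(iteration_counts);                      % iterations per run
figure;
histogram(log(bound_values - iteration_counts));  % log of (bound - iterations)
```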

For up to 10 points of extra credit, can you characterize the situations in which the algorithm takes more iterations to correctly learn a hypothesis that separates the training data? Back up your answer with evidence from your experiments. Hint: You may want to try to visualize a lower-dimensional version, and/or hold w∗ fixed and vary the training set, as you try to figure this out; a sketch of one such exploration follows.
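A hypothetical starting point for that exploration: fix w∗ in two dimensions, resample small training sets, and look at how close the sampled points fall to the target boundary relative to the iteration count. The sizes below are arbitrary.

```matlab
% 2-D exploration: hold w* fixed, plot one sampled training set and the
% target boundary w*' * [1 x1 x2] = 0 to see how tight the margin is.
d = 2; N = 20;
w_star = randn(d+1, 1);
X = [ones(N,1), 2*rand(N, d) - 1];
y = sign(X * w_star);
scatter(X(y > 0, 2), X(y > 0, 3), 'b'); hold on;
scatter(X(y < 0, 2), X(y < 0, 3), 'r');
xs = linspace(-1, 1, 100);
plot(xs, -(w_star(1) + w_star(2)*xs) / w_star(3), 'k-');   % target boundary
```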

3. (20 points) LFD Problem 1.7

4. (15 points) LFD Problem 1.8

5. (15 points) LFD Problem 1.12
