EE5904/ME5404 Neural Networks: Homework 2

Q1. Rosenbrock's Valley Problem
Consider Rosenbrock's valley function

f(x, y) = (1 − x)^2 + 100 (y − x^2)^2,

which has a global minimum at (x, y) = (1, 1), where f(x, y) = 0. Now suppose the starting point is randomly initialized in the open interval (0, 1) for both x and y. Find the global minimum using:

a). Steepest (Gradient) descent method

                                                  w(k 1)  w(k) g(k)

with learning rate η = 0.001. Record the number of iterations when f(x, y) converges to (or very close to) 0 and plot out the trajectory of (x, y) in the 2-dimensional space. Also plot out the function value as it approaches the global minimum. What would happen if a larger learning rate, say η = 0.5, is used?
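For part a), a minimal MATLAB sketch might look like the following (the gradient is differentiated from the function above; the stopping threshold and variable names are illustrative choices, not prescribed):

eta = 0.001;                                 % learning rate
w = rand(2, 1);                              % random start in (0, 1) for x and y
traj = []; fvals = [];
for k = 1:1e6
    x = w(1); y = w(2);
    f = (1 - x)^2 + 100*(y - x^2)^2;
    fvals(end+1) = f; traj(end+1, :) = w';   %#ok<AGROW> record the history
    if f < 1e-6, break; end                  % "very close to" the global minimum
    g = [-2*(1 - x) - 400*x*(y - x^2);       % df/dx
         200*(y - x^2)];                     % df/dy
    w = w - eta*g;                           % w(k+1) = w(k) - eta*g(k)
end
plot(traj(:, 1), traj(:, 2), '.-'); xlabel('x'); ylabel('y');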

b). Newton's method (as discussed on page 13 of the Lecture Four slides)

w(n) H1(n)g(n)

Record the number of iterations when f(x, y) converges to (or very close to) 0 and plot out the trajectory of (x, y) in the 2-dimensional space. Also plot out the function value as it approaches the global minimum.
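Similarly for part b), a sketch of the Newton update with the Hessian written out analytically (again illustrative, not a prescribed solution):

w = rand(2, 1);                              % random start in (0, 1)
for k = 1:100
    x = w(1); y = w(2);
    f = (1 - x)^2 + 100*(y - x^2)^2;
    if f < 1e-10, break; end
    g = [-2*(1 - x) - 400*x*(y - x^2); 200*(y - x^2)];
    H = [2 - 400*(y - x^2) + 800*x^2, -400*x;    % Hessian of f
         -400*x,                       200];
    w = w - H\g;                             % w <- w - inv(H)*g
end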

Q2. Function Approximation
Consider using an MLP to approximate the following function:

y = 1.2 sin(πx) − cos(2.4πx),  for x ∈ [−2, 2].

The training set is generated by dividing the domain [−2, 2] with a uniform step length of 0.05, while the test set is constructed by dividing the domain [−2, 2] with a uniform step length of 0.01. You may use the MATLAB neural network toolbox to implement an MLP (see the Appendix for guidance) and do the following experiments:
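For instance, assuming the function form given above, the two data sets can be generated directly (variable names here are illustrative):

x_train = -2:0.05:2;
y_train = 1.2*sin(pi*x_train) - cos(2.4*pi*x_train);
x_test  = -2:0.01:2;
y_test  = 1.2*sin(pi*x_test) - cos(2.4*pi*x_test);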

a). Use the sequential mode with the BP algorithm and experiment with the following MLP structures: 1-n-1 (where n = 1, 2, ..., 10, 20, 50, 100). For each architecture, plot out the outputs of the MLP for the test samples after training and compare them to the desired outputs. Try to determine whether it is under-fitting, proper fitting, or over-fitting. Identify the minimal number of hidden neurons from the experiments, and check whether the result is consistent with the guideline given in the lecture slides. Compute the outputs of the MLP at x = −3 and x = +3, and see whether the MLP can make reasonable predictions outside the input range covered by the training set.
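One way to realize sequential (pattern-by-pattern) training in the toolbox is to call adapt() on cell arrays, which updates the weights after every sample. The sketch below assumes fitnet with the 'learngd' weight-learning rule and a fixed epoch count; these are illustrative choices, not the only valid ones:

n = 10;                              % hidden neurons: 1-n-1 architecture
net = fitnet(n);
net.adaptFcn = 'adaptwb';            % adapt weights and biases sample by sample
net.inputWeights{1,1}.learnFcn = 'learngd';
net.layerWeights{2,1}.learnFcn = 'learngd';
net.biases{1}.learnFcn = 'learngd';
net.biases{2}.learnFcn = 'learngd';
net = configure(net, x_train, y_train);
for epoch = 1:500
    idx = randperm(numel(x_train));  % reshuffle the training samples each epoch
    net = adapt(net, num2cell(x_train(idx)), num2cell(y_train(idx)));
end
y_pred = net(x_test);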

b). Use the batch mode with trainlm algorithm to repeat the above procedure.

c). Use the batch mode with trainbr algorithm to repeat the above procedure.
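For parts b) and c), batch-mode training goes through train() instead; a sketch (swap 'trainlm' for 'trainbr' in part c)):

net = fitnet(n, 'trainlm');          % Levenberg-Marquardt batch training
net = train(net, x_train, y_train);
y_pred = net(x_test);
plot(x_test, y_test, 'b-', x_test, y_pred, 'r--');
net([-3, 3])                         % outputs outside the training domain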

 

Q3. Facial Attribute Recognition
A multi-layer perceptron (MLP) can be used to solve real-world pattern recognition problems. In this assignment, an MLP will be designed to handle a binary classification task. Specifically, students are divided into 3 groups based on matric numbers, and each group is assigned a different task, as illustrated in the following table.

Group ID | Task                       | Example (labels)
---------|----------------------------|--------------------------
1        | Gender Classification      | Male [1] / Female [0]
2        | Smile Face Detection       | Smile [1] / Non-Smile [0]
3        | Glasses Wearing Detection  | Glass [1] / Non-Glass [0]
You may download the zipped dataset (Face Database.zip) from IVLE. After unzipping, you will find two folders, TrainImages and TestImages, which hold the training set and the test set respectively. As shown in the table above, the training/test data are 1,000/250 RGB images of human faces, and the ground-truth labels associated with each image are given in the .att file that shares the same filename as the corresponding image. Note that there are 3 binary labels in each .att file, as shown below, but you only need one of them, according to your assigned group.

--------------------------------------------------------

1     |      male[1]/female[0]             for group 1

0     |      smile[1]/non-smile[0]       for group 2

0     |      glass[1]/non-glass[0]        for group 3

--------------------------------------------------------

In order to find your group, you need to calculate ‘mod(LTD, 3) + 1’ where LTD is the last two digits of your matric number, e.g. A1234567X is assigned to group mod(67, 3) + 1 = 2 (Smile Face Detection).

All the images are provided in JPEG format with size 101*101. You can use I = imread(filename) to read these image files, where filename specifies the relative path to an image (for code efficiency, you may use the function dir() to get the filenames of all the images within a folder). The returned value I is an array (101-by-101-by-3 in this assignment) containing the image data. For example,

I = imread('TrainImages/Abba_Eban_0001.jpg');

will read the image 'Abba_Eban_0001.jpg' from the training set into the MATLAB workspace. For simplicity, grayscale images are used in this assignment, and you can convert an RGB image into a grayscale image with the function rgb2gray(): G = rgb2gray(I);

Then you get the grayscale image as a 101-by-101 array, which can be displayed using imshow(). In order to process all the image data efficiently, you may need to rearrange the matrix-form data G into a vector by

V = G(:);

and the resulting V is a column vector whose elements are taken column-wise from G. Subsequently, you can group all the training images together using

train_images = [V1, V2, …];

and group all the test images together in the same way. These matrices (101*101-by-image_number) are then used as input to the networks.
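Putting these steps together, a loading loop might look like this (a sketch; it assumes the folder layout described above):

files = dir('TrainImages/*.jpg');                 % filenames of all training images
train_images = zeros(101*101, numel(files));
for k = 1:numel(files)
    I = imread(fullfile('TrainImages', files(k).name));
    G = rgb2gray(I);                              % 101-by-101 grayscale image
    train_images(:, k) = double(G(:));            % one image per column
end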

As mentioned before, the ground-truth labels are stored in the .att file associated with each image and can be extracted by

L = load('TrainImages/Abba_Eban_0001.att'); l = L(i);

where i is your group ID and l is a binary number holding the ground-truth label of image ‘Abba_Eban_0001.jpg’ for your assigned task.
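The labels can be collected in the same order as the images, e.g. (a sketch; the group ID 2 is only an example, and files is the dir() listing from above):

i = 2;                                            % your group ID
train_labels = zeros(1, numel(files));
for k = 1:numel(files)
    attname = strrep(files(k).name, '.jpg', '.att');
    L = load(fullfile('TrainImages', attname));
    train_labels(k) = L(i);                       % keep only your task's label
end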

You are required to complete the following tasks:

a) For your assigned task, plot and analyse the label distribution of both the training set and the test set.

b) Apply Rosenblatt's perceptron (single-layer perceptron) to your assigned task. After the training procedure, calculate the classification accuracy on both the training set and the test set, and evaluate the performance of the perceptron.
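A minimal sketch of the classic perceptron learning rule, w ← w + η(d − y)x with a hard-limit activation (the learning rate, epoch count, and pixel scaling below are illustrative choices):

X = [ones(1, size(train_images, 2)); train_images/255];  % bias row + scaled pixels
d = train_labels;                    % desired outputs in {0, 1}
w = zeros(size(X, 1), 1);
eta = 0.01;
for epoch = 1:50
    for k = randperm(size(X, 2))
        y = double(w'*X(:, k) >= 0);         % hard-limit activation
        w = w + eta*(d(k) - y)*X(:, k);      % no change when y == d(k)
    end
end
train_acc = mean(double(w'*X >= 0) == d);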

c) The original input dimension is 10,201 (101*101), which may be redundant and leave room for reduction. Try naively downsampling the images, or apply a more sophisticated technique such as PCA. Then retrain the perceptron from b) on the dimensionally reduced images and compare the results. (You may use imresize() and pca() for this task.)
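Both routes can be sketched as follows (the 26×26 target size and the 100 retained components are illustrative; test_images is assumed to be assembled the same way as train_images):

G_small = imresize(G, [26 26]);               % naive downsampling of one image
[coeff, score] = pca(train_images');          % PCA: rows are observations
train_reduced = score(:, 1:100)';             % first 100 principal components
mu = mean(train_images, 2);                   % training mean used for centring
test_reduced = coeff(:, 1:100)' * bsxfun(@minus, test_images, mu);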

d) Apply an MLP to your assigned task using batch-mode training. After the training procedure, calculate the classification accuracy on both the training set and the test set, and evaluate the performance of the network.

e) Apply an MLP to your assigned task using sequential-mode training. After the training procedure, calculate the classification accuracy on both the training set and the test set, and evaluate the performance of the network. Compare the result with that of d) and make your recommendation on these two approaches.
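For the batch mode in d), a sketch using patternnet from the Appendix (the hidden layer size and the 0.5 decision threshold are illustrative; test_labels is assumed to be collected like train_labels; for e), the adapt() loop from Q2 a) can be reused):

net = patternnet(30, 'trainscg');             % 30 hidden neurons (illustrative)
net = train(net, train_images, train_labels);
test_pred = net(test_images) >= 0.5;          % threshold the network output
test_acc = mean(test_pred == test_labels);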

f) You may notice that all the images, whether for training or test, are already aligned by placing the eyes at a certain location. Do you think it is necessary to do so? Justify your answer.

Important note: There are many design and training issues to be considered when you apply neural networks to solve real-world problems. We have discussed most of them in Lecture Four. Some of them have clear answers, some of them may rely on empirical rules, and some of them have to be determined by trial and error. I believe that you will have more fun playing with these design parameters and making your own judgment rather than solving the problem with a prescribed set of parameters. Hence, there is no standard answer to this problem, and the marking will be based upon the whole procedure rather than the final classification accuracy. (Use the "help" and "doc" commands in MATLAB to get familiar with the functions that you don't know, and Google everything that confuses you.)

Appendix
Create a feed-forward backpropagation network using the MATLAB toolbox:

net = patternnet(hiddenSizes, trainFcn, performFcn)
where the arguments are specified as follows:

hiddenSizes -- row vector of one or more hidden layer sizes (default = 10)
trainFcn -- training function (default = 'trainscg')
performFcn -- performance function (default = 'crossentropy')

trainFcn specifies the optimization algorithm used to update the network during training, and there are many choices, e.g. 'traingd' (gradient descent), 'traingdx' (gradient descent with momentum and an adaptive learning rate), 'trainscg' (scaled conjugate gradient), 'trainlm' (Levenberg-Marquardt), and 'trainbr' (Bayesian regularization).
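For example, a network with two hidden layers trained by Levenberg-Marquardt with a mean-squared-error cost could be created as follows (the sizes and options are illustrative):

net = patternnet([20 10], 'trainlm', 'mse');
net.trainParam.epochs = 500;        % cap the number of training epochs
view(net)                           % inspect the resulting architecture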

 
