Q1. Function Approximation with RBFN
Consider using RBFN to approximate the following function:
y = 1.2 sin(πx) − cos(2.4πx),  for x ∈ [−1, 1]
The training set is constructed by sampling the range [−1, 1] with a uniform step length of 0.05, while the test set is constructed by sampling the range [−1, 1] with a uniform step length of 0.01. Assume that the observed outputs in the training set are corrupted by random noise as follows:
y(i) = 1.2 sin(πx(i)) − cos(2.4πx(i)) + 0.3 n(i)
where the random noise n(i) is Gaussian with zero mean and a standard deviation of one, and can be generated by the MATLAB command randn. Note that the test set is not corrupted by noise. Perform the following computer experiments:
a) Use the exact interpolation method (as described on pages 16-21 in the slides of lecture five) and determine the weights of the RBFN. Assume the RBF is a Gaussian function with a standard deviation of 0.1. Evaluate the approximation performance of the resulting RBFN using the test set.
b) Follow the strategy of "Fixed Centers Selected at Random" (as described on page 37 in the slides of lecture five) and randomly select 20 centers among the sampling points. Determine the weights of the RBFN. Evaluate the approximation performance of the resulting RBFN using the test set, and compare it to the result of part a).
c) Use the same centers and widths as those determined in part a) and apply the regularization method as described on pages 42-45 in the slides for lecture five. Vary the value of the regularization factor and study its effect on the performance of RBFN.
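As a starting point, parts a) and c) both reduce to solving one linear system. The sketch below is a minimal illustration, not a prescribed solution; all variable names are my own. Setting lambda = 0 recovers exact interpolation, while lambda > 0 gives the regularized weights of part c):

```matlab
sigma  = 0.1;  lambda = 0;                  % lambda = 0 -> exact interpolation
x_train = (-1:0.05:1)';                     % training inputs, step 0.05
y_train = 1.2*sin(pi*x_train) - cos(2.4*pi*x_train) + 0.3*randn(size(x_train));
x_test  = (-1:0.01:1)';                     % test inputs, step 0.01
y_test  = 1.2*sin(pi*x_test) - cos(2.4*pi*x_test);   % clean test targets

Phi = exp(-(x_train - x_train').^2 / (2*sigma^2));   % Gaussian interpolation matrix
w   = (Phi + lambda*eye(size(Phi))) \ y_train;       % weights (regularized if lambda > 0)

Phi_test = exp(-(x_test - x_train').^2 / (2*sigma^2));
y_pred   = Phi_test * w;
mse = mean((y_pred - y_test).^2);           % test-set mean squared error
```

Part c) then amounts to sweeping lambda over several orders of magnitude and comparing the resulting test-set errors.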
Q2. Handwritten Character Classification using RBFN
In this task, you will build a handwritten character classifier using an RBFN. The training data is provided in characters10.mat, which contains 3,500 grayscale images (of size 28×28) over 10 classes as listed below.
Character:  N   E   U   R   A   L   T   W   O   K
Label:      0   1   2   3   4   5   6   7   8   9
(The "Example" row of sample images is omitted here.)
Specifically, each class possesses 300 images for training and 50 images for test. Please select two classes according to the last two different digits of your matric number (e.g. A0642311, choose classes 3 and 1; A1234567, choose classes 6 and 7).
In MATLAB, the following code can be used to load the training and test data:
-------------------------------------------------------------------------------------------------------
load('characters10.mat');
% train_data  - training data, 3000x784 matrix
% train_label - labels of the training data, 3000x1 vector
% test_data   - test data, 500x784 matrix
% test_label  - labels of the test data, 500x1 vector
-------------------------------------------------------------------------------------------------------

After loading the data, you may view them using the code below:
-------------------------------------------------------------------------------------------------------
imshow(reshape(train_data(column_no, :), [28,28]));
-------------------------------------------------------------------------------------------------------

To select a few classes for training, you may refer to the following code:
-------------------------------------------------------------------------------------------------------
trainIdx = find(train_label==0 | train_label==1 | train_label==2); % select classes 0, 1, 2
trainY = train_label(trainIdx);
trainX = train_data(trainIdx,:);
-------------------------------------------------------------------------------------------------------
Please use the following code to evaluate:
-------------------------------------------------------------------------------------------------------
TrAcc = zeros(1,1000);
TeAcc = zeros(1,1000);
thr = zeros(1,1000);
TrN = length(TrLabel);
TeN = length(TeLabel);
for i = 1:1000
    t = (max(TrPred)-min(TrPred)) * (i-1)/1000 + min(TrPred);
    thr(i) = t;
    TrAcc(i) = (sum(TrLabel(TrPred<t)==0) + sum(TrLabel(TrPred>=t)==1)) / TrN;
    TeAcc(i) = (sum(TeLabel(TePred<t)==0) + sum(TeLabel(TePred>=t)==1)) / TeN;
end
plot(thr,TrAcc,'.-',thr,TeAcc,'^-'); legend('tr','te');
-------------------------------------------------------------------------------------------------------

TrPred and TePred record respectively the predicted labels of the training data and test data, and are determined by TrPred(j) = Σ_{i=0}^{m} w_i φ_i(TrData(j,:)) and TePred(j) = Σ_{i=0}^{m} w_i φ_i(TeData(j,:)), where m is the number of hidden neurons. TrData and TeData are the training and test data selected based on your matric number. TrLabel and TeLabel record respectively the ground-truth labels (convert to {0,1} before use!) of the training data and test data.
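For reference, one way to produce TrPred and TePred consistent with the formula above is sketched here. It assumes a trained RBFN with centers C (m×784), a common width sigma, and a weight vector w whose first entry is the bias w0; all of these names are assumptions, and pdist2 requires the Statistics and Machine Learning Toolbox:

```matlab
PhiTr  = exp(-pdist2(TrData, C).^2 / (2*sigma^2));   % hidden-layer outputs, training
TrPred = [ones(size(PhiTr,1),1), PhiTr] * w;         % leading column of ones -> bias w0
PhiTe  = exp(-pdist2(TeData, C).^2 / (2*sigma^2));   % hidden-layer outputs, test
TePred = [ones(size(PhiTe,1),1), PhiTe] * w;
```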
You are required to complete the following tasks:
a) Use the "Exact Interpolation" method (as described in pages 16-29 of lecture five) and apply regularization (as described in pages 42-45 of lecture five). Assume the RBF is a Gaussian function with a standard deviation of 100. First, determine the weights of the RBFN without regularization and evaluate its performance; then vary the value of the regularization factor and study its effect on the resulting RBFN's performance.
b) Follow the strategy of "Fixed Centers Selected at Random" (as described in page 37 of lecture five). Randomly select 100 centers among the training samples. First, determine the weights of the RBFN with the width fixed at an appropriate size and compare its performance to the result of a); then vary the value of the width from 0.1 to 10000 and study its effect on the resulting RBFN's performance.
c) Try classical "K-Means Clustering" (as described in pages 38-39 of lecture five) with 2 centers. First, determine the weights of the RBFN and evaluate its performance; then visualize the obtained centers and compare them to the mean of the training images of each class. State your findings.
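For part b), a minimal sketch of the "Fixed Centers Selected at Random" setup might look as follows. The width rule sigma = d_max/sqrt(2m) is the one commonly given for this strategy; the variable names (TrData, and TrLabel already converted to {0,1}) are assumptions, and pdist/pdist2 require the Statistics and Machine Learning Toolbox:

```matlab
m    = 100;
idx  = randperm(size(TrData,1), m);          % pick 100 random training samples
C    = TrData(idx, :);                       % fixed centers
dmax = max(pdist(C));                        % maximum distance between centers
sigma = dmax / sqrt(2*m);                    % one common width for all centers
Phi  = [ones(size(TrData,1),1), exp(-pdist2(TrData,C).^2 / (2*sigma^2))];
w    = Phi \ TrLabel;                        % least-squares weights (bias in column 1)
```

Varying the width then only changes sigma while the centers stay fixed.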
Q3. Self-Organizing Map (SOM)
a) Write your own code to implement a SOM that maps a 1-dimensional output layer of 36 neurons to a "sinusoid curve". Display the trained weights of each output neuron as points in a 2D plane, and plot lines connecting every pair of topologically adjacent neurons (e.g. the 2nd neuron is connected to the 1st and 3rd neurons by lines). The training points sampled from the "sinusoid curve" can be obtained by the following code:
-------------------------------------------------------------------------------------------------------
x = linspace(-pi,pi,400);
trainX = [x; 2*sin(x)];                      % 2x400 matrix
plot(trainX(1,:),trainX(2,:),'+r'); axis equal
-------------------------------------------------------------------------------------------------------
b) Write your own code to implement a SOM that maps a 2-dimensional output layer of 36 (i.e. 6×6) neurons to a "circle". Display the trained weights of each output neuron as a point in the 2D plane, and plot lines connecting every pair of topologically adjacent neurons (e.g. neuron (2,2) is connected to neurons (1,2), (2,3), (3,2) and (2,1) by lines). The training points sampled from the "circle" can be obtained by the following code:
-------------------------------------------------------------------------------------------------------
X = randn(800,2);
s2 = sum(X.^2,2);
trainX = (X .* repmat(gammainc(s2/2,1).^(1/2)./sqrt(s2), 1, 2))';   % 2x800 matrix
plot(trainX(1,:),trainX(2,:),'+r'); axis equal
-------------------------------------------------------------------------------------------------------
c) Write your own code to implement a SOM that clusters and classifies handwritten characters. The training data is provided in characters10.mat (as introduced in Q2). Please omit two classes according to the last two different digits of your matric number (e.g. A0642311, ignore classes 3 and 1; A1234567, ignore classes 6 and 7.)
After loading the data, complete the following tasks:
c-1) Print out the corresponding conceptual/semantic map of the trained SOM (as described in page 24 of lecture six) and visualize the trained weights of each output neuron on a 10×10 map (a simple way is to reshape the weights of a neuron into a 28×28 matrix and display it as an image). Comment on your observations, if any.
c-2) Apply the trained SOM to classify the test images (in test_data). The classification can be done in the following fashion: input a test image to SOM and find out the winner neuron; then label the test image with the winner neuronβs label (note: labels of all the output neurons have already been determined in c-1)).
Calculate the classification accuracy on the whole test set and discuss your findings.
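A sketch of the winner-takes-all classification in c-2), assuming the trained weight matrix W is 100×784 (one row per output neuron) and neuron_label holds the labels assigned in c-1); both names are assumptions:

```matlab
correct = 0;
for j = 1:size(TeData,1)
    [~, winner] = min(sum((W - TeData(j,:)).^2, 2));   % best-matching (winner) neuron
    correct = correct + (neuron_label(winner) == TeLabel(j));
end
acc = correct / size(TeData,1);              % classification accuracy on the test set
```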
The recommended values of design parameters are:
1. The size of the SOM is 1×36 for a), 6×6 for b), and 10×10 for c).
2. The total iteration number N is set to be 600 for a) & b) and 1000 for c). Only the first phase (self-organizing) of learning is used in this experiment.
3. The learning rate η(n) is set as:

   η(n) = η₀ exp(−n/τ₂),  n = 0, 1, 2, …

   where η₀ is the initial learning rate and is set to 0.1, and τ₂ is the time constant and is set to N.
4. The time-varying neighborhood function is:

   h_{j,i(x)}(n) = exp(−d²_{j,i} / (2σ(n)²)),  n = 0, 1, 2, …

   where d_{j,i} is the distance between neuron j and winner i, and σ(n) is the effective width, which satisfies:

   σ(n) = σ₀ exp(−n/τ₁),  n = 0, 1, 2, …

   where σ₀ is the initial effective width and is set according to the size of the output layer's lattice, and τ₁ is the time constant, chosen as τ₁ = N / log(σ₀).
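Putting the recommended schedules together, the self-organizing phase for a) might be sketched as follows. This is an illustrative assumption-laden sketch: σ₀ = 18 (half the 1×36 lattice length) is my reading of "set according to the size of the output layer's lattice", and trainX is assumed to be the 2×400 matrix produced by the snippet in a):

```matlab
M = 36;  N = 600;                             % lattice size and total iterations
eta0 = 0.1;  sigma0 = M/2;                    % initial learning rate and width (assumed)
tau1 = N/log(sigma0);  tau2 = N;              % time constants from the schedules above
W   = rand(2, M);                             % random initial weights, one column per neuron
pos = 1:M;                                    % 1-D lattice coordinates
for n = 0:N-1
    x = trainX(:, randi(size(trainX,2)));     % draw a random training sample
    [~, i] = min(sum((W - x).^2, 1));         % winner neuron i(x)
    sigma_n = sigma0 * exp(-n/tau1);          % effective width sigma(n)
    eta_n   = eta0   * exp(-n/tau2);          % learning rate eta(n)
    h = exp(-(pos - pos(i)).^2 / (2*sigma_n^2));   % neighborhood h_{j,i(x)}(n)
    W = W + eta_n * (h .* (x - W));           % update every neuron toward x
end
```

For b) and c) the same loop applies with 2-D lattice coordinates, so that d_{j,i} is measured on the 6×6 or 10×10 grid.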