CPSC585 - Project 3

Overview of Submission 
For this project, we wanted to have the following features:

●      the ability to run more than one model by avoiding hard-coding the actual network

●      the ability to declaratively specify a network to be built for our classification task

●      the ability to run these models without having to go back into the Python code

●      the ability to generate different varieties of networks programmatically

●      the ability to manually specify another network without having to modify the code

●      the ability to test different configurations with varying hyperparameters

●      the ability to run the same network multiple times

●      the ability to store the result/accuracy of the classification in a file for further analysis

          

Model.csv
A model can be created independently with a text editor or generated by a program.  The model file describes how the main notebook should build the model.  Each model is specified in its own file (example: model_ref.csv):

The first non-comment line lists the hyperparameters for the run.  The comments describe the values and are self-explanatory.  One value, processCount, specifies how many times the model is run, but only if the computed accuracy exceeds a threshold of 90%. code:

# Hyperparameters

# processCount, epochCount, batchSize, loss, optimizer

3,1000,2000,categorical_crossentropy,RMSprop
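
As an illustration, here is a minimal Python sketch of parsing this line.  The field order follows the comment line above; the function name and return structure are assumptions, not the notebook's actual code. code:

# Hypothetical sketch: parse the hyperparameter line of a model file.
def parse_hyperparameters(line: str) -> dict:
    process_count, epoch_count, batch_size, loss, optimizer = line.strip().split(",")
    return {
        "processCount": int(process_count),  # runs repeated only if accuracy > 90%
        "epochCount": int(epoch_count),
        "batchSize": int(batch_size),
        "loss": loss,                        # e.g. categorical_crossentropy
        "optimizer": optimizer,              # e.g. RMSprop
    }

print(parse_hyperparameters("3,1000,2000,categorical_crossentropy,RMSprop"))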

 

The layer description lines describe the different layers.  Allowable values (not a complete list) include conv2d, dense, dropout, and flatten.  The first layer line, conv2d,relu,32,3,3,l1,0.0002, means that a conv2d layer is used with a relu activation function, a filter count of 32, and a kernel size of (3, 3).  The kernel regularizer is l1 with a value of 0.0002.

code:

# Layer Description

conv2d,relu,32,3,3,l1,0.0002

conv2d,relu,64,3,3,,

maxpooling2d,2,2

dropout,0.25
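
For illustration, here is a minimal Python sketch of how such layer lines could be translated into Keras layers.  The build_layer helper and the field layout assumed for dense are illustrative assumptions, not the notebook's actual code. code:

# Hypothetical sketch: translate one layer-description line into a Keras layer.
from tensorflow.keras import layers, regularizers

def build_layer(line: str):
    parts = line.strip().split(",")
    kind = parts[0]
    if kind == "conv2d":
        # conv2d,<activation>,<filters>,<kernel_h>,<kernel_w>,<regularizer>,<value>
        activation, filters, kh, kw, reg, reg_val = parts[1:7]
        kernel_reg = None
        if reg == "l1":
            kernel_reg = regularizers.l1(float(reg_val))
        elif reg == "l2":
            kernel_reg = regularizers.l2(float(reg_val))
        return layers.Conv2D(int(filters), (int(kh), int(kw)),
                             activation=activation,
                             kernel_regularizer=kernel_reg)
    if kind == "maxpooling2d":
        return layers.MaxPooling2D(pool_size=(int(parts[1]), int(parts[2])))
    if kind == "dropout":
        return layers.Dropout(float(parts[1]))
    if kind == "flatten":
        return layers.Flatten()
    if kind == "dense":
        # Assumed layout: dense,<activation>,<units>
        return layers.Dense(int(parts[2]), activation=parts[1])
    raise ValueError("Unknown layer type: " + kind)

Given a list of such lines, [build_layer(l) for l in lines] would yield the layer stack for a Keras Sequential model.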

 

Execution
During execution of the main notebook, the program loads all model files in the ./models directory and executes each one.  Once a model has been executed and its score computed, the score is saved in ./models/results/scoreboard.csv.  The accuracy scores are sorted with the best at the top.  A model may be run multiple times; once its runs are done, the file is moved to ./models/archive.  A model might encounter an exception; those files are stored in ./models/archive/bad.
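
A minimal Python sketch of this loop, under stated assumptions: run_model is a hypothetical placeholder for the build/train/evaluate step, and the scoreboard is rewritten rather than merged with earlier runs. code:

# Hypothetical sketch of the execution loop described above.
import csv, glob, os, shutil

def run_model(path):
    """Placeholder for the notebook's build/train/evaluate step (assumed)."""
    raise NotImplementedError

def run_all_models(model_dir="./models"):
    results = []
    for path in sorted(glob.glob(os.path.join(model_dir, "*.csv"))):
        name = os.path.basename(path)
        try:
            accuracy = run_model(path)
            results.append((accuracy, name))
            # Finished models are moved to ./models/archive.
            shutil.move(path, os.path.join(model_dir, "archive", name))
        except Exception:
            # Models that raise an exception go to ./models/archive/bad.
            shutil.move(path, os.path.join(model_dir, "archive", "bad", name))
    # Best accuracy first, matching the scoreboard ordering.
    results.sort(reverse=True)
    with open(os.path.join(model_dir, "results", "scoreboard.csv"), "w", newline="") as f:
        csv.writer(f).writerows(results)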

Notebook
The submission includes two notebooks:

●      project3-2.ipynb - this notebook runs all models and is the main program.

●      generate_model.ipynb - this notebook can be used to generate additional models for analysis.

 

 

Questions
How does the accuracy compare to its performance on MNIST? 

We ran the reference script mnist_cnn.py.  We were able to get 0.99190, which is extremely good.

 

We were able to train several models with very good results.  Using the application developed by the team, we compiled the top results and stored them in ./models/results/scoreboard.csv.  Looking at the current results, the top two accuracies were 0.94279 and 0.93480.  However, they do not compare to the mnist_cnn result, which was fantastic.

 
 

How does the accuracy compare to the MLP you trained in Project 2?

In Project 2, we were able to get our accuracy to around 91.4%.  This was not as good as our CNN results.

 

 

 
