Learning Outcomes
This assessment relates to the following learning outcomes of the course.
• Demonstrate advanced knowledge of data mining concepts and techniques.
• Apply the techniques of clustering, classification, association finding, feature selection and visualisation to real-world data.
• Apply data mining software and toolkits in a range of applications.
3 Assignment Details
3.1 Part 1: Classification (20 marks)
This part of the assignment is concerned with the file:
/KDrive/SEH/SCSIT/Students/Courses/COSC2111/DataMining/data/arff/UCI/credit-g.arff.
The data comes from a German credit bureau. The main goal is to achieve the highest classification accuracy with the lowest amount of overfitting.
1. Run the following classifiers, with their default parameters, on this data: ZeroR, OneR, J48 and IBk, and construct a table of the training and cross-validation errors. You can get the training error by selecting “Use training set” as the test option. What do you conclude from these results? (A command-line sketch for scripting these runs appears after this question list.)
Run No | Classifier | Parameters | Training Error | Cross-valid Error | Overfitting
-------+------------+------------+----------------+-------------------+------------
1      | ZeroR      | None       | 30.0%          | 30.0%             | None
...    | ...        | ...        | ...            | ...               | ...
2. Using the J48 classifier, can you find a combination of the C and M parameter values that minimizes the amount of overfitting? Include the results of your best five runs, including the parameter values, in your table of results.
3. Reset J48 parameters to their default values. What is the effect of lowering the number of examples in the training set? Include your runs in your table of results.
4. Using the IBk classifier, can you find the value of k that minimizes the amount of overfitting? Include your runs in your table of results.
5. Try a number of other classifiers. Aside from ZeroR, which classifiers are best and worst in terms of predictive accuracy? Include 5 runs in your table of results.
6. Compare the accuracy of ZeroR, OneR and J48. What do you conclude?
7. What golden nuggets did you find, if any?
8. [OPTIONAL] Use an attribute selection algorithm to get a reduced attribute set. How does the accuracy on the reduced set compare with the accuracy on the full set?
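The runs above can also be scripted from the Weka command line instead of clicking through the Explorer. The following is a minimal sketch, assuming weka.jar is on the classpath and credit-g.arff has been copied to the working directory (exact option names can vary between Weka versions, so check with -h if a run fails):

    # With only -t given, Weka prints both the error on the training data
    # and the stratified 10-fold cross-validation error in one run.
    java -cp weka.jar weka.classifiers.rules.ZeroR -t credit-g.arff
    java -cp weka.jar weka.classifiers.rules.OneR -t credit-g.arff

    # J48: -C is the pruning confidence, -M the minimum instances per leaf
    # (defaults 0.25 and 2); lower -C and higher -M give simpler trees.
    java -cp weka.jar weka.classifiers.trees.J48 -t credit-g.arff -C 0.25 -M 2

    # IBk: -K is the number of nearest neighbours.
    java -cp weka.jar weka.classifiers.lazy.IBk -t credit-g.arff -K 5

The gap between the two reported error rates is a simple measure of overfitting for the table above.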
Report Length: Up to two pages, not including the table of runs.
3.2 Part 2: Numeric Prediction (10 marks)
This part of the assignment is concerned with the file:
/KDrive/SEH/SCSIT/Students/Courses/COSC2111/DataMining/data/arff/numeric/cholesterol.arff.
The task is to predict the value of the “chol” attribute. The main goal is to achieve the lowest mean absolute error with the lowest amount of overfitting.
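For reference, the mean absolute error that Weka reports is the average absolute deviation of the predictions from the actual values; for n evaluation instances with actual values y_i and predictions \hat{y}_i:

    \mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|

A ZeroR baseline predicts the mean chol value for every instance, so its MAE shows how much error the other classifiers actually remove.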
1. Run the following classifiers, with default parameters, on this data: ZeroR, M5P and IBk, and construct a table of the training and cross-validation errors. You may want to turn on “Output Predictions” to get a better sense of the magnitude of the error on each example. What do you conclude from these results?
2. Explore different parameter settings for M5P and IBk. Which values give the best performance in terms of predictive accuracy and overfitting? Include the results of the best five runs in your table of results. (See the command-line sketch after this list.)
3. Investigate three other classifiers for numeric prediction and their associated parameters. Include your best five runs in your table of results. Which classifier gives the best performance in terms of predictive accuracy and overfitting?
4. What golden nuggets did you find, if any?
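As in Part 1, these runs can be scripted from the command line. A minimal sketch, assuming weka.jar and a local copy of cholesterol.arff (option names may differ slightly between Weka versions):

    # M5P model tree: -M is the minimum number of instances per leaf (default 4).
    java -cp weka.jar weka.classifiers.trees.M5P -t cholesterol.arff -M 4

    # IBk: -K is the number of nearest neighbours.
    java -cp weka.jar weka.classifiers.lazy.IBk -t cholesterol.arff -K 3

Each run reports the mean absolute error on the training data and under 10-fold cross-validation.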
Report Length: Up to one page, not including the table of runs.
3.3 Part 3: Clustering (10 marks)
This part is concerned with clustering the credit data of Part 1. Use only the attributes purpose, age, personal status and credit amount. The aim is to determine the number of clusters in the data and to assess whether any of the clusters are meaningful.
1. Run the k-means clustering algorithm (SimpleKMeans in Weka) on this data for the following values of K: 1, 2, 3, 4, 5, 10, 20. Analyse the resulting clusters. What do you conclude? (A command-line sketch for these runs appears after this question list.)
2. Choose a value of K and run the algorithm with different seeds. What is the effect of changing the seed?
3. Run the EM algorithm on this data with the default parameters and describe the output.
4. The EM algorithm can be quite sensitive to whether the data is normalized or not. Use the Weka Normalize filter (Preprocess --> Filter --> unsupervised --> attribute --> Normalize) to normalize the numeric attributes. What difference does this make to the clustering runs?
5. The algorithm can be quite sensitive to the values of minLogLikelihoodImprovementCV, minLogLikelihoodImprovementIterating and minStdDev. Explore the effect of changing these values. What do you conclude?
6. How many clusters do you think are in the data? Give an English language description of one of them.
7. Compare the use of k-means and EM for these clustering tasks. Which do you think is best? Why?
8. What golden nuggets did you find, if any?
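A minimal command-line sketch for these clustering runs, assuming weka.jar and a local copy of the credit data; the attribute indices passed to the Remove filter are placeholders that you will need to look up in the Preprocess tab:

    # Keep only the four attributes of interest (-V inverts the selection;
    # <indices> is a placeholder for their positions in the ARFF header).
    java -cp weka.jar weka.filters.unsupervised.attribute.Remove -V -R <indices> -i credit-g.arff -o credit-4attr.arff

    # SimpleKMeans: -N is the number of clusters K, -S the random seed.
    java -cp weka.jar weka.clusterers.SimpleKMeans -t credit-4attr.arff -N 3 -S 10

    # EM with default parameters.
    java -cp weka.jar weka.clusterers.EM -t credit-4attr.arff

    # Normalize the numeric attributes before re-running EM (question 4).
    java -cp weka.jar weka.filters.unsupervised.attribute.Normalize -i credit-4attr.arff -o credit-4attr-norm.arff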
Report Length: Up to one page.
3.4 Part 4: Association Finding (10 marks)
This part is concerned with association finding in the files groceries1.arff and groceries2.arff in the folder /KDrive/SEH/SCSIT/Students/Courses/COSC2111/DataMining/data/arff.
The main aim is to determine whether there are any significant associations in the data.
These files contain the same set of shopping transactions represented in two different ways. You can use a text viewer to look at the files.
1. What is the difference in representations?
2. Load the file groceries1.arff into Weka and run the Apriori algorithm on this data. You might need to restrict the number of attributes and/or the number of examples. What significant associations can you find? (A command-line sketch appears after this question list.)
3. Explore different metric types and their associated parameters. What do you find?
4. Load the file groceries2.arff into Weka and run the Apriori algorithm on this data. What do you find?
5. Explore different metric types and their associated parameters. What do you find?
6. Try the other associators. How do they differ from Apriori?
7. What golden nuggets did you find, if any?
8. [OPTIONAL] Can you find any meaningful associations in the bank data?
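A minimal command-line sketch for the Apriori runs, assuming weka.jar and local copies of the files (option names may vary between Weka versions):

    # -N: number of rules to report; -T: metric type (0 = confidence, 1 = lift,
    # 2 = leverage, 3 = conviction); -C: minimum value of that metric;
    # -M: lower bound on minimum support.
    java -cp weka.jar weka.associations.Apriori -t groceries1.arff -N 20 -T 1 -C 1.1 -M 0.01

Varying -T and -C covers the metric-type exploration in questions 3 and 5.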
Report Length: Up to one page.