CS433 - Higgs Boson Challenge

Introduction
In this project, you will learn to use the concepts we have seen in the lectures and practiced in the labs on a real-world dataset, start to finish. You will do exploratory data analysis to understand your dataset and your features, do feature processing and engineering to clean your dataset and extract more meaningful information, implement and use machine learning methods on real data, analyze your model, generate predictions, and report your findings.

Logistics
Group formation. For Project 1, you will work in a team of 3 students, by your choice. If you are still searching for teammates, please use the discussion forum on Moodle. A good data science team combines a diverse set of skills, and greatly benefits from inter-disciplinary backgrounds.

Deliverables at a glance. (More details and grading criteria are given further down.)

In Python. For this first project, we want you to implement and use the methods we have seen in class. You need to put all code in a GitHub Classroom repository of your team, here using this GitHub Classroom invitation link. The Python libraries allowed in this project are:

The Python standard library
NumPy
Visualization libraries (e.g. matplotlib, seaborn), but only for visualization purposes.

No other external libraries are allowed (e.g. Scikit-Learn, PyTorch, TensorFlow, ...). External libraries will be allowed in Project 2.

Written Report. You will write a maximum 2-page PDF report on your findings, using LaTeX. References are allowed to be put on an extra third page.
Competitive Part. To give you immediate feedback and a fair ranking, you can use the competition platform aicrowd.com to score your predictions. You can submit whenever you like, and almost as many times as you like, up until the final submission deadline (this part is not graded).
The Dataset. For this course, we are providing you with our own online competition based on a recently popular machine learning challenge: finding the Higgs boson, using original data from CERN.

Step 1 - Getting Started
Create an account using your epfl.ch email and head over to the competition arena: https://www.aicrowd.com/challenges/epfl-machine-learning-higgs

Then, download the training dataset, available in .csv format. To load the data, use the same code we used during the labs. You can find an example of a .csv loading function in our provided template code from labs 1 and 2.
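As an illustration, here is a minimal loading sketch using numpy.genfromtxt. The column layout assumed below (an id column, then a label column with 's'/'b' strings, then the feature columns) follows the original Higgs data; check the header of your downloaded file, since the exact layout may differ.

import numpy as np

def load_csv_data(path):
    """Load the training data; assumes columns: id, label ('s'/'b'), features."""
    labels = np.genfromtxt(path, delimiter=",", skip_header=1, dtype=str, usecols=1)
    data = np.genfromtxt(path, delimiter=",", skip_header=1)
    y = (labels == "s").astype(int)  # signal -> 1, background -> 0
    x = data[:, 2:]                  # feature columns
    ids = data[:, 0].astype(int)
    return y, x, ids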

Step 2 - Implement ML Methods
We want you to implement and use the methods we have seen in class and in the labs. You will need to provide working implementations of the functions in Table 1. If you have not finished them during the labs, you should start by implementing the first ones to have a working toolbox before diving into the dataset.

Function
Details
mean_squared_error_gd(y, tx, initial_w, max_iters, gamma)
Linear regression using gradient descent
mean_squared_error_sgd(y, tx, initial_w, max_iters, gamma)
Linear regression using stochastic gradient descent
least_squares(y, tx)
Least squares regression using normal equations
ridge_regression(y, tx, lambda_)
Ridge regression using normal equations
logistic_regression(y, tx, initial_w, max_iters, gamma)
Logistic regression using gradient descent or SGD (y ∈ {0,1})
reg_logistic_regression(y, tx, lambda_, initial_w, max_iters, gamma)
Regularized logistic regression using gradient descent or SGD (y ∈ {0,1}, with regularization term λ∥w∥²)
Table 1: List of functions to implement. In the above method signatures, for iterative methods, initial_w is the initial weight vector, gamma is the step size, and max_iters is the number of steps to run. lambda_ is always the regularization parameter. (Note that here we have used the trailing underscore because lambda is a reserved word in Python with a different meaning.) For SGD, you must use the standard mini-batch size of 1 (sample just one datapoint).

The mean squared error formula has a factor 0.5 to be consistent with the lecture notes: MSE(w) = (1/(2N)) Σn (yn − xnᵀw)².
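For concreteness, here is one possible NumPy sketch of this loss and of mean_squared_error_gd (an illustrative implementation, not the official solution):

import numpy as np

def compute_mse(y, tx, w):
    """Mean squared error with the 0.5 factor from the lecture notes."""
    e = y - tx @ w
    return 0.5 * np.mean(e ** 2)

def mean_squared_error_gd(y, tx, initial_w, max_iters, gamma):
    """Linear regression using gradient descent; returns (w, loss) for the last w."""
    w = initial_w
    for _ in range(max_iters):
        e = y - tx @ w
        gradient = -tx.T @ e / len(y)  # gradient of the 0.5-factor MSE
        w = w - gamma * gradient
    return w, compute_mse(y, tx, w)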

You should take care of the following:

Return type: Note that all functions should return: (w, loss), which is the last weight vector of the method, and the corresponding loss value (cost function). Note that while in previous labs you might have kept track of all encountered w for iterative methods, here we only want the last one. Moreover, the loss returned by the regularized methods (ridge_regression and reg_logistic_regression) should not include the penalty term (see the ridge regression sketch after this list).
File names: Please provide all function implementations in a single Python file, called implementations.py.
All code should be easily readable and commented.
Note that we will call your provided methods and evaluate them for correct implementation. We provide some basic tests to check your implementation in https://github.com/epfml/ML_course/tree/master/projects/project1/grading_tests.
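To make the return-type rule concrete for the regularized methods, here is a minimal ridge regression sketch. It assumes the course's (1/(2N)) MSE convention, which turns the λ∥w∥² penalty into the 2·N·lambda_ term in the normal equations below; note that the returned loss excludes the penalty term.

import numpy as np

def ridge_regression(y, tx, lambda_):
    """Ridge regression via the normal equations; returned loss excludes the penalty."""
    n, d = tx.shape
    a = tx.T @ tx + 2 * n * lambda_ * np.eye(d)
    w = np.linalg.solve(a, tx.T @ y)
    e = y - tx @ w
    return w, 0.5 * np.mean(e ** 2)  # MSE only, no lambda_ * ||w||^2 term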
Coding and experimenting is only one part of this project. It is at least equally important to write a convincing scientific report about what you did (the PDF deliverable). As space is limited, focus on clarity and describe the most impactful insights you found. More detailed instructions and criteria for what constitutes a good scientific report are provided below.

 

Physics Background
The Higgs boson is an elementary particle in the Standard Model of physics which explains why other particles have mass. Its discovery at the Large Hadron Collider at CERN was announced in March 2013. In this project, you will apply machine learning techniques to actual CERN particle accelerator data to recreate the process of "discovering" the Higgs particle.

For some background, physicists at CERN smash protons into one another at high speeds to generate even smaller particles as by-products of the collisions. Rarely, these collisions can produce a Higgs boson. Since the Higgs boson decays rapidly into other particles, scientists don't observe it directly, but rather measure its "decay signature", or the products that result from its decay process. Since many decay signatures look similar, it is our job to estimate the likelihood that a given event's signature was the result of a Higgs boson (signal) or some other process/particle (background). In practice, this means that you will be given a vector of features representing the decay signature of a collision event, and asked to predict whether this event was signal (a Higgs boson) or background (something else). To do this, you will use the binary classification techniques we have discussed in the lectures.

If you’re interested in more background on this dataset, we point you to the longer description here: https://higgsml.lal.in2p3.fr/files/2014/04/documentation_v1.8.pdf.

Note that understanding the physics background is not necessary to perform well in this machine learning challenge as part of the course.

Appendix
Guidelines for Machine Learning Projects
Now that you have implemented a few basic methods, you should use this toolbox on the dataset. Here are a few things that you might want to try.

Exploratory data analysis You should learn about your dataset - figure out which features are continuous, which ones are categorical, check if there are obvious relationships between the features, take a look at the distribution of each feature, and so on. Check https://en.wikipedia.org/wiki/Exploratory_data_analysis.
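As a starting point, here is a small sketch of such checks in NumPy/matplotlib (the function names are our own, for illustration):

import numpy as np
import matplotlib.pyplot as plt

def summarize_features(x):
    """Per-feature range, mean, and distinct-value count.
    Few distinct values hint at a categorical feature."""
    for j in range(x.shape[1]):
        col = x[:, j]
        print(f"feature {j:2d}: min {col.min():10.3f} max {col.max():10.3f} "
              f"mean {col.mean():10.3f} distinct {np.unique(col).size}")

def plot_feature_hist(x, j, bins=50):
    """Histogram of feature j, with labeled axes."""
    plt.hist(x[:, j], bins=bins)
    plt.xlabel(f"feature {j}")
    plt.ylabel("count")
    plt.show()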

Feature processing Cleaning your dataset by removing useless features and values, combining others, finding better representations of the features to feed your model, scaling the features, and so on. Check this article on feature engineering: http://machinelearningmastery.com/discover-feature-engineering-how-to-engineer-features-and-how-to-get-good-at-it/.
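For example, two common cleaning steps, sketched here under the assumption that undefined measurements are encoded with a sentinel value (in the Higgs data this is -999.0; verify against the dataset documentation):

import numpy as np

def standardize(x):
    """Shift each feature to zero mean and scale to unit variance."""
    mean, std = np.mean(x, axis=0), np.std(x, axis=0)
    std[std == 0] = 1.0  # leave constant features unchanged
    return (x - mean) / std

def impute_with_median(x, sentinel=-999.0):
    """Replace sentinel-coded entries by the per-feature median of observed values."""
    x = x.copy()
    for j in range(x.shape[1]):
        mask = x[:, j] == sentinel
        if mask.any():
            x[mask, j] = np.median(x[~mask, j])
    return x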

Determining whether a method overfits or underfits You should be able to diagnose whether your model is over- or underfitting the data, and take actions to fix the problems with your model. Recommended reading: Advice on applying machine learning methods by Andrew Ng: http://cs229.stanford.edu/materials/MLadvice.pdf.
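One simple diagnostic, sketched on synthetic data: fit models of increasing complexity and compare training and validation error. A growing gap between the two indicates overfitting; both errors staying high indicates underfitting.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(200)
idx = rng.permutation(200)
train, val = idx[:150], idx[150:]

for degree in (1, 3, 7, 12):
    tx = np.vander(x, degree + 1, increasing=True)  # polynomial feature expansion
    w = np.linalg.solve(tx[train].T @ tx[train], tx[train].T @ y[train])
    mse_tr = 0.5 * np.mean((y[train] - tx[train] @ w) ** 2)
    mse_va = 0.5 * np.mean((y[val] - tx[val] @ w) ** 2)
    print(f"degree {degree:2d}: train MSE {mse_tr:.4f}, validation MSE {mse_va:.4f}")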

Applying methods and visualizing Beyond simply applying the models we have seen, it helps to try to understand what the ML model is doing. Try to find out which datapoints are wrongly classified and, if possible, why this is the case. Then use this information to improve your model. Check Pedro Domingos's Useful things to know about machine learning: http://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf.
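A small sketch for locating misclassified events with a logistic model (labels assumed in {0,1}; the helper names are ours):

import numpy as np

def sigmoid(t):
    t = np.clip(t, -30, 30)  # avoid overflow in exp
    return 1.0 / (1.0 + np.exp(-t))

def misclassified_indices(y, tx, w):
    """Indices of events the model gets wrong; inspect these rows for patterns."""
    predictions = (sigmoid(tx @ w) >= 0.5).astype(int)
    return np.flatnonzero(predictions != y)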

Accurately estimate how well your method is doing Apply cross-validation to estimate the generalization error. Among the (potentially many) modifications you made, show an ablation study of the most important ones: which changes had the largest impact on your final performance? (For choices of models, training algorithms, data preprocessing, hyperparameters, etc.)
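A minimal k-fold cross-validation sketch (train_fn and loss_fn stand for any of your training methods and cost functions; the helper names are ours):

import numpy as np

def build_k_indices(n, k_fold, seed=1):
    """Randomly partition row indices into k folds."""
    idx = np.random.default_rng(seed).permutation(n)
    fold_size = n // k_fold
    return [idx[k * fold_size:(k + 1) * fold_size] for k in range(k_fold)]

def cross_validation_error(y, tx, k_fold, train_fn, loss_fn):
    """Average validation loss over k folds.
    train_fn(y, tx) -> w and loss_fn(y, tx, w) -> scalar."""
    folds = build_k_indices(len(y), k_fold)
    losses = []
    for k in range(k_fold):
        val = folds[k]
        tr = np.concatenate([folds[i] for i in range(k_fold) if i != k])
        losses.append(loss_fn(y[val], tx[val], train_fn(y[tr], tx[tr])))
    return np.mean(losses)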

Report Guidelines
In addition to finding a good model for the data, you will need to explain your methodology in a report. For the first project, this will help you get used to writing, and prepare you for the more extensive Project 2.

Clearly describe your used methods, state your conclusions, and argue that the results you obtained make (or do not make) sense, and the reasons behind it. Keep the report short and to the point, with a strict limit of 2 pages (Project 2 will allow 4 pages). References are allowed to be put on an extra third page.

To get started more easily with writing the report, we provide you with a LaTeX template here:

github.com/epfml/ML_course/tree/master/projects/project1/latex-example-paper

The file also contains some more helpful information on how to write a scientific report or paper. We will also help you learn it during the exercise session and office hours if you ask us.

For more guidelines on what makes a good report, see the grading criteria above. In particular, don't forget to take care of the following:

Reproducibility: Not only in the code, but also in the report, include complete details about each algorithm you tried: which lambda values did you use for ridge regression? How exactly did you do that feature transformation? How many folds did you use for cross-validation? Etc.
Baselines: Give clear experimental evidence: when you added a new combined feature, or changed the regularization, by how much did that increase or decrease the test error? It is crucial to always report such differences in the evaluation metrics, and to include several properly implemented baseline algorithms as comparisons to your approach.
A longer article on good practices for writing a scientific report in a data science, computing, or ML context:

http://arxiv.org/pdf/1609.00037
or an older article: http://arxiv.org/pdf/1210.0530.
Some additional resources on LaTeX:

https://github.com/VoLuong/Begin-Latex-in-minutes - getting started with LaTeX
http://www.maths.tcd.ie/~dwilkins/LaTeXPrimer/ - tutorial on LaTeX
http://www.stdout.org/~winston/latex/latexsheet-a4.pdf - cheat sheet collecting the most useful LaTeX commands
http://en.wikibooks.org/wiki/LaTeX - detailed tutorial on LaTeX
Producing figures for LaTeX in Python
When making figures and plots, make sure the reader understands what the axes mean and what units and data are being visualized (see also the reproducibility criterion). There are some good visualization tools in Python. "matplotlib" is probably the single most used Python package for 2D graphics. The relevant tutorials are as follows:

Matplotlib tutorial: http://www.labri.fr/perso/nrougier/teaching/matplotlib/
Matplotlib tutorial: https://sites.google.com/site/scigraphs/tutorial
Matplotlib Tutorial: http://jakevdp.github.io/mpl_tutorial/
Regarding other useful Python data visualization libraries, please refer to this blog for more information.
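For instance, a labeled figure saved as a vector PDF embeds cleanly in LaTeX. The numbers below are placeholders for illustration, not real results:

import numpy as np
import matplotlib.pyplot as plt

lambdas = np.logspace(-5, 0, 10)
errors = 0.3 * np.exp(-5 * lambdas) + 0.1  # placeholder values, not real results

plt.semilogx(lambdas, errors, marker="o", label="validation MSE")
plt.xlabel("regularization parameter lambda")
plt.ylabel("MSE")
plt.legend()
plt.tight_layout()
plt.savefig("cv_error.pdf")  # vector format; include with \includegraphics in LaTeX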
