
2          Background
One goal of publishing scientific work is to enable future readers to build upon it. Reproducibility is central to achieving this, yet it is unfortunately one of the biggest challenges in Machine Learning research. Everyone is encouraged to follow the reproducibility checklist when publishing scientific research, to make the results reliable and reproducible. In addition, a challenge is organized every year to measure the progress of our reproducibility efforts. Participants select a published paper from one of the listed conferences and attempt to reproduce its central claims. The objective is to assess whether the conclusions reached in the original paper are reproducible. The focus of this challenge is to follow the process described in the paper and attempt to reach the same conclusions. We have designed this mini-project in the spirit of the reproducibility challenge. Top projects can potentially be extended and submitted to the challenge in January. Note that, in comparison to previous mini-projects, this is an open-ended project meant to help you use the theoretical and applied knowledge from this course to experiment and tinker with actual, popular research work in the field.

3          Task
The goal of this assignment is to select a paper and investigate its main claims, to see whether similar conclusions can be reached or whether those conclusions can be challenged. For this mini-project, you are not expected to implement anything from scratch. You are encouraged to use any code repository published with the paper or any other implementation you find online.

It is up to you to define the experiments you would like to perform. The reasoning behind your choice of experiments is part of the evaluation criteria. Here are some possibilities:

1.   Rerunning the models on the reported datasets to see whether you can reproduce the evaluation metrics reported in the paper, or computing alternative evaluation metrics.

2.   Improving the baselines reported in the paper.

3.   Applying the model to other datasets.

4.   Investigating the effect of different choices in the proposed methodology/architecture (ablation study); a minimal sketch of this pattern appears after this list.

5.   Improving the performance of the proposed method.

6.   Comparison to a method that was not considered in the original paper.
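
To make option 4 concrete, an ablation can be as simple as training two otherwise identical models, with the component under study switched on and off, and comparing the results. The sketch below is a minimal illustration in PyTorch; the toy model, the synthetic data, and the choice of dropout as the ablated component are all placeholders rather than anything taken from a specific paper.

    # Minimal ablation sketch: train the same toy model with and without one
    # component (here, dropout) and compare the final training loss.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    def train_variant(use_dropout, epochs=10):
        torch.manual_seed(0)  # identical initialization and data for both variants
        layers = [nn.Linear(20, 64), nn.ReLU()]
        if use_dropout:
            layers.append(nn.Dropout(p=0.5))  # the ablated component
        layers.append(nn.Linear(64, 2))
        model = nn.Sequential(*layers)

        data = TensorDataset(torch.randn(512, 20), torch.randint(0, 2, (512,)))
        loader = DataLoader(data, batch_size=64, shuffle=True)
        optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

        for _ in range(epochs):
            for inputs, targets in loader:
                optimizer.zero_grad()
                loss = nn.functional.cross_entropy(model(inputs), targets)
                loss.backward()
                optimizer.step()
        return loss.item()

    print("with dropout:   ", train_variant(use_dropout=True))
    print("without dropout:", train_variant(use_dropout=False))

The same pattern scales to any design choice in the paper you reproduce: swap the ablated component and the evaluation metric for the ones relevant to its claims.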

3.1        Paper selection guidelines
You must choose a paper from the current pool of papers of the reproducibility challenge. If you have a good reason to work on another paper, check with the lead TAs for this project first.

Data You should be able to access the data or environment you will need to reproduce the paper’s experiments.

Code and Trained Model In many cases code might be available directly from the authors or from another source. You should check whether you can work with the code before picking the paper. Similarly, there may be trained models available for use.

Computation You should estimate the computational requirements of reproducing the paper and take into account the resources available to you for the project. Some authors might have had access to infrastructure unavailable to you; you might not want to choose such a paper. Alternatively, you can study the claims of the paper at a smaller scale if time and computation are limited.

You will undoubtedly find some papers with incredible demonstrations of deep learning feats. While it may be tempting to try to replicate tasks like image synthesis and text generation, note that these deep learning models tend to be quite large, and consequently the experiments may demand extreme computational resources (and time!). Note that this should not necessarily prevent you from choosing such a paper, as you can reduce the computational cost in many ways: running experiments on smaller datasets, reducing the size of the models, or considering a subset of the original paper's experiments. You can also attempt to design new experiments to investigate the same claims.
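
As a concrete illustration of the first of these options, here is a minimal sketch of shrinking a dataset before running experiments; it assumes PyTorch and torchvision are available, and CIFAR-10, the 10% fraction, and the batch size are purely illustrative choices.

    # Minimal sketch: keep a random 10% of CIFAR-10 so experiments fit a
    # small compute budget. The dataset, fraction, and batch size are
    # illustrative choices, not taken from any particular paper.
    import torch
    from torch.utils.data import DataLoader, Subset
    from torchvision import datasets, transforms

    full_train = datasets.CIFAR10(root="./data", train=True, download=True,
                                  transform=transforms.ToTensor())

    generator = torch.Generator().manual_seed(0)  # fixed seed for comparable runs
    keep = len(full_train) // 10
    indices = torch.randperm(len(full_train), generator=generator)[:keep]
    small_train = Subset(full_train, indices.tolist())

    train_loader = DataLoader(small_train, batch_size=128, shuffle=True)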

Several models also have pretrained weights available to download. Since these have been trained on huge datasets, you are encouraged to code up the models and directly import these weights instead of training from scratch. You can then use the pretrained model for experimentation as well as fine-tune the weights on new data. Make sure to list all the resources you have used in your references.
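
For example, here is a minimal sketch of importing pretrained weights and fine-tuning only a new classification head; it assumes a recent version of torchvision, and ResNet-18 with a 10-class head are placeholder choices rather than anything tied to a specific paper.

    # Minimal sketch: load pretrained ImageNet weights, freeze the backbone,
    # and fine-tune only a new classification head on the target dataset.
    import torch.nn as nn
    import torch.optim as optim
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    for param in model.parameters():
        param.requires_grad = False  # freeze the pretrained backbone

    model.fc = nn.Linear(model.fc.in_features, 10)  # new head (10 classes assumed)

    optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)
    # ...a standard training loop over the new dataset would go here...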

There are various ways to construct a very interesting project without requiring a massive amount of compute. For example:

•  Averaging Weights Leads to Wider Optima and Better Generalization: This paper introduces a simple algorithm with some impressive results that do not require enormous models to demonstrate. Some example experiments: measuring the flatness of minima, checking model performance, using different neural network architectures, etc. A minimal usage sketch with PyTorch's built-in weight-averaging utilities appears after this list.

•  diffGrad: An Optimization Method for Convolutional Neural Networks: This paper introduces a novel optimization algorithm that claims to address shortcomings of existing optimization algorithms when training CNNs. Some example experiments include measuring the efficacy of the algorithm by different metrics (convergence rate, sensitivity to hyperparameters, variance, etc.) or evaluating the algorithm on different datasets. A sketch of the general update rule also appears after this list.
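
For the first paper, recent versions of PyTorch ship stochastic weight averaging utilities, so a reproduction does not need a from-scratch implementation. Below is a minimal sketch in which the toy model, synthetic data, and schedule are placeholders rather than the paper's settings.

    # Minimal sketch of stochastic weight averaging with torch.optim.swa_utils.
    # The toy model, synthetic data, and hyperparameters are placeholders.
    import torch
    import torch.nn as nn
    from torch.optim.swa_utils import AveragedModel, SWALR, update_bn
    from torch.utils.data import DataLoader, TensorDataset

    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    data = TensorDataset(torch.randn(512, 20), torch.randint(0, 2, (512,)))
    train_loader = DataLoader(data, batch_size=64, shuffle=True)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9)
    swa_model = AveragedModel(model)       # running average of the weights
    swa_scheduler = SWALR(optimizer, swa_lr=0.01)
    swa_start = 15                         # epoch at which averaging begins

    for epoch in range(20):
        for inputs, targets in train_loader:
            optimizer.zero_grad()
            loss = nn.functional.cross_entropy(model(inputs), targets)
            loss.backward()
            optimizer.step()
        if epoch >= swa_start:
            swa_model.update_parameters(model)
            swa_scheduler.step()

    # Recompute BatchNorm statistics for the averaged weights before evaluation
    # (a no-op for this toy model, which has no BatchNorm layers).
    update_bn(train_loader, swa_model)

The averaged weights in swa_model can then be compared against the conventionally trained model, for example on held-out performance or on measures of the flatness of the reached minimum.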
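
For the second paper, the core of a reproduction is the optimizer itself. The sketch below is an Adam-style update scaled by a friction coefficient computed from the change in gradient between steps, which is the general idea behind diffGrad; it is a paraphrase for illustration only, so verify any experiments against the authors' implementation if one is available, or against the equations in the paper.

    # Minimal sketch of an Adam-style optimizer whose step is scaled by a
    # friction coefficient based on the change in gradient, in the spirit of
    # diffGrad. Check against the authors' reference code before relying on it.
    import torch
    from torch.optim import Optimizer

    class DiffGradSketch(Optimizer):
        def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8):
            super().__init__(params, dict(lr=lr, betas=betas, eps=eps))

        @torch.no_grad()
        def step(self, closure=None):
            loss = None
            if closure is not None:
                with torch.enable_grad():
                    loss = closure()
            for group in self.param_groups:
                beta1, beta2 = group["betas"]
                for p in group["params"]:
                    if p.grad is None:
                        continue
                    grad = p.grad
                    state = self.state[p]
                    if not state:  # lazy state initialization, as in Adam
                        state["step"] = 0
                        state["exp_avg"] = torch.zeros_like(p)
                        state["exp_avg_sq"] = torch.zeros_like(p)
                        state["prev_grad"] = torch.zeros_like(p)
                    state["step"] += 1
                    t = state["step"]

                    # Standard Adam first and second moment estimates.
                    state["exp_avg"].mul_(beta1).add_(grad, alpha=1 - beta1)
                    state["exp_avg_sq"].mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
                    m_hat = state["exp_avg"] / (1 - beta1 ** t)
                    v_hat = state["exp_avg_sq"] / (1 - beta2 ** t)

                    # Friction coefficient in (0.5, 1): near 0.5 when the gradient
                    # barely changes, near 1 when it changes a lot.
                    xi = torch.sigmoid((state["prev_grad"] - grad).abs())
                    state["prev_grad"] = grad.clone()

                    p.addcdiv_(xi * m_hat, v_hat.sqrt() + group["eps"],
                               value=-group["lr"])
            return loss

A class like this can be dropped into the same training loop as torch.optim.Adam, which makes side-by-side comparisons of convergence rate or hyperparameter sensitivity straightforward.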
