1. Implement a multilayer perceptron. Specifically, implement the back-propagation algorithm to learn the weights of a perceptron with 2 input nodes, 2 hidden nodes, and 1 output node. Train your network to learn the following binary operations:
(a) XOR (5)
(b) AND (5)
(c) OR (5)
Experiment with the number of training samples N and see how it affects performance. Add noise to the labels to generate more samples. Your code should make the number of nodes a configurable parameter. A starting-point sketch follows.
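Below is a minimal NumPy sketch of one way to set this up, assuming sigmoid activations, a squared-error loss, and full-batch gradient descent; the names (train_mlp, n_hidden, and so on) are illustrative, not prescribed by the assignment.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_mlp(X, y, n_hidden=2, lr=0.5, epochs=20000, seed=0):
    """Train a single-hidden-layer MLP; n_hidden is the configurable size."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 1.0, (X.shape[1], n_hidden))  # input -> hidden
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 1.0, (n_hidden, 1))           # hidden -> output
    b2 = np.zeros(1)
    for _ in range(epochs):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: squared-error loss with sigmoid derivatives.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # Full-batch gradient-descent updates.
        W2 -= lr * h.T @ d_out / len(X)
        b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_h / len(X)
        b1 -= lr * d_h.mean(axis=0)
    return W1, b1, W2, b2

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_xor = np.array([[0], [1], [1], [0]], dtype=float)
W1, b1, W2, b2 = train_mlp(X, y_xor, n_hidden=2)
pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(np.round(pred).ravel())  # [0. 1. 1. 0.] once training converges

XOR with only 2 hidden units can land in a poor local minimum, so trying a few seeds is reasonable. To grow N with noisy labels, one option is to tile X and y and flip a small fraction of the copied labels at random.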
2. Implement an autoencoder by building on the MLP implementation from the previous question. Choose your network size appropriately (meaning a size that you can train and test on your computer without running into memory issues). (5)
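One way to reuse the MLP machinery is sketched below: the target is the input itself and the hidden layer becomes a smaller code layer. The 8-3-8 sizes in the demo are an illustrative assumption, not a required configuration.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, n_code=3, lr=2.0, epochs=30000, seed=0):
    """Same MLP training loop, with the input as the target."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W1 = rng.normal(0.0, 0.5, (n_in, n_code))  # encoder weights
    b1 = np.zeros(n_code)
    W2 = rng.normal(0.0, 0.5, (n_code, n_in))  # decoder weights
    b2 = np.zeros(n_in)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)             # code layer
        out = sigmoid(h @ W2 + b2)           # reconstruction
        d_out = (out - X) * out * (1 - out)  # reconstruction error signal
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out / len(X)
        b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_h / len(X)
        b1 -= lr * d_h.mean(axis=0)
    return W1, b1, W2, b2

# Classic 8-3-8 test: eight one-hot patterns through a 3-unit bottleneck.
X = np.eye(8)
W1, b1, W2, b2 = train_autoencoder(X, n_code=3)
recon = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(np.argmax(recon, axis=1))  # ideally [0 1 ... 7]; try other seeds if not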
3. Now, implement an autoencoder with the sparsity constraint by building on the autoencoder implementation from the previous question. Choose your network size appropriately (meaning a size that you can train and test on your computer without running into memory issues). (10)
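A sketch of one common formulation follows, assuming a KL-divergence sparsity penalty with target activation rho and weight beta; the only change to the autoencoder above is the extra term added to the hidden-layer delta. The values of rho and beta here are illustrative assumptions.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_sparse_autoencoder(X, n_code=3, rho=0.05, beta=0.5,
                             lr=2.0, epochs=30000, seed=0):
    """Autoencoder with a KL sparsity penalty on average code activations."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W1 = rng.normal(0.0, 0.5, (n_in, n_code))
    b1 = np.zeros(n_code)
    W2 = rng.normal(0.0, 0.5, (n_code, n_in))
    b2 = np.zeros(n_in)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Average activation of each hidden unit over the batch.
        rho_hat = h.mean(axis=0)
        # Gradient of beta * sum_j KL(rho || rho_hat_j) w.r.t. the activations.
        sparsity_grad = beta * (-rho / rho_hat + (1 - rho) / (1 - rho_hat))
        d_out = (out - X) * out * (1 - out)
        d_h = (d_out @ W2.T + sparsity_grad) * h * (1 - h)
        W2 -= lr * h.T @ d_out / len(X)
        b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_h / len(X)
        b1 -= lr * d_h.mean(axis=0)
    return W1, b1, W2, b2

W1, b1, W2, b2 = train_sparse_autoencoder(np.eye(8), n_code=3)

Lowering rho pushes the code units toward being mostly inactive, so for very small rho a larger code layer is usually needed to keep reconstructions accurate.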
4. Bonus question: Implement a variational autoencoder. Ensure you generate z using the reparameterization technique so that backpropagation can work. Train on the MNIST database. You can downsample the images to 14×14 to make your optimization converge faster. Again, build on your previous MLP implementation. (35)
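Since the question hinges on the reparameterization step, here is a minimal sketch of just that piece and the closed-form Gaussian KL term, assuming a diagonal-Gaussian latent; the encoder and decoder themselves can reuse the MLP machinery above. The names (reparameterize, kl_to_standard_normal) are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Draw z ~ N(mu, sigma^2) differentiably: randomness lives in eps,
    so gradients flow through mu and log_var but not through eps."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=1)

# Example with a batch of 4 samples and a 2-d latent space.
mu = np.zeros((4, 2))
log_var = np.zeros((4, 2))
z = reparameterize(mu, log_var)
print(z.shape, kl_to_standard_normal(mu, log_var))  # (4, 2) [0. 0. 0. 0.]

The full training loss is then the reconstruction error of the decoded z plus this KL term, backpropagated through mu and log_var exactly as in the plain MLP.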