Instructions.
• This coding assignment revolves around developing a basic multi-layer feedforward network and training it manually. Additional instructions for each question are provided below.
• You have to submit the Google Colab file named rollno-A1.ipynb. Files that do not follow this naming convention, .py files, or any other format will not be evaluated. For example, if your roll number is 1234567 / MT34567, the file name must be 1234567-A1.ipynb / MT34567-A1.ipynb.
• No extensions will be granted for this assignment under any circumstances.
• You can refer to the code in the notebook shared on Google Classroom for guidance.
1. Download the MNIST dataset (provide the download code in your Google Colab notebook) and create a custom data loader using torch.utils.data.Dataset and DataLoader. Write another data loader entirely from scratch and compare the loading performance of your scratch implementation with the one written using the PyTorch classes across different batch sizes (128, 256, 512, 1024). Plot a graph illustrating the relationship between batch size and loading time.
2. Implement a feed-forward neural network architecture using torch.nn.Linear featuring four hidden layers, each comprising a minimum of 32 neurons (excluding the input and output layers). Train the model using the most effective data loader identified in the previous question with the ReLU activation function. Employ the Cross-Entropy loss function and opt for the Stochastic Gradient Descent (SGD) optimizer with default parameters, setting the learning rate to 0.0003. Plot graphs depicting the loss and accuracy during training, validation, and testing for a total of 60 epochs. For this question you may use whatever PyTorch has to offer.
4. Execute the tasks outlined in questions 2 and 3, but this time use a sigmoid activation function while keeping all other parameters and configurations unchanged.
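As a starting point for question 1, the comparison could be sketched as below. This is a minimal, illustrative version: synthetic tensors stand in for the downloaded MNIST data, and the names TensorImageDataset and ScratchLoader are the author's placeholders, not names required by the assignment.

```python
import time
import torch
from torch.utils.data import Dataset, DataLoader

class TensorImageDataset(Dataset):
    """Minimal custom Dataset; swap in the real MNIST tensors in the notebook."""
    def __init__(self, images, labels):
        self.images = images
        self.labels = labels

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        return self.images[idx], self.labels[idx]

class ScratchLoader:
    """A from-scratch batch iterator: shuffles indices and slices tensors."""
    def __init__(self, images, labels, batch_size):
        self.images, self.labels, self.batch_size = images, labels, batch_size

    def __iter__(self):
        order = torch.randperm(len(self.images))
        for start in range(0, len(order), self.batch_size):
            idx = order[start:start + self.batch_size]
            yield self.images[idx], self.labels[idx]

# Synthetic stand-in for MNIST: 2048 single-channel 28x28 images, 10 classes.
images = torch.randn(2048, 1, 28, 28)
labels = torch.randint(0, 10, (2048,))

timings = {}
for bs in (128, 256, 512, 1024):
    torch_loader = DataLoader(TensorImageDataset(images, labels),
                              batch_size=bs, shuffle=True)
    t0 = time.perf_counter()
    for xb, yb in torch_loader:
        pass
    torch_time = time.perf_counter() - t0

    t0 = time.perf_counter()
    for xb, yb in ScratchLoader(images, labels, bs):
        pass
    timings[bs] = (torch_time, time.perf_counter() - t0)

for bs, (torch_time, scratch_time) in timings.items():
    print(f"batch={bs}: torch={torch_time:.4f}s scratch={scratch_time:.4f}s")
```

The recorded timings can then feed the required batch-size-vs-loading-time plot (e.g. with matplotlib) once the synthetic tensors are replaced by the actual dataset.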