1 Question 1: Simple Non-Linear least squares for Gaussian function
First, go through the solved example here from the notes page. After understanding this,
(1.1) Code it from scratch using numpy and try it out yourself, e.g. for different numbers of iterations with a certain tolerance, for all 50 observations using Gradient Descent. Make the following plots using matplotlib:
* Data and fit plot: ground-truth Gaussian, observations (points) & predicted Gaussian on the same plot.
* Cost function (∥r∥²) vs. number of iterations
Experiment with the hyperparameters and compile your observations in a table. Clearly mention your hyperparameters with justification.
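For reference, here is a minimal sketch of how the Gradient Descent fit could be set up; the ground-truth parameters, noise level, learning rate, and initial estimate are illustrative assumptions, not values prescribed by the assignment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed ground truth and 50 observations (illustrative values only)
a_gt, mu_gt, sigma_gt = 10.0, 0.0, 2.0
x = np.linspace(-5, 5, 50)
y_gt = a_gt * np.exp(-(x - mu_gt) ** 2 / (2 * sigma_gt ** 2))
y_obs = y_gt + rng.normal(0.0, 0.5, x.shape)   # noisy observations

def model(theta, x):
    """Gaussian f(x) = a * exp(-(x - mu)^2 / (2 * sigma^2))."""
    a, mu, sigma = theta
    return a * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

def jacobian(theta, x):
    """Jacobian of the residual r = y_obs - f(x; theta) w.r.t. (a, mu, sigma)."""
    a, mu, sigma = theta
    e = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
    return np.stack([-e,
                     -a * e * (x - mu) / sigma ** 2,
                     -a * e * (x - mu) ** 2 / sigma ** 3], axis=1)

def gradient_descent(theta0, lr=1e-3, max_iters=5000, tol=1e-8):
    theta = np.asarray(theta0, dtype=float)
    costs = []
    for _ in range(max_iters):
        r = y_obs - model(theta, x)
        costs.append(float(r @ r))                # ||r||^2, for the cost plot
        grad = 2.0 * jacobian(theta, x).T @ r     # gradient of ||r||^2
        if np.linalg.norm(grad) < tol:
            break
        theta = theta - lr * grad
    return theta, costs

theta_hat, costs = gradient_descent([8.0, 1.0, 1.5])
```

The returned costs list holds the ∥r∥² value per iteration, which is what the cost-vs-iterations plot needs.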
(1.2) You’ve used Gradient Descent above. Now implement the Gauss-Newton and LM algorithms. To contrast the three, you must experiment with:
* Different initial estimates: can a particular algorithm cope if the initial estimate is too far from the GT?
* Different numbers of observations: can a particular algorithm handle very few observations?
* Noise added to your observations: can a particular algorithm handle large noise?
* What else can you think of? (For example, can an algorithm converge in fewer iterations than the others?)
Make the plots (mentioned in 1.1) for all 3 algorithms. Report your observations in one or more tables comparing the three algorithms across the different factors. You will be graded on how comprehensive your experimentation is (which you have to explain below under the “Answers for Question 1” section).
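For contrast, a minimal sketch of the Gauss-Newton and Levenberg-Marquardt update steps, reusing the model(), jacobian(), x, and y_obs names from the sketch above; the multiply/divide-by-10 damping schedule is one common heuristic, not a required choice:

```python
def gauss_newton(theta0, max_iters=100, tol=1e-8):
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iters):
        r = y_obs - model(theta, x)
        J = jacobian(theta, x)
        delta = np.linalg.solve(J.T @ J, -J.T @ r)   # normal equations
        theta = theta + delta
        if np.linalg.norm(delta) < tol:
            break
    return theta

def levenberg_marquardt(theta0, lam=1e-2, max_iters=100, tol=1e-8):
    theta = np.asarray(theta0, dtype=float)
    cost = np.sum((y_obs - model(theta, x)) ** 2)
    for _ in range(max_iters):
        r = y_obs - model(theta, x)
        J = jacobian(theta, x)
        delta = np.linalg.solve(J.T @ J + lam * np.eye(3), -J.T @ r)
        new_cost = np.sum((y_obs - model(theta + delta, x)) ** 2)
        if new_cost < cost:      # accept step, trust the Gauss-Newton model more
            theta, cost, lam = theta + delta, new_cost, lam / 10
        else:                    # reject step, behave more like gradient descent
            lam *= 10
        if np.linalg.norm(delta) < tol:
            break
    return theta
```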
1.1 Code for Question 1
[ ]: # Only numpy & matplotlib are needed for this question.
##############################################################################
# TODO: Do tasks described in Question 1                                     #
##############################################################################
# Replace the "pass" statement with your code (you can split this cell into
# multiple cells if you wish to)
pass
##############################################################################
#                              END OF YOUR CODE                              #
##############################################################################
[ ]: ## Define the plots inside a function above and call them in this cell one by
## one. When I run this cell, all plots asked in Q1 should be generated.
##############################################################################
# TODO: Plotting for Question 1                                              #
##############################################################################
pass
##############################################################################
#                              END OF YOUR CODE                              #
##############################################################################
1.2 Answers for Question 1
Add explanations for the answers along with tables here.
1.2.1 Answer for 1.1
Explain your experiments with justification here.

| This | is | sample | table |
| ---- | -- | ------ | ----- |
| sample 1 | sample 1 | sample 1 | sample 1 |
1.2.2 Answer for 1.2
Explain your experiments with justification here.

| This | is | sample | table |
| ---- | -- | ------ | ----- |
| sample 2 | sample 2 | sample 2 | sample 2 |
2 Question 2: ICP Coding
Implement the basic ICP algorithm with (given) known correspondences.
Let X be the point cloud observed from the initial position. Your robot then moved and observed P1 as its current point cloud, and likewise P2 under a different transformation. You now wish to apply ICP to recover the transformation between (X & P1) and between (X & P2). Use root mean squared error (RMSE) as the error metric; a sketch of the alignment step is given after the helper block below.
[ ]: # HELPER FUNCTIONS: DON'T EDIT THIS BLOCK - If you want to test on more cases,
# you can add code to this block but DON'T delete existing code.
import numpy as np
import matplotlib.pyplot as plt

# Visualizing ICP registration
def plot_icp(X, P, P0, i, rmse):
    plt.cla()
    plt.scatter(X[0, :], X[1, :], c='k', marker='o', s=50, lw=0)
    plt.scatter(P[0, :], P[1, :], c='r', marker='o', s=50, lw=0)
    plt.scatter(P0[0, :], P0[1, :], c='b', marker='o', s=50, lw=0)
    plt.legend(('X', 'P', 'P0'), loc='lower left')
    plt.plot(np.vstack((X[0, :], P[0, :])), np.vstack((X[1, :], P[1, :])), c='k')
    plt.title("Iteration: " + str(i) + " RMSE: " + str(rmse))
    plt.axis([-10, 15, -10, 15])
    plt.gca().set_aspect('equal', adjustable='box')
    plt.draw()
    plt.pause(2)
    return
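For reference, a minimal sketch of one way the alignment step and RMSE metric could look for the 2×N point clouds used by plot_icp, assuming one-to-one known correspondences; only the names X and P come from the problem statement, the rest is an illustrative assumption:

```python
import numpy as np

def best_fit_transform(X, P):
    """Return R, t minimizing ||X - (R @ P + t)|| given 1-to-1 correspondences."""
    cx = X.mean(axis=1, keepdims=True)
    cp = P.mean(axis=1, keepdims=True)
    H = (P - cp) @ (X - cx).T               # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cx - R @ cp
    return R, t

def rmse(X, P):
    """Root mean squared distance between corresponding points."""
    return np.sqrt(np.mean(np.sum((X - P) ** 2, axis=0)))

# Hypothetical usage: with known correspondences a single alignment step already
# gives the least-squares optimum; the iterative part of ICP matters once
# correspondences have to be re-estimated.
# R, t = best_fit_transform(X, P1)
# P1_aligned = R @ P1 + t
# plot_icp(X, P1_aligned, P1, i=0, rmse=rmse(X, P1_aligned))
```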