The aim of this exercise is to study how the performance of the maximum likelihood estimator (MLE) varies as the number of samples increases. Consider the following equation:
$x_i = A + n_i, \quad i = 1, \ldots, N \qquad (1)$
where $A$ is a scalar and the $n_i$ are noise samples. Compute the maximum likelihood estimate of $A$ for the following cases:
1. $n_i \sim \mathcal{N}(0,1)$. In this case, use the expression derived in class, $\hat{A} = \frac{1}{N}\sum_{i=1}^{N} x_i$ (the sample mean).
2. $n_i \sim \mathrm{Lap}(0, 1/\sqrt{2})$, i.e., Laplace distribution with zero mean and unit variance. In this case, the MLE is derived as $\hat{A} = \mathrm{median}(x_i)$.
3. $n_i \sim \mathrm{Cauchy}(0,\gamma)$. Use $\gamma = \sqrt{2C_g}$, where $C_g = 1.78$. No closed-form solution for the MLE exists, so compute it through numerical evaluation, e.g., Newton-Raphson or any other appropriate numerical method (a sketch is given after this list).
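A minimal sketch of the three estimators, assuming Python with NumPy; the function names and the Newton-Raphson tolerances here are illustrative choices, not part of the assignment.

```python
import numpy as np

def mle_gaussian(x):
    """MLE of A under N(0,1) noise: the sample mean."""
    return np.mean(x)

def mle_laplace(x):
    """MLE of A under Laplace noise: the sample median."""
    return np.median(x)

def mle_cauchy(x, gamma, max_iter=100, tol=1e-10):
    """MLE of A under Cauchy(0, gamma) noise via Newton-Raphson.

    The score (derivative of the log-likelihood w.r.t. A) is
    sum 2(x_i - A) / (gamma^2 + (x_i - A)^2).
    """
    A = np.median(x)                                   # robust starting point
    for _ in range(max_iter):
        r = x - A
        d = gamma**2 + r**2
        score = np.sum(2.0 * r / d)                    # first derivative of log-likelihood
        hess = np.sum(2.0 * (r**2 - gamma**2) / d**2)  # second derivative
        if abs(hess) < 1e-12:                          # guard against a degenerate step
            break
        step = score / hess
        A -= step
        if abs(step) < tol:
            break
    return A
```

Since the Cauchy log-likelihood need not be well behaved far from the optimum, initializing Newton-Raphson at the sample median keeps the iteration near the bulk of the data.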
Repeat the above experiments for N = 1,10,100,1000,10000 and for A = 1 and A = 10. Here, N is the number of samples used for estimation. Present the following for each noise distribution (a simulation sketch is given after the list):
1. Tabulate the values of $E[\hat{A}]$ against the number of samples for both values of A. What do you infer?
2. Tabulate the values of $\mathrm{Var}(\hat{A})$ against the number of samples for both values of A. What do you infer?
3. Plot the CDF of the estimate for N = 1,10,100,1000,10000 samples for A = 1. Ensure that you take enough realizations to get a smooth CDF. What can you say about the CDF? Justify. Do you observe the following relation?
$\sqrt{N}\,(\hat{A} - A_0) \sim \mathcal{N}(0, I(A)^{-1})$
Here, I denotes the Fisher information.
4. Plot the PDF of the estimate for N = 1,10,100,1000,10000 samples for A = 1. Ensure that you take enough realizations to get a smooth PDF. What can you say about the convergence of the PDF?
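A minimal Monte Carlo sketch for these experiments, assuming Python with NumPy and Matplotlib; the number of realizations (n_trials = 5000) and the helper run_experiment are illustrative choices, and only the Gaussian case with A = 1 is shown.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_trials = 5000                          # realizations per (N, A) pair

def run_experiment(A, N, noise, estimator):
    """Return n_trials estimates of A, each computed from N noisy samples."""
    return np.array([estimator(A + noise(N)) for _ in range(n_trials)])

A_true = 1.0
for N in [1, 10, 100, 1000, 10000]:
    # Gaussian case: noise ~ N(0,1), estimator = sample mean.
    est = run_experiment(A_true, N, rng.standard_normal, np.mean)

    # Entries for the E[A_hat] and Var(A_hat) tables.
    print(f"N={N:6d}  E[A_hat]={est.mean():.4f}  Var(A_hat)={est.var():.6f}")

    # Empirical CDF: sorted estimates against their empirical probabilities.
    xs = np.sort(est)
    plt.plot(xs, np.arange(1, n_trials + 1) / n_trials, label=f"N={N}")

    # The empirical PDF can be drawn similarly with
    # plt.hist(est, bins=50, density=True).

plt.xlabel("A_hat")
plt.ylabel("Empirical CDF")
plt.legend()
plt.show()
```

For the asymptotic-normality check in item 3, the same realizations can be rescaled as $\sqrt{N}\,(\hat{A} - A_0)$ and their empirical CDF compared against the Gaussian CDF with variance $I(A)^{-1}$.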