CS754 Assignment 2 - Advanced Image Processing Solution

1. Refer to a copy of the paper ‘The restricted isometry property and its implications for compressed sensing’ in the homework folder. Your task is to open the paper and answer the question posed in each and every green-colored highlight. Together, the answers constitute the complete proof of Theorem 3 done in class. [24 points = 1.5 points for each of the 16 questions]
2. Your task here is to implement the ISTA algorithm for the following three cases:
(a) Consider the image from the homework folder. Add iid Gaussian noise of mean 0 and variance 4 (on a [0,255] scale) to it, using the ‘randn’ function in MATLAB. Thus y = x + η where η ∼ N(0,4). You should obtain x from y using the fact that patches from x have a sparse or near-sparse representation in the 2D-DCT basis.
(b) Divide the image shared in the homework folder into patches of size 8 × 8. Let xi be the vectorized version of the ith patch. Consider the measurement yi = Φxi where Φ is a 32×64 matrix with entries drawn iid from N(0,1). Note that xi has a near-sparse representation in the 2D-DCT basis U which is computed in MATLAB as kron(dctmtx(8)',dctmtx(8)'). In other words, xi = Uθi where θi is a near-sparse vector. Your job is to reconstruct each xi given yi and Φ using ISTA. Then you should reconstruct the image by averaging the overlapping patches. You should choose the α parameter in the ISTA algorithm judiciously. Choose λ = 1 (for a [0,255] image). Display the reconstructed image in your report. State the RMSE given as ‖X(:) − X̂(:)‖2/‖X(:)‖2 where X̂ is the reconstructed image and X is the true image. [16 points]
(c) Repeat the reconstruction task using the Haar wavelet basis via the MATLAB command ‘dwt2’ with the option ‘db1’. Display the reconstructed image in your report. State the RMSE. Use MATLAB function handles carefully. [8 points]
(d) Consider a 100-dimensional sparse signal x containing 10 non-zero elements. Let this signal be convolved with a kernel h = [1,2,3,4,3,2,1]/16 followed by addition of Gaussian noise of standard deviation equal to 5% of the magnitude of x to yield signal y, i.e. y = h ∗ x + η. Your job is to reconstruct x from y given h. Be careful of how you create the matrix A in the ISTA algorithm. [8 points]
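All four parts above reduce to the same iteration: minimize ‖y − Aθ‖2² + λ‖θ‖1, where A is Φ, ΦU, or the convolution matrix, as appropriate. The assignment expects MATLAB, but the core update can be sketched in NumPy; the helper names (`soft_threshold`, `ista`) and the toy problem sizes below are illustrative assumptions, not part of the assignment:

```python
import numpy as np

def soft_threshold(v, t):
    # elementwise soft-thresholding: the proximal operator of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=3000):
    # ISTA for min_theta ||y - A theta||_2^2 + lam*||theta||_1.
    # alpha must be at least the largest eigenvalue of A^T A,
    # which is the "judicious" choice referred to in the assignment.
    alpha = 1.01 * np.linalg.norm(A, 2) ** 2
    theta = np.zeros(A.shape[1])
    for _ in range(n_iter):
        theta = soft_threshold(theta + A.T @ (y - A @ theta) / alpha,
                               lam / (2.0 * alpha))
    return theta

# toy check: recover a sparse vector from noiseless Gaussian measurements
rng = np.random.default_rng(0)
n, m, k = 100, 80, 8
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x
x_hat = ista(Phi, y, lam=0.02)
rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```

For part (b) one would call this with A = Φ@U per patch and set xi = U@θi; for part (d), A is the (banded) matrix implementing convolution with h.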
3. One of the questions that came up in a live session was the notion of an oracle. Consider compressive measurements y = Φx + η of a purely sparse signal x, where ‖η‖2 ≤ ε. When we studied Theorem 3 in class, I had made a statement that the solution provided by the basis pursuit problem for a purely sparse signal comes very close to (i.e. has an error that is only a constant factor worse than) an oracular solution. An oracular solution is defined as the solution that we could obtain if we knew in advance the indices (set S) of the non-zero elements of the signal x. This homework problem is to understand my statement better. For this, do as follows. In the following, we will assume that the inverse of ΦS^T ΦS exists, where ΦS is the submatrix of Φ with columns belonging to indices in S.
(a) Express the oracular solution x̃ using a pseudo-inverse of the sub-matrix ΦS. [5 points]
(b) Now, show that ‖x̃ − x‖2 ≤ ‖ΦS†‖2 ‖η‖2. Here ΦS† ≜ (ΦS^T ΦS)^{-1} ΦS^T is standard notation for the pseudo-inverse of ΦS. The largest singular value of a matrix X is denoted as ‖X‖2. [3 points]

(c) Argue that the largest singular value of ΦS lies between √(1 − δ2k) and √(1 + δ2k), where k = |S| and δ2k is the RIC of Φ of order 2k. [4 points]
(d) This yields ‖x̃ − x‖2 ≤ ‖η‖2/√(1 − δ2k). Argue that the solution given by Theorem 3 is only a constant factor worse than this solution. [3 points]
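For reference, parts (a)-(d) chain together as follows (a sketch of the intended derivation, assuming ΦS^T ΦS is invertible as stated above and writing x̃S for the restriction of x̃ to S):

```latex
\tilde{x}_S = \Phi_S^\dagger y
            = \Phi_S^\dagger(\Phi_S x_S + \eta)
            = x_S + \Phi_S^\dagger \eta,
\qquad \Phi_S^\dagger \triangleq (\Phi_S^T \Phi_S)^{-1}\Phi_S^T,
```

```latex
\|\tilde{x} - x\|_2 = \|\Phi_S^\dagger \eta\|_2
\le \|\Phi_S^\dagger\|_2 \, \|\eta\|_2
= \frac{\|\eta\|_2}{\sigma_{\min}(\Phi_S)}
\le \frac{\|\eta\|_2}{\sqrt{1-\delta_{2k}}},
```

where the last step uses the RIP-derived bound on the singular values of ΦS from part (c).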
4. If s < t where s and t are positive integers, prove that δs ≤ δt where δs,δt stand for the restricted isometry constant (of any sensing matrix) of order s and t respectively. [8 points]
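The monotonicity in problem 4 can be sanity-checked numerically on a tiny matrix, where the RIC can be computed by brute force over all supports. This is a sketch under my own illustrative choices (the helper `ric` and the 8×10 example are not from the assignment), and the enumeration is feasible only for very small dimensions:

```python
import numpy as np
from itertools import combinations

def ric(Phi, s):
    # Brute-force restricted isometry constant of order s: the smallest
    # delta such that (1-delta)||x||^2 <= ||Phi_S x||^2 <= (1+delta)||x||^2
    # for every support S with |S| = s, found via the extreme eigenvalues
    # of each Gram matrix Phi_S^T Phi_S.
    delta = 0.0
    for S in combinations(range(Phi.shape[1]), s):
        eigs = np.linalg.eigvalsh(Phi[:, list(S)].T @ Phi[:, list(S)])
        delta = max(delta, abs(eigs[0] - 1.0), abs(eigs[-1] - 1.0))
    return delta

rng = np.random.default_rng(1)
Phi = rng.standard_normal((8, 10)) / np.sqrt(8)  # columns unit-norm in expectation
d2, d3 = ric(Phi, 2), ric(Phi, 3)                # expect d2 <= d3
```

Every size-s support is contained in some size-t support, so the maximum defining δt is taken over a richer family of Gram matrices, which is the intuition the proof should formalize.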
5. Here is our obligatory Google search question :-). Your task is to find out any one paper from within the last
6. Consider the problem P1: min_x ‖x‖1 s.t. ‖y − Φx‖2 ≤ ε. Also consider the LASSO problem which seeks to minimize the cost function J(x) ≜ ‖y − Φx‖2² + λ‖x‖1. If x is a minimizer of J(.) for some value of λ > 0, then show that there exists some value of ε for which x is also the minimizer of the problem P1. [6 points] (Hint: Consider ε = ‖y − Φx‖2. Now use the fact that x is a minimizer of J(.) to show that it is also a minimizer of P1 subject to an appropriate constraint involving ε.)
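The hint for problem 6 can be unpacked as follows (a sketch, writing x̂ for the minimizer of J and taking ε = ‖y − Φx̂‖2): for any z that is feasible for P1, i.e. ‖y − Φz‖2 ≤ ε,

```latex
\|y-\Phi\hat{x}\|_2^2 + \lambda\|\hat{x}\|_1 = J(\hat{x})
\le J(z)
= \|y-\Phi z\|_2^2 + \lambda\|z\|_1
\le \varepsilon^2 + \lambda\|z\|_1
= \|y-\Phi\hat{x}\|_2^2 + \lambda\|z\|_1,
```

so ‖x̂‖1 ≤ ‖z‖1 for every feasible z; since x̂ is itself feasible (it meets its own constraint with equality), it minimizes P1 for this ε.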