ISYE 8803 Homework 5: Robust PCA, Matrix Recovery and Compressive Sensing for Color Images

Question 1. Robust PCA  
Recently, robust PCA has become popular for many modern problems, including video surveillance (where background objects appear in the low-rank matrix and foreground objects appear in the sparse matrix) and face recognition (eigenfaces lie in the low-rank matrix, while shadows, occlusions, etc. appear in the sparse matrix). In this problem, we want to use robust PCA on the Extended Yale Face Database to decompose some of its images into a low-rank component and a sparse component containing the occluded regions.

The Extended Yale Face Database consists of cropped and aligned images of 38 individuals (28 from the extended database and 10 from the original database) under 9 poses and 64 lighting conditions. Each image is 192 pixels tall and 168 pixels wide, and each facial image in this data set has been reshaped into a column vector with 192 × 168 = 32,256 elements.

For this problem, consider the first 64 columns of the data set (corresponding to the first 64 images) and apply the robust PCA method to them. Your response should include image numbers 3, 4, 14, 15, 17, 18, 19, 20, 21, 32, and 43. Show each image along with its low-rank image and sparse image. Comment on how effective RPCA is at filling in the occluded regions of the images corresponding to shadows.
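A minimal MATLAB sketch of one standard way to compute this decomposition (principal component pursuit solved by an inexact augmented Lagrangian method) is given below. The function name, parameter heuristics, and stopping rule are illustrative assumptions rather than a prescribed implementation; X is assumed to be the 32256 × 64 matrix formed from the first 64 columns.

% Robust PCA sketch: decompose X into L (low rank) + S (sparse).
% Save as rpca_alm.m (the name is an assumption, not part of the assignment).
function [L, S] = rpca_alm(X, lambda)
    [m, n] = size(X);
    if nargin < 2
        lambda = 1 / sqrt(max(m, n));          % standard PCP weight
    end
    mu  = m * n / (4 * sum(abs(X(:))));        % step-size heuristic (assumption)
    tol = 1e-7;
    Y = zeros(m, n);  S = zeros(m, n);  L = zeros(m, n);
    for iter = 1:500
        % Low-rank update: singular value thresholding of X - S + Y/mu
        [U, Sig, V] = svd(X - S + Y / mu, 'econ');
        L = U * diag(max(diag(Sig) - 1 / mu, 0)) * V';
        % Sparse update: elementwise soft thresholding
        T = X - L + Y / mu;
        S = sign(T) .* max(abs(T) - lambda / mu, 0);
        % Dual update and convergence check on the residual
        Z = X - L - S;
        Y = Y + mu * Z;
        if norm(Z, 'fro') / norm(X, 'fro') < tol
            break;
        end
    end
end

Each requested image k can then be viewed with, for example, imagesc(reshape(L(:, k), 192, 168)) and imagesc(reshape(S(:, k), 192, 168)) alongside the original column of X.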

 

Question 2. Matrix Recovery  
‘ratings.mat’ contains a matrix $M_0$ of ratings on a 1 to 10 scale by 200 viewers for 100 movies. 50% of the ratings were randomly removed (i.e., replaced with 0) and stored as $M_1$ in ‘ratings_missing.mat’. Recover the original matrix from ‘ratings_missing.mat’ by using the two methods below:

•     Method 1: By directly solving the following optimization problem:

$$\min_{M} \; \|M\|_* \quad \text{subject to} \quad M_{ij} = (M_1)_{ij} \;\; \text{for all } (i,j) \in \text{observed set}$$
• Method 2: By using the matrix completion algorithm (i.e., using singular value thresholding to solve the above optimization problem)
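A minimal MATLAB sketch of both methods follows. It assumes CVX is installed for Method 1, that ‘ratings.mat’ and ‘ratings_missing.mat’ load matrices here called M0 and M1 (the actual variable names inside the files may differ), and that missing entries are marked by 0.

% Load data; the variable names M0 (full) and M1 (masked) are assumptions.
load('ratings.mat');               % -> M0, 200 viewers x 100 movies
load('ratings_missing.mat');       % -> M1, missing entries set to 0
Omega = (M1 ~= 0);                 % logical mask of observed entries
[nv, nm] = size(M1);

% Method 1 (sketch): nuclear-norm minimization via CVX.
cvx_begin
    variable M(nv, nm)
    minimize( norm_nuc(M) )
    subject to
        M(Omega) == M1(Omega);
cvx_end
% cvx_optval holds the optimal objective value.
relerr1 = norm(M - M0, 'fro') / norm(M0, 'fro');

% Method 2 (sketch): loop over the (delta, tau) pairs from the table in part (b).
settings = [0.1 50; 2 50; 0.1 500; 2 500];          % rows of [delta tau]
for iters = [1000 2000]
    for s = 1:size(settings, 1)
        Mhat = svt_complete(M1, Omega, settings(s, 2), settings(s, 1), iters);
        fprintf('delta = %.1f, tau = %d, iters = %d, rel. error = %.4f\n', ...
            settings(s, 1), settings(s, 2), iters, ...
            norm(Mhat - M0, 'fro') / norm(M0, 'fro'));
    end
end

% Singular value thresholding (SVT) iteration, kept as a local function.
function M = svt_complete(M1, Omega, tau, delta, maxIter)
    Y = zeros(size(M1));
    for k = 1:maxIter
        [U, S, V] = svd(Y, 'econ');
        M = U * diag(max(diag(S) - tau, 0)) * V';               % shrink singular values
        Y(Omega) = Y(Omega) + delta * (M1(Omega) - M(Omega));   % step only on observed entries
    end
end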

Include the following results and commentary in your report:

(a)    For Method 1, report the optimal objective function value and the relative reconstruction error. The relative reconstruction error is defined by:

$$\frac{\|M - M_0\|_F}{\|M_0\|_F}$$

  

(b)   For Method 2, report the relative reconstruction error for the following values of 𝛿 and 𝜏 after (i) 1000 iterations and (ii) 2000 iterations. Comment on the results.

𝜹      𝝉
0.1    50
2      50
0.1    500
2      500

(c)    Comment on the performance (e.g., execution time) of Method 1 and Method 2. Conclude by providing your recommendation on which method is better.

 

Question 3. Compressive Sensing for Color Images
Color images are acquired in three channels: Red (R), Green (G), and Blue (B). The data (image) acquired by each channel is sparse in the DCT or wavelet domain, i.e.,

$$C = \Phi^T \mathcal{X}_C, \quad C \in \{R, G, B\},$$

$$\mathcal{X}_C = \Phi C,$$

where $C$ represents each of the three color channels, $\Phi$ is the (sparsifying) transform matrix, and $\mathcal{X}_C$ is the vector of sparse transform coefficients for that channel.

(1). In this problem, you are required to compress the color image (Lenna.png) to 70% of its original size by using a compressive sensing matrix $A_C$ (random sampling). The sensing matrix can be the same for all channels. Define your own sensing matrix $A_C$ and plot the compressed color image.
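One possible construction, sketched below, keeps a random 70% of the pixel locations in each vectorized channel; the sampling scheme and all variable names are assumptions, not a required design.

% Random-sampling sensing matrix sketch for part (1).
img = im2double(imread('Lenna.png'));
[h, w, ~] = size(img);
n = h * w;                                % pixels per channel
m = round(0.7 * n);                       % keep 70% of the measurements
idx = randperm(n, m);                     % random sample of pixel locations
Ac = sparse(1:m, idx, 1, m, n);           % sensing matrix A_C: m random rows of the identity
yR = Ac * reshape(img(:, :, 1), n, 1);    % compressed measurements per channel
yG = Ac * reshape(img(:, :, 2), n, 1);
yB = Ac * reshape(img(:, :, 3), n, 1);
% Display the compressed image: put the sampled pixels back in place and
% leave the unsampled locations at zero.
comp = zeros(n, 3);
comp(idx, 1) = yR;  comp(idx, 2) = yG;  comp(idx, 3) = yB;
imshow(reshape(comp, h, w, 3));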

(2). In this problem, use the provided code DWT.m to generate the transform matrix $\Phi$ and recover the color image. Compare the original color image with the recovered color image. Compute the reconstruction error in terms of MSE.

Hint: in compact matrix-vector notation, $y_C = A_C C = A_C \Phi^T \mathcal{X}_C$, $C \in \{R, G, B\}$, can be expressed as

$$\begin{bmatrix} y_R \\ y_G \\ y_B \end{bmatrix} = \begin{bmatrix} A_R \Phi^T & 0 & 0 \\ 0 & A_G \Phi^T & 0 \\ 0 & 0 & A_B \Phi^T \end{bmatrix} \begin{bmatrix} \mathcal{X}_R \\ \mathcal{X}_G \\ \mathcal{X}_B \end{bmatrix}$$

In short, 

$$y = A \mathcal{X},$$

where $y = [y_R^T \; y_G^T \; y_B^T]^T$, $A = \mathrm{BlockDiag}(A_R \Phi^T, A_G \Phi^T, A_B \Phi^T)$, and $\mathcal{X} = [\mathcal{X}_R^T \; \mathcal{X}_G^T \; \mathcal{X}_B^T]^T$.
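In code, the block-diagonal system from the hint could be assembled as sketched below. Phi is assumed to be the transform matrix produced by the provided DWT.m, Ac and the measurement vectors come from the sketch in part (1), and the sparse recovery step is left to a solver of your choice (e.g., an l1-minimization routine). For a full-resolution image these matrices are very large, so in practice you may prefer to recover each channel separately or work on a downsampled image.

y = [yR; yG; yB];                                 % stacked measurements
A = blkdiag(Ac * Phi', Ac * Phi', Ac * Phi');     % same sensing matrix for all channels
% Solve y = A * X for a sparse X with your chosen solver, split X into
% XR, XG, XB, and invert the transform channel by channel, e.g.:
% Rrec = Phi' * XR;  Grec = Phi' * XG;  Brec = Phi' * XB;
% rec  = cat(3, reshape(Rrec, h, w), reshape(Grec, h, w), reshape(Brec, h, w));
% mse  = mean((rec(:) - img(:)).^2);              % reconstruction error (MSE)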

 

 

Question 4. Sparse Smooth Decomposition  
In this problem, we’re going to use Sparse Smooth Decomposition (SSD) to extract features rather than detect anomalies. Provided are two heatmap images of a GPU lid: one at idle, and one under load (training a CNN to classify tensors of birds and cats). We want to programmatically detect where heat spreads on the lid under load so engineers can design appropriate heatsinks and place thermal sensors on the die.

Unfortunately, the temperature sensor we have is very noisy when the GPU is idle due to the temperature differentials being quite small. Therefore, we can’t solely rely on image processing techniques from Module 2, such as simply subtracting the at-idle image from the at-load image and doing edge detection.

A: 10%) Read in both images and convert to grayscale. To demonstrate why this would be an ugly problem with simple techniques, show a simple subtraction of the idle image from the load image as the deliverable for this part.
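A short MATLAB sketch of this part, with assumed file names:

% Part A sketch: read, convert to grayscale, and subtract (file names are assumptions).
imIdle = im2double(rgb2gray(imread('gpu_idle.png')));
imLoad = im2double(rgb2gray(imread('gpu_load.png')));
imDiff = imLoad - imIdle;        % naive subtraction of idle from load
imshow(imDiff, []);              % scaled display of the noisy difference image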

B: 40%) Implement 2-D SSD. For the purposes of this part, use the default parameters, as in the example code (e.g., delta = 0.2, x and y knots = 6, anomaly knots = length/4). As deliverables, include in your report the same output as from the example code, that is, the:

•          combined image used

•          decomposed mean

•          decomposed features – we are interested in heat generated under load, so set values <0 to 0!

C: 50%) The default parameters do quite well, but we can do better! Play with all available parameters to generate the best separation you can. Remember, your goal is to capture the load heat as sparse “anomaly” in the decomposition. In other words, you (probably) don’t want the spots that are only hot under load to show up in the mean. For this part, include in your report your:

•          combined image used (the “delta” used when combining the images is fair game)

•          decomposed mean

•          decomposed features – again, set values <0 to 0!

•          a brief discussion of your methodology; say what you tried, what did (or didn’t) work, and why you chose what you finally chose.
