CSE573 Project 2: Image Features and Homography, Epipolar Geometry and K-means Clustering

1         Image Features and Homography

1.   Given two images mountain1.jpg and mountain2.jpg, extract SIFT features and draw the keypoints for both images. Include the two resulting images (task1_sift1.jpg, task1_sift2.jpg) in the report. (1pt)
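A minimal sketch of this step, assuming OpenCV >= 4.4 where SIFT is available in the main module as cv2.SIFT_create (on older opencv-contrib builds the equivalent call is cv2.xfeatures2d.SIFT_create):

    import cv2

    img1 = cv2.imread("mountain1.jpg")
    img2 = cv2.imread("mountain2.jpg")

    # Detect SIFT keypoints and compute descriptors on grayscale copies.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
    kp2, des2 = sift.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)

    # Draw the keypoints on the original color images and save them.
    cv2.imwrite("task1_sift1.jpg", cv2.drawKeypoints(img1, kp1, None))
    cv2.imwrite("task1_sift2.jpg", cv2.drawKeypoints(img2, kp2, None))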

2.   Match the keypoints using k-nearest neighbours (k=2), i.e., for each keypoint in the left image, find the best 2 matches in the right image. Keep the good matches that satisfy m.distance < 0.75 * n.distance, where m is the first match and n is the second match. Draw the match image using cv2.drawMatches for all of these matches (your match image should contain both inliers and outliers). Include the result image (task1_matches_knn.jpg) in the report. (1pt)
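A sketch of the k-NN matching and ratio test, reusing kp1/des1 and kp2/des2 from the previous snippet (an assumption; adapt the variable names to your own code):

    # Brute-force matcher; for each left descriptor keep its two nearest right descriptors.
    bf = cv2.BFMatcher()
    matches = bf.knnMatch(des1, des2, k=2)

    # Lowe's ratio test: keep a match only if it is clearly better than the runner-up.
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    # Draw all ratio-test matches (this still contains outliers w.r.t. the homography).
    vis = cv2.drawMatches(img1, kp1, img2, kp2, good, None)
    cv2.imwrite("task1_matches_knn.jpg", vis)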

3.   Compute the homography matrix H (with RANSAC) from the first image to the second image. Include the matrix values in the report. (1pt)
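One way to estimate H with RANSAC from the ratio-test matches above (the 5.0 reprojection threshold is an arbitrary starting value):

    import numpy as np

    # Coordinates of the matched keypoints: queryIdx indexes image 1, trainIdx indexes image 2.
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # H maps points of mountain1.jpg into the frame of mountain2.jpg; mask flags RANSAC inliers.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print(H)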

4.   Draw the match image for around 10 random matches using only inliers. Include the result image (task1_matches.jpg) in the report. (1pt)
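The RANSAC mask returned by cv2.findHomography already marks which matches are inliers, so drawing around 10 random inliers can look like this (continuing from the variables above):

    import random

    # Keep only the matches RANSAC flagged as inliers, then sample about 10 of them.
    inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
    sample = random.sample(inliers, min(10, len(inliers)))
    cv2.imwrite("task1_matches.jpg", cv2.drawMatches(img1, kp1, img2, kp2, sample, None))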

5.   Warp the first image onto the second image using H. The resulting image should contain all pixels of mountain1.jpg and mountain2.jpg. Include the result image (task1_pano.jpg) in the report. (1pt)
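A sketch of the warp: the output canvas is sized from the projected corners of image 1 plus the corners of image 2, and a translation is folded into H so that nothing lands at negative coordinates (continuing from the variables above):

    h1, w1 = img1.shape[:2]
    h2, w2 = img2.shape[:2]

    # Corners of image 1 mapped through H, together with the corners of image 2.
    corners1 = cv2.perspectiveTransform(
        np.float32([[0, 0], [w1, 0], [w1, h1], [0, h1]]).reshape(-1, 1, 2), H)
    corners2 = np.float32([[0, 0], [w2, 0], [w2, h2], [0, h2]]).reshape(-1, 1, 2)
    corners = np.concatenate((corners1, corners2))
    xmin, ymin = np.floor(corners.min(axis=(0, 1))).astype(int)
    xmax, ymax = np.ceil(corners.max(axis=(0, 1))).astype(int)

    # Shift everything by (-xmin, -ymin), warp image 1, then paste image 2 on top.
    T = np.array([[1, 0, -xmin], [0, 1, -ymin], [0, 0, 1]], dtype=np.float64)
    pano = cv2.warpPerspective(img1, T @ H, (int(xmax - xmin), int(ymax - ymin)))
    pano[-ymin:h2 - ymin, -xmin:w2 - xmin] = img2
    cv2.imwrite("task1_pano.jpg", pano)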

2         Epipolar Geometry


1.   Given two images tsucuba_left.png and tsucuba_right.png, repeat the process from Tasks 1.1 and 1.2. Include the three resulting images (2 for Task 1.1 and 1 for Task 1.2) (task2_sift1.jpg, task2_sift2.jpg, task2_matches_knn.jpg) in the report. (1pt)

2.   Compute the fundamental matrix F (with RANSAC). Include the matrix values in the report. (1pt)
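A sketch, assuming kp1/kp2 and the ratio-test matches good have been recomputed for the tsucuba pair exactly as in Tasks 1.1–1.2:

    img_left = cv2.imread("tsucuba_left.png")
    img_right = cv2.imread("tsucuba_right.png")

    pts1 = np.int32([kp1[m.queryIdx].pt for m in good])   # left-image coordinates
    pts2 = np.int32([kp2[m.trainIdx].pt for m in good])   # right-image coordinates

    # F relates corresponding points between the two views; mask flags the RANSAC inliers.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    print(F)
    pts1_in, pts2_in = pts1[mask.ravel() == 1], pts2[mask.ravel() == 1]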

3.   Randomly select 10 inlier match pairs. For each keypoint in the left image, compute its epiline and draw it on the right image; for each keypoint in the right image, compute its epiline and draw it on the left image. [Use different colors for different match pairs, but the same color for the left- and right-image epilines of the same match pair.] Include the two images with epilines (task2_epi_right.jpg, task2_epi_left.jpg) in the report. (2pt)
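A sketch using cv2.computeCorrespondEpilines, continuing from the inlier points above. whichImage=1 returns lines to draw on the right image for left-image points, whichImage=2 the reverse; each epiline comes back as (a, b, c) with ax + by + c = 0:

    import random

    # Pick 10 random inlier pairs and one color per pair.
    idx = random.sample(range(len(pts1_in)), 10)
    p1, p2 = pts1_in[idx], pts2_in[idx]
    colors = [tuple(int(c) for c in np.random.randint(0, 256, 3)) for _ in idx]

    lines_r = cv2.computeCorrespondEpilines(p1.reshape(-1, 1, 2), 1, F).reshape(-1, 3)
    lines_l = cv2.computeCorrespondEpilines(p2.reshape(-1, 1, 2), 2, F).reshape(-1, 3)

    def draw(img, lines, pts, colors):
        h, w = img.shape[:2]
        for (a, b, c), pt, col in zip(lines, pts, colors):
            # Intersect each epiline with the left and right image borders.
            cv2.line(img, (0, int(-c / b)), (w, int(-(c + a * w) / b)), col, 1)
            cv2.circle(img, (int(pt[0]), int(pt[1])), 4, col, -1)
        return img

    cv2.imwrite("task2_epi_right.jpg", draw(img_right.copy(), lines_r, p2, colors))
    cv2.imwrite("task2_epi_left.jpg", draw(img_left.copy(), lines_l, p1, colors))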

4.   Compute the disparity map for tsucuba_left.png and tsucuba_right.png. Include the disparity image (task2_disparity.jpg) in the report. (1pt)
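A sketch using OpenCV's block matcher; numDisparities and blockSize are just starting values to tune, and cv2.StereoSGBM_create is an alternative:

    gray_l = cv2.imread("tsucuba_left.png", cv2.IMREAD_GRAYSCALE)
    gray_r = cv2.imread("tsucuba_right.png", cv2.IMREAD_GRAYSCALE)

    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disp = stereo.compute(gray_l, gray_r).astype(np.float32) / 16.0   # StereoBM returns fixed-point disparities
    disp = cv2.normalize(disp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    cv2.imwrite("task2_disparity.jpg", disp)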

3         K-means Clustering


 5.9

 4.6

 6.2

  4.7



 5.5 X =  5.0   4.9

  6.7

  5.1



6.0
3.2 

2.9 

2.8  3.2  4.2  3.0  3.1  3.1 

3.8 

3.0
Given the matrix X above, whose rows represent different data points, you are asked to perform k-means clustering on this dataset using the Euclidean distance as the distance function. Here k is chosen as 3. All data points in X are plotted in the figure above. The centers of the 3 clusters are initialized as µ1 = (6.2, 3.2) (red), µ2 = (6.6, 3.7) (green), µ3 = (6.5, 3.0) (blue).

Implement the k-means clustering algorithm (you are only allowed to use the basic numpy routines to implement the algorithm).
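A minimal numpy-only sketch of the two k-means steps (classification and center update) on the data and initial centers given above; Tasks 3.1–3.3 amount to reporting and plotting the intermediate results of the first two iterations:

    import numpy as np

    X = np.array([[5.9, 3.2], [4.6, 2.9], [6.2, 2.8], [4.7, 3.2], [5.5, 4.2],
                  [5.0, 3.0], [4.9, 3.1], [6.7, 3.1], [5.1, 3.8], [6.0, 3.0]])
    mu = np.array([[6.2, 3.2], [6.6, 3.7], [6.5, 3.0]])     # mu1 (red), mu2 (green), mu3 (blue)

    for it in range(2):
        # Classification step: index of the nearest center for every sample (Euclidean distance).
        dist = np.linalg.norm(X[:, None, :] - mu[None, :, :], axis=2)   # shape (N, k)
        labels = dist.argmin(axis=1)
        # Update step: each center becomes the mean of the samples assigned to it.
        mu = np.array([X[labels == i].mean(axis=0) for i in range(len(mu))])
        print(f"iteration {it + 1}: labels = {labels}\nmu =\n{mu}")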

1.   Classify the N = 10 samples according to the nearest µi (i = 1, 2, 3). Plot the results by coloring the empty triangles red, blue, or green. Include the classification vector and the classification plot (task3_iter1_a.jpg) in the report. (1pt)

(a) [Hint:] Use plt.scatter (with edgecolor, facecolor, and marker) and plt.text to plot the figure.
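One possible reading of this hint (the exact styling is up to you): unfilled triangles colored by cluster, with the coordinates written next to each point via plt.text, reusing X and labels from the sketch above:

    import matplotlib.pyplot as plt

    cluster_colors = ['r', 'g', 'b']          # colors of mu1, mu2, mu3 as given above
    for (x, y), lab in zip(X, labels):
        plt.scatter(x, y, marker='^', facecolor='none', edgecolor=cluster_colors[lab])
        plt.text(x + 0.05, y + 0.05, f"({x}, {y})", fontsize=8)
    plt.savefig("task3_iter1_a.jpg")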

2.   Recompute µi. Plot the updated µi as solid circles in red, blue, and green, respectively. Include the updated µi values and the plot (task3_iter1_b.jpg) in the report. (1pt)

3.   For the second iteration, plot the classification plot and the updated µi plot. Include the classification vector, the updated µi values, and these two plots (task3_iter2_a.jpg, task3_iter2_b.jpg) in the report. (1pt)

4.   [Color Quantization] Apply k-means to image color quantization: use only k colors to represent the image baboon.jpg. Include the color-quantized images for k = 3, 5, 10, 20 (task3_baboon_3.jpg, task3_baboon_5.jpg, task3_baboon_10.jpg, task3_baboon_20.jpg) in the report. (2pt)
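A sketch of the quantization step under the same numpy-only constraint, reusing the cv2 and numpy imports from earlier. The initialization is an assumption (the task does not prescribe one), so k distinct random pixels are used as initial centers:

    # Treat every pixel as a 3-D point, cluster, then replace each pixel by its cluster center.
    img = cv2.imread("baboon.jpg").astype(np.float64)
    pixels = img.reshape(-1, 3)                          # one row per pixel (N x 3)

    for k in (3, 5, 10, 20):
        centers = pixels[np.random.choice(len(pixels), k, replace=False)]
        for _ in range(10):                              # a few k-means iterations
            d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # Keep the old center if a cluster goes empty, to avoid NaNs.
            centers = np.array([pixels[labels == i].mean(axis=0) if np.any(labels == i)
                                else centers[i] for i in range(k)])
        quant = centers[labels].reshape(img.shape).astype(np.uint8)
        cv2.imwrite(f"task3_baboon_{k}.jpg", quant)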



5.   [Gaussian Mixture Model] Implement Gaussian mixture models (GMM) (you are only allowed to use basic numpy routines and scipy.stats.multivariate_normal to implement the algorithm). Your GMM algorithm should run on a dataset represented as a matrix X of shape (N, D), where each row represents a data point, N is the number of data points, and D is the dimension of the data points. (A minimal EM sketch follows at the end of this task.) (3 bonus points)

(a)  Run GMM on the above dataset represented as a 10 × 2 matrix X. Let µ1 = (6.2, 3.2), .... What are the µi after the first iteration? Include the µi values in the report. (1 pt)

(b)  Apply GMM to the Old Faithful dataset (https://www.stat.cmu.edu/~larry/all-of-statistics/=data/faithful.dat). The dataset matrix X should be of shape (272, 2) [x: eruptions, y: waiting]. Let k = 3, µ1 = .... Plot the results for the first five iterations (the following image is a sample plotted with the given parameters at iteration 0; your reported results should be similar but with different Gaussian mixture centers and covariances). Include these five plots (task3_gmm_iter1.jpg, ..., task3_gmm_iter5.jpg) in the report. (2 pt)



[Hint:] You can use https://github.com/joferkington/oost_paper_code/blob/master/error_ellipse.py to plot the covariance ellipse. (Set alpha=0.5 and use red, green, and blue for the three clusters.)
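A minimal EM sketch for part (a), using only numpy and scipy.stats.multivariate_normal as allowed. The identity covariances, equal mixture weights, and the three k-means centers used as the starting point below are assumptions; substitute whatever initial parameters the handout specifies. For part (b), the same loop runs on the Old Faithful matrix, saving a plot of the centers and covariance ellipses after each of the first five iterations:

    import numpy as np
    from scipy.stats import multivariate_normal

    def gmm_em(X, mu, cov, pi, n_iter):
        N, _ = X.shape
        k = len(mu)
        for _ in range(n_iter):
            # E-step: responsibility of each Gaussian component for each data point.
            resp = np.column_stack([pi[j] * multivariate_normal.pdf(X, mu[j], cov[j])
                                    for j in range(k)])
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: re-estimate mixture weights, means, and covariances.
            Nk = resp.sum(axis=0)
            pi = Nk / N
            mu = (resp.T @ X) / Nk[:, None]
            cov = [(resp[:, j, None] * (X - mu[j])).T @ (X - mu[j]) / Nk[j] for j in range(k)]
        return mu, cov, pi

    X = np.array([[5.9, 3.2], [4.6, 2.9], [6.2, 2.8], [4.7, 3.2], [5.5, 4.2],
                  [5.0, 3.0], [4.9, 3.1], [6.7, 3.1], [5.1, 3.8], [6.0, 3.0]])
    mu0 = np.array([[6.2, 3.2], [6.6, 3.7], [6.5, 3.0]])   # assumed initial centers
    cov0 = [np.eye(2)] * 3                                  # assumed identity covariances
    pi0 = np.full(3, 1 / 3)                                 # assumed equal mixture weights

    mu1, _, _ = gmm_em(X, mu0, cov0, pi0, n_iter=1)
    print(mu1)   # the mu_i after one EM iteration (part (a))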
