CS4277-CS5477-Assignment 3 Solved

1.0.1      Introduction
In this assignment, you will estimate the essential and fundamental matrices using the eight-point algorithm. As discussed in the lecture, images taken from different views should fulfill the epipolar constraint, which can be used to estimate the fundamental and essential matrices. You will first estimate the fundamental and essential matrices with the 15 correspondences provided in the dataset. Then you will decompose the essential matrix to find the rotation and translation between the two views. The decomposition gives 4 feasible camera poses, and you will select the correct pose by a cheirality check.

1.1       Part 1: Load and Visualize Data
In this part, you will familiarize yourself with the data by visualizing it. The data includes two images of the same object (img1.jpg and img2.jpg) and 15 correspondences (correspondences.mat). You can visualize the data with the provided code below.

In [2]: correspondences = sio.loadmat('data/correspondences_ud')
        data1_ori = correspondences['movingPoints']
        data2_ori = correspondences['fixedPoints']
        data1 = np.concatenate([data1_ori.T, np.ones((1, data1_ori.shape[0]))], axis=0)
        data2 = np.concatenate([data2_ori.T, np.ones((1, data2_ori.shape[0]))], axis=0)
        img1 = plt.imread('data/img1.jpg')
        img2 = plt.imread('data/img2.jpg')
        plt.figure(figsize=(12, 6))
        plt.subplot(1, 2, 1)
        for j in range(data1_ori.shape[0]):
            # color/thickness arguments were truncated in the source; values assumed
            cv2.circle(img1, (np.int32(data1_ori[j, 0]), np.int32(data1_ori[j, 1])), 5, (255, 0, 0), -1)
        plt.imshow(img1)
        plt.subplot(1, 2, 2)
        for j in range(data2_ori.shape[0]):
            cv2.circle(img2, (np.int32(data2_ori[j, 0]), np.int32(data2_ori[j, 1])), 5, (255, 0, 0), -1)
        plt.imshow(img2)

Out[2]: <matplotlib.image.AxesImage at 0x7ffb8b257208>

 


1.2         Part 2: Estimate Fundamental Matrix from Point Correspondences
In this part, you will implement the 8-point algorithm to estimate the fundamental matrix. For any pair of matching points x_i ↔ x′_i in the two images, the 3 × 3 fundamental matrix F is defined by the equation:

x′ᵀ F x = 0

Let f be the 9-vector made up of the entries of F in row-major order. Then each correspondence gives:

(x′x, x′y, x′, y′x, y′y, y′, x, y, 1) f = 0

From a set of n point matches, we obtain a set of linear equations of the form:

Af = 0

The solution for f is the singular vector corresponding to the smallest singular value of A. You should then enforce the singularity constraint on F so that its rank is 2. Note that the normalization step is very important here for an accurate estimation.
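The pipeline above (normalize, solve the homogeneous system, enforce rank 2, denormalize) can be sketched as follows. This is a hypothetical illustration, not the required compute_fundamental() implementation; the function name and the Hartley-style normalization details are assumptions.

```python
import numpy as np

def compute_fundamental_sketch(x1, x2):
    """Normalized 8-point estimate of F from 3xN homogeneous points,
    satisfying x2^T F x1 = 0. A sketch for illustration only."""
    def normalize(x):
        # Translate the centroid to the origin and scale the mean
        # distance to sqrt(2) (the usual conditioning step).
        mean = x[:2].mean(axis=1)
        centered = x[:2] - mean[:, None]
        scale = np.sqrt(2) / np.mean(np.linalg.norm(centered, axis=0))
        T = np.array([[scale, 0.0, -scale * mean[0]],
                      [0.0, scale, -scale * mean[1]],
                      [0.0, 0.0, 1.0]])
        return T @ x, T

    x1n, T1 = normalize(x1)
    x2n, T2 = normalize(x2)

    # One row (x'x, x'y, x', y'x, y'y, y', x, y, 1) per correspondence.
    A = np.stack([x2n[0] * x1n[0], x2n[0] * x1n[1], x2n[0],
                  x2n[1] * x1n[0], x2n[1] * x1n[1], x2n[1],
                  x1n[0], x1n[1], np.ones(x1n.shape[1])], axis=1)

    # f = right singular vector of the smallest singular value of A.
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)

    # Enforce rank 2 by zeroing the smallest singular value of F.
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0
    F = U @ np.diag(S) @ Vt

    # Undo the normalization: x2^T T2^T F T1 x1 = 0.
    F = T2.T @ F @ T1
    return F / np.linalg.norm(F)
```

Note that without the normalization step the entries of A differ by several orders of magnitude (pixel coordinates vs. 1), which makes the SVD solution numerically unreliable.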

You can verify your estimate by visualizing the epipolar lines in both images; the epipolar lines should pass through the matching points. The helper function plot_epipolar_line() is provided for visualization.

Implement the following function(s): compute_fundamental()
*     Prohibited Functions: cv2.findFundamentalMat()

*      You may use the following functions: np.linalg.svd()

In [3]: F = compute_fundamental(data1, data2)
        plt.figure(figsize=(12, 6))
        plt.subplot(1, 2, 1)
        plt.imshow(img1)
        for i in range(data1.shape[1]):
            plt.plot(data1[0, i], data1[1, i], 'bo')
            m, n = img1.shape[:2]
            line1 = np.dot(F.T, data2[:, i])
            t = np.linspace(0, n, 100)
            lt1 = np.array([(line1[2] + line1[0] * tt) / (-line1[1]) for tt in t])
            ndx = (lt1 >= 0) & (lt1 < m)
            plt.plot(t[ndx], lt1[ndx], linewidth=2)
        plt.subplot(1, 2, 2)
        plt.imshow(img2)
        for i in range(data2.shape[1]):
            plt.plot(data2[0, i], data2[1, i], 'ro')
            m, n = img2.shape[:2]
            line2 = np.dot(F, data1[:, i])
            t = np.linspace(0, n, 100)
            lt2 = np.array([(line2[2] + line2[0] * tt) / (-line2[1]) for tt in t])
            ndx = (lt2 >= 0) & (lt2 < m)
            plt.plot(t[ndx], lt2[ndx], linewidth=2)

 

1.3        Part 3: Estimate Essential Matrix from Point Correspondences
In this part, you will also implement the 8-point algorithm, this time to estimate the essential matrix. The steps are the same as for the fundamental matrix estimation, except for:

1.       The normalization step: for each correspondence x_i ↔ x′_i, compute the normalized coordinates K⁻¹x_i and K′⁻¹x′_i. K and K′ are the camera calibration matrices, which are given in the intrinsics.h5 file. Note that we only give one camera calibration matrix here because the two images were taken by the same camera.

2.       The singularity constraint: the essential matrix should have two equal singular values, and the third should be zero.
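The two modified steps can be sketched as below. This is a hypothetical illustration, not the required compute_essential() implementation; the function name is an assumption, and the linear system is the same one used for the fundamental matrix.

```python
import numpy as np

def compute_essential_sketch(x1, x2, K):
    """8-point estimate of E from 3xN homogeneous pixel points and
    intrinsics K (same camera for both views). A sketch only."""
    # Step 1 (changed): normalize pixel coordinates with the inverse
    # calibration matrix instead of a similarity transform.
    Kinv = np.linalg.inv(K)
    x1n = Kinv @ x1
    x2n = Kinv @ x2

    # Same linear system as for F, built from normalized coordinates.
    A = np.stack([x2n[0] * x1n[0], x2n[0] * x1n[1], x2n[0],
                  x2n[1] * x1n[0], x2n[1] * x1n[1], x2n[1],
                  x1n[0], x1n[1], np.ones(x1n.shape[1])], axis=1)
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)

    # Step 2 (changed): project onto the essential manifold by forcing
    # the singular values to (s, s, 0), with s the mean of the top two.
    U, S, Vt = np.linalg.svd(E)
    s = (S[0] + S[1]) / 2.0
    E = U @ np.diag([s, s, 0.0]) @ Vt
    return E
```

Replacing the singular values with their mean is one common way to enforce the two-equal-singular-values constraint; with noise-free correspondences the top two singular values are already (numerically) equal.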

Implement the following function(s): compute_essential()
* Prohibited Functions: cv2.findEssentialMat()

* You may use the following functions: np.linalg.svd(), np.linalg.inv()

Note that your estimated essential matrix may differ from the result of cv2.findEssentialMat(), because cv2.findEssentialMat() uses a different algorithm.

In [4]: with h5py.File('data/intrinsics.h5', 'r') as f:
            K = f['K'][:]

        E = compute_essential(data1, data2, K)

 

1.4       Part 4: Two-view Relative Pose Estimation
In this part, you will extract the relative rotation R and translation t from the essential matrix E according to:

E = [t]× R.

The essential matrix can be decomposed into 4 feasible camera poses, and you will select the correct one by the cheirality check. Specifically, the 3D structure can be computed with the linear triangulation method, and the 3D points should appear in front of both cameras. Note that we assume that the rotation and translation of the first camera are the identity matrix and the zero vector respectively.
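The decomposition and cheirality check can be sketched as below. This is a hypothetical illustration, not the required decompose_e() implementation; the function names are assumptions, and it uses the standard SVD-based decomposition E = U diag(1,1,0) Vᵀ with R ∈ {UWVᵀ, UWᵀVᵀ} and t = ±u₃.

```python
import numpy as np

def triangulate_sketch(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence (x[2] = 1)."""
    A = np.stack([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X / X[3]

def decompose_e_sketch(E, K, x1, x2):
    """Return the (R, t) among the 4 candidates with the most
    triangulated points in front of both cameras. A sketch only."""
    U, _, Vt = np.linalg.svd(E)
    # Ensure the candidate rotations have determinant +1 (flipping the
    # sign of U or V only changes the sign of E, which is irrelevant).
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    candidates = [(U @ W @ Vt, U[:, 2]), (U @ W @ Vt, -U[:, 2]),
                  (U @ W.T @ Vt, U[:, 2]), (U @ W.T @ Vt, -U[:, 2])]

    # Triangulate in normalized coordinates; camera 1 is [I | 0].
    Kinv = np.linalg.inv(K)
    x1n, x2n = Kinv @ x1, Kinv @ x2
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    best, best_count = None, -1
    for R, t in candidates:
        P2 = np.hstack([R, t[:, None]])
        count = 0
        for i in range(x1n.shape[1]):
            X = triangulate_sketch(P1, P2, x1n[:, i], x2n[:, i])
            z1 = X[2]                    # depth in camera 1 (= world)
            z2 = (R @ X[:3] + t)[2]      # depth in camera 2
            count += (z1 > 0) and (z2 > 0)
        if count > best_count:
            best_count, best = count, (R, t)
    return best
```

Counting in-front points (rather than requiring all of them) makes the check robust to a few badly triangulated correspondences; with clean data, exactly one candidate pose places every point in front of both cameras.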

Implement the following function(s): decompose_e()

*  Prohibited Functions: cv2.recoverPose()

*  You may use the following functions: np.linalg.svd()

 In [5]: trans = decompose_e(E, K, data1, data2)
