CS4277/CS5477 Assignment 2

        1.0.1      Introduction
In this assignment, you will implement Zhengyou Zhang's camera calibration. The extrinsics and intrinsics of a camera are estimated from three images of a model plane. You will first estimate the five intrinsic parameters (focal length, principal point, skew) and six extrinsic parameters (three for rotation and three for translation) with a closed-form solution. Then you will estimate five distortion parameters and fine-tune all parameters by minimizing the total reprojection error.

* Zhengyou Zhang. A Flexible New Technique for Camera Calibration

 

        1.1       Part 1: Load and Visualize Data
In this part, you will familiarize yourself with the data by visualizing it. The data includes three images of a planar checkerboard (CalibIm1-3.tif) and the corresponding corner locations in each image (data1-3.txt). The 3D points of the model are stored in Model.txt. Note that only the X and Y coordinates are provided because we assume that the model plane is at Z = 0. You can visualize the data with the provided code below.



        1.2      Part 2: Estimate the Intrinsic Parameters
In this part, you will estimate the intrinsics, which include the focal length, skew and principal point. You will first estimate the homography between each observed image and the 3D model. Note that you are allowed to use cv2.findHomography() here since you already implemented it in Lab 1. Each view of the checkerboard gives us two constraints:

vb = 0,

where v is a 2 × 6 matrix built from products of the homography terms. Given three observations, we get:

Vb = 0,

where V is a 6 × 6 matrix obtained from stacking all constraints together. The solution can be obtained by taking the right null-space of V, which is the right singular vector corresponding to the smallest singular value of V.
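To make this concrete: each constraint row is built from products of the homography columns, and b = (B11, B12, B22, B13, B23, B33) parameterizes B = A⁻ᵀA⁻¹ up to scale. Below is a minimal sketch of this step; the names v_ij() and solve_b() are illustrative, not part of the assignment skeleton:

```python
import numpy as np

def v_ij(H, i, j):
    # Constraint row built from columns i and j of the homography H,
    # such that h_i^T B h_j = v_ij^T b with b = (B11, B12, B22, B13, B23, B33).
    return np.array([
        H[0, i] * H[0, j],
        H[0, i] * H[1, j] + H[1, i] * H[0, j],
        H[1, i] * H[1, j],
        H[2, i] * H[0, j] + H[0, i] * H[2, j],
        H[2, i] * H[1, j] + H[1, i] * H[2, j],
        H[2, i] * H[2, j],
    ])

def solve_b(H_all):
    # Stack the two constraints per view (v_01^T b = 0 and
    # (v_00 - v_11)^T b = 0) and take the right null-space of V.
    V = []
    for H in H_all:
        V.append(v_ij(H, 0, 1))
        V.append(v_ij(H, 0, 0) - v_ij(H, 1, 1))
    # Right singular vector of the smallest singular value of V.
    _, _, Vt = np.linalg.svd(np.array(V))
    return Vt[-1]
```

With three views, V is 6 × 6 and the null-space is one-dimensional, so b is determined up to scale (and sign).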

Implement the following function(s): init_param()
•     You may use the following functions: cv2.findHomography(), np.linalg.svd()

•     Prohibited Functions: cv2.calibrateCamera()
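For reference, once b is known the five intrinsics can be read off in closed form from the entries of B. The formulas below follow from B = A⁻ᵀA⁻¹ (they correspond to the appendix of Zhang's paper; the function name is illustrative, and the extraction is invariant to the unknown scale and sign of b):

```python
import numpy as np

def intrinsics_from_b(b):
    # b = (B11, B12, B22, B13, B23, B33) parameterizes B = A^-T A^-1 up to scale.
    B11, B12, B22, B13, B23, B33 = b
    v0 = (B12 * B13 - B11 * B23) / (B11 * B22 - B12 ** 2)
    lam = B33 - (B13 ** 2 + v0 * (B12 * B13 - B11 * B23)) / B11
    alpha = np.sqrt(lam / B11)                          # focal length (x)
    beta = np.sqrt(lam * B11 / (B11 * B22 - B12 ** 2))  # focal length (y)
    gamma = -B12 * alpha ** 2 * beta / lam              # skew
    u0 = gamma * v0 / beta - B13 * alpha ** 2 / lam     # principal point (x)
    return np.array([[alpha, gamma, u0],
                     [0.,    beta,  v0],
                     [0.,    0.,    1.]])
```

Verify these against the paper's appendix before relying on them; each expression is a ratio of b-entries, which is why the scale and sign of b drop out.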

 

        1.3      Part 3: Estimate the Extrinsic Parameters
In this part, you will estimate the extrinsic parameters based on the intrinsic matrix A you obtained from Part 2. You can compute the rotation and translation according to:

r1 = λA⁻¹h1,   r2 = λA⁻¹h2,   r3 = r1 × r2,   t = λA⁻¹h3,

where λ = 1/‖A⁻¹h1‖ = 1/‖A⁻¹h2‖, and hi represents the i-th column of the homography H. Note that the matrix R = [r1, r2, r3] obtained this way does not in general satisfy the properties of a rotation matrix. Hence, you will use the provided function convt2rotation() to estimate the best rotation matrix. The details are given in the supplementary material of the reference paper.

•     You may use the following functions: np.linalg.svd(), np.linalg.inv(), np.linalg.norm(), convt2rotation()
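A sketch of this closed-form step (the function name is ours; in practice the H returned by cv2.findHomography() has arbitrary scale and sign, which λ absorbs up to a sign choice, and the result should still be passed through convt2rotation()):

```python
import numpy as np

def extrinsics_from_homography(A, H):
    # Closed-form extrinsics: r1 = lam*A^-1 h1, r2 = lam*A^-1 h2,
    # r3 = r1 x r2, t = lam*A^-1 h3, with lam = 1/||A^-1 h1||.
    A_inv = np.linalg.inv(A)
    h1, h2, h3 = H[:, 0], H[:, 1], H[:, 2]
    lam = 1.0 / np.linalg.norm(A_inv @ h1)  # equals 1/||A^-1 h2|| up to noise
    r1 = lam * (A_inv @ h1)
    r2 = lam * (A_inv @ h2)
    r3 = np.cross(r1, r2)
    t = lam * (A_inv @ h3)
    return np.column_stack([r1, r2, r3]), t
```

Because of noise, [r1, r2, r3] is only approximately orthonormal, which is why the projection onto the nearest rotation matrix is needed afterwards.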

[3]: R_all, T_all, K = init_param(pts_model, pts_2d)
     A = np.array([K[0], K[1], K[2], 0, K[3], K[4], 0, 0, 1]).reshape([3, 3])
     img_all = []
     for i in range(len(R_all)):
         R = R_all[i]
         T = T_all[i]
         points_2d = pts_2d[i]
         trans = np.array([R[:, 0], R[:, 1], T]).T
         points_rep = np.dot(A, np.dot(trans, pts_model_homo))
         points_rep = points_rep[0:2] / points_rep[2:3]
         img = cv2.imread('./zhang_data/CalibIm{}.tif'.format(i + 1))
         for j in range(points_rep.shape[1]):
             cv2.circle(img, (np.int32(points_rep[0, j]), np.int32(points_rep[1, j])), 5, (0, 0, 255), 2)
             cv2.circle(img, (np.int32(points_2d[0, j]), np.int32(points_2d[1, j])), 4, (255, 0, 0), 2)
         plt.figure()
         plt.imshow(img)

By now you have a rough estimate of the intrinsic and extrinsic parameters. You can check your results with the provided code, which visualizes the reprojections of the corner locations using the estimated parameters. You will find that the points far from the center of the image (the four corners of the checkerboard) are less accurate than points near the center. This is because we did not consider the distortion parameters in this step.
 

 

        1.4      Part 4: Estimate All Parameters
In this part, you will estimate all parameters by minimizing the total reprojection error:

argmin_{K, R, t, k}  ∑_{i=1}^{n} ∑_{j=1}^{m} ‖x_ij − π(K, R, t, k, X_j)‖,

K, R, t are the intrinsics and extrinsics, which are initialized with the estimates from Part 3. k represents the five distortion parameters and is initialized with zeros. X_j and x_ij represent the 3D model points and the corresponding 2D observations.

Note that you will use the function least_squares() in scipy to minimize the reprojection error and find the optimal parameters. During the optimization process, the rotation matrix R should be represented by a 3-dimensional vector by using the provided function matrix2vector(). We provide the skeleton code of how to use the function least_squares() below.

The key step of the optimization is to define the error function error_fun(), where the first parameter param holds the parameters you will optimize over. The param in this example includes: intrinsics (0-5), distortion (5-10), extrinsics (10-28). The extrinsics consist of three pairs of a rotation s and a translation t because we have three views. The rotation s is the 3-dimensional vector representation, which you can convert back to a rotation matrix with the provided function vector2matrix(). You will have to consider the distortion when computing the reprojection error. Let x = (x, y) be the normalized image coordinate, namely the points_ud_all in the code. The radial distortion is given by:

x_r = (1 + κ1 r² + κ2 r⁴ + κ5 r⁶) x,

where r² = x² + y² and κ1, κ2, κ5 are the radial distortion parameters. The tangential distortion is given by:

dx = ( 2κ3 xy + κ4(r² + 2x²),   κ3(r² + 2y²) + 2κ4 xy ),

where κ3, κ4 are the tangential distortion parameters. Finally, the image coordinates after distortion are given by:

xd = xr + dx.
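Putting the two equations together, the distortion can be applied to normalized coordinates as follows (a sketch; the layout k = (κ1, κ2, κ3, κ4, κ5) with κ1, κ2, κ5 radial and κ3, κ4 tangential follows the text):

```python
import numpy as np

def distort(points_ud, k):
    # points_ud: 2xN normalized coordinates (x, y); k: (k1, k2, k3, k4, k5).
    x, y = points_ud[0], points_ud[1]
    r2 = x ** 2 + y ** 2
    # Radial part: scale the point by (1 + k1 r^2 + k2 r^4 + k5 r^6).
    radial = 1.0 + k[0] * r2 + k[1] * r2 ** 2 + k[4] * r2 ** 3
    # Tangential part: k3, k4.
    dx = 2.0 * k[2] * x * y + k[3] * (r2 + 2.0 * x ** 2)
    dy = k[2] * (r2 + 2.0 * y ** 2) + 2.0 * k[3] * x * y
    return np.stack([radial * x + dx, radial * y + dy])
```

With all five parameters zero, the function is the identity, which is why zero is a reasonable initialization for k.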

The optimization converges when the error no longer changes much. Note that you will decide the iter_num by yourself according to the error value. You can verify the optimal parameters by visualizing the points after distortion. The function visualize_distorted() is an example of how to visualize the points after distortion in the image. You will find that the points far from the center of the image are more accurate than the estimates from Part 3.
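The pieces above can be combined into a residual function for least_squares(). The sketch below uses the parameter layout described in the text (intrinsics 0-5 in the same order as K in the Part 3 code, distortion 5-10, then one (s, t) pair of six numbers per view); rodrigues() here is only a stand-in for the provided vector2matrix():

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(s):
    # Stand-in for the provided vector2matrix(): axis-angle 3-vector -> R.
    theta = np.linalg.norm(s)
    if theta < 1e-12:
        return np.eye(3)
    a = s / theta
    K = np.array([[0., -a[2], a[1]], [a[2], 0., -a[0]], [-a[1], a[0], 0.]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def error_fun(param, pts_model_homo, pts_2d_all):
    # pts_model_homo: 3xN homogeneous model points (X, Y, 1);
    # pts_2d_all: list of 2xN observed corner arrays, one per view.
    fx, sk, cx, fy, cy = param[0:5]          # same order as K in Part 3
    A = np.array([[fx, sk, cx], [0., fy, cy], [0., 0., 1.]])
    k = param[5:10]                          # (k1, k2, k3, k4, k5)
    residuals = []
    for i, pts_2d in enumerate(pts_2d_all):
        s = param[10 + 6 * i: 13 + 6 * i]    # rotation vector of view i
        t = param[13 + 6 * i: 16 + 6 * i]    # translation of view i
        R = rodrigues(s)
        trans = np.column_stack([R[:, 0], R[:, 1], t])
        p = trans @ pts_model_homo
        x, y = p[0] / p[2], p[1] / p[2]      # normalized, undistorted
        r2 = x ** 2 + y ** 2
        rad = 1.0 + k[0] * r2 + k[1] * r2 ** 2 + k[4] * r2 ** 3
        xd = rad * x + 2.0 * k[2] * x * y + k[3] * (r2 + 2.0 * x ** 2)
        yd = rad * y + k[2] * (r2 + 2.0 * y ** 2) + 2.0 * k[3] * x * y
        proj = A @ np.stack([xd, yd, np.ones_like(xd)])
        residuals.append((proj[0:2] - pts_2d).ravel())
    return np.concatenate(residuals)

# param0 would be assembled from the Part 3 estimates plus zero distortion:
# result = least_squares(error_fun, param0, args=(pts_model_homo, pts_2d))
```

Since least_squares() expects a flat residual vector, each view's 2xN reprojection errors are raveled and concatenated.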
