COMP590 Homework 3: Feature Extraction

In this assignment, you will implement the following pipeline for feature extraction and matching (specific steps are described in detail below):

1)     Harris “cornerness” filter for a gray-value image

2)     Non-maximum suppression of cornerness scores

3)     Corner keypoint extraction

4)     Patch-based feature description

5)     One-to-one matching using SSD and NCC

 

 

Steps for Feature Detection (Harris Corner Detection)
The general steps of Harris corner detection are as follows:

1)     Convert the input image 𝐼_RGB(𝑥, 𝑦) to a grayscale image 𝐼(𝑥, 𝑦).

2)     Apply a “cornerness” filter ℎ(𝐼). This results in an image 𝑅(𝑥, 𝑦) with larger values corresponding to image points that are more likely to be corners. The filter ℎ is formed from a series of independent filter steps:

a.      Compute the image gradients 𝑋 = 𝐼_𝑥(𝑥, 𝑦) and 𝑌 = 𝐼_𝑦(𝑥, 𝑦). It is suggested that you use the Sobel filter for increased robustness to noise.

b.     Compute the matrix-valued image 𝑀(𝑥, 𝑦), as defined by Harris and Stephens. For the Gaussian weighting, consider using SciPy’s gaussian_filter function.

c.      Compute 𝑅(𝑥, 𝑦) = det(𝑀(𝑥, 𝑦)) − 𝑘 tr(𝑀(𝑥, 𝑦))², for some constant 𝑘.

3)     Apply non-maximum suppression to 𝑅(𝑥, 𝑦), keeping only pixel locations that have the strongest response within their 𝑤 × 𝑤 pixel neighborhood. Consider using a maximum filter for this operation.

4)     Return the (𝑥, 𝑦) coordinates of the strongest corner-response maxima, according to some thresholding operation. Here, we will simply select as features the up to 𝐾 strongest maxima with response 𝑅(𝑥, 𝑦) > 0. Note that the keypoints should have integer pixel positions.
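The four steps above can be sketched as a single function. This is a minimal illustration, not a required interface: the function and parameter names (harris_corners, sigma, k, w, K) are our own, and the default values are arbitrary placeholders.

```python
import numpy as np
from scipy.ndimage import sobel, gaussian_filter, maximum_filter

def harris_corners(I, sigma=1.0, k=0.05, w=11, K=500):
    """Return up to K integer (x, y) corner locations for grayscale image I."""
    # Step 2a: image gradients (Sobel for robustness to noise)
    Ix = sobel(I, axis=1)  # derivative along x (columns)
    Iy = sobel(I, axis=0)  # derivative along y (rows)

    # Step 2b: Gaussian-weighted entries of M(x, y)
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)

    # Step 2c: cornerness R = det(M) - k * tr(M)^2
    R = (Sxx * Syy - Sxy**2) - k * (Sxx + Syy)**2

    # Step 3: non-maximum suppression within a w x w neighborhood
    is_max = (R == maximum_filter(R, size=w))

    # Step 4: keep the up-to-K strongest maxima with R > 0
    ys, xs = np.nonzero(is_max & (R > 0))
    order = np.argsort(R[ys, xs])[::-1][:K]
    return np.stack([xs[order], ys[order]], axis=1)
```

Note that negative 𝑅 values (edge responses) are discarded by the 𝑅 > 0 test, so only true corner-like maxima survive.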

 

 

 

Feature Description and Keypoint Matching
We will take a very simple approach to feature description: For each keypoint, take the 𝑛 × 𝑛 image patch of 𝐼(𝑥, 𝑦) centered at that keypoint. For keypoint matching between two images, you should implement two methods for comparing descriptors:

1)     Sum of squared differences (SSD)

2)     One minus normalized cross-correlation (1 – NCC; this assigns a distance of zero for identical descriptors)
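Extracting the patch descriptors themselves can be as simple as the sketch below. The function name and keypoint format are our assumptions, and for brevity the sketch assumes every keypoint lies at least 𝑛/2 pixels from the image border (a real implementation should discard or pad border keypoints).

```python
import numpy as np

def describe(I, keypoints, n=7):
    """Return one flattened n x n patch of I per integer (x, y) keypoint.

    Assumes all keypoints are at least n//2 pixels from the border.
    """
    r = n // 2
    descs = []
    for x, y in keypoints:
        patch = I[y - r:y + r + 1, x - r:x + r + 1]
        descs.append(patch.ravel().astype(np.float64))
    return np.array(descs)
```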

Given two images, compute the distance of every feature in the first image to every feature in the second image and store the result in a match matrix, with rows corresponding to first-image features and columns corresponding to second-image features.[1] Then, compute keypoint correspondences using one-to-one matching:

 One-to-one Matching: For each keypoint in the first image, find the most similar keypoint in the second image. Repeat this for the keypoints in the second image against the first. Keep the feature correspondences that are mutual best matches. 
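The two distances and the mutual-best-match rule can be sketched as follows, implemented by hand rather than with cdist. Function names are illustrative; the NCC version assumes no descriptor is constant (its normalization divides by the descriptor norm).

```python
import numpy as np

def ssd_matrix(D1, D2):
    """SSD distance between every row of D1 and every row of D2."""
    diff = D1[:, None, :] - D2[None, :, :]
    return np.sum(diff**2, axis=2)

def ncc_matrix(D1, D2):
    """1 - NCC distance; identical descriptors get distance zero."""
    def normalize(D):
        D = D - D.mean(axis=1, keepdims=True)
        # Assumes nonzero norms, i.e., no constant patches.
        return D / np.linalg.norm(D, axis=1, keepdims=True)
    return 1.0 - normalize(D1) @ normalize(D2).T

def one_to_one_matches(dist):
    """(row, column) index pairs that are mutual best matches."""
    best12 = np.argmin(dist, axis=1)  # best column for each row
    best21 = np.argmin(dist, axis=0)  # best row for each column
    return [(i, j) for i, j in enumerate(best12) if best21[j] == i]
```

The mutual-best-match test rejects ambiguous correspondences: a pair survives only if each keypoint is the other's nearest neighbor.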

 

 

 

Note: While you could use SciPy’s cdist function to compute the pairwise scores (i.e., SSD and 1 – NCC), please implement the metrics on your own for this assignment.

 

 

 

Summary: Algorithm Parameters
•        𝜎 : standard deviation of the Gaussian filter when computing 𝑅(𝑥, 𝑦)

•        𝑘  : typical values range from 0.05 to 0.15

•        𝑤 : for non-maximum suppression, use a fixed window size of 𝑤 = 11px

•        𝐾 : maximum number of keypoints to return (note: only return points where 𝑅(𝑥, 𝑦) > 0)

•        𝑛 : for feature description, use a fixed window size of 𝑛 = 7px

•        Matching method: either SSD or (one minus) NCC
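One possible way to bundle these parameters is a plain dictionary; the values for 𝜎, 𝑘, and 𝐾 below are illustrative choices within the stated ranges, not required defaults.

```python
params = {
    "sigma": 1.0,    # Gaussian std when computing R(x, y)
    "k": 0.05,       # Harris constant, typically 0.05-0.15
    "w": 11,         # non-max suppression window (fixed, px)
    "K": 500,        # max keypoints; keep only R(x, y) > 0
    "n": 7,          # descriptor patch size (fixed, px)
    "metric": "ssd"  # "ssd" or "ncc"
}
```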



[1] For example, the first row will contain the distances from the first keypoint in the first image to every keypoint in the second image. (Note that the keypoint ordering for each image is arbitrary, but fixed.)
