|
|
|
Goal
----
|
|
|
|
In this chapter, |
|
|
|
|
|
|
|
|
|
- We will understand the concepts behind Harris Corner Detection. |
|
|
|
|
- We will see the following functions: **cv.cornerHarris()**, **cv.cornerSubPix()** |
|
|
|
|
|
|
|
|
|
Theory |
|
|
|
|
------ |
|
|
|
|
|
|
|
|
|
In the last chapter, we saw that corners are regions in the image with large variation in intensity in |
|
|
|
|
all directions. One early attempt to find these corners was made by **Chris Harris & Mike
|
|
|
|
Stephens** in their paper **A Combined Corner and Edge Detector** in 1988, so now it is called |
|
|
|
|
the Harris Corner Detector. They took this simple idea to a mathematical form. It basically finds the
|
|
|
|
difference in intensity for a displacement of \f$(u,v)\f$ in all directions. This is expressed as follows:
|
|
|
|
|
|
|
|
|
\f[E(u,v) = \sum_{x,y} \underbrace{w(x,y)}_\text{window function} \, [\underbrace{I(x+u,y+v)}_\text{shifted intensity}-\underbrace{I(x,y)}_\text{intensity}]^2\f] |
|
|
|
|
|
|
|
|
|
The window function is either a rectangular window or a Gaussian window which gives weights to pixels |
|
|
|
|
underneath. |
|
|
|
|
|
|
|
|
|
We have to maximize this function \f$E(u,v)\f$ for corner detection. That means we have to maximize the
second term. Applying a first-order Taylor expansion, \f$I(x+u,y+v) \approx I(x,y) + u I_x + v I_y\f$, to the
above equation and working through a few mathematical steps (please refer to any standard textbook you like
for the full derivation), we get the final equation:
|
|
|
|
|
|
|
|
|
\f[E(u,v) \approx \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix}\f] |
|
|
|
|
|
|
|
|
where
|
|
|
|
\f[M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x I_x & I_x I_y \\ I_x I_y & I_y I_y \end{bmatrix}\f]
|
|
|
|
|
|
|
|
|
Here, \f$I_x\f$ and \f$I_y\f$ are the image derivatives in the x and y directions, respectively (these can be
easily found using **cv.Sobel()**).
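For instance, a minimal sketch of computing these derivatives with **cv.Sobel()** (the file name and the
aperture size of 3 are illustrative assumptions):
@code{.py}
import numpy as np
import cv2 as cv

# assumed grayscale test image
gray = np.float32(cv.imread('chessboard.png', cv.IMREAD_GRAYSCALE))

# first-order derivatives in the x and y directions
Ix = cv.Sobel(gray, cv.CV_32F, 1, 0, ksize=3)
Iy = cv.Sobel(gray, cv.CV_32F, 0, 1, ksize=3)

# products that appear in the matrix M
Ixx, Ixy, Iyy = Ix*Ix, Ix*Iy, Iy*Iy
@endcode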
|
|
|
|
|
|
|
|
|
Then comes the main part. After this, they created a score, basically an equation, which determines
whether a window can contain a corner or not.
|
|
|
|
|
|
|
|
|
\f[R = det(M) - k(trace(M))^2\f] |
|
|
|
|
|
|
|
|
|
where |
|
|
|
|
- \f$det(M) = \lambda_1 \lambda_2\f$ |
|
|
|
|
- \f$trace(M) = \lambda_1 + \lambda_2\f$ |
|
|
|
|
- \f$\lambda_1\f$ and \f$\lambda_2\f$ are the eigenvalues of M |
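As a quick worked example (taking \f$k = 0.04\f$, a commonly used value, as an illustrative assumption):
with \f$\lambda_1 = \lambda_2 = 100\f$,

\f[R = 100 \cdot 100 - 0.04\,(100 + 100)^2 = 10000 - 1600 = 8400 > 0\f]

whereas with \f$\lambda_1 = 100\f$ and \f$\lambda_2 = 0\f$,

\f[R = 0 - 0.04\,(100 + 0)^2 = -400 < 0\f]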
|
|
|
|
|
|
|
|
|
So the magnitudes of these eigenvalues decide whether a region is a corner, an edge, or flat. |
|
|
|
|
|
|
|
|
|
- When \f$|R|\f$ is small, which happens when \f$\lambda_1\f$ and \f$\lambda_2\f$ are small, the region is
flat.
- When \f$R < 0\f$, which happens when \f$\lambda_1 >> \lambda_2\f$ or vice versa, the region is an edge.
- When \f$R\f$ is large, which happens when \f$\lambda_1\f$ and \f$\lambda_2\f$ are large and
\f$\lambda_1 \sim \lambda_2\f$, the region is a corner.

It can be represented in a nice picture as follows:
|
|
|
|
![image](images/harris_region.jpg) |
|
|
|
|
|
|
|
|
|
So the result of Harris Corner Detection is a grayscale image with these scores. Thresholding for a |
|
|
|
|
suitable score gives you the corners in the image. We will do it with a simple image. |
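Before switching to the built-in function, a rough from-scratch sketch of the score computation described
above may help make the theory concrete (the Gaussian window size, the Sobel aperture, \f$k = 0.04\f$, and the
threshold are all illustrative assumptions; **cv.cornerHarris()** below performs this computation for you):
@code{.py}
import numpy as np
import cv2 as cv

gray = np.float32(cv.imread('chessboard.png', cv.IMREAD_GRAYSCALE))  # assumed test image

# image derivatives
Ix = cv.Sobel(gray, cv.CV_32F, 1, 0, ksize=3)
Iy = cv.Sobel(gray, cv.CV_32F, 0, 1, ksize=3)

# entries of M, weighted by a Gaussian window w(x,y)
Sxx = cv.GaussianBlur(Ix*Ix, (5, 5), 1)
Sxy = cv.GaussianBlur(Ix*Iy, (5, 5), 1)
Syy = cv.GaussianBlur(Iy*Iy, (5, 5), 1)

# R = det(M) - k*(trace(M))^2, computed per pixel
k = 0.04
R = (Sxx*Syy - Sxy*Sxy) - k*(Sxx + Syy)**2

# keep only the strongest scores as corner candidates
corners = R > 0.01*R.max()
@endcode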
|
|
|
|
|
|
|
|
|
Harris Corner Detector in OpenCV |
|
|
|
|
-------------------------------- |
|
|
|
|
|
|
|
|
|
OpenCV has the function **cv.cornerHarris()** for this purpose. Its arguments are: |
|
|
|
|
|
|
|
|
|
- **img** - Input image. It should be grayscale and float32 type. |
|
|
|
|
- **blockSize** - It is the size of the neighbourhood considered for corner detection.
|
|
|
|
- **ksize** - Aperture parameter of the Sobel derivative used. |
|
|
|
|
- **k** - Harris detector free parameter in the equation. |
|
|
|
|
|
|
|
|
|
See the example below: |
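A minimal sketch of such an example, assuming a local test image named 'chessboard.png' and typical
parameter values (blockSize = 2, Sobel aperture 3, k = 0.04):
@code{.py}
import numpy as np
import cv2 as cv

filename = 'chessboard.png'   # assumed test image
img = cv.imread(filename)
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

# the input must be grayscale and float32
gray = np.float32(gray)
dst = cv.cornerHarris(gray, 2, 3, 0.04)

# result is dilated for marking the corners; not important
dst = cv.dilate(dst, None)

# threshold for an optimal value; it may vary depending on the image
img[dst > 0.01*dst.max()] = [0, 0, 255]

cv.imshow('dst', img)
if cv.waitKey(0) & 0xff == 27:
    cv.destroyAllWindows()
@endcode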
|
|
|
Corner with SubPixel Accuracy
-----------------------------
|
|
|
|
|
|
|
|
|
Sometimes, you may need to find the corners with maximum accuracy. OpenCV comes with a function |
|
|
|
|
**cv.cornerSubPix()** which further refines the corners detected with sub-pixel accuracy. Below is |
|
|
|
|
an example. As usual, we need to find the Harris corners first. Then we pass the centroids of these |
|
|
|
|
corners (there may be a bunch of pixels at a corner; we take their centroid) to refine them. Harris
corners are marked in red pixels and refined corners are marked in green pixels. For this function,
we have to define the criteria for when to stop the iteration. We stop it after a specified number of
|
|
|
|
iterations or a certain accuracy is achieved, whichever occurs first. We also need to define the size |
|
|
|
|
of the neighbourhood it searches for corners. |
|
|
|
|
@code{.py} |
|
|
|
|
import numpy as np |
|
|
|
|
import cv2 as cv |
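# assumed test image with clearly visible corners
filename = 'chessboard2.jpg'
img = cv.imread(filename)
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

# find the Harris corners first
gray = np.float32(gray)
dst = cv.cornerHarris(gray, 2, 3, 0.04)
dst = cv.dilate(dst, None)
ret, dst = cv.threshold(dst, 0.01*dst.max(), 255, 0)
dst = np.uint8(dst)

# find the centroids of the connected corner regions
ret, labels, stats, centroids = cv.connectedComponentsWithStats(dst)

# stopping criteria: 100 iterations or 0.001 accuracy, whichever comes first
criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 100, 0.001)
corners = cv.cornerSubPix(gray, np.float32(centroids), (5, 5), (-1, -1), criteria)

# draw them: Harris centroids in red, refined corners in green
res = np.hstack((centroids, corners))
res = np.intp(res)
img[res[:,1], res[:,0]] = [0, 0, 255]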
|
|
|
img[res[:,3],res[:,2]] = [0,255,0]
|
|
|
|
|
|
|
|
|
cv.imwrite('subpixel5.png',img) |
|
|
|
|
@endcode |
|
|
|
|
Below is the result, where some important locations are shown in a zoomed window for visualization:
|
|
|
|
|
|
|
|
|
![image](images/subpixel3.png) |
|
|
|
|
|
|
|
|
|