Add Java and Python code for AKAZE local features matching tutorial. Fix incorrect uses of Mat.mul() in Java code.

Make Lowe's ratio test uniform across the code.
pull/11873/head
catree 6 years ago
parent e4b51fa8ad
commit 481af5c469
doc/tutorials/features2d/akaze_matching/akaze_matching.markdown | 224
doc/tutorials/features2d/table_of_content_features2d.markdown | 2
samples/cpp/tutorial_code/features2D/AKAZE_match.cpp | 23
samples/cpp/tutorial_code/features2D/feature_flann_matcher/SURF_FLANN_matching_Demo.cpp | 2
samples/cpp/tutorial_code/features2D/feature_homography/SURF_FLANN_matching_homography_Demo.cpp | 2
samples/java/tutorial_code/ImgTrans/distance_transformation/ImageSegmentationDemo.java | 16
samples/java/tutorial_code/features2D/akaze_matching/AKAZEMatchDemo.java | 163
samples/java/tutorial_code/features2D/feature_flann_matcher/SURFFLANNMatchingDemo.java | 4
samples/java/tutorial_code/features2D/feature_homography/SURFFLANNMatchingHomographyDemo.java | 4
samples/java/tutorial_code/photo/hdr_imaging/HDRImagingDemo.java | 7
samples/python/tutorial_code/features2D/akaze_matching/AKAZE_match.py | 81
samples/python/tutorial_code/features2D/feature_flann_matcher/SURF_FLANN_matching_Demo.py | 2
samples/python/tutorial_code/features2D/feature_homography/SURF_FLANN_matching_homography_Demo.py | 2

@@ -7,8 +7,7 @@ Introduction
In this tutorial we will learn how to use AKAZE @cite ANB13 local features to detect and match keypoints on
two images.
We will find keypoints on a pair of images with a given homography matrix, match them, and count the
number of inliers (i. e. matches that fit in the given homography).
number of inliers (i.e. matches that fit in the given homography).
You can find an expanded version of this example here:
<https://github.com/pablofdezalc/test_kaze_akaze_opencv>
@@ -16,7 +15,7 @@ You can find an expanded version of this example here:
Data
----
We are going to use images 1 and 3 from *Graffity* sequence of Oxford dataset.
We are going to use images 1 and 3 from the *Graffiti* sequence of the [Oxford dataset](http://www.robots.ox.ac.uk/~vgg/data/data-aff.html).
![](images/graf.png)
@@ -27,107 +26,148 @@ Homography is given by a 3 by 3 matrix:
3.4663091e-04 -1.4364524e-05 1.0000000e+00
@endcode
You can find the images (*graf1.png*, *graf3.png*) and homography (*H1to3p.xml*) in
*opencv/samples/cpp*.
*opencv/samples/data/*.
### Source Code
@include cpp/tutorial_code/features2D/AKAZE_match.cpp
@add_toggle_cpp
- **Downloadable code**: Click
[here](https://raw.githubusercontent.com/opencv/opencv/3.4/samples/cpp/tutorial_code/features2D/AKAZE_match.cpp)
- **Code at a glance:**
@include samples/cpp/tutorial_code/features2D/AKAZE_match.cpp
@end_toggle
@add_toggle_java
- **Downloadable code**: Click
[here](https://raw.githubusercontent.com/opencv/opencv/3.4/samples/java/tutorial_code/features2D/akaze_matching/AKAZEMatchDemo.java)
- **Code at a glance:**
@include samples/java/tutorial_code/features2D/akaze_matching/AKAZEMatchDemo.java
@end_toggle
@add_toggle_python
- **Downloadable code**: Click
[here](https://raw.githubusercontent.com/opencv/opencv/3.4/samples/python/tutorial_code/features2D/akaze_matching/AKAZE_match.py)
- **Code at a glance:**
@include samples/python/tutorial_code/features2D/akaze_matching/AKAZE_match.py
@end_toggle
### Explanation
-# **Load images and homography**
@code{.cpp}
Mat img1 = imread("graf1.png", IMREAD_GRAYSCALE);
Mat img2 = imread("graf3.png", IMREAD_GRAYSCALE);
Mat homography;
FileStorage fs("H1to3p.xml", FileStorage::READ);
fs.getFirstTopLevelNode() >> homography;
@endcode
We are loading grayscale images here. Homography is stored in the xml created with FileStorage.
-# **Detect keypoints and compute descriptors using AKAZE**
@code{.cpp}
vector<KeyPoint> kpts1, kpts2;
Mat desc1, desc2;
AKAZE akaze;
akaze(img1, noArray(), kpts1, desc1);
akaze(img2, noArray(), kpts2, desc2);
@endcode
We create AKAZE object and use it's *operator()* functionality. Since we don't need the *mask*
parameter, *noArray()* is used.
-# **Use brute-force matcher to find 2-nn matches**
@code{.cpp}
BFMatcher matcher(NORM_HAMMING);
vector< vector<DMatch> > nn_matches;
matcher.knnMatch(desc1, desc2, nn_matches, 2);
@endcode
We use Hamming distance, because AKAZE uses binary descriptor by default.
-# **Use 2-nn matches to find correct keypoint matches**
@code{.cpp}
for(size_t i = 0; i < nn_matches.size(); i++) {
DMatch first = nn_matches[i][0];
float dist1 = nn_matches[i][0].distance;
float dist2 = nn_matches[i][1].distance;
if(dist1 < nn_match_ratio * dist2) {
matched1.push_back(kpts1[first.queryIdx]);
matched2.push_back(kpts2[first.trainIdx]);
}
}
@endcode
If the closest match is *ratio* closer than the second closest one, then the match is correct.
-# **Check if our matches fit in the homography model**
@code{.cpp}
for(int i = 0; i < matched1.size(); i++) {
Mat col = Mat::ones(3, 1, CV_64F);
col.at<double>(0) = matched1[i].pt.x;
col.at<double>(1) = matched1[i].pt.y;
col = homography * col;
col /= col.at<double>(2);
float dist = sqrt( pow(col.at<double>(0) - matched2[i].pt.x, 2) +
pow(col.at<double>(1) - matched2[i].pt.y, 2));
if(dist < inlier_threshold) {
int new_i = inliers1.size();
inliers1.push_back(matched1[i]);
inliers2.push_back(matched2[i]);
good_matches.push_back(DMatch(new_i, new_i, 0));
}
}
@endcode
If the distance from first keypoint's projection to the second keypoint is less than threshold,
then it it fits in the homography.
We create a new set of matches for the inliers, because it is required by the drawing function.
-# **Output results**
@code{.cpp}
Mat res;
drawMatches(img1, inliers1, img2, inliers2, good_matches, res);
imwrite("res.png", res);
...
@endcode
Here we save the resulting image and print some statistics.
### Results
Found matches
-------------
- **Load images and homography**
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/features2D/AKAZE_match.cpp load
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/features2D/akaze_matching/AKAZEMatchDemo.java load
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/features2D/akaze_matching/AKAZE_match.py load
@end_toggle
We are loading grayscale images here. The homography is stored in an XML file created with FileStorage.
- **Detect keypoints and compute descriptors using AKAZE**
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/features2D/AKAZE_match.cpp AKAZE
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/features2D/akaze_matching/AKAZEMatchDemo.java AKAZE
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/features2D/akaze_matching/AKAZE_match.py AKAZE
@end_toggle
We create an AKAZE object, then detect and compute the keypoints and descriptors. Since we don't need the *mask*
parameter, *noArray()* is used.
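To check concretely what AKAZE returns, here is a minimal standalone Python sketch (the image path is an assumption; adjust it to your local copy of *graf1.png*):
@code{.py}
import cv2 as cv

# Assumed local path to the tutorial image
img1 = cv.imread('graf1.png', cv.IMREAD_GRAYSCALE)

akaze = cv.AKAZE_create()
kpts1, desc1 = akaze.detectAndCompute(img1, None)  # None = no mask

print(len(kpts1))                # number of detected keypoints
print(desc1.shape, desc1.dtype)  # (N, 61) uint8: binary MLDB descriptors
@endcode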
- **Use brute-force matcher to find 2-nn matches**
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/features2D/AKAZE_match.cpp 2-nn matching
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/features2D/akaze_matching/AKAZEMatchDemo.java 2-nn matching
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/features2D/akaze_matching/AKAZE_match.py 2-nn matching
@end_toggle
We use the Hamming distance, because AKAZE uses a binary descriptor by default.
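As a quick illustration of why the Hamming distance fits binary descriptors, here is a toy Python sketch with made-up 2-byte descriptors (real AKAZE descriptors are 61 bytes long):
@code{.py}
import numpy as np
import cv2 as cv

# Made-up 2-byte binary descriptors, for illustration only
d1 = np.array([[0b10110010, 0b00001111]], dtype=np.uint8)
d2 = np.array([[0b10110000, 0b11111111]], dtype=np.uint8)

# The Hamming distance counts the bits that differ between the two bit strings
print(cv.norm(d1, d2, cv.NORM_HAMMING))  # 5.0: one bit differs in byte 1, four in byte 2
@endcode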
- **Use 2-nn matches and ratio criterion to find correct keypoint matches**
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/features2D/AKAZE_match.cpp ratio test filtering
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/features2D/akaze_matching/AKAZEMatchDemo.java ratio test filtering
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/features2D/akaze_matching/AKAZE_match.py ratio test filtering
@end_toggle
If the closest match distance is significantly lower than the second closest one, then the match is kept as correct (it is not ambiguous).
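A toy numeric example of the criterion (the distances are made up):
@code{.py}
nn_match_ratio = 0.8  # nearest neighbor matching ratio, as in the samples

# (best, second-best) distances for two hypothetical keypoints
for dist1, dist2 in [(20.0, 80.0), (60.0, 70.0)]:
    print(dist1, dist2, 'kept' if dist1 < nn_match_ratio * dist2 else 'rejected')
# 20 < 0.8 * 80 = 64 -> kept; 60 >= 0.8 * 70 = 56 -> rejected (ambiguous)
@endcode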
- **Check if our matches fit in the homography model**
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/features2D/AKAZE_match.cpp homography check
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/features2D/akaze_matching/AKAZEMatchDemo.java homography check
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/features2D/akaze_matching/AKAZE_match.py homography check
@end_toggle
If the distance from the first keypoint's projection to the second keypoint is less than the threshold,
then it fits the homography model.
We create a new set of matches for the inliers, because it is required by the drawing function.
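The check itself is one matrix-vector product per match. Here is a minimal NumPy sketch with a hypothetical homography and keypoint pair (the real values come from *H1to3p.xml*):
@code{.py}
import numpy as np

# Hypothetical homography and keypoints, for illustration only
H = np.array([[0.9,  -0.1,  20.0],
              [0.05,  1.1, -15.0],
              [1e-4,  0.0,   1.0]])
pt1 = np.array([100.0, 200.0, 1.0])  # keypoint in image 1, homogeneous coordinates
pt2 = np.array([89.5, 208.0])        # matched keypoint in image 2

proj = np.dot(H, pt1)
proj /= proj[2]                      # back to pixel coordinates
dist = np.hypot(proj[0] - pt2[0], proj[1] - pt2[1])
print(dist < 2.5)                    # True: reprojection error is below the threshold
@endcode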
- **Output results**
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/features2D/AKAZE_match.cpp draw final matches
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/features2D/akaze_matching/AKAZEMatchDemo.java draw final matches
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/features2D/akaze_matching/AKAZE_match.py draw final matches
@end_toggle
Here we save the resulting image and print some statistics.
Results
-------
### Found matches
![](images/res.png)
A-KAZE Matching Results
-----------------------
Depending on your OpenCV version, you should get results consistent with:
@code{.none}
Keypoints 1: 2943
Keypoints 2: 3511
Matches: 447
Inliers: 308
Inlier Ratio: 0.689038}
Inlier Ratio: 0.689038
@endcode

@@ -98,6 +98,8 @@ OpenCV.
- @subpage tutorial_akaze_matching
*Languages:* C++, Java, Python
*Compatibility:* \> OpenCV 3.0
*Author:* Fedor Morozov

@@ -6,11 +6,12 @@
using namespace std;
using namespace cv;
const float inlier_threshold = 2.5f; // Distance threshold to identify inliers
const float inlier_threshold = 2.5f; // Distance threshold to identify inliers with homography check
const float nn_match_ratio = 0.8f; // Nearest neighbor matching ratio
int main(int argc, char* argv[])
{
//! [load]
CommandLineParser parser(argc, argv,
"{@img1 | ../data/graf1.png | input image 1}"
"{@img2 | ../data/graf3.png | input image 2}"
@@ -21,20 +22,25 @@ int main(int argc, char* argv[])
Mat homography;
FileStorage fs(parser.get<String>("@homography"), FileStorage::READ);
fs.getFirstTopLevelNode() >> homography;
//! [load]
//! [AKAZE]
vector<KeyPoint> kpts1, kpts2;
Mat desc1, desc2;
Ptr<AKAZE> akaze = AKAZE::create();
akaze->detectAndCompute(img1, noArray(), kpts1, desc1);
akaze->detectAndCompute(img2, noArray(), kpts2, desc2);
//! [AKAZE]
//! [2-nn matching]
BFMatcher matcher(NORM_HAMMING);
vector< vector<DMatch> > nn_matches;
matcher.knnMatch(desc1, desc2, nn_matches, 2);
//! [2-nn matching]
vector<KeyPoint> matched1, matched2, inliers1, inliers2;
vector<DMatch> good_matches;
//! [ratio test filtering]
vector<KeyPoint> matched1, matched2;
for(size_t i = 0; i < nn_matches.size(); i++) {
DMatch first = nn_matches[i][0];
float dist1 = nn_matches[i][0].distance;
@@ -45,8 +51,12 @@ int main(int argc, char* argv[])
matched2.push_back(kpts2[first.trainIdx]);
}
}
//! [ratio test filtering]
for(unsigned i = 0; i < matched1.size(); i++) {
//! [homography check]
vector<DMatch> good_matches;
vector<KeyPoint> inliers1, inliers2;
for(size_t i = 0; i < matched1.size(); i++) {
Mat col = Mat::ones(3, 1, CV_64F);
col.at<double>(0) = matched1[i].pt.x;
col.at<double>(1) = matched1[i].pt.y;
@@ -63,12 +73,14 @@ int main(int argc, char* argv[])
good_matches.push_back(DMatch(new_i, new_i, 0));
}
}
//! [homography check]
//! [draw final matches]
Mat res;
drawMatches(img1, inliers1, img2, inliers2, good_matches, res);
imwrite("akaze_result.png", res);
double inlier_ratio = inliers1.size() * 1.0 / matched1.size();
double inlier_ratio = inliers1.size() / (double) matched1.size();
cout << "A-KAZE Matching Results" << endl;
cout << "*******************************" << endl;
cout << "# Keypoints 1: \t" << kpts1.size() << endl;
@@ -80,6 +92,7 @@ int main(int argc, char* argv[])
imshow("result", res);
waitKey();
//! [draw final matches]
return 0;
}

@@ -46,7 +46,7 @@ int main( int argc, char* argv[] )
std::vector<DMatch> good_matches;
for (size_t i = 0; i < knn_matches.size(); i++)
{
if (knn_matches[i].size() > 1 && knn_matches[i][0].distance / knn_matches[i][1].distance <= ratio_thresh)
if (knn_matches[i][0].distance < ratio_thresh * knn_matches[i][1].distance)
{
good_matches.push_back(knn_matches[i][0]);
}

@@ -48,7 +48,7 @@ int main( int argc, char* argv[] )
std::vector<DMatch> good_matches;
for (size_t i = 0; i < knn_matches.size(); i++)
{
if (knn_matches[i].size() > 1 && knn_matches[i][0].distance / knn_matches[i][1].distance <= ratio_thresh)
if (knn_matches[i][0].distance < ratio_thresh * knn_matches[i][1].distance)
{
good_matches.push_back(knn_matches[i][0]);
}

@@ -103,8 +103,9 @@ class ImageSegmentation {
// Normalize the distance image for range = {0.0, 1.0}
// so we can visualize and threshold it
Core.normalize(dist, dist, 0, 1., Core.NORM_MINMAX);
Mat distDisplayScaled = dist.mul(dist, 255);
Core.normalize(dist, dist, 0.0, 1.0, Core.NORM_MINMAX);
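// Note: the old dist.mul(dist, 255) computed the per-element product dist.*dist*255;
// Core.multiply() with a Scalar is the correct way to scale every element by 255.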
Mat distDisplayScaled = new Mat();
Core.multiply(dist, new Scalar(255), distDisplayScaled);
Mat distDisplay = new Mat();
distDisplayScaled.convertTo(distDisplay, CvType.CV_8U);
HighGui.imshow("Distance Transform Image", distDisplay);
@@ -113,14 +114,14 @@ class ImageSegmentation {
//! [peaks]
// Threshold to obtain the peaks
// This will be the markers for the foreground objects
Imgproc.threshold(dist, dist, .4, 1., Imgproc.THRESH_BINARY);
Imgproc.threshold(dist, dist, 0.4, 1.0, Imgproc.THRESH_BINARY);
// Dilate a bit the dist image
Mat kernel1 = Mat.ones(3, 3, CvType.CV_8U);
Imgproc.dilate(dist, dist, kernel1);
Mat distDisplay2 = new Mat();
dist.convertTo(distDisplay2, CvType.CV_8U);
distDisplay2 = distDisplay2.mul(distDisplay2, 255);
Core.multiply(distDisplay2, new Scalar(255), distDisplay2);
HighGui.imshow("Peaks", distDisplay2);
//! [peaks]
@@ -144,11 +145,14 @@ class ImageSegmentation {
}
// Draw the background marker
Imgproc.circle(markers, new Point(5, 5), 3, new Scalar(255, 255, 255), -1);
Mat markersScaled = markers.mul(markers, 10000);
Mat markersScaled = new Mat();
markers.convertTo(markersScaled, CvType.CV_32F);
Core.normalize(markersScaled, markersScaled, 0.0, 255.0, Core.NORM_MINMAX);
Imgproc.circle(markersScaled, new Point(5, 5), 3, new Scalar(255, 255, 255), -1);
Mat markersDisplay = new Mat();
markersScaled.convertTo(markersDisplay, CvType.CV_8U);
HighGui.imshow("Markers", markersDisplay);
Imgproc.circle(markers, new Point(5, 5), 3, new Scalar(255, 255, 255), -1);
//! [seeds]
//! [watershed]

@@ -0,0 +1,163 @@
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.DMatch;
import org.opencv.core.KeyPoint;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.core.Scalar;
import org.opencv.features2d.AKAZE;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.Features2d;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.w3c.dom.Document;
import org.xml.sax.SAXException;
class AKAZEMatch {
public void run(String[] args) {
//! [load]
String filename1 = args.length > 2 ? args[0] : "../data/graf1.png";
String filename2 = args.length > 2 ? args[1] : "../data/graf3.png";
String filename3 = args.length > 2 ? args[2] : "../data/H1to3p.xml";
Mat img1 = Imgcodecs.imread(filename1, Imgcodecs.IMREAD_GRAYSCALE);
Mat img2 = Imgcodecs.imread(filename2, Imgcodecs.IMREAD_GRAYSCALE);
if (img1.empty() || img2.empty()) {
System.err.println("Cannot read images!");
System.exit(0);
}
File file = new File(filename3);
DocumentBuilderFactory documentBuilderFactory = DocumentBuilderFactory.newInstance();
DocumentBuilder documentBuilder;
Document document;
Mat homography = new Mat(3, 3, CvType.CV_64F);
double[] homographyData = new double[(int) (homography.total()*homography.channels())];
try {
documentBuilder = documentBuilderFactory.newDocumentBuilder();
document = documentBuilder.parse(file);
String homographyStr = document.getElementsByTagName("data").item(0).getTextContent();
String[] tokens = homographyStr.split("\\s+");
int idx = 0;
for (String s : tokens) {
if (!s.isEmpty()) {
homographyData[idx] = Double.parseDouble(s);
idx++;
}
}
} catch (ParserConfigurationException e) {
e.printStackTrace();
System.exit(0);
} catch (SAXException e) {
e.printStackTrace();
System.exit(0);
} catch (IOException e) {
e.printStackTrace();
System.exit(0);
}
homography.put(0, 0, homographyData);
//! [load]
//! [AKAZE]
AKAZE akaze = AKAZE.create();
MatOfKeyPoint kpts1 = new MatOfKeyPoint(), kpts2 = new MatOfKeyPoint();
Mat desc1 = new Mat(), desc2 = new Mat();
akaze.detectAndCompute(img1, new Mat(), kpts1, desc1);
akaze.detectAndCompute(img2, new Mat(), kpts2, desc2);
//! [AKAZE]
//! [2-nn matching]
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
List<MatOfDMatch> knnMatches = new ArrayList<>();
matcher.knnMatch(desc1, desc2, knnMatches, 2);
//! [2-nn matching]
//! [ratio test filtering]
float ratioThreshold = 0.8f; // Nearest neighbor matching ratio
List<KeyPoint> listOfMatched1 = new ArrayList<>();
List<KeyPoint> listOfMatched2 = new ArrayList<>();
List<KeyPoint> listOfKeypoints1 = kpts1.toList();
List<KeyPoint> listOfKeypoints2 = kpts2.toList();
for (int i = 0; i < knnMatches.size(); i++) {
DMatch[] matches = knnMatches.get(i).toArray();
float dist1 = matches[0].distance;
float dist2 = matches[1].distance;
if (dist1 < ratioThreshold * dist2) {
listOfMatched1.add(listOfKeypoints1.get(matches[0].queryIdx));
listOfMatched2.add(listOfKeypoints2.get(matches[0].trainIdx));
}
}
//! [ratio test filtering]
//! [homography check]
double inlierThreshold = 2.5; // Distance threshold to identify inliers with homography check
List<KeyPoint> listOfInliers1 = new ArrayList<>();
List<KeyPoint> listOfInliers2 = new ArrayList<>();
List<DMatch> listOfGoodMatches = new ArrayList<>();
for (int i = 0; i < listOfMatched1.size(); i++) {
Mat col = new Mat(3, 1, CvType.CV_64F);
double[] colData = new double[(int) (col.total() * col.channels())];
colData[0] = listOfMatched1.get(i).pt.x;
colData[1] = listOfMatched1.get(i).pt.y;
colData[2] = 1.0;
col.put(0, 0, colData);
Mat colRes = new Mat();
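// colRes = homography * col: project the homogeneous point with a general matrix multiply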
Core.gemm(homography, col, 1.0, new Mat(), 0.0, colRes);
colRes.get(0, 0, colData);
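// divide by the third (homogeneous) coordinate to recover pixel coordinates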
Core.multiply(colRes, new Scalar(1.0 / colData[2]), col);
col.get(0, 0, colData);
double dist = Math.sqrt(Math.pow(colData[0] - listOfMatched2.get(i).pt.x, 2) +
Math.pow(colData[1] - listOfMatched2.get(i).pt.y, 2));
if (dist < inlierThreshold) {
listOfGoodMatches.add(new DMatch(listOfInliers1.size(), listOfInliers2.size(), 0));
listOfInliers1.add(listOfMatched1.get(i));
listOfInliers2.add(listOfMatched2.get(i));
}
}
//! [homography check]
//! [draw final matches]
Mat res = new Mat();
MatOfKeyPoint inliers1 = new MatOfKeyPoint(listOfInliers1.toArray(new KeyPoint[listOfInliers1.size()]));
MatOfKeyPoint inliers2 = new MatOfKeyPoint(listOfInliers2.toArray(new KeyPoint[listOfInliers2.size()]));
MatOfDMatch goodMatches = new MatOfDMatch(listOfGoodMatches.toArray(new DMatch[listOfGoodMatches.size()]));
Features2d.drawMatches(img1, inliers1, img2, inliers2, goodMatches, res);
Imgcodecs.imwrite("akaze_result.png", res);
double inlierRatio = listOfInliers1.size() / (double) listOfMatched1.size();
System.out.println("A-KAZE Matching Results");
System.out.println("*******************************");
System.out.println("# Keypoints 1: \t" + listOfKeypoints1.size());
System.out.println("# Keypoints 2: \t" + listOfKeypoints2.size());
System.out.println("# Matches: \t" + listOfMatched1.size());
System.out.println("# Inliers: \t" + listOfInliers1.size());
System.out.println("# Inliers Ratio: \t" + inlierRatio);
HighGui.imshow("result", res);
HighGui.waitKey();
//! [draw final matches]
System.exit(0);
}
}
public class AKAZEMatchDemo {
public static void main(String[] args) {
// Load the native OpenCV library
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
new AKAZEMatch().run(args);
}
}

@@ -42,12 +42,12 @@ class SURFFLANNMatching {
matcher.knnMatch(descriptors1, descriptors2, knnMatches, 2);
//-- Filter matches using the Lowe's ratio test
float ratio_thresh = 0.7f;
float ratioThresh = 0.7f;
List<DMatch> listOfGoodMatches = new ArrayList<>();
for (int i = 0; i < knnMatches.size(); i++) {
if (knnMatches.get(i).rows() > 1) {
DMatch[] matches = knnMatches.get(i).toArray();
if (matches[0].distance / matches[1].distance <= ratio_thresh) {
if (matches[0].distance < ratioThresh * matches[1].distance) {
listOfGoodMatches.add(matches[0]);
}
}

@@ -48,12 +48,12 @@ class SURFFLANNMatchingHomography {
matcher.knnMatch(descriptorsObject, descriptorsScene, knnMatches, 2);
//-- Filter matches using the Lowe's ratio test
float ratio_thresh = 0.75f;
float ratioThresh = 0.75f;
List<DMatch> listOfGoodMatches = new ArrayList<>();
for (int i = 0; i < knnMatches.size(); i++) {
if (knnMatches.get(i).rows() > 1) {
DMatch[] matches = knnMatches.get(i).toArray();
if (matches[0].distance / matches[1].distance <= ratio_thresh) {
if (matches[0].distance < ratioThresh * matches[1].distance) {
listOfGoodMatches.add(matches[0]);
}
}

@@ -7,6 +7,7 @@ import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.photo.CalibrateDebevec;
import org.opencv.photo.MergeDebevec;
@@ -70,7 +71,7 @@ class HDRImaging {
//! [Tonemap HDR image]
Mat ldr = new Mat();
TonemapDurand tonemap = Photo.createTonemapDurand();
TonemapDurand tonemap = Photo.createTonemapDurand(2.2f, 4.0f, 1.0f, 2.0f, 2.0f);
tonemap.process(hdr, ldr);
//! [Tonemap HDR image]
@@ -81,8 +82,8 @@ class HDRImaging {
//! [Perform exposure fusion]
//! [Write results]
fusion = fusion.mul(fusion, 255);
ldr = ldr.mul(ldr, 255);
Core.multiply(fusion, new Scalar(255,255,255), fusion);
Core.multiply(ldr, new Scalar(255,255,255), ldr);
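// Mat.mul() would have squared the image values element-wise; Core.multiply() with a Scalar scales each channel by 255 instead.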
Imgcodecs.imwrite("fusion.png", fusion);
Imgcodecs.imwrite("ldr.png", ldr);
Imgcodecs.imwrite("hdr.hdr", hdr);

@@ -0,0 +1,81 @@
from __future__ import print_function
import cv2 as cv
import numpy as np
import argparse
from math import sqrt
## [load]
parser = argparse.ArgumentParser(description='Code for AKAZE local features matching tutorial.')
parser.add_argument('--input1', help='Path to input image 1.', default='../data/graf1.png')
parser.add_argument('--input2', help='Path to input image 2.', default='../data/graf3.png')
parser.add_argument('--homography', help='Path to the homography matrix.', default='../data/H1to3p.xml')
args = parser.parse_args()
img1 = cv.imread(args.input1, cv.IMREAD_GRAYSCALE)
img2 = cv.imread(args.input2, cv.IMREAD_GRAYSCALE)
if img1 is None or img2 is None:
print('Could not open or find the images!')
exit(0)
fs = cv.FileStorage(args.homography, cv.FILE_STORAGE_READ)
homography = fs.getFirstTopLevelNode().mat()
## [load]
## [AKAZE]
akaze = cv.AKAZE_create()
kpts1, desc1 = akaze.detectAndCompute(img1, None)
kpts2, desc2 = akaze.detectAndCompute(img2, None)
## [AKAZE]
## [2-nn matching]
matcher = cv.DescriptorMatcher_create(cv.DescriptorMatcher_BRUTEFORCE_HAMMING)
nn_matches = matcher.knnMatch(desc1, desc2, 2)
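# each entry of nn_matches holds the 2 nearest neighbors of one query descriptor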
## [2-nn matching]
## [ratio test filtering]
matched1 = []
matched2 = []
nn_match_ratio = 0.8 # Nearest neighbor matching ratio
for m, n in nn_matches:
if m.distance < nn_match_ratio * n.distance:
matched1.append(kpts1[m.queryIdx])
matched2.append(kpts2[m.trainIdx])
## [ratio test filtering]
## [homography check]
inliers1 = []
inliers2 = []
good_matches = []
inlier_threshold = 2.5 # Distance threshold to identify inliers with homography check
for i, m in enumerate(matched1):
col = np.ones((3,1), dtype=np.float64)
col[0:2,0] = m.pt
col = np.dot(homography, col)
col /= col[2,0]
dist = sqrt(pow(col[0,0] - matched2[i].pt[0], 2) +\
pow(col[1,0] - matched2[i].pt[1], 2))
if dist < inlier_threshold:
good_matches.append(cv.DMatch(len(inliers1), len(inliers2), 0))
inliers1.append(matched1[i])
inliers2.append(matched2[i])
## [homography check]
## [draw final matches]
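# preallocate the side-by-side output canvas expected by drawMatches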
res = np.empty((max(img1.shape[0], img2.shape[0]), img1.shape[1]+img2.shape[1], 3), dtype=np.uint8)
cv.drawMatches(img1, inliers1, img2, inliers2, good_matches, res)
cv.imwrite("akaze_result.png", res)
inlier_ratio = len(inliers1) / float(len(matched1))
print('A-KAZE Matching Results')
print('*******************************')
print('# Keypoints 1: \t', len(kpts1))
print('# Keypoints 2: \t', len(kpts2))
print('# Matches: \t', len(matched1))
print('# Inliers: \t', len(inliers1))
print('# Inliers Ratio: \t', inlier_ratio)
cv.imshow('result', res)
cv.waitKey()
## [draw final matches]

@@ -29,7 +29,7 @@ knn_matches = matcher.knnMatch(descriptors1, descriptors2, 2)
ratio_thresh = 0.7
good_matches = []
for m,n in knn_matches:
if m.distance / n.distance <= ratio_thresh:
if m.distance < ratio_thresh * n.distance:
good_matches.append(m)
#-- Draw matches

@@ -29,7 +29,7 @@ knn_matches = matcher.knnMatch(descriptors_obj, descriptors_scene, 2)
ratio_thresh = 0.75
good_matches = []
for m,n in knn_matches:
if m.distance / n.distance <= ratio_thresh:
if m.distance < ratio_thresh * n.distance:
good_matches.append(m)
#-- Draw matches
