From 2225b257cfd41a957a2fcfea6f757b09c1d67d11 Mon Sep 17 00:00:00 2001
From: inkredibl
Date: Thu, 25 Apr 2024 11:05:16 +0300
Subject: [PATCH] Merge pull request #25488 from inkredibl:doc-fix-findEssentialMat

Fix documentation for findEssentialMat to reflect how it actually works. #25488

Documentation for findEssentialMat() incorrectly states that the method uses the
same cameraMatrix for both lists of points even though there are two cameraMatrix
and distCoeffs parameters. Checked the code and it does the right thing, i.e. it
uses cameraMatrix1, distCoeffs1 for points1 and cameraMatrix2, distCoeffs2 for
points2. Updated the documentation for the method to clarify what it does. The
code itself is not changed.
---
 modules/calib3d/include/opencv2/calib3d.hpp | 30 +++++++--------------
 1 file changed, 10 insertions(+), 20 deletions(-)

diff --git a/modules/calib3d/include/opencv2/calib3d.hpp b/modules/calib3d/include/opencv2/calib3d.hpp
index 61e3aca810..fd35b7d5f3 100644
--- a/modules/calib3d/include/opencv2/calib3d.hpp
+++ b/modules/calib3d/include/opencv2/calib3d.hpp
@@ -2492,13 +2492,13 @@ CV_EXPORTS_W Mat findFundamentalMat( InputArray points1, InputArray points2,
 @param points1 Array of N (N \>= 5) 2D points from the first image. The point coordinates should
 be floating-point (single or double precision).
-@param points2 Array of the second image points of the same size and format as points1 .
+@param points2 Array of the second image points of the same size and format as points1.
 @param cameraMatrix Camera intrinsic matrix \f$\cameramatrix{A}\f$ .
 Note that this function assumes that points1 and points2 are feature points from cameras with the
-same camera intrinsic matrix. If this assumption does not hold for your use case, use
-#undistortPoints with `P = cv::NoArray()` for both cameras to transform image points
-to normalized image coordinates, which are valid for the identity camera intrinsic matrix. When
-passing these coordinates, pass the identity matrix for this parameter.
+same camera intrinsic matrix. If this assumption does not hold for your use case, use another
+function overload or #undistortPoints with `P = cv::NoArray()` for both cameras to transform image
+points to normalized image coordinates, which are valid for the identity camera intrinsic matrix.
+When passing these coordinates, pass the identity matrix for this parameter.
 @param method Method for computing an essential matrix.
 -   @ref RANSAC for the RANSAC algorithm.
 -   @ref LMEDS for the LMedS algorithm.
@@ -2590,23 +2590,13 @@ Mat findEssentialMat(
 @param points1 Array of N (N \>= 5) 2D points from the first image. The point coordinates should
 be floating-point (single or double precision).
-@param points2 Array of the second image points of the same size and format as points1 .
-@param cameraMatrix1 Camera matrix \f$K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$ .
-Note that this function assumes that points1 and points2 are feature points from cameras with the
-same camera matrix. If this assumption does not hold for your use case, use
-#undistortPoints with `P = cv::NoArray()` for both cameras to transform image points
-to normalized image coordinates, which are valid for the identity camera matrix. When
-passing these coordinates, pass the identity matrix for this parameter.
-@param cameraMatrix2 Camera matrix \f$K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$ .
-Note that this function assumes that points1 and points2 are feature points from cameras with the
-same camera matrix. If this assumption does not hold for your use case, use
-#undistortPoints with `P = cv::NoArray()` for both cameras to transform image points
-to normalized image coordinates, which are valid for the identity camera matrix. When
-passing these coordinates, pass the identity matrix for this parameter.
-@param distCoeffs1 Input vector of distortion coefficients
+@param points2 Array of the second image points of the same size and format as points1.
+@param cameraMatrix1 Camera matrix for the first camera \f$K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$ .
+@param cameraMatrix2 Camera matrix for the second camera \f$K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$ .
+@param distCoeffs1 Input vector of distortion coefficients for the first camera
 \f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$ of
 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.
-@param distCoeffs2 Input vector of distortion coefficients
+@param distCoeffs2 Input vector of distortion coefficients for the second camera
 \f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$ of
 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.
 @param method Method for computing an essential matrix.
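For reference, a minimal usage sketch of the two-camera overload whose documentation is updated above. It assumes the parameter order (points1, points2, cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, method, prob, threshold, mask) of the corresponding declaration in calib3d.hpp; the variable names are placeholders.

    #include <opencv2/calib3d.hpp>
    #include <vector>

    // Each point set is paired with its own intrinsics and distortion model:
    // pts1 with K1/dist1, pts2 with K2/dist2, as the revised documentation states.
    cv::Mat estimateEssential(const std::vector<cv::Point2f>& pts1,
                              const std::vector<cv::Point2f>& pts2,
                              const cv::Mat& K1, const cv::Mat& dist1,
                              const cv::Mat& K2, const cv::Mat& dist2)
    {
        cv::Mat inlierMask;  // marks which correspondences survive RANSAC
        return cv::findEssentialMat(pts1, pts2,
                                    K1, dist1,
                                    K2, dist2,
                                    cv::RANSAC, 0.999, 1.0, inlierMask);
    }

For the single-cameraMatrix overload touched by the first hunk, the alternative described there is to normalize each point set with #undistortPoints (leaving P as cv::noArray()) and then pass an identity matrix as the intrinsic parameter.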