diff --git a/doc/tutorials/calib3d/camera_calibration/camera_calibration.markdown b/doc/tutorials/calib3d/camera_calibration/camera_calibration.markdown index a7a7dd727e..85515d99bb 100644 --- a/doc/tutorials/calib3d/camera_calibration/camera_calibration.markdown +++ b/doc/tutorials/calib3d/camera_calibration/camera_calibration.markdown @@ -77,8 +77,8 @@ Source code You may also find the source code in the `samples/cpp/tutorial_code/calib3d/camera_calibration/` folder of the OpenCV source library or [download it from here -](https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/calib3d/camera_calibration/camera_calibration.cpp). The program has a -single argument: the name of its configuration file. If none is given then it will try to open the +](https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/calib3d/camera_calibration/camera_calibration.cpp). For the usage of the program, run it with the `-h` argument. The program has an +essential argument: the name of its configuration file. If none is given then it will try to open the one named "default.xml". [Here's a sample configuration file ](https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/calib3d/camera_calibration/in_VID5.xml) in XML format. In the configuration file you may choose to use camera as an input, a video file or an image list. If you @@ -131,6 +131,8 @@ Explanation Similar images result in similar equations, and similar equations at the calibration step will form an ill-posed problem, so the calibration will fail. For square images the positions of the corners are only approximate. We may improve this by calling the @ref cv::cornerSubPix function. + (`winSize` is used to control the side length of the search window. Its default value is 11. + `winSize` may be changed by the command line parameter `--winSize=`.) It will produce better calibration result. After this we add a valid inputs result to the *imagePoints* vector to collect all of the equations into a single container. Finally, for visualization feedback purposes we will draw the found points on the input image using @ref @@ -170,7 +172,7 @@ of the calibration variables we'll create these variables here and pass on both calibration and saving function. Again, I'll not show the saving part as that has little in common with the calibration. Explore the source file in order to find out how and what: @snippet samples/cpp/tutorial_code/calib3d/camera_calibration/camera_calibration.cpp run_and_save -We do the calibration with the help of the @ref cv::calibrateCamera function. It has the following +We do the calibration with the help of the @ref cv::calibrateCameraRO function. It has the following parameters: - The object points. This is a vector of *Point3f* vector that for each input image describes how @@ -184,13 +186,33 @@ parameters: @code{.cpp} vector<vector<Point3f> > objectPoints(1); calcBoardCornerPositions(s.boardSize, s.squareSize, objectPoints[0], s.calibrationPattern); + objectPoints[0][s.boardSize.width - 1].x = objectPoints[0][0].x + grid_width; + newObjPoints = objectPoints[0]; + objectPoints.resize(imagePoints.size(),objectPoints[0]); @endcode + @note If your calibration board is an inaccurate, unmeasured, roughly planar target (checkerboard + patterns printed on paper with off-the-shelf printers are the most convenient calibration targets, and + most of them are not accurate enough), a method from @cite strobl2011iccv can be utilized to + dramatically improve the accuracy of the estimated camera intrinsic parameters.
This new + calibration method will be called if the command line parameter `-d=` is provided. In the + above code snippet, `grid_width` is actually the value set by `-d=`. It's the measured + distance between top-left (0, 0, 0) and top-right (s.squareSize*(s.boardSize.width-1), 0, 0) + corners of the pattern grid points. It should be measured precisely with rulers or vernier calipers. + After calibration, newObjPoints will be updated with refined 3D coordinates of object points. - The image points. This is a vector of *Point2f* vector which for each input image contains coordinates of the important points (corners for chessboard and centers of the circles for the circle pattern). We have already collected this from @ref cv::findChessboardCorners or @ref cv::findCirclesGrid function. We just need to pass it on. - The size of the image acquired from the camera, video file or the images. +- The index of the object point to be fixed. We set it to -1 to request the standard calibration method. + If the new object-releasing method is to be used, set it to the index of the top-right corner point + of the calibration board grid. See cv::calibrateCameraRO for a detailed explanation. + @code{.cpp} + int iFixedPoint = -1; + if (release_object) + iFixedPoint = s.boardSize.width - 1; + @endcode - The camera matrix. If we used the fixed aspect ratio option we need to set \f$f_x\f$: @snippet samples/cpp/tutorial_code/calib3d/camera_calibration/camera_calibration.cpp fixed_aspect - The distortion coefficient matrix. Initialize with zero. @@ -202,11 +224,15 @@ parameters: coordinate space). The 7-th and 8-th parameters are the output vector of matrices containing in the i-th position the rotation and translation vector for the i-th object point to the i-th image point. +- The updated output vector of calibration pattern points. This parameter is ignored with the standard + calibration method. - The final argument is the flag. You need to specify here options like fix the aspect ratio for - the focal length, assume zero tangential distortion or to fix the principal point. + the focal length, assume zero tangential distortion or to fix the principal point. Here we use + CALIB_USE_LU for faster calibration. @code{.cpp} -double rms = calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix, - distCoeffs, rvecs, tvecs, s.flag|CV_CALIB_FIX_K4|CV_CALIB_FIX_K5); +rms = calibrateCameraRO(objectPoints, imagePoints, imageSize, iFixedPoint, + cameraMatrix, distCoeffs, rvecs, tvecs, newObjPoints, + s.flag | CALIB_USE_LU); @endcode - The function returns the average re-projection error. This number gives a good estimation of precision of the found parameters. This should be as close to zero as possible. Given the diff --git a/doc/tutorials/calib3d/table_of_content_calib3d.markdown b/doc/tutorials/calib3d/table_of_content_calib3d.markdown index 20ca778d5f..3093ee8b5a 100644 --- a/doc/tutorials/calib3d/table_of_content_calib3d.markdown +++ b/doc/tutorials/calib3d/table_of_content_calib3d.markdown @@ -13,7 +13,7 @@ Although we get most of our images in a 2D format they do come from a 3D world.
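Returning to the calibration step of the tutorial above: the sketch below is an illustration only (not part of the patch) of how the refined grid returned in `newObjPoints` might be inspected and reused when recomputing the re-projection error, mirroring what `samples/cpp/calibration.cpp` does further down in this patch; `s`, `objectPoints`, `imagePoints` and `release_object` are assumed to be in scope as in the tutorial sample.
@code{.cpp}
if (release_object) {
    // the refined 3D board corners replace the ideal, printer-dependent grid
    cout << "refined top-left corner:  " << newObjPoints.front() << endl;
    cout << "refined top-right corner: " << newObjPoints[s.boardSize.width - 1] << endl;
    cout << "refined last corner:      " << newObjPoints.back() << endl;
}
// reuse the refined grid for every view before computing re-projection errors
objectPoints.clear();
objectPoints.resize(imagePoints.size(), newObjPoints);
@endcode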
- @subpage tutorial_camera_calibration - *Compatibility:* \> OpenCV 2.0 + *Compatibility:* \> OpenCV 4.0 *Author:* Bernát Gábor diff --git a/modules/calib3d/doc/calib3d.bib b/modules/calib3d/doc/calib3d.bib index 57989b34fd..47dcf93788 100644 --- a/modules/calib3d/doc/calib3d.bib +++ b/modules/calib3d/doc/calib3d.bib @@ -39,3 +39,16 @@ year={2013}, publisher={IEEE} } + +@inproceedings{strobl2011iccv, + title={More accurate pinhole camera calibration with imperfect planar target}, + author={Strobl, Klaus H. and Hirzinger, Gerd}, + booktitle={2011 IEEE International Conference on Computer Vision (ICCV)}, + pages={1068-1075}, + month={Nov}, + year={2011}, + address={Barcelona, Spain}, + publisher={IEEE}, + url={https://elib.dlr.de/71888/1/strobl_2011iccv.pdf}, + doi={10.1109/ICCVW.2011.6130369} +} diff --git a/modules/calib3d/include/opencv2/calib3d.hpp b/modules/calib3d/include/opencv2/calib3d.hpp index 9bba16a85e..35fb6124ba 100644 --- a/modules/calib3d/include/opencv2/calib3d.hpp +++ b/modules/calib3d/include/opencv2/calib3d.hpp @@ -170,7 +170,7 @@ pattern (every view is described by several 3D-2D point correspondences). *rectification* transformation that makes the camera optical axes parallel. @note - - A calibration sample for 3 cameras in horizontal position can be found at + - A calibration sample for 3 cameras in horizontal position can be found at opencv_source_code/samples/cpp/3calibration.cpp - A calibration sample based on a sequence of images can be found at opencv_source_code/samples/cpp/calibration.cpp @@ -1077,7 +1077,7 @@ The algorithm performs the following steps: patternSize=cvSize(cols,rows) in findChessboardCorners . @sa - findChessboardCorners, solvePnP, initCameraMatrix2D, stereoCalibrate, undistort + calibrateCameraRO, findChessboardCorners, solvePnP, initCameraMatrix2D, stereoCalibrate, undistort */ CV_EXPORTS_AS(calibrateCameraExtended) double calibrateCamera( InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints, Size imageSize, @@ -1089,18 +1089,89 @@ CV_EXPORTS_AS(calibrateCameraExtended) double calibrateCamera( InputArrayOfArray int flags = 0, TermCriteria criteria = TermCriteria( TermCriteria::COUNT + TermCriteria::EPS, 30, DBL_EPSILON) ); -/** @overload double calibrateCamera( InputArrayOfArrays objectPoints, +/** @overload */ +CV_EXPORTS_W double calibrateCamera( InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints, Size imageSize, InputOutputArray cameraMatrix, InputOutputArray distCoeffs, OutputArrayOfArrays rvecs, OutputArrayOfArrays tvecs, - OutputArray stdDeviations, OutputArray perViewErrors, int flags = 0, TermCriteria criteria = TermCriteria( - TermCriteria::COUNT + TermCriteria::EPS, 30, DBL_EPSILON) ) + TermCriteria::COUNT + TermCriteria::EPS, 30, DBL_EPSILON) ); + +/** @brief Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern. + +This function is an extension of calibrateCamera() with the method of releasing object which was +proposed in @cite strobl2011iccv. In many common cases with inaccurate, unmeasured, roughly planar +targets (calibration plates), this method can dramatically improve the precision of the estimated +camera parameters. Both the object-releasing method and standard method are supported by this +function. Use the parameter **iFixedPoint** for method selection. In the internal implementation, +calibrateCamera() is a wrapper for this function. + +@param objectPoints Vector of vectors of calibration pattern points in the calibration pattern +coordinate space. 
See calibrateCamera() for details. If the method of releasing object is to be used, +the identical calibration board must be used in each view and it must be fully visible, and all +objectPoints[i] must be the same and all points should be roughly close to a plane. **The calibration +target has to be rigid, or at least static if the camera (rather than the calibration target) is +shifted for grabbing images.** +@param imagePoints Vector of vectors of the projections of calibration pattern points. See +calibrateCamera() for details. +@param imageSize Size of the image used only to initialize the intrinsic camera matrix. +@param iFixedPoint The index of the 3D object point in objectPoints[0] to be fixed. It also acts as +a switch for calibration method selection. If the object-releasing method is to be used, pass in a value in the range of +[1, objectPoints[0].size()-2]; otherwise a value out of this range will +select the standard calibration method. Usually the top-right corner point of the calibration +board grid is recommended to be fixed when the object-releasing method is utilized. According to +\cite strobl2011iccv, two other points are also fixed. In this implementation, objectPoints[0].front +and objectPoints[0].back.z are used. With the object-releasing method, accurate rvecs, tvecs and +newObjPoints are only possible if the coordinates of these three fixed points are accurate enough. +@param cameraMatrix Output 3x3 floating-point camera matrix. See calibrateCamera() for details. +@param distCoeffs Output vector of distortion coefficients. See calibrateCamera() for details. +@param rvecs Output vector of rotation vectors estimated for each pattern view. See calibrateCamera() +for details. +@param tvecs Output vector of translation vectors estimated for each pattern view. +@param newObjPoints The updated output vector of calibration pattern points. The coordinates might +be scaled based on three fixed points. The returned coordinates are accurate only if the above +mentioned three fixed points are accurate. If not needed, noArray() can be passed in. This parameter +is ignored with the standard calibration method. +@param stdDeviationsIntrinsics Output vector of standard deviations estimated for intrinsic parameters. +See calibrateCamera() for details. +@param stdDeviationsExtrinsics Output vector of standard deviations estimated for extrinsic parameters. +See calibrateCamera() for details. +@param stdDeviationsObjPoints Output vector of standard deviations estimated for refined coordinates +of calibration pattern points. It has the same size and order as the objectPoints[0] vector. This +parameter is ignored with the standard calibration method. + @param perViewErrors Output vector of the RMS re-projection error estimated for each pattern view. +@param flags Different flags that may be zero or a combination of some predefined values. See +calibrateCamera() for details. If the method of releasing object is used, the calibration time may +be much longer. CALIB_USE_QR or CALIB_USE_LU could be used for faster calibration, at the cost of potentially +less precise and less stable results in some rare cases. +@param criteria Termination criteria for the iterative optimization algorithm. + +@return the overall RMS re-projection error. + +The function estimates the intrinsic camera parameters and extrinsic parameters for each of the +views. The algorithm is based on @cite Zhang2000, @cite BouguetMCT and @cite strobl2011iccv. See +calibrateCamera() for other detailed explanations.
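As an illustration only (this sketch is not part of the patch), a typical object-releasing calibration with this function might look as follows; `boardSize`, `squareSize`, `measuredGridWidth` (the measured top-left to top-right distance), `imageSize` and the detected `imagePoints` are hypothetical names assumed to be prepared by the caller.
@code{.cpp}
// nominal, row-major board grid; the first row is the top row
std::vector<std::vector<cv::Point3f> > objectPoints(1);
for (int y = 0; y < boardSize.height; y++)
    for (int x = 0; x < boardSize.width; x++)
        objectPoints[0].push_back(cv::Point3f(x * squareSize, y * squareSize, 0));
// overwrite the nominal x of the top-right corner with the precisely measured span
objectPoints[0][boardSize.width - 1].x = objectPoints[0][0].x + measuredGridWidth;
objectPoints.resize(imagePoints.size(), objectPoints[0]);

int iFixedPoint = boardSize.width - 1;   // fix the top-right corner -> object-releasing method
cv::Mat cameraMatrix, distCoeffs;
std::vector<cv::Mat> rvecs, tvecs;
std::vector<cv::Point3f> newObjPoints;   // receives the refined board geometry
double rms = cv::calibrateCameraRO(objectPoints, imagePoints, imageSize, iFixedPoint,
                                   cameraMatrix, distCoeffs, rvecs, tvecs, newObjPoints);
@endcode
Passing an iFixedPoint outside of [1, objectPoints[0].size()-2] (for example -1) makes the same call behave like the standard calibrateCamera().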
+@sa + calibrateCamera, findChessboardCorners, solvePnP, initCameraMatrix2D, stereoCalibrate, undistort */ -CV_EXPORTS_W double calibrateCamera( InputArrayOfArrays objectPoints, - InputArrayOfArrays imagePoints, Size imageSize, +CV_EXPORTS_AS(calibrateCameraROExtended) double calibrateCameraRO( InputArrayOfArrays objectPoints, + InputArrayOfArrays imagePoints, Size imageSize, int iFixedPoint, + InputOutputArray cameraMatrix, InputOutputArray distCoeffs, + OutputArrayOfArrays rvecs, OutputArrayOfArrays tvecs, + OutputArray newObjPoints, + OutputArray stdDeviationsIntrinsics, + OutputArray stdDeviationsExtrinsics, + OutputArray stdDeviationsObjPoints, + OutputArray perViewErrors, + int flags = 0, TermCriteria criteria = TermCriteria( + TermCriteria::COUNT + TermCriteria::EPS, 30, DBL_EPSILON) ); + +/** @overload */ +CV_EXPORTS_W double calibrateCameraRO( InputArrayOfArrays objectPoints, + InputArrayOfArrays imagePoints, Size imageSize, int iFixedPoint, InputOutputArray cameraMatrix, InputOutputArray distCoeffs, OutputArrayOfArrays rvecs, OutputArrayOfArrays tvecs, + OutputArray newObjPoints, int flags = 0, TermCriteria criteria = TermCriteria( TermCriteria::COUNT + TermCriteria::EPS, 30, DBL_EPSILON) ); diff --git a/modules/calib3d/include/opencv2/calib3d/calib3d_c.h b/modules/calib3d/include/opencv2/calib3d/calib3d_c.h index 680d7d64fd..f934f707b6 100644 --- a/modules/calib3d/include/opencv2/calib3d/calib3d_c.h +++ b/modules/calib3d/include/opencv2/calib3d/calib3d_c.h @@ -263,6 +263,22 @@ CVAPI(double) cvCalibrateCamera2( const CvMat* object_points, CvTermCriteria term_crit CV_DEFAULT(cvTermCriteria( CV_TERMCRIT_ITER+CV_TERMCRIT_EPS,30,DBL_EPSILON)) ); +/* Finds intrinsic and extrinsic camera parameters + from a few views of known calibration pattern */ +CVAPI(double) cvCalibrateCamera4( const CvMat* object_points, + const CvMat* image_points, + const CvMat* point_counts, + CvSize image_size, + int iFixedPoint, + CvMat* camera_matrix, + CvMat* distortion_coeffs, + CvMat* rotation_vectors CV_DEFAULT(NULL), + CvMat* translation_vectors CV_DEFAULT(NULL), + CvMat* newObjPoints CV_DEFAULT(NULL), + int flags CV_DEFAULT(0), + CvTermCriteria term_crit CV_DEFAULT(cvTermCriteria( + CV_TERMCRIT_ITER+CV_TERMCRIT_EPS,30,DBL_EPSILON)) ); + /* Computes various useful characteristics of the camera from the data computed by cvCalibrateCamera2 */ CVAPI(void) cvCalibrationMatrixValues( const CvMat *camera_matrix, diff --git a/modules/calib3d/src/calibration.cpp b/modules/calib3d/src/calibration.cpp index bca3dc8301..f83adf8f81 100644 --- a/modules/calib3d/src/calibration.cpp +++ b/modules/calib3d/src/calibration.cpp @@ -517,18 +517,20 @@ CV_IMPL int cvRodrigues2( const CvMat* src, CvMat* dst, CvMat* jacobian ) static const char* cvDistCoeffErr = "Distortion coefficients must be 1x4, 4x1, 1x5, 5x1, 1x8, 8x1, 1x12, 12x1, 1x14 or 14x1 floating-point vector"; -CV_IMPL void cvProjectPoints2( const CvMat* objectPoints, +static void cvProjectPoints2Internal( const CvMat* objectPoints, const CvMat* r_vec, const CvMat* t_vec, const CvMat* A, const CvMat* distCoeffs, - CvMat* imagePoints, CvMat* dpdr, - CvMat* dpdt, CvMat* dpdf, - CvMat* dpdc, CvMat* dpdk, - double aspectRatio ) + CvMat* imagePoints, CvMat* dpdr CV_DEFAULT(NULL), + CvMat* dpdt CV_DEFAULT(NULL), CvMat* dpdf CV_DEFAULT(NULL), + CvMat* dpdc CV_DEFAULT(NULL), CvMat* dpdk CV_DEFAULT(NULL), + CvMat* dpdo CV_DEFAULT(NULL), + double aspectRatio CV_DEFAULT(0) ) { Ptr matM, _m; Ptr _dpdr, _dpdt, _dpdc, _dpdf, _dpdk; + Ptr _dpdo; int i, j, count; int 
calc_derivatives; @@ -541,7 +543,9 @@ CV_IMPL void cvProjectPoints2( const CvMat* objectPoints, CvMat _r, _t, _a = cvMat( 3, 3, CV_64F, a ), _k; CvMat matR = cvMat( 3, 3, CV_64F, R ), _dRdr = cvMat( 3, 9, CV_64F, dRdr ); double *dpdr_p = 0, *dpdt_p = 0, *dpdk_p = 0, *dpdf_p = 0, *dpdc_p = 0; + double* dpdo_p = 0; int dpdr_step = 0, dpdt_step = 0, dpdk_step = 0, dpdf_step = 0, dpdc_step = 0; + int dpdo_step = 0; bool fixedAspectRatio = aspectRatio > FLT_EPSILON; if( !CV_IS_MAT(objectPoints) || !CV_IS_MAT(r_vec) || @@ -745,7 +749,25 @@ CV_IMPL void cvProjectPoints2( const CvMat* objectPoints, dpdk_step = _dpdk->step/sizeof(dpdk_p[0]); } - calc_derivatives = dpdr || dpdt || dpdf || dpdc || dpdk; + if( dpdo ) + { + if( !CV_IS_MAT( dpdo ) || ( CV_MAT_TYPE( dpdo->type ) != CV_32FC1 + && CV_MAT_TYPE( dpdo->type ) != CV_64FC1 ) + || dpdo->rows != count * 2 || dpdo->cols != count * 3 ) + CV_Error( CV_StsBadArg, "dp/do must be 2Nx3N floating-point matrix" ); + + if( CV_MAT_TYPE( dpdo->type ) == CV_64FC1 ) + { + _dpdo.reset( cvCloneMat( dpdo ) ); + } + else + _dpdo.reset( cvCreateMat( 2 * count, 3 * count, CV_64FC1 ) ); + cvZero(_dpdo); + dpdo_p = _dpdo->data.db; + dpdo_step = _dpdo->step / sizeof( dpdo_p[0] ); + } + + calc_derivatives = dpdr || dpdt || dpdf || dpdc || dpdk || dpdo; for( i = 0; i < count; i++ ) { @@ -760,6 +782,7 @@ CV_IMPL void cvProjectPoints2( const CvMat* objectPoints, Matx22d dMatTilt; Vec2d dXdYd; + double z0 = z; z = z ? 1./z : 1; x *= z; y *= z; @@ -943,6 +966,41 @@ CV_IMPL void cvProjectPoints2( const CvMat* objectPoints, } dpdr_p += dpdr_step*2; } + + if( dpdo_p ) + { + double dxdo[] = { z * ( R[0] - x * z * z0 * R[6] ), + z * ( R[1] - x * z * z0 * R[7] ), + z * ( R[2] - x * z * z0 * R[8] ) }; + double dydo[] = { z * ( R[3] - y * z * z0 * R[6] ), + z * ( R[4] - y * z * z0 * R[7] ), + z * ( R[5] - y * z * z0 * R[8] ) }; + for( j = 0; j < 3; j++ ) + { + double dr2do = 2 * x * dxdo[j] + 2 * y * dydo[j]; + double dr4do = 2 * r2 * dr2do; + double dr6do = 3 * r4 * dr2do; + double da1do = 2 * y * dxdo[j] + 2 * x * dydo[j]; + double da2do = dr2do + 4 * x * dxdo[j]; + double da3do = dr2do + 4 * y * dydo[j]; + double dcdist_do + = k[0] * dr2do + k[1] * dr4do + k[4] * dr6do; + double dicdist2_do = -icdist2 * icdist2 + * ( k[5] * dr2do + k[6] * dr4do + k[7] * dr6do ); + double dxd0_do = cdist * icdist2 * dxdo[j] + + x * icdist2 * dcdist_do + x * cdist * dicdist2_do + + k[2] * da1do + k[3] * da2do + k[8] * dr2do + + k[9] * dr4do; + double dyd0_do = cdist * icdist2 * dydo[j] + + y * icdist2 * dcdist_do + y * cdist * dicdist2_do + + k[2] * da3do + k[3] * da1do + k[10] * dr2do + + k[11] * dr4do; + dXdYd = dMatTilt * Vec2d( dxd0_do, dyd0_do ); + dpdo_p[i * 3 + j] = fx * dXdYd( 0 ); + dpdo_p[dpdo_step + i * 3 + j] = fy * dXdYd( 1 ); + } + dpdo_p += dpdo_step * 2; + } } } @@ -963,8 +1021,24 @@ CV_IMPL void cvProjectPoints2( const CvMat* objectPoints, if( _dpdk != dpdk ) cvConvert( _dpdk, dpdk ); + + if( _dpdo != dpdo ) + cvConvert( _dpdo, dpdo ); } +CV_IMPL void cvProjectPoints2( const CvMat* objectPoints, + const CvMat* r_vec, + const CvMat* t_vec, + const CvMat* A, + const CvMat* distCoeffs, + CvMat* imagePoints, CvMat* dpdr, + CvMat* dpdt, CvMat* dpdf, + CvMat* dpdc, CvMat* dpdk, + double aspectRatio ) +{ + cvProjectPoints2Internal( objectPoints, r_vec, t_vec, A, distCoeffs, imagePoints, dpdr, dpdt, + dpdf, dpdc, dpdk, NULL, aspectRatio ); +} CV_IMPL void cvFindExtrinsicCameraParams2( const CvMat* objectPoints, const CvMat* imagePoints, const CvMat* A, @@ -1297,8 +1371,8 @@ static void 
subMatrix(const cv::Mat& src, cv::Mat& dst, const std::vector static double cvCalibrateCamera2Internal( const CvMat* objectPoints, const CvMat* imagePoints, const CvMat* npoints, - CvSize imageSize, CvMat* cameraMatrix, CvMat* distCoeffs, - CvMat* rvecs, CvMat* tvecs, CvMat* stdDevs, + CvSize imageSize, int iFixedPoint, CvMat* cameraMatrix, CvMat* distCoeffs, + CvMat* rvecs, CvMat* tvecs, CvMat* newObjPoints, CvMat* stdDevs, CvMat* perViewErrors, int flags, CvTermCriteria termCrit ) { const int NINTRINSIC = CV_CALIB_NINTRINSIC; @@ -1361,7 +1435,9 @@ static double cvCalibrateCamera2Internal( const CvMat* objectPoints, "1xn or nx1 array or 1-channel nx3 array, where n is the number of views" ); } - if( stdDevs ) + bool releaseObject = iFixedPoint > 0 && iFixedPoint < npoints->data.i[0] - 1; + + if( stdDevs && !releaseObject ) { cn = CV_MAT_CN(stdDevs->type); if( !CV_IS_MAT(stdDevs) || @@ -1402,6 +1478,29 @@ static double cvCalibrateCamera2Internal( const CvMat* objectPoints, total += ni; } + if( newObjPoints ) + { + cn = CV_MAT_CN(newObjPoints->type); + if( !CV_IS_MAT(newObjPoints) || + (CV_MAT_DEPTH(newObjPoints->type) != CV_32F && CV_MAT_DEPTH(newObjPoints->type) != CV_64F) || + ((newObjPoints->rows != maxPoints || newObjPoints->cols*cn != 3) && + (newObjPoints->rows != 1 || newObjPoints->cols != maxPoints || cn != 3)) ) + CV_Error( CV_StsBadArg, "the output array of refined object points must be 3-channel " + "1xn or nx1 array or 1-channel nx3 array, where n is the number of object points per view" ); + } + + if( stdDevs && releaseObject ) + { + cn = CV_MAT_CN(stdDevs->type); + if( !CV_IS_MAT(stdDevs) || + (CV_MAT_DEPTH(stdDevs->type) != CV_32F && CV_MAT_DEPTH(stdDevs->type) != CV_64F) || + ((stdDevs->rows != (nimages*6 + NINTRINSIC + maxPoints*3) || stdDevs->cols*cn != 1) && + (stdDevs->rows != 1 || stdDevs->cols != (nimages*6 + NINTRINSIC + maxPoints*3) || cn != 1)) ) + CV_Error( CV_StsBadArg, "the output array of standard deviations vectors must be 1-channel " + "1x(n*6 + NINTRINSIC + m*3) or (n*6 + NINTRINSIC + m*3)x1 array, where n is the number of views," + " NINTRINSIC = " STR_(CV_CALIB_NINTRINSIC) ", m is the number of object points per view"); + } + Mat matM( 1, total, CV_64FC3 ); Mat _m( 1, total, CV_64FC2 ); Mat allErrors(1, total, CV_64FC2); @@ -1419,8 +1518,11 @@ static double cvCalibrateCamera2Internal( const CvMat* objectPoints, } nparams = NINTRINSIC + nimages*6; + if( releaseObject ) + nparams += maxPoints * 3; Mat _Ji( maxPoints*2, NINTRINSIC, CV_64FC1, Scalar(0)); Mat _Je( maxPoints*2, 6, CV_64FC1 ); + Mat _Jo( maxPoints*2, maxPoints*3, CV_64FC1, Scalar(0) ); Mat _err( maxPoints*2, 1, CV_64FC1 ); _k = cvMat( distCoeffs->rows, distCoeffs->cols, CV_MAKETYPE(CV_64F,CV_MAT_CN(distCoeffs->type)), k); @@ -1540,6 +1642,21 @@ static double cvCalibrateCamera2Internal( const CvMat* objectPoints, mask[16] = 0; mask[17] = 0; } + + if(releaseObject) + { + // copy object points + std::copy( matM.ptr(), matM.ptr( 0, maxPoints - 1 ) + 3, + param + NINTRINSIC + nimages * 6 ); + // fix points + mask[NINTRINSIC + nimages * 6] = 0; + mask[NINTRINSIC + nimages * 6 + 1] = 0; + mask[NINTRINSIC + nimages * 6 + 2] = 0; + mask[NINTRINSIC + nimages * 6 + iFixedPoint * 3] = 0; + mask[NINTRINSIC + nimages * 6 + iFixedPoint * 3 + 1] = 0; + mask[NINTRINSIC + nimages * 6 + iFixedPoint * 3 + 2] = 0; + mask[nparams - 1] = 0; + } } // 2. 
initialize extrinsic parameters @@ -1592,22 +1709,31 @@ static double cvCalibrateCamera2Internal( const CvMat* objectPoints, cvGetRows( solver.param, &_ti, NINTRINSIC + i*6 + 3, NINTRINSIC + i*6 + 6 ); CvMat _Mi = cvMat(matM.colRange(pos, pos + ni)); + if( releaseObject ) + { + cvGetRows( solver.param, &_Mi, NINTRINSIC + nimages * 6, + NINTRINSIC + nimages * 6 + ni * 3 ); + cvReshape( &_Mi, &_Mi, 3, 1 ); + } CvMat _mi = cvMat(_m.colRange(pos, pos + ni)); CvMat _me = cvMat(allErrors.colRange(pos, pos + ni)); _Je.resize(ni*2); _Ji.resize(ni*2); _err.resize(ni*2); + _Jo.resize(ni*2); CvMat _dpdr = cvMat(_Je.colRange(0, 3)); CvMat _dpdt = cvMat(_Je.colRange(3, 6)); CvMat _dpdf = cvMat(_Ji.colRange(0, 2)); CvMat _dpdc = cvMat(_Ji.colRange(2, 4)); CvMat _dpdk = cvMat(_Ji.colRange(4, NINTRINSIC)); + CvMat _dpdo = cvMat(_Jo.colRange(0, ni * 3)); CvMat _mp = cvMat(_err.reshape(2, 1)); if( calcJ ) { - cvProjectPoints2( &_Mi, &_ri, &_ti, &matA, &_k, &_mp, &_dpdr, &_dpdt, + cvProjectPoints2Internal( &_Mi, &_ri, &_ti, &matA, &_k, &_mp, &_dpdr, &_dpdt, (flags & CALIB_FIX_FOCAL_LENGTH) ? 0 : &_dpdf, (flags & CALIB_FIX_PRINCIPAL_POINT) ? 0 : &_dpdc, &_dpdk, + &_dpdo, (flags & CALIB_FIX_ASPECT_RATIO) ? aspectRatio : 0); } else @@ -1625,9 +1751,21 @@ static double cvCalibrateCamera2Internal( const CvMat* objectPoints, JtJ(Rect(0, 0, NINTRINSIC, NINTRINSIC)) += _Ji.t() * _Ji; JtJ(Rect(NINTRINSIC + i * 6, NINTRINSIC + i * 6, 6, 6)) = _Je.t() * _Je; JtJ(Rect(NINTRINSIC + i * 6, 0, 6, NINTRINSIC)) = _Ji.t() * _Je; + if( releaseObject ) + { + JtJ(Rect(NINTRINSIC + nimages * 6, 0, maxPoints * 3, NINTRINSIC)) += _Ji.t() * _Jo; + JtJ(Rect(NINTRINSIC + nimages * 6, NINTRINSIC + i * 6, maxPoints * 3, 6)) + += _Je.t() * _Jo; + JtJ(Rect(NINTRINSIC + nimages * 6, NINTRINSIC + nimages * 6, maxPoints * 3, maxPoints * 3)) + += _Jo.t() * _Jo; + } JtErr.rowRange(0, NINTRINSIC) += _Ji.t() * _err; JtErr.rowRange(NINTRINSIC + i * 6, NINTRINSIC + (i + 1) * 6) = _Je.t() * _err; + if( releaseObject ) + { + JtErr.rowRange(NINTRINSIC + nimages * 6, nparams) += _Jo.t() * _err; + } } double viewErr = norm(_err, NORM_L2SQR); @@ -1673,6 +1811,14 @@ static double cvCalibrateCamera2Internal( const CvMat* objectPoints, // 4. 
store the results cvConvert( &matA, cameraMatrix ); cvConvert( &_k, distCoeffs ); + if( newObjPoints && releaseObject ) + { + CvMat _Mi; + cvGetRows( solver.param, &_Mi, NINTRINSIC + nimages * 6, + NINTRINSIC + nimages * 6 + maxPoints * 3 ); + cvReshape( &_Mi, &_Mi, 3, 1 ); + cvConvert( &_Mi, newObjPoints ); + } for( i = 0, pos = 0; i < nimages; i++ ) { @@ -1717,8 +1863,60 @@ CV_IMPL double cvCalibrateCamera2( const CvMat* objectPoints, CvSize imageSize, CvMat* cameraMatrix, CvMat* distCoeffs, CvMat* rvecs, CvMat* tvecs, int flags, CvTermCriteria termCrit ) { - return cvCalibrateCamera2Internal(objectPoints, imagePoints, npoints, imageSize, cameraMatrix, - distCoeffs, rvecs, tvecs, NULL, NULL, flags, termCrit); + return cvCalibrateCamera2Internal(objectPoints, imagePoints, npoints, imageSize, -1, cameraMatrix, + distCoeffs, rvecs, tvecs, NULL, NULL, NULL, flags, termCrit); +} + +CV_IMPL double cvCalibrateCamera4( const CvMat* objectPoints, + const CvMat* imagePoints, const CvMat* npoints, + CvSize imageSize, int iFixedPoint, CvMat* cameraMatrix, CvMat* distCoeffs, + CvMat* rvecs, CvMat* tvecs, CvMat* newObjPoints, int flags, CvTermCriteria termCrit ) +{ + if( !CV_IS_MAT(npoints) ) + CV_Error( CV_StsBadArg, "npoints is not a valid matrix" ); + if( CV_MAT_TYPE(npoints->type) != CV_32SC1 || + (npoints->rows != 1 && npoints->cols != 1) ) + CV_Error( CV_StsUnsupportedFormat, + "the array of point counters must be 1-dimensional integer vector" ); + + bool releaseObject = iFixedPoint > 0 && iFixedPoint < npoints->data.i[0] - 1; + int nimages = npoints->rows * npoints->cols; + int npstep = npoints->rows == 1 ? 1 : npoints->step / CV_ELEM_SIZE(npoints->type); + int i, ni; + // check object points. If not qualified, report errors. + if( releaseObject ) + { + if( !CV_IS_MAT(objectPoints) ) + CV_Error( CV_StsBadArg, "objectPoints is not a valid matrix" ); + Mat matM; + if(CV_MAT_CN(objectPoints->type) == 3) { + matM = cvarrToMat(objectPoints); + } else { + convertPointsHomogeneous(cvarrToMat(objectPoints), matM); + } + + matM = matM.reshape(3, 1); + ni = npoints->data.i[0]; + for( i = 1; i < nimages; i++ ) + { + if( npoints->data.i[i * npstep] != ni ) + { + CV_Error( CV_StsBadArg, "All objectPoints[i].size() should be equal when " + "object-releasing method is requested." ); + } + Mat ocmp = matM.colRange(ni * i, ni * i + ni) != matM.colRange(0, ni); + ocmp = ocmp.reshape(1); + if( countNonZero(ocmp) ) + { + CV_Error( CV_StsBadArg, "All objectPoints[i] should be identical when object-releasing" + " method is requested." ); + } + } + } + + return cvCalibrateCamera2Internal(objectPoints, imagePoints, npoints, imageSize, iFixedPoint, + cameraMatrix, distCoeffs, rvecs, tvecs, newObjPoints, NULL, + NULL, flags, termCrit); } void cvCalibrationMatrixValues( const CvMat *calibMatr, CvSize imgSize, @@ -3132,6 +3330,7 @@ namespace cv static void collectCalibrationData( InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints1, InputArrayOfArrays imagePoints2, + int& iFixedPoint, Mat& objPtMat, Mat& imgPtMat1, Mat* imgPtMat2, Mat& npoints ) { @@ -3190,6 +3389,39 @@ static void collectCalibrationData( InputArrayOfArrays objectPoints, } } } + + ni = npoints.at(0); + bool releaseObject = iFixedPoint > 0 && iFixedPoint < ni - 1; + // check object points. If not qualified, report errors. + if( releaseObject ) + { + for( i = 1; i < nimages; i++ ) + { + if( npoints.at(i) != ni ) + { + CV_Error( CV_StsBadArg, "All objectPoints[i].size() should be equal when " + "object-releasing method is requested." 
); + } + Mat ocmp = objPtMat.colRange(ni * i, ni * i + ni) != objPtMat.colRange(0, ni); + ocmp = ocmp.reshape(1); + if( countNonZero(ocmp) ) + { + CV_Error( CV_StsBadArg, "All objectPoints[i] should be identical when object-releasing" + " method is requested." ); + } + } + } +} + +static void collectCalibrationData( InputArrayOfArrays objectPoints, + InputArrayOfArrays imagePoints1, + InputArrayOfArrays imagePoints2, + Mat& objPtMat, Mat& imgPtMat1, Mat* imgPtMat2, + Mat& npoints ) +{ + int iFixedPoint = -1; + collectCalibrationData( objectPoints, imagePoints1, imagePoints2, iFixedPoint, objPtMat, imgPtMat1, + imgPtMat2, npoints ); } static Mat prepareCameraMatrix(Mat& cameraMatrix0, int rtype) @@ -3385,6 +3617,39 @@ double cv::calibrateCamera(InputArrayOfArrays _objectPoints, { CV_INSTRUMENT_REGION(); + return calibrateCameraRO(_objectPoints, _imagePoints, imageSize, -1, _cameraMatrix, _distCoeffs, + _rvecs, _tvecs, noArray(), stdDeviationsIntrinsics, stdDeviationsExtrinsics, + noArray(), _perViewErrors, flags, criteria); +} + +double cv::calibrateCameraRO(InputArrayOfArrays _objectPoints, + InputArrayOfArrays _imagePoints, + Size imageSize, int iFixedPoint, InputOutputArray _cameraMatrix, + InputOutputArray _distCoeffs, + OutputArrayOfArrays _rvecs, OutputArrayOfArrays _tvecs, + OutputArray newObjPoints, + int flags, TermCriteria criteria) +{ + CV_INSTRUMENT_REGION(); + + return calibrateCameraRO(_objectPoints, _imagePoints, imageSize, iFixedPoint, _cameraMatrix, + _distCoeffs, _rvecs, _tvecs, newObjPoints, noArray(), noArray(), + noArray(), noArray(), flags, criteria); +} + +double cv::calibrateCameraRO(InputArrayOfArrays _objectPoints, + InputArrayOfArrays _imagePoints, + Size imageSize, int iFixedPoint, InputOutputArray _cameraMatrix, + InputOutputArray _distCoeffs, + OutputArrayOfArrays _rvecs, OutputArrayOfArrays _tvecs, + OutputArray newObjPoints, + OutputArray stdDeviationsIntrinsics, + OutputArray stdDeviationsExtrinsics, + OutputArray stdDeviationsObjPoints, + OutputArray _perViewErrors, int flags, TermCriteria criteria ) +{ + CV_INSTRUMENT_REGION(); + int rtype = CV_64F; Mat cameraMatrix = _cameraMatrix.getMat(); cameraMatrix = prepareCameraMatrix(cameraMatrix, rtype); @@ -3403,6 +3668,8 @@ double cv::calibrateCamera(InputArrayOfArrays _objectPoints, bool rvecs_needed = _rvecs.needed(), tvecs_needed = _tvecs.needed(), stddev_needed = stdDeviationsIntrinsics.needed(), errors_needed = _perViewErrors.needed(), stddev_ext_needed = stdDeviationsExtrinsics.needed(); + bool newobj_needed = newObjPoints.needed(); + bool stddev_obj_needed = stdDeviationsObjPoints.needed(); bool rvecs_mat_vec = _rvecs.isMatVector(); bool tvecs_mat_vec = _tvecs.isMatVector(); @@ -3427,10 +3694,26 @@ double cv::calibrateCamera(InputArrayOfArrays _objectPoints, tvecM = _tvecs.getMat(); } - bool stddev_any_needed = stddev_needed || stddev_ext_needed; + collectCalibrationData( _objectPoints, _imagePoints, noArray(), iFixedPoint, + objPt, imgPt, 0, npoints ); + bool releaseObject = iFixedPoint > 0 && iFixedPoint < npoints.at(0) - 1; + + newobj_needed = newobj_needed && releaseObject; + int np = npoints.at( 0 ); + Mat newObjPt; + if( newobj_needed ) { + newObjPoints.create( 1, np, CV_32FC3 ); + newObjPt = newObjPoints.getMat(); + } + + stddev_obj_needed = stddev_obj_needed && releaseObject; + bool stddev_any_needed = stddev_needed || stddev_ext_needed || stddev_obj_needed; if( stddev_any_needed ) { - stdDeviationsM.create(nimages*6 + CV_CALIB_NINTRINSIC, 1, CV_64F); + if( releaseObject ) + 
stdDeviationsM.create(nimages*6 + CV_CALIB_NINTRINSIC + np * 3, 1, CV_64F); + else + stdDeviationsM.create(nimages*6 + CV_CALIB_NINTRINSIC, 1, CV_64F); } if( errors_needed ) @@ -3439,19 +3722,23 @@ double cv::calibrateCamera(InputArrayOfArrays _objectPoints, errorsM = _perViewErrors.getMat(); } - collectCalibrationData( _objectPoints, _imagePoints, noArray(), - objPt, imgPt, 0, npoints ); CvMat c_objPt = cvMat(objPt), c_imgPt = cvMat(imgPt), c_npoints = cvMat(npoints); CvMat c_cameraMatrix = cvMat(cameraMatrix), c_distCoeffs = cvMat(distCoeffs); CvMat c_rvecM = cvMat(rvecM), c_tvecM = cvMat(tvecM), c_stdDev = cvMat(stdDeviationsM), c_errors = cvMat(errorsM); + CvMat c_newObjPt = cvMat( newObjPt ); double reprojErr = cvCalibrateCamera2Internal(&c_objPt, &c_imgPt, &c_npoints, cvSize(imageSize), + iFixedPoint, &c_cameraMatrix, &c_distCoeffs, rvecs_needed ? &c_rvecM : NULL, tvecs_needed ? &c_tvecM : NULL, + newobj_needed ? &c_newObjPt : NULL, stddev_any_needed ? &c_stdDev : NULL, errors_needed ? &c_errors : NULL, flags, cvTermCriteria(criteria)); + if( newobj_needed ) + newObjPt.copyTo(newObjPoints); + if( stddev_needed ) { stdDeviationsIntrinsics.create(CV_CALIB_NINTRINSIC, 1, CV_64F); @@ -3469,6 +3756,15 @@ double cv::calibrateCamera(InputArrayOfArrays _objectPoints, nimages*6*sizeof(double)); } + if( stddev_obj_needed ) + { + stdDeviationsObjPoints.create( np * 3, 1, CV_64F ); + Mat stdDeviationsObjPointsMat = stdDeviationsObjPoints.getMat(); + std::memcpy( stdDeviationsObjPointsMat.ptr(), stdDeviationsM.ptr() + + ( CV_CALIB_NINTRINSIC + nimages * 6 ) * sizeof( double ), + np * 3 * sizeof( double ) ); + } + // overly complicated and inefficient rvec/ tvec handling to support vector for(int i = 0; i < nimages; i++ ) { diff --git a/modules/calib3d/test/test_cameracalibration.cpp b/modules/calib3d/test/test_cameracalibration.cpp index f20edfea27..cb326a2f55 100644 --- a/modules/calib3d/test/test_cameracalibration.cpp +++ b/modules/calib3d/test/test_cameracalibration.cpp @@ -255,8 +255,9 @@ protected: double eps, const char* paramName); virtual void calibrate( int imageCount, int* pointCounts, CvSize imageSize, CvPoint2D64f* imagePoints, CvPoint3D64f* objectPoints, - double* distortionCoeffs, double* cameraMatrix, double* translationVectors, - double* rotationMatrices, double *stdDevs, double* perViewErrors, int flags ) = 0; + int iFixedPoint, double* distortionCoeffs, double* cameraMatrix, double* translationVectors, + double* rotationMatrices, double* newObjPoints, double *stdDevs, double* perViewErrors, + int flags ) = 0; virtual void project( int pointCount, CvPoint3D64f* objectPoints, double* rotationMatrix, double* translationVector, double* cameraMatrix, double* distortion, CvPoint2D64f* imagePoints ) = 0; @@ -300,11 +301,13 @@ void CV_CameraCalibrationTest::run( int start_from ) double* transVects; double* rotMatrs; + double* newObjPoints; double* stdDevs; double* perViewErrors; double* goodTransVects; double* goodRotMatrs; + double* goodObjPoints; double* goodPerViewErrors; double* goodStdDevs; @@ -403,6 +406,15 @@ void CV_CameraCalibrationTest::run( int start_from ) goto _exit_; } + /* Read calibration flags */ + values_read = fscanf(file,"%d\n",&calibFlags); + CV_Assert(values_read == 1); + + /* Read index of the fixed point */ + int iFixedPoint; + values_read = fscanf(file,"%d\n",&iFixedPoint); + CV_Assert(values_read == 1); + /* Need to allocate memory */ imagePoints = (CvPoint2D64f*)cvAlloc( numPoints * numImages * sizeof(CvPoint2D64f)); @@ -425,13 +437,17 @@ void 
CV_CameraCalibrationTest::run( int start_from ) /* Allocate memory for translate vectors and rotmatrixs*/ transVects = (double*)cvAlloc(3 * 1 * numImages * sizeof(double)); rotMatrs = (double*)cvAlloc(3 * 3 * numImages * sizeof(double)); - stdDevs = (double*)cvAlloc((CV_CALIB_NINTRINSIC + 6*numImages) * sizeof(double)); + newObjPoints = (double*)cvAlloc(3 * numbers[0] * sizeof(double)); + stdDevs = (double*)cvAlloc((CV_CALIB_NINTRINSIC + 6*numImages + 3*numbers[0]) + * sizeof(double)); perViewErrors = (double*)cvAlloc(numImages * sizeof(double)); goodTransVects = (double*)cvAlloc(3 * 1 * numImages * sizeof(double)); goodRotMatrs = (double*)cvAlloc(3 * 3 * numImages * sizeof(double)); + goodObjPoints = (double*)cvAlloc(3 * numbers[0] * sizeof(double)); goodPerViewErrors = (double*)cvAlloc(numImages * sizeof(double)); - goodStdDevs = (double*)cvAlloc((CV_CALIB_NINTRINSIC + 6*numImages) * sizeof(double)); + goodStdDevs = (double*)cvAlloc((CV_CALIB_NINTRINSIC + 6*numImages + 3*numbers[0]) + * sizeof(double)); /* Read object points */ i = 0;/* shift for current point */ @@ -506,22 +522,36 @@ void CV_CameraCalibrationTest::run( int start_from ) } } + bool releaseObject = iFixedPoint > 0 && iFixedPoint < numPoints - 1; + /* Read good refined 3D object points */ + if( releaseObject ) + { + for( i = 0; i < numbers[0]; i++ ) + { + for( j = 0; j < 3; j++ ) + { + values_read = fscanf(file, "%lf", goodObjPoints + i * 3 + j); + CV_Assert(values_read == 1); + } + } + } + /* Read good stdDeviations */ for (i = 0; i < CV_CALIB_NINTRINSIC + numImages*6; i++) { values_read = fscanf(file, "%lf", goodStdDevs + i); CV_Assert(values_read == 1); } + if( releaseObject ) + { + for( i = CV_CALIB_NINTRINSIC + numImages*6; i < CV_CALIB_NINTRINSIC + numImages*6 + + numbers[0]*3; i++ ) + { + values_read = fscanf(file, "%lf", goodStdDevs + i); + CV_Assert(values_read == 1); + } + } - calibFlags = 0 - // + CV_CALIB_FIX_PRINCIPAL_POINT - // + CV_CALIB_ZERO_TANGENT_DIST - // + CV_CALIB_FIX_ASPECT_RATIO - // + CV_CALIB_USE_INTRINSIC_GUESS - + CV_CALIB_FIX_K3 - + CV_CALIB_FIX_K4+CV_CALIB_FIX_K5 - + CV_CALIB_FIX_K6 - ; memset( cameraMatrix, 0, 9*sizeof(cameraMatrix[0]) ); cameraMatrix[0] = cameraMatrix[4] = 807.; cameraMatrix[2] = (imageSize.width - 1)*0.5; @@ -534,10 +564,12 @@ void CV_CameraCalibrationTest::run( int start_from ) cvSize(imageSize), imagePoints, objectPoints, + iFixedPoint, distortion, cameraMatrix, transVects, rotMatrs, + newObjPoints, stdDevs, perViewErrors, calibFlags ); @@ -546,6 +578,11 @@ void CV_CameraCalibrationTest::run( int start_from ) for( currImage = 0; currImage < numImages; currImage++ ) { int nPoints = etalonSize.width * etalonSize.height; + if( releaseObject ) + { + memcpy( objectPoints + currImage * nPoints, newObjPoints, + nPoints * 3 * sizeof(double) ); + } project( nPoints, objectPoints + currImage * nPoints, rotMatrs + currImage * 9, @@ -639,6 +676,14 @@ void CV_CameraCalibrationTest::run( int start_from ) if( code < 0 ) goto _exit_; + /* ----- Compare refined 3D object points ----- */ + if( releaseObject ) + { + code = compare(newObjPoints,goodObjPoints, 3*numbers[0],0.1,"refined 3D object points"); + if( code < 0 ) + goto _exit_; + } + /* ----- Compare per view re-projection errors ----- */ code = compare(perViewErrors,goodPerViewErrors, numImages,0.1,"per view errors vector"); if( code < 0 ) @@ -647,12 +692,13 @@ void CV_CameraCalibrationTest::run( int start_from ) /* ----- Compare standard deviations of parameters ----- */ //only for c-version of test (it does not provides evaluation 
of stdDevs //and returns zeros) - for ( i = 0; i < CV_CALIB_NINTRINSIC + 6*numImages; i++) + for ( i = 0; i < CV_CALIB_NINTRINSIC + 6*numImages + 3*numbers[0]; i++) { if(stdDevs[i] == 0.0) stdDevs[i] = goodStdDevs[i]; } - code = compare(stdDevs,goodStdDevs, CV_CALIB_NINTRINSIC + 6*numImages,.5,"stdDevs vector"); + code = compare(stdDevs,goodStdDevs, CV_CALIB_NINTRINSIC + 6*numImages + 3*numbers[0],.5, + "stdDevs vector"); if( code < 0 ) goto _exit_; @@ -679,10 +725,12 @@ void CV_CameraCalibrationTest::run( int start_from ) cvFree(&transVects); cvFree(&rotMatrs); + cvFree(&newObjPoints); cvFree(&stdDevs); cvFree(&perViewErrors); cvFree(&goodTransVects); cvFree(&goodRotMatrs); + cvFree(&goodObjPoints); cvFree(&goodPerViewErrors); cvFree(&goodStdDevs); @@ -721,18 +769,20 @@ public: CV_CameraCalibrationTest_C(){} protected: virtual void calibrate( int imageCount, int* pointCounts, - CvSize imageSize, CvPoint2D64f* imagePoints, CvPoint3D64f* objectPoints, + CvSize imageSize, CvPoint2D64f* imagePoints, CvPoint3D64f* objectPoints, int iFixedPoint, double* distortionCoeffs, double* cameraMatrix, double* translationVectors, - double* rotationMatrices, double *stdDevs, double* perViewErrors, int flags ); + double* rotationMatrices, double* newObjPoints, double *stdDevs, double* perViewErrors, + int flags ); virtual void project( int pointCount, CvPoint3D64f* objectPoints, double* rotationMatrix, double* translationVector, double* cameraMatrix, double* distortion, CvPoint2D64f* imagePoints ); }; void CV_CameraCalibrationTest_C::calibrate(int imageCount, int* pointCounts, - CvSize imageSize, CvPoint2D64f* imagePoints, CvPoint3D64f* objectPoints, + CvSize imageSize, CvPoint2D64f* imagePoints, CvPoint3D64f* objectPoints, int iFixedPoint, double* distortionCoeffs, double* cameraMatrix, double* translationVectors, - double* rotationMatrices, double *stdDevs, double *perViewErrors, int flags ) + double* rotationMatrices, double* newObjPoints, double *stdDevs, double *perViewErrors, + int flags ) { int i, total = 0; for( i = 0; i < imageCount; i++ ) @@ -741,7 +791,7 @@ void CV_CameraCalibrationTest_C::calibrate(int imageCount, int* pointCounts, total += pointCounts[i]; } - for( i = 0; i < CV_CALIB_NINTRINSIC + imageCount*6; i++) + for( i = 0; i < CV_CALIB_NINTRINSIC + imageCount*6 + pointCounts[0]*3; i++) { stdDevs[i] = 0.0; } @@ -753,9 +803,11 @@ void CV_CameraCalibrationTest_C::calibrate(int imageCount, int* pointCounts, CvMat _distCoeffs = cvMat(4, 1, CV_64F, distortionCoeffs); CvMat _rotationMatrices = cvMat(imageCount, 9, CV_64F, rotationMatrices); CvMat _translationVectors = cvMat(imageCount, 3, CV_64F, translationVectors); + CvMat _newObjPoints = cvMat(1, pointCounts[0], CV_64FC3, newObjPoints); - cvCalibrateCamera2(&_objectPoints, &_imagePoints, &_pointCounts, imageSize, - &_cameraMatrix, &_distCoeffs, &_rotationMatrices, &_translationVectors, flags); + cvCalibrateCamera4(&_objectPoints, &_imagePoints, &_pointCounts, imageSize, iFixedPoint, + &_cameraMatrix, &_distCoeffs, &_rotationMatrices, &_translationVectors, + &_newObjPoints, flags); } void CV_CameraCalibrationTest_C::project( int pointCount, CvPoint3D64f* objectPoints, @@ -780,25 +832,29 @@ public: CV_CameraCalibrationTest_CPP(){} protected: virtual void calibrate( int imageCount, int* pointCounts, - CvSize imageSize, CvPoint2D64f* imagePoints, CvPoint3D64f* objectPoints, + CvSize imageSize, CvPoint2D64f* imagePoints, CvPoint3D64f* objectPoints, int iFixedPoint, double* distortionCoeffs, double* cameraMatrix, double* translationVectors, - 
double* rotationMatrices, double *stdDevs, double* perViewErrors, int flags ); + double* rotationMatrices, double* newObjPoints, double *stdDevs, double* perViewErrors, + int flags ); virtual void project( int pointCount, CvPoint3D64f* objectPoints, double* rotationMatrix, double* translationVector, double* cameraMatrix, double* distortion, CvPoint2D64f* imagePoints ); }; void CV_CameraCalibrationTest_CPP::calibrate(int imageCount, int* pointCounts, - CvSize _imageSize, CvPoint2D64f* _imagePoints, CvPoint3D64f* _objectPoints, + CvSize _imageSize, CvPoint2D64f* _imagePoints, CvPoint3D64f* _objectPoints, int iFixedPoint, double* _distortionCoeffs, double* _cameraMatrix, double* translationVectors, - double* rotationMatrices, double *stdDevs, double *perViewErrors, int flags ) + double* rotationMatrices, double* newObjPoints, double *stdDevs, double *perViewErrors, + int flags ) { vector > objectPoints( imageCount ); vector > imagePoints( imageCount ); Size imageSize = _imageSize; Mat cameraMatrix, distCoeffs(1,4,CV_64F,Scalar::all(0)); vector rvecs, tvecs; + Mat newObjMat; Mat stdDevsMatInt, stdDevsMatExt; + Mat stdDevsMatObj; Mat perViewErrorsMat; CvPoint3D64f* op = _objectPoints; @@ -819,17 +875,34 @@ void CV_CameraCalibrationTest_CPP::calibrate(int imageCount, int* pointCounts, } } - calibrateCamera( objectPoints, - imagePoints, - imageSize, - cameraMatrix, - distCoeffs, - rvecs, - tvecs, - stdDevsMatInt, - stdDevsMatExt, - perViewErrorsMat, - flags ); + for( int i = CV_CALIB_NINTRINSIC + imageCount*6; i < CV_CALIB_NINTRINSIC + imageCount*6 + + pointCounts[0]*3; i++) + { + stdDevs[i] = 0.0; + } + + calibrateCameraRO( objectPoints, + imagePoints, + imageSize, + iFixedPoint, + cameraMatrix, + distCoeffs, + rvecs, + tvecs, + newObjMat, + stdDevsMatInt, + stdDevsMatExt, + stdDevsMatObj, + perViewErrorsMat, + flags ); + + bool releaseObject = iFixedPoint > 0 && iFixedPoint < pointCounts[0] - 1; + if( releaseObject ) + { + newObjMat.convertTo( newObjMat, CV_64F ); + assert( newObjMat.total() * newObjMat.channels() == static_cast(3*pointCounts[0]) ); + memcpy( newObjPoints, newObjMat.ptr(), 3*pointCounts[0]*sizeof(double) ); + } assert( stdDevsMatInt.type() == CV_64F ); assert( stdDevsMatInt.total() == static_cast(CV_CALIB_NINTRINSIC) ); @@ -839,6 +912,14 @@ void CV_CameraCalibrationTest_CPP::calibrate(int imageCount, int* pointCounts, assert( stdDevsMatExt.total() == static_cast(6*imageCount) ); memcpy( stdDevs + CV_CALIB_NINTRINSIC, stdDevsMatExt.ptr(), 6*imageCount*sizeof(double) ); + if( releaseObject ) + { + assert( stdDevsMatObj.type() == CV_64F ); + assert( stdDevsMatObj.total() == static_cast(3*pointCounts[0]) ); + memcpy( stdDevs + CV_CALIB_NINTRINSIC + 6*imageCount, stdDevsMatObj.ptr(), + 3*pointCounts[0]*sizeof(double) ); + } + assert( perViewErrorsMat.type() == CV_64F); assert( perViewErrorsMat.total() == static_cast(imageCount) ); memcpy( perViewErrors, perViewErrorsMat.ptr(), imageCount*sizeof(double) ); diff --git a/modules/calib3d/test/test_cameracalibration_badarg.cpp b/modules/calib3d/test/test_cameracalibration_badarg.cpp index c367432f40..cc1874dd35 100644 --- a/modules/calib3d/test/test_cameracalibration_badarg.cpp +++ b/modules/calib3d/test/test_cameracalibration_badarg.cpp @@ -54,7 +54,7 @@ protected: void run(int); void run_func(void) {} - const static int M = 1; + const static int M = 2; Size imgSize; Size corSize; @@ -67,16 +67,18 @@ protected: CvMat* imgPts; CvMat* npoints; Size imageSize; + int iFixedPoint; CvMat *cameraMatrix; CvMat *distCoeffs; CvMat *rvecs; CvMat 
*tvecs; + CvMat *newObjPts; int flags; void operator()() const { - cvCalibrateCamera2(objPts, imgPts, npoints, cvSize(imageSize), - cameraMatrix, distCoeffs, rvecs, tvecs, flags ); + cvCalibrateCamera4(objPts, imgPts, npoints, cvSize(imageSize), iFixedPoint, + cameraMatrix, distCoeffs, rvecs, tvecs, newObjPts, flags ); } }; }; @@ -97,6 +99,7 @@ void CV_CameraCalibrationBadArgTest::run( int /* start_from */ ) Mat_(corSize.height, corSize.width, (Point2f*)&exp_corn[0]).copyTo(corners); CvMat objPts, imgPts, npoints, cameraMatrix, distCoeffs, rvecs, tvecs; + CvMat newObjPts; Mat zeros(1, sizeof(CvMat), CV_8U, Scalar(0)); C_Caller caller, bad_caller; @@ -108,6 +111,7 @@ void CV_CameraCalibrationBadArgTest::run( int /* start_from */ ) caller.distCoeffs = &distCoeffs; caller.rvecs = &rvecs; caller.tvecs = &tvecs; + caller.newObjPts = &newObjPts; ///////////////////////////// Mat objPts_cpp; @@ -117,19 +121,31 @@ void CV_CameraCalibrationBadArgTest::run( int /* start_from */ ) Mat distCoeffs_cpp; Mat rvecs_cpp; Mat tvecs_cpp; + Mat newObjPts_cpp; objPts_cpp.create(corSize, CV_32FC3); for(int j = 0; j < corSize.height; ++j) for(int i = 0; i < corSize.width; ++i) objPts_cpp.at(j, i) = Point3i(i, j, 0); objPts_cpp = objPts_cpp.reshape(3, 1); + Mat objPts_cpp_all(1, objPts_cpp.cols * M, CV_32FC3); + for(int i = 0; i < M; i++) + objPts_cpp.copyTo(objPts_cpp_all.colRange(objPts_cpp.cols * i, objPts_cpp.cols * (i + 1))); + objPts_cpp = objPts_cpp_all; + + caller.iFixedPoint = -1; imgPts_cpp = corners.clone().reshape(2, 1); + Mat imgPts_cpp_all(1, imgPts_cpp.cols * M, CV_32FC2); + for(int i = 0; i < M; i++) + imgPts_cpp.copyTo(imgPts_cpp_all.colRange(imgPts_cpp.cols * i, imgPts_cpp.cols * (i + 1))); + imgPts_cpp = imgPts_cpp_all; npoints_cpp = Mat_(M, 1, corSize.width * corSize.height); cameraMatrix_cpp.create(3, 3, CV_32F); distCoeffs_cpp.create(5, 1, CV_32F); rvecs_cpp.create(M, 1, CV_32FC3); tvecs_cpp.create(M, 1, CV_32FC3); + newObjPts_cpp.create(corSize.width * corSize.height, 1, CV_32FC3); caller.flags = 0; //CV_CALIB_USE_INTRINSIC_GUESS; //CV_CALIB_FIX_ASPECT_RATIO @@ -144,6 +160,7 @@ void CV_CameraCalibrationBadArgTest::run( int /* start_from */ ) distCoeffs = cvMat(distCoeffs_cpp); rvecs = cvMat(rvecs_cpp); tvecs = cvMat(tvecs_cpp); + newObjPts = cvMat(newObjPts_cpp); /* /*//*/ */ int errors = 0; @@ -197,6 +214,10 @@ void CV_CameraCalibrationBadArgTest::run( int /* start_from */ ) bad_caller.tvecs = (CvMat*)zeros.ptr(); errors += run_test_case( CV_StsBadArg, "Bad tvecs header", bad_caller ); + bad_caller = caller; + bad_caller.newObjPts = (CvMat*)zeros.ptr(); + errors += run_test_case( CV_StsBadArg, "Bad newObjPts header", bad_caller ); + Mat bad_rvecs_cpp1(M+1, 1, CV_32FC3); CvMat bad_rvecs_c1 = cvMat(bad_rvecs_cpp1); Mat bad_tvecs_cpp1(M+1, 1, CV_32FC3); CvMat bad_tvecs_c1 = cvMat(bad_tvecs_cpp1); @@ -207,11 +228,11 @@ void CV_CameraCalibrationBadArgTest::run( int /* start_from */ ) bad_caller = caller; bad_caller.rvecs = &bad_rvecs_c1; - errors += run_test_case( CV_StsBadArg, "Bad tvecs header", bad_caller ); + errors += run_test_case( CV_StsBadArg, "Bad rvecs header", bad_caller ); bad_caller = caller; bad_caller.rvecs = &bad_rvecs_c2; - errors += run_test_case( CV_StsBadArg, "Bad tvecs header", bad_caller ); + errors += run_test_case( CV_StsBadArg, "Bad rvecs header", bad_caller ); bad_caller = caller; bad_caller.tvecs = &bad_tvecs_c1; @@ -221,6 +242,14 @@ void CV_CameraCalibrationBadArgTest::run( int /* start_from */ ) bad_caller.tvecs = &bad_tvecs_c2; errors += run_test_case( 
CV_StsBadArg, "Bad tvecs header", bad_caller );
+    bad_caller = caller;
+    bad_caller.newObjPts = &bad_tvecs_c1;
+    errors += run_test_case( CV_StsBadArg, "Bad newObjPts header", bad_caller );
+
+    bad_caller = caller;
+    bad_caller.newObjPts = &bad_tvecs_c2;
+    errors += run_test_case( CV_StsBadArg, "Bad newObjPts header", bad_caller );
+
     Mat bad_cameraMatrix_cpp1(3, 3, CV_32S); CvMat bad_cameraMatrix_c1 = cvMat(bad_cameraMatrix_cpp1);
     Mat bad_cameraMatrix_cpp2(2, 3, CV_32F); CvMat bad_cameraMatrix_c2 = cvMat(bad_cameraMatrix_cpp2);
     Mat bad_cameraMatrix_cpp3(3, 2, CV_64F); CvMat bad_cameraMatrix_c3 = cvMat(bad_cameraMatrix_cpp3);
@@ -306,11 +335,30 @@ void CV_CameraCalibrationBadArgTest::run( int /* start_from */ )
     bad_caller.objPts = &bad_objPts_c5;

     cv::RNG& rng = theRNG();
-    for(int i = 0; i < bad_objPts_cpp5.rows; ++i)
+    for(int i = 0; i < bad_objPts_cpp5.cols; ++i)
         bad_objPts_cpp5.at<Point3f>(0, i).z += ((float)rng - 0.5f);

     errors += run_test_case( CV_StsBadArg, "Bad objPts data", bad_caller );

+    bad_objPts_cpp5 = objPts_cpp.clone(); bad_objPts_c5 = cvMat(bad_objPts_cpp5);
+    bad_caller.objPts = &bad_objPts_c5;
+    bad_caller.iFixedPoint = corSize.width - 1;
+    for(int i = 0; i < bad_objPts_cpp5.cols; ++i)
+    {
+        bad_objPts_cpp5.at<Point3f>(0, i).x += (float)rng;
+        bad_objPts_cpp5.at<Point3f>(0, i).y += (float)rng;
+    }
+    errors += run_test_case( CV_StsBadArg, "Bad objPts data", bad_caller );
+
+    bad_caller = caller;
+    Mat bad_npts_cpp3 = npoints_cpp.clone();
+    CvMat bad_npts_c3 = cvMat(bad_npts_cpp3);
+    bad_caller.npoints = &bad_npts_c3;
+    bad_caller.iFixedPoint = corSize.width - 1;
+    for(int i = 0; i < M; i++)
+        bad_npts_cpp3.at<int>(i) += i;
+    errors += run_test_case( CV_StsBadArg, "Bad npoints data", bad_caller );
+
     if (errors)
         ts->set_failed_test_info(cvtest::TS::FAIL_MISMATCH);
     else
diff --git a/samples/cpp/calibration.cpp b/samples/cpp/calibration.cpp
index 251ab83e72..5ff08d7c23 100644
--- a/samples/cpp/calibration.cpp
+++ b/samples/cpp/calibration.cpp
@@ -10,6 +10,7 @@
 #include <stdio.h>
 #include <string.h>
 #include <time.h>
+#include <iostream>

 using namespace cv;
 using namespace std;
@@ -62,6 +63,7 @@ static void help()
        "     [-o=<out_camera_params>] # the output filename for intrinsic [and extrinsic] parameters\n"
        "     [-op]                    # write detected feature points\n"
        "     [-oe]                    # write extrinsic parameters\n"
+       "     [-oo]                    # write refined 3D object points\n"
        "     [-zt]                    # assume zero tangential distortion\n"
        "     [-a=<aspectRatio>]       # fix aspect ratio (fx/fy)\n"
        "     [-p]                     # fix the principal point at the center\n"
@@ -69,6 +71,11 @@ static void help()
        "     [-V]                     # use a video file, and not an image list, uses\n"
        "                              # [input_data] string for the video file name\n"
        "     [-su]                    # show undistorted images after calibration\n"
+       "     [-ws=<number_of_pixel>]  # Half of search window for cornerSubPix (11 by default)\n"
+       "     [-dt=<distance>]         # actual distance between top-left and top-right corners of\n"
+       "                              # the calibration grid. If this parameter is specified, a more\n"
+       "                              # accurate calibration method will be used which may be better\n"
+       "                              # with inaccurate, roughly planar target.\n"
        "     [input_data]             # input data, one of the following:\n"
        "                              # - text file with a list of the images of the board\n"
        "                              #   the text file can be generated with imagelist_creator\n"
@@ -137,9 +144,11 @@ static void calcChessboardCorners(Size boardSize, float squareSize, vector<Point3f>& corners,
 static bool runCalibration( vector<vector<Point2f> > imagePoints,
                     Size imageSize, Size boardSize, Pattern patternType,
                     float squareSize, float aspectRatio,
+                    float grid_width, bool release_object,
                     int flags, Mat& cameraMatrix, Mat& distCoeffs,
                     vector<Mat>& rvecs, vector<Mat>& tvecs,
                     vector<float>& reprojErrs,
+                    vector<Point3f>& newObjPoints,
                     double& totalAvgErr)
 {
     cameraMatrix = Mat::eye(3, 3, CV_64F);
@@ -150,16 +159,32 @@ static bool runCalibration( vector<vector<Point2f> > imagePoints,

     vector<vector<Point3f> > objectPoints(1);
     calcChessboardCorners(boardSize, squareSize, objectPoints[0], patternType);
+    objectPoints[0][boardSize.width - 1].x = objectPoints[0][0].x + grid_width;
+    newObjPoints = objectPoints[0];

     objectPoints.resize(imagePoints.size(),objectPoints[0]);

-    double rms = calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix,
-                    distCoeffs, rvecs, tvecs, flags|CALIB_FIX_K4|CALIB_FIX_K5);
-                    ///*|CALIB_FIX_K3*/|CALIB_FIX_K4|CALIB_FIX_K5);
+    double rms;
+    int iFixedPoint = -1;
+    if (release_object)
+        iFixedPoint = boardSize.width - 1;
+    rms = calibrateCameraRO(objectPoints, imagePoints, imageSize, iFixedPoint,
+                            cameraMatrix, distCoeffs, rvecs, tvecs, newObjPoints,
+                            flags | CALIB_FIX_K3 | CALIB_USE_LU);
     printf("RMS error reported by calibrateCamera: %g\n", rms);

     bool ok = checkRange(cameraMatrix) && checkRange(distCoeffs);

+    if (release_object) {
+        cout << "New board corners: " << endl;
+        cout << newObjPoints[0] << endl;
+        cout << newObjPoints[boardSize.width - 1] << endl;
+        cout << newObjPoints[boardSize.width * (boardSize.height - 1)] << endl;
+        cout << newObjPoints.back() << endl;
+    }
+
+    objectPoints.clear();
+    objectPoints.resize(imagePoints.size(), newObjPoints);
     totalAvgErr = computeReprojectionErrors(objectPoints, imagePoints,
                 rvecs, tvecs, cameraMatrix, distCoeffs, reprojErrs);

@@ -174,6 +199,7 @@ static void saveCameraParams( const string& filename,
                        const vector<Mat>& rvecs, const vector<Mat>& tvecs,
                        const vector<float>& reprojErrs,
                        const vector<vector<Point2f> >& imagePoints,
+                       const vector<Point3f>& newObjPoints,
                        double totalAvgErr )
 {
     FileStorage fs( filename, FileStorage::WRITE );
@@ -246,6 +272,11 @@ static void saveCameraParams( const string& filename,
         }
         fs << "image_points" << imagePtMat;
     }
+
+    if( !newObjPoints.empty() )
+    {
+        fs << "grid_points" << newObjPoints;
+    }
 }

 static bool readStringList( const string& filename, vector<string>& l )
@@ -267,17 +298,19 @@ static bool readStringList( const string& filename, vector<string>& l )

 static bool runAndSave(const string& outputFilename,
                 const vector<vector<Point2f> >& imagePoints,
                 Size imageSize, Size boardSize, Pattern patternType, float squareSize,
+                float grid_width, bool release_object,
                 float aspectRatio, int flags, Mat& cameraMatrix,
-                Mat& distCoeffs, bool writeExtrinsics, bool writePoints )
+                Mat& distCoeffs, bool writeExtrinsics, bool writePoints, bool writeGrid )
 {
     vector<Mat> rvecs, tvecs;
     vector<float> reprojErrs;
     double totalAvgErr = 0;
+    vector<Point3f> newObjPoints;

     bool ok = runCalibration(imagePoints, imageSize, boardSize, patternType, squareSize,
-                   aspectRatio, flags, cameraMatrix, distCoeffs,
-                   rvecs, tvecs, reprojErrs, totalAvgErr);
-    printf("%s. avg reprojection error = %.2f\n",
+                   aspectRatio, grid_width, release_object, flags, cameraMatrix, distCoeffs,
+                   rvecs, tvecs, reprojErrs, newObjPoints, totalAvgErr);
+    printf("%s. avg reprojection error = %.7f\n",
            ok ? "Calibration succeeded" : "Calibration failed",
            totalAvgErr);

@@ -289,6 +322,7 @@ static bool runAndSave(const string& outputFilename,
                          writeExtrinsics ? tvecs : vector<Mat>(),
                          writeExtrinsics ? reprojErrs : vector<float>(),
                          writePoints ? imagePoints : vector<vector<Point2f> >(),
+                         writeGrid ? newObjPoints : vector<Point3f>(),
                          totalAvgErr );
     return ok;
 }
@@ -297,7 +331,7 @@ static bool runAndSave(const string& outputFilename,
 int main( int argc, char** argv )
 {
     Size boardSize, imageSize;
-    float squareSize, aspectRatio;
+    float squareSize, aspectRatio = 1;
     Mat cameraMatrix, distCoeffs;
     string outputFilename;
     string inputFilename = "";
@@ -320,7 +354,8 @@ int main( int argc, char** argv )

     cv::CommandLineParser parser(argc, argv,
         "{help ||}{w||}{h||}{pt|chessboard|}{n|10|}{d|1000|}{s|1|}{o|out_camera_data.yml|}"
-        "{op||}{oe||}{zt||}{a|1|}{p||}{v||}{V||}{su||}"
+        "{op||}{oe||}{zt||}{a||}{p||}{v||}{V||}{su||}"
+        "{oo||}{ws|11|}{dt||}"
         "{@input_data|0|}");
     if (parser.has("help"))
     {
@@ -343,12 +378,14 @@ int main( int argc, char** argv )
     }
     squareSize = parser.get<float>("s");
     nframes = parser.get<int>("n");
-    aspectRatio = parser.get<float>("a");
     delay = parser.get<int>("d");
     writePoints = parser.has("op");
     writeExtrinsics = parser.has("oe");
-    if (parser.has("a"))
+    bool writeGrid = parser.has("oo");
+    if (parser.has("a")) {
         flags |= CALIB_FIX_ASPECT_RATIO;
+        aspectRatio = parser.get<float>("a");
+    }
     if ( parser.has("zt") )
         flags |= CALIB_ZERO_TANGENT_DIST;
     if ( parser.has("p") )
@@ -362,6 +399,13 @@ int main( int argc, char** argv )
         cameraId = parser.get<int>("@input_data");
     else
         inputFilename = parser.get<string>("@input_data");
+    int winSize = parser.get<int>("ws");
+    float grid_width = squareSize * (boardSize.width - 1);
+    bool release_object = false;
+    if (parser.has("dt")) {
+        grid_width = parser.get<float>("dt");
+        release_object = true;
+    }
     if (!parser.check())
     {
         help();
@@ -420,9 +464,9 @@ int main( int argc, char** argv )
         {
             if( imagePoints.size() > 0 )
                 runAndSave(outputFilename, imagePoints, imageSize,
-                           boardSize, pattern, squareSize, aspectRatio,
+                           boardSize, pattern, squareSize, grid_width, release_object, aspectRatio,
                            flags, cameraMatrix, distCoeffs,
-                           writeExtrinsics, writePoints);
+                           writeExtrinsics, writePoints, writeGrid);
             break;
         }
@@ -452,8 +496,8 @@ int main( int argc, char** argv )
         }

         // improve the found corners' coordinate accuracy
-        if( pattern == CHESSBOARD && found) cornerSubPix( viewGray, pointbuf, Size(11,11),
-            Size(-1,-1), TermCriteria( TermCriteria::EPS+TermCriteria::COUNT, 30, 0.1 ));
+        if( pattern == CHESSBOARD && found) cornerSubPix( viewGray, pointbuf, Size(winSize,winSize),
+            Size(-1,-1), TermCriteria( TermCriteria::EPS+TermCriteria::COUNT, 30, 0.0001 ));

         if( mode == CAPTURING && found &&
            (!capture.isOpened() || clock() - prevTimestamp > delay*1e-3*CLOCKS_PER_SEC) )
@@ -510,9 +554,9 @@ int main( int argc, char** argv )
         if( mode == CAPTURING && imagePoints.size() >= (unsigned)nframes )
         {
             if( runAndSave(outputFilename, imagePoints, imageSize,
-                       boardSize, pattern, squareSize, aspectRatio,
+                       boardSize, pattern, squareSize, grid_width, release_object, aspectRatio,
                        flags, cameraMatrix, distCoeffs,
-                       writeExtrinsics, writePoints))
+                       writeExtrinsics, writePoints, writeGrid))
                 mode = CALIBRATED;
             else
                 mode = DETECTION;
diff --git a/samples/cpp/tutorial_code/calib3d/camera_calibration/camera_calibration.cpp b/samples/cpp/tutorial_code/calib3d/camera_calibration/camera_calibration.cpp
index 87fb4604e2..2d8b1ddac4 100644
--- a/samples/cpp/tutorial_code/calib3d/camera_calibration/camera_calibration.cpp
+++ b/samples/cpp/tutorial_code/calib3d/camera_calibration/camera_calibration.cpp
@@ -15,13 +15,6 @@
 using namespace cv;
 using namespace std;

-static void help()
-{
-    cout <<  "This is a camera calibration sample." << endl
-         <<  "Usage: camera_calibration [configuration_file -- default ./default.xml]" << endl
-         <<  "Near the sample file you'll find the configuration file, which has detailed help of "
-             "how to edit it. It may be any OpenCV supported file format XML/YAML." << endl;
-}
 class Settings
 {
 public:
@@ -43,6 +36,7 @@ public:

                   << "Write_DetectedFeaturePoints" << writePoints
                   << "Write_extrinsicParameters"   << writeExtrinsics
+                  << "Write_gridPoints" << writeGrid
                   << "Write_outputFileName"        << outputFileName

                   << "Show_UndistortedImage" << showUndistorsed
@@ -62,6 +56,7 @@ public:
         node["Calibrate_FixAspectRatio"] >> aspectRatio;
         node["Write_DetectedFeaturePoints"] >> writePoints;
         node["Write_extrinsicParameters"] >> writeExtrinsics;
+        node["Write_gridPoints"] >> writeGrid;
         node["Write_outputFileName"] >> outputFileName;
         node["Calibrate_AssumeZeroTangentialDistortion"] >> calibZeroTangentDist;
         node["Calibrate_FixPrincipalPointAtTheCenter"] >> calibFixPrincipalPoint;
@@ -210,6 +205,7 @@ public:
     int delay;                   // In case of a video input
     bool writePoints;            // Write detected feature points
     bool writeExtrinsics;        // Write extrinsic parameters
+    bool writeGrid;              // Write refined 3D target grid points
     bool calibZeroTangentDist;   // Assume zero tangential distortion
     bool calibFixPrincipalPoint; // Fix the principal point at the center
     bool flipVertical;           // Flip the captured images around the horizontal axis
@@ -248,19 +244,39 @@ static inline void read(const FileNode& node, Settings& x, const Settings& defau
 enum { DETECTION = 0, CAPTURING = 1, CALIBRATED = 2 };

 bool runCalibrationAndSave(Settings& s, Size imageSize, Mat& cameraMatrix, Mat& distCoeffs,
-                           vector<vector<Point2f> > imagePoints );
+                           vector<vector<Point2f> > imagePoints, float grid_width, bool release_object);

 int main(int argc, char* argv[])
 {
-    help();
+    const String keys
+        = "{help h usage ? |           | print this message            }"
+          "{@settings      |default.xml| input setting file            }"
+          "{d              |           | actual distance between top-left and top-right corners of "
+          "the calibration grid }"
+          "{winSize        | 11        | Half of search window for cornerSubPix }";
+    CommandLineParser parser(argc, argv, keys);
+    parser.about("This is a camera calibration sample.\n"
+                 "Usage: camera_calibration [configuration_file -- default ./default.xml]\n"
+                 "Near the sample file you'll find the configuration file, which has detailed help of "
+                 "how to edit it. It may be any OpenCV supported file format XML/YAML.");
+    if (!parser.check()) {
+        parser.printErrors();
+        return 0;
+    }
+
+    if (parser.has("help")) {
+        parser.printMessage();
+        return 0;
+    }
+
     //! [file_read]
     Settings s;
-    const string inputSettingsFile = argc > 1 ? argv[1] : "default.xml";
+    const string inputSettingsFile = parser.get<string>(0);
     FileStorage fs(inputSettingsFile, FileStorage::READ); // Read the settings
     if (!fs.isOpened())
     {
         cout << "Could not open the configuration file: \"" << inputSettingsFile << "\"" << endl;
+        parser.printMessage();
         return -1;
     }
     fs["Settings"] >> s;
@@ -276,6 +292,15 @@ int main(int argc, char* argv[])
         return -1;
     }

+    int winSize = parser.get<int>("winSize");
+
+    float grid_width = s.squareSize * (s.boardSize.width - 1);
+    bool release_object = false;
+    if (parser.has("d")) {
+        grid_width = parser.get<float>("d");
+        release_object = true;
+    }
+
     vector<vector<Point2f> > imagePoints;
     Mat cameraMatrix, distCoeffs;
     Size imageSize;
@@ -295,7 +320,8 @@ int main(int argc, char* argv[])
           //-----  If no more image, or got enough, then stop calibration and show result -------------
           if( mode == CAPTURING && imagePoints.size() >= (size_t)s.nrFrames )
           {
-            if( runCalibrationAndSave(s, imageSize, cameraMatrix, distCoeffs, imagePoints))
+            if(runCalibrationAndSave(s, imageSize, cameraMatrix, distCoeffs, imagePoints, grid_width,
+                                     release_object))
                 mode = CALIBRATED;
             else
                 mode = DETECTION;
@@ -304,7 +330,8 @@ int main(int argc, char* argv[])
         {
             // if calibration threshold was not reached yet, calibrate now
             if( mode != CALIBRATED && !imagePoints.empty() )
-                runCalibrationAndSave(s, imageSize, cameraMatrix, distCoeffs, imagePoints);
+                runCalibrationAndSave(s, imageSize, cameraMatrix, distCoeffs, imagePoints, grid_width,
+                                      release_object);
             break;
         }
         //! [get_input]
@@ -348,8 +375,8 @@ int main(int argc, char* argv[])
             {
                 Mat viewGray;
                 cvtColor(view, viewGray, COLOR_BGR2GRAY);
-                cornerSubPix( viewGray, pointBuf, Size(11,11),
-                    Size(-1,-1), TermCriteria( TermCriteria::EPS+TermCriteria::COUNT, 30, 0.1 ));
+                cornerSubPix( viewGray, pointBuf, Size(winSize,winSize),
+                    Size(-1,-1), TermCriteria( TermCriteria::EPS+TermCriteria::COUNT, 30, 0.0001 ));
             }

             if( mode == CAPTURING &&  // For camera only take new samples after delay time
@@ -515,7 +542,8 @@ static void calcBoardCornerPositions(Size boardSize, float squareSize, vector<Point3f>& corners,
 static bool runCalibration( Settings& s, Size& imageSize, Mat& cameraMatrix, Mat& distCoeffs,
                             vector<vector<Point2f> > imagePoints, vector<Mat>& rvecs, vector<Mat>& tvecs,
-                            vector<float>& reprojErrs,  double& totalAvgErr)
+                            vector<float>& reprojErrs, double& totalAvgErr, vector<Point3f>& newObjPoints,
+                            float grid_width, bool release_object)
 {
     //! [fixed_aspect]
     cameraMatrix = Mat::eye(3, 3, CV_64F);
@@ -530,6 +558,8 @@ static bool runCalibration( Settings& s, Size& imageSize, Mat& cameraMatrix, Mat

     vector<vector<Point3f> > objectPoints(1);
     calcBoardCornerPositions(s.boardSize, s.squareSize, objectPoints[0], s.calibrationPattern);
+    objectPoints[0][s.boardSize.width - 1].x = objectPoints[0][0].x + grid_width;
+    newObjPoints = objectPoints[0];

     objectPoints.resize(imagePoints.size(),objectPoints[0]);
@@ -548,14 +578,28 @@ static bool runCalibration( Settings& s, Size& imageSize, Mat& cameraMatrix, Mat
             tvecs.push_back(_tvecs.row(i));
         }
     } else {
-        rms = calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs,
-                              s.flag);
+        int iFixedPoint = -1;
+        if (release_object)
+            iFixedPoint = s.boardSize.width - 1;
+        rms = calibrateCameraRO(objectPoints, imagePoints, imageSize, iFixedPoint,
+                                cameraMatrix, distCoeffs, rvecs, tvecs, newObjPoints,
+                                s.flag | CALIB_USE_LU);
+    }
+
+    if (release_object) {
+        cout << "New board corners: " << endl;
+        cout << newObjPoints[0] << endl;
+        cout << newObjPoints[s.boardSize.width - 1] << endl;
+        cout << newObjPoints[s.boardSize.width * (s.boardSize.height - 1)] << endl;
+        cout << newObjPoints.back() << endl;
     }

     cout << "Re-projection error reported by calibrateCamera: "<< rms << endl;

     bool ok = checkRange(cameraMatrix) && checkRange(distCoeffs);

+    objectPoints.clear();
+    objectPoints.resize(imagePoints.size(), newObjPoints);
     totalAvgErr = computeReprojectionErrors(objectPoints, imagePoints, rvecs, tvecs, cameraMatrix,
                                             distCoeffs, reprojErrs, s.useFisheye);

@@ -566,7 +610,7 @@ static bool runCalibration( Settings& s, Size& imageSize, Mat& cameraMatrix, Mat
 static void saveCameraParams( Settings& s, Size& imageSize, Mat& cameraMatrix, Mat& distCoeffs,
                               const vector<Mat>& rvecs, const vector<Mat>& tvecs,
                               const vector<float>& reprojErrs, const vector<vector<Point2f> >& imagePoints,
-                              double totalAvgErr )
+                              double totalAvgErr, const vector<Point3f>& newObjPoints )
 {
     FileStorage fs( s.outputFileName, FileStorage::WRITE );
@@ -673,24 +717,30 @@ static void saveCameraParams( Settings& s, Size& imageSize, Mat& cameraMatrix, M
         }
         fs << "image_points" << imagePtMat;
     }
+
+    if( s.writeGrid && !newObjPoints.empty() )
+    {
+        fs << "grid_points" << newObjPoints;
+    }
 }

 //! [run_and_save]
 bool runCalibrationAndSave(Settings& s, Size imageSize, Mat& cameraMatrix, Mat& distCoeffs,
-                           vector<vector<Point2f> > imagePoints)
+                           vector<vector<Point2f> > imagePoints, float grid_width, bool release_object)
 {
     vector<Mat> rvecs, tvecs;
     vector<float> reprojErrs;
     double totalAvgErr = 0;
+    vector<Point3f> newObjPoints;

     bool ok = runCalibration(s, imageSize, cameraMatrix, distCoeffs, imagePoints, rvecs, tvecs, reprojErrs,
-                             totalAvgErr);
+                             totalAvgErr, newObjPoints, grid_width, release_object);
     cout << (ok ? "Calibration succeeded" : "Calibration failed")
          << ". avg re projection error = " << totalAvgErr << endl;

     if (ok)
         saveCameraParams(s, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs, reprojErrs, imagePoints,
-                         totalAvgErr);
+                         totalAvgErr, newObjPoints);
     return ok;
 }
 //! [run_and_save]
diff --git a/samples/cpp/tutorial_code/calib3d/camera_calibration/in_VID5.xml b/samples/cpp/tutorial_code/calib3d/camera_calibration/in_VID5.xml
index 5659651e6c..42e80c851e 100644
--- a/samples/cpp/tutorial_code/calib3d/camera_calibration/in_VID5.xml
+++ b/samples/cpp/tutorial_code/calib3d/camera_calibration/in_VID5.xml
@@ -39,6 +39,8 @@
   <!-- If true (non-zero) we write to the output file the feature points.-->
   <Write_DetectedFeaturePoints>1</Write_DetectedFeaturePoints>
   <!-- If true (non-zero) we write to the output file the extrinsic camera parameters.-->
   <Write_extrinsicParameters>1</Write_extrinsicParameters>
+  <!-- If true (non-zero) we write to the output file the refined 3D target grid points.-->
+  <Write_gridPoints>1</Write_gridPoints>
   <!-- If true (non-zero) we show after calibration the undistorted images.-->
   <Show_UndistortedImage>1</Show_UndistortedImage>
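
For reviewers who want to exercise the new code path outside of the samples, here is a minimal
standalone sketch of the object-releasing calibration flow introduced by this patch. It is not part
of the patch itself: the helper name `calibrateReleasingObject`, the 8-element distortion vector and
the hard-coded flag choice are illustrative assumptions, while `cv::calibrateCameraRO`,
`CALIB_USE_LU`, `iFixedPoint` and `newObjPoints` are used in the same way as in the samples above.
`imagePoints` is assumed to be filled beforehand by `findChessboardCorners` and `cornerSubPix`.
@code{.cpp}
// Standalone sketch (not part of the patch): drive cv::calibrateCameraRO the way the
// updated samples do. imagePoints must already hold the detected corners for every view.
#include <opencv2/calib3d.hpp>
#include <cstdio>
#include <vector>

cv::Mat calibrateReleasingObject(const std::vector<std::vector<cv::Point2f> >& imagePoints,
                                 cv::Size imageSize, cv::Size boardSize, float squareSize,
                                 float measured_width) // top-edge length measured with rulers/calipers
{
    // Nominal planar grid (same layout as calcBoardCornerPositions for a chessboard),
    // then overwrite the top-right corner x with the measured distance.
    std::vector<cv::Point3f> grid;
    for (int i = 0; i < boardSize.height; ++i)
        for (int j = 0; j < boardSize.width; ++j)
            grid.push_back(cv::Point3f(j * squareSize, i * squareSize, 0));
    grid[boardSize.width - 1].x = grid[0].x + measured_width;

    std::vector<std::vector<cv::Point3f> > objectPoints(imagePoints.size(), grid);

    cv::Mat cameraMatrix = cv::Mat::eye(3, 3, CV_64F);
    cv::Mat distCoeffs   = cv::Mat::zeros(8, 1, CV_64F);
    std::vector<cv::Mat> rvecs, tvecs;
    std::vector<cv::Point3f> newObjPoints;   // refined 3D grid is returned here
    int iFixedPoint = boardSize.width - 1;   // fixing the top-right corner selects the
                                             // object-releasing method; -1 falls back to
                                             // the standard calibration

    double rms = cv::calibrateCameraRO(objectPoints, imagePoints, imageSize, iFixedPoint,
                                       cameraMatrix, distCoeffs, rvecs, tvecs, newObjPoints,
                                       cv::CALIB_USE_LU);
    CV_Assert(cv::checkRange(cameraMatrix) && cv::checkRange(distCoeffs));
    std::printf("RMS re-projection error: %.7f\n", rms);
    return cameraMatrix;
}
@endcode
As in the samples, only the distance between the first and the last corner of the top row has to be
measured; the remaining grid coordinates are refined by the solver and returned through `newObjPoints`.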