Merge pull request #3396 from stopmosk:fix-aruco-tutorials

Fix aruco tutorials
pull/3403/head
Alexander Smorkalov 2 years ago committed by GitHub
commit 0c6b04a8a4
  1. modules/aruco/include/opencv2/aruco.hpp (65 lines changed)
  2. modules/aruco/samples/detect_board.cpp (16 lines changed)
  3. modules/aruco/samples/detect_markers.cpp (25 lines changed)
  4. modules/aruco/tutorials/aruco_board_detection/aruco_board_detection.markdown (124 lines changed)
  5. modules/aruco/tutorials/aruco_detection/aruco_detection.markdown (190 lines changed)

modules/aruco/include/opencv2/aruco.hpp

@@ -41,34 +41,7 @@ CV_EXPORTS_W void getBoardObjectAndImagePoints(const Ptr<Board> &board, InputArr
                                 InputArray detectedIds, OutputArray objPoints, OutputArray imgPoints);

-/**
- * @brief Pose estimation for a board of markers
- *
- * @param corners vector of already detected markers corners. For each marker, its four corners
- * are provided, (e.g std::vector<std::vector<cv::Point2f> > ). For N detected markers, the
- * dimensions of this array should be Nx4. The order of the corners should be clockwise.
- * @param ids list of identifiers for each marker in corners
- * @param board layout of markers in the board. The layout is composed by the marker identifiers
- * and the positions of each marker corner in the board reference system.
- * @param cameraMatrix input 3x3 floating-point camera matrix
- * \f$A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$
- * @param distCoeffs vector of distortion coefficients
- * \f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6],[s_1, s_2, s_3, s_4]])\f$ of 4, 5, 8 or 12 elements
- * @param rvec Output vector (e.g. cv::Mat) corresponding to the rotation vector of the board
- * (see cv::Rodrigues). Used as initial guess if not empty.
- * @param tvec Output vector (e.g. cv::Mat) corresponding to the translation vector of the board.
- * @param useExtrinsicGuess defines whether initial guess for \b rvec and \b tvec will be used or not.
- * Used as initial guess if not empty.
- *
- * This function receives the detected markers and returns the pose of a marker board composed
- * by those markers.
- * A Board of marker has a single world coordinate system which is defined by the board layout.
- * The returned transformation is the one that transforms points from the board coordinate system
- * to the camera coordinate system.
- * Input markers that are not included in the board layout are ignored.
- * The function returns the number of markers from the input employed for the board pose estimation.
- * Note that returning a 0 means the pose has not been estimated.
- * @sa use cv::drawFrameAxes to get world coordinate system axis for object points
- */
+/** @deprecated Use cv::solvePnP
+ */
 CV_EXPORTS_W int estimatePoseBoard(InputArrayOfArrays corners, InputArray ids, const Ptr<Board> &board,
                                    InputArray cameraMatrix, InputArray distCoeffs, InputOutputArray rvec,
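For downstream code hit by this deprecation, the replacement pattern used throughout this PR is `Board::matchImagePoints()` followed by `cv::solvePnP()`. A minimal migration sketch, assuming `board`, `corners`, `ids`, `cameraMatrix` and `distCoeffs` come from your existing detection code:

@code{.cpp}
// Match detected markers against the board layout; markers whose ids are
// not part of the board are ignored, so objPoints and imgPoints stay in
// one-to-one correspondence.
cv::Mat objPoints, imgPoints;
board->matchImagePoints(corners, ids, objPoints, imgPoints);

cv::Vec3d rvec, tvec;
if(!objPoints.empty())
    cv::solvePnP(objPoints, imgPoints, cameraMatrix, distCoeffs, rvec, tvec);

// Each board marker contributes four object points, so this recovers the
// old estimatePoseBoard() return value:
int markersUsed = (int)objPoints.total() / 4;
@endcode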
@@ -98,41 +71,7 @@ CV_EXPORTS_W bool estimatePoseCharucoBoard(InputArray charucoCorners, InputArray
                                            InputArray distCoeffs, InputOutputArray rvec,
                                            InputOutputArray tvec, bool useExtrinsicGuess = false);

-/**
- * @brief Pose estimation for single markers
- *
- * @param corners vector of already detected markers corners. For each marker, its four corners
- * are provided, (e.g std::vector<std::vector<cv::Point2f> > ). For N detected markers,
- * the dimensions of this array should be Nx4. The order of the corners should be clockwise.
- * @sa detectMarkers
- * @param markerLength the length of the markers' side. The returning translation vectors will
- * be in the same unit. Normally, unit is meters.
- * @param cameraMatrix input 3x3 floating-point camera matrix
- * \f$A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$
- * @param distCoeffs vector of distortion coefficients
- * \f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6],[s_1, s_2, s_3, s_4]])\f$ of 4, 5, 8 or 12 elements
- * @param rvecs array of output rotation vectors (@sa Rodrigues) (e.g. std::vector<cv::Vec3d>).
- * Each element in rvecs corresponds to the specific marker in imgPoints.
- * @param tvecs array of output translation vectors (e.g. std::vector<cv::Vec3d>).
- * Each element in tvecs corresponds to the specific marker in imgPoints.
- * @param objPoints array of object points of all the marker corners
- * @param estimateParameters set the origin of coordinate system and the coordinates of the four corners of the marker
- * (default estimateParameters.pattern = PatternPositionType::ARUCO_CCW_CENTER, estimateParameters.useExtrinsicGuess = false,
- * estimateParameters.solvePnPMethod = SOLVEPNP_ITERATIVE).
- *
- * This function receives the detected markers and returns their pose estimation respect to
- * the camera individually. So for each marker, one rotation and translation vector is returned.
- * The returned transformation is the one that transforms points from each marker coordinate system
- * to the camera coordinate system.
- * The marker coordinate system is centered on the middle (by default) or on the top-left corner of the marker,
- * with the Z axis perpendicular to the marker plane.
- * estimateParameters defines the coordinates of the four corners of the marker in its own coordinate system (by default) are:
- * (-markerLength/2, markerLength/2, 0), (markerLength/2, markerLength/2, 0),
- * (markerLength/2, -markerLength/2, 0), (-markerLength/2, -markerLength/2, 0)
- * @sa use cv::drawFrameAxes to get world coordinate system axis for object points
- * @sa @ref tutorial_aruco_detection
- * @sa EstimateParameters
- * @sa PatternPositionType
- */
+/** @deprecated Use cv::solvePnP
+ */
 CV_EXPORTS_W void estimatePoseSingleMarkers(InputArrayOfArrays corners, float markerLength,
                                             InputArray cameraMatrix, InputArray distCoeffs,
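The per-marker replacement is analogous: one `cv::solvePnP()` call per detected marker, with the marker's four corners in its own coordinate frame as object points. A sketch reproducing only the default `ARUCO_CCW_CENTER` layout (coordinate system centered on the marker, Z axis out of its plane), assuming `corners`, `markerLength` and the calibration data from your code:

@code{.cpp}
// Corner coordinates in the marker frame, in the same order as the
// detected image corners: top-left, top-right, bottom-right, bottom-left.
cv::Mat objPoints(4, 1, CV_32FC3);
objPoints.ptr<cv::Vec3f>(0)[0] = cv::Vec3f(-markerLength/2.f,  markerLength/2.f, 0);
objPoints.ptr<cv::Vec3f>(0)[1] = cv::Vec3f( markerLength/2.f,  markerLength/2.f, 0);
objPoints.ptr<cv::Vec3f>(0)[2] = cv::Vec3f( markerLength/2.f, -markerLength/2.f, 0);
objPoints.ptr<cv::Vec3f>(0)[3] = cv::Vec3f(-markerLength/2.f, -markerLength/2.f, 0);

std::vector<cv::Vec3d> rvecs(corners.size()), tvecs(corners.size());
for (size_t i = 0; i < corners.size(); i++)
    cv::solvePnP(objPoints, corners[i], cameraMatrix, distCoeffs, rvecs[i], tvecs[i]);
@endcode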

modules/aruco/samples/detect_board.cpp

@@ -177,8 +177,16 @@ int main(int argc, char *argv[]) {
         // estimate board pose
         int markersOfBoardDetected = 0;
-        if(ids.size() > 0)
-            markersOfBoardDetected = estimatePoseBoard(corners, ids, board, camMatrix, distCoeffs, rvec, tvec);
+        if(!ids.empty()) {
+            // Get object and image points for the solvePnP function
+            cv::Mat objPoints, imgPoints;
+            board->matchImagePoints(corners, ids, objPoints, imgPoints);
+
+            // Find pose
+            cv::solvePnP(objPoints, imgPoints, camMatrix, distCoeffs, rvec, tvec);
+
+            markersOfBoardDetected = (int)objPoints.total() / 4;
+        }

         double currentTime = ((double)getTickCount() - tick) / getTickFrequency();
         totalTime += currentTime;
@@ -190,11 +198,11 @@ int main(int argc, char *argv[]) {
         // draw results
         image.copyTo(imageCopy);
-        if(ids.size() > 0) {
+        if(!ids.empty()) {
             aruco::drawDetectedMarkers(imageCopy, corners, ids);
         }

-        if(showRejected && rejected.size() > 0)
+        if(showRejected && !rejected.empty())
             aruco::drawDetectedMarkers(imageCopy, rejected, noArray(), Scalar(100, 0, 255));

         if(markersOfBoardDetected > 0)

modules/aruco/samples/detect_markers.cpp

@@ -148,6 +148,13 @@ int main(int argc, char *argv[]) {
     double totalTime = 0;
     int totalIterations = 0;

+    // Set coordinate system
+    cv::Mat objPoints(4, 1, CV_32FC3);
+    objPoints.ptr<Vec3f>(0)[0] = Vec3f(-markerLength/2.f, markerLength/2.f, 0);
+    objPoints.ptr<Vec3f>(0)[1] = Vec3f(markerLength/2.f, markerLength/2.f, 0);
+    objPoints.ptr<Vec3f>(0)[2] = Vec3f(markerLength/2.f, -markerLength/2.f, 0);
+    objPoints.ptr<Vec3f>(0)[3] = Vec3f(-markerLength/2.f, -markerLength/2.f, 0);
+
     while(inputVideo.grab()) {
         Mat image, imageCopy;
         inputVideo.retrieve(image);
@@ -156,13 +163,19 @@ int main(int argc, char *argv[]) {
         vector< int > ids;
         vector< vector< Point2f > > corners, rejected;
-        vector< Vec3d > rvecs, tvecs;

         // detect markers and estimate pose
         detector.detectMarkers(image, corners, ids, rejected);
-        if(estimatePose && ids.size() > 0)
-            aruco::estimatePoseSingleMarkers(corners, markerLength, camMatrix, distCoeffs, rvecs,
-                                             tvecs);
+
+        size_t nMarkers = corners.size();
+        vector<Vec3d> rvecs(nMarkers), tvecs(nMarkers);
+
+        if(estimatePose && !ids.empty()) {
+            // Calculate pose for each marker
+            for (size_t i = 0; i < nMarkers; i++) {
+                solvePnP(objPoints, corners.at(i), camMatrix, distCoeffs, rvecs.at(i), tvecs.at(i));
+            }
+        }

         double currentTime = ((double)getTickCount() - tick) / getTickFrequency();
         totalTime += currentTime;
@@ -174,7 +187,7 @@ int main(int argc, char *argv[]) {
         // draw results
         image.copyTo(imageCopy);
-        if(ids.size() > 0) {
+        if(!ids.empty()) {
             aruco::drawDetectedMarkers(imageCopy, corners, ids);

             if(estimatePose) {
@@ -183,7 +196,7 @@ int main(int argc, char *argv[]) {
             }
         }

-        if(showRejected && rejected.size() > 0)
+        if(showRejected && !rejected.empty())
             aruco::drawDetectedMarkers(imageCopy, rejected, noArray(), Scalar(100, 0, 255));

         imshow("out", imageCopy);

modules/aruco/tutorials/aruco_board_detection/aruco_board_detection.markdown

@@ -27,61 +27,57 @@ Thus, the pose can be calculated even in the presence of occlusions or partial v
 - The obtained pose is usually more accurate since a higher amount of point correspondences (marker
 corners) are employed.

-The aruco module allows the use of Boards. The main class is the ```cv::aruco::Board``` class which defines the Board layout:
-@code{.cpp}
-class Board {
-public:
-    std::vector<std::vector<cv::Point3f> > objPoints;
-    cv::Ptr<cv::aruco::Dictionary> dictionary;
-    std::vector<int> ids;
-};
-@endcode
-
-A object of type ```Board``` has three parameters:
-- The ```objPoints``` structure is the list of corner positions in the 3d Board reference system, i.e. its layout.
-For each marker, its four corners are stored in the standard order, i.e. in clockwise order and starting
-with the top left corner.
-- The ```dictionary``` parameter indicates to which marker dictionary the Board markers belong to.
-- Finally, the ```ids``` structure indicates the identifiers of each of the markers in ```objPoints``` respect to the specified ```dictionary```.
-
 Board Detection
 -----

 A Board detection is similar to the standard marker detection. The only difference is in the pose estimation step.
 In fact, to use marker boards, a standard marker detection should be done before estimating the Board pose.

-The aruco module provides a specific function, ```estimatePoseBoard()```, to perform pose estimation for boards:
+To perform pose estimation for boards, you should use the ```#cv::solvePnP``` function, as shown below:

 @code{.cpp}
 cv::Mat inputImage;

-// camera parameters are read from somewhere
+// Camera parameters are read from somewhere
 cv::Mat cameraMatrix, distCoeffs;

 // You can read camera parameters from tutorial_camera_params.yml
-readCameraParameters(filename, cameraMatrix, distCoeffs); // This function is located in detect_board.cpp
+readCameraParameters(filename, cameraMatrix, distCoeffs); // This function is implemented in aruco_samples_utility.hpp

-// assume we have a function to create the board object
+// Assume we have a function to create the board object
 cv::Ptr<cv::aruco::Board> board = cv::aruco::Board::create();

 ...

 std::vector<int> markerIds;
 std::vector<std::vector<cv::Point2f>> markerCorners;

-cv::aruco::detectMarkers(inputImage, board.dictionary, markerCorners, markerIds);
+cv::aruco::DetectorParameters detectorParams = cv::aruco::DetectorParameters();
+cv::aruco::ArucoDetector detector(board.dictionary, detectorParams);
+detector.detectMarkers(inputImage, markerCorners, markerIds);
+
+cv::Vec3d rvec, tvec;

-// if at least one marker detected
+// If at least one marker detected
 if(markerIds.size() > 0) {
-    cv::Vec3d rvec, tvec;
-    int valid = cv::aruco::estimatePoseBoard(markerCorners, markerIds, board, cameraMatrix, distCoeffs, rvec, tvec);
+    // Get object and image points for the solvePnP function
+    cv::Mat objPoints, imgPoints;
+    board->matchImagePoints(markerCorners, markerIds, objPoints, imgPoints);
+
+    // Find pose
+    cv::solvePnP(objPoints, imgPoints, cameraMatrix, distCoeffs, rvec, tvec);
 }
 @endcode

-The parameters of estimatePoseBoard are:
+The parameters are:

-- ```markerCorners``` and ```markerIds```: structures of detected markers from ```detectMarkers()``` function.
+- ```objPoints```, ```imgPoints```: object and image points, matched with ```matchImagePoints```, which, in turn, takes as input ```markerCorners``` and ```markerIds``` (structures of detected markers from the ```detectMarkers()``` function).
 - ```board```: the ```Board``` object that defines the board layout and its ids
 - ```cameraMatrix``` and ```distCoeffs```: camera calibration parameters necessary for pose estimation.
 - ```rvec``` and ```tvec```: estimated pose of the Board. If not empty then treated as initial guess.
-- The function returns the total number of markers employed for estimating the board pose. Note that not all the
-markers provided in ```markerCorners``` and ```markerIds``` should be used, since only the markers whose ids are
-listed in the ```Board::ids``` structure are considered.

 The ```drawFrameAxes()``` function can be used to check the obtained pose. For instance:
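The tutorial's own snippet for that falls outside this hunk; as a minimal sketch grounded in the code above (0.1 is the drawn axis length, in the same units as the board definition):

@code{.cpp}
// Draw the board frame only when the pose was actually estimated,
// i.e. at least one board marker was matched.
if((int)objPoints.total() / 4 > 0)
    cv::drawFrameAxes(inputImage, cameraMatrix, distCoeffs, rvec, tvec, 0.1);
@endcode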
@@ -135,8 +131,7 @@ in any unit, having in mind that the estimated pose for this board will be measu
 - Finally, the dictionary of the markers is provided.

 So, this board will be composed by 5x7=35 markers. The ids of each of the markers are assigned, by default, in ascending
-order starting on 0, so they will be 0, 1, 2, ..., 34. This can be easily customized by accessing to the ids vector
-through ```board.ids```, like in the ```Board``` parent class.
+order starting on 0, so they will be 0, 1, 2, ..., 34.

 After creating a Grid Board, we probably want to print it and use it. A function to generate the image
 of a ```GridBoard``` is provided in ```cv::aruco::GridBoard::generateImage()```. For example:
@@ -144,7 +139,7 @@ of a ```GridBoard``` is provided in ```cv::aruco::GridBoard::generateImage()```.
 @code{.cpp}
 cv::Ptr<cv::aruco::GridBoard> board = cv::aruco::GridBoard::create(5, 7, 0.04, 0.01, dictionary);
 cv::Mat boardImage;
-board->draw( cv::Size(600, 500), boardImage, 10, 1 );
+board->generateImage( cv::Size(600, 500), boardImage, 10, 1 );
 @endcode

 - The first parameter is the size of the output image in pixels. In this case 600x500 pixels. If this is not proportional
@@ -173,13 +168,17 @@ Finally, a full example of board detection:

     cv::Mat cameraMatrix, distCoeffs;
     // You can read camera parameters from tutorial_camera_params.yml
-    readCameraParameters(filename, cameraMatrix, distCoeffs); // This function is located in detect_board.cpp
+    readCameraParameters(filename, cameraMatrix, distCoeffs); // This function is implemented in aruco_samples_utility.hpp

-    cv::Ptr<cv::aruco::Dictionary> dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
-    // To use tutorial sample, you need read custome dictionaty from tutorial_dict.yml
-    readDictionary(filename, dictionary); // This function is located in detect_board.cpp
+    cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
+    // To use the tutorial sample, you need to read a custom dictionary from tutorial_dict.yml
+    readDictionary(filename, dictionary); // This function is implemented in opencv/modules/objdetect/src/aruco/aruco_dictionary.cpp

     cv::Ptr<cv::aruco::GridBoard> board = cv::aruco::GridBoard::create(5, 7, 0.04, 0.01, dictionary);
+    cv::aruco::DetectorParameters detectorParams = cv::aruco::DetectorParameters();
+    cv::aruco::ArucoDetector detector(dictionary, detectorParams);

     while (inputVideo.grab()) {
         cv::Mat image, imageCopy;
         inputVideo.retrieve(image);
@@ -187,17 +186,26 @@ Finally, a full example of board detection:

         std::vector<int> ids;
         std::vector<std::vector<cv::Point2f> > corners;

-        cv::aruco::detectMarkers(image, dictionary, corners, ids);
+        // Detect markers
+        detector.detectMarkers(image, corners, ids);

-        // if at least one marker detected
+        // If at least one marker detected
         if (ids.size() > 0) {
             cv::aruco::drawDetectedMarkers(imageCopy, corners, ids);

             cv::Vec3d rvec, tvec;
-            int valid = estimatePoseBoard(corners, ids, board, cameraMatrix, distCoeffs, rvec, tvec);

-            // if at least one board marker detected
-            if(valid > 0)
+            // Get object and image points for the solvePnP function
+            cv::Mat objPoints, imgPoints;
+            board->matchImagePoints(corners, ids, objPoints, imgPoints);
+
+            // Find pose
+            cv::solvePnP(objPoints, imgPoints, cameraMatrix, distCoeffs, rvec, tvec);
+
+            // If at least one board marker detected
+            markersOfBoardDetected = (int)objPoints.total() / 4;
+            if(markersOfBoardDetected > 0)
                 cv::drawFrameAxes(imageCopy, cameraMatrix, distCoeffs, rvec, tvec, 0.1);
         }
@@ -269,13 +277,17 @@ internal bits are not analyzed at all and only the corner distances are evaluate

 This is an example of using the ```refineDetectedMarkers()``` function:

 @code{.cpp}
-cv::Ptr<cv::aruco::Dictionary> dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
+cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
 cv::Ptr<cv::aruco::GridBoard> board = cv::aruco::GridBoard::create(5, 7, 0.04, 0.01, dictionary);
+cv::aruco::DetectorParameters detectorParams = cv::aruco::DetectorParameters();
+cv::aruco::ArucoDetector detector(dictionary, detectorParams);

 std::vector<int> markerIds;
 std::vector<std::vector<cv::Point2f>> markerCorners, rejectedCandidates;

-cv::aruco::detectMarkers(inputImage, dictionary, markerCorners, markerIds, cv::aruco::DetectorParameters(), rejectedCandidates);
-cv::aruco::refineDetectedMarkers(inputImage, board, markerCorners, markerIds, rejectedCandidates);
+detector.detectMarkers(inputImage, markerCorners, markerIds, rejectedCandidates);
+detector.refineDetectedMarkers(inputImage, board, markerCorners, markerIds, rejectedCandidates);

 // After calling this function, if any new marker has been detected it will be removed from rejectedCandidates and included
 // at the end of markerCorners and markerIds
 @endcode

modules/aruco/tutorials/aruco_detection/aruco_detection.markdown

@@ -14,11 +14,7 @@ to obtain the camera pose. Also, the inner binary codification makes them specia
 the possibility of applying error detection and correction techniques.

 The aruco module is based on the [ArUco library](http://www.uco.es/investiga/grupos/ava/node/26),
-a popular library for detection of square fiducial markers developed by Rafael Muñoz and Sergio Garrido:
-
-> S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, and M. J. Marín-Jiménez. 2014.
-> "Automatic generation and detection of highly reliable fiducial markers under occlusion".
-> Pattern Recogn. 47, 6 (June 2014), 2280-2292. DOI=10.1016/j.patcog.2014.01.005
+a popular library for detection of square fiducial markers developed by Rafael Muñoz and Sergio Garrido @cite Aruco2014.

 The aruco functionalities are included in:
 @code{.cpp}
@@ -73,18 +69,18 @@ For example, lets analyze the following call:

 @code{.cpp}
 cv::Mat markerImage;
-cv::Ptr<cv::aruco::Dictionary> dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
-cv::aruco::drawMarker(dictionary, 23, 200, markerImage, 1);
+cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
+cv::aruco::generateImageMarker(dictionary, 23, 200, markerImage, 1);
 cv::imwrite("marker23.png", markerImage);
 @endcode

 First, the `Dictionary` object is created by choosing one of the predefined dictionaries in the aruco module.
-Concretely, this dictionary is composed of 250 markers and a marker size of 6x6 bits (`DICT_6X6_250`).
+Concretely, this dictionary is composed of 250 markers and a marker size of 6x6 bits (`#cv::aruco::DICT_6X6_250`).

-The parameters of `drawMarker` are:
+The parameters of `generateImageMarker` are:

 - The first parameter is the `Dictionary` object previously created.
-- The second parameter is the marker id, in this case the marker 23 of the dictionary `DICT_6X6_250`.
+- The second parameter is the marker id, in this case the marker 23 of the dictionary `#cv::aruco::DICT_6X6_250`.
 Note that each dictionary is composed of a different number of markers. In this case, the valid ids
 go from 0 to 249. Any specific id out of the valid range will produce an exception.
 - The third parameter, 200, is the size of the output marker image. In this case, the output image
@@ -167,22 +163,25 @@ cv::Mat inputImage;
 ...

 std::vector<int> markerIds;
 std::vector<std::vector<cv::Point2f>> markerCorners, rejectedCandidates;

-cv::Ptr<cv::aruco::DetectorParameters> parameters = cv::aruco::DetectorParameters::create();
-cv::Ptr<cv::aruco::Dictionary> dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
-cv::aruco::detectMarkers(inputImage, dictionary, markerCorners, markerIds, parameters, rejectedCandidates);
+cv::aruco::DetectorParameters detectorParams = cv::aruco::DetectorParameters();
+cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
+cv::aruco::ArucoDetector detector(dictionary, detectorParams);
+detector.detectMarkers(inputImage, markerCorners, markerIds, rejectedCandidates);
 @endcode

+When you create an `#cv::aruco::ArucoDetector` object, you need to pass the following parameters to the constructor:
+
+- A dictionary object, in this case one of the predefined dictionaries (`#cv::aruco::DICT_6X6_250`).
+- An object of type `#cv::aruco::DetectorParameters`. This object includes all parameters that can be customized during the detection process. These parameters will be explained in the next section.
+
 The parameters of `detectMarkers` are:

 - The first parameter is the image containing the markers to be detected.
-- The second parameter is the dictionary object, in this case one of the predefined dictionaries (`DICT_6X6_250`).
 - The detected markers are stored in the `markerCorners` and `markerIds` structures:
     - `markerCorners` is the list of corners of the detected markers. For each marker, its four
     corners are returned in their original order (which is clockwise starting with top left). So, the first corner is the top left corner, followed by the top right, bottom right and bottom left.
     - `markerIds` is the list of ids of each of the detected markers in `markerCorners`.
     Note that the returned `markerCorners` and `markerIds` vectors have the same size.
-- The fourth parameter is the object of type `DetectionParameters`. This object includes all the
-parameters that can be customized during the detection process. These parameters are explained in the next section.
 - The final parameter, `rejectedCandidates`, is a returned list of marker candidates, i.e.
 shapes that were found and considered but did not contain a valid marker. Each candidate is also
 defined by its four corners, and its format is the same as the `markerCorners` parameter. This
@@ -211,15 +210,19 @@ camera:

 @code{.cpp}
 cv::VideoCapture inputVideo;
 inputVideo.open(0);

-cv::Ptr<cv::aruco::Dictionary> dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
+cv::aruco::DetectorParameters detectorParams = cv::aruco::DetectorParameters();
+cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
+cv::aruco::ArucoDetector detector(dictionary, detectorParams);

 while (inputVideo.grab()) {
     cv::Mat image, imageCopy;
     inputVideo.retrieve(image);
     image.copyTo(imageCopy);

     std::vector<int> ids;
-    std::vector<std::vector<cv::Point2f> > corners;
-    cv::aruco::detectMarkers(image, dictionary, corners, ids);
+    std::vector<std::vector<cv::Point2f>> corners, rejected;
+    detector.detectMarkers(image, corners, ids, rejected);

     // if at least one marker detected
     if (ids.size() > 0)
@@ -248,15 +251,15 @@ Parameters for `detect_markers.cpp`:

 Pose Estimation
 ------

-The next thing you probably want to do after detecting the markers is to obtain the camera pose from them.
+The next thing you'll probably want to do after detecting the markers is to use them to get the camera pose.

-To perform camera pose estimation you need to know the calibration parameters of your camera. These are
+To perform camera pose estimation, you need to know your camera's calibration parameters. These are
 the camera matrix and distortion coefficients. If you do not know how to calibrate your camera, you can
 take a look at the `calibrateCamera()` function and the Calibration tutorial of OpenCV. You can also calibrate your camera using the aruco module
 as explained in the **Calibration with ArUco and ChArUco** tutorial. Note that this only needs to be done once unless the
 camera optics are modified (for instance changing its focus).

-In the end, what you get after the calibration is the camera matrix: a matrix of 3x3 elements with the
+As a result of the calibration, you get a camera matrix: a matrix of 3x3 elements with the
 focal distances and the camera center coordinates (a.k.a intrinsic parameters), and the distortion
 coefficients: a vector of 5 or more elements that models the distortion produced by your camera.
@@ -264,19 +267,25 @@ When you estimate the pose with ArUco markers, you can estimate the pose of each

 If you want to estimate one pose from a set of markers, use ArUco Boards (see the **Detection of ArUco
 Boards** tutorial). Using ArUco boards instead of single markers allows some markers to be occluded.

-The camera pose with respect to a marker is the 3d transformation from the marker coordinate system to the
-camera coordinate system. It is specified by rotation and translation vectors (see `solvePnP()` function for more
+The camera pose relative to the marker is a 3d transformation from the marker coordinate system to the
+camera coordinate system. It is specified by rotation and translation vectors (see the `#cv::solvePnP()` function for more
 information).

-The aruco module provides a function to estimate the poses of all the detected markers:
-
 @code{.cpp}
 cv::Mat cameraMatrix, distCoeffs;
 // You can read camera parameters from tutorial_camera_params.yml
-readCameraParameters(filename, cameraMatrix, distCoeffs); // This function is located in detect_markers.cpp
+readCameraParameters(cameraParamsFilename, cameraMatrix, distCoeffs); // This function is implemented in aruco_samples_utility.hpp

 std::vector<cv::Vec3d> rvecs, tvecs;
-cv::aruco::estimatePoseSingleMarkers(markerCorners, 0.05, cameraMatrix, distCoeffs, rvecs, tvecs);
+
+// Set coordinate system
+cv::Mat objPoints(4, 1, CV_32FC3);
+...
+
+// Calculate pose for each marker
+for (int i = 0; i < nMarkers; i++) {
+    solvePnP(objPoints, corners.at(i), cameraMatrix, distCoeffs, rvecs.at(i), tvecs.at(i));
+}
 @endcode
- The `markerCorners` parameter is the vector of marker corners returned by the `detectMarkers()` function. - The `markerCorners` parameter is the vector of marker corners returned by the `detectMarkers()` function.
@@ -316,10 +325,21 @@ cv::VideoCapture inputVideo;
 inputVideo.open(0);

 cv::Mat cameraMatrix, distCoeffs;
+float markerLength = 0.05;
+
 // You can read camera parameters from tutorial_camera_params.yml
-readCameraParameters(filename, cameraMatrix, distCoeffs); // This function is located in detect_markers.cpp
+readCameraParameters(cameraParamsFilename, cameraMatrix, distCoeffs); // This function is implemented in aruco_samples_utility.hpp

+// Set coordinate system
+cv::Mat objPoints(4, 1, CV_32FC3);
+objPoints.ptr<cv::Vec3f>(0)[0] = cv::Vec3f(-markerLength/2.f, markerLength/2.f, 0);
+objPoints.ptr<cv::Vec3f>(0)[1] = cv::Vec3f(markerLength/2.f, markerLength/2.f, 0);
+objPoints.ptr<cv::Vec3f>(0)[2] = cv::Vec3f(markerLength/2.f, -markerLength/2.f, 0);
+objPoints.ptr<cv::Vec3f>(0)[3] = cv::Vec3f(-markerLength/2.f, -markerLength/2.f, 0);
+
-cv::Ptr<cv::aruco::Dictionary> dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
+cv::aruco::DetectorParameters detectorParams = cv::aruco::DetectorParameters();
+cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
+aruco::ArucoDetector detector(dictionary, detectorParams);

 while (inputVideo.grab()) {
     cv::Mat image, imageCopy;
@@ -328,19 +348,27 @@ while (inputVideo.grab()) {

     std::vector<int> ids;
     std::vector<std::vector<cv::Point2f>> corners;

-    cv::aruco::detectMarkers(image, dictionary, corners, ids);
+    detector.detectMarkers(image, corners, ids);

-    // if at least one marker detected
+    // If at least one marker detected
     if (ids.size() > 0) {
         cv::aruco::drawDetectedMarkers(imageCopy, corners, ids);

-        std::vector<cv::Vec3d> rvecs, tvecs;
-        cv::aruco::estimatePoseSingleMarkers(corners, 0.05, cameraMatrix, distCoeffs, rvecs, tvecs);
+        int nMarkers = corners.size();
+        std::vector<cv::Vec3d> rvecs(nMarkers), tvecs(nMarkers);

-        // draw axis for each marker
-        for(int i=0; i<ids.size(); i++)
+        // Calculate pose for each marker
+        for (int i = 0; i < nMarkers; i++) {
+            solvePnP(objPoints, corners.at(i), cameraMatrix, distCoeffs, rvecs.at(i), tvecs.at(i));
+        }
+
+        // Draw axis for each marker
+        for(unsigned int i = 0; i < ids.size(); i++) {
             cv::drawFrameAxes(imageCopy, cameraMatrix, distCoeffs, rvecs[i], tvecs[i], 0.1);
+        }
     }

+    // Show resulting image and close window
     cv::imshow("out", imageCopy);
     char key = (char) cv::waitKey(waitTime);
     if (key == 27)
@@ -363,8 +391,9 @@ Note: The samples now take input from the command line using cv::CommandLinePars
 @endcode

 Parameters for `detect_markers.cpp`:

 @snippet samples/detect_markers.cpp aruco_detect_markers_keys

 @note To work with examples from the tutorial, you can use camera parameters from `tutorial_camera_params.yml`.
-An example of use in `detect.cpp`.
+An example of use in `detect_markers.cpp`.
@@ -373,12 +402,12 @@ Selecting a dictionary

 The aruco module provides the `Dictionary` class to represent a dictionary of markers.

-In addition to the marker size and the number of markers in the dictionary, there is another important dictionary
-parameter, the inter-marker distance. The inter-marker distance is the minimum distance among its markers
-and it determines the error detection and correction capabilities of the dictionary.
+In addition to the marker size and the number of markers in the dictionary, there is another important parameter of the dictionary:
+the inter-marker distance. The inter-marker distance is the minimum distance between dictionary markers,
+and it determines the dictionary's ability to detect and correct errors.

-In general, lower dictionary sizes and higher marker sizes increase the inter-marker distance and
-vice-versa. However, the detection of markers with higher sizes is more complex, due to the higher
+In general, smaller dictionary sizes and larger marker sizes increase the inter-marker distance and
+vice versa. However, the detection of markers with larger sizes is more difficult due to the higher
 number of bits that need to be extracted from the image.

 For instance, if you need only 10 markers in your application, it is better to use a dictionary composed only of those 10 markers than using a dictionary composed of 1000 markers. The reason is that
@@ -390,29 +419,28 @@ you can increase your system robustness:

 ### Predefined dictionaries

-This is the easiest way to select a dictionary. The aruco module includes a set of predefined dictionaries
-in a variety of marker sizes and number of markers. For instance:
+This is the easiest way to select a dictionary. The aruco module includes a set of predefined
+dictionaries in a variety of marker sizes and number of markers. For instance:

 @code{.cpp}
-cv::Ptr<cv::aruco::Dictionary> dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
+cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
 @endcode

-`DICT_6X6_250` is an example of predefined dictionary of markers with 6x6 bits and a total of 250
+`#cv::aruco::DICT_6X6_250` is an example of predefined dictionary of markers with 6x6 bits and a total of 250
 markers.

 From all the provided dictionaries, it is recommended to choose the smallest one that fits your application.
-For instance, if you need 200 markers of 6x6 bits, it is better to use `DICT_6X6_250` than `DICT_6X6_1000`.
+For instance, if you need 200 markers of 6x6 bits, it is better to use `#cv::aruco::DICT_6X6_250` than `#cv::aruco::DICT_6X6_1000`.
 The smaller the dictionary, the higher the inter-marker distance.

 The list of available predefined dictionaries can be found in the documentation for the `PredefinedDictionaryType` enum.

 ### Automatic dictionary generation

-The dictionary can be generated automatically to adjust to the desired number of markers and bits, so that
-the inter-marker distance is optimized:
+A dictionary can be generated automatically to adjust the desired number of markers and bits to optimize the inter-marker distance:

 @code{.cpp}
-cv::Ptr<cv::aruco::Dictionary> dictionary = cv::aruco::generateCustomDictionary(36, 5);
+cv::aruco::Dictionary dictionary = cv::aruco::extendDictionary(36, 5);
 @endcode

 This will generate a customized dictionary composed of 36 markers of 5x5 bits. The process can take several
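For reference, `extendDictionary()` can also grow an existing dictionary rather than start from scratch; the optional base-dictionary argument below is an assumption about the signature, so check the `cv::aruco::extendDictionary` documentation before relying on it:

@code{.cpp}
// Hypothetical usage: take a predefined 5x5 dictionary as the base and
// extend it to 36 markers while keeping the inter-marker distance high.
cv::aruco::Dictionary base = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_5X5_50);
cv::aruco::Dictionary extended = cv::aruco::extendDictionary(36, 5, base);
@endcode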
@@ -420,20 +448,19 @@ seconds, depending on the parameters (it is slower for larger dictionaries and h

 ### Manual dictionary generation

-Finally, the dictionary can be configured manually, so that any codification can be employed. To do that,
+Finally, the dictionary can be configured manually, so that any encoding can be used. To do that,
 the `Dictionary` object parameters need to be assigned manually. It must be noted that, unless you have
 a special reason to do this manually, it is preferable to use one of the previous alternatives.

 The `Dictionary` parameters are:

 @code{.cpp}
 class Dictionary {
     public:

-    Mat bytesList;
-    int markerSize;
+    cv::Mat bytesList;         // marker code information
+    int markerSize;            // number of bits per dimension
     int maxCorrectionBits;     // maximum number of bits that can be corrected

     ...
@@ -454,19 +481,22 @@ For example:

 @code{.cpp}
 cv::aruco::Dictionary dictionary;

-// markers of 6x6 bits
+// Markers of 6x6 bits
 dictionary.markerSize = 6;

-// maximum number of bit corrections
+// Maximum number of bit corrections
 dictionary.maxCorrectionBits = 3;

-// lets create a dictionary of 100 markers
-for(int i=0; i<100; i++)
+// Let's create a dictionary of 100 markers
+for(int i = 0; i < 100; i++)
 {
-    // assume generateMarkerBits() generates a new marker in binary format, so that
+    // Assume generateMarkerBits() generates a new marker in binary format, so that
     // markerBits is a 6x6 matrix of CV_8UC1 type, only containing 0s and 1s
     cv::Mat markerBits = generateMarkerBits();
     cv::Mat markerCompressed = cv::aruco::Dictionary::getByteListFromBits(markerBits);

-    // add the marker as a new row
+    // Add the marker as a new row
     dictionary.bytesList.push_back(markerCompressed);
 }
 @endcode
@@ -477,7 +507,7 @@ For example:

 Detector Parameters
 ------

-One of the parameters of `detectMarkers()` function is a `DetectorParameters` object. This object
+One of the parameters of `ArucoDetector` is a `DetectorParameters` object. This object
 includes all the options that can be customized during the marker detection process.

 This section describes each detector parameter. The parameters can be classified depending on
@@ -485,7 +515,7 @@ the process in which they’re involved:

 ### Thresholding

-One of the first steps of the marker detection process is an adaptive thresholding of the input image.
+One of the first steps in the marker detection process is adaptive thresholding of the input image.

 For instance, the thresholded image for the sample image used above is:
@@ -506,12 +536,12 @@ For instance, for the values `adaptiveThreshWinSizeMin` = 5 and `adaptiveThreshW
 `adaptiveThreshWinSizeStep` = 4, there will be 5 thresholding steps with window sizes 5, 9, 13, 17 and 21.
 On each thresholding image, marker candidates will be extracted.

-Low values of window size can ‘break’ the marker border if the marker size is too large, causing it to not be detected, as in the following image:
+Low values of window size can "break" the marker border if the marker size is too large, causing it to not be detected, as in the following image:

 ![Broken marker image](images/singlemarkersbrokenthresh.png)

-On the other hand, values that are too high can produce the same effect if the markers are too small, and it can also
-reduce the performance. Moreover the process would tend to a global thresholding, losing the adaptive benefits.
+On the other hand, too large values can produce the same effect if the markers are too small, and can also
+reduce the performance. Moreover the process will tend to global thresholding, resulting in a loss of adaptive benefits.

 The simplest case is using the same value for `adaptiveThreshWinSizeMin` and
 `adaptiveThreshWinSizeMax`, which produces a single thresholding step. However, it is usually better to use a
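To make the window-size discussion concrete, a minimal sketch configuring these fields on `cv::aruco::DetectorParameters` with the values from the text's example (assuming `dictionary` from the earlier snippets):

@code{.cpp}
cv::aruco::DetectorParameters detectorParams = cv::aruco::DetectorParameters();
// 5 thresholding steps with window sizes 5, 9, 13, 17 and 21
detectorParams.adaptiveThreshWinSizeMin = 5;
detectorParams.adaptiveThreshWinSizeMax = 21;
detectorParams.adaptiveThreshWinSizeStep = 4;
cv::aruco::ArucoDetector detector(dictionary, detectorParams);
@endcode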
@@ -556,8 +586,8 @@ For instance, a image with size 640x480 and a minimum relative marker perimeter
 to a minimum marker perimeter of 640x0.05 = 32 pixels, since 640 is the maximum dimension of the
 image. The same applies for the `maxMarkerPerimeterRate` parameter.

-If the `minMarkerPerimeterRate` is too low, it can penalize considerably the detection performance since
-many more contours would be considered for future stages.
+If the `minMarkerPerimeterRate` is too low, detection performance can be significantly reduced,
+as many more contours will be considered for future stages.
 This penalization is not so noticeable for the `maxMarkerPerimeterRate` parameter, since there are
 usually many more small contours than big contours.
 A `minMarkerPerimeterRate` value of 0 and a `maxMarkerPerimeterRate` value of 4 (or more) will be
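A matching sketch for the perimeter filter, reusing the `detectorParams` object from the previous example (0.05 and 4 are the values discussed in the text):

@code{.cpp}
// Candidates whose perimeter is below 5% or above 400% of the image's
// maximum dimension are discarded before bit extraction.
detectorParams.minMarkerPerimeterRate = 0.05;
detectorParams.maxMarkerPerimeterRate = 4.0;
@endcode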
@@ -623,8 +653,8 @@ Default value:

 After candidate detection, the bits of each candidate are analyzed in order to determine if they
 are markers or not.

-Before analyzing the binary code itself, the bits need to be extracted. To do so, the perspective
-distortion is removed and the resulting image is thresholded using Otsu threshold to separate
+Before analyzing the binary code itself, the bits need to be extracted. To do this, perspective
+distortion is corrected and the resulting image is thresholded using Otsu threshold to separate
 black and white pixels.

 This is an example of the image obtained after removing the perspective distortion of a marker:
@@ -663,7 +693,7 @@ Default value:

 #### perspectiveRemovePixelPerCell

-This parameter determines the number of pixels (per cell) in the obtained image after removing perspective
+This parameter determines the number of pixels (per cell) in the obtained image after correcting perspective
 distortion (including the border). This is the size of the red squares in the image above.

 For instance, let’s assume we are dealing with markers of 5x5 bits and border size of 1 bit
@@ -687,7 +717,7 @@ not recommended to consider all the cell pixels. Instead it is better to ignore
 margins of the cells.

 The reason for this is that, after removing the perspective distortion, the cells’ colors are, in general, not
-perfectly separated and white cells can invade some pixels of black cells (and vice-versa). Thus, it is
+perfectly separated and white cells can invade some pixels of black cells (and vice versa). Thus, it is
 better to ignore some pixels just to avoid counting erroneous pixels.

 For instance, in the following image:
@@ -748,8 +778,9 @@ be accurate, for instance for pose estimation. It is usually a time-consuming st

 #### cornerRefinementMethod

-This parameter determines whether the corner subpixel process is performed or not and which method to use if it is being performed. It can be disabled
-if accurate corners are not necessary. Possible values are `CORNER_REFINE_NONE`, `CORNER_REFINE_SUBPIX`, `CORNER_REFINE_CONTOUR`, and `CORNER_REFINE_APRILTAG`.
+This parameter determines whether the corner subpixel process is performed or not and which method to use
+if it is being performed. It can be disabled if accurate corners are not necessary. Possible values are
+`CORNER_REFINE_NONE`, `CORNER_REFINE_SUBPIX`, `CORNER_REFINE_CONTOUR`, and `CORNER_REFINE_APRILTAG`.

 Default value:
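A one-line sketch enabling the subpixel method on the same `detectorParams` object:

@code{.cpp}
// Slower per frame, but markedly better corners for pose estimation.
detectorParams.cornerRefinementMethod = cv::aruco::CORNER_REFINE_SUBPIX;
@endcode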
@@ -759,9 +790,8 @@ Default value:

 This parameter determines the window size of the subpixel refinement process.

-High values can produce the effect that close image corners are included in the window region, so that the
-marker corner moves to a different and wrong location during the process. Furthermore
-it can affect performance.
+High values can cause close corners of the image to be included in the window area, so that the corner
+of the marker moves to a different and incorrect location during the process. Also, it may affect performance.

 Default value:
@@ -773,8 +803,8 @@ These two parameters determine the stop criteria of the subpixel refinement proc
 `cornerRefinementMaxIterations` indicates the maximum number of iterations and
 `cornerRefinementMinAccuracy` the minimum error value before stopping the process.

-If the number of iterations is too high, it can affect the performance. On the other hand, if it is
-too low, it can produce a poor subpixel refinement.
+If the number of iterations is too high, it may affect the performance. On the other hand, if it is
+too low, it can result in poor subpixel refinement.

 Default values:
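A sketch tuning the refinement window and stop criteria together (field names per `cv::aruco::DetectorParameters`; the values are illustrative, not recommendations from the tutorial):

@code{.cpp}
detectorParams.cornerRefinementWinSize = 5;        // search window around each corner, in pixels
detectorParams.cornerRefinementMaxIterations = 30; // hard cap on refinement iterations
detectorParams.cornerRefinementMinAccuracy = 0.1;  // stop once the error drops below this
@endcode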
