Added aruco module

pull/346/head
S. Garrido 9 years ago
parent e18103e2e3
commit 7c08b2f7db
  1. modules/aruco/CMakeLists.txt (+2)
  2. modules/aruco/README.md (+2)
  3. modules/aruco/include/opencv2/aruco.hpp (+502)
  4. modules/aruco/include/opencv2/aruco/charuco.hpp (+322)
  5. modules/aruco/include/opencv2/aruco/dictionary.hpp (+157)
  6. modules/aruco/samples/calibrate_camera.cpp (+311)
  7. modules/aruco/samples/calibrate_camera_charuco.cpp (+375)
  8. modules/aruco/samples/create_board.cpp (+143)
  9. modules/aruco/samples/create_board_charuco.cpp (+141)
  10. modules/aruco/samples/create_diamond.cpp (+146)
  11. modules/aruco/samples/create_marker.cpp (+126)
  12. modules/aruco/samples/detect_board.cpp (+252)
  13. modules/aruco/samples/detect_board_charuco.cpp (+267)
  14. modules/aruco/samples/detect_diamonds.cpp (+272)
  15. modules/aruco/samples/detect_markers.cpp (+234)
  16. modules/aruco/samples/detector_params.yml (+17)
  17. modules/aruco/src/aruco.cpp (+1573)
  18. modules/aruco/src/charuco.cpp (+946)
  19. modules/aruco/src/dictionary.cpp (+473)
  20. modules/aruco/src/precomp.hpp (+49)
  21. modules/aruco/src/predefined_dictionaries.cpp (+20163)
  22. modules/aruco/test/test_arucodetection.cpp (+506)
  23. modules/aruco/test/test_boarddetection.cpp (+338)
  24. modules/aruco/test/test_charucodetection.cpp (+564)
  25. modules/aruco/test/test_main.cpp (+3)
  26. modules/aruco/test/test_precomp.hpp (+18)
  27. modules/aruco/tutorials/aruco_board_detection/aruco_board_detection.markdown (+260)
  28. modules/aruco/tutorials/aruco_board_detection/images/board.jpg (BIN)
  29. modules/aruco/tutorials/aruco_board_detection/images/gbaxis.png (BIN)
  30. modules/aruco/tutorials/aruco_board_detection/images/gbmarkersaxis.png (BIN)
  31. modules/aruco/tutorials/aruco_board_detection/images/gbocclusion.png (BIN)
  32. modules/aruco/tutorials/aruco_board_detection/images/gboriginal.png (BIN)
  33. modules/aruco/tutorials/aruco_calibration/aruco_calibration.markdown (+102)
  34. modules/aruco/tutorials/aruco_calibration/images/arucocalibration.png (BIN)
  35. modules/aruco/tutorials/aruco_calibration/images/charucocalibration.png (BIN)
  36. modules/aruco/tutorials/aruco_detection/aruco_detection.markdown (+721)
  37. modules/aruco/tutorials/aruco_detection/images/bitsextraction1.png (BIN)
  38. modules/aruco/tutorials/aruco_detection/images/bitsextraction2.png (BIN)
  39. modules/aruco/tutorials/aruco_detection/images/marker23.jpg (BIN)
  40. modules/aruco/tutorials/aruco_detection/images/markers.jpg (BIN)
  41. modules/aruco/tutorials/aruco_detection/images/removeperspective.png (BIN)
  42. modules/aruco/tutorials/aruco_detection/images/singlemarkersaxis.png (BIN)
  43. modules/aruco/tutorials/aruco_detection/images/singlemarkersbrokenthresh.png (BIN)
  44. modules/aruco/tutorials/aruco_detection/images/singlemarkersdetection.png (BIN)
  45. modules/aruco/tutorials/aruco_detection/images/singlemarkersoriginal.png (BIN)
  46. modules/aruco/tutorials/aruco_detection/images/singlemarkersrejected.png (BIN)
  47. modules/aruco/tutorials/aruco_detection/images/singlemarkersthresh.png (BIN)
  48. modules/aruco/tutorials/aruco_faq/aruco_faq.markdown (+149)
  49. modules/aruco/tutorials/charuco_detection/charuco_detection.markdown (+312)
  50. modules/aruco/tutorials/charuco_detection/images/charucoboard.jpg (BIN)
  51. modules/aruco/tutorials/charuco_detection/images/charucodefinition.png (BIN)
  52. modules/aruco/tutorials/charuco_detection/images/chaxis.png (BIN)
  53. modules/aruco/tutorials/charuco_detection/images/chcorners.png (BIN)
  54. modules/aruco/tutorials/charuco_detection/images/chocclusion.png (BIN)
  55. modules/aruco/tutorials/charuco_detection/images/choriginal.png (BIN)
  56. modules/aruco/tutorials/charuco_diamond_detection/charuco_diamond_detection.markdown (+159)
  57. modules/aruco/tutorials/charuco_diamond_detection/images/detecteddiamonds.png (BIN)
  58. modules/aruco/tutorials/charuco_diamond_detection/images/diamondmarker.png (BIN)
  59. modules/aruco/tutorials/charuco_diamond_detection/images/diamondmarkers.png (BIN)
  60. modules/aruco/tutorials/charuco_diamond_detection/images/diamondsaxis.png (BIN)
  61. modules/aruco/tutorials/table_of_content_aruco.markdown (+60)

@@ -0,0 +1,2 @@
set(the_description "ArUco Marker Detection")
ocv_define_module(aruco opencv_core opencv_imgproc opencv_calib3d WRAP python)

@@ -0,0 +1,2 @@
ArUco Marker Detection
======================

@@ -0,0 +1,502 @@
/*
By downloading, copying, installing or using the software you agree to this
license. If you do not agree to this license, do not download, install,
copy or use the software.
License Agreement
For Open Source Computer Vision Library
(3-clause BSD License)
Copyright (C) 2013, OpenCV Foundation, all rights reserved.
Third party copyrights are property of their respective owners.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the names of the copyright holders nor the names of the contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and
any express or implied warranties, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose are
disclaimed. In no event shall copyright holders or contributors be liable for
any direct, indirect, incidental, special, exemplary, or consequential damages
(including, but not limited to, procurement of substitute goods or services;
loss of use, data, or profits; or business interruption) however caused
and on any theory of liability, whether in contract, strict liability,
or tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of such damage.
*/
#ifndef __OPENCV_ARUCO_HPP__
#define __OPENCV_ARUCO_HPP__
#include <opencv2/core.hpp>
#include <vector>
#include "opencv2/aruco/dictionary.hpp"
/**
* @defgroup aruco ArUco Marker Detection
* This module is dedicated to square fiducial markers (also known as Augmented Reality Markers).
* These markers are useful for easy, fast and robust camera pose estimation.
*
* The main functionalities are:
* - Detection of markers in an image
* - Pose estimation from a single marker or from a board/set of markers
* - Detection of ChArUco boards for high subpixel accuracy
* - Camera calibration from both ArUco boards and ChArUco boards
* - Detection of ChArUco diamond markers
* The samples directory includes easy examples of how to use the module.
*
* The implementation is based on the ArUco Library by R. Muñoz-Salinas and S. Garrido-Jurado.
*
* @sa S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, and M. J. Marín-Jiménez. 2014.
* "Automatic generation and detection of highly reliable fiducial markers under occlusion".
* Pattern Recogn. 47, 6 (June 2014), 2280-2292. DOI=10.1016/j.patcog.2014.01.005
*
* @sa http://www.uco.es/investiga/grupos/ava/node/26
*
* This module has been originally developed by Sergio Garrido-Jurado as a project
* for Google Summer of Code 2015 (GSoC 15).
*
*
*/
namespace cv {
namespace aruco {
//! @addtogroup aruco
//! @{
/**
* @brief Parameters for the detectMarkers process:
* - adaptiveThreshWinSizeMin: minimum window size for adaptive thresholding before finding
* contours (default 3).
* - adaptiveThreshWinSizeMax: maximum window size for adaptive thresholding before finding
* contours (default 23).
* - adaptiveThreshWinSizeStep: increments from adaptiveThreshWinSizeMin to adaptiveThreshWinSizeMax
* during the thresholding (default 10).
* - adaptiveThreshConstant: constant for adaptive thresholding before finding contours (default 7).
* - minMarkerPerimeterRate: determines the minimum perimeter for a marker contour to be detected.
* It is defined as a rate with respect to the maximum dimension of the input image (default 0.03).
* - maxMarkerPerimeterRate: determines the maximum perimeter for a marker contour to be detected.
* It is defined as a rate with respect to the maximum dimension of the input image (default 4.0).
* - polygonalApproxAccuracyRate: minimum accuracy during the polygonal approximation process to
* determine which contours are squares.
* - minCornerDistanceRate: minimum distance between the corners of a detected marker, relative to
* its perimeter (default 0.05).
* - minDistanceToBorder: minimum distance of any corner to the image border for detected markers
* (in pixels) (default 3).
* - minMarkerDistanceRate: minimum mean distance between the corners of two markers for them to be
* considered similar, in which case the smaller one is removed. The rate is relative to the smaller
* perimeter of the two markers (default 0.05).
* - doCornerRefinement: whether to perform subpixel corner refinement.
* - cornerRefinementWinSize: window size for the corner refinement process (in pixels) (default 5).
* - cornerRefinementMaxIterations: maximum number of iterations in the stop criteria of the corner
* refinement process (default 30).
* - cornerRefinementMinAccuracy: minimum error in the stop criteria of the corner refinement
* process (default 0.1).
* - markerBorderBits: number of bits of the marker border, i.e. marker border width (default 1).
* - perspectiveRemovePixelPerCell: number of pixels (per dimension) sampled for each cell of the
* marker when removing the perspective (default 8).
* - perspectiveRemoveIgnoredMarginPerCell: width of the margin of pixels on each cell not
* considered for the determination of the cell bit. Represents the rate with respect to the total
* size of the cell, i.e. perspectiveRemovePixelPerCell (default 0.13).
* - maxErroneousBitsInBorderRate: maximum number of accepted erroneous bits in the border (i.e.
* number of allowed white bits in the border). Represented as a rate with respect to the total
* number of bits per marker (default 0.5).
* - minOtsuStdDev: minimum standard deviation of the pixel values during the decoding step for
* Otsu thresholding to be applied (otherwise, all bits are set to 0 or 1 depending on whether the
* mean is higher than 128 or not) (default 5.0).
* - errorCorrectionRate: error correction rate with respect to the maximum error correction
* capability of each dictionary (default 0.6).
*/
struct CV_EXPORTS DetectorParameters {
DetectorParameters();
int adaptiveThreshWinSizeMin;
int adaptiveThreshWinSizeMax;
int adaptiveThreshWinSizeStep;
double adaptiveThreshConstant;
double minMarkerPerimeterRate;
double maxMarkerPerimeterRate;
double polygonalApproxAccuracyRate;
double minCornerDistanceRate;
int minDistanceToBorder;
double minMarkerDistanceRate;
bool doCornerRefinement;
int cornerRefinementWinSize;
int cornerRefinementMaxIterations;
double cornerRefinementMinAccuracy;
int markerBorderBits;
int perspectiveRemovePixelPerCell;
double perspectiveRemoveIgnoredMarginPerCell;
double maxErroneousBitsInBorderRate;
double minOtsuStdDev;
double errorCorrectionRate;
};
/**
* @brief Basic marker detection
*
* @param image input image
* @param dictionary indicates the type of markers that will be searched
* @param corners vector of detected marker corners. For each marker, its four corners
* are provided, (e.g std::vector<std::vector<cv::Point2f> > ). For N detected markers,
* the dimensions of this array is Nx4. The order of the corners is clockwise.
* @param ids vector of identifiers of the detected markers. The identifier is of type int
* (e.g. std::vector<int>). For N detected markers, the size of ids is also N.
* The identifiers have the same order as the markers in the corners array.
* @param parameters marker detection parameters
* @param rejectedImgPoints contains the imgPoints of those squares whose inner code could not be
* correctly decoded. Useful for debugging purposes.
*
* Performs marker detection in the input image. Only markers included in the specific dictionary
* are searched. For each detected marker, it returns the 2D position of its corner in the image
* and its corresponding identifier.
* Note that this function does not perform pose estimation.
* @sa estimatePoseSingleMarkers, estimatePoseBoard
*
*/
CV_EXPORTS void detectMarkers(InputArray image, Dictionary dictionary, OutputArrayOfArrays corners,
OutputArray ids, DetectorParameters parameters = DetectorParameters(),
OutputArrayOfArrays rejectedImgPoints = noArray());
/**
* @brief Pose estimation for single markers
*
* @param corners vector of already detected markers corners. For each marker, its four corners
* are provided, (e.g std::vector<std::vector<cv::Point2f> > ). For N detected markers,
* the dimensions of this array should be Nx4. The order of the corners should be clockwise.
* @sa detectMarkers
* @param markerLength the length of the markers' side. The returned translation vectors will
* be in the same unit. Normally, the unit is meters.
* @param cameraMatrix input 3x3 floating-point camera matrix
* \f$A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$
* @param distCoeffs vector of distortion coefficients
* \f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6],[s_1, s_2, s_3, s_4]])\f$ of 4, 5, 8 or 12 elements
* @param rvecs array of output rotation vectors (@sa Rodrigues) (e.g. std::vector<cv::Mat>).
* Each element in rvecs corresponds to the marker at the same index in corners.
* @param tvecs array of output translation vectors (e.g. std::vector<cv::Mat>).
* Each element in tvecs corresponds to the marker at the same index in corners.
*
* This function receives the detected markers and returns their pose estimation with respect to
* the camera individually. So for each marker, one rotation and one translation vector is returned.
* The returned transformation is the one that transforms points from each marker coordinate system
* to the camera coordinate system.
* The marker coordinate system is centered on the middle of the marker, with the Z axis
* perpendicular to the marker plane.
* The coordinates of the four corners of the marker in its own coordinate system are:
* (-markerLength/2, markerLength/2, 0), (markerLength/2, markerLength/2, 0),
* (markerLength/2, -markerLength/2, 0), (-markerLength/2, -markerLength/2, 0)
*/
CV_EXPORTS void estimatePoseSingleMarkers(InputArrayOfArrays corners, float markerLength,
InputArray cameraMatrix, InputArray distCoeffs,
OutputArrayOfArrays rvecs, OutputArrayOfArrays tvecs);
/**
* @brief Board of markers
*
* A board is a set of markers in the 3D space with a common coordinate system.
* The most common form of a board of markers is a planar (2D) board, however any 3D layout can be used.
* A Board object is composed of:
* - The object points of the marker corners, i.e. their coordinates with respect to the board system.
* - The dictionary which indicates the type of markers of the board.
* - The identifiers of all the markers in the board.
*/
class CV_EXPORTS Board {
public:
// array of object points of all the marker corners in the board
// each marker includes its 4 corners, i.e. for M markers, the size is Mx4
std::vector< std::vector< Point3f > > objPoints;
// the dictionary of markers employed for this board
Dictionary dictionary;
// vector of the identifiers of the markers in the board (same size as objPoints)
// The identifiers refer to the board dictionary
std::vector< int > ids;
};
/**
* @brief Planar board with grid arrangement of markers
* The most common type of board. All markers are placed in the same plane in a grid arrangement.
* The board can be drawn using drawPlanarBoard() function (@sa drawPlanarBoard)
*/
class CV_EXPORTS GridBoard : public Board {
public:
/**
* @brief Draw a GridBoard
*
* @param outSize size of the output image in pixels.
* @param img output image with the board. The size of this image will be outSize
* and the board will be on the center, keeping the board proportions.
* @param marginSize minimum margins (in pixels) of the board in the output image
* @param borderBits width of the marker borders.
*
* This function returns the image of the GridBoard, ready to be printed.
*/
void draw(Size outSize, OutputArray img, int marginSize = 0, int borderBits = 1);
/**
* @brief Create a GridBoard object
*
* @param markersX number of markers in X direction
* @param markersY number of markers in Y direction
* @param markerLength marker side length (normally in meters)
* @param markerSeparation separation between two markers (same unit as markerLength)
* @param dictionary dictionary of markers indicating the type of markers.
* The first markersX*markersY markers in the dictionary are used.
* @return the output GridBoard object
*
* This function creates a GridBoard object given the number of markers in each direction and
* the marker size and marker separation.
*/
static GridBoard create(int markersX, int markersY, float markerLength, float markerSeparation,
Dictionary dictionary);
/**
* @brief Returns the grid size as (markersX, markersY)
*/
Size getGridSize() const { return Size(_markersX, _markersY); }
/**
* @brief Returns the marker side length
*/
float getMarkerLength() const { return _markerLength; }
/**
* @brief Returns the separation between markers
*/
float getMarkerSeparation() const { return _markerSeparation; }
private:
// number of markers in X and Y directions
int _markersX, _markersY;
// marker side length (normally in meters)
float _markerLength;
// separation between markers in the grid
float _markerSeparation;
};
/**
* @brief Pose estimation for a board of markers
*
* @param corners vector of already detected markers corners. For each marker, its four corners
* are provided, (e.g std::vector<std::vector<cv::Point2f> > ). For N detected markers, the
* dimensions of this array should be Nx4. The order of the corners should be clockwise.
* @param ids list of identifiers for each marker in corners
* @param board layout of markers in the board. The layout is composed by the marker identifiers
* and the positions of each marker corner in the board reference system.
* @param cameraMatrix input 3x3 floating-point camera matrix
* \f$A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$
* @param distCoeffs vector of distortion coefficients
* \f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6],[s_1, s_2, s_3, s_4]])\f$ of 4, 5, 8 or 12 elements
* @param rvec Output vector (e.g. cv::Mat) corresponding to the rotation vector of the board
* (@sa Rodrigues).
* @param tvec Output vector (e.g. cv::Mat) corresponding to the translation vector of the board.
*
* This function receives the detected markers and returns the pose of a marker board composed
* of those markers.
* A board of markers has a single world coordinate system which is defined by the board layout.
* The returned transformation is the one that transforms points from the board coordinate system
* to the camera coordinate system.
* Input markers that are not included in the board layout are ignored.
* The function returns the number of markers from the input employed for the board pose estimation.
* Note that a return value of 0 means the pose has not been estimated.
*/
CV_EXPORTS int estimatePoseBoard(InputArrayOfArrays corners, InputArray ids, const Board &board,
InputArray cameraMatrix, InputArray distCoeffs, OutputArray rvec,
OutputArray tvec);
/**
* @brief Try to find markers that were not detected, based on the already detected ones and the board layout
*
* @param image input image
* @param board layout of markers in the board.
* @param detectedCorners vector of already detected marker corners.
* @param detectedIds vector of already detected marker identifiers.
* @param rejectedCorners vector of rejected candidates during the marker detection process.
* @param cameraMatrix optional input 3x3 floating-point camera matrix
* \f$A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$
* @param distCoeffs optional vector of distortion coefficients
* \f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6],[s_1, s_2, s_3, s_4]])\f$ of 4, 5, 8 or 12 elements
* @param minRepDistance minimum distance between the corners of the rejected candidate and the
* reprojected marker in order to consider it as a correspondence.
* @param errorCorrectionRate rate of allowed erroneous bits respect to the error correction
* capability of the used dictionary. -1 ignores the error correction step.
* @param checkAllOrders Consider the four possible corner orders in the rejectedCorners array.
* If set to false, only the provided corner order is considered (default true).
* @param recoveredIdxs Optional array to return the indices of the recovered candidates in the
* original rejectedCorners array.
* @param parameters marker detection parameters
*
* This function tries to find markers that were not detected by the basic detectMarkers function.
* First, based on the currently detected markers and the board layout, the function interpolates
* the position of the missing markers. Then it tries to find correspondences between the reprojected
* markers and the rejected candidates based on the minRepDistance and errorCorrectionRate
* parameters.
* If camera parameters and distortion coefficients are provided, missing markers are reprojected
* using the projectPoints function. If not, missing marker projections are interpolated using a global
* homography, and all the marker corners in the board must have the same Z coordinate.
*/
CV_EXPORTS void refineDetectedMarkers(
InputArray image, const Board &board, InputOutputArrayOfArrays detectedCorners,
InputOutputArray detectedIds, InputOutputArray rejectedCorners,
InputArray cameraMatrix = noArray(), InputArray distCoeffs = noArray(),
float minRepDistance = 10.f, float errorCorrectionRate = 3.f, bool checkAllOrders = true,
OutputArray recoveredIdxs = noArray(), DetectorParameters parameters = DetectorParameters());
/**
* @brief Draw detected markers in image
*
* @param image input/output image. It must have 1 or 3 channels. The number of channels is not
* altered.
* @param corners positions of marker corners on input image.
* (e.g std::vector<std::vector<cv::Point2f> > ). For N detected markers, the dimensions of
* this array should be Nx4. The order of the corners should be clockwise.
* @param ids vector of identifiers for the markers in corners.
* Optional; if not provided, ids are not painted.
* @param borderColor color of marker borders. Rest of colors (text color and first corner color)
* are calculated based on this one to improve visualization.
*
* Given an array of detected marker corners and their corresponding ids, this function draws
* the markers in the image. The marker borders are painted, as well as the marker identifiers if provided.
* Useful for debugging purposes.
*/
CV_EXPORTS void drawDetectedMarkers(InputOutputArray image, InputArrayOfArrays corners,
InputArray ids = noArray(),
Scalar borderColor = Scalar(0, 255, 0));
/**
* @brief Draw coordinate system axis from pose estimation
*
* @param image input/output image. It must have 1 or 3 channels. The number of channels is not
* altered.
* @param cameraMatrix input 3x3 floating-point camera matrix
* \f$A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$
* @param distCoeffs vector of distortion coefficients
* \f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6],[s_1, s_2, s_3, s_4]])\f$ of 4, 5, 8 or 12 elements
* @param rvec rotation vector of the coordinate system that will be drawn. (@sa Rodrigues).
* @param tvec translation vector of the coordinate system that will be drawn.
* @param length length of the painted axis in the same unit as tvec (usually in meters)
*
* Given the pose estimation of a marker or board, this function draws the axis of the world
* coordinate system, i.e. the system centered on the marker/board. Useful for debugging purposes.
*/
CV_EXPORTS void drawAxis(InputOutputArray image, InputArray cameraMatrix, InputArray distCoeffs,
InputArray rvec, InputArray tvec, float length);
/**
* @brief Draw a canonical marker image
*
* @param dictionary dictionary of markers indicating the type of markers
* @param id identifier of the marker that will be returned. It has to be a valid id
* in the specified dictionary.
* @param sidePixels size of the image in pixels
* @param img output image with the marker
* @param borderBits width of the marker border.
*
* This function returns a marker image in its canonical form (i.e. ready to be printed)
*/
CV_EXPORTS void drawMarker(Dictionary dictionary, int id, int sidePixels, OutputArray img,
int borderBits = 1);
/**
* @brief Draw a planar board
*
* @param board layout of the board that will be drawn. The board should be planar,
* z coordinate is ignored
* @param outSize size of the output image in pixels.
* @param img output image with the board. The size of this image will be outSize
* and the board will be on the center, keeping the board proportions.
* @param marginSize minimum margins (in pixels) of the board in the output image
* @param borderBits width of the marker borders.
*
* This function returns the image of a planar board, ready to be printed. It assumes
* the specified Board layout is planar by ignoring the z coordinates of the object points.
*/
CV_EXPORTS void drawPlanarBoard(const Board &board, Size outSize, OutputArray img,
int marginSize = 0, int borderBits = 1);
/**
* @brief Calibrate a camera using aruco markers
*
* @param corners vector of detected marker corners in all frames.
* The corners should have the same format returned by detectMarkers (@sa detectMarkers).
* @param ids list of identifiers for each marker in corners
* @param counter number of markers in each frame so that corners and ids can be split
* @param board Marker Board layout
* @param imageSize Size of the image used only to initialize the intrinsic camera matrix.
* @param cameraMatrix Output 3x3 floating-point camera matrix
* \f$A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$ . If CV\_CALIB\_USE\_INTRINSIC\_GUESS
* and/or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of fx, fy, cx, cy must be
* initialized before calling the function.
* @param distCoeffs Output vector of distortion coefficients
* \f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6],[s_1, s_2, s_3, s_4]])\f$ of 4, 5, 8 or 12 elements
* @param rvecs Output vector of rotation vectors (see Rodrigues ) estimated for each board view
* (e.g. std::vector<cv::Mat>). That is, each k-th rotation vector together with the corresponding
* k-th translation vector (see the next output parameter description) brings the board pattern
* from the model coordinate space (in which object points are specified) to the world coordinate
* space, that is, a real position of the board pattern in the k-th pattern view (k=0.. *M* -1).
* @param tvecs Output vector of translation vectors estimated for each pattern view.
* @param flags Different flags for the calibration process (@sa calibrateCamera)
* @param criteria Termination criteria for the iterative optimization algorithm.
*
* This function calibrates a camera using an ArUco board. The function receives a list of
* detected markers from several views of the Board. The process is similar to the chessboard
* calibration in calibrateCamera(). The function returns the final re-projection error.
*/
CV_EXPORTS double calibrateCameraAruco(
InputArrayOfArrays corners, InputArray ids, InputArray counter, const Board &board,
Size imageSize, InputOutputArray cameraMatrix, InputOutputArray distCoeffs,
OutputArrayOfArrays rvecs = noArray(), OutputArrayOfArrays tvecs = noArray(), int flags = 0,
TermCriteria criteria = TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 30, DBL_EPSILON));
//! @}
}
}
#endif

@@ -0,0 +1,322 @@
/*
By downloading, copying, installing or using the software you agree to this
license. If you do not agree to this license, do not download, install,
copy or use the software.
License Agreement
For Open Source Computer Vision Library
(3-clause BSD License)
Copyright (C) 2013, OpenCV Foundation, all rights reserved.
Third party copyrights are property of their respective owners.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the names of the copyright holders nor the names of the contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and
any express or implied warranties, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose are
disclaimed. In no event shall copyright holders or contributors be liable for
any direct, indirect, incidental, special, exemplary, or consequential damages
(including, but not limited to, procurement of substitute goods or services;
loss of use, data, or profits; or business interruption) however caused
and on any theory of liability, whether in contract, strict liability,
or tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of such damage.
*/
#ifndef __OPENCV_CHARUCO_HPP__
#define __OPENCV_CHARUCO_HPP__
#include <opencv2/core.hpp>
#include <vector>
#include <opencv2/aruco.hpp>
namespace cv {
namespace aruco {
//! @addtogroup aruco
//! @{
/**
* @brief ChArUco board
* Specific class for ChArUco boards. A ChArUco board is a planar board where the markers are placed
* inside the white squares of a chessboard. The benefits of ChArUco boards is that they provide
* both, ArUco markers versatility and chessboard corner precision, which is important for
* calibration and pose estimation.
* This class also allows the easy creation and drawing of ChArUco boards.
*/
class CV_EXPORTS CharucoBoard : public Board {
public:
// vector of chessboard 3D corners precalculated
std::vector< Point3f > chessboardCorners;
// for each charuco corner, the indices of its nearest markers and, for each of
// those markers, its nearest corner
std::vector< std::vector< int > > nearestMarkerIdx;
std::vector< std::vector< int > > nearestMarkerCorners;
/**
* @brief Draw a ChArUco board
*
* @param outSize size of the output image in pixels.
* @param img output image with the board. The size of this image will be outSize
* and the board will be on the center, keeping the board proportions.
* @param marginSize minimum margins (in pixels) of the board in the output image
* @param borderBits width of the marker borders.
*
* This function returns the image of the ChArUco board, ready to be printed.
*/
void draw(Size outSize, OutputArray img, int marginSize = 0, int borderBits = 1);
/**
* @brief Create a CharucoBoard object
*
* @param squaresX number of chessboard squares in X direction
* @param squaresY number of chessboard squares in Y direction
* @param squareLength chessboard square side length (normally in meters)
* @param markerLength marker side length (same unit than squareLength)
* @param dictionary dictionary of markers indicating the type of markers.
* The first markers in the dictionary are used to fill the white chessboard squares.
* @return the output CharucoBoard object
*
* This function creates a CharucoBoard object given the number of squares in each direction
* and the size of the markers and chessboard squares.
*/
static CharucoBoard create(int squaresX, int squaresY, float squareLength, float markerLength,
Dictionary dictionary);
/**
 * @brief Returns the chessboard size (number of squares in X and Y directions)
 */
Size getChessboardSize() const { return Size(_squaresX, _squaresY); }
/**
 * @brief Returns the chessboard square side length
 */
float getSquareLength() const { return _squareLength; }
/**
 * @brief Returns the marker side length
 */
float getMarkerLength() const { return _markerLength; }
private:
void _getNearestMarkerCorners();
// number of chessboard squares in X and Y directions
int _squaresX, _squaresY;
// size of chessboard squares side (normally in meters)
float _squareLength;
// marker side length (normally in meters)
float _markerLength;
};
/**
* @brief Interpolate position of ChArUco board corners
* @param markerCorners vector of already detected markers corners. For each marker, its four
* corners are provided, (e.g std::vector<std::vector<cv::Point2f> > ). For N detected markers, the
* dimensions of this array should be Nx4. The order of the corners should be clockwise.
* @param markerIds list of identifiers for each marker in corners
 * @param image input image necessary for corner refinement. Note that markers are not detected and
* should be sent in corners and ids parameters.
* @param board layout of ChArUco board.
* @param charucoCorners interpolated chessboard corners
* @param charucoIds interpolated chessboard corners identifiers
* @param cameraMatrix optional 3x3 floating-point camera matrix
* \f$A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$
* @param distCoeffs optional vector of distortion coefficients
* \f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6],[s_1, s_2, s_3, s_4]])\f$ of 4, 5, 8 or 12 elements
*
 * This function receives the detected markers and returns the 2D position of the chessboard corners
 * from a ChArUco board using the detected ArUco markers. If camera parameters are provided,
 * the process is based on an approximated pose estimation; otherwise it is based on local homography.
* Only visible corners are returned. For each corner, its corresponding identifier is
* also returned in charucoIds.
* The function returns the number of interpolated corners.
*/
CV_EXPORTS int interpolateCornersCharuco(InputArrayOfArrays markerCorners, InputArray markerIds,
InputArray image, const CharucoBoard &board,
OutputArray charucoCorners, OutputArray charucoIds,
InputArray cameraMatrix = noArray(),
InputArray distCoeffs = noArray());
/**
 * @brief Pose estimation for a ChArUco board given some of its corners
* @param charucoCorners vector of detected charuco corners
* @param charucoIds list of identifiers for each corner in charucoCorners
* @param board layout of ChArUco board.
* @param cameraMatrix input 3x3 floating-point camera matrix
* \f$A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$
* @param distCoeffs vector of distortion coefficients
* \f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6],[s_1, s_2, s_3, s_4]])\f$ of 4, 5, 8 or 12 elements
* @param rvec Output vector (e.g. cv::Mat) corresponding to the rotation vector of the board
* (@sa Rodrigues).
* @param tvec Output vector (e.g. cv::Mat) corresponding to the translation vector of the board.
*
 * This function estimates the pose of a ChArUco board from some detected corners.
 * The function checks whether the input corners are enough and valid to perform pose estimation,
 * and returns true if the pose was estimated, false otherwise.
*/
CV_EXPORTS bool estimatePoseCharucoBoard(InputArray charucoCorners, InputArray charucoIds,
CharucoBoard &board, InputArray cameraMatrix,
InputArray distCoeffs, OutputArray rvec, OutputArray tvec);
/**
* @brief Draws a set of Charuco corners
* @param image input/output image. It must have 1 or 3 channels. The number of channels is not
* altered.
* @param charucoCorners vector of detected charuco corners
* @param charucoIds list of identifiers for each corner in charucoCorners
* @param cornerColor color of the square surrounding each corner
*
 * This function draws a set of detected ChArUco corners. If the identifier vector is provided, it also
* draws the id of each corner.
*/
CV_EXPORTS void drawDetectedCornersCharuco(InputOutputArray image, InputArray charucoCorners,
InputArray charucoIds = noArray(),
Scalar cornerColor = Scalar(255, 0, 0));
/**
* @brief Calibrate a camera using Charuco corners
*
* @param charucoCorners vector of detected charuco corners per frame
* @param charucoIds list of identifiers for each corner in charucoCorners per frame
 * @param board ChArUco board layout
* @param imageSize input image size
* @param cameraMatrix Output 3x3 floating-point camera matrix
* \f$A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$ . If CV\_CALIB\_USE\_INTRINSIC\_GUESS
* and/or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of fx, fy, cx, cy must be
* initialized before calling the function.
* @param distCoeffs Output vector of distortion coefficients
* \f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6],[s_1, s_2, s_3, s_4]])\f$ of 4, 5, 8 or 12 elements
* @param rvecs Output vector of rotation vectors (see Rodrigues ) estimated for each board view
 * (e.g. std::vector<cv::Mat>). That is, each k-th rotation vector together with the corresponding
* k-th translation vector (see the next output parameter description) brings the board pattern
* from the model coordinate space (in which object points are specified) to the world coordinate
* space, that is, a real position of the board pattern in the k-th pattern view (k=0.. *M* -1).
* @param tvecs Output vector of translation vectors estimated for each pattern view.
 * @param flags Different flags for the calibration process (@sa calibrateCamera)
* @param criteria Termination criteria for the iterative optimization algorithm.
*
* This function calibrates a camera using a set of corners of a Charuco Board. The function
 * receives a list of detected corners and their identifiers from several views of the board.
* The function returns the final re-projection error.
*/
CV_EXPORTS double calibrateCameraCharuco(
InputArrayOfArrays charucoCorners, InputArrayOfArrays charucoIds, const CharucoBoard &board,
Size imageSize, InputOutputArray cameraMatrix, InputOutputArray distCoeffs,
OutputArrayOfArrays rvecs = noArray(), OutputArrayOfArrays tvecs = noArray(), int flags = 0,
TermCriteria criteria = TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 30, DBL_EPSILON));
/**
* @brief Detect ChArUco Diamond markers
*
 * @param image input image necessary for corner subpixel refinement.
* @param markerCorners list of detected marker corners from detectMarkers function.
* @param markerIds list of marker ids in markerCorners.
* @param squareMarkerLengthRate rate between square and marker length:
* squareMarkerLengthRate = squareLength/markerLength. The real units are not necessary.
 * @param diamondCorners output list of detected diamond corners (4 corners per diamond). The order
 * is the same as in marker corners: top left, top right, bottom right and bottom left. Similar
 * format to the corners returned by detectMarkers (e.g std::vector<std::vector<cv::Point2f> > ).
* @param diamondIds ids of the diamonds in diamondCorners. The id of each diamond is in fact of
* type Vec4i, so each diamond has 4 ids, which are the ids of the aruco markers composing the
* diamond.
* @param cameraMatrix Optional camera calibration matrix.
* @param distCoeffs Optional camera distortion coefficients.
*
 * This function detects Diamond markers from the previously detected ArUco markers. The diamonds
 * are returned in the diamondCorners and diamondIds parameters. If camera calibration parameters
 * are provided, the diamond search is based on reprojection; if not, it is based on
 * homography. Homography is faster than reprojection but can slightly reduce the detection rate.
*/
CV_EXPORTS void detectCharucoDiamond(InputArray image, InputArrayOfArrays markerCorners,
InputArray markerIds, float squareMarkerLengthRate,
OutputArrayOfArrays diamondCorners, OutputArray diamondIds,
InputArray cameraMatrix = noArray(),
InputArray distCoeffs = noArray());
/**
* @brief Draw a set of detected ChArUco Diamond markers
*
* @param image input/output image. It must have 1 or 3 channels. The number of channels is not
* altered.
* @param diamondCorners positions of diamond corners in the same format returned by
* detectCharucoDiamond(). (e.g std::vector<std::vector<cv::Point2f> > ). For N detected markers,
* the dimensions of this array should be Nx4. The order of the corners should be clockwise.
* @param diamondIds vector of identifiers for diamonds in diamondCorners, in the same format
* returned by detectCharucoDiamond() (e.g. std::vector<Vec4i>).
* Optional, if not provided, ids are not painted.
* @param borderColor color of marker borders. Rest of colors (text color and first corner color)
* are calculated based on this one.
*
 * Given an array of detected diamonds, this function draws them on the image. The marker borders
 * are painted, as well as the marker identifiers if provided.
* Useful for debugging purposes.
*/
CV_EXPORTS void drawDetectedDiamonds(InputOutputArray image, InputArrayOfArrays diamondCorners,
InputArray diamondIds = noArray(),
Scalar borderColor = Scalar(0, 0, 255));
/**
* @brief Draw a ChArUco Diamond marker
*
* @param dictionary dictionary of markers indicating the type of markers.
* @param ids list of 4 ids for each ArUco marker in the ChArUco marker.
* @param squareLength size of the chessboard squares in pixels.
* @param markerLength size of the markers in pixels.
 * @param img output image with the marker. The size of this image will be
 * 3*squareLength + 2*marginSize.
* @param marginSize minimum margins (in pixels) of the marker in the output image
* @param borderBits width of the marker borders.
*
 * This function returns the image of a ChArUco marker, ready to be printed.
*/
CV_EXPORTS void drawCharucoDiamond(Dictionary dictionary, Vec4i ids, int squareLength,
int markerLength, OutputArray img, int marginSize = 0,
int borderBits = 1);
//! @}
}
}
#endif

@@ -0,0 +1,157 @@
/*
By downloading, copying, installing or using the software you agree to this
license. If you do not agree to this license, do not download, install,
copy or use the software.
License Agreement
For Open Source Computer Vision Library
(3-clause BSD License)
Copyright (C) 2013, OpenCV Foundation, all rights reserved.
Third party copyrights are property of their respective owners.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the names of the copyright holders nor the names of the contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and
any express or implied warranties, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose are
disclaimed. In no event shall copyright holders or contributors be liable for
any direct, indirect, incidental, special, exemplary, or consequential damages
(including, but not limited to, procurement of substitute goods or services;
loss of use, data, or profits; or business interruption) however caused
and on any theory of liability, whether in contract, strict liability,
or tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of such damage.
*/
#ifndef __OPENCV_DICTIONARY_HPP__
#define __OPENCV_DICTIONARY_HPP__
#include <opencv2/core.hpp>
namespace cv {
namespace aruco {
//! @addtogroup aruco
//! @{
/**
 * @brief Dictionary/Set of markers. It contains the inner codification of the markers.
*
*/
class CV_EXPORTS Dictionary {
public:
Mat bytesList; // marker code information
int markerSize; // number of bits per dimension
int maxCorrectionBits; // maximum number of bits that can be corrected
/**
 * @brief Constructs a Dictionary from a precomputed list of marker bytes
 */
Dictionary(const unsigned char *bytes = 0, int _markerSize = 0, int dictsize = 0,
int _maxcorr = 0);
/**
 * @brief Given a matrix of bits, returns whether the marker is identified or not.
 * It returns by reference the correct id (if any) and the correct rotation
*/
bool identify(const Mat &onlyBits, int &idx, int &rotation, double maxCorrectionRate) const;
/**
 * @brief Returns the distance of the input bits to the specific id. If allRotations is true,
 * the four possible rotations of the bits are considered
*/
int getDistanceToId(InputArray bits, int id, bool allRotations = true) const;
/**
* @brief Draw a canonical marker image
*/
void drawMarker(int id, int sidePixels, OutputArray _img, int borderBits = 1) const;
/**
* @brief Transform matrix of bits to list of bytes in the 4 rotations
*/
static Mat getByteListFromBits(const Mat &bits);
/**
* @brief Transform list of bytes to matrix of bits
*/
static Mat getBitsFromByteList(const Mat &byteList, int markerSize);
};
/**
* @brief Predefined markers dictionaries/sets
* Each dictionary indicates the number of bits and the number of markers contained
* - DICT_ARUCO: standard ArUco Library Markers. 1024 markers, 5x5 bits, 0 minimum distance
*/
enum PREDEFINED_DICTIONARY_NAME {
DICT_4X4_50 = 0,
DICT_4X4_100,
DICT_4X4_250,
DICT_4X4_1000,
DICT_5X5_50,
DICT_5X5_100,
DICT_5X5_250,
DICT_5X5_1000,
DICT_6X6_50,
DICT_6X6_100,
DICT_6X6_250,
DICT_6X6_1000,
DICT_7X7_50,
DICT_7X7_100,
DICT_7X7_250,
DICT_7X7_1000,
DICT_ARUCO_ORIGINAL
};
/**
* @brief Returns one of the predefined dictionaries defined in PREDEFINED_DICTIONARY_NAME
*/
CV_EXPORTS const Dictionary &getPredefinedDictionary(PREDEFINED_DICTIONARY_NAME name);
/**
* @brief Generates a new customizable marker dictionary
*
* @param nMarkers number of markers in the dictionary
* @param markerSize number of bits per dimension of each markers
* @param baseDictionary Include the markers in this dictionary at the beginning (optional)
*
 * This function creates a new dictionary composed of nMarkers markers, each of them composed
 * of markerSize x markerSize bits. If baseDictionary is provided, its markers are directly
 * included and the rest are generated based on them. If the size of baseDictionary is greater
 * than nMarkers, only the first nMarkers in baseDictionary are taken and no new markers are added.
*/
CV_EXPORTS Dictionary generateCustomDictionary(int nMarkers, int markerSize,
const Dictionary &baseDictionary = Dictionary());
//! @}
}
}
#endif

@@ -0,0 +1,311 @@
/*
By downloading, copying, installing or using the software you agree to this
license. If you do not agree to this license, do not download, install,
copy or use the software.
License Agreement
For Open Source Computer Vision Library
(3-clause BSD License)
Copyright (C) 2013, OpenCV Foundation, all rights reserved.
Third party copyrights are property of their respective owners.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the names of the copyright holders nor the names of the contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and
any express or implied warranties, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose are
disclaimed. In no event shall copyright holders or contributors be liable for
any direct, indirect, incidental, special, exemplary, or consequential damages
(including, but not limited to, procurement of substitute goods or services;
loss of use, data, or profits; or business interruption) however caused
and on any theory of liability, whether in contract, strict liability,
or tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of such damage.
*/
#include <opencv2/highgui.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/aruco.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>
#include <iostream>
#include <ctime>
using namespace std;
using namespace cv;
/**
*/
static void help() {
cout << "Calibration using an ArUco Planar Grid board" << endl;
cout << "How to Use:" << endl;
cout << "To capture a frame for calibration, press 'c'," << endl;
cout << "If input comes from video, press any key for next frame" << endl;
cout << "To finish capturing, press 'ESC' key and calibration starts." << endl;
cout << "Parameters: " << endl;
cout << "-w <nmarkers> # Number of markers in X direction" << endl;
cout << "-h <nmarkers> # Number of markers in Y direction" << endl;
cout << "-l <markerLength> # Marker side length (in meters)" << endl;
cout << "-s <markerSeparation> # Separation between two consecutive "
<< "markers in the grid (in meters)" << endl;
cout << "-d <dictionary> # DICT_4X4_50=0, DICT_4X4_100=1, DICT_4X4_250=2, "
<< "DICT_4X4_1000=3, DICT_5X5_50=4, DICT_5X5_100=5, DICT_5X5_250=6, DICT_5X5_1000=7, "
<< "DICT_6X6_50=8, DICT_6X6_100=9, DICT_6X6_250=10, DICT_6X6_1000=11, DICT_7X7_50=12, "
<< "DICT_7X7_100=13, DICT_7X7_250=14, DICT_7X7_1000=15, DICT_ARUCO_ORIGINAL = 16" << endl;
cout << "-o <outputFile> # Output file with calibrated camera parameters" << endl;
cout << "[-v <videoFile>] # Input from video file, if omitted, input comes from camera" << endl;
cout << "[-ci <int>] # Camera id if input doesn't come from video (-v). Default is 0" << endl;
cout << "[-dp <detectorParams>] # File of marker detector parameters" << endl;
cout << "[-rs] # Apply refind strategy" << endl;
cout << "[-zt] # Assume zero tangential distortion" << endl;
cout << "[-a <aspectRatio>] # Fix aspect ratio (fx/fy)" << endl;
cout << "[-p] # Fix the principal point at the center" << endl;
}
/**
*/
static bool isParam(string param, int argc, char **argv) {
for(int i = 0; i < argc; i++)
if(string(argv[i]) == param) return true;
return false;
}
/**
*/
static string getParam(string param, int argc, char **argv, string defvalue = "") {
int idx = -1;
for(int i = 0; i < argc && idx == -1; i++)
if(string(argv[i]) == param) idx = i;
if(idx == -1 || (idx + 1) >= argc)
return defvalue;
else
return argv[idx + 1];
}
/**
*/
static bool readDetectorParameters(string filename, aruco::DetectorParameters &params) {
FileStorage fs(filename, FileStorage::READ);
if(!fs.isOpened())
return false;
fs["adaptiveThreshWinSizeMin"] >> params.adaptiveThreshWinSizeMin;
fs["adaptiveThreshWinSizeMax"] >> params.adaptiveThreshWinSizeMax;
fs["adaptiveThreshWinSizeStep"] >> params.adaptiveThreshWinSizeStep;
fs["adaptiveThreshConstant"] >> params.adaptiveThreshConstant;
fs["minMarkerPerimeterRate"] >> params.minMarkerPerimeterRate;
fs["maxMarkerPerimeterRate"] >> params.maxMarkerPerimeterRate;
fs["polygonalApproxAccuracyRate"] >> params.polygonalApproxAccuracyRate;
fs["minCornerDistanceRate"] >> params.minCornerDistanceRate;
fs["minDistanceToBorder"] >> params.minDistanceToBorder;
fs["minMarkerDistanceRate"] >> params.minMarkerDistanceRate;
fs["doCornerRefinement"] >> params.doCornerRefinement;
fs["cornerRefinementWinSize"] >> params.cornerRefinementWinSize;
fs["cornerRefinementMaxIterations"] >> params.cornerRefinementMaxIterations;
fs["cornerRefinementMinAccuracy"] >> params.cornerRefinementMinAccuracy;
fs["markerBorderBits"] >> params.markerBorderBits;
fs["perspectiveRemovePixelPerCell"] >> params.perspectiveRemovePixelPerCell;
fs["perspectiveRemoveIgnoredMarginPerCell"] >> params.perspectiveRemoveIgnoredMarginPerCell;
fs["maxErroneousBitsInBorderRate"] >> params.maxErroneousBitsInBorderRate;
fs["minOtsuStdDev"] >> params.minOtsuStdDev;
fs["errorCorrectionRate"] >> params.errorCorrectionRate;
return true;
}
/**
*/
static bool saveCameraParams(const string &filename, Size imageSize, float aspectRatio, int flags,
const Mat &cameraMatrix, const Mat &distCoeffs, double totalAvgErr) {
FileStorage fs(filename, FileStorage::WRITE);
if(!fs.isOpened())
return false;
time_t tt;
time(&tt);
struct tm *t2 = localtime(&tt);
char buf[1024];
strftime(buf, sizeof(buf) - 1, "%c", t2);
fs << "calibration_time" << buf;
fs << "image_width" << imageSize.width;
fs << "image_height" << imageSize.height;
if(flags & CALIB_FIX_ASPECT_RATIO) fs << "aspectRatio" << aspectRatio;
if(flags != 0) {
sprintf(buf, "flags: %s%s%s%s",
flags & CALIB_USE_INTRINSIC_GUESS ? "+use_intrinsic_guess" : "",
flags & CALIB_FIX_ASPECT_RATIO ? "+fix_aspectRatio" : "",
flags & CALIB_FIX_PRINCIPAL_POINT ? "+fix_principal_point" : "",
flags & CALIB_ZERO_TANGENT_DIST ? "+zero_tangent_dist" : "");
// note: buf holds a readable summary of the flags; only the numeric value is stored below
}
fs << "flags" << flags;
fs << "camera_matrix" << cameraMatrix;
fs << "distortion_coefficients" << distCoeffs;
fs << "avg_reprojection_error" << totalAvgErr;
return true;
}
/**
*/
int main(int argc, char *argv[]) {
if(!isParam("-w", argc, argv) || !isParam("-h", argc, argv) || !isParam("-l", argc, argv) ||
!isParam("-s", argc, argv) || !isParam("-d", argc, argv) || !isParam("-o", argc, argv)) {
help();
return 0;
}
int markersX = atoi(getParam("-w", argc, argv).c_str());
int markersY = atoi(getParam("-h", argc, argv).c_str());
float markerLength = (float)atof(getParam("-l", argc, argv).c_str());
float markerSeparation = (float)atof(getParam("-s", argc, argv).c_str());
int dictionaryId = atoi(getParam("-d", argc, argv).c_str());
aruco::Dictionary dictionary =
aruco::getPredefinedDictionary(aruco::PREDEFINED_DICTIONARY_NAME(dictionaryId));
string outputFile = getParam("-o", argc, argv);
int calibrationFlags = 0;
float aspectRatio = 1;
if(isParam("-a", argc, argv)) {
calibrationFlags |= CALIB_FIX_ASPECT_RATIO;
aspectRatio = (float)atof(getParam("-a", argc, argv).c_str());
}
if(isParam("-zt", argc, argv)) calibrationFlags |= CALIB_ZERO_TANGENT_DIST;
if(isParam("-p", argc, argv)) calibrationFlags |= CALIB_FIX_PRINCIPAL_POINT;
aruco::DetectorParameters detectorParams;
if(isParam("-dp", argc, argv)) {
bool readOk = readDetectorParameters(getParam("-dp", argc, argv), detectorParams);
if(!readOk) {
cerr << "Invalid detector parameters file" << endl;
return 0;
}
}
bool refindStrategy = false;
if(isParam("-rs", argc, argv)) refindStrategy = true;
VideoCapture inputVideo;
int waitTime;
if(isParam("-v", argc, argv)) {
inputVideo.open(getParam("-v", argc, argv));
waitTime = 0;
} else {
int camId = 0;
if(isParam("-ci", argc, argv)) camId = atoi(getParam("-ci", argc, argv).c_str());
inputVideo.open(camId);
waitTime = 10;
}
// create board object
aruco::GridBoard board =
aruco::GridBoard::create(markersX, markersY, markerLength, markerSeparation, dictionary);
// collected frames for calibration
vector< vector< vector< Point2f > > > allCorners;
vector< vector< int > > allIds;
Size imgSize;
while(inputVideo.grab()) {
Mat image, imageCopy;
inputVideo.retrieve(image);
vector< int > ids;
vector< vector< Point2f > > corners, rejected;
// detect markers
aruco::detectMarkers(image, dictionary, corners, ids, detectorParams, rejected);
// refind strategy to detect more markers
if(refindStrategy) aruco::refineDetectedMarkers(image, board, corners, ids, rejected);
// draw results
image.copyTo(imageCopy);
if(ids.size() > 0) aruco::drawDetectedMarkers(imageCopy, corners, ids);
putText(imageCopy, "Press 'c' to add current frame. 'ESC' to finish and calibrate",
Point(10, 20), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(255, 0, 0), 2);
imshow("out", imageCopy);
char key = (char)waitKey(waitTime);
if(key == 27) break;
if(key == 'c' && ids.size() > 0) {
cout << "Frame captured" << endl;
allCorners.push_back(corners);
allIds.push_back(ids);
imgSize = image.size();
}
}
if(allIds.size() < 1) {
cerr << "Not enough captures for calibration" << endl;
return 0;
}
Mat cameraMatrix, distCoeffs;
vector< Mat > rvecs, tvecs;
double repError;
if(calibrationFlags & CALIB_FIX_ASPECT_RATIO) {
cameraMatrix = Mat::eye(3, 3, CV_64F);
cameraMatrix.at< double >(0, 0) = aspectRatio;
}
// prepare data for calibration
vector< vector< Point2f > > allCornersConcatenated;
vector< int > allIdsConcatenated;
vector< int > markerCounterPerFrame;
markerCounterPerFrame.reserve(allCorners.size());
for(unsigned int i = 0; i < allCorners.size(); i++) {
markerCounterPerFrame.push_back((int)allCorners[i].size());
for(unsigned int j = 0; j < allCorners[i].size(); j++) {
allCornersConcatenated.push_back(allCorners[i][j]);
allIdsConcatenated.push_back(allIds[i][j]);
}
}
// calibrate camera
repError = aruco::calibrateCameraAruco(allCornersConcatenated, allIdsConcatenated,
markerCounterPerFrame, board, imgSize, cameraMatrix,
distCoeffs, rvecs, tvecs, calibrationFlags);
bool saveOk = saveCameraParams(outputFile, imgSize, aspectRatio, calibrationFlags, cameraMatrix,
distCoeffs, repError);
if(!saveOk) {
cerr << "Cannot save output file" << endl;
return 0;
}
cout << "Rep Error: " << repError << endl;
cout << "Calibration saved to " << outputFile << endl;
return 0;
}

@@ -0,0 +1,375 @@
/*
By downloading, copying, installing or using the software you agree to this
license. If you do not agree to this license, do not download, install,
copy or use the software.
License Agreement
For Open Source Computer Vision Library
(3-clause BSD License)
Copyright (C) 2013, OpenCV Foundation, all rights reserved.
Third party copyrights are property of their respective owners.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the names of the copyright holders nor the names of the contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and
any express or implied warranties, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose are
disclaimed. In no event shall copyright holders or contributors be liable for
any direct, indirect, incidental, special, exemplary, or consequential damages
(including, but not limited to, procurement of substitute goods or services;
loss of use, data, or profits; or business interruption) however caused
and on any theory of liability, whether in contract, strict liability,
or tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of such damage.
*/
#include <opencv2/highgui.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/aruco/charuco.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>
#include <iostream>
#include <ctime>
using namespace std;
using namespace cv;
/**
*/
static void help() {
cout << "Calibration using a ChArUco board" << endl;
cout << "How to Use:" << endl;
cout << "To capture a frame for calibration, press 'c'," << endl;
cout << "If input comes from video, press any key for next frame" << endl;
cout << "To finish capturing, press 'ESC' key and calibration starts." << endl;
cout << "Parameters: " << endl;
cout << "-w <nsquares> # Number of squares in X direction" << endl;
cout << "-h <nsquares> # Number of squares in Y direction" << endl;
cout << "-sl <squareLength> # Square side length (in meters)" << endl;
cout << "-ml <markerLength> # Marker side length (in meters)" << endl;
cout << "-d <dictionary> # DICT_4X4_50=0, DICT_4X4_100=1, DICT_4X4_250=2, "
<< "DICT_4X4_1000=3, DICT_5X5_50=4, DICT_5X5_100=5, DICT_5X5_250=6, DICT_5X5_1000=7, "
<< "DICT_6X6_50=8, DICT_6X6_100=9, DICT_6X6_250=10, DICT_6X6_1000=11, DICT_7X7_50=12, "
<< "DICT_7X7_100=13, DICT_7X7_250=14, DICT_7X7_1000=15, DICT_ARUCO_ORIGINAL = 16" << endl;
cout << "-o <outputFile> # Output file with calibrated camera parameters" << endl;
cout << "[-v <videoFile>] # Input from video file, if omitted, input comes from camera" << endl;
cout << "[-ci <int>] # Camera id if input doesn't come from video (-v). Default is 0" << endl;
cout << "[-dp <detectorParams>] # File of marker detector parameters" << endl;
cout << "[-rs] # Apply refind strategy" << endl;
cout << "[-zt] # Assume zero tangential distortion" << endl;
cout << "[-a <aspectRatio>] # Fix aspect ratio (fx/fy)" << endl;
cout << "[-p] # Fix the principal point at the center" << endl;
cout << "[-sc] # Show detected chessboard corners after calibration" << endl;
}
/**
*/
static bool isParam(string param, int argc, char **argv) {
for(int i = 0; i < argc; i++)
if(string(argv[i]) == param) return true;
return false;
}
/**
*/
static string getParam(string param, int argc, char **argv, string defvalue = "") {
int idx = -1;
for(int i = 0; i < argc && idx == -1; i++)
if(string(argv[i]) == param) idx = i;
if(idx == -1 || (idx + 1) >= argc)
return defvalue;
else
return argv[idx + 1];
}
/**
*/
static bool readDetectorParameters(string filename, aruco::DetectorParameters &params) {
FileStorage fs(filename, FileStorage::READ);
if(!fs.isOpened())
return false;
fs["adaptiveThreshWinSizeMin"] >> params.adaptiveThreshWinSizeMin;
fs["adaptiveThreshWinSizeMax"] >> params.adaptiveThreshWinSizeMax;
fs["adaptiveThreshWinSizeStep"] >> params.adaptiveThreshWinSizeStep;
fs["adaptiveThreshConstant"] >> params.adaptiveThreshConstant;
fs["minMarkerPerimeterRate"] >> params.minMarkerPerimeterRate;
fs["maxMarkerPerimeterRate"] >> params.maxMarkerPerimeterRate;
fs["polygonalApproxAccuracyRate"] >> params.polygonalApproxAccuracyRate;
fs["minCornerDistanceRate"] >> params.minCornerDistanceRate;
fs["minDistanceToBorder"] >> params.minDistanceToBorder;
fs["minMarkerDistanceRate"] >> params.minMarkerDistanceRate;
fs["doCornerRefinement"] >> params.doCornerRefinement;
fs["cornerRefinementWinSize"] >> params.cornerRefinementWinSize;
fs["cornerRefinementMaxIterations"] >> params.cornerRefinementMaxIterations;
fs["cornerRefinementMinAccuracy"] >> params.cornerRefinementMinAccuracy;
fs["markerBorderBits"] >> params.markerBorderBits;
fs["perspectiveRemovePixelPerCell"] >> params.perspectiveRemovePixelPerCell;
fs["perspectiveRemoveIgnoredMarginPerCell"] >> params.perspectiveRemoveIgnoredMarginPerCell;
fs["maxErroneousBitsInBorderRate"] >> params.maxErroneousBitsInBorderRate;
fs["minOtsuStdDev"] >> params.minOtsuStdDev;
fs["errorCorrectionRate"] >> params.errorCorrectionRate;
return true;
}
/**
*/
static bool saveCameraParams(const string &filename, Size imageSize, float aspectRatio, int flags,
const Mat &cameraMatrix, const Mat &distCoeffs, double totalAvgErr) {
FileStorage fs(filename, FileStorage::WRITE);
if(!fs.isOpened())
return false;
time_t tt;
time(&tt);
struct tm *t2 = localtime(&tt);
char buf[1024];
strftime(buf, sizeof(buf) - 1, "%c", t2);
fs << "calibration_time" << buf;
fs << "image_width" << imageSize.width;
fs << "image_height" << imageSize.height;
if(flags & CALIB_FIX_ASPECT_RATIO) fs << "aspectRatio" << aspectRatio;
if(flags != 0) {
sprintf(buf, "flags: %s%s%s%s",
flags & CALIB_USE_INTRINSIC_GUESS ? "+use_intrinsic_guess" : "",
flags & CALIB_FIX_ASPECT_RATIO ? "+fix_aspectRatio" : "",
flags & CALIB_FIX_PRINCIPAL_POINT ? "+fix_principal_point" : "",
flags & CALIB_ZERO_TANGENT_DIST ? "+zero_tangent_dist" : "");
// note: buf holds a readable summary of the flags; only the numeric value is stored below
}
fs << "flags" << flags;
fs << "camera_matrix" << cameraMatrix;
fs << "distortion_coefficients" << distCoeffs;
fs << "avg_reprojection_error" << totalAvgErr;
return true;
}
/**
*/
int main(int argc, char *argv[]) {
if(!isParam("-w", argc, argv) || !isParam("-h", argc, argv) || !isParam("-sl", argc, argv) ||
!isParam("-ml", argc, argv) || !isParam("-d", argc, argv) || !isParam("-o", argc, argv)) {
help();
return 0;
}
int squaresX = atoi(getParam("-w", argc, argv).c_str());
int squaresY = atoi(getParam("-h", argc, argv).c_str());
float squareLength = (float)atof(getParam("-sl", argc, argv).c_str());
float markerLength = (float)atof(getParam("-ml", argc, argv).c_str());
int dictionaryId = atoi(getParam("-d", argc, argv).c_str());
aruco::Dictionary dictionary =
aruco::getPredefinedDictionary(aruco::PREDEFINED_DICTIONARY_NAME(dictionaryId));
string outputFile = getParam("-o", argc, argv);
bool showChessboardCorners = isParam("-sc", argc, argv);
int calibrationFlags = 0;
float aspectRatio = 1;
if(isParam("-a", argc, argv)) {
calibrationFlags |= CALIB_FIX_ASPECT_RATIO;
aspectRatio = (float)atof(getParam("-a", argc, argv).c_str());
}
if(isParam("-zt", argc, argv)) calibrationFlags |= CALIB_ZERO_TANGENT_DIST;
if(isParam("-p", argc, argv)) calibrationFlags |= CALIB_FIX_PRINCIPAL_POINT;
aruco::DetectorParameters detectorParams;
if(isParam("-dp", argc, argv)) {
bool readOk = readDetectorParameters(getParam("-dp", argc, argv), detectorParams);
if(!readOk) {
cerr << "Invalid detector parameters file" << endl;
return 0;
}
}
bool refindStrategy = false;
if(isParam("-rs", argc, argv)) refindStrategy = true;
VideoCapture inputVideo;
int waitTime;
if(isParam("-v", argc, argv)) {
inputVideo.open(getParam("-v", argc, argv));
waitTime = 0;
} else {
int camId = 0;
if(isParam("-ci", argc, argv)) camId = atoi(getParam("-ci", argc, argv).c_str());
inputVideo.open(camId);
waitTime = 10;
}
// create charuco board object
aruco::CharucoBoard board =
aruco::CharucoBoard::create(squaresX, squaresY, squareLength, markerLength, dictionary);
// collect data from each frame
vector< vector< vector< Point2f > > > allCorners;
vector< vector< int > > allIds;
vector< Mat > allImgs;
Size imgSize;
while(inputVideo.grab()) {
Mat image, imageCopy;
inputVideo.retrieve(image);
vector< int > ids;
vector< vector< Point2f > > corners, rejected;
// detect markers
aruco::detectMarkers(image, dictionary, corners, ids, detectorParams, rejected);
// refind strategy to detect more markers
if(refindStrategy) aruco::refineDetectedMarkers(image, board, corners, ids, rejected);
// interpolate charuco corners
Mat currentCharucoCorners, currentCharucoIds;
if(ids.size() > 0)
aruco::interpolateCornersCharuco(corners, ids, image, board, currentCharucoCorners,
currentCharucoIds);
// draw results
image.copyTo(imageCopy);
if(ids.size() > 0) aruco::drawDetectedMarkers(imageCopy, corners);
if(currentCharucoCorners.total() > 0)
aruco::drawDetectedCornersCharuco(imageCopy, currentCharucoCorners, currentCharucoIds);
putText(imageCopy, "Press 'c' to add current frame. 'ESC' to finish and calibrate",
Point(10, 20), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(255, 0, 0), 2);
imshow("out", imageCopy);
char key = (char)waitKey(waitTime);
if(key == 27) break;
if(key == 'c' && ids.size() > 0) {
cout << "Frame captured" << endl;
allCorners.push_back(corners);
allIds.push_back(ids);
allImgs.push_back(image);
imgSize = image.size();
}
}
if(allIds.size() < 1) {
cerr << "Not enough captures for calibration" << endl;
return 0;
}
Mat cameraMatrix, distCoeffs;
vector< Mat > rvecs, tvecs;
double repError;
if(calibrationFlags & CALIB_FIX_ASPECT_RATIO) {
cameraMatrix = Mat::eye(3, 3, CV_64F);
cameraMatrix.at< double >(0, 0) = aspectRatio;
}
// prepare data for calibration
vector< vector< Point2f > > allCornersConcatenated;
vector< int > allIdsConcatenated;
vector< int > markerCounterPerFrame;
markerCounterPerFrame.reserve(allCorners.size());
for(unsigned int i = 0; i < allCorners.size(); i++) {
markerCounterPerFrame.push_back((int)allCorners[i].size());
for(unsigned int j = 0; j < allCorners[i].size(); j++) {
allCornersConcatenated.push_back(allCorners[i][j]);
allIdsConcatenated.push_back(allIds[i][j]);
}
}
// calibrate camera using aruco markers
double arucoRepErr;
arucoRepErr = aruco::calibrateCameraAruco(allCornersConcatenated, allIdsConcatenated,
markerCounterPerFrame, board, imgSize, cameraMatrix,
distCoeffs, noArray(), noArray(), calibrationFlags);
// prepare data for charuco calibration
int nFrames = (int)allCorners.size();
vector< Mat > allCharucoCorners;
vector< Mat > allCharucoIds;
vector< Mat > filteredImages;
allCharucoCorners.reserve(nFrames);
allCharucoIds.reserve(nFrames);
for(int i = 0; i < nFrames; i++) {
// interpolate using camera parameters
Mat currentCharucoCorners, currentCharucoIds;
aruco::interpolateCornersCharuco(allCorners[i], allIds[i], allImgs[i], board,
currentCharucoCorners, currentCharucoIds, cameraMatrix,
distCoeffs);
allCharucoCorners.push_back(currentCharucoCorners);
allCharucoIds.push_back(currentCharucoIds);
filteredImages.push_back(allImgs[i]);
}
if(allCharucoCorners.size() < 4) {
cerr << "Not enough corners for calibration" << endl;
return 0;
}
// calibrate camera using charuco
repError =
aruco::calibrateCameraCharuco(allCharucoCorners, allCharucoIds, board, imgSize,
cameraMatrix, distCoeffs, rvecs, tvecs, calibrationFlags);
bool saveOk = saveCameraParams(outputFile, imgSize, aspectRatio, calibrationFlags,
cameraMatrix, distCoeffs, repError);
if(!saveOk) {
cerr << "Cannot save output file" << endl;
return 0;
}
cout << "Rep Error: " << repError << endl;
cout << "Rep Error Aruco: " << arucoRepErr << endl;
cout << "Calibration saved to " << outputFile << endl;
// show interpolated charuco corners for debugging
if(showChessboardCorners) {
for(unsigned int frame = 0; frame < filteredImages.size(); frame++) {
Mat imageCopy = filteredImages[frame].clone();
if(allIds[frame].size() > 0) {
if(allCharucoCorners[frame].total() > 0) {
aruco::drawDetectedCornersCharuco( imageCopy, allCharucoCorners[frame],
allCharucoIds[frame]);
}
}
imshow("out", imageCopy);
char key = (char)waitKey(0);
if(key == 27) break;
}
}
return 0;
}

@@ -0,0 +1,143 @@
/*
By downloading, copying, installing or using the software you agree to this
license. If you do not agree to this license, do not download, install,
copy or use the software.
License Agreement
For Open Source Computer Vision Library
(3-clause BSD License)
Copyright (C) 2013, OpenCV Foundation, all rights reserved.
Third party copyrights are property of their respective owners.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the names of the copyright holders nor the names of the contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and
any express or implied warranties, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose are
disclaimed. In no event shall copyright holders or contributors be liable for
any direct, indirect, incidental, special, exemplary, or consequential damages
(including, but not limited to, procurement of substitute goods or services;
loss of use, data, or profits; or business interruption) however caused
and on any theory of liability, whether in contract, strict liability,
or tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of such damage.
*/
#include <opencv2/highgui.hpp>
#include <opencv2/aruco.hpp>
#include <iostream>
using namespace std;
using namespace cv;
/**
*/
static void help() {
cout << "Create an ArUco grid board image" << endl;
cout << "Parameters: " << endl;
cout << "-o <image> # Output image" << endl;
cout << "-w <nmarkers> # Number of markers in X direction" << endl;
cout << "-h <nmarkers> # Number of markers in Y direction" << endl;
cout << "-l <markerLength> # Marker side length (in pixels)" << endl;
cout << "-s <markerSeparation> # Separation between two consecutive "
<< "markers in the grid (in pixels)" << endl;
cout << "-d <dictionary> # DICT_4X4_50=0, DICT_4X4_100=1, DICT_4X4_250=2, "
<< "DICT_4X4_1000=3, DICT_5X5_50=4, DICT_5X5_100=5, DICT_5X5_250=6, DICT_5X5_1000=7, "
<< "DICT_6X6_50=8, DICT_6X6_100=9, DICT_6X6_250=10, DICT_6X6_1000=11, DICT_7X7_50=12, "
<< "DICT_7X7_100=13, DICT_7X7_250=14, DICT_7X7_1000=15, DICT_ARUCO_ORIGINAL=16" << endl;
cout << "[-m <marginSize>] # Margins size (in pixels). "
<< "Default is marker separation" << endl;
cout << "[-bb <int>] # Number of bits in marker borders. Default is 1" << endl;
cout << "[-si] # show generated image" << endl;
}
/**
*/
static bool isParam(string param, int argc, char **argv) {
for(int i = 0; i < argc; i++)
if(string(argv[i]) == param) return true;
return false;
}
/**
*/
static string getParam(string param, int argc, char **argv, string defvalue = "") {
int idx = -1;
for(int i = 0; i < argc && idx == -1; i++)
if(string(argv[i]) == param) idx = i;
if(idx == -1 || (idx + 1) >= argc)
return defvalue;
else
return argv[idx + 1];
}
/**
*/
int main(int argc, char *argv[]) {
if(!isParam("-w", argc, argv) || !isParam("-h", argc, argv) || !isParam("-l", argc, argv) ||
!isParam("-s", argc, argv) || !isParam("-d", argc, argv) || !isParam("-o", argc, argv)) {
help();
return 0;
}
int markersX = atoi(getParam("-w", argc, argv).c_str());
int markersY = atoi(getParam("-h", argc, argv).c_str());
int markerLength = atoi(getParam("-l", argc, argv).c_str());
int markerSeparation = atoi(getParam("-s", argc, argv).c_str());
int dictionaryId = atoi(getParam("-d", argc, argv).c_str());
aruco::Dictionary dictionary =
aruco::getPredefinedDictionary(aruco::PREDEFINED_DICTIONARY_NAME(dictionaryId));
int margins = markerSeparation;
if(isParam("-m", argc, argv)) {
margins = atoi(getParam("-m", argc, argv).c_str());
}
int borderBits = 1;
if(isParam("-bb", argc, argv)) {
borderBits = atoi(getParam("-bb", argc, argv).c_str());
}
bool showImage = false;
if(isParam("-si", argc, argv)) showImage = true;
Size imageSize;
imageSize.width = markersX * (markerLength + markerSeparation) - markerSeparation + 2 * margins;
imageSize.height =
markersY * (markerLength + markerSeparation) - markerSeparation + 2 * margins;
aruco::GridBoard board = aruco::GridBoard::create(markersX, markersY, float(markerLength),
float(markerSeparation), dictionary);
// show created board
Mat boardImage;
board.draw(imageSize, boardImage, margins, borderBits);
if(showImage) {
imshow("board", boardImage);
waitKey(0);
}
imwrite(getParam("-o", argc, argv), boardImage);
return 0;
}

@@ -0,0 +1,141 @@
/*
By downloading, copying, installing or using the software you agree to this
license. If you do not agree to this license, do not download, install,
copy or use the software.
License Agreement
For Open Source Computer Vision Library
(3-clause BSD License)
Copyright (C) 2013, OpenCV Foundation, all rights reserved.
Third party copyrights are property of their respective owners.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the names of the copyright holders nor the names of the contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and
any express or implied warranties, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose are
disclaimed. In no event shall copyright holders or contributors be liable for
any direct, indirect, incidental, special, exemplary, or consequential damages
(including, but not limited to, procurement of substitute goods or services;
loss of use, data, or profits; or business interruption) however caused
and on any theory of liability, whether in contract, strict liability,
or tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of such damage.
*/
#include <opencv2/highgui.hpp>
#include <opencv2/aruco/charuco.hpp>
#include <iostream>
using namespace std;
using namespace cv;
/**
*/
static void help() {
cout << "Create a ChArUco board image" << endl;
cout << "Parameters: " << endl;
cout << "-o <image> # Output image" << endl;
cout << "-w <nsquares> # Number of squares in X direction" << endl;
cout << "-h <nsquares> # Number of squares in Y direction" << endl;
cout << "-sl <squareLength> # Square side length (in pixels)" << endl;
cout << "-ml <markerLength> # Marker side length (in pixels)" << endl;
cout << "-d <dictionary> # DICT_4X4_50=0, DICT_4X4_100=1, DICT_4X4_250=2, "
<< "DICT_4X4_1000=3, DICT_5X5_50=4, DICT_5X5_100=5, DICT_5X5_250=6, DICT_5X5_1000=7, "
<< "DICT_6X6_50=8, DICT_6X6_100=9, DICT_6X6_250=10, DICT_6X6_1000=11, DICT_7X7_50=12, "
<< "DICT_7X7_100=13, DICT_7X7_250=14, DICT_7X7_1000=15, DICT_ARUCO_ORIGINAL=16" << endl;
cout << "[-m <marginSize>] # Margins size (in pixels). "
<< "Default is (squareLength-markerLength)" << endl;
cout << "[-bb <int>] # Number of bits in marker borders. Default is 1" << endl;
cout << "[-si] # show generated image" << endl;
}
/**
*/
static bool isParam(string param, int argc, char **argv) {
for(int i = 0; i < argc; i++)
if(string(argv[i]) == param) return true;
return false;
}
/**
*/
static string getParam(string param, int argc, char **argv, string defvalue = "") {
int idx = -1;
for(int i = 0; i < argc && idx == -1; i++)
if(string(argv[i]) == param) idx = i;
if(idx == -1 || (idx + 1) >= argc)
return defvalue;
else
return argv[idx + 1];
}
/**
*/
int main(int argc, char *argv[]) {
if(!isParam("-w", argc, argv) || !isParam("-h", argc, argv) || !isParam("-sl", argc, argv) ||
!isParam("-ml", argc, argv) || !isParam("-d", argc, argv) || !isParam("-o", argc, argv)) {
help();
return 0;
}
int squaresX = atoi(getParam("-w", argc, argv).c_str());
int squaresY = atoi(getParam("-h", argc, argv).c_str());
int squareLength = atoi(getParam("-sl", argc, argv).c_str());
int markerLength = atoi(getParam("-ml", argc, argv).c_str());
int dictionaryId = atoi(getParam("-d", argc, argv).c_str());
aruco::Dictionary dictionary =
aruco::getPredefinedDictionary(aruco::PREDEFINED_DICTIONARY_NAME(dictionaryId));
int margins = squareLength - markerLength;
if(isParam("-m", argc, argv)) {
margins = atoi(getParam("-m", argc, argv).c_str());
}
int borderBits = 1;
if(isParam("-bb", argc, argv)) {
borderBits = atoi(getParam("-bb", argc, argv).c_str());
}
bool showImage = false;
if(isParam("-si", argc, argv)) showImage = true;
Size imageSize;
imageSize.width = squaresX * squareLength + 2 * margins;
imageSize.height = squaresY * squareLength + 2 * margins;
aruco::CharucoBoard board = aruco::CharucoBoard::create(squaresX, squaresY, (float)squareLength,
(float)markerLength, dictionary);
// show created board
Mat boardImage;
board.draw(imageSize, boardImage, margins, borderBits);
if(showImage) {
imshow("board", boardImage);
waitKey(0);
}
imwrite(getParam("-o", argc, argv), boardImage);
return 0;
}

@@ -0,0 +1,146 @@
/*
By downloading, copying, installing or using the software you agree to this
license. If you do not agree to this license, do not download, install,
copy or use the software.
License Agreement
For Open Source Computer Vision Library
(3-clause BSD License)
Copyright (C) 2013, OpenCV Foundation, all rights reserved.
Third party copyrights are property of their respective owners.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the names of the copyright holders nor the names of the contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and
any express or implied warranties, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose are
disclaimed. In no event shall copyright holders or contributors be liable for
any direct, indirect, incidental, special, exemplary, or consequential damages
(including, but not limited to, procurement of substitute goods or services;
loss of use, data, or profits; or business interruption) however caused
and on any theory of liability, whether in contract, strict liability,
or tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of such damage.
*/
#include <opencv2/highgui.hpp>
#include <opencv2/aruco/charuco.hpp>
#include <vector>
#include <iostream>
using namespace std;
using namespace cv;
/**
*/
static void help() {
cout << "Create a ChArUco diamond marker image" << endl;
cout << "Parameters: " << endl;
cout << "-o <image> # Output image" << endl;
cout << "-sl <squareLength> # Square side length (in pixels)" << endl;
cout << "-ml <markerLength> # Marker side length (in pixels)" << endl;
cout << "-d <dictionary> # DICT_4X4_50=0, DICT_4X4_100=1, DICT_4X4_250=2, "
<< "DICT_4X4_1000=3, DICT_5X5_50=4, DICT_5X5_100=5, DICT_5X5_250=6, DICT_5X5_1000=7, "
<< "DICT_6X6_50=8, DICT_6X6_100=9, DICT_6X6_250=10, DICT_6X6_1000=11, DICT_7X7_50=12, "
<< "DICT_7X7_100=13, DICT_7X7_250=14, DICT_7X7_1000=15, DICT_ARUCO_ORIGINAL=16" << endl;
cout << "-ids <id1,id2,id3,id4> # Four ids for the ChArUco diamond" << endl;
cout << "[-m <marginSize>] # Margins size (in pixels). Default is 0" << endl;
cout << "[-bb <int>] # Number of bits in marker borders. Default is 1" << endl;
cout << "[-si] # show generated image" << endl;
}
/**
*/
static bool isParam(string param, int argc, char **argv) {
for(int i = 0; i < argc; i++)
if(string(argv[i]) == param) return true;
return false;
}
/**
*/
static string getParam(string param, int argc, char **argv, string defvalue = "") {
int idx = -1;
for(int i = 0; i < argc && idx == -1; i++)
if(string(argv[i]) == param) idx = i;
if(idx == -1 || (idx + 1) >= argc)
return defvalue;
else
return argv[idx + 1];
}
/**
*/
int main(int argc, char *argv[]) {
if(!isParam("-sl", argc, argv) || !isParam("-ml", argc, argv) || !isParam("-d", argc, argv) ||
!isParam("-ids", argc, argv) || !isParam("-o", argc, argv)) {
help();
return 0;
}
int squareLength = atoi(getParam("-sl", argc, argv).c_str());
int markerLength = atoi(getParam("-ml", argc, argv).c_str());
int dictionaryId = atoi(getParam("-d", argc, argv).c_str());
aruco::Dictionary dictionary =
aruco::getPredefinedDictionary(aruco::PREDEFINED_DICTIONARY_NAME(dictionaryId));
string idsString = getParam("-ids", argc, argv);
istringstream ss(idsString);
vector< string > splittedIds;
string token;
while(getline(ss, token, ','))
splittedIds.push_back(token);
if(splittedIds.size() < 4) {
cerr << "Incorrect ids format" << endl;
help();
return 0;
}
Vec4i ids;
for(int i = 0; i < 4; i++)
ids[i] = atoi(splittedIds[i].c_str());
int margins = 0;
if(isParam("-m", argc, argv)) {
margins = atoi(getParam("-m", argc, argv).c_str());
}
int borderBits = 1;
if(isParam("-bb", argc, argv)) {
borderBits = atoi(getParam("-bb", argc, argv).c_str());
}
bool showImage = false;
if(isParam("-si", argc, argv)) showImage = true;
Mat markerImg;
aruco::drawCharucoDiamond(dictionary, ids, squareLength, markerLength, markerImg, margins,
borderBits);
if(showImage) {
imshow("board", markerImg);
waitKey(0);
}
imwrite(getParam("-o", argc, argv), markerImg);
return 0;
}

@@ -0,0 +1,126 @@
/*
By downloading, copying, installing or using the software you agree to this
license. If you do not agree to this license, do not download, install,
copy or use the software.
License Agreement
For Open Source Computer Vision Library
(3-clause BSD License)
Copyright (C) 2013, OpenCV Foundation, all rights reserved.
Third party copyrights are property of their respective owners.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the names of the copyright holders nor the names of the contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and
any express or implied warranties, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose are
disclaimed. In no event shall copyright holders or contributors be liable for
any direct, indirect, incidental, special, exemplary, or consequential damages
(including, but not limited to, procurement of substitute goods or services;
loss of use, data, or profits; or business interruption) however caused
and on any theory of liability, whether in contract, strict liability,
or tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of such damage.
*/
#include <opencv2/highgui.hpp>
#include <opencv2/aruco.hpp>
#include <iostream>
using namespace std;
using namespace cv;
/**
*/
static void help() {
cout << "Create an ArUco marker image" << endl;
cout << "Parameters: " << endl;
cout << "-o <image> # Output image" << endl;
cout << "-d <dictionary> # DICT_4X4_50=0, DICT_4X4_100=1, DICT_4X4_250=2, "
<< "DICT_4X4_1000=3, DICT_5X5_50=4, DICT_5X5_100=5, DICT_5X5_250=6, DICT_5X5_1000=7, "
<< "DICT_6X6_50=8, DICT_6X6_100=9, DICT_6X6_250=10, DICT_6X6_1000=11, DICT_7X7_50=12, "
<< "DICT_7X7_100=13, DICT_7X7_250=14, DICT_7X7_1000=15, DICT_ARUCO_ORIGINAL=16" << endl;
cout << "-id <int> # Marker id in the dictionary" << endl;
cout << "[-ms <int>] # Marker size in pixels. Default is 200" << endl;
cout << "[-bb <int>] # Number of bits in marker borders. Default is 1" << endl;
cout << "[-si] # show generated image" << endl;
}
/**
*/
static bool isParam(string param, int argc, char **argv) {
for(int i = 0; i < argc; i++)
if(string(argv[i]) == param) return true;
return false;
}
/**
*/
static string getParam(string param, int argc, char **argv, string defvalue = "") {
int idx = -1;
for(int i = 0; i < argc && idx == -1; i++)
if(string(argv[i]) == param) idx = i;
if(idx == -1 || (idx + 1) >= argc)
return defvalue;
else
return argv[idx + 1];
}
/**
*/
int main(int argc, char *argv[]) {
if(!isParam("-d", argc, argv) || !isParam("-o", argc, argv) || !isParam("-id", argc, argv)) {
help();
return 0;
}
int dictionaryId = atoi(getParam("-d", argc, argv).c_str());
aruco::Dictionary dictionary =
aruco::getPredefinedDictionary(aruco::PREDEFINED_DICTIONARY_NAME(dictionaryId));
int markerId = atoi(getParam("-id", argc, argv).c_str());
int borderBits = 1;
if(isParam("-bb", argc, argv)) {
borderBits = atoi(getParam("-bb", argc, argv).c_str());
}
int markerSize = 200;
if(isParam("-ms", argc, argv)) {
markerSize = atoi(getParam("-ms", argc, argv).c_str());
}
bool showImage = false;
if(isParam("-si", argc, argv)) showImage = true;
Mat markerImg;
aruco::drawMarker(dictionary, markerId, markerSize, markerImg, borderBits);
if(showImage) {
imshow("marker", markerImg);
waitKey(0);
}
imwrite(getParam("-o", argc, argv), markerImg);
return 0;
}

@@ -0,0 +1,252 @@
/*
By downloading, copying, installing or using the software you agree to this
license. If you do not agree to this license, do not download, install,
copy or use the software.
License Agreement
For Open Source Computer Vision Library
(3-clause BSD License)
Copyright (C) 2013, OpenCV Foundation, all rights reserved.
Third party copyrights are property of their respective owners.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the names of the copyright holders nor the names of the contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and
any express or implied warranties, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose are
disclaimed. In no event shall copyright holders or contributors be liable for
any direct, indirect, incidental, special, exemplary, or consequential damages
(including, but not limited to, procurement of substitute goods or services;
loss of use, data, or profits; or business interruption) however caused
and on any theory of liability, whether in contract, strict liability,
or tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of such damage.
*/
#include <opencv2/highgui.hpp>
#include <opencv2/aruco.hpp>
#include <vector>
#include <iostream>
using namespace std;
using namespace cv;
/**
*/
static void help() {
cout << "Pose estimation using an ArUco planar grid board" << endl;
cout << "Parameters: " << endl;
cout << "-w <nmarkers> # Number of markers in X direction" << endl;
cout << "-h <nmarkers> # Number of markers in Y direction" << endl;
cout << "-l <markerLength> # Marker side length (in meters)" << endl;
cout << "-s <markerSeparation> # Separation between two consecutive "
<< "markers in the grid (in meters)" << endl;
cout << "-d <dictionary> # DICT_4X4_50=0, DICT_4X4_100=1, DICT_4X4_250=2, "
<< "DICT_4X4_1000=3, DICT_5X5_50=4, DICT_5X5_100=5, DICT_5X5_250=6, DICT_5X5_1000=7, "
<< "DICT_6X6_50=8, DICT_6X6_100=9, DICT_6X6_250=10, DICT_6X6_1000=11, DICT_7X7_50=12, "
<< "DICT_7X7_100=13, DICT_7X7_250=14, DICT_7X7_1000=15, DICT_ARUCO_ORIGINAL=16" << endl;
cout << "-c <cameraParams> # Camera intrinsic parameters file" << endl;
cout << "[-v <videoFile>] # Input from video file; if omitted, input comes from camera" << endl;
cout << "[-ci <int>] # Camera id if input doesn't come from video (-v). Default is 0" << endl;
cout << "[-dp <detectorParams>] # File of marker detector parameters" << endl;
cout << "[-rs] # Apply refind strategy" << endl;
cout << "[-r] # show rejected candidates too" << endl;
}
/**
*/
static bool isParam(string param, int argc, char **argv) {
for(int i = 0; i < argc; i++)
if(string(argv[i]) == param) return true;
return false;
}
/**
*/
static string getParam(string param, int argc, char **argv, string defvalue = "") {
int idx = -1;
for(int i = 0; i < argc && idx == -1; i++)
if(string(argv[i]) == param) idx = i;
if(idx == -1 || (idx + 1) >= argc)
return defvalue;
else
return argv[idx + 1];
}
/**
*/
static bool readCameraParameters(string filename, Mat &camMatrix, Mat &distCoeffs) {
FileStorage fs(filename, FileStorage::READ);
if(!fs.isOpened())
return false;
fs["camera_matrix"] >> camMatrix;
fs["distortion_coefficients"] >> distCoeffs;
return true;
}
/**
*/
static bool readDetectorParameters(string filename, aruco::DetectorParameters &params) {
FileStorage fs(filename, FileStorage::READ);
if(!fs.isOpened())
return false;
fs["adaptiveThreshWinSizeMin"] >> params.adaptiveThreshWinSizeMin;
fs["adaptiveThreshWinSizeMax"] >> params.adaptiveThreshWinSizeMax;
fs["adaptiveThreshWinSizeStep"] >> params.adaptiveThreshWinSizeStep;
fs["adaptiveThreshConstant"] >> params.adaptiveThreshConstant;
fs["minMarkerPerimeterRate"] >> params.minMarkerPerimeterRate;
fs["maxMarkerPerimeterRate"] >> params.maxMarkerPerimeterRate;
fs["polygonalApproxAccuracyRate"] >> params.polygonalApproxAccuracyRate;
fs["minCornerDistanceRate"] >> params.minCornerDistanceRate;
fs["minDistanceToBorder"] >> params.minDistanceToBorder;
fs["minMarkerDistanceRate"] >> params.minMarkerDistanceRate;
fs["doCornerRefinement"] >> params.doCornerRefinement;
fs["cornerRefinementWinSize"] >> params.cornerRefinementWinSize;
fs["cornerRefinementMaxIterations"] >> params.cornerRefinementMaxIterations;
fs["cornerRefinementMinAccuracy"] >> params.cornerRefinementMinAccuracy;
fs["markerBorderBits"] >> params.markerBorderBits;
fs["perspectiveRemovePixelPerCell"] >> params.perspectiveRemovePixelPerCell;
fs["perspectiveRemoveIgnoredMarginPerCell"] >> params.perspectiveRemoveIgnoredMarginPerCell;
fs["maxErroneousBitsInBorderRate"] >> params.maxErroneousBitsInBorderRate;
fs["minOtsuStdDev"] >> params.minOtsuStdDev;
fs["errorCorrectionRate"] >> params.errorCorrectionRate;
return true;
}
/**
*/
int main(int argc, char *argv[]) {
if(!isParam("-w", argc, argv) || !isParam("-h", argc, argv) || !isParam("-l", argc, argv) ||
!isParam("-s", argc, argv) || !isParam("-d", argc, argv) || !isParam("-c", argc, argv)) {
help();
return 0;
}
int markersX = atoi(getParam("-w", argc, argv).c_str());
int markersY = atoi(getParam("-h", argc, argv).c_str());
float markerLength = (float)atof(getParam("-l", argc, argv).c_str());
float markerSeparation = (float)atof(getParam("-s", argc, argv).c_str());
int dictionaryId = atoi(getParam("-d", argc, argv).c_str());
aruco::Dictionary dictionary =
aruco::getPredefinedDictionary(aruco::PREDEFINED_DICTIONARY_NAME(dictionaryId));
bool showRejected = false;
if(isParam("-r", argc, argv)) showRejected = true;
Mat camMatrix, distCoeffs;
if(isParam("-c", argc, argv)) {
bool readOk = readCameraParameters(getParam("-c", argc, argv), camMatrix, distCoeffs);
if(!readOk) {
cerr << "Invalid camera file" << endl;
return 0;
}
}
aruco::DetectorParameters detectorParams;
if(isParam("-dp", argc, argv)) {
bool readOk = readDetectorParameters(getParam("-dp", argc, argv), detectorParams);
if(!readOk) {
cerr << "Invalid detector parameters file" << endl;
return 0;
}
}
detectorParams.doCornerRefinement = true; // do corner refinement in markers
bool refindStrategy = false;
if(isParam("-rs", argc, argv)) refindStrategy = true;
VideoCapture inputVideo;
int waitTime;
if(isParam("-v", argc, argv)) {
inputVideo.open(getParam("-v", argc, argv));
waitTime = 0;
} else {
int camId = 0;
if(isParam("-ci", argc, argv)) camId = atoi(getParam("-ci", argc, argv).c_str());
inputVideo.open(camId);
waitTime = 10;
}
float axisLength = 0.5f * ((float)min(markersX, markersY) * (markerLength + markerSeparation) +
markerSeparation);
// create board object
aruco::GridBoard board =
aruco::GridBoard::create(markersX, markersY, markerLength, markerSeparation, dictionary);
double totalTime = 0;
int totalIterations = 0;
while(inputVideo.grab()) {
Mat image, imageCopy;
inputVideo.retrieve(image);
double tick = (double)getTickCount();
vector< int > ids;
vector< vector< Point2f > > corners, rejected;
Mat rvec, tvec;
// detect markers
aruco::detectMarkers(image, dictionary, corners, ids, detectorParams, rejected);
// refind strategy to detect more markers
if(refindStrategy)
aruco::refineDetectedMarkers(image, board, corners, ids, rejected, camMatrix,
distCoeffs);
// estimate board pose
int markersOfBoardDetected = 0;
if(ids.size() > 0)
markersOfBoardDetected =
aruco::estimatePoseBoard(corners, ids, board, camMatrix, distCoeffs, rvec, tvec);
double currentTime = ((double)getTickCount() - tick) / getTickFrequency();
totalTime += currentTime;
totalIterations++;
if(totalIterations % 30 == 0) {
cout << "Detection Time = " << currentTime * 1000 << " ms "
<< "(Mean = " << 1000 * totalTime / double(totalIterations) << " ms)" << endl;
}
// draw results
image.copyTo(imageCopy);
if(ids.size() > 0) {
aruco::drawDetectedMarkers(imageCopy, corners, ids);
}
if(showRejected && rejected.size() > 0)
aruco::drawDetectedMarkers(imageCopy, rejected, noArray(), Scalar(100, 0, 255));
if(markersOfBoardDetected > 0)
aruco::drawAxis(imageCopy, camMatrix, distCoeffs, rvec, tvec, axisLength);
imshow("out", imageCopy);
char key = (char)waitKey(waitTime);
if(key == 27) break;
}
return 0;
}

@@ -0,0 +1,267 @@
/*
By downloading, copying, installing or using the software you agree to this
license. If you do not agree to this license, do not download, install,
copy or use the software.
License Agreement
For Open Source Computer Vision Library
(3-clause BSD License)
Copyright (C) 2013, OpenCV Foundation, all rights reserved.
Third party copyrights are property of their respective owners.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the names of the copyright holders nor the names of the contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and
any express or implied warranties, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose are
disclaimed. In no event shall copyright holders or contributors be liable for
any direct, indirect, incidental, special, exemplary, or consequential damages
(including, but not limited to, procurement of substitute goods or services;
loss of use, data, or profits; or business interruption) however caused
and on any theory of liability, whether in contract, strict liability,
or tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of such damage.
*/
#include <opencv2/highgui.hpp>
#include <opencv2/aruco/charuco.hpp>
#include <vector>
#include <iostream>
using namespace std;
using namespace cv;
/**
*/
static void help() {
cout << "Pose estimation using a ChArUco board" << endl;
cout << "Parameters: " << endl;
cout << "-w <nsquares> # Number of squares in X direction" << endl;
cout << "-h <nsquares> # Number of squares in Y direction" << endl;
cout << "-sl <squareLength> # Square side length (in meters)" << endl;
cout << "-ml <markerLength> # Marker side length (in meters)" << endl;
cout << "-d <dictionary> # DICT_4X4_50=0, DICT_4X4_100=1, DICT_4X4_250=2, "
<< "DICT_4X4_1000=3, DICT_5X5_50=4, DICT_5X5_100=5, DICT_5X5_250=6, DICT_5X5_1000=7, "
<< "DICT_6X6_50=8, DICT_6X6_100=9, DICT_6X6_250=10, DICT_6X6_1000=11, DICT_7X7_50=12,"
<< "DICT_7X7_100=13, DICT_7X7_250=14, DICT_7X7_1000=15, DICT_ARUCO_ORIGINAL = 16" << endl;
cout << "[-c <cameraParams>] # Camera intrinsic parameters file" << endl;
cout << "[-v <videoFile>] # Input from video file, if omitted, input comes from camera" << endl;
cout << "[-ci <int>] # Camera id if input doesn't come from video (-v). Default is 0" << endl;
cout << "[-dp <detectorParams>] # File of marker detector parameters" << endl;
cout << "[-rs] # Apply refind strategy" << endl;
cout << "[-r] # show rejected candidates too" << endl;
}
/**
*/
static bool isParam(string param, int argc, char **argv) {
for(int i = 0; i < argc; i++)
if(string(argv[i]) == param) return true;
return false;
}
/**
*/
static string getParam(string param, int argc, char **argv, string defvalue = "") {
int idx = -1;
for(int i = 0; i < argc && idx == -1; i++)
if(string(argv[i]) == param) idx = i;
if(idx == -1 || (idx + 1) >= argc)
return defvalue;
else
return argv[idx + 1];
}
/**
*/
static bool readCameraParameters(string filename, Mat &camMatrix, Mat &distCoeffs) {
FileStorage fs(filename, FileStorage::READ);
if(!fs.isOpened())
return false;
fs["camera_matrix"] >> camMatrix;
fs["distortion_coefficients"] >> distCoeffs;
return true;
}
/**
*/
static bool readDetectorParameters(string filename, aruco::DetectorParameters &params) {
FileStorage fs(filename, FileStorage::READ);
if(!fs.isOpened())
return false;
fs["adaptiveThreshWinSizeMin"] >> params.adaptiveThreshWinSizeMin;
fs["adaptiveThreshWinSizeMax"] >> params.adaptiveThreshWinSizeMax;
fs["adaptiveThreshWinSizeStep"] >> params.adaptiveThreshWinSizeStep;
fs["adaptiveThreshConstant"] >> params.adaptiveThreshConstant;
fs["minMarkerPerimeterRate"] >> params.minMarkerPerimeterRate;
fs["maxMarkerPerimeterRate"] >> params.maxMarkerPerimeterRate;
fs["polygonalApproxAccuracyRate"] >> params.polygonalApproxAccuracyRate;
fs["minCornerDistanceRate"] >> params.minCornerDistanceRate;
fs["minDistanceToBorder"] >> params.minDistanceToBorder;
fs["minMarkerDistanceRate"] >> params.minMarkerDistanceRate;
fs["doCornerRefinement"] >> params.doCornerRefinement;
fs["cornerRefinementWinSize"] >> params.cornerRefinementWinSize;
fs["cornerRefinementMaxIterations"] >> params.cornerRefinementMaxIterations;
fs["cornerRefinementMinAccuracy"] >> params.cornerRefinementMinAccuracy;
fs["markerBorderBits"] >> params.markerBorderBits;
fs["perspectiveRemovePixelPerCell"] >> params.perspectiveRemovePixelPerCell;
fs["perspectiveRemoveIgnoredMarginPerCell"] >> params.perspectiveRemoveIgnoredMarginPerCell;
fs["maxErroneousBitsInBorderRate"] >> params.maxErroneousBitsInBorderRate;
fs["minOtsuStdDev"] >> params.minOtsuStdDev;
fs["errorCorrectionRate"] >> params.errorCorrectionRate;
return true;
}
/**
*/
int main(int argc, char *argv[]) {
if(!isParam("-w", argc, argv) || !isParam("-h", argc, argv) || !isParam("-sl", argc, argv) ||
!isParam("-ml", argc, argv) || !isParam("-d", argc, argv)) {
help();
return 0;
}
int squaresX = atoi(getParam("-w", argc, argv).c_str());
int squaresY = atoi(getParam("-h", argc, argv).c_str());
float squareLength = (float)atof(getParam("-sl", argc, argv).c_str());
float markerLength = (float)atof(getParam("-ml", argc, argv).c_str());
int dictionaryId = atoi(getParam("-d", argc, argv).c_str());
aruco::Dictionary dictionary =
aruco::getPredefinedDictionary(aruco::PREDEFINED_DICTIONARY_NAME(dictionaryId));
bool showRejected = false;
if(isParam("-r", argc, argv)) showRejected = true;
Mat camMatrix, distCoeffs;
if(isParam("-c", argc, argv)) {
bool readOk = readCameraParameters(getParam("-c", argc, argv), camMatrix, distCoeffs);
if(!readOk) {
cerr << "Invalid camera file" << endl;
return 0;
}
}
aruco::DetectorParameters detectorParams;
if(isParam("-dp", argc, argv)) {
bool readOk = readDetectorParameters(getParam("-dp", argc, argv), detectorParams);
if(!readOk) {
cerr << "Invalid detector parameters file" << endl;
return 0;
}
}
bool refindStrategy = false;
if(isParam("-rs", argc, argv)) refindStrategy = true;
VideoCapture inputVideo;
int waitTime;
if(isParam("-v", argc, argv)) {
inputVideo.open(getParam("-v", argc, argv));
waitTime = 0;
} else {
int camId = 0;
if(isParam("-ci", argc, argv)) camId = atoi(getParam("-ci", argc, argv).c_str());
inputVideo.open(camId);
waitTime = 10;
}
float axisLength = 0.5f * ((float)min(squaresX, squaresY) * (squareLength));
// create charuco board object
aruco::CharucoBoard board =
aruco::CharucoBoard::create(squaresX, squaresY, squareLength, markerLength, dictionary);
double totalTime = 0;
int totalIterations = 0;
while(inputVideo.grab()) {
Mat image, imageCopy;
inputVideo.retrieve(image);
double tick = (double)getTickCount();
vector< int > markerIds, charucoIds;
vector< vector< Point2f > > markerCorners, rejectedMarkers;
vector< Point2f > charucoCorners;
Mat rvec, tvec;
// detect markers
aruco::detectMarkers(image, dictionary, markerCorners, markerIds, detectorParams,
rejectedMarkers);
// refind strategy to detect more markers
if(refindStrategy)
aruco::refineDetectedMarkers(image, board, markerCorners, markerIds, rejectedMarkers,
camMatrix, distCoeffs);
// interpolate charuco corners
int interpolatedCorners = 0;
if(markerIds.size() > 0)
interpolatedCorners =
aruco::interpolateCornersCharuco(markerCorners, markerIds, image, board,
charucoCorners, charucoIds, camMatrix, distCoeffs);
// estimate charuco board pose
bool validPose = false;
if(camMatrix.total() != 0)
validPose = aruco::estimatePoseCharucoBoard(charucoCorners, charucoIds, board,
camMatrix, distCoeffs, rvec, tvec);
double currentTime = ((double)getTickCount() - tick) / getTickFrequency();
totalTime += currentTime;
totalIterations++;
if(totalIterations % 30 == 0) {
cout << "Detection Time = " << currentTime * 1000 << " ms "
<< "(Mean = " << 1000 * totalTime / double(totalIterations) << " ms)" << endl;
}
// draw results
image.copyTo(imageCopy);
if(markerIds.size() > 0) {
aruco::drawDetectedMarkers(imageCopy, markerCorners);
}
if(showRejected && rejectedMarkers.size() > 0)
aruco::drawDetectedMarkers(imageCopy, rejectedMarkers, noArray(), Scalar(100, 0, 255));
if(interpolatedCorners > 0) {
Scalar color;
color = Scalar(0, 0, 255);
aruco::drawDetectedCornersCharuco(imageCopy, charucoCorners, charucoIds, color);
}
if(validPose)
aruco::drawAxis(imageCopy, camMatrix, distCoeffs, rvec, tvec, axisLength);
imshow("out", imageCopy);
char key = (char)waitKey(waitTime);
if(key == 27) break;
}
return 0;
}

@@ -0,0 +1,272 @@
#include <opencv2/highgui.hpp>
#include <opencv2/aruco/charuco.hpp>
#include <vector>
#include <iostream>
using namespace std;
using namespace cv;
/**
*/
static void help() {
cout << "Detect ChArUco diamonds" << endl;
cout << "Parameters: " << endl;
cout << "-sl <squareLength> # Square side length (in meters)" << endl;
cout << "-ml <markerLength> # Marker side length (in meters)" << endl;
cout << "-d <dictionary> # DICT_4X4_50=0, DICT_4X4_100=1, DICT_4X4_250=2, "
<< "DICT_4X4_1000=3, DICT_5X5_50=4, DICT_5X5_100=5, DICT_5X5_250=6, DICT_5X5_1000=7, "
<< "DICT_6X6_50=8, DICT_6X6_100=9, DICT_6X6_250=10, DICT_6X6_1000=11, DICT_7X7_50=12,"
<< "DICT_7X7_100=13, DICT_7X7_250=14, DICT_7X7_1000=15, DICT_ARUCO_ORIGINAL = 16" << endl;
cout << "[-c <cameraParams>] # Camera intrinsic parameters file" << endl;
cout << "[-as <float>] # Automatic scale. The square length is obtained by "
<< "multiplying this value by the last diamond id, so -sl and "
<< "-ml are only used to set the relative length between "
<< "squares and markers" << endl;
cout << "[-v <videoFile>] # Input from video file, if omitted, input comes from camera" << endl;
cout << "[-ci <int>] # Camera id if input doesn't come from video (-v). Default is 0" << endl;
cout << "[-dp <detectorParams>] # File of marker detector parameters" << endl;
cout << "[-r] # show rejected candidates too" << endl;
}
/**
*/
static bool isParam(string param, int argc, char **argv) {
for(int i = 0; i < argc; i++)
if(string(argv[i]) == param) return true;
return false;
}
/**
*/
static string getParam(string param, int argc, char **argv, string defvalue = "") {
int idx = -1;
for(int i = 0; i < argc && idx == -1; i++)
if(string(argv[i]) == param) idx = i;
if(idx == -1 || (idx + 1) >= argc)
return defvalue;
else
return argv[idx + 1];
}
/**
*/
static bool readCameraParameters(string filename, Mat &camMatrix, Mat &distCoeffs) {
FileStorage fs(filename, FileStorage::READ);
if(!fs.isOpened())
return false;
fs["camera_matrix"] >> camMatrix;
fs["distortion_coefficients"] >> distCoeffs;
return true;
}
/**
*/
static bool readDetectorParameters(string filename, aruco::DetectorParameters &params) {
FileStorage fs(filename, FileStorage::READ);
if(!fs.isOpened())
return false;
fs["adaptiveThreshWinSizeMin"] >> params.adaptiveThreshWinSizeMin;
fs["adaptiveThreshWinSizeMax"] >> params.adaptiveThreshWinSizeMax;
fs["adaptiveThreshWinSizeStep"] >> params.adaptiveThreshWinSizeStep;
fs["adaptiveThreshConstant"] >> params.adaptiveThreshConstant;
fs["minMarkerPerimeterRate"] >> params.minMarkerPerimeterRate;
fs["maxMarkerPerimeterRate"] >> params.maxMarkerPerimeterRate;
fs["polygonalApproxAccuracyRate"] >> params.polygonalApproxAccuracyRate;
fs["minCornerDistanceRate"] >> params.minCornerDistanceRate;
fs["minDistanceToBorder"] >> params.minDistanceToBorder;
fs["minMarkerDistanceRate"] >> params.minMarkerDistanceRate;
fs["doCornerRefinement"] >> params.doCornerRefinement;
fs["cornerRefinementWinSize"] >> params.cornerRefinementWinSize;
fs["cornerRefinementMaxIterations"] >> params.cornerRefinementMaxIterations;
fs["cornerRefinementMinAccuracy"] >> params.cornerRefinementMinAccuracy;
fs["markerBorderBits"] >> params.markerBorderBits;
fs["perspectiveRemovePixelPerCell"] >> params.perspectiveRemovePixelPerCell;
fs["perspectiveRemoveIgnoredMarginPerCell"] >> params.perspectiveRemoveIgnoredMarginPerCell;
fs["maxErroneousBitsInBorderRate"] >> params.maxErroneousBitsInBorderRate;
fs["minOtsuStdDev"] >> params.minOtsuStdDev;
fs["errorCorrectionRate"] >> params.errorCorrectionRate;
return true;
}
/**
*/
int main(int argc, char *argv[]) {
if(!isParam("-sl", argc, argv) || !isParam("-ml", argc, argv) || !isParam("-d", argc, argv)) {
help();
return 0;
}
float squareLength = (float)atof(getParam("-sl", argc, argv).c_str());
float markerLength = (float)atof(getParam("-ml", argc, argv).c_str());
int dictionaryId = atoi(getParam("-d", argc, argv).c_str());
aruco::Dictionary dictionary =
aruco::getPredefinedDictionary(aruco::PREDEFINED_DICTIONARY_NAME(dictionaryId));
bool showRejected = false;
if(isParam("-r", argc, argv)) showRejected = true;
bool estimatePose = false;
Mat camMatrix, distCoeffs;
if(isParam("-c", argc, argv)) {
bool readOk = readCameraParameters(getParam("-c", argc, argv), camMatrix, distCoeffs);
if(!readOk) {
cerr << "Invalid camera file" << endl;
return 0;
}
estimatePose = true;
}
bool autoScale = false;
float autoScaleFactor = 1.f;
if(isParam("-as", argc, argv)) {
autoScaleFactor = (float)atof(getParam("-as", argc, argv).c_str());
autoScale = true;
}
aruco::DetectorParameters detectorParams;
if(isParam("-dp", argc, argv)) {
bool readOk = readDetectorParameters(getParam("-dp", argc, argv), detectorParams);
if(!readOk) {
cerr << "Invalid detector parameters file" << endl;
return 0;
}
}
VideoCapture inputVideo;
int waitTime;
if(isParam("-v", argc, argv)) {
inputVideo.open(getParam("-v", argc, argv));
waitTime = 0;
} else {
int camId = 0;
if(isParam("-ci", argc, argv)) camId = atoi(getParam("-ci", argc, argv).c_str());
inputVideo.open(camId);
waitTime = 10;
}
double totalTime = 0;
int totalIterations = 0;
while(inputVideo.grab()) {
Mat image, imageCopy;
inputVideo.retrieve(image);
double tick = (double)getTickCount();
vector< int > markerIds;
vector< Vec4i > diamondIds;
vector< vector< Point2f > > markerCorners, rejectedMarkers, diamondCorners;
vector< Mat > rvecs, tvecs;
// detect markers
aruco::detectMarkers(image, dictionary, markerCorners, markerIds, detectorParams,
rejectedMarkers);
// detect diamonds
if(markerIds.size() > 0)
aruco::detectCharucoDiamond(image, markerCorners, markerIds,
squareLength / markerLength, diamondCorners, diamondIds,
camMatrix, distCoeffs);
// estimate diamond pose
if(estimatePose && diamondIds.size() > 0) {
if(!autoScale) {
aruco::estimatePoseSingleMarkers(diamondCorners, squareLength, camMatrix,
distCoeffs, rvecs, tvecs);
} else {
// if autoscale, extract square size from last diamond id
for(unsigned int i = 0; i < diamondCorners.size(); i++) {
float autoSquareLength = autoScaleFactor * float(diamondIds[i].val[3]);
vector< vector< Point2f > > currentCorners;
vector< Mat > currentRvec, currentTvec;
currentCorners.push_back(diamondCorners[i]);
aruco::estimatePoseSingleMarkers(currentCorners, autoSquareLength, camMatrix,
distCoeffs, currentRvec, currentTvec);
rvecs.push_back(currentRvec[0]);
tvecs.push_back(currentTvec[0]);
}
}
}
double currentTime = ((double)getTickCount() - tick) / getTickFrequency();
totalTime += currentTime;
totalIterations++;
if(totalIterations % 30 == 0) {
cout << "Detection Time = " << currentTime * 1000 << " ms "
<< "(Mean = " << 1000 * totalTime / double(totalIterations) << " ms)" << endl;
}
// draw results
image.copyTo(imageCopy);
if(markerIds.size() > 0)
aruco::drawDetectedMarkers(imageCopy, markerCorners);
if(showRejected && rejectedMarkers.size() > 0)
aruco::drawDetectedMarkers(imageCopy, rejectedMarkers, noArray(), Scalar(100, 0, 255));
if(diamondIds.size() > 0) {
aruco::drawDetectedDiamonds(imageCopy, diamondCorners, diamondIds);
if(estimatePose) {
for(unsigned int i = 0; i < diamondIds.size(); i++)
aruco::drawAxis(imageCopy, camMatrix, distCoeffs, rvecs[i], tvecs[i],
squareLength * 0.5f);
}
}
imshow("out", imageCopy);
char key = (char)waitKey(waitTime);
if(key == 27) break;
}
return 0;
}

@@ -0,0 +1,234 @@
#include <opencv2/highgui.hpp>
#include <opencv2/aruco.hpp>
#include <vector>
#include <iostream>
using namespace std;
using namespace cv;
/**
*/
static void help() {
cout << "Basic marker detection" << endl;
cout << "Parameters: " << endl;
cout << "-d <dictionary> # DICT_4X4_50=0, DICT_4X4_100=1, DICT_4X4_250=2, "
<< "DICT_4X4_1000=3, DICT_5X5_50=4, DICT_5X5_100=5, DICT_5X5_250=6, DICT_5X5_1000=7, "
<< "DICT_6X6_50=8, DICT_6X6_100=9, DICT_6X6_250=10, DICT_6X6_1000=11, DICT_7X7_50=12,"
<< "DICT_7X7_100=13, DICT_7X7_250=14, DICT_7X7_1000=15, DICT_ARUCO_ORIGINAL = 16" << endl;
cout << "[-v <videoFile>] # Input from video file, if omitted, input comes from camera" << endl;
cout << "[-ci <int>] # Camera id if input doesn't come from video (-v). Default is 0" << endl;
cout << "[-c <cameraParams>] # Camera intrinsic parameters. Needed for camera pose" << endl;
cout << "[-l <markerLength>] # Marker side length (in meters). Needed for correct "
<< "scale in camera pose, default 0.1" << endl;
cout << "[-dp <detectorParams>] # File of marker detector parameters" << endl;
cout << "[-r] # show rejected candidates too" << endl;
}
/**
*/
static bool isParam(string param, int argc, char **argv) {
for(int i = 0; i < argc; i++)
if(string(argv[i]) == param) return true;
return false;
}
/**
*/
static string getParam(string param, int argc, char **argv, string defvalue = "") {
int idx = -1;
for(int i = 0; i < argc && idx == -1; i++)
if(string(argv[i]) == param) idx = i;
if(idx == -1 || (idx + 1) >= argc)
return defvalue;
else
return argv[idx + 1];
}
/**
*/
static bool readCameraParameters(string filename, Mat &camMatrix, Mat &distCoeffs) {
FileStorage fs(filename, FileStorage::READ);
if(!fs.isOpened())
return false;
fs["camera_matrix"] >> camMatrix;
fs["distortion_coefficients"] >> distCoeffs;
return true;
}
/**
*/
static bool readDetectorParameters(string filename, aruco::DetectorParameters &params) {
FileStorage fs(filename, FileStorage::READ);
if(!fs.isOpened())
return false;
fs["adaptiveThreshWinSizeMin"] >> params.adaptiveThreshWinSizeMin;
fs["adaptiveThreshWinSizeMax"] >> params.adaptiveThreshWinSizeMax;
fs["adaptiveThreshWinSizeStep"] >> params.adaptiveThreshWinSizeStep;
fs["adaptiveThreshConstant"] >> params.adaptiveThreshConstant;
fs["minMarkerPerimeterRate"] >> params.minMarkerPerimeterRate;
fs["maxMarkerPerimeterRate"] >> params.maxMarkerPerimeterRate;
fs["polygonalApproxAccuracyRate"] >> params.polygonalApproxAccuracyRate;
fs["minCornerDistanceRate"] >> params.minCornerDistanceRate;
fs["minDistanceToBorder"] >> params.minDistanceToBorder;
fs["minMarkerDistanceRate"] >> params.minMarkerDistanceRate;
fs["doCornerRefinement"] >> params.doCornerRefinement;
fs["cornerRefinementWinSize"] >> params.cornerRefinementWinSize;
fs["cornerRefinementMaxIterations"] >> params.cornerRefinementMaxIterations;
fs["cornerRefinementMinAccuracy"] >> params.cornerRefinementMinAccuracy;
fs["markerBorderBits"] >> params.markerBorderBits;
fs["perspectiveRemovePixelPerCell"] >> params.perspectiveRemovePixelPerCell;
fs["perspectiveRemoveIgnoredMarginPerCell"] >> params.perspectiveRemoveIgnoredMarginPerCell;
fs["maxErroneousBitsInBorderRate"] >> params.maxErroneousBitsInBorderRate;
fs["minOtsuStdDev"] >> params.minOtsuStdDev;
fs["errorCorrectionRate"] >> params.errorCorrectionRate;
return true;
}
/**
*/
int main(int argc, char *argv[]) {
if(!isParam("-d", argc, argv)) {
help();
return 0;
}
int dictionaryId = atoi(getParam("-d", argc, argv).c_str());
aruco::Dictionary dictionary =
aruco::getPredefinedDictionary(aruco::PREDEFINED_DICTIONARY_NAME(dictionaryId));
bool showRejected = false;
if(isParam("-r", argc, argv)) showRejected = true;
bool estimatePose = false;
Mat camMatrix, distCoeffs;
if(isParam("-c", argc, argv)) {
bool readOk = readCameraParameters(getParam("-c", argc, argv), camMatrix, distCoeffs);
if(!readOk) {
cerr << "Invalid camera file" << endl;
return 0;
}
estimatePose = true;
}
float markerLength = (float)atof(getParam("-l", argc, argv, "0.1").c_str());
aruco::DetectorParameters detectorParams;
if(isParam("-dp", argc, argv)) {
bool readOk = readDetectorParameters(getParam("-dp", argc, argv), detectorParams);
if(!readOk) {
cerr << "Invalid detector parameters file" << endl;
return 0;
}
}
detectorParams.doCornerRefinement = true; // do corner refinement in markers
VideoCapture inputVideo;
int waitTime;
if(isParam("-v", argc, argv)) {
inputVideo.open(getParam("-v", argc, argv));
waitTime = 0;
} else {
int camId = 0;
if(isParam("-ci", argc, argv)) camId = atoi(getParam("-ci", argc, argv).c_str());
inputVideo.open(camId);
waitTime = 10;
}
double totalTime = 0;
int totalIterations = 0;
while(inputVideo.grab()) {
Mat image, imageCopy;
inputVideo.retrieve(image);
double tick = (double)getTickCount();
vector< int > ids;
vector< vector< Point2f > > corners, rejected;
vector< Mat > rvecs, tvecs;
// detect markers and estimate pose
aruco::detectMarkers(image, dictionary, corners, ids, detectorParams, rejected);
if(estimatePose && ids.size() > 0)
aruco::estimatePoseSingleMarkers(corners, markerLength, camMatrix, distCoeffs, rvecs,
tvecs);
double currentTime = ((double)getTickCount() - tick) / getTickFrequency();
totalTime += currentTime;
totalIterations++;
if(totalIterations % 30 == 0) {
cout << "Detection Time = " << currentTime * 1000 << " ms "
<< "(Mean = " << 1000 * totalTime / double(totalIterations) << " ms)" << endl;
}
// draw results
image.copyTo(imageCopy);
if(ids.size() > 0) {
aruco::drawDetectedMarkers(imageCopy, corners, ids);
if(estimatePose) {
for(unsigned int i = 0; i < ids.size(); i++)
aruco::drawAxis(imageCopy, camMatrix, distCoeffs, rvecs[i], tvecs[i],
markerLength * 0.5f);
}
}
if(showRejected && rejected.size() > 0)
aruco::drawDetectedMarkers(imageCopy, rejected, noArray(), Scalar(100, 0, 255));
imshow("out", imageCopy);
char key = (char)waitKey(waitTime);
if(key == 27) break;
}
return 0;
}

@@ -0,0 +1,20 @@
%YAML:1.0
adaptiveThreshWinSizeMin: 3
adaptiveThreshWinSizeMax: 23
adaptiveThreshWinSizeStep: 10
adaptiveThreshConstant: 7
minMarkerPerimeterRate: 0.03
maxMarkerPerimeterRate: 4.0
polygonalApproxAccuracyRate: 0.05
minCornerDistanceRate: 0.05
minDistanceToBorder: 3
minMarkerDistanceRate: 0.05
doCornerRefinement: 0
cornerRefinementWinSize: 5
cornerRefinementMaxIterations: 30
cornerRefinementMinAccuracy: 0.1
markerBorderBits: 1
perspectiveRemovePixelPerCell: 8
perspectiveRemoveIgnoredMarginPerCell: 0.13
maxErroneousBitsInBorderRate: 0.04
minOtsuStdDev: 5.0
errorCorrectionRate: 0.6

File diff suppressed because it is too large

@@ -0,0 +1,946 @@
#include "precomp.hpp"
#include "opencv2/aruco/charuco.hpp"
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
namespace cv {
namespace aruco {
using namespace std;
/**
*/
void CharucoBoard::draw(Size outSize, OutputArray _img, int marginSize, int borderBits) {
CV_Assert(outSize.area() > 0);
CV_Assert(marginSize >= 0);
_img.create(outSize, CV_8UC1);
_img.setTo(255);
Mat out = _img.getMat();
Mat noMarginsImg =
out.colRange(marginSize, out.cols - marginSize).rowRange(marginSize, out.rows - marginSize);
double totalLengthX, totalLengthY;
totalLengthX = _squareLength * _squaresX;
totalLengthY = _squareLength * _squaresY;
// proportional transformation
double xReduction = totalLengthX / double(noMarginsImg.cols);
double yReduction = totalLengthY / double(noMarginsImg.rows);
// determine the zone where the chessboard is placed
Mat chessboardZoneImg;
if(xReduction > yReduction) {
int nRows = int(totalLengthY / xReduction);
int rowsMargins = (noMarginsImg.rows - nRows) / 2;
chessboardZoneImg = noMarginsImg.rowRange(rowsMargins, noMarginsImg.rows - rowsMargins);
} else {
int nCols = int(totalLengthX / yReduction);
int colsMargins = (noMarginsImg.cols - nCols) / 2;
chessboardZoneImg = noMarginsImg.colRange(colsMargins, noMarginsImg.cols - colsMargins);
}
// determine the margins to draw only the markers
// take the minimum just to be sure
double squareSizePixels = min(double(chessboardZoneImg.cols) / double(_squaresX),
double(chessboardZoneImg.rows) / double(_squaresY));
double diffSquareMarkerLength = (_squareLength - _markerLength) / 2;
int diffSquareMarkerLengthPixels =
int(diffSquareMarkerLength * squareSizePixels / _squareLength);
// draw markers
Mat markersImg;
aruco::drawPlanarBoard((*this), chessboardZoneImg.size(), markersImg,
diffSquareMarkerLengthPixels, borderBits);
markersImg.copyTo(chessboardZoneImg);
// now draw black squares
for(int y = 0; y < _squaresY; y++) {
for(int x = 0; x < _squaresX; x++) {
if(y % 2 != x % 2) continue; // white corner, don't do anything
double startX, startY;
startX = squareSizePixels * double(x);
startY = double(chessboardZoneImg.rows) - squareSizePixels * double(y + 1);
Mat squareZone = chessboardZoneImg.rowRange(int(startY), int(startY + squareSizePixels))
.colRange(int(startX), int(startX + squareSizePixels));
squareZone.setTo(0);
}
}
}
/**
*/
CharucoBoard CharucoBoard::create(int squaresX, int squaresY, float squareLength,
float markerLength, Dictionary dictionary) {
CV_Assert(squaresX > 1 && squaresY > 1 && markerLength > 0 && squareLength > markerLength);
CharucoBoard res;
res._squaresX = squaresX;
res._squaresY = squaresY;
res._squareLength = squareLength;
res._markerLength = markerLength;
res.dictionary = dictionary;
float diffSquareMarkerLength = (squareLength - markerLength) / 2;
// calculate Board objPoints
for(int y = squaresY - 1; y >= 0; y--) {
for(int x = 0; x < squaresX; x++) {
if(y % 2 == x % 2) continue; // black corner, no marker here
vector< Point3f > corners;
corners.resize(4);
corners[0] = Point3f(x * squareLength + diffSquareMarkerLength,
y * squareLength + diffSquareMarkerLength + markerLength, 0);
corners[1] = corners[0] + Point3f(markerLength, 0, 0);
corners[2] = corners[0] + Point3f(markerLength, -markerLength, 0);
corners[3] = corners[0] + Point3f(0, -markerLength, 0);
res.objPoints.push_back(corners);
// first ids in dictionary
int nextId = (int)res.ids.size();
res.ids.push_back(nextId);
}
}
// now fill chessboardCorners
for(int y = 0; y < squaresY - 1; y++) {
for(int x = 0; x < squaresX - 1; x++) {
Point3f corner;
corner.x = (x + 1) * squareLength;
corner.y = (y + 1) * squareLength;
corner.z = 0;
res.chessboardCorners.push_back(corner);
}
}
res._getNearestMarkerCorners();
return res;
}
/**
* Fill nearestMarkerIdx and nearestMarkerCorners arrays
*/
void CharucoBoard::_getNearestMarkerCorners() {
nearestMarkerIdx.resize(chessboardCorners.size());
nearestMarkerCorners.resize(chessboardCorners.size());
unsigned int nMarkers = (unsigned int)ids.size();
unsigned int nCharucoCorners = (unsigned int)chessboardCorners.size();
for(unsigned int i = 0; i < nCharucoCorners; i++) {
double minDist = -1; // distance of closest markers
Point3f charucoCorner = chessboardCorners[i];
for(unsigned int j = 0; j < nMarkers; j++) {
// calculate distance from marker center to charuco corner
Point3f center = Point3f(0, 0, 0);
for(unsigned int k = 0; k < 4; k++)
center += objPoints[j][k];
center /= 4.;
double sqDistance;
Point3f distVector = charucoCorner - center;
sqDistance = distVector.x * distVector.x + distVector.y * distVector.y;
if(j == 0 || fabs(sqDistance - minDist) < 0.0001) {
// if same minimum distance (or first iteration), add to nearestMarkerIdx vector
nearestMarkerIdx[i].push_back(j);
minDist = sqDistance;
} else if(sqDistance < minDist) {
// a strictly closer marker has been found
nearestMarkerIdx[i].clear(); // remove any previously added marker
nearestMarkerIdx[i].push_back(j); // add the new closest marker index
minDist = sqDistance;
}
}
// for each of the closest markers, search the marker corner index closest
// to the charuco corner
nearestMarkerCorners[i].resize(nearestMarkerIdx[i].size());
for(unsigned int j = 0; j < nearestMarkerIdx[i].size(); j++) {
double minDistCorner = -1;
for(unsigned int k = 0; k < 4; k++) {
double sqDistance;
Point3f distVector = charucoCorner - objPoints[nearestMarkerIdx[i][j]][k];
sqDistance = distVector.x * distVector.x + distVector.y * distVector.y;
if(k == 0 || sqDistance < minDistCorner) {
// if this corner is closer to the charuco corner, assign its index
// to nearestMarkerCorners
minDistCorner = sqDistance;
nearestMarkerCorners[i][j] = k;
}
}
}
}
}
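The tie handling above is easiest to see in isolation. A hypothetical standalone sketch (plain structs instead of cv::Point3f, same epsilon logic, illustrative names only): keep every marker whose squared XY distance to the corner ties the current minimum within a small epsilon, otherwise restart the list with the closer marker.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct P { double x, y; };

// return the indices of all marker centers nearest to the corner,
// treating squared distances within 0.0001 of each other as ties
std::vector<int> nearestMarkers(const P &corner, const std::vector<P> &centers) {
    std::vector<int> nearest;
    double minDist = -1;
    for (int j = 0; j < (int)centers.size(); j++) {
        double dx = corner.x - centers[j].x, dy = corner.y - centers[j].y;
        double sq = dx * dx + dy * dy;
        if (j == 0 || std::fabs(sq - minDist) < 0.0001) {
            nearest.push_back(j); // first marker or a distance tie
            minDist = sq;
        } else if (sq < minDist) {
            nearest.clear();      // strictly closer marker found
            nearest.push_back(j);
            minDist = sq;
        }
    }
    return nearest;
}
```

With a corner at the origin and centers at (1,0), (0,1) and (2,0), the first two tie and both indices are kept.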
/**
* Remove charuco corners if fewer than minMarkers of their closest markers have been detected
*/
static unsigned int _filterCornersWithoutMinMarkers(const CharucoBoard &board,
InputArray _allCharucoCorners,
InputArray _allCharucoIds,
InputArray _allArucoIds, int minMarkers,
OutputArray _filteredCharucoCorners,
OutputArray _filteredCharucoIds) {
CV_Assert(minMarkers >= 0 && minMarkers <= 2);
vector< Point2f > filteredCharucoCorners;
vector< int > filteredCharucoIds;
// for each charuco corner
for(unsigned int i = 0; i < _allCharucoIds.getMat().total(); i++) {
int currentCharucoId = _allCharucoIds.getMat().ptr< int >(0)[i];
int totalMarkers = 0; // number of closest markers detected
// look for closest markers
for(unsigned int m = 0; m < board.nearestMarkerIdx[currentCharucoId].size(); m++) {
int markerId = board.ids[board.nearestMarkerIdx[currentCharucoId][m]];
bool found = false;
for(unsigned int k = 0; k < _allArucoIds.getMat().total(); k++) {
if(_allArucoIds.getMat().ptr< int >(0)[k] == markerId) {
found = true;
break;
}
}
if(found) totalMarkers++;
}
// if enough markers detected, add the charuco corner to the final list
if(totalMarkers >= minMarkers) {
filteredCharucoIds.push_back(currentCharucoId);
filteredCharucoCorners.push_back(_allCharucoCorners.getMat().ptr< Point2f >(0)[i]);
}
}
// parse output
_filteredCharucoCorners.create((int)filteredCharucoCorners.size(), 1, CV_32FC2);
for(unsigned int i = 0; i < filteredCharucoCorners.size(); i++) {
_filteredCharucoCorners.getMat().ptr< Point2f >(0)[i] = filteredCharucoCorners[i];
}
_filteredCharucoIds.create((int)filteredCharucoIds.size(), 1, CV_32SC1);
for(unsigned int i = 0; i < filteredCharucoIds.size(); i++) {
_filteredCharucoIds.getMat().ptr< int >(0)[i] = filteredCharucoIds[i];
}
return (unsigned int)filteredCharucoCorners.size();
}
/**
* ParallelLoopBody class for the parallelization of the charuco corners subpixel refinement
* Called from function _selectAndRefineChessboardCorners()
*/
class CharucoSubpixelParallel : public ParallelLoopBody {
public:
CharucoSubpixelParallel(const Mat *_grey, vector< Point2f > *_filteredChessboardImgPoints,
vector< Size > *_filteredWinSizes, DetectorParameters *_params)
: grey(_grey), filteredChessboardImgPoints(_filteredChessboardImgPoints),
filteredWinSizes(_filteredWinSizes), params(_params) {}
void operator()(const Range &range) const {
const int begin = range.start;
const int end = range.end;
for(int i = begin; i < end; i++) {
vector< Point2f > in;
in.push_back((*filteredChessboardImgPoints)[i]);
Size winSize = (*filteredWinSizes)[i];
if(winSize.height == -1 || winSize.width == -1)
winSize = Size(params->cornerRefinementWinSize, params->cornerRefinementWinSize);
cornerSubPix(*grey, in, winSize, Size(),
TermCriteria(TermCriteria::MAX_ITER | TermCriteria::EPS,
params->cornerRefinementMaxIterations,
params->cornerRefinementMinAccuracy));
(*filteredChessboardImgPoints)[i] = in[0];
}
}
private:
CharucoSubpixelParallel &operator=(const CharucoSubpixelParallel &); // to quiet MSVC
const Mat *grey;
vector< Point2f > *filteredChessboardImgPoints;
vector< Size > *filteredWinSizes;
DetectorParameters *params;
};
/**
* @brief From all projected chessboard corners, select those inside the image and apply subpixel
* refinement. Returns number of valid corners.
*/
static unsigned int _selectAndRefineChessboardCorners(InputArray _allCorners, InputArray _image,
OutputArray _selectedCorners,
OutputArray _selectedIds,
const vector< Size > &winSizes) {
const int minDistToBorder = 2; // minimum distance of the corner to the image border
// remaining corners, ids and window refinement sizes after removing corners outside the image
vector< Point2f > filteredChessboardImgPoints;
vector< Size > filteredWinSizes;
vector< int > filteredIds;
// filter corners outside the image
Rect innerRect(minDistToBorder, minDistToBorder, _image.getMat().cols - 2 * minDistToBorder,
_image.getMat().rows - 2 * minDistToBorder);
for(unsigned int i = 0; i < _allCorners.getMat().total(); i++) {
if(innerRect.contains(_allCorners.getMat().ptr< Point2f >(0)[i])) {
filteredChessboardImgPoints.push_back(_allCorners.getMat().ptr< Point2f >(0)[i]);
filteredIds.push_back(i);
filteredWinSizes.push_back(winSizes[i]);
}
}
// if none valid, return 0
if(filteredChessboardImgPoints.size() == 0) return 0;
// corner refinement, first convert input image to grey
Mat grey;
if(_image.getMat().type() == CV_8UC3)
cvtColor(_image.getMat(), grey, COLOR_BGR2GRAY);
else
_image.getMat().copyTo(grey);
DetectorParameters params; // use default params for corner refinement
// for each of the charuco corners, apply subpixel refinement in parallel using its
// corresponding winSize (see CharucoSubpixelParallel above)
parallel_for_(
Range(0, (int)filteredChessboardImgPoints.size()),
CharucoSubpixelParallel(&grey, &filteredChessboardImgPoints, &filteredWinSizes, &params));
// parse output
_selectedCorners.create((int)filteredChessboardImgPoints.size(), 1, CV_32FC2);
for(unsigned int i = 0; i < filteredChessboardImgPoints.size(); i++) {
_selectedCorners.getMat().ptr< Point2f >(0)[i] = filteredChessboardImgPoints[i];
}
_selectedIds.create((int)filteredIds.size(), 1, CV_32SC1);
for(unsigned int i = 0; i < filteredIds.size(); i++) {
_selectedIds.getMat().ptr< int >(0)[i] = filteredIds[i];
}
return (unsigned int)filteredChessboardImgPoints.size();
}
/**
* Calculate the maximum window sizes for corner refinement for each charuco corner based on the
* distance to its closest markers
*/
static void _getMaximumSubPixWindowSizes(InputArrayOfArrays markerCorners, InputArray markerIds,
InputArray charucoCorners, const CharucoBoard &board,
vector< Size > &sizes) {
unsigned int nCharucoCorners = (unsigned int)charucoCorners.getMat().total();
sizes.resize(nCharucoCorners, Size(-1, -1));
for(unsigned int i = 0; i < nCharucoCorners; i++) {
if(charucoCorners.getMat().ptr< Point2f >(0)[i] == Point2f(-1, -1)) continue;
if(board.nearestMarkerIdx[i].size() == 0) continue;
double minDist = -1;
int counter = 0;
// calculate the distance to each of the closest corner of each closest marker
for(unsigned int j = 0; j < board.nearestMarkerIdx[i].size(); j++) {
// find marker
int markerId = board.ids[board.nearestMarkerIdx[i][j]];
int markerIdx = -1;
for(unsigned int k = 0; k < markerIds.getMat().total(); k++) {
if(markerIds.getMat().ptr< int >(0)[k] == markerId) {
markerIdx = k;
break;
}
}
if(markerIdx == -1) continue;
Point2f markerCorner =
markerCorners.getMat(markerIdx).ptr< Point2f >(0)[board.nearestMarkerCorners[i][j]];
Point2f charucoCorner = charucoCorners.getMat().ptr< Point2f >(0)[i];
double dist = norm(markerCorner - charucoCorner);
if(minDist == -1 || dist < minDist) minDist = dist; // keep the smallest distance
counter++;
}
// if none of the closest markers was detected, leave the default window size
if(counter == 0)
continue;
else {
// else, calculate the maximum window size
int winSizeInt = int(minDist - 2); // remove 2 pixels for safety
if(winSizeInt < 1) winSizeInt = 1; // minimum size is 1
if(winSizeInt > 10) winSizeInt = 10; // maximum size is 10
sizes[i] = Size(winSizeInt, winSizeInt);
}
}
}
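The window-size clamp at the end of the loop reduces to a single expression. A hedged standalone equivalent (the function name is illustrative, not part of the module): subtract the 2-pixel safety margin from the distance to the closest marker corner, then clamp to the [1, 10] range.

```cpp
#include <algorithm>
#include <cassert>

// clamp the subpixel refinement window side derived from the distance
// to the closest marker corner, as done above
int subPixWindowSide(double minDistToMarkerCorner) {
    return std::min(10, std::max(1, int(minDistToMarkerCorner - 2)));
}
```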
/**
* Interpolate charuco corners using approximated pose estimation
*/
static int _interpolateCornersCharucoApproxCalib(InputArrayOfArrays _markerCorners,
InputArray _markerIds, InputArray _image,
const CharucoBoard &board,
InputArray _cameraMatrix, InputArray _distCoeffs,
OutputArray _charucoCorners,
OutputArray _charucoIds) {
CV_Assert(_image.getMat().channels() == 1 || _image.getMat().channels() == 3);
CV_Assert(_markerCorners.total() == _markerIds.getMat().total() &&
_markerIds.getMat().total() > 0);
// approximated pose estimation using marker corners
Mat approximatedRvec, approximatedTvec;
int detectedBoardMarkers;
detectedBoardMarkers =
aruco::estimatePoseBoard(_markerCorners, _markerIds, board, _cameraMatrix, _distCoeffs,
approximatedRvec, approximatedTvec);
if(detectedBoardMarkers == 0) return 0;
// project chessboard corners
vector< Point2f > allChessboardImgPoints;
projectPoints(board.chessboardCorners, approximatedRvec, approximatedTvec, _cameraMatrix,
_distCoeffs, allChessboardImgPoints);
// calculate maximum window sizes for subpixel refinement. The size is limited by the distance
// to the closest marker corner to avoid erroneous displacements to marker corners
vector< Size > subPixWinSizes;
_getMaximumSubPixWindowSizes(_markerCorners, _markerIds, allChessboardImgPoints, board,
subPixWinSizes);
// filter corners outside the image and subpixel-refine charuco corners
unsigned int nRefinedCorners;
nRefinedCorners = _selectAndRefineChessboardCorners(
allChessboardImgPoints, _image, _charucoCorners, _charucoIds, subPixWinSizes);
// to return a charuco corner, its two closest aruco markers should have been detected
nRefinedCorners = _filterCornersWithoutMinMarkers(board, _charucoCorners, _charucoIds,
_markerIds, 2, _charucoCorners, _charucoIds);
return nRefinedCorners;
}
/**
* Interpolate charuco corners using local homography
*/
static int _interpolateCornersCharucoLocalHom(InputArrayOfArrays _markerCorners,
InputArray _markerIds, InputArray _image,
const CharucoBoard &board,
OutputArray _charucoCorners,
OutputArray _charucoIds) {
CV_Assert(_image.getMat().channels() == 1 || _image.getMat().channels() == 3);
CV_Assert(_markerCorners.total() == _markerIds.getMat().total() &&
_markerIds.getMat().total() > 0);
unsigned int nMarkers = (unsigned int)_markerIds.getMat().total();
// calculate local homographies for each marker
vector< Mat > transformations;
transformations.resize(nMarkers);
for(unsigned int i = 0; i < nMarkers; i++) {
vector< Point2f > markerObjPoints2D;
int markerId = _markerIds.getMat().ptr< int >(0)[i];
vector< int >::const_iterator it = find(board.ids.begin(), board.ids.end(), markerId);
if(it == board.ids.end()) continue;
int boardIdx = (int)std::distance(board.ids.begin(), it);
markerObjPoints2D.resize(4);
for(unsigned int j = 0; j < 4; j++)
markerObjPoints2D[j] =
Point2f(board.objPoints[boardIdx][j].x, board.objPoints[boardIdx][j].y);
transformations[i] = getPerspectiveTransform(markerObjPoints2D, _markerCorners.getMat(i));
}
unsigned int nCharucoCorners = (unsigned int)board.chessboardCorners.size();
vector< Point2f > allChessboardImgPoints(nCharucoCorners, Point2f(-1, -1));
// for each charuco corner, calculate its interpolation position based on the closest markers
// homographies
for(unsigned int i = 0; i < nCharucoCorners; i++) {
Point2f objPoint2D = Point2f(board.chessboardCorners[i].x, board.chessboardCorners[i].y);
vector< Point2f > interpolatedPositions;
for(unsigned int j = 0; j < board.nearestMarkerIdx[i].size(); j++) {
int markerId = board.ids[board.nearestMarkerIdx[i][j]];
int markerIdx = -1;
for(unsigned int k = 0; k < _markerIds.getMat().total(); k++) {
if(_markerIds.getMat().ptr< int >(0)[k] == markerId) {
markerIdx = k;
break;
}
}
if(markerIdx != -1) {
vector< Point2f > in, out;
in.push_back(objPoint2D);
perspectiveTransform(in, out, transformations[markerIdx]);
interpolatedPositions.push_back(out[0]);
}
}
// none of the closest markers detected
if(interpolatedPositions.size() == 0) continue;
// more than one closest marker detected, take the midpoint of the first two
if(interpolatedPositions.size() > 1) {
allChessboardImgPoints[i] = (interpolatedPositions[0] + interpolatedPositions[1]) / 2.;
}
// a single closest marker detected
else allChessboardImgPoints[i] = interpolatedPositions[0];
}
// calculate maximum window sizes for subpixel refinement. The size is limited by the distance
// to the closest marker corner to avoid erroneous displacements to marker corners
vector< Size > subPixWinSizes;
_getMaximumSubPixWindowSizes(_markerCorners, _markerIds, allChessboardImgPoints, board,
subPixWinSizes);
// filter corners outside the image and subpixel-refine charuco corners
unsigned int nRefinedCorners;
nRefinedCorners = _selectAndRefineChessboardCorners(
allChessboardImgPoints, _image, _charucoCorners, _charucoIds, subPixWinSizes);
// to return a charuco corner, its two closest aruco markers should have been detected
nRefinedCorners = _filterCornersWithoutMinMarkers(board, _charucoCorners, _charucoIds,
_markerIds, 2, _charucoCorners, _charucoIds);
return nRefinedCorners;
}
/**
*/
int interpolateCornersCharuco(InputArrayOfArrays _markerCorners, InputArray _markerIds,
InputArray _image, const CharucoBoard &board,
OutputArray _charucoCorners, OutputArray _charucoIds,
InputArray _cameraMatrix, InputArray _distCoeffs) {
// if camera parameters are available, use approximated calibration
if(_cameraMatrix.total() != 0) {
return _interpolateCornersCharucoApproxCalib(_markerCorners, _markerIds, _image, board,
_cameraMatrix, _distCoeffs, _charucoCorners,
_charucoIds);
}
// else use local homography
else {
return _interpolateCornersCharucoLocalHom(_markerCorners, _markerIds, _image, board,
_charucoCorners, _charucoIds);
}
}
/**
*/
void drawDetectedCornersCharuco(InputOutputArray _image, InputArray _charucoCorners,
InputArray _charucoIds, Scalar cornerColor) {
CV_Assert(_image.getMat().total() != 0 &&
(_image.getMat().channels() == 1 || _image.getMat().channels() == 3));
CV_Assert((_charucoCorners.getMat().total() == _charucoIds.getMat().total()) ||
_charucoIds.getMat().total() == 0);
unsigned int nCorners = (unsigned int)_charucoCorners.getMat().total();
for(unsigned int i = 0; i < nCorners; i++) {
Point2f corner = _charucoCorners.getMat().ptr< Point2f >(0)[i];
// draw first corner mark
rectangle(_image, corner - Point2f(3, 3), corner + Point2f(3, 3), cornerColor, 1, LINE_AA);
// draw ID
if(_charucoIds.total() != 0) {
int id = _charucoIds.getMat().ptr< int >(0)[i];
stringstream s;
s << "id=" << id;
putText(_image, s.str(), corner + Point2f(5, -5), FONT_HERSHEY_SIMPLEX, 0.5,
cornerColor, 2);
}
}
}
/**
* Check if a set of 3d points is enough for pose estimation. Z coordinate is ignored.
* Only axis-parallel lines are considered
*/
static bool _arePointsEnoughForPoseEstimation(const vector< Point3f > &points) {
if(points.size() < 4) return false;
vector< double > sameXValue; // different x values in points
vector< int > sameXCounter; // number of points with the x value in sameXValue
for(unsigned int i = 0; i < points.size(); i++) {
bool found = false;
for(unsigned int j = 0; j < sameXValue.size(); j++) {
if(sameXValue[j] == points[i].x) {
found = true;
sameXCounter[j]++;
}
}
if(!found) {
sameXValue.push_back(points[i].x);
sameXCounter.push_back(1);
}
}
// count how many x values have at least 2 points
int moreThan2 = 0;
for(unsigned int i = 0; i < sameXCounter.size(); i++) {
if(sameXCounter[i] >= 2) moreThan2++;
}
// pose can be estimated only if at least two x values have two or more points
return moreThan2 > 1;
}
/**
*/
bool estimatePoseCharucoBoard(InputArray _charucoCorners, InputArray _charucoIds,
CharucoBoard &board, InputArray _cameraMatrix, InputArray _distCoeffs,
OutputArray _rvec, OutputArray _tvec) {
CV_Assert((_charucoCorners.getMat().total() == _charucoIds.getMat().total()));
// need, at least, 4 corners
if(_charucoIds.getMat().total() < 4) return false;
vector< Point3f > objPoints;
objPoints.reserve(_charucoIds.getMat().total());
for(unsigned int i = 0; i < _charucoIds.getMat().total(); i++) {
int currId = _charucoIds.getMat().ptr< int >(0)[i];
CV_Assert(currId >= 0 && currId < (int)board.chessboardCorners.size());
objPoints.push_back(board.chessboardCorners[currId]);
}
// points need to lie on different lines; check that the detected points suffice
if(!_arePointsEnoughForPoseEstimation(objPoints)) return false;
solvePnP(objPoints, _charucoCorners, _cameraMatrix, _distCoeffs, _rvec, _tvec);
return true;
}
/**
*/
double calibrateCameraCharuco(InputArrayOfArrays _charucoCorners, InputArrayOfArrays _charucoIds,
const CharucoBoard &board, Size imageSize,
InputOutputArray _cameraMatrix, InputOutputArray _distCoeffs,
OutputArrayOfArrays _rvecs, OutputArrayOfArrays _tvecs, int flags,
TermCriteria criteria) {
CV_Assert(_charucoIds.total() > 0 && (_charucoIds.total() == _charucoCorners.total()));
// Join object points of charuco corners in a single vector for calibrateCamera() function
vector< vector< Point3f > > allObjPoints;
allObjPoints.resize(_charucoIds.total());
for(unsigned int i = 0; i < _charucoIds.total(); i++) {
unsigned int nCorners = (unsigned int)_charucoIds.getMat(i).total();
CV_Assert(nCorners > 0 && nCorners == _charucoCorners.getMat(i).total());
allObjPoints[i].reserve(nCorners);
for(unsigned int j = 0; j < nCorners; j++) {
int pointId = _charucoIds.getMat(i).ptr< int >(0)[j];
CV_Assert(pointId >= 0 && pointId < (int)board.chessboardCorners.size());
allObjPoints[i].push_back(board.chessboardCorners[pointId]);
}
}
return calibrateCamera(allObjPoints, _charucoCorners, imageSize, _cameraMatrix, _distCoeffs,
_rvecs, _tvecs, flags, criteria);
}
/**
*/
void detectCharucoDiamond(InputArray _image, InputArrayOfArrays _markerCorners,
InputArray _markerIds, float squareMarkerLengthRate,
OutputArrayOfArrays _diamondCorners, OutputArray _diamondIds,
InputArray _cameraMatrix, InputArray _distCoeffs) {
CV_Assert(_markerIds.total() > 0 && _markerIds.total() == _markerCorners.total());
const float minRepDistanceRate = 0.12f;
// create Charuco board layout for diamond (3x3 layout)
CharucoBoard charucoDiamondLayout;
Dictionary dict = getPredefinedDictionary(PREDEFINED_DICTIONARY_NAME(0));
charucoDiamondLayout = CharucoBoard::create(3, 3, squareMarkerLengthRate, 1., dict);
vector< vector< Point2f > > diamondCorners;
vector< Vec4i > diamondIds;
// stores whether each detected marker has been assigned to a diamond
vector< bool > assigned(_markerIds.total(), false);
if(_markerIds.total() < 4) return; // a diamond needs at least 4 markers
// convert input image to grey
Mat grey;
if(_image.getMat().type() == CV_8UC3)
cvtColor(_image.getMat(), grey, COLOR_BGR2GRAY);
else
_image.getMat().copyTo(grey);
// for each of the detected markers, try to find a diamond
for(unsigned int i = 0; i < _markerIds.total(); i++) {
if(assigned[i]) continue;
// calculate marker perimeter
float perimeterSq = 0;
Mat corners = _markerCorners.getMat(i);
for(int c = 0; c < 4; c++) {
perimeterSq +=
(corners.ptr< Point2f >()[c].x - corners.ptr< Point2f >()[(c + 1) % 4].x) *
(corners.ptr< Point2f >()[c].x - corners.ptr< Point2f >()[(c + 1) % 4].x) +
(corners.ptr< Point2f >()[c].y - corners.ptr< Point2f >()[(c + 1) % 4].y) *
(corners.ptr< Point2f >()[c].y - corners.ptr< Point2f >()[(c + 1) % 4].y);
}
// maximum allowed squared reprojection error, relative to the squared perimeter
float minRepDistance = perimeterSq * minRepDistanceRate * minRepDistanceRate;
int currentId = _markerIds.getMat().ptr< int >()[i];
// prepare data to call refineDetectedMarkers()
// detected markers (only the current one)
vector< Mat > currentMarker;
vector< int > currentMarkerId;
currentMarker.push_back(_markerCorners.getMat(i));
currentMarkerId.push_back(currentId);
// marker candidates (the rest of markers if they have not been assigned)
vector< Mat > candidates;
vector< int > candidatesIdxs;
for(unsigned int k = 0; k < assigned.size(); k++) {
if(k == i) continue;
if(!assigned[k]) {
candidates.push_back(_markerCorners.getMat(k));
candidatesIdxs.push_back(k);
}
}
if(candidates.size() < 3) break; // we need at least 3 free markers
// modify charuco layout ids to make sure all the ids are different from the current id
for(int k = 1; k < 4; k++)
charucoDiamondLayout.ids[k] = currentId + 1 + k;
// current id is assigned to [0], so it is the marker on the top
charucoDiamondLayout.ids[0] = currentId;
// try to find the rest of markers in the diamond
vector< int > acceptedIdxs;
aruco::refineDetectedMarkers(grey, charucoDiamondLayout, currentMarker, currentMarkerId,
candidates, noArray(), noArray(), minRepDistance, -1, false,
acceptedIdxs);
// if found, we have a diamond
if(currentMarker.size() == 4) {
assigned[i] = true;
// calculate diamond id, acceptedIdxs array indicates the markers taken from candidates
// array
Vec4i markerId;
markerId[0] = currentId;
for(int k = 1; k < 4; k++) {
int currentMarkerIdx = candidatesIdxs[acceptedIdxs[k - 1]];
markerId[k] = _markerIds.getMat().ptr< int >()[currentMarkerIdx];
assigned[currentMarkerIdx] = true;
}
// interpolate the charuco corners of the diamond
vector< Point2f > currentMarkerCorners;
Mat aux;
interpolateCornersCharuco(currentMarker, currentMarkerId, grey, charucoDiamondLayout,
currentMarkerCorners, aux, _cameraMatrix, _distCoeffs);
// if everything is ok, save the diamond
if(currentMarkerCorners.size() > 0) {
// reorder corners
vector< Point2f > currentMarkerCornersReorder;
currentMarkerCornersReorder.resize(4);
currentMarkerCornersReorder[0] = currentMarkerCorners[2];
currentMarkerCornersReorder[1] = currentMarkerCorners[3];
currentMarkerCornersReorder[2] = currentMarkerCorners[1];
currentMarkerCornersReorder[3] = currentMarkerCorners[0];
diamondCorners.push_back(currentMarkerCornersReorder);
diamondIds.push_back(markerId);
}
}
}
if(diamondIds.size() > 0) {
// parse output
_diamondIds.create((int)diamondIds.size(), 1, CV_32SC4);
for(unsigned int i = 0; i < diamondIds.size(); i++)
_diamondIds.getMat().ptr< Vec4i >(0)[i] = diamondIds[i];
_diamondCorners.create((int)diamondCorners.size(), 1, CV_32FC2);
for(unsigned int i = 0; i < diamondCorners.size(); i++) {
_diamondCorners.create(4, 1, CV_32FC2, i, true);
for(int j = 0; j < 4; j++) {
_diamondCorners.getMat(i).ptr< Point2f >()[j] = diamondCorners[i][j];
}
}
}
}
/**
*/
void drawCharucoDiamond(Dictionary dictionary, Vec4i ids, int squareLength, int markerLength,
OutputArray _img, int marginSize, int borderBits) {
CV_Assert(squareLength > 0 && markerLength > 0 && squareLength > markerLength);
CV_Assert(marginSize >= 0 && borderBits > 0);
// create a charuco board similar to a charuco marker and print it
CharucoBoard board =
CharucoBoard::create(3, 3, (float)squareLength, (float)markerLength, dictionary);
// assign the charuco marker ids
for(int i = 0; i < 4; i++)
board.ids[i] = ids[i];
Size outSize(3 * squareLength + 2 * marginSize, 3 * squareLength + 2 * marginSize);
board.draw(outSize, _img, marginSize, borderBits);
}
/**
*/
void drawDetectedDiamonds(InputOutputArray _image, InputArrayOfArrays _corners,
InputArray _ids, Scalar borderColor) {
CV_Assert(_image.getMat().total() != 0 &&
(_image.getMat().channels() == 1 || _image.getMat().channels() == 3));
CV_Assert((_corners.total() == _ids.total()) || _ids.total() == 0);
// calculate colors
Scalar textColor, cornerColor;
textColor = cornerColor = borderColor;
swap(textColor.val[0], textColor.val[1]); // text color: swap B and G channels
swap(cornerColor.val[1], cornerColor.val[2]); // corner color: swap G and R channels
int nMarkers = (int)_corners.total();
for(int i = 0; i < nMarkers; i++) {
Mat currentMarker = _corners.getMat(i);
CV_Assert(currentMarker.total() == 4 && currentMarker.type() == CV_32FC2);
// draw marker sides
for(int j = 0; j < 4; j++) {
Point2f p0, p1;
p0 = currentMarker.ptr< Point2f >(0)[j];
p1 = currentMarker.ptr< Point2f >(0)[(j + 1) % 4];
line(_image, p0, p1, borderColor, 1);
}
// draw first corner mark
rectangle(_image, currentMarker.ptr< Point2f >(0)[0] - Point2f(3, 3),
currentMarker.ptr< Point2f >(0)[0] + Point2f(3, 3), cornerColor, 1, LINE_AA);
// draw id composed by four numbers
if(_ids.total() != 0) {
Point2f cent(0, 0);
for(int p = 0; p < 4; p++)
cent += currentMarker.ptr< Point2f >(0)[p];
cent = cent / 4.;
stringstream s;
s << "id=" << _ids.getMat().ptr< Vec4i >(0)[i];
putText(_image, s.str(), cent, FONT_HERSHEY_SIMPLEX, 0.5, textColor, 2);
}
}
}
}
}

@ -0,0 +1,473 @@
/*
By downloading, copying, installing or using the software you agree to this
license. If you do not agree to this license, do not download, install,
copy or use the software.
License Agreement
For Open Source Computer Vision Library
(3-clause BSD License)
Copyright (C) 2013, OpenCV Foundation, all rights reserved.
Third party copyrights are property of their respective owners.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the names of the copyright holders nor the names of the contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and
any express or implied warranties, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose are
disclaimed. In no event shall copyright holders or contributors be liable for
any direct, indirect, incidental, special, exemplary, or consequential damages
(including, but not limited to, procurement of substitute goods or services;
loss of use, data, or profits; or business interruption) however caused
and on any theory of liability, whether in contract, strict liability,
or tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of such damage.
*/
#include "precomp.hpp"
#include "opencv2/aruco/dictionary.hpp"
#include "predefined_dictionaries.cpp"
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
namespace cv {
namespace aruco {
using namespace std;
/**
* Hamming weight look up table from 0 to 255
*/
const unsigned char hammingWeightLUT[] = {
0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 4, 5, 5, 6, 5, 6, 6, 7, 5, 6, 6, 7, 6, 7, 7, 8
};
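A table like this is typically generated and sanity-checked with the recurrence popcount(n) = popcount(n >> 1) + (n & 1). An illustrative standalone check (not part of the module; names are hypothetical):

```cpp
#include <cassert>

unsigned char lut[256];

// build the Hamming weight table incrementally from the recurrence
void buildHammingLUT() {
    lut[0] = 0;
    for (int n = 1; n < 256; n++)
        lut[n] = (unsigned char)(lut[n >> 1] + (n & 1));
}

// naive bit count used as a reference
int popcountNaive(unsigned char v) {
    int c = 0;
    for (; v; v >>= 1) c += v & 1;
    return c;
}
```

Every entry of the generated table matches the naive bit count, e.g. lut[0xA5] is 4.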
/**
*/
Dictionary::Dictionary(const unsigned char *bytes, int _markerSize, int dictsize, int _maxcorr) {
markerSize = _markerSize;
maxCorrectionBits = _maxcorr;
int nbytes = (markerSize * markerSize) / 8;
if((markerSize * markerSize) % 8 != 0) nbytes++;
// save bytes in internal format
// bytesList.at<Vec4b>(i, j)[k] is j-th byte of i-th marker, in its k-th rotation
bytesList = Mat(dictsize, nbytes, CV_8UC4);
for(int i = 0; i < dictsize; i++) {
for(int j = 0; j < nbytes; j++) {
for(int k = 0; k < 4; k++)
bytesList.at< Vec4b >(i, j)[k] = bytes[i * (4 * nbytes) + k * nbytes + j];
}
}
}
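The byte layout the constructor consumes can be sketched for a single rotation. A hypothetical standalone version (plain vectors instead of cv::Mat), mirroring the MSB-first, row-major packing that getByteListFromBits produces: markerSize x markerSize bits are folded into ceil(markerSize^2 / 8) bytes.

```cpp
#include <cassert>
#include <vector>

// pack a row-major bit sequence MSB-first into ceil(n/8) bytes
std::vector<unsigned char> packBits(const std::vector<int> &bits) {
    int nbytes = (int)(bits.size() + 7) / 8;
    std::vector<unsigned char> bytes(nbytes, 0);
    int currentBit = 0, currentByte = 0;
    for (int b : bits) {
        bytes[currentByte] = (unsigned char)(bytes[currentByte] << 1);
        if (b) bytes[currentByte]++; // set the bit just shifted in
        if (++currentBit == 8) { currentBit = 0; currentByte++; }
    }
    // a trailing partial byte keeps its bits in the low positions here
    return bytes;
}
```

For example, the 8-bit pattern 10000001 packs to the single byte 0x81.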
/**
*/
bool Dictionary::identify(const Mat &onlyBits, int &idx, int &rotation,
double maxCorrectionRate) const {
CV_Assert(onlyBits.rows == markerSize && onlyBits.cols == markerSize);
int maxCorrectionRecalculed = int(double(maxCorrectionBits) * maxCorrectionRate);
// get as a byte list
Mat candidateBytes = getByteListFromBits(onlyBits);
idx = -1; // by default, not found
// search closest marker in dict
for(int m = 0; m < bytesList.rows; m++) {
int currentMinDistance = markerSize * markerSize + 1;
int currentRotation = -1;
for(unsigned int r = 0; r < 4; r++) {
int currentHamming = 0;
// for each byte, calculate XOR result and then sum the Hamming weight from the LUT
for(unsigned int b = 0; b < candidateBytes.total(); b++) {
unsigned char xorRes =
bytesList.ptr< Vec4b >(m)[b][r] ^ candidateBytes.ptr< Vec4b >(0)[b][0];
currentHamming += hammingWeightLUT[xorRes];
}
if(currentHamming < currentMinDistance) {
currentMinDistance = currentHamming;
currentRotation = r;
}
}
// if maxCorrection is fulfilled, return this one
if(currentMinDistance <= maxCorrectionRecalculed) {
idx = m;
rotation = currentRotation;
break;
}
}
return idx != -1;
}
/**
*/
int Dictionary::getDistanceToId(InputArray bits, int id, bool allRotations) const {
CV_Assert(id >= 0 && id < bytesList.rows);
unsigned int nRotations = 4;
if(!allRotations) nRotations = 1;
Mat candidateBytes = getByteListFromBits(bits.getMat());
int currentMinDistance = int(bits.total() * bits.total());
for(unsigned int r = 0; r < nRotations; r++) {
int currentHamming = 0;
for(unsigned int b = 0; b < candidateBytes.total(); b++) {
unsigned char xorRes =
bytesList.ptr< Vec4b >(id)[b][r] ^ candidateBytes.ptr< Vec4b >(0)[b][0];
currentHamming += hammingWeightLUT[xorRes];
}
if(currentHamming < currentMinDistance) {
currentMinDistance = currentHamming;
}
}
return currentMinDistance;
}
/**
* @brief Draw a canonical marker image
*/
void Dictionary::drawMarker(int id, int sidePixels, OutputArray _img, int borderBits) const {
CV_Assert(sidePixels > markerSize);
CV_Assert(id < bytesList.rows);
CV_Assert(borderBits > 0);
_img.create(sidePixels, sidePixels, CV_8UC1);
// create small marker with 1 pixel per bit
Mat tinyMarker(markerSize + 2 * borderBits, markerSize + 2 * borderBits, CV_8UC1,
Scalar::all(0));
Mat innerRegion = tinyMarker.rowRange(borderBits, tinyMarker.rows - borderBits)
.colRange(borderBits, tinyMarker.cols - borderBits);
// put inner bits
Mat bits = 255 * getBitsFromByteList(bytesList.rowRange(id, id + 1), markerSize);
CV_Assert(innerRegion.total() == bits.total());
bits.copyTo(innerRegion);
// resize tiny marker to output size
cv::resize(tinyMarker, _img.getMat(), _img.getMat().size(), 0, 0, INTER_NEAREST);
}
/**
* @brief Transform matrix of bits to list of bytes in the 4 rotations
*/
Mat Dictionary::getByteListFromBits(const Mat &bits) {
int nbytes = (bits.cols * bits.rows) / 8;
if((bits.cols * bits.rows) % 8 != 0) nbytes++;
Mat candidateByteList(1, nbytes, CV_8UC4, Scalar::all(0));
unsigned char currentBit = 0;
int currentByte = 0;
for(int row = 0; row < bits.rows; row++) {
for(int col = 0; col < bits.cols; col++) {
// left-shift the four rotation bytes to make room for the next bit
candidateByteList.ptr< Vec4b >(0)[currentByte][0] <<= 1;
candidateByteList.ptr< Vec4b >(0)[currentByte][1] <<= 1;
candidateByteList.ptr< Vec4b >(0)[currentByte][2] <<= 1;
candidateByteList.ptr< Vec4b >(0)[currentByte][3] <<= 1;
// increment if bit is 1
if(bits.at< unsigned char >(row, col))
candidateByteList.ptr< Vec4b >(0)[currentByte][0]++;
if(bits.at< unsigned char >(col, bits.cols - 1 - row))
candidateByteList.ptr< Vec4b >(0)[currentByte][1]++;
if(bits.at< unsigned char >(bits.rows - 1 - row, bits.cols - 1 - col))
candidateByteList.ptr< Vec4b >(0)[currentByte][2]++;
if(bits.at< unsigned char >(bits.rows - 1 - col, row))
candidateByteList.ptr< Vec4b >(0)[currentByte][3]++;
currentBit++;
if(currentBit == 8) {
// next byte
currentBit = 0;
currentByte++;
}
}
}
return candidateByteList;
}
/**
* @brief Transform list of bytes to matrix of bits
*/
Mat Dictionary::getBitsFromByteList(const Mat &byteList, int markerSize) {
CV_Assert(byteList.total() > 0 &&
byteList.total() >= (unsigned int)markerSize * markerSize / 8 &&
byteList.total() <= (unsigned int)markerSize * markerSize / 8 + 1);
Mat bits(markerSize, markerSize, CV_8UC1, Scalar::all(0));
unsigned char base2List[] = { 128, 64, 32, 16, 8, 4, 2, 1 };
int currentByteIdx = 0;
// we only need the bytes in normal rotation
unsigned char currentByte = byteList.ptr< Vec4b >(0)[0][0];
int currentBit = 0;
for(int row = 0; row < bits.rows; row++) {
for(int col = 0; col < bits.cols; col++) {
if(currentByte >= base2List[currentBit]) {
bits.at< unsigned char >(row, col) = 1;
currentByte -= base2List[currentBit];
}
currentBit++;
if(currentBit == 8) {
currentByteIdx++;
currentByte = byteList.ptr< Vec4b >(0)[currentByteIdx][0];
// if there are not enough bits left to fill one more byte, the last byte is
// right-aligned; adjust the starting bit position accordingly
if(8 * (currentByteIdx + 1) > (int)bits.total())
currentBit = 8 * (currentByteIdx + 1) - (int)bits.total();
else
currentBit = 0; // enough bits remain for a full byte
}
}
}
return bits;
}
// predefined dictionary definitions (Dictionary constructor calls)
const Dictionary DICT_ARUCO_DATA = Dictionary(&(DICT_ARUCO_BYTES[0][0][0]), 5, 1024, 1);
const Dictionary DICT_4X4_50_DATA = Dictionary(&(DICT_4X4_1000_BYTES[0][0][0]), 4, 50, 1);
const Dictionary DICT_4X4_100_DATA = Dictionary(&(DICT_4X4_1000_BYTES[0][0][0]), 4, 100, 1);
const Dictionary DICT_4X4_250_DATA = Dictionary(&(DICT_4X4_1000_BYTES[0][0][0]), 4, 250, 1);
const Dictionary DICT_4X4_1000_DATA = Dictionary(&(DICT_4X4_1000_BYTES[0][0][0]), 4, 1000, 0);
const Dictionary DICT_5X5_50_DATA = Dictionary(&(DICT_5X5_1000_BYTES[0][0][0]), 5, 50, 3);
const Dictionary DICT_5X5_100_DATA = Dictionary(&(DICT_5X5_1000_BYTES[0][0][0]), 5, 100, 3);
const Dictionary DICT_5X5_250_DATA = Dictionary(&(DICT_5X5_1000_BYTES[0][0][0]), 5, 250, 2);
const Dictionary DICT_5X5_1000_DATA = Dictionary(&(DICT_5X5_1000_BYTES[0][0][0]), 5, 1000, 2);
const Dictionary DICT_6X6_50_DATA = Dictionary(&(DICT_6X6_1000_BYTES[0][0][0]), 6, 50, 6);
const Dictionary DICT_6X6_100_DATA = Dictionary(&(DICT_6X6_1000_BYTES[0][0][0]), 6, 100, 5);
const Dictionary DICT_6X6_250_DATA = Dictionary(&(DICT_6X6_1000_BYTES[0][0][0]), 6, 250, 5);
const Dictionary DICT_6X6_1000_DATA = Dictionary(&(DICT_6X6_1000_BYTES[0][0][0]), 6, 1000, 4);
const Dictionary DICT_7X7_50_DATA = Dictionary(&(DICT_7X7_1000_BYTES[0][0][0]), 7, 50, 9);
const Dictionary DICT_7X7_100_DATA = Dictionary(&(DICT_7X7_1000_BYTES[0][0][0]), 7, 100, 8);
const Dictionary DICT_7X7_250_DATA = Dictionary(&(DICT_7X7_1000_BYTES[0][0][0]), 7, 250, 8);
const Dictionary DICT_7X7_1000_DATA = Dictionary(&(DICT_7X7_1000_BYTES[0][0][0]), 7, 1000, 6);
const Dictionary &getPredefinedDictionary(PREDEFINED_DICTIONARY_NAME name) {
switch(name) {
case DICT_ARUCO_ORIGINAL:
return DICT_ARUCO_DATA;
case DICT_4X4_50:
return DICT_4X4_50_DATA;
case DICT_4X4_100:
return DICT_4X4_100_DATA;
case DICT_4X4_250:
return DICT_4X4_250_DATA;
case DICT_4X4_1000:
return DICT_4X4_1000_DATA;
case DICT_5X5_50:
return DICT_5X5_50_DATA;
case DICT_5X5_100:
return DICT_5X5_100_DATA;
case DICT_5X5_250:
return DICT_5X5_250_DATA;
case DICT_5X5_1000:
return DICT_5X5_1000_DATA;
case DICT_6X6_50:
return DICT_6X6_50_DATA;
case DICT_6X6_100:
return DICT_6X6_100_DATA;
case DICT_6X6_250:
return DICT_6X6_250_DATA;
case DICT_6X6_1000:
return DICT_6X6_1000_DATA;
case DICT_7X7_50:
return DICT_7X7_50_DATA;
case DICT_7X7_100:
return DICT_7X7_100_DATA;
case DICT_7X7_250:
return DICT_7X7_250_DATA;
case DICT_7X7_1000:
return DICT_7X7_1000_DATA;
}
return DICT_4X4_50_DATA; // dummy fallback, execution should never reach this point
}
/**
* @brief Generates a random marker Mat of size markerSize x markerSize
*/
static Mat _generateRandomMarker(int markerSize) {
Mat marker(markerSize, markerSize, CV_8UC1, Scalar::all(0));
for(int i = 0; i < markerSize; i++) {
for(int j = 0; j < markerSize; j++) {
unsigned char bit = rand() % 2;
marker.at< unsigned char >(i, j) = bit;
}
}
return marker;
}
/**
 * @brief Calculate the self-distance of a marker codification, i.e. the minimum Hamming
 * distance between the marker and its own non-identity rotations.
* See S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, and M. J. Marín-Jiménez. 2014.
* "Automatic generation and detection of highly reliable fiducial markers under occlusion".
* Pattern Recogn. 47, 6 (June 2014), 2280-2292. DOI=10.1016/j.patcog.2014.01.005
*/
static int _getSelfDistance(const Mat &marker) {
Mat bytes = Dictionary::getByteListFromBits(marker);
int minHamming = (int)marker.total() + 1;
for(int i = 1; i < 4; i++) {
int currentHamming = 0;
for(int j = 0; j < bytes.cols; j++) {
unsigned char xorRes = bytes.ptr< Vec4b >()[j][0] ^ bytes.ptr< Vec4b >()[j][i];
currentHamming += hammingWeightLUT[xorRes];
}
if(currentHamming < minHamming) minHamming = currentHamming;
}
return minHamming;
}
/**
 * @brief Generate a custom dictionary of nMarkers markers of markerSize x markerSize bits
 */
Dictionary generateCustomDictionary(int nMarkers, int markerSize,
const Dictionary &baseDictionary) {
Dictionary out;
out.markerSize = markerSize;
// theoretical maximum intermarker distance
// See S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, and M. J. Marín-Jiménez. 2014.
// "Automatic generation and detection of highly reliable fiducial markers under occlusion".
// Pattern Recogn. 47, 6 (June 2014), 2280-2292. DOI=10.1016/j.patcog.2014.01.005
int C = (int)std::floor(float(markerSize * markerSize) / 4.f);
int tau = 2 * (int)std::floor(float(C) * 4.f / 3.f);
// if baseDictionary is provided, calculate its intermarker distance
if(baseDictionary.bytesList.rows > 0) {
CV_Assert(baseDictionary.markerSize == markerSize);
out.bytesList = baseDictionary.bytesList.clone();
int minDistance = markerSize * markerSize + 1;
for(int i = 0; i < out.bytesList.rows; i++) {
Mat markerBytes = out.bytesList.rowRange(i, i + 1);
Mat markerBits = Dictionary::getBitsFromByteList(markerBytes, markerSize);
minDistance = min(minDistance, _getSelfDistance(markerBits));
for(int j = i + 1; j < out.bytesList.rows; j++) {
minDistance = min(minDistance, out.getDistanceToId(markerBits, j));
}
}
tau = minDistance;
}
// current best option
int bestTau = 0;
Mat bestMarker;
// after this number of unproductive iterations, the best candidate so far is accepted
const int maxUnproductiveIterations = 5000;
int unproductiveIterations = 0;
while(out.bytesList.rows < nMarkers) {
Mat currentMarker = _generateRandomMarker(markerSize);
int selfDistance = _getSelfDistance(currentMarker);
int minDistance = selfDistance;
// if self distance is better than or equal to the current best candidate, calculate
// distance to the previously accepted markers
if(selfDistance >= bestTau) {
for(int i = 0; i < out.bytesList.rows; i++) {
int currentDistance = out.getDistanceToId(currentMarker, i);
minDistance = min(currentDistance, minDistance);
if(minDistance <= bestTau) {
break;
}
}
}
// if distance is high enough, accept the marker
if(minDistance >= tau) {
unproductiveIterations = 0;
bestTau = 0;
Mat bytes = Dictionary::getByteListFromBits(currentMarker);
out.bytesList.push_back(bytes);
} else {
unproductiveIterations++;
// if distance is not high enough, but is better than the current best candidate
if(minDistance > bestTau) {
bestTau = minDistance;
bestMarker = currentMarker;
}
// if the maximum number of unproductive iterations has been reached, accept the best candidate
if(unproductiveIterations == maxUnproductiveIterations) {
unproductiveIterations = 0;
tau = bestTau;
bestTau = 0;
Mat bytes = Dictionary::getByteListFromBits(bestMarker);
out.bytesList.push_back(bytes);
}
}
}
// update the maximum number of correction bits for the generated dictionary
out.maxCorrectionBits = (tau - 1) / 2;
return out;
}
}
}

@@ -0,0 +1,49 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2014, OpenCV Foundation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef __OPENCV_ARUCO_PRECOMP__
#define __OPENCV_ARUCO_PRECOMP__
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>
#endif

File diff suppressed because it is too large

@@ -0,0 +1,506 @@
/*
By downloading, copying, installing or using the software you agree to this
license. If you do not agree to this license, do not download, install,
copy or use the software.
License Agreement
For Open Source Computer Vision Library
(3-clause BSD License)
Copyright (C) 2013, OpenCV Foundation, all rights reserved.
Third party copyrights are property of their respective owners.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the names of the copyright holders nor the names of the contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and
any express or implied warranties, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose are
disclaimed. In no event shall copyright holders or contributors be liable for
any direct, indirect, incidental, special, exemplary, or consequential damages
(including, but not limited to, procurement of substitute goods or services;
loss of use, data, or profits; or business interruption) however caused
and on any theory of liability, whether in contract, strict liability,
or tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of such damage.
*/
#include "test_precomp.hpp"
#include <opencv2/aruco.hpp>
#include <string>
using namespace std;
using namespace cv;
/**
* @brief Draw 2D synthetic markers and detect them
*/
class CV_ArucoDetectionSimple : public cvtest::BaseTest {
public:
CV_ArucoDetectionSimple();
protected:
void run(int);
};
CV_ArucoDetectionSimple::CV_ArucoDetectionSimple() {}
void CV_ArucoDetectionSimple::run(int) {
aruco::Dictionary dictionary = aruco::getPredefinedDictionary(aruco::DICT_6X6_250);
// 20 images
for(int i = 0; i < 20; i++) {
const int markerSidePixels = 100;
int imageSize = markerSidePixels * 2 + 3 * (markerSidePixels / 2);
// draw synthetic image and store marker corners and ids
vector< vector< Point2f > > groundTruthCorners;
vector< int > groundTruthIds;
Mat img = Mat(imageSize, imageSize, CV_8UC1, Scalar::all(255));
for(int y = 0; y < 2; y++) {
for(int x = 0; x < 2; x++) {
Mat marker;
int id = i * 4 + y * 2 + x;
aruco::drawMarker(dictionary, id, markerSidePixels, marker);
Point2f firstCorner =
Point2f(markerSidePixels / 2.f + x * (1.5f * markerSidePixels),
markerSidePixels / 2.f + y * (1.5f * markerSidePixels));
Mat aux = img.colRange((int)firstCorner.x, (int)firstCorner.x + markerSidePixels)
.rowRange((int)firstCorner.y, (int)firstCorner.y + markerSidePixels);
marker.copyTo(aux);
groundTruthIds.push_back(id);
groundTruthCorners.push_back(vector< Point2f >());
groundTruthCorners.back().push_back(firstCorner);
groundTruthCorners.back().push_back(firstCorner + Point2f(markerSidePixels - 1, 0));
groundTruthCorners.back().push_back(
firstCorner + Point2f(markerSidePixels - 1, markerSidePixels - 1));
groundTruthCorners.back().push_back(firstCorner + Point2f(0, markerSidePixels - 1));
}
}
if(i % 2 == 1) cvtColor(img, img, COLOR_GRAY2BGR); // convertTo cannot change the channel count
// detect markers
vector< vector< Point2f > > corners;
vector< int > ids;
aruco::DetectorParameters params;
aruco::detectMarkers(img, dictionary, corners, ids, params);
// check detection results
for(unsigned int m = 0; m < groundTruthIds.size(); m++) {
int idx = -1;
for(unsigned int k = 0; k < ids.size(); k++) {
if(groundTruthIds[m] == ids[k]) {
idx = (int)k;
break;
}
}
if(idx == -1) {
ts->printf(cvtest::TS::LOG, "Marker not detected");
ts->set_failed_test_info(cvtest::TS::FAIL_MISMATCH);
return;
}
for(int c = 0; c < 4; c++) {
double dist = norm(groundTruthCorners[m][c] - corners[idx][c]);
if(dist > 0.001) {
ts->printf(cvtest::TS::LOG, "Incorrect marker corners position");
ts->set_failed_test_info(cvtest::TS::FAIL_BAD_ACCURACY);
return;
}
}
}
}
}
static double deg2rad(double deg) { return deg * CV_PI / 180.; }
/**
* @brief Get rvec and tvec from yaw, pitch and distance
*/
static void getSyntheticRT(double yaw, double pitch, double distance, Mat &rvec, Mat &tvec) {
rvec = Mat(3, 1, CV_64FC1);
tvec = Mat(3, 1, CV_64FC1);
// Rvec
// first make the Z axis point towards -X (like the camera axis system)
Mat rotZ(3, 1, CV_64FC1);
rotZ.ptr< double >(0)[0] = 0;
rotZ.ptr< double >(0)[1] = 0;
rotZ.ptr< double >(0)[2] = -0.5 * CV_PI;
Mat rotX(3, 1, CV_64FC1);
rotX.ptr< double >(0)[0] = 0.5 * CV_PI;
rotX.ptr< double >(0)[1] = 0;
rotX.ptr< double >(0)[2] = 0;
Mat camRvec, camTvec;
composeRT(rotZ, Mat(3, 1, CV_64FC1, Scalar::all(0)), rotX, Mat(3, 1, CV_64FC1, Scalar::all(0)),
camRvec, camTvec);
// now pitch and yaw angles
Mat rotPitch(3, 1, CV_64FC1);
rotPitch.ptr< double >(0)[0] = 0;
rotPitch.ptr< double >(0)[1] = pitch;
rotPitch.ptr< double >(0)[2] = 0;
Mat rotYaw(3, 1, CV_64FC1);
rotYaw.ptr< double >(0)[0] = yaw;
rotYaw.ptr< double >(0)[1] = 0;
rotYaw.ptr< double >(0)[2] = 0;
composeRT(rotPitch, Mat(3, 1, CV_64FC1, Scalar::all(0)), rotYaw,
Mat(3, 1, CV_64FC1, Scalar::all(0)), rvec, tvec);
// compose both rotations
composeRT(camRvec, Mat(3, 1, CV_64FC1, Scalar::all(0)), rvec,
Mat(3, 1, CV_64FC1, Scalar::all(0)), rvec, tvec);
// Tvec: just translate along the camera Z direction by the given distance
tvec.ptr< double >(0)[0] = 0.;
tvec.ptr< double >(0)[1] = 0.;
tvec.ptr< double >(0)[2] = distance;
}
/**
* @brief Create a synthetic image of a marker with perspective
*/
static Mat projectMarker(aruco::Dictionary dictionary, int id, Mat cameraMatrix, double yaw,
double pitch, double distance, Size imageSize, int markerBorder,
vector< Point2f > &corners) {
// canonical image
Mat markerImg;
const int markerSizePixels = 100;
aruco::drawMarker(dictionary, id, markerSizePixels, markerImg, markerBorder);
// get rvec and tvec for the perspective
Mat rvec, tvec;
getSyntheticRT(yaw, pitch, distance, rvec, tvec);
const float markerLength = 0.05f;
vector< Point3f > markerObjPoints;
markerObjPoints.push_back(Point3f(-markerLength / 2.f, +markerLength / 2.f, 0));
markerObjPoints.push_back(markerObjPoints[0] + Point3f(markerLength, 0, 0));
markerObjPoints.push_back(markerObjPoints[0] + Point3f(markerLength, -markerLength, 0));
markerObjPoints.push_back(markerObjPoints[0] + Point3f(0, -markerLength, 0));
// project markers and draw them
Mat distCoeffs(5, 1, CV_64FC1, Scalar::all(0));
projectPoints(markerObjPoints, rvec, tvec, cameraMatrix, distCoeffs, corners);
vector< Point2f > originalCorners;
originalCorners.push_back(Point2f(0, 0));
originalCorners.push_back(Point2f((float)markerSizePixels, 0));
originalCorners.push_back(Point2f((float)markerSizePixels, (float)markerSizePixels));
originalCorners.push_back(Point2f(0, (float)markerSizePixels));
Mat transformation = getPerspectiveTransform(originalCorners, corners);
Mat img(imageSize, CV_8UC1, Scalar::all(255));
Mat aux;
const char borderValue = 127;
warpPerspective(markerImg, aux, transformation, imageSize, INTER_NEAREST, BORDER_CONSTANT,
Scalar::all(borderValue));
// copy only non-border pixels
for(int y = 0; y < aux.rows; y++) {
for(int x = 0; x < aux.cols; x++) {
if(aux.at< unsigned char >(y, x) == borderValue) continue;
img.at< unsigned char >(y, x) = aux.at< unsigned char >(y, x);
}
}
return img;
}
/**
* @brief Draws markers in perspective and detect them
*/
class CV_ArucoDetectionPerspective : public cvtest::BaseTest {
public:
CV_ArucoDetectionPerspective();
protected:
void run(int);
};
CV_ArucoDetectionPerspective::CV_ArucoDetectionPerspective() {}
void CV_ArucoDetectionPerspective::run(int) {
int iter = 0;
Mat cameraMatrix = Mat::eye(3, 3, CV_64FC1);
Size imgSize(500, 500);
cameraMatrix.at< double >(0, 0) = cameraMatrix.at< double >(1, 1) = 650;
cameraMatrix.at< double >(0, 2) = imgSize.width / 2;
cameraMatrix.at< double >(1, 2) = imgSize.height / 2;
aruco::Dictionary dictionary = aruco::getPredefinedDictionary(aruco::DICT_6X6_250);
// detect from different positions
for(double distance = 0.1; distance <= 0.5; distance += 0.2) {
for(int pitch = 0; pitch < 360; pitch += 60) {
for(int yaw = 30; yaw <= 90; yaw += 50) {
int currentId = iter % 250;
int markerBorder = iter % 2 + 1;
iter++;
vector< Point2f > groundTruthCorners;
// create synthetic image
Mat img =
projectMarker(dictionary, currentId, cameraMatrix, deg2rad(yaw), deg2rad(pitch),
distance, imgSize, markerBorder, groundTruthCorners);
// detect markers
vector< vector< Point2f > > corners;
vector< int > ids;
aruco::DetectorParameters params;
params.minDistanceToBorder = 1;
params.markerBorderBits = markerBorder;
aruco::detectMarkers(img, dictionary, corners, ids, params);
// check results
if(ids.size() != 1 || (ids.size() == 1 && ids[0] != currentId)) {
if(ids.size() != 1)
ts->printf(cvtest::TS::LOG, "Incorrect number of detected markers");
else
ts->printf(cvtest::TS::LOG, "Incorrect marker id");
ts->set_failed_test_info(cvtest::TS::FAIL_MISMATCH);
return;
}
for(int c = 0; c < 4; c++) {
double dist = norm(groundTruthCorners[c] - corners[0][c]);
if(dist > 5) {
ts->printf(cvtest::TS::LOG, "Incorrect marker corners position");
ts->set_failed_test_info(cvtest::TS::FAIL_BAD_ACCURACY);
return;
}
}
}
}
}
}
/**
* @brief Check max and min size in marker detection parameters
*/
class CV_ArucoDetectionMarkerSize : public cvtest::BaseTest {
public:
CV_ArucoDetectionMarkerSize();
protected:
void run(int);
};
CV_ArucoDetectionMarkerSize::CV_ArucoDetectionMarkerSize() {}
void CV_ArucoDetectionMarkerSize::run(int) {
aruco::Dictionary dictionary = aruco::getPredefinedDictionary(aruco::DICT_6X6_250);
int markerSide = 20;
int imageSize = 200;
// 10 cases
for(int i = 0; i < 10; i++) {
Mat marker;
int id = 10 + i * 20;
// create synthetic image
Mat img = Mat(imageSize, imageSize, CV_8UC1, Scalar::all(255));
aruco::drawMarker(dictionary, id, markerSide, marker);
Mat aux = img.colRange(30, 30 + markerSide).rowRange(50, 50 + markerSide);
marker.copyTo(aux);
vector< vector< Point2f > > corners;
vector< int > ids;
aruco::DetectorParameters params;
// set an invalid minMarkerPerimeterRate
params.minMarkerPerimeterRate = min(4., (4. * markerSide) / float(imageSize) + 0.1);
aruco::detectMarkers(img, dictionary, corners, ids, params);
if(corners.size() != 0) {
ts->printf(cvtest::TS::LOG, "Error in DetectorParameters::minMarkerPerimeterRate");
ts->set_failed_test_info(cvtest::TS::FAIL_BAD_ACCURACY);
return;
}
// set a valid minMarkerPerimeterRate
params.minMarkerPerimeterRate = max(0., (4. * markerSide) / float(imageSize) - 0.1);
aruco::detectMarkers(img, dictionary, corners, ids, params);
if(corners.size() != 1 || (corners.size() == 1 && ids[0] != id)) {
ts->printf(cvtest::TS::LOG, "Error in DetectorParameters::minMarkerPerimeterRate");
ts->set_failed_test_info(cvtest::TS::FAIL_BAD_ACCURACY);
return;
}
// set an invalid maxMarkerPerimeterRate
params.maxMarkerPerimeterRate = min(4., (4. * markerSide) / float(imageSize) - 0.1);
aruco::detectMarkers(img, dictionary, corners, ids, params);
if(corners.size() != 0) {
ts->printf(cvtest::TS::LOG, "Error in DetectorParameters::maxMarkerPerimeterRate");
ts->set_failed_test_info(cvtest::TS::FAIL_BAD_ACCURACY);
return;
}
// set a valid maxMarkerPerimeterRate
params.maxMarkerPerimeterRate = max(0., (4. * markerSide) / float(imageSize) + 0.1);
aruco::detectMarkers(img, dictionary, corners, ids, params);
if(corners.size() != 1 || (corners.size() == 1 && ids[0] != id)) {
ts->printf(cvtest::TS::LOG, "Error in DetectorParameters::maxMarkerPerimeterRate");
ts->set_failed_test_info(cvtest::TS::FAIL_BAD_ACCURACY);
return;
}
}
}
/**
* @brief Check error correction in marker bits
*/
class CV_ArucoBitCorrection : public cvtest::BaseTest {
public:
CV_ArucoBitCorrection();
protected:
void run(int);
};
CV_ArucoBitCorrection::CV_ArucoBitCorrection() {}
void CV_ArucoBitCorrection::run(int) {
aruco::Dictionary dictionary = aruco::getPredefinedDictionary(aruco::DICT_6X6_250);
aruco::Dictionary dictionary2 = dictionary;
int markerSide = 50;
int imageSize = 150;
aruco::DetectorParameters params;
// 10 markers
for(int l = 0; l < 10; l++) {
Mat marker;
int id = 10 + l * 20;
Mat currentCodeBytes = dictionary.bytesList.rowRange(id, id + 1);
// 5 valid cases
for(int i = 0; i < 5; i++) {
// how many bit errors (the error is low enough so it can be corrected)
params.errorCorrectionRate = 0.2 + i * 0.1;
int errors =
(int)std::floor(dictionary.maxCorrectionBits * params.errorCorrectionRate - 1.);
// create erroneous marker in currentCodeBits
Mat currentCodeBits =
aruco::Dictionary::getBitsFromByteList(currentCodeBytes, dictionary.markerSize);
for(int e = 0; e < errors; e++) {
currentCodeBits.ptr< unsigned char >()[2 * e] =
!currentCodeBits.ptr< unsigned char >()[2 * e];
}
// add erroneous marker to dictionary2 in order to create the erroneous marker image
Mat currentCodeBytesError = aruco::Dictionary::getByteListFromBits(currentCodeBits);
currentCodeBytesError.copyTo(dictionary2.bytesList.rowRange(id, id + 1));
Mat img = Mat(imageSize, imageSize, CV_8UC1, Scalar::all(255));
aruco::drawMarker(dictionary2, id, markerSide, marker);
Mat aux = img.colRange(30, 30 + markerSide).rowRange(50, 50 + markerSide);
marker.copyTo(aux);
// try to detect using original dictionary
vector< vector< Point2f > > corners;
vector< int > ids;
aruco::detectMarkers(img, dictionary, corners, ids, params);
if(corners.size() != 1 || (corners.size() == 1 && ids[0] != id)) {
ts->printf(cvtest::TS::LOG, "Error in bit correction");
ts->set_failed_test_info(cvtest::TS::FAIL_BAD_ACCURACY);
return;
}
}
// 5 invalid cases
for(int i = 0; i < 5; i++) {
// how many bit errors (the error is too high to be corrected)
params.errorCorrectionRate = 0.2 + i * 0.1;
int errors =
(int)std::floor(dictionary.maxCorrectionBits * params.errorCorrectionRate + 1.);
// create erroneous marker in currentCodeBits
Mat currentCodeBits =
aruco::Dictionary::getBitsFromByteList(currentCodeBytes, dictionary.markerSize);
for(int e = 0; e < errors; e++) {
currentCodeBits.ptr< unsigned char >()[2 * e] =
!currentCodeBits.ptr< unsigned char >()[2 * e];
}
// dictionary3 contains only the modified marker (in its original form)
aruco::Dictionary dictionary3 = dictionary;
dictionary3.bytesList = dictionary2.bytesList.rowRange(id, id + 1).clone();
// add erroneous marker to dictionary2 in order to create the erroneous marker image
Mat currentCodeBytesError = aruco::Dictionary::getByteListFromBits(currentCodeBits);
currentCodeBytesError.copyTo(dictionary2.bytesList.rowRange(id, id + 1));
Mat img = Mat(imageSize, imageSize, CV_8UC1, Scalar::all(255));
aruco::drawMarker(dictionary2, id, markerSide, marker);
Mat aux = img.colRange(30, 30 + markerSide).rowRange(50, 50 + markerSide);
marker.copyTo(aux);
// try to detect using dictionary3, it should fail
vector< vector< Point2f > > corners;
vector< int > ids;
aruco::detectMarkers(img, dictionary3, corners, ids, params);
if(corners.size() != 0) {
ts->printf(cvtest::TS::LOG, "Error in DetectorParameters::errorCorrectionRate");
ts->set_failed_test_info(cvtest::TS::FAIL_BAD_ACCURACY);
return;
}
}
}
}
TEST(CV_ArucoDetectionSimple, algorithmic) {
CV_ArucoDetectionSimple test;
test.safe_run();
}
TEST(CV_ArucoDetectionPerspective, algorithmic) {
CV_ArucoDetectionPerspective test;
test.safe_run();
}
TEST(CV_ArucoDetectionMarkerSize, algorithmic) {
CV_ArucoDetectionMarkerSize test;
test.safe_run();
}
TEST(CV_ArucoBitCorrection, algorithmic) {
CV_ArucoBitCorrection test;
test.safe_run();
}

@@ -0,0 +1,338 @@
/*
By downloading, copying, installing or using the software you agree to this
license. If you do not agree to this license, do not download, install,
copy or use the software.
License Agreement
For Open Source Computer Vision Library
(3-clause BSD License)
Copyright (C) 2013, OpenCV Foundation, all rights reserved.
Third party copyrights are property of their respective owners.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the names of the copyright holders nor the names of the contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and
any express or implied warranties, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose are
disclaimed. In no event shall copyright holders or contributors be liable for
any direct, indirect, incidental, special, exemplary, or consequential damages
(including, but not limited to, procurement of substitute goods or services;
loss of use, data, or profits; or business interruption) however caused
and on any theory of liability, whether in contract, strict liability,
or tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of such damage.
*/
#include "test_precomp.hpp"
#include <string>
#include <opencv2/aruco.hpp>
using namespace std;
using namespace cv;
static double deg2rad(double deg) { return deg * CV_PI / 180.; }
/**
* @brief Get rvec and tvec from yaw, pitch and distance
*/
static void getSyntheticRT(double yaw, double pitch, double distance, Mat &rvec, Mat &tvec) {
rvec = Mat(3, 1, CV_64FC1);
tvec = Mat(3, 1, CV_64FC1);
// Rvec
// first make the Z axis point towards -X (like the camera axis system)
Mat rotZ(3, 1, CV_64FC1);
rotZ.ptr< double >(0)[0] = 0;
rotZ.ptr< double >(0)[1] = 0;
rotZ.ptr< double >(0)[2] = -0.5 * CV_PI;
Mat rotX(3, 1, CV_64FC1);
rotX.ptr< double >(0)[0] = 0.5 * CV_PI;
rotX.ptr< double >(0)[1] = 0;
rotX.ptr< double >(0)[2] = 0;
Mat camRvec, camTvec;
composeRT(rotZ, Mat(3, 1, CV_64FC1, Scalar::all(0)), rotX, Mat(3, 1, CV_64FC1, Scalar::all(0)),
camRvec, camTvec);
// now pitch and yaw angles
Mat rotPitch(3, 1, CV_64FC1);
rotPitch.ptr< double >(0)[0] = 0;
rotPitch.ptr< double >(0)[1] = pitch;
rotPitch.ptr< double >(0)[2] = 0;
Mat rotYaw(3, 1, CV_64FC1);
rotYaw.ptr< double >(0)[0] = yaw;
rotYaw.ptr< double >(0)[1] = 0;
rotYaw.ptr< double >(0)[2] = 0;
composeRT(rotPitch, Mat(3, 1, CV_64FC1, Scalar::all(0)), rotYaw,
Mat(3, 1, CV_64FC1, Scalar::all(0)), rvec, tvec);
// compose both rotations
composeRT(camRvec, Mat(3, 1, CV_64FC1, Scalar::all(0)), rvec,
Mat(3, 1, CV_64FC1, Scalar::all(0)), rvec, tvec);
// Tvec: just translate along the camera Z direction by the given distance
tvec.ptr< double >(0)[0] = 0.;
tvec.ptr< double >(0)[1] = 0.;
tvec.ptr< double >(0)[2] = distance;
}
/**
* @brief Project a synthetic marker
*/
static void projectMarker(Mat &img, aruco::Dictionary dictionary, int id,
vector< Point3f > markerObjPoints, Mat cameraMatrix, Mat rvec, Mat tvec,
int markerBorder) {
// canonical image
Mat markerImg;
const int markerSizePixels = 100;
aruco::drawMarker(dictionary, id, markerSizePixels, markerImg, markerBorder);
// projected corners
Mat distCoeffs(5, 1, CV_64FC1, Scalar::all(0));
vector< Point2f > corners;
projectPoints(markerObjPoints, rvec, tvec, cameraMatrix, distCoeffs, corners);
// get perspective transform
vector< Point2f > originalCorners;
originalCorners.push_back(Point2f(0, 0));
originalCorners.push_back(Point2f((float)markerSizePixels, 0));
originalCorners.push_back(Point2f((float)markerSizePixels, (float)markerSizePixels));
originalCorners.push_back(Point2f(0, (float)markerSizePixels));
Mat transformation = getPerspectiveTransform(originalCorners, corners);
// apply transformation
Mat aux;
const char borderValue = 127;
warpPerspective(markerImg, aux, transformation, img.size(), INTER_NEAREST, BORDER_CONSTANT,
Scalar::all(borderValue));
// copy only not-border pixels
for(int y = 0; y < aux.rows; y++) {
for(int x = 0; x < aux.cols; x++) {
if(aux.at< unsigned char >(y, x) == borderValue) continue;
img.at< unsigned char >(y, x) = aux.at< unsigned char >(y, x);
}
}
}
/**
* @brief Get a synthetic image of GridBoard in perspective
*/
static Mat projectBoard(aruco::GridBoard board, Mat cameraMatrix, double yaw, double pitch,
double distance, Size imageSize, int markerBorder) {
Mat rvec, tvec;
getSyntheticRT(yaw, pitch, distance, rvec, tvec);
Mat img = Mat(imageSize, CV_8UC1, Scalar::all(255));
for(unsigned int m = 0; m < board.ids.size(); m++) {
projectMarker(img, board.dictionary, board.ids[m], board.objPoints[m], cameraMatrix, rvec,
tvec, markerBorder);
}
return img;
}
/**
* @brief Check pose estimation of aruco board
*/
class CV_ArucoBoardPose : public cvtest::BaseTest {
public:
CV_ArucoBoardPose();
protected:
void run(int);
};
CV_ArucoBoardPose::CV_ArucoBoardPose() {}
void CV_ArucoBoardPose::run(int) {
int iter = 0;
Mat cameraMatrix = Mat::eye(3, 3, CV_64FC1);
Size imgSize(500, 500);
aruco::Dictionary dictionary = aruco::getPredefinedDictionary(aruco::DICT_6X6_250);
aruco::GridBoard board = aruco::GridBoard::create(3, 3, 0.02f, 0.005f, dictionary);
cameraMatrix.at< double >(0, 0) = cameraMatrix.at< double >(1, 1) = 650;
cameraMatrix.at< double >(0, 2) = imgSize.width / 2;
cameraMatrix.at< double >(1, 2) = imgSize.height / 2;
Mat distCoeffs(5, 1, CV_64FC1, Scalar::all(0));
// for different perspectives
for(double distance = 0.2; distance <= 0.4; distance += 0.2) {
for(int yaw = 0; yaw < 360; yaw += 100) {
for(int pitch = 30; pitch <= 90; pitch += 50) {
for(unsigned int i = 0; i < board.ids.size(); i++)
board.ids[i] = (iter + int(i)) % 250;
int markerBorder = iter % 2 + 1;
iter++;
// create synthetic image
Mat img = projectBoard(board, cameraMatrix, deg2rad(pitch), deg2rad(yaw), distance,
imgSize, markerBorder);
vector< vector< Point2f > > corners;
vector< int > ids;
aruco::DetectorParameters params;
params.minDistanceToBorder = 3;
params.markerBorderBits = markerBorder;
aruco::detectMarkers(img, dictionary, corners, ids, params);
if(ids.size() == 0) {
ts->printf(cvtest::TS::LOG, "Marker detection failed in Board test");
ts->set_failed_test_info(cvtest::TS::FAIL_MISMATCH);
return;
}
// estimate pose
Mat rvec, tvec;
aruco::estimatePoseBoard(corners, ids, board, cameraMatrix, distCoeffs, rvec, tvec);
// check result
for(unsigned int i = 0; i < ids.size(); i++) {
int foundIdx = -1;
for(unsigned int j = 0; j < board.ids.size(); j++) {
if(board.ids[j] == ids[i]) {
foundIdx = int(j);
break;
}
}
if(foundIdx == -1) {
ts->printf(cvtest::TS::LOG, "Marker detected with wrong ID in Board test");
ts->set_failed_test_info(cvtest::TS::FAIL_MISMATCH);
return;
}
vector< Point2f > projectedCorners;
projectPoints(board.objPoints[foundIdx], rvec, tvec, cameraMatrix, distCoeffs,
projectedCorners);
for(int c = 0; c < 4; c++) {
double repError = norm(projectedCorners[c] - corners[i][c]);
if(repError > 5.) {
ts->printf(cvtest::TS::LOG, "Corner reprojection error too high");
ts->set_failed_test_info(cvtest::TS::FAIL_MISMATCH);
return;
}
}
}
}
}
}
}
/**
* @brief Check refine strategy
*/
class CV_ArucoRefine : public cvtest::BaseTest {
public:
CV_ArucoRefine();
protected:
void run(int);
};
CV_ArucoRefine::CV_ArucoRefine() {}
void CV_ArucoRefine::run(int) {
int iter = 0;
Mat cameraMatrix = Mat::eye(3, 3, CV_64FC1);
Size imgSize(500, 500);
aruco::Dictionary dictionary = aruco::getPredefinedDictionary(aruco::DICT_6X6_250);
aruco::GridBoard board = aruco::GridBoard::create(3, 3, 0.02f, 0.005f, dictionary);
cameraMatrix.at< double >(0, 0) = cameraMatrix.at< double >(1, 1) = 650;
cameraMatrix.at< double >(0, 2) = imgSize.width / 2;
cameraMatrix.at< double >(1, 2) = imgSize.height / 2;
Mat distCoeffs(5, 1, CV_64FC1, Scalar::all(0));
// for different perspectives
for(double distance = 0.2; distance <= 0.4; distance += 0.2) {
for(int yaw = 0; yaw < 360; yaw += 100) {
for(int pitch = 30; pitch <= 90; pitch += 50) {
for(unsigned int i = 0; i < board.ids.size(); i++)
board.ids[i] = (iter + int(i)) % 250;
int markerBorder = iter % 2 + 1;
iter++;
// create synthetic image
Mat img = projectBoard(board, cameraMatrix, deg2rad(pitch), deg2rad(yaw), distance,
imgSize, markerBorder);
// detect markers
vector< vector< Point2f > > corners, rejected;
vector< int > ids;
aruco::DetectorParameters params;
params.minDistanceToBorder = 3;
params.doCornerRefinement = true;
params.markerBorderBits = markerBorder;
aruco::detectMarkers(img, dictionary, corners, ids, params, rejected);
// remove a marker from detection
int markersBeforeDelete = (int)ids.size();
if(markersBeforeDelete < 2) continue;
rejected.push_back(corners[0]);
corners.erase(corners.begin(), corners.begin() + 1);
ids.erase(ids.begin(), ids.begin() + 1);
// try to refind the erased marker
aruco::refineDetectedMarkers(img, board, corners, ids, rejected, cameraMatrix,
distCoeffs, 10, 3., true, noArray(), params);
// check result
if((int)ids.size() < markersBeforeDelete) {
ts->printf(cvtest::TS::LOG, "Error in refine detected markers");
ts->set_failed_test_info(cvtest::TS::FAIL_MISMATCH);
return;
}
}
}
}
}
TEST(CV_ArucoBoardPose, accuracy) {
CV_ArucoBoardPose test;
test.safe_run();
}
TEST(CV_ArucoRefine, accuracy) {
CV_ArucoRefine test;
test.safe_run();
}

@@ -0,0 +1,564 @@
/*
By downloading, copying, installing or using the software you agree to this
license. If you do not agree to this license, do not download, install,
copy or use the software.
License Agreement
For Open Source Computer Vision Library
(3-clause BSD License)
Copyright (C) 2013, OpenCV Foundation, all rights reserved.
Third party copyrights are property of their respective owners.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the names of the copyright holders nor the names of the contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and
any express or implied warranties, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose are
disclaimed. In no event shall copyright holders or contributors be liable for
any direct, indirect, incidental, special, exemplary, or consequential damages
(including, but not limited to, procurement of substitute goods or services;
loss of use, data, or profits; or business interruption) however caused
and on any theory of liability, whether in contract, strict liability,
or tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of such damage.
*/
#include "test_precomp.hpp"
#include <opencv2/aruco/charuco.hpp>
#include <string>
using namespace std;
using namespace cv;
static double deg2rad(double deg) { return deg * CV_PI / 180.; }
/**
* @brief Get rvec and tvec from yaw, pitch and distance
*/
static void getSyntheticRT(double yaw, double pitch, double distance, Mat &rvec, Mat &tvec) {
rvec = Mat(3, 1, CV_64FC1);
tvec = Mat(3, 1, CV_64FC1);
// Rvec
// first put the Z axis aiming to -X (like the camera axis system)
Mat rotZ(3, 1, CV_64FC1);
rotZ.ptr< double >(0)[0] = 0;
rotZ.ptr< double >(0)[1] = 0;
rotZ.ptr< double >(0)[2] = -0.5 * CV_PI;
Mat rotX(3, 1, CV_64FC1);
rotX.ptr< double >(0)[0] = 0.5 * CV_PI;
rotX.ptr< double >(0)[1] = 0;
rotX.ptr< double >(0)[2] = 0;
Mat camRvec, camTvec;
composeRT(rotZ, Mat(3, 1, CV_64FC1, Scalar::all(0)), rotX, Mat(3, 1, CV_64FC1, Scalar::all(0)),
camRvec, camTvec);
// now pitch and yaw angles
Mat rotPitch(3, 1, CV_64FC1);
rotPitch.ptr< double >(0)[0] = 0;
rotPitch.ptr< double >(0)[1] = pitch;
rotPitch.ptr< double >(0)[2] = 0;
Mat rotYaw(3, 1, CV_64FC1);
rotYaw.ptr< double >(0)[0] = yaw;
rotYaw.ptr< double >(0)[1] = 0;
rotYaw.ptr< double >(0)[2] = 0;
composeRT(rotPitch, Mat(3, 1, CV_64FC1, Scalar::all(0)), rotYaw,
Mat(3, 1, CV_64FC1, Scalar::all(0)), rvec, tvec);
// compose both rotations
composeRT(camRvec, Mat(3, 1, CV_64FC1, Scalar::all(0)), rvec,
Mat(3, 1, CV_64FC1, Scalar::all(0)), rvec, tvec);
// Tvec, just move in z (camera) direction the specific distance
tvec.ptr< double >(0)[0] = 0.;
tvec.ptr< double >(0)[1] = 0.;
tvec.ptr< double >(0)[2] = distance;
}
/**
* @brief Project a synthetic marker
*/
static void projectMarker(Mat &img, aruco::Dictionary dictionary, int id,
vector< Point3f > markerObjPoints, Mat cameraMatrix, Mat rvec, Mat tvec,
int markerBorder) {
Mat markerImg;
const int markerSizePixels = 100;
aruco::drawMarker(dictionary, id, markerSizePixels, markerImg, markerBorder);
Mat distCoeffs(5, 1, CV_64FC1, Scalar::all(0));
vector< Point2f > corners;
projectPoints(markerObjPoints, rvec, tvec, cameraMatrix, distCoeffs, corners);
vector< Point2f > originalCorners;
originalCorners.push_back(Point2f(0, 0));
originalCorners.push_back(Point2f((float)markerSizePixels, 0));
originalCorners.push_back(Point2f((float)markerSizePixels, (float)markerSizePixels));
originalCorners.push_back(Point2f(0, (float)markerSizePixels));
Mat transformation = getPerspectiveTransform(originalCorners, corners);
Mat aux;
const char borderValue = 127;
warpPerspective(markerImg, aux, transformation, img.size(), INTER_NEAREST, BORDER_CONSTANT,
Scalar::all(borderValue));
// copy only not-border pixels
for(int y = 0; y < aux.rows; y++) {
for(int x = 0; x < aux.cols; x++) {
if(aux.at< unsigned char >(y, x) == borderValue) continue;
img.at< unsigned char >(y, x) = aux.at< unsigned char >(y, x);
}
}
}
/**
* @brief Get a synthetic image of Chessboard in perspective
*/
static Mat projectChessboard(int squaresX, int squaresY, float squareSize, Size imageSize,
Mat cameraMatrix, Mat rvec, Mat tvec) {
Mat img(imageSize, CV_8UC1, Scalar::all(255));
Mat distCoeffs(5, 1, CV_64FC1, Scalar::all(0));
for(int y = 0; y < squaresY; y++) {
float startY = float(y) * squareSize;
for(int x = 0; x < squaresX; x++) {
if(y % 2 != x % 2) continue;
float startX = float(x) * squareSize;
vector< Point3f > squareCorners;
squareCorners.push_back(Point3f(startX, startY, 0));
squareCorners.push_back(squareCorners[0] + Point3f(squareSize, 0, 0));
squareCorners.push_back(squareCorners[0] + Point3f(squareSize, squareSize, 0));
squareCorners.push_back(squareCorners[0] + Point3f(0, squareSize, 0));
vector< vector< Point2f > > projectedCorners;
projectedCorners.push_back(vector< Point2f >());
projectPoints(squareCorners, rvec, tvec, cameraMatrix, distCoeffs, projectedCorners[0]);
vector< vector< Point > > projectedCornersInt;
projectedCornersInt.push_back(vector< Point >());
for(int k = 0; k < 4; k++)
projectedCornersInt[0]
.push_back(Point((int)projectedCorners[0][k].x, (int)projectedCorners[0][k].y));
fillPoly(img, projectedCornersInt, Scalar::all(0));
}
}
return img;
}
/**
 * @brief Get a synthetic image of a ChArUco board in perspective
 */
static Mat projectCharucoBoard(aruco::CharucoBoard board, Mat cameraMatrix, double yaw,
double pitch, double distance, Size imageSize, int markerBorder,
Mat &rvec, Mat &tvec) {
getSyntheticRT(yaw, pitch, distance, rvec, tvec);
// project markers
Mat img = Mat(imageSize, CV_8UC1, Scalar::all(255));
for(unsigned int m = 0; m < board.ids.size(); m++) {
projectMarker(img, board.dictionary, board.ids[m], board.objPoints[m], cameraMatrix, rvec,
tvec, markerBorder);
}
// project chessboard
Mat chessboard =
projectChessboard(board.getChessboardSize().width, board.getChessboardSize().height,
board.getSquareLength(), imageSize, cameraMatrix, rvec, tvec);
for(unsigned int i = 0; i < chessboard.total(); i++) {
if(chessboard.ptr< unsigned char >()[i] == 0) {
img.ptr< unsigned char >()[i] = 0;
}
}
return img;
}
/**
* @brief Check Charuco detection
*/
class CV_CharucoDetection : public cvtest::BaseTest {
public:
CV_CharucoDetection();
protected:
void run(int);
};
CV_CharucoDetection::CV_CharucoDetection() {}
void CV_CharucoDetection::run(int) {
int iter = 0;
Mat cameraMatrix = Mat::eye(3, 3, CV_64FC1);
Size imgSize(500, 500);
aruco::Dictionary dictionary = aruco::getPredefinedDictionary(aruco::DICT_6X6_250);
aruco::CharucoBoard board = aruco::CharucoBoard::create(4, 4, 0.03f, 0.015f, dictionary);
cameraMatrix.at< double >(0, 0) = cameraMatrix.at< double >(1, 1) = 650;
cameraMatrix.at< double >(0, 2) = imgSize.width / 2;
cameraMatrix.at< double >(1, 2) = imgSize.height / 2;
Mat distCoeffs(5, 1, CV_64FC1, Scalar::all(0));
// for different perspectives
for(double distance = 0.2; distance <= 0.4; distance += 0.2) {
for(int yaw = 0; yaw < 360; yaw += 100) {
for(int pitch = 30; pitch <= 90; pitch += 50) {
int markerBorder = iter % 2 + 1;
iter++;
// create synthetic image
Mat rvec, tvec;
Mat img = projectCharucoBoard(board, cameraMatrix, deg2rad(pitch), deg2rad(yaw),
distance, imgSize, markerBorder, rvec, tvec);
// detect markers
vector< vector< Point2f > > corners;
vector< int > ids;
aruco::DetectorParameters params;
params.minDistanceToBorder = 3;
params.markerBorderBits = markerBorder;
aruco::detectMarkers(img, dictionary, corners, ids, params);
if(ids.size() == 0) {
ts->printf(cvtest::TS::LOG, "Marker detection failed");
ts->set_failed_test_info(cvtest::TS::FAIL_MISMATCH);
return;
}
// interpolate charuco corners
vector< Point2f > charucoCorners;
vector< int > charucoIds;
if(iter % 2 == 0) {
aruco::interpolateCornersCharuco(corners, ids, img, board, charucoCorners,
charucoIds);
} else {
aruco::interpolateCornersCharuco(corners, ids, img, board, charucoCorners,
charucoIds, cameraMatrix, distCoeffs);
}
// check results
vector< Point2f > projectedCharucoCorners;
projectPoints(board.chessboardCorners, rvec, tvec, cameraMatrix, distCoeffs,
projectedCharucoCorners);
for(unsigned int i = 0; i < charucoIds.size(); i++) {
int currentId = charucoIds[i];
if(currentId >= (int)board.chessboardCorners.size()) {
ts->printf(cvtest::TS::LOG, "Invalid Charuco corner id");
ts->set_failed_test_info(cvtest::TS::FAIL_MISMATCH);
return;
}
double repError = norm(charucoCorners[i] - projectedCharucoCorners[currentId]);
if(repError > 5.) {
ts->printf(cvtest::TS::LOG, "Charuco corner reprojection error too high");
ts->set_failed_test_info(cvtest::TS::FAIL_MISMATCH);
return;
}
}
}
}
}
}
/**
* @brief Check charuco pose estimation
*/
class CV_CharucoPoseEstimation : public cvtest::BaseTest {
public:
CV_CharucoPoseEstimation();
protected:
void run(int);
};
CV_CharucoPoseEstimation::CV_CharucoPoseEstimation() {}
void CV_CharucoPoseEstimation::run(int) {
int iter = 0;
Mat cameraMatrix = Mat::eye(3, 3, CV_64FC1);
Size imgSize(500, 500);
aruco::Dictionary dictionary = aruco::getPredefinedDictionary(aruco::DICT_6X6_250);
aruco::CharucoBoard board = aruco::CharucoBoard::create(4, 4, 0.03f, 0.015f, dictionary);
cameraMatrix.at< double >(0, 0) = cameraMatrix.at< double >(1, 1) = 650;
cameraMatrix.at< double >(0, 2) = imgSize.width / 2;
cameraMatrix.at< double >(1, 2) = imgSize.height / 2;
Mat distCoeffs(5, 1, CV_64FC1, Scalar::all(0));
// for different perspectives
for(double distance = 0.2; distance <= 0.4; distance += 0.2) {
for(int yaw = 0; yaw < 360; yaw += 100) {
for(int pitch = 30; pitch <= 90; pitch += 50) {
int markerBorder = iter % 2 + 1;
iter++;
// get synthetic image
Mat rvec, tvec;
Mat img = projectCharucoBoard(board, cameraMatrix, deg2rad(pitch), deg2rad(yaw),
distance, imgSize, markerBorder, rvec, tvec);
// detect markers
vector< vector< Point2f > > corners;
vector< int > ids;
aruco::DetectorParameters params;
params.minDistanceToBorder = 3;
params.markerBorderBits = markerBorder;
aruco::detectMarkers(img, dictionary, corners, ids, params);
if(ids.size() == 0) {
ts->printf(cvtest::TS::LOG, "Marker detection failed");
ts->set_failed_test_info(cvtest::TS::FAIL_MISMATCH);
return;
}
// interpolate charuco corners
vector< Point2f > charucoCorners;
vector< int > charucoIds;
if(iter % 2 == 0) {
aruco::interpolateCornersCharuco(corners, ids, img, board, charucoCorners,
charucoIds);
} else {
aruco::interpolateCornersCharuco(corners, ids, img, board, charucoCorners,
charucoIds, cameraMatrix, distCoeffs);
}
if(charucoIds.size() == 0) continue;
// estimate charuco pose
aruco::estimatePoseCharucoBoard(charucoCorners, charucoIds, board, cameraMatrix,
distCoeffs, rvec, tvec);
// check result
vector< Point2f > projectedCharucoCorners;
projectPoints(board.chessboardCorners, rvec, tvec, cameraMatrix, distCoeffs,
projectedCharucoCorners);
for(unsigned int i = 0; i < charucoIds.size(); i++) {
int currentId = charucoIds[i];
if(currentId >= (int)board.chessboardCorners.size()) {
ts->printf(cvtest::TS::LOG, "Invalid Charuco corner id");
ts->set_failed_test_info(cvtest::TS::FAIL_MISMATCH);
return;
}
double repError = norm(charucoCorners[i] - projectedCharucoCorners[currentId]);
if(repError > 5.) {
ts->printf(cvtest::TS::LOG, "Charuco corner reprojection error too high");
ts->set_failed_test_info(cvtest::TS::FAIL_MISMATCH);
return;
}
}
}
}
}
}
/**
* @brief Check diamond detection
*/
class CV_CharucoDiamondDetection : public cvtest::BaseTest {
public:
CV_CharucoDiamondDetection();
protected:
void run(int);
};
CV_CharucoDiamondDetection::CV_CharucoDiamondDetection() {}
void CV_CharucoDiamondDetection::run(int) {
int iter = 0;
Mat cameraMatrix = Mat::eye(3, 3, CV_64FC1);
Size imgSize(500, 500);
aruco::Dictionary dictionary = aruco::getPredefinedDictionary(aruco::DICT_6X6_250);
float squareLength = 0.03f;
float markerLength = 0.015f;
aruco::CharucoBoard board =
aruco::CharucoBoard::create(3, 3, squareLength, markerLength, dictionary);
cameraMatrix.at< double >(0, 0) = cameraMatrix.at< double >(1, 1) = 650;
cameraMatrix.at< double >(0, 2) = imgSize.width / 2;
cameraMatrix.at< double >(1, 2) = imgSize.height / 2;
Mat distCoeffs(5, 1, CV_64FC1, Scalar::all(0));
// for different perspectives
for(double distance = 0.3; distance <= 0.3; distance += 0.2) {
for(int yaw = 0; yaw < 360; yaw += 100) {
for(int pitch = 30; pitch <= 90; pitch += 30) {
int markerBorder = iter % 2 + 1;
for(int i = 0; i < 4; i++)
board.ids[i] = 4 * iter + i;
iter++;
// get synthetic image
Mat rvec, tvec;
Mat img = projectCharucoBoard(board, cameraMatrix, deg2rad(pitch), deg2rad(yaw),
distance, imgSize, markerBorder, rvec, tvec);
// detect markers
vector< vector< Point2f > > corners;
vector< int > ids;
aruco::DetectorParameters params;
params.minDistanceToBorder = 0;
params.markerBorderBits = markerBorder;
aruco::detectMarkers(img, dictionary, corners, ids, params);
if(ids.size() != 4) {
ts->printf(cvtest::TS::LOG, "Not enough markers for diamond detection");
ts->set_failed_test_info(cvtest::TS::FAIL_MISMATCH);
return;
}
// detect diamonds
vector< vector< Point2f > > diamondCorners;
vector< Vec4i > diamondIds;
aruco::detectCharucoDiamond(img, corners, ids, squareLength / markerLength,
diamondCorners, diamondIds, cameraMatrix, distCoeffs);
// check results
if(diamondIds.size() != 1) {
ts->printf(cvtest::TS::LOG, "Diamond not detected correctly");
ts->set_failed_test_info(cvtest::TS::FAIL_MISMATCH);
return;
}
for(int i = 0; i < 4; i++) {
if(diamondIds[0][i] != board.ids[i]) {
ts->printf(cvtest::TS::LOG, "Incorrect diamond ids");
ts->set_failed_test_info(cvtest::TS::FAIL_MISMATCH);
return;
}
}
vector< Point2f > projectedDiamondCorners;
projectPoints(board.chessboardCorners, rvec, tvec, cameraMatrix, distCoeffs,
projectedDiamondCorners);
vector< Point2f > projectedDiamondCornersReorder(4);
projectedDiamondCornersReorder[0] = projectedDiamondCorners[2];
projectedDiamondCornersReorder[1] = projectedDiamondCorners[3];
projectedDiamondCornersReorder[2] = projectedDiamondCorners[1];
projectedDiamondCornersReorder[3] = projectedDiamondCorners[0];
for(unsigned int i = 0; i < 4; i++) {
double repError =
norm(diamondCorners[0][i] - projectedDiamondCornersReorder[i]);
if(repError > 5.) {
ts->printf(cvtest::TS::LOG, "Diamond corner reprojection error too high");
ts->set_failed_test_info(cvtest::TS::FAIL_MISMATCH);
return;
}
}
// estimate diamond pose
vector< Mat > estimatedRvec, estimatedTvec;
aruco::estimatePoseSingleMarkers(diamondCorners, squareLength, cameraMatrix,
distCoeffs, estimatedRvec, estimatedTvec);
// check result
vector< Point2f > projectedDiamondCornersPose;
vector< Vec3f > diamondObjPoints(4);
diamondObjPoints[0] = Vec3f(-squareLength / 2.f, squareLength / 2.f, 0);
diamondObjPoints[1] = Vec3f(squareLength / 2.f, squareLength / 2.f, 0);
diamondObjPoints[2] = Vec3f(squareLength / 2.f, -squareLength / 2.f, 0);
diamondObjPoints[3] = Vec3f(-squareLength / 2.f, -squareLength / 2.f, 0);
projectPoints(diamondObjPoints, estimatedRvec[0], estimatedTvec[0], cameraMatrix,
distCoeffs, projectedDiamondCornersPose);
for(unsigned int i = 0; i < 4; i++) {
double repError =
norm(projectedDiamondCornersReorder[i] - projectedDiamondCornersPose[i]);
if(repError > 5.) {
ts->printf(cvtest::TS::LOG, "Charuco pose error too high");
ts->set_failed_test_info(cvtest::TS::FAIL_MISMATCH);
return;
}
}
}
}
}
}
TEST(CV_CharucoDetection, accuracy) {
CV_CharucoDetection test;
test.safe_run();
}
TEST(CV_CharucoPoseEstimation, accuracy) {
CV_CharucoPoseEstimation test;
test.safe_run();
}
TEST(CV_CharucoDiamondDetection, accuracy) {
CV_CharucoDiamondDetection test;
test.safe_run();
}

@@ -0,0 +1,3 @@
#include "test_precomp.hpp"
CV_TEST_MAIN("cv")

@@ -0,0 +1,18 @@
#ifdef __GNUC__
# pragma GCC diagnostic ignored "-Wmissing-declarations"
# if defined __clang__ || defined __APPLE__
# pragma GCC diagnostic ignored "-Wmissing-prototypes"
# pragma GCC diagnostic ignored "-Wextra"
# endif
#endif
#ifndef __OPENCV_TEST_PRECOMP_HPP__
#define __OPENCV_TEST_PRECOMP_HPP__
#include <iostream>
#include "opencv2/ts.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/calib3d.hpp"
#include "opencv2/aruco.hpp"
#endif

@@ -0,0 +1,260 @@
Detection of ArUco Boards {#tutorial_aruco_board_detection}
==============================
An ArUco Board is a set of markers that acts like a single marker in the sense that it provides a
single pose for the camera.
The most popular board is the one with all the markers in the same plane, since it can be easily printed:
![](images/gboriginal.png)
However, boards are not limited to this arrangement and can represent any 2D or 3D layout.
The difference between a Board and a set of independent markers is that the relative position between
the markers in the Board is known a priori. This allows the corners of all the markers to be used for
estimating the pose of the camera with respect to the whole Board.
When you use a set of independent markers, you can estimate the pose of each marker individually,
since you don't know the relative position of the markers in the environment.
The main benefits of using Boards are:
- The pose estimation is much more versatile. Only some markers are necessary to perform pose estimation.
Thus, the pose can be calculated even in the presence of occlusions or partial views.
- The obtained pose is usually more accurate, since a larger number of point correspondences (marker
corners) is employed.
The aruco module allows the use of Boards. The main class is the ```cv::aruco::Board``` class which defines the Board layout:
``` c++
class Board {
public:
std::vector<std::vector<cv::Point3f> > objPoints;
cv::aruco::Dictionary dictionary;
std::vector<int> ids;
};
```
An object of type ```Board``` has three parameters:
- The ```objPoints``` structure is the list of corner positions in the 3D Board reference system, i.e. its layout.
For each marker, its four corners are stored in the standard order, i.e. clockwise and starting
with the top left corner.
- The ```dictionary``` parameter indicates which marker dictionary the Board markers belong to.
- Finally, the ```ids``` structure indicates the identifiers of each of the markers in ```objPoints``` with respect to the specified ```dictionary```.
Board Detection
-----
Board detection is similar to standard marker detection. The only difference is the additional pose estimation step.
In fact, to use marker boards, a standard marker detection should be done before estimating the Board pose.
The aruco module provides a specific function, ```estimatePoseBoard()```, to perform pose estimation for boards:
``` c++
cv::Mat inputImage;
// camera parameters are read from somewhere
cv::Mat cameraMatrix, distCoeffs;
readCameraParameters(cameraMatrix, distCoeffs);
// assume we have a function to create the board object
cv::aruco::Board board = createBoard();
...
vector< int > markerIds;
vector< vector<Point2f> > markerCorners;
cv::aruco::detectMarkers(inputImage, board.dictionary, markerCorners, markerIds);
// if at least one marker detected
if(markerIds.size() > 0) {
cv::Mat rvec, tvec;
int valid = cv::aruco::estimatePoseBoard(markerCorners, markerIds, board, cameraMatrix, distCoeffs, rvec, tvec);
}
```
The parameters of ```estimatePoseBoard()``` are:
- ```markerCorners``` and ```markerIds```: structures of detected markers from the ```detectMarkers()``` function.
- ```board```: the ```Board``` object that defines the board layout and its ids.
- ```cameraMatrix``` and ```distCoeffs```: camera calibration parameters necessary for pose estimation.
- ```rvec``` and ```tvec```: estimated pose of the Board.
- The function returns the total number of markers employed for estimating the board pose. Note that not all of the
markers provided in ```markerCorners``` and ```markerIds``` are necessarily used, since only the markers whose ids are
listed in the ```Board::ids``` structure are considered.
The ```drawAxis()``` function can be used to check the obtained pose. For instance:
![Board with axis](images/gbmarkersaxis.png)
And this is another example with the board partially occluded:
![Board with occlusions](images/gbocclusion.png)
As can be observed, although some markers have not been detected, the Board pose can still be estimated from the remaining markers.
Grid Board
-----
Creating the ```Board``` object requires specifying the corner positions for each marker in the environment.
However, in many cases the board will be just a set of markers in the same plane and in a grid layout,
so it can be easily printed and used.
Fortunately, the aruco module provides the basic functionality to create and print these types of boards
easily.
The ```GridBoard``` class is a specialized class that inherits from the ```Board``` class and which represents a Board
with all the markers in the same plane and in a grid layout, as in the following image:
![Image with aruco board](images/gboriginal.png)
Concretely, the coordinate system of a Grid Board is placed in the board plane, centered in the bottom left
corner of the board and with the Z axis pointing out, as in the following image (X: red, Y: green, Z: blue):
![Board with axis](images/gbaxis.png)
A ```GridBoard``` object can be defined using the following parameters:
- Number of markers in the X direction.
- Number of markers in the Y direction.
- Length of the marker side.
- Length of the marker separation.
- The dictionary of the markers.
- Ids of all the markers (X*Y markers).
This object can be easily created from these parameters using the ```cv::aruco::GridBoard::create()``` static function:
``` c++
cv::aruco::GridBoard board = cv::aruco::GridBoard::create(5, 7, 0.04, 0.01, dictionary);
```
- The first and second parameters are the number of markers in the X and Y direction respectively.
- The third and fourth parameters are the marker length and the marker separation respectively. They can be provided
in any unit, keeping in mind that the estimated pose for this board will be measured in the same units (in general, meters are used).
- Finally, the dictionary of the markers is provided.
So, this board will be composed of 5x7=35 markers. The ids of each of the markers are assigned, by default, in ascending
order starting at 0, so they will be 0, 1, 2, ..., 34. This can be easily customized by accessing the ids vector
through ```board.ids```, as in the ```Board``` parent class.
After creating a Grid Board, we probably want to print it and use it. A function to generate the image
of a ```GridBoard``` is provided in ```cv::aruco::GridBoard::draw()```. For example:
``` c++
cv::aruco::GridBoard board = cv::aruco::GridBoard::create(5, 7, 0.04, 0.01, dictionary);
cv::Mat boardImage;
board.draw( cv::Size(600, 500), boardImage, 10, 1 );
```
- The first parameter is the size of the output image in pixels. In this case 600x500 pixels. If this is not proportional
to the board dimensions, the board will be centered on the image.
- ```boardImage```: the output image with the board.
- The third parameter is the (optional) margin in pixels, so that none of the markers touches the image border.
In this case the margin is 10.
- Finally, the size of the marker border, similar to the ```drawMarker()``` function. The default value is 1.
The output image will be something like this:
![](images/board.jpg)
A full working example of board creation is included in ```create_board.cpp``` inside the module samples folder.
Finally, a full example of board detection:
``` c++
cv::VideoCapture inputVideo;
inputVideo.open(0);
cv::Mat cameraMatrix, distCoeffs;
// camera parameters are read from somewhere
readCameraParameters(cameraMatrix, distCoeffs);
cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
cv::aruco::GridBoard board = cv::aruco::GridBoard::create(5, 7, 0.04, 0.01, dictionary);
while (inputVideo.grab()) {
cv::Mat image, imageCopy;
inputVideo.retrieve(image);
image.copyTo(imageCopy);
std::vector<int> ids;
std::vector<std::vector<cv::Point2f> > corners;
cv::aruco::detectMarkers(image, dictionary, corners, ids);
// if at least one marker detected
if (ids.size() > 0) {
cv::aruco::drawDetectedMarkers(imageCopy, corners, ids);
cv::Mat rvec, tvec;
int valid = estimatePoseBoard(corners, ids, board, cameraMatrix, distCoeffs, rvec, tvec);
// if at least one board marker detected
if(valid > 0)
cv::aruco::drawAxis(imageCopy, cameraMatrix, distCoeffs, rvec, tvec, 0.1);
}
cv::imshow("out", imageCopy);
char key = (char) cv::waitKey(30);
if (key == 27)
break;
}
```
Sample video:
[![Board Detection video](http://img.youtube.com/vi/Q1HlJEjW_j0/0.jpg)](https://youtu.be/Q1HlJEjW_j0)
A full working example is included in ```detect_board.cpp``` inside the module samples folder.
Refine marker detection
-----
ArUco boards can also be used to improve the detection of markers. If we have detected a subset of the markers
that belong to the board, we can use these markers and the board layout information to try to find the
markers that had not been previously detected.
This can be done using the ```refineDetectedMarkers()``` function, which should be called
after calling ```detectMarkers()```.
The main parameters of this function are the original image where markers were detected, the Board object,
the detected marker corners, the detected marker ids and the rejected marker corners.
The rejected corners can be obtained from the ```detectMarkers()``` function and are also known as marker
candidates. These candidates are square shapes that have been found in the original image but have failed
to pass the identification step (i.e. their inner codification presents too many errors) and thus
have not been recognized as markers.
However, these candidates are sometimes actual markers that have not been correctly identified due to high
noise in the image, very low resolution or other related problems that affect the binary code extraction.
The ```refineDetectedMarkers()``` function finds correspondences between these candidates and the missing
markers of the board. This search is based on two parameters:
- Distance between the candidate and the projection of the missing marker. To obtain these projections,
it is necessary to have detected at least one marker of the board. The projections are obtained using the
camera parameters (camera matrix and distortion coefficients) if they are provided. If not, the projections
are obtained from a local homography and only planar boards are allowed (i.e. the Z coordinate of all the
marker corners should be the same). The ```minRepDistance``` parameter in ```refineDetectedMarkers()```
determines the minimum Euclidean distance between the candidate corners and the projected marker corners
(default value 10).
- Binary codification. If a candidate surpasses the minimum distance condition, its internal bits
are analyzed again to determine if it is actually the projected marker or not. However, in this case,
the condition is not so strong and the number of allowed erroneous bits can be higher. This is indicated
in the ```errorCorrectionRate``` parameter (default value 3.0). If a negative value is provided, the
internal bits are not analyzed at all and only the corner distances are evaluated.
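The corner distance criterion can be sketched as follows. This is a simplified illustration, not the actual OpenCV implementation; `withinRepDistance`, `Pt` and the per-corner check are assumptions made for the example:

``` c++
#include <array>
#include <cmath>

struct Pt { double x, y; };

// Sketch of the distance criterion used by refineDetectedMarkers(): a
// candidate matches a missing marker only if every one of its corners lies
// within minRepDistance pixels of the corresponding projected corner.
// (Hypothetical helper, not part of the aruco API.)
bool withinRepDistance(const std::array<Pt, 4>& candidate,
                       const std::array<Pt, 4>& projected,
                       double minRepDistance) {
    for (int i = 0; i < 4; i++) {
        double dx = candidate[i].x - projected[i].x;
        double dy = candidate[i].y - projected[i].y;
        if (std::sqrt(dx * dx + dy * dy) > minRepDistance) return false;
    }
    return true;
}
```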
This is an example of using the ```refineDetectedMarkers()``` function:
``` c++
cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
cv::aruco::GridBoard board = cv::aruco::GridBoard::create(5, 7, 0.04, 0.01, dictionary);
vector< int > markerIds;
vector< vector<Point2f> > markerCorners, rejectedCandidates;
cv::aruco::detectMarkers(inputImage, dictionary, markerCorners, markerIds, cv::aruco::DetectorParameters(), rejectedCandidates);
cv::aruco::refineDetectedMarkers(inputImage, board, markerCorners, markerIds, rejectedCandidates);
// After calling this function, if any new marker has been detected it will be removed from rejectedCandidates and included
// at the end of markerCorners and markerIds
```
It must also be noted that, in some cases, if the number of markers detected in the first place is too low (for instance only
1 or 2 markers), the projections of the missing markers can be of poor quality, producing erroneous correspondences.
See module samples for a more detailed implementation.

Calibration with ArUco and ChArUco {#tutorial_aruco_calibration}
==============================
The ArUco module can also be used to calibrate a camera. Camera calibration consists of obtaining the
camera intrinsic parameters and distortion coefficients. These parameters remain fixed unless the camera
optics are modified, so camera calibration only needs to be done once.
Camera calibration is usually performed using the OpenCV ```calibrateCamera()``` function. This function
requires some correspondences between environment points and their projection in the camera image from
different viewpoints. In general, these correspondences are obtained from the corners of chessboard
patterns. See ```calibrateCamera()``` function documentation or the OpenCV calibration tutorial for
more detailed information.
Using the ArUco module, calibration can be performed based on ArUco marker corners or ChArUco corners.
Calibrating using ArUco is much more versatile than using traditional chessboard patterns, since it
allows occlusions or partial views.
As stated, calibration can be done using either marker corners or ChArUco corners. However,
it is highly recommended to use the ChArUco corners approach, since the provided corners are much
more accurate than the marker corners. Calibration using a standard board should only be
employed in scenarios where ChArUco boards cannot be used because of some kind of restriction.
Calibration with ChArUco Boards
------
To calibrate using a ChArUco board, it is necessary to detect the board from different viewpoints, in the
same way that the standard calibration does with the traditional chessboard pattern. However, due to the
benefits of using ChArUco, occlusions and partial views are allowed, and not all the corners need to be
visible in all the viewpoints.
![ChArUco calibration viewpoints](images/charucocalibration.png)
The function to calibrate is ```calibrateCameraCharuco()```. Example:
``` c++
aruco::CharucoBoard board = ... // create charuco board
cv::Size imgSize = ... // camera image size
std::vector< std::vector<cv::Point2f> > allCharucoCorners;
std::vector< std::vector<int> > allCharucoIds;
// Detect charuco board from several viewpoints and fill allCharucoCorners and allCharucoIds
...
...
// After capturing in several viewpoints, start calibration
cv::Mat cameraMatrix, distCoeffs;
std::vector< Mat > rvecs, tvecs;
int calibrationFlags = ... // Set calibration flags (same as in the calibrateCamera() function)
double repError = cv::aruco::calibrateCameraCharuco(allCharucoCorners, allCharucoIds, board, imgSize, cameraMatrix, distCoeffs, rvecs, tvecs, calibrationFlags);
```
The ChArUco corners and ChArUco identifiers captured on each viewpoint are stored in the vectors ```allCharucoCorners``` and ```allCharucoIds```, one element per viewpoint.
The ```calibrateCameraCharuco()``` function will fill the ```cameraMatrix``` and ```distCoeffs``` arrays with the camera calibration parameters and will return the reprojection
error obtained from the calibration. The elements in ```rvecs``` and ```tvecs``` will be filled with the estimated pose of the camera (with respect to the ChArUco board)
in each of the viewpoints.
Finally, the ```calibrationFlags``` parameter determines some of the options for the calibration. Its format is equivalent to the flags parameter in the OpenCV
```calibrateCamera()``` function.
A full working example is included in ```calibrate_camera_charuco.cpp``` inside the module samples folder.
Calibration with ArUco Boards
------
As stated above, it is recommended to use ChArUco boards instead of ArUco boards for camera calibration, since
ChArUco corners are more accurate than marker corners. However, in some special cases it may be required to use calibration
based on ArUco boards. For these cases, the ```calibrateCameraAruco()``` function is provided. As in the previous case, it
requires the detections of an ArUco board from different viewpoints.
![ArUco calibration viewpoints](images/arucocalibration.png)
Example of ```calibrateCameraAruco()``` use:
``` c++
aruco::Board board = ... // create aruco board
cv::Size imgSize = ... // camera image size
std::vector< std::vector< cv::Point2f > > allCornersConcatenated;
std::vector< int > allIdsConcatenated;
std::vector< int > markerCounterPerFrame;
// Detect aruco board from several viewpoints and fill allCornersConcatenated, allIdsConcatenated and markerCounterPerFrame
...
...
// After capturing in several viewpoints, start calibration
cv::Mat cameraMatrix, distCoeffs;
std::vector< Mat > rvecs, tvecs;
int calibrationFlags = ... // Set calibration flags (same as in the calibrateCamera() function)
double repError = cv::aruco::calibrateCameraAruco(allCornersConcatenated, allIdsConcatenated, markerCounterPerFrame, board, imgSize, cameraMatrix, distCoeffs, rvecs, tvecs, calibrationFlags);
```
In this case, and contrary to the ```calibrateCameraCharuco()``` function, the detected markers on each viewpoint are concatenated in the arrays ```allCornersConcatenated``` and
```allIdsConcatenated``` (the first two parameters). The third parameter, the array ```markerCounterPerFrame```, indicates the number of markers detected on each viewpoint.
The rest of the parameters are the same as in ```calibrateCameraCharuco()```, except the board layout object, which does not need to be a ```CharucoBoard``` object; it can be
any ```Board``` object.
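The expected input layout can be sketched as follows. This is a simplified illustration using marker ids only; `concatenateDetections` and `idsPerFrame` are hypothetical names, not part of the aruco API:

``` c++
#include <vector>

// Sketch of the input layout expected by calibrateCameraAruco(): the
// detections of all frames are concatenated into one flat array, and
// markerCounterPerFrame records how many markers each frame contributed.
// Marker ids stand in for the full corner data here.
void concatenateDetections(const std::vector<std::vector<int>>& idsPerFrame,
                           std::vector<int>& allIdsConcatenated,
                           std::vector<int>& markerCounterPerFrame) {
    for (const auto& frameIds : idsPerFrame) {
        markerCounterPerFrame.push_back((int)frameIds.size());
        allIdsConcatenated.insert(allIdsConcatenated.end(),
                                  frameIds.begin(), frameIds.end());
    }
}
```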
A full working example is included in ```calibrate_camera.cpp``` inside the module samples folder.

Detection of ArUco Markers {#tutorial_aruco_detection}
==============================
Pose estimation is of great importance in many computer vision applications: robot navigation,
augmented reality, and many more. This process is based on finding correspondences between points in
the real environment and their 2D image projections. This is usually a difficult step, and thus it is
common to use synthetic or fiducial markers to make it easier.
One of the most popular approaches is the use of binary square fiducial markers. The main benefit
of these markers is that a single marker provides enough correspondences (its four corners)
to obtain the camera pose. Also, the inner binary codification makes them especially robust, allowing
the possibility of applying error detection and correction techniques.
The aruco module is based on the [ArUco library](http://www.uco.es/investiga/grupos/ava/node/26),
a popular library for detection of square fiducial markers developed by Rafael Muñoz and Sergio Garrido:
> S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, and M. J. Marín-Jiménez. 2014.
> "Automatic generation and detection of highly reliable fiducial markers under occlusion".
> Pattern Recogn. 47, 6 (June 2014), 2280-2292. DOI=10.1016/j.patcog.2014.01.005
The aruco functionalities are included in:
``` c++
#include <opencv2/aruco.hpp>
```
Markers and Dictionaries
------
An ArUco marker is a synthetic square marker composed of a wide black border and an inner binary
matrix which determines its identifier (id). The black border facilitates its fast detection in the
image and the binary codification allows its identification and the application of error detection
and correction techniques. The marker size determines the size of the internal matrix. For instance,
a marker size of 4x4 is composed of 16 bits.
Some examples of ArUco markers:
![Example of markers images](images/markers.jpg)
It must be noted that a marker can be found rotated in the environment; however, the detection
process needs to be able to determine its original rotation, so that each corner is identified
unequivocally. This is also done based on the binary codification.
A dictionary of markers is the set of markers that are considered for a specific application. It is
simply the list of the binary codifications of each of its markers.
The main properties of a dictionary are the dictionary size and the marker size.
- The dictionary size is the number of markers that compose the dictionary.
- The marker size is the size of those markers (the number of bits).
The aruco module includes some predefined dictionaries covering a range of different dictionary
sizes and marker sizes.
One may think that the marker id is the number obtained from converting the binary codification to
a decimal base number. However, this is not possible since for high marker sizes the number of bits
is too high and managing such huge numbers is not practical. Instead, a marker id is simply
the marker index inside the dictionary it belongs to. For instance, the first 5 markers inside a
dictionary have the ids 0, 1, 2, 3 and 4.
More information about dictionaries is provided in the "Selecting a dictionary" section.
Marker Creation
------
Before their detection, markers need to be printed in order to be placed in the environment.
Marker images can be generated using the ```drawMarker()``` function.
For example, let's analyze the following call:
``` c++
cv::Mat markerImage;
cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
cv::aruco::drawMarker(dictionary, 23, 200, markerImage, 1);
```
First, the ```Dictionary``` object is created by choosing one of the predefined dictionaries in the aruco module.
Concretely, this dictionary is composed of 250 markers with a marker size of 6x6 bits (```DICT_6X6_250```).
The parameters of ```drawMarker``` are:
- The first parameter is the ```Dictionary``` object previously created.
- The second parameter is the marker id, in this case the marker 23 of the dictionary ```DICT_6X6_250```.
Note that each dictionary is composed of a different number of markers. In this case, the valid ids
go from 0 to 249. Any id out of the valid range will produce an exception.
- The third parameter, 200, is the size of the output marker image. In this case, the output image
will have a size of 200x200 pixels. Note that this parameter should be large enough to store the
number of bits for the specific dictionary. So, for instance, you cannot generate an image of
5x5 pixels for a marker size of 6x6 bits (and that is without considering the marker border).
Furthermore, to avoid deformations, this parameter should be proportional to the number of bits +
border size, or at least much higher than the marker size (like 200 in the example), so that
deformations are insignificant.
- The fourth parameter is the output image.
- Finally, the last parameter is an optional parameter to specify the width of the marker black
border. The size is specified proportional to the number of bits. For instance a value of 2 means
that the border will have a width equivalent to the size of two internal bits. The default value
is 1.
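The sizing rule above can be sketched as follows. This is a simplified illustration; `pixelsPerCell` is a hypothetical helper, not part of the aruco API:

``` c++
// Sketch of the sizing rule above: a marker with markerBits x markerBits data
// bits plus a border of borderBits on each side spans
// (markerBits + 2 * borderBits) cells per dimension, so each cell gets
// imageSize / totalCells whole pixels.
int pixelsPerCell(int imageSize, int markerBits, int borderBits) {
    int totalCells = markerBits + 2 * borderBits;
    return imageSize / totalCells; // integer division; 0 means the image is too small
}
```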
The generated image is:
![Generated marker](images/marker23.jpg)
A full working example is included in ```create_marker.cpp``` inside the module samples folder.
Marker Detection
------
Given an image where some ArUco markers are visible, the detection process has to return a list of
detected markers. Each detected marker includes:
- The position of its four corners in the image (in their original order).
- The id of the marker.
The marker detection process comprises two main steps:
1. Detection of marker candidates. In this step the image is analyzed in order to find square shapes
that are candidates to be markers. It begins with an adaptive thresholding to segment the markers,
then contours are extracted from the thresholded image and those that are not convex or do not
approximate a square shape are discarded. Some extra filtering is also applied (removing
contours that are too small or too big, removing contours too close to each other, etc.).
2. After the candidate detection, it is necessary to determine if they are actually markers by
analyzing their inner codification. This step starts by extracting the marker bits of each marker.
To do so, first, perspective transformation is applied to obtain the marker in its canonical form. Then, the
canonical image is thresholded using Otsu's method to separate white and black bits. The image is divided into
cells according to the marker size and the border size, and the amount of black or white
pixels in each cell is counted to determine whether it is a white or a black bit. Finally, the bits
are analyzed to determine if the marker belongs to the specific dictionary and error correction
techniques are employed when necessary.
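The per-cell bit counting of the second step can be sketched as follows. This is a simplified illustration on a plain 0/1 grid; `extractBits` is a hypothetical helper and border handling is omitted for brevity:

``` c++
#include <vector>

// Sketch of the bit-extraction step: the binarized canonical marker image
// (0 = black, 1 = white) is divided into markerSize x markerSize cells and
// each cell's bit is the majority value of its pixels.
std::vector<std::vector<int>> extractBits(const std::vector<std::vector<int>>& img,
                                          int markerSize) {
    int cell = (int)img.size() / markerSize; // pixels per cell side
    std::vector<std::vector<int>> bits(markerSize, std::vector<int>(markerSize));
    for (int r = 0; r < markerSize; r++)
        for (int c = 0; c < markerSize; c++) {
            int whites = 0;
            for (int y = r * cell; y < (r + 1) * cell; y++)
                for (int x = c * cell; x < (c + 1) * cell; x++)
                    whites += img[y][x];
            bits[r][c] = (2 * whites > cell * cell) ? 1 : 0;
        }
    return bits;
}
```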
Consider the following image:
![Original image with markers](images/singlemarkersoriginal.png)
These are the detected markers (in green):
![Image with detected markers](images/singlemarkersdetection.png)
And these are the marker candidates that have been rejected during the identification step (in pink):
![Image with rejected candidates](images/singlemarkersrejected.png)
In the aruco module, the detection is performed in the ```detectMarkers()``` function. This function is
the most important in the module, since all the rest of functionalities are based on the
previous detected markers returned by ```detectMarkers()```.
An example of marker detection:
``` c++
cv::Mat inputImage;
...
vector< int > markerIds;
vector< vector<Point2f> > markerCorners, rejectedCandidates;
cv::aruco::DetectorParameters parameters;
cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
cv::aruco::detectMarkers(inputImage, dictionary, markerCorners, markerIds, parameters, rejectedCandidates);
```
The parameters of ```detectMarkers``` are:
- The first parameter is the image where the markers are going to be detected.
- The second parameter is the dictionary object, in this case one of the predefined dictionaries (```DICT_6X6_250```).
- The detected markers are stored in the ```markerCorners``` and ```markerIds``` structures:
- ```markerCorners``` is the list of corners of the detected markers. For each marker, its four
corners are returned in their original order (which is clockwise starting with top left). So, the first corner is the top left corner, followed by the top right, bottom right and bottom left.
- ```markerIds``` is the list of ids of each of the detected markers in ```markerCorners```.
Note that the returned ```markerCorners``` and ```markerIds``` vectors have the same sizes.
- The fourth parameter is the object of type ```DetectorParameters```. This object includes all the
parameters that can be customized during the detection process. These parameters are explained in
detail in the next section.
- The final parameter, ```rejectedCandidates```, is a returned list of marker candidates, i.e. those
squares that have been found but do not present a valid codification. Each candidate is also
defined by its four corners, and its format is the same as that of the ```markerCorners``` parameter. This
parameter can be omitted and is only useful for debugging purposes and for 'refind' strategies (see ```refineDetectedMarkers()``` ).
The next thing you probably want to do after ```detectMarkers()``` is to check that your markers have
been correctly detected. Fortunately, the aruco module provides a function to draw the detected
markers on the input image: ```drawDetectedMarkers()```. For example:
``` c++
cv::aruco::drawDetectedMarkers(image, markerCorners, markerIds);
```
- ```image``` is the input/output image where the markers will be drawn (it will normally be the same image where the markers were detected).
- ```markerCorners``` and ```markerIds``` are the structures of the detected markers in the same format
provided by the ```detectMarkers()``` function.
![Image with detected markers](images/singlemarkersdetection.png)
Note that this function is only provided for visualization and its use can be omitted entirely.
With these two functions we can create a basic marker detection loop to detect markers from our
camera:
``` c++
cv::VideoCapture inputVideo;
inputVideo.open(0);
cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
while (inputVideo.grab()) {
cv::Mat image, imageCopy;
inputVideo.retrieve(image);
image.copyTo(imageCopy);
std::vector<int> ids;
std::vector<std::vector<cv::Point2f> > corners;
cv::aruco::detectMarkers(image, dictionary, corners, ids);
// if at least one marker detected
if (ids.size() > 0)
cv::aruco::drawDetectedMarkers(imageCopy, corners, ids);
cv::imshow("out", imageCopy);
char key = (char) cv::waitKey(1);
if (key == 27)
break;
}
```
Note that some of the optional parameters have been omitted, like the detection parameter object or the
output vector of rejected candidates.
A full working example is included in ```detect_markers.cpp``` inside the module samples folder.
Pose Estimation
------
The next thing you probably want to do after detecting the markers is to obtain the camera pose from them.
To perform camera pose estimation you need to know the calibration parameters of your camera. This is
the camera matrix and distortion coefficients. If you do not know how to calibrate your camera, you can
take a look at the ```calibrateCamera()``` function and the calibration tutorial of OpenCV. You can also calibrate your camera using the aruco module,
as explained in the Calibration with ArUco tutorial. Note that this only needs to be done once unless the
camera optics are modified (for instance changing its focus).
At the end, what you get after the calibration is the camera matrix: a matrix of 3x3 elements with the
focal distances and the camera center coordinates (a.k.a intrinsic parameters), and the distortion
coefficients: a vector of 5 elements or more that models the distortion produced by your camera.
When you estimate the pose with ArUco markers, you can estimate the pose of each marker individually.
If you want to estimate one pose from a set of markers, what you want to use is an ArUco Board (see the ArUco
Boards tutorial).
The camera pose with respect to a marker is the 3D transformation from the marker coordinate system to the
camera coordinate system. It is specified by a rotation and a translation vector (see ```solvePnP()``` function for more
information).
The aruco module provides a function to estimate the poses of all the detected markers:
``` c++
Mat cameraMatrix, distCoeffs;
...
vector< Mat > rvecs, tvecs;
cv::aruco::estimatePoseSingleMarkers(corners, 0.05, cameraMatrix, distCoeffs, rvecs, tvecs);
```
- The ```corners``` parameter is the vector of marker corners returned by the ```detectMarkers()``` function.
- The second parameter is the size of the marker side in meters or in any other unit. Note that the
translation vectors of the estimated poses will be in the same unit.
- ```cameraMatrix``` and ```distCoeffs``` are the camera calibration parameters that need to be known a priori.
- ```rvecs``` and ```tvecs``` are the rotation and translation vectors respectively, for each of the markers
in corners.
The marker coordinate system that is assumed by this function is placed at the center of the marker
with the Z axis pointing out, as in the following image. Axis-color correspondences are X:red, Y:green, Z:blue.
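Under this convention, the 3D object corners of a marker of a given side length can be sketched as follows. This is a simplified illustration of the assumed coordinate system; `markerObjectPoints` and `Pt3` are hypothetical names, not part of the aruco API:

``` c++
#include <array>

struct Pt3 { double x, y, z; };

// Sketch of the marker object points assumed for single-marker pose
// estimation: the system is centered on the marker, the Z axis points out of
// the marker plane, and the corners are listed in the detection order
// top-left, top-right, bottom-right, bottom-left.
std::array<Pt3, 4> markerObjectPoints(double side) {
    double h = side / 2.0;
    return {{ {-h,  h, 0.0},     // top-left
              { h,  h, 0.0},     // top-right
              { h, -h, 0.0},     // bottom-right
              {-h, -h, 0.0} }};  // bottom-left
}
```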
![Image with axis drawn](images/singlemarkersaxis.png)
The aruco module provides a function to draw the axis as in the image above, so pose estimation can be
checked:
``` c++
cv::aruco::drawAxis(image, cameraMatrix, distCoeffs, rvec, tvec, 0.1);
```
- ```image``` is the input/output image where the axis will be drawn (it will normally be the same image where the markers were detected).
- ```cameraMatrix``` and ```distCoeffs``` are the camera calibration parameters.
- ```rvec``` and ```tvec``` are the pose parameters whose axis is to be drawn.
- The last parameter is the length of the axis, in the same unit as tvec (usually meters).
A basic full example for pose estimation from single markers:
``` c++
cv::VideoCapture inputVideo;
inputVideo.open(0);
cv::Mat cameraMatrix, distCoeffs;
// camera parameters are read from somewhere
readCameraParameters(cameraMatrix, distCoeffs);
cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
while (inputVideo.grab()) {
cv::Mat image, imageCopy;
inputVideo.retrieve(image);
image.copyTo(imageCopy);
std::vector<int> ids;
std::vector<std::vector<cv::Point2f> > corners;
cv::aruco::detectMarkers(image, dictionary, corners, ids);
// if at least one marker detected
if (ids.size() > 0) {
cv::aruco::drawDetectedMarkers(imageCopy, corners, ids);
vector< Mat > rvecs, tvecs;
cv::aruco::estimatePoseSingleMarkers(corners, 0.05, cameraMatrix, distCoeffs, rvecs, tvecs);
// draw axis for each marker
for(int i=0; i<ids.size(); i++)
cv::aruco::drawAxis(imageCopy, cameraMatrix, distCoeffs, rvecs[i], tvecs[i], 0.1);
}
cv::imshow("out", imageCopy);
char key = (char) cv::waitKey(1);
if (key == 27)
break;
}
```
Sample video:
[![ArUco markers detection video](http://img.youtube.com/vi/IsXWrcB_Hvs/0.jpg)](https://youtu.be/IsXWrcB_Hvs)
A full working example is included in ```detect_markers.cpp``` inside the module samples folder.
Selecting a dictionary
------
The aruco module provides the ```Dictionary``` class to represent a dictionary of markers.
Apart from the marker size and the number of markers in the dictionary, there is another important dictionary
parameter, the inter-marker distance. The inter-marker distance is the minimum distance among its markers
and it determines the error detection and correction capabilities of the dictionary.
In general, lower dictionary sizes and higher marker sizes increase the inter-marker distance and
vice-versa. However, the detection of markers with higher sizes is more complex, due to the higher
amount of bits that need to be extracted from the image.
For instance, if you need only 10 markers in your application, it is better to use a dictionary composed
only of those 10 markers than one composed of 1000 markers. The reason is that
the dictionary composed of 10 markers will have a higher inter-marker distance and, thus, it will be
more robust to errors.
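The inter-marker distance can be sketched as the minimum pairwise Hamming distance between the marker codes. This is a simplified illustration that ignores marker rotations; `interMarkerDistance` is a hypothetical helper, not part of the aruco API:

``` c++
#include <algorithm>
#include <bitset>
#include <cstdint>
#include <vector>

// Sketch of the inter-marker distance: the minimum number of differing bits
// between any pair of marker codes. Roughly floor((distance - 1) / 2) bit
// errors can be corrected unambiguously. Returns 64 for fewer than two codes.
int interMarkerDistance(const std::vector<uint64_t>& codes) {
    int best = 64;
    for (size_t i = 0; i < codes.size(); i++)
        for (size_t j = i + 1; j < codes.size(); j++)
            best = std::min(best, (int)std::bitset<64>(codes[i] ^ codes[j]).count());
    return best;
}
```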
As a consequence, the aruco module includes several ways to select your dictionary of markers, so that
you can increase your system robustness:
- Predefined dictionaries:
This is the easiest way to select a dictionary. The aruco module includes a set of predefined dictionaries
of a variety of marker sizes and number of markers. For instance:
``` c++
cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
```
DICT_6X6_250 is an example of a predefined dictionary of markers with 6x6 bits and a total of 250
markers.
From all the provided dictionaries, it is recommended to choose the smallest one that fits your application.
For instance, if you need 200 markers of 6x6 bits, it is better to use DICT_6X6_250 than DICT_6X6_1000.
The smaller the dictionary, the higher the inter-marker distance.
- Automatic dictionary generation:
The dictionary can be generated automatically to adjust to the desired number of markers and bits, so that
the inter-marker distance is optimized:
``` c++
cv::aruco::Dictionary dictionary = cv::aruco::generateCustomDictionary(36, 5);
```
This will generate a customized dictionary composed of 36 markers of 5x5 bits. The process can take several
seconds, depending on the parameters (it is slower for larger dictionaries and higher number of bits).
- Manual dictionary generation:
Finally, the dictionary can be configured manually, so that any codification can be employed. To do that,
the ```Dictionary``` object parameters need to be assigned manually. It must be noted that, unless you have
a special reason to do this manually, it is preferable to use one of the previous alternatives.
The ```Dictionary``` parameters are:
``` c++
class Dictionary {
public:
Mat bytesList;
int markerSize;
int maxCorrectionBits; // maximum number of bits that can be corrected
...
}
```
```bytesList``` is the array that contains all the information about the marker codes. ```markerSize``` is the size
of each marker dimension (for instance, 5 for markers with 5x5 bits). Finally, ```maxCorrectionBits``` is
the maximum number of erroneous bits that can be corrected during the marker detection. If this value is too
high, it can lead to a high amount of false positives.
Each row in ```bytesList``` represents one of the dictionary markers. However, the markers are not stored in their
binary form; instead, they are stored in a special format that simplifies their detection.
Fortunately, a marker can be easily transformed to this form using the static method ```Dictionary::getByteListFromBits()```.
For example:
``` c++
Dictionary dictionary;
// markers of 6x6 bits
dictionary.markerSize = 6;
// maximum number of bit corrections
dictionary.maxCorrectionBits = 3;
// lets create a dictionary of 100 markers
for(int i=0; i<100; i++) {
    // assume generateMarkerBits() generates a new marker in binary format, so that
    // markerBits is a 6x6 matrix of CV_8UC1 type, only containing 0s and 1s
    cv::Mat markerBits = generateMarkerBits();
    cv::Mat markerCompressed = cv::aruco::Dictionary::getByteListFromBits(markerBits);
    // add the marker as a new row
    dictionary.bytesList.push_back(markerCompressed);
}
```
Detector Parameters
------
One of the parameters of ```detectMarkers()``` function is a ```DetectorParameters``` object. This object
includes all the options that can be customized during the marker detection process.
In this section all of these parameters are described. The parameters can be classified according to
the process in which they are involved:
#### Thresholding
One of the first steps of the marker detection process is an adaptive thresholding of the input image.
For instance, the thresholded image for the sample image used above is:
![Thresholded image](images/singlemarkersthresh.png)
This thresholding can be customized in the following parameters:
- ```int adaptiveThreshWinSizeMin```, ```int adaptiveThreshWinSizeMax```, ```int adaptiveThreshWinSizeStep```
The ```adaptiveThreshWinSizeMin``` and ```adaptiveThreshWinSizeMax``` parameters represent the interval where the
thresholding window sizes (in pixels) are selected for the adaptive thresholding (see the OpenCV
```adaptiveThreshold()``` function for more details).
The parameter ```adaptiveThreshWinSizeStep``` indicates the increments of the window size from
```adaptiveThreshWinSizeMin``` to ```adaptiveThreshWinSizeMax```.
For instance, for the values ```adaptiveThreshWinSizeMin``` = 5, ```adaptiveThreshWinSizeMax``` = 21 and
```adaptiveThreshWinSizeStep``` = 4, there will be 5 thresholding steps with window sizes 5, 9, 13, 17 and 21.
On each thresholding image, marker candidates will be extracted.
Low values of the window size can 'break' the marker border if the marker is too large, preventing it
from being detected, as in the following image:
![Broken marker image](images/singlemarkersbrokenthresh.png)
On the other hand, values that are too high can produce the same effect if the markers are too small, and they can also
reduce performance. Moreover, the process would tend toward a global thresholding, losing the adaptive benefits.
The simplest case is using the same value for ```adaptiveThreshWinSizeMin``` and
```adaptiveThreshWinSizeMax```, which produces a single thresholding step. However, it is usually better using a
range of values for the window size, although many thresholding steps can also reduce the performance considerably.
Default values:
```adaptiveThreshWinSizeMin```: 3, ```adaptiveThreshWinSizeMax```: 23, ```adaptiveThreshWinSizeStep```: 10
- ```double adaptiveThreshConstant```
This parameter represents the constant value added in the thresholding condition (see the OpenCV
```adaptiveThreshold()``` function for more details). Its default value is a good option in most cases.
Default value: 7
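The enumeration of thresholding window sizes described above can be sketched as follows. This is a simplified illustration; `thresholdWindowSizes` is a hypothetical helper, not part of the aruco API:

``` c++
#include <vector>

// Sketch of how the adaptive-threshold window sizes are enumerated: every
// size from winMin to winMax (inclusive) in increments of winStep, each
// producing one thresholded image from which candidates are extracted.
std::vector<int> thresholdWindowSizes(int winMin, int winMax, int winStep) {
    std::vector<int> sizes;
    for (int w = winMin; w <= winMax; w += winStep)
        sizes.push_back(w);
    return sizes;
}
```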
#### Contour filtering
After thresholding, contours are detected. However, not all contours
are considered as marker candidates. They are filtered out in different steps so that contours that are
very unlikely to be markers are discarded. The parameters in this section customize
this filtering process.
It must be noted that in most cases it is a question of balance between detection capacity
and performance. All the considered contours will be processed in the following stages, which usually have
a higher computational cost. So, it is preferred to discard wrong candidates in this stage rather than in later stages.
On the other hand, if the filtering conditions are too strict, the real marker contours could be discarded and,
hence, not detected.
- ```double minMarkerPerimeterRate```, ```double maxMarkerPerimeterRate```
These parameters determine the minimum and maximum size of a marker, concretely the minimum and
maximum marker perimeter. They are not specified in absolute pixel values; instead, they are
specified relative to the maximum dimension of the input image.
For instance, an image of size 640x480 and a minimum relative marker perimeter of 0.05 will lead
to a minimum marker perimeter of 640x0.05 = 32 pixels, since 640 is the maximum dimension of the
image. The same applies to the ```maxMarkerPerimeterRate``` parameter.
If ```minMarkerPerimeterRate``` is too low, detection performance can be penalized considerably, since
many more contours would be considered in the following stages.
This penalization is not so noticeable for the ```maxMarkerPerimeterRate``` parameter, since there are
usually many more small contours than big ones.
A ```minMarkerPerimeterRate``` value of 0 and a ```maxMarkerPerimeterRate``` value of 4 (or more) are
equivalent to considering all the contours in the image; however, this is not recommended for
performance reasons.
Default values:
```minMarkerPerimeterRate``` : 0.03, ```maxMarkerPerimeterRate``` : 4.0
- ```double polygonalApproxAccuracyRate```
A polygonal approximation is applied to each candidate and only those that approximate to a square
shape are accepted. This value determines the maximum error that the polygonal approximation can
produce (see ```approxPolyDP()``` function for more information).
This parameter is relative to the candidate perimeter (in pixels). So, if the candidate has
a perimeter of 100 pixels and the value of ```polygonalApproxAccuracyRate``` is 0.04, the maximum error
would be 100x0.04 = 4 pixels.
In most cases, the default value works fine, but higher error values could be necessary for highly
distorted images.
Default value: 0.05
- ```double minCornerDistanceRate```
Minimum distance between any pair of corners in the same marker. It is expressed relative to the marker
perimeter. Minimum distance in pixels is Perimeter * minCornerDistanceRate.
Default value: 0.05
- ```double minMarkerDistanceRate```
Minimum distance between any pair of corners from two different markers. It is expressed relative to
the minimum marker perimeter of the two markers. If two candidates are too close, the smaller one is ignored.
Default value: 0.05
- ```int minDistanceToBorder```
Minimum distance from any marker corner to the image border (in pixels). Markers partially occluded
by the image border can be correctly detected if the occlusion is small. However, if one of the corners
is occluded, the returned corner is usually placed in a wrong position near the image border.
If the position of the marker corners is important, for instance if you want to do pose estimation, it is
better to discard markers whose corners are too close to the image border. Otherwise, it is not necessary.
Default value: 3
#### Bits Extraction
After candidate detection, the bits of each candidate are analyzed in order to determine if they
are markers or not.
Before analyzing the binary code itself, the bits need to be extracted. To do so, the perspective
distortion is removed and the resulting image is thresholded using Otsu threshold to separate
black and white pixels.
This is an example of the image obtained after removing the perspective distortion of a marker:
![Perspective removing](images/removeperspective.png)
Then, the image is divided into a grid with the same number of cells as bits in the marker.
In each cell, the numbers of black and white pixels are counted to decide the bit assigned to the cell (by majority vote):
![Marker cells](images/bitsextraction1.png)
There are several parameters that can customize this process:
- ```int markerBorderBits```
This parameter indicates the width of the marker border. It is relative to the size of each bit. So, a
value of 2 indicates the border has the width of two internal bits.
This parameter needs to coincide with the border size of the markers you are using. The border size
can be configured in the marker drawing functions such as ```drawMarker()```.
Default value: 1
- ```double minOtsuStdDev```
This value determines the minimum standard deviation of the pixel values required to perform Otsu
thresholding. If the deviation is low, it probably means that the whole square is black (or white)
and applying Otsu does not make sense. If this is the case, all the bits are set to 0 (or 1)
depending on whether the mean value is higher or lower than 128.
Default value: 5.0
- ```int perspectiveRemovePixelPerCell```
This parameter determines the number of pixels (per cell) in the image obtained after removing the perspective
distortion (including the border). This is the size of the red squares in the image above.
For instance, let's assume we are dealing with markers of 5x5 bits and a border size of 1 bit
(see ```markerBorderBits```). Then, the total number of cells/bits per dimension is 5 + 2*1 = 7 (the border
has to be counted twice). The total number of cells is 7x7.
If the value of ```perspectiveRemovePixelPerCell``` is 10, then the size of the obtained image will be
10*7 = 70 -> 70x70 pixels.
A higher value of this parameter can improve the bits extraction process (up to some degree), however it can penalize
the performance.
Default value: 4
- ```double perspectiveRemoveIgnoredMarginPerCell```
When extracting the bits of each cell, the numbers of black and white pixels are counted. In general, it is
not recommended to consider all the cell pixels. Instead, it is better to ignore some pixels in the
margins of the cells.
The reason for this is that, after removing the perspective distortion, the cells' colors are, in general, not
perfectly separated, and white cells can invade some pixels of black cells (and vice versa). Thus, it is
better to ignore some margin pixels to avoid counting erroneous ones.
For instance, in the following image:
![Marker cell margins](images/bitsextraction2.png)
only the pixels inside the green squares are considered. It can be seen in the right image that
the resulting pixels contain a lower amount of noise from neighbor cells.
The ```perspectiveRemoveIgnoredMarginPerCell``` parameter indicates the difference between the red and
the green squares.
This parameter is relative to the total size of the cell. For instance, if the cell size is 40 pixels and the
value of this parameter is 0.1, a margin of 40*0.1 = 4 pixels is ignored on each side of the cell. This means that the
area analyzed in each cell would actually be 32x32 pixels, instead of 40x40.
Default value: 0.13
#### Marker identification
After the bits have been extracted, the next step is checking if the extracted code belongs to the marker
dictionary and, if necessary, error correction can be performed.
- ```double maxErroneousBitsInBorderRate```
The bits of the marker border should be black. This parameter specifies the allowed number of erroneous
bits in the border, i.e. the maximum number of white bits in the border. It is represented
relative to the total number of bits in the marker.
Default value: 0.5
- ```double errorCorrectionRate```
Each marker dictionary has a theoretical maximum number of bits that can be corrected (```Dictionary.maxCorrectionBits```).
However, this value can be modified by the ```errorCorrectionRate``` parameter.
For instance, if the allowed number of bits that can be corrected (for the used dictionary) is 6 and the value of ```errorCorrectionRate``` is
0.5, the real maximum number of bits that can be corrected is 6*0.5=3 bits.
This value is useful to reduce the error correction capabilities in order to avoid false positives.
Default value: 0.6
#### Corner Refinement
After markers have been detected and identified, the last step is performing subpixel refinement
of the corner positions (see the OpenCV ```cornerSubPix()``` function).
Note that this step is optional and it only makes sense if the positions of the marker corners have to
be accurate, for instance for pose estimation. It is usually a time-consuming step and is therefore disabled by default.
- ```bool doCornerRefinement```
This parameter determines if the corner subpixel process is performed or not. It can be disabled
if accurate corners are not necessary.
Default value: false.
- ```int cornerRefinementWinSize```
This parameter determines the window size of the subpixel refinement process.
High values can cause close image corners to be included in the window region, so that the
marker corner moves to a different, wrong location during the process. Furthermore,
it can affect performance.
Default value: 5
- ```int cornerRefinementMaxIterations```, ```double cornerRefinementMinAccuracy```
These two parameters determine the stop criterion of the subpixel refinement process. The
```cornerRefinementMaxIterations``` indicates the maximum number of iterations and
```cornerRefinementMinAccuracy``` the minimum error value before stopping the process.
If the number of iterations is too high, it can affect performance. On the other hand, if it is
too low, it can produce poor subpixel refinement.
Default values:
```cornerRefinementMaxIterations```: 30, ```cornerRefinementMinAccuracy```: 0.1
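A minimal configuration fragment for these parameters, using the ```DetectorParameters``` struct as in the examples later in this module (refinement is enabled explicitly here; the remaining values are the defaults listed above):

``` c++
#include <opencv2/aruco.hpp>

cv::aruco::DetectorParameters params;
params.doCornerRefinement = true;          // enable the (optional) subpixel step
params.cornerRefinementWinSize = 5;        // cornerSubPix window size
params.cornerRefinementMaxIterations = 30; // stop criterion: iterations
params.cornerRefinementMinAccuracy = 0.1;  // stop criterion: accuracy
// params can then be passed to cv::aruco::detectMarkers()
```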
Aruco module FAQ {#tutorial_aruco_faq}
==============================
This is a compilation of questions that can be useful for those that want to use the aruco module.
- I only want to label some objects, what should I use?
In this case, you only need single ArUco markers. You can place one or several markers with different ids on each of the objects you want to identify.
- Which algorithm is used for marker detection?
The aruco module is based on the original ArUco library. A full description of the detection process can be found in:
> S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, and M. J. Marín-Jiménez. 2014.
> "Automatic generation and detection of highly reliable fiducial markers under occlusion".
> Pattern Recogn. 47, 6 (June 2014), 2280-2292. DOI=10.1016/j.patcog.2014.01.005
- My markers are not being detected correctly, what can I do?
Many factors can prevent the correct detection of markers. You probably need to adjust some of the parameters
in the ```DetectorParameters``` object. The first thing you can do is check whether your markers are returned
as rejected candidates by the ```detectMarkers()``` function. Depending on this, you should try to modify different parameters.
If you are using an ArUco board, you can also try the ```refineDetectedMarkers()``` function.
- What are the benefits of ArUco boards? What are the drawbacks?
Using a board of markers you can obtain the camera pose from a set of markers instead of a single one. This way,
the detection is able to handle occlusions and partial views of the board, since only one marker is necessary to obtain the pose.
Furthermore, since in most cases more corners are used for pose estimation, it will be more accurate than using a single marker.
The main drawback is that a board is not as versatile as a single marker.
- What are the benefits of ChArUco boards over ArUco boards? And the drawbacks?
ChArUco boards combine chessboards with ArUco boards. Thanks to this, the corners provided by ChArUco boards are more accurate than those provided by ArUco boards (or single markers).
The main drawback is that ChArUco boards are not as versatile as ArUco boards. For instance, a ChArUco board is a planar board with a specific marker layout, while ArUco boards
can have any layout, even in 3d. Furthermore, the markers in a ChArUco board are usually smaller and more difficult to detect.
- I do not need pose estimation, should I use ChArUco boards?
No. The main goal of ChArUco boards is to provide highly accurate corners for pose estimation or camera calibration.
- Should all the markers in an ArUco board be placed in the same plane?
No, the marker corners in an ArUco board can be placed anywhere in its 3d coordinate system.
- Should all the markers in a ChArUco board be placed in the same plane?
Yes, all the markers in a ChArUco board need to be in the same plane and their layout is fixed by the chessboard shape.
- What is the difference between a ```Board``` object and a ```GridBoard``` object?
The ```GridBoard``` class is a specific type of board that inherits from ```Board``` class. A ```GridBoard``` object is a board whose markers are placed in the same
plane and in a grid layout.
- What are Diamond markers?
Diamond markers are very similar to a ChArUco board of 3x3 squares. However, contrary to ChArUco boards, the detection of diamonds is based on the relative position of the markers.
They are useful when you want to provide a conceptual meaning to any (or all) of the markers in the diamond. An example is using one of the markers to provide the diamond scale.
- Do I need to detect markers before board detection, ChArUco board detection or diamond detection?
Yes, the detection of single markers is a basic tool in the aruco module. It is done using the ```detectMarkers()``` function. The rest of the functionalities receive
a list of detected markers from this function.
- I want to calibrate my camera, can I use this module?
Yes, the aruco module provides functionalities to calibrate the camera using both ArUco and ChArUco boards.
- Should I calibrate using a ChArUco board or an ArUco board?
Calibration using a ChArUco board is highly recommended due to its higher accuracy.
- Should I use a predefined dictionary or generate my own dictionary?
In general, it is easier to use one of the predefined dictionaries. However, if you need a bigger dictionary (in terms of number of markers or number of bits)
you should generate your own dictionary. Dictionary generation is also useful if you want to maximize the inter-marker distance to achieve a better error
correction during the identification step.
- I am generating my own dictionary but it takes too long.
Dictionary generation should only be done once at the beginning of your application, and it should take a few seconds. If you are
generating the dictionary on each iteration of your detection loop, you are doing it wrong.
Furthermore, it is recommended to save the dictionary to a file and read it on every execution so you don't need to generate it again.
- I would like to use some markers of the original ArUco library that I have already printed, can I use them?
Yes, one of the predefined dictionaries is ```DICT_ARUCO_ORIGINAL```, which detects the markers of the original ArUco library with the same identifiers.
- Can I use the Board configuration file of the original ArUco library in this module?
Not directly; you will need to adapt the information in the ArUco file to the aruco module Board format.
- Can I use this module to detect the markers of other libraries based on binary fiducial markers?
Probably yes, however you will need to port the dictionary of the original library to the aruco module format.
- Do I need to store the Dictionary information in a file so I can use it in different executions?
If you are using one of the predefined dictionaries, it is not necessary. Otherwise, it is recommended that you save it to a file.
- Do I need to store the Board information in a file so I can use it in different executions?
If you are using a ```GridBoard``` or a ```CharucoBoard``` you only need to store the board measurements that are provided to the ```GridBoard::create()``` or ```CharucoBoard::create()``` functions.
If you manually modify the marker ids of the boards, or if you use a different type of board, you should save your board object to a file.
- Does the aruco module provide functions to save the Dictionary or Board to file?
Not right now. However, the data members of both the dictionary and board classes are public and can easily be stored.
- Alright, but how can I render a 3d model to create an augmented reality application?
To do so, you will need to use an external rendering engine library, such as OpenGL. The aruco module only provides the functionality to
obtain the camera pose, i.e. the rotation and translation vectors, which is necessary to create the augmented reality effect.
However, you will need to adapt the rotation and translation vectors from the OpenCV format to the format accepted by your 3d rendering library.
The original ArUco library contains examples of how to do it for OpenGL and Ogre3D.
- I have used this module in my research work, how can I cite it?
You can cite the original ArUco library:
> S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, and M. J. Marín-Jiménez. 2014.
> "Automatic generation and detection of highly reliable fiducial markers under occlusion".
> Pattern Recogn. 47, 6 (June 2014), 2280-2292. DOI=10.1016/j.patcog.2014.01.005
Detection of ChArUco Corners {#tutorial_charuco_detection}
==============================
ArUco markers and boards are very useful due to their fast detection and their versatility.
However, one of the problems of ArUco markers is that the accuracy of their corner positions is not too high,
even after applying subpixel refinement.
On the contrary, the corners of chessboard patterns can be refined more accurately since each corner is
surrounded by two black squares. However, finding a chessboard pattern is not as versatile as finding an ArUco board:
it has to be completely visible and occlusions are not permitted.
A ChArUco board tries to combine the benefits of these two approaches:
![Charuco definition](images/charucodefinition.png)
The ArUco part is used to interpolate the position of the chessboard corners, so that it has the versatility of marker
boards, since it allows occlusions or partial views. Moreover, since the interpolated corners belong to a chessboard,
they are very accurate in terms of subpixel accuracy.
When high precision is necessary, such as in camera calibration, Charuco boards are a better option than standard
Aruco boards.
ChArUco Board Creation
------
The aruco module provides the ```cv::aruco::CharucoBoard``` class that represents a ChArUco board and which inherits from the ```Board``` class.
This class, as the rest of the ChArUco functionalities, is defined in:
``` c++
#include <opencv2/aruco/charuco.hpp>
```
To define a ```CharucoBoard```, it is necessary to specify:
- Number of chessboard squares in X direction.
- Number of chessboard squares in Y direction.
- Length of square side.
- Length of marker side.
- The dictionary of the markers.
- Ids of all the markers.
As for the ```GridBoard``` objects, the aruco module provides a function to create ```CharucoBoard```s easily. This function
is the static function ```cv::aruco::CharucoBoard::create()``` :
``` c++
cv::aruco::CharucoBoard board = cv::aruco::CharucoBoard::create(5, 7, 0.04, 0.02, dictionary);
```
- The first and second parameters are the number of squares in X and Y direction respectively.
- The third and fourth parameters are the length of the squares and the markers respectively. They can be provided
in any unit, having in mind that the estimated pose for this board would be measured in the same units (usually meters are used).
- Finally, the dictionary of the markers is provided.
The ids of each of the markers are assigned by default in ascending order starting at 0, like in ```GridBoard::create()```.
This can be easily customized by accessing the ids vector through ```board.ids```, like in the ```Board``` parent class.
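As a minimal sketch of that customization (the id values below are arbitrary example values):

``` c++
#include <opencv2/aruco/charuco.hpp>

cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
cv::aruco::CharucoBoard board = cv::aruco::CharucoBoard::create(5, 7, 0.04, 0.02, dictionary);
// overwrite the default ascending ids (0, 1, 2, ...) with custom ones;
// each id must exist in the dictionary (0..249 for DICT_6X6_250)
for (size_t i = 0; i < board.ids.size(); i++)
    board.ids[i] = 100 + (int)i;
```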
Once we have our ```CharucoBoard``` object, we can create an image to print it. This can be done with the
```CharucoBoard::draw()``` method:
``` c++
cv::aruco::CharucoBoard board = cv::aruco::CharucoBoard::create(5, 7, 0.04, 0.02, dictionary);
cv::Mat boardImage;
board.draw( cv::Size(600, 500), boardImage, 10, 1 );
```
- The first parameter is the size of the output image in pixels. In this case 600x500 pixels. If this is not proportional
to the board dimensions, the board will be centered on the image.
- ```boardImage```: the output image with the board.
- The third parameter is the (optional) margin in pixels, so none of the markers are touching the image border.
In this case the margin is 10.
- Finally, the size of the marker border, similarly to ```drawMarker()``` function. The default value is 1.
The output image will be something like this:
![](images/charucoboard.jpg)
A full working example is included in the ```create_board_charuco.cpp``` inside the module samples folder.
ChArUco Board Detection
------
When you detect a ChArUco board, what you are actually detecting is each of the chessboard corners of the board.
Each corner on a ChArUco board has a unique identifier (id) assigned. These ids go from 0 to the total number of corners
in the board.
So, a detected ChArUco board consists of:
- ```vector<Point2f> charucoCorners``` : list of image positions of the detected corners.
- ```vector <int> charucoIds``` : ids for each of the detected corners in ```charucoCorners```.
The detection of the ChArUco corners is based on the previously detected markers. So, markers are detected first, and then the
ChArUco corners are interpolated from these markers.
The function that detects the ChArUco corners is ```cv::aruco::interpolateCornersCharuco()```. The following example shows the whole process:
``` c++
cv::Mat inputImage;
cv::Mat cameraMatrix, distCoeffs;
// camera parameters are read from somewhere
readCameraParameters(cameraMatrix, distCoeffs);
cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
cv::aruco::CharucoBoard board = cv::aruco::CharucoBoard::create(5, 7, 0.04, 0.02, dictionary);
...
vector< int > markerIds;
vector< vector<Point2f> > markerCorners;
cv::aruco::detectMarkers(inputImage, board.dictionary, markerCorners, markerIds);
// if at least one marker detected
if(markerIds.size() > 0) {
std::vector<cv::Point2f> charucoCorners;
std::vector<int> charucoIds;
cv::aruco::interpolateCornersCharuco(markerCorners, markerIds, inputImage, board, charucoCorners, charucoIds, cameraMatrix, distCoeffs);
}
```
The parameters of the ```interpolateCornersCharuco()``` function are:
- ```markerCorners``` and ```markerIds```: the detected markers from ```detectMarkers()``` function.
- ```inputImage```: the original image where the markers were detected. The image is necessary to perform subpixel refinement
in the ChArUco corners.
- ```board```: the ```CharucoBoard``` object
- ```charucoCorners``` and ```charucoIds```: the output interpolated Charuco corners
- ```cameraMatrix``` and ```distCoeffs```: the optional camera calibration parameters
- The function returns the number of Charuco corners interpolated.
In this case, we have called ```interpolateCornersCharuco()``` providing the camera calibration parameters. However, these parameters
are optional. A similar example without them would be:
``` c++
cv::Mat inputImage;
cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
cv::aruco::CharucoBoard board = cv::aruco::CharucoBoard::create(5, 7, 0.04, 0.02, dictionary);
...
vector< int > markerIds;
vector< vector<Point2f> > markerCorners;
DetectorParameters params;
params.doCornerRefinement = false;
cv::aruco::detectMarkers(inputImage, board.dictionary, markerCorners, markerIds, params);
// if at least one marker detected
if(markerIds.size() > 0) {
std::vector<cv::Point2f> charucoCorners;
std::vector<int> charucoIds;
cv::aruco::interpolateCornersCharuco(markerCorners, markerIds, inputImage, board, charucoCorners, charucoIds);
}
```
If calibration parameters are provided, the ChArUco corners are interpolated by, first, estimating a rough pose from the ArUco markers
and, then, reprojecting the ChArUco corners back to the image.
On the other hand, if calibration parameters are not provided, the ChArUco corners are interpolated by calculating the
corresponding homography between the ChArUco plane and the ChArUco image projection.
The main problem of using homography is that the interpolation is more sensitive to image distortion. In fact, the homography is only computed
using the markers closest to each ChArUco corner, to reduce the effect of distortion.
When detecting markers for ChArUco boards, and especially when using homography, it is recommended to disable the corner refinement of markers. The reason for this
is that, due to the proximity of the chessboard squares, the subpixel process can produce important
deviations in the corner positions, and these deviations are propagated to the ChArUco corner interpolation,
producing poor results.
Furthermore, only those corners whose two surrounding markers have been found are returned. If either of the two surrounding markers has
not been detected, this usually means that there is some occlusion or the image quality is not good in that zone. In any case, it is
preferable not to consider that corner, since what we want is to be sure that the interpolated ChArUco corners are very accurate.
After the ChArUco corners have been interpolated, a subpixel refinement is performed.
Once we have interpolated the ChArUco corners, we would probably want to draw them to see if their detections are correct.
This can be easily done using the ```drawDetectedCornersCharuco()``` function:
``` c++
cv::aruco::drawDetectedCornersCharuco(image, charucoCorners, charucoIds, color);
```
- ```image``` is the image where the corners will be drawn (normally the same image where the corners were detected). The corners are drawn directly on this image.
- ```charucoCorners``` and ```charucoIds``` are the detected ChArUco corners from the ```interpolateCornersCharuco()``` function.
- Finally, the last parameter is the (optional) color we want to draw the corners with, of type ```cv::Scalar```.
For this image:
![Image with Charuco board](images/choriginal.png)
The result will be:
![Charuco board detected](images/chcorners.png)
In the presence of occlusion, like in the following image, although some corners are clearly visible, not all their surrounding markers have been detected due to occlusion and, thus, they are not interpolated:
![Charuco detection with occlusion](images/chocclusion.png)
Finally, this is a full example of ChArUco detection (without using calibration parameters):
``` c++
cv::VideoCapture inputVideo;
inputVideo.open(0);
cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
cv::aruco::CharucoBoard board = cv::aruco::CharucoBoard::create(5, 7, 0.04, 0.02, dictionary);
DetectorParameters params;
params.doCornerRefinement = false;
int waitTime = 10; // delay per frame in ms (declared here; the exact value is not critical)
while (inputVideo.grab()) {
cv::Mat image, imageCopy;
inputVideo.retrieve(image);
image.copyTo(imageCopy);
std::vector<int> ids;
std::vector<std::vector<cv::Point2f> > corners;
cv::aruco::detectMarkers(image, dictionary, corners, ids, params);
// if at least one marker detected
if (ids.size() > 0) {
cv::aruco::drawDetectedMarkers(imageCopy, corners, ids);
std::vector<cv::Point2f> charucoCorners;
std::vector<int> charucoIds;
cv::aruco::interpolateCornersCharuco(corners, ids, image, board, charucoCorners, charucoIds);
// if at least one charuco corner detected
if(charucoIds.size() > 0)
cv::aruco::drawDetectedCornersCharuco(imageCopy, charucoCorners, charucoIds, cv::Scalar(255, 0, 0));
}
cv::imshow("out", imageCopy);
char key = (char) cv::waitKey(waitTime);
if (key == 27)
break;
}
```
Sample video:
[![ChArUco board detection video](http://img.youtube.com/vi/Nj44m_N_9FY/0.jpg)](https://youtu.be/Nj44m_N_9FY)
A full working example is included in the ```detect_board_charuco.cpp``` inside the module samples folder.
ChArUco Pose Estimation
------
The final goal of the ChArUco boards is finding corners very accurately for a high precision calibration or pose estimation.
The aruco module provides a function to perform ChArUco pose estimation easily. As in the ```GridBoard```, the coordinate system
of the ```CharucoBoard``` is placed in the board plane with the Z axis pointing out, and centered in the bottom left corner of the board.
The function for pose estimation is ```estimatePoseCharucoBoard()```:
``` c++
cv::aruco::estimatePoseCharucoBoard(charucoCorners, charucoIds, board, cameraMatrix, distCoeffs, rvec, tvec);
```
- The ```charucoCorners``` and ```charucoIds``` parameters are the detected charuco corners from the ```interpolateCornersCharuco()```
function.
- The third parameter is the ```CharucoBoard``` object.
- The ```cameraMatrix``` and ```distCoeffs``` are the camera calibration parameters which are necessary for pose estimation.
- Finally, the ```rvec``` and ```tvec``` parameters are the output pose of the Charuco Board.
- The function returns true if the pose was correctly estimated and false otherwise. The main reason for failure is that there are
not enough corners for pose estimation, or they all lie on the same line.
The axes can be drawn using ```drawAxis()``` to check that the pose is correctly estimated. The result would be (X: red, Y: green, Z: blue):
![Charuco Board Axis](images/chaxis.png)
A full example of ChArUco detection with pose estimation:
``` c++
cv::VideoCapture inputVideo;
inputVideo.open(0);
cv::Mat cameraMatrix, distCoeffs;
// camera parameters are read from somewhere
readCameraParameters(cameraMatrix, distCoeffs);
cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
cv::aruco::CharucoBoard board = cv::aruco::CharucoBoard::create(5, 7, 0.04, 0.02, dictionary);
int waitTime = 10; // delay per frame in ms (declared here; the exact value is not critical)
while (inputVideo.grab()) {
cv::Mat image, imageCopy;
inputVideo.retrieve(image);
image.copyTo(imageCopy);
std::vector<int> ids;
std::vector<std::vector<cv::Point2f> > corners;
cv::aruco::detectMarkers(image, dictionary, corners, ids);
// if at least one marker detected
if (ids.size() > 0) {
std::vector<cv::Point2f> charucoCorners;
std::vector<int> charucoIds;
cv::aruco::interpolateCornersCharuco(corners, ids, image, board, charucoCorners, charucoIds, cameraMatrix, distCoeffs);
// if at least one charuco corner detected
if(charucoIds.size() > 0) {
cv::aruco::drawDetectedCornersCharuco(imageCopy, charucoCorners, charucoIds, cv::Scalar(255, 0, 0));
cv::Mat rvec, tvec;
bool valid = cv::aruco::estimatePoseCharucoBoard(charucoCorners, charucoIds, board, cameraMatrix, distCoeffs, rvec, tvec);
// if charuco pose is valid
if(valid)
cv::aruco::drawAxis(imageCopy, cameraMatrix, distCoeffs, rvec, tvec, 0.1);
}
}
cv::imshow("out", imageCopy);
char key = (char) cv::waitKey(waitTime);
if (key == 27)
break;
}
```
A full working example is included in the ```detect_board_charuco.cpp``` inside the module samples folder.
Detection of Diamond Markers {#tutorial_charuco_diamond_detection}
==============================
A ChArUco diamond marker (or simply diamond marker) is a chessboard composed of 3x3 squares and 4 ArUco markers inside the white squares.
It is similar to a ChArUco board in appearance; however, they are conceptually different.
![Diamond marker examples](images/diamondmarkers.png)
For both ChArUco boards and diamond markers, detection is based on previously detected ArUco
markers. In the ChArUco case, the used markers are selected by directly looking at their identifiers. This means
that if a marker (included in the board) is found in an image, it is automatically assumed to belong to the board. Furthermore,
if a board marker is found more than once in the image, it produces an ambiguity, since the system won't
be able to know which one should be used for the board.
On the other hand, the detection of Diamond marker is not based on the identifiers. Instead, their detection
is based on the relative position of the markers. As a consequence, marker identifiers can be repeated in the
same diamond or among different diamonds, and they can be detected simultaneously without ambiguity. However,
due to the complexity of finding marker based on their relative position, the diamond markers are limited to
a size of 3x3 squares and 4 markers.
As in a single ArUco marker, each Diamond marker is composed by 4 corners and a identifier. The four corners
correspond to the 4 chessboard corners in the marker and the identifier is actually an array of 4 numbers, which are
the identifiers of the four ArUco markers inside the diamond.
Diamond markers are useful in those scenarios where repeated markers should be allowed. For instance:
- To increase the number of identifiers of single markers by using diamond marker for labeling. They would allow
up to N^4 different ids, being N the number of markers in the used dictionary.
- Give to each of the four markers a conceptual meaning. For instance, one of the four marker ids could be
used to indicate the scale of the marker (i.e. the size of the square), so that the same diamond can be found
in the environment with different sizes just by changing one of the four markers and the user does not need
to manually indicate the scale of each of them. This case is included in the ```diamond_detector.cpp``` file inside
the samples folder of the module.
Furthermore, as the diamond corners are chessboard corners, they can be used for accurate pose estimation.

The diamond functionalities are included in ```<opencv2/aruco/charuco.hpp>```.
ChArUco Diamond Creation
------
The image of a diamond marker can be easily created using the ```drawCharucoDiamond()``` function.
For instance:
``` c++
cv::Mat diamondImage;
cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
cv::aruco::drawCharucoDiamond(dictionary, cv::Vec4i(45, 68, 28, 74), 200, 120, diamondImage);
```
This will create a diamond marker image with a square size of 200 pixels and a marker size of 120 pixels.
The marker ids are given in the second parameter as a ```Vec4i``` object. The order of the marker ids
in the diamond layout is the same as in a standard ChArUco board, i.e. top, left, right and bottom.
The image produced will be:
![Diamond marker](images/diamondmarker.png)
A full working example is included in ```create_diamond.cpp``` inside the module samples folder.
ChArUco Diamond Detection
------
As in most cases, the detection of diamond markers requires that ArUco markers are detected first.
After detecting the markers, diamonds are detected using the ```detectCharucoDiamond()``` function:
``` c++
cv::Mat inputImage;
float squareLength = 0.40;
float markerLength = 0.25;
...
std::vector< int > markerIds;
std::vector< std::vector< cv::Point2f > > markerCorners;
// detect ArUco markers
cv::aruco::detectMarkers(inputImage, dictionary, markerCorners, markerIds);
std::vector< cv::Vec4i > diamondIds;
std::vector< std::vector< cv::Point2f > > diamondCorners;
// detect diamonds
cv::aruco::detectCharucoDiamond(inputImage, markerCorners, markerIds, squareLength / markerLength, diamondCorners, diamondIds);
```
The ```detectCharucoDiamond()``` function receives the original image and the previously detected marker corners and ids.
The input image is necessary to perform subpixel refinement of the ChArUco corners.
It also receives the ratio between the square size and the marker size, which is required both for detecting the diamond
from the relative positions of the markers and for interpolating the ChArUco corners.

The function returns the detected diamonds in two parameters. The first parameter, ```diamondCorners```, is an array containing
all four corners of each detected diamond. Its format is similar to that of the corners detected by the ```detectMarkers()```
function and, for each diamond, the corners are represented in the same order as in ArUco markers, i.e. clockwise order
starting with the top-left corner. The second returned parameter, ```diamondIds```, contains the ids of the returned
diamond corners in ```diamondCorners```. Each id is actually an array of 4 integers that can be represented with ```Vec4i```.
The detected diamonds can be visualized using the function ```drawDetectedDiamonds()```, which simply receives the image and the diamond
corners and ids:
``` c++
...
std::vector< cv::Vec4i > diamondIds;
std::vector< std::vector< cv::Point2f > > diamondCorners;
cv::aruco::detectCharucoDiamond(inputImage, markerCorners, markerIds, squareLength / markerLength, diamondCorners, diamondIds);
cv::aruco::drawDetectedDiamonds(inputImage, diamondCorners, diamondIds);
```
The result is the same as that produced by ```drawDetectedMarkers()```, but printing the four ids of the diamond:
![Detected diamond markers](images/detecteddiamonds.png)
A full working example is included in ```detect_diamonds.cpp``` inside the module samples folder.
ChArUco Diamond Pose Estimation
------
Since a ChArUco diamond is represented by its four corners, its pose can be estimated in the same way as that of a single ArUco marker,
i.e. using the ```estimatePoseSingleMarkers()``` function. For instance:
``` c++
...
std::vector< cv::Vec4i > diamondIds;
std::vector< std::vector< cv::Point2f > > diamondCorners;
// detect diamonds
cv::aruco::detectCharucoDiamond(inputImage, markerCorners, markerIds, squareLength / markerLength, diamondCorners, diamondIds);
// estimate poses
std::vector<cv::Mat> rvecs, tvecs;
cv::aruco::estimatePoseSingleMarkers(diamondCorners, squareLength, camMatrix, distCoeffs, rvecs, tvecs);
// draw axis
for(unsigned int i=0; i<rvecs.size(); i++)
cv::aruco::drawAxis(inputImage, camMatrix, distCoeffs, rvecs[i], tvecs[i], axisLength);
```
The function will obtain the rotation and translation vector for each of the diamond markers and store them
in ```rvecs``` and ```tvecs```. Note that the diamond corners are chessboard square corners and thus the square length,
not the marker length, has to be provided for pose estimation. Camera calibration parameters are also required.

Finally, an axis can be drawn to check that the estimated pose is correct using ```drawAxis()```:
![Detected diamond axis](images/diamondsaxis.png)
The coordinate system of the diamond pose will be in the center of the marker with the Z axis pointing out,
as in a simple ArUco marker pose estimation.
Sample video:
[![Diamond markers detection video](http://img.youtube.com/vi/OqKpBnglH7k/0.jpg)](https://youtu.be/OqKpBnglH7k)
A full working example is included in ```detect_diamonds.cpp``` inside the module samples folder.


@@ -0,0 +1,60 @@
ArUco marker detection (aruco module) {#tutorial_table_of_content_aruco}
==========================================================
ArUco markers are binary square fiducial markers that can be used for camera pose estimation.
Their main benefit is that their detection is robust, fast and simple.
The aruco module includes the detection of these types of markers and the tools to employ them
for pose estimation and camera calibration.
Also, the ChArUco functionalities combine ArUco markers with traditional chessboards to allow
easy and versatile corner detection. The module also includes the functions to detect
ChArUco corners and use them for pose estimation and camera calibration.
- @subpage tutorial_aruco_detection
*Compatibility:* \> OpenCV 3.0
*Author:* Sergio Garrido
Basic detection and pose estimation from single ArUco markers.
- @subpage tutorial_aruco_board_detection
*Compatibility:* \> OpenCV 3.0
*Author:* Sergio Garrido
Detection and pose estimation using a Board of markers.
- @subpage tutorial_charuco_detection
*Compatibility:* \> OpenCV 3.0
*Author:* Sergio Garrido
Basic detection using ChArUco corners.
- @subpage tutorial_charuco_diamond_detection
*Compatibility:* \> OpenCV 3.0
*Author:* Sergio Garrido
Detection and pose estimation using ChArUco markers.
- @subpage tutorial_aruco_calibration
*Compatibility:* \> OpenCV 3.0
*Author:* Sergio Garrido
Camera Calibration using ArUco and ChArUco boards.
- @subpage tutorial_aruco_faq
*Compatibility:* \> OpenCV 3.0
*Author:* Sergio Garrido
General and useful questions about the aruco module.