- ```objPoints``` and ```imgPoints```: object and image points, matched with ```matchImagePoints()```, which, in turn, takes as input ```markerCorners``` and ```markerIds```, the structures of detected markers returned by the ```detectMarkers()``` function.
- ```board```: the ```Board``` object that defines the board layout and its ids
- ```cameraMatrix``` and ```distCoeffs```: camera calibration parameters necessary for pose estimation.
- ```rvec``` and ```tvec```: estimated pose of the Board. If not empty then treated as initial guess.
- The function returns the total number of markers employed for estimating the board pose.
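A minimal sketch of how these pieces fit together is shown below. It assumes OpenCV >= 4.7 (where the ArUco classes live in the objdetect module), a previously created `board`, the output of `detectMarkers()` in `markerCorners`/`markerIds`, and calibration data in `cameraMatrix`/`distCoeffs`; variable names are illustrative:

```cpp
#include <opencv2/objdetect/aruco_detector.hpp>
#include <opencv2/calib3d.hpp>

// Match the detected markers against the board definition.
cv::Mat objPoints, imgPoints;
board.matchImagePoints(markerCorners, markerIds, objPoints, imgPoints);

// Estimate the board pose only if at least one board marker was matched.
cv::Mat rvec, tvec;
if (!objPoints.empty())
    cv::solvePnP(objPoints, imgPoints, cameraMatrix, distCoeffs, rvec, tvec);
```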
The ```drawFrameAxes()``` function can be used to check the obtained pose. For instance:
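A possible usage sketch, using the `rvec`/`tvec` estimated above (`imageCopy` is an illustrative copy of the input image, and the axis length of 0.1 is an arbitrary value expressed in the same units as the board dimensions):

```cpp
#include <opencv2/calib3d.hpp>

// Draw the estimated board pose on the image for visual inspection.
if (!rvec.empty() && !tvec.empty())
    cv::drawFrameAxes(imageCopy, cameraMatrix, distCoeffs, rvec, tvec, 0.1f);
```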
- Finally, the dictionary of the markers is provided.
So, this board will be composed of 5x7=35 markers. The ids of each of the markers are assigned, by default, in ascending order starting at 0, so they will be 0, 1, 2, ..., 34.
After creating a Grid Board, we probably want to print it and use it. A function to generate the image of a ```GridBoard``` is provided in ```cv::aruco::GridBoard::generateImage()```. For example:
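The following is a sketch of what such a call might look like; the board layout, marker dimensions and output image size are illustrative values, and the header paths assume OpenCV >= 4.7:

```cpp
#include <opencv2/objdetect/aruco_board.hpp>
#include <opencv2/imgcodecs.hpp>

cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);

// 5x7 markers, marker side 0.04 and separation 0.01 (in consistent, arbitrary units).
cv::aruco::GridBoard board(cv::Size(5, 7), 0.04f, 0.01f, dictionary);

// Render the board into a 600x500 image with a 10-pixel margin and a 1-bit marker border.
cv::Mat boardImage;
board.generateImage(cv::Size(600, 500), boardImage, 10, 1);
cv::imwrite("board.png", boardImage);
```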
When you create an `#cv::aruco::ArucoDetector` object, you need to pass the following parameters to the constructor:
- A dictionary object, in this case one of the predefined dictionaries (`#cv::aruco::DICT_6X6_250`).
- An object of type `#cv::aruco::DetectorParameters`. This object includes all parameters that can be customized during the detection process. These parameters will be explained in the next section.
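As a sketch (assuming OpenCV >= 4.7), the constructor call could look like this:

```cpp
#include <opencv2/objdetect/aruco_detector.hpp>

cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
cv::aruco::DetectorParameters detectorParams;  // default detection parameters
cv::aruco::ArucoDetector detector(dictionary, detectorParams);
```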
The parameters of `detectMarkers` are listed below (a call sketch follows the list):
- The first parameter is the image containing the markers to be detected.
- The detected markers are stored in the `markerCorners` and `markerIds` structures:
- `markerCorners` is the list of corners of the detected markers. For each marker, its four corners are returned in their original order (which is clockwise starting with top left). So, the first corner is the top left corner, followed by the top right, bottom right and bottom left.
- `markerIds` is the list of ids of each of the detected markers in `markerCorners`.
Note that the returned `markerCorners` and `markerIds` vectors have the same size.
- The final parameter, `rejectedCandidates`, is a returned list of marker candidates, i.e. shapes that were found and considered but did not contain a valid marker. Each candidate is also defined by its four corners, and its format is the same as the `markerCorners` parameter.
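A call sketch, reusing the `detector` object created above (`inputImage` is an illustrative name for a previously loaded image):

```cpp
#include <vector>

std::vector<int> markerIds;
std::vector<std::vector<cv::Point2f>> markerCorners, rejectedCandidates;
detector.detectMarkers(inputImage, markerCorners, markerIds, rejectedCandidates);
```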
@note To work with examples from the tutorial, you can use camera parameters from `tutorial_camera_params.yml`.
An example of use can be found in `detect_markers.cpp`.
## Selecting a dictionary
The aruco module provides the `Dictionary` class to represent a dictionary of markers.
In addition to the marker size and the number of markers in the dictionary, there is another important parameter of the dictionary: the inter-marker distance. The inter-marker distance is the minimum distance between the markers of the dictionary and it determines the dictionary's ability to detect and correct errors.
In general, smaller dictionary sizes and larger marker sizes increase the inter-marker distance and vice versa. However, the detection of markers with larger sizes is more difficult due to the higher number of bits that need to be extracted from the image.
For instance, if you need only 10 markers in your application, it is better to use a dictionary composed only of those 10 markers than a dictionary composed of 1000 markers. The reason is that the smaller dictionary will have a higher inter-marker distance and, consequently, will be more robust to errors.
There are several ways to select a dictionary so that you can increase your system robustness:
### Predefined dictionaries
This is the easiest way to select a dictionary. The aruco module includes a set of predefined dictionaries in a variety of marker sizes and number of markers. For instance:
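A sketch of obtaining one of the predefined dictionaries (assuming OpenCV >= 4.7, where `getPredefinedDictionary()` returns a `Dictionary` by value; earlier versions return a `Ptr<Dictionary>` instead):

```cpp
#include <opencv2/objdetect/aruco_detector.hpp>

// A dictionary of 250 markers, each encoded with 6x6 bits.
cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
```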
#### adaptiveThreshWinSizeMin, adaptiveThreshWinSizeMax and adaptiveThreshWinSizeStep

Window sizes that are too small can lead to a poor thresholding and, thus, to missed marker detections. On the other hand, values that are too large can produce the same effect if the markers are too small, and they can also reduce performance. Moreover, the process would tend towards global thresholding, losing the benefits of adaptive thresholding.
The simplest case is using the same value for `adaptiveThreshWinSizeMin` and `adaptiveThreshWinSizeMax`, which produces a single thresholding step. However, it is usually better to use a range of window sizes, even if this implies that several thresholding steps are performed.
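As a sketch, these window-size parameters can be tuned through `DetectorParameters` (the values below are illustrative, not recommendations, and the headers from the earlier sketches are assumed):

```cpp
cv::aruco::DetectorParameters params;
params.adaptiveThreshWinSizeMin  = 3;   // smallest thresholding window (pixels)
params.adaptiveThreshWinSizeMax  = 23;  // largest thresholding window (pixels)
params.adaptiveThreshWinSizeStep = 10;  // step between consecutive window sizes
cv::aruco::ArucoDetector detector(cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250), params);
```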
#### minMarkerPerimeterRate and maxMarkerPerimeterRate

These parameters determine the minimum and maximum allowed perimeter of a marker candidate contour, specified as a rate relative to the maximum dimension of the input image. For instance, an image of size 640x480 and a minimum relative marker perimeter of 0.05 will lead to a minimum marker perimeter of 640x0.05 = 32 pixels, since 640 is the maximum dimension of the image. The same applies to the `maxMarkerPerimeterRate` parameter.
If the `minMarkerPerimeterRate` is too low, detection performance can be significantly reduced, as many more contours will be considered in later stages. This penalization is not so noticeable for the `maxMarkerPerimeterRate` parameter, since there are usually many more small contours than big ones.
A `minMarkerPerimeterRate` value of 0 and a `maxMarkerPerimeterRate` value of 4 (or more) will effectively accept contours of any size; however, this is not recommended, for the performance reasons discussed above.
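A sketch of how these rates might be adjusted (the values are illustrative):

```cpp
cv::aruco::DetectorParameters params;
params.minMarkerPerimeterRate = 0.05;  // ignore contours smaller than 5% of the largest image dimension
params.maxMarkerPerimeterRate = 4.0;   // accept contours up to 4x the largest image dimension
```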
After candidate detection, the bits of each candidate are analyzed in order to determine if they are markers or not.
Before analyzing the binary code itself, the bits need to be extracted. To do this, the perspective distortion is corrected and the resulting image is thresholded using Otsu thresholding to separate black and white pixels.
This is an example of the image obtained after removing the perspective distortion of a marker:
#### perspectiveRemovePixelPerCell
This parameter determines the number of pixels (per cell) in the obtained image after correcting perspective distortion (including the border). This is the size of the red squares in the image above.
For instance, let's assume we are dealing with markers of 5x5 bits and a border size of 1 bit. In that case, each dimension has 5 + 2x1 = 7 cells, so a `perspectiveRemovePixelPerCell` value of 10 would produce an extracted image of 10x7 = 70 pixels per side.
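In code, this might look like the following sketch (the value of 10 is illustrative; check the default of your OpenCV version):

```cpp
cv::aruco::DetectorParameters params;
// With 5x5-bit markers and a 1-bit border (7 cells per side), a value of 10
// yields a 70x70 pixel image after the perspective distortion is removed.
params.perspectiveRemovePixelPerCell = 10;
```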
#### perspectiveRemoveIgnoredMarginPerCell

When counting the black and white pixels of each cell, it is not recommended to consider all the cell pixels. Instead, it is better to ignore some pixels in the margins of the cells.
The reason for this is that, after removing the perspective distortion, the cells' colors are, in general, not perfectly separated and white cells can invade some pixels of black cells (and vice versa). Thus, it is better to ignore some pixels just to avoid counting erroneous pixels.
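A sketch of tuning this margin (the value is illustrative; it is expressed as a rate relative to the cell size):

```cpp
cv::aruco::DetectorParameters params;
// Ignore a margin equal to 10% of the cell size around each cell
// when counting its black and white pixels.
params.perspectiveRemoveIgnoredMarginPerCell = 0.1;
```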
After the markers have been detected and identified, the last step is corner subpixel refinement. This step is optional and only makes sense if the marker corner positions have to be accurate, for instance for pose estimation, and it is usually a time-consuming step.
#### cornerRefinementMethod
This parameter determines whether the corner subpixel process is performed or not and which method to use if it is being performed. It can be disabled if accurate corners are not necessary. Possible values are `CORNER_REFINE_NONE`, `CORNER_REFINE_SUBPIX`, `CORNER_REFINE_CONTOUR`, and `CORNER_REFINE_APRILTAG`.
Default value: `CORNER_REFINE_NONE`.
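As a sketch, subpixel refinement could be enabled like this:

```cpp
cv::aruco::DetectorParameters params;
// Enable corner subpixel refinement (it is skipped when set to CORNER_REFINE_NONE).
params.cornerRefinementMethod = cv::aruco::CORNER_REFINE_SUBPIX;
```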
#### cornerRefinementWinSize
This parameter determines the window size of the subpixel refinement process. High values can cause nearby image corners to be included in the window region, so that the marker corner moves to a different and incorrect location during the process. Furthermore, it may affect performance.
Default value: 5.
#### cornerRefinementMaxIterations and cornerRefinementMinAccuracy

These two parameters determine the stop criteria of the subpixel refinement process.
`cornerRefinementMaxIterations` indicates the maximum number of iterations and `cornerRefinementMinAccuracy` the minimum error value before stopping the process.
If the number of iterations is too high, it may affect the performance. On the other hand, if it is too low, it can result in poor subpixel refinement.
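Putting the refinement parameters together, a configuration sketch might look like this (the values are illustrative and happen to match common defaults; check the defaults of your OpenCV version):

```cpp
cv::aruco::DetectorParameters params;
params.cornerRefinementMethod        = cv::aruco::CORNER_REFINE_SUBPIX;
params.cornerRefinementWinSize       = 5;    // subpixel refinement window size
params.cornerRefinementMaxIterations = 30;   // stop after this many iterations...
params.cornerRefinementMinAccuracy   = 0.1;  // ...or once the error falls below this value
```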