Merge pull request #25378 from AleksandrPanov:move_charuco_tutorial

Move Charuco/Calib tutorials and samples to main repo #25378

Merge with https://github.com/opencv/opencv_contrib/pull/3708

Move Charuco/Calib tutorials and samples to main repo:

- [x] update/fix charuco_detection.markdown and samples
- [x] update/fix charuco_diamond_detection.markdown and samples
- [x] update/fix aruco_calibration.markdown and samples
- [x] update/fix aruco_faq.markdown
- [x] move tutorials, samples and tests to main repo
- [x] remove old tutorials, samples and tests from contrib


### Pull Request Readiness Checklist

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on a code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [x] There is a reference to the original bug report and related work
- [x] There is accuracy test, performance test and test data in opencv_extra repository, if applicable
      Patch to opencv_extra has the same branch name.
- [ ] The feature is well documented and sample code can be built with the project CMake
Alexander Panov 7 months ago committed by GitHub
parent 4fb0541916
commit e2621f128e
  1. doc/tutorials/objdetect/aruco_board_detection/aruco_board_detection.markdown (1 changed line)
  2. doc/tutorials/objdetect/aruco_calibration/aruco_calibration.markdown (88 changed lines)
  3. doc/tutorials/objdetect/aruco_calibration/images/arucocalibration.jpg (binary)
  4. doc/tutorials/objdetect/aruco_calibration/images/charucocalibration.jpg (binary)
  5. doc/tutorials/objdetect/aruco_faq/aruco_faq.markdown (190 changed lines)
  6. doc/tutorials/objdetect/charuco_detection/charuco_detection.markdown (265 changed lines)
  7. doc/tutorials/objdetect/charuco_detection/images/charucoboard.png (binary)
  8. doc/tutorials/objdetect/charuco_detection/images/charucodefinition.png (binary)
  9. doc/tutorials/objdetect/charuco_detection/images/chaxis.jpg (binary)
  10. doc/tutorials/objdetect/charuco_detection/images/chcorners.jpg (binary)
  11. doc/tutorials/objdetect/charuco_detection/images/chocclusion.jpg (binary)
  12. doc/tutorials/objdetect/charuco_detection/images/chocclusion_original.jpg (binary)
  13. doc/tutorials/objdetect/charuco_detection/images/choriginal.jpg (binary)
  14. doc/tutorials/objdetect/charuco_diamond_detection/charuco_diamond_detection.markdown (143 changed lines)
  15. doc/tutorials/objdetect/charuco_diamond_detection/images/detecteddiamonds.jpg (binary)
  16. doc/tutorials/objdetect/charuco_diamond_detection/images/diamondmarker.png (binary)
  17. doc/tutorials/objdetect/charuco_diamond_detection/images/diamondmarkers.jpg (binary)
  18. doc/tutorials/objdetect/charuco_diamond_detection/images/diamondsaxis.jpg (binary)
  19. doc/tutorials/objdetect/table_of_content_objdetect.markdown (4 changed lines)
  20. modules/objdetect/test/test_aruco_tutorial.cpp (246 changed lines)
  21. samples/cpp/tutorial_code/objectDetection/aruco_samples_utility.hpp (43 changed lines)
  22. samples/cpp/tutorial_code/objectDetection/calibrate_camera.cpp (188 changed lines)
  23. samples/cpp/tutorial_code/objectDetection/calibrate_camera_charuco.cpp (216 changed lines)
  24. samples/cpp/tutorial_code/objectDetection/create_board.cpp (22 changed lines)
  25. samples/cpp/tutorial_code/objectDetection/create_board_charuco.cpp (77 changed lines)
  26. samples/cpp/tutorial_code/objectDetection/create_diamond.cpp (72 changed lines)
  27. samples/cpp/tutorial_code/objectDetection/create_marker.cpp (29 changed lines)
  28. samples/cpp/tutorial_code/objectDetection/detect_board.cpp (52 changed lines)
  29. samples/cpp/tutorial_code/objectDetection/detect_board_charuco.cpp (144 changed lines)
  30. samples/cpp/tutorial_code/objectDetection/detect_diamonds.cpp (187 changed lines)
  31. samples/cpp/tutorial_code/objectDetection/detect_markers.cpp (66 changed lines)
  32. samples/cpp/tutorial_code/objectDetection/detector_params.yml (30 changed lines)
  33. samples/cpp/tutorial_code/objectDetection/tutorial_camera_charuco.yml (21 changed lines)

@@ -2,6 +2,7 @@ Detection of ArUco boards {#tutorial_aruco_board_detection}
=========================
@prev_tutorial{tutorial_aruco_detection}
@next_tutorial{tutorial_charuco_detection}
| | |
| -: | :- |

@@ -0,0 +1,88 @@
Calibration with ArUco and ChArUco {#tutorial_aruco_calibration}
==================================
@prev_tutorial{tutorial_charuco_diamond_detection}
@next_tutorial{tutorial_aruco_faq}
The ArUco module can also be used to calibrate a camera. Camera calibration consists of obtaining the
camera intrinsic parameters and distortion coefficients. These parameters remain fixed unless the camera
optics are modified, so camera calibration only needs to be done once.
Camera calibration is usually performed using the OpenCV `cv::calibrateCamera()` function. This function
requires some correspondences between environment points and their projection in the camera image from
different viewpoints. In general, these correspondences are obtained from the corners of chessboard
patterns. See `cv::calibrateCamera()` function documentation or the OpenCV calibration tutorial for
more detailed information.
Using the ArUco module, calibration can be performed based on ArUco marker corners or ChArUco corners.
Calibrating using ArUco is much more versatile than using traditional chessboard patterns, since it
allows occlusions and partial views.
As stated, calibration can be done using either marker corners or ChArUco corners. However,
it is highly recommended to use the ChArUco corners approach, since the provided corners are much
more accurate than the marker corners. Calibration using a standard board should only be
employed in scenarios where ChArUco boards cannot be used because of some restriction.
Calibration with ChArUco Boards
-------------------------------
To calibrate using a ChArUco board, it is necessary to detect the board from different viewpoints, in the
same way that the standard calibration does with the traditional chessboard pattern. However, due to the
benefits of using ChArUco, occlusions and partial views are allowed, and not all the corners need to be
visible in all the viewpoints.
![ChArUco calibration viewpoints](images/charucocalibration.jpg)
An example of using `cv::calibrateCamera()` with a cv::aruco::CharucoBoard:
@snippet samples/cpp/tutorial_code/objectDetection/calibrate_camera_charuco.cpp CalibrationWithCharucoBoard1
@snippet samples/cpp/tutorial_code/objectDetection/calibrate_camera_charuco.cpp CalibrationWithCharucoBoard2
@snippet samples/cpp/tutorial_code/objectDetection/calibrate_camera_charuco.cpp CalibrationWithCharucoBoard3
The ChArUco corners and ChArUco identifiers captured on each viewpoint are stored in the vectors
`allCharucoCorners` and `allCharucoIds`, one element per viewpoint.
The `calibrateCamera()` function will fill the `cameraMatrix` and `distCoeffs` arrays with the
camera calibration parameters. It will return the reprojection error obtained from the calibration.
The elements in `rvecs` and `tvecs` will be filled with the estimated pose of the camera
(with respect to the ChArUco board) in each of the viewpoints.
Finally, the `calibrationFlags` parameter determines some of the options for the calibration.
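As a rough sketch of what these snippets do (the function below is illustrative, not the exact sample code), the ChArUco corners of each viewpoint are matched against the board model and the resulting correspondences are passed to `cv::calibrateCamera()`:
@code{.cpp}
#include <opencv2/calib3d.hpp>
#include <opencv2/objdetect/charuco_detector.hpp>
#include <vector>

// Illustrative sketch: calibrate from per-viewpoint ChArUco detections.
double calibrateFromCharuco(const std::vector<cv::Mat>& allCharucoCorners,
                            const std::vector<cv::Mat>& allCharucoIds,
                            const cv::aruco::CharucoBoard& board, cv::Size imageSize,
                            cv::Mat& cameraMatrix, cv::Mat& distCoeffs, int calibrationFlags)
{
    std::vector<cv::Mat> objectPoints, imagePoints, rvecs, tvecs;
    for (size_t i = 0; i < allCharucoCorners.size(); i++) {
        // Match the ChArUco corners of this viewpoint against the board model
        cv::Mat objPoints, imgPoints;
        board.matchImagePoints(allCharucoCorners[i], allCharucoIds[i], objPoints, imgPoints);
        objectPoints.push_back(objPoints);
        imagePoints.push_back(imgPoints);
    }
    // Fills cameraMatrix/distCoeffs, estimates one pose per viewpoint (rvecs/tvecs)
    // and returns the reprojection error
    return cv::calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs,
                               rvecs, tvecs, calibrationFlags);
}
@endcode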
A full working example is included in `calibrate_camera_charuco.cpp` inside the
`samples/cpp/tutorial_code/objectDetection` folder.
The sample now takes input via the command line using `cv::CommandLineParser`. For this file the example
parameters would look like:
@code{.cpp}
"camera_calib.txt" -w=5 -h=7 -sl=0.04 -ml=0.02 -d=10
-v=path/img_%02d.jpg
@endcode
The camera calibration parameters from `opencv/samples/cpp/tutorial_code/objectDetection/tutorial_camera_charuco.yml`
were obtained using the images `img_00.jpg`-`img_03.jpg` from this
[folder](https://github.com/opencv/opencv_contrib/tree/4.6.0/modules/aruco/tutorials/aruco_calibration/images).
Calibration with ArUco Boards
-----------------------------
As stated above, using ChArUco boards instead of ArUco boards is recommended for camera
calibration, since ChArUco corners are more accurate than marker corners. However, in some special cases
calibration based on ArUco boards may be required. As in the previous case, it requires
detections of an ArUco board from different viewpoints.
![ArUco calibration viewpoints](images/arucocalibration.jpg)
An example of using `cv::calibrateCamera()` with a cv::aruco::GridBoard:
@snippet samples/cpp/tutorial_code/objectDetection/calibrate_camera.cpp CalibrationWithArucoBoard1
@snippet samples/cpp/tutorial_code/objectDetection/calibrate_camera.cpp CalibrationWithArucoBoard2
@snippet samples/cpp/tutorial_code/objectDetection/calibrate_camera.cpp CalibrationWithArucoBoard3
A full working example is included in `calibrate_camera.cpp` inside the `samples/cpp/tutorial_code/objectDetection` folder.
The sample now takes input via the command line using `cv::CommandLineParser`. For this file the example
parameters would look like:
@code{.cpp}
"camera_calib.txt" -w=5 -h=7 -l=100 -s=10 -d=10 -v=path/aruco_videos_or_images
@endcode

Binary image file added (86 KiB).

Binary image file added (75 KiB).

@@ -0,0 +1,190 @@
Aruco module FAQ {#tutorial_aruco_faq}
================
@prev_tutorial{tutorial_aruco_calibration}
This is a compilation of questions that can be useful for those who want to use the aruco module.
- I only want to label some objects, what should I use?
In this case, you only need single ArUco markers. You can place one or several markers with different
ids in each of the objects you want to identify.
- Which algorithm is used for marker detection?
The aruco module is based on the original ArUco library. A full description of the detection process
can be found in:
> S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, and M. J. Marín-Jiménez. 2014.
> "Automatic generation and detection of highly reliable fiducial markers under occlusion".
> Pattern Recogn. 47, 6 (June 2014), 2280-2292. DOI=10.1016/j.patcog.2014.01.005
- My markers are not being detected correctly, what can I do?
There can be many factors that prevent the correct detection of markers. You probably need to adjust
some of the parameters in the `cv::aruco::DetectorParameters` object. The first thing you can do is
check whether your markers are returned as rejected candidates by the `cv::aruco::ArucoDetector::detectMarkers()`
function. Depending on this, you should try to modify different parameters.
If you are using an ArUco board, you can also try the `cv::aruco::ArucoDetector::refineDetectedMarkers()` function.
If you are [using big markers](https://github.com/opencv/opencv_contrib/issues/2811) (400x400 pixels and more), try
increasing `cv::aruco::DetectorParameters::adaptiveThreshWinSizeMax` value.
Also avoid [narrow borders around the ArUco marker](https://github.com/opencv/opencv_contrib/issues/2492)
(5% or less of the marker perimeter, adjusted by `cv::aruco::DetectorParameters::minMarkerDistanceRate`).
- What are the benefits of ArUco boards? What are the drawbacks?
Using a board of markers you can obtain the camera pose from a set of markers, instead of a single one.
This way, the detection is able to handle occlusions and partial views of the board, since only one
marker is necessary to obtain the pose.
Furthermore, as in most cases you are using more corners for pose estimation, it will be more
accurate than using a single marker.
The main drawback is that a Board is not as versatile as a single marker.
- What are the benefits of ChArUco boards over ArUco boards? And the drawbacks?
ChArUco boards combine chessboards with ArUco boards. Thanks to this, the corners provided by
ChArUco boards are more accurate than those provided by ArUco boards (or single markers).
The main drawback is that ChArUco boards are not as versatile as ArUco boards. For instance,
a ChArUco board is a planar board with a specific marker layout, while ArUco boards can have
any layout, even in 3D. Furthermore, the markers in a ChArUco board are usually smaller and
more difficult to detect.
- I do not need pose estimation, should I use ChArUco boards?
No. The main goal of ChArUco boards is to provide highly accurate corners for pose estimation or camera
calibration.
- Should all the markers in an ArUco board be placed in the same plane?
No, the marker corners in an ArUco board can be placed anywhere in its 3D coordinate system.
- Should all the markers in a ChArUco board be placed in the same plane?
Yes, all the markers in a ChArUco board need to be in the same plane and their layout is fixed by
the chessboard shape.
- What is the difference between a `cv::aruco::Board` object and a `cv::aruco::GridBoard` object?
The `cv::aruco::GridBoard` class is a specific type of board that inherits from `cv::aruco::Board` class.
A `cv::aruco::GridBoard` object is a board whose markers are placed in the same plane and in a grid layout.
- What are Diamond markers?
Diamond markers are very similar to a ChArUco board of 3x3 squares. However, contrary to ChArUco boards,
the detection of diamonds is based on the relative position of the markers.
They are useful when you want to provide a conceptual meaning to any (or all) of the markers in
the diamond. An example is using one of the markers to provide the diamond scale.
- Do I need to detect markers before board detection, ChArUco board detection or Diamond detection?
Yes, the detection of single markers is a basic tool in the aruco module. It is done using the
`cv::aruco::ArucoDetector::detectMarkers()` function. The rest of the functionality receives
a list of detected markers from this function.
- I want to calibrate my camera, can I use this module?
Yes, the aruco module provides the functionality to calibrate the camera using both ArUco boards and
ChArUco boards.
- Should I calibrate using a ChArUco board or an ArUco board?
Calibration using a ChArUco board is highly recommended due to its higher accuracy.
- Should I use a predefined dictionary or generate my own dictionary?
In general, it is easier to use one of the predefined dictionaries. However, if you need a bigger
dictionary (in terms of number of markers or number of bits) you should generate your own dictionary.
Dictionary generation is also useful if you want to maximize the inter-marker distance to achieve
a better error correction during the identification step.
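For instance, a custom dictionary can be generated once, offline, with `cv::aruco::extendDictionary()` (a minimal sketch, assuming a recent OpenCV version where this function is part of the objdetect module):
@code{.cpp}
#include <opencv2/objdetect/aruco_dictionary.hpp>

// Sketch: generate 36 markers of 5x5 bits; the generation process tries to
// maximize the inter-marker distance. Do this once, not in the detection loop.
cv::aruco::Dictionary myDictionary = cv::aruco::extendDictionary(36, 5);
@endcode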
- I am generating my own dictionary but it takes too long
Dictionary generation should only be done once at the beginning of your application and it should take
some seconds. If you are generating the dictionary on each iteration of your detection loop, you are
doing it wrong.
Furthermore, it is advisable to save the dictionary to a file with `cv::aruco::Dictionary::writeDictionary()`
and read it back with `cv::aruco::Dictionary::readDictionary()` on each execution, so you don't need
to regenerate it.
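A minimal save/load sketch (assuming the `writeDictionary(cv::FileStorage&)` and `readDictionary(cv::FileNode)` overloads of recent OpenCV versions; the file name is arbitrary):
@code{.cpp}
#include <opencv2/core/persistence.hpp>
#include <opencv2/objdetect/aruco_dictionary.hpp>

// Save the generated dictionary once...
void saveDictionary(cv::aruco::Dictionary& dictionary)
{
    cv::FileStorage fs("my_dictionary.yml", cv::FileStorage::WRITE);
    dictionary.writeDictionary(fs);
}

// ...and read it back on later executions instead of regenerating it.
cv::aruco::Dictionary loadDictionary()
{
    cv::aruco::Dictionary dictionary;
    cv::FileStorage fs("my_dictionary.yml", cv::FileStorage::READ);
    dictionary.readDictionary(fs.root());
    return dictionary;
}
@endcode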
- I would like to use some markers of the original ArUco library that I have already printed, can I use them?
Yes, one of the predefined dictionaries is `cv::aruco::DICT_ARUCO_ORIGINAL`, which detects the markers
of the original ArUco library with the same identifiers.
- Can I use the Board configuration file of the original ArUco library in this module?
Not directly, you will need to adapt the information of the ArUco file to the aruco module Board format.
- Can I use this module to detect the markers of other libraries based on binary fiducial markers?
Probably yes, however you will need to port the dictionary of the original library to the aruco module format.
- Do I need to store the Dictionary information in a file so I can use it in different executions?
If you are using one of the predefined dictionaries, it is not necessary. Otherwise, it is advisable
to save it to a file.
- Do I need to store the Board information in a file so I can use it in different executions?
If you are using a `cv::aruco::GridBoard` or a `cv::aruco::CharucoBoard` you only need to store
the board measurements that are provided to the `cv::aruco::GridBoard::GridBoard()` constructor or
to the `cv::aruco::CharucoBoard` constructor. If you manually modify the marker ids of the boards,
or if you use a different type of board, you should save your board object to a file.
- Does the aruco module provide functions to save the Dictionary or Board to file?
You can use `cv::aruco::Dictionary::writeDictionary()` and `cv::aruco::Dictionary::readDictionary()`
for `cv::aruco::Dictionary`. The data members of the board classes are public and can easily be stored.
- Alright, but how can I render a 3d model to create an augmented reality application?
To do so, you will need to use an external rendering engine library, such as OpenGL. The aruco module
only provides the functionality to obtain the camera pose, i.e. the rotation and translation vectors,
which are necessary to create the augmented reality effect. However, you will need to adapt the rotation
and translation vectors from the OpenCV format to the format accepted by your 3D rendering library.
The original ArUco library contains examples of how to do it for OpenGL and Ogre3D.
- I have used this module in my research work, how can I cite it?
You can cite the original ArUco library:
> S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, and M. J. Marín-Jiménez. 2014.
> "Automatic generation and detection of highly reliable fiducial markers under occlusion".
> Pattern Recogn. 47, 6 (June 2014), 2280-2292. DOI=10.1016/j.patcog.2014.01.005
- Pose estimation markers are not being detected correctly, what can I do?
It is important to remark that the estimation of a pose using only 4 coplanar points is subject to ambiguity.
In general, the ambiguity can be resolved if the camera is close to the marker.
However, as the marker becomes smaller, the errors in the corner estimation grow and ambiguity becomes
a problem. Try increasing the size of the marker you're using, and you can also try non-symmetrical
(aruco_dict_utils.cpp) markers to avoid collisions. Use multiple markers (ArUco/ChArUco/Diamond boards)
and pose estimation with `cv::solvePnP()` with the `cv::SOLVEPNP_IPPE_SQUARE` option.
More details in [this issue](https://github.com/opencv/opencv/issues/8813).
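A hedged sketch of the single-marker case (the detected corners, marker length and calibration parameters are assumed to come from your own code):
@code{.cpp}
#include <opencv2/calib3d.hpp>
#include <vector>

// Sketch: pose of one square marker with the dedicated IPPE_SQUARE solver.
bool estimateMarkerPose(const std::vector<cv::Point2f>& corners, float markerLength,
                        const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                        cv::Vec3d& rvec, cv::Vec3d& tvec)
{
    // Object points of a square of side markerLength centered at the origin,
    // in the order expected by cv::SOLVEPNP_IPPE_SQUARE
    // (top-left, top-right, bottom-right, bottom-left).
    std::vector<cv::Point3f> objPoints = {
        {-markerLength / 2.f,  markerLength / 2.f, 0.f},
        { markerLength / 2.f,  markerLength / 2.f, 0.f},
        { markerLength / 2.f, -markerLength / 2.f, 0.f},
        {-markerLength / 2.f, -markerLength / 2.f, 0.f}};
    return cv::solvePnP(objPoints, corners, cameraMatrix, distCoeffs, rvec, tvec,
                        false, cv::SOLVEPNP_IPPE_SQUARE);
}
@endcode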

@@ -0,0 +1,265 @@
Detection of ChArUco Boards {#tutorial_charuco_detection}
===========================
@prev_tutorial{tutorial_aruco_board_detection}
@next_tutorial{tutorial_charuco_diamond_detection}
ArUco markers and boards are very useful due to their fast detection and their versatility.
However, one of the problems of ArUco markers is that the accuracy of their corner positions is not
too high, even after applying subpixel refinement.
On the contrary, the corners of chessboard patterns can be refined more accurately since each corner
is surrounded by two black squares. However, finding a chessboard pattern is not as versatile as
finding an ArUco board: it has to be completely visible and occlusions are not permitted.
A ChArUco board tries to combine the benefits of these two approaches:
![Charuco definition](images/charucodefinition.png)
The ArUco part is used to interpolate the position of the chessboard corners, so that it has the
versatility of marker boards, since it allows occlusions or partial views. Moreover, since the
interpolated corners belong to a chessboard, they are very accurate in terms of subpixel accuracy.
When high precision is necessary, such as in camera calibration, Charuco boards are a better option
than standard ArUco boards.
Goal
----
In this tutorial you will learn:
- How to create a ChArUco board?
- How to detect the ChArUco corners without performing camera calibration?
- How to detect the ChArUco corners with camera calibration and pose estimation?
Source code
-----------
You can find this code in `samples/cpp/tutorial_code/objectDetection/detect_board_charuco.cpp`
Here is sample code showing how to achieve everything enumerated in the goal list.
@snippet samples/cpp/tutorial_code/objectDetection/detect_board_charuco.cpp charuco_detect_board_full_sample
ChArUco Board Creation
----------------------
The aruco module provides the `cv::aruco::CharucoBoard` class that represents a ChArUco board and
inherits from the `cv::aruco::Board` class.
This class, like the rest of the ChArUco functionality, is defined in:
@snippet samples/cpp/tutorial_code/objectDetection/detect_board_charuco.cpp charucohdr
To define a `cv::aruco::CharucoBoard`, it is necessary to provide:
- Number of chessboard squares in X and Y directions.
- Length of square side.
- Length of marker side.
- The dictionary of the markers.
- Ids of all the markers.
As with the `cv::aruco::GridBoard` objects, the aruco module allows creating a `cv::aruco::CharucoBoard`
easily. This object can be created from these parameters using the `cv::aruco::CharucoBoard`
constructor:
@snippet samples/cpp/tutorial_code/objectDetection/create_board_charuco.cpp create_charucoBoard
- The first parameter is the number of squares in the X and Y directions respectively.
- The second and third parameters are the length of the squares and of the markers respectively. They can
be provided in any unit, keeping in mind that the estimated pose for this board will be measured
in the same units (usually meters are used).
- Finally, the dictionary of the markers is provided.
The ids of each of the markers are assigned by default in ascending order starting from 0, as in the
`cv::aruco::GridBoard` constructor. This can be easily customized by accessing the ids vector
through `board.ids`, as in the `cv::aruco::Board` parent class.
Once we have our `cv::aruco::CharucoBoard` object, we can create an image to print it. There are
two ways to do this:
1. By using the script `doc/pattern_tools/gen_pattern.py`, see @subpage tutorial_camera_calibration_pattern.
2. By using the function `cv::aruco::CharucoBoard::generateImage()`.
The function `cv::aruco::CharucoBoard::generateImage()` is provided in cv::aruco::CharucoBoard class
and can be called by using the following code:
@snippet samples/cpp/tutorial_code/objectDetection/create_board_charuco.cpp generate_charucoBoard
- The first parameter is the size of the output image in pixels. If this is not proportional
to the board dimensions, it will be centered on the image.
- The second parameter is the output image with the charuco board.
- The third parameter is the (optional) margin in pixels, so none of the markers are touching the
image border.
- Finally, the size of the marker border, similar to the `cv::aruco::generateImageMarker()` function.
The default value is 1.
The output image will be something like this:
![](images/charucoboard.png)
A full working example is included in `create_board_charuco.cpp` inside the `samples/cpp/tutorial_code/objectDetection/` folder.
The sample `create_board_charuco.cpp` now takes input via the command line using `cv::CommandLineParser`.
For this file the example parameters would look like:
@code{.cpp}
"_output_path_/chboard.png" -w=5 -h=7 -sl=100 -ml=60 -d=10
@endcode
ChArUco Board Detection
-----------------------
When you detect a ChArUco board, what you are actually detecting is each of the chessboard corners
of the board.
Each corner on a ChArUco board has a unique identifier (id) assigned. These ids go from 0 to the total
number of corners in the board.
ChArUco board detection can be broken down into the following steps:
- **Taking input Image**
@snippet samples/cpp/tutorial_code/objectDetection/detect_board_charuco.cpp inputImg
The original image where the markers are to be detected. The image is necessary to perform subpixel
refinement of the ChArUco corners.
- **Reading the camera calibration parameters (only for detection with camera calibration)**
@snippet samples/cpp/tutorial_code/objectDetection/aruco_samples_utility.hpp camDistCoeffs
The parameters of `readCameraParameters` are:
- The first parameter is the path to the camera intrinsic matrix and distortion coefficients.
- The second and third parameters are `cameraMatrix` and `distCoeffs`.
This function takes these parameters as input and returns a boolean indicating whether the camera
calibration parameters are valid. For detection of ChArUco corners without calibration,
this step is not required.
- **Detecting the markers and interpolation of charuco corners from markers**
The detection of the ChArUco corners is based on the previously detected markers.
First the markers are detected, and then the ChArUco corners are interpolated from them.
The method that detects the ChArUco corners is `cv::aruco::CharucoDetector::detectBoard()`.
@snippet samples/cpp/tutorial_code/objectDetection/detect_board_charuco.cpp interpolateCornersCharuco
The parameters of `detectBoard()` are:
- `image` - Input image.
- `charucoCorners` - output list of image positions of the detected corners.
- `charucoIds` - output ids for each of the detected corners in `charucoCorners`.
- `markerCorners` - input/output vector of detected marker corners.
- `markerIds` - input/output vector of identifiers of the detected markers.
If `markerCorners` and `markerIds` are empty, the function will detect ArUco markers and ids.
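A minimal usage sketch of this step (the board geometry, dictionary and image path below are placeholder values, not the tutorial's exact configuration):
@code{.cpp}
#include <opencv2/imgcodecs.hpp>
#include <opencv2/objdetect/charuco_detector.hpp>
#include <vector>

int main()
{
    // Sketch: detect a 5x7 ChArUco board in a single image.
    cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
    cv::aruco::CharucoBoard board(cv::Size(5, 7), 0.04f, 0.02f, dictionary);
    cv::aruco::CharucoDetector detector(board);

    cv::Mat image = cv::imread("choriginal.jpg");
    std::vector<cv::Point2f> charucoCorners;
    std::vector<int> charucoIds, markerIds;
    std::vector<std::vector<cv::Point2f>> markerCorners;
    // Markers are detected internally because markerCorners/markerIds are empty
    detector.detectBoard(image, charucoCorners, charucoIds, markerCorners, markerIds);
    return 0;
}
@endcode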
If calibration parameters are provided, the ChArUco corners are interpolated by first estimating
a rough pose from the ArUco markers and then reprojecting the ChArUco corners back to the image.
On the other hand, if calibration parameters are not provided, the ChArUco corners are interpolated
by calculating the corresponding homography between the ChArUco plane and the ChArUco image projection.
The main problem of using homography is that the interpolation is more sensitive to image distortion.
In fact, the homography is computed using only the closest markers to each ChArUco corner to reduce
the effect of distortion.
When detecting markers for ChArUco boards, and especially when using homography, it is recommended to
disable the corner refinement of markers. The reason is that, due to the proximity of the
chessboard squares, the subpixel process can produce significant deviations in the corner positions and
these deviations are propagated to the ChArUco corner interpolation, producing poor results.
@note To avoid deviations, the margin between chessboard square and aruco marker should be greater
than 70% of one marker module.
Furthermore, only those corners whose two surrounding markers have been found are returned. If any of
the two surrounding markers has not been detected, this usually means that there is some occlusion
or the image quality is not good in that zone. In any case, it is preferable not to consider that
corner, since what we want is to be sure that the interpolated ChArUco corners are very accurate.
After the ChArUco corners have been interpolated, a subpixel refinement is performed.
Once we have interpolated the ChArUco corners, we would probably want to draw them to see if their
detections are correct. This can be easily done using the `cv::aruco::drawDetectedCornersCharuco()`
function:
@snippet samples/cpp/tutorial_code/objectDetection/detect_board_charuco.cpp drawDetectedCornersCharuco
- `imageCopy` is the image where the corners will be drawn (it will normally be a copy of the image where
the corners were detected, so that the original image is not modified).
- `charucoCorners` and `charucoIds` are the detected Charuco corners from the `cv::aruco::CharucoDetector::detectBoard()`
function.
- Finally, the last parameter is the (optional) color we want to draw the corners with, of type `cv::Scalar`.
For this image:
![Image with Charuco board](images/choriginal.jpg)
The result will be:
![Charuco board detected](images/chcorners.jpg)
In the presence of occlusion, like in the following image, although some corners are clearly visible,
not all of their surrounding markers have been detected due to occlusion, and thus they are not interpolated:
![Charuco detection with occlusion](images/chocclusion.jpg)
Sample video:
@youtube{Nj44m_N_9FY}
A full working example is included in `detect_board_charuco.cpp` inside the
`samples/cpp/tutorial_code/objectDetection/` folder.
The sample `detect_board_charuco.cpp` now takes input via the command line using `cv::CommandLineParser`.
For this file the example parameters would look like:
@code{.cpp}
-w=5 -h=7 -sl=0.04 -ml=0.02 -d=10 -v=/path_to_opencv/opencv/doc/tutorials/objdetect/charuco_detection/images/choriginal.jpg
@endcode
ChArUco Pose Estimation
-----------------------
The final goal of ChArUco boards is finding corners very accurately for high-precision calibration
or pose estimation.
The aruco module provides a function to perform ChArUco pose estimation easily. As with the
`cv::aruco::GridBoard`, the coordinate system of the `cv::aruco::CharucoBoard` is placed in
the board plane with the Z axis pointing into the plane, and centered in the bottom-left corner of the board.
@note After OpenCV 4.6.0, there was an incompatible change in the coordinate systems of the boards:
now the coordinate systems are placed in the board plane with the Z axis pointing into the plane
(previously the axis pointed out of the plane).
`objPoints` in CW order correspond to the Z axis pointing into the plane.
`objPoints` in CCW order correspond to the Z axis pointing out of the plane.
See PR https://github.com/opencv/opencv_contrib/pull/3174
To perform pose estimation for charuco boards, you should use `cv::aruco::CharucoBoard::matchImagePoints()`
and `cv::solvePnP()`:
@snippet samples/cpp/tutorial_code/objectDetection/detect_board_charuco.cpp poseCharuco
- The `charucoCorners` and `charucoIds` parameters are the detected charuco corners from the
`cv::aruco::CharucoDetector::detectBoard()` function.
- The `cameraMatrix` and `distCoeffs` are the camera calibration parameters which are necessary
for pose estimation.
- Finally, the `rvec` and `tvec` parameters are the output pose of the Charuco Board.
- `cv::solvePnP()` returns true if the pose was correctly estimated and false otherwise.
The main reason for failure is that there are not enough corners for pose estimation or
they all lie on the same line.
The axes can be drawn using `cv::drawFrameAxes()` to check that the pose is correctly estimated.
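A condensed sketch of this step (assuming `charucoCorners`/`charucoIds` come from `detectBoard()` and the calibration parameters are valid; the axis length is arbitrary):
@code{.cpp}
#include <opencv2/calib3d.hpp>
#include <opencv2/objdetect/charuco_detector.hpp>
#include <vector>

// Sketch: estimate and draw the board pose from already detected ChArUco corners.
void drawCharucoPose(cv::Mat& imageCopy, const cv::aruco::CharucoBoard& board,
                     const std::vector<cv::Point2f>& charucoCorners,
                     const std::vector<int>& charucoIds,
                     const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs)
{
    cv::Mat objPoints, imgPoints;
    board.matchImagePoints(charucoCorners, charucoIds, objPoints, imgPoints);
    if (objPoints.total() < 4)
        return; // not enough corners for a reliable pose
    cv::Vec3d rvec, tvec;
    if (cv::solvePnP(objPoints, imgPoints, cameraMatrix, distCoeffs, rvec, tvec))
        cv::drawFrameAxes(imageCopy, cameraMatrix, distCoeffs, rvec, tvec, 0.1f);
}
@endcode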
The result would be: (X:red, Y:green, Z:blue)
![Charuco Board Axis](images/chaxis.jpg)
A full working example is included in `detect_board_charuco.cpp` inside the
`samples/cpp/tutorial_code/objectDetection/` folder.
The sample `detect_board_charuco.cpp` now takes input via the command line using `cv::CommandLineParser`.
For this file the example parameters would look like:
@code{.cpp}
-w=5 -h=7 -sl=0.04 -ml=0.02 -d=10
-v=/path_to_opencv/opencv/doc/tutorials/objdetect/charuco_detection/images/choriginal.jpg
-c=/path_to_opencv/opencv/samples/cpp/tutorial_code/objectDetection/tutorial_camera_charuco.yml
@endcode

Binary image file added (14 KiB).

Binary image file added (32 KiB).

Binary image file added (94 KiB).

Binary image file added (93 KiB).

Binary image file added (87 KiB).

Binary image file added (112 KiB).

Binary image file added (115 KiB).

@@ -0,0 +1,143 @@
Detection of Diamond Markers {#tutorial_charuco_diamond_detection}
==============================
@prev_tutorial{tutorial_charuco_detection}
@next_tutorial{tutorial_aruco_calibration}
A ChArUco diamond marker (or simply diamond marker) is a chessboard composed of 3x3 squares and 4 ArUco markers inside the white squares.
It is similar to a ChArUco board in appearance; however, they are conceptually different.
![Diamond marker examples](images/diamondmarkers.jpg)
For both ChArUco boards and diamond markers, detection is based on previously detected ArUco
markers. In the ChArUco case, the markers to use are selected by directly looking at their identifiers. This means
that if a marker (included in the board) is found in an image, it will automatically be assumed to belong to the board. Furthermore,
if a marker board is found more than once in the image, it will produce an ambiguity, since the system won't
be able to know which one should be used for the board.
On the other hand, the detection of diamond markers is not based on the identifiers. Instead, their detection
is based on the relative position of the markers. As a consequence, marker identifiers can be repeated in the
same diamond or among different diamonds, and they can be detected simultaneously without ambiguity. However,
due to the complexity of finding markers based on their relative positions, diamond markers are limited to
a size of 3x3 squares and 4 markers.
As with a single ArUco marker, each diamond marker is composed of 4 corners and an identifier. The four corners
correspond to the 4 chessboard corners of the marker, and the identifier is actually an array of 4 numbers, which are
the identifiers of the four ArUco markers inside the diamond.
Diamond markers are useful in those scenarios where repeated markers should be allowed. For instance:
- To increase the number of identifiers of single markers by using diamond markers for labeling. They allow
up to N^4 different ids, where N is the number of markers in the used dictionary.
- To give each of the four markers a conceptual meaning. For instance, one of the four marker ids could be
used to indicate the scale of the marker (i.e. the size of the square), so that the same diamond can be found
in the environment at different sizes just by changing one of the four markers, and the user does not need
to manually indicate the scale of each of them. This case is included in the `detect_diamonds.cpp` file inside
the samples folder of the module.
Furthermore, as its corners are chessboard corners, they can be used for accurate pose estimation.
The diamond functionality is included in `<opencv2/objdetect/charuco_detector.hpp>`.
ChArUco Diamond Creation
------
The image of a diamond marker can be easily created using the `cv::aruco::CharucoBoard::generateImage()` function.
For instance:
@snippet samples/cpp/tutorial_code/objectDetection/create_diamond.cpp generate_diamond
This will create a diamond marker image with a square size of 200 pixels and a marker size of 120 pixels.
The marker ids are given in the second parameter as a `cv::Vec4i` object. The order of the marker ids
in the diamond layout is the same as in a standard ChArUco board, i.e. top, left, right and bottom.
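A minimal creation sketch (the dictionary, sizes, ids and output path below are placeholders):
@code{.cpp}
#include <opencv2/imgcodecs.hpp>
#include <opencv2/objdetect/charuco_detector.hpp>
#include <vector>

int main()
{
    // Sketch: a diamond is a 3x3 ChArUco board whose 4 marker ids are chosen explicitly.
    cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
    std::vector<int> diamondIds = {0, 1, 2, 3};
    cv::aruco::CharucoBoard diamond(cv::Size(3, 3), 200.f, 120.f, dictionary, diamondIds);

    cv::Mat markerImage;
    diamond.generateImage(cv::Size(600, 600), markerImage, 0);
    cv::imwrite("mydiamond.png", markerImage);
    return 0;
}
@endcode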
The image produced will be:
![Diamond marker](images/diamondmarker.png)
A full working example is included in `create_diamond.cpp` inside the `samples/cpp/tutorial_code/objectDetection/` folder.
The sample `create_diamond.cpp` now takes input via the command line using `cv::CommandLineParser`. For this file the example
parameters would look like:
@code{.cpp}
"_path_/mydiamond.png" -sl=200 -ml=120 -d=10 -ids=0,1,2,3
@endcode
ChArUco Diamond Detection
------
As in most cases, the detection of diamond markers requires a previous detection of ArUco markers.
After detecting markers, diamonds are detected using the `cv::aruco::CharucoDetector::detectDiamonds()` function:
@snippet samples/cpp/tutorial_code/objectDetection/detect_diamonds.cpp detect_diamonds
The `cv::aruco::CharucoDetector::detectDiamonds()` function receives the original image and the previously detected marker corners and ids.
If `markerCorners` and `markerIds` are empty, the function will detect ArUco markers and ids.
The input image is necessary to perform subpixel refinement of the ChArUco corners.
It also receives the ratio between the square size and the marker size, which is required both for detecting the diamond
from the relative positions of the markers and for interpolating the ChArUco corners.
The function returns the detected diamonds in two parameters. The first parameter, `diamondCorners`, is an array containing
all four corners of each detected diamond. Its format is similar to the corners detected by the `cv::aruco::ArucoDetector::detectMarkers()`
function and, for each diamond, the corners are represented in the same order as in ArUco markers, i.e. clockwise order
starting with the top-left corner. The second returned parameter, `diamondIds`, contains the ids of the returned
diamond corners in `diamondCorners`. Each id is actually an array of 4 integers that can be represented with `cv::Vec4i`.
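A minimal usage sketch (the dictionary, square/marker lengths and image path are placeholders; the square/marker length ratio is what matters for detection):
@code{.cpp}
#include <opencv2/imgcodecs.hpp>
#include <opencv2/objdetect/charuco_detector.hpp>
#include <vector>

int main()
{
    // Sketch: detect diamonds with a CharucoDetector built from a 3x3 board.
    cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
    cv::aruco::CharucoBoard board(cv::Size(3, 3), 0.4f, 0.25f, dictionary);
    cv::aruco::CharucoDetector detector(board);

    cv::Mat image = cv::imread("diamondmarkers.jpg");
    std::vector<std::vector<cv::Point2f>> diamondCorners, markerCorners;
    std::vector<cv::Vec4i> diamondIds;
    std::vector<int> markerIds;
    // Markers are detected internally because markerCorners/markerIds are empty
    detector.detectDiamonds(image, diamondCorners, diamondIds, markerCorners, markerIds);
    return 0;
}
@endcode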
The detected diamonds can be visualized using the function `cv::aruco::drawDetectedDiamonds()`, which simply receives the image and the diamond
corners and ids:
@snippet samples/cpp/tutorial_code/objectDetection/detect_diamonds.cpp draw_diamonds
The result is the same as the one produced by `cv::aruco::drawDetectedMarkers()`, but it prints the four ids of the diamond:
![Detected diamond markers](images/detecteddiamonds.jpg)
A full working example is included in `detect_diamonds.cpp` inside the `samples/cpp/tutorial_code/objectDetection/` folder.
The sample `detect_diamonds.cpp` now takes input via the command line using `cv::CommandLineParser`. For this file the example
parameters would look like:
@code{.cpp}
-dp=path_to_opencv/opencv/samples/cpp/tutorial_code/objectDetection/detector_params.yml -sl=0.4 -ml=0.25 -refine=3
-v=path_to_opencv/opencv/doc/tutorials/objdetect/charuco_diamond_detection/images/diamondmarkers.jpg
-cd=path_to_opencv/opencv/samples/cpp/tutorial_code/objectDetection/tutorial_dict.yml
@endcode
ChArUco Diamond Pose Estimation
------
Since a ChArUco diamond is represented by its four corners, its pose can be estimated in the same way as for a single ArUco marker,
i.e. using the `cv::solvePnP()` function. For instance:
@snippet samples/cpp/tutorial_code/objectDetection/detect_diamonds.cpp diamond_pose_estimation
@snippet samples/cpp/tutorial_code/objectDetection/detect_diamonds.cpp draw_diamond_pose_estimation
The function will obtain the rotation and translation vector for each of the diamond markers and store them
in `rvecs` and `tvecs`. Note that the diamond corners are chessboard square corners and thus the square length
has to be provided for pose estimation, not the marker length. Camera calibration parameters are also required.
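A hedged sketch of this step (the detected diamond corners, square length and calibration parameters are assumed to come from your own code; the corner order follows the `cv::SOLVEPNP_IPPE_SQUARE` convention):
@code{.cpp}
#include <opencv2/calib3d.hpp>
#include <vector>

// Sketch: pose of one detected diamond from its 4 chessboard corners.
// Note that the object points use the square length, not the marker length.
bool estimateDiamondPose(const std::vector<cv::Point2f>& diamondCorners, float squareLength,
                         const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                         cv::Vec3d& rvec, cv::Vec3d& tvec)
{
    std::vector<cv::Point3f> objPoints = {
        {-squareLength / 2.f,  squareLength / 2.f, 0.f},
        { squareLength / 2.f,  squareLength / 2.f, 0.f},
        { squareLength / 2.f, -squareLength / 2.f, 0.f},
        {-squareLength / 2.f, -squareLength / 2.f, 0.f}};
    return cv::solvePnP(objPoints, diamondCorners, cameraMatrix, distCoeffs, rvec, tvec,
                        false, cv::SOLVEPNP_IPPE_SQUARE);
}
@endcode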
Finally, the axes can be drawn using `drawFrameAxes()` to check that the estimated pose is correct:
![Detected diamond axis](images/diamondsaxis.jpg)
The coordinate system of the diamond pose will be in the center of the marker with the Z axis pointing out,
as in a simple ArUco marker pose estimation.
Sample video:
@youtube{OqKpBnglH7k}
The ChArUco diamond pose can also be estimated in the same way as a ChArUco board:
@snippet samples/cpp/tutorial_code/objectDetection/detect_diamonds.cpp diamond_pose_estimation_as_charuco
A full working example is included in `detect_diamonds.cpp` inside the `samples/cpp/tutorial_code/objectDetection/` folder.
The sample `detect_diamonds.cpp` now takes input via the command line using `cv::CommandLineParser`. For this file the example
parameters would look like:
@code{.cpp}
-dp=path_to_opencv/opencv/samples/cpp/tutorial_code/objectDetection/detector_params.yml -sl=0.4 -ml=0.25 -refine=3
-v=path_to_opencv/opencv/doc/tutorials/objdetect/charuco_diamond_detection/images/diamondmarkers.jpg
-cd=path_to_opencv/opencv/samples/cpp/tutorial_code/objectDetection/tutorial_dict.yml
-c=path_to_opencv/opencv/samples/cpp/tutorial_code/objectDetection/tutorial_camera_params.yml
@endcode

Binary image file added (66 KiB).

Binary image file added (3.0 KiB).

Binary image file added (55 KiB).

Binary image file added (58 KiB).

@@ -3,3 +3,7 @@ Object Detection (objdetect module) {#tutorial_table_of_content_objdetect}
- @subpage tutorial_aruco_detection
- @subpage tutorial_aruco_board_detection
- @subpage tutorial_charuco_detection
- @subpage tutorial_charuco_diamond_detection
- @subpage tutorial_aruco_calibration
- @subpage tutorial_aruco_faq

@@ -0,0 +1,246 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
#include "test_precomp.hpp"
#include "opencv2/objdetect/aruco_detector.hpp"
namespace opencv_test { namespace {
TEST(CV_ArucoTutorial, can_find_singlemarkersoriginal)
{
string img_path = cvtest::findDataFile("aruco/singlemarkersoriginal.jpg");
Mat image = imread(img_path);
aruco::ArucoDetector detector(aruco::getPredefinedDictionary(aruco::DICT_6X6_250));
vector<int> ids;
vector<vector<Point2f> > corners, rejected;
const size_t N = 6ull;
// corners of ArUco markers with indices goldCornersIds
const int goldCorners[N][8] = { {359,310, 404,310, 410,350, 362,350}, {427,255, 469,256, 477,289, 434,288},
{233,273, 190,273, 196,241, 237,241}, {298,185, 334,186, 335,212, 297,211},
{425,163, 430,186, 394,186, 390,162}, {195,155, 230,155, 227,178, 190,178} };
const int goldCornersIds[N] = { 40, 98, 62, 23, 124, 203};
map<int, const int*> mapGoldCorners;
for (size_t i = 0; i < N; i++)
mapGoldCorners[goldCornersIds[i]] = goldCorners[i];
detector.detectMarkers(image, corners, ids, rejected);
ASSERT_EQ(N, ids.size());
for (size_t i = 0; i < N; i++)
{
int arucoId = ids[i];
ASSERT_EQ(4ull, corners[i].size());
ASSERT_TRUE(mapGoldCorners.find(arucoId) != mapGoldCorners.end());
for (int j = 0; j < 4; j++)
{
EXPECT_NEAR(static_cast<float>(mapGoldCorners[arucoId][j * 2]), corners[i][j].x, 1.f);
EXPECT_NEAR(static_cast<float>(mapGoldCorners[arucoId][j * 2 + 1]), corners[i][j].y, 1.f);
}
}
}
TEST(CV_ArucoTutorial, can_find_gboriginal)
{
string imgPath = cvtest::findDataFile("aruco/gboriginal.jpg");
Mat image = imread(imgPath);
string dictPath = cvtest::findDataFile("aruco/tutorial_dict.yml");
aruco::Dictionary dictionary;
FileStorage fs(dictPath, FileStorage::READ);
dictionary.aruco::Dictionary::readDictionary(fs.root()); // set marker from tutorial_dict.yml
aruco::DetectorParameters detectorParams;
aruco::ArucoDetector detector(dictionary, detectorParams);
vector<int> ids;
vector<vector<Point2f> > corners, rejected;
const size_t N = 35ull;
// corners of ArUco markers with indices 0, 1, ..., 34
const int goldCorners[N][8] = { {252,74, 286,81, 274,102, 238,95}, {295,82, 330,89, 319,111, 282,104},
{338,91, 375,99, 365,121, 327,113}, {383,100, 421,107, 412,130, 374,123},
{429,109, 468,116, 461,139, 421,132}, {235,100, 270,108, 257,130, 220,122},
{279,109, 316,117, 304,140, 266,133}, {324,119, 362,126, 352,150, 313,143},
{371,128, 410,136, 400,161, 360,152}, {418,139, 459,145, 451,170, 410,163},
{216,128, 253,136, 239,161, 200,152}, {262,138, 300,146, 287,172, 248,164},
{309,148, 349,156, 337,183, 296,174}, {358,158, 398,167, 388,194, 346,185},
{407,169, 449,176, 440,205, 397,196}, {196,158, 235,168, 218,195, 179,185},
{243,170, 283,178, 269,206, 228,197}, {293,180, 334,190, 321,218, 279,209},
{343,192, 385,200, 374,230, 330,220}, {395,203, 438,211, 429,241, 384,233},
{174,192, 215,201, 197,231, 156,221}, {223,204, 265,213, 249,244, 207,234},
{275,215, 317,225, 303,257, 259,246}, {327,227, 371,238, 359,270, 313,259},
{381,240, 426,249, 416,282, 369,273}, {151,228, 193,238, 173,271, 130,260},
{202,241, 245,251, 228,285, 183,274}, {255,254, 300,264, 284,299, 238,288},
{310,267, 355,278, 342,314, 295,302}, {366,281, 413,290, 402,327, 353,317},
{125,267, 168,278, 147,314, 102,303}, {178,281, 223,293, 204,330, 157,317},
{233,296, 280,307, 263,346, 214,333}, {291,310, 338,322, 323,363, 274,349},
{349,325, 399,336, 386,378, 335,366} };
map<int, const int*> mapGoldCorners;
for (int i = 0; i < static_cast<int>(N); i++)
mapGoldCorners[i] = goldCorners[i];
detector.detectMarkers(image, corners, ids, rejected);
ASSERT_EQ(N, ids.size());
for (size_t i = 0; i < N; i++)
{
int arucoId = ids[i];
ASSERT_EQ(4ull, corners[i].size());
ASSERT_TRUE(mapGoldCorners.find(arucoId) != mapGoldCorners.end());
for (int j = 0; j < 4; j++)
{
EXPECT_NEAR(static_cast<float>(mapGoldCorners[arucoId][j*2]), corners[i][j].x, 1.f);
EXPECT_NEAR(static_cast<float>(mapGoldCorners[arucoId][j*2+1]), corners[i][j].y, 1.f);
}
}
}
TEST(CV_ArucoTutorial, can_find_choriginal)
{
string imgPath = cvtest::findDataFile("aruco/choriginal.jpg");
Mat image = imread(imgPath);
aruco::ArucoDetector detector(aruco::getPredefinedDictionary(aruco::DICT_6X6_250));
vector< int > ids;
vector< vector< Point2f > > corners, rejected;
const size_t N = 17ull;
// corners of aruco markers with indices goldCornersIds
const int goldCorners[N][8] = { {268,77, 290,80, 286,97, 263,94}, {360,90, 382,93, 379,111, 357,108},
{211,106, 233,109, 228,127, 205,123}, {306,120, 328,124, 325,142, 302,138},
{402,135, 425,139, 423,157, 400,154}, {247,152, 271,155, 267,174, 242,171},
{347,167, 371,171, 369,191, 344,187}, {185,185, 209,189, 203,210, 178,206},
{288,201, 313,206, 309,227, 284,223}, {393,218, 418,222, 416,245, 391,241},
{223,240, 250,244, 244,268, 217,263}, {333,258, 359,262, 356,286, 329,282},
{152,281, 179,285, 171,312, 143,307}, {267,300, 294,305, 289,331, 261,327},
{383,319, 410,324, 408,351, 380,347}, {194,347, 223,352, 216,382, 186,377},
{315,368, 345,373, 341,403, 310,398} };
map<int, const int*> mapGoldCorners;
for (int i = 0; i < static_cast<int>(N); i++)
mapGoldCorners[i] = goldCorners[i];
detector.detectMarkers(image, corners, ids, rejected);
ASSERT_EQ(N, ids.size());
for (size_t i = 0; i < N; i++)
{
int arucoId = ids[i];
ASSERT_EQ(4ull, corners[i].size());
ASSERT_TRUE(mapGoldCorners.find(arucoId) != mapGoldCorners.end());
for (int j = 0; j < 4; j++)
{
EXPECT_NEAR(static_cast<float>(mapGoldCorners[arucoId][j * 2]), corners[i][j].x, 1.f);
EXPECT_NEAR(static_cast<float>(mapGoldCorners[arucoId][j * 2 + 1]), corners[i][j].y, 1.f);
}
}
}
TEST(CV_ArucoTutorial, can_find_chocclusion)
{
string imgPath = cvtest::findDataFile("aruco/chocclusion_original.jpg");
Mat image = imread(imgPath);
aruco::ArucoDetector detector(aruco::getPredefinedDictionary(aruco::DICT_6X6_250));
vector< int > ids;
vector< vector< Point2f > > corners, rejected;
const size_t N = 13ull;
// corners of aruco markers with indices goldCornersIds
const int goldCorners[N][8] = { {301,57, 322,62, 317,79, 295,73}, {391,80, 413,85, 408,103, 386,97},
{242,79, 264,85, 256,102, 234,96}, {334,103, 357,109, 352,126, 329,121},
{428,129, 451,134, 448,152, 425,146}, {274,128, 296,134, 290,153, 266,147},
{371,154, 394,160, 390,180, 366,174}, {208,155, 232,161, 223,181, 199,175},
{309,182, 333,188, 327,209, 302,203}, {411,210, 436,216, 432,238, 407,231},
{241,212, 267,219, 258,242, 232,235}, {167,244, 194,252, 183,277, 156,269},
{202,314, 230,322, 220,349, 191,341} };
map<int, const int*> mapGoldCorners;
const int goldCornersIds[N] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15};
for (int i = 0; i < static_cast<int>(N); i++)
mapGoldCorners[goldCornersIds[i]] = goldCorners[i];
detector.detectMarkers(image, corners, ids, rejected);
ASSERT_EQ(N, ids.size());
for (size_t i = 0; i < N; i++)
{
int arucoId = ids[i];
ASSERT_EQ(4ull, corners[i].size());
ASSERT_TRUE(mapGoldCorners.find(arucoId) != mapGoldCorners.end());
for (int j = 0; j < 4; j++)
{
EXPECT_NEAR(static_cast<float>(mapGoldCorners[arucoId][j * 2]), corners[i][j].x, 1.f);
EXPECT_NEAR(static_cast<float>(mapGoldCorners[arucoId][j * 2 + 1]), corners[i][j].y, 1.f);
}
}
}
TEST(CV_ArucoTutorial, can_find_diamondmarkers)
{
string imgPath = cvtest::findDataFile("aruco/diamondmarkers.jpg");
Mat image = imread(imgPath);
string dictPath = cvtest::findDataFile("aruco/tutorial_dict.yml");
aruco::Dictionary dictionary;
FileStorage fs(dictPath, FileStorage::READ);
dictionary.aruco::Dictionary::readDictionary(fs.root()); // set marker from tutorial_dict.yml
string detectorPath = cvtest::findDataFile("aruco/detector_params.yml");
fs = FileStorage(detectorPath, FileStorage::READ);
aruco::DetectorParameters detectorParams;
detectorParams.readDetectorParameters(fs.root());
detectorParams.cornerRefinementMethod = aruco::CORNER_REFINE_APRILTAG;
aruco::CharucoBoard charucoBoard(Size(3, 3), 0.4f, 0.25f, dictionary);
aruco::CharucoDetector detector(charucoBoard, aruco::CharucoParameters(), detectorParams);
vector<int> ids;
vector<vector<Point2f> > corners, diamondCorners;
vector<Vec4i> diamondIds;
const size_t N = 12ull;
// corner indices of ArUco markers
const int goldCornersIds[N] = { 4, 12, 11, 3, 12, 10, 12, 10, 10, 11, 2, 11 };
map<int, int> counterGoldCornersIds;
for (int i = 0; i < static_cast<int>(N); i++)
counterGoldCornersIds[goldCornersIds[i]]++;
const size_t diamondsN = 3;
// corners of diamonds with Vec4i indices
const float goldDiamondCorners[diamondsN][8] = {{195.6f,150.9f, 213.5f,201.2f, 136.4f,215.3f, 122.4f,163.5f},
{501.1f,171.3f, 501.9f,208.5f, 446.2f,199.8f, 447.8f,163.3f},
{343.4f,361.2f, 359.7f,328.7f, 400.8f,344.6f, 385.7f,378.4f}};
auto comp = [](const Vec4i& a, const Vec4i& b) {
for (int i = 0; i < 3; i++)
if (a[i] != b[i]) return a[i] < b[i];
return a[3] < b[3];
};
map<Vec4i, const float*, decltype(comp)> goldDiamonds(comp);
goldDiamonds[Vec4i(10, 4, 11, 12)] = goldDiamondCorners[0];
goldDiamonds[Vec4i(10, 3, 11, 12)] = goldDiamondCorners[1];
goldDiamonds[Vec4i(10, 2, 11, 12)] = goldDiamondCorners[2];
detector.detectDiamonds(image, diamondCorners, diamondIds, corners, ids);
map<int, int> counterRes;
ASSERT_EQ(N, ids.size());
for (size_t i = 0; i < N; i++)
{
int arucoId = ids[i];
counterRes[arucoId]++;
}
ASSERT_EQ(counterGoldCornersIds, counterRes); // check the number of ArUco markers
ASSERT_EQ(goldDiamonds.size(), diamondIds.size()); // check the number of diamonds
for (size_t i = 0; i < goldDiamonds.size(); i++)
{
Vec4i diamondId = diamondIds[i];
ASSERT_TRUE(goldDiamonds.find(diamondId) != goldDiamonds.end());
for (int j = 0; j < 4; j++)
{
EXPECT_NEAR(goldDiamonds[diamondId][j * 2], diamondCorners[i][j].x, 0.5f);
EXPECT_NEAR(goldDiamonds[diamondId][j * 2 + 1], diamondCorners[i][j].y, 0.5f);
}
}
}
}} // namespace

@@ -45,4 +45,47 @@ inline static bool saveCameraParams(const std::string &filename, cv::Size imageS
return true;
}
inline static cv::aruco::DetectorParameters readDetectorParamsFromCommandLine(cv::CommandLineParser &parser) {
cv::aruco::DetectorParameters detectorParams;
if (parser.has("dp")) {
cv::FileStorage fs(parser.get<std::string>("dp"), cv::FileStorage::READ);
bool readOk = detectorParams.readDetectorParameters(fs.root());
if(!readOk) {
throw std::runtime_error("Invalid detector parameters file\n");
}
}
return detectorParams;
}
inline static void readCameraParamsFromCommandLine(cv::CommandLineParser &parser, cv::Mat& camMatrix, cv::Mat& distCoeffs) {
//! [camDistCoeffs]
if(parser.has("c")) {
bool readOk = readCameraParameters(parser.get<std::string>("c"), camMatrix, distCoeffs);
if(!readOk) {
throw std::runtime_error("Invalid camera file\n");
}
}
//! [camDistCoeffs]
}
inline static cv::aruco::Dictionary readDictionatyFromCommandLine(cv::CommandLineParser &parser) {
cv::aruco::Dictionary dictionary;
if (parser.has("cd")) {
cv::FileStorage fs(parser.get<std::string>("cd"), cv::FileStorage::READ);
bool readOk = dictionary.readDictionary(fs.root());
if(!readOk) {
throw std::runtime_error("Invalid dictionary file\n");
}
}
else {
int dictionaryId = parser.has("d") ? parser.get<int>("d"): cv::aruco::DICT_4X4_50;
if (!parser.has("d")) {
std::cout << "The default DICT_4X4_50 dictionary has been selected, you could "
"select the specific dictionary using flags -d or -cd." << std::endl;
}
dictionary = cv::aruco::getPredefinedDictionary(dictionaryId);
}
return dictionary;
}
}

@@ -0,0 +1,188 @@
#include <ctime>
#include <iostream>
#include <vector>
#include <opencv2/calib3d.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/objdetect/aruco_detector.hpp>
#include "aruco_samples_utility.hpp"
using namespace std;
using namespace cv;
namespace {
const char* about =
"Calibration using a ArUco Planar Grid board\n"
" To capture a frame for calibration, press 'c',\n"
" If input comes from video, press any key for next frame\n"
" To finish capturing, press 'ESC' key and calibration starts.\n";
const char* keys =
"{w | | Number of squares in X direction }"
"{h | | Number of squares in Y direction }"
"{l | | Marker side length (in meters) }"
"{s | | Separation between two consecutive markers in the grid (in meters) }"
"{d | | dictionary: DICT_4X4_50=0, DICT_4X4_100=1, DICT_4X4_250=2,"
"DICT_4X4_1000=3, DICT_5X5_50=4, DICT_5X5_100=5, DICT_5X5_250=6, DICT_5X5_1000=7, "
"DICT_6X6_50=8, DICT_6X6_100=9, DICT_6X6_250=10, DICT_6X6_1000=11, DICT_7X7_50=12,"
"DICT_7X7_100=13, DICT_7X7_250=14, DICT_7X7_1000=15, DICT_ARUCO_ORIGINAL = 16}"
"{cd | | Input file with custom dictionary }"
"{@outfile |cam.yml| Output file with calibrated camera parameters }"
"{v | | Input from video file, if ommited, input comes from camera }"
"{ci | 0 | Camera id if input doesnt come from video (-v) }"
"{dp | | File of marker detector parameters }"
"{rs | false | Apply refind strategy }"
"{zt | false | Assume zero tangential distortion }"
"{a | | Fix aspect ratio (fx/fy) to this value }"
"{pc | false | Fix the principal point at the center }";
}
int main(int argc, char *argv[]) {
CommandLineParser parser(argc, argv, keys);
parser.about(about);
if(argc < 6) {
parser.printMessage();
return 0;
}
int markersX = parser.get<int>("w");
int markersY = parser.get<int>("h");
float markerLength = parser.get<float>("l");
float markerSeparation = parser.get<float>("s");
string outputFile = parser.get<string>(0);
int calibrationFlags = 0;
float aspectRatio = 1;
if(parser.has("a")) {
calibrationFlags |= CALIB_FIX_ASPECT_RATIO;
aspectRatio = parser.get<float>("a");
}
if(parser.get<bool>("zt")) calibrationFlags |= CALIB_ZERO_TANGENT_DIST;
if(parser.get<bool>("pc")) calibrationFlags |= CALIB_FIX_PRINCIPAL_POINT;
aruco::Dictionary dictionary = readDictionatyFromCommandLine(parser);
aruco::DetectorParameters detectorParams = readDetectorParamsFromCommandLine(parser);
bool refindStrategy = parser.get<bool>("rs");
int camId = parser.get<int>("ci");
String video;
if(parser.has("v")) {
video = parser.get<String>("v");
}
if(!parser.check()) {
parser.printErrors();
return 0;
}
VideoCapture inputVideo;
int waitTime;
if(!video.empty()) {
inputVideo.open(video);
waitTime = 0;
} else {
inputVideo.open(camId);
waitTime = 10;
}
//! [CalibrationWithArucoBoard1]
// Create board object and ArucoDetector
aruco::GridBoard gridboard(Size(markersX, markersY), markerLength, markerSeparation, dictionary);
aruco::ArucoDetector detector(dictionary, detectorParams);
// Collected frames for calibration
vector<vector<vector<Point2f>>> allMarkerCorners;
vector<vector<int>> allMarkerIds;
Size imageSize;
while(inputVideo.grab()) {
Mat image, imageCopy;
inputVideo.retrieve(image);
vector<int> markerIds;
vector<vector<Point2f>> markerCorners, rejectedMarkers;
// Detect markers
detector.detectMarkers(image, markerCorners, markerIds, rejectedMarkers);
// Refind strategy to detect more markers
if(refindStrategy) {
detector.refineDetectedMarkers(image, gridboard, markerCorners, markerIds, rejectedMarkers);
}
//! [CalibrationWithArucoBoard1]
// Draw results
image.copyTo(imageCopy);
if(!markerIds.empty()) {
aruco::drawDetectedMarkers(imageCopy, markerCorners, markerIds);
}
putText(imageCopy, "Press 'c' to add current frame. 'ESC' to finish and calibrate",
Point(10, 20), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(255, 0, 0), 2);
imshow("out", imageCopy);
// Wait for key pressed
char key = (char)waitKey(waitTime);
if(key == 27) {
break;
}
//! [CalibrationWithArucoBoard2]
if(key == 'c' && !markerIds.empty()) {
cout << "Frame captured" << endl;
allMarkerCorners.push_back(markerCorners);
allMarkerIds.push_back(markerIds);
imageSize = image.size();
}
}
//! [CalibrationWithArucoBoard2]
if(allMarkerIds.empty()) {
throw std::runtime_error("Not enough captures for calibration\n");
}
//! [CalibrationWithArucoBoard3]
Mat cameraMatrix, distCoeffs;
if(calibrationFlags & CALIB_FIX_ASPECT_RATIO) {
cameraMatrix = Mat::eye(3, 3, CV_64F);
cameraMatrix.at<double>(0, 0) = aspectRatio;
}
// Prepare data for calibration
vector<Point3f> objectPoints;
vector<Point2f> imagePoints;
vector<Mat> processedObjectPoints, processedImagePoints;
size_t nFrames = allMarkerCorners.size();
for(size_t frame = 0; frame < nFrames; frame++) {
Mat currentImgPoints, currentObjPoints;
gridboard.matchImagePoints(allMarkerCorners[frame], allMarkerIds[frame], currentObjPoints, currentImgPoints);
if(currentImgPoints.total() > 0 && currentObjPoints.total() > 0) {
processedImagePoints.push_back(currentImgPoints);
processedObjectPoints.push_back(currentObjPoints);
}
}
// Calibrate camera
double repError = calibrateCamera(processedObjectPoints, processedImagePoints, imageSize, cameraMatrix, distCoeffs,
noArray(), noArray(), noArray(), noArray(), noArray(), calibrationFlags);
//! [CalibrationWithArucoBoard3]
bool saveOk = saveCameraParams(outputFile, imageSize, aspectRatio, calibrationFlags,
cameraMatrix, distCoeffs, repError);
if(!saveOk) {
throw std::runtime_error("Cannot save output file\n");
}
cout << "Rep Error: " << repError << endl;
cout << "Calibration saved to " << outputFile << endl;
return 0;
}

@@ -0,0 +1,216 @@
#include <iostream>
#include <vector>
#include <opencv2/calib3d.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/objdetect/charuco_detector.hpp>
#include "aruco_samples_utility.hpp"
using namespace std;
using namespace cv;
namespace {
const char* about =
"Calibration using a ChArUco board\n"
" To capture a frame for calibration, press 'c',\n"
" If input comes from video, press any key for next frame\n"
" To finish capturing, press 'ESC' key and calibration starts.\n";
const char* keys =
"{w | | Number of squares in X direction }"
"{h | | Number of squares in Y direction }"
"{sl | | Square side length (in meters) }"
"{ml | | Marker side length (in meters) }"
"{d | | dictionary: DICT_4X4_50=0, DICT_4X4_100=1, DICT_4X4_250=2,"
"DICT_4X4_1000=3, DICT_5X5_50=4, DICT_5X5_100=5, DICT_5X5_250=6, DICT_5X5_1000=7, "
"DICT_6X6_50=8, DICT_6X6_100=9, DICT_6X6_250=10, DICT_6X6_1000=11, DICT_7X7_50=12,"
"DICT_7X7_100=13, DICT_7X7_250=14, DICT_7X7_1000=15, DICT_ARUCO_ORIGINAL = 16}"
"{cd | | Input file with custom dictionary }"
"{@outfile |cam.yml| Output file with calibrated camera parameters }"
"{v | | Input from video file, if ommited, input comes from camera }"
"{ci | 0 | Camera id if input doesnt come from video (-v) }"
"{dp | | File of marker detector parameters }"
"{rs | false | Apply refind strategy }"
"{zt | false | Assume zero tangential distortion }"
"{a | | Fix aspect ratio (fx/fy) to this value }"
"{pc | false | Fix the principal point at the center }"
"{sc | false | Show detected chessboard corners after calibration }";
}
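// Hypothetical invocation (binary name and paths depend on your build/setup):
//   calibrate_camera_charuco -w=5 -h=7 -sl=0.04 -ml=0.02 -d=10 -v=board_video.avi cam.yml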
int main(int argc, char *argv[]) {
CommandLineParser parser(argc, argv, keys);
parser.about(about);
if(argc < 7) {
parser.printMessage();
return 0;
}
int squaresX = parser.get<int>("w");
int squaresY = parser.get<int>("h");
float squareLength = parser.get<float>("sl");
float markerLength = parser.get<float>("ml");
string outputFile = parser.get<string>(0);
bool showChessboardCorners = parser.get<bool>("sc");
int calibrationFlags = 0;
float aspectRatio = 1;
if(parser.has("a")) {
calibrationFlags |= CALIB_FIX_ASPECT_RATIO;
aspectRatio = parser.get<float>("a");
}
if(parser.get<bool>("zt")) calibrationFlags |= CALIB_ZERO_TANGENT_DIST;
if(parser.get<bool>("pc")) calibrationFlags |= CALIB_FIX_PRINCIPAL_POINT;
aruco::DetectorParameters detectorParams = readDetectorParamsFromCommandLine(parser);
aruco::Dictionary dictionary = readDictionatyFromCommandLine(parser);
bool refindStrategy = parser.get<bool>("rs");
int camId = parser.get<int>("ci");
String video;
if(parser.has("v")) {
video = parser.get<String>("v");
}
if(!parser.check()) {
parser.printErrors();
return 0;
}
VideoCapture inputVideo;
int waitTime;
if(!video.empty()) {
inputVideo.open(video);
waitTime = 0;
} else {
inputVideo.open(camId);
waitTime = 10;
}
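// When the refind strategy is requested, tryRefineMarkers makes detectBoard() call
// refineDetectedMarkers() internally to recover markers missed by the initial detection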
aruco::CharucoParameters charucoParams;
if(refindStrategy) {
charucoParams.tryRefineMarkers = true;
}
//! [CalibrationWithCharucoBoard1]
// Create charuco board object and CharucoDetector
aruco::CharucoBoard board(Size(squaresX, squaresY), squareLength, markerLength, dictionary);
aruco::CharucoDetector detector(board, charucoParams, detectorParams);
// Collect data from each frame
vector<Mat> allCharucoCorners, allCharucoIds;
vector<vector<Point2f>> allImagePoints;
vector<vector<Point3f>> allObjectPoints;
vector<Mat> allImages;
Size imageSize;
while(inputVideo.grab()) {
Mat image, imageCopy;
inputVideo.retrieve(image);
vector<int> markerIds;
vector<vector<Point2f>> markerCorners;
Mat currentCharucoCorners, currentCharucoIds;
vector<Point3f> currentObjectPoints;
vector<Point2f> currentImagePoints;
// Detect ChArUco board
detector.detectBoard(image, currentCharucoCorners, currentCharucoIds, markerCorners, markerIds);
//! [CalibrationWithCharucoBoard1]
// Draw results
image.copyTo(imageCopy);
if(!markerIds.empty()) {
aruco::drawDetectedMarkers(imageCopy, markerCorners);
}
if(currentCharucoCorners.total() > 3) {
aruco::drawDetectedCornersCharuco(imageCopy, currentCharucoCorners, currentCharucoIds);
}
putText(imageCopy, "Press 'c' to add current frame. 'ESC' to finish and calibrate",
Point(10, 20), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(255, 0, 0), 2);
imshow("out", imageCopy);
// Wait for key pressed
char key = (char)waitKey(waitTime);
if(key == 27) {
break;
}
//! [CalibrationWithCharucoBoard2]
if(key == 'c' && currentCharucoCorners.total() > 3) {
// Match image points
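// matchImagePoints() converts the interpolated ChArUco corners and ids into
// 3D board-frame points and their 2D image projections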
board.matchImagePoints(currentCharucoCorners, currentCharucoIds, currentObjectPoints, currentImagePoints);
if(currentImagePoints.empty() || currentObjectPoints.empty()) {
cout << "Point matching failed, try again." << endl;
continue;
}
cout << "Frame captured" << endl;
allCharucoCorners.push_back(currentCharucoCorners);
allCharucoIds.push_back(currentCharucoIds);
allImagePoints.push_back(currentImagePoints);
allObjectPoints.push_back(currentObjectPoints);
allImages.push_back(image);
imageSize = image.size();
}
}
//! [CalibrationWithCharucoBoard2]
if(allCharucoCorners.size() < 4) {
cerr << "Not enough corners for calibration" << endl;
return 0;
}
//! [CalibrationWithCharucoBoard3]
Mat cameraMatrix, distCoeffs;
if(calibrationFlags & CALIB_FIX_ASPECT_RATIO) {
cameraMatrix = Mat::eye(3, 3, CV_64F);
cameraMatrix.at<double>(0, 0) = aspectRatio;
}
// Calibrate camera using ChArUco
double repError = calibrateCamera(allObjectPoints, allImagePoints, imageSize, cameraMatrix, distCoeffs,
noArray(), noArray(), noArray(), noArray(), noArray(), calibrationFlags);
//! [CalibrationWithCharucoBoard3]
bool saveOk = saveCameraParams(outputFile, imageSize, aspectRatio, calibrationFlags,
cameraMatrix, distCoeffs, repError);
if(!saveOk) {
cerr << "Cannot save output file" << endl;
return 0;
}
cout << "Rep Error: " << repError << endl;
cout << "Calibration saved to " << outputFile << endl;
// Show interpolated charuco corners for debugging
if(showChessboardCorners) {
for(size_t frame = 0; frame < allImages.size(); frame++) {
Mat imageCopy = allImages[frame].clone();
if(allCharucoCorners[frame].total() > 0) {
aruco::drawDetectedCornersCharuco(imageCopy, allCharucoCorners[frame], allCharucoIds[frame]);
}
imshow("out", imageCopy);
char key = (char)waitKey(0);
if(key == 27) {
break;
}
}
}
return 0;
}

@@ -23,7 +23,6 @@ const char* keys =
"{si | false | show generated image }";
}
int main(int argc, char *argv[]) {
CommandLineParser parser(argc, argv, keys);
parser.about(about);
@@ -57,25 +56,7 @@ int main(int argc, char *argv[]) {
imageSize.height =
markersY * (markerLength + markerSeparation) - markerSeparation + 2 * margins;
aruco::Dictionary dictionary = aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
if (parser.has("d")) {
int dictionaryId = parser.get<int>("d");
dictionary = aruco::getPredefinedDictionary(aruco::PredefinedDictionaryType(dictionaryId));
}
else if (parser.has("cd")) {
FileStorage fs(parser.get<std::string>("cd"), FileStorage::READ);
bool readOk = dictionary.readDictionary(fs.root());
if(!readOk)
{
std::cerr << "Invalid dictionary file" << std::endl;
return 0;
}
}
else {
std::cerr << "Dictionary not specified" << std::endl;
return 0;
}
aruco::Dictionary dictionary = readDictionatyFromCommandLine(parser);
aruco::GridBoard board(Size(markersX, markersY), float(markerLength), float(markerSeparation), dictionary);
// show created board
@@ -90,6 +71,5 @@ int main(int argc, char *argv[]) {
}
imwrite(out, boardImage);
return 0;
}

@@ -0,0 +1,77 @@
#include <opencv2/highgui.hpp>
#include <opencv2/objdetect/charuco_detector.hpp>
#include <iostream>
#include "aruco_samples_utility.hpp"
using namespace cv;
namespace {
const char* about = "Create a ChArUco board image";
//! [charuco_detect_board_keys]
const char* keys =
"{@outfile |res.png| Output image }"
"{w | 5 | Number of squares in X direction }"
"{h | 7 | Number of squares in Y direction }"
"{sl | 100 | Square side length (in pixels) }"
"{ml | 60 | Marker side length (in pixels) }"
"{d | | dictionary: DICT_4X4_50=0, DICT_4X4_100=1, DICT_4X4_250=2,"
"DICT_4X4_1000=3, DICT_5X5_50=4, DICT_5X5_100=5, DICT_5X5_250=6, DICT_5X5_1000=7, "
"DICT_6X6_50=8, DICT_6X6_100=9, DICT_6X6_250=10, DICT_6X6_1000=11, DICT_7X7_50=12,"
"DICT_7X7_100=13, DICT_7X7_250=14, DICT_7X7_1000=15, DICT_ARUCO_ORIGINAL = 16}"
"{cd | | Input file with custom dictionary }"
"{m | | Margins size (in pixels). Default is (squareLength-markerLength) }"
"{bb | 1 | Number of bits in marker borders }"
"{si | false | show generated image }";
}
//! [charuco_detect_board_keys]
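// Hypothetical invocation (writes a 5x7 ChArUco board image using the defaults above):
//   create_board_charuco -d=10 -si=true charuco_board.png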
int main(int argc, char *argv[]) {
CommandLineParser parser(argc, argv, keys);
parser.about(about);
if (argc == 1) {
parser.printMessage();
}
int squaresX = parser.get<int>("w");
int squaresY = parser.get<int>("h");
int squareLength = parser.get<int>("sl");
int markerLength = parser.get<int>("ml");
int margins = squareLength - markerLength;
if(parser.has("m")) {
margins = parser.get<int>("m");
}
int borderBits = parser.get<int>("bb");
bool showImage = parser.get<bool>("si");
std::string pathOutImg = parser.get<std::string>(0);
if(!parser.check()) {
parser.printErrors();
return 0;
}
//! [create_charucoBoard]
aruco::Dictionary dictionary = readDictionatyFromCommandLine(parser);
cv::aruco::CharucoBoard board(Size(squaresX, squaresY), (float)squareLength, (float)markerLength, dictionary);
//! [create_charucoBoard]
// show created board
//! [generate_charucoBoard]
Mat boardImage;
Size imageSize;
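// Output image size: all squares plus the requested margin on each side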
imageSize.width = squaresX * squareLength + 2 * margins;
imageSize.height = squaresY * squareLength + 2 * margins;
board.generateImage(imageSize, boardImage, margins, borderBits);
//! [generate_charucoBoard]
if(showImage) {
imshow("board", boardImage);
waitKey(0);
}
if (pathOutImg != "")
imwrite(pathOutImg, boardImage);
return 0;
}

@@ -0,0 +1,72 @@
#include <opencv2/highgui.hpp>
#include <opencv2/objdetect/charuco_detector.hpp>
#include <vector>
#include <iostream>
#include "aruco_samples_utility.hpp"
using namespace std;
using namespace cv;
namespace {
const char* about = "Create a ChArUco marker image";
const char* keys =
"{@outfile | res.png | Output image }"
"{sl | 100 | Square side length (in pixels) }"
"{ml | 60 | Marker side length (in pixels) }"
"{cd | | Input file with custom dictionary }"
"{d | 10 | dictionary: DICT_4X4_50=0, DICT_4X4_100=1, DICT_4X4_250=2,"
"DICT_4X4_1000=3, DICT_5X5_50=4, DICT_5X5_100=5, DICT_5X5_250=6, DICT_5X5_1000=7, "
"DICT_6X6_50=8, DICT_6X6_100=9, DICT_6X6_250=10, DICT_6X6_1000=11, DICT_7X7_50=12,"
"DICT_7X7_100=13, DICT_7X7_250=14, DICT_7X7_1000=15, DICT_ARUCO_ORIGINAL = 16}"
"{ids |0, 1, 2, 3 | Four ids for the ChArUco marker: id1,id2,id3,id4 }"
"{m | 0 | Margins size (in pixels) }"
"{bb | 1 | Number of bits in marker borders }"
"{si | false | show generated image }";
}
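// Hypothetical invocation (diamond built from marker ids 0,1,2,3 of the default dictionary):
//   create_diamond -d=10 -ids=0,1,2,3 diamond.png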
int main(int argc, char *argv[]) {
CommandLineParser parser(argc, argv, keys);
parser.about(about);
int squareLength = parser.get<int>("sl");
int markerLength = parser.get<int>("ml");
string idsString = parser.get<string>("ids");
int margins = parser.get<int>("m");
int borderBits = parser.get<int>("bb");
bool showImage = parser.get<bool>("si");
string out = parser.get<string>(0);
aruco::Dictionary dictionary = readDictionatyFromCommandLine(parser);
if(!parser.check()) {
parser.printErrors();
return 0;
}
istringstream ss(idsString);
vector<string> splittedIds;
string token;
while(getline(ss, token, ','))
splittedIds.push_back(token);
if(splittedIds.size() < 4) {
throw std::runtime_error("Incorrect ids format\n");
}
Vec4i ids;
for(int i = 0; i < 4; i++)
ids[i] = atoi(splittedIds[i].c_str());
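// A ChArUco diamond is simply a 3x3 ChArUco board whose four markers use the ids given above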
//! [generate_diamond]
vector<int> diamondIds = {ids[0], ids[1], ids[2], ids[3]};
aruco::CharucoBoard charucoBoard(Size(3, 3), (float)squareLength, (float)markerLength, dictionary, diamondIds);
Mat markerImg;
charucoBoard.generateImage(Size(3*squareLength + 2*margins, 3*squareLength + 2*margins), markerImg, margins, borderBits);
//! [generate_diamond]
if(showImage) {
imshow("board", markerImg);
waitKey(0);
}
if (out != "")
imwrite(out, markerImg);
return 0;
}

@@ -10,13 +10,13 @@ const char* about = "Create an ArUco marker image";
//! [aruco_create_markers_keys]
const char* keys =
"{@outfile |<none> | Output image }"
"{d | | dictionary: DICT_4X4_50=0, DICT_4X4_100=1, DICT_4X4_250=2,"
"{@outfile |res.png| Output image }"
"{d | 0 | dictionary: DICT_4X4_50=0, DICT_4X4_100=1, DICT_4X4_250=2,"
"DICT_4X4_1000=3, DICT_5X5_50=4, DICT_5X5_100=5, DICT_5X5_250=6, DICT_5X5_1000=7, "
"DICT_6X6_50=8, DICT_6X6_100=9, DICT_6X6_250=10, DICT_6X6_1000=11, DICT_7X7_50=12,"
"DICT_7X7_100=13, DICT_7X7_250=14, DICT_7X7_1000=15, DICT_ARUCO_ORIGINAL = 16}"
"{cd | | Input file with custom dictionary }"
"{id | | Marker id in the dictionary }"
"{id | 0 | Marker id in the dictionary }"
"{ms | 200 | Marker size in pixels }"
"{bb | 1 | Number of bits in marker borders }"
"{si | false | show generated image }";
@@ -28,11 +28,6 @@ int main(int argc, char *argv[]) {
CommandLineParser parser(argc, argv, keys);
parser.about(about);
if(argc < 4) {
parser.printMessage();
return 0;
}
int markerId = parser.get<int>("id");
int borderBits = parser.get<int>("bb");
int markerSize = parser.get<int>("ms");
@@ -45,23 +40,7 @@ int main(int argc, char *argv[]) {
return 0;
}
aruco::Dictionary dictionary = aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
if (parser.has("d")) {
int dictionaryId = parser.get<int>("d");
dictionary = aruco::getPredefinedDictionary(aruco::PredefinedDictionaryType(dictionaryId));
}
else if (parser.has("cd")) {
FileStorage fs(parser.get<std::string>("cd"), FileStorage::READ);
bool readOk = dictionary.readDictionary(fs.root());
if(!readOk) {
std::cerr << "Invalid dictionary file" << std::endl;
return 0;
}
}
else {
std::cerr << "Dictionary not specified" << std::endl;
return 0;
}
aruco::Dictionary dictionary = readDictionatyFromCommandLine(parser);
Mat markerImg;
aruco::generateImageMarker(dictionary, markerId, markerSize, markerImg, borderBits);

@@ -30,46 +30,6 @@ const char* keys =
}
//! [aruco_detect_board_keys]
static void readDetectorParamsFromCommandLine(CommandLineParser &parser, aruco::DetectorParameters& detectorParams) {
if(parser.has("dp")) {
FileStorage fs(parser.get<string>("dp"), FileStorage::READ);
bool readOk = detectorParams.readDetectorParameters(fs.root());
if(!readOk) {
cerr << "Invalid detector parameters file" << endl;
throw -1;
}
}
}
static void readCameraParamsFromCommandLine(CommandLineParser &parser, Mat& camMatrix, Mat& distCoeffs) {
if(parser.has("c")) {
bool readOk = readCameraParameters(parser.get<string>("c"), camMatrix, distCoeffs);
if(!readOk) {
cerr << "Invalid camera file" << endl;
throw -1;
}
}
}
static void readDictionatyFromCommandLine(CommandLineParser &parser, aruco::Dictionary& dictionary) {
if (parser.has("d")) {
int dictionaryId = parser.get<int>("d");
dictionary = aruco::getPredefinedDictionary(aruco::PredefinedDictionaryType(dictionaryId));
}
else if (parser.has("cd")) {
FileStorage fs(parser.get<string>("cd"), FileStorage::READ);
bool readOk = dictionary.readDictionary(fs.root());
if(!readOk) {
cerr << "Invalid dictionary file" << endl;
throw -1;
}
}
else {
cerr << "Dictionary not specified" << endl;
throw -1;
}
}
int main(int argc, char *argv[]) {
CommandLineParser parser(argc, argv, keys);
parser.about(about);
@@ -91,10 +51,8 @@ int main(int argc, char *argv[]) {
Mat camMatrix, distCoeffs;
readCameraParamsFromCommandLine(parser, camMatrix, distCoeffs);
aruco::DetectorParameters detectorParams;
detectorParams.cornerRefinementMethod = aruco::CORNER_REFINE_SUBPIX; // do corner refinement in markers
readDetectorParamsFromCommandLine(parser, detectorParams);
aruco::Dictionary dictionary = readDictionatyFromCommandLine(parser);
aruco::DetectorParameters detectorParams = readDetectorParamsFromCommandLine(parser);
String video;
if(parser.has("v")) {
@@ -106,9 +64,6 @@ int main(int argc, char *argv[]) {
return 0;
}
aruco::Dictionary dictionary = aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
readDictionatyFromCommandLine(parser, dictionary);
aruco::ArucoDetector detector(dictionary, detectorParams);
VideoCapture inputVideo;
int waitTime;
@@ -181,9 +136,8 @@ int main(int argc, char *argv[]) {
// Draw results
image.copyTo(imageCopy);
if(!ids.empty()) {
if(!ids.empty())
aruco::drawDetectedMarkers(imageCopy, corners, ids);
}
if(showRejected && !rejected.empty())
aruco::drawDetectedMarkers(imageCopy, rejected, noArray(), Scalar(100, 0, 255));

@@ -0,0 +1,144 @@
#include <opencv2/highgui.hpp>
//! [charucohdr]
#include <opencv2/objdetect/charuco_detector.hpp>
//! [charucohdr]
#include <vector>
#include <iostream>
#include "aruco_samples_utility.hpp"
using namespace std;
using namespace cv;
namespace {
const char* about = "Pose estimation using a ChArUco board";
const char* keys =
"{w | | Number of squares in X direction }"
"{h | | Number of squares in Y direction }"
"{sl | | Square side length (in meters) }"
"{ml | | Marker side length (in meters) }"
"{d | | dictionary: DICT_4X4_50=0, DICT_4X4_100=1, DICT_4X4_250=2,"
"DICT_4X4_1000=3, DICT_5X5_50=4, DICT_5X5_100=5, DICT_5X5_250=6, DICT_5X5_1000=7, "
"DICT_6X6_50=8, DICT_6X6_100=9, DICT_6X6_250=10, DICT_6X6_1000=11, DICT_7X7_50=12,"
"DICT_7X7_100=13, DICT_7X7_250=14, DICT_7X7_1000=15, DICT_ARUCO_ORIGINAL = 16}"
"{cd | | Input file with custom dictionary }"
"{c | | Output file with calibrated camera parameters }"
"{v | | Input from video or image file, if ommited, input comes from camera }"
"{ci | 0 | Camera id if input doesnt come from video (-v) }"
"{dp | | File of marker detector parameters }"
"{rs | | Apply refind strategy }";
}
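// Hypothetical invocation (pose estimation needs camera intrinsics, e.g. tutorial_camera_params.yml):
//   detect_board_charuco -w=5 -h=7 -sl=0.04 -ml=0.02 -d=10 -c=tutorial_camera_params.yml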
int main(int argc, char *argv[]) {
CommandLineParser parser(argc, argv, keys);
parser.about(about);
if(argc < 6) {
parser.printMessage();
return 0;
}
//! [charuco_detect_board_full_sample]
int squaresX = parser.get<int>("w");
int squaresY = parser.get<int>("h");
float squareLength = parser.get<float>("sl");
float markerLength = parser.get<float>("ml");
bool refine = parser.has("rs");
int camId = parser.get<int>("ci");
string video;
if(parser.has("v")) {
video = parser.get<string>("v");
}
Mat camMatrix, distCoeffs;
readCameraParamsFromCommandLine(parser, camMatrix, distCoeffs);
aruco::DetectorParameters detectorParams = readDetectorParamsFromCommandLine(parser);
aruco::Dictionary dictionary = readDictionatyFromCommandLine(parser);
if(!parser.check()) {
parser.printErrors();
return 0;
}
VideoCapture inputVideo;
int waitTime = 0;
if(!video.empty()) {
inputVideo.open(video);
} else {
inputVideo.open(camId);
waitTime = 10;
}
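// Length of the drawn pose axes: half of the shorter board side, in the same unit as squareLength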
float axisLength = 0.5f * ((float)min(squaresX, squaresY) * (squareLength));
// create charuco board object
aruco::CharucoBoard charucoBoard(Size(squaresX, squaresY), squareLength, markerLength, dictionary);
// create charuco detector
aruco::CharucoParameters charucoParams;
charucoParams.tryRefineMarkers = refine; // if tryRefineMarkers, refineDetectedMarkers() will be used in detectBoard()
charucoParams.cameraMatrix = camMatrix; // cameraMatrix can be used in detectBoard()
charucoParams.distCoeffs = distCoeffs; // distCoeffs can be used in detectBoard()
aruco::CharucoDetector charucoDetector(charucoBoard, charucoParams, detectorParams);
double totalTime = 0;
int totalIterations = 0;
while(inputVideo.grab()) {
//! [inputImg]
Mat image, imageCopy;
inputVideo.retrieve(image);
//! [inputImg]
double tick = (double)getTickCount();
vector<int> markerIds, charucoIds;
vector<vector<Point2f> > markerCorners;
vector<Point2f> charucoCorners;
Vec3d rvec, tvec;
//! [interpolateCornersCharuco]
// detect markers and charuco corners
charucoDetector.detectBoard(image, charucoCorners, charucoIds, markerCorners, markerIds);
//! [interpolateCornersCharuco]
//! [poseCharuco]
// estimate charuco board pose
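// (solvePnP() needs valid camera intrinsics and at least 4 ChArUco corners)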
bool validPose = false;
if(camMatrix.total() != 0 && distCoeffs.total() != 0 && charucoIds.size() >= 4) {
Mat objPoints, imgPoints;
charucoBoard.matchImagePoints(charucoCorners, charucoIds, objPoints, imgPoints);
validPose = solvePnP(objPoints, imgPoints, camMatrix, distCoeffs, rvec, tvec);
}
//! [poseCharuco]
double currentTime = ((double)getTickCount() - tick) / getTickFrequency();
totalTime += currentTime;
totalIterations++;
if(totalIterations % 30 == 0) {
cout << "Detection Time = " << currentTime * 1000 << " ms "
<< "(Mean = " << 1000 * totalTime / double(totalIterations) << " ms)" << endl;
}
// draw results
image.copyTo(imageCopy);
if(markerIds.size() > 0) {
aruco::drawDetectedMarkers(imageCopy, markerCorners);
}
if(charucoIds.size() > 0) {
//! [drawDetectedCornersCharuco]
aruco::drawDetectedCornersCharuco(imageCopy, charucoCorners, charucoIds, cv::Scalar(255, 0, 0));
//! [drawDetectedCornersCharuco]
}
if(validPose)
cv::drawFrameAxes(imageCopy, camMatrix, distCoeffs, rvec, tvec, axisLength);
imshow("out", imageCopy);
if(waitKey(waitTime) == 27) break;
}
//! [charuco_detect_board_full_sample]
return 0;
}

@@ -0,0 +1,187 @@
#include <opencv2/highgui.hpp>
#include <vector>
#include <iostream>
#include <opencv2/objdetect/charuco_detector.hpp>
#include "aruco_samples_utility.hpp"
using namespace std;
using namespace cv;
namespace {
const char* about = "Detect ChArUco markers";
const char* keys =
"{sl | 100 | Square side length (in meters) }"
"{ml | 60 | Marker side length (in meters) }"
"{d | 10 | dictionary: DICT_4X4_50=0, DICT_4X4_100=1, DICT_4X4_250=2,"
"DICT_4X4_1000=3, DICT_5X5_50=4, DICT_5X5_100=5, DICT_5X5_250=6, DICT_5X5_1000=7, "
"DICT_6X6_50=8, DICT_6X6_100=9, DICT_6X6_250=10, DICT_6X6_1000=11, DICT_7X7_50=12,"
"DICT_7X7_100=13, DICT_7X7_250=14, DICT_7X7_1000=15, DICT_ARUCO_ORIGINAL = 16}"
"{cd | | Input file with custom dictionary }"
"{c | | Output file with calibrated camera parameters }"
"{as | | Automatic scale. The provided number is multiplied by the last"
"diamond id becoming an indicator of the square length. In this case, the -sl and "
"-ml are only used to know the relative length relation between squares and markers }"
"{v | | Input from video file, if ommited, input comes from camera }"
"{ci | 0 | Camera id if input doesnt come from video (-v) }"
"{dp | | File of marker detector parameters }"
"{refine | | Corner refinement: CORNER_REFINE_NONE=0, CORNER_REFINE_SUBPIX=1,"
"CORNER_REFINE_CONTOUR=2, CORNER_REFINE_APRILTAG=3}";
const string refineMethods[4] = {
"None",
"Subpixel",
"Contour",
"AprilTag"
};
}
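// Hypothetical invocation (detect diamonds from the default camera and estimate their pose):
//   detect_diamonds -dp=detector_params.yml -c=tutorial_camera_params.yml -sl=0.04 -ml=0.025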
int main(int argc, char *argv[]) {
CommandLineParser parser(argc, argv, keys);
parser.about(about);
float squareLength = parser.get<float>("sl");
float markerLength = parser.get<float>("ml");
bool estimatePose = parser.has("c");
bool autoScale = parser.has("as");
float autoScaleFactor = autoScale ? parser.get<float>("as") : 1.f;
aruco::Dictionary dictionary = readDictionatyFromCommandLine(parser);
Mat camMatrix, distCoeffs;
readCameraParamsFromCommandLine(parser, camMatrix, distCoeffs);
aruco::DetectorParameters detectorParams = readDetectorParamsFromCommandLine(parser);
if (parser.has("refine")) {
// override cornerRefinementMethod read from config file
int user_method = parser.get<aruco::CornerRefineMethod>("refine");
if (user_method < 0 || user_method >= 4)
{
std::cout << "Corner refinement method should be in range 0..3" << std::endl;
return 0;
}
detectorParams.cornerRefinementMethod = user_method;
}
std::cout << "Corner refinement method: " << refineMethods[detectorParams.cornerRefinementMethod] << std::endl;
int camId = parser.get<int>("ci");
String video;
if(parser.has("v")) {
video = parser.get<String>("v");
}
if(!parser.check()) {
parser.printErrors();
return 0;
}
VideoCapture inputVideo;
int waitTime;
if(!video.empty()) {
inputVideo.open(video);
waitTime = 0;
} else {
inputVideo.open(camId);
waitTime = 10;
}
double totalTime = 0;
int totalIterations = 0;
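// detectDiamonds() treats each diamond as a 3x3 ChArUco board, so the board constructed here
// mainly provides the square/marker length ratio used to group detected markers into diamonds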
aruco::CharucoBoard charucoBoard(Size(3, 3), squareLength, markerLength, dictionary);
aruco::CharucoDetector detector(charucoBoard, aruco::CharucoParameters(), detectorParams);
while(inputVideo.grab()) {
Mat image, imageCopy;
inputVideo.retrieve(image);
double tick = (double)getTickCount();
//! [detect_diamonds]
vector<int> markerIds;
vector<Vec4i> diamondIds;
vector<vector<Point2f> > markerCorners, diamondCorners;
vector<Vec3d> rvecs, tvecs;
detector.detectDiamonds(image, diamondCorners, diamondIds, markerCorners, markerIds);
//! [detect_diamonds]
//! [diamond_pose_estimation]
// estimate diamond pose
size_t N = diamondIds.size();
if(estimatePose && N > 0) {
cv::Mat objPoints(4, 1, CV_32FC3);
rvecs.resize(N);
tvecs.resize(N);
if(!autoScale) {
// set coordinate system
objPoints.ptr<Vec3f>(0)[0] = Vec3f(-squareLength/2.f, squareLength/2.f, 0);
objPoints.ptr<Vec3f>(0)[1] = Vec3f(squareLength/2.f, squareLength/2.f, 0);
objPoints.ptr<Vec3f>(0)[2] = Vec3f(squareLength/2.f, -squareLength/2.f, 0);
objPoints.ptr<Vec3f>(0)[3] = Vec3f(-squareLength/2.f, -squareLength/2.f, 0);
// Calculate pose for each marker
for (size_t i = 0ull; i < N; i++)
solvePnP(objPoints, diamondCorners.at(i), camMatrix, distCoeffs, rvecs.at(i), tvecs.at(i));
//! [diamond_pose_estimation]
/* //! [diamond_pose_estimation_as_charuco]
for (size_t i = 0ull; i < N; i++) { // estimate diamond pose as Charuco board
Mat objPoints_b, imgPoints;
// The coordinate system of the diamond is placed in the board plane centered in the bottom left corner
vector<int> charucoIds = {0, 1, 3, 2}; // if CCW order, Z axis pointing in the plane
// vector<int> charucoIds = {0, 2, 3, 1}; // if CW order, Z axis pointing out the plane
charucoBoard.matchImagePoints(diamondCorners[i], charucoIds, objPoints_b, imgPoints);
solvePnP(objPoints_b, imgPoints, camMatrix, distCoeffs, rvecs[i], tvecs[i]);
}
//! [diamond_pose_estimation_as_charuco] */
}
else {
// if autoscale, extract square size from last diamond id
for(size_t i = 0; i < N; i++) {
float sqLenScale = autoScaleFactor * float(diamondIds[i].val[3]);
vector<vector<Point2f> > currentCorners;
vector<Vec3d> currentRvec, currentTvec;
currentCorners.push_back(diamondCorners[i]);
// set coordinate system
objPoints.ptr<Vec3f>(0)[0] = Vec3f(-sqLenScale/2.f, sqLenScale/2.f, 0);
objPoints.ptr<Vec3f>(0)[1] = Vec3f(sqLenScale/2.f, sqLenScale/2.f, 0);
objPoints.ptr<Vec3f>(0)[2] = Vec3f(sqLenScale/2.f, -sqLenScale/2.f, 0);
objPoints.ptr<Vec3f>(0)[3] = Vec3f(-sqLenScale/2.f, -sqLenScale/2.f, 0);
solvePnP(objPoints, diamondCorners.at(i), camMatrix, distCoeffs, rvecs.at(i), tvecs.at(i));
}
}
}
double currentTime = ((double)getTickCount() - tick) / getTickFrequency();
totalTime += currentTime;
totalIterations++;
if(totalIterations % 30 == 0) {
cout << "Detection Time = " << currentTime * 1000 << " ms "
<< "(Mean = " << 1000 * totalTime / double(totalIterations) << " ms)" << endl;
}
// draw results
image.copyTo(imageCopy);
if(markerIds.size() > 0)
aruco::drawDetectedMarkers(imageCopy, markerCorners);
//! [draw_diamonds]
if(diamondIds.size() > 0) {
aruco::drawDetectedDiamonds(imageCopy, diamondCorners, diamondIds);
//! [draw_diamonds]
//! [draw_diamond_pose_estimation]
if(estimatePose) {
for(size_t i = 0u; i < diamondIds.size(); i++)
cv::drawFrameAxes(imageCopy, camMatrix, distCoeffs, rvecs[i], tvecs[i], squareLength*1.1f);
}
//! [draw_diamond_pose_estimation]
}
imshow("out", imageCopy);
char key = (char)waitKey(waitTime);
if(key == 27) break;
}
return 0;
}

@@ -11,7 +11,7 @@ const char* about = "Basic marker detection";
//! [aruco_detect_markers_keys]
const char* keys =
"{d | | dictionary: DICT_4X4_50=0, DICT_4X4_100=1, DICT_4X4_250=2,"
"{d | 0 | dictionary: DICT_4X4_50=0, DICT_4X4_100=1, DICT_4X4_250=2,"
"DICT_4X4_1000=3, DICT_5X5_50=4, DICT_5X5_100=5, DICT_5X5_250=6, DICT_5X5_1000=7, "
"DICT_6X6_50=8, DICT_6X6_100=9, DICT_6X6_250=10, DICT_6X6_1000=11, DICT_7X7_50=12,"
"DICT_7X7_100=13, DICT_7X7_250=14, DICT_7X7_1000=15, DICT_ARUCO_ORIGINAL = 16,"
@@ -25,37 +25,41 @@ const char* keys =
"{r | | show rejected candidates too }"
"{refine | | Corner refinement: CORNER_REFINE_NONE=0, CORNER_REFINE_SUBPIX=1,"
"CORNER_REFINE_CONTOUR=2, CORNER_REFINE_APRILTAG=3}";
}
//! [aruco_detect_markers_keys]
const string refineMethods[4] = {
"None",
"Subpixel",
"Contour",
"AprilTag"
};
}
int main(int argc, char *argv[]) {
CommandLineParser parser(argc, argv, keys);
parser.about(about);
if(argc < 2) {
parser.printMessage();
return 0;
}
bool showRejected = parser.has("r");
bool estimatePose = parser.has("c");
float markerLength = parser.get<float>("l");
cv::aruco::DetectorParameters detectorParams;
if(parser.has("dp")) {
cv::FileStorage fs(parser.get<string>("dp"), FileStorage::READ);
bool readOk = detectorParams.readDetectorParameters(fs.root());
if(!readOk) {
cerr << "Invalid detector parameters file" << endl;
return 0;
}
}
aruco::DetectorParameters detectorParams = readDetectorParamsFromCommandLine(parser);
aruco::Dictionary dictionary = readDictionatyFromCommandLine(parser);
if (parser.has("refine")) {
// override cornerRefinementMethod read from config file
detectorParams.cornerRefinementMethod = parser.get<aruco::CornerRefineMethod>("refine");
int user_method = parser.get<aruco::CornerRefineMethod>("refine");
if (user_method < 0 || user_method >= 4)
{
std::cout << "Corner refinement method should be in range 0..3" << std::endl;
return 0;
}
std::cout << "Corner refinement method (0: None, 1: Subpixel, 2:contour, 3: AprilTag 2): " << (int)detectorParams.cornerRefinementMethod << std::endl;
detectorParams.cornerRefinementMethod = user_method;
}
std::cout << "Corner refinement method: " << refineMethods[detectorParams.cornerRefinementMethod] << std::endl;
int camId = parser.get<int>("ci");
@@ -69,33 +73,11 @@ int main(int argc, char *argv[]) {
return 0;
}
aruco::Dictionary dictionary = aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
if (parser.has("d")) {
int dictionaryId = parser.get<int>("d");
dictionary = aruco::getPredefinedDictionary(aruco::PredefinedDictionaryType(dictionaryId));
}
else if (parser.has("cd")) {
cv::FileStorage fs(parser.get<std::string>("cd"), FileStorage::READ);
bool readOk = dictionary.readDictionary(fs.root());
if(!readOk) {
std::cerr << "Invalid dictionary file" << std::endl;
return 0;
}
}
else {
std::cerr << "Dictionary not specified" << std::endl;
return 0;
}
//! [aruco_pose_estimation1]
cv::Mat camMatrix, distCoeffs;
Mat camMatrix, distCoeffs;
if(estimatePose) {
// You can read camera parameters from tutorial_camera_params.yml
bool readOk = readCameraParameters(parser.get<string>("c"), camMatrix, distCoeffs);
if(!readOk) {
cerr << "Invalid camera file" << endl;
return 0;
}
readCameraParamsFromCommandLine(parser, camMatrix, distCoeffs);
}
//! [aruco_pose_estimation1]
//! [aruco_detect_markers]

@@ -0,0 +1,30 @@
%YAML:1.0
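# ArUco detector parameters; load them in the samples with the -dp option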
adaptiveThreshWinSizeMin: 3
adaptiveThreshWinSizeMax: 23
adaptiveThreshWinSizeStep: 10
adaptiveThreshWinSize: 21
adaptiveThreshConstant: 7
minMarkerPerimeterRate: 0.03
maxMarkerPerimeterRate: 4.0
polygonalApproxAccuracyRate: 0.05
minCornerDistanceRate: 0.05
minDistanceToBorder: 3
minMarkerDistance: 10.0
minMarkerDistanceRate: 0.05
cornerRefinementMethod: 0
cornerRefinementWinSize: 5
cornerRefinementMaxIterations: 30
cornerRefinementMinAccuracy: 0.1
markerBorderBits: 1
perspectiveRemovePixelPerCell: 8
perspectiveRemoveIgnoredMarginPerCell: 0.13
maxErroneousBitsInBorderRate: 0.04
minOtsuStdDev: 5.0
errorCorrectionRate: 0.6
# new aruco 3 functionality
useAruco3Detection: 0
minSideLengthCanonicalImg: 32 # 16, 32, 64 --> tau_c from the paper
minMarkerLengthRatioOriginalImg: 0.02 # range [0,0.2] --> tau_i from the paper
cameraMotionSpeed: 0.1 # range [0,1) --> tau_s from the paper
useGlobalThreshold: 0

@@ -0,0 +1,21 @@
%YAML:1.0
---
calibration_time: "Wed 08 Dec 2021 05:13:09 PM MSK"
image_width: 640
image_height: 480
flags: 0
camera_matrix: !!opencv-matrix
rows: 3
cols: 3
dt: d
data: [ 4.5251072219637672e+02, 0., 3.1770297317353277e+02, 0.,
4.5676707935146891e+02, 2.7775155919135995e+02, 0., 0., 1. ]
distortion_coefficients: !!opencv-matrix
rows: 1
cols: 5
dt: d
data: [ 1.2136925618707872e-01, -1.0854664722560681e+00,
1.1786843796668460e-04, -4.6240686046485508e-04,
2.9542589406810080e+00 ]
avg_reprojection_error: 1.8234905535936044e-01
info: "The camera calibration parameters were obtained by img_00.jpg-img_03.jpg from aruco/tutorials/aruco_calibration/images"