From c8cb03fd8fe1acc91b12bd53f7952475923a603c Mon Sep 17 00:00:00 2001
From: Maksim Shabunin
Date: Thu, 7 May 2015 18:00:11 +0300
Subject: [PATCH 1/6] Replaced 'corrected' with 'distorted' in camera
 calibration tutorials

---
 .../py_calibration/py_calibration.markdown | 12 ++++++------
 .../camera_calibration.markdown            | 16 ++++++++--------
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/doc/py_tutorials/py_calib3d/py_calibration/py_calibration.markdown b/doc/py_tutorials/py_calib3d/py_calibration/py_calibration.markdown
index 66f578f33b..3655400f38 100644
--- a/doc/py_tutorials/py_calib3d/py_calibration/py_calibration.markdown
+++ b/doc/py_tutorials/py_calib3d/py_calibration/py_calibration.markdown
@@ -22,17 +22,17 @@ red line. All the expected straight lines are bulged out. Visit [Distortion

 ![image](images/calib_radial.jpg)

-This distortion is solved as follows:
+This distortion is represented as follows:

-\f[x_{corrected} = x( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6) \\
-y_{corrected} = y( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6)\f]
+\f[x_{distorted} = x( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6) \\
+y_{distorted} = y( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6)\f]

 Similarly, another distortion is the tangential distortion which occurs because image taking lense
 is not aligned perfectly parallel to the imaging plane. So some areas in image may look nearer than
-expected. It is solved as below:
+expected. It is represented as below:

-\f[x_{corrected} = x + [ 2p_1xy + p_2(r^2+2x^2)] \\
-y_{corrected} = y + [ p_1(r^2+ 2y^2)+ 2p_2xy]\f]
+\f[x_{distorted} = x + [ 2p_1xy + p_2(r^2+2x^2)] \\
+y_{distorted} = y + [ p_1(r^2+ 2y^2)+ 2p_2xy]\f]

 In short, we need to find five parameters, known as distortion coefficients given by:

diff --git a/doc/tutorials/calib3d/camera_calibration/camera_calibration.markdown b/doc/tutorials/calib3d/camera_calibration/camera_calibration.markdown
index a7bd1f0597..1a7b906872 100644
--- a/doc/tutorials/calib3d/camera_calibration/camera_calibration.markdown
+++ b/doc/tutorials/calib3d/camera_calibration/camera_calibration.markdown
@@ -14,18 +14,18 @@ Theory
 For the distortion OpenCV takes into account the radial and tangential factors. For the radial
 factor one uses the following formula:

-\f[x_{corrected} = x( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6) \\
-y_{corrected} = y( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6)\f]
+\f[x_{distorted} = x( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6) \\
+y_{distorted} = y( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6)\f]

-So for an old pixel point at \f$(x,y)\f$ coordinates in the input image, its position on the corrected
-output image will be \f$(x_{corrected} y_{corrected})\f$. The presence of the radial distortion
-manifests in form of the "barrel" or "fish-eye" effect.
+So for an undistorted pixel point at \f$(x,y)\f$ coordinates, its position on the distorted image
+will be \f$(x_{distorted}, y_{distorted})\f$. The presence of the radial distortion manifests in the
+form of the "barrel" or "fish-eye" effect.

 Tangential distortion occurs because the image taking lenses are not perfectly parallel to the
-imaging plane. It can be corrected via the formulas:
+imaging plane. It can be represented via the formulas:

-\f[x_{corrected} = x + [ 2p_1xy + p_2(r^2+2x^2)] \\
-y_{corrected} = y + [ p_1(r^2+ 2y^2)+ 2p_2xy]\f]
+\f[x_{distorted} = x + [ 2p_1xy + p_2(r^2+2x^2)] \\
+y_{distorted} = y + [ p_1(r^2+ 2y^2)+ 2p_2xy]\f]

 So we have five distortion parameters which in OpenCV are presented as one row matrix with 5
 columns:
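The renamed subscripts also clarify the direction of the mapping: the model takes *undistorted* normalized coordinates to the *distorted* position actually observed on the sensor. A minimal numeric sketch of the combined radial plus tangential model from the two tutorials above; the coefficient values here are invented purely for illustration:

@code{.cpp}
#include <cstdio>

int main()
{
    // Illustrative (not calibrated) coefficients: k1, k2, k3 radial; p1, p2 tangential.
    const double k1 = -0.28, k2 = 0.07, k3 = 0.0, p1 = 0.001, p2 = -0.0005;
    const double x = 0.5, y = 0.3;      // undistorted normalized point
    const double r2 = x*x + y*y;        // r^2

    // Radial factor 1 + k1*r^2 + k2*r^4 + k3*r^6, then the two tangential terms.
    const double radial = 1 + k1*r2 + k2*r2*r2 + k3*r2*r2*r2;
    const double x_distorted = x*radial + 2*p1*x*y + p2*(r2 + 2*x*x);
    const double y_distorted = y*radial + p1*(r2 + 2*y*y) + 2*p2*x*y;

    std::printf("(%f, %f) -> (%f, %f)\n", x, y, x_distorted, y_distorted);
    return 0;
}
@endcode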
From 61293a09ff8ec11bd33801859d7d65ac43612e8d Mon Sep 17 00:00:00 2001
From: Maksim Shabunin
Date: Thu, 7 May 2015 18:18:46 +0300
Subject: [PATCH 2/6] Fixed RGB-to-HLS conversion formula in documentation

---
 modules/imgproc/doc/colors.markdown | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/modules/imgproc/doc/colors.markdown b/modules/imgproc/doc/colors.markdown
index 7e0b39a71b..c372d280d9 100644
--- a/modules/imgproc/doc/colors.markdown
+++ b/modules/imgproc/doc/colors.markdown
@@ -78,9 +78,9 @@ scaled to fit the 0 to 1 range.

 \f[L \leftarrow \frac{V_{max} + V_{min}}{2}\f]
 \f[S \leftarrow \fork { \frac{V_{max} - V_{min}}{V_{max} + V_{min}} }{if \(L < 0.5\) }
     { \frac{V_{max} - V_{min}}{2 - (V_{max} + V_{min})} }{if \(L \ge 0.5\) }\f]
-\f[H \leftarrow \forkthree {{60(G - B)}/{S}}{if \(V_{max}=R\) }
-    {{120+60(B - R)}/{S}}{if \(V_{max}=G\) }
-    {{240+60(R - G)}/{S}}{if \(V_{max}=B\) }\f]
+\f[H \leftarrow \forkthree {{60(G - B)}/{(V_{max}-V_{min})}}{if \(V_{max}=R\) }
+    {{120+60(B - R)}/{(V_{max}-V_{min})}}{if \(V_{max}=G\) }
+    {{240+60(R - G)}/{(V_{max}-V_{min})}}{if \(V_{max}=B\) }\f]

 If \f$H<0\f$ then \f$H \leftarrow H+360\f$ . On output \f$0 \leq L \leq 1\f$, \f$0 \leq S \leq 1\f$,
 \f$0 \leq H \leq 360\f$ .
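The corrected denominator matters because S is a rescaled difference, not the difference itself, so dividing by S produced a hue that depended on lightness. For example, with (R, G, B) = (1, 0.75, 0.5) we have L = 0.75 and S = 1, so the old formula gave H = 15 where the correct value is 30. A small self-check of the fixed branch, assuming channels already scaled to [0, 1]:

@code{.cpp}
#include <algorithm>
#include <cstdio>

int main()
{
    const double R = 1.0, G = 0.75, B = 0.5;    // illustrative values
    const double vmax = std::max({R, G, B});
    const double vmin = std::min({R, G, B});
    const double d = vmax - vmin;               // the corrected denominator

    double H;
    if (vmax == R)      H = 60*(G - B)/d;
    else if (vmax == G) H = 120 + 60*(B - R)/d;
    else                H = 240 + 60*(R - G)/d;
    if (H < 0) H += 360;

    std::printf("H = %f\n", H);                 // prints 30 for this input
    return 0;
}
@endcode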
| | print this message }" "{@image1 | | image1 for compare }" @@ -581,27 +592,89 @@ Syntax: "{N count |100 | count of objects }" "{ts timestamp | | use time stamp }" ; +} @endcode -Use: -@code - # ./app -N=200 1.png 2.jpg 19 -ts - # ./app -fps=aaa +### Usage + +For the described keys: + +@code{.sh} + # Good call (3 positional parameters: image1, image2 and repeat; N is 200, ts is true) + $ ./app -N=200 1.png 2.jpg 19 -ts + + # Bad call + $ ./app -fps=aaa ERRORS: Exception: can not convert: [aaa] to [double] @endcode */ class CV_EXPORTS CommandLineParser { - public: +public: + + /** @brief Constructor + + Initializes command line parser object + + @param argc number of command line arguments (from main()) + @param argv array of command line arguments (from main()) + @param keys string describing acceptable command line parameters (see class description for syntax) + */ CommandLineParser(int argc, const char* const argv[], const String& keys); + + /** @brief Copy constructor */ CommandLineParser(const CommandLineParser& parser); + + /** @brief Assignment operator */ CommandLineParser& operator = (const CommandLineParser& parser); + /** @brief Destructor */ ~CommandLineParser(); + /** @brief Returns application path + + This method returns the path to the executable from the command line (`argv[0]`). + + For example, if the application has been started with such command: + @code{.sh} + $ ./bin/my-executable + @endcode + this method will return `./bin`. + */ String getPathToApplication() const; + /** @brief Access arguments by name + + Returns argument converted to selected type. If the argument is not known or can not be + converted to selected type, the error flag is set (can be checked with @ref check). + + For example, define: + @code{.cpp} + String keys = "{N count||}"; + @endcode + + Call: + @code{.sh} + $ ./my-app -N=20 + # or + $ ./my-app --count=20 + @endcode + + Access: + @code{.cpp} + int N = parser.get("N"); + @endcode + + @param name name of the argument + @param space_delete remove spaces from the left and right of the string + @tparam T the argument will be converted to this type if possible + + @note You can access positional arguments by their `@`-prefixed name: + @code{.cpp} + parser.get("@image"); + @endcode + */ template T get(const String& name, bool space_delete = true) const { @@ -610,6 +683,30 @@ class CV_EXPORTS CommandLineParser return val; } + /** @brief Access positional arguments by index + + Returns argument converted to selected type. Indexes are counted from zero. + + For example, define: + @code{.cpp} + String keys = "{@arg1||}{@arg2||}" + @endcode + + Call: + @code{.sh} + ./my-app abc qwe + @endcode + + Access arguments: + @code{.cpp} + String val_1 = parser.get(0); // returns "abc", arg1 + String val_2 = parser.get(1); // returns "qwe", arg2 + @endcode + + @param index index of the argument + @param space_delete remove spaces from the left and right of the string + @tparam T the argument will be converted to this type if possible + */ template T get(int index, bool space_delete = true) const { @@ -618,13 +715,37 @@ class CV_EXPORTS CommandLineParser return val; } + /** @brief Check if field was provided in the command line + + @param name argument name to check + */ bool has(const String& name) const; + /** @brief Check for parsing errors + + Returns true if error occured while accessing the parameters (bad conversion, missing arguments, + etc.). Call @ref printErrors to print error messages list. 
From a7160d9b128fed82e4eea728ceb95eab1b085f00 Mon Sep 17 00:00:00 2001
From: Maksim Shabunin
Date: Tue, 12 May 2015 17:54:31 +0300
Subject: [PATCH 4/6] Docs: fixed _dest type in cv::compare

---
 modules/core/include/opencv2/core.hpp | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/modules/core/include/opencv2/core.hpp b/modules/core/include/opencv2/core.hpp
index e4c61e43ad..65944767f0 100644
--- a/modules/core/include/opencv2/core.hpp
+++ b/modules/core/include/opencv2/core.hpp
@@ -1282,7 +1282,8 @@ equivalent matrix expressions:
 @endcode
 @param src1 first input array or a scalar; when it is an array, it must have a single channel.
 @param src2 second input array or a scalar; when it is an array, it must have a single channel.
-@param dst output array that has the same size and type as the input arrays.
+@param dst output array of type @ref CV_8U that has the same size and the same number of channels as
+    the input arrays.
 @param cmpop a flag, that specifies correspondence between the arrays (cv::CmpTypes)
 @sa checkRange, min, max, threshold
 */
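A short illustration of the corrected parameter description: regardless of the input depth, the destination comes out as a CV_8U mask holding 255 where the predicate is true and 0 elsewhere. The sample values are arbitrary:

@code{.cpp}
#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    cv::Mat a = (cv::Mat_<float>(1, 3) << 1.f, 5.f, 9.f);
    cv::Mat b = (cv::Mat_<float>(1, 3) << 2.f, 5.f, 7.f);

    cv::Mat mask;
    cv::compare(a, b, mask, cv::CMP_GE);   // element-wise a >= b

    std::cout << mask << std::endl;        // [  0, 255, 255]
    CV_Assert(mask.type() == CV_8UC1);     // CV_8U despite float inputs
    return 0;
}
@endcode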
From 3f91b0d3401e105b431b510e1d7aceb949f1081d Mon Sep 17 00:00:00 2001
From: Maksim Shabunin
Date: Wed, 13 May 2015 17:59:03 +0300
Subject: [PATCH 5/6] Fixed external link in python colorspace tutorial

---
 .../py_imgproc/py_colorspaces/py_colorspaces.markdown | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.markdown b/doc/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.markdown
index 962630ebe2..1418ef95dc 100644
--- a/doc/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.markdown
+++ b/doc/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.markdown
@@ -89,7 +89,7 @@ just by moving your hand in front of camera and many other funny stuffs.
 How to find HSV values to track?
 --------------------------------

-This is a common question found in [stackoverflow.com](www.stackoverflow.com). It is very simple and
+This is a common question found in [stackoverflow.com](http://www.stackoverflow.com). It is very simple and
 you can use the same function, cv2.cvtColor(). Instead of passing an image, you just pass the BGR
 values you want. For example, to find the HSV value of Green, try following commands in Python
 terminal:
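The same trick in C++, for readers following the C++ tutorials: wrap a single BGR color in a 1x1 Mat and convert it. Pure green is assumed here, as in the tutorial's own example:

@code{.cpp}
#include <opencv2/imgproc.hpp>
#include <iostream>

int main()
{
    cv::Mat green(1, 1, CV_8UC3, cv::Scalar(0, 255, 0)); // pure green in BGR order
    cv::Mat hsv;
    cv::cvtColor(green, hsv, cv::COLOR_BGR2HSV);
    std::cout << hsv.at<cv::Vec3b>(0, 0) << std::endl;   // prints [60, 255, 255]
    return 0;
}
@endcode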
From 6d1cbc6458baafc7755f681883f0fa06fb268965 Mon Sep 17 00:00:00 2001
From: Maksim Shabunin
Date: Wed, 13 May 2015 18:42:31 +0300
Subject: [PATCH 6/6] Reorganized user guide

---
 doc/CMakeLists.txt                             |   5 +-
 .../py_face_detection.markdown                 |   2 +-
 doc/root.markdown.in                           |   1 -
 .../core/mat_operations.markdown}              |   7 +-
 .../core/table_of_content_core.markdown        |   3 +
 .../highgui/intelperc.markdown}                |   4 +-
 .../highgui/kinect_openni.markdown}            |   4 +-
 .../highgui/table_of_content_highgui.markdown  |   4 +
 .../documentation_tutorial.markdown            |   3 +-
 .../table_of_content_objdetect.markdown        |   4 +
 .../objdetect/traincascade.markdown}           |   2 +-
 doc/user_guide/ug_features2d.markdown          | 110 ------------------
 doc/user_guide/user_guide.markdown             |   8 --
 13 files changed, 21 insertions(+), 136 deletions(-)
 rename doc/{user_guide/ug_mat.markdown => tutorials/core/mat_operations.markdown} (98%)
 rename doc/{user_guide/ug_intelperc.markdown => tutorials/highgui/intelperc.markdown} (96%)
 rename doc/{user_guide/ug_highgui.markdown => tutorials/highgui/kinect_openni.markdown} (98%)
 rename doc/{user_guide/ug_traincascade.markdown => tutorials/objdetect/traincascade.markdown} (99%)
 delete mode 100644 doc/user_guide/ug_features2d.markdown
 delete mode 100644 doc/user_guide/user_guide.markdown

diff --git a/doc/CMakeLists.txt b/doc/CMakeLists.txt
index a7f5372bfa..bb17f7fe11 100644
--- a/doc/CMakeLists.txt
+++ b/doc/CMakeLists.txt
@@ -111,12 +111,11 @@ if(BUILD_DOCS AND DOXYGEN_FOUND)
     set(faqfile "${CMAKE_CURRENT_SOURCE_DIR}/faq.markdown")
     set(tutorial_path "${CMAKE_CURRENT_SOURCE_DIR}/tutorials")
     set(tutorial_py_path "${CMAKE_CURRENT_SOURCE_DIR}/py_tutorials")
-    set(user_guide_path "${CMAKE_CURRENT_SOURCE_DIR}/user_guide")
     set(example_path "${CMAKE_SOURCE_DIR}/samples")

     # set export variables
-    string(REPLACE ";" " \\\n" CMAKE_DOXYGEN_INPUT_LIST "${rootfile} ; ${faqfile} ; ${paths_include} ; ${paths_doc} ; ${tutorial_path} ; ${tutorial_py_path} ; ${user_guide_path} ; ${paths_tutorial}")
-    string(REPLACE ";" " \\\n" CMAKE_DOXYGEN_IMAGE_PATH "${paths_doc} ; ${tutorial_path} ; ${tutorial_py_path} ; ${user_guide_path} ; ${paths_tutorial}")
+    string(REPLACE ";" " \\\n" CMAKE_DOXYGEN_INPUT_LIST "${rootfile} ; ${faqfile} ; ${paths_include} ; ${paths_doc} ; ${tutorial_path} ; ${tutorial_py_path} ; ${paths_tutorial}")
+    string(REPLACE ";" " \\\n" CMAKE_DOXYGEN_IMAGE_PATH "${paths_doc} ; ${tutorial_path} ; ${tutorial_py_path} ; ${paths_tutorial}")
     # TODO: remove paths_doc from EXAMPLE_PATH after face module tutorials/samples moved to separate folders
     string(REPLACE ";" " \\\n" CMAKE_DOXYGEN_EXAMPLE_PATH "${example_path} ; ${paths_doc} ; ${paths_sample}")
     set(CMAKE_DOXYGEN_LAYOUT "${CMAKE_CURRENT_SOURCE_DIR}/DoxygenLayout.xml")

diff --git a/doc/py_tutorials/py_objdetect/py_face_detection/py_face_detection.markdown b/doc/py_tutorials/py_objdetect/py_face_detection/py_face_detection.markdown
index 7d45e9d403..31763c9c0a 100644
--- a/doc/py_tutorials/py_objdetect/py_face_detection/py_face_detection.markdown
+++ b/doc/py_tutorials/py_objdetect/py_face_detection/py_face_detection.markdown
@@ -85,7 +85,7 @@ Haar-cascade Detection in OpenCV

 OpenCV comes with a trainer as well as detector. If you want to train your own classifier for any
 object like car, planes etc. you can use OpenCV to create one. Its full details are given here:
-[Cascade Classifier Training.](http://docs.opencv.org/doc/user_guide/ug_traincascade.html)
+[Cascade Classifier Training](@ref tutorial_traincascade).

 Here we will deal with detection. OpenCV already contains many pre-trained classifiers for face,
 eyes, smile etc. Those XML files are stored in opencv/data/haarcascades/ folder. Let's create face

diff --git a/doc/root.markdown.in b/doc/root.markdown.in
index 3a781a5ede..3e7a33199f 100644
--- a/doc/root.markdown.in
+++ b/doc/root.markdown.in
@@ -4,7 +4,6 @@ OpenCV modules {#mainpage}
 - @ref intro
 - @ref tutorial_root
 - @ref tutorial_py_root
-- @ref tutorial_user_guide
 - @ref faq
 - @ref citelist

diff --git a/doc/user_guide/ug_mat.markdown b/doc/tutorials/core/mat_operations.markdown
similarity index 98%
rename from doc/user_guide/ug_mat.markdown
rename to doc/tutorials/core/mat_operations.markdown
index d3994a8ea3..15a9869018 100644
--- a/doc/user_guide/ug_mat.markdown
+++ b/doc/tutorials/core/mat_operations.markdown
@@ -1,4 +1,4 @@
-Operations with images {#tutorial_ug_mat}
+Operations with images {#tutorial_mat_operations}
 ======================

 Input/Output
@@ -27,11 +27,6 @@ If you read a jpg file, a 3 channel image is created by default. If you need a g

 @note use imdecode and imencode to read and write image from/to memory rather than a file.

-XML/YAML
---------
-
-TBD
-
 Basic operations with images
 ----------------------------

diff --git a/doc/tutorials/core/table_of_content_core.markdown b/doc/tutorials/core/table_of_content_core.markdown
index 42440708f0..99e004819f 100644
--- a/doc/tutorials/core/table_of_content_core.markdown
+++ b/doc/tutorials/core/table_of_content_core.markdown
@@ -32,6 +32,9 @@ understanding how to manipulate the images on a pixel level.
     You'll find out how to scan images with neighbor access and use the @ref cv::filter2D function
     to apply kernel filters on images.

+- @subpage tutorial_mat_operations
+
+    Reading/writing images from file, accessing pixels, primitive operations, visualizing images.

 - @subpage tutorial_adding_images

diff --git a/doc/user_guide/ug_intelperc.markdown b/doc/tutorials/highgui/intelperc.markdown
similarity index 96%
rename from doc/user_guide/ug_intelperc.markdown
rename to doc/tutorials/highgui/intelperc.markdown
index e5e0ddeb8d..b5f2ed64ed 100644
--- a/doc/user_guide/ug_intelperc.markdown
+++ b/doc/tutorials/highgui/intelperc.markdown
@@ -1,4 +1,4 @@
-Using Creative Senz3D and other Intel Perceptual Computing SDK compatible depth sensors {#tutorial_ug_intelperc}
+Using Creative Senz3D and other Intel Perceptual Computing SDK compatible depth sensors {#tutorial_intelperc}
 =======================================================================================

 Depth sensors compatible with Intel Perceptual Computing SDK are supported through VideoCapture
@@ -78,5 +78,5 @@ there are two flags that should be used to set/get property of the needed gener
 flag value is assumed by default if neither of the two possible values of the property is set.

 For more information please refer to the example of usage
-[intelpercccaptureccpp](https://github.com/Itseez/opencv/tree/master/samples/cpp/intelperc_capture.cpp)
+[intelperc_capture.cpp](https://github.com/Itseez/opencv/tree/master/samples/cpp/intelperc_capture.cpp)
 in opencv/samples/cpp folder.
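This tutorial and the Kinect/OpenNI one below share the same capture pattern: one grab() per frame followed by one retrieve() per requested data stream. A minimal sketch for the OpenNI backend, assuming a device is attached; the IntelPerC backend works the same way with its own constants:

@code{.cpp}
#include <opencv2/videoio.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture capture(cv::CAP_OPENNI);   // or cv::CAP_INTELPERC
    if (!capture.isOpened())
    {
        std::cerr << "Can not open a capture object." << std::endl;
        return -1;
    }

    cv::Mat depthMap, bgrImage;
    if (capture.grab())                         // one grab per frame ...
    {
        // ... then one retrieve per requested data stream
        capture.retrieve(depthMap, cv::CAP_OPENNI_DEPTH_MAP);
        capture.retrieve(bgrImage, cv::CAP_OPENNI_BGR_IMAGE);
    }
    return 0;
}
@endcode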
diff --git a/doc/user_guide/ug_highgui.markdown b/doc/tutorials/highgui/kinect_openni.markdown
similarity index 98%
rename from doc/user_guide/ug_highgui.markdown
rename to doc/tutorials/highgui/kinect_openni.markdown
index ace4721d75..c9c33a2a05 100644
--- a/doc/user_guide/ug_highgui.markdown
+++ b/doc/tutorials/highgui/kinect_openni.markdown
@@ -1,4 +1,4 @@
-Using Kinect and other OpenNI compatible depth sensors {#tutorial_ug_highgui}
+Using Kinect and other OpenNI compatible depth sensors {#tutorial_kinect_openni}
 ======================================================

 Depth sensors compatible with OpenNI (Kinect, XtionPRO, ...) are supported through VideoCapture
@@ -134,5 +134,5 @@ property. The following properties of cameras available through OpenNI interface
 - CAP_OPENNI_DEPTH_GENERATOR_REGISTRATION = CAP_OPENNI_DEPTH_GENERATOR + CAP_PROP_OPENNI_REGISTRATION

 For more information please refer to the example of usage
-[openniccaptureccpp](https://github.com/Itseez/opencv/tree/master/samples/cpp/openni_capture.cpp) in
+[openni_capture.cpp](https://github.com/Itseez/opencv/tree/master/samples/cpp/openni_capture.cpp) in
 opencv/samples/cpp folder.

diff --git a/doc/tutorials/highgui/table_of_content_highgui.markdown b/doc/tutorials/highgui/table_of_content_highgui.markdown
index 2b51dcb7b6..3ff0e0322d 100644
--- a/doc/tutorials/highgui/table_of_content_highgui.markdown
+++ b/doc/tutorials/highgui/table_of_content_highgui.markdown
@@ -37,3 +37,7 @@ use the built-in graphical user interface of the library.
     *Author:* Marvin Smith

     Read common GIS Raster and DEM files to display and manipulate geographic data.
+
+- @subpage tutorial_kinect_openni
+
+- @subpage tutorial_intelperc

diff --git a/doc/tutorials/introduction/documenting_opencv/documentation_tutorial.markdown b/doc/tutorials/introduction/documenting_opencv/documentation_tutorial.markdown
index 051651b3dd..f1d9f9cea1 100644
--- a/doc/tutorials/introduction/documenting_opencv/documentation_tutorial.markdown
+++ b/doc/tutorials/introduction/documenting_opencv/documentation_tutorial.markdown
@@ -77,8 +77,7 @@ Following scheme represents common documentation places for _opencv_ repository:
 ├── doc             - doxygen config files, root page (root.markdown.in), BibTeX file (opencv.bib)
 │   ├── tutorials   - tutorials hierarchy (pages and images)
-│   ├── py_tutorials - python tutorials hierarchy (pages and images)
-│   └── user_guide  - old user guide (pages and images)
+│   └── py_tutorials - python tutorials hierarchy (pages and images)
 ├── modules
 │   └── <modulename>
 │      ├── doc      - documentation pages and images for module

diff --git a/doc/tutorials/objdetect/table_of_content_objdetect.markdown b/doc/tutorials/objdetect/table_of_content_objdetect.markdown
index 0a4c208a8a..e8f4fbc1bf 100644
--- a/doc/tutorials/objdetect/table_of_content_objdetect.markdown
+++ b/doc/tutorials/objdetect/table_of_content_objdetect.markdown
@@ -10,3 +10,7 @@ Ever wondered how your digital camera detects peoples and faces? Look here to fi
     *Author:* Ana Huamán

     Here we learn how to use *objdetect* to find objects in our images or videos
+
+- @subpage tutorial_traincascade
+
+    This tutorial describes the _opencv_traincascade_ application and its parameters.
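The relocated page below documents the _opencv_traincascade_ application in detail; for orientation, a typical invocation looks like the following, where every path and count is a placeholder:

@code{.sh}
# Illustrative only: train a 20-stage LBP cascade from a prepared .vec file
# of positive samples and a list of background (negative) images.
opencv_traincascade -data cascade_dir -vec positives.vec -bg negatives.txt \
                    -numPos 1800 -numNeg 900 -numStages 20 \
                    -featureType LBP -w 24 -h 24
@endcode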
diff --git a/doc/user_guide/ug_traincascade.markdown b/doc/tutorials/objdetect/traincascade.markdown
similarity index 99%
rename from doc/user_guide/ug_traincascade.markdown
rename to doc/tutorials/objdetect/traincascade.markdown
index 1bc7ff5f9a..3e7db48284 100644
--- a/doc/user_guide/ug_traincascade.markdown
+++ b/doc/tutorials/objdetect/traincascade.markdown
@@ -1,4 +1,4 @@
-Cascade Classifier Training {#tutorial_ug_traincascade}
+Cascade Classifier Training {#tutorial_traincascade}
 ===========================

 Introduction

diff --git a/doc/user_guide/ug_features2d.markdown b/doc/user_guide/ug_features2d.markdown
deleted file mode 100644
index 25ec20ab66..0000000000
--- a/doc/user_guide/ug_features2d.markdown
+++ /dev/null
@@ -1,110 +0,0 @@
-Features2d {#tutorial_ug_features2d}
-==========
-
-Detectors
----------
-
-Descriptors
------------
-
-Matching keypoints
-------------------
-
-### The code
-
-We will start with a short sample `opencv/samples/cpp/matcher_simple.cpp`:
-
-@code{.cpp}
-    Mat img1 = imread(argv[1], IMREAD_GRAYSCALE);
-    Mat img2 = imread(argv[2], IMREAD_GRAYSCALE);
-    if(img1.empty() || img2.empty())
-    {
-        printf("Can't read one of the images\n");
-        return -1;
-    }
-
-    // detecting keypoints
-    SurfFeatureDetector detector(400);
-    vector<KeyPoint> keypoints1, keypoints2;
-    detector.detect(img1, keypoints1);
-    detector.detect(img2, keypoints2);
-
-    // computing descriptors
-    SurfDescriptorExtractor extractor;
-    Mat descriptors1, descriptors2;
-    extractor.compute(img1, keypoints1, descriptors1);
-    extractor.compute(img2, keypoints2, descriptors2);
-
-    // matching descriptors
-    BruteForceMatcher<L2<float> > matcher;
-    vector<DMatch> matches;
-    matcher.match(descriptors1, descriptors2, matches);
-
-    // drawing the results
-    namedWindow("matches", 1);
-    Mat img_matches;
-    drawMatches(img1, keypoints1, img2, keypoints2, matches, img_matches);
-    imshow("matches", img_matches);
-    waitKey(0);
-@endcode
-
-### The code explained
-
-Let us break the code down.
-@code{.cpp}
-    Mat img1 = imread(argv[1], IMREAD_GRAYSCALE);
-    Mat img2 = imread(argv[2], IMREAD_GRAYSCALE);
-    if(img1.empty() || img2.empty())
-    {
-        printf("Can't read one of the images\n");
-        return -1;
-    }
-@endcode
-We load two images and check if they are loaded correctly.
-@code{.cpp}
-    // detecting keypoints
-    Ptr<FeatureDetector> detector = FastFeatureDetector::create(15);
-    vector<KeyPoint> keypoints1, keypoints2;
-    detector->detect(img1, keypoints1);
-    detector->detect(img2, keypoints2);
-@endcode
-First, we create an instance of a keypoint detector. All detectors inherit the abstract
-FeatureDetector interface, but the constructors are algorithm-dependent. The first argument to each
-detector usually controls the balance between the amount of keypoints and their stability. The range
-of values is different for different detectors (For instance, *FAST* threshold has the meaning of
-pixel intensity difference and usually varies in the region *[0,40]*. *SURF* threshold is applied to
-a Hessian of an image and usually takes on values larger than *100*), so use defaults in case of
-doubt.
-@code{.cpp}
-    // computing descriptors
-    Ptr<DescriptorExtractor> extractor = SURF::create();
-    Mat descriptors1, descriptors2;
-    extractor->compute(img1, keypoints1, descriptors1);
-    extractor->compute(img2, keypoints2, descriptors2);
-@endcode
-We create an instance of descriptor extractor. Most of the OpenCV descriptors inherit the
-DescriptorExtractor abstract interface. Then we compute descriptors for each of the keypoints. The
-output Mat of the DescriptorExtractor::compute method contains a descriptor in a row *i* for each
-*i*-th keypoint. Note that the method can modify the keypoints vector by removing the keypoints such
-that a descriptor for them is not defined (usually these are the keypoints near image border). The
-method makes sure that the output keypoints and descriptors are consistent with each other (so that
-the number of keypoints is equal to the descriptors row count).
-@code{.cpp}
-    // matching descriptors
-    BruteForceMatcher<L2<float> > matcher;
-    vector<DMatch> matches;
-    matcher.match(descriptors1, descriptors2, matches);
-@endcode
-Now that we have descriptors for both images, we can match them. First, we create a matcher that for
-each descriptor from image 2 does exhaustive search for the nearest descriptor in image 1 using
-Euclidean metric. Manhattan distance is also implemented as well as a Hamming distance for Brief
-descriptor. The output vector matches contains pairs of corresponding points indices.
-@code{.cpp}
-    // drawing the results
-    namedWindow("matches", 1);
-    Mat img_matches;
-    drawMatches(img1, keypoints1, img2, keypoints2, matches, img_matches);
-    imshow("matches", img_matches);
-    waitKey(0);
-@endcode
-The final part of the sample is about visualizing the matching results.

diff --git a/doc/user_guide/user_guide.markdown b/doc/user_guide/user_guide.markdown
deleted file mode 100644
index f940bf866e..0000000000
--- a/doc/user_guide/user_guide.markdown
+++ /dev/null
@@ -1,8 +0,0 @@
-OpenCV User Guide {#tutorial_user_guide}
-=================
-
-- @subpage tutorial_ug_mat
-- @subpage tutorial_ug_features2d
-- @subpage tutorial_ug_highgui
-- @subpage tutorial_ug_traincascade
-- @subpage tutorial_ug_intelperc
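The walkthrough deleted above was written against pre-3.0 classes; SurfFeatureDetector and BruteForceMatcher no longer exist in the main modules. A rough sketch of the same detect/describe/match/draw pipeline with the 3.x API, using ORB in place of the non-free SURF so that it builds out of the box:

@code{.cpp}
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main(int argc, char* argv[])
{
    if (argc != 3) return -1;
    cv::Mat img1 = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread(argv[2], cv::IMREAD_GRAYSCALE);
    if (img1.empty() || img2.empty()) return -1;

    // detecting keypoints and computing descriptors in one call
    cv::Ptr<cv::ORB> orb = cv::ORB::create(500);
    std::vector<cv::KeyPoint> keypoints1, keypoints2;
    cv::Mat descriptors1, descriptors2;
    orb->detectAndCompute(img1, cv::noArray(), keypoints1, descriptors1);
    orb->detectAndCompute(img2, cv::noArray(), keypoints2, descriptors2);

    // matching descriptors (Hamming distance for ORB's binary descriptors)
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<cv::DMatch> matches;
    matcher.match(descriptors1, descriptors2, matches);

    // drawing the results
    cv::Mat img_matches;
    cv::drawMatches(img1, keypoints1, img2, keypoints2, matches, img_matches);
    cv::imshow("matches", img_matches);
    cv::waitKey(0);
    return 0;
}
@endcode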