Reorganized user guide

pull/4012/head
Maksim Shabunin 10 years ago
parent 3f91b0d340
commit 6d1cbc6458
13 changed files:

 1. doc/CMakeLists.txt (5 changed lines)
 2. doc/py_tutorials/py_objdetect/py_face_detection/py_face_detection.markdown (2 changed lines)
 3. doc/root.markdown.in (1 changed line)
 4. doc/tutorials/core/mat_operations.markdown (7 changed lines)
 5. doc/tutorials/core/table_of_content_core.markdown (3 changed lines)
 6. doc/tutorials/highgui/intelperc.markdown (4 changed lines)
 7. doc/tutorials/highgui/kinect_openni.markdown (4 changed lines)
 8. doc/tutorials/highgui/table_of_content_highgui.markdown (4 changed lines)
 9. doc/tutorials/introduction/documenting_opencv/documentation_tutorial.markdown (3 changed lines)
10. doc/tutorials/objdetect/table_of_content_objdetect.markdown (4 changed lines)
11. doc/tutorials/objdetect/traincascade.markdown (2 changed lines)
12. doc/user_guide/ug_features2d.markdown (110 changed lines)
13. doc/user_guide/user_guide.markdown (8 changed lines)

@@ -111,12 +111,11 @@ if(BUILD_DOCS AND DOXYGEN_FOUND)
set(faqfile "${CMAKE_CURRENT_SOURCE_DIR}/faq.markdown")
set(tutorial_path "${CMAKE_CURRENT_SOURCE_DIR}/tutorials")
set(tutorial_py_path "${CMAKE_CURRENT_SOURCE_DIR}/py_tutorials")
-set(user_guide_path "${CMAKE_CURRENT_SOURCE_DIR}/user_guide")
set(example_path "${CMAKE_SOURCE_DIR}/samples")
# set export variables
-string(REPLACE ";" " \\\n" CMAKE_DOXYGEN_INPUT_LIST "${rootfile} ; ${faqfile} ; ${paths_include} ; ${paths_doc} ; ${tutorial_path} ; ${tutorial_py_path} ; ${user_guide_path} ; ${paths_tutorial}")
-string(REPLACE ";" " \\\n" CMAKE_DOXYGEN_IMAGE_PATH "${paths_doc} ; ${tutorial_path} ; ${tutorial_py_path} ; ${user_guide_path} ; ${paths_tutorial}")
+string(REPLACE ";" " \\\n" CMAKE_DOXYGEN_INPUT_LIST "${rootfile} ; ${faqfile} ; ${paths_include} ; ${paths_doc} ; ${tutorial_path} ; ${tutorial_py_path} ; ${paths_tutorial}")
+string(REPLACE ";" " \\\n" CMAKE_DOXYGEN_IMAGE_PATH "${paths_doc} ; ${tutorial_path} ; ${tutorial_py_path} ; ${paths_tutorial}")
# TODO: remove paths_doc from EXAMPLE_PATH after face module tutorials/samples moved to separate folders
string(REPLACE ";" " \\\n" CMAKE_DOXYGEN_EXAMPLE_PATH "${example_path} ; ${paths_doc} ; ${paths_sample}")
set(CMAKE_DOXYGEN_LAYOUT "${CMAKE_CURRENT_SOURCE_DIR}/DoxygenLayout.xml")

@@ -85,7 +85,7 @@ Haar-cascade Detection in OpenCV
OpenCV comes with a trainer as well as a detector. If you want to train your own classifier for any
object, such as cars or planes, you can use OpenCV to create one. Its full details are given here:
-[Cascade Classifier Training.](http://docs.opencv.org/doc/user_guide/ug_traincascade.html)
+[Cascade Classifier Training](@ref tutorial_traincascade).
Here we will deal with detection. OpenCV already contains many pre-trained classifiers for faces,
eyes, smiles etc. Those XML files are stored in the opencv/data/haarcascades/ folder. Let's create face
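(Not part of the diff.) As a hedged illustration of the detection side described above, here is a minimal C++ sketch; the cascade file name is the stock frontal-face model from opencv/data/haarcascades/, and photo.jpg is a placeholder input.
@code{.cpp}
#include <opencv2/objdetect.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <cstdio>
#include <vector>

using namespace cv;

int main()
{
    // Load one of the pre-trained cascades shipped in opencv/data/haarcascades/
    CascadeClassifier face_cascade;
    if (!face_cascade.load("haarcascade_frontalface_default.xml"))
    {
        printf("Can't load the cascade file\n");
        return -1;
    }
    Mat img = imread("photo.jpg", IMREAD_GRAYSCALE); // placeholder input image
    if (img.empty())
    {
        printf("Can't read the input image\n");
        return -1;
    }
    equalizeHist(img, img); // improve contrast before detection
    std::vector<Rect> faces;
    face_cascade.detectMultiScale(img, faces); // bounding boxes of detected faces
    printf("Detected %d face(s)\n", (int)faces.size());
    return 0;
}
@endcode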

@@ -4,7 +4,6 @@ OpenCV modules {#mainpage}
- @ref intro
- @ref tutorial_root
- @ref tutorial_py_root
-- @ref tutorial_user_guide
- @ref faq
- @ref citelist

@@ -1,4 +1,4 @@
-Operations with images {#tutorial_ug_mat}
+Operations with images {#tutorial_mat_operations}
======================
Input/Output
@@ -27,11 +27,6 @@ If you read a jpg file, a 3 channel image is created by default. If you need a g
@note use imdecode and imencode to read and write image from/to memory rather than a file.
-XML/YAML
---------
-TBD
Basic operations with images
----------------------------

@@ -32,6 +32,9 @@ understanding how to manipulate the images on a pixel level.
You'll find out how to scan images with neighbor access and use the @ref cv::filter2D
function to apply kernel filters on images.
+- @subpage tutorial_mat_operations
+    Reading/writing images from file, accessing pixels, primitive operations, visualizing images.
- @subpage tutorial_adding_images

@@ -1,4 +1,4 @@
-Using Creative Senz3D and other Intel Perceptual Computing SDK compatible depth sensors {#tutorial_ug_intelperc}
+Using Creative Senz3D and other Intel Perceptual Computing SDK compatible depth sensors {#tutorial_intelperc}
=======================================================================================
Depth sensors compatible with Intel Perceptual Computing SDK are supported through VideoCapture
@@ -78,5 +78,5 @@ there are two flags that should be used to set/get property of the needed genera
flag value is assumed by default if neither of the two possible values of the property is set.
For more information, please refer to the usage example
-[intelpercccaptureccpp](https://github.com/Itseez/opencv/tree/master/samples/cpp/intelperc_capture.cpp)
+[intelperc_capture.cpp](https://github.com/Itseez/opencv/tree/master/samples/cpp/intelperc_capture.cpp)
in the opencv/samples/cpp folder.
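(Not part of the diff.) For orientation, a minimal hedged C++ sketch of grabbing frames through this interface, assuming OpenCV was built with Intel Perceptual Computing SDK support:
@code{.cpp}
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>

using namespace cv;

int main()
{
    VideoCapture capture(CAP_INTELPERC); // open the first compatible depth sensor
    if (!capture.isOpened())
        return -1;
    for (;;)
    {
        if (!capture.grab()) // fetch one frame from all generators at once
            break;
        Mat depthMap, image;
        capture.retrieve(depthMap, CAP_INTELPERC_DEPTH_MAP); // depth generator data
        capture.retrieve(image, CAP_INTELPERC_IMAGE);        // image generator data
        imshow("depth", depthMap);
        imshow("image", image);
        if (waitKey(30) >= 0)
            break;
    }
    return 0;
}
@endcode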

@@ -1,4 +1,4 @@
-Using Kinect and other OpenNI compatible depth sensors {#tutorial_ug_highgui}
+Using Kinect and other OpenNI compatible depth sensors {#tutorial_kinect_openni}
======================================================
Depth sensors compatible with OpenNI (Kinect, XtionPRO, ...) are supported through VideoCapture
@@ -134,5 +134,5 @@ property. The following properties of cameras available through OpenNI interface
- CAP_OPENNI_DEPTH_GENERATOR_REGISTRATION = CAP_OPENNI_DEPTH_GENERATOR + CAP_PROP_OPENNI_REGISTRATION
For more information, please refer to the usage example
-[openniccaptureccpp](https://github.com/Itseez/opencv/tree/master/samples/cpp/openni_capture.cpp) in
+[openni_capture.cpp](https://github.com/Itseez/opencv/tree/master/samples/cpp/openni_capture.cpp) in
the opencv/samples/cpp folder.
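(Not part of the diff.) Similarly, a hedged minimal C++ sketch for OpenNI devices; the registration property mentioned above is set so the depth map is aligned to the color image:
@code{.cpp}
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>

using namespace cv;

int main()
{
    VideoCapture capture(CAP_OPENNI); // open the first OpenNI-compatible sensor
    if (!capture.isOpened())
        return -1;
    // Map depth to the color frame (the registration property listed above)
    capture.set(CAP_OPENNI_DEPTH_GENERATOR_REGISTRATION, 1);
    for (;;)
    {
        if (!capture.grab())
            break;
        Mat depthMap, bgrImage;
        capture.retrieve(depthMap, CAP_OPENNI_DEPTH_MAP);  // CV_16UC1, depth in mm
        capture.retrieve(bgrImage, CAP_OPENNI_BGR_IMAGE);  // regular color frame
        imshow("depth", depthMap);
        imshow("image", bgrImage);
        if (waitKey(30) >= 0)
            break;
    }
    return 0;
}
@endcode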

@@ -37,3 +37,7 @@ use the built-in graphical user interface of the library.
*Author:* Marvin Smith
Read common GIS Raster and DEM files to display and manipulate geographic data.
+- @subpage tutorial_kinect_openni
+- @subpage tutorial_intelperc

@@ -77,8 +77,7 @@ The following scheme represents common documentation places for the _opencv_ repository:
<opencv>
├── doc - doxygen config files, root page (root.markdown.in), BibTeX file (opencv.bib)
│   ├── tutorials - tutorials hierarchy (pages and images)
-│   ├── py_tutorials - python tutorials hierarchy (pages and images)
-│   └── user_guide - old user guide (pages and images)
+│   └── py_tutorials - python tutorials hierarchy (pages and images)
├── modules
│   └── <modulename>
│       ├── doc - documentation pages and images for module

@@ -10,3 +10,7 @@ Ever wondered how your digital camera detects people and faces? Look here to fi
*Author:* Ana Huamán
Here we learn how to use *objdetect* to find objects in our images or videos
+- @subpage tutorial_traincascade
+    This tutorial describes the _opencv_traincascade_ application and its parameters.

@@ -1,4 +1,4 @@
-Cascade Classifier Training {#tutorial_ug_traincascade}
+Cascade Classifier Training {#tutorial_traincascade}
===========================
Introduction

@@ -1,110 +0,0 @@
Features2d {#tutorial_ug_features2d}
==========
Detectors
---------
Descriptors
-----------
Matching keypoints
------------------
### The code
We will start with a short sample `opencv/samples/cpp/matcher_simple.cpp`:
@code{.cpp}
Mat img1 = imread(argv[1], IMREAD_GRAYSCALE);
Mat img2 = imread(argv[2], IMREAD_GRAYSCALE);
if(img1.empty() || img2.empty())
{
printf("Can't read one of the images\n");
return -1;
}
// detecting keypoints
SurfFeatureDetector detector(400);
vector<KeyPoint> keypoints1, keypoints2;
detector.detect(img1, keypoints1);
detector.detect(img2, keypoints2);
// computing descriptors
SurfDescriptorExtractor extractor;
Mat descriptors1, descriptors2;
extractor.compute(img1, keypoints1, descriptors1);
extractor.compute(img2, keypoints2, descriptors2);
// matching descriptors
BruteForceMatcher<L2<float> > matcher;
vector<DMatch> matches;
matcher.match(descriptors1, descriptors2, matches);
// drawing the results
namedWindow("matches", 1);
Mat img_matches;
drawMatches(img1, keypoints1, img2, keypoints2, matches, img_matches);
imshow("matches", img_matches);
waitKey(0);
@endcode
### The code explained
Let us break the code down.
@code{.cpp}
Mat img1 = imread(argv[1], IMREAD_GRAYSCALE);
Mat img2 = imread(argv[2], IMREAD_GRAYSCALE);
if(img1.empty() || img2.empty())
{
printf("Can't read one of the images\n");
return -1;
}
@endcode
We load two images and check if they are loaded correctly.
@code{.cpp}
// detecting keypoints
Ptr<FeatureDetector> detector = FastFeatureDetector::create(15);
vector<KeyPoint> keypoints1, keypoints2;
detector->detect(img1, keypoints1);
detector->detect(img2, keypoints2);
@endcode
First, we create an instance of a keypoint detector. All detectors inherit the abstract
FeatureDetector interface, but the constructors are algorithm-dependent. The first argument to each
detector usually controls the balance between the number of keypoints and their stability. The
range of values differs between detectors (for instance, the *FAST* threshold is a pixel intensity
difference and usually varies in the region *[0,40]*, while the *SURF* threshold is applied to the
Hessian of the image and usually takes values larger than *100*), so use the defaults when in
doubt.
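As a hedged illustration of those ranges (not in the original sample; the threshold values here are arbitrary examples):
@code{.cpp}
// Thresholds live on very different scales depending on the detector:
Ptr<FeatureDetector> fast = FastFeatureDetector::create(20); // intensity difference, roughly [0,40]
Ptr<SURF> surf = SURF::create(400);                          // Hessian threshold, usually > 100
vector<KeyPoint> keypointsFast, keypointsSurf;
fast->detect(img1, keypointsFast);
surf->detect(img1, keypointsSurf);
@endcode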
@code{.cpp}
// computing descriptors
Ptr<SURF> extractor = SURF::create();
Mat descriptors1, descriptors2;
extractor->compute(img1, keypoints1, descriptors1);
extractor->compute(img2, keypoints2, descriptors2);
@endcode
We create an instance of a descriptor extractor. Most OpenCV descriptor extractors inherit the
DescriptorExtractor abstract interface. Then we compute descriptors for each of the keypoints. The
output Mat of the DescriptorExtractor::compute method contains the descriptor for the *i*-th
keypoint in row *i*. Note that the method can modify the keypoints vector by removing keypoints for
which a descriptor cannot be computed (usually those near the image border). The method makes sure
that the output keypoints and descriptors are consistent with each other (so that the number of
keypoints is equal to the descriptors row count).
@code{.cpp}
// matching descriptors
BruteForceMatcher<L2<float> > matcher;
vector<DMatch> matches;
matcher.match(descriptors1, descriptors2, matches);
@endcode
Now that we have descriptors for both images, we can match them. First, we create a matcher that,
for each descriptor from the first image, performs an exhaustive search for the nearest descriptor
in the second image using the Euclidean metric. Manhattan distance is also implemented, as well as
Hamming distance for the BRIEF descriptor. The output vector matches contains pairs of
corresponding point indices.
@code{.cpp}
// drawing the results
namedWindow("matches", 1);
Mat img_matches;
drawMatches(img1, keypoints1, img2, keypoints2, matches, img_matches);
imshow("matches", img_matches);
waitKey(0);
@endcode
The final part of the sample is about visualizing the matching results.

@@ -1,8 +0,0 @@
OpenCV User Guide {#tutorial_user_guide}
=================
- @subpage tutorial_ug_mat
- @subpage tutorial_ug_features2d
- @subpage tutorial_ug_highgui
- @subpage tutorial_ug_traincascade
- @subpage tutorial_ug_intelperc