diff --git a/doc/py_tutorials/py_bindings/py_bindings_basics/py_bindings_basics.markdown b/doc/py_tutorials/py_bindings/py_bindings_basics/py_bindings_basics.markdown
index caea340722..ed2d3510f3 100644
--- a/doc/py_tutorials/py_bindings/py_bindings_basics/py_bindings_basics.markdown
+++ b/doc/py_tutorials/py_bindings/py_bindings_basics/py_bindings_basics.markdown
@@ -144,4 +144,3 @@ proper macros in their appropriate positions. Rest is done by generator scripts.
 may be an exceptional cases where generator scripts cannot create the wrappers. Such functions
 need to be handled manually. But most of the time, a code written according to OpenCV coding
 guidelines will be automatically wrapped by generator scripts.
-
diff --git a/doc/py_tutorials/py_calib3d/py_depthmap/py_depthmap.markdown b/doc/py_tutorials/py_calib3d/py_depthmap/py_depthmap.markdown
index 02a964d13d..5ef3380159 100644
--- a/doc/py_tutorials/py_calib3d/py_depthmap/py_depthmap.markdown
+++ b/doc/py_tutorials/py_calib3d/py_depthmap/py_depthmap.markdown
@@ -65,4 +65,3 @@ Exercises
 
 -# OpenCV samples contain an example of generating disparity map and its 3D reconstruction. Check
    stereo_match.py in OpenCV-Python samples.
-
diff --git a/doc/py_tutorials/py_calib3d/py_epipolar_geometry/py_epipolar_geometry.markdown b/doc/py_tutorials/py_calib3d/py_epipolar_geometry/py_epipolar_geometry.markdown
index 525b6567aa..0b63515c53 100644
--- a/doc/py_tutorials/py_calib3d/py_epipolar_geometry/py_epipolar_geometry.markdown
+++ b/doc/py_tutorials/py_calib3d/py_epipolar_geometry/py_epipolar_geometry.markdown
@@ -172,4 +172,3 @@ Exercises
 2. Fundamental Matrix estimation is sensitive to quality of matches, outliers etc. It becomes
    worse when all selected matches lie on the same plane.
    [Check this discussion](http://answers.opencv.org/question/18125/epilines-not-correct/).
-
diff --git a/doc/py_tutorials/py_feature2d/py_brief/py_brief.markdown b/doc/py_tutorials/py_feature2d/py_brief/py_brief.markdown
index 901731e093..f1fc1e0ec7 100644
--- a/doc/py_tutorials/py_feature2d/py_brief/py_brief.markdown
+++ b/doc/py_tutorials/py_feature2d/py_brief/py_brief.markdown
@@ -80,4 +80,3 @@ Additional Resources
    Independent Elementary Features", 11th European Conference on Computer Vision (ECCV), Heraklion,
    Crete. LNCS Springer, September 2010.
 2. LSH (Locality Sensitive Hasing) at wikipedia.
-
diff --git a/doc/py_tutorials/py_gui/py_mouse_handling/py_mouse_handling.markdown b/doc/py_tutorials/py_gui/py_mouse_handling/py_mouse_handling.markdown
index 879753fe93..3862b49181 100644
--- a/doc/py_tutorials/py_gui/py_mouse_handling/py_mouse_handling.markdown
+++ b/doc/py_tutorials/py_gui/py_mouse_handling/py_mouse_handling.markdown
@@ -109,4 +109,3 @@ Exercises
 
 -# In our last example, we drew filled rectangle. You modify the code to draw an unfilled
    rectangle.
-
diff --git a/doc/py_tutorials/py_gui/py_trackbar/py_trackbar.markdown b/doc/py_tutorials/py_gui/py_trackbar/py_trackbar.markdown
index 2294311722..1d6e6aebf3 100644
--- a/doc/py_tutorials/py_gui/py_trackbar/py_trackbar.markdown
+++ b/doc/py_tutorials/py_gui/py_trackbar/py_trackbar.markdown
@@ -72,4 +72,3 @@ Exercises
 
 -# Create a Paint application with adjustable colors and brush radius using trackbars. For
    drawing, refer previous tutorial on mouse handling.
-
diff --git a/doc/py_tutorials/py_ml/py_svm/py_svm_opencv/py_svm_opencv.markdown b/doc/py_tutorials/py_ml/py_svm/py_svm_opencv/py_svm_opencv.markdown
index 032f01601f..ffd38f881c 100644
--- a/doc/py_tutorials/py_ml/py_svm/py_svm_opencv/py_svm_opencv.markdown
+++ b/doc/py_tutorials/py_ml/py_svm/py_svm_opencv/py_svm_opencv.markdown
@@ -135,4 +135,3 @@ Exercises
 
 -# OpenCV samples contain digits.py which applies a slight improvement of the above method to get
    improved result. It also contains the reference. Check it and understand it.
-
diff --git a/doc/py_tutorials/py_photo/py_inpainting/py_inpainting.markdown b/doc/py_tutorials/py_photo/py_inpainting/py_inpainting.markdown
index b33e69ca0b..8dbfee0213 100644
--- a/doc/py_tutorials/py_photo/py_inpainting/py_inpainting.markdown
+++ b/doc/py_tutorials/py_photo/py_inpainting/py_inpainting.markdown
@@ -87,4 +87,3 @@ Exercises
    Adobe Photoshop. On further search, I was able to find that same technique is already there in
    GIMP with different name, "Resynthesizer" (You need to install separate plugin). I am sure you
    will enjoy the technique.
-
diff --git a/doc/py_tutorials/py_setup/py_intro/py_intro.markdown b/doc/py_tutorials/py_setup/py_intro/py_intro.markdown
index 2dbf37d45e..007a71ce72 100644
--- a/doc/py_tutorials/py_setup/py_intro/py_intro.markdown
+++ b/doc/py_tutorials/py_setup/py_intro/py_intro.markdown
@@ -84,4 +84,3 @@ Additional Resources
 3. [Numpy Examples List](http://wiki.scipy.org/Numpy_Example_List)
 4. [OpenCV Documentation](http://docs.opencv.org/)
 5. [OpenCV Forum](http://answers.opencv.org/questions/)
-
diff --git a/doc/py_tutorials/py_video/py_lucas_kanade/py_lucas_kanade.markdown b/doc/py_tutorials/py_video/py_lucas_kanade/py_lucas_kanade.markdown
index 055041b5ce..1ea6cd69dc 100644
--- a/doc/py_tutorials/py_video/py_lucas_kanade/py_lucas_kanade.markdown
+++ b/doc/py_tutorials/py_video/py_lucas_kanade/py_lucas_kanade.markdown
@@ -223,4 +223,3 @@ Exercises
 
 -# Check the code in samples/python2/lk_track.py. Try to understand the code.
 2. Check the code in samples/python2/opt_flow.py. Try to understand the code.
-
diff --git a/doc/py_tutorials/py_video/py_meanshift/py_meanshift.markdown b/doc/py_tutorials/py_video/py_meanshift/py_meanshift.markdown
index 18f05093df..499cc6696f 100644
--- a/doc/py_tutorials/py_video/py_meanshift/py_meanshift.markdown
+++ b/doc/py_tutorials/py_video/py_meanshift/py_meanshift.markdown
@@ -183,4 +183,3 @@ Exercises
 
 -# OpenCV comes with a Python sample on interactive demo of camshift.
    Use it, hack it, understand it.
-
diff --git a/doc/tutorials/calib3d/camera_calibration_square_chess/camera_calibration_square_chess.markdown b/doc/tutorials/calib3d/camera_calibration_square_chess/camera_calibration_square_chess.markdown
index b9b746273b..2cf9d9adfb 100644
--- a/doc/tutorials/calib3d/camera_calibration_square_chess/camera_calibration_square_chess.markdown
+++ b/doc/tutorials/calib3d/camera_calibration_square_chess/camera_calibration_square_chess.markdown
@@ -52,4 +52,3 @@ image.
 opencv/samples/cpp/calibration.cpp, function computeReprojectionErrors).
 
 Question: how to calculate the distance from the camera origin to any of the corners?
-
diff --git a/doc/tutorials/core/basic_geometric_drawing/basic_geometric_drawing.markdown b/doc/tutorials/core/basic_geometric_drawing/basic_geometric_drawing.markdown
index 9a921f2079..f7888590c9 100644
--- a/doc/tutorials/core/basic_geometric_drawing/basic_geometric_drawing.markdown
+++ b/doc/tutorials/core/basic_geometric_drawing/basic_geometric_drawing.markdown
@@ -241,4 +241,3 @@ Result
 Compiling and running your program should give you a result like this:
 
 ![](images/Drawing_1_Tutorial_Result_0.png)
-
diff --git a/doc/tutorials/features2d/detection_of_planar_objects/detection_of_planar_objects.markdown b/doc/tutorials/features2d/detection_of_planar_objects/detection_of_planar_objects.markdown
index 36f67f5777..4112b18738 100644
--- a/doc/tutorials/features2d/detection_of_planar_objects/detection_of_planar_objects.markdown
+++ b/doc/tutorials/features2d/detection_of_planar_objects/detection_of_planar_objects.markdown
@@ -50,5 +50,3 @@ known planar objects in scenes.
        Mat points1Projected;
        perspectiveTransform(Mat(points1), points1Projected, H);
 -   Use drawMatches for drawing inliers.
-
-
diff --git a/doc/tutorials/features2d/feature_homography/feature_homography.markdown b/doc/tutorials/features2d/feature_homography/feature_homography.markdown
index 5eacdf35ea..dae120b898 100644
--- a/doc/tutorials/features2d/feature_homography/feature_homography.markdown
+++ b/doc/tutorials/features2d/feature_homography/feature_homography.markdown
@@ -137,5 +137,3 @@ Result
 -# And here is the result for the detected object (highlighted in green)
 
    ![](images/Feature_Homography_Result.jpg)
-
-
diff --git a/doc/tutorials/features2d/trackingmotion/corner_subpixeles/corner_subpixeles.markdown b/doc/tutorials/features2d/trackingmotion/corner_subpixeles/corner_subpixeles.markdown
index 70323efd41..be9b9762c6 100644
--- a/doc/tutorials/features2d/trackingmotion/corner_subpixeles/corner_subpixeles.markdown
+++ b/doc/tutorials/features2d/trackingmotion/corner_subpixeles/corner_subpixeles.markdown
@@ -127,4 +127,3 @@ Result
 Here is the result:
 
 ![](images/Corner_Subpixeles_Result.jpg)
-
diff --git a/doc/tutorials/features2d/trackingmotion/generic_corner_detector/generic_corner_detector.markdown b/doc/tutorials/features2d/trackingmotion/generic_corner_detector/generic_corner_detector.markdown
index b64bc49f0d..1d04c240ad 100644
--- a/doc/tutorials/features2d/trackingmotion/generic_corner_detector/generic_corner_detector.markdown
+++ b/doc/tutorials/features2d/trackingmotion/generic_corner_detector/generic_corner_detector.markdown
@@ -33,4 +33,3 @@ Result
 ![](images/My_Harris_corner_detector_Result.jpg)
 
 ![](images/My_Shi_Tomasi_corner_detector_Result.jpg)
-
diff --git a/doc/tutorials/features2d/trackingmotion/good_features_to_track/good_features_to_track.markdown b/doc/tutorials/features2d/trackingmotion/good_features_to_track/good_features_to_track.markdown
index 80c96ffb6b..1b939765c6 100644
--- a/doc/tutorials/features2d/trackingmotion/good_features_to_track/good_features_to_track.markdown
+++ b/doc/tutorials/features2d/trackingmotion/good_features_to_track/good_features_to_track.markdown
@@ -112,4 +112,3 @@ Result
 ------
 
 ![](images/Feature_Detection_Result_a.jpg)
-
diff --git a/doc/tutorials/features2d/trackingmotion/harris_detector/harris_detector.markdown b/doc/tutorials/features2d/trackingmotion/harris_detector/harris_detector.markdown
index fc89519f80..add7db8a11 100644
--- a/doc/tutorials/features2d/trackingmotion/harris_detector/harris_detector.markdown
+++ b/doc/tutorials/features2d/trackingmotion/harris_detector/harris_detector.markdown
@@ -206,4 +206,3 @@ The original image:
 The detected corners are surrounded by a small black circle
 
 ![](images/Harris_Detector_Result.jpg)
-
diff --git a/doc/tutorials/tutorials.markdown b/doc/tutorials/tutorials.markdown
index 886143d42a..552420c3c9 100644
--- a/doc/tutorials/tutorials.markdown
+++ b/doc/tutorials/tutorials.markdown
@@ -74,4 +74,3 @@ As always, we would be happy to hear your comments and receive your contribution
 -   @subpage tutorial_table_of_content_viz
 
     These tutorials show how to use Viz module effectively.
-
diff --git a/doc/user_guide/ug_highgui.markdown b/doc/user_guide/ug_highgui.markdown
index 3213627f87..2832420643 100644
--- a/doc/user_guide/ug_highgui.markdown
+++ b/doc/user_guide/ug_highgui.markdown
@@ -1,8 +1,5 @@
-HighGUI {#tutorial_ug_highgui}
-=======
-
-Using Kinect and other OpenNI compatible depth sensors
-------------------------------------------------------
+Using Kinect and other OpenNI compatible depth sensors {#tutorial_ug_highgui}
+======================================================
 
 Depth sensors compatible with OpenNI (Kinect, XtionPRO, ...) are supported through VideoCapture
 class. Depth map, RGB image and some other formats of output can be retrieved by using familiar
diff --git a/doc/user_guide/ug_intelperc.markdown b/doc/user_guide/ug_intelperc.markdown
index da81e1de0c..e5e0ddeb8d 100644
--- a/doc/user_guide/ug_intelperc.markdown
+++ b/doc/user_guide/ug_intelperc.markdown
@@ -1,8 +1,5 @@
-HighGUI {#tutorial_ug_intelperc}
-=======
-
-Using Creative Senz3D and other Intel Perceptual Computing SDK compatible depth sensors
---------------------------------------------------------------------------------------
+Using Creative Senz3D and other Intel Perceptual Computing SDK compatible depth sensors {#tutorial_ug_intelperc}
+=======================================================================================
 
 Depth sensors compatible with Intel Perceptual Computing SDK are supported through VideoCapture
 class. Depth map, RGB image and some other formats of output can be retrieved by using familiar