From 6cd730a02c2f629f03941e00e9e6951cad411d78 Mon Sep 17 00:00:00 2001
From: Nishanth
Date: Fri, 9 Aug 2024 03:34:44 -0400
Subject: [PATCH] Merge pull request #26002 from nishanthdass:doc/missing-fields-python-tutorials

Remove empty Additional Resources and Exercises sections from tutorials #26002

### Pull Request Readiness Checklist

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [x] There is a reference to the original bug report and related work
- [ ] There is an accuracy test, performance test and test data in the opencv_extra repository, if applicable
      Patch to opencv_extra has the same branch name.
- [ ] The feature is well documented and sample code can be built with the project CMake

This PR is in response to issue [26001](https://github.com/opencv/opencv/issues/26001).

This pull request addresses the issue of empty "Additional Resources" and "Exercises" sections in several OpenCV-Python tutorials. The empty sections have been removed to improve the clarity and consistency of the documentation.
---
 .../py_calib3d/py_calibration/py_calibration.markdown | 2 --
 .../py_epipolar_geometry/py_epipolar_geometry.markdown | 3 ---
 doc/py_tutorials/py_calib3d/py_pose/py_pose.markdown | 6 ------
 doc/py_tutorials/py_core/py_basic_ops/py_basic_ops.markdown | 6 ------
 .../py_image_arithmetics/py_image_arithmetics.markdown | 3 ---
 .../py_core/py_optimization/py_optimization.markdown | 3 ---
 doc/py_tutorials/py_feature2d/py_fast/py_fast.markdown | 3 ---
 .../py_feature_homography/py_feature_homography.markdown | 6 ------
 .../py_features_meaning/py_features_meaning.markdown | 6 ------
 .../py_feature2d/py_matcher/py_matcher.markdown | 6 ------
 doc/py_tutorials/py_feature2d/py_orb/py_orb.markdown | 3 ---
 .../py_feature2d/py_shi_tomasi/py_shi_tomasi.markdown | 6 ------
 .../py_feature2d/py_sift_intro/py_sift_intro.markdown | 6 ------
 .../py_feature2d/py_surf_intro/py_surf_intro.markdown | 6 ------
 .../py_gui/py_mouse_handling/py_mouse_handling.markdown | 2 --
 .../py_gui/py_video_display/py_video_display.markdown | 6 ------
 .../py_imgproc/py_colorspaces/py_colorspaces.markdown | 3 ---
 .../py_contour_features/py_contour_features.markdown | 6 ------
 .../py_contour_properties/py_contour_properties.markdown | 3 ---
 .../py_contours_begin/py_contours_begin.markdown | 6 ------
 .../py_contours_hierarchy/py_contours_hierarchy.markdown | 6 ------
 .../py_contours_more_functions.markdown | 3 ---
 .../py_imgproc/py_filtering/py_filtering.markdown | 3 ---
 .../py_geometric_transformations.markdown | 3 ---
 doc/py_tutorials/py_imgproc/py_grabcut/py_grabcut.markdown | 3 ---
 .../py_imgproc/py_gradients/py_gradients.markdown | 6 ------
 .../py_histograms/py_2d_histogram/py_2d_histogram.markdown | 6 ------
 .../py_histogram_backprojection.markdown | 3 ---
 .../py_histogram_begins/py_histogram_begins.markdown | 3 ---
 .../py_histogram_equalization.markdown | 3 ---
 .../py_imgproc/py_houghcircles/py_houghcircles.markdown | 6 ------
 .../py_imgproc/py_houghlines/py_houghlines.markdown | 3 ---
 .../py_morphological_ops/py_morphological_ops.markdown | 3 ---
 .../py_imgproc/py_pyramids/py_pyramids.markdown | 3 ---
 .../py_template_matching/py_template_matching.markdown | 6 ------
 .../py_fourier_transform/py_fourier_transform.markdown | 3 ---
 .../py_kmeans/py_kmeans_opencv/py_kmeans_opencv.markdown | 6 ------
 .../py_kmeans_understanding.markdown | 3 ---
 .../py_ml/py_svm/py_svm_basics/py_svm_basics.markdown | 2 --
 .../py_photo/py_non_local_means/py_non_local_means.markdown | 3 ---
 .../py_setup/py_setup_in_fedora/py_setup_in_fedora.markdown | 3 ---
 .../py_setup_in_windows/py_setup_in_windows.markdown | 3 ---
 42 files changed, 174 deletions(-)

diff --git a/doc/py_tutorials/py_calib3d/py_calibration/py_calibration.markdown b/doc/py_tutorials/py_calib3d/py_calibration/py_calibration.markdown
index 182f1c845b..06716fe5dc 100644
--- a/doc/py_tutorials/py_calib3d/py_calibration/py_calibration.markdown
+++ b/doc/py_tutorials/py_calib3d/py_calibration/py_calibration.markdown
@@ -216,8 +216,6 @@ for i in range(len(objpoints)):
 print( "total error: {}".format(mean_error/len(objpoints)) )
 @endcode

-Additional Resources
---------------------

 Exercises
 ---------
diff --git a/doc/py_tutorials/py_calib3d/py_epipolar_geometry/py_epipolar_geometry.markdown b/doc/py_tutorials/py_calib3d/py_epipolar_geometry/py_epipolar_geometry.markdown
index ada22222cb..811e940714 100644
--- a/doc/py_tutorials/py_calib3d/py_epipolar_geometry/py_epipolar_geometry.markdown
+++ b/doc/py_tutorials/py_calib3d/py_epipolar_geometry/py_epipolar_geometry.markdown
@@ -158,9 +158,6 @@ side. That meeting point is the epipole.
 For better results, images with good resolution and many non-planar points should be used.

-Additional Resources
---------------------
-
 Exercises
 ---------
diff --git a/doc/py_tutorials/py_calib3d/py_pose/py_pose.markdown b/doc/py_tutorials/py_calib3d/py_pose/py_pose.markdown
index 15dd8584fa..cc06da6902 100644
--- a/doc/py_tutorials/py_calib3d/py_pose/py_pose.markdown
+++ b/doc/py_tutorials/py_calib3d/py_pose/py_pose.markdown
@@ -119,9 +119,3 @@ And look at the result below:
 If you are interested in graphics, augmented reality etc, you can use OpenGL to render more
 complicated figures.
-
-Additional Resources
---------------------
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_core/py_basic_ops/py_basic_ops.markdown b/doc/py_tutorials/py_core/py_basic_ops/py_basic_ops.markdown
index 1594f77200..e4ee61ebd2 100644
--- a/doc/py_tutorials/py_core/py_basic_ops/py_basic_ops.markdown
+++ b/doc/py_tutorials/py_core/py_basic_ops/py_basic_ops.markdown
@@ -195,9 +195,3 @@ See the result below. (Image is displayed with matplotlib. So RED and BLUE chann
 interchanged):

 ![image](images/border.jpg)
-
-Additional Resources
---------------------
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_core/py_image_arithmetics/py_image_arithmetics.markdown b/doc/py_tutorials/py_core/py_image_arithmetics/py_image_arithmetics.markdown
index 4b6e8bd3c1..e863cb9f62 100644
--- a/doc/py_tutorials/py_core/py_image_arithmetics/py_image_arithmetics.markdown
+++ b/doc/py_tutorials/py_core/py_image_arithmetics/py_image_arithmetics.markdown
@@ -110,9 +110,6 @@ img2_fg.
 ![image](images/overlay.jpg)

-Additional Resources
---------------------
-
 Exercises
 ---------
diff --git a/doc/py_tutorials/py_core/py_optimization/py_optimization.markdown b/doc/py_tutorials/py_core/py_optimization/py_optimization.markdown
index 7d63ffadef..f851433f59 100644
--- a/doc/py_tutorials/py_core/py_optimization/py_optimization.markdown
+++ b/doc/py_tutorials/py_core/py_optimization/py_optimization.markdown
@@ -163,6 +163,3 @@ Additional Resources
 2. Scipy Lecture Notes - [Advanced Numpy](http://scipy-lectures.github.io/advanced/advanced_numpy/index.html#advanced-numpy)
 3.
    [Timing and Profiling in IPython](http://pynash.org/2013/03/06/timing-and-profiling/)
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_feature2d/py_fast/py_fast.markdown b/doc/py_tutorials/py_feature2d/py_fast/py_fast.markdown
index 29e385c64d..d9c9cb2429 100644
--- a/doc/py_tutorials/py_feature2d/py_fast/py_fast.markdown
+++ b/doc/py_tutorials/py_feature2d/py_fast/py_fast.markdown
@@ -138,6 +138,3 @@ Additional Resources
 2. Edward Rosten, Reid Porter, and Tom Drummond, "Faster and better: a machine learning approach to
 corner detection" in IEEE Trans. Pattern Analysis and Machine Intelligence, 2010, vol 32, pp. 105-119.
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_feature2d/py_feature_homography/py_feature_homography.markdown b/doc/py_tutorials/py_feature2d/py_feature_homography/py_feature_homography.markdown
index 4597c6bfcf..bb2455fca7 100644
--- a/doc/py_tutorials/py_feature2d/py_feature_homography/py_feature_homography.markdown
+++ b/doc/py_tutorials/py_feature2d/py_feature_homography/py_feature_homography.markdown
@@ -102,9 +102,3 @@ plt.imshow(img3, 'gray'),plt.show()
 See the result below. Object is marked in white color in cluttered image:

 ![image](images/homography_findobj.jpg)
-
-Additional Resources
---------------------
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_feature2d/py_features_meaning/py_features_meaning.markdown b/doc/py_tutorials/py_feature2d/py_features_meaning/py_features_meaning.markdown
index 3aa00b715a..5e8bca6813 100644
--- a/doc/py_tutorials/py_feature2d/py_features_meaning/py_features_meaning.markdown
+++ b/doc/py_tutorials/py_feature2d/py_features_meaning/py_features_meaning.markdown
@@ -81,9 +81,3 @@ or do whatever you want.
 So in this module, we are looking to different algorithms in OpenCV to find features, describe
 them, match them etc.
-
-Additional Resources
---------------------
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_feature2d/py_matcher/py_matcher.markdown b/doc/py_tutorials/py_feature2d/py_matcher/py_matcher.markdown
index aeab98bfd6..bb38a77927 100644
--- a/doc/py_tutorials/py_feature2d/py_matcher/py_matcher.markdown
+++ b/doc/py_tutorials/py_feature2d/py_matcher/py_matcher.markdown
@@ -209,9 +209,3 @@ plt.imshow(img3,),plt.show()
 See the result below:

 ![image](images/matcher_flann.jpg)
-
-Additional Resources
---------------------
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_feature2d/py_orb/py_orb.markdown b/doc/py_tutorials/py_feature2d/py_orb/py_orb.markdown
index 73d01aaaa1..c86a79c8af 100644
--- a/doc/py_tutorials/py_feature2d/py_orb/py_orb.markdown
+++ b/doc/py_tutorials/py_feature2d/py_orb/py_orb.markdown
@@ -93,6 +93,3 @@ Additional Resources
 -# Ethan Rublee, Vincent Rabaud, Kurt Konolige, Gary R. Bradski: ORB: An efficient alternative to
 SIFT or SURF. ICCV 2011: 2564-2571.
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_feature2d/py_shi_tomasi/py_shi_tomasi.markdown b/doc/py_tutorials/py_feature2d/py_shi_tomasi/py_shi_tomasi.markdown
index c5d29493e4..00d8d0a288 100644
--- a/doc/py_tutorials/py_feature2d/py_shi_tomasi/py_shi_tomasi.markdown
+++ b/doc/py_tutorials/py_feature2d/py_shi_tomasi/py_shi_tomasi.markdown
@@ -67,9 +67,3 @@ See the result below:
 ![image](images/shitomasi_block1.jpg)

 This function is more appropriate for tracking. We will see that when its time comes.
-
-Additional Resources
---------------------
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_feature2d/py_sift_intro/py_sift_intro.markdown b/doc/py_tutorials/py_feature2d/py_sift_intro/py_sift_intro.markdown
index bbbae6a3e6..77caab6c06 100644
--- a/doc/py_tutorials/py_feature2d/py_sift_intro/py_sift_intro.markdown
+++ b/doc/py_tutorials/py_feature2d/py_sift_intro/py_sift_intro.markdown
@@ -160,9 +160,3 @@ Here kp will be a list of keypoints and des is a numpy array of shape
 So we got keypoints, descriptors etc. Now we want to see how to match keypoints in different
 images. That we will learn in coming chapters.
-
-Additional Resources
---------------------
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_feature2d/py_surf_intro/py_surf_intro.markdown b/doc/py_tutorials/py_feature2d/py_surf_intro/py_surf_intro.markdown
index 5bcd91cce8..e856c56ecd 100644
--- a/doc/py_tutorials/py_feature2d/py_surf_intro/py_surf_intro.markdown
+++ b/doc/py_tutorials/py_feature2d/py_surf_intro/py_surf_intro.markdown
@@ -155,9 +155,3 @@ Finally we check the descriptor size and change it to 128 if it is only 64-dim.
 (47, 128)
 @endcode

 Remaining part is matching which we will do in another chapter.
-
-Additional Resources
---------------------
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_gui/py_mouse_handling/py_mouse_handling.markdown b/doc/py_tutorials/py_gui/py_mouse_handling/py_mouse_handling.markdown
index 3c17b2ec9d..1dae65d64a 100644
--- a/doc/py_tutorials/py_gui/py_mouse_handling/py_mouse_handling.markdown
+++ b/doc/py_tutorials/py_gui/py_mouse_handling/py_mouse_handling.markdown
@@ -101,8 +101,6 @@ while(1):
 cv.destroyAllWindows()
 @endcode

-Additional Resources
---------------------

 Exercises
 ---------
diff --git a/doc/py_tutorials/py_gui/py_video_display/py_video_display.markdown b/doc/py_tutorials/py_gui/py_video_display/py_video_display.markdown
index 5819653fa0..0b34965479 100644
--- a/doc/py_tutorials/py_gui/py_video_display/py_video_display.markdown
+++ b/doc/py_tutorials/py_gui/py_video_display/py_video_display.markdown
@@ -152,9 +152,3 @@ cap.release()
 out.release()
 cv.destroyAllWindows()
 @endcode
-
-Additional Resources
---------------------
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.markdown b/doc/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.markdown
index 55c7d5c9d2..bb9d30b29b 100644
--- a/doc/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.markdown
+++ b/doc/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.markdown
@@ -103,9 +103,6 @@ Now you take [H-10, 100,100] and [H+10, 255, 255] as the lower bound and upper b
 from this method, you can use any image editing tools like GIMP or any online converters to find
 these values, but don't forget to adjust the HSV ranges.
-Additional Resources
---------------------
-
 Exercises
 ---------
diff --git a/doc/py_tutorials/py_imgproc/py_contours/py_contour_features/py_contour_features.markdown b/doc/py_tutorials/py_imgproc/py_contours/py_contour_features/py_contour_features.markdown
index e98b8a64b9..d32eab2a59 100644
--- a/doc/py_tutorials/py_imgproc/py_contours/py_contour_features/py_contour_features.markdown
+++ b/doc/py_tutorials/py_imgproc/py_contours/py_contour_features/py_contour_features.markdown
@@ -199,9 +199,3 @@ righty = int(((cols-x)*vy/vx)+y)
 cv.line(img,(cols-1,righty),(0,lefty),(0,255,0),2)
 @endcode

 ![image](images/fitline.jpg)
-
-Additional Resources
---------------------
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_imgproc/py_contours/py_contour_properties/py_contour_properties.markdown b/doc/py_tutorials/py_imgproc/py_contours/py_contour_properties/py_contour_properties.markdown
index 282f62ddf9..f685972e46 100644
--- a/doc/py_tutorials/py_imgproc/py_contours/py_contour_properties/py_contour_properties.markdown
+++ b/doc/py_tutorials/py_imgproc/py_contours/py_contour_properties/py_contour_properties.markdown
@@ -114,9 +114,6 @@ For eg, if I apply it to an Indian map, I get the following result :
 ![image](images/extremepoints.jpg)

-Additional Resources
---------------------
-
 Exercises
 ---------
diff --git a/doc/py_tutorials/py_imgproc/py_contours/py_contours_begin/py_contours_begin.markdown b/doc/py_tutorials/py_imgproc/py_contours/py_contours_begin/py_contours_begin.markdown
index e96598b11e..a472346402 100644
--- a/doc/py_tutorials/py_imgproc/py_contours/py_contours_begin/py_contours_begin.markdown
+++ b/doc/py_tutorials/py_imgproc/py_contours/py_contours_begin/py_contours_begin.markdown
@@ -88,9 +88,3 @@ the contour array (drawn in blue color). First image shows points I got with cv.
 much memory it saves!!!

 ![image](images/none.jpg)
-
-Additional Resources
---------------------
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_imgproc/py_contours/py_contours_hierarchy/py_contours_hierarchy.markdown b/doc/py_tutorials/py_imgproc/py_contours/py_contours_hierarchy/py_contours_hierarchy.markdown
index 075e6ec81f..097722f8cb 100644
--- a/doc/py_tutorials/py_imgproc/py_contours/py_contours_hierarchy/py_contours_hierarchy.markdown
+++ b/doc/py_tutorials/py_imgproc/py_contours/py_contours_hierarchy/py_contours_hierarchy.markdown
@@ -212,9 +212,3 @@ array([[[ 7, -1, 1, -1],
        [ 8, 0, -1, -1],
        [-1, 7, -1, -1]]])
 @endcode
-
-Additional Resources
---------------------
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_imgproc/py_contours/py_contours_more_functions/py_contours_more_functions.markdown b/doc/py_tutorials/py_imgproc/py_contours/py_contours_more_functions/py_contours_more_functions.markdown
index fc278669b0..0a511557ad 100644
--- a/doc/py_tutorials/py_imgproc/py_contours/py_contours_more_functions/py_contours_more_functions.markdown
+++ b/doc/py_tutorials/py_imgproc/py_contours/py_contours_more_functions/py_contours_more_functions.markdown
@@ -124,9 +124,6 @@ See, even image rotation doesn't affect much on this comparison.
 moments invariant to translation, rotation and scale. Seventh one is skew-invariant. Those values
 can be found using **cv.HuMoments()** function.
-Additional Resources
-====================
-
 Exercises
 ---------
diff --git a/doc/py_tutorials/py_imgproc/py_filtering/py_filtering.markdown b/doc/py_tutorials/py_imgproc/py_filtering/py_filtering.markdown
index 82ce0d45ab..72e7b72b2e 100644
--- a/doc/py_tutorials/py_imgproc/py_filtering/py_filtering.markdown
+++ b/doc/py_tutorials/py_imgproc/py_filtering/py_filtering.markdown
@@ -150,6 +150,3 @@ Additional Resources
 --------------------
 -# Details about the [bilateral filtering](http://people.csail.mit.edu/sparis/bf_course/)
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_imgproc/py_geometric_transformations/py_geometric_transformations.markdown b/doc/py_tutorials/py_imgproc/py_geometric_transformations/py_geometric_transformations.markdown
index 6dd151fe96..6aa6e0b4e3 100644
--- a/doc/py_tutorials/py_imgproc/py_geometric_transformations/py_geometric_transformations.markdown
+++ b/doc/py_tutorials/py_imgproc/py_geometric_transformations/py_geometric_transformations.markdown
@@ -163,6 +163,3 @@ Additional Resources
 --------------------
 -# "Computer Vision: Algorithms and Applications", Richard Szeliski
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_imgproc/py_grabcut/py_grabcut.markdown b/doc/py_tutorials/py_imgproc/py_grabcut/py_grabcut.markdown
index 349ebac031..b9fea5fd29 100644
--- a/doc/py_tutorials/py_imgproc/py_grabcut/py_grabcut.markdown
+++ b/doc/py_tutorials/py_imgproc/py_grabcut/py_grabcut.markdown
@@ -146,9 +146,6 @@ mark the rectangle area in mask image with 2-pixel or 3-pixel (probable backgrou
 mark our sure_foreground with 1-pixel as we did in second example. Then directly apply the grabCut
 function with mask mode.

-Additional Resources
---------------------
-
 Exercises
 ---------
diff --git a/doc/py_tutorials/py_imgproc/py_gradients/py_gradients.markdown b/doc/py_tutorials/py_imgproc/py_gradients/py_gradients.markdown
index 0b9556f2bb..2c0b03e913 100644
--- a/doc/py_tutorials/py_imgproc/py_gradients/py_gradients.markdown
+++ b/doc/py_tutorials/py_imgproc/py_gradients/py_gradients.markdown
@@ -103,9 +103,3 @@ plt.show()
 Check the result below:

 ![image](images/double_edge.jpg)
-
-Additional Resources
---------------------
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_imgproc/py_histograms/py_2d_histogram/py_2d_histogram.markdown b/doc/py_tutorials/py_imgproc/py_histograms/py_2d_histogram/py_2d_histogram.markdown
index 8e05a64080..a0cc5dfc02 100644
--- a/doc/py_tutorials/py_imgproc/py_histograms/py_2d_histogram/py_2d_histogram.markdown
+++ b/doc/py_tutorials/py_imgproc/py_histograms/py_2d_histogram/py_2d_histogram.markdown
@@ -125,9 +125,3 @@ output of that code for the same image as above:
 You can clearly see in the histogram what colors are present, blue is there, yellow is there, and
 some white due to chessboard is there. Nice !!!
-
-Additional Resources
---------------------
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_imgproc/py_histograms/py_histogram_backprojection/py_histogram_backprojection.markdown b/doc/py_tutorials/py_imgproc/py_histograms/py_histogram_backprojection/py_histogram_backprojection.markdown
index dce31c376b..3b1097636f 100644
--- a/doc/py_tutorials/py_imgproc/py_histograms/py_histogram_backprojection/py_histogram_backprojection.markdown
+++ b/doc/py_tutorials/py_imgproc/py_histograms/py_histogram_backprojection/py_histogram_backprojection.markdown
@@ -123,6 +123,3 @@ Additional Resources
 -# "Indexing via color histograms", Swain, Michael J. , Third international conference on computer
 vision,1990.
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_imgproc/py_histograms/py_histogram_begins/py_histogram_begins.markdown b/doc/py_tutorials/py_imgproc/py_histograms/py_histogram_begins/py_histogram_begins.markdown
index 5667cee36c..6d5f89ef5b 100644
--- a/doc/py_tutorials/py_imgproc/py_histograms/py_histogram_begins/py_histogram_begins.markdown
+++ b/doc/py_tutorials/py_imgproc/py_histograms/py_histogram_begins/py_histogram_begins.markdown
@@ -197,6 +197,3 @@ Additional Resources
 --------------------
 -# [Cambridge in Color website](http://www.cambridgeincolour.com/tutorials/histograms1.htm)
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_imgproc/py_histograms/py_histogram_equalization/py_histogram_equalization.markdown b/doc/py_tutorials/py_imgproc/py_histograms/py_histogram_equalization/py_histogram_equalization.markdown
index bc9c69a714..c7160d2bd4 100644
--- a/doc/py_tutorials/py_imgproc/py_histograms/py_histogram_equalization/py_histogram_equalization.markdown
+++ b/doc/py_tutorials/py_imgproc/py_histograms/py_histogram_equalization/py_histogram_equalization.markdown
@@ -151,6 +151,3 @@ Also check these SOF questions regarding contrast adjustment:
 C?](http://stackoverflow.com/questions/10549245/how-can-i-adjust-contrast-in-opencv-in-c)
 4. [How do I equalize contrast & brightness of images using opencv?](http://stackoverflow.com/questions/10561222/how-do-i-equalize-contrast-brightness-of-images-using-opencv)
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_imgproc/py_houghcircles/py_houghcircles.markdown b/doc/py_tutorials/py_imgproc/py_houghcircles/py_houghcircles.markdown
index 570ad9145c..5778b83834 100644
--- a/doc/py_tutorials/py_imgproc/py_houghcircles/py_houghcircles.markdown
+++ b/doc/py_tutorials/py_imgproc/py_houghcircles/py_houghcircles.markdown
@@ -45,9 +45,3 @@ cv.destroyAllWindows()
 Result is shown below:

 ![image](images/houghcircles2.jpg)
-
-Additional Resources
---------------------
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_imgproc/py_houghlines/py_houghlines.markdown b/doc/py_tutorials/py_imgproc/py_houghlines/py_houghlines.markdown
index 9851599455..7f38a0cdc4 100644
--- a/doc/py_tutorials/py_imgproc/py_houghlines/py_houghlines.markdown
+++ b/doc/py_tutorials/py_imgproc/py_houghlines/py_houghlines.markdown
@@ -103,6 +103,3 @@ Additional Resources
 --------------------
 -# [Hough Transform on Wikipedia](http://en.wikipedia.org/wiki/Hough_transform)
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.markdown b/doc/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.markdown
index f52a2ce411..24b504914f 100644
--- a/doc/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.markdown
+++ b/doc/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.markdown
@@ -152,6 +152,3 @@ Additional Resources
 --------------------
 -# [Morphological Operations](http://homepages.inf.ed.ac.uk/rbf/HIPR2/morops.htm) at HIPR2
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_imgproc/py_pyramids/py_pyramids.markdown b/doc/py_tutorials/py_imgproc/py_pyramids/py_pyramids.markdown
index 0470211fd3..df6ae70ed6 100644
--- a/doc/py_tutorials/py_imgproc/py_pyramids/py_pyramids.markdown
+++ b/doc/py_tutorials/py_imgproc/py_pyramids/py_pyramids.markdown
@@ -139,6 +139,3 @@ Additional Resources
 --------------------
 -# [Image Blending](http://pages.cs.wisc.edu/~csverma/CS766_09/ImageMosaic/imagemosaic.html)
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_imgproc/py_template_matching/py_template_matching.markdown b/doc/py_tutorials/py_imgproc/py_template_matching/py_template_matching.markdown
index 3a59bf6b23..e5eddb0e6b 100644
--- a/doc/py_tutorials/py_imgproc/py_template_matching/py_template_matching.markdown
+++ b/doc/py_tutorials/py_imgproc/py_template_matching/py_template_matching.markdown
@@ -132,9 +132,3 @@ cv.imwrite('res.png',img_rgb)
 Result:

 ![image](images/res_mario.jpg)
-
-Additional Resources
---------------------
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_imgproc/py_transforms/py_fourier_transform/py_fourier_transform.markdown b/doc/py_tutorials/py_imgproc/py_transforms/py_fourier_transform/py_fourier_transform.markdown
index df12efd45c..5378012534 100644
--- a/doc/py_tutorials/py_imgproc/py_transforms/py_fourier_transform/py_fourier_transform.markdown
+++ b/doc/py_tutorials/py_imgproc/py_transforms/py_fourier_transform/py_fourier_transform.markdown
@@ -291,6 +291,3 @@ Additional Resources
 Theory](http://cns-alumni.bu.edu/~slehar/fourier/fourier.html) by Steven Lehar
 2. [Fourier Transform](http://homepages.inf.ed.ac.uk/rbf/HIPR2/fourier.htm) at HIPR
 3. [What does frequency domain denote in case of images?](http://dsp.stackexchange.com/q/1637/818)
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_ml/py_kmeans/py_kmeans_opencv/py_kmeans_opencv.markdown b/doc/py_tutorials/py_ml/py_kmeans/py_kmeans_opencv/py_kmeans_opencv.markdown
index 05a1300a16..4982f72df6 100644
--- a/doc/py_tutorials/py_ml/py_kmeans/py_kmeans_opencv/py_kmeans_opencv.markdown
+++ b/doc/py_tutorials/py_ml/py_kmeans/py_kmeans_opencv/py_kmeans_opencv.markdown
@@ -186,9 +186,3 @@ cv.destroyAllWindows()
 See the result below for K=8:

 ![image](images/oc_color_quantization.jpg)
-
-Additional Resources
---------------------
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_ml/py_kmeans/py_kmeans_understanding/py_kmeans_understanding.markdown b/doc/py_tutorials/py_ml/py_kmeans/py_kmeans_understanding/py_kmeans_understanding.markdown
index 988d5b08b8..ebf0007cd2 100644
--- a/doc/py_tutorials/py_ml/py_kmeans/py_kmeans_understanding/py_kmeans_understanding.markdown
+++ b/doc/py_tutorials/py_ml/py_kmeans/py_kmeans_understanding/py_kmeans_understanding.markdown
@@ -80,6 +80,3 @@ Additional Resources
 -# [Machine Learning Course](https://www.coursera.org/course/ml), Video lectures by Prof. Andrew Ng
 (Some of the images are taken from this)
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_ml/py_svm/py_svm_basics/py_svm_basics.markdown b/doc/py_tutorials/py_ml/py_svm/py_svm_basics/py_svm_basics.markdown
index 55f74237e9..a115d23530 100644
--- a/doc/py_tutorials/py_ml/py_svm/py_svm_basics/py_svm_basics.markdown
+++ b/doc/py_tutorials/py_ml/py_svm/py_svm_basics/py_svm_basics.markdown
@@ -130,5 +130,3 @@ Additional Resources
 -# [NPTEL notes on Statistical Pattern Recognition, Chapters 25-29](https://nptel.ac.in/courses/117108048)

-Exercises
----------
diff --git a/doc/py_tutorials/py_photo/py_non_local_means/py_non_local_means.markdown b/doc/py_tutorials/py_photo/py_non_local_means/py_non_local_means.markdown
index 94e57d4d6e..36a5a4a782 100644
--- a/doc/py_tutorials/py_photo/py_non_local_means/py_non_local_means.markdown
+++ b/doc/py_tutorials/py_photo/py_non_local_means/py_non_local_means.markdown
@@ -147,6 +147,3 @@ Additional Resources
 recommended to visit. Our test image is generated from this link)
 2.
    [Online course at coursera](https://www.coursera.org/course/images) (First image taken from here)
-
-Exercises
----------
diff --git a/doc/py_tutorials/py_setup/py_setup_in_fedora/py_setup_in_fedora.markdown b/doc/py_tutorials/py_setup/py_setup_in_fedora/py_setup_in_fedora.markdown
index 9ef961de3f..68fa5f678a 100644
--- a/doc/py_tutorials/py_setup/py_setup_in_fedora/py_setup_in_fedora.markdown
+++ b/doc/py_tutorials/py_setup/py_setup_in_fedora/py_setup_in_fedora.markdown
@@ -237,9 +237,6 @@ make doxygen
 @endcode
 Then open opencv/build/doc/doxygen/html/index.html and bookmark it in the browser.

-Additional Resources
---------------------
-
 Exercises
 ---------
diff --git a/doc/py_tutorials/py_setup/py_setup_in_windows/py_setup_in_windows.markdown b/doc/py_tutorials/py_setup/py_setup_in_windows/py_setup_in_windows.markdown
index c30f80dd18..198422d8bf 100644
--- a/doc/py_tutorials/py_setup/py_setup_in_windows/py_setup_in_windows.markdown
+++ b/doc/py_tutorials/py_setup/py_setup_in_windows/py_setup_in_windows.markdown
@@ -116,9 +116,6 @@ Building OpenCV from source
 @note We have installed with no other support like TBB, Eigen, Qt, Documentation etc. It would be
 difficult to explain it here. A more detailed video will be added soon or you can just hack around.

-Additional Resources
---------------------
-
 Exercises
 ---------
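
For reference, a minimal sketch (not part of this patch) of how empty "Additional Resources" / "Exercises" sections like the ones removed above could be located automatically. It assumes the underline-style headings used in these Doxygen-Markdown tutorials (a title line followed by a row of `-` or `=`); the script name, the `doc/py_tutorials` default root, and the helper functions are hypothetical and shown only as an illustration.

```python
#!/usr/bin/env python3
"""Sketch: list tutorial markdown files whose "Additional Resources" or
"Exercises" sections are empty (heading + underline with no body).
Hypothetical helper, not part of this patch."""

import re
import sys
from pathlib import Path

HEADINGS = ("Additional Resources", "Exercises")
UNDERLINE = re.compile(r"[-=]{3,}")  # setext-style underline row

def empty_sections(text):
    """Return the headings in `text` whose section body is blank."""
    lines = text.splitlines()
    empties = []
    i = 0
    while i < len(lines):
        title = lines[i].strip()
        underlined = i + 1 < len(lines) and UNDERLINE.fullmatch(lines[i + 1].strip())
        if title in HEADINGS and underlined:
            # Collect the body until the next underlined heading or end of file.
            j = i + 2
            body = []
            while j < len(lines):
                nxt = lines[j + 1].strip() if j + 1 < len(lines) else ""
                if lines[j].strip() and UNDERLINE.fullmatch(nxt):
                    break  # the next heading starts here
                body.append(lines[j])
                j += 1
            if not any(s.strip() for s in body):
                empties.append(title)
            i = j
        else:
            i += 1
    return empties

def main(root="doc/py_tutorials"):
    for path in sorted(Path(root).rglob("*.markdown")):
        for title in empty_sections(path.read_text(encoding="utf-8")):
            print(f"{path}: empty '{title}' section")

if __name__ == "__main__":
    main(*sys.argv[1:])
```

Run from the repository root as, for example, `python3 find_empty_sections.py doc/py_tutorials`; each reported file/heading pair corresponds to a section that carries no content and is a candidate for removal.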