diff --git a/modules/bgsegm/README.md b/modules/bgsegm/README.md
index a678929d6..8108f0275 100644
--- a/modules/bgsegm/README.md
+++ b/modules/bgsegm/README.md
@@ -1,10 +1,10 @@
-Improved Background-Foreground Segmentation Methods 
+Improved Background-Foreground Segmentation Methods
 ===================================================
-This algorithm combines statistical background image estimation and per-pixel Bayesian segmentation. It[1] was introduced by Andrew B. Godbehere, Akihiro Matsukawa, Ken Goldberg in 2012. As per the paper, the system ran a successful interactive audio art installation called “Are We There Yet?” from March 31 - July 31 2011 at the Contemporary Jewish Museum in San Francisco, California.
+This algorithm combines statistical background image estimation and per-pixel Bayesian segmentation. It[1] was introduced by Andrew B. Godbehere, Akihiro Matsukawa, Ken Goldberg in 2012. As per the paper, the system ran a successful interactive audio art installation called "Are We There Yet?" from March 31 - July 31 2011 at the Contemporary Jewish Museum in San Francisco, California.
 It uses first few (120 by default) frames for background modelling. It employs probabilistic foreground segmentation algorithm that identifies possible foreground objects using Bayesian inference. The estimates are adaptive; newer observations are more heavily weighted than old observations to accommodate variable illumination. Several morphological filtering operations like closing and opening are done to remove unwanted noise. You will get a black window during first few frames.
 References
 ----------
-[1]: A.B. Godbehere, A. Matsukawa, K. Goldberg. Visual tracking of human visitors under variable-lighting conditions for a responsive audio art installation. American Control Conference. (2012), pp. 4305–4312
\ No newline at end of file
+[1]: A.B. Godbehere, A. Matsukawa, K. Goldberg. Visual tracking of human visitors under variable-lighting conditions for a responsive audio art installation. American Control Conference. (2012), pp. 4305–4312
diff --git a/modules/datasets/include/opencv2/datasets/dataset.hpp b/modules/datasets/include/opencv2/datasets/dataset.hpp
index ccf2b6649..f62da1555 100644
--- a/modules/datasets/include/opencv2/datasets/dataset.hpp
+++ b/modules/datasets/include/opencv2/datasets/dataset.hpp
@@ -485,7 +485,7 @@ Implements loading dataset:
 "VOT 2015 dataset comprises 60 short sequences showing various objects in challenging backgrounds.
 The sequences were chosen from a large pool of sequences including the ALOV dataset, OTB2 dataset,
-non-tracking datasets, Computer Vision Online, Professor Bob Fisher’s Image Database, Videezy,
+non-tracking datasets, Computer Vision Online, Professor Bob Fisher's Image Database, Videezy,
 Center for Research in Computer Vision, University of Central Florida, USA, NYU Center for Genomics
 and Systems Biology, Data Wrangling, Open Access Directory and Learning and Recognition in Vision
 Group, INRIA, France. The VOT sequence selection protocol was applied to obtain a representative
diff --git a/modules/face/include/opencv2/face.hpp b/modules/face/include/opencv2/face.hpp
index eefd4e613..de9637935 100644
--- a/modules/face/include/opencv2/face.hpp
+++ b/modules/face/include/opencv2/face.hpp
@@ -70,7 +70,7 @@ which is available since the 2.4 release. I suggest you take a look at its descr
 Algorithm provides the following features for all derived classes:
-- So called “virtual constructor”. That is, each Algorithm derivative is registered at program
+- So called "virtual constructor". That is, each Algorithm derivative is registered at program
 start and you can get the list of registered algorithms and create instance of a particular
 algorithm by its name (see Algorithm::create). If you plan to add your own algorithms, it is
 good practice to add a unique prefix to your algorithms to distinguish them from other
diff --git a/modules/fuzzy/doc/fuzzy.bib b/modules/fuzzy/doc/fuzzy.bib
index 064d340f7..329e57ec5 100644
--- a/modules/fuzzy/doc/fuzzy.bib
+++ b/modules/fuzzy/doc/fuzzy.bib
@@ -52,7 +52,7 @@
 }
 @incollection{IPMU2012,
-  title={$F^1$-transform edge detector inspired by canny’s algorithm},
+  title={$F^1$-transform edge detector inspired by canny's algorithm},
  author={Perfilieva, Irina and Hod'{\'a}kov{\'a}, Petra and Hurtík, Petr},
  booktitle={Advances on Computational Intelligence},
  pages={230--239},
@@ -75,4 +75,4 @@
 pages={235--240},
 year={2015},
 organization={IEEE}
-}
\ No newline at end of file
+}
diff --git a/modules/saliency/include/opencv2/saliency/saliencyBaseClasses.hpp b/modules/saliency/include/opencv2/saliency/saliencyBaseClasses.hpp
index f92922e6a..bbabfeae5 100644
--- a/modules/saliency/include/opencv2/saliency/saliencyBaseClasses.hpp
+++ b/modules/saliency/include/opencv2/saliency/saliencyBaseClasses.hpp
@@ -93,7 +93,7 @@ class CV_EXPORTS_W StaticSaliency : public virtual Saliency
 targets, a segmentation by clustering is performed, using *K-means algorithm*. Then, to gain a
 binary representation of clustered saliency map, since values of the map can vary according to
 the characteristics of frame under analysis, it is not convenient to use a fixed threshold. So,
- *Otsu’s algorithm* is used, which assumes that the image to be thresholded contains two classes
+ *Otsu's algorithm* is used, which assumes that the image to be thresholded contains two classes
 of pixels or bi-modal histograms (e.g. foreground and back-ground pixels); later on, the algorithm
 calculates the optimal threshold separating those two classes, so that their intra-class variance
 is minimal.
diff --git a/modules/sfm/src/libmv_light/libmv/correspondence/feature_matching.h b/modules/sfm/src/libmv_light/libmv/correspondence/feature_matching.h
index 0552f2850..26567829b 100644
--- a/modules/sfm/src/libmv_light/libmv/correspondence/feature_matching.h
+++ b/modules/sfm/src/libmv_light/libmv/correspondence/feature_matching.h
@@ -77,7 +77,7 @@ void FindCandidateMatches(const FeatureSet &left,
 // method.
 // I.E: A match is considered as strong if the following test is true :
 // I.E distance[0] < fRatio * distances[1].
-// From David Lowe “Distinctive Image Features from Scale-Invariant Keypoints”.
+// From David Lowe "Distinctive Image Features from Scale-Invariant Keypoints".
 // You can use David Lowe's magic ratio (0.6 or 0.8).
 // 0.8 allow to remove 90% of the false matches while discarding less than 5%
 // of the correct matches.
diff --git a/modules/structured_light/include/opencv2/structured_light/graycodepattern.hpp b/modules/structured_light/include/opencv2/structured_light/graycodepattern.hpp
index 55b39af0f..ca228acdf 100644
--- a/modules/structured_light/include/opencv2/structured_light/graycodepattern.hpp
+++ b/modules/structured_light/include/opencv2/structured_light/graycodepattern.hpp
@@ -137,7 +137,7 @@ class CV_EXPORTS_W GrayCodePattern : public StructuredLightPattern
 * @param patternImages The pattern images acquired by the camera, stored in a grayscale vector < Mat >.
 * @param x x coordinate of the image pixel.
 * @param y y coordinate of the image pixel.
- * @param projPix Projector's pixel corresponding to the camera's pixel: projPix.x and projPix.y are the image coordinates of the projector’s pixel corresponding to the pixel being decoded in a camera.
+ * @param projPix Projector's pixel corresponding to the camera's pixel: projPix.x and projPix.y are the image coordinates of the projector's pixel corresponding to the pixel being decoded in a camera.
 */
 CV_WRAP virtual bool getProjPixel( InputArrayOfArrays patternImages, int x, int y, Point &projPix ) const = 0;
@@ -146,4 +146,4 @@ class CV_EXPORTS_W GrayCodePattern : public StructuredLightPattern
 //! @}
 }
 }
-#endif
\ No newline at end of file
+#endif
diff --git a/modules/structured_light/include/opencv2/structured_light/structured_light.hpp b/modules/structured_light/include/opencv2/structured_light/structured_light.hpp
index 5e413ac69..4d4b6fa78 100644
--- a/modules/structured_light/include/opencv2/structured_light/structured_light.hpp
+++ b/modules/structured_light/include/opencv2/structured_light/structured_light.hpp
@@ -53,7 +53,7 @@ namespace structured_light {
 // other algorithms can be implemented
 enum
 {
- DECODE_3D_UNDERWORLD = 0 //!< Kyriakos Herakleous, Charalambos Poullis. “3DUNDERWORLD-SLS: An Open-Source Structured-Light Scanning System for Rapid Geometry Acquisition”, arXiv preprint arXiv:1406.6595 (2014).
+ DECODE_3D_UNDERWORLD = 0 //!< Kyriakos Herakleous, Charalambos Poullis. "3DUNDERWORLD-SLS: An Open-Source Structured-Light Scanning System for Rapid Geometry Acquisition", arXiv preprint arXiv:1406.6595 (2014).
 };
 /** @brief Abstract base class for generating and decoding structured light patterns.
@@ -88,4 +88,4 @@ class CV_EXPORTS_W StructuredLightPattern : public virtual Algorithm
 }
 }
-#endif
\ No newline at end of file
+#endif
diff --git a/modules/text/include/opencv2/text/textDetector.hpp b/modules/text/include/opencv2/text/textDetector.hpp
index fdb92fdfb..ad3ff1247 100644
--- a/modules/text/include/opencv2/text/textDetector.hpp
+++ b/modules/text/include/opencv2/text/textDetector.hpp
@@ -5,7 +5,7 @@
 #ifndef __OPENCV_TEXT_TEXTDETECTOR_HPP__
 #define __OPENCV_TEXT_TEXTDETECTOR_HPP__
-#include"ocr.hpp"
+#include "ocr.hpp"
 namespace cv
 {
diff --git a/modules/text/tutorials/install_tesseract/install_tesseract.markdown b/modules/text/tutorials/install_tesseract/install_tesseract.markdown
index 90a3e895d..739c9245f 100644
--- a/modules/text/tutorials/install_tesseract/install_tesseract.markdown
+++ b/modules/text/tutorials/install_tesseract/install_tesseract.markdown
@@ -113,4 +113,4 @@ CMAKE_OPTIONS='-DBUILD_PERF_TESTS:BOOL=OFF -DBUILD_TESTS:BOOL=OFF -DBUILD_DOCS:B
 @endcode
 -# now we need the language files from tesseract. either clone https://github.com/tesseract-ocr/tessdata, or copy only those language files you need to a folder (example c:\\lib\\install\\tesseract\\tessdata). If you don't want to add a new folder you must copy language file in same folder than your executable
 -# if you created a new folder, then you must add a new variable, TESSDATA_PREFIX with the value c:\\lib\\install\\tessdata to your system's environment
--# add c:\\Lib\\install\\leptonica\\bin and c:\\Lib\\install\\tesseract\\bin to your PATH environment. If you don't want to modify the PATH then copy tesseract400.dll and leptonica-1.74.4.dll to the same folder than your exe file.
\ No newline at end of file
+-# add c:\\Lib\\install\\leptonica\\bin and c:\\Lib\\install\\tesseract\\bin to your PATH environment. If you don't want to modify the PATH then copy tesseract400.dll and leptonica-1.74.4.dll to the same folder than your exe file.
diff --git a/modules/tracking/include/opencv2/tracking/tracker.hpp b/modules/tracking/include/opencv2/tracking/tracker.hpp
index 22af92177..bd3955577 100644
--- a/modules/tracking/include/opencv2/tracking/tracker.hpp
+++ b/modules/tracking/include/opencv2/tracking/tracker.hpp
@@ -1171,7 +1171,7 @@ class CV_EXPORTS_W TrackerMedianFlow : public Tracker
 tracking, learning and detection.
 The tracker follows the object from frame to frame. The detector localizes all appearances that
-have been observed so far and corrects the tracker if necessary. The learning estimates detector’s
+have been observed so far and corrects the tracker if necessary. The learning estimates detector's
 errors and updates it to avoid these errors in the future. The implementation is based on @cite TLD .
 The Median Flow algorithm (see cv::TrackerMedianFlow) was chosen as a tracking component in this
@@ -1435,7 +1435,7 @@ public:
 the long-term tracking task into tracking, learning and detection.
 The tracker follows the object from frame to frame. The detector localizes all appearances that
-have been observed so far and corrects the tracker if necessary. The learning estimates detector’s
+have been observed so far and corrects the tracker if necessary. The learning estimates detector's
 errors and updates it to avoid these errors in the future. The implementation is based on @cite TLD .
 The Median Flow algorithm (see cv::TrackerMedianFlow) was chosen as a tracking component in this