diff --git a/doc/js_tutorials/js_assets/webnn-electron/package.json b/doc/js_tutorials/js_assets/webnn-electron/package.json index e6a258ee40..9c3c817db7 100644 --- a/doc/js_tutorials/js_assets/webnn-electron/package.json +++ b/doc/js_tutorials/js_assets/webnn-electron/package.json @@ -1,7 +1,7 @@ { "name": "image_classification", "version": "0.0.1", - "description": "An Electon.js example of image_classification using webnn-native", + "description": "An Electron.js example of image_classification using webnn-native", "main": "main.js", "author": "WebNN-native Authors", "license": "Apache-2.0", diff --git a/doc/js_tutorials/js_setup/js_setup/js_setup.markdown b/doc/js_tutorials/js_setup/js_setup/js_setup.markdown index 9927477443..2a7a111d8a 100644 --- a/doc/js_tutorials/js_setup/js_setup/js_setup.markdown +++ b/doc/js_tutorials/js_setup/js_setup/js_setup.markdown @@ -97,10 +97,10 @@ Building OpenCV.js from Source @endcode @note - The loader is implemented as a js file in the path `/bin/loader.js`. The loader utilizes the [WebAssembly Feature Detection](https://github.com/GoogleChromeLabs/wasm-feature-detect) to detect the features of the broswer and load corresponding OpenCV.js automatically. To use it, you need to use the UMD version of [WebAssembly Feature Detection](https://github.com/GoogleChromeLabs/wasm-feature-detect) and introduce the `loader.js` in your Web application. + The loader is implemented as a js file in the path `/bin/loader.js`. The loader utilizes the [WebAssembly Feature Detection](https://github.com/GoogleChromeLabs/wasm-feature-detect) to detect the features of the browser and load the corresponding OpenCV.js automatically. To use it, you need to use the UMD version of [WebAssembly Feature Detection](https://github.com/GoogleChromeLabs/wasm-feature-detect) and introduce the `loader.js` in your Web application. Example Code: - @code{.javascipt} + @code{.javascript} // Set paths configuration let pathsConfig = { wasm: "../../build_wasm/opencv.js", @@ -173,7 +173,7 @@ This snippet and the following require [Node.js](https://nodejs.org) to be insta ### Headless with Puppeteer -Alternatively tests can run with [GoogleChrome/puppeteer](https://github.com/GoogleChrome/puppeteer#readme) which is a version of Google Chrome that runs in the terminal (useful for Continuos integration like travis CI, etc) +Alternatively, tests can be run with [GoogleChrome/puppeteer](https://github.com/GoogleChrome/puppeteer#readme), which runs a headless version of Google Chrome from the terminal (useful for continuous integration like Travis CI, etc.) @code{.sh} cd build_js/bin @@ -229,7 +229,7 @@ node tests.js The simd optimization is experimental as wasm simd is still in development. @note - Now only emscripten LLVM upstream backend supports wasm simd, refering to https://emscripten.org/docs/porting/simd.html. So you need to setup upstream backend environment with the following command first: + Now only the emscripten LLVM upstream backend supports wasm simd; see https://emscripten.org/docs/porting/simd.html. So you need to set up the upstream backend environment with the following commands first: @code{.bash} ./emsdk update ./emsdk install latest-upstream diff --git a/doc/tutorials/calib3d/usac.markdown b/doc/tutorials/calib3d/usac.markdown index 27d590be3a..df9e25f907 100644 --- a/doc/tutorials/calib3d/usac.markdown +++ b/doc/tutorials/calib3d/usac.markdown @@ -244,9 +244,9 @@ Samples: There are three new sample files in opencv/samples directory. 1.
`epipolar_lines.cpp` – input arguments of `main` function are two - pathes to images. Then correspondences are found using + paths to images. Then correspondences are found using SIFT detector. Fundamental matrix is found using RANSAC from - tentaive correspondences and epipolar lines are plot. + tentative correspondences and epipolar lines are plotted. 2. `essential_mat_reconstr.cpp` – input arguments are path to data file containing image names and single intrinsic matrix and directory diff --git a/doc/tutorials/core/how_to_use_OpenCV_parallel_for_new/how_to_use_OpenCV_parallel_for_new.markdown b/doc/tutorials/core/how_to_use_OpenCV_parallel_for_new/how_to_use_OpenCV_parallel_for_new.markdown index 5ef63ed6f4..57cec4cba1 100644 --- a/doc/tutorials/core/how_to_use_OpenCV_parallel_for_new/how_to_use_OpenCV_parallel_for_new.markdown +++ b/doc/tutorials/core/how_to_use_OpenCV_parallel_for_new/how_to_use_OpenCV_parallel_for_new.markdown @@ -92,7 +92,7 @@ We then fill value to the corresponding pixel in the dst image. ### Parallel implementation -When looking at the sequential implementation, we can notice that each pixel depends on multiple neighbouring pixels but only one pixel is edited at a time. Thus, to optimize the computation, we can split the image into stripes and parallely perform convolution on each, by exploiting the multi-core architecture of modern processor. The OpenCV @ref cv::parallel_for_ framework automatically decides how to split the computation efficiently and does most of the work for us. +When looking at the sequential implementation, we can notice that each pixel depends on multiple neighbouring pixels but only one pixel is edited at a time. Thus, to optimize the computation, we can split the image into stripes and perform convolution on each of them in parallel, exploiting the multi-core architecture of modern processors. The OpenCV @ref cv::parallel_for_ framework automatically decides how to split the computation efficiently and does most of the work for us. @note Although values of a pixel in a particular stripe may depend on pixel values outside the stripe, these are only read only operations and hence will not cause undefined behaviour. diff --git a/doc/tutorials/dnn/dnn_halide_scheduling/dnn_halide_scheduling.markdown b/doc/tutorials/dnn/dnn_halide_scheduling/dnn_halide_scheduling.markdown index 38324610be..6d2751a467 100644 --- a/doc/tutorials/dnn/dnn_halide_scheduling/dnn_halide_scheduling.markdown +++ b/doc/tutorials/dnn/dnn_halide_scheduling/dnn_halide_scheduling.markdown @@ -70,7 +70,7 @@ Sometimes networks built using blocked structure that means some layer are identical or quite similar. If you want to apply the same scheduling for different layers accurate to tiling or vectorization factors, define scheduling patterns in section `patterns` at the beginning of scheduling file. -Also, your patters may use some parametric variables. +Also, your patterns may use some parametric variables. @code # At the beginning of the file patterns: diff --git a/doc/tutorials/dnn/dnn_text_spotting/dnn_text_spotting.markdown b/doc/tutorials/dnn/dnn_text_spotting/dnn_text_spotting.markdown index c2b3ec8d71..b6f4e120fb 100644 --- a/doc/tutorials/dnn/dnn_text_spotting/dnn_text_spotting.markdown +++ b/doc/tutorials/dnn/dnn_text_spotting/dnn_text_spotting.markdown @@ -29,8 +29,8 @@ Before recognition, you should `setVocabulary` and `setDecodeType`. - "CTC-prefix-beam-search", the output of the text recognition model should be a probability matrix same with "CTC-greedy".
- The algorithm is proposed at Hannun's [paper](https://arxiv.org/abs/1408.2873). - `setDecodeOptsCTCPrefixBeamSearch` could be used to control the beam size in search step. - - To futher optimize for big vocabulary, a new option `vocPruneSize` is introduced to avoid iterate the whole vocbulary - but only the number of `vocPruneSize` tokens with top probabilty. + - To further optimize for a big vocabulary, a new option `vocPruneSize` is introduced to avoid iterating over the whole vocabulary + and consider only the `vocPruneSize` tokens with the highest probability. @ref cv::dnn::TextRecognitionModel::recognize() is the main function for text recognition. - The input image should be a cropped text image or an image with `roiRects` diff --git a/doc/tutorials/gapi/anisotropic_segmentation/porting_anisotropic_segmentation.markdown b/doc/tutorials/gapi/anisotropic_segmentation/porting_anisotropic_segmentation.markdown index 60829360fe..64b68e644c 100644 --- a/doc/tutorials/gapi/anisotropic_segmentation/porting_anisotropic_segmentation.markdown +++ b/doc/tutorials/gapi/anisotropic_segmentation/porting_anisotropic_segmentation.markdown @@ -142,7 +142,7 @@ being a Graph API, doesn't force its users to do that. However, a graph is still built implicitly when a cv::GComputation object is defined. It may be useful to inspect how the resulting graph looks like to check if it is generated correctly and if it really -represents our alrogithm. It is also useful to learn the structure of +represents our algorithm. It is also useful to learn the structure of the graph to see if it has any redundancies. G-API allows to dump generated graphs to `.dot` files which then diff --git a/doc/tutorials/gapi/interactive_face_detection/interactive_face_detection.markdown b/doc/tutorials/gapi/interactive_face_detection/interactive_face_detection.markdown index 6f8b03bb61..27916b4176 100644 --- a/doc/tutorials/gapi/interactive_face_detection/interactive_face_detection.markdown +++ b/doc/tutorials/gapi/interactive_face_detection/interactive_face_detection.markdown @@ -241,7 +241,7 @@ pipeline is compiled for streaming: cv::GComputation::compileStreaming() triggers a special video-oriented form of graph compilation where G-API is trying to optimize throughput. Result of this compilation is an object of special type -cv::GStreamingCompiled -- in constract to a traditional callable +cv::GStreamingCompiled -- in contrast to a traditional callable cv::GCompiled, these objects are closer to media players in their semantics. diff --git a/doc/tutorials/imgproc/shapedescriptors/bounding_rects_circles/bounding_rects_circles.markdown b/doc/tutorials/imgproc/shapedescriptors/bounding_rects_circles/bounding_rects_circles.markdown index 520d8761eb..14b3105b68 100644 --- a/doc/tutorials/imgproc/shapedescriptors/bounding_rects_circles/bounding_rects_circles.markdown +++ b/doc/tutorials/imgproc/shapedescriptors/bounding_rects_circles/bounding_rects_circles.markdown @@ -79,7 +79,7 @@ The main function is rather simple, as follows from the comments we do the follo In general callback functions are used to react to some kind of signal, in our case it's trackbar's state change. Explicit one-time call of `thresh_callback` is necessary to display - the "Contours" window simultaniously with the "Source" window.
@add_toggle_cpp @snippet samples/cpp/tutorial_code/ShapeDescriptors/generalContours_demo1.cpp trackbar diff --git a/doc/tutorials/introduction/android_binary_package/dev_with_OCV_on_Android.markdown b/doc/tutorials/introduction/android_binary_package/dev_with_OCV_on_Android.markdown index 5acdbc41ed..d37721a188 100644 --- a/doc/tutorials/introduction/android_binary_package/dev_with_OCV_on_Android.markdown +++ b/doc/tutorials/introduction/android_binary_package/dev_with_OCV_on_Android.markdown @@ -240,7 +240,7 @@ taken: Hello OpenCV Sample ------------------- -Here are basic steps to guide you trough the process of creating a simple OpenCV-centric +Here are basic steps to guide you through the process of creating a simple OpenCV-centric application. It will be capable of accessing camera output, processing it and displaying the result. -# Open Eclipse IDE, create a new clean workspace, create a new Android project diff --git a/doc/tutorials/introduction/linux_gdb_pretty_printer/linux_gdb_pretty_printer.markdown b/doc/tutorials/introduction/linux_gdb_pretty_printer/linux_gdb_pretty_printer.markdown index 9d64469920..b0b8d404a0 100644 --- a/doc/tutorials/introduction/linux_gdb_pretty_printer/linux_gdb_pretty_printer.markdown +++ b/doc/tutorials/introduction/linux_gdb_pretty_printer/linux_gdb_pretty_printer.markdown @@ -20,7 +20,7 @@ This pretty-printer can show element type, `is_continuous`, `is_submatrix` flags # Installation {#tutorial_linux_gdb_pretty_printer_installation} -Move into `opencv/samples/gdb/`. Place `mat_pretty_printer.py` in a convinient place, rename `gdbinit` to `.gdbinit` and move it into your home folder. Change 'source' line of `.gdbinit` to point to your `mat_pretty_printer.py` path. +Move into `opencv/samples/gdb/`. Place `mat_pretty_printer.py` in a convenient place, rename `gdbinit` to `.gdbinit` and move it into your home folder. Change 'source' line of `.gdbinit` to point to your `mat_pretty_printer.py` path. In order to check version of python bundled with your gdb, use the following commands from the gdb shell: @@ -34,5 +34,5 @@ If the version of python 3 installed in your system doesn't match the version in # Usage {#tutorial_linux_gdb_pretty_printer_usage} -The fields in a debugger prefixed with `view_` are pseudo-fields added for convinience, the rest are left as is. -If you feel that the number of elements in truncated view is too low, you can edit `mat_pretty_printer.py` - `np.set_printoptions` controlls everything matrix display-related. +The fields in a debugger prefixed with `view_` are pseudo-fields added for convenience, the rest are left as is. +If you feel that the number of elements in truncated view is too low, you can edit `mat_pretty_printer.py` - `np.set_printoptions` controls everything matrix display-related. diff --git a/doc/tutorials/ios/image_manipulation/image_manipulation.markdown b/doc/tutorials/ios/image_manipulation/image_manipulation.markdown index f01aa6e4f8..57f34e8e4c 100644 --- a/doc/tutorials/ios/image_manipulation/image_manipulation.markdown +++ b/doc/tutorials/ios/image_manipulation/image_manipulation.markdown @@ -22,7 +22,7 @@ Introduction In *OpenCV* all the image processing operations are usually carried out on the *Mat* structure. In iOS however, to render an image on screen it have to be an instance of the *UIImage* class. To convert an *OpenCV Mat* to an *UIImage* we use the *Core Graphics* framework available in iOS. Below -is the code needed to covert back and forth between Mat's and UIImage's. 
+is the code needed to convert back and forth between Mats and UIImages.
@code{.m} - (cv::Mat)cvMatFromUIImage:(UIImage *)image {