propagated some fixes from 2.3 to trunk

pull/13383/head
Vadim Pisarevsky 14 years ago
parent d4fbb2c4fb
commit 49467947ac
1. modules/androidcamera/CMakeLists.txt (2)
2. modules/calib3d/doc/camera_calibration_and_3d_reconstruction.rst (8)
3. modules/core/doc/operations_on_arrays.rst (15)
4. modules/core/doc/xml_yaml_persistence.rst (271)
5. modules/core/include/opencv2/core/version.hpp (4)
6. modules/features2d/doc/common_interfaces_of_descriptor_extractors.rst (32)
7. modules/features2d/doc/common_interfaces_of_descriptor_matchers.rst (20)
8. modules/features2d/doc/common_interfaces_of_feature_detectors.rst (92)
9. modules/features2d/doc/common_interfaces_of_generic_descriptor_matchers.rst (36)
10. modules/features2d/doc/drawing_function_of_keypoints_and_matches.rst (8)
11. modules/features2d/doc/feature_detection_and_description.rst (33)
12. modules/features2d/doc/object_categorization.rst (12)
13. modules/gpu/CMakeLists.txt (6)
14. modules/highgui/CMakeLists.txt (2)
15. modules/highgui/doc/reading_and_writing_images_and_video.rst (1)
16. modules/highgui/doc/user_interface.rst (4)
17. modules/highgui/src/cap_dshow.cpp (7)
18. modules/highgui/src/cap_ffmpeg.cpp (4)
19. modules/highgui/src/cap_qtkit.mm (4)
20. modules/highgui/src/window_cocoa.mm (57)
21. modules/imgproc/doc/feature_detection.rst (3)
22. modules/imgproc/doc/filtering.rst (8)
23. modules/imgproc/doc/geometric_transformations.rst (3)
24. modules/imgproc/doc/miscellaneous_transformations.rst (3)
25. modules/ml/doc/decision_trees.rst (63)
26. modules/python/test/test.py (2)

@@ -51,4 +51,4 @@ if (ARMEABI_V7A AND NOT BUILD_ANDROID_CAMERA_WRAPPER)
        DESTINATION lib
        COMPONENT main)
endforeach()
endif()

@@ -413,10 +413,8 @@ value if all of the corners are found and they are placed
in a certain order (row by row, left to right in every row). Otherwise, if the function fails to find all the corners or reorder
them, it returns 0. For example, a regular chessboard has 8 x 8
squares and 7 x 7 internal corners, that is, points where the black
-squares touch each other. The detected coordinates are approximate,
-and to determine their position more accurately, you may use
-the function
-:ref:`cornerSubPix`.
+squares touch each other. The detected coordinates are approximate, so the function calls :ref:`cornerSubPix` internally to determine their position more accurately.
+You may also call the function :ref:`cornerSubPix` yourself with different parameters if the returned coordinates are not accurate enough.

Sample usage of detecting and drawing chessboard corners: ::
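    // Illustrative sketch only -- the original sample is truncated in this
    // diff view; the pattern size and the image source are assumptions.
    Mat img = imread("chessboard.png");     // hypothetical input image
    Mat gray;
    cvtColor(img, gray, CV_BGR2GRAY);
    Size patternsize(8,6);                  // interior corners per row/column
    vector<Point2f> corners;                // filled by the detected corners

    bool patternfound = findChessboardCorners(gray, patternsize, corners,
            CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_NORMALIZE_IMAGE);

    // the returned corners are approximate; refine them to subpixel accuracy
    if(patternfound)
        cornerSubPix(gray, corners, Size(11,11), Size(-1,-1),
            TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));

    drawChessboardCorners(img, patternsize, Mat(corners), patternfound);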
@@ -628,7 +626,7 @@ findHomography

.. math::

-    \| \texttt{dstPoints} _i - \texttt{convertPointsHomogeneous} ( \texttt{H} \texttt{srcPoints} _i) \| > \texttt{ransacReprojThreshold}
+    \| \texttt{dstPoints} _i - \texttt{convertPointsHomogeneous} ( \texttt{H} * \texttt{srcPoints} _i) \| > \texttt{ransacReprojThreshold}

then the point :math:`i` is considered an outlier. If ``srcPoints`` and ``dstPoints`` are measured in pixels, it usually makes sense to set this parameter somewhere in the range of 1 to 10.
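For illustration, a minimal sketch of how the threshold is typically passed (the point sets are assumed to come from a feature matcher elsewhere): ::

    std::vector<Point2f> srcPoints, dstPoints;
    // ... fill srcPoints/dstPoints with matched point pairs ...
    // 3 pixels is a mid-range reprojection threshold for pixel coordinates
    Mat H = findHomography(Mat(srcPoints), Mat(dstPoints), CV_RANSAC, 3);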

@@ -3,15 +3,6 @@ Operations on Arrays

.. highlight:: cpp

-.. list-table:: **Arithmetical Operations**
-
-   * -
-     -
-   * - :ocv:func:`abs` (src)
-     - Computes an absolute value of each matrix element.
-   * - :ocv:func:`absdiff` (src1, src2, dst)
-     - Computes the per-element absolute difference between 2 arrays or between an array and a scalar.

abs
---
.. ocv:function:: MatExpr abs(const Mat& src)
@@ -77,9 +68,11 @@ The function ``absdiff`` computes:

add
-------
-.. ocv:function:: void add(InputArray src1, InputArray src2, OutputArray dst, InputArray mask=noArray(), int dtype=-1)

Computes the per-element sum of two arrays or an array and a scalar.

+.. ocv:function:: void add(InputArray src1, InputArray src2, OutputArray dst, InputArray mask=noArray(), int dtype=-1)
+.. ocv:pyfunction:: cv2.add(src1, src2 [, dst=None [, mask=None [, dtype=-1]]]) -> dst

:param src1: First source array or a scalar.

@@ -3,149 +3,164 @@ XML/YAML Persistence

.. highlight:: cpp

-.. index:: FileStorage
+XML/YAML file storages. Writing to a file storage.
+--------------------------------------------------
You can store and then restore various OpenCV data structures to/from XML (http://www.w3c.org/XML) or YAML
(http://www.yaml.org) formats. Also, it is possible to store and load arbitrarily complex data structures, which include OpenCV data structures, as well as primitive data types (integer and floating-point numbers and text strings) as their elements.
Use the following procedure to write something to XML or YAML:
#. Create a new :ocv:class:`FileStorage` and open it for writing. This can be done with a single call to the :ocv:func:`FileStorage::FileStorage` constructor that takes a filename, or you can use the default constructor and then call :ocv:func:`FileStorage::open`. The format of the file (XML or YAML) is determined from the filename extension (".xml" and ".yml"/".yaml", respectively).
#. Write all the data you want using the streaming operator ``<<``, just like in the case of STL streams.
#. Close the file using :ocv:func:`FileStorage::release`. The ``FileStorage`` destructor also closes the file.
Here is an example: ::
#include "opencv2/opencv.hpp"
#include <time.h>
using namespace cv;
int main(int, char** argv)
{
    FileStorage fs("test.yml", FileStorage::WRITE);

    fs << "frameCount" << 5;
    time_t rawtime; time(&rawtime);
    fs << "calibrationDate" << asctime(localtime(&rawtime));
    Mat cameraMatrix = (Mat_<double>(3,3) << 1000, 0, 320, 0, 1000, 240, 0, 0, 1);
    Mat distCoeffs = (Mat_<double>(5,1) << 0.1, 0.01, -0.001, 0, 0);
    fs << "cameraMatrix" << cameraMatrix << "distCoeffs" << distCoeffs;
    fs << "features" << "[";
    for( int i = 0; i < 3; i++ )
    {
        int x = rand() % 640;
        int y = rand() % 480;
        uchar lbp = rand() % 256;

        fs << "{:" << "x" << x << "y" << y << "lbp" << "[:";
        for( int j = 0; j < 8; j++ )
            fs << ((lbp >> j) & 1);
        fs << "]" << "}";
    }
    fs << "]";
    fs.release();
    return 0;
}
The sample above stores to YAML an integer, a text string (the calibration date), 2 matrices, and a custom structure "feature", which includes the feature coordinates and the LBP (local binary pattern) value. Here is the output of the sample:
.. code-block:: yaml

    %YAML:1.0
    frameCount: 5
    calibrationDate: "Fri Jun 17 14:09:29 2011\n"
    cameraMatrix: !!opencv-matrix
       rows: 3
       cols: 3
       dt: d
       data: [ 1000., 0., 320., 0., 1000., 240., 0., 0., 1. ]
    distCoeffs: !!opencv-matrix
       rows: 5
       cols: 1
       dt: d
       data: [ 1.0000000000000001e-01, 1.0000000000000000e-02,
           -1.0000000000000000e-03, 0., 0. ]
    features:
       - { x:167, y:49, lbp:[ 1, 0, 0, 1, 1, 0, 1, 1 ] }
       - { x:298, y:130, lbp:[ 0, 0, 0, 1, 0, 0, 1, 1 ] }
       - { x:344, y:158, lbp:[ 1, 1, 0, 0, 0, 0, 1, 0 ] }
As an exercise, you can replace ".yml" with ".xml" in the sample above and see what the corresponding XML file looks like.
Several things can be noted by looking at the sample code and the output:
*
The produced YAML (and XML) consists of heterogeneous collections that can be nested. There are 2 types of collections: named collections (mappings) and unnamed collections (sequences). In mappings, each element has a name and is accessed by name. This is similar to structures and ``std::map`` in C/C++ and dictionaries in Python. In sequences, elements do not have names; they are accessed by indices. This is similar to arrays and ``std::vector`` in C/C++ and to lists and tuples in Python. "Heterogeneous" means that elements of each single collection can have different types.
The top-level collection in YAML/XML is a mapping. Each matrix is stored as a mapping, and the matrix elements are stored as a sequence. Then, there is a sequence of features, where each feature is represented as a mapping, and the lbp value is stored in a nested sequence.
*
When you write to a mapping (a structure), you write the element name followed by its value. When you write to a sequence, you simply write the elements one by one. OpenCV data structures (such as ``cv::Mat``) are written in absolutely the same way as simple C data structures, using the ``<<`` operator.
*
To write a mapping, you first write the special string **"{"** to the storage, then write the elements as pairs (``fs << <element_name> << <element_value>``) and then write the closing **"}"**.
*
To write a sequence, you first write the special string **"["**, then write the elements, then write the closing **"]"**.
*
In YAML (but not XML), mappings and sequences can be written in a compact Python-like inline form. In the sample above, the matrix elements, as well as each feature (including its lbp value), are stored in such inline form. To store a mapping/sequence in a compact form, put ":" after the opening character, that is, use **"{:"** instead of **"{"** and **"[:"** instead of **"["**. When the data is written to XML, those extra ":" are ignored.
Reading data from a file storage.
---------------------------------
To read the previously written XML or YAML file, do the following:
#.
Open the file storage using the :ocv:func:`FileStorage::FileStorage` constructor or the :ocv:func:`FileStorage::open` method. In the current implementation, the whole file is parsed and the whole representation of the file storage is built in memory as a hierarchy of file nodes (see :ocv:class:`FileNode`).
#.
Read the data you are interested in. Use :ocv:func:`FileStorage::operator []`, :ocv:func:`FileNode::operator []` and/or :ocv:class:`FileNodeIterator`.
#.
Close the storage using :ocv:func:`FileStorage::release`.
Here is how to read the file created by the code sample above: ::
    FileStorage fs2("test.yml", FileStorage::READ);

    // first method: use (type) operator on FileNode.
    int frameCount = (int)fs2["frameCount"];

    std::string date;
    // second method: use FileNode::operator >>
    fs2["calibrationDate"] >> date;

    Mat cameraMatrix2, distCoeffs2;
    fs2["cameraMatrix"] >> cameraMatrix2;
    fs2["distCoeffs"] >> distCoeffs2;

    cout << "frameCount: " << frameCount << endl
         << "calibration date: " << date << endl
         << "camera matrix: " << cameraMatrix2 << endl
         << "distortion coeffs: " << distCoeffs2 << endl;

    FileNode features = fs2["features"];
    FileNodeIterator it = features.begin(), it_end = features.end();
    int idx = 0;
    std::vector<uchar> lbpval;

    // iterate through a sequence using FileNodeIterator
    for( ; it != it_end; ++it, idx++ )
    {
        cout << "feature #" << idx << ": ";
        cout << "x=" << (int)(*it)["x"] << ", y=" << (int)(*it)["y"] << ", lbp: (";
        // you can also easily read numerical arrays using FileNode >> std::vector operator.
        (*it)["lbp"] >> lbpval;
        for( int i = 0; i < (int)lbpval.size(); i++ )
            cout << " " << (int)lbpval[i];
        cout << ")" << endl;
    }
    fs2.release();
FileStorage
-----------
.. ocv:class:: FileStorage

-XML/YAML file storage class. ::
+XML/YAML file storage class that encapsulates all the information necessary for writing or reading data to/from a file. ::
    class FileStorage
    {
    public:
        enum { READ=0, WRITE=1, APPEND=2 };
        enum { UNDEFINED=0, VALUE_EXPECTED=1, NAME_EXPECTED=2, INSIDE_MAP=4 };

        // the default constructor
        FileStorage();
        // the constructor that opens a file for reading
        // (flags=FileStorage::READ) or writing (flags=FileStorage::WRITE)
        FileStorage(const string& filename, int flags);
        // wraps the already opened CvFileStorage*
        FileStorage(CvFileStorage* fs);
        // the destructor; closes the file if needed
        virtual ~FileStorage();

        // opens the specified file for reading (flags=FileStorage::READ)
        // or writing (flags=FileStorage::WRITE)
        virtual bool open(const string& filename, int flags);
        // checks if the storage is opened
        virtual bool isOpened() const;
        // closes the file
        virtual void release();

        // returns the first top-level node
        FileNode getFirstTopLevelNode() const;
        // returns the root file node
        // (it is the parent of the first top-level node)
        FileNode root(int streamidx=0) const;
        // returns the top-level node by name
        FileNode operator[](const string& nodename) const;
        FileNode operator[](const char* nodename) const;

        // returns the underlying CvFileStorage*
        CvFileStorage* operator *() { return fs; }
        const CvFileStorage* operator *() const { return fs; }

        // writes the certain number of elements of the specified format
        // (see DataType) without any headers
        void writeRaw( const string& fmt, const uchar* vec, size_t len );
        // writes an old-style object (CvMat, CvMatND, etc.)
        void writeObj( const string& name, const void* obj );

        // returns the default object name from the filename
        // (used by cvSave() with the default object name, etc.)
        static string getDefaultObjectName(const string& filename);

        Ptr<CvFileStorage> fs;
        string elname;
        vector<char> structs;
        int state;
    };
.. index:: FileNode
FileNode
--------
.. ocv:class:: FileNode

-XML/YAML file node class. ::
+The class ``FileNode`` represents each element of the file storage, be it a matrix, a matrix element, or a top-level node containing all the file content. That is, a file node may contain either a single value (an integer, a floating-point value, or a text string), or it can be a sequence of other file nodes, or it can be a mapping. The type of the file node can be determined using the :ocv:func:`FileNode::type` method. ::
    class CV_EXPORTS FileNode
    {
    public:
        enum { NONE=0, INT=1, REAL=2, FLOAT=REAL, STR=3,
               STRING=STR, REF=4, SEQ=5, MAP=6, TYPE_MASK=7,
               FLOW=8, USER=16, EMPTY=32, NAMED=64 };
        FileNode();
        FileNode(const CvFileStorage* fs, const CvFileNode* node);
        FileNode(const FileNode& node);
        FileNode operator[](const string& nodename) const;
        FileNode operator[](const char* nodename) const;
        FileNode operator[](int i) const;
        int type() const;
        int rawDataSize(const string& fmt) const;
        bool empty() const;
        bool isNone() const;
        bool isSeq() const;
        bool isMap() const;
        bool isInt() const;
        bool isReal() const;
        bool isString() const;
        bool isNamed() const;
        string name() const;
        size_t size() const;
        operator int() const;
        operator float() const;
        operator double() const;
        operator string() const;

        FileNodeIterator begin() const;
        FileNodeIterator end() const;

        void readRaw( const string& fmt, uchar* vec, size_t len ) const;
        void* readObj() const;

        // do not use wrapper pointer classes for better efficiency
        const CvFileStorage* fs;
        const CvFileNode* node;
    };
.. index:: FileNodeIterator
FileNodeIterator
----------------
.. ocv:class:: FileNodeIterator

-XML/YAML file node iterator class. ::
+The class ``FileNodeIterator`` is used to iterate through sequences and mappings. It follows the standard STL notation, with ``node.begin()`` and ``node.end()`` denoting the beginning and the end of the sequence stored in ``node``. See the data reading sample at the beginning of the section. ::
    class CV_EXPORTS FileNodeIterator
    {
    public:
        FileNodeIterator();
        FileNodeIterator(const CvFileStorage* fs,
                         const CvFileNode* node, size_t ofs=0);
        FileNodeIterator(const FileNodeIterator& it);

        FileNode operator *() const;
        FileNode operator ->() const;

        FileNodeIterator& operator ++();
        FileNodeIterator operator ++(int);
        FileNodeIterator& operator --();
        FileNodeIterator operator --(int);
        FileNodeIterator& operator += (int);
        FileNodeIterator& operator -= (int);

        FileNodeIterator& readRaw( const string& fmt, uchar* vec,
                                   size_t maxCount=(size_t)INT_MAX );

        const CvFileStorage* fs;
        const CvFileNode* container;
        CvSeqReader reader;
        size_t remaining;
    };
..

@@ -48,8 +48,8 @@
#define __OPENCV_VERSION_HPP__

#define CV_MAJOR_VERSION 2
-#define CV_MINOR_VERSION 2
-#define CV_SUBMINOR_VERSION 9
+#define CV_MINOR_VERSION 3
+#define CV_SUBMINOR_VERSION 0

#define CVAUX_STR_EXP(__A) #__A
#define CVAUX_STR(__A) CVAUX_STR_EXP(__A)
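The stringizing pair above is what turns these numeric components into a printable version string. A minimal sketch of the mechanism (``MY_CV_VERSION`` is an illustrative name, not part of the header): ::

    /* two-step expansion: CVAUX_STR expands its macro argument first,
       then CVAUX_STR_EXP turns the expanded number into a string literal */
    #define MY_CV_VERSION CVAUX_STR(CV_MAJOR_VERSION) "." \
                          CVAUX_STR(CV_MINOR_VERSION) "." \
                          CVAUX_STR(CV_SUBMINOR_VERSION)   /* yields "2.3.0" */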

@@ -5,9 +5,9 @@ Common Interfaces of Descriptor Extractors

Extractors of keypoint descriptors in OpenCV have wrappers with a common interface that enables you to easily switch
between different algorithms solving the same problem. This section is devoted to computing descriptors
-that are represented as vectors in a multidimensional space. All objects that implement the ``vector``
+represented as vectors in a multidimensional space. All objects that implement the ``vector``
descriptor extractors inherit the
-:ref:`DescriptorExtractor` interface.
+:ocv:class:`DescriptorExtractor` interface.

.. index:: DescriptorExtractor

@@ -15,7 +15,7 @@ DescriptorExtractor
-------------------
.. ocv:class:: DescriptorExtractor

-Abstract base class for computing descriptors for image keypoints ::
+Abstract base class for computing descriptors for image keypoints. ::

    class CV_EXPORTS DescriptorExtractor
    {

@@ -45,7 +45,7 @@ dense, fixed-dimension vector of a basic type. Most descriptors
follow this pattern as it simplifies computing
distances between descriptors. Therefore, a collection of
descriptors is represented as
-:ref:`Mat` , where each row is a keypoint descriptor.
+:ocv:class:`Mat`, where each row is a keypoint descriptor.

.. index:: DescriptorExtractor::compute

@@ -57,9 +57,9 @@ DescriptorExtractor::compute
:param image: Image.

-:param keypoints: Keypoints. Keypoints for which a descriptor cannot be computed are removed. Somtimes new keypoints can be added, eg SIFT duplicates keypoint with several dominant orientations (for each orientation).
+:param keypoints: Keypoints. Keypoints for which a descriptor cannot be computed are removed. Sometimes new keypoints can be added, for example: ``SIFT`` duplicates a keypoint with several dominant orientations (one for each orientation).

-:param descriptors: Descriptors. Row i is the descriptor for keypoint i.
+:param descriptors: Descriptors. Row ``i`` is the descriptor for keypoint ``i``.

.. ocv:function:: void DescriptorExtractor::compute( const vector<Mat>& images, vector<vector<KeyPoint> >& keypoints, vector<Mat>& descriptors ) const
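For illustration, a minimal sketch of the typical detect-then-compute flow (the image source and detector/extractor names are assumptions): ::

    Mat img = imread("scene.png", 0);   // hypothetical grayscale input
    vector<KeyPoint> keypoints;
    Ptr<FeatureDetector> detector = FeatureDetector::create("SURF");
    detector->detect(img, keypoints);

    Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("SURF");
    Mat descriptors;                    // row i = descriptor of keypoint i
    extractor->compute(img, keypoints, descriptors);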
@@ -103,13 +103,13 @@ DescriptorExtractor::create
The current implementation supports the following types of a descriptor extractor:

-* ``"SIFT"`` -- :ref:`SiftDescriptorExtractor`
-* ``"SURF"`` -- :ref:`SurfDescriptorExtractor`
-* ``"ORB"`` -- :ref:`OrbDescriptorExtractor`
-* ``"BRIEF"`` -- :ref:`BriefDescriptorExtractor`
+* ``"SIFT"`` -- :ocv:class:`SiftDescriptorExtractor`
+* ``"SURF"`` -- :ocv:class:`SurfDescriptorExtractor`
+* ``"ORB"`` -- :ocv:class:`OrbDescriptorExtractor`
+* ``"BRIEF"`` -- :ocv:class:`BriefDescriptorExtractor`

A combined format is also supported: descriptor extractor adapter name ( ``"Opponent"`` --
-:ref:`OpponentColorDescriptorExtractor` ) + descriptor extractor name (see above),
+:ocv:class:`OpponentColorDescriptorExtractor` ) + descriptor extractor name (see above),
for example: ``"OpponentSIFT"`` .

.. index:: SiftDescriptorExtractor

@@ -121,7 +121,7 @@ SiftDescriptorExtractor
.. ocv:class:: SiftDescriptorExtractor

Wrapping class for computing descriptors by using the
-:ref:`SIFT` class ::
+:ocv:class:`SIFT` class. ::

    class SiftDescriptorExtractor : public DescriptorExtractor
    {

@@ -153,7 +153,7 @@ SurfDescriptorExtractor
.. ocv:class:: SurfDescriptorExtractor

Wrapping class for computing descriptors by using the
-:ref:`SURF` class ::
+:ocv:class:`SURF` class. ::

    class SurfDescriptorExtractor : public DescriptorExtractor
    {

@@ -179,7 +179,7 @@ OrbDescriptorExtractor
.. ocv:class:: OrbDescriptorExtractor

Wrapping class for computing descriptors by using the
-:ref:`ORB` class ::
+:ocv:class:`ORB` class. ::

    template<typename T>
    class OrbDescriptorExtractor : public DescriptorExtractor

@@ -203,7 +203,7 @@ CalonderDescriptorExtractor
.. ocv:class:: CalonderDescriptorExtractor

Wrapping class for computing descriptors by using the
-:ref:`RTreeClassifier` class ::
+:ocv:class:`RTreeClassifier` class. ::

    template<typename T>
    class CalonderDescriptorExtractor : public DescriptorExtractor

@@ -258,7 +258,7 @@ BriefDescriptorExtractor
Class for computing BRIEF descriptors described in a paper of Calonder M., Lepetit V.,
Strecha C., Fua P. *BRIEF: Binary Robust Independent Elementary Features* ,
-11th European Conference on Computer Vision (ECCV), Heraklion, Crete. LNCS Springer, September 2010 ::
+11th European Conference on Computer Vision (ECCV), Heraklion, Crete. LNCS Springer, September 2010. ::

    class BriefDescriptorExtractor : public DescriptorExtractor
    {

@@ -6,8 +6,8 @@ Common Interfaces of Descriptor Matchers

Matchers of keypoint descriptors in OpenCV have wrappers with a common interface that enables you to easily switch
between different algorithms solving the same problem. This section is devoted to matching descriptors
that are represented as vectors in a multidimensional space. All objects that implement ``vector``
-descriptor matchers inherit
-:ref:`DescriptorMatcher` interface.
+descriptor matchers inherit the
+:ocv:class:`DescriptorMatcher` interface.

.. index:: DMatch

@@ -18,7 +18,7 @@ DMatch
.. ocv:class:: DMatch

Class for matching keypoint descriptors: query descriptor index,
-train descriptor index, train image index, and distance between descriptors ::
+train descriptor index, train image index, and distance between descriptors. ::

    struct DMatch
    {

@@ -48,7 +48,7 @@ train descriptor index, train image index, and distance between descriptors ::

DescriptorMatcher
-----------------
-.. c:type:: DescriptorMatcher
+.. ocv:class:: DescriptorMatcher

Abstract base class for matching keypoint descriptors. It has two groups
of match methods: for matching descriptors of an image with another image or

@@ -198,7 +198,7 @@ DescriptorMatcher::knnMatch
:param k: Count of best matches found per each query descriptor, or less if a query descriptor has less than ``k`` possible matches in total.

-:param compactResult: Parameter that is used when the mask (or masks) is not empty. If ``compactResult`` is false, the ``matches`` vector has the same size as ``queryDescriptors`` rows. If ``compactResult`` is true, the ``matches`` vector does not contain matches for fully masked-out query descriptors.
+:param compactResult: Parameter used when the mask (or masks) is not empty. If ``compactResult`` is false, the ``matches`` vector has the same size as ``queryDescriptors`` rows. If ``compactResult`` is true, the ``matches`` vector does not contain matches for fully masked-out query descriptors.

These extended variants of :ocv:func:`DescriptorMatcher::match` methods find several best matches for each query descriptor. The matches are returned in order of increasing distance. See :ocv:func:`DescriptorMatcher::match` for the details about query and train descriptors.
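For illustration, a hedged sketch of the common ratio test built on ``knnMatch`` (the descriptor matrices and the 0.6 ratio are assumptions): ::

    Mat queryDescriptors, trainDescriptors;   // assumed filled by an extractor
    BruteForceMatcher<L2<float> > matcher;
    vector<vector<DMatch> > knnMatches;
    matcher.knnMatch(queryDescriptors, trainDescriptors, knnMatches, 2);

    vector<DMatch> good;
    for( size_t i = 0; i < knnMatches.size(); i++ )
        // keep a match only if it is clearly better than the runner-up
        if( knnMatches[i].size() == 2 &&
            knnMatches[i][0].distance < 0.6f * knnMatches[i][1].distance )
            good.push_back(knnMatches[i][0]);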
@@ -220,9 +220,9 @@ DescriptorMatcher::radiusMatch
:param masks: Set of masks. Each ``masks[i]`` specifies permissible matches between the input query descriptors and stored train descriptors from the i-th image ``trainDescCollection[i]``.

-:param matches: The found matches.
+:param matches: Found matches.

-:param compactResult: Parameter that is used when the mask (or masks) is not empty. If ``compactResult`` is false, the ``matches`` vector has the same size as ``queryDescriptors`` rows. If ``compactResult`` is true, the ``matches`` vector does not contain matches for fully masked-out query descriptors.
+:param compactResult: Parameter used when the mask (or masks) is not empty. If ``compactResult`` is false, the ``matches`` vector has the same size as ``queryDescriptors`` rows. If ``compactResult`` is true, the ``matches`` vector does not contain matches for fully masked-out query descriptors.

:param maxDistance: Threshold for the distance between matched descriptors.

@@ -265,7 +265,7 @@ DescriptorMatcher::create

BruteForceMatcher
-----------------
-.. c:type:: BruteForceMatcher
+.. ocv:class:: BruteForceMatcher

Brute-force descriptor matcher. For each descriptor in the first set, this matcher finds the closest descriptor in the second set by trying each one. This descriptor matcher supports masking permissible matches of descriptor sets. ::

@@ -351,9 +351,9 @@ For efficiency, ``BruteForceMatcher`` is used as a template parameterized with t

FlannBasedMatcher
-----------------
-.. c:type:: FlannBasedMatcher
+.. ocv:class:: FlannBasedMatcher

-Flann-based descriptor matcher. This matcher trains :ref:`flann::Index` on a train descriptor collection and calls its nearest search methods to find the best matches. So, this matcher may be faster when matching a large train collection than the brute force matcher. ``FlannBasedMatcher`` does not support masking permissible matches of descriptor sets because :ocv:func:`flann::Index` does not support this. ::
+Flann-based descriptor matcher. This matcher trains :ocv:func:`flann::Index` on a train descriptor collection and calls its nearest search methods to find the best matches. So, this matcher may be faster when matching a large train collection than the brute force matcher. ``FlannBasedMatcher`` does not support masking permissible matches of descriptor sets because ``flann::Index`` does not support this. ::

    class FlannBasedMatcher : public DescriptorMatcher
    {

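For illustration, a hedged sketch of matching two descriptor sets with ``FlannBasedMatcher`` (the descriptors are assumed to be computed elsewhere and to be of type ``CV_32F``, which FLANN requires): ::

    Mat descriptors1, descriptors2;   // assumed filled by a DescriptorExtractor
    FlannBasedMatcher matcher;
    vector<DMatch> matches;
    matcher.match(descriptors1, descriptors2, matches);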
@@ -6,13 +6,13 @@ Common Interfaces of Feature Detectors

Feature detectors in OpenCV have wrappers with a common interface that enables you to easily switch
between different algorithms solving the same problem. All objects that implement keypoint detectors
inherit the
-:ref:`FeatureDetector` interface.
+:ocv:class:`FeatureDetector` interface.

KeyPoint
--------
.. ocv:class:: KeyPoint

-Data structure for salient point detectors ::
+Data structure for salient point detectors. ::

    class KeyPoint
    {

@@ -68,7 +68,7 @@ FeatureDetector
---------------
.. ocv:class:: FeatureDetector

-Abstract base class for 2D image feature detectors ::
+Abstract base class for 2D image feature detectors. ::

    class CV_EXPORTS FeatureDetector
    {

@@ -137,18 +137,18 @@ FeatureDetector::create
The following detector types are supported:

-* ``"FAST"`` -- :ref:`FastFeatureDetector`
-* ``"STAR"`` -- :ref:`StarFeatureDetector`
-* ``"SIFT"`` -- :ref:`SiftFeatureDetector`
-* ``"SURF"`` -- :ref:`SurfFeatureDetector`
-* ``"ORB"`` -- :ref:`OrbFeatureDetector`
-* ``"MSER"`` -- :ref:`MserFeatureDetector`
-* ``"GFTT"`` -- :ref:`GfttFeatureDetector`
-* ``"HARRIS"`` -- :ref:`HarrisFeatureDetector`
+* ``"FAST"`` -- :ocv:class:`FastFeatureDetector`
+* ``"STAR"`` -- :ocv:class:`StarFeatureDetector`
+* ``"SIFT"`` -- :ocv:class:`SiftFeatureDetector`
+* ``"SURF"`` -- :ocv:class:`SurfFeatureDetector`
+* ``"ORB"`` -- :ocv:class:`OrbFeatureDetector`
+* ``"MSER"`` -- :ocv:class:`MserFeatureDetector`
+* ``"GFTT"`` -- :ocv:class:`GfttFeatureDetector`
+* ``"HARRIS"`` -- :ocv:class:`HarrisFeatureDetector`

Also a combined format is supported: feature detector adapter name ( ``"Grid"`` --
-:ref:`GridAdaptedFeatureDetector`, ``"Pyramid"`` --
-:ref:`PyramidAdaptedFeatureDetector` ) + feature detector name (see above),
+:ocv:class:`GridAdaptedFeatureDetector`, ``"Pyramid"`` --
+:ocv:class:`PyramidAdaptedFeatureDetector` ) + feature detector name (see above),
for example: ``"GridFAST"``, ``"PyramidSTAR"`` .

FastFeatureDetector
@@ -156,7 +156,7 @@ FastFeatureDetector
.. ocv:class:: FastFeatureDetector

Wrapping class for feature detection using the
-:ref:`FAST` method ::
+:ocv:func:`FAST` method. ::

    class FastFeatureDetector : public FeatureDetector
    {

@@ -173,7 +173,7 @@ GoodFeaturesToTrackDetector
.. ocv:class:: GoodFeaturesToTrackDetector

Wrapping class for feature detection using the
-:ref:`goodFeaturesToTrack` function ::
+:ocv:func:`goodFeaturesToTrack` function. ::

    class GoodFeaturesToTrackDetector : public FeatureDetector
    {

@@ -211,7 +211,7 @@ MserFeatureDetector
.. ocv:class:: MserFeatureDetector

Wrapping class for feature detection using the
-:ref:`MSER` class ::
+:ocv:class:`MSER` class. ::

    class MserFeatureDetector : public FeatureDetector
    {

@@ -233,7 +233,7 @@ StarFeatureDetector
.. ocv:class:: StarFeatureDetector

Wrapping class for feature detection using the
-:ref:`StarDetector` class ::
+:ocv:class:`StarDetector` class. ::

    class StarFeatureDetector : public FeatureDetector
    {

@@ -252,7 +252,7 @@ SiftFeatureDetector
.. ocv:class:: SiftFeatureDetector

Wrapping class for feature detection using the
-:ref:`SIFT` class ::
+:ocv:class:`SIFT` class. ::

    class SiftFeatureDetector : public FeatureDetector
    {

@@ -276,7 +276,7 @@ SurfFeatureDetector
.. ocv:class:: SurfFeatureDetector

Wrapping class for feature detection using the
-:ref:`SURF` class ::
+:ocv:class:`SURF` class. ::

    class SurfFeatureDetector : public FeatureDetector
    {

@@ -295,7 +295,7 @@ OrbFeatureDetector
.. ocv:class:: OrbFeatureDetector

Wrapping class for feature detection using the
-:ref:`ORB` class ::
+:ocv:class:`ORB` class. ::

    class OrbFeatureDetector : public FeatureDetector
    {

@@ -311,7 +311,7 @@ SimpleBlobDetector
-------------------
.. ocv:class:: SimpleBlobDetector

-Class for extracting blobs from an image ::
+Class for extracting blobs from an image. ::

    class SimpleBlobDetector : public FeatureDetector
    {
@@ -347,19 +347,27 @@ Class for extracting blobs from an image ::
        ...
    };

-The class implements a simple algorithm for extracting blobs from an image. It converts the source image to binary images by applying thresholding with several thresholds from ``minThreshold`` (inclusive) to ``maxThreshold`` (exclusive) with distance ``thresholdStep`` between neighboring thresholds. Then connected components are extracted from every binary image by :ocv:func:`findContours` and their centers are calculated. Centers from several binary images are grouped by their coordinates. Close centers form one group that corresponds to one blob and this is controled by the ``minDistBetweenBlobs`` parameter. Then final centers of blobs and their radiuses are estimated from these groups and returned as locations and sizes of keypoints.
+The class implements a simple algorithm for extracting blobs from an image:

+#. Convert the source image to binary images by applying thresholding with several thresholds from ``minThreshold`` (inclusive) to ``maxThreshold`` (exclusive) with distance ``thresholdStep`` between neighboring thresholds.
+#. Extract connected components from every binary image by :ocv:func:`findContours` and calculate their centers.
+#. Group centers from several binary images by their coordinates. Close centers form one group that corresponds to one blob, which is controlled by the ``minDistBetweenBlobs`` parameter.
+#. From the groups, estimate the final centers of blobs and their radii, and return them as the locations and sizes of keypoints.

This class performs several filtrations of returned blobs. You should set ``filterBy*`` to true/false to turn on/off the corresponding filtration. Available filtrations:

-* By color. This filter compares the intensity of a binary image at the center of a blob to ``blobColor``. If they differ then the blob is filtered out. Use ``blobColor = 0`` to extract dark blobs and ``blobColor = 255`` to extract light blobs.
-* By area. Extracted blobs will have area between ``minArea`` (inclusive) and ``maxArea`` (exclusive).
-* By circularity. Extracted blobs will have circularity between ``minCircularity`` (inclusive) and ``maxCircularity`` (exclusive).
-* By ratio of the minimum inertia to maximum inertia. Extracted blobs will have this ratio between ``minInertiaRatio`` (inclusive) and ``maxInertiaRatio`` (exclusive).
-* By convexity. Extracted blobs will have convexity between ``minConvexity`` (inclusive) and ``maxConvexity`` (exclusive).
+* **By color**. This filter compares the intensity of a binary image at the center of a blob to ``blobColor``. If they differ, the blob is filtered out. Use ``blobColor = 0`` to extract dark blobs and ``blobColor = 255`` to extract light blobs.
+* **By area**. Extracted blobs have an area between ``minArea`` (inclusive) and ``maxArea`` (exclusive).
+* **By circularity**. Extracted blobs have circularity (:math:`\frac{4 \pi \cdot Area}{perimeter^2}`) between ``minCircularity`` (inclusive) and ``maxCircularity`` (exclusive).
+* **By ratio of the minimum inertia to maximum inertia**. Extracted blobs have this ratio between ``minInertiaRatio`` (inclusive) and ``maxInertiaRatio`` (exclusive).
+* **By convexity**. Extracted blobs have convexity (area / area of blob convex hull) between ``minConvexity`` (inclusive) and ``maxConvexity`` (exclusive).

Default values of parameters are tuned to extract dark circular blobs.
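For illustration, a hedged sketch of overriding a few of these defaults (the parameter values here are arbitrary examples): ::

    SimpleBlobDetector::Params params;
    params.filterByColor = true;
    params.blobColor = 255;            // extract light blobs instead of dark ones
    params.filterByArea = true;
    params.minArea = 50;
    params.maxArea = 5000;
    params.filterByCircularity = false;

    Mat image = imread("blobs.png", 0); // hypothetical grayscale input
    SimpleBlobDetector detector(params);
    vector<KeyPoint> blobs;
    detector.detect(image, blobs);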
@@ -368,7 +376,7 @@ GridAdaptedFeatureDetector
--------------------------
.. ocv:class:: GridAdaptedFeatureDetector

-Class adapting a detector to partition the source image into a grid and detect points in each cell ::
+Class adapting a detector to partition the source image into a grid and detect points in each cell. ::

    class GridAdaptedFeatureDetector : public FeatureDetector
    {

@@ -411,7 +419,7 @@ DynamicAdaptedFeatureDetector
-----------------------------
.. ocv:class:: DynamicAdaptedFeatureDetector

-Adaptively adjusting detector that iteratively detects features until the desired number is found ::
+Adaptively adjusting detector that iteratively detects features until the desired number is found. ::

    class DynamicAdaptedFeatureDetector: public FeatureDetector
    {

@@ -426,7 +434,7 @@ used for the last detection. In this case, the detector may be used for consiste
of keypoints in a set of temporally related images, such as video streams or
panorama series.

-``DynamicAdaptedFeatureDetector`` uses another detector such as FAST or SURF to do the dirty work,
+``DynamicAdaptedFeatureDetector`` uses another detector, such as FAST or SURF, to do the dirty work,
with the help of ``AdjusterAdapter`` .
If the detected number of features is not large enough,
``AdjusterAdapter`` adjusts the detection parameters so that the next detection

@@ -448,25 +456,27 @@ Example of creating ``DynamicAdaptedFeatureDetector`` : ::

    Ptr<FeatureDetector> detector(new DynamicAdaptedFeatureDetector (100, 110, 10,
                                  new FastAdjuster(20,true)));
DynamicAdaptedFeatureDetector::DynamicAdaptedFeatureDetector
----------------------------------------------------------------
.. ocv:function:: DynamicAdaptedFeatureDetector::DynamicAdaptedFeatureDetector( const Ptr<AdjusterAdapter>& adjuster, int min_features, int max_features, int max_iters )

Constructs the class.

-:param adjuster: :ref:`AdjusterAdapter` that detects features and adjusts parameters.
+:param adjuster: :ocv:class:`AdjusterAdapter` that detects features and adjusts parameters.

:param min_features: Minimum desired number of features.

:param max_features: Maximum desired number of features.

-:param max_iters: Maximum number of times to try adjusting the feature detector parameters. For :ref:`FastAdjuster` , this number can be high, but with ``Star`` or ``Surf`` many iterations can be time-comsuming. At each iteration the detector is rerun.
+:param max_iters: Maximum number of times to try adjusting the feature detector parameters. For :ocv:class:`FastAdjuster`, this number can be high, but with ``Star`` or ``Surf`` many iterations can be time-consuming. At each iteration, the detector is rerun.
AdjusterAdapter
---------------
.. ocv:class:: AdjusterAdapter

-Class providing an interface for adjusting parameters of a feature detector. This interface is used by :ref:`DynamicAdaptedFeatureDetector` . It is a wrapper for :ref:`FeatureDetector` that enables adjusting parameters after feature detection. ::
+Class providing an interface for adjusting parameters of a feature detector. This interface is used by :ocv:class:`DynamicAdaptedFeatureDetector` . It is a wrapper for :ocv:class:`FeatureDetector` that enables adjusting parameters after feature detection. ::

    class AdjusterAdapter: public FeatureDetector
    {

@@ -481,9 +491,9 @@ Class providing an interface for adjusting parameters of a feature detector. Thi
See
-:ref:`FastAdjuster`,
-:ref:`StarAdjuster`,
-:ref:`SurfAdjuster` for concrete implementations.
+:ocv:class:`FastAdjuster`,
+:ocv:class:`StarAdjuster`, and
+:ocv:class:`SurfAdjuster` for concrete implementations.

AdjusterAdapter::tooFew
---------------------------

@@ -537,13 +547,13 @@ AdjusterAdapter::create
-------------------------
.. ocv:function:: Ptr<AdjusterAdapter> AdjusterAdapter::create( const string& detectorType )

-Creates adjuster adapter by name ``detectorType``. The detector name is the same as in :ocv:func:`FeatureDetector::create`, but now supported ``"FAST"``, ``"STAR"`` and ``"SURF"`` only.
+Creates an adjuster adapter by name ``detectorType``. The detector name is the same as in :ocv:func:`FeatureDetector::create`, but only ``"FAST"``, ``"STAR"``, and ``"SURF"`` are currently supported.

FastAdjuster
------------
.. ocv:class:: FastAdjuster

-:ref:`AdjusterAdapter` for :ref:`FastFeatureDetector`. This class decreases or increases the threshold value by 1. ::
+:ocv:class:`AdjusterAdapter` for :ocv:class:`FastFeatureDetector`. This class decreases or increases the threshold value by 1. ::

    class FastAdjuster: public AdjusterAdapter
    {

@@ -556,7 +566,7 @@ StarAdjuster
------------
.. ocv:class:: StarAdjuster

-:ref:`AdjusterAdapter` for :ref:`StarFeatureDetector`. This class adjusts the ``responseThreshhold`` of ``StarFeatureDetector``. ::
+:ocv:class:`AdjusterAdapter` for :ocv:class:`StarFeatureDetector`. This class adjusts the ``responseThreshold`` of ``StarFeatureDetector``. ::

    class StarAdjuster: public AdjusterAdapter
    {

@@ -568,7 +578,7 @@ SurfAdjuster
------------
.. ocv:class:: SurfAdjuster

-:ref:`AdjusterAdapter` for :ref:`SurfFeatureDetector`. This class adjusts the ``hessianThreshold`` of ``SurfFeatureDetector``. ::
+:ocv:class:`AdjusterAdapter` for :ocv:class:`SurfFeatureDetector`. This class adjusts the ``hessianThreshold`` of ``SurfFeatureDetector``. ::

    class SurfAdjuster: public AdjusterAdapter
    {

@@ -580,7 +590,7 @@ FeatureDetector
---------------
.. ocv:class:: FeatureDetector

-Abstract base class for 2D image feature detectors ::
+Abstract base class for 2D image feature detectors. ::

    class CV_EXPORTS FeatureDetector
    {

@@ -17,7 +17,7 @@ GenericDescriptorMatcher
------------------------
.. ocv:class:: GenericDescriptorMatcher

-Abstract interface for extracting and matching a keypoint descriptor. There are also :ref:`DescriptorExtractor` and :ref:`DescriptorMatcher` for these purposes but their interfaces are intended for descriptors represented as vectors in a multidimensional space. ``GenericDescriptorMatcher`` is a more generic interface for descriptors. :ref:`DescriptorMatcher` and ``GenericDescriptorMatcher`` have two groups of match methods: for matching keypoints of an image with another image or with an image set. ::
+Abstract interface for extracting and matching a keypoint descriptor. There are also :ocv:class:`DescriptorExtractor` and :ocv:class:`DescriptorMatcher` for these purposes but their interfaces are intended for descriptors represented as vectors in a multidimensional space. ``GenericDescriptorMatcher`` is a more generic interface for descriptors. ``DescriptorMatcher`` and ``GenericDescriptorMatcher`` have two groups of match methods: for matching keypoints of an image with another image or with an image set. ::

    class GenericDescriptorMatcher
    {

@@ -129,7 +129,7 @@ GenericDescriptorMatcher::isMaskSupported
---------------------------------------------
.. ocv:function:: bool GenericDescriptorMatcher::isMaskSupported()

-Returns true if a generic descriptor matcher supports masking permissible matches.
+Returns ``true`` if a generic descriptor matcher supports masking permissible matches.

.. index:: GenericDescriptorMatcher::classify

@@ -139,7 +139,7 @@ GenericDescriptorMatcher::classify
.. ocv:function:: void GenericDescriptorMatcher::classify( const Mat& queryImage, vector<KeyPoint>& queryKeypoints )

-Classify keypoints from a query set.
+Classifies keypoints from a query set.

:param queryImage: Query image.

@@ -149,15 +149,15 @@ GenericDescriptorMatcher::classify
:param trainKeypoints: Keypoints from a train image.

The method classifies each keypoint from a query set. The first variant of the method takes a train image and its keypoints as an input argument. The second variant uses the internally stored training collection that can be built using the ``GenericDescriptorMatcher::add`` method.

The methods do the following:

#.
   Call the ``GenericDescriptorMatcher::match`` method to find correspondence between the query set and the training set.

#.
-   Sey the ``class_id`` field of each keypoint from the query set to ``class_id`` of the corresponding keypoint from the training set.
+   Set the ``class_id`` field of each keypoint from the query set to ``class_id`` of the corresponding keypoint from the training set.

.. index:: GenericDescriptorMatcher::match

@@ -167,7 +167,7 @@ GenericDescriptorMatcher::match
.. ocv:function:: void GenericDescriptorMatcher::match( const Mat& queryImage, vector<KeyPoint>& queryKeypoints, vector<DMatch>& matches, const vector<Mat>& masks=vector<Mat>() )

-Find the best match in the training set for each keypoint from the query set.
+Finds the best match in the training set for each keypoint from the query set.

:param queryImage: Query image.

@@ -179,11 +179,11 @@ GenericDescriptorMatcher::match
:param matches: Matches. If a query descriptor (keypoint) is masked out in ``mask``, no match is added for this descriptor. So, ``matches`` size may be smaller than the query keypoints count.

-:param mask: Mask specifying permissible matches between input query and train keypoints.
+:param mask: Mask specifying permissible matches between an input query and train keypoints.

:param masks: Set of masks. Each ``masks[i]`` specifies permissible matches between input query keypoints and stored train keypoints from the i-th image.

-The methods find the best match for each query keypoint. In the first variant of the method, a train image and its keypoints are the input arguments. In the second variant, query keypoints are matched to the internally stored training collection that can be built using ``GenericDescriptorMatcher::add`` method. Optional mask (or masks) can be passed to specify which query and training descriptors can be matched. Namely, ``queryKeypoints[i]`` can be matched with ``trainKeypoints[j]`` only if ``mask.at<uchar>(i,j)`` is non-zero.
+The methods find the best match for each query keypoint. In the first variant of the method, a train image and its keypoints are the input arguments. In the second variant, query keypoints are matched to the internally stored training collection that can be built using the ``GenericDescriptorMatcher::add`` method. An optional mask (or masks) can be passed to specify which query and training descriptors can be matched. Namely, ``queryKeypoints[i]`` can be matched with ``trainKeypoints[j]`` only if ``mask.at<uchar>(i,j)`` is non-zero.

.. index:: GenericDescriptorMatcher::knnMatch

@@ -193,7 +193,7 @@ GenericDescriptorMatcher::knnMatch
.. ocv:function:: void GenericDescriptorMatcher::knnMatch( const Mat& queryImage, vector<KeyPoint>& queryKeypoints, vector<vector<DMatch> >& matches, int k, const vector<Mat>& masks=vector<Mat>(), bool compactResult=false )

-Find the ``k`` best matches for each query keypoint.
+Finds the ``k`` best matches for each query keypoint.

The methods are extended variants of ``GenericDescriptorMatch::match``. The parameters are similar, and the semantics are similar to ``DescriptorMatcher::knnMatch``. But this class does not require explicitly computed keypoint descriptors.

@@ -205,9 +205,9 @@ GenericDescriptorMatcher::radiusMatch
.. ocv:function:: void GenericDescriptorMatcher::radiusMatch( const Mat& queryImage, vector<KeyPoint>& queryKeypoints, vector<vector<DMatch> >& matches, float maxDistance, const vector<Mat>& masks=vector<Mat>(), bool compactResult=false )

-For each query keypoint, find the training keypoints not farther than the specified distance.
+For each query keypoint, finds the training keypoints not farther than the specified distance.

The methods are similar to ``DescriptorMatcher::radiusMatch``. But this class does not require explicitly computed keypoint descriptors.
.. index:: GenericDescriptorMatcher::read

@@ -246,7 +246,7 @@ OneWayDescriptorMatcher
.. ocv:class:: OneWayDescriptorMatcher

Wrapping class for computing, matching, and classifying descriptors using the
-:ref:`OneWayDescriptorBase` class ::
+:ocv:class:`OneWayDescriptorBase` class. ::

    class OneWayDescriptorMatcher : public GenericDescriptorMatcher
    {

@@ -305,7 +305,7 @@ FernDescriptorMatcher
.. ocv:class:: FernDescriptorMatcher

Wrapping class for computing, matching, and classifying descriptors using the
-:ref:`FernClassifier` class ::
+:ocv:class:`FernClassifier` class. ::

    class FernDescriptorMatcher : public GenericDescriptorMatcher
    {

@@ -363,7 +363,7 @@ VectorDescriptorMatcher
-----------------------
.. ocv:class:: VectorDescriptorMatcher

-Class used for matching descriptors that can be described as vectors in a finite-dimensional space ::
+Class used for matching descriptors that can be described as vectors in a finite-dimensional space. ::

    class CV_EXPORTS VectorDescriptorMatcher : public GenericDescriptorMatcher
    {

@ -10,13 +10,13 @@ drawMatches
.. ocv:function:: void drawMatches( const Mat& img1, const vector<KeyPoint>& keypoints1, const Mat& img2, const vector<KeyPoint>& keypoints2, const vector<vector<DMatch> >& matches1to2, Mat& outImg, const Scalar& matchColor=Scalar::all(-1), const Scalar& singlePointColor=Scalar::all(-1), const vector<vector<char>>& matchesMask= vector<vector<char> >(), int flags=DrawMatchesFlags::DEFAULT ) .. ocv:function:: void drawMatches( const Mat& img1, const vector<KeyPoint>& keypoints1, const Mat& img2, const vector<KeyPoint>& keypoints2, const vector<vector<DMatch> >& matches1to2, Mat& outImg, const Scalar& matchColor=Scalar::all(-1), const Scalar& singlePointColor=Scalar::all(-1), const vector<vector<char>>& matchesMask= vector<vector<char> >(), int flags=DrawMatchesFlags::DEFAULT )
Draw the found matches of keypoints from two images Draws the found matches of keypoints from two images.
:param img1: The first source image. :param img1: First source image.
:param keypoints1: Keypoints from the first source image. :param keypoints1: Keypoints from the first source image.
:param img2: The second source image. :param img2: Second source image.
:param keypoints2: Keypoints from the second source image. :param keypoints2: Keypoints from the second source image.
@ -75,5 +75,5 @@ drawKeypoints
:param color: Color of keypoints. :param color: Color of keypoints.
:param flags: Flags setting drawing features. Possible ``flags`` bit values are defined by ``DrawMatchesFlags``. See details above in :ref:`drawMatches` . :param flags: Flags setting drawing features. Possible ``flags`` bit values are defined by ``DrawMatchesFlags``. See details above in :ocv:func:`drawMatches` .
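A minimal end-to-end sketch of detecting, matching, and drawing (the file names and SURF settings are illustrative assumptions): ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/features2d/features2d.hpp>
    using namespace cv;

    int main()
    {
        Mat img1 = imread("left.jpg", 0), img2 = imread("right.jpg", 0);

        SurfFeatureDetector detector(400.);
        std::vector<KeyPoint> kp1, kp2;
        detector.detect(img1, kp1);
        detector.detect(img2, kp2);

        SurfDescriptorExtractor extractor;
        Mat d1, d2;
        extractor.compute(img1, kp1, d1);
        extractor.compute(img2, kp2, d2);

        BruteForceMatcher<L2<float> > matcher;
        std::vector<DMatch> matches;
        matcher.match(d1, d2, matches);

        Mat vis;
        drawMatches(img1, kp1, img2, kp2, matches, vis);  // default colors/flags
        imshow("matches", vis);
        waitKey(0);
        return 0;
    }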

@ -9,7 +9,7 @@ FAST
-------- --------
.. ocv:function:: void FAST( const Mat& image, vector<KeyPoint>& keypoints, int threshold, bool nonmaxSupression=true ) .. ocv:function:: void FAST( const Mat& image, vector<KeyPoint>& keypoints, int threshold, bool nonmaxSupression=true )
Detects corners using the FAST algorithm by E. Rosten (*Machine learning for high-speed corner detection*, 2006). Detects corners using the FAST algorithm by E. Rosten (*Machine Learning for High-speed Corner Detection*, 2006).
:param image: Image where keypoints (corners) are detected. :param image: Image where keypoints (corners) are detected.
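A minimal usage sketch (the file name and threshold are illustrative): ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/features2d/features2d.hpp>
    using namespace cv;

    int main()
    {
        Mat img = imread("board.jpg", 0);   // FAST expects a single-channel image
        std::vector<KeyPoint> keypoints;
        FAST(img, keypoints, 30, true);     // threshold=30, non-max suppression on
        Mat vis;
        drawKeypoints(img, keypoints, vis);
        imshow("FAST keypoints", vis);
        waitKey(0);
        return 0;
    }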
@ -27,7 +27,7 @@ MSER
---- ----
.. ocv:class:: MSER .. ocv:class:: MSER
Maximally stable extremal region extractor :: Maximally stable extremal region extractor. ::
class MSER : public CvMSERParams class MSER : public CvMSERParams
{ {
@ -46,7 +46,7 @@ Maximally stable extremal region extractor ::
}; };
The class encapsulates all the parameters of the MSER extraction algorithm (see The class encapsulates all the parameters of the MSER extraction algorithm (see
http://en.wikipedia.org/wiki/Maximally_stable_extremal_regions). http://en.wikipedia.org/wiki/Maximally_stable_extremal_regions). Also see http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/MSER for useful comments and a description of the parameters.
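A minimal usage sketch with default parameters (the file name is illustrative): ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/features2d/features2d.hpp>
    using namespace cv;

    int main()
    {
        Mat img = imread("scene.jpg", 0);          // grayscale input
        MSER mser;                                 // default CvMSERParams
        std::vector<std::vector<Point> > regions;
        mser(img, regions, Mat());                 // empty Mat = no mask
        // each element of "regions" holds the pixels of one extremal region
        return 0;
    }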
.. index:: StarDetector .. index:: StarDetector
@ -56,7 +56,7 @@ StarDetector
------------ ------------
.. ocv:class:: StarDetector .. ocv:class:: StarDetector
Class implementing the Star keypoint detector :: Class implementing the ``Star`` keypoint detector. ::
class StarDetector : CvStarDetectorParams class StarDetector : CvStarDetectorParams
{ {
@ -89,13 +89,11 @@ The class implements a modified version of the ``CenSurE`` keypoint detector des
.. index:: SIFT .. index:: SIFT
.. _SIFT:
SIFT SIFT
---- ----
.. ocv:class:: SIFT .. ocv:class:: SIFT
Class for extracting keypoints and computing descriptors using the Scale Invariant Feature Transform (SIFT) approach :: Class for extracting keypoints and computing descriptors using the Scale Invariant Feature Transform (SIFT) approach. ::
class CV_EXPORTS SIFT class CV_EXPORTS SIFT
{ {
@ -179,13 +177,11 @@ Class for extracting keypoints and computing descriptors using the Scale Invaria
.. index:: SURF .. index:: SURF
.. _SURF:
SURF SURF
---- ----
.. ocv:class:: SURF .. ocv:class:: SURF
Class for extracting Speeded Up Robust Features from an image :: Class for extracting Speeded Up Robust Features from an image. ::
class SURF : public CvSURFParams class SURF : public CvSURFParams
{ {
@ -214,18 +210,16 @@ The class implements the Speeded Up Robust Features descriptor
[Bay06]. [Bay06].
There is a fast multi-scale Hessian keypoint detector that can be used to find keypoints There is a fast multi-scale Hessian keypoint detector that can be used to find keypoints
(default option). But the descriptors can also be computed for the user-specified keypoints. (default option). But the descriptors can also be computed for the user-specified keypoints.
The algorithm can be used for object tracking and localization, image stitching, and so on. See the ``find_obj.cpp`` demo in OpenCV samples directory. The algorithm can be used for object tracking and localization, image stitching, and so on. See the ``find_obj.cpp`` demo in the OpenCV samples directory.
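A minimal sketch of the combined detect-and-describe call (the file name and Hessian threshold are illustrative): ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/features2d/features2d.hpp>
    using namespace cv;

    int main()
    {
        Mat img = imread("scene.jpg", 0);
        SURF surf(500.);                           // hessianThreshold = 500
        std::vector<KeyPoint> keypoints;
        std::vector<float> descriptors;            // 64 floats per keypoint by default
        surf(img, Mat(), keypoints, descriptors);  // detect and compute in one call
        return 0;
    }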
.. index:: ORB .. index:: ORB
.. _ORB:
ORB ORB
---- ----
.. ocv:class:: ORB .. ocv:class:: ORB
Class for extracting ORB features and descriptors from an image :: Class for extracting ORB features and descriptors from an image. ::
class ORB class ORB
{ {
@ -272,18 +266,17 @@ Class for extracting ORB features and descriptors from an image ::
bool useProvidedKeypoints=false) const; bool useProvidedKeypoints=false) const;
}; };
The class implements ORB The class implements the ORB (Oriented FAST and Rotated BRIEF) keypoint detector and descriptor extractor.
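A minimal usage sketch (the file name is illustrative; since ORB produces binary descriptors, a Hamming-distance matcher is the natural companion for matching them): ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/features2d/features2d.hpp>
    using namespace cv;

    int main()
    {
        Mat img = imread("scene.jpg", 0);
        ORB orb;                                 // default number of features
        std::vector<KeyPoint> keypoints;
        Mat descriptors;                         // one binary descriptor per row
        orb(img, Mat(), keypoints, descriptors); // detect and compute in one call
        return 0;
    }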
.. index:: RandomizedTree .. index:: RandomizedTree
.. _RandomizedTree:
RandomizedTree RandomizedTree
-------------- --------------
.. ocv:class:: RandomizedTree .. ocv:class:: RandomizedTree
Class containing a base structure for ``RTreeClassifier`` :: Class containing a base structure for ``RTreeClassifier``. ::
class CV_EXPORTS RandomizedTree class CV_EXPORTS RandomizedTree
{ {
@ -423,7 +416,7 @@ RTreeNode
--------- ---------
.. ocv:class:: RTreeNode .. ocv:class:: RTreeNode
Class containing a base structure for ``RandomizedTree`` :: Class containing a base structure for ``RandomizedTree``. ::
struct RTreeNode struct RTreeNode
{ {
@ -451,7 +444,7 @@ RTreeClassifier
--------------- ---------------
.. ocv:class:: RTreeClassifier .. ocv:class:: RTreeClassifier
Class containing ``RTreeClassifier``. It represents the Calonder descriptor that was originally introduced by Michael Calonder. :: Class containing ``RTreeClassifier``. It represents the Calonder descriptor originally introduced by Michael Calonder. ::
class CV_EXPORTS RTreeClassifier class CV_EXPORTS RTreeClassifier
{ {
@ -569,7 +562,7 @@ RTreeClassifier::getSparseSignature
:param sig: Output signature (array dimension is ``reduced_num_dim``). :param sig: Output signature (array dimension is ``reduced_num_dim``).
:param thresh: Threshold that is used for compressing the signature. :param thresh: Threshold used for compressing the signature.
.. index:: RTreeClassifier::countNonZeroElements .. index:: RTreeClassifier::countNonZeroElements

@ -75,7 +75,7 @@ BOWKMeansTrainer
---------------- ----------------
.. ocv:class:: BOWKMeansTrainer .. ocv:class:: BOWKMeansTrainer
:ref:`kmeans` -based class to train visual vocabulary using the *bag of visual words* approach. :ocv:func:`kmeans` -based class to train visual vocabulary using the *bag of visual words* approach.
:: ::
class BOWKMeansTrainer : public BOWTrainer class BOWKMeansTrainer : public BOWTrainer
@ -97,19 +97,19 @@ BOWKMeansTrainer::BOWKMeansTrainer
---------------- ----------------
.. ocv:function:: BOWKMeansTrainer::BOWKMeansTrainer( int clusterCount, const TermCriteria& termcrit=TermCriteria(), int attempts=3, int flags=KMEANS_PP_CENTERS ); .. ocv:function:: BOWKMeansTrainer::BOWKMeansTrainer( int clusterCount, const TermCriteria& termcrit=TermCriteria(), int attempts=3, int flags=KMEANS_PP_CENTERS );
To understand constructor parameters, see :ref:`kmeans` function arguments. See :ocv:func:`kmeans` function parameters.
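A minimal sketch of building a vocabulary (the cluster count is illustrative; ``perImageDescriptors`` is assumed to be filled from a training set beforehand): ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/features2d/features2d.hpp>
    using namespace cv;

    Mat buildVocabulary(const std::vector<Mat>& perImageDescriptors)
    {
        BOWKMeansTrainer trainer(1000);          // 1000 visual words
        for (size_t i = 0; i < perImageDescriptors.size(); i++)
            trainer.add(perImageDescriptors[i]); // descriptors must be CV_32F
        return trainer.cluster();                // 1000 x descriptorSize vocabulary
    }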
BOWImgDescriptorExtractor BOWImgDescriptorExtractor
------------------------- -------------------------
.. ocv:class:: BOWImgDescriptorExtractor .. ocv:class:: BOWImgDescriptorExtractor
Class to compute an image descriptor using the ''bag of visual words''. Such a computation consists of the following steps: Class to compute an image descriptor using the *bag of visual words*. Such a computation consists of the following steps:
#. Compute descriptors for a given image and its keypoints set. #. Compute descriptors for a given image and its keypoints set.
#. Find the nearest visual words from the vocabulary for each keypoint descriptor. #. Find the nearest visual words from the vocabulary for each keypoint descriptor.
#. Compute the bag-of-words image descriptor as a normalized histogram of vocabulary words encountered in the image. The ``i``-th bin of the histogram is the frequency of the ``i``-th word of the vocabulary in the given image. #. Compute the bag-of-words image descriptor as a normalized histogram of vocabulary words encountered in the image. The ``i``-th bin of the histogram is the frequency of the ``i``-th word of the vocabulary in the given image.
Here is the class declaration :: The class declaration is the following: ::
class BOWImgDescriptorExtractor class BOWImgDescriptorExtractor
{ {
@ -138,7 +138,7 @@ BOWImgDescriptorExtractor::BOWImgDescriptorExtractor
-------------------------------------------------------- --------------------------------------------------------
.. ocv:function:: BOWImgDescriptorExtractor::BOWImgDescriptorExtractor( const Ptr<DescriptorExtractor>& dextractor, const Ptr<DescriptorMatcher>& dmatcher ) .. ocv:function:: BOWImgDescriptorExtractor::BOWImgDescriptorExtractor( const Ptr<DescriptorExtractor>& dextractor, const Ptr<DescriptorMatcher>& dmatcher )
The class constructor. Constructs the class.
:param dextractor: Descriptor extractor that is used to compute descriptors for an input image and its keypoints. :param dextractor: Descriptor extractor that is used to compute descriptors for an input image and its keypoints.
@ -152,7 +152,7 @@ BOWImgDescriptorExtractor::setVocabulary
Sets a visual vocabulary. Sets a visual vocabulary.
:param vocabulary: Vocabulary (can be trained using the inheritor of :ref:`BOWTrainer` ). Each row of the vocabulary is a visual word (cluster center). :param vocabulary: Vocabulary (can be trained using the inheritor of :ocv:class:`BOWTrainer` ). Each row of the vocabulary is a visual word (cluster center).
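A minimal sketch of the whole pipeline for one image (SURF and brute-force matching are illustrative choices; ``vocabulary`` is assumed to come from a trained :ocv:class:`BOWTrainer`): ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/features2d/features2d.hpp>
    using namespace cv;

    Mat computeBowDescriptor(const Mat& img, const Mat& vocabulary)
    {
        BOWImgDescriptorExtractor bowDE(new SurfDescriptorExtractor,
                                        new BruteForceMatcher<L2<float> >());
        bowDE.setVocabulary(vocabulary);

        SurfFeatureDetector detector(400.);
        std::vector<KeyPoint> keypoints;
        detector.detect(img, keypoints);

        Mat bowDescriptor;                       // 1 x vocabulary.rows histogram
        bowDE.compute(img, keypoints, bowDescriptor);
        return bowDescriptor;
    }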
.. index:: BOWImgDescriptorExtractor::getVocabulary .. index:: BOWImgDescriptorExtractor::getVocabulary

@ -234,7 +234,7 @@ if(BUILD_TESTS AND EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/test)
get_target_property(LOC ${the_test_target} LOCATION) get_target_property(LOC ${the_test_target} LOCATION)
add_test(${the_test_target} "${LOC}") add_test(${the_test_target} "${LOC}")
if(WIN32) #if(WIN32)
install(TARGETS ${the_test_target} RUNTIME DESTINATION bin COMPONENT main) # install(TARGETS ${the_test_target} RUNTIME DESTINATION bin COMPONENT main)
endif() #endif()
endif() endif()

@ -433,6 +433,6 @@ if(BUILD_TESTS)
if (MSVC AND NOT BUILD_SHARED_LIBS) if (MSVC AND NOT BUILD_SHARED_LIBS)
set_target_properties(${the_target} PROPERTIES LINK_FLAGS "/NODEFAULTLIB:atlthunk.lib /NODEFAULTLIB:atlsd.lib /DEBUG") set_target_properties(${the_target} PROPERTIES LINK_FLAGS "/NODEFAULTLIB:atlthunk.lib /NODEFAULTLIB:atlsd.lib /DEBUG")
endif() endif()
install(TARGETS ${the_target} RUNTIME DESTINATION bin COMPONENT main) #install(TARGETS ${the_target} RUNTIME DESTINATION bin COMPONENT main)
endif() endif()
endif(BUILD_TESTS) endif(BUILD_TESTS)

@ -366,5 +366,6 @@ Video writer class ::
... ...
}; };
For more detailed description see http://opencv.willowgarage.com/wiki/documentation/cpp/highgui/VideoWriter
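A minimal sketch of writing frames grabbed from a camera (codec, fps, and file name are illustrative; the chosen codec must be available on the system): ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    using namespace cv;

    int main()
    {
        VideoCapture cap(0);                      // default camera
        Mat frame;
        cap >> frame;                             // grab one frame to learn the size
        VideoWriter writer("out.avi", CV_FOURCC('M','J','P','G'),
                           30, frame.size());
        if (!cap.isOpened() || !writer.isOpened())
            return -1;
        for (int i = 0; i < 100; i++)
        {
            cap >> frame;
            writer << frame;                      // frames must keep the same size
        }
        return 0;
    }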
.. ..

@ -124,7 +124,7 @@ Qt-specific details:
:: ::
namedWindow( ``myWindow'', ``CV_WINDOW_NORMAL`` textbar ``CV_GUI_NORMAL`` ); namedWindow( "myWindow", CV_WINDOW_NORMAL | CV_GUI_NORMAL );
.. ..
@ -194,7 +194,7 @@ waitKey
:param delay: Delay in milliseconds. 0 is the special value that means "forever". :param delay: Delay in milliseconds. 0 is the special value that means "forever".
The function ``waitKey`` waits for a key event infinitely (when The function ``waitKey`` waits for a key event infinitely (when
:math:`\texttt{delay}\leq 0` ) or for ``delay`` milliseconds, when it is positive. It returns the code of the pressed key or -1 if no key was pressed before the specified time had elapsed. :math:`\texttt{delay}\leq 0` ) or for ``delay`` milliseconds, when it is positive. Since the OS has a minimum time between switching threads, the function will not wait exactly ``delay`` ms; it will wait at least ``delay`` ms, depending on what else is running on your computer at that time. It returns the code of the pressed key or -1 if no key was pressed before the specified time had elapsed.
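A typical event loop built on ``waitKey`` (the camera index and the 30 ms delay are illustrative; ``waitKey`` is also what pumps HighGUI window events): ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    using namespace cv;

    int main()
    {
        VideoCapture cap(0);
        Mat frame;
        for (;;)
        {
            cap >> frame;
            imshow("video", frame);
            if (waitKey(30) >= 0)   // wait ~30 ms; any key breaks the loop
                break;
        }
        return 0;
    }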
**Notes:** **Notes:**

@ -87,6 +87,8 @@ Thanks to:
*/ */
///////////////////////////////////////////////////////// /////////////////////////////////////////////////////////
#include "precomp.hpp"
#if _MSC_VER >= 1400 #if _MSC_VER >= 1400
#pragma warning(disable: 4995) #pragma warning(disable: 4995)
#endif #endif
@ -100,7 +102,6 @@ Thanks to:
#include <vector> #include <vector>
//Include Directshow stuff here so we don't worry about needing all the h files. //Include Directshow stuff here so we don't worry about needing all the h files.
#ifdef _MSC_VER #ifdef _MSC_VER
#include "DShow.h" #include "DShow.h"
@ -111,7 +112,7 @@ Thanks to:
#else #else
#include "dshow/dshow.h" #include "dshow/dshow.h"
#include "dshow/dvdmedia.h" #include "dshow/dvdmedia.h"
#include "bdatypes.h" #include "dshow/bdatypes.h"
interface IEnumPIDMap : public IUnknown interface IEnumPIDMap : public IUnknown
{ {
@ -155,8 +156,6 @@ interface IMPEG2PIDMap : public IUnknown
#define _WIN32_WINNT 0x400 #define _WIN32_WINNT 0x400
#endif #endif
#include <windows.h>
/* /*
MEDIASUBTYPE_I420 : TGUID ='{30323449-0000-0010-8000-00AA00389B71}'; MEDIASUBTYPE_I420 : TGUID ='{30323449-0000-0010-8000-00AA00389B71}';

@ -433,7 +433,9 @@ bool CvCapture_FFMPEG::reopen()
return true; return true;
} }
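/* AVSEEK_FLAG_FRAME (seek by frame number) is missing from some older
   FFmpeg headers; the zero fallback below makes OR-ing the flag a no-op
   so the seeking code still compiles. */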
#ifndef AVSEEK_FLAG_FRAME
#define AVSEEK_FLAG_FRAME 0
#endif
bool CvCapture_FFMPEG::open( const char* _filename ) bool CvCapture_FFMPEG::open( const char* _filename )
{ {

@ -795,6 +795,7 @@ double CvCaptureFile::getProperty(int property_id){
double retval; double retval;
QTTime t; QTTime t;
//cerr << "get_prop"<<endl;
switch (property_id) { switch (property_id) {
case CV_CAP_PROP_POS_MSEC: case CV_CAP_PROP_POS_MSEC:
[[mCaptureSession attributeForKey:QTMovieCurrentTimeAttribute] getValue:&t]; [[mCaptureSession attributeForKey:QTMovieCurrentTimeAttribute] getValue:&t];
@ -815,6 +816,9 @@ double CvCaptureFile::getProperty(int property_id){
case CV_CAP_PROP_FPS: case CV_CAP_PROP_FPS:
retval = currentFPS; retval = currentFPS;
break; break;
case CV_CAP_PROP_FRAME_COUNT:
retval = movieDuration*movieFPS/1000;
break;
case CV_CAP_PROP_FOURCC: case CV_CAP_PROP_FOURCC:
default: default:
retval = 0; retval = 0;

@ -74,6 +74,7 @@ CV_IMPL int cvWaitKey (int maxWait) {return 0;}
using namespace std; using namespace std;
const int TOP_BORDER = 7; const int TOP_BORDER = 7;
const int MIN_SLIDER_WIDTH=200;
static NSApplication *application = nil; static NSApplication *application = nil;
static NSAutoreleasePool *pool = nil; static NSAutoreleasePool *pool = nil;
@ -182,7 +183,7 @@ CV_IMPL void cvDestroyWindow( const char* name)
//cout << "cvDestroyWindow" << endl; //cout << "cvDestroyWindow" << endl;
CVWindow *window = cvGetWindow(name); CVWindow *window = cvGetWindow(name);
if(window) { if(window) {
[window performClose:nil]; [window close];
[windows removeObjectForKey:[NSString stringWithFormat:@"%s", name]]; [windows removeObjectForKey:[NSString stringWithFormat:@"%s", name]];
} }
[localpool drain]; [localpool drain];
@ -193,10 +194,10 @@ CV_IMPL void cvDestroyAllWindows( void )
{ {
//cout << "cvDestroyAllWindows" << endl; //cout << "cvDestroyAllWindows" << endl;
NSAutoreleasePool* localpool = [[NSAutoreleasePool alloc] init]; NSAutoreleasePool* localpool = [[NSAutoreleasePool alloc] init];
for(NSString *key in windows) { NSDictionary* list = [NSDictionary dictionaryWithDictionary:windows];
[[windows valueForKey:key] performClose:nil]; for(NSString *key in list) {
} cvDestroyWindow([key cStringUsingEncoding:NSASCIIStringEncoding]);
[windows removeAllObjects]; }
[localpool drain]; [localpool drain];
} }
@ -221,8 +222,16 @@ CV_IMPL void cvShowImage( const char* name, const CvArr* arr)
[[window contentView] setImageData:(CvArr *)arr]; [[window contentView] setImageData:(CvArr *)arr];
if([window autosize] || [window firstContent] || empty) if([window autosize] || [window firstContent] || empty)
{ {
//Set new view size considering sliders (reserve height and min width)
NSRect vrectNew = vrectOld; NSRect vrectNew = vrectOld;
vrectNew.size = [[[window contentView] image] size]; int slider_height = 0;
for(NSString *key in [window sliders]) {
slider_height += [[[window sliders] valueForKey:key] frame].size.height;
}
vrectNew.size.height = [[[window contentView] image] size].height + slider_height;
vrectNew.size.width = std::max<int>([[[window contentView] image] size].width, MIN_SLIDER_WIDTH);
[[window contentView] setFrameSize:vrectNew.size]; //adjust sliders to fit new window size
rect.size.width += vrectNew.size.width - vrectOld.size.width; rect.size.width += vrectNew.size.width - vrectOld.size.width;
rect.size.height += vrectNew.size.height - vrectOld.size.height; rect.size.height += vrectNew.size.height - vrectOld.size.height;
rect.origin.y -= vrectNew.size.height - vrectOld.size.height; rect.origin.y -= vrectNew.size.height - vrectOld.size.height;
@ -393,7 +402,7 @@ CV_IMPL void cvSetTrackbarPos(const char* trackbar_name, const char* window_name
if(trackbar_name == NULL || window_name == NULL) if(trackbar_name == NULL || window_name == NULL)
CV_ERROR( CV_StsNullPtr, "NULL trackbar or window name" ); CV_ERROR( CV_StsNullPtr, "NULL trackbar or window name" );
if(pos <= 0) if(pos < 0)
CV_ERROR( CV_StsOutOfRange, "Bad trackbar maximal value" ); CV_ERROR( CV_StsOutOfRange, "Bad trackbar maximal value" );
if (localpool5 != nil) [localpool5 drain]; if (localpool5 != nil) [localpool5 drain];
@ -645,17 +654,26 @@ CV_IMPL int cvWaitKey (int maxWait)
// Save slider // Save slider
[sliders setValue:slider forKey:cvname]; [sliders setValue:slider forKey:cvname];
[[self contentView] addSubview:slider]; [[self contentView] addSubview:slider];
//update contentView size to contain sliders
NSSize viewSize=[[self contentView] frame].size,
sliderSize=[slider frame].size;
viewSize.height += sliderSize.height;
viewSize.width = std::max<int>(viewSize.width, MIN_SLIDER_WIDTH);
// Update slider sizes // Update slider sizes
[[self contentView] setFrameSize:[[self contentView] frame].size]; [[self contentView] setFrameSize:viewSize];
[[self contentView] setNeedsDisplay:YES]; [[self contentView] setNeedsDisplay:YES];
//update window size to contain sliders
NSRect rect = [self frame];
rect.size.height += [slider frame].size.height;
rect.size.width = std::max<int>(rect.size.width, MIN_SLIDER_WIDTH);
[self setFrame:rect display:YES];
int height = 0;
for(NSString *key in sliders) {
height += [[sliders valueForKey:key] frame].size.height;
}
[self setContentMinSize:NSMakeSize(0, height)];
} }
- (CVView *)contentView { - (CVView *)contentView {
@ -755,6 +773,7 @@ CV_IMPL int cvWaitKey (int maxWait)
NSSlider *slider = [[cvwindow sliders] valueForKey:key]; NSSlider *slider = [[cvwindow sliders] valueForKey:key];
NSRect r = [slider frame]; NSRect r = [slider frame];
r.origin.y = height - r.size.height; r.origin.y = height - r.size.height;
r.size.width = [[cvwindow contentView] frame].size.width;
[slider setFrame:r]; [slider setFrame:r];
height -= r.size.height; height -= r.size.height;
} }
@ -773,7 +792,7 @@ CV_IMPL int cvWaitKey (int maxWait)
} }
NSRect imageRect = {{0,0}, {self.frame.size.width, self.frame.size.height-height-6}}; NSRect imageRect = {{0,0}, {[image size].width, [image size].height}};
if(image != nil) { if(image != nil) {
[image drawInRect: imageRect [image drawInRect: imageRect
@ -803,9 +822,9 @@ CV_IMPL int cvWaitKey (int maxWait)
value = NULL; value = NULL;
userData = NULL; userData = NULL;
[self setFrame:NSMakeRect(0,0,200,25)]; [self setFrame:NSMakeRect(0,0,200,30)];
name = [[NSTextField alloc] initWithFrame:NSMakeRect(0, 0,120, 20)]; name = [[NSTextField alloc] initWithFrame:NSMakeRect(10, 0,110, 25)];
[name setEditable:NO]; [name setEditable:NO];
[name setSelectable:NO]; [name setSelectable:NO];
[name setBezeled:NO]; [name setBezeled:NO];
@ -814,7 +833,7 @@ CV_IMPL int cvWaitKey (int maxWait)
[[name cell] setLineBreakMode:NSLineBreakByTruncatingTail]; [[name cell] setLineBreakMode:NSLineBreakByTruncatingTail];
[self addSubview:name]; [self addSubview:name];
slider = [[NSSlider alloc] initWithFrame:NSMakeRect(120, 0, 76, 20)]; slider = [[NSSlider alloc] initWithFrame:NSMakeRect(120, 0, 70, 25)];
[slider setAutoresizingMask:NSViewWidthSizable]; [slider setAutoresizingMask:NSViewWidthSizable];
[slider setMinValue:0]; [slider setMinValue:0];
[slider setMaxValue:100]; [slider setMaxValue:100];
@ -825,7 +844,7 @@ CV_IMPL int cvWaitKey (int maxWait)
[self setAutoresizingMask:NSViewWidthSizable]; [self setAutoresizingMask:NSViewWidthSizable];
[self setFrame:NSMakeRect(12, 0, 182, 30)]; //[self setFrame:NSMakeRect(12, 0, 100, 30)];
return self; return self;
} }

@ -336,7 +336,8 @@ HoughLines
:param stn: For the multi-scale Hough transform, it is a divisor for the angle resolution ``theta``. :param stn: For the multi-scale Hough transform, it is a divisor for the angle resolution ``theta``.
The function implements the standard or standard multi-scale Hough transform algorithm for line detection. See The function implements the standard or standard multi-scale Hough transform algorithm for line detection. See
:ocv:func:`HoughLinesP` for the code example. :ocv:func:`HoughLinesP` for the code example. Also see http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm for a good explanation of Hough transform.
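A minimal sketch of the standard transform (the file name and Canny/Hough thresholds are illustrative): ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    using namespace cv;

    int main()
    {
        Mat src = imread("building.jpg", 0), edges;
        Canny(src, edges, 50, 200);                // HoughLines expects a binary image
        std::vector<Vec2f> lines;                  // each line is (rho, theta)
        HoughLines(edges, lines, 1, CV_PI/180, 100);
        return 0;
    }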
.. index:: HoughLinesP .. index:: HoughLinesP

@ -388,6 +388,14 @@ bilateralFilter
The function applies bilateral filtering to the input image, as described in The function applies bilateral filtering to the input image, as described in
http://www.dai.ed.ac.uk/CVonline/LOCAL\_COPIES/MANDUCHI1/Bilateral\_Filtering.html http://www.dai.ed.ac.uk/CVonline/LOCAL\_COPIES/MANDUCHI1/Bilateral\_Filtering.html
``bilateralFilter`` can do a very good job of reducing unwanted noise while keeping edges fairly sharp. However, it is very slow compared to most filters.
*Sigma values*: For simplicity, you can set the two sigma values to be the same. If they are small (< 10), the filter will not have much effect; if they are large (> 150), it will have a very strong effect, making the image look "cartoonish".
*Filter size*: Large filters (d > 5) are very slow, so it is recommended to use d=5 for real-time applications, and perhaps d=9 for offline applications that need heavy noise filtering.
This filter does not work in place.
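A minimal usage sketch (the file name and parameter values are illustrative, following the guidance above): ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    using namespace cv;

    int main()
    {
        Mat src = imread("portrait.jpg"), dst;
        bilateralFilter(src, dst, 9, 75, 75);  // d=9, equal sigmas; dst must not be src
        return 0;
    }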
.. index:: blur .. index:: blur

@ -195,7 +195,7 @@ The function calculates the following matrix:
.. math:: .. math::
\begin{bmatrix} \alpha & \beta & (1- \alpha ) \cdot \texttt{center.x} - \beta \cdot \texttt{center.y} \\ - \beta & \alpha & \beta \cdot \texttt{center.x} - (1- \alpha ) \cdot \texttt{center.y} \end{bmatrix} \begin{bmatrix} \alpha & \beta & (1- \alpha ) \cdot \texttt{center.x} - \beta \cdot \texttt{center.y} \\ - \beta & \alpha & \beta \cdot \texttt{center.x} + (1- \alpha ) \cdot \texttt{center.y} \end{bmatrix}
where where
@ -339,6 +339,7 @@ If you want to decimate the image by factor of 2 in each direction, you can call
// specify fx and fy and let the function compute the destination image size. // specify fx and fy and let the function compute the destination image size.
resize(src, dst, Size(), 0.5, 0.5, interpolation); resize(src, dst, Size(), 0.5, 0.5, interpolation);
Shrinking an image generally looks best with CV_INTER_AREA interpolation, whereas enlarging generally looks best with CV_INTER_CUBIC (slow) or CV_INTER_LINEAR (faster but still looks OK).
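For example (using the C++ ``INTER_*`` constants; the scale factors are illustrative): ::

    Mat shrunk, enlarged;
    resize(src, shrunk,   Size(), 0.5, 0.5, INTER_AREA);   // decimation: area-based
    resize(src, enlarged, Size(), 2.0, 2.0, INTER_CUBIC);  // magnification: bicubic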
See Also: See Also:
:ocv:func:`warpAffine`, :ocv:func:`warpAffine`,

@ -84,6 +84,7 @@ cvtColor
The function converts an input image from one color The function converts an input image from one color
space to another. In case of transformation to-from RGB color space, the order of the channels should be specified explicitly (RGB or BGR). space to another. In case of transformation to-from RGB color space, the order of the channels should be specified explicitly (RGB or BGR).
Note that the default color format in OpenCV is often referred to as RGB but it is actually BGR (the bytes are reversed). So the first byte in a standard (24-bit) color image will be an 8-bit Blue component, the second byte will be Green and the third byte will be Red. The fourth, fifth and sixth bytes would then be the 2nd pixel (Blue then Green then Red) and so on.
The conventional ranges for R, G, and B channel values are: The conventional ranges for R, G, and B channel values are:
@ -103,6 +104,8 @@ But in case of a non-linear transformation, an input RGB image should be normali
img *= 1./255; img *= 1./255;
cvtColor(img, img, CV_BGR2Luv); cvtColor(img, img, CV_BGR2Luv);
If you use ``cvtColor`` with 8-bit images, the conversion will lose some information. For many applications, this will not be noticeable, but it is recommended to use 32-bit images in applications that need the full range of colors or that convert an image before an operation and then convert back.
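For example (the file name is illustrative; ``imread`` returns pixels in BGR order, as noted above): ::

    Mat bgr = imread("photo.jpg"), gray, luv;
    cvtColor(bgr, gray, CV_BGR2GRAY);      // linear conversion: 8-bit input is fine

    Mat bgr32;
    bgr.convertTo(bgr32, CV_32F, 1./255);  // scale to [0,1] floats first ...
    cvtColor(bgr32, luv, CV_BGR2Luv);      // ... for the non-linear conversion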
The function can do the following transformations: The function can do the following transformations:
* *

@ -124,35 +124,48 @@ CvDTreeParams
------------- -------------
.. c:type:: CvDTreeParams .. c:type:: CvDTreeParams
Decision tree training parameters ::

    struct CvDTreeParams
    {
        int max_categories;
        int max_depth;
        int min_sample_count;
        int cv_folds;
        bool use_surrogates;
        bool use_1se_rule;
        bool truncate_pruned_tree;
        float regression_accuracy;
        const float* priors;

        CvDTreeParams() : max_categories(10), max_depth(INT_MAX), min_sample_count(10),
            cv_folds(10), use_surrogates(true), use_1se_rule(true),
            truncate_pruned_tree(true), regression_accuracy(0.01f), priors(0)
        {}

        CvDTreeParams( int _max_depth, int _min_sample_count,
            float _regression_accuracy, bool _use_surrogates,
            int _max_categories, int _cv_folds,
            bool _use_1se_rule, bool _truncate_pruned_tree,
            const float* _priors );
    };

The structure contains all the decision tree training parameters. There is a default constructor that initializes all the parameters with the default values tuned for the standalone classification tree. Any parameters can be overridden then, or the structure may be fully initialized using the advanced variant of the constructor.

Decision tree training parameters.

The structure contains all the decision tree training parameters. You can initialize it with the default constructor and then override any parameters directly before training, or the structure may be fully initialized using the advanced variant of the constructor.

.. index:: CvDTreeParams::CvDTreeParams

.. _CvDTreeParams::CvDTreeParams:

CvDTreeParams::CvDTreeParams
----------------------------
.. ocv:function:: CvDTreeParams::CvDTreeParams()

.. ocv:function:: CvDTreeParams( int max_depth, int min_sample_count, float regression_accuracy, bool use_surrogates, int max_categories, int cv_folds, bool use_1se_rule, bool truncate_pruned_tree, const float* priors )

    :param max_depth: The maximum number of levels in a tree. The depth of a constructed tree may be smaller due to other termination criteria or pruning of the tree.

    :param min_sample_count: If the number of samples in a node is less than this parameter, the node is not split.

    :param regression_accuracy: Termination criteria for regression trees. If all absolute differences between an estimated value in a node and the values of the train samples in this node are less than this parameter, the node is not split.

    :param use_surrogates: If true, surrogate splits are built. These splits allow working with missing data.

    :param max_categories: Cluster possible values of a categorical variable into ``K`` :math:`\leq` ``max_categories`` clusters to find a suboptimal split. The clustering is applied only in n>2-class classification problems for categorical variables with ``N > max_categories`` possible values. See the Learning OpenCV book (page 489) for a more detailed explanation.

    :param cv_folds: If ``cv_folds > 1``, prune the tree with ``K``-fold cross-validation, where ``K`` is equal to ``cv_folds``.

    :param use_1se_rule: If true, pruning is harsher. This makes the tree more compact but a bit less accurate.

    :param truncate_pruned_tree: If true, pruned branches are removed completely from the tree. Otherwise, they are retained, and it is possible to get the unpruned tree or prune the tree differently by changing the ``CvDTree::pruned_tree_idx`` parameter.

    :param priors: Weights of prediction categories that determine the relative weights given to misclassification. That is, if the weight of the first category is 1 and the weight of the second category is 10, each mistake in predicting the second category is equivalent to making 10 mistakes in predicting the first category.

The default constructor initializes all the parameters with the default values tuned for the standalone classification tree:

::

    CvDTreeParams() : max_categories(10), max_depth(INT_MAX), min_sample_count(10),
        cv_folds(10), use_surrogates(true), use_1se_rule(true),
        truncate_pruned_tree(true), regression_accuracy(0.01f), priors(0)
    {}
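A minimal training sketch built on these parameters (the toy data and the overridden values are illustrative; this assumes the ``cv::Mat``-based ``CvDTree::train`` overload): ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/ml/ml.hpp>
    using namespace cv;

    int main()
    {
        // 4 samples, 2 features, 2 classes (XOR-like toy data)
        float samplesData[] = { 0,0,  0,1,  1,0,  1,1 };
        float labelsData[]  = { 0, 1, 1, 0 };
        Mat samples(4, 2, CV_32F, samplesData);
        Mat labels(4, 1, CV_32F, labelsData);

        // mark the response as categorical so the tree does classification
        Mat varType(3, 1, CV_8U, Scalar::all(CV_VAR_NUMERICAL));
        varType.at<uchar>(2) = CV_VAR_CATEGORICAL;

        CvDTreeParams params;          // start from the defaults above ...
        params.max_depth = 5;          // ... and override selected fields
        params.min_sample_count = 1;
        params.cv_folds = 1;           // no cross-validation pruning on tiny data

        CvDTree tree;
        tree.train(samples, CV_ROW_SAMPLE, labels,
                   Mat(), Mat(), varType, Mat(), params);
        return (int)tree.predict(samples.row(0))->value;   // predicted class label
    }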
.. index:: CvDTreeTrainData .. index:: CvDTreeTrainData
.. _CvDTreeTrainData: .. _CvDTreeTrainData:

@ -2108,7 +2108,7 @@ class DocumentFragmentTests(OpenCVTests):
def test_precornerdetect(self): def test_precornerdetect(self):
from precornerdetect import precornerdetect from precornerdetect import precornerdetect
im = self.get_sample("samples/c/right01.jpg", 0) im = self.get_sample("samples/cpp/right01.jpg", 0)
imf = cv.CreateMat(im.rows, im.cols, cv.CV_32FC1) imf = cv.CreateMat(im.rows, im.cols, cv.CV_32FC1)
cv.ConvertScale(im, imf) cv.ConvertScale(im, imf)
(r0,r1) = precornerdetect(imf) (r0,r1) = precornerdetect(imf)
