removed ERFilter (to be moved to opencv_contrib/modules/text) and lineMOD (to be moved to opencv_contrib/modules/rgbd)
@@ -1,211 +0,0 @@
Scene Text Detection
====================

.. highlight:: cpp

Class-specific Extremal Regions for Scene Text Detection
--------------------------------------------------------

The scene text detection algorithm described below was initially proposed by Lukás Neumann & Jiri Matas [Neumann12]. The main idea behind Class-specific Extremal Regions is similar to MSER in that suitable Extremal Regions (ERs) are selected from the whole component tree of the image. However, this technique differs from MSER in that the selection of suitable ERs is done by a sequential classifier trained for character detection, i.e. dropping the stability requirement of MSERs and selecting class-specific (not necessarily stable) regions.

The component tree of an image is constructed by thresholding it step by step with an increasing value from 0 to 255 and then linking the obtained connected components from successive levels in a hierarchy by their inclusion relation:

.. image:: pics/component_tree.png
    :width: 100%

The component tree may contain a huge number of regions even for a very simple image, as shown in the previous figure. This number can easily reach the order of 1 x 10^6 regions for an average 1 megapixel image. In order to efficiently select suitable regions among all the ERs, the algorithm makes use of a sequential classifier with two differentiated stages.

In the first stage, incrementally computable descriptors (area, perimeter, bounding box, and Euler number) are computed (in O(1)) for each region r and used as features for a classifier which estimates the class-conditional probability p(r|character). Only the ERs which correspond to local maxima of the probability p(r|character) are selected (if their probability is above a global limit p_min and the difference between local maximum and local minimum is greater than a \delta_min value).

In the second stage, the ERs that passed the first stage are classified into character and non-character classes using more informative but also more computationally expensive features (hole area ratio, convex hull ratio, and the number of outer boundary inflexion points).

This ER filtering process is done in different single-channel projections of the input image in order to increase the character localization recall.

After the ER filtering is done on each input channel, character candidates must be grouped into high-level text blocks (i.e. words, text lines, paragraphs, ...). The grouping algorithm used in this implementation has been proposed by Lluis Gomez and Dimosthenis Karatzas in [Gomez13] and basically consists in finding meaningful groups of regions using a perceptual-organization-based clustering analysis (see :ocv:func:`erGrouping`).

To see the text detector at work, have a look at the textdetection demo: https://github.com/Itseez/opencv/blob/master/samples/cpp/textdetection.cpp
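
In outline, the whole pipeline takes just a few calls. The following sketch (condensed from the textdetection sample; the image and classifier file paths are assumptions and must point to files shipped in samples/cpp) computes the channels, runs both filter stages on each channel, and groups the surviving character candidates::

    Mat src = imread("scene.jpg"); // RGB input image (file name is an assumption)

    // compute the single-channel projections to be processed independently
    std::vector<Mat> channels;
    computeNMChannels(src, channels);

    // build the two-stage sequential classifier (trained models from samples/cpp)
    Ptr<ERFilter> er_filter1 = createERFilterNM1(loadClassifierNM1("trained_classifierNM1.xml"),
                                                 16, 0.00015f, 0.13f, 0.2f, true, 0.1f);
    Ptr<ERFilter> er_filter2 = createERFilterNM2(loadClassifierNM2("trained_classifierNM2.xml"), 0.5f);

    // filter the ERs of every channel through both stages
    std::vector<std::vector<ERStat> > regions(channels.size());
    for (size_t c = 0; c < channels.size(); c++)
    {
        er_filter1->run(channels[c], regions[c]);
        er_filter2->run(channels[c], regions[c]);
    }

    // group the character candidates into horizontally-aligned text blocks
    std::vector<Rect> groups;
    erGrouping(channels, regions, "trained_classifier_erGrouping.xml", 0.5f, groups);

    for (size_t i = 0; i < groups.size(); i++)
        rectangle(src, groups[i].tl(), groups[i].br(), Scalar(0, 255, 0), 2);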

.. [Neumann12] Neumann L., Matas J.: Real-Time Scene Text Localization and Recognition, CVPR 2012. The paper is available online at http://cmp.felk.cvut.cz/~neumalu1/neumann-cvpr2012.pdf

.. [Gomez13] Gomez L. and Karatzas D.: Multi-script Text Extraction from Natural Scenes, ICDAR 2013. The paper is available online at http://158.109.8.37/files/GoK2013.pdf


ERStat
------
.. ocv:struct:: ERStat

The ERStat structure represents a class-specific Extremal Region (ER).

An ER is a 4-connected set of pixels with all its grey-level values smaller than the values in its outer boundary. A class-specific ER is selected (using a classifier) from all the ERs in the component tree of the image. ::

    struct CV_EXPORTS ERStat
    {
    public:
        //! Constructor
        explicit ERStat(int level = 256, int pixel = 0, int x = 0, int y = 0);
        //! Destructor
        ~ERStat() { }

        //! seed point and threshold (max grey-level value)
        int pixel;
        int level;

        //! incrementally computable features
        int area;
        int perimeter;
        int euler;                 //!< Euler number
        Rect rect;                 //!< bounding box
        double raw_moments[2];     //!< order 1 raw moments to derive the centroid
        double central_moments[3]; //!< order 2 central moments to construct the covariance matrix
        std::deque<int> *crossings;//!< horizontal crossings
        float med_crossings;       //!< median of the crossings at three different height levels

        //! 2nd stage features
        float hole_area_ratio;
        float convex_hull_ratio;
        float num_inflexion_points;

        //! probability that the ER belongs to the class we are looking for
        double probability;

        //! pointers preserving the tree structure of the component tree
        ERStat* parent;
        ERStat* child;
        ERStat* next;
        ERStat* prev;
    };
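
The ``parent``, ``child``, ``next`` and ``prev`` pointers encode the component tree in first-child / next-sibling form, so the whole tree can be traversed from the root region. A small illustrative sketch (``printTree`` is a hypothetical helper, not part of the API; it assumes ``<iostream>`` and ``<string>`` are included)::

    void printTree(const ERStat* er, int depth)
    {
        // visit every sibling at this depth, then recurse into nested regions
        for (const ERStat* r = er; r != NULL; r = r->next)
        {
            std::cout << std::string(2 * depth, ' ')
                      << "level=" << r->level << " area=" << r->area << std::endl;
            if (r->child)
                printTree(r->child, depth + 1);
        }
    }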

computeNMChannels
-----------------
Compute the different channels to be processed independently in the N&M algorithm [Neumann12].

.. ocv:function:: void computeNMChannels(InputArray _src, OutputArrayOfArrays _channels, int _mode = ERFILTER_NM_RGBLGrad)

    :param _src: Source image. Must be RGB ``CV_8UC3``.
    :param _channels: Output vector<Mat> where the computed channels are stored.
    :param _mode: Mode of operation. Currently the only available options are **ERFILTER_NM_RGBLGrad** (used by default) and **ERFILTER_NM_IHSGrad**.

In the N&M algorithm, the combination of intensity (I), hue (H), saturation (S), and gradient magnitude (Grad) channels is used in order to obtain high localization recall. This implementation also provides an alternative combination of red (R), green (G), blue (B), lightness (L), and gradient magnitude (Grad).
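
For example, to obtain the intensity/hue/saturation/gradient decomposition instead of the default one::

    std::vector<Mat> channels;
    computeNMChannels(src, channels, ERFILTER_NM_IHSGrad);
    // channels now holds one single-channel CV_8UC1 Mat per projection (I, H, S, Grad)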


ERFilter
--------
.. ocv:class:: ERFilter : public Algorithm

Base class for the 1st and 2nd stages of the Neumann and Matas scene text detection algorithm [Neumann12]. ::

    class CV_EXPORTS ERFilter : public Algorithm
    {
    public:

        //! callback with the classifier is made a class.
        //! By doing it we hide SVM, Boost etc. Developers can provide their own classifiers
        class CV_EXPORTS Callback
        {
        public:
            virtual ~Callback() { }
            //! The classifier must return a probability measure for the region.
            virtual double eval(const ERStat& stat) = 0;
        };

        /*!
            the key method: takes an image on input and returns the selected regions in a
            vector of ERStat; only distinctive ERs which correspond to characters are
            selected by a sequential classifier
        */
        virtual void run( InputArray image, std::vector<ERStat>& regions ) = 0;

        (...)

    };



ERFilter::Callback
------------------
The callback with the classifier is made a class. By doing this we hide SVM, Boost etc. Developers can provide their own classifiers to the ERFilter algorithm.

.. ocv:class:: ERFilter::Callback

ERFilter::Callback::eval
------------------------
The classifier must return a probability measure for the region.

.. ocv:function:: double ERFilter::Callback::eval(const ERStat& stat)

    :param stat: The region to be classified

ERFilter::run
-------------
The key method of the ERFilter algorithm: takes an image on input and returns the selected regions in a vector of ERStat; only distinctive ERs which correspond to characters are selected by a sequential classifier.

.. ocv:function:: void ERFilter::run( InputArray image, std::vector<ERStat>& regions )

    :param image: Single channel image ``CV_8UC1``
    :param regions: Output for the 1st stage and Input/Output for the 2nd. The selected Extremal Regions are stored here.

Extracts the component tree (if needed) and filters the extremal regions (ERs) using the given classifier.

createERFilterNM1
-----------------
Create an Extremal Region Filter for the 1st stage classifier of the N&M algorithm [Neumann12].

.. ocv:function:: Ptr<ERFilter> createERFilterNM1( const Ptr<ERFilter::Callback>& cb, int thresholdDelta = 1, float minArea = 0.00025, float maxArea = 0.13, float minProbability = 0.4, bool nonMaxSuppression = true, float minProbabilityDiff = 0.1 )

    :param cb: Callback with the classifier. The default classifier can be implicitly loaded with the function :ocv:func:`loadClassifierNM1`, e.g. from the file in samples/cpp/trained_classifierNM1.xml
    :param thresholdDelta: Threshold step in subsequent thresholds when extracting the component tree
    :param minArea: The minimum area (% of image size) allowed for retrieved ERs
    :param maxArea: The maximum area (% of image size) allowed for retrieved ERs
    :param minProbability: The minimum probability P(er|character) allowed for retrieved ERs
    :param nonMaxSuppression: Whether non-maximum suppression is done over the branch probabilities
    :param minProbabilityDiff: The minimum probability difference between local maxima and local minima ERs

The component tree of the image is extracted by a threshold increased step by step from 0 to 255; incrementally computable descriptors (aspect_ratio, compactness, number of holes, and number of horizontal crossings) are computed for each ER and used as features for a classifier which estimates the class-conditional probability P(er|character). The value of P(er|character) is tracked using the inclusion relation of ERs across all thresholds, and only the ERs which correspond to local maxima of the probability P(er|character) are selected (if the local maximum of the probability is above a global limit pmin and the difference between local maximum and local minimum is greater than minProbabilityDiff).
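
A minimal first-stage setup looks as follows (the classifier path is an assumption and must point to the XML file distributed with the samples)::

    Ptr<ERFilter::Callback> nm1 = loadClassifierNM1("trained_classifierNM1.xml");
    // a coarser threshold step (16 instead of the default 1) trades some recall for speed
    Ptr<ERFilter> er_filter1 = createERFilterNM1(nm1, 16, 0.00015f, 0.13f, 0.2f, true, 0.1f);

    std::vector<ERStat> regions;
    er_filter1->run(gray, regions); // gray is a single-channel CV_8UC1 image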

createERFilterNM2
-----------------
Create an Extremal Region Filter for the 2nd stage classifier of the N&M algorithm [Neumann12].

.. ocv:function:: Ptr<ERFilter> createERFilterNM2( const Ptr<ERFilter::Callback>& cb, float minProbability = 0.3 )

    :param cb: Callback with the classifier. The default classifier can be implicitly loaded with the function :ocv:func:`loadClassifierNM2`, e.g. from the file in samples/cpp/trained_classifierNM2.xml
    :param minProbability: The minimum probability P(er|character) allowed for retrieved ERs

In the second stage, the ERs that passed the first stage are classified into character and non-character classes using more informative but also more computationally expensive features. The classifier uses all the features calculated in the first stage and the following additional features: hole area ratio, convex hull ratio, and number of outer inflexion points.
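
The second stage then runs in place on the output of the first (same path assumptions as above)::

    Ptr<ERFilter> er_filter2 = createERFilterNM2(loadClassifierNM2("trained_classifierNM2.xml"), 0.5f);
    er_filter2->run(gray, regions); // regions is input/output: non-character ERs are pruned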

loadClassifierNM1
-----------------
Allows implicitly loading the default classifier when creating an ERFilter object.

.. ocv:function:: Ptr<ERFilter::Callback> loadClassifierNM1(const std::string& filename)

    :param filename: The XML or YAML file with the classifier model (e.g. trained_classifierNM1.xml)

Returns a pointer to ERFilter::Callback.

loadClassifierNM2
-----------------
Allows implicitly loading the default classifier when creating an ERFilter object.

.. ocv:function:: Ptr<ERFilter::Callback> loadClassifierNM2(const std::string& filename)

    :param filename: The XML or YAML file with the classifier model (e.g. trained_classifierNM2.xml)

Returns a pointer to ERFilter::Callback.

erGrouping
----------
Find groups of Extremal Regions that are organized as text blocks.

.. ocv:function:: void erGrouping( InputArrayOfArrays src, std::vector<std::vector<ERStat> > &regions, const std::string& filename, float minProbability, std::vector<Rect> &groups )

    :param src: Vector of single channel images CV_8UC1 from which the regions were extracted
    :param regions: Vector of ERs retrieved from the ERFilter algorithm for each channel
    :param filename: The XML or YAML file with the classifier model (e.g. trained_classifier_erGrouping.xml)
    :param minProbability: The minimum probability for accepting a group
    :param groups: The output of the algorithm is stored in this parameter as a list of rectangles.

This function implements the grouping algorithm described in [Gomez13]. Notice that this implementation constrains the results to horizontally-aligned text and Latin script (since the ERFilter classifiers are trained only for Latin script detection).

The algorithm combines two different clustering techniques in a single parameter-free procedure to detect groups of regions organized as text. The maximally meaningful groups are first detected in several feature spaces, where each feature space is a combination of proximity information (x,y coordinates) and a similarity measure (intensity, color, size, gradient magnitude, etc.), thus providing a set of hypotheses of text groups. An Evidence Accumulation framework is used to combine all these hypotheses to get the final estimate. Each of the resulting groups is finally validated using a classifier in order to assess whether it forms a valid horizontally-aligned text block.
@@ -1,262 +0,0 @@
Latent SVM
===============================================================

Discriminatively Trained Part Based Models for Object Detection
---------------------------------------------------------------

The object detector described below has been initially proposed by P.F. Felzenszwalb in [Felzenszwalb2010]_. It is based on a Dalal-Triggs detector that uses a single filter on histogram of oriented gradients (HOG) features to represent an object category. This detector uses a sliding window approach, where a filter is applied at all positions and scales of an image. The first innovation is enriching the Dalal-Triggs model using a star-structured part-based model defined by a "root" filter (analogous to the Dalal-Triggs filter) plus a set of part filters and associated deformation models. The score of one of the star models at a particular position and scale within an image is the score of the root filter at the given location plus the sum over parts of the maximum, over placements of that part, of the part filter score on its location minus a deformation cost measuring the deviation of the part from its ideal location relative to the root. Both root and part filter scores are defined by the dot product between a filter (a set of weights) and a subwindow of a feature pyramid computed from the input image. Another improvement is a representation of the class of models by a mixture of star models. The score of a mixture model at a particular position and scale is the maximum over components of the score of that component model at the given location.
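
In the notation of [Felzenszwalb2010]_ (a summary of the paper's scoring rule, not part of the OpenCV API), the score of a star model with root filter :math:`F_0`, part filters :math:`F_i` and bias :math:`b`, placed with its root at position :math:`p_0` in the feature pyramid :math:`H`, is

.. math::

    \mathrm{score}(p_0) = F_0 \cdot \phi(H, p_0) + \sum_{i=1}^{n} \max_{p_i} \Big[ F_i \cdot \phi(H, p_i) - d_i \cdot \psi(p_0, p_i) \Big] + b

where :math:`\phi(H, p)` is the subwindow of the feature pyramid at position :math:`p` and :math:`d_i \cdot \psi(p_0, p_i)` is the quadratic deformation cost of part :math:`i`.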

In OpenCV there are a C implementation of Latent SVM and a C++ wrapper of it. The C version is the structure :ocv:struct:`CvLatentSvmDetector` together with a set of functions working with this structure (see :ocv:func:`cvLoadLatentSvmDetector`, :ocv:func:`cvReleaseLatentSvmDetector`, :ocv:func:`cvLatentSvmDetectObjects`); :ocv:struct:`CvObjectDetection` describes a single detection. The C++ version is the class :ocv:class:`LatentSvmDetector`, which has slightly different functionality in contrast with the C version: it supports loading and detection of several models.

There are two examples of Latent SVM usage: ``samples/c/latentsvmdetect.cpp`` and ``samples/cpp/latentsvm_multidetect.cpp``.
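
A condensed sketch of the C++ interface (modeled on ``latentsvm_multidetect.cpp``; the image and model file names are assumptions)::

    Mat image = imread("image.jpg");

    // load one or more trained models; class names default to the file names
    std::vector<String> models;
    models.push_back("cat.xml");
    LatentSvmDetector detector(models);

    std::vector<LatentSvmDetector::ObjectDetection> detections;
    detector.detect(image, detections, 0.5f /* overlap threshold */);

    for (size_t i = 0; i < detections.size(); i++)
        rectangle(image, detections[i].rect, Scalar(0, 0, 255), 2);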

.. highlight:: c


CvLSVMFilterPosition
--------------------
.. ocv:struct:: CvLSVMFilterPosition

  Structure describes the position of a filter in the feature pyramid.

  .. ocv:member:: unsigned int l

     level in the feature pyramid

  .. ocv:member:: unsigned int x

     x-coordinate in level l

  .. ocv:member:: unsigned int y

     y-coordinate in level l


CvLSVMFilterObject
------------------
.. ocv:struct:: CvLSVMFilterObject

  Description of the filter, which corresponds to a part of the object.

  .. ocv:member:: CvLSVMFilterPosition V

     ideal (penalty = 0) position of the part filter
     relative to the root filter position (V_i in the paper)

  .. ocv:member:: float fineFunction[4]

     vector describing the penalty function (d_i in the paper):
     pf[0] * x + pf[1] * y + pf[2] * x^2 + pf[3] * y^2

  .. ocv:member:: int sizeX
  .. ocv:member:: int sizeY

     Rectangular map (sizeX x sizeY);
     every cell stores a feature vector (dimension = p)

  .. ocv:member:: int numFeatures

     number of features

  .. ocv:member:: float *H

     matrix of feature vectors; to set and get
     feature vector (i, j), use the formula H[(j * sizeX + i) * p + k],
     where k is the component of the feature vector in cell (i, j)

CvLatentSvmDetector
-------------------
.. ocv:struct:: CvLatentSvmDetector

  Structure contains the internal representation of a trained Latent SVM detector.

  .. ocv:member:: int num_filters

     total number of filters (root plus part) in the model

  .. ocv:member:: int num_components

     number of components in the model

  .. ocv:member:: int* num_part_filters

     array containing the number of part filters for each component

  .. ocv:member:: CvLSVMFilterObject** filters

     root and part filters for all model components

  .. ocv:member:: float* b

     biases for all model components

  .. ocv:member:: float score_threshold

     confidence level threshold


CvObjectDetection
-----------------
.. ocv:struct:: CvObjectDetection

  Structure contains the bounding box and confidence level for a detected object.

  .. ocv:member:: CvRect rect

     bounding box for a detected object

  .. ocv:member:: float score

     confidence level


cvLoadLatentSvmDetector
-----------------------
Loads a trained detector from a file.

.. ocv:function:: CvLatentSvmDetector* cvLoadLatentSvmDetector(const char* filename)

    :param filename: Name of the file containing the description of a trained detector


cvReleaseLatentSvmDetector
--------------------------
Releases the memory allocated for a CvLatentSvmDetector structure.

.. ocv:function:: void cvReleaseLatentSvmDetector(CvLatentSvmDetector** detector)

    :param detector: CvLatentSvmDetector structure to be released


cvLatentSvmDetectObjects
------------------------
Find rectangular regions in the given image that are likely to contain objects and the corresponding confidence levels.

.. ocv:function:: CvSeq* cvLatentSvmDetectObjects( IplImage* image, CvLatentSvmDetector* detector, CvMemStorage* storage, float overlap_threshold=0.5f, int numThreads=-1 )

    :param image: image
    :param detector: LatentSVM detector in internal representation
    :param storage: Memory storage to store the resultant sequence of the object candidate rectangles
    :param overlap_threshold: Threshold for the non-maximum suppression algorithm
    :param numThreads: Number of threads used in the parallel version of the algorithm
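
Putting the C API together (a sketch for illustration; the image and model file names are assumptions)::

    IplImage* image = cvLoadImage("image.jpg");
    CvLatentSvmDetector* detector = cvLoadLatentSvmDetector("cat.xml");
    CvMemStorage* storage = cvCreateMemStorage(0);

    /* returns a sequence of CvObjectDetection candidates after non-maximum suppression */
    CvSeq* detections = cvLatentSvmDetectObjects(image, detector, storage, 0.5f, -1);

    int i;
    for (i = 0; i < detections->total; i++)
    {
        CvObjectDetection det = *(CvObjectDetection*)cvGetSeqElem(detections, i);
        cvRectangle(image, cvPoint(det.rect.x, det.rect.y),
                    cvPoint(det.rect.x + det.rect.width, det.rect.y + det.rect.height),
                    CV_RGB(255, 0, 0), 3, 8, 0);
    }

    cvReleaseMemStorage(&storage);
    cvReleaseLatentSvmDetector(&detector);
    cvReleaseImage(&image);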

.. highlight:: cpp

LatentSvmDetector
-----------------
.. ocv:class:: LatentSvmDetector

This is a C++ wrapping class of Latent SVM. It contains the internal representation of several trained Latent SVM detectors (models) and a set of methods to load the detectors and detect objects using them.

LatentSvmDetector::ObjectDetection
----------------------------------
.. ocv:struct:: LatentSvmDetector::ObjectDetection

  Structure contains the detection information.

  .. ocv:member:: Rect rect

     bounding box for a detected object

  .. ocv:member:: float score

     confidence level

  .. ocv:member:: int classID

     ID of the class (model or detector) that detected the object


LatentSvmDetector::LatentSvmDetector
------------------------------------
Two types of constructors.

.. ocv:function:: LatentSvmDetector::LatentSvmDetector()

.. ocv:function:: LatentSvmDetector::LatentSvmDetector(const vector<String>& filenames, const vector<String>& classNames=vector<String>())

    :param filenames: A set of filenames storing the trained detectors (models). Each file contains one model. See examples of such files here: /opencv_extra/testdata/cv/latentsvmdetector/models_VOC2007/.

    :param classNames: A set of trained model names. If it's empty then the name of each model will be constructed from the name of the file containing the model. E.g. the model stored in "/home/user/cat.xml" will get the name "cat".

LatentSvmDetector::~LatentSvmDetector
-------------------------------------
Destructor.

.. ocv:function:: LatentSvmDetector::~LatentSvmDetector()

LatentSvmDetector::clear
------------------------
Clear all trained models and their names stored in the class object.

.. ocv:function:: void LatentSvmDetector::clear()

LatentSvmDetector::load
-----------------------
Load the trained models from the given ``.xml`` files and return ``true`` if at least one model was loaded.

.. ocv:function:: bool LatentSvmDetector::load( const vector<String>& filenames, const vector<String>& classNames=vector<String>() )

    :param filenames: A set of filenames storing the trained detectors (models). Each file contains one model. See examples of such files here: /opencv_extra/testdata/cv/latentsvmdetector/models_VOC2007/.

    :param classNames: A set of trained model names. If it's empty then the name of each model will be constructed from the name of the file containing the model. E.g. the model stored in "/home/user/cat.xml" will get the name "cat".

LatentSvmDetector::detect
-------------------------
Find rectangular regions in the given image that are likely to contain objects of the loaded classes (models) and the corresponding confidence levels.

.. ocv:function:: void LatentSvmDetector::detect( const Mat& image, vector<ObjectDetection>& objectDetections, float overlapThreshold=0.5f, int numThreads=-1 )

    :param image: An image.
    :param objectDetections: The detections: rectangles, scores and class IDs.
    :param overlapThreshold: Threshold for the non-maximum suppression algorithm.
    :param numThreads: Number of threads used in the parallel version of the algorithm.

LatentSvmDetector::getClassNames
--------------------------------
Return the class (model) names that were passed in the constructor or the method ``load``, or extracted from the model filenames in those methods.

.. ocv:function:: const vector<String>& LatentSvmDetector::getClassNames() const

LatentSvmDetector::getClassCount
--------------------------------
Return the count of loaded models (classes).

.. ocv:function:: size_t LatentSvmDetector::getClassCount() const


.. [Felzenszwalb2010] Felzenszwalb, P. F., Girshick, R. B., McAllester, D. and Ramanan, D. *Object Detection with Discriminatively Trained Part Based Models*. PAMI, vol. 32, no. 9, pp. 1627-1645, September 2010.
(deleted image file, 106 KiB)
@@ -1,266 +0,0 @@
||||
/*M///////////////////////////////////////////////////////////////////////////////////////
|
||||
//
|
||||
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
|
||||
//
|
||||
// By downloading, copying, installing or using the software you agree to this license.
|
||||
// If you do not agree to this license, do not download, install,
|
||||
// copy or use the software.
|
||||
//
|
||||
//
|
||||
// License Agreement
|
||||
// For Open Source Computer Vision Library
|
||||
//
|
||||
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
|
||||
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
|
||||
// Copyright (C) 2013, OpenCV Foundation, all rights reserved.
|
||||
// Third party copyrights are property of their respective owners.
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without modification,
|
||||
// are permitted provided that the following conditions are met:
|
||||
//
|
||||
// * Redistribution's of source code must retain the above copyright notice,
|
||||
// this list of conditions and the following disclaimer.
|
||||
//
|
||||
// * Redistribution's in binary form must reproduce the above copyright notice,
|
||||
// this list of conditions and the following disclaimer in the documentation
|
||||
// and/or other materials provided with the distribution.
|
||||
//
|
||||
// * The name of the copyright holders may not be used to endorse or promote products
|
||||
// derived from this software without specific prior written permission.
|
||||
//
|
||||
// This software is provided by the copyright holders and contributors "as is" and
|
||||
// any express or implied warranties, including, but not limited to, the implied
|
||||
// warranties of merchantability and fitness for a particular purpose are disclaimed.
|
||||
// In no event shall the Intel Corporation or contributors be liable for any direct,
|
||||
// indirect, incidental, special, exemplary, or consequential damages
|
||||
// (including, but not limited to, procurement of substitute goods or services;
|
||||
// loss of use, data, or profits; or business interruption) however caused
|
||||
// and on any theory of liability, whether in contract, strict liability,
|
||||
// or tort (including negligence or otherwise) arising in any way out of
|
||||
// the use of this software, even if advised of the possibility of such damage.
|
||||
//
|
||||
//M*/
|
||||
|
||||
#ifndef __OPENCV_OBJDETECT_ERFILTER_HPP__ |
||||
#define __OPENCV_OBJDETECT_ERFILTER_HPP__ |
||||
|
||||
#include "opencv2/core.hpp" |
||||
#include <vector> |
||||
#include <deque> |
||||
#include <string> |
||||
|
||||
namespace cv |
||||
{ |
||||
|
||||
/*!
|
||||
Extremal Region Stat structure |
||||
|
||||
The ERStat structure represents a class-specific Extremal Region (ER). |
||||
|
||||
An ER is a 4-connected set of pixels with all its grey-level values smaller than the values |
||||
in its outer boundary. A class-specific ER is selected (using a classifier) from all the ER's |
||||
in the component tree of the image. |
||||
*/ |
||||
struct CV_EXPORTS ERStat |
||||
{ |
||||
public: |
||||
//! Constructor
|
||||
explicit ERStat(int level = 256, int pixel = 0, int x = 0, int y = 0); |
||||
//! Destructor
|
||||
~ERStat() { } |
||||
|
||||
//! seed point and the threshold (max grey-level value)
|
||||
int pixel; |
||||
int level; |
||||
|
||||
//! incrementally computable features
|
||||
int area; |
||||
int perimeter; |
||||
int euler; //!< euler number
|
||||
Rect rect; |
||||
double raw_moments[2]; //!< order 1 raw moments to derive the centroid
|
||||
double central_moments[3]; //!< order 2 central moments to construct the covariance matrix
|
||||
std::deque<int> *crossings;//!< horizontal crossings
|
||||
float med_crossings; //!< median of the crossings at three different height levels
|
||||
|
||||
//! 2nd stage features
|
||||
float hole_area_ratio; |
||||
float convex_hull_ratio; |
||||
float num_inflexion_points; |
||||
|
||||
// TODO Other features can be added (average color, standard deviation, and such)
|
||||
|
||||
|
||||
// TODO shall we include the pixel list whenever available (i.e. after 2nd stage) ?
|
||||
std::vector<int> *pixels; |
||||
|
||||
//! probability that the ER belongs to the class we are looking for
|
||||
double probability; |
||||
|
||||
//! pointers preserving the tree structure of the component tree
|
||||
ERStat* parent; |
||||
ERStat* child; |
||||
ERStat* next; |
||||
ERStat* prev; |
||||
|
||||
//! whether the region is a local maximum of the probability
||||
bool local_maxima; |
||||
ERStat* max_probability_ancestor; |
||||
ERStat* min_probability_ancestor; |
||||
}; |
||||
|
||||
/*!
|
||||
Base class for 1st and 2nd stages of Neumann and Matas scene text detection algorithms |
||||
Neumann L., Matas J.: Real-Time Scene Text Localization and Recognition, CVPR 2012 |
||||
|
||||
Extracts the component tree (if needed) and filter the extremal regions (ER's) by using a given classifier. |
||||
*/ |
||||
class CV_EXPORTS ERFilter : public Algorithm |
||||
{ |
||||
public: |
||||
|
||||
//! callback with the classifier is made a class. By doing it we hide SVM, Boost etc.
|
||||
class CV_EXPORTS Callback |
||||
{ |
||||
public: |
||||
virtual ~Callback() { } |
||||
//! The classifier must return probability measure for the region.
|
||||
virtual double eval(const ERStat& stat) = 0; //const = 0; //TODO why cannot use const = 0 here?
|
||||
}; |
||||
|
||||
/*!
|
||||
the key method. Takes image on input and returns the selected regions in a vector of ERStat |
||||
only distinctive ERs which correspond to characters are selected by a sequential classifier |
||||
\param image is the input image |
||||
\param regions is output for the first stage, input/output for the second one. |
||||
*/ |
||||
virtual void run( InputArray image, std::vector<ERStat>& regions ) = 0; |
||||
|
||||
|
||||
//! set/get methods to set the algorithm properties,
|
||||
virtual void setCallback(const Ptr<ERFilter::Callback>& cb) = 0; |
||||
virtual void setThresholdDelta(int thresholdDelta) = 0; |
||||
virtual void setMinArea(float minArea) = 0; |
||||
virtual void setMaxArea(float maxArea) = 0; |
||||
virtual void setMinProbability(float minProbability) = 0; |
||||
virtual void setMinProbabilityDiff(float minProbabilityDiff) = 0; |
||||
virtual void setNonMaxSuppression(bool nonMaxSuppression) = 0; |
||||
virtual int getNumRejected() = 0; |
||||
}; |
||||
|
||||
|
||||
/*!
|
||||
Create an Extremal Region Filter for the 1st stage classifier of N&M algorithm |
||||
Neumann L., Matas J.: Real-Time Scene Text Localization and Recognition, CVPR 2012 |
||||
|
||||
The component tree of the image is extracted by a threshold increased step by step |
||||
from 0 to 255, incrementally computable descriptors (aspect_ratio, compactness, |
||||
number of holes, and number of horizontal crossings) are computed for each ER |
||||
and used as features for a classifier which estimates the class-conditional |
||||
probability P(er|character). The value of P(er|character) is tracked using the inclusion |
||||
relation of ER across all thresholds and only the ERs which correspond to local maximum |
||||
of the probability P(er|character) are selected (if the local maximum of the |
||||
probability is above a global limit pmin and the difference between local maximum and |
||||
local minimum is greater than minProbabilityDiff). |
||||
|
||||
\param cb Callback with the classifier. |
||||
default classifier can be implicitly load with function loadClassifierNM1() |
||||
from file in samples/cpp/trained_classifierNM1.xml |
||||
\param thresholdDelta Threshold step in subsequent thresholds when extracting the component tree |
||||
\param minArea The minimum area (% of image size) allowed for retrieved ER's
\param maxArea The maximum area (% of image size) allowed for retrieved ER's
\param minProbability The minimum probability P(er|character) allowed for retrieved ER's
\param nonMaxSuppression Whether non-maximum suppression is done over the branch probabilities
\param minProbabilityDiff The minimum probability difference between local maxima and local minima ERs
||||
*/ |
||||
CV_EXPORTS Ptr<ERFilter> createERFilterNM1(const Ptr<ERFilter::Callback>& cb, |
||||
int thresholdDelta = 1, float minArea = 0.00025, |
||||
float maxArea = 0.13, float minProbability = 0.4, |
||||
bool nonMaxSuppression = true, |
||||
float minProbabilityDiff = 0.1); |
||||
|
||||
/*!
|
||||
Create an Extremal Region Filter for the 2nd stage classifier of N&M algorithm |
||||
Neumann L., Matas J.: Real-Time Scene Text Localization and Recognition, CVPR 2012 |
||||
|
||||
In the second stage, the ERs that passed the first stage are classified into character |
||||
and non-character classes using more informative but also more computationally expensive |
||||
features. The classifier uses all the features calculated in the first stage and the following |
||||
additional features: hole area ratio, convex hull ratio, and number of outer inflexion points. |
||||
|
||||
\param cb Callback with the classifier |
||||
default classifier can be implicitly load with function loadClassifierNM2() |
||||
from file in samples/cpp/trained_classifierNM2.xml |
||||
\param minProbability The minimum probability P(er|character) allowed for retrieved ER's
||||
*/ |
||||
CV_EXPORTS Ptr<ERFilter> createERFilterNM2(const Ptr<ERFilter::Callback>& cb, |
||||
float minProbability = 0.3); |
||||
|
||||
|
||||
/*!
|
||||
Allow to implicitly load the default classifier when creating an ERFilter object. |
||||
The function takes as parameter the XML or YAML file with the classifier model |
||||
(e.g. trained_classifierNM1.xml) and returns a pointer to ERFilter::Callback.
||||
*/ |
||||
|
||||
CV_EXPORTS Ptr<ERFilter::Callback> loadClassifierNM1(const std::string& filename); |
||||
|
||||
/*!
|
||||
Allow to implicitly load the default classifier when creating an ERFilter object. |
||||
The function takes as parameter the XML or YAML file with the classifier model |
||||
(e.g. trained_classifierNM2.xml) and returns a pointer to ERFilter::Callback.
||||
*/ |
||||
|
||||
CV_EXPORTS Ptr<ERFilter::Callback> loadClassifierNM2(const std::string& filename); |
||||
|
||||
|
||||
// computeNMChannels operation modes
|
||||
enum { ERFILTER_NM_RGBLGrad = 0, |
||||
ERFILTER_NM_IHSGrad = 1 |
||||
}; |
||||
|
||||
/*!
|
||||
Compute the different channels to be processed independently in the N&M algorithm |
||||
Neumann L., Matas J.: Real-Time Scene Text Localization and Recognition, CVPR 2012 |
||||
|
||||
In N&M algorithm, the combination of intensity (I), hue (H), saturation (S), and gradient |
||||
magnitude channels (Grad) are used in order to obtain high localization recall. |
||||
This implementation also provides an alternative combination of red (R), green (G), blue (B), |
||||
lightness (L), and gradient magnitude (Grad). |
||||
|
||||
\param _src Source image. Must be RGB CV_8UC3. |
||||
\param _channels Output vector<Mat> where computed channels are stored. |
||||
\param _mode Mode of operation. Currently the only available options are |
||||
ERFILTER_NM_RGBLGrad (by default) and ERFILTER_NM_IHSGrad. |
||||
|
||||
*/ |
||||
CV_EXPORTS void computeNMChannels(InputArray _src, OutputArrayOfArrays _channels, int _mode = ERFILTER_NM_RGBLGrad); |
||||
|
||||
|
||||
/*!
|
||||
Find groups of Extremal Regions that are organized as text blocks. This function implements |
||||
the grouping algorithm described in: |
||||
Gomez L. and Karatzas D.: Multi-script Text Extraction from Natural Scenes, ICDAR 2013. |
||||
Notice that this implementation constrains the results to horizontally-aligned text and |
||||
latin script (since ERFilter classifiers are trained only for latin script detection). |
||||
|
||||
The algorithm combines two different clustering techniques in a single parameter-free procedure |
||||
to detect groups of regions organized as text. The maximally meaningful groups are fist detected |
||||
in several feature spaces, where each feature space is a combination of proximity information |
||||
(x,y coordinates) and a similarity measure (intensity, color, size, gradient magnitude, etc.), |
||||
thus providing a set of hypotheses of text groups. Evidence Accumulation framework is used to |
||||
combine all these hypotheses to get the final estimate. Each of the resulting groups are finally |
||||
validated using a classifier in order to assess whether they form a valid horizontally-aligned text block.

\param src Vector of single channel images CV_8UC1 from which the regions were extracted.
\param regions Vector of ER's retrieved from the ERFilter algorithm from each channel
||||
\param filename The XML or YAML file with the classifier model (e.g. trained_classifier_erGrouping.xml) |
||||
\param minProbability The minimum probability for accepting a group |
||||
\param groups The output of the algorithm are stored in this parameter as list of rectangles. |
||||
*/ |
||||
CV_EXPORTS void erGrouping(InputArrayOfArrays src, std::vector<std::vector<ERStat> > ®ions, |
||||
const std::string& filename, float minProbability,
||||
std::vector<Rect > &groups); |
||||
|
||||
} |
||||
#endif // __OPENCV_OBJDETECT_ERFILTER_HPP__
|
@@ -1,455 +0,0 @@
||||
/*M///////////////////////////////////////////////////////////////////////////////////////
|
||||
//
|
||||
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
|
||||
//
|
||||
// By downloading, copying, installing or using the software you agree to this license.
|
||||
// If you do not agree to this license, do not download, install,
|
||||
// copy or use the software.
|
||||
//
|
||||
//
|
||||
// License Agreement
|
||||
// For Open Source Computer Vision Library
|
||||
//
|
||||
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
|
||||
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
|
||||
// Copyright (C) 2013, OpenCV Foundation, all rights reserved.
|
||||
// Third party copyrights are property of their respective owners.
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without modification,
|
||||
// are permitted provided that the following conditions are met:
|
||||
//
|
||||
// * Redistribution's of source code must retain the above copyright notice,
|
||||
// this list of conditions and the following disclaimer.
|
||||
//
|
||||
// * Redistribution's in binary form must reproduce the above copyright notice,
|
||||
// this list of conditions and the following disclaimer in the documentation
|
||||
// and/or other materials provided with the distribution.
|
||||
//
|
||||
// * The name of the copyright holders may not be used to endorse or promote products
|
||||
// derived from this software without specific prior written permission.
|
||||
//
|
||||
// This software is provided by the copyright holders and contributors "as is" and
|
||||
// any express or implied warranties, including, but not limited to, the implied
|
||||
// warranties of merchantability and fitness for a particular purpose are disclaimed.
|
||||
// In no event shall the Intel Corporation or contributors be liable for any direct,
|
||||
// indirect, incidental, special, exemplary, or consequential damages
|
||||
// (including, but not limited to, procurement of substitute goods or services;
|
||||
// loss of use, data, or profits; or business interruption) however caused
|
||||
// and on any theory of liability, whether in contract, strict liability,
|
||||
// or tort (including negligence or otherwise) arising in any way out of
|
||||
// the use of this software, even if advised of the possibility of such damage.
|
||||
//
|
||||
//M*/
|
||||
|
||||
#ifndef __OPENCV_OBJDETECT_LINEMOD_HPP__ |
||||
#define __OPENCV_OBJDETECT_LINEMOD_HPP__ |
||||
|
||||
#include "opencv2/core.hpp" |
||||
#include <map> |
||||
|
||||
/****************************************************************************************\
|
||||
* LINE-MOD * |
||||
\****************************************************************************************/ |
||||
|
||||
namespace cv { |
||||
namespace linemod { |
||||
|
||||
/// @todo Convert doxy comments to rst
|
||||
|
||||
/**
|
||||
* \brief Discriminant feature described by its location and label. |
||||
*/ |
||||
struct CV_EXPORTS Feature |
||||
{ |
||||
int x; ///< x offset
|
||||
int y; ///< y offset
|
||||
int label; ///< Quantization
|
||||
|
||||
Feature() : x(0), y(0), label(0) {} |
||||
Feature(int x, int y, int label); |
||||
|
||||
void read(const FileNode& fn); |
||||
void write(FileStorage& fs) const; |
||||
}; |
||||
|
||||
inline Feature::Feature(int _x, int _y, int _label) : x(_x), y(_y), label(_label) {} |
||||
|
||||
struct CV_EXPORTS Template |
||||
{ |
||||
int width; |
||||
int height; |
||||
int pyramid_level; |
||||
std::vector<Feature> features; |
||||
|
||||
void read(const FileNode& fn); |
||||
void write(FileStorage& fs) const; |
||||
}; |
||||
|
||||
/**
|
||||
* \brief Represents a modality operating over an image pyramid. |
||||
*/ |
||||
class QuantizedPyramid |
||||
{ |
||||
public: |
||||
// Virtual destructor
|
||||
virtual ~QuantizedPyramid() {} |
||||
|
||||
/**
|
||||
* \brief Compute quantized image at current pyramid level for online detection. |
||||
* |
||||
* \param[out] dst The destination 8-bit image. For each pixel at most one bit is set, |
||||
* representing its classification. |
||||
*/ |
||||
virtual void quantize(Mat& dst) const =0; |
||||
|
||||
/**
|
||||
* \brief Extract most discriminant features at current pyramid level to form a new template. |
||||
* |
||||
* \param[out] templ The new template. |
||||
*/ |
||||
virtual bool extractTemplate(Template& templ) const =0; |
||||
|
||||
/**
|
||||
* \brief Go to the next pyramid level. |
||||
* |
||||
* \todo Allow pyramid scale factor other than 2 |
||||
*/ |
||||
virtual void pyrDown() =0; |
||||
|
||||
protected: |
||||
/// Candidate feature with a score
|
||||
struct Candidate |
||||
{ |
||||
Candidate(int x, int y, int label, float score); |
||||
|
||||
/// Sort candidates with high score to the front
|
||||
bool operator<(const Candidate& rhs) const |
||||
{ |
||||
return score > rhs.score; |
||||
} |
||||
|
||||
Feature f; |
||||
float score; |
||||
}; |
||||
|
||||
/**
|
||||
* \brief Choose candidate features so that they are not bunched together. |
||||
* |
||||
* \param[in] candidates Candidate features sorted by score. |
||||
* \param[out] features Destination vector of selected features. |
||||
* \param[in] num_features Number of candidates to select. |
||||
* \param[in] distance Hint for desired distance between features. |
||||
*/ |
||||
static void selectScatteredFeatures(const std::vector<Candidate>& candidates, |
||||
std::vector<Feature>& features, |
||||
size_t num_features, float distance); |
||||
}; |
||||
|
||||
inline QuantizedPyramid::Candidate::Candidate(int x, int y, int label, float _score) : f(x, y, label), score(_score) {} |
||||
|
||||
/**
|
||||
* \brief Interface for modalities that plug into the LINE template matching representation. |
||||
* |
||||
* \todo Max response, to allow optimization of summing (255/MAX) features as uint8 |
||||
*/ |
||||
class CV_EXPORTS Modality |
||||
{ |
||||
public: |
||||
// Virtual destructor
|
||||
virtual ~Modality() {} |
||||
|
||||
/**
|
||||
* \brief Form a quantized image pyramid from a source image. |
||||
* |
||||
* \param[in] src The source image. Type depends on the modality. |
||||
* \param[in] mask Optional mask. If not empty, unmasked pixels are set to zero |
||||
* in quantized image and cannot be extracted as features. |
||||
*/ |
||||
Ptr<QuantizedPyramid> process(const Mat& src, |
||||
const Mat& mask = Mat()) const |
||||
{ |
||||
return processImpl(src, mask); |
||||
} |
||||
|
||||
virtual String name() const =0; |
||||
|
||||
virtual void read(const FileNode& fn) =0; |
||||
virtual void write(FileStorage& fs) const =0; |
||||
|
||||
/**
|
||||
* \brief Create modality by name. |
||||
* |
||||
* The following modality types are supported: |
||||
* - "ColorGradient" |
||||
* - "DepthNormal" |
||||
*/ |
||||
static Ptr<Modality> create(const String& modality_type); |
||||
|
||||
/**
|
||||
* \brief Load a modality from file. |
||||
*/ |
||||
static Ptr<Modality> create(const FileNode& fn); |
||||
|
||||
protected: |
||||
// Indirection is because process() has a default parameter.
|
||||
virtual Ptr<QuantizedPyramid> processImpl(const Mat& src, |
||||
const Mat& mask) const =0; |
||||
}; |
||||
|
||||
/**
|
||||
* \brief Modality that computes quantized gradient orientations from a color image. |
||||
*/ |
||||
class CV_EXPORTS ColorGradient : public Modality |
||||
{ |
||||
public: |
||||
/**
|
||||
* \brief Default constructor. Uses reasonable default parameter values. |
||||
*/ |
||||
ColorGradient(); |
||||
|
||||
/**
|
||||
* \brief Constructor. |
||||
* |
||||
* \param weak_threshold When quantizing, discard gradients with magnitude less than this. |
||||
* \param num_features How many features a template must contain. |
||||
* \param strong_threshold Consider as candidate features only gradients whose norms are |
||||
* larger than this. |
||||
*/ |
||||
ColorGradient(float weak_threshold, size_t num_features, float strong_threshold); |
||||
|
||||
virtual String name() const; |
||||
|
||||
virtual void read(const FileNode& fn); |
||||
virtual void write(FileStorage& fs) const; |
||||
|
||||
float weak_threshold; |
||||
size_t num_features; |
||||
float strong_threshold; |
||||
|
||||
protected: |
||||
virtual Ptr<QuantizedPyramid> processImpl(const Mat& src, |
||||
const Mat& mask) const; |
||||
}; |
||||
|
||||
/**
|
||||
* \brief Modality that computes quantized surface normals from a dense depth map. |
||||
*/ |
||||
class CV_EXPORTS DepthNormal : public Modality |
||||
{ |
||||
public: |
||||
/**
|
||||
* \brief Default constructor. Uses reasonable default parameter values. |
||||
*/ |
||||
DepthNormal(); |
||||
|
||||
/**
|
||||
* \brief Constructor. |
||||
* |
||||
* \param distance_threshold Ignore pixels beyond this distance. |
||||
* \param difference_threshold When computing normals, ignore contributions of pixels whose |
||||
* depth difference with the central pixel is above this threshold. |
||||
* \param num_features How many features a template must contain. |
||||
* \param extract_threshold Consider as candidate feature only if there are no differing |
||||
* orientations within a distance of extract_threshold. |
||||
*/ |
||||
DepthNormal(int distance_threshold, int difference_threshold, size_t num_features, |
||||
int extract_threshold); |
||||
|
||||
virtual String name() const; |
||||
|
||||
virtual void read(const FileNode& fn); |
||||
virtual void write(FileStorage& fs) const; |
||||
|
||||
int distance_threshold; |
||||
int difference_threshold; |
||||
size_t num_features; |
||||
int extract_threshold; |
||||
|
||||
protected: |
||||
virtual Ptr<QuantizedPyramid> processImpl(const Mat& src, |
||||
const Mat& mask) const; |
||||
}; |
||||
|
||||
/**
|
||||
* \brief Debug function to colormap a quantized image for viewing. |
||||
*/ |
||||
void colormap(const Mat& quantized, Mat& dst); |
||||
|
||||
/**
|
||||
* \brief Represents a successful template match. |
||||
*/ |
||||
struct CV_EXPORTS Match |
||||
{ |
||||
Match() |
||||
{ |
||||
} |
||||
|
||||
Match(int x, int y, float similarity, const String& class_id, int template_id); |
||||
|
||||
/// Sort matches with high similarity to the front
|
||||
bool operator<(const Match& rhs) const |
||||
{ |
||||
// Secondarily sort on template_id for the sake of duplicate removal
|
||||
if (similarity != rhs.similarity) |
||||
return similarity > rhs.similarity; |
||||
else |
||||
return template_id < rhs.template_id; |
||||
} |
||||
|
||||
bool operator==(const Match& rhs) const |
||||
{ |
||||
return x == rhs.x && y == rhs.y && similarity == rhs.similarity && class_id == rhs.class_id; |
||||
} |
||||
|
||||
int x; |
||||
int y; |
||||
float similarity; |
||||
String class_id; |
||||
int template_id; |
||||
}; |
||||
|
||||
inline |
||||
Match::Match(int _x, int _y, float _similarity, const String& _class_id, int _template_id) |
||||
: x(_x), y(_y), similarity(_similarity), class_id(_class_id), template_id(_template_id) |
||||
{} |
||||
|
||||
/**
|
||||
* \brief Object detector using the LINE template matching algorithm with any set of |
||||
* modalities. |
||||
*/ |
||||
class CV_EXPORTS Detector |
||||
{ |
||||
public: |
||||
/**
|
||||
* \brief Empty constructor, initialize with read(). |
||||
*/ |
||||
Detector(); |
||||
|
||||
/**
|
||||
* \brief Constructor. |
||||
* |
||||
* \param modalities Modalities to use (color gradients, depth normals, ...). |
||||
* \param T_pyramid Value of the sampling step T at each pyramid level. The |
||||
* number of pyramid levels is T_pyramid.size(). |
||||
*/ |
||||
Detector(const std::vector< Ptr<Modality> >& modalities, const std::vector<int>& T_pyramid); |
||||
|
||||
/**
|
||||
* \brief Detect objects by template matching. |
||||
* |
||||
* Matches globally at the lowest pyramid level, then refines locally stepping up the pyramid. |
||||
* |
||||
* \param sources Source images, one for each modality. |
||||
* \param threshold Similarity threshold, a percentage between 0 and 100. |
||||
* \param[out] matches Template matches, sorted by similarity score. |
||||
* \param class_ids If non-empty, only search for the desired object classes. |
||||
* \param[out] quantized_images Optionally return vector<Mat> of quantized images. |
||||
* \param masks The masks for consideration during matching. The masks should be CV_8UC1 |
||||
* where 255 represents a valid pixel. If non-empty, the vector must be |
||||
* the same size as sources. Each element must be |
||||
* empty or the same size as its corresponding source. |
||||
*/ |
||||
void match(const std::vector<Mat>& sources, float threshold, std::vector<Match>& matches, |
||||
const std::vector<String>& class_ids = std::vector<String>(), |
||||
OutputArrayOfArrays quantized_images = noArray(), |
||||
const std::vector<Mat>& masks = std::vector<Mat>()) const; |
||||
|
||||
/**
|
||||
* \brief Add new object template. |
||||
* |
||||
* \param sources Source images, one for each modality. |
||||
* \param class_id Object class ID. |
||||
* \param object_mask Mask separating object from background. |
||||
* \param[out] bounding_box Optionally return bounding box of the extracted features. |
||||
* |
||||
* \return Template ID, or -1 if failed to extract a valid template. |
||||
*/ |
||||
int addTemplate(const std::vector<Mat>& sources, const String& class_id, |
||||
const Mat& object_mask, Rect* bounding_box = NULL); |
||||
|
||||
/**
|
||||
* \brief Add a new object template computed by external means. |
||||
*/ |
||||
int addSyntheticTemplate(const std::vector<Template>& templates, const String& class_id); |
||||
|
||||
/**
|
||||
* \brief Get the modalities used by this detector. |
||||
* |
||||
* You are not permitted to add/remove modalities, but you may dynamic_cast them to |
||||
* tweak parameters. |
||||
*/ |
||||
const std::vector< Ptr<Modality> >& getModalities() const { return modalities; } |
||||
|
||||
/**
|
||||
* \brief Get sampling step T at pyramid_level. |
||||
*/ |
||||
int getT(int pyramid_level) const { return T_at_level[pyramid_level]; } |
||||
|
||||
/**
|
||||
* \brief Get number of pyramid levels used by this detector. |
||||
*/ |
||||
int pyramidLevels() const { return pyramid_levels; } |
||||
|
||||
/**
|
||||
* \brief Get the template pyramid identified by template_id. |
||||
* |
||||
* For example, with 2 modalities (Gradient, Normal) and two pyramid levels |
||||
* (L0, L1), the order is (GradientL0, NormalL0, GradientL1, NormalL1). |
||||
*/ |
||||
const std::vector<Template>& getTemplates(const String& class_id, int template_id) const; |
||||
|
||||
int numTemplates() const; |
||||
int numTemplates(const String& class_id) const; |
||||
int numClasses() const { return static_cast<int>(class_templates.size()); } |
||||
|
||||
std::vector<String> classIds() const; |
||||
|
||||
void read(const FileNode& fn); |
||||
void write(FileStorage& fs) const; |
||||
|
||||
String readClass(const FileNode& fn, const String &class_id_override = ""); |
||||
void writeClass(const String& class_id, FileStorage& fs) const; |
||||
|
||||
void readClasses(const std::vector<String>& class_ids, |
||||
const String& format = "templates_%s.yml.gz"); |
||||
void writeClasses(const String& format = "templates_%s.yml.gz") const; |
||||
|
||||
protected: |
||||
std::vector< Ptr<Modality> > modalities; |
||||
int pyramid_levels; |
||||
std::vector<int> T_at_level; |
||||
|
||||
typedef std::vector<Template> TemplatePyramid; |
||||
typedef std::map<String, std::vector<TemplatePyramid> > TemplatesMap; |
||||
TemplatesMap class_templates; |
||||
|
||||
typedef std::vector<Mat> LinearMemories; |
||||
// Indexed as [pyramid level][modality][quantized label]
|
||||
typedef std::vector< std::vector<LinearMemories> > LinearMemoryPyramid; |
||||
|
||||
void matchClass(const LinearMemoryPyramid& lm_pyramid, |
||||
const std::vector<Size>& sizes, |
||||
float threshold, std::vector<Match>& matches, |
||||
const String& class_id, |
||||
const std::vector<TemplatePyramid>& template_pyramids) const; |
||||
}; |
||||
|
||||
/**
|
||||
* \brief Factory function for detector using LINE algorithm with color gradients. |
||||
* |
||||
* Default parameter settings suitable for VGA images. |
||||
*/ |
||||
CV_EXPORTS Ptr<Detector> getDefaultLINE(); |
||||
|
||||
/**
|
||||
* \brief Factory function for detector using LINE-MOD algorithm with color gradients |
||||
* and depth normals. |
||||
* |
||||
* Default parameter settings suitable for VGA images. |
||||
*/ |
||||
CV_EXPORTS Ptr<Detector> getDefaultLINEMOD(); |
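
/**
 * Example usage (an illustrative sketch, not part of this header; it assumes
 * `color` (CV_8UC3), `depth` (CV_16UC1) and `object_mask` images are already loaded):
 *
 * \code
 * Ptr<linemod::Detector> detector = linemod::getDefaultLINEMOD();
 *
 * // one source image per modality: color gradients first, then depth normals
 * std::vector<Mat> sources;
 * sources.push_back(color);
 * sources.push_back(depth);
 * detector->addTemplate(sources, "my_object", object_mask);
 *
 * // match against a new view, keeping candidates with at least 80% similarity
 * std::vector<linemod::Match> matches;
 * detector->match(sources, 80.0f, matches);
 * \endcode
 */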
||||
|
||||
} // namespace linemod
|
||||
} // namespace cv
|
||||
|
||||
#endif // __OPENCV_OBJDETECT_LINEMOD_HPP__
|
@@ -1,20 +0,0 @@
||||
<?xml version="1.0" encoding="UTF-8"?> |
||||
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> |
||||
<plist version="1.0"> |
||||
<dict> |
||||
<key>CFBundleDevelopmentRegion</key> |
||||
<string>English</string> |
||||
<key>CFBundleExecutable</key> |
||||
<string>${EXECUTABLE_NAME}</string> |
||||
<key>CFBundleIdentifier</key> |
||||
<string>de.rwth-aachen.ient.FaceTracker</string> |
||||
<key>CFBundleInfoDictionaryVersion</key> |
||||
<string>6.0</string> |
||||
<key>CFBundlePackageType</key> |
||||
<string>APPL</string> |
||||
<key>CFBundleSignature</key> |
||||
<string>????</string> |
||||
<key>CFBundleVersion</key> |
||||
<string>1.0</string> |
||||
</dict> |
||||
</plist> |
@@ -1,86 +0,0 @@
||||
|
||||
#include <OpenCV/OpenCV.h> |
||||
#include <cassert> |
||||
#include <iostream> |
||||
|
||||
|
||||
const char * WINDOW_NAME = "Face Tracker"; |
||||
const CFIndex CASCADE_NAME_LEN = 2048; |
||||
char CASCADE_NAME[CASCADE_NAME_LEN] = "~/opencv/data/haarcascades/haarcascade_frontalface_alt2.xml"; |
||||
|
||||
using namespace std; |
||||
|
||||
int main (int argc, char * const argv[]) |
||||
{ |
||||
const int scale = 2; |
||||
|
||||
// locate haar cascade from inside application bundle
|
||||
// (this is the mac way to package application resources)
|
||||
CFBundleRef mainBundle = CFBundleGetMainBundle (); |
||||
assert (mainBundle); |
||||
CFURLRef cascade_url = CFBundleCopyResourceURL (mainBundle, CFSTR("haarcascade_frontalface_alt2"), CFSTR("xml"), NULL); |
||||
assert (cascade_url); |
||||
Boolean got_it = CFURLGetFileSystemRepresentation (cascade_url, true, |
||||
reinterpret_cast<UInt8 *>(CASCADE_NAME), CASCADE_NAME_LEN); |
||||
if (! got_it) |
||||
abort (); |
||||
|
||||
// create all necessary instances
|
||||
cvNamedWindow (WINDOW_NAME, CV_WINDOW_AUTOSIZE); |
||||
CvCapture * camera = cvCreateCameraCapture (CV_CAP_ANY); |
||||
CvHaarClassifierCascade* cascade = (CvHaarClassifierCascade*) cvLoad (CASCADE_NAME, 0, 0, 0); |
||||
CvMemStorage* storage = cvCreateMemStorage(0); |
||||
assert (storage); |
||||
|
||||
// you do own an iSight, don't you ?!?
|
||||
if (! camera) |
||||
abort (); |
||||
|
||||
// did we load the cascade?!?
|
||||
if (! cascade) |
||||
abort (); |
||||
|
||||
// get an initial frame and duplicate it for later work
|
||||
IplImage * current_frame = cvQueryFrame (camera); |
||||
IplImage * draw_image = cvCreateImage(cvSize (current_frame->width, current_frame->height), IPL_DEPTH_8U, 3); |
||||
IplImage * gray_image = cvCreateImage(cvSize (current_frame->width, current_frame->height), IPL_DEPTH_8U, 1); |
||||
IplImage * small_image = cvCreateImage(cvSize (current_frame->width / scale, current_frame->height / scale), IPL_DEPTH_8U, 1); |
||||
assert (current_frame && gray_image && draw_image); |
||||
|
||||
// as long as there are images ...
|
||||
while (current_frame = cvQueryFrame (camera)) |
||||
{ |
||||
// convert to gray and downsize
|
||||
cvCvtColor (current_frame, gray_image, CV_BGR2GRAY); |
||||
cvResize (gray_image, small_image, CV_INTER_LINEAR); |
||||
|
||||
// detect faces
|
||||
CvSeq* faces = cvHaarDetectObjects (small_image, cascade, storage, |
||||
1.1, 2, CV_HAAR_DO_CANNY_PRUNING, |
||||
cvSize (30, 30)); |
||||
|
||||
// draw faces
|
||||
cvFlip (current_frame, draw_image, 1); |
||||
for (int i = 0; i < (faces ? faces->total : 0); i++) |
||||
{ |
||||
CvRect* r = (CvRect*) cvGetSeqElem (faces, i); |
||||
CvPoint center; |
||||
int radius; |
||||
center.x = cvRound((small_image->width - r->width*0.5 - r->x) *scale); |
||||
center.y = cvRound((r->y + r->height*0.5)*scale); |
||||
radius = cvRound((r->width + r->height)*0.25*scale); |
||||
cvCircle (draw_image, center, radius, CV_RGB(0,255,0), 3, 8, 0 ); |
||||
} |
||||
|
||||
// just show the image
|
||||
cvShowImage (WINDOW_NAME, draw_image); |
||||
|
||||
// wait a tenth of a second for keypress and window drawing
|
||||
int key = cvWaitKey (100); |
||||
if (key == 'q' || key == 'Q') |
||||
break; |
||||
} |
||||
|
||||
// be nice and return no error
|
||||
return 0; |
||||
} |
@ -1,262 +0,0 @@ |
||||
// !$*UTF8*$! |
||||
{ |
||||
archiveVersion = 1; |
||||
classes = { |
||||
}; |
||||
objectVersion = 42; |
||||
objects = { |
||||
|
||||
/* Begin PBXBuildFile section */ |
||||
4D7DBE8E0C04A90C00D8835D /* FaceTracker.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 08FB7796FE84155DC02AAC07 /* FaceTracker.cpp */; }; |
||||
4D95C9BE0C0577B200983E4D /* OpenCV.framework in Frameworks */ = {isa = PBXBuildFile; fileRef = 4D06E1E00C039982004AF23F /* OpenCV.framework */; }; |
||||
4D95C9D80C0577BD00983E4D /* OpenCV.framework in CopyFiles */ = {isa = PBXBuildFile; fileRef = 4D06E1E00C039982004AF23F /* OpenCV.framework */; }; |
||||
4DBF87310C05731500880673 /* haarcascade_frontalface_alt2.xml in Resources */ = {isa = PBXBuildFile; fileRef = 4DBF87300C05731500880673 /* haarcascade_frontalface_alt2.xml */; }; |
||||
/* End PBXBuildFile section */ |
||||
|
||||
/* Begin PBXCopyFilesBuildPhase section */ |
||||
4D7DBE8F0C04A93300D8835D /* CopyFiles */ = { |
||||
isa = PBXCopyFilesBuildPhase; |
||||
buildActionMask = 2147483647; |
||||
dstPath = ""; |
||||
dstSubfolderSpec = 10; |
||||
files = ( |
||||
4D95C9D80C0577BD00983E4D /* OpenCV.framework in CopyFiles */, |
||||
); |
||||
runOnlyForDeploymentPostprocessing = 0; |
||||
}; |
||||
/* End PBXCopyFilesBuildPhase section */ |
||||
|
||||
/* Begin PBXFileReference section */ |
||||
08FB7796FE84155DC02AAC07 /* FaceTracker.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = FaceTracker.cpp; sourceTree = "<group>"; }; |
||||
4D06E1E00C039982004AF23F /* OpenCV.framework */ = {isa = PBXFileReference; lastKnownFileType = wrapper.framework; name = OpenCV.framework; path = ../../../OpenCV.framework; sourceTree = SOURCE_ROOT; }; |
||||
4D4CDBCC0C0630060001A8A2 /* README.txt */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text; path = README.txt; sourceTree = "<group>"; }; |
||||
4D7DBE570C04A8FF00D8835D /* FaceTracker.app */ = {isa = PBXFileReference; explicitFileType = wrapper.application; includeInIndex = 0; path = FaceTracker.app; sourceTree = BUILT_PRODUCTS_DIR; }; |
||||
4D7DBE590C04A8FF00D8835D /* FaceTracker-Info.plist */ = {isa = PBXFileReference; lastKnownFileType = text.xml; path = "FaceTracker-Info.plist"; sourceTree = "<group>"; }; |
||||
4DBF87300C05731500880673 /* haarcascade_frontalface_alt2.xml */ = {isa = PBXFileReference; fileEncoding = 5; lastKnownFileType = text.xml; name = haarcascade_frontalface_alt2.xml; path = ../../../data/haarcascades/haarcascade_frontalface_alt2.xml; sourceTree = SOURCE_ROOT; }; |
||||
/* End PBXFileReference section */ |
||||
|
||||
/* Begin PBXFrameworksBuildPhase section */ |
||||
4D7DBE550C04A8FF00D8835D /* Frameworks */ = { |
||||
isa = PBXFrameworksBuildPhase; |
||||
buildActionMask = 2147483647; |
||||
files = ( |
||||
4D95C9BE0C0577B200983E4D /* OpenCV.framework in Frameworks */, |
||||
); |
||||
runOnlyForDeploymentPostprocessing = 0; |
||||
}; |
||||
/* End PBXFrameworksBuildPhase section */ |
||||
|
||||
/* Begin PBXGroup section */ |
||||
08FB7794FE84155DC02AAC07 /* FrameworkTest */ = { |
||||
isa = PBXGroup; |
||||
children = ( |
||||
4D4CDBCC0C0630060001A8A2 /* README.txt */, |
||||
08FB7795FE84155DC02AAC07 /* Source */, |
||||
4DBF872C0C0572BC00880673 /* Resources */, |
||||
4D9D40B00C04AC1600EEFFD0 /* Frameworks */, |
||||
1AB674ADFE9D54B511CA2CBB /* Products */, |
||||
); |
||||
name = FrameworkTest; |
||||
sourceTree = "<group>"; |
||||
}; |
||||
08FB7795FE84155DC02AAC07 /* Source */ = { |
||||
isa = PBXGroup; |
||||
children = ( |
||||
08FB7796FE84155DC02AAC07 /* FaceTracker.cpp */, |
||||
); |
||||
name = Source; |
||||
sourceTree = "<group>"; |
||||
}; |
||||
1AB674ADFE9D54B511CA2CBB /* Products */ = { |
||||
isa = PBXGroup; |
||||
children = ( |
||||
4D7DBE570C04A8FF00D8835D /* FaceTracker.app */, |
||||
); |
||||
name = Products; |
||||
sourceTree = "<group>"; |
||||
}; |
||||
4D9D40B00C04AC1600EEFFD0 /* Frameworks */ = { |
||||
isa = PBXGroup; |
||||
children = ( |
||||
4D06E1E00C039982004AF23F /* OpenCV.framework */, |
||||
); |
||||
name = Frameworks; |
||||
sourceTree = "<group>"; |
||||
}; |
||||
4DBF872C0C0572BC00880673 /* Resources */ = { |
||||
isa = PBXGroup; |
||||
children = ( |
||||
4DBF87300C05731500880673 /* haarcascade_frontalface_alt2.xml */, |
||||
4D7DBE590C04A8FF00D8835D /* FaceTracker-Info.plist */, |
||||
); |
||||
name = Resources; |
||||
sourceTree = "<group>"; |
||||
}; |
||||
/* End PBXGroup section */ |
||||
|
||||
/* Begin PBXNativeTarget section */ |
||||
4D7DBE560C04A8FF00D8835D /* FaceTracker */ = { |
||||
isa = PBXNativeTarget; |
||||
buildConfigurationList = 4D7DBE5A0C04A8FF00D8835D /* Build configuration list for PBXNativeTarget "FaceTracker" */; |
||||
buildPhases = ( |
||||
4D7DBE530C04A8FF00D8835D /* Resources */, |
||||
4D7DBE540C04A8FF00D8835D /* Sources */, |
||||
4D7DBE550C04A8FF00D8835D /* Frameworks */, |
||||
4D7DBE8F0C04A93300D8835D /* CopyFiles */, |
||||
); |
||||
buildRules = ( |
||||
); |
||||
dependencies = ( |
||||
); |
||||
name = FaceTracker; |
||||
productName = FaceTracker; |
||||
productReference = 4D7DBE570C04A8FF00D8835D /* FaceTracker.app */; |
||||
productType = "com.apple.product-type.application"; |
||||
}; |
||||
/* End PBXNativeTarget section */ |
||||
|
||||
/* Begin PBXProject section */ |
||||
08FB7793FE84155DC02AAC07 /* Project object */ = { |
||||
isa = PBXProject; |
||||
buildConfigurationList = 1DEB923508733DC60010E9CD /* Build configuration list for PBXProject "FaceTracker" */; |
||||
hasScannedForEncodings = 1; |
||||
mainGroup = 08FB7794FE84155DC02AAC07 /* FrameworkTest */; |
||||
projectDirPath = ""; |
||||
targets = ( |
||||
4D7DBE560C04A8FF00D8835D /* FaceTracker */, |
||||
); |
||||
}; |
||||
/* End PBXProject section */ |
||||
|
||||
/* Begin PBXResourcesBuildPhase section */ |
||||
4D7DBE530C04A8FF00D8835D /* Resources */ = { |
||||
isa = PBXResourcesBuildPhase; |
||||
buildActionMask = 2147483647; |
||||
files = ( |
||||
4DBF87310C05731500880673 /* haarcascade_frontalface_alt2.xml in Resources */, |
||||
); |
||||
runOnlyForDeploymentPostprocessing = 0; |
||||
}; |
||||
/* End PBXResourcesBuildPhase section */ |
||||
|
||||
/* Begin PBXSourcesBuildPhase section */ |
||||
4D7DBE540C04A8FF00D8835D /* Sources */ = { |
||||
isa = PBXSourcesBuildPhase; |
||||
buildActionMask = 2147483647; |
||||
files = ( |
||||
4D7DBE8E0C04A90C00D8835D /* FaceTracker.cpp in Sources */, |
||||
); |
||||
runOnlyForDeploymentPostprocessing = 0; |
||||
}; |
||||
/* End PBXSourcesBuildPhase section */ |
||||
|
||||
/* Begin XCBuildConfiguration section */ |
||||
1DEB923608733DC60010E9CD /* Debug */ = { |
||||
isa = XCBuildConfiguration; |
||||
buildSettings = { |
||||
GCC_WARN_ABOUT_RETURN_TYPE = YES; |
||||
GCC_WARN_UNUSED_VARIABLE = YES; |
||||
PREBINDING = NO; |
||||
SDKROOT = /Developer/SDKs/MacOSX10.4u.sdk; |
||||
}; |
||||
name = Debug; |
||||
}; |
||||
1DEB923708733DC60010E9CD /* Release */ = { |
||||
isa = XCBuildConfiguration; |
||||
buildSettings = { |
||||
ARCHS = ( |
||||
ppc, |
||||
i386, |
||||
); |
||||
GCC_WARN_ABOUT_RETURN_TYPE = YES; |
||||
GCC_WARN_UNUSED_VARIABLE = YES; |
||||
PREBINDING = NO; |
||||
SDKROOT = /Developer/SDKs/MacOSX10.4u.sdk; |
||||
}; |
||||
name = Release; |
||||
}; |
||||
4D7DBE5B0C04A8FF00D8835D /* Debug */ = { |
||||
isa = XCBuildConfiguration; |
||||
buildSettings = { |
||||
COPY_PHASE_STRIP = NO; |
||||
FRAMEWORK_SEARCH_PATHS = ( |
||||
"$(inherited)", |
||||
"$(FRAMEWORK_SEARCH_PATHS_QUOTED_1)", |
||||
"$(FRAMEWORK_SEARCH_PATHS_QUOTED_2)", |
||||
); |
||||
FRAMEWORK_SEARCH_PATHS_QUOTED_1 = "\"$(SRCROOT)/../opencv\""; |
||||
FRAMEWORK_SEARCH_PATHS_QUOTED_2 = "\"$(SRCROOT)/../../..\""; |
||||
GCC_DYNAMIC_NO_PIC = NO; |
||||
GCC_ENABLE_FIX_AND_CONTINUE = YES; |
||||
GCC_GENERATE_DEBUGGING_SYMBOLS = YES; |
||||
GCC_MODEL_TUNING = G5; |
||||
GCC_OPTIMIZATION_LEVEL = 0; |
||||
GCC_PRECOMPILE_PREFIX_HEADER = YES; |
||||
GCC_PREFIX_HEADER = "$(SYSTEM_LIBRARY_DIR)/Frameworks/Carbon.framework/Headers/Carbon.h"; |
||||
INFOPLIST_FILE = "FaceTracker-Info.plist"; |
||||
INSTALL_PATH = "$(HOME)/Applications"; |
||||
OTHER_LDFLAGS = ( |
||||
"-framework", |
||||
Carbon, |
||||
); |
||||
PREBINDING = NO; |
||||
PRODUCT_NAME = FaceTracker; |
||||
WRAPPER_EXTENSION = app; |
||||
ZERO_LINK = YES; |
||||
}; |
||||
name = Debug; |
||||
}; |
||||
4D7DBE5C0C04A8FF00D8835D /* Release */ = { |
||||
isa = XCBuildConfiguration; |
||||
buildSettings = { |
||||
COPY_PHASE_STRIP = YES; |
||||
FRAMEWORK_SEARCH_PATHS = ( |
||||
"$(inherited)", |
||||
"$(FRAMEWORK_SEARCH_PATHS_QUOTED_1)", |
||||
"$(FRAMEWORK_SEARCH_PATHS_QUOTED_2)", |
||||
); |
||||
FRAMEWORK_SEARCH_PATHS_QUOTED_1 = "\"$(SRCROOT)/../opencv\""; |
||||
FRAMEWORK_SEARCH_PATHS_QUOTED_2 = "\"$(SRCROOT)/../../..\""; |
||||
GCC_ENABLE_FIX_AND_CONTINUE = NO; |
||||
GCC_GENERATE_DEBUGGING_SYMBOLS = NO; |
||||
GCC_MODEL_TUNING = G5; |
||||
GCC_PRECOMPILE_PREFIX_HEADER = YES; |
||||
GCC_PREFIX_HEADER = "$(SYSTEM_LIBRARY_DIR)/Frameworks/Carbon.framework/Headers/Carbon.h"; |
||||
INFOPLIST_FILE = "FaceTracker-Info.plist"; |
||||
INSTALL_PATH = "$(HOME)/Applications"; |
||||
OTHER_LDFLAGS = ( |
||||
"-framework", |
||||
Carbon, |
||||
); |
||||
PREBINDING = NO; |
||||
PRODUCT_NAME = FaceTracker; |
||||
WRAPPER_EXTENSION = app; |
||||
ZERO_LINK = NO; |
||||
}; |
||||
name = Release; |
||||
}; |
||||
/* End XCBuildConfiguration section */ |
||||
|
||||
/* Begin XCConfigurationList section */ |
||||
1DEB923508733DC60010E9CD /* Build configuration list for PBXProject "FaceTracker" */ = { |
||||
isa = XCConfigurationList; |
||||
buildConfigurations = ( |
||||
1DEB923608733DC60010E9CD /* Debug */, |
||||
1DEB923708733DC60010E9CD /* Release */, |
||||
); |
||||
defaultConfigurationIsVisible = 0; |
||||
defaultConfigurationName = Release; |
||||
}; |
||||
4D7DBE5A0C04A8FF00D8835D /* Build configuration list for PBXNativeTarget "FaceTracker" */ = { |
||||
isa = XCConfigurationList; |
||||
buildConfigurations = ( |
||||
4D7DBE5B0C04A8FF00D8835D /* Debug */, |
||||
4D7DBE5C0C04A8FF00D8835D /* Release */, |
||||
); |
||||
defaultConfigurationIsVisible = 0; |
||||
defaultConfigurationName = Release; |
||||
}; |
||||
/* End XCConfigurationList section */ |
||||
}; |
||||
rootObject = 08FB7793FE84155DC02AAC07 /* Project object */; |
||||
} |
@ -1,35 +0,0 @@ |
||||
FaceTracker/REAME.txt |
||||
2007-05-24, Mark Asbach <asbach@ient.rwth-aachen.de> |
||||
|
||||
Objective: |
||||
This document is intended to get you up and running with an OpenCV Framework on Mac OS X |
||||
|
||||
Building the OpenCV.framework: |
||||
In the main directory of the opencv distribution, you will find a shell script called |
||||
'make_frameworks.sh' that does all of the typical unixy './configure && make' stuff required |
||||
to build a universal binary framework. Invoke this script from Terminal.app, wait some minutes |
||||
and you are done. |
||||
|
||||
OpenCV is a Private Framework: |
||||
On Mac OS X the concept of Framework bundles is meant to simplify distribution of shared libraries, |
||||
accompanying headers and documentation. There are however to subtly different 'flavours' of |
||||
Frameworks: public and private ones. The public frameworks get installed into the Frameworks |
||||
diretories in /Library, /System/Library or ~/Library and are meant to be shared amongst |
||||
applications. The private frameworks are only distributed as parts of an Application Bundle. |
||||
This makes it easier to deploy applications because they bring their own framework invisibly to |
||||
the user. No installation of the framework is necessary and different applications can bring |
||||
different versions of the same framework without any conflict. |
||||
Since OpenCV is still a moving target, it seems best to avoid any installation and versioning issues |
||||
for an end user. The OpenCV framework that currently comes with this demo application therefore |
||||
is a Private Framework. |
||||
|
||||
Use it for targets that result in an Application Bundle: |
||||
Since it is a Private Framework, it must be copied to the Frameworks/ directory of an Application |
||||
Bundle, which means, it is useless for plain unix console applications. You should create a Carbon |
||||
or a Cocoa application target in XCode for your projects. Then add the OpenCV.framework just like |
||||
in this demo and add a Copy Files build phase to your target. Let that phase copy to the Framework |
||||
directory and drop the OpenCV.framework on the build phase (again just like in this demo code). |
||||
|
||||
The resulting application bundle will be self contained and if you set compiler option correctly |
||||
(in the "Build" tab of the "Project Info" window you should find 'i386 ppc' for the architectures), |
||||
your application can just be copied to any OS 10.4 Mac and used without further installation. |
@ -1,705 +0,0 @@ |
||||
#include <opencv2/core.hpp> |
||||
#include <opencv2/core/utility.hpp> |
||||
#include <opencv2/imgproc/imgproc_c.h> // cvFindContours |
||||
#include <opencv2/imgproc.hpp> |
||||
#include <opencv2/objdetect.hpp> |
||||
#include <opencv2/videoio.hpp> |
||||
#include <opencv2/highgui.hpp> |
||||
#include <iterator> |
||||
#include <set> |
||||
#include <cstdio> |
||||
#include <iostream> |
||||
|
||||
// Function prototypes
|
||||
void subtractPlane(const cv::Mat& depth, cv::Mat& mask, std::vector<CvPoint>& chain, double f); |
||||
|
||||
std::vector<CvPoint> maskFromTemplate(const std::vector<cv::linemod::Template>& templates, |
||||
int num_modalities, cv::Point offset, cv::Size size, |
||||
cv::Mat& mask, cv::Mat& dst); |
||||
|
||||
void templateConvexHull(const std::vector<cv::linemod::Template>& templates, |
||||
int num_modalities, cv::Point offset, cv::Size size, |
||||
cv::Mat& dst); |
||||
|
||||
void drawResponse(const std::vector<cv::linemod::Template>& templates, |
||||
int num_modalities, cv::Mat& dst, cv::Point offset, int T); |
||||
|
||||
cv::Mat displayQuantized(const cv::Mat& quantized); |
||||
|
||||
// Copy of cv_mouse from cv_utilities
|
||||
class Mouse |
||||
{ |
||||
public: |
||||
static void start(const std::string& a_img_name) |
||||
{ |
||||
cv::setMouseCallback(a_img_name.c_str(), Mouse::cv_on_mouse, 0); |
||||
} |
||||
static int event(void) |
||||
{ |
||||
int l_event = m_event; |
||||
m_event = -1; |
||||
return l_event; |
||||
} |
||||
static int x(void) |
||||
{ |
||||
return m_x; |
||||
} |
||||
static int y(void) |
||||
{ |
||||
return m_y; |
||||
} |
||||
|
||||
private: |
||||
static void cv_on_mouse(int a_event, int a_x, int a_y, int, void *) |
||||
{ |
||||
m_event = a_event; |
||||
m_x = a_x; |
||||
m_y = a_y; |
||||
} |
||||
|
||||
static int m_event; |
||||
static int m_x; |
||||
static int m_y; |
||||
}; |
||||
int Mouse::m_event; |
||||
int Mouse::m_x; |
||||
int Mouse::m_y; |
||||
|
||||
static void help() |
||||
{ |
||||
printf("Usage: openni_demo [templates.yml]\n\n" |
||||
"Place your object on a planar, featureless surface. With the mouse,\n" |
||||
"frame it in the 'color' window and right click to learn a first template.\n" |
||||
"Then press 'l' to enter online learning mode, and move the camera around.\n" |
||||
"When the match score falls between 90-95%% the demo will add a new template.\n\n" |
||||
"Keys:\n" |
||||
"\t h -- This help page\n" |
||||
"\t l -- Toggle online learning\n" |
||||
"\t m -- Toggle printing match result\n" |
||||
"\t t -- Toggle printing timings\n" |
||||
"\t w -- Write learned templates to disk\n" |
||||
"\t [ ] -- Adjust matching threshold: '[' down, ']' up\n" |
||||
"\t q -- Quit\n\n"); |
||||
} |
||||
|
||||
// Adapted from cv_timer in cv_utilities
|
||||
class Timer |
||||
{ |
||||
public: |
||||
Timer() : start_(0), time_(0) {} |
||||
|
||||
void start() |
||||
{ |
||||
start_ = cv::getTickCount(); |
||||
} |
||||
|
||||
void stop() |
||||
{ |
||||
CV_Assert(start_ != 0); |
||||
int64 end = cv::getTickCount(); |
||||
time_ += end - start_; |
||||
start_ = 0; |
||||
} |
||||
|
||||
double time() |
||||
{ |
||||
double ret = time_ / cv::getTickFrequency(); |
||||
time_ = 0; |
||||
return ret; |
||||
} |
||||
|
||||
private: |
||||
int64 start_, time_; |
||||
}; |
||||
|
||||
// Functions to store detector and templates in single XML/YAML file
|
||||
static cv::Ptr<cv::linemod::Detector> readLinemod(const std::string& filename) |
||||
{ |
||||
cv::Ptr<cv::linemod::Detector> detector = cv::makePtr<cv::linemod::Detector>(); |
||||
cv::FileStorage fs(filename, cv::FileStorage::READ); |
||||
detector->read(fs.root()); |
||||
|
||||
cv::FileNode fn = fs["classes"]; |
||||
for (cv::FileNodeIterator i = fn.begin(), iend = fn.end(); i != iend; ++i) |
||||
detector->readClass(*i); |
||||
|
||||
return detector; |
||||
} |
||||
|
||||
static void writeLinemod(const cv::Ptr<cv::linemod::Detector>& detector, const std::string& filename) |
||||
{ |
||||
cv::FileStorage fs(filename, cv::FileStorage::WRITE); |
||||
detector->write(fs); |
||||
|
||||
std::vector<cv::String> ids = detector->classIds(); |
||||
fs << "classes" << "["; |
||||
for (int i = 0; i < (int)ids.size(); ++i) |
||||
{ |
||||
fs << "{"; |
||||
detector->writeClass(ids[i], fs); |
||||
fs << "}"; // current class
|
||||
} |
||||
fs << "]"; // classes
|
||||
} |
||||
|
||||
|
||||
int main(int argc, char * argv[]) |
||||
{ |
||||
// Various settings and flags
|
||||
bool show_match_result = true; |
||||
bool show_timings = false; |
||||
bool learn_online = false; |
||||
int num_classes = 0; |
||||
int matching_threshold = 80; |
||||
/// @todo Keys for changing these?
|
||||
cv::Size roi_size(200, 200); |
||||
int learning_lower_bound = 90; |
||||
int learning_upper_bound = 95; |
||||
|
||||
// Timers
|
||||
Timer extract_timer; |
||||
Timer match_timer; |
||||
|
||||
// Initialize HighGUI
|
||||
help(); |
||||
cv::namedWindow("color"); |
||||
cv::namedWindow("normals"); |
||||
Mouse::start("color"); |
||||
|
||||
// Initialize LINEMOD data structures
|
||||
cv::Ptr<cv::linemod::Detector> detector; |
||||
std::string filename; |
||||
if (argc == 1) |
||||
{ |
||||
filename = "linemod_templates.yml"; |
||||
detector = cv::linemod::getDefaultLINEMOD(); |
||||
} |
||||
else |
||||
{ |
||||
detector = readLinemod(argv[1]); |
||||
|
||||
std::vector<cv::String> ids = detector->classIds(); |
||||
num_classes = detector->numClasses(); |
||||
printf("Loaded %s with %d classes and %d templates\n", |
||||
argv[1], num_classes, detector->numTemplates()); |
||||
if (!ids.empty()) |
||||
{ |
||||
printf("Class ids:\n"); |
||||
std::copy(ids.begin(), ids.end(), std::ostream_iterator<std::string>(std::cout, "\n")); |
||||
} |
||||
} |
||||
int num_modalities = (int)detector->getModalities().size(); |
||||
|
||||
// Open Kinect sensor
|
||||
cv::VideoCapture capture( cv::CAP_OPENNI ); |
||||
if (!capture.isOpened()) |
||||
{ |
||||
printf("Could not open OpenNI-capable sensor\n"); |
||||
return -1; |
||||
} |
||||
capture.set(cv::CAP_PROP_OPENNI_REGISTRATION, 1); |
||||
double focal_length = capture.get(cv::CAP_OPENNI_DEPTH_GENERATOR_FOCAL_LENGTH); |
||||
//printf("Focal length = %f\n", focal_length);
|
||||
|
||||
// Main loop
|
||||
cv::Mat color, depth; |
||||
for(;;) |
||||
{ |
||||
// Capture next color/depth pair
|
||||
capture.grab(); |
||||
capture.retrieve(depth, cv::CAP_OPENNI_DEPTH_MAP); |
||||
capture.retrieve(color, cv::CAP_OPENNI_BGR_IMAGE); |
||||
|
||||
std::vector<cv::Mat> sources; |
||||
sources.push_back(color); |
||||
sources.push_back(depth); |
||||
cv::Mat display = color.clone(); |
||||
|
||||
if (!learn_online) |
||||
{ |
||||
cv::Point mouse(Mouse::x(), Mouse::y()); |
||||
int event = Mouse::event(); |
||||
|
||||
// Compute ROI centered on current mouse location
|
||||
cv::Point roi_offset(roi_size.width / 2, roi_size.height / 2); |
||||
cv::Point pt1 = mouse - roi_offset; // top left
|
||||
cv::Point pt2 = mouse + roi_offset; // bottom right
|
||||
|
||||
if (event == cv::EVENT_RBUTTONDOWN) |
||||
{ |
||||
// Compute object mask by subtracting the plane within the ROI
|
||||
std::vector<CvPoint> chain(4); |
||||
chain[0] = pt1; |
||||
chain[1] = cv::Point(pt2.x, pt1.y); |
||||
chain[2] = pt2; |
||||
chain[3] = cv::Point(pt1.x, pt2.y); |
||||
cv::Mat mask; |
||||
subtractPlane(depth, mask, chain, focal_length); |
||||
|
||||
cv::imshow("mask", mask); |
||||
|
||||
// Extract template
|
||||
std::string class_id = cv::format("class%d", num_classes); |
||||
cv::Rect bb; |
||||
extract_timer.start(); |
||||
int template_id = detector->addTemplate(sources, class_id, mask, &bb); |
||||
extract_timer.stop(); |
||||
if (template_id != -1) |
||||
{ |
||||
printf("*** Added template (id %d) for new object class %d***\n", |
||||
template_id, num_classes); |
||||
//printf("Extracted at (%d, %d) size %dx%d\n", bb.x, bb.y, bb.width, bb.height);
|
||||
} |
||||
|
||||
++num_classes; |
||||
} |
||||
|
||||
// Draw ROI for display
|
||||
cv::rectangle(display, pt1, pt2, CV_RGB(0,0,0), 3); |
||||
cv::rectangle(display, pt1, pt2, CV_RGB(255,255,0), 1); |
||||
} |
||||
|
||||
// Perform matching
|
||||
std::vector<cv::linemod::Match> matches; |
||||
std::vector<cv::String> class_ids; |
||||
std::vector<cv::Mat> quantized_images; |
||||
match_timer.start(); |
||||
detector->match(sources, (float)matching_threshold, matches, class_ids, quantized_images); |
||||
match_timer.stop(); |
||||
|
||||
int classes_visited = 0; |
||||
std::set<std::string> visited; |
||||
|
||||
for (int i = 0; (i < (int)matches.size()) && (classes_visited < num_classes); ++i) |
||||
{ |
||||
cv::linemod::Match m = matches[i]; |
||||
|
||||
if (visited.insert(m.class_id).second) |
||||
{ |
||||
++classes_visited; |
||||
|
||||
if (show_match_result) |
||||
{ |
||||
printf("Similarity: %5.1f%%; x: %3d; y: %3d; class: %s; template: %3d\n", |
||||
m.similarity, m.x, m.y, m.class_id.c_str(), m.template_id); |
||||
} |
||||
|
||||
// Draw matching template
|
||||
const std::vector<cv::linemod::Template>& templates = detector->getTemplates(m.class_id, m.template_id); |
||||
drawResponse(templates, num_modalities, display, cv::Point(m.x, m.y), detector->getT(0)); |
||||
|
||||
if (learn_online == true) |
||||
{ |
||||
/// @todo Online learning possibly broken by new gradient feature extraction,
|
||||
/// which assumes an accurate object outline.
|
||||
|
||||
// Compute masks based on convex hull of matched template
|
||||
cv::Mat color_mask, depth_mask; |
||||
std::vector<CvPoint> chain = maskFromTemplate(templates, num_modalities, |
||||
cv::Point(m.x, m.y), color.size(), |
||||
color_mask, display); |
||||
subtractPlane(depth, depth_mask, chain, focal_length); |
||||
|
||||
cv::imshow("mask", depth_mask); |
||||
|
||||
// If pretty sure (but not TOO sure), add new template
|
||||
if (learning_lower_bound < m.similarity && m.similarity < learning_upper_bound) |
||||
{ |
||||
extract_timer.start(); |
||||
int template_id = detector->addTemplate(sources, m.class_id, depth_mask); |
||||
extract_timer.stop(); |
||||
if (template_id != -1) |
||||
{ |
||||
printf("*** Added template (id %d) for existing object class %s***\n", |
||||
template_id, m.class_id.c_str()); |
||||
} |
||||
} |
||||
} |
||||
} |
||||
} |
||||
|
||||
if (show_match_result && matches.empty()) |
||||
printf("No matches found...\n"); |
||||
if (show_timings) |
||||
{ |
||||
printf("Training: %.2fs\n", extract_timer.time()); |
||||
printf("Matching: %.2fs\n", match_timer.time()); |
||||
} |
||||
if (show_match_result || show_timings) |
||||
printf("------------------------------------------------------------\n"); |
||||
|
||||
cv::imshow("color", display); |
||||
cv::imshow("normals", quantized_images[1]); |
||||
|
||||
cv::FileStorage fs; |
||||
char key = (char)cv::waitKey(10); |
||||
if( key == 'q' ) |
||||
break; |
||||
|
||||
switch (key) |
||||
{ |
||||
case 'h': |
||||
help(); |
||||
break; |
||||
case 'm': |
||||
// toggle printing match result
|
||||
show_match_result = !show_match_result; |
||||
printf("Show match result %s\n", show_match_result ? "ON" : "OFF"); |
||||
break; |
||||
case 't': |
||||
// toggle printing timings
|
||||
show_timings = !show_timings; |
||||
printf("Show timings %s\n", show_timings ? "ON" : "OFF"); |
||||
break; |
||||
case 'l': |
||||
// toggle online learning
|
||||
learn_online = !learn_online; |
||||
printf("Online learning %s\n", learn_online ? "ON" : "OFF"); |
||||
break; |
||||
case '[': |
||||
// decrement threshold
|
||||
matching_threshold = std::max(matching_threshold - 1, -100); |
||||
printf("New threshold: %d\n", matching_threshold); |
||||
break; |
||||
case ']': |
||||
// increment threshold
|
||||
matching_threshold = std::min(matching_threshold + 1, +100); |
||||
printf("New threshold: %d\n", matching_threshold); |
||||
break; |
||||
case 'w': |
||||
// write model to disk
|
||||
writeLinemod(detector, filename); |
||||
printf("Wrote detector and templates to %s\n", filename.c_str()); |
||||
break; |
||||
default: |
||||
; |
||||
} |
||||
} |
||||
return 0; |
||||
} |
||||
|
||||
static void reprojectPoints(const std::vector<cv::Point3d>& proj, std::vector<cv::Point3d>& real, double f) |
||||
{ |
||||
real.resize(proj.size()); |
||||
double f_inv = 1.0 / f; |
||||
|
||||
for (int i = 0; i < (int)proj.size(); ++i) |
||||
{ |
||||
double Z = proj[i].z; |
||||
real[i].x = (proj[i].x - 320.) * (f_inv * Z); |
||||
real[i].y = (proj[i].y - 240.) * (f_inv * Z); |
||||
real[i].z = Z; |
||||
} |
||||
} |
||||
|
||||
static void filterPlane(IplImage * ap_depth, std::vector<IplImage *> & a_masks, std::vector<CvPoint> & a_chain, double f) |
||||
{ |
||||
const int l_num_cost_pts = 200; |
||||
|
||||
float l_thres = 4; |
||||
|
||||
IplImage * lp_mask = cvCreateImage(cvGetSize(ap_depth), IPL_DEPTH_8U, 1); |
||||
cvSet(lp_mask, cvRealScalar(0)); |
||||
|
||||
std::vector<CvPoint> l_chain_vector; |
||||
|
||||
float l_chain_length = 0; |
||||
float * lp_seg_length = new float[a_chain.size()]; |
||||
|
||||
for (int l_i = 0; l_i < (int)a_chain.size(); ++l_i) |
||||
{ |
||||
float x_diff = (float)(a_chain[(l_i + 1) % a_chain.size()].x - a_chain[l_i].x); |
||||
float y_diff = (float)(a_chain[(l_i + 1) % a_chain.size()].y - a_chain[l_i].y); |
||||
lp_seg_length[l_i] = sqrt(x_diff*x_diff + y_diff*y_diff); |
||||
l_chain_length += lp_seg_length[l_i]; |
||||
} |
||||
for (int l_i = 0; l_i < (int)a_chain.size(); ++l_i) |
||||
{ |
||||
if (lp_seg_length[l_i] > 0) |
||||
{ |
||||
int l_cur_num = cvRound(l_num_cost_pts * lp_seg_length[l_i] / l_chain_length); |
||||
float l_cur_len = lp_seg_length[l_i] / l_cur_num; |
||||
|
||||
for (int l_j = 0; l_j < l_cur_num; ++l_j) |
||||
{ |
||||
float l_ratio = (l_cur_len * l_j / lp_seg_length[l_i]); |
||||
|
||||
CvPoint l_pts; |
||||
|
||||
l_pts.x = cvRound(l_ratio * (a_chain[(l_i + 1) % a_chain.size()].x - a_chain[l_i].x) + a_chain[l_i].x); |
||||
l_pts.y = cvRound(l_ratio * (a_chain[(l_i + 1) % a_chain.size()].y - a_chain[l_i].y) + a_chain[l_i].y); |
||||
|
||||
l_chain_vector.push_back(l_pts); |
||||
} |
||||
} |
||||
} |
||||
std::vector<cv::Point3d> lp_src_3Dpts(l_chain_vector.size()); |
||||
|
||||
for (int l_i = 0; l_i < (int)l_chain_vector.size(); ++l_i) |
||||
{ |
||||
lp_src_3Dpts[l_i].x = l_chain_vector[l_i].x; |
||||
lp_src_3Dpts[l_i].y = l_chain_vector[l_i].y; |
||||
lp_src_3Dpts[l_i].z = CV_IMAGE_ELEM(ap_depth, unsigned short, cvRound(lp_src_3Dpts[l_i].y), cvRound(lp_src_3Dpts[l_i].x)); |
||||
//CV_IMAGE_ELEM(lp_mask,unsigned char,(int)lp_src_3Dpts[l_i].Y,(int)lp_src_3Dpts[l_i].X)=255;
|
||||
} |
||||
//cv_show_image(lp_mask,"hallo2");
|
||||
|
||||
reprojectPoints(lp_src_3Dpts, lp_src_3Dpts, f); |
||||
|
||||
CvMat * lp_pts = cvCreateMat((int)l_chain_vector.size(), 4, CV_32F); |
||||
CvMat * lp_v = cvCreateMat(4, 4, CV_32F); |
||||
CvMat * lp_w = cvCreateMat(4, 1, CV_32F); |
||||
|
||||
for (int l_i = 0; l_i < (int)l_chain_vector.size(); ++l_i) |
||||
{ |
||||
CV_MAT_ELEM(*lp_pts, float, l_i, 0) = (float)lp_src_3Dpts[l_i].x; |
||||
CV_MAT_ELEM(*lp_pts, float, l_i, 1) = (float)lp_src_3Dpts[l_i].y; |
||||
CV_MAT_ELEM(*lp_pts, float, l_i, 2) = (float)lp_src_3Dpts[l_i].z; |
||||
CV_MAT_ELEM(*lp_pts, float, l_i, 3) = 1.0f; |
||||
} |
||||
cvSVD(lp_pts, lp_w, 0, lp_v); |
||||
|
||||
float l_n[4] = {CV_MAT_ELEM(*lp_v, float, 0, 3), |
||||
CV_MAT_ELEM(*lp_v, float, 1, 3), |
||||
CV_MAT_ELEM(*lp_v, float, 2, 3), |
||||
CV_MAT_ELEM(*lp_v, float, 3, 3)}; |
||||
|
||||
float l_norm = sqrt(l_n[0] * l_n[0] + l_n[1] * l_n[1] + l_n[2] * l_n[2]); |
||||
|
||||
l_n[0] /= l_norm; |
||||
l_n[1] /= l_norm; |
||||
l_n[2] /= l_norm; |
||||
l_n[3] /= l_norm; |
||||
|
||||
float l_max_dist = 0; |
||||
|
||||
for (int l_i = 0; l_i < (int)l_chain_vector.size(); ++l_i) |
||||
{ |
||||
float l_dist = l_n[0] * CV_MAT_ELEM(*lp_pts, float, l_i, 0) + |
||||
l_n[1] * CV_MAT_ELEM(*lp_pts, float, l_i, 1) + |
||||
l_n[2] * CV_MAT_ELEM(*lp_pts, float, l_i, 2) + |
||||
l_n[3] * CV_MAT_ELEM(*lp_pts, float, l_i, 3); |
||||
|
||||
if (fabs(l_dist) > l_max_dist) |
||||
l_max_dist = l_dist; |
||||
} |
||||
//std::cerr << "plane: " << l_n[0] << ";" << l_n[1] << ";" << l_n[2] << ";" << l_n[3] << " maxdist: " << l_max_dist << " end" << std::endl;
|
||||
int l_minx = ap_depth->width; |
||||
int l_miny = ap_depth->height; |
||||
int l_maxx = 0; |
||||
int l_maxy = 0; |
||||
|
||||
for (int l_i = 0; l_i < (int)a_chain.size(); ++l_i) |
||||
{ |
||||
l_minx = std::min(l_minx, a_chain[l_i].x); |
||||
l_miny = std::min(l_miny, a_chain[l_i].y); |
||||
l_maxx = std::max(l_maxx, a_chain[l_i].x); |
||||
l_maxy = std::max(l_maxy, a_chain[l_i].y); |
||||
} |
||||
int l_w = l_maxx - l_minx + 1; |
||||
int l_h = l_maxy - l_miny + 1; |
||||
int l_nn = (int)a_chain.size(); |
||||
|
||||
CvPoint * lp_chain = new CvPoint[l_nn]; |
||||
|
||||
for (int l_i = 0; l_i < l_nn; ++l_i) |
||||
lp_chain[l_i] = a_chain[l_i]; |
||||
|
||||
cvFillPoly(lp_mask, &lp_chain, &l_nn, 1, cvScalar(255, 255, 255)); |
||||
|
||||
delete[] lp_chain; |
||||
|
||||
//cv_show_image(lp_mask,"hallo1");
|
||||
|
||||
std::vector<cv::Point3d> lp_dst_3Dpts(l_h * l_w); |
||||
|
||||
int l_ind = 0; |
||||
|
||||
for (int l_r = 0; l_r < l_h; ++l_r) |
||||
{ |
||||
for (int l_c = 0; l_c < l_w; ++l_c) |
||||
{ |
||||
lp_dst_3Dpts[l_ind].x = l_c + l_minx; |
||||
lp_dst_3Dpts[l_ind].y = l_r + l_miny; |
||||
lp_dst_3Dpts[l_ind].z = CV_IMAGE_ELEM(ap_depth, unsigned short, l_r + l_miny, l_c + l_minx); |
||||
++l_ind; |
||||
} |
||||
} |
||||
reprojectPoints(lp_dst_3Dpts, lp_dst_3Dpts, f); |
||||
|
||||
l_ind = 0; |
||||
|
||||
for (int l_r = 0; l_r < l_h; ++l_r) |
||||
{ |
||||
for (int l_c = 0; l_c < l_w; ++l_c) |
||||
{ |
||||
float l_dist = (float)(l_n[0] * lp_dst_3Dpts[l_ind].x + l_n[1] * lp_dst_3Dpts[l_ind].y + lp_dst_3Dpts[l_ind].z * l_n[2] + l_n[3]); |
||||
|
||||
++l_ind; |
||||
|
||||
if (CV_IMAGE_ELEM(lp_mask, unsigned char, l_r + l_miny, l_c + l_minx) != 0) |
||||
{ |
||||
if (fabs(l_dist) < std::max(l_thres, (l_max_dist * 2.0f))) |
||||
{ |
||||
for (int l_p = 0; l_p < (int)a_masks.size(); ++l_p) |
||||
{ |
||||
int l_col = cvRound((l_c + l_minx) / (l_p + 1.0)); |
||||
int l_row = cvRound((l_r + l_miny) / (l_p + 1.0)); |
||||
|
||||
CV_IMAGE_ELEM(a_masks[l_p], unsigned char, l_row, l_col) = 0; |
||||
} |
||||
} |
||||
else |
||||
{ |
||||
for (int l_p = 0; l_p < (int)a_masks.size(); ++l_p) |
||||
{ |
||||
int l_col = cvRound((l_c + l_minx) / (l_p + 1.0)); |
||||
int l_row = cvRound((l_r + l_miny) / (l_p + 1.0)); |
||||
|
||||
CV_IMAGE_ELEM(a_masks[l_p], unsigned char, l_row, l_col) = 255; |
||||
} |
||||
} |
||||
} |
||||
} |
||||
} |
||||
cvReleaseImage(&lp_mask); |
||||
cvReleaseMat(&lp_pts); |
||||
cvReleaseMat(&lp_w); |
||||
cvReleaseMat(&lp_v); |
||||
} |
||||
|
||||
void subtractPlane(const cv::Mat& depth, cv::Mat& mask, std::vector<CvPoint>& chain, double f) |
||||
{ |
||||
mask = cv::Mat::zeros(depth.size(), CV_8U); |
||||
std::vector<IplImage*> tmp; |
||||
IplImage mask_ipl = mask; |
||||
tmp.push_back(&mask_ipl); |
||||
IplImage depth_ipl = depth; |
||||
filterPlane(&depth_ipl, tmp, chain, f); |
||||
} |
||||
|
||||
std::vector<CvPoint> maskFromTemplate(const std::vector<cv::linemod::Template>& templates, |
||||
int num_modalities, cv::Point offset, cv::Size size, |
||||
cv::Mat& mask, cv::Mat& dst) |
||||
{ |
||||
templateConvexHull(templates, num_modalities, offset, size, mask); |
||||
|
||||
const int OFFSET = 30; |
||||
cv::dilate(mask, mask, cv::Mat(), cv::Point(-1,-1), OFFSET); |
||||
|
||||
CvMemStorage * lp_storage = cvCreateMemStorage(0); |
||||
CvTreeNodeIterator l_iterator; |
||||
CvSeqReader l_reader; |
||||
CvSeq * lp_contour = 0; |
||||
|
||||
cv::Mat mask_copy = mask.clone(); |
||||
IplImage mask_copy_ipl = mask_copy; |
||||
cvFindContours(&mask_copy_ipl, lp_storage, &lp_contour, sizeof(CvContour), |
||||
CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE); |
||||
|
||||
std::vector<CvPoint> l_pts1; // to use as input to cv_primesensor::filter_plane
|
||||
|
||||
cvInitTreeNodeIterator(&l_iterator, lp_contour, 1); |
||||
while ((lp_contour = (CvSeq *)cvNextTreeNode(&l_iterator)) != 0) |
||||
{ |
||||
CvPoint l_pt0; |
||||
cvStartReadSeq(lp_contour, &l_reader, 0); |
||||
CV_READ_SEQ_ELEM(l_pt0, l_reader); |
||||
l_pts1.push_back(l_pt0); |
||||
|
||||
for (int i = 0; i < lp_contour->total; ++i) |
||||
{ |
||||
CvPoint l_pt1; |
||||
CV_READ_SEQ_ELEM(l_pt1, l_reader); |
||||
/// @todo Really need dst at all? Can just as well do this outside
|
||||
cv::line(dst, l_pt0, l_pt1, CV_RGB(0, 255, 0), 2); |
||||
|
||||
l_pt0 = l_pt1; |
||||
l_pts1.push_back(l_pt0); |
||||
} |
||||
} |
||||
cvReleaseMemStorage(&lp_storage); |
||||
|
||||
return l_pts1; |
||||
} |
||||
|
||||
// Adapted from cv_show_angles
|
||||
cv::Mat displayQuantized(const cv::Mat& quantized) |
||||
{ |
||||
cv::Mat color(quantized.size(), CV_8UC3); |
||||
for (int r = 0; r < quantized.rows; ++r) |
||||
{ |
||||
const uchar* quant_r = quantized.ptr(r); |
||||
cv::Vec3b* color_r = color.ptr<cv::Vec3b>(r); |
||||
|
||||
for (int c = 0; c < quantized.cols; ++c) |
||||
{ |
||||
cv::Vec3b& bgr = color_r[c]; |
||||
switch (quant_r[c]) |
||||
{ |
||||
case 0: bgr[0]= 0; bgr[1]= 0; bgr[2]= 0; break; |
||||
case 1: bgr[0]= 55; bgr[1]= 55; bgr[2]= 55; break; |
||||
case 2: bgr[0]= 80; bgr[1]= 80; bgr[2]= 80; break; |
||||
case 4: bgr[0]=105; bgr[1]=105; bgr[2]=105; break; |
||||
case 8: bgr[0]=130; bgr[1]=130; bgr[2]=130; break; |
||||
case 16: bgr[0]=155; bgr[1]=155; bgr[2]=155; break; |
||||
case 32: bgr[0]=180; bgr[1]=180; bgr[2]=180; break; |
||||
case 64: bgr[0]=205; bgr[1]=205; bgr[2]=205; break; |
||||
case 128: bgr[0]=230; bgr[1]=230; bgr[2]=230; break; |
||||
case 255: bgr[0]= 0; bgr[1]= 0; bgr[2]=255; break; |
||||
default: bgr[0]= 0; bgr[1]=255; bgr[2]= 0; break; |
||||
} |
||||
} |
||||
} |
||||
|
||||
return color; |
||||
} |
||||
|
||||
// Adapted from cv_line_template::convex_hull
|
||||
void templateConvexHull(const std::vector<cv::linemod::Template>& templates, |
||||
int num_modalities, cv::Point offset, cv::Size size, |
||||
cv::Mat& dst) |
||||
{ |
||||
std::vector<cv::Point> points; |
||||
for (int m = 0; m < num_modalities; ++m) |
||||
{ |
||||
for (int i = 0; i < (int)templates[m].features.size(); ++i) |
||||
{ |
||||
cv::linemod::Feature f = templates[m].features[i]; |
||||
points.push_back(cv::Point(f.x, f.y) + offset); |
||||
} |
||||
} |
||||
|
||||
std::vector<cv::Point> hull; |
||||
cv::convexHull(points, hull); |
||||
|
||||
dst = cv::Mat::zeros(size, CV_8U); |
||||
const int hull_count = (int)hull.size(); |
||||
const cv::Point* hull_pts = &hull[0]; |
||||
cv::fillPoly(dst, &hull_pts, &hull_count, 1, cv::Scalar(255)); |
||||
} |
||||
|
||||
void drawResponse(const std::vector<cv::linemod::Template>& templates, |
||||
int num_modalities, cv::Mat& dst, cv::Point offset, int T) |
||||
{ |
||||
static const cv::Scalar COLORS[5] = { CV_RGB(0, 0, 255), |
||||
CV_RGB(0, 255, 0), |
||||
CV_RGB(255, 255, 0), |
||||
CV_RGB(255, 140, 0), |
||||
CV_RGB(255, 0, 0) }; |
||||
|
||||
for (int m = 0; m < num_modalities; ++m) |
||||
{ |
||||
// NOTE: Original demo recalculated max response for each feature in the TxT
|
||||
// box around it and chose the display color based on that response. Here
|
||||
// the display color just depends on the modality.
|
||||
cv::Scalar color = COLORS[m]; |
||||
|
||||
for (int i = 0; i < (int)templates[m].features.size(); ++i) |
||||
{ |
||||
cv::linemod::Feature f = templates[m].features[i]; |
||||
cv::Point pt(f.x + offset.x, f.y + offset.y); |
||||
cv::circle(dst, pt, T / 2, color); |
||||
} |
||||
} |
||||
} |
Before Width: | Height: | Size: 95 KiB |
Before Width: | Height: | Size: 93 KiB |
Before Width: | Height: | Size: 59 KiB |
Before Width: | Height: | Size: 97 KiB |
Before Width: | Height: | Size: 111 KiB |
Before Width: | Height: | Size: 69 KiB |
@ -1,128 +0,0 @@ |
||||
/*
|
||||
* textdetection.cpp |
||||
* |
||||
* A demo program of the Extremal Region Filter algorithm described in |
||||
* Neumann L., Matas J.: Real-Time Scene Text Localization and Recognition, CVPR 2012 |
||||
* |
||||
* Created on: Sep 23, 2013 |
||||
* Author: Lluis Gomez i Bigorda <lgomez AT cvc.uab.es> |
||||
*/ |
||||
|
||||
#include "opencv2/opencv.hpp" |
||||
#include "opencv2/objdetect.hpp" |
||||
#include "opencv2/imgcodecs.hpp" |
||||
#include "opencv2/highgui.hpp" |
||||
#include "opencv2/imgproc.hpp" |
||||
|
||||
#include <vector> |
||||
#include <iostream> |
||||
#include <iomanip> |
||||
|
||||
using namespace std; |
||||
using namespace cv; |
||||
|
||||
void show_help_and_exit(const char *cmd); |
||||
void groups_draw(Mat &src, vector<Rect> &groups); |
||||
void er_show(vector<Mat> &channels, vector<vector<ERStat> > ®ions); |
||||
|
||||
int main(int argc, const char * argv[]) |
||||
{ |
||||
cout << endl << argv[0] << endl << endl; |
||||
cout << "Demo program of the Extremal Region Filter algorithm described in " << endl; |
||||
cout << "Neumann L., Matas J.: Real-Time Scene Text Localization and Recognition, CVPR 2012" << endl << endl; |
||||
|
||||
if (argc < 2) show_help_and_exit(argv[0]); |
||||
|
||||
Mat src = imread(argv[1]); |
||||
|
||||
// Extract channels to be processed individually
|
||||
vector<Mat> channels; |
||||
computeNMChannels(src, channels); |
||||
|
||||
int cn = (int)channels.size(); |
||||
// Append negative channels to detect ER- (bright regions over dark background)
|
||||
for (int c = 0; c < cn-1; c++) |
||||
channels.push_back(255-channels[c]); |
||||
|
||||
// Create ERFilter objects with the 1st and 2nd stage default classifiers
|
||||
Ptr<ERFilter> er_filter1 = createERFilterNM1(loadClassifierNM1("trained_classifierNM1.xml"),16,0.00015f,0.13f,0.2f,true,0.1f); |
||||
Ptr<ERFilter> er_filter2 = createERFilterNM2(loadClassifierNM2("trained_classifierNM2.xml"),0.5); |
||||
|
||||
vector<vector<ERStat> > regions(channels.size()); |
||||
// Apply the default cascade classifier to each independent channel (could be done in parallel)
|
||||
cout << "Extracting Class Specific Extremal Regions from " << (int)channels.size() << " channels ..." << endl; |
||||
cout << " (...) this may take a while (...)" << endl << endl; |
||||
for (int c=0; c<(int)channels.size(); c++) |
||||
{ |
||||
er_filter1->run(channels[c], regions[c]); |
||||
er_filter2->run(channels[c], regions[c]); |
||||
} |
||||
|
||||
// Detect character groups
|
||||
cout << "Grouping extracted ERs ... "; |
||||
vector<Rect> groups; |
||||
erGrouping(channels, regions, "trained_classifier_erGrouping.xml", 0.5, groups); |
||||
|
||||
// draw groups
|
||||
groups_draw(src, groups); |
||||
imshow("grouping",src); |
||||
|
||||
cout << "Done!" << endl << endl; |
||||
cout << "Press 'e' to show the extracted Extremal Regions, any other key to exit." << endl << endl; |
||||
if( waitKey (-1) == 101) |
||||
er_show(channels,regions); |
||||
|
||||
// memory clean-up
|
||||
er_filter1.release(); |
||||
er_filter2.release(); |
||||
regions.clear(); |
||||
if (!groups.empty()) |
||||
{ |
||||
groups.clear(); |
||||
} |
||||
} |
||||
|
||||
|
||||
|
||||
// helper functions
|
||||
|
||||
void show_help_and_exit(const char *cmd) |
||||
{ |
||||
cout << " Usage: " << cmd << " <input_image> " << endl; |
||||
cout << " Default classifier files (trained_classifierNM*.xml) must be in current directory" << endl << endl; |
||||
exit(-1); |
||||
} |
||||
|
||||
void groups_draw(Mat &src, vector<Rect> &groups) |
||||
{ |
||||
for (int i=(int)groups.size()-1; i>=0; i--) |
||||
{ |
||||
if (src.type() == CV_8UC3) |
||||
rectangle(src,groups.at(i).tl(),groups.at(i).br(),Scalar( 0, 255, 255 ), 3, 8 ); |
||||
else |
||||
rectangle(src,groups.at(i).tl(),groups.at(i).br(),Scalar( 255 ), 3, 8 ); |
||||
} |
||||
} |
||||
|
||||
void er_show(vector<Mat> &channels, vector<vector<ERStat> > ®ions) |
||||
{ |
||||
for (int c=0; c<(int)channels.size(); c++) |
||||
{ |
||||
Mat dst = Mat::zeros(channels[0].rows+2,channels[0].cols+2,CV_8UC1); |
||||
for (int r=0; r<(int)regions[c].size(); r++) |
||||
{ |
||||
ERStat er = regions[c][r]; |
||||
if (er.parent != NULL) // deprecate the root region
|
||||
{ |
||||
int newMaskVal = 255; |
||||
int flags = 4 + (newMaskVal << 8) + FLOODFILL_FIXED_RANGE + FLOODFILL_MASK_ONLY; |
||||
floodFill(channels[c],dst,Point(er.pixel%channels[c].cols,er.pixel/channels[c].cols), |
||||
Scalar(255),0,Scalar(er.level),Scalar(0),flags); |
||||
} |
||||
} |
||||
char buff[10]; char *buff_ptr = buff; |
||||
sprintf(buff, "channel %d", c); |
||||
imshow(buff_ptr, dst); |
||||
} |
||||
waitKey(-1); |
||||
} |