@@ -0,0 +1,156 @@
.. _ximgproc:

Structured forests for fast edge detection
******************************************

Introduction
------------
In this tutorial you will learn how to detect edges in an image using the structured forests algorithm of Dollar and Zitnick [Dollar2013]_.

Classical operators such as ``Sobel`` respond to local intensity gradients and therefore tend to produce noisy, broken contours. Structured forests instead treat edge detection as a learning problem: a random forest is trained on image patches to predict local edge masks (structured labels), and the overlapping per-patch predictions are aggregated into a single floating-point edge map with values in the [0, 1] range. Detection with a pre-trained model comes down to a single call to ``StructuredEdgeDetection::detectEdges``; training your own model is covered in the structured forest training tutorial.

Examples
--------

.. image:: images/01.jpg
   :height: 238pt
   :width: 750pt
   :alt: First example
   :align: center

.. image:: images/02.jpg
   :height: 238pt
   :width: 750pt
   :alt: Second example
   :align: center

.. image:: images/03.jpg
   :height: 238pt
   :width: 750pt
   :alt: Third example
   :align: center

.. image:: images/04.jpg
   :height: 238pt
   :width: 750pt
   :alt: Fourth example
   :align: center

.. image:: images/05.jpg
   :height: 238pt
   :width: 750pt
   :alt: Fifth example
   :align: center

.. image:: images/06.jpg
   :height: 238pt
   :width: 750pt
   :alt: Sixth example
   :align: center

.. image:: images/07.jpg
   :height: 238pt
   :width: 750pt
   :alt: Seventh example
   :align: center

.. image:: images/08.jpg
   :height: 238pt
   :width: 750pt
   :alt: Eighth example
   :align: center

.. image:: images/09.jpg
   :height: 238pt
   :width: 750pt
   :alt: Ninth example
   :align: center

.. image:: images/10.jpg
   :height: 238pt
   :width: 750pt
   :alt: Tenth example
   :align: center

.. image:: images/11.jpg
   :height: 238pt
   :width: 750pt
   :alt: Eleventh example
   :align: center

.. image:: images/12.jpg
   :height: 238pt
   :width: 750pt
   :alt: Twelfth example
   :align: center

**Note:** binarization techniques like the Canny edge detector are applicable
to the edges produced by both algorithms (``Sobel`` and ``StructuredEdgeDetection::detectEdges``).
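
For instance, a plain fixed threshold already yields a binary edge mask. The following is a minimal sketch, not part of the bundled sample; the 0.5 threshold is an arbitrary illustrative value:

.. code-block:: cpp

   // "edges" is the single-channel floating-point edge map in [0, 1]
   // produced by StructuredEdgeDetection::detectEdges (see below).
   cv::Mat binaryEdges;
   cv::threshold(edges, binaryEdges, 0.5, 1.0, cv::THRESH_BINARY);
   // Convert to 8-bit for saving or display.
   binaryEdges.convertTo(binaryEdges, CV_8U, 255);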

Source Code
-----------

.. literalinclude:: ../../../../modules/ximgproc/samples/cpp/structured_edge_detection.cpp
   :language: cpp
   :linenos:
   :tab-width: 4

Explanation
-----------

1. **Load source color image**

   .. code-block:: cpp

      cv::Mat image = cv::imread(inFilename, 1);
      if ( image.empty() )
      {
          printf("Cannot read image file: %s\n", inFilename.c_str());
          return -1;
      }

2. **Convert source image to [0, 1] range and RGB colorspace**

   .. code-block:: cpp

      cv::cvtColor(image, image, CV_BGR2RGB);
      image.convertTo(image, cv::DataType<float>::type, 1/255.0);

3. **Run main algorithm**

   .. code-block:: cpp

      // The detector expects a floating-point RGB image and fills "edges"
      // with a single-channel edge probability map.
      cv::Mat edges(image.size(), image.type());

      cv::Ptr<StructuredEdgeDetection> pDollar =
          cv::createStructuredEdgeDetection(modelFilename);
      pDollar->detectEdges(image, edges);

4. **Show results**

   .. code-block:: cpp

      if ( outFilename == "" )
      {
          cv::namedWindow("edges", 1);
          cv::imshow("edges", edges);

          cv::waitKey(0);
      }
      else
          // scale from [0, 1] to [0, 255] before saving
          cv::imwrite(outFilename, 255*edges);

Literature
----------
For more information, refer to the following papers:

.. [Dollar2013] Dollar P., Zitnick C. L., "Structured forests for fast edge detection",
   IEEE International Conference on Computer Vision (ICCV), 2013,
   pp. 1841-1848. `DOI <http://dx.doi.org/10.1109/ICCV.2013.231>`_

.. [Lim2013] Lim J. J., Zitnick C. L., Dollar P., "Sketch Tokens: A Learned
   Mid-level Representation for Contour and Object Detection",
   Computer Vision and Pattern Recognition (CVPR), 2013,
   pp. 3158-3165. `DOI <http://dx.doi.org/10.1109/CVPR.2013.406>`_
@@ -0,0 +1,73 @@
function modelConvert(model, outname)
%% script for converting Piotr's matlab model into YAML format
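%%
%% The resulting file layout, inferred from the fprintf calls below
%% (an abridged sketch; the bracketed numbers are placeholders, not
%% values from a real model):
%%
%%   %YAML:1.0
%%
%%   options:
%%     numberOfTrees: 8
%%     ...
%%   childs:
%%    - [2,3,0, ...]
%%   featureIds:
%%    - [511,14, ...]
%%   ...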

outfile = fopen(outname, 'w');

fprintf(outfile, '%%YAML:1.0\n\n');

% detector options stored alongside the trees; these should agree
% with the parameters the model was trained with
fprintf(outfile, ['options:\n'...
    ' numberOfTrees: 8\n'...
    ' numberOfTreesToEvaluate: 4\n'...
    ' selfsimilarityGridSize: 5\n'...
    ' stride: 2\n'...
    ' shrinkNumber: 2\n'...
    ' patchSize: 32\n'...
    ' patchInnerSize: 16\n'...
    ' numberOfGradientOrientations: 4\n'...
    ' gradientSmoothingRadius: 0\n'...
    ' regFeatureSmoothingRadius: 2\n'...
    ' ssFeatureSmoothingRadius: 8\n'...
    ' gradientNormalizationRadius: 4\n\n']);

% per-tree arrays, transposed so that each row corresponds to one tree
fprintf(outfile, 'childs:\n');
printToYML(outfile, model.child', 0);

fprintf(outfile, 'featureIds:\n');
printToYML(outfile, model.fids', 0);

fprintf(outfile, 'thresholds:\n');
printToYML(outfile, model.thrs', 0);

% long 1-D arrays are wrapped at N elements per row
N = 1000;
fprintf(outfile, 'edgeBoundaries:\n');
printToYML(outfile, model.eBnds, N);

fprintf(outfile, 'edgeBins:\n');
printToYML(outfile, model.eBins, N);

fclose(outfile);
gzip(outname);

end

function printToYML(outfile, A, N)
%% append matrix A to outfile as
%%  - [a11, a12, a13, a14, ..., a1n]
%%  - [a21, a22, a23, a24, ..., a2n]
%%  ...
%%
%% if size(A, 2) == 1, A is printed N elements per row
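%%
%% For example (a sketch, not output from a real model):
%%   printToYML(f, [1 2; 3 4], 0)  emits  "- [1,2]" and "- [3,4]"
%%   printToYML(f, (1:5)', 2)      emits  "- [1,2]", "- [3,4]" and "- [5]"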

if (length(size(A)) ~= 2)
    error('printToYML: second-argument matrix should have two dimensions');
end

if (size(A,2) ~= 1)
    % one YAML sequence entry per matrix row
    for i=1:size(A,1)
        fprintf(outfile, ' - [');
        fprintf(outfile, '%d,', A(i, 1:end-1));
        fprintf(outfile, '%d]\n', A(i, end));
    end
else
    % column vector: wrap into rows of at most N elements
    len = length(A);
    for i=1:ceil(len/N)
        first = (i-1)*N + 1;
        last = min(i*N, len) - 1;

        fprintf(outfile, ' - [');
        fprintf(outfile, '%d,', A(first:last));
        fprintf(outfile, '%d]\n', A(last + 1));
    end
end
fprintf(outfile, '\n');
end
@@ -0,0 +1,115 @@
.. _ximgproc_training:

Structured forest training
**************************

Introduction
------------
In this tutorial we show how to train your own structured forest using the authors' original Matlab implementation.

Training pipeline
-----------------

1. Download "Piotr's Toolbox" from `link <http://vision.ucsd.edu/~pdollar/toolbox/doc/index.html>`__
   and put it into a separate directory, e.g. PToolbox

2. Download the BSDS500 dataset from `link <http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/BSR/>`__
   and put it into a separate directory named exactly BSR

3. Add both directories and their subdirectories to the Matlab path.

4. Download the detector code from `link <http://research.microsoft.com/en-us/downloads/389109f6-b4e8-404c-84bf-239f7cbf4e3d/>`__
   and put it into the root directory. Now you should have ::

       .
       BSR
       PToolbox
       models
       private
       Contents.m
       edgesChns.m
       edgesDemo.m
       edgesDemoRgbd.m
       edgesDetect.m
       edgesEval.m
       edgesEvalDir.m
       edgesEvalImg.m
       edgesEvalPlot.m
       edgesSweeps.m
       edgesTrain.m
       license.txt
       readme.txt

5. Rename models/forest/modelFinal.mat to models/forest/modelFinal.mat.backup

6. Open edgesChns.m and comment out lines 26--41. After the commented lines add the following::

       shrink=opts.shrink;
       chns = single(getFeatures( im2double(I) ));

7. Now it is time to compile the promised getFeatures. I do so with the following code:

   .. code-block:: cpp

      #include <cv.h>
      #include <highgui.h>

      #include <mat.h>
      #include <mex.h>

      // for cv::RFFeatureGetter and NChannelsMat (header location assumed)
      #include <opencv2/ximgproc/structured_edge_detection.hpp>

      #include "MxArray.hpp" // https://github.com/kyamagu/mexopencv

      class NewRFFeatureGetter : public cv::RFFeatureGetter
      {
      public:
          NewRFFeatureGetter() : name("NewRFFeatureGetter") {}

          virtual void getFeatures(const cv::Mat &src, NChannelsMat &features,
                                   const int gnrmRad, const int gsmthRad,
                                   const int shrink, const int outNum, const int gradNum) const
          {
              // your feature extraction code goes here; the resulting
              // features Mat should be an n-channel floating-point matrix
          }

      protected:
          cv::String name;
      };

      MEXFUNCTION_LINKAGE void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
      {
          if (nlhs != 1) mexErrMsgTxt("nlhs != 1");
          if (nrhs != 1) mexErrMsgTxt("nrhs != 1");

          // the single input is the image, matching the call
          // getFeatures(im2double(I)) added to edgesChns.m above
          cv::Mat src = MxArray(prhs[0]).toMat();
          src.convertTo(src, cv::DataType<float>::type);

          cv::Ptr<NewRFFeatureGetter> pDollar = cv::makePtr<NewRFFeatureGetter>();

          cv::Mat edges;
          pDollar->getFeatures(src, edges, 4, 0, 2, 13, 4);
          // you can use other numbers here

          edges.convertTo(edges, cv::DataType<double>::type);

          plhs[0] = MxArray(edges);
      }

8. Place the compiled mex file into the root directory and run edgesDemo.
   You will need to wait a couple of hours; after that the new model
   will appear inside models/forest/.

9. The final step is converting the trained model from Matlab binary format
   to YAML, which you can use with our cv::StructuredEdgeDetection.
   For this purpose run opencv_contrib/doc/tutorials/ximgproc/training/modelConvert(model, "model.yml")

How to use your model
---------------------

Just use the extended constructor with the above-defined class NewRFFeatureGetter:

.. code-block:: cpp

   cv::Ptr<cv::StructuredEdgeDetection> pDollar =
       cv::createStructuredEdgeDetection( modelName, cv::makePtr<NewRFFeatureGetter>() );
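
Once constructed, the detector is used exactly as in the prediction tutorial. A minimal sketch (``inFilename`` and ``modelName`` are placeholders, and the preprocessing is copied from the sample above):

.. code-block:: cpp

   // load and preprocess: [0, 1] range, RGB channel order
   cv::Mat image = cv::imread(inFilename, 1);
   cv::cvtColor(image, image, CV_BGR2RGB);
   image.convertTo(image, cv::DataType<float>::type, 1/255.0);

   // run detection with the custom feature getter wired in
   cv::Mat edges;
   pDollar->detectEdges(image, edges);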