Robertson and tutorial

pull/1474/head
Fedor Morozov 12 years ago
parent e2e604eb18
commit 833f8d16fa
  1. BIN
      doc/tutorials/images/photo.png
  2. 132
      doc/tutorials/photo/hdr_imaging/hdr_imaging.rst
  3. BIN
      doc/tutorials/photo/hdr_imaging/images/fusion.png
  4. BIN
      doc/tutorials/photo/hdr_imaging/images/ldr.png
  5. BIN
      doc/tutorials/photo/hdr_imaging/images/memorial.png
  6. BIN
      doc/tutorials/photo/table_of_content_photo/images/hdr.png
  7. 36
      doc/tutorials/photo/table_of_content_photo/table_of_content_photo.rst
  8. 17
      doc/tutorials/tutorials.rst
  9. 24
      modules/photo/include/opencv2/photo.hpp
  10. 117
      modules/photo/src/calibrate.cpp
  11. 12
      modules/photo/src/hdr_common.cpp
  12. 2
      modules/photo/src/hdr_common.hpp
  13. 70
      modules/photo/src/merge.cpp
  14. 51
      samples/cpp/tutorial_code/photo/hdr_imaging/hdr_imaging.cpp

doc/tutorials/images/photo.png: binary image added (483 KiB).
@ -0,0 +1,132 @@
.. _hdrimaging:
High Dynamic Range Imaging
***************************************
Introduction
------------------
Today most digital images and imaging devices use 8 bits per channel, thus limiting the dynamic range of the device to about two orders of magnitude, while the human eye can adapt to lighting conditions varying by ten orders of magnitude. When we take photographs, bright regions may be overexposed while dark ones, on the other hand, may be underexposed, so we can't capture the whole scene in a single exposure. HDR imaging works with images that use more than 8 bits per channel (usually 32-bit float values), allowing any dynamic range.
There are different ways to obtain HDR images, but the most common one is to use photographs of the scene taken with different exposure values. To combine the exposures it is useful to know your camera's response function, and there are algorithms to estimate it. After the HDR image has been constructed, it has to be converted back to 8 bits to be viewed on regular displays. This process is called tonemapping. Additional complexities arise when objects in the scene or the camera move between shots.
In this tutorial we show how to create and display an HDR image given an exposure sequence. In our case the images are already aligned and there are no moving objects. We also demonstrate an alternative approach called exposure fusion that produces a low dynamic range image. Each step of this pipeline can be performed with different algorithms, so take a look at the reference manual to find them all.
Exposure sequence
------------------
.. image:: images/memorial.png
:height: 357pt
:width: 242pt
:alt: Exposure sequence
:align: center
Source Code
===========
.. literalinclude:: ../../../../samples/cpp/tutorial_code/photo/hdr_imaging/hdr_imaging.cpp
:language: cpp
:linenos:
:tab-width: 4
Explanation
===========
1. **Load images and exposure times**
.. code-block:: cpp
vector<Mat> images;
vector<float> times;
loadExposureSeq(argv[1], images, times);
First we load the input images and exposure times from the user-defined folder. The folder should contain the images and *list.txt*, a file that contains the image file names and inverse exposure times.
For our image sequence the list looks like this:
.. code-block:: none
memorial00.png 0.03125
memorial01.png 0.0625
...
memorial15.png 1024
2. **Estimate camera response**
.. code-block:: cpp
Mat response;
Ptr<CalibrateDebevec> calibrate = createCalibrateDebevec();
calibrate->process(images, response, times);
Most HDR construction algorithms require knowledge of the camera response function.
We use one of the calibration algorithms to estimate the inverse CRF for all 256 pixel values.
3. **Make HDR image**
.. code-block:: cpp
Mat hdr;
Ptr<MergeDebevec> merge_debevec = createMergeDebevec();
merge_debevec->process(images, hdr, times, response);
We use Debevec's weighting scheme to construct the HDR image, using the response calculated in the previous step.
4. **Tonemap HDR image**
.. code-block:: cpp
Mat ldr;
Ptr<TonemapDurand> tonemap = createTonemapDurand(2.2f);
tonemap->process(hdr, ldr);
Since we want to see our results on a common LDR display, we have to map our HDR image to the 8-bit range while preserving most details.
That is what tonemapping algorithms are for. We use a bilateral-filtering tonemapper and set 2.2 as the gamma correction value.
5. **Perform exposure fusion**
.. code-block:: cpp
Mat fusion;
Ptr<MergeMertens> merge_mertens = createMergeMertens();
merge_mertens->process(images, fusion);
There is an alternative way to merge our exposures in case we don't need an HDR image.
This process is called exposure fusion; it produces an LDR image that doesn't require gamma correction. It also doesn't use the exposure values of the photographs.
6. **Write results**
.. code-block:: cpp
imwrite("fusion.png", fusion * 255);
imwrite("ldr.png", ldr * 255);
imwrite("hdr.hdr", hdr);
Now it's time to view the results.
Note that an HDR image can't be stored in one of the common 8-bit image formats, so we save it as a Radiance image (.hdr).
Also, all HDR imaging functions return results in the [0, 1] range, so we multiply them by 255.
Results
=======
Tonemapped image
------------------
.. image:: images/ldr.png
:height: 357pt
:width: 242pt
:alt: Tonemapped image
:align: center
Exposure fusion
------------------
.. image:: images/fusion.png
:height: 357pt
:width: 242pt
:alt: Exposure fusion
:align: center

doc/tutorials/photo/hdr_imaging/images/fusion.png: binary image added (687 KiB).

doc/tutorials/photo/hdr_imaging/images/ldr.png: binary image added (560 KiB).

doc/tutorials/photo/hdr_imaging/images/memorial.png: binary image added (490 KiB).

doc/tutorials/photo/table_of_content_photo/images/hdr.png: binary image added (603 KiB).

@ -0,0 +1,36 @@
.. _Table-Of-Content-Photo:
*photo* module. Computational photography
-----------------------------------------------------------
Use OpenCV for advanced photo processing.
.. include:: ../../definitions/tocDefinitions.rst
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
============ ==============================================
|HDR| **Title:** :ref:`hdrimaging`
*Compatibility:* TBA
*Author:* Fedor Morozov
Learn how to create and process high dynamic range images.
============ ==============================================
.. |HDR| image:: images/hdr.png
:height: 90pt
:width: 90pt
.. raw:: latex
\pagebreak
.. toctree::
:hidden:
../hdr_imaging/hdr_imaging

@ -132,7 +132,7 @@ As always, we would be happy to hear your comments and receive your contribution
.. cssclass:: toctableopencv
=========== =======================================================
|ml| Use the powerfull machine learning classes for statistical classification, regression and clustering of data.
|ml| Use the powerful machine learning classes for statistical classification, regression and clustering of data.
=========== =======================================================
@ -141,6 +141,21 @@ As always, we would be happy to hear your comments and receive your contribution
:width: 80pt
:alt: ml Icon
* :ref:`Table-Of-Content-Photo`
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
=========== =======================================================
|photo| Use OpenCV for advanced photo processing.
=========== =======================================================
.. |photo| image:: images/photo.png
:height: 80pt
:width: 80pt
:alt: photo Icon
* :ref:`Table-Of-Content-GPU`
.. tabularcolumns:: m{100pt} m{300pt}

@ -213,6 +213,20 @@ public:
CV_EXPORTS_W Ptr<CalibrateDebevec> createCalibrateDebevec(int samples = 50, float lambda = 10.0f);
// "Dynamic range improvement through multiple exposures", Robertson et al., 1999
class CV_EXPORTS_W CalibrateRobertson : public ExposureCalibrate
{
public:
CV_WRAP virtual int getMaxIter() const = 0;
CV_WRAP virtual void setMaxIter(int max_iter) = 0;
CV_WRAP virtual float getThreshold() const = 0;
CV_WRAP virtual void setThreshold(float threshold) = 0;
};
CV_EXPORTS_W Ptr<CalibrateRobertson> createCalibrateRobertson(int max_iter = 30, float threshold = 0.01f);
class CV_EXPORTS_W ExposureMerge : public Algorithm
{
public:
@ -254,6 +268,16 @@ public:
CV_EXPORTS_W Ptr<MergeMertens>
createMergeMertens(float contrast_weight = 1.0f, float saturation_weight = 1.0f, float exposure_weight = 0.0f);
// "Dynamic range improvement through multiple exposures", Robertson et al., 1999
class CV_EXPORTS_W MergeRobertson : public ExposureMerge
{
public:
CV_WRAP virtual void process(InputArrayOfArrays src, OutputArray dst,
const std::vector<float>& times, InputArray response) = 0;
CV_WRAP virtual void process(InputArrayOfArrays src, OutputArray dst, const std::vector<float>& times) = 0;
};
} // cv
#endif

@ -150,4 +150,121 @@ Ptr<CalibrateDebevec> createCalibrateDebevec(int samples, float lambda)
return new CalibrateDebevecImpl(samples, lambda);
}
class CalibrateRobertsonImpl : public CalibrateRobertson
{
public:
CalibrateRobertsonImpl(int max_iter, float threshold) :
max_iter(max_iter),
threshold(threshold),
name("CalibrateRobertson"),
weight(RobertsonWeights())
{
}
void process(InputArrayOfArrays src, OutputArray dst, std::vector<float>& times)
{
std::vector<Mat> images;
src.getMatVector(images);
CV_Assert(images.size() == times.size());
checkImageDimensions(images);
CV_Assert(images[0].depth() == CV_8U);
int channels = images[0].channels();
int CV_32FCC = CV_MAKETYPE(CV_32F, channels);
dst.create(256, 1, CV_32FCC);
Mat response = dst.getMat();
response = Mat::zeros(256, 1, CV_32FCC);
for(int i = 0; i < 256; i++) {
for(int c = 0; c < channels; c++) {
response.at<Vec3f>(i)[c] = i / 128.0f;
}
}
Mat card = Mat::zeros(256, 1, CV_32FCC);
for(size_t i = 0; i < images.size(); i++) {
uchar *ptr = images[i].ptr();
for(size_t pos = 0; pos < images[i].total(); pos++) {
for(int c = 0; c < channels; c++, ptr++) {
card.at<Vec3f>(*ptr)[c] += 1;
}
}
}
card = 1.0 / card;
for(int iter = 0; iter < max_iter; iter++) {
Scalar channel_err(0, 0, 0);
Mat radiance = Mat::zeros(images[0].size(), CV_32FCC);
Mat wsum = Mat::zeros(images[0].size(), CV_32FCC);
for(size_t i = 0; i < images.size(); i++) {
Mat im, w;
LUT(images[i], weight, w);
LUT(images[i], response, im);
Mat err_mat;
pow(im - times[i] * radiance, 2.0f, err_mat);
err_mat = w.mul(err_mat);
channel_err += sum(err_mat);
radiance += times[i] * w.mul(im);
wsum += pow(times[i], 2) * w;
}
float err = (channel_err[0] + channel_err[1] + channel_err[2]) / (channels * radiance.total());
radiance = radiance.mul(1 / wsum);
float* rad_ptr = radiance.ptr<float>();
response = Mat::zeros(256, 1, CV_32FC3);
for(size_t i = 0; i < images.size(); i++) {
uchar *ptr = images[i].ptr();
for(size_t pos = 0; pos < images[i].total(); pos++) {
for(int c = 0; c < channels; c++, ptr++, rad_ptr++) {
response.at<Vec3f>(*ptr)[c] += times[i] * *rad_ptr;
}
}
}
response = response.mul(card);
for(int c = 0; c < 3; c++) {
for(int i = 0; i < 256; i++) {
response.at<Vec3f>(i)[c] /= response.at<Vec3f>(128)[c];
}
}
if(err < threshold)
break;
}
}
int getMaxIter() const { return max_iter; }
void setMaxIter(int val) { max_iter = val; }
float getThreshold() const { return threshold; }
void setThreshold(float val) { threshold = val; }
void write(FileStorage& fs) const
{
fs << "name" << name
<< "max_iter" << max_iter
<< "threshold" << threshold;
}
void read(const FileNode& fn)
{
FileNode n = fn["name"];
CV_Assert(n.isString() && String(n) == name);
max_iter = fn["max_iter"];
threshold = fn["threshold"];
}
protected:
String name;
int max_iter;
float threshold;
Mat weight;
};
Ptr<CalibrateRobertson> createCalibrateRobertson(int max_iter, float threshold)
{
return new CalibrateRobertsonImpl(max_iter, threshold);
}
}

@ -68,6 +68,18 @@ Mat tringleWeights()
return w;
}
Mat RobertsonWeights()
{
Mat weight(256, 1, CV_32FC3);
for(int i = 0; i < 256; i++) {
float value = exp(-4.0f * pow(i - 127.5f, 2.0f) / pow(127.5f, 2.0f));
for(int c = 0; c < 3; c++) {
weight.at<Vec3f>(i)[c] = value;
}
}
return weight;
}
void mapLuminance(Mat src, Mat dst, Mat lum, Mat new_lum, float saturation)
{
std::vector<Mat> channels(3);

@ -54,6 +54,8 @@ Mat tringleWeights();
void mapLuminance(Mat src, Mat dst, Mat lum, Mat new_lum, float saturation);
Mat RobertsonWeights();
};
#endif

@ -303,4 +303,74 @@ Ptr<MergeMertens> createMergeMertens(float wcon, float wsat, float wexp)
return new MergeMertensImpl(wcon, wsat, wexp);
}
class MergeRobertsonImpl : public MergeRobertson
{
public:
MergeRobertsonImpl() :
name("MergeRobertson"),
weight(RobertsonWeights())
{
}
void process(InputArrayOfArrays src, OutputArray dst, const std::vector<float>& times, InputArray input_response)
{
std::vector<Mat> images;
src.getMatVector(images);
CV_Assert(images.size() == times.size());
checkImageDimensions(images);
CV_Assert(images[0].depth() == CV_8U);
int channels = images[0].channels();
int CV_32FCC = CV_MAKETYPE(CV_32F, channels);
dst.create(images[0].size(), CV_32FCC);
Mat result = dst.getMat();
Mat response = input_response.getMat();
if(response.empty()) {
response = linearResponse(channels);
}
CV_Assert(response.rows == 256 && response.cols == 1 &&
response.channels() == channels);
result = Mat::zeros(images[0].size(), CV_32FCC);
Mat wsum = Mat::zeros(images[0].size(), CV_32FCC);
for(size_t i = 0; i < images.size(); i++) {
Mat im, w;
LUT(images[i], weight, w);
LUT(images[i], response, im);
result += times[i] * w.mul(im);
wsum += pow(times[i], 2) * w;
}
result = result.mul(1 / wsum);
}
void process(InputArrayOfArrays src, OutputArray dst, const std::vector<float>& times)
{
process(src, dst, times, Mat());
}
protected:
String name;
Mat weight;
Mat linearResponse(int channels)
{
Mat response = Mat::zeros(256, 1, CV_32FC3);
for(int i = 0; i < 256; i++) {
for(int c = 0; c < 3; c++) {
response.at<Vec3f>(i)[c] = static_cast<float>(i) / 128.0f;
}
}
return response;
}
};
Ptr<MergeRobertson> createMergeRobertson()
{
return new MergeRobertsonImpl;
}
}

@ -0,0 +1,51 @@
#include <opencv2/photo.hpp>
#include <opencv2/highgui.hpp>
#include <vector>
#include <iostream>
#include <fstream>
using namespace cv;
using namespace std;
void loadExposureSeq(String path, vector<Mat>& images, vector<float>& times)
{
path += "/";
ifstream list_file((path + "list.txt").c_str());
string name;
float val;
while(list_file >> name >> val) {
Mat img = imread(path + name);
images.push_back(img);
times.push_back(1 / val);
}
list_file.close();
}
int main(int argc, char** argv)
{
if(argc < 2) {
cout << "Usage: " << argv[0] << " <path_to_exposure_sequence_folder>" << endl;
return -1;
}
vector<Mat> images;
vector<float> times;
loadExposureSeq(argv[1], images, times);
Mat response;
Ptr<CalibrateDebevec> calibrate = createCalibrateDebevec();
calibrate->process(images, response, times);
Mat hdr;
Ptr<MergeDebevec> merge_debevec = createMergeDebevec();
merge_debevec->process(images, hdr, times, response);
Mat ldr;
Ptr<TonemapDurand> tonemap = createTonemapDurand(2.2f);
tonemap->process(hdr, ldr);
Mat fusion;
Ptr<MergeMertens> merge_mertens = createMergeMertens();
merge_mertens->process(images, fusion);
imwrite("fusion.png", fusion * 255);
imwrite("ldr.png", ldr * 255);
imwrite("hdr.hdr", hdr);
return 0;
}