Add Java and Python code for cascade classifier and HDR tutorials.

Branch: pull/11715/head
Author: catree, 7 years ago
Parent: 1187a7fa34
Commit: afa5b0cc93
13 changed files:

1. doc/py_tutorials/py_objdetect/py_face_detection/py_face_detection.markdown (6 lines)
2. doc/py_tutorials/py_photo/py_hdr/images/ldr_debevec.jpg (binary)
3. doc/py_tutorials/py_photo/py_hdr/py_hdr.markdown (32 lines)
4. doc/tutorials/objdetect/cascade_classifier/cascade_classifier.markdown (24 lines)
5. doc/tutorials/objdetect/table_of_content_objdetect.markdown (2 lines)
6. doc/tutorials/photo/hdr_imaging/hdr_imaging.markdown (190 lines)
7. doc/tutorials/photo/table_of_content_photo.markdown (2 lines)
8. samples/cpp/tutorial_code/objectDetection/objectDetection.cpp (61 lines)
9. samples/cpp/tutorial_code/photo/hdr_imaging/hdr_imaging.cpp (25 lines)
10. samples/java/tutorial_code/objectDetection/cascade_classifier/ObjectDetectionDemo.java (98 lines)
11. samples/java/tutorial_code/photo/hdr_imaging/HDRImagingDemo.java (102 lines)
12. samples/python/tutorial_code/objectDetection/cascade_classifier/objectDetection.py (61 lines)
13. samples/python/tutorial_code/photo/hdr_imaging/hdr_imaging.py (56 lines)

doc/py_tutorials/py_objdetect/py_face_detection/py_face_detection.markdown
@@ -126,9 +126,9 @@ Result looks like below:
Additional Resources
--------------------
--# Video Lecture on [Face Detection and Tracking](http://www.youtube.com/watch?v=WfdYYNamHZ8)
-2. An interesting interview regarding Face Detection by [Adam
-Harvey](http://www.makematics.com/research/viola-jones/)
+-# Video Lecture on [Face Detection and Tracking](https://www.youtube.com/watch?v=WfdYYNamHZ8)
+-# An interesting interview regarding Face Detection by [Adam
+Harvey](https://web.archive.org/web/20171204220159/http://www.makematics.com/research/viola-jones/)
Exercises
---------

doc/py_tutorials/py_photo/py_hdr/py_hdr.markdown
@@ -27,7 +27,7 @@ merged, it has to be converted back to 8-bit to view it on usual displays. This
tonemapping. Additional complexities arise when objects of the scene or camera move between shots,
since images with different exposures should be registered and aligned.
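OpenCV ships an MTB-based registration step for exactly this alignment problem. It is not part of
this tutorial's code; a minimal sketch (assuming img_list holds the exposure images loaded below):
@code{.py}
import cv2 as cv

# Median Threshold Bitmap alignment: shifts the exposures so they
# overlap before merging (useful for hand-held sequences).
align_mtb = cv.createAlignMTB()
align_mtb.process(img_list, img_list)  # aligns the list in place
@endcode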
-In this tutorial we show 2 algorithms (Debvec, Robertson) to generate and display HDR image from an
+In this tutorial we show 2 algorithms (Debevec, Robertson) to generate and display HDR image from an
exposure sequence, and demonstrate an alternative approach called exposure fusion (Mertens), that
produces low dynamic range image and does not need the exposure times data.
Furthermore, we estimate the camera response function (CRF) which is of great value for many computer
@@ -65,14 +65,14 @@ exposure_times = np.array([15.0, 2.5, 0.25, 0.0333], dtype=np.float32)
### 2. Merge exposures into HDR image
In this stage we merge the exposure sequence into one HDR image, showing 2 possibilities
-which we have in OpenCV. The first method is Debvec and the second one is Robertson.
+which we have in OpenCV. The first method is Debevec and the second one is Robertson.
Notice that the HDR image is of type float32, and not uint8, as it contains the
full dynamic range of all exposure images.
@code{.py}
# Merge exposures to HDR image
-merge_debvec = cv.createMergeDebevec()
-hdr_debvec = merge_debvec.process(img_list, times=exposure_times.copy())
+merge_debevec = cv.createMergeDebevec()
+hdr_debevec = merge_debevec.process(img_list, times=exposure_times.copy())
merge_robertson = cv.createMergeRobertson()
hdr_robertson = merge_robertson.process(img_list, times=exposure_times.copy())
@endcode
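A quick way to verify the dtype claim above (assuming the merge step has run):
@code{.py}
print(hdr_debevec.dtype)  # float32, unlike the uint8 input exposures
@endcode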
@@ -86,7 +86,7 @@ we will later have to clip the data in order to avoid overflow.
@code{.py}
# Tonemap HDR image
tonemap1 = cv.createTonemapDurand(gamma=2.2)
-res_debvec = tonemap1.process(hdr_debvec.copy())
+res_debevec = tonemap1.process(hdr_debevec.copy())
tonemap2 = cv.createTonemapDurand(gamma=1.3)
res_robertson = tonemap2.process(hdr_robertson.copy())
@endcode
@@ -111,11 +111,11 @@ integers in the range of [0..255].
@code{.py}
# Convert datatype to 8-bit and save
-res_debvec_8bit = np.clip(res_debvec*255, 0, 255).astype('uint8')
+res_debevec_8bit = np.clip(res_debevec*255, 0, 255).astype('uint8')
res_robertson_8bit = np.clip(res_robertson*255, 0, 255).astype('uint8')
res_mertens_8bit = np.clip(res_mertens*255, 0, 255).astype('uint8')
cv.imwrite("ldr_debvec.jpg", res_debvec_8bit)
cv.imwrite("ldr_debevec.jpg", res_debevec_8bit)
cv.imwrite("ldr_robertson.jpg", res_robertson_8bit)
cv.imwrite("fusion_mertens.jpg", res_mertens_8bit)
@endcode
@@ -127,9 +127,9 @@ You can see the different results but consider that each algorithm has additional
extra parameters that you should fit to get your desired outcome. Best practice is
to try the different methods and see which one performs best for your scene.
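As an illustration of such extra parameters, the Durand tonemapper's knobs can be set explicitly; a
sketch with purely illustrative values (reusing hdr_debevec from the merge step; the keyword names
follow the C++ parameter names):
@code{.py}
import cv2 as cv

# All parameters besides gamma are optional; defaults are sensible.
tonemap = cv.createTonemapDurand(gamma=2.2, contrast=4.0, saturation=1.0,
                                 sigma_space=2.0, sigma_color=2.0)
ldr = tonemap.process(hdr_debevec.copy())
@endcode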
-### Debvec:
+### Debevec:
-![image](images/ldr_debvec.jpg)
+![image](images/ldr_debevec.jpg)
### Robertson:
@@ -150,9 +150,9 @@ function and use it for the HDR merge.
@code{.py}
# Estimate camera response function (CRF)
-cal_debvec = cv.createCalibrateDebevec()
-crf_debvec = cal_debvec.process(img_list, times=exposure_times)
-hdr_debvec = merge_debvec.process(img_list, times=exposure_times.copy(), response=crf_debvec.copy())
+cal_debevec = cv.createCalibrateDebevec()
+crf_debevec = cal_debevec.process(img_list, times=exposure_times)
+hdr_debevec = merge_debevec.process(img_list, times=exposure_times.copy(), response=crf_debevec.copy())
cal_robertson = cv.createCalibrateRobertson()
crf_robertson = cal_robertson.process(img_list, times=exposure_times)
hdr_robertson = merge_robertson.process(img_list, times=exposure_times.copy(), response=crf_robertson.copy())
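The returned response can be inspected directly; a small sketch (assuming matplotlib is installed)
that plots the estimated inverse CRF per channel:
@code{.py}
import matplotlib.pyplot as plt

# crf_debevec has shape (256, 1, 3): one value per intensity level
# and per BGR channel.
for i, c in enumerate('bgr'):
    plt.plot(crf_debevec[:, 0, i], color=c)
plt.xlabel('Measured pixel value')
plt.ylabel('Calibrated intensity')
plt.show()
@endcode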
@@ -166,12 +166,12 @@ For this sequence we got the following estimation:
Additional Resources
--------------------
-1. Paul E Debevec and Jitendra Malik. Recovering high dynamic range radiance maps from photographs. In ACM SIGGRAPH 2008 classes, page 31. ACM, 2008.
-2. Mark A Robertson, Sean Borman, and Robert L Stevenson. Dynamic range improvement through multiple exposures. In Image Processing, 1999. ICIP 99. Proceedings. 1999 International Conference on, volume 3, pages 159–163. IEEE, 1999.
-3. Tom Mertens, Jan Kautz, and Frank Van Reeth. Exposure fusion. In Computer Graphics and Applications, 2007. PG'07. 15th Pacific Conference on, pages 382–390. IEEE, 2007.
+1. Paul E Debevec and Jitendra Malik. Recovering high dynamic range radiance maps from photographs. In ACM SIGGRAPH 2008 classes, page 31. ACM, 2008. @cite DM97
+2. Mark A Robertson, Sean Borman, and Robert L Stevenson. Dynamic range improvement through multiple exposures. In Image Processing, 1999. ICIP 99. Proceedings. 1999 International Conference on, volume 3, pages 159–163. IEEE, 1999. @cite RB99
+3. Tom Mertens, Jan Kautz, and Frank Van Reeth. Exposure fusion. In Computer Graphics and Applications, 2007. PG'07. 15th Pacific Conference on, pages 382–390. IEEE, 2007. @cite MK07
4. Images from [Wikipedia-HDR](https://en.wikipedia.org/wiki/High-dynamic-range_imaging)
Exercises
---------
-1. Try all tonemap algorithms: [Drago](http://docs.opencv.org/master/da/d53/classcv_1_1TonemapDrago.html), [Durand](http://docs.opencv.org/master/da/d3d/classcv_1_1TonemapDurand.html), [Mantiuk](http://docs.opencv.org/master/de/d76/classcv_1_1TonemapMantiuk.html) and [Reinhard](http://docs.opencv.org/master/d0/dec/classcv_1_1TonemapReinhard.html).
+1. Try all tonemap algorithms: cv::TonemapDrago, cv::TonemapDurand, cv::TonemapMantiuk and cv::TonemapReinhard
2. Try changing the parameters in the HDR calibration and tonemap methods.
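For the first exercise, a minimal sketch that runs each tonemapper in turn (reusing hdr_debevec from
above; the gamma value is illustrative):
@code{.py}
import numpy as np
import cv2 as cv

tonemappers = {'drago': cv.createTonemapDrago(gamma=2.2),
               'durand': cv.createTonemapDurand(gamma=2.2),
               'mantiuk': cv.createTonemapMantiuk(gamma=2.2),
               'reinhard': cv.createTonemapReinhard(gamma=2.2)}
for name, tm in tonemappers.items():
    ldr = tm.process(hdr_debevec.copy())
    cv.imwrite('ldr_%s.jpg' % name, np.clip(ldr * 255, 0, 255).astype('uint8'))
@endcode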

doc/tutorials/objdetect/cascade_classifier/cascade_classifier.markdown
@@ -17,9 +17,23 @@ Theory
Code
----
+@add_toggle_cpp
+This tutorial code is shown in the lines below. You can also download it from
+[here](https://github.com/opencv/opencv/tree/3.4/samples/cpp/tutorial_code/objectDetection/objectDetection.cpp)
+@include samples/cpp/tutorial_code/objectDetection/objectDetection.cpp
+@end_toggle
+
+@add_toggle_java
+This tutorial code is shown in the lines below. You can also download it from
+[here](https://github.com/opencv/opencv/tree/3.4/samples/java/tutorial_code/objectDetection/cascade_classifier/ObjectDetectionDemo.java)
+@include samples/java/tutorial_code/objectDetection/cascade_classifier/ObjectDetectionDemo.java
+@end_toggle
+
+@add_toggle_python
+This tutorial code is shown in the lines below. You can also download it from
+[here](https://github.com/opencv/opencv/tree/3.4/samples/python/tutorial_code/objectDetection/cascade_classifier/objectDetection.py)
+@include samples/python/tutorial_code/objectDetection/cascade_classifier/objectDetection.py
+@end_toggle
Explanation
-----------
@@ -40,3 +54,13 @@ Result
detection. For the eyes we keep using the file used in the tutorial.
![](images/Cascade_Classifier_Tutorial_Result_LBP.jpg)
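Switching to the LBP face cascade only changes the file passed at load time; a Python sketch (the
path is an assumption about where OpenCV's data files are installed):
@code{.py}
import cv2 as cv

# The rest of the detection code stays exactly the same.
face_cascade = cv.CascadeClassifier()
if not face_cascade.load('../../data/lbpcascades/lbpcascade_frontalface.xml'):
    print('--(!)Error loading LBP face cascade')
@endcode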
+Additional Resources
+--------------------
+-# Paul Viola and Michael J. Jones. Robust real-time face detection. International Journal of Computer Vision, 57(2):137–154, 2004. @cite Viola04
+-# Rainer Lienhart and Jochen Maydt. An extended set of haar-like features for rapid object detection. In Image Processing. 2002. Proceedings. 2002 International Conference on, volume 1, pages I–900. IEEE, 2002. @cite Lienhart02
+-# Video Lecture on [Face Detection and Tracking](https://www.youtube.com/watch?v=WfdYYNamHZ8)
+-# An interesting interview regarding Face Detection by [Adam
+Harvey](https://web.archive.org/web/20171204220159/http://www.makematics.com/research/viola-jones/)
+-# [OpenCV Face Detection: Visualized](https://vimeo.com/12774628) on Vimeo by Adam Harvey

doc/tutorials/objdetect/table_of_content_objdetect.markdown
@@ -5,6 +5,8 @@ Ever wondered how your digital camera detects people and faces? Look here to find out!
- @subpage tutorial_cascade_classifier
+*Languages:* C++, Java, Python
*Compatibility:* \> OpenCV 2.0
*Author:* Ana Huamán

doc/tutorials/photo/hdr_imaging/hdr_imaging.markdown
@@ -31,21 +31,51 @@ Exposure sequence
Source Code
-----------
-@include cpp/tutorial_code/photo/hdr_imaging/hdr_imaging.cpp
+@add_toggle_cpp
+This tutorial code is shown in the lines below. You can also download it from
+[here](https://github.com/opencv/opencv/tree/3.4/samples/cpp/tutorial_code/photo/hdr_imaging/hdr_imaging.cpp)
+@include samples/cpp/tutorial_code/photo/hdr_imaging/hdr_imaging.cpp
+@end_toggle
+
+@add_toggle_java
+This tutorial code is shown in the lines below. You can also download it from
+[here](https://github.com/opencv/opencv/tree/3.4/samples/java/tutorial_code/photo/hdr_imaging/HDRImagingDemo.java)
+@include samples/java/tutorial_code/photo/hdr_imaging/HDRImagingDemo.java
+@end_toggle
+
+@add_toggle_python
+This tutorial code is shown in the lines below. You can also download it from
+[here](https://github.com/opencv/opencv/tree/3.4/samples/python/tutorial_code/photo/hdr_imaging/hdr_imaging.py)
+@include samples/python/tutorial_code/photo/hdr_imaging/hdr_imaging.py
+@end_toggle
+Sample images
+-------------
+The data directory that contains the images, exposure times and `list.txt` file can be downloaded
+from [here](https://github.com/opencv/opencv_extra/tree/3.4/testdata/cv/hdr/exposures).
Explanation
-----------
--# **Load images and exposure times**
-@code{.cpp}
-vector<Mat> images;
-vector<float> times;
-loadExposureSeq(argv[1], images, times);
-@endcode
-Firstly we load input images and exposure times from user-defined folder. The folder should
-contain images and *list.txt* - file that contains file names and inverse exposure times.
+- **Load images and exposure times**
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/photo/hdr_imaging/hdr_imaging.cpp Load images and exposure times
+@end_toggle
+@add_toggle_java
+@snippet samples/java/tutorial_code/photo/hdr_imaging/HDRImagingDemo.java Load images and exposure times
+@end_toggle
+@add_toggle_python
+@snippet samples/python/tutorial_code/photo/hdr_imaging/hdr_imaging.py Load images and exposure times
+@end_toggle
+Firstly we load input images and exposure times from a user-defined folder. The folder should
+contain images and *list.txt* - a file that contains the file names and inverse exposure times.
-For our image sequence the list is following:
+For our image sequence the list is as follows:
@code{.none}
memorial00.png 0.03125
memorial01.png 0.0625
@@ -53,53 +83,96 @@ Explanation
memorial15.png 1024
@endcode
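The second column holds inverse exposure times: 0.03125 for memorial00.png corresponds to an
exposure of 1/0.03125 = 32 seconds, while 1024 for memorial15.png corresponds to 1/1024 of a second.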
--# **Estimate camera response**
-@code{.cpp}
-Mat response;
-Ptr<CalibrateDebevec> calibrate = createCalibrateDebevec();
-calibrate->process(images, response, times);
-@endcode
-It is necessary to know camera response function (CRF) for a lot of HDR construction algorithms.
-We use one of the calibration algorithms to estimate inverse CRF for all 256 pixel values.
--# **Make HDR image**
-@code{.cpp}
-Mat hdr;
-Ptr<MergeDebevec> merge_debevec = createMergeDebevec();
-merge_debevec->process(images, hdr, times, response);
-@endcode
+- **Estimate camera response**
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/photo/hdr_imaging/hdr_imaging.cpp Estimate camera response
+@end_toggle
+@add_toggle_java
+@snippet samples/java/tutorial_code/photo/hdr_imaging/HDRImagingDemo.java Estimate camera response
+@end_toggle
+@add_toggle_python
+@snippet samples/python/tutorial_code/photo/hdr_imaging/hdr_imaging.py Estimate camera response
+@end_toggle
+It is necessary to know the camera response function (CRF) for a lot of HDR construction algorithms.
+We use one of the calibration algorithms to estimate the inverse CRF for all 256 pixel values.
+- **Make HDR image**
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/photo/hdr_imaging/hdr_imaging.cpp Make HDR image
+@end_toggle
+@add_toggle_java
+@snippet samples/java/tutorial_code/photo/hdr_imaging/HDRImagingDemo.java Make HDR image
+@end_toggle
+@add_toggle_python
+@snippet samples/python/tutorial_code/photo/hdr_imaging/hdr_imaging.py Make HDR image
+@end_toggle
We use Debevec's weighting scheme to construct the HDR image, using the response calculated in the
previous step.
--# **Tonemap HDR image**
-@code{.cpp}
-Mat ldr;
-Ptr<TonemapDurand> tonemap = createTonemapDurand(2.2f);
-tonemap->process(hdr, ldr);
-@endcode
-Since we want to see our results on common LDR display we have to map our HDR image to 8-bit range
-preserving most details. It is the main goal of tonemapping methods. We use tonemapper with
-bilateral filtering and set 2.2 as the value for gamma correction.
--# **Perform exposure fusion**
-@code{.cpp}
-Mat fusion;
-Ptr<MergeMertens> merge_mertens = createMergeMertens();
-merge_mertens->process(images, fusion);
-@endcode
-There is an alternative way to merge our exposures in case when we don't need HDR image. This
-process is called exposure fusion and produces LDR image that doesn't require gamma correction. It
-also doesn't use exposure values of the photographs.
--# **Write results**
-@code{.cpp}
-imwrite("fusion.png", fusion * 255);
-imwrite("ldr.png", ldr * 255);
-imwrite("hdr.hdr", hdr);
-@endcode
-Now it's time to look at the results. Note that HDR image can't be stored in one of common image
-formats, so we save it to Radiance image (.hdr). Also all HDR imaging functions return results in
-[0, 1] range so we should multiply result by 255.
+- **Tonemap HDR image**
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/photo/hdr_imaging/hdr_imaging.cpp Tonemap HDR image
+@end_toggle
+@add_toggle_java
+@snippet samples/java/tutorial_code/photo/hdr_imaging/HDRImagingDemo.java Tonemap HDR image
+@end_toggle
+@add_toggle_python
+@snippet samples/python/tutorial_code/photo/hdr_imaging/hdr_imaging.py Tonemap HDR image
+@end_toggle
+Since we want to see our results on a common LDR display, we have to map our HDR image to the 8-bit
+range while preserving most details. That is the main goal of tonemapping methods. We use a
+tonemapper with bilateral filtering and set 2.2 as the value for gamma correction.
+- **Perform exposure fusion**
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/photo/hdr_imaging/hdr_imaging.cpp Perform exposure fusion
+@end_toggle
+@add_toggle_java
+@snippet samples/java/tutorial_code/photo/hdr_imaging/HDRImagingDemo.java Perform exposure fusion
+@end_toggle
+@add_toggle_python
+@snippet samples/python/tutorial_code/photo/hdr_imaging/hdr_imaging.py Perform exposure fusion
+@end_toggle
+There is an alternative way to merge our exposures for cases when we don't need an HDR image. This
+process is called exposure fusion and produces an LDR image that doesn't require gamma correction.
+It also doesn't use the exposure values of the photographs.
+- **Write results**
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/photo/hdr_imaging/hdr_imaging.cpp Write results
+@end_toggle
+@add_toggle_java
+@snippet samples/java/tutorial_code/photo/hdr_imaging/HDRImagingDemo.java Write results
+@end_toggle
+@add_toggle_python
+@snippet samples/python/tutorial_code/photo/hdr_imaging/hdr_imaging.py Write results
+@end_toggle
+Now it's time to look at the results. Note that an HDR image can't be stored in common image
+formats, so we save it as a Radiance image (.hdr). Also, all HDR imaging functions return results
+in the [0, 1] range, so we should multiply the result by 255.
+
+You can try other tonemap algorithms: cv::TonemapDrago, cv::TonemapDurand, cv::TonemapMantiuk and cv::TonemapReinhard.
+You can also adjust the parameters in the HDR calibration and tonemap methods for your own photos.
Results
-------
@@ -111,3 +184,12 @@ Results
### Exposure fusion
![](images/fusion.png)
+Additional Resources
+--------------------
+1. Paul E Debevec and Jitendra Malik. Recovering high dynamic range radiance maps from photographs. In ACM SIGGRAPH 2008 classes, page 31. ACM, 2008. @cite DM97
+2. Mark A Robertson, Sean Borman, and Robert L Stevenson. Dynamic range improvement through multiple exposures. In Image Processing, 1999. ICIP 99. Proceedings. 1999 International Conference on, volume 3, pages 159–163. IEEE, 1999. @cite RB99
+3. Tom Mertens, Jan Kautz, and Frank Van Reeth. Exposure fusion. In Computer Graphics and Applications, 2007. PG'07. 15th Pacific Conference on, pages 382–390. IEEE, 2007. @cite MK07
+4. [Wikipedia-HDR](https://en.wikipedia.org/wiki/High-dynamic-range_imaging)
+5. [Recovering High Dynamic Range Radiance Maps from Photographs (webpage)](http://www.pauldebevec.com/Research/HDR/)

doc/tutorials/photo/table_of_content_photo.markdown
@@ -5,6 +5,8 @@ Use OpenCV for advanced photo processing.
- @subpage tutorial_hdr_imaging
+*Languages:* C++, Java, Python
*Compatibility:* \> OpenCV 3.0
*Author:* Fedor Morozov

samples/cpp/tutorial_code/objectDetection/objectDetection.cpp
@@ -2,7 +2,7 @@
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <stdio.h>
#include <iostream>
using namespace std;
using namespace cv;
@@ -11,48 +11,63 @@ using namespace cv;
void detectAndDisplay( Mat frame );

/** Global variables */
-String face_cascade_name, eyes_cascade_name;
CascadeClassifier face_cascade;
CascadeClassifier eyes_cascade;
-String window_name = "Capture - Face detection";

/** @function main */
int main( int argc, const char** argv )
{
    CommandLineParser parser(argc, argv,
                 "{help h||}"
-                "{face_cascade|../../data/haarcascades/haarcascade_frontalface_alt.xml|}"
-                "{eyes_cascade|../../data/haarcascades/haarcascade_eye_tree_eyeglasses.xml|}");
+                "{face_cascade|../../data/haarcascades/haarcascade_frontalface_alt.xml|Path to face cascade.}"
+                "{eyes_cascade|../../data/haarcascades/haarcascade_eye_tree_eyeglasses.xml|Path to eyes cascade.}"
+                "{camera|0|Camera device number.}");
    parser.about( "\nThis program demonstrates using the cv::CascadeClassifier class to detect objects (Face + eyes) in a video stream.\n"
                  "You can use Haar or LBP features.\n\n" );
    parser.printMessage();

-    face_cascade_name = parser.get<String>("face_cascade");
-    eyes_cascade_name = parser.get<String>("eyes_cascade");
-    VideoCapture capture;
-    Mat frame;
+    String face_cascade_name = parser.get<String>("face_cascade");
+    String eyes_cascade_name = parser.get<String>("eyes_cascade");
    //-- 1. Load the cascades
-    if( !face_cascade.load( face_cascade_name ) ){ printf("--(!)Error loading face cascade\n"); return -1; };
-    if( !eyes_cascade.load( eyes_cascade_name ) ){ printf("--(!)Error loading eyes cascade\n"); return -1; };
+    if( !face_cascade.load( face_cascade_name ) )
+    {
+        cout << "--(!)Error loading face cascade\n";
+        return -1;
+    };
+    if( !eyes_cascade.load( eyes_cascade_name ) )
+    {
+        cout << "--(!)Error loading eyes cascade\n";
+        return -1;
+    };

+    int camera_device = parser.get<int>("camera");
+    VideoCapture capture;
    //-- 2. Read the video stream
-    capture.open( 0 );
-    if ( ! capture.isOpened() ) { printf("--(!)Error opening video capture\n"); return -1; }
+    capture.open( camera_device );
+    if ( ! capture.isOpened() )
+    {
+        cout << "--(!)Error opening video capture\n";
+        return -1;
+    }

+    Mat frame;
    while ( capture.read(frame) )
    {
        if( frame.empty() )
        {
-            printf(" --(!) No captured frame -- Break!");
+            cout << "--(!) No captured frame -- Break!\n";
            break;
        }

        //-- 3. Apply the classifier to the frame
        detectAndDisplay( frame );

-        if( waitKey(10) == 27 ) { break; } // escape
+        if( waitKey(10) == 27 )
+        {
+            break; // escape
+        }
    }
    return 0;
}
@@ -60,33 +75,33 @@ int main( int argc, const char** argv )
/** @function detectAndDisplay */
void detectAndDisplay( Mat frame )
{
-    std::vector<Rect> faces;
    Mat frame_gray;

    cvtColor( frame, frame_gray, COLOR_BGR2GRAY );
    equalizeHist( frame_gray, frame_gray );

    //-- Detect faces
-    face_cascade.detectMultiScale( frame_gray, faces, 1.1, 2, 0|CASCADE_SCALE_IMAGE, Size(60, 60) );
+    std::vector<Rect> faces;
+    face_cascade.detectMultiScale( frame_gray, faces );

    for ( size_t i = 0; i < faces.size(); i++ )
    {
        Point center( faces[i].x + faces[i].width/2, faces[i].y + faces[i].height/2 );
-        ellipse( frame, center, Size( faces[i].width/2, faces[i].height/2 ), 0, 0, 360, Scalar( 255, 0, 255 ), 4, 8, 0 );
+        ellipse( frame, center, Size( faces[i].width/2, faces[i].height/2 ), 0, 0, 360, Scalar( 255, 0, 255 ), 4 );

        Mat faceROI = frame_gray( faces[i] );

        //-- In each face, detect eyes
-        std::vector<Rect> eyes;
-        eyes_cascade.detectMultiScale( faceROI, eyes, 1.1, 2, 0 |CASCADE_SCALE_IMAGE, Size(30, 30) );
+        std::vector<Rect> eyes;
+        eyes_cascade.detectMultiScale( faceROI, eyes );

        for ( size_t j = 0; j < eyes.size(); j++ )
        {
            Point eye_center( faces[i].x + eyes[j].x + eyes[j].width/2, faces[i].y + eyes[j].y + eyes[j].height/2 );
            int radius = cvRound( (eyes[j].width + eyes[j].height)*0.25 );
-            circle( frame, eye_center, radius, Scalar( 255, 0, 0 ), 4, 8, 0 );
+            circle( frame, eye_center, radius, Scalar( 255, 0, 0 ), 4 );
        }
    }

    //-- Show what you got
-    imshow( window_name, frame );
+    imshow( "Capture - Face detection", frame );
}

samples/cpp/tutorial_code/photo/hdr_imaging/hdr_imaging.cpp
@@ -1,6 +1,7 @@
#include <opencv2/photo.hpp>
#include "opencv2/photo.hpp"
#include "opencv2/imgcodecs.hpp"
#include <opencv2/highgui.hpp>
#include "opencv2/highgui.hpp"
#include <vector>
#include <iostream>
#include <fstream>
@@ -10,38 +11,52 @@ using namespace std;
void loadExposureSeq(String, vector<Mat>&, vector<float>&);

-int main(int, char**argv)
+int main(int argc, char** argv)
{
+    CommandLineParser parser( argc, argv, "{@input | | Input directory that contains images and exposure times. }" );

+    //! [Load images and exposure times]
    vector<Mat> images;
    vector<float> times;
-    loadExposureSeq(argv[1], images, times);
+    loadExposureSeq(parser.get<String>( "@input" ), images, times);
+    //! [Load images and exposure times]

+    //! [Estimate camera response]
    Mat response;
    Ptr<CalibrateDebevec> calibrate = createCalibrateDebevec();
    calibrate->process(images, response, times);
+    //! [Estimate camera response]

+    //! [Make HDR image]
    Mat hdr;
    Ptr<MergeDebevec> merge_debevec = createMergeDebevec();
    merge_debevec->process(images, hdr, times, response);
+    //! [Make HDR image]

+    //! [Tonemap HDR image]
    Mat ldr;
    Ptr<TonemapDurand> tonemap = createTonemapDurand(2.2f);
    tonemap->process(hdr, ldr);
+    //! [Tonemap HDR image]

+    //! [Perform exposure fusion]
    Mat fusion;
    Ptr<MergeMertens> merge_mertens = createMergeMertens();
    merge_mertens->process(images, fusion);
+    //! [Perform exposure fusion]

+    //! [Write results]
    imwrite("fusion.png", fusion * 255);
    imwrite("ldr.png", ldr * 255);
    imwrite("hdr.hdr", hdr);
+    //! [Write results]

    return 0;
}

void loadExposureSeq(String path, vector<Mat>& images, vector<float>& times)
{
-    path = path + std::string("/");
+    path = path + "/";
    ifstream list_file((path + "list.txt").c_str());
    string name;
    float val;

samples/java/tutorial_code/objectDetection/cascade_classifier/ObjectDetectionDemo.java (new file)
@@ -0,0 +1,98 @@
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.highgui.HighGui;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;
import org.opencv.videoio.VideoCapture;

class ObjectDetection {
    public void detectAndDisplay(Mat frame, CascadeClassifier faceCascade, CascadeClassifier eyesCascade) {
        Mat frameGray = new Mat();
        Imgproc.cvtColor(frame, frameGray, Imgproc.COLOR_BGR2GRAY);
        Imgproc.equalizeHist(frameGray, frameGray);

        // -- Detect faces
        MatOfRect faces = new MatOfRect();
        faceCascade.detectMultiScale(frameGray, faces);

        List<Rect> listOfFaces = faces.toList();
        for (Rect face : listOfFaces) {
            Point center = new Point(face.x + face.width / 2, face.y + face.height / 2);
            Imgproc.ellipse(frame, center, new Size(face.width / 2, face.height / 2), 0, 0, 360,
                    new Scalar(255, 0, 255));

            Mat faceROI = frameGray.submat(face);

            // -- In each face, detect eyes
            MatOfRect eyes = new MatOfRect();
            eyesCascade.detectMultiScale(faceROI, eyes);

            List<Rect> listOfEyes = eyes.toList();
            for (Rect eye : listOfEyes) {
                Point eyeCenter = new Point(face.x + eye.x + eye.width / 2, face.y + eye.y + eye.height / 2);
                int radius = (int) Math.round((eye.width + eye.height) * 0.25);
                Imgproc.circle(frame, eyeCenter, radius, new Scalar(255, 0, 0), 4);
            }
        }

        //-- Show what you got
        HighGui.imshow("Capture - Face detection", frame);
    }

    public void run(String[] args) {
        String filenameFaceCascade = args.length > 2 ? args[0] : "../../data/haarcascades/haarcascade_frontalface_alt.xml";
        String filenameEyesCascade = args.length > 2 ? args[1] : "../../data/haarcascades/haarcascade_eye_tree_eyeglasses.xml";
        int cameraDevice = args.length > 2 ? Integer.parseInt(args[2]) : 0;

        CascadeClassifier faceCascade = new CascadeClassifier();
        CascadeClassifier eyesCascade = new CascadeClassifier();

        if (!faceCascade.load(filenameFaceCascade)) {
            System.err.println("--(!)Error loading face cascade: " + filenameFaceCascade);
            System.exit(0);
        }
        if (!eyesCascade.load(filenameEyesCascade)) {
            System.err.println("--(!)Error loading eyes cascade: " + filenameEyesCascade);
            System.exit(0);
        }

        VideoCapture capture = new VideoCapture(cameraDevice);
        if (!capture.isOpened()) {
            System.err.println("--(!)Error opening video capture");
            System.exit(0);
        }

        Mat frame = new Mat();
        while (capture.read(frame)) {
            if (frame.empty()) {
                System.err.println("--(!) No captured frame -- Break!");
                break;
            }

            //-- 3. Apply the classifier to the frame
            detectAndDisplay(frame, faceCascade, eyesCascade);

            if (HighGui.waitKey(10) == 27) {
                break; // escape
            }
        }
        System.exit(0);
    }
}

public class ObjectDetectionDemo {
    public static void main(String[] args) {
        // Load the native OpenCV library
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        new ObjectDetection().run(args);
    }
}

samples/java/tutorial_code/photo/hdr_imaging/HDRImagingDemo.java (new file)
@@ -0,0 +1,102 @@
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.photo.CalibrateDebevec;
import org.opencv.photo.MergeDebevec;
import org.opencv.photo.MergeMertens;
import org.opencv.photo.Photo;
import org.opencv.photo.TonemapDurand;

class HDRImaging {
    public void loadExposureSeq(String path, List<Mat> images, List<Float> times) {
        path += "/";
        List<String> lines;
        try {
            lines = Files.readAllLines(Paths.get(path + "list.txt"));

            for (String line : lines) {
                String[] splitStr = line.split("\\s+");
                if (splitStr.length == 2) {
                    String name = splitStr[0];
                    Mat img = Imgcodecs.imread(path + name);
                    images.add(img);
                    // list.txt stores inverse exposure times
                    float val = Float.parseFloat(splitStr[1]);
                    times.add(1 / val);
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public void run(String[] args) {
        String path = args.length > 0 ? args[0] : "";
        if (path.isEmpty()) {
            System.out.println("Path is empty. Use the directory that contains images and exposure times.");
            System.exit(0);
        }

        //! [Load images and exposure times]
        List<Mat> images = new ArrayList<>();
        List<Float> times = new ArrayList<>();
        loadExposureSeq(path, images, times);
        //! [Load images and exposure times]

        //! [Estimate camera response]
        Mat response = new Mat();
        CalibrateDebevec calibrate = Photo.createCalibrateDebevec();
        // Pack the exposure times into a one-column CV_32F Mat for the calibration call
        Mat matTimes = new Mat(times.size(), 1, CvType.CV_32F);
        float[] arrayTimes = new float[(int) (matTimes.total() * matTimes.channels())];
        for (int i = 0; i < times.size(); i++) {
            arrayTimes[i] = times.get(i);
        }
        matTimes.put(0, 0, arrayTimes);
        calibrate.process(images, response, matTimes);
        //! [Estimate camera response]

        //! [Make HDR image]
        Mat hdr = new Mat();
        MergeDebevec mergeDebevec = Photo.createMergeDebevec();
        // Use the response estimated in the previous step, as the tutorial text describes
        mergeDebevec.process(images, hdr, matTimes, response);
        //! [Make HDR image]

        //! [Tonemap HDR image]
        Mat ldr = new Mat();
        // gamma = 2.2, matching the C++ and Python versions of this sample
        TonemapDurand tonemap = Photo.createTonemapDurand(2.2f);
        tonemap.process(hdr, ldr);
        //! [Tonemap HDR image]

        //! [Perform exposure fusion]
        Mat fusion = new Mat();
        MergeMertens mergeMertens = Photo.createMergeMertens();
        mergeMertens.process(images, fusion);
        //! [Perform exposure fusion]

        //! [Write results]
        // Scale the [0, 1] float results to the 8-bit range before saving
        // (Mat.mul(fusion, 255) would square the pixel values instead)
        fusion.convertTo(fusion, CvType.CV_8UC3, 255);
        ldr.convertTo(ldr, CvType.CV_8UC3, 255);
        Imgcodecs.imwrite("fusion.png", fusion);
        Imgcodecs.imwrite("ldr.png", ldr);
        Imgcodecs.imwrite("hdr.hdr", hdr);
        //! [Write results]

        System.exit(0);
    }
}

public class HDRImagingDemo {
    public static void main(String[] args) {
        // Load the native OpenCV library
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        new HDRImaging().run(args);
    }
}

samples/python/tutorial_code/objectDetection/cascade_classifier/objectDetection.py (new file)
@@ -0,0 +1,61 @@
from __future__ import print_function
import cv2 as cv
import argparse

def detectAndDisplay(frame):
    frame_gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    frame_gray = cv.equalizeHist(frame_gray)

    #-- Detect faces
    faces = face_cascade.detectMultiScale(frame_gray)
    for (x,y,w,h) in faces:
        center = (x + w//2, y + h//2)
        frame = cv.ellipse(frame, center, (w//2, h//2), 0, 0, 360, (255, 0, 255), 4)

        faceROI = frame_gray[y:y+h,x:x+w]
        #-- In each face, detect eyes
        eyes = eyes_cascade.detectMultiScale(faceROI)
        for (x2,y2,w2,h2) in eyes:
            eye_center = (x + x2 + w2//2, y + y2 + h2//2)
            radius = int(round((w2 + h2)*0.25))
            frame = cv.circle(frame, eye_center, radius, (255, 0, 0), 4)

    cv.imshow('Capture - Face detection', frame)

parser = argparse.ArgumentParser(description='Code for Cascade Classifier tutorial.')
parser.add_argument('--face_cascade', help='Path to face cascade.', default='../../data/haarcascades/haarcascade_frontalface_alt.xml')
parser.add_argument('--eyes_cascade', help='Path to eyes cascade.', default='../../data/haarcascades/haarcascade_eye_tree_eyeglasses.xml')
parser.add_argument('--camera', help='Camera device number.', type=int, default=0)
args = parser.parse_args()

face_cascade_name = args.face_cascade
eyes_cascade_name = args.eyes_cascade

face_cascade = cv.CascadeClassifier()
eyes_cascade = cv.CascadeClassifier()

#-- 1. Load the cascades
if not face_cascade.load(face_cascade_name):
    print('--(!)Error loading face cascade')
    exit(0)
if not eyes_cascade.load(eyes_cascade_name):
    print('--(!)Error loading eyes cascade')
    exit(0)

camera_device = args.camera
#-- 2. Read the video stream
cap = cv.VideoCapture(camera_device)
if not cap.isOpened():
    print('--(!)Error opening video capture')
    exit(0)

while True:
    ret, frame = cap.read()
    if frame is None:
        print('--(!) No captured frame -- Break!')
        break

    detectAndDisplay(frame)

    if cv.waitKey(10) == 27:
        break

samples/python/tutorial_code/photo/hdr_imaging/hdr_imaging.py (new file)
@@ -0,0 +1,56 @@
from __future__ import print_function
from __future__ import division
import cv2 as cv
import numpy as np
import argparse
import os

def loadExposureSeq(path):
    images = []
    times = []
    with open(os.path.join(path, 'list.txt')) as f:
        content = f.readlines()
    for line in content:
        tokens = line.split()
        images.append(cv.imread(os.path.join(path, tokens[0])))
        times.append(1 / float(tokens[1]))

    return images, np.asarray(times, dtype=np.float32)

parser = argparse.ArgumentParser(description='Code for High Dynamic Range Imaging tutorial.')
parser.add_argument('--input', type=str, help='Path to the directory that contains images and exposure times.')
args = parser.parse_args()

if not args.input:
    parser.print_help()
    exit(0)

## [Load images and exposure times]
images, times = loadExposureSeq(args.input)
## [Load images and exposure times]

## [Estimate camera response]
calibrate = cv.createCalibrateDebevec()
response = calibrate.process(images, times)
## [Estimate camera response]

## [Make HDR image]
merge_debevec = cv.createMergeDebevec()
hdr = merge_debevec.process(images, times, response)
## [Make HDR image]

## [Tonemap HDR image]
tonemap = cv.createTonemapDurand(2.2)
ldr = tonemap.process(hdr)
## [Tonemap HDR image]

## [Perform exposure fusion]
merge_mertens = cv.createMergeMertens()
fusion = merge_mertens.process(images)
## [Perform exposure fusion]

## [Write results]
cv.imwrite('fusion.png', fusion * 255)
cv.imwrite('ldr.png', ldr * 255)
cv.imwrite('hdr.hdr', hdr)
## [Write results]