Merge branch '2.4'

pull/119/head
Andrey Kamaev 12 years ago
commit 468eefe0ce
  1. BIN  doc/tutorials/contrib/retina_model/images/retina_TreeHdr_retina.jpg
  2. BIN  doc/tutorials/contrib/retina_model/images/retina_TreeHdr_small.jpg
  3. BIN  doc/tutorials/contrib/retina_model/images/studentsSample_input.jpg
  4. BIN  doc/tutorials/contrib/retina_model/images/studentsSample_magno.jpg
  5. BIN  doc/tutorials/contrib/retina_model/images/studentsSample_parvo.jpg
  6. 414  doc/tutorials/contrib/retina_model/retina_model.rst
  7. BIN  doc/tutorials/contrib/table_of_content_contrib/images/retina_TreeHdr_small.jpg
  8. 36  doc/tutorials/contrib/table_of_content_contrib/table_of_content_contrib.rst
  9. 1  doc/tutorials/definitions/tocDefinitions.rst
  10. BIN  doc/tutorials/images/retina.jpg
  11. 16  doc/tutorials/tutorials.rst
  12. 7  modules/contrib/doc/retina/index.rst
  13. 72  modules/java/generator/src/java/android+AsyncServiceHelper.java
  14. 3  modules/java/generator/src/java/android+CameraBridgeViewBase.java
  15. 7  modules/java/generator/src/java/android+JavaCameraView.java
  16. 9  modules/ocl/include/opencv2/ocl/ocl.hpp
  17. 2  modules/ocl/perf/perf_haar.cpp
  18. 6  modules/ocl/perf/perf_hog.cpp
  19. 4  modules/ocl/perf/perf_surf.cpp
  20. 4  modules/ocl/perf/precomp.hpp
  21. 2  modules/ocl/perf/utility.hpp
  22. 32  modules/ocl/src/arithm.cpp
  23. 14  modules/ocl/src/brute_force_matcher.cpp
  24. 6  modules/ocl/src/build_warps.cpp
  25. 2  modules/ocl/src/color.cpp
  26. 2  modules/ocl/src/columnsum.cpp
  27. 4  modules/ocl/src/filtering.cpp
  28. 6  modules/ocl/src/haar.cpp
  29. 2  modules/ocl/src/initialization.cpp
  30. 2  modules/ocl/src/interpolate_frames.cpp
  31. 2  modules/ocl/src/mssegmentation.cpp
  32. 2  modules/ocl/src/precomp.hpp
  33. 116  modules/ocl/src/pyrlk.cpp
  34. 20  modules/ocl/src/surf.cpp
  35. 4  modules/ocl/test/precomp.hpp
  36. 4  modules/ocl/test/test_hog.cpp
  37. 4  modules/ocl/test/test_imgproc.cpp
  38. 14  samples/android/15-puzzle/src/org/opencv/samples/puzzle15/Puzzle15Activity.java
  39. 15  samples/android/15-puzzle/src/org/opencv/samples/puzzle15/Puzzle15Processor.java
  40. 10  samples/android/color-blob-detection/src/org/opencv/samples/colorblobdetect/ColorBlobDetectionActivity.java
  41. 6  samples/android/face-detection/src/org/opencv/samples/fd/FdActivity.java
  42. 6  samples/android/image-manipulations/src/org/opencv/samples/imagemanipulations/ImageManipulationsActivity.java
  43. 9  samples/android/tutorial-1-addopencv/res/layout/tutorial1_surface_view.xml
  44. 52  samples/android/tutorial-1-addopencv/src/org/opencv/samples/tutorial1/Sample1Java.java
  45. 24  samples/android/tutorial-2-opencvcamera/src/org/opencv/samples/tutorial2/Sample2NativeCamera.java
  46. 6  samples/android/tutorial-3-native/src/org/opencv/samples/tutorial3/Sample3Native.java
  47. 8  samples/android/tutorial-4-mixed/src/org/opencv/samples/tutorial4/Sample4Mixed.java
  48. 146  samples/cpp/tutorial_code/contrib/retina_tutorial.cpp


@ -0,0 +1,414 @@
.. _Retina_Model:
Discovering the human retina and its use for image processing
**************************************************************
Goal
=====
I present here a model of the human retina that shows some interesting properties for image preprocessing and enhancement.
In this tutorial you will learn how to:
.. container:: enumeratevisibleitemswithsquare
+ discover the two main output channels of your retina
+ see the basics of using the retina model
+ discover some parameter tweaks
General overview
================
The proposed model originates from Jeanny Herault's research at `Gipsa <http://www.gipsa-lab.inpg.fr>`_. It is involved in image processing applications with the `Listic <http://www.listic.univ-savoie.fr>`_ lab (code maintainer). This is not a complete model, but it already presents interesting properties that can be used for an enhanced image processing experience. The model allows the following human retina properties to be used:
* spectral whitening that has 3 important effects: high spatio-temporal frequency signal canceling (noise), mid-frequency detail enhancement and low frequency luminance energy reduction. This *all in one* property directly allows the cleaning of classical undesired distortions introduced by image sensors and the input luminance range.
* local logarithmic luminance compression that allows details to be enhanced even in low light conditions.
* decorrelation of detail information (Parvocellular output channel) and transient information (events, motion) made available at the Magnocellular output channel.
The first two points are illustrated below.
In the figure below, the OpenEXR image sample *CrissyField.exr*, a High Dynamic Range (HDR) image, is shown. In order to make it visible on this web page, the original input image is linearly rescaled to the classical image luminance range [0-255] and converted to 8bit/channel format. Such a strong conversion hides many details because of too strong local contrasts. Furthermore, noise energy is also strong and pollutes the visual information.
.. image:: images/retina_TreeHdr_small.jpg
:alt: A High dynamic range image linearly rescaled within range [0-255].
:align: center
In the following image, as your retina does, local luminance adaptation, spatial noise removal and spectral whitening work together and transmit accurate information on lower range 8bit data channels. In this picture, noise is significantly removed and local details hidden by strong luminance contrasts are enhanced. The output image keeps its naturalness and the visual content is enhanced.
.. image:: images/retina_TreeHdr_retina.jpg
:alt: A High dynamic range image compressed within range [0-255] using the retina.
:align: center
*Note:* the image sample can be downloaded from the `OpenEXR website <http://www.openexr.com>`_. For this demonstration, before retina processing, the input image has been linearly rescaled within [0-255], keeping its channels in float format; 5% of its histogram ends have been cut (this mostly removes wrong HDR pixels). Check out the sample *opencv/samples/cpp/OpenEXRimages_HighDynamicRange_Retina_toneMapping.cpp* for similar processing. The following demonstration will only consider classical 8bit/channel images.
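Such a rescaling step can be reproduced with a few lines of OpenCV code. The helper below is only a minimal sketch, not part of the tutorial sources: the function name is hypothetical, and the 5% cut is assumed to be split as 2.5% on each histogram end.
.. code-block:: cpp
#include "opencv2/opencv.hpp"
#include <algorithm>
// hypothetical helper: linearly rescale a float HDR image to [0-255],
// cutting 5% of the histogram ends (2.5% on each side) beforehand
static cv::Mat rescaleHdrTo8bitRange(const cv::Mat &hdrInput)
{
    cv::Mat input32f;
    hdrInput.convertTo(input32f, CV_32F);
    cv::Mat flat = input32f.reshape(1, 1).clone(); // one row holding every value
    cv::sort(flat, flat, CV_SORT_EVERY_ROW | CV_SORT_ASCENDING);
    const int n = flat.cols;
    const float lo = flat.at<float>(0, (int)(0.025f * n));
    const float hi = flat.at<float>(0, (int)(0.975f * n));
    cv::Mat out = (input32f - lo) * (255.0f / std::max(hi - lo, 1e-6f));
    out.setTo(0, out < 0);     // clamp out-of-range values...
    out.setTo(255, out > 255); // ...but keep the float format
    return out;
}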
The retina model output channels
================================
The retina model provides two outputs that benefit from the behaviors cited above.
* The first one is called the Parvocellular channel. It is mainly active in the foveal retina area (high resolution central vision with color sensitive photo-receptors). Its aim is to provide accurate color vision for visual details remaining static on the retina; on the other hand, objects moving across the retina projection are blurred.
* The second well known channel is the Magnocellular channel. It is mainly active in the peripheral retina and sends signals related to change events (motion, transient events, etc.). These output signals also help the visual system to focus/center the retina on 'transient'/moving areas for more detailed analysis, thus improving visual scene context and object classification.
**NOTE:** contrary to the real retina, the proposed model applies these two channels to the entire input image at the same resolution. This allows enhanced visual details and motion information to be extracted from all the considered images... but remember that these two channels are complementary. For example, if the Magnocellular channel gives strong energy in an area, then the Parvocellular channel is certainly blurred there since a transient event is occurring.
As an illustration, in the following we apply the retina model to a webcam video stream of a dark visual scene. In this scene, captured in a university amphitheater, some students are moving while talking to the teacher.
In this video sequence, because of the dark ambiance, the signal-to-noise ratio is low, and color artifacts are present on the edges of visual features because of the low quality image capture tool-chain.
.. image:: images/studentsSample_input.jpg
:alt: an input video stream extract sample
:align: center
Below, the retina foveal vision is applied to the entire image. In the retina configuration used, global luminance is preserved and local contrasts are enhanced. The signal-to-noise ratio is also improved: since high frequency spatio-temporal noise is reduced, the enhanced details are not corrupted by any enhanced noise.
.. image:: images/studentsSample_parvo.jpg
:alt: the retina Parvocellular output. Enhanced details, luminance adaptation and noise removal. A processing tool for image analysis.
:align: center
Below is the Magnocellular output of the retina model. Its signals are strong where transient events occur: here, a student moving at the bottom of the image generates high energy. The rest of the image is static; however, it is corrupted by strong noise. The retina filters out most of this noise, thus generating few false motion area 'alarms'. This channel can be used as a transient/moving-area detector: it would provide relevant information to a low cost segmentation tool that highlights areas in which an event is occurring (a minimal thresholding sketch is given after the figure).
.. image:: images/studentsSample_magno.jpg
:alt: the retina Magnocellular output. Enhanced transient signals (motion, etc.). A preprocessing tool for event detection.
:align: center
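The sketch below illustrates how such a detector could be derived from the Magno output. It is not part of the tutorial sources: the helper name and the threshold value of 60 are arbitrary assumptions.
.. code-block:: cpp
#include "opencv2/opencv.hpp"
// hypothetical helper: derive a coarse binary motion mask from the
// Magnocellular output (8-bit single channel when normaliseOutput is enabled)
static cv::Mat magnoToMotionMask(const cv::Mat &magnoOutput)
{
    cv::Mat mask;
    cv::threshold(magnoOutput, mask, 60, 255, cv::THRESH_BINARY);
    // a morphological opening removes the isolated pixels left by residual noise
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN,
                     cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)));
    return mask;
}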
Retina use case
===============
This model can be used basically for spatio-temporal video effects, but also with the aim of:
* performing texture analysis with an enhanced signal-to-noise ratio and enhanced details, robust against input image luminance ranges (check out the Parvocellular retina channel output)
* performing motion analysis, also taking benefit of the previously cited properties.
For more information, refer to the following papers :
* Benoit A., Caplier A., Durette B., Herault, J., "Using Human Visual System Modeling For Bio-Inspired Low Level Image Processing", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773. DOI <http://dx.doi.org/10.1016/j.cviu.2010.01.011>
* Please also have a look at the reference work of Jeanny Herault that you can read in his book:
Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing), by Jeanny Herault, ISBN: 9814273686.
This retina filter code includes the research contributions of PhD/research colleagues whose code has been adapted by the author:
* take a look at the *retinacolor.hpp* module to discover Brice Chaix de Lavarene's PhD work on color mosaicing/demosaicing and his reference paper: B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
* take a look at *imagelogpolprojection.hpp* to discover the retina spatial log sampling, which originates from Barthelemy Durette's PhD work with Jeanny Herault. A retina / V1 cortex projection is also proposed; it originates from discussions with Jeanny. More information can be found in Jeanny Herault's book cited above.
Code tutorial
=============
Please refer to the original tutorial source code in file *opencv_folder/samples/cpp/tutorial_code/contrib/retina_tutorial.cpp*.
To compile it, assuming OpenCV is correctly installed, use the following command. It requires the opencv_core *(cv::Mat and friends objects management)*, opencv_highgui *(display and image/video read)* and opencv_contrib *(Retina description)* libraries.
.. code-block:: bash
# compile (use g++ since this is C++ code)
g++ retina_tutorial.cpp -o Retina_tuto -lopencv_core -lopencv_highgui -lopencv_contrib
# Run commands: add 'log' as a last parameter to apply a spatial log sampling (simulates retina sampling)
# run on a webcam
./Retina_tuto -video
# run on a video file
./Retina_tuto -video myVideo.avi
# run on an image
./Retina_tuto -image myPicture.jpg
# run on an image with log sampling
./Retina_tuto -image myPicture.jpg log
Here is a code explanation:
The Retina definition is present in the contrib package; a simple include makes it available:
.. code-block:: cpp
#include "opencv2/opencv.hpp"
Provide the user some hints on how to run the program, with a help function:
.. code-block:: cpp
// the help procedure
static void help(std::string errorMessage)
{
std::cout<<"Program init error : "<<errorMessage<<std::endl;
std::cout<<"\nProgram call procedure : retinaDemo [processing mode] [Optional : media target] [Optional LAST parameter: \"log\" to activate retina log sampling]"<<std::endl;
std::cout<<"\t[processing mode] :"<<std::endl;
std::cout<<"\t -image : for still image processing"<<std::endl;
std::cout<<"\t -video : for video stream processing"<<std::endl;
std::cout<<"\t[Optional : media target] :"<<std::endl;
std::cout<<"\t if processing an image or video file, then, specify the path and filename of the target to process"<<std::endl;
std::cout<<"\t leave empty if processing video stream coming from a connected video device"<<std::endl;
std::cout<<"\t[Optional : activate retina log sampling] : an optional last parameter can be specified for retina spatial log sampling"<<std::endl;
std::cout<<"\t set \"log\" without quotes to activate this sampling, output frame size will be divided by 4"<<std::endl;
std::cout<<"\nExamples:"<<std::endl;
std::cout<<"\t-Image processing : ./retinaDemo -image lena.jpg"<<std::endl;
std::cout<<"\t-Image processing with log sampling : ./retinaDemo -image lena.jpg log"<<std::endl;
std::cout<<"\t-Video processing : ./retinaDemo -video myMovie.mp4"<<std::endl;
std::cout<<"\t-Live video processing : ./retinaDemo -video"<<std::endl;
std::cout<<"\nPlease start again with new parameters"<<std::endl;
std::cout<<"****************************************************"<<std::endl;
std::cout<<" NOTE : this program generates the default retina parameters file 'RetinaDefaultParameters.xml'"<<std::endl;
std::cout<<" => you can use this to fine tune parameters and load them if you save to file 'RetinaSpecificParameters.xml'"<<std::endl;
}
Then, start the main program. First, declare a *cv::Mat* matrix in which input images will be loaded, and allocate a *cv::VideoCapture* object ready to load video streams (if necessary):
.. code-block:: cpp
int main(int argc, char* argv[]) {
// declare the retina input buffer... that will be fed differently in regard of the input media
cv::Mat inputFrame;
cv::VideoCapture videoCapture; // in case a video media is used, its manager is declared here
In the main program, before processing, the input command parameters are checked first. A first input image is loaded either from a still image file (if the user chose command *-image*) or from a video stream (if the user chose command *-video*). Also, if the user appended *log* at the end of the program call, the spatial logarithmic image sampling performed by the retina is taken into account via the Boolean flag *useLogSampling*.
.. code-block:: cpp
// welcome message
std::cout<<"****************************************************"<<std::endl;
std::cout<<"* Retina demonstration : demonstrates the use of is a wrapper class of the Gipsa/Listic Labs retina model."<<std::endl;
std::cout<<"* This demo will try to load the file 'RetinaSpecificParameters.xml' (if exists).\nTo create it, copy the autogenerated template 'RetinaDefaultParameters.xml'.\nThen twaek it with your own retina parameters."<<std::endl;
// basic input arguments checking
if (argc<2)
{
help("bad number of parameter");
return -1;
}
bool useLogSampling = !strcmp(argv[argc-1], "log"); // check if user wants retina log sampling processing
std::string inputMediaType=argv[1];
//////////////////////////////////////////////////////////////////////////////
// checking input media type (still image, video file, live video acquisition)
if (!strcmp(inputMediaType.c_str(), "-image") && argc >= 3)
{
std::cout<<"RetinaDemo: processing image "<<argv[2]<<std::endl;
// image processing case
inputFrame = cv::imread(std::string(argv[2]), 1); // load image forcing color (BGR) mode
}else
if (!strcmp(inputMediaType.c_str(), "-video"))
{
if (argc == 2 || (argc == 3 && useLogSampling)) // attempt to grab images from a video capture device
{
videoCapture.open(0);
}else// attempt to grab images from a video filestream
{
std::cout<<"RetinaDemo: processing video stream "<<argv[2]<<std::endl;
videoCapture.open(argv[2]);
}
// grab a first frame to check if everything is ok
videoCapture>>inputFrame;
}else
{
// bad command parameter
help("bad command parameter");
return -1;
}
Once all input parameters are processed, a first image should have been loaded; if not, display an error and stop the program:
.. code-block:: cpp
if (inputFrame.empty())
{
help("Input media could not be loaded, aborting");
return -1;
}
Now, everything is ready to run the retina model. I propose here to allocate a retina instance and to handle the optional log sampling. The Retina constructor expects at least a cv::Size object giving the size of the input data that will have to be managed. One can activate other options such as color and its related color multiplexing strategy (here, Bayer multiplexing is chosen using the enum cv::RETINA_COLOR_BAYER). If log sampling is used, the image reduction factor (smaller output images) and the log sampling strength can be adjusted.
.. code-block:: cpp
// pointer to a retina object
cv::Ptr<cv::Retina> myRetina;
// if the last parameter is 'log', then activate log sampling (favour foveal vision and subsample peripheral vision)
if (useLogSampling)
{
myRetina = new cv::Retina(inputFrame.size(), true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);
}
else // allocate a "classical" retina:
myRetina = new cv::Retina(inputFrame.size());
Once done, the proposed code writes a default XML file that contains the default parameters of the retina. This is useful for creating your own configuration from this template. The generated template XML file is called *RetinaDefaultParameters.xml*.
.. code-block:: cpp
// save default retina parameters file in order to let you see this and maybe modify it and reload using method "setup"
myRetina->write("RetinaDefaultParameters.xml");
In the following line, the retina attempts to load another XML file called *RetinaSpecificParameters.xml*. If you created it and introduced your own setup, it will be loaded; otherwise, the default retina parameters are used.
.. code-block:: cpp
// load parameters if file exists
myRetina->setup("RetinaSpecificParameters.xml");
It is not required here, but just to show that it is possible, you can reset the retina buffers to zero to force it to forget past events:
.. code-block:: cpp
// reset all retina buffers (imagine you close your eyes for a long time)
myRetina->clearBuffers();
Now, it is time to run the retina! First, create some output buffers ready to receive the two retina channel outputs:
.. code-block:: cpp
// declare retina output buffers
cv::Mat retinaOutput_parvo;
cv::Mat retinaOutput_magno;
Then, run the retina in a loop; load new frames from the video sequence if necessary and retrieve the retina outputs into dedicated buffers:
.. code-block:: cpp
// processing loop with no stop condition
while(true)
{
// if using a video stream, grab a new frame; else, the input remains the same
if (videoCapture.isOpened())
videoCapture>>inputFrame;
// run retina filter on the loaded input frame
myRetina->run(inputFrame);
// Retrieve and display retina output
myRetina->getParvo(retinaOutput_parvo);
myRetina->getMagno(retinaOutput_magno);
cv::imshow("retina input", inputFrame);
cv::imshow("Retina Parvo", retinaOutput_parvo);
cv::imshow("Retina Magno", retinaOutput_magno);
cv::waitKey(10);
}
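As written, this loop never stops. A common variant, not part of the original sample, is to test the key returned by *cv::waitKey* so the user can leave the loop:
.. code-block:: cpp
// optional variant: leave the processing loop when the user presses 'q'
if ((cv::waitKey(10) & 0xFF) == 'q')
    break;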
That's it! But if you want to secure the system, take care to manage exceptions: the retina can throw some when it receives irrelevant data (no input frame, wrong setup, etc.).
I then recommend surrounding all the retina code with a try/catch block like this:
.. code-block:: cpp
try{
// pointer to a retina object
cv::Ptr<cv::Retina> myRetina;
[---]
// processing loop with no stop condition
while(true)
{
[---]
}
}catch(const cv::Exception &e)
{
std::cerr<<"Error using Retina : "<<e.what()<<std::endl;
}
Retina parameters, what to do?
===============================
First, it is recommended to read the reference paper:
* Benoit A., Caplier A., Durette B., Herault, J., *"Using Human Visual System Modeling For Bio-Inspired Low Level Image Processing"*, Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773. DOI <http://dx.doi.org/10.1016/j.cviu.2010.01.011>
Once done, open the configuration file *RetinaDefaultParameters.xml* generated by the demo and have a look at it:
.. code-block:: xml
<?xml version="1.0"?>
<opencv_storage>
<OPLandIPLparvo>
<colorMode>1</colorMode>
<normaliseOutput>1</normaliseOutput>
<photoreceptorsLocalAdaptationSensitivity>7.0e-01</photoreceptorsLocalAdaptationSensitivity>
<photoreceptorsTemporalConstant>5.0e-01</photoreceptorsTemporalConstant>
<photoreceptorsSpatialConstant>5.3e-01</photoreceptorsSpatialConstant>
<horizontalCellsGain>0.</horizontalCellsGain>
<hcellsTemporalConstant>1.</hcellsTemporalConstant>
<hcellsSpatialConstant>7.</hcellsSpatialConstant>
<ganglionCellsSensitivity>7.0e-01</ganglionCellsSensitivity></OPLandIPLparvo>
<IPLmagno>
<normaliseOutput>1</normaliseOutput>
<parasolCells_beta>0.</parasolCells_beta>
<parasolCells_tau>0.</parasolCells_tau>
<parasolCells_k>7.</parasolCells_k>
<amacrinCellsTemporalCutFrequency>1.2e+00</amacrinCellsTemporalCutFrequency>
<V0CompressionParameter>9.5e-01</V0CompressionParameter>
<localAdaptintegration_tau>0.</localAdaptintegration_tau>
<localAdaptintegration_k>7.</localAdaptintegration_k></IPLmagno>
</opencv_storage>
Here are some hints, but the best parameter setup actually depends more on what you want to do with the retina than on the input images you feed it. Apart from the more specific case of High Dynamic Range (HDR) images, which require a specific setup for their luminance compression objective, the retina behaviors should be rather stable from content to content. Note that OpenCV is able to manage such HDR formats thanks to its OpenEXR compatibility.
Then, if your application requires detail enhancement prior to specific image processing, you need to know whether mean luminance information is required or not. If not, the retina can cancel or significantly reduce its energy, thus giving more visibility to higher spatial frequency details.
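As an alternative to editing the XML file by hand, the parameters can also be tweaked programmatically. The following is a minimal sketch assuming the *Retina::RetinaParameters* structure and its *setup* overload mentioned in the Retina class documentation, together with a *getParameters()* accessor; double-check the field names against your OpenCV version.
.. code-block:: cpp
// hedged sketch: tune a few parameters in code instead of via XML
cv::Retina::RetinaParameters params = myRetina->getParameters();
params.OPLandIplParvo.horizontalCellsGain = 0.3f;        // keep some mean luminance
params.OPLandIplParvo.photoreceptorsSpatialConstant = 0.53f;
params.IplMagno.amacrinCellsTemporalCutFrequency = 2.0f; // select faster events
myRetina->setup(params);                                 // apply the new setup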
Basic parameters
----------------
The simplest parameters are the following:
* **colorMode** : lets the retina process color information (if 1) or gray scale images (if 0). In the latter case, only the first channel of the input will be processed.
* **normaliseOutput** : each channel has this parameter. If its value is 1, the considered channel output is rescaled between 0 and 255. Be careful in this case with the Magnocellular output level (motion/transient channel detection): residual noise will also be rescaled!
**Note:** using color requires color channel multiplexing/demultiplexing, which requires more processing. You can expect much faster processing using gray levels: it requires around 30 products per pixel for all the retina processes, and it has recently been parallelized for multicore architectures. A gray-level allocation example is given below.
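The line below sketches a gray-level allocation; it assumes, as in the allocation example earlier in this tutorial, that the second constructor argument is the color mode flag.
.. code-block:: cpp
// gray-level only retina: no color multiplexing/demultiplexing, faster processing
cv::Ptr<cv::Retina> grayRetina = new cv::Retina(inputFrame.size(), false);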
Photo-receptors parameters
--------------------------
The following parameters act on the entry point of the retina, the photo-receptors, and impact all the following processes. These sensors are low pass spatio-temporal filters that smooth temporal and spatial data and also adjust their sensitivity to local luminance, thus improving detail extraction and high frequency noise canceling.
* **photoreceptorsLocalAdaptationSensitivity** between 0 and 1. Values close to 1 allow a high luminance log compression effect at the photo-receptors level. Values closer to 0 give a more linear sensitivity. Increased alone, it can burn the *Parvo (details channel)* output image. If adjusted together with **ganglionCellsSensitivity**, images can be very contrasted whatever the local luminance... at the price of a decrease in naturalness.
* **photoreceptorsTemporalConstant** sets the temporal constant of the low pass filter effect at the entry of the retina. High values lead to a strong temporal smoothing effect: moving objects are blurred and can disappear while static objects are favored. But when starting the retina processing, the stable state is reached later.
* **photoreceptorsSpatialConstant** specifies the spatial constant of the photo-receptors' low pass filter effect. This parameter specifies the minimum spatial signal period allowed in the following processing. Typically, this filter should cut high frequency noise. A 0 value doesn't cut any noise, while higher values cut high spatial frequencies first and then lower and lower ones... So do not set it too high if you want to see details of the input images! A good compromise for color images is 0.53, since this won't affect the color spectrum too much. Higher values would lead to gray and blurred output images.
Horizontal cells parameters
---------------------------
This parameter set tunes the neural network connected to the photo-receptors: the horizontal cells. It modulates photo-receptor sensitivity and completes the processing for the final spectral whitening (part of the spatial band pass effect, thus favoring visual detail enhancement).
* **horizontalCellsGain** is a critical parameter! If you are not interested in the mean luminance and focus on detail enhancement, then set it to zero. But if you want to keep some environment luminance data, let some low spatial frequencies pass into the system by setting a higher value (<1).
* **hcellsTemporalConstant** similar to the photo-receptors, this acts on the temporal constant of a low pass temporal filter that smooths input data. Here, a high value generates a strong retina after-effect while a lower value makes the retina more reactive.
* **hcellsSpatialConstant** is the spatial constant of these cells' low pass filter. It specifies the lowest spatial frequency allowed in the following processing. Visually, a high value leads to very low spatial frequency processing and to salient halo effects. Lower values reduce this effect, but do not go lower than the value of **photoreceptorsSpatialConstant**: these two parameters actually specify the spatial band-pass of the retina.
**NOTE:** after the processing managed by the previous parameters, the input data is cleaned from noise and its luminance is already partly enhanced. The following parameters act on the last processing stages of the two retina output signals.
Parvo (details channel) dedicated parameter
-------------------------------------------
* **ganglionCellsSensitivity** specifies the strength of the final local adaptation occurring at the output of this details dedicated channel. Parameter values remain between 0 and 1. Low values tend to give a linear response while higher values enforce (amplify) the remaining low-contrast areas.
**Note:** this parameter can correct burned (saturated) images by favoring the low-energy details of the visual scene, even in bright areas.
IPL Magno (motion/transient channel) parameters
-----------------------------------------------
Once the image information is cleaned, this channel acts as a high pass temporal filter that only keeps signals related to transient events (motion, etc.). A low pass spatial filter smooths the extracted transient data and a final logarithmic compression enhances low-energy transient events, thus improving event sensitivity.
* **parasolCells_beta** can be considered as an amplifier gain at the entry point of this processing stage. Generally set to 0.
* **parasolCells_tau** sets the temporal smoothing effect that can be added.
* **parasolCells_k** is the spatial constant of the spatial filtering effect; set it to a high value to favor low spatial frequency signals, which are less subject to residual noise.
* **amacrinCellsTemporalCutFrequency** specifies the temporal constant of the high pass filter. High values let slow transient events be selected.
* **V0CompressionParameter** specifies the strength of the log compression. Similar behavior to the previous description, but here it enforces the sensitivity to transient events.
* **localAdaptintegration_tau** generally set to 0; it has no real use here actually.
* **localAdaptintegration_k** specifies the size of the area on which local adaptation is performed. Low values lead to short-range local adaptation (higher sensitivity to noise); high values secure the log compression.


@ -0,0 +1,36 @@
.. _Table-Of-Content-Contrib:
*contrib* module. The additional contributions made available!
----------------------------------------------------------------
Here you will learn how to use the additional functionality of OpenCV available in the "contrib" module.
.. include:: ../../definitions/tocDefinitions.rst
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
=============== ======================================================
|RetinaDemoImg| **Title:** :ref:`Retina_Model`
*Compatibility:* > OpenCV 2.4
*Author:* |Author_AlexB|
You will learn how to process images and video streams with a retina filter model for detail enhancement, spatio-temporal noise removal, luminance correction and spatio-temporal event detection.
=============== ======================================================
.. |RetinaDemoImg| image:: images/retina_TreeHdr_small.jpg
:height: 90pt
:width: 90pt
.. raw:: latex
\pagebreak
.. toctree::
:hidden:
../retina_model/retina_model

@ -7,4 +7,5 @@
.. |Author_ArtemM| unicode:: Artem U+0020 Myagkov
.. |Author_FernandoI| unicode:: Fernando U+0020 Iglesias U+0020 Garc U+00ED a
.. |Author_EduardF| unicode:: Eduard U+0020 Feicho
.. |Author_AlexB| unicode:: Alexandre U+0020 Benoit


@ -156,6 +156,21 @@ As always, we would be happy to hear your comments and receive your contribution
:width: 80pt
:alt: gpu icon
* :ref:`Table-Of-Content-Contrib`
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
=========== =======================================================
|Contrib| Discover additional contributions to OpenCV.
=========== =======================================================
.. |Contrib| image:: images/retina.jpg
:height: 80pt
:width: 80pt
:alt: contrib icon
* :ref:`Table-Of-Content-iOS`
.. tabularcolumns:: m{100pt} m{300pt}
@ -204,5 +219,6 @@ As always, we would be happy to hear your comments and receive your contribution
objdetect/table_of_content_objdetect/table_of_content_objdetect
ml/table_of_content_ml/table_of_content_ml
gpu/table_of_content_gpu/table_of_content_gpu
contrib/table_of_content_contrib/table_of_content_contrib
ios/table_of_content_ios/table_of_content_ios
general/table_of_content_general/table_of_content_general

@ -14,6 +14,8 @@ Class which provides the main controls to the Gipsa/Listic labs human retina mo
* peripheral vision for sensitive transient signal detection (motion and events): the magnocellular pathway.
**NOTE : See the Retina tutorial in the tutorial/contrib section for complementary explanations.**
The retina can be set up with various parameters; by default, the retina cancels mean luminance and enforces all details of the visual scene. In order to use your own parameters, you can first call the *write(std::string fs)* method once, which will write a proper XML file with all default parameters. Then, tweak it on your own and reload it at any time using the *setup(std::string fs)* method. These methods update a *Retina::RetinaParameters* member structure that is described hereafter. ::
class Retina
@ -98,7 +100,9 @@ This retina filter code includes the research contributions of phd/research coll
Demos and experiments !
=======================
Take a look at the C++ examples provided with OpenCV :
**NOTE: In addition to the following examples, have a look at the Retina tutorial in the tutorial/contrib section for complementary explanations.**
Take a look at the C++ examples provided with OpenCV:
* **samples/cpp/retinademo.cpp** shows how to use the retina module for details enhancement (Parvo channel output) and transient maps observation (Magno channel output). You can play with images, video sequences and webcam video.
Typical uses are (provided your OpenCV installation is situated in folder *OpenCVReleaseFolder*)
@ -122,6 +126,7 @@ Take a look at the C++ examples provided with OpenCV :
Note that some sliders are made available to allow you to play with luminance compression.
Methods description
===================

@ -46,7 +46,6 @@ class AsyncServiceHelper
protected LoaderCallbackInterface mUserAppCallback;
protected String mOpenCVersion;
protected Context mAppContext;
protected int mStatus = LoaderCallbackInterface.SUCCESS;
protected static boolean mServiceInstallationProgress = false;
protected static boolean mLibraryInstallationProgress = false;
@ -171,12 +170,11 @@ class AsyncServiceHelper
{
if (mEngineService.getEngineVersion() < MINIMUM_ENGINE_VERSION)
{
mStatus = LoaderCallbackInterface.INCOMPATIBLE_MANAGER_VERSION;
Log.d(TAG, "Init finished with status " + mStatus);
Log.d(TAG, "Init finished with status " + LoaderCallbackInterface.INCOMPATIBLE_MANAGER_VERSION);
Log.d(TAG, "Unbind from service");
mAppContext.unbindService(mServiceConnection);
Log.d(TAG, "Calling using callback");
mUserAppCallback.onManagerConnected(mStatus);
mUserAppCallback.onManagerConnected(LoaderCallbackInterface.INCOMPATIBLE_MANAGER_VERSION);
return;
}
@ -193,43 +191,40 @@ class AsyncServiceHelper
}
public void install() {
Log.d(TAG, "Trying to install OpenCV lib via Google Play");
boolean result;
try
{
if (mEngineService.installVersion(mOpenCVersion))
{
mLibraryInstallationProgress = true;
result = true;
Log.d(TAG, "Package installation statred");
Log.d(TAG, "Unbind from service");
mAppContext.unbindService(mServiceConnection);
}
else
{
result = false;
Log.d(TAG, "OpenCV package was not installed!");
mStatus = LoaderCallbackInterface.MARKET_ERROR;
Log.d(TAG, "Init finished with status " + LoaderCallbackInterface.MARKET_ERROR);
Log.d(TAG, "Unbind from service");
mAppContext.unbindService(mServiceConnection);
Log.d(TAG, "Calling using callback");
mUserAppCallback.onManagerConnected(LoaderCallbackInterface.MARKET_ERROR);
}
} catch (RemoteException e) {
e.printStackTrace();
result = false;
mStatus = LoaderCallbackInterface.INIT_FAILED;
}
if (!result)
{
Log.d(TAG, "Init finished with status " + mStatus);
e.printStackTrace();
Log.d(TAG, "Init finished with status " + LoaderCallbackInterface.INIT_FAILED);
Log.d(TAG, "Unbind from service");
mAppContext.unbindService(mServiceConnection);
Log.d(TAG, "Calling using callback");
mUserAppCallback.onManagerConnected(mStatus);
mUserAppCallback.onManagerConnected(LoaderCallbackInterface.INIT_FAILED);
}
}
public void cancel() {
Log.d(TAG, "OpenCV library installation was canceled");
mStatus = LoaderCallbackInterface.INSTALL_CANCELED;
Log.d(TAG, "Init finished with status " + mStatus);
Log.d(TAG, "Init finished with status " + LoaderCallbackInterface.INSTALL_CANCELED);
Log.d(TAG, "Unbind from service");
mAppContext.unbindService(mServiceConnection);
Log.d(TAG, "Calling using callback");
mUserAppCallback.onManagerConnected(mStatus);
mUserAppCallback.onManagerConnected(LoaderCallbackInterface.INSTALL_CANCELED);
}
public void wait_install() {
Log.e(TAG, "Instalation was not started! Nothing to wait!");
@ -253,12 +248,11 @@ class AsyncServiceHelper
{
Log.d(TAG, "OpenCV library installation was canceled");
mLibraryInstallationProgress = false;
mStatus = LoaderCallbackInterface.INSTALL_CANCELED;
Log.d(TAG, "Init finished with status " + mStatus);
Log.d(TAG, "Init finished with status " + LoaderCallbackInterface.INSTALL_CANCELED);
Log.d(TAG, "Unbind from service");
mAppContext.unbindService(mServiceConnection);
Log.d(TAG, "Calling using callback");
mUserAppCallback.onManagerConnected(mStatus);
mUserAppCallback.onManagerConnected(LoaderCallbackInterface.INSTALL_CANCELED);
}
public void wait_install() {
Log.d(TAG, "Waiting for current installation");
@ -267,21 +261,25 @@ class AsyncServiceHelper
if (!mEngineService.installVersion(mOpenCVersion))
{
Log.d(TAG, "OpenCV package was not installed!");
mStatus = LoaderCallbackInterface.MARKET_ERROR;
Log.d(TAG, "Init finished with status " + mStatus);
Log.d(TAG, "Unbind from service");
mAppContext.unbindService(mServiceConnection);
Log.d(TAG, "Init finished with status " + LoaderCallbackInterface.MARKET_ERROR);
Log.d(TAG, "Calling using callback");
mUserAppCallback.onManagerConnected(mStatus);
mUserAppCallback.onManagerConnected(LoaderCallbackInterface.MARKET_ERROR);
}
else
{
Log.d(TAG, "Wating for package installation");
}
Log.d(TAG, "Unbind from service");
mAppContext.unbindService(mServiceConnection);
} catch (RemoteException e) {
e.printStackTrace();
mStatus = LoaderCallbackInterface.INIT_FAILED;
Log.d(TAG, "Init finished with status " + mStatus);
Log.d(TAG, "Init finished with status " + LoaderCallbackInterface.INIT_FAILED);
Log.d(TAG, "Unbind from service");
mAppContext.unbindService(mServiceConnection);
Log.d(TAG, "Calling using callback");
mUserAppCallback.onManagerConnected(mStatus);
mUserAppCallback.onManagerConnected(LoaderCallbackInterface.INIT_FAILED);
}
}
};
@ -297,33 +295,33 @@ class AsyncServiceHelper
String libs = mEngineService.getLibraryList(mOpenCVersion);
Log.d(TAG, "Library list: \"" + libs + "\"");
Log.d(TAG, "First attempt to load libs");
int status;
if (initOpenCVLibs(path, libs))
{
Log.d(TAG, "First attempt to load libs is OK");
mStatus = LoaderCallbackInterface.SUCCESS;
status = LoaderCallbackInterface.SUCCESS;
}
else
{
Log.d(TAG, "First attempt to load libs fails");
mStatus = LoaderCallbackInterface.INIT_FAILED;
status = LoaderCallbackInterface.INIT_FAILED;
}
Log.d(TAG, "Init finished with status " + mStatus);
Log.d(TAG, "Init finished with status " + status);
Log.d(TAG, "Unbind from service");
mAppContext.unbindService(mServiceConnection);
Log.d(TAG, "Calling using callback");
mUserAppCallback.onManagerConnected(mStatus);
mUserAppCallback.onManagerConnected(status);
}
}
catch (RemoteException e)
{
e.printStackTrace();
mStatus = LoaderCallbackInterface.INIT_FAILED;
Log.d(TAG, "Init finished with status " + mStatus);
Log.d(TAG, "Init finished with status " + LoaderCallbackInterface.INIT_FAILED);
Log.d(TAG, "Unbind from service");
mAppContext.unbindService(mServiceConnection);
Log.d(TAG, "Calling using callback");
mUserAppCallback.onManagerConnected(mStatus);
mUserAppCallback.onManagerConnected(LoaderCallbackInterface.INIT_FAILED);
}
}
}

@ -85,6 +85,7 @@ public abstract class CameraBridgeViewBase extends SurfaceView implements Surfac
private Object mSyncObject = new Object();
public void surfaceChanged(SurfaceHolder arg0, int arg1, int arg2, int arg3) {
Log.d(TAG, "call surfaceChanged event");
synchronized(mSyncObject) {
if (!mSurfaceExist) {
mSurfaceExist = true;
@ -163,7 +164,7 @@ public abstract class CameraBridgeViewBase extends SurfaceView implements Surfac
private void checkCurrentState() {
int targetState;
if (mEnabled && mSurfaceExist) {
if (mEnabled && mSurfaceExist && getVisibility() == VISIBLE) {
targetState = STARTED;
} else {
targetState = STOPPED;

@ -40,6 +40,8 @@ public class JavaCameraView extends CameraBridgeViewBase implements PreviewCallb
private Thread mThread;
private boolean mStopThread;
private SurfaceTexture mSurfaceTexture;
public static class JavaCameraSizeAccessor implements ListItemAccessor {
public int getWidth(Object obj) {
@ -101,6 +103,7 @@ public class JavaCameraView extends CameraBridgeViewBase implements PreviewCallb
Size frameSize = calculateCameraFrameSize(sizes, new JavaCameraSizeAccessor(), width, height);
params.setPreviewFormat(ImageFormat.NV21);
Log.d(TAG, "Set preview size to " + Integer.valueOf((int)frameSize.width) + "x" + Integer.valueOf((int)frameSize.height));
params.setPreviewSize((int)frameSize.width, (int)frameSize.height);
List<String> FocusModes = params.getSupportedFocusModes();
@ -127,9 +130,9 @@ public class JavaCameraView extends CameraBridgeViewBase implements PreviewCallb
AllocateCache();
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB) {
SurfaceTexture tex = new SurfaceTexture(MAGIC_TEXTURE_ID);
mSurfaceTexture = new SurfaceTexture(MAGIC_TEXTURE_ID);
getHolder().setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
mCamera.setPreviewTexture(tex);
mCamera.setPreviewTexture(mSurfaceTexture);
} else
mCamera.setPreviewDisplay(null);
} catch (IOException e) {

@ -378,7 +378,7 @@ namespace cv
// disabled until fix crash
// support all types
CV_EXPORTS Scalar sum(const oclMat &m);
CV_EXPORTS Scalar absSum(const oclMat &m);
CV_EXPORTS Scalar sqrSum(const oclMat &m);
//! finds global minimum and maximum array elements and returns their values
@ -1738,5 +1738,12 @@ namespace cv
}
}
#if _MSC_VER >= 1200
#pragma warning( push)
#pragma warning( disable: 4267)
#endif
#include "opencv2/ocl/matrix_operations.hpp"
#if _MSC_VER >= 1200
#pragma warning( pop)
#endif
#endif /* __OPENCV_GPU_HPP__ */

@ -137,7 +137,7 @@ TEST_F(Haar, FaceDetect)
printf( "cpudetection time = %g ms\n", t / (LOOP_TIMES * (double)cvGetTickFrequency() * 1000.) );
cv::ocl::oclMat image;
CvSeq *_objects;
CvSeq *_objects=NULL;
t = (double)cvGetTickCount();
for(int k = 0; k < LOOP_TIMES; k++)
{

@ -103,10 +103,10 @@ TEST_P(HOG, Performance)
ASSERT_FALSE(img.empty());
// define HOG related arguments
float scale = 1.05;
float scale = 1.05f;
//int nlevels = 13;
float gr_threshold = 8;
float hit_threshold = 1.4;
int gr_threshold = 8;
float hit_threshold = 1.4f;
//bool hit_threshold_auto = true;
int win_width = is48 ? 48 : 64;

@ -54,11 +54,11 @@ using namespace cvtest;
using namespace testing;
using namespace std;
extern std::string workdir;
#define FILTER_IMAGE "../../../samples/gpu/road.png"
TEST(SURF, Performance)
{
cv::Mat img = readImage(workdir+"lena.jpg", cv::IMREAD_GRAYSCALE);
cv::Mat img = readImage(FILTER_IMAGE, cv::IMREAD_GRAYSCALE);
ASSERT_FALSE(img.empty());
ocl::SURF_OCL d_surf;

@ -55,13 +55,13 @@
#include "cvconfig.h"
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/calib3d/calib3d.hpp"
//#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/video/video.hpp"
#include "opencv2/ts/ts.hpp"
#include "opencv2/ts/ts_perf.hpp"
#include "opencv2/ocl/ocl.hpp"
#include "opencv2/nonfree/nonfree.hpp"
//#include "opencv2/nonfree/nonfree.hpp"
#include "utility.hpp"
#include "interpolation.hpp"

@ -45,7 +45,7 @@
#ifdef PRINT_KERNEL_RUN_TIME
#define LOOP_TIMES 1
#else
#define LOOP_TIMES 100
#define LOOP_TIMES 1
#endif
#define MWIDTH 1920
#define MHEIGHT 1080

@ -813,6 +813,22 @@ Scalar cv::ocl::sum(const oclMat &src)
return func(src, 0);
}
Scalar cv::ocl::absSum(const oclMat &src)
{
if(src.clCxt->impl->double_support == 0 && src.depth() == CV_64F)
{
CV_Error(CV_GpuNotSupported, "selected device doesn't support double");
}
static sumFunc functab[2] =
{
arithmetic_sum<float>,
arithmetic_sum<double>
};
sumFunc func;
func = functab[src.clCxt->impl->double_support];
return func(src, 1);
}
Scalar cv::ocl::sqrSum(const oclMat &src)
{
@ -945,13 +961,17 @@ template <typename T> void arithmetic_minMax(const oclMat &src, double *minVal,
T *p = new T[groupnum * vlen * 2];
memset(p, 0, dbsize);
openCLReadBuffer(clCxt, dstBuffer, (void *)p, dbsize);
for(int i = 0; i < vlen * (int)groupnum; i++)
{
*minVal = *minVal < p[i] ? *minVal : p[i];
if(minVal != NULL){
for(int i = 0; i < vlen * (int)groupnum; i++)
{
*minVal = *minVal < p[i] ? *minVal : p[i];
}
}
for(int i = vlen * (int)groupnum; i < 2 * vlen * (int)groupnum; i++)
{
*maxVal = *maxVal > p[i] ? *maxVal : p[i];
if(maxVal != NULL){
for(int i = vlen * (int)groupnum; i < 2 * vlen * (int)groupnum; i++)
{
*maxVal = *maxVal > p[i] ? *maxVal : p[i];
}
}
delete[] p;
openCLFree(dstBuffer);

@ -1548,8 +1548,7 @@ void cv::ocl::BruteForceMatcher_OCL_base::knnMatch(const oclMat &query, vector<
temp.reserve(2 * k);
matches.resize(query.rows);
for(size_t queryIdx = 0; queryIdx < matches.size(); queryIdx++ )
matches[queryIdx].reserve(k);
for_each(matches.begin(), matches.end(), bind2nd(mem_fun_ref(&vector<DMatch>::reserve), k));
for (size_t imgIdx = 0, size = trainDescCollection.size(); imgIdx < size; ++imgIdx)
{
@ -1573,15 +1572,8 @@ void cv::ocl::BruteForceMatcher_OCL_base::knnMatch(const oclMat &query, vector<
if (compactResult)
{
size_t i, j = 0;
for( i = 0; i < matches.size(); i++ )
if( !matches[i].empty() )
{
if( i > j )
matches[j] = matches[i];
j++;
}
matches.resize(j);
vector< vector<DMatch> >::iterator new_end = remove_if(matches.begin(), matches.end(), mem_fun_ref(&vector<DMatch>::empty));
matches.erase(new_end, matches.end());
}
}
}

@ -76,7 +76,7 @@ namespace cv
//////////////////////////////////////////////////////////////////////////////
// buildWarpPlaneMaps
void cv::ocl::buildWarpPlaneMaps(Size src_size, Rect dst_roi, const Mat &K, const Mat &R, const Mat &T,
void cv::ocl::buildWarpPlaneMaps(Size /*src_size*/, Rect dst_roi, const Mat &K, const Mat &R, const Mat &T,
float scale, oclMat &map_x, oclMat &map_y)
{
CV_Assert(K.size() == Size(3, 3) && K.type() == CV_32F);
@ -121,7 +121,7 @@ void cv::ocl::buildWarpPlaneMaps(Size src_size, Rect dst_roi, const Mat &K, cons
//////////////////////////////////////////////////////////////////////////////
// buildWarpCylyndricalMaps
void cv::ocl::buildWarpCylindricalMaps(Size src_size, Rect dst_roi, const Mat &K, const Mat &R, float scale,
void cv::ocl::buildWarpCylindricalMaps(Size /*src_size*/, Rect dst_roi, const Mat &K, const Mat &R, float scale,
oclMat &map_x, oclMat &map_y)
{
CV_Assert(K.size() == Size(3, 3) && K.type() == CV_32F);
@ -160,7 +160,7 @@ void cv::ocl::buildWarpCylindricalMaps(Size src_size, Rect dst_roi, const Mat &K
//////////////////////////////////////////////////////////////////////////////
// buildWarpSphericalMaps
void cv::ocl::buildWarpSphericalMaps(Size src_size, Rect dst_roi, const Mat &K, const Mat &R, float scale,
void cv::ocl::buildWarpSphericalMaps(Size /*src_size*/, Rect dst_roi, const Mat &K, const Mat &R, float scale,
oclMat &map_x, oclMat &map_y)
{
CV_Assert(K.size() == Size(3, 3) && K.type() == CV_32F);

@ -96,7 +96,7 @@ namespace
size_t gt[3] = {src.cols, src.rows, 1}, lt[3] = {16, 16, 1};
openCLExecuteKernel(src.clCxt, &cvt_color, "RGB2Gray", gt, lt, args, -1, -1, build_options);
}
void cvtColor_caller(const oclMat &src, oclMat &dst, int code, int dcn)
void cvtColor_caller(const oclMat &src, oclMat &dst, int code, int /*dcn*/)
{
Size sz = src.size();
int scn = src.oclchannels(), depth = src.depth(), bidx;

@ -88,7 +88,7 @@ void cv::ocl::columnSum(const oclMat &src, oclMat &dst)
args.push_back( make_pair( sizeof(cl_int), (void *)&dst.step));
size_t globalThreads[3] = {dst.cols, 1, 1};
size_t localThreads[3] = {16, 16, 1};
size_t localThreads[3] = {256, 1, 1};
openCLExecuteKernel(clCxt, &imgproc_columnsum, kernelName, globalThreads, localThreads, args, src.channels(), src.depth());

@ -1235,7 +1235,7 @@ void linearRowFilter_gpu(const oclMat &src, const oclMat &dst, oclMat mat_kernel
openCLExecuteKernel(clCxt, &filter_sep_row, kernelName, globalThreads, localThreads, args, channels, src.depth(), compile_option);
}
Ptr<BaseRowFilter_GPU> cv::ocl::getLinearRowFilter_GPU(int srcType, int bufType, const Mat &rowKernel, int anchor, int bordertype)
Ptr<BaseRowFilter_GPU> cv::ocl::getLinearRowFilter_GPU(int srcType, int /*bufType*/, const Mat &rowKernel, int anchor, int bordertype)
{
static const gpuFilter1D_t gpuFilter1D_callers[6] =
{
@ -1391,7 +1391,7 @@ void linearColumnFilter_gpu(const oclMat &src, const oclMat &dst, oclMat mat_ker
openCLExecuteKernel(clCxt, &filter_sep_col, kernelName, globalThreads, localThreads, args, -1, -1, compile_option);
}
Ptr<BaseColumnFilter_GPU> cv::ocl::getLinearColumnFilter_GPU(int bufType, int dstType, const Mat &columnKernel, int anchor, int bordertype, double delta)
Ptr<BaseColumnFilter_GPU> cv::ocl::getLinearColumnFilter_GPU(int /*bufType*/, int dstType, const Mat &columnKernel, int anchor, int bordertype, double /*delta*/)
{
static const gpuFilter1D_t gpuFilter1D_callers[6] =
{

@ -162,7 +162,7 @@ typedef _ALIGNED_ON(128) struct GpuHidHaarFeature
_ALIGNED_ON(4) int p3 ;
_ALIGNED_ON(4) float weight ;
}
_ALIGNED_ON(32) rect[CV_HAAR_FEATURE_MAX] ;
/*_ALIGNED_ON(32)*/ rect[CV_HAAR_FEATURE_MAX] ;
}
GpuHidHaarFeature;
@ -867,8 +867,8 @@ CvSeq *cv::ocl::OclCascadeClassifier::oclHaarDetectObjects( oclMat &gimg, CvMemS
std::vector<cv::Rect> rectList;
std::vector<int> rweights;
double factor;
int datasize;
int totalclassifier;
int datasize=0;
int totalclassifier=0;
void *out;
GpuHidHaarClassifierCascade *gcascade;

@ -644,7 +644,7 @@ namespace cv
size_t kernelWorkGroupSize;
openCLSafeCall(clGetKernelWorkGroupInfo(kernel, clCxt->impl->devices[0],
CL_KERNEL_WORK_GROUP_SIZE, sizeof(size_t), &kernelWorkGroupSize, 0));
CV_DbgAssert( (localThreads[0] <= clCxt->impl->maxWorkItemSizes[0]) &&
CV_Assert( (localThreads[0] <= clCxt->impl->maxWorkItemSizes[0]) &&
(localThreads[1] <= clCxt->impl->maxWorkItemSizes[1]) &&
(localThreads[2] <= clCxt->impl->maxWorkItemSizes[2]) &&
((localThreads[0] * localThreads[1] * localThreads[2]) <= kernelWorkGroupSize) &&

@ -218,7 +218,7 @@ void interpolate::vectorWarp(const oclMat &src, const oclMat &u, const oclMat &v
normalizeKernel(buffer, src.rows, b_offset, d_offset);
}
void interpolate::blendFrames(const oclMat &frame0, const oclMat &frame1, const oclMat &buffer, float pos, oclMat &newFrame, cl_mem &tex_src0, cl_mem &tex_src1)
void interpolate::blendFrames(const oclMat &frame0, const oclMat &/*frame1*/, const oclMat &buffer, float pos, oclMat &newFrame, cl_mem &tex_src0, cl_mem &tex_src1)
{
int step = buffer.step / sizeof(float);

@ -150,7 +150,7 @@ namespace
// Implementation
//
DjSets DjSets::operator = (const DjSets &obj)
DjSets DjSets::operator = (const DjSets &/*obj*/)
{
//cout << "Invalid DjSets constructor\n";
CV_Error(-1, "Invalid DjSets constructor\n");

@ -47,7 +47,7 @@
#define __OPENCV_PRECOMP_H__
#if _MSC_VER >= 1200
#pragma warning( disable: 4244 4251 4710 4711 4514 4996 )
#pragma warning( disable: 4267 4324 4244 4251 4710 4711 4514 4996 )
#endif
#ifdef HAVE_CVCONFIG_H

@ -581,26 +581,26 @@ void pyrDown_cus(const oclMat &src, oclMat &dst)
//
//void callF(const oclMat& src, oclMat& dst, MultiplyScalar op, int mask)
//{
// Mat srcTemp;
// Mat dstTemp;
// src.download(srcTemp);
// dst.download(dstTemp);
// Mat srcTemp;
// Mat dstTemp;
// src.download(srcTemp);
// dst.download(dstTemp);
//
// int i;
// int j;
// int k;
// for(i = 0; i < srcTemp.rows; i++)
// {
// for(j = 0; j < srcTemp.cols; j++)
// {
// for(k = 0; k < srcTemp.channels(); k++)
// {
// ((float*)dstTemp.data)[srcTemp.channels() * (i * srcTemp.rows + j) + k] = (float)op(((float*)srcTemp.data)[srcTemp.channels() * (i * srcTemp.rows + j) + k]);
// }
// }
// }
// int i;
// int j;
// int k;
// for(i = 0; i < srcTemp.rows; i++)
// {
// for(j = 0; j < srcTemp.cols; j++)
// {
// for(k = 0; k < srcTemp.channels(); k++)
// {
// ((float*)dstTemp.data)[srcTemp.channels() * (i * srcTemp.rows + j) + k] = (float)op(((float*)srcTemp.data)[srcTemp.channels() * (i * srcTemp.rows + j) + k]);
// }
// }
// }
//
// dst = dstTemp;
// dst = dstTemp;
//}
//
//static inline bool isAligned(const unsigned char* ptr, size_t size)
@ -622,54 +622,54 @@ void pyrDown_cus(const oclMat &src, oclMat &dst)
// return;
// }
//
// Mat srcTemp;
// Mat dstTemp;
// src.download(srcTemp);
// dst.download(dstTemp);
// Mat srcTemp;
// Mat dstTemp;
// src.download(srcTemp);
// dst.download(dstTemp);
//
// int x_shifted;
// int x_shifted;
//
// int i;
// int j;
// for(i = 0; i < srcTemp.rows; i++)
// {
// const double* srcRow = (const double*)srcTemp.data + i * srcTemp.rows;
// int i;
// int j;
// for(i = 0; i < srcTemp.rows; i++)
// {
// const double* srcRow = (const double*)srcTemp.data + i * srcTemp.rows;
// double* dstRow = (double*)dstTemp.data + i * dstTemp.rows;;
//
// for(j = 0; j < srcTemp.cols; j++)
// {
// x_shifted = j * 4;
// for(j = 0; j < srcTemp.cols; j++)
// {
// x_shifted = j * 4;
//
// if(x_shifted + 4 - 1 < srcTemp.cols)
// {
// dstRow[x_shifted ] = op(srcRow[x_shifted ]);
// dstRow[x_shifted + 1] = op(srcRow[x_shifted + 1]);
// dstRow[x_shifted + 2] = op(srcRow[x_shifted + 2]);
// dstRow[x_shifted + 3] = op(srcRow[x_shifted + 3]);
// }
// else
// {
// for (int real_x = x_shifted; real_x < srcTemp.cols; ++real_x)
// {
// ((float*)dstTemp.data)[i * srcTemp.rows + real_x] = op(((float*)srcTemp.data)[i * srcTemp.rows + real_x]);
// }
// }
// }
// }
// if(x_shifted + 4 - 1 < srcTemp.cols)
// {
// dstRow[x_shifted ] = op(srcRow[x_shifted ]);
// dstRow[x_shifted + 1] = op(srcRow[x_shifted + 1]);
// dstRow[x_shifted + 2] = op(srcRow[x_shifted + 2]);
// dstRow[x_shifted + 3] = op(srcRow[x_shifted + 3]);
// }
// else
// {
// for (int real_x = x_shifted; real_x < srcTemp.cols; ++real_x)
// {
// ((float*)dstTemp.data)[i * srcTemp.rows + real_x] = op(((float*)srcTemp.data)[i * srcTemp.rows + real_x]);
// }
// }
// }
// }
//}
//
//void multiply(const oclMat& src1, double val, oclMat& dst, double scale = 1.0f);
//void multiply(const oclMat& src1, double val, oclMat& dst, double scale)
//{
// MultiplyScalar op(val, scale);
// //if(src1.channels() == 1 && dst.channels() == 1)
// //{
// // callT(src1, dst, op, 0);
// //}
// //else
// //{
// callF(src1, dst, op, 0);
// //}
// //if(src1.channels() == 1 && dst.channels() == 1)
// //{
// // callT(src1, dst, op, 0);
// //}
// //else
// //{
// callF(src1, dst, op, 0);
// //}
//}
cl_mem bindTexture(const oclMat &mat, int depth, int channels)
@ -792,6 +792,12 @@ void lkSparse_run(oclMat &I, oclMat &J,
void cv::ocl::PyrLKOpticalFlow::sparse(const oclMat &prevImg, const oclMat &nextImg, const oclMat &prevPts, oclMat &nextPts, oclMat &status, oclMat *err)
{
if (prevImg.clCxt->impl->devName.find("Intel(R) HD Graphics") != string::npos)
{
cout << " Intel HD GPU device unsupported " << endl;
return;
}
if (prevPts.empty())
{
nextPts.release();

@ -197,11 +197,11 @@ public:
throw std::exception();
//!FIXME
// temp fix for missing min overload
oclMat temp(mask.size(), mask.type());
temp.setTo(Scalar::all(1.0));
//cv::ocl::min(mask, temp, surf_.mask1); ///////// disable this
integral(surf_.mask1, surf_.maskSum);
bindImgTex(surf_.maskSum, maskSumTex);
//oclMat temp(mask.size(), mask.type());
//temp.setTo(Scalar::all(1.0));
////cv::ocl::min(mask, temp, surf_.mask1); ///////// disable this
//integral(surf_.mask1, surf_.maskSum);
//bindImgTex(surf_.maskSum, maskSumTex);
}
}
@ -415,6 +415,11 @@ void cv::ocl::SURF_OCL::operator()(const oclMat &img, const oclMat &mask, oclMat
{
if (!img.empty())
{
if (img.clCxt->impl->devName.find("Intel(R) HD Graphics") != string::npos)
{
cout << " Intel HD GPU device unsupported " << endl;
return;
}
SURF_OCL_Invoker surf(*this, img, mask);
surf.detectKeypoints(keypoints);
@ -426,6 +431,11 @@ void cv::ocl::SURF_OCL::operator()(const oclMat &img, const oclMat &mask, oclMat
{
if (!img.empty())
{
if (img.clCxt->impl->devName.find("Intel(R) HD Graphics") != string::npos)
{
cout << " Intel HD GPU device unsupported " << endl;
return;
}
SURF_OCL_Invoker surf(*this, img, mask);
if (!useProvidedKeypoints)

@ -55,13 +55,13 @@
#include "cvconfig.h"
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/calib3d/calib3d.hpp"
//#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/video/video.hpp"
#include "opencv2/ts/ts.hpp"
#include "opencv2/ts/ts_perf.hpp"
#include "opencv2/ocl/ocl.hpp"
#include "opencv2/nonfree/nonfree.hpp"
//#include "opencv2/nonfree/nonfree.hpp"
#include "utility.hpp"
#include "interpolation.hpp"

@ -199,8 +199,8 @@ TEST_P(HOG, Detect)
int threshold = 10;
int val = 32;
d_comp[0] = d_found.size();
comp[0] = found.size();
d_comp[0] = (int)d_found.size();
comp[0] = (int)found.size();
if (winSize == cv::Size(48, 96))
{
for(int i = 0; i < (int)d_found.size(); i++)

@ -213,8 +213,8 @@ COOR do_meanShift(int x0, int y0, uchar *sptr, uchar *dptr, int sstep, cv::Size
dptr[3] = (uchar)c3;
COOR coor;
coor.x = x0;
coor.y = y0;
coor.x = (short)x0;
coor.y = (short)y0;
return coor;
}

@@ -4,8 +4,8 @@ import org.opencv.android.BaseLoaderCallback;
import org.opencv.android.LoaderCallbackInterface;
import org.opencv.android.OpenCVLoader;
import org.opencv.core.Mat;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener;
import org.opencv.android.JavaCameraView;
import android.os.Bundle;
import android.app.Activity;
@@ -18,13 +18,13 @@ import android.view.View;
public class Puzzle15Activity extends Activity implements CvCameraViewListener, View.OnTouchListener {
private static final String TAG = "Sample::Puzzle15::Activity";
private static final String TAG = "Sample::Puzzle15::Activity";
private JavaCameraView mOpenCvCameraView;
private Puzzle15Processor mPuzzle15;
private CameraBridgeViewBase mOpenCvCameraView;
private Puzzle15Processor mPuzzle15;
private int mGameWidth;
private int mGameHeight;
private int mGameWidth;
private int mGameHeight;
private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
@@ -54,7 +54,7 @@ public class Puzzle15Activity extends Activity implements CvCameraViewListener,
setContentView(R.layout.activity_puzzle15);
mOpenCvCameraView = (JavaCameraView) findViewById(R.id.puzzle_activity_surface_view);
mOpenCvCameraView = (CameraBridgeViewBase) findViewById(R.id.puzzle_activity_surface_view);
mOpenCvCameraView.setCvCameraViewListener(this);
mPuzzle15 = new Puzzle15Processor();
mPuzzle15.prepareNewGame();

@@ -20,14 +20,13 @@ public class Puzzle15Processor {
private static final int GRID_AREA = GRID_SIZE * GRID_SIZE;
private static final int GRID_EMPTY_INDEX = GRID_AREA - 1;
private static final String TAG = "Puzzle15Processor";
private static final Scalar GRID_EMPTY_COLOR = new Scalar(0x33, 0x33, 0x33, 0xFF);
private int[] mIndexes;
private int[] mTextWidths;
private int[] mTextHeights;
private Mat mRgba15;
private Mat[] mCells;
private Mat[] mCells15;
private boolean mShowTileNumbers = true;
@@ -54,8 +53,6 @@ public class Puzzle15Processor {
*/
public synchronized void prepareGameSize(int width, int height) {
mRgba15 = new Mat(height, width, CvType.CV_8UC4);
mCells = new Mat[GRID_AREA];
mCells15 = new Mat[GRID_AREA];
for (int i = 0; i < GRID_SIZE; i++) {
@@ -76,6 +73,7 @@
* the tiles as specified by mIndexes array
*/
public synchronized Mat puzzleFrame(Mat inputPicture) {
Mat[] cells = new Mat[GRID_AREA];
int rows = inputPicture.rows();
int cols = inputPicture.cols();
@@ -85,7 +83,7 @@
for (int i = 0; i < GRID_SIZE; i++) {
for (int j = 0; j < GRID_SIZE; j++) {
int k = i * GRID_SIZE + j;
mCells[k] = inputPicture.submat(i * inputPicture.rows() / GRID_SIZE, (i + 1) * inputPicture.rows() / GRID_SIZE, j * inputPicture.cols()/ GRID_SIZE, (j + 1) * inputPicture.cols() / GRID_SIZE);
cells[k] = inputPicture.submat(i * inputPicture.rows() / GRID_SIZE, (i + 1) * inputPicture.rows() / GRID_SIZE, j * inputPicture.cols()/ GRID_SIZE, (j + 1) * inputPicture.cols() / GRID_SIZE);
}
}
@@ -96,9 +94,9 @@
for (int i = 0; i < GRID_AREA; i++) {
int idx = mIndexes[i];
if (idx == GRID_EMPTY_INDEX)
mCells15[i].setTo(new Scalar(0x33, 0x33, 0x33, 0xFF));
mCells15[i].setTo(GRID_EMPTY_COLOR);
else {
mCells[idx].copyTo(mCells15[i]);
cells[idx].copyTo(mCells15[i]);
if (mShowTileNumbers) {
Core.putText(mCells15[i], Integer.toString(1 + idx), new Point((cols / GRID_SIZE - mTextWidths[idx]) / 2,
(rows / GRID_SIZE + mTextHeights[idx]) / 2), 3/* CV_FONT_HERSHEY_COMPLEX */, 1, new Scalar(255, 0, 0, 255), 2);
@@ -106,6 +104,9 @@
}
}
for (int i = 0; i < GRID_AREA; i++)
cells[i].release();
drawGrid(cols, rows, mRgba15);
return mRgba15;

@@ -12,8 +12,8 @@ import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener;
import org.opencv.android.JavaCameraView;
import org.opencv.imgproc.Imgproc;
import android.app.Activity;
@@ -26,9 +26,9 @@ import android.view.WindowManager;
import android.view.View.OnTouchListener;
public class ColorBlobDetectionActivity extends Activity implements OnTouchListener, CvCameraViewListener {
private static final String TAG = "OCVSample::Activity";
private static final String TAG = "OCVSample::Activity";
private boolean mIsColorSelected = false;
private boolean mIsColorSelected = false;
private Mat mRgba;
private Scalar mBlobColorRgba;
private Scalar mBlobColorHsv;
@@ -37,7 +37,7 @@ public class ColorBlobDetectionActivity extends Activity implements OnTouchListe
private Size SPECTRUM_SIZE;
private Scalar CONTOUR_COLOR;
private JavaCameraView mOpenCvCameraView;
private CameraBridgeViewBase mOpenCvCameraView;
private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
@Override
@@ -71,7 +71,7 @@ public class ColorBlobDetectionActivity extends Activity implements OnTouchListe
setContentView(R.layout.color_blob_detection_surface_view);
mOpenCvCameraView = (JavaCameraView)findViewById(R.id.color_blob_detection_activity_surface_view);
mOpenCvCameraView = (CameraBridgeViewBase) findViewById(R.id.color_blob_detection_activity_surface_view);
mOpenCvCameraView.setCvCameraViewListener(this);
}

@@ -14,7 +14,7 @@ import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.android.JavaCameraView;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;
@@ -53,7 +53,7 @@ public class FdActivity extends Activity implements CvCameraViewListener {
private float mRelativeFaceSize = 0;
private int mAbsoluteFaceSize = 0;
private JavaCameraView mOpenCvCameraView;
private CameraBridgeViewBase mOpenCvCameraView;
private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
@Override
@@ -125,7 +125,7 @@ public class FdActivity extends Activity implements CvCameraViewListener {
setContentView(R.layout.face_detect_surface_view);
mOpenCvCameraView = (JavaCameraView)findViewById(R.id.fd_activity_surface_view);
mOpenCvCameraView = (CameraBridgeViewBase) findViewById(R.id.fd_activity_surface_view);
mOpenCvCameraView.setCvCameraViewListener(this);
}

@@ -13,7 +13,7 @@ import org.opencv.core.MatOfInt;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.android.JavaCameraView;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener;
import org.opencv.imgproc.Imgproc;
@@ -45,7 +45,7 @@ public class ImageManipulationsActivity extends Activity implements CvCameraView
private MenuItem mItemPreviewZoom;
private MenuItem mItemPreviewPixelize;
private MenuItem mItemPreviewPosterize;
private JavaCameraView mOpenCvCameraView;
private CameraBridgeViewBase mOpenCvCameraView;
private Size mSize0;
private Size mSizeRgba;
@@ -106,7 +106,7 @@ public class ImageManipulationsActivity extends Activity implements CvCameraView
setContentView(R.layout.image_manipulations_surface_view);
mOpenCvCameraView = (JavaCameraView)findViewById(R.id.image_manipulations_activity_surface_view);
mOpenCvCameraView = (CameraBridgeViewBase) findViewById(R.id.image_manipulations_activity_surface_view);
mOpenCvCameraView.setCvCameraViewListener(this);
}

@@ -6,6 +6,13 @@
<org.opencv.android.JavaCameraView
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:id="@+id/tutorial1_activity_surface_view" />
android:visibility="gone"
android:id="@+id/tutorial1_activity_java_surface_view" />
<org.opencv.android.NativeCameraView
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:visibility="gone"
android:id="@+id/tutorial1_activity_native_surface_view" />
</LinearLayout>

@@ -4,19 +4,25 @@ import org.opencv.android.BaseLoaderCallback;
import org.opencv.android.LoaderCallbackInterface;
import org.opencv.android.OpenCVLoader;
import org.opencv.core.Mat;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener;
import org.opencv.android.JavaCameraView;
import android.app.Activity;
import android.os.Bundle;
import android.util.Log;
import android.view.Menu;
import android.view.MenuItem;
import android.view.SurfaceView;
import android.view.Window;
import android.view.WindowManager;
import android.widget.Toast;
public class Sample1Java extends Activity implements CvCameraViewListener {
private static final String TAG = "OCVSample::Activity";
private JavaCameraView mOpenCvCameraView;
private CameraBridgeViewBase mOpenCvCameraView;
private boolean mIsJavaCamera = true;
private MenuItem mItemSwitchCamera = null;
private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
@Override
@@ -49,7 +55,13 @@ public class Sample1Java extends Activity implements CvCameraViewListener {
setContentView(R.layout.tutorial1_surface_view);
mOpenCvCameraView = (JavaCameraView)findViewById(R.id.tutorial1_activity_surface_view);
if (mIsJavaCamera)
mOpenCvCameraView = (CameraBridgeViewBase) findViewById(R.id.tutorial1_activity_java_surface_view);
else
mOpenCvCameraView = (CameraBridgeViewBase) findViewById(R.id.tutorial1_activity_native_surface_view);
mOpenCvCameraView.setVisibility(SurfaceView.VISIBLE);
mOpenCvCameraView.setCvCameraViewListener(this);
}
@@ -72,6 +84,40 @@ public class Sample1Java extends Activity implements CvCameraViewListener {
mOpenCvCameraView.disableView();
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
Log.i(TAG, "called onCreateOptionsMenu");
mItemSwitchCamera = menu.add("Switch camera");
return true;
}
@Override
public boolean onOptionsItemSelected(MenuItem item) {
String toastMessage = "";
Log.i(TAG, "called onOptionsItemSelected; selected item: " + item);
if (item == mItemSwitchCamera) {
mOpenCvCameraView.setVisibility(SurfaceView.GONE);
mIsJavaCamera = !mIsJavaCamera;
if (mIsJavaCamera) {
mOpenCvCameraView = (CameraBridgeViewBase) findViewById(R.id.tutorial1_activity_java_surface_view);
toastMesage = "Java Camera";
} else {
mOpenCvCameraView = (CameraBridgeViewBase) findViewById(R.id.tutorial1_activity_native_surface_view);
toastMesage = "Native Camera";
}
mOpenCvCameraView.setVisibility(SurfaceView.VISIBLE);
mOpenCvCameraView.setCvCameraViewListener(this);
mOpenCvCameraView.enableView();
Toast toast = Toast.makeText(this, toastMessage, Toast.LENGTH_LONG);
toast.show();
}
return true;
}
public void onCameraViewStarted(int width, int height) {
}

@@ -8,7 +8,7 @@ import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.android.NativeCameraView;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener;
import org.opencv.highgui.Highgui;
import org.opencv.imgproc.Imgproc;
@@ -22,20 +22,20 @@ import android.view.Window
import android.view.WindowManager;
public class Sample2NativeCamera extends Activity implements CvCameraViewListener {
private static final String TAG = "OCVSample::Activity";
private static final String TAG = "OCVSample::Activity";
public static final int VIEW_MODE_RGBA = 0;
public static final int VIEW_MODE_GRAY = 1;
public static final int VIEW_MODE_CANNY = 2;
public static final int VIEW_MODE_RGBA = 0;
public static final int VIEW_MODE_GRAY = 1;
public static final int VIEW_MODE_CANNY = 2;
private static int viewMode = VIEW_MODE_RGBA;
private MenuItem mItemPreviewRGBA;
private MenuItem mItemPreviewGray;
private MenuItem mItemPreviewCanny;
private Mat mRgba;
private Mat mIntermediateMat;
private MenuItem mItemPreviewRGBA;
private MenuItem mItemPreviewGray;
private MenuItem mItemPreviewCanny;
private Mat mRgba;
private Mat mIntermediateMat;
private NativeCameraView mOpenCvCameraView;
private CameraBridgeViewBase mOpenCvCameraView;
private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
@Override
@@ -68,7 +68,7 @@ public class Sample2NativeCamera extends Activity implements CvCameraViewListene
setContentView(R.layout.tutorial2_surface_view);
mOpenCvCameraView = (NativeCameraView)findViewById(R.id.tutorial2_activity_surface_view);
mOpenCvCameraView = (CameraBridgeViewBase) findViewById(R.id.tutorial2_activity_surface_view);
mOpenCvCameraView.setCvCameraViewListener(this);
}

@@ -5,7 +5,7 @@ import org.opencv.android.LoaderCallbackInterface;
import org.opencv.android.OpenCVLoader;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.android.JavaCameraView;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener;
import org.opencv.imgproc.Imgproc;
@@ -20,7 +20,7 @@ public class Sample3Native extends Activity implements CvCameraViewListener {
private Mat mRgba;
private Mat mGrayMat;
private JavaCameraView mOpenCvCameraView;
private CameraBridgeViewBase mOpenCvCameraView;
private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
@Override
@@ -57,7 +57,7 @@ public class Sample3Native extends Activity implements CvCameraViewListener {
setContentView(R.layout.tutorial3_surface_view);
mOpenCvCameraView = (JavaCameraView)findViewById(R.id.tutorial4_activity_surface_view);
mOpenCvCameraView = (CameraBridgeViewBase) findViewById(R.id.tutorial4_activity_surface_view);
mOpenCvCameraView.setCvCameraViewListener(this);
}

@@ -5,7 +5,7 @@ import org.opencv.android.LoaderCallbackInterface;
import org.opencv.android.OpenCVLoader;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.android.JavaCameraView;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener;
import org.opencv.highgui.Highgui;
import org.opencv.imgproc.Imgproc;
@@ -19,7 +19,7 @@ import android.view.Window
import android.view.WindowManager;
public class Sample4Mixed extends Activity implements CvCameraViewListener {
private static final String TAG = "OCVSample::Activity";
private static final String TAG = "OCVSample::Activity";
private static final int VIEW_MODE_RGBA = 0;
private static final int VIEW_MODE_GRAY = 1;
@@ -36,7 +36,7 @@ public class Sample4Mixed extends Activity implements CvCameraViewListener {
private MenuItem mItemPreviewCanny;
private MenuItem mItemPreviewFeatures;
private JavaCameraView mOpenCvCameraView;
private CameraBridgeViewBase mOpenCvCameraView;
private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
@Override
@@ -73,7 +73,7 @@ public class Sample4Mixed extends Activity implements CvCameraViewListener {
setContentView(R.layout.tutorial4_surface_view);
mOpenCvCameraView = (JavaCameraView)findViewById(R.id.tutorial4_activity_surface_view);
mOpenCvCameraView = (CameraBridgeViewBase) findViewById(R.id.tutorial4_activity_surface_view);
mOpenCvCameraView.setCvCameraViewListener(this);
}

@@ -0,0 +1,146 @@
//============================================================================
// Name : retina_tutorial.cpp
// Author : Alexandre Benoit, benoit.alexandre.vision@gmail.com
// Version : 0.1
// Copyright : LISTIC/GIPSA French Labs, july 2012
// Description : Gipsa/LISTIC Labs retina demo in C++, Ansi-style
//============================================================================
#include <iostream>
#include <cstring>
#include "opencv2/opencv.hpp"
static void help(std::string errorMessage)
{
std::cout<<"Program init error : "<<errorMessage<<std::endl;
std::cout<<"\nProgram call procedure : retinaDemo [processing mode] [Optional : media target] [Optional LAST parameter: \"log\" to activate retina log sampling]"<<std::endl;
std::cout<<"\t[processing mode] :"<<std::endl;
std::cout<<"\t -image : for still image processing"<<std::endl;
std::cout<<"\t -video : for video stream processing"<<std::endl;
std::cout<<"\t[Optional : media target] :"<<std::endl;
std::cout<<"\t if processing an image or video file, then, specify the path and filename of the target to process"<<std::endl;
std::cout<<"\t leave empty if processing video stream coming from a connected video device"<<std::endl;
std::cout<<"\t[Optional : activate retina log sampling] : an optional last parameter can be specified for retina spatial log sampling"<<std::endl;
std::cout<<"\t set \"log\" without quotes to activate this sampling, output frame size will be divided by 4"<<std::endl;
std::cout<<"\nExamples:"<<std::endl;
std::cout<<"\t-Image processing : ./retinaDemo -image lena.jpg"<<std::endl;
std::cout<<"\t-Image processing with log sampling : ./retinaDemo -image lena.jpg log"<<std::endl;
std::cout<<"\t-Video processing : ./retinaDemo -video myMovie.mp4"<<std::endl;
std::cout<<"\t-Live video processing : ./retinaDemo -video"<<std::endl;
std::cout<<"\nPlease start again with new parameters"<<std::endl;
std::cout<<"****************************************************"<<std::endl;
std::cout<<" NOTE : this program generates the default retina parameters file 'RetinaDefaultParameters.xml'"<<std::endl;
std::cout<<" => you can use this to fine tune parameters and load them if you save to file 'RetinaSpecificParameters.xml'"<<std::endl;
}
int main(int argc, char* argv[]) {
// welcome message
std::cout<<"****************************************************"<<std::endl;
std::cout<<"* Retina demonstration : demonstrates the use of is a wrapper class of the Gipsa/Listic Labs retina model."<<std::endl;
std::cout<<"* This demo will try to load the file 'RetinaSpecificParameters.xml' (if exists).\nTo create it, copy the autogenerated template 'RetinaDefaultParameters.xml'.\nThen twaek it with your own retina parameters."<<std::endl;
// basic input arguments checking
if (argc<2)
{
help("bad number of parameter");
return -1;
}
bool useLogSampling = !strcmp(argv[argc-1], "log"); // check if user wants retina log sampling processing
std::string inputMediaType=argv[1];
// declare the retina input buffer... that will be fed differently depending on the input media
cv::Mat inputFrame;
cv::VideoCapture videoCapture; // in case a video media is used, its manager is declared here
//////////////////////////////////////////////////////////////////////////////
// checking input media type (still image, video file, live video acquisition)
if (!strcmp(inputMediaType.c_str(), "-image") && argc >= 3)
{
std::cout<<"RetinaDemo: processing image "<<argv[2]<<std::endl;
// image processing case
inputFrame = cv::imread(std::string(argv[2]), 1); // load image in color (BGR) mode
}else
if (!strcmp(inputMediaType.c_str(), "-video"))
{
if (argc == 2 || (argc == 3 && useLogSampling)) // attempt to grab images from a video capture device
{
videoCapture.open(0);
}else// attempt to grab images from a video filestream
{
std::cout<<"RetinaDemo: processing video stream "<<argv[2]<<std::endl;
videoCapture.open(argv[2]);
}
// grab a first frame to check if everything is ok
videoCapture>>inputFrame;
}else
{
// bad command parameter
help("bad command parameter");
return -1;
}
if (inputFrame.empty())
{
help("Input media could not be loaded, aborting");
return -1;
}
//////////////////////////////////////////////////////////////////////////////
// Program start in a try/catch safety context (Retina may throw errors)
try
{
// create a retina instance with default parameters setup; uncomment the initialisation you want to test
cv::Ptr<cv::Retina> myRetina;
// if the last parameter is 'log', then activate log sampling (favours foveal vision and subsamples peripheral vision)
if (useLogSampling)
{
myRetina = new cv::Retina(inputFrame.size(), true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);
}
else// -> else allocate "classical" retina :
myRetina = new cv::Retina(inputFrame.size());
// save default retina parameters file in order to let you see this and maybe modify it and reload using method "setup"
myRetina->write("RetinaDefaultParameters.xml");
// load parameters if file exists
myRetina->setup("RetinaSpecificParameters.xml");
// reset all retina buffers (imagine you close your eyes for a long time)
myRetina->clearBuffers();
// declare retina output buffers
cv::Mat retinaOutput_parvo;
cv::Mat retinaOutput_magno;
// processing loop with no stop condition
while(true)
{
// if using a video stream, grab a new frame; otherwise the input remains the same
if (videoCapture.isOpened())
videoCapture>>inputFrame;
// run retina filter on the loaded input frame
myRetina->run(inputFrame);
// Retrieve and display retina output
myRetina->getParvo(retinaOutput_parvo);
myRetina->getMagno(retinaOutput_magno);
cv::imshow("retina input", inputFrame);
cv::imshow("Retina Parvo", retinaOutput_parvo);
cv::imshow("Retina Magno", retinaOutput_magno);
cv::waitKey(10);
}
}catch(const cv::Exception &e)
{
std::cerr<<"Error using Retina : "<<e.what()<<std::endl;
}
// Program end message
std::cout<<"Retina demo end"<<std::endl;
return 0;
}
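Note: the processing loop above never terminates. One possible refinement, not part of this commit, is to reuse the existing waitKey(10) call to exit on ESC:

// Sketch of an interruptible display loop (27 is the ASCII code of ESC).
while(true)
{
    if (videoCapture.isOpened())
        videoCapture>>inputFrame;
    myRetina->run(inputFrame);
    myRetina->getParvo(retinaOutput_parvo);
    myRetina->getMagno(retinaOutput_magno);
    cv::imshow("retina input", inputFrame);
    cv::imshow("Retina Parvo", retinaOutput_parvo);
    cv::imshow("Retina Magno", retinaOutput_magno);
    if ((cv::waitKey(10) & 0xFF) == 27) // stop when ESC is pressed
        break;
}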