Add ECC algorithm

Evangelidis, G.D. and Psarakis, E.Z. "Parametric Image Alignment using Enhanced
Correlation Coefficient Maximization", IEEE Transactions on PAMI, vol. 30, no.
10, 2008
pull/567/merge
Georgios Evangelidis 12 years ago committed by Andrey Kamaev
parent b9b4200504
commit f40725bb50
  1. modules/video/doc/motion_analysis_and_object_tracking.rst (46 lines changed)
  2. modules/video/include/opencv2/video/tracking.hpp (33 lines changed)
  3. modules/video/perf/perf_ecc.cpp (74 lines changed)
  4. modules/video/src/ecc.cpp (543 lines changed)
  5. modules/video/test/test_ecc.cpp (401 lines changed)
  6. samples/cpp/image_alignment.cpp (386 lines changed)

@@ -153,6 +153,50 @@ In case of point sets, the problem is formulated as follows: you need to find a
:ocv:func:`getPerspectiveTransform`,
:ocv:func:`findHomography`
findTransformECC
------------------------
Finds the geometric transform (warp) between two images in terms of the ECC criterion [EP08]_.
.. ocv:function:: double findTransformECC( InputArray templateImage, InputArray inputImage, InputOutputArray warpMatrix, int motionType=MOTION_AFFINE, TermCriteria criteria=TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 50, 0.001))
.. ocv:cfunction:: double cvFindTransformECC( const CvArr* templateImage, const CvArr* inputImage, CvMat* warpMatrix, const int motionType, const CvTermCriteria criteria)
:param templateImage: single-channel template image; ``CV_8U`` or ``CV_32F`` array.
:param inputImage: single-channel input image which should be warped with the final ``warpMatrix`` in order to provide an image similar to ``templateImage``; same type as ``templateImage``.
:param warpMatrix: floating-point :math:`2\times 3` or :math:`3\times 3` mapping matrix (warp).
:param motionType: parameter, specifying the type of motion:
* **MOTION_TRANSLATION** sets a translational motion model; ``warpMatrix`` is :math:`2\times 3` with the first :math:`2\times 2` part being the unity matrix and the remaining two parameters being estimated.
* **MOTION_EUCLIDEAN** sets a Euclidean (rigid) transformation as motion model; three parameters are estimated; ``warpMatrix`` is :math:`2\times 3`.
* **MOTION_AFFINE** sets an affine motion model (DEFAULT); six parameters are estimated; ``warpMatrix`` is :math:`2\times 3`.
* **MOTION_HOMOGRAPHY** sets a homography as a motion model; eight parameters are estimated; ``warpMatrix`` is :math:`3\times 3`.
:param criteria: parameter, specifying the termination criteria of the ECC algorithm; ``criteria.epsilon`` defines the threshold of the increment in the correlation coefficient between two iterations (a negative ``criteria.epsilon`` makes ``criteria.maxCount`` the only termination criterion). Default values are shown in the declaration above.
The function estimates the optimum transformation (``warpMatrix``) with respect to the ECC criterion ([EP08]_), that is

.. math::

    \texttt{warpMatrix} = \arg\max_{W} \texttt{ECC}(\texttt{templateImage}(x,y),\texttt{inputImage}(x',y'))
where
.. math::

    \begin{bmatrix} x' \\ y' \end{bmatrix} = W \cdot \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
(the equation holds with homogeneous coordinates for homography). It returns the final enhanced correlation coefficient, that is, the correlation coefficient between the template image and the final warped input image. When a :math:`3\times 3` matrix is given with ``motionType`` = 0, 1, or 2, the third row is ignored.
Unlike :ocv:func:`findHomography` and :ocv:func:`estimateRigidTransform`, the function :ocv:func:`findTransformECC` implements an area-based alignment that builds on intensity similarities. In essence, the function updates the initial transformation that roughly aligns the images. If this information is missing, the identity warp (unity matrix) should be given as input. Note that if the images undergo strong displacements/rotations, an initial transformation that roughly aligns the images is necessary (e.g., a simple Euclidean/similarity transform that approximately aligns the image content). Use inverse warping in the second image to obtain an image close to the first one, i.e. use the flag ``WARP_INVERSE_MAP`` with :ocv:func:`warpAffine` or :ocv:func:`warpPerspective`. See also the OpenCV sample ``image_alignment.cpp`` that demonstrates the use of the function. Note that the function throws an exception if the algorithm does not converge.
.. seealso::
:ocv:func:`estimateRigidTransform`,
:ocv:func:`findHomography`
updateMotionHistory
@@ -726,3 +770,5 @@ Releases all inner buffers.
.. [Zach2007] C. Zach, T. Pock and H. Bischof. "A Duality Based Approach for Realtime TV-L1 Optical Flow", In Proceedings of Pattern Recognition (DAGM), Heidelberg, Germany, pp. 214-223, 2007
.. [Javier2012] Javier Sanchez, Enric Meinhardt-Llopis and Gabriele Facciolo. "TV-L1 Optical Flow Estimation".
.. [EP08] Evangelidis, G.D. and Psarakis, E.Z. "Parametric Image Alignment using Enhanced Correlation Coefficient Maximization", IEEE Transactions on PAMI, vol. 30, no. 10, 2008

@@ -218,6 +218,24 @@ CVAPI(const CvMat*) cvKalmanCorrect( CvKalman* kalman, const CvMat* measurement
#define cvKalmanUpdateByTime cvKalmanPredict
#define cvKalmanUpdateByMeasurement cvKalmanCorrect
/****************************************************************************************\
* Image Alignment (ECC algorithm) *
\****************************************************************************************/
enum
{
MOTION_TRANSLATION,
MOTION_EUCLIDEAN,
MOTION_AFFINE,
MOTION_HOMOGRAPHY
};
/* Estimate the geometric transformation between 2 images (area-based alignment) */
CVAPI(double) cvFindTransformECC (const CvArr* templateImage, const CvArr* inputImage,
CvMat* warpMatrix,
const int motionType,
const CvTermCriteria criteria);
#ifdef __cplusplus
}
@@ -323,6 +341,21 @@ CV_EXPORTS_W void calcOpticalFlowFarneback( InputArray prev, InputArray next,
CV_EXPORTS_W Mat estimateRigidTransform( InputArray src, InputArray dst,
bool fullAffine);
//! estimates the best-fit Translation, Euclidean, Affine or Perspective Transformation
// with respect to Enhanced Correlation Coefficient criterion that maps one image to
// another (area-based alignment)
//
// see reference:
// Evangelidis, G.D., Psarakis, E.Z., Parametric Image Alignment using
// Enhanced Correlation Coefficient Maximization, PAMI, 30(10), 2008
CV_EXPORTS_W double findTransformECC(InputArray templateImage,
InputArray inputImage,
InputOutputArray warpMatrix,
int motionType=MOTION_AFFINE,
TermCriteria criteria=TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 50, 0.001));
//! computes dense optical flow using Simple Flow algorithm
CV_EXPORTS_W void calcOpticalFlowSF(InputArray from,
InputArray to,

@@ -0,0 +1,74 @@
#include "perf_precomp.hpp"
using namespace std;
using namespace cv;
using namespace perf;
using std::tr1::make_tuple;
using std::tr1::get;
CV_ENUM(MotionType, MOTION_TRANSLATION, MOTION_EUCLIDEAN, MOTION_AFFINE, MOTION_HOMOGRAPHY)
typedef std::tr1::tuple<MotionType> MotionType_t;
typedef perf::TestBaseWithParam<MotionType_t> TransformationType;
PERF_TEST_P(TransformationType, findTransformECC, /*testing::ValuesIn(MotionType::all())*/
testing::Values((int) MOTION_TRANSLATION, (int) MOTION_EUCLIDEAN,
(int) MOTION_AFFINE, (int) MOTION_HOMOGRAPHY)
)
{
Mat inputImage = imread(getDataPath("cv/shared/fruits.png"),0);
Mat img;
resize(inputImage, img, Size(216,216));
Mat templateImage;
int transform_type = get<0>(GetParam());
Mat warpMat;
Mat warpGround;
double angle;
switch (transform_type) {
case MOTION_TRANSLATION:
warpGround = (Mat_<float>(2,3) << 1.f, 0.f, 7.234f,
0.f, 1.f, 11.839f);
warpAffine(img, templateImage, warpGround,
Size(200,200), CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS+CV_WARP_INVERSE_MAP);
break;
case MOTION_EUCLIDEAN:
angle = CV_PI/30;
warpGround = (Mat_<float>(2,3) << (float)cos(angle), (float)-sin(angle), 12.123f,
(float)sin(angle), (float)cos(angle), 14.789f);
warpAffine(img, templateImage, warpGround,
Size(200,200), CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS+CV_WARP_INVERSE_MAP);
break;
case MOTION_AFFINE:
warpGround = (Mat_<float>(2,3) << 0.98f, 0.03f, 15.523f,
-0.02f, 0.95f, 10.456f);
warpAffine(img, templateImage, warpGround,
Size(200,200), CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS+CV_WARP_INVERSE_MAP);
break;
case MOTION_HOMOGRAPHY:
warpGround = (Mat_<float>(3,3) << 0.98f, 0.03f, 15.523f,
-0.02f, 0.95f, 10.456f,
0.0002f, 0.0003f, 1.f);
warpPerspective(img, templateImage, warpGround,
Size(200,200), CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS+CV_WARP_INVERSE_MAP);
break;
}
TEST_CYCLE()
{
if (transform_type<3)
warpMat = Mat::eye(2,3, CV_32F);
else
warpMat = Mat::eye(3,3, CV_32F);
findTransformECC(templateImage, img, warpMat, transform_type,
TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 5, -1));
}
SANITY_CHECK(warpMat, 1e-3);
}

@@ -0,0 +1,543 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// Intel License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000, Intel Corporation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of Intel Corporation may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "precomp.hpp"
/****************************************************************************************\
* Image Alignment (ECC algorithm) *
\****************************************************************************************/
using namespace cv;
static void image_jacobian_homo_ECC(const Mat& src1, const Mat& src2,
const Mat& src3, const Mat& src4,
const Mat& src5, Mat& dst)
{
CV_Assert(src1.size() == src2.size());
CV_Assert(src1.size() == src3.size());
CV_Assert(src1.size() == src4.size());
CV_Assert( src1.rows == dst.rows);
CV_Assert(dst.cols == (src1.cols*8));
CV_Assert(dst.type() == CV_32FC1);
CV_Assert(src5.isContinuous());
const float* hptr = src5.ptr<float>(0);
const float h0_ = hptr[0];
const float h1_ = hptr[3];
const float h2_ = hptr[6];
const float h3_ = hptr[1];
const float h4_ = hptr[4];
const float h5_ = hptr[7];
const float h6_ = hptr[2];
const float h7_ = hptr[5];
const int w = src1.cols;
//create denominator for all points as a block
Mat den_ = src3*h2_ + src4*h5_ + 1.0;//check the time of this! otherwise use addWeighted
//create projected points
Mat hatX_ = -src3*h0_ - src4*h3_ - h6_;
divide(hatX_, den_, hatX_);
Mat hatY_ = -src3*h1_ - src4*h4_ - h7_;
divide(hatY_, den_, hatY_);
//instead of dividing each block by den,
//just pre-divide the block of gradients (it's more efficient)
Mat src1Divided_;
Mat src2Divided_;
divide(src1, den_, src1Divided_);
divide(src2, den_, src2Divided_);
//compute Jacobian blocks (8 blocks)
dst.colRange(0, w) = src1Divided_.mul(src3);//1
dst.colRange(w,2*w) = src2Divided_.mul(src3);//2
Mat temp_ = (hatX_.mul(src1Divided_)+hatY_.mul(src2Divided_));
dst.colRange(2*w,3*w) = temp_.mul(src3);//3
hatX_.release();
hatY_.release();
dst.colRange(3*w, 4*w) = src1Divided_.mul(src4);//4
dst.colRange(4*w, 5*w) = src2Divided_.mul(src4);//5
dst.colRange(5*w, 6*w) = temp_.mul(src4);//6
src1Divided_.copyTo(dst.colRange(6*w, 7*w));//7
src2Divided_.copyTo(dst.colRange(7*w, 8*w));//8
}
static void image_jacobian_euclidean_ECC(const Mat& src1, const Mat& src2,
const Mat& src3, const Mat& src4,
const Mat& src5, Mat& dst)
{
CV_Assert( src1.size()==src2.size());
CV_Assert( src1.size()==src3.size());
CV_Assert( src1.size()==src4.size());
CV_Assert( src1.rows == dst.rows);
CV_Assert(dst.cols == (src1.cols*3));
CV_Assert(dst.type() == CV_32FC1);
CV_Assert(src5.isContinuous());
const float* hptr = src5.ptr<float>(0);
const float h0 = hptr[0];//cos(theta)
const float h1 = hptr[3];//sin(theta)
const int w = src1.cols;
//create -sin(theta)*X -cos(theta)*Y for all points as a block -> hatX
Mat hatX = -(src3*h1) - (src4*h0);
//create cos(theta)*X -sin(theta)*Y for all points as a block -> hatY
Mat hatY = (src3*h0) - (src4*h1);
//compute Jacobian blocks (3 blocks)
dst.colRange(0, w) = (src1.mul(hatX))+(src2.mul(hatY));//1
src1.copyTo(dst.colRange(w, 2*w));//2
src2.copyTo(dst.colRange(2*w, 3*w));//3
}
static void image_jacobian_affine_ECC(const Mat& src1, const Mat& src2,
const Mat& src3, const Mat& src4,
Mat& dst)
{
CV_Assert(src1.size() == src2.size());
CV_Assert(src1.size() == src3.size());
CV_Assert(src1.size() == src4.size());
CV_Assert(src1.rows == dst.rows);
CV_Assert(dst.cols == (6*src1.cols));
CV_Assert(dst.type() == CV_32FC1);
const int w = src1.cols;
//compute Jacobian blocks (6 blocks)
dst.colRange(0,w) = src1.mul(src3);//1
dst.colRange(w,2*w) = src2.mul(src3);//2
dst.colRange(2*w,3*w) = src1.mul(src4);//3
dst.colRange(3*w,4*w) = src2.mul(src4);//4
src1.copyTo(dst.colRange(4*w,5*w));//5
src2.copyTo(dst.colRange(5*w,6*w));//6
}
static void image_jacobian_translation_ECC(const Mat& src1, const Mat& src2, Mat& dst)
{
CV_Assert( src1.size()==src2.size());
CV_Assert( src1.rows == dst.rows);
CV_Assert(dst.cols == (src1.cols*2));
CV_Assert(dst.type() == CV_32FC1);
const int w = src1.cols;
//compute Jacobian blocks (2 blocks)
src1.copyTo(dst.colRange(0, w));
src2.copyTo(dst.colRange(w, 2*w));
}
static void project_onto_jacobian_ECC(const Mat& src1, const Mat& src2, Mat& dst)
{
/* this function is used for two types of projections. If src1.cols == src2.cols
it does a blockwise multiplication (like in the outer product of vectors)
of the blocks in matrices src1 and src2 and dst
has size (number_of_blocks x number_of_blocks), otherwise dst is a vector of size
(number_of_blocks x 1) since src2 is "multiplied"(dot) with each block of src1.
The number_of_blocks is equal to the number of parameters we are looking for
(i.e. translation: 2, euclidean: 3, affine: 6, homography: 8)
*/
CV_Assert(src1.rows == src2.rows);
CV_Assert((src1.cols % src2.cols) == 0);
int w;
float* dstPtr = dst.ptr<float>(0);
if (src1.cols !=src2.cols){//dst.cols==1
w = src2.cols;
for (int i=0; i<dst.rows; i++){
dstPtr[i] = (float) src2.dot(src1.colRange(i*w,(i+1)*w));
}
}
else {
CV_Assert(dst.cols == dst.rows); //dst is square (and symmetric)
w = src2.cols/dst.cols;
Mat mat;
for (int i=0; i<dst.rows; i++){
mat = Mat(src1.colRange(i*w, (i+1)*w));
dstPtr[i*(dst.rows+1)] = (float) pow(norm(mat),2); //diagonal elements
for (int j=i+1; j<dst.cols; j++){ //j starts from i+1
dstPtr[i*dst.cols+j] = (float) mat.dot(src2.colRange(j*w, (j+1)*w));
dstPtr[j*dst.cols+i] = dstPtr[i*dst.cols+j]; //due to symmetry
}
}
}
}
static void update_warping_matrix_ECC (Mat& map_matrix, const Mat& update, const int motionType)
{
CV_Assert (map_matrix.type() == CV_32FC1);
CV_Assert (update.type() == CV_32FC1);
CV_Assert (motionType == MOTION_TRANSLATION || motionType == MOTION_EUCLIDEAN ||
motionType == MOTION_AFFINE || motionType == MOTION_HOMOGRAPHY);
if (motionType == MOTION_HOMOGRAPHY)
CV_Assert (map_matrix.rows == 3 && update.rows == 8);
else if (motionType == MOTION_AFFINE)
CV_Assert(map_matrix.rows == 2 && update.rows == 6);
else if (motionType == MOTION_EUCLIDEAN)
CV_Assert (map_matrix.rows == 2 && update.rows == 3);
else
CV_Assert (map_matrix.rows == 2 && update.rows == 2);
CV_Assert (update.cols == 1);
CV_Assert( map_matrix.isContinuous());
CV_Assert( update.isContinuous() );
float* mapPtr = map_matrix.ptr<float>(0);
const float* updatePtr = update.ptr<float>(0);
if (motionType == MOTION_TRANSLATION){
mapPtr[2] += updatePtr[0];
mapPtr[5] += updatePtr[1];
}
if (motionType == MOTION_AFFINE) {
mapPtr[0] += updatePtr[0];
mapPtr[3] += updatePtr[1];
mapPtr[1] += updatePtr[2];
mapPtr[4] += updatePtr[3];
mapPtr[2] += updatePtr[4];
mapPtr[5] += updatePtr[5];
}
if (motionType == MOTION_HOMOGRAPHY) {
mapPtr[0] += updatePtr[0];
mapPtr[3] += updatePtr[1];
mapPtr[6] += updatePtr[2];
mapPtr[1] += updatePtr[3];
mapPtr[4] += updatePtr[4];
mapPtr[7] += updatePtr[5];
mapPtr[2] += updatePtr[6];
mapPtr[5] += updatePtr[7];
}
if (motionType == MOTION_EUCLIDEAN) {
double new_theta = acos(mapPtr[0]) + updatePtr[0];
mapPtr[2] += updatePtr[1];
mapPtr[5] += updatePtr[2];
mapPtr[0] = mapPtr[4] = (float) cos(new_theta);
mapPtr[3] = (float) sin(new_theta);
mapPtr[1] = -mapPtr[3];
}
}
CV_IMPL double cvFindTransformECC (const CvArr* _image1, const CvArr* _image2,
CvMat* _map_matrix,
const int motionType,
const CvTermCriteria _criteria)
{
Mat image1 = cvarrToMat(_image1);
Mat image2 = cvarrToMat(_image2);
Mat map_matrix = cvarrToMat(_map_matrix);
double cc = cv::findTransformECC(image1, image2, map_matrix, motionType,
TermCriteria(TermCriteria::EPS+TermCriteria::COUNT, _criteria.max_iter, _criteria.epsilon));
return cc;
}
double cv::findTransformECC(InputArray templateImage,
InputArray inputImage,
InputOutputArray warpMatrix,
int motionType,
TermCriteria criteria)
{
Mat src = templateImage.getMat();//template image
Mat dst = inputImage.getMat(); //input image (to be warped)
Mat map = warpMatrix.getMat(); //warp (transformation)
CV_Assert(!src.empty());
CV_Assert(!dst.empty());
if( ! (src.type()==dst.type()))
CV_Error( CV_StsUnmatchedFormats, "Both input images must have the same data type" );
//accept only 1-channel images
if( src.type() != CV_8UC1 && src.type()!= CV_32FC1)
CV_Error( CV_StsUnsupportedFormat, "Images must have 8uC1 or 32fC1 type");
if( map.type() != CV_32FC1)
CV_Error( CV_StsUnsupportedFormat, "warpMatrix must be single-channel floating-point matrix");
CV_Assert (map.cols == 3);
CV_Assert (map.rows == 2 || map.rows ==3);
CV_Assert (motionType == MOTION_AFFINE || motionType == MOTION_HOMOGRAPHY ||
motionType == MOTION_EUCLIDEAN || motionType == MOTION_TRANSLATION);
if (motionType == MOTION_HOMOGRAPHY){
CV_Assert (map.rows ==3);
}
CV_Assert (criteria.type & TermCriteria::COUNT || criteria.type & TermCriteria::EPS);
const int numberOfIterations = (criteria.type & TermCriteria::COUNT) ? criteria.maxCount : 200;
const double termination_eps = (criteria.type & TermCriteria::EPS) ? criteria.epsilon : -1;
int paramTemp = 6;//default: affine
switch (motionType){
case MOTION_TRANSLATION:
paramTemp = 2;
break;
case MOTION_EUCLIDEAN:
paramTemp = 3;
break;
case MOTION_HOMOGRAPHY:
paramTemp = 8;
break;
}
const int numberOfParameters = paramTemp;
const int ws = src.cols;
const int hs = src.rows;
const int wd = dst.cols;
const int hd = dst.rows;
Mat Xcoord = Mat(1, ws, CV_32F);
Mat Ycoord = Mat(hs, 1, CV_32F);
Mat Xgrid = Mat(hs, ws, CV_32F);
Mat Ygrid = Mat(hs, ws, CV_32F);
float* XcoPtr = Xcoord.ptr<float>(0);
float* YcoPtr = Ycoord.ptr<float>(0);
int j;
for (j=0; j<ws; j++)
XcoPtr[j] = (float) j;
for (j=0; j<hs; j++)
YcoPtr[j] = (float) j;
repeat(Xcoord, hs, 1, Xgrid);
repeat(Ycoord, 1, ws, Ygrid);
Xcoord.release();
Ycoord.release();
Mat templateZM = Mat(hs, ws, CV_32F);// to store the (smoothed)zero-mean version of template
Mat templateFloat = Mat(hs, ws, CV_32F);// to store the (smoothed) template
Mat imageFloat = Mat(hd, wd, CV_32F);// to store the (smoothed) input image
Mat imageWarped = Mat(hs, ws, CV_32F);// to store the warped zero-mean input image
Mat allOnes = Mat::ones(hd, wd, CV_8U); //to use it for mask warping
Mat imageMask = Mat(hs, ws, CV_8U); //to store the final mask
//gaussian filtering is optional
src.convertTo(templateFloat, templateFloat.type());
GaussianBlur(templateFloat, templateFloat, Size(5, 5), 0, 0);//is in-place filtering slower?
dst.convertTo(imageFloat, imageFloat.type());
GaussianBlur(imageFloat, imageFloat, Size(5, 5), 0, 0);
// needed matrices for gradients and warped gradients
Mat gradientX = Mat::zeros(hd, wd, CV_32FC1);
Mat gradientY = Mat::zeros(hd, wd, CV_32FC1);
Mat gradientXWarped = Mat(hs, ws, CV_32FC1);
Mat gradientYWarped = Mat(hs, ws, CV_32FC1);
// calculate first order image derivatives
Matx13f dx(-0.5f, 0.0f, 0.5f);
filter2D(imageFloat, gradientX, -1, dx);
filter2D(imageFloat, gradientY, -1, dx.t());
// matrices needed for solving linear equation system for maximizing ECC
Mat jacobian = Mat(hs, ws*numberOfParameters, CV_32F);
Mat hessian = Mat(numberOfParameters, numberOfParameters, CV_32F);
Mat hessianInv = Mat(numberOfParameters, numberOfParameters, CV_32F);
Mat imageProjection = Mat(numberOfParameters, 1, CV_32F);
Mat templateProjection = Mat(numberOfParameters, 1, CV_32F);
Mat imageProjectionHessian = Mat(numberOfParameters, 1, CV_32F);
Mat errorProjection = Mat(numberOfParameters, 1, CV_32F);
Mat deltaP = Mat(numberOfParameters, 1, CV_32F);//transformation parameter correction
Mat error = Mat(hs, ws, CV_32F);//error as 2D matrix
const int imageFlags = CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS+CV_WARP_INVERSE_MAP;
const int maskFlags = CV_INTER_NN+CV_WARP_FILL_OUTLIERS+CV_WARP_INVERSE_MAP;
// iteratively update map_matrix
double rho = -1;
double last_rho = - termination_eps;
for (int i = 1; (i <= numberOfIterations) && (fabs(rho-last_rho)>= termination_eps); i++)
{
// warp-back portion of the inputImage and gradients to the coordinate space of the templateImage
if (motionType != MOTION_HOMOGRAPHY)
{
warpAffine(imageFloat, imageWarped, map, imageWarped.size(), imageFlags);
warpAffine(gradientX, gradientXWarped, map, gradientXWarped.size(), imageFlags);
warpAffine(gradientY, gradientYWarped, map, gradientYWarped.size(), imageFlags);
warpAffine(allOnes, imageMask, map, imageMask.size(), maskFlags);
}
else
{
warpPerspective(imageFloat, imageWarped, map, imageWarped.size(), imageFlags);
warpPerspective(gradientX, gradientXWarped, map, gradientXWarped.size(), imageFlags);
warpPerspective(gradientY, gradientYWarped, map, gradientYWarped.size(), imageFlags);
warpPerspective(allOnes, imageMask, map, imageMask.size(), maskFlags);
}
Scalar imgMean, imgStd, tmpMean, tmpStd;
meanStdDev(imageWarped, imgMean, imgStd, imageMask);
meanStdDev(templateFloat, tmpMean, tmpStd, imageMask);
subtract(imageWarped, imgMean, imageWarped, imageMask);//zero-mean input
subtract(templateFloat, tmpMean, templateZM, imageMask);//zero-mean template
const double tmpNorm = std::sqrt(countNonZero(imageMask)*(tmpStd.val[0])*(tmpStd.val[0]));
const double imgNorm = std::sqrt(countNonZero(imageMask)*(imgStd.val[0])*(imgStd.val[0]));
// calculate jacobian of image wrt parameters
switch (motionType){
case MOTION_AFFINE:
image_jacobian_affine_ECC(gradientXWarped, gradientYWarped, Xgrid, Ygrid, jacobian);
break;
case MOTION_HOMOGRAPHY:
image_jacobian_homo_ECC(gradientXWarped, gradientYWarped, Xgrid, Ygrid, map, jacobian);
break;
case MOTION_TRANSLATION:
image_jacobian_translation_ECC(gradientXWarped, gradientYWarped, jacobian);
break;
case MOTION_EUCLIDEAN:
image_jacobian_euclidean_ECC(gradientXWarped, gradientYWarped, Xgrid, Ygrid, map, jacobian);
break;
}
// calculate Hessian and its inverse
project_onto_jacobian_ECC(jacobian, jacobian, hessian);
hessianInv = hessian.inv();
const double correlation = templateZM.dot(imageWarped);
// calculate enhanced correlation coefficient (ECC)->rho
last_rho = rho;
rho = correlation/(imgNorm*tmpNorm);
// project images into jacobian
project_onto_jacobian_ECC( jacobian, imageWarped, imageProjection);
project_onto_jacobian_ECC(jacobian, templateZM, templateProjection);
// calculate the parameter lambda to account for illumination variation
imageProjectionHessian = hessianInv*imageProjection;
const double lambda_n = (imgNorm*imgNorm) - imageProjection.dot(imageProjectionHessian);
const double lambda_d = correlation - templateProjection.dot(imageProjectionHessian);
if (lambda_d <= 0.0)
{
rho = -1;
CV_Error(CV_StsNoConv, "The algorithm stopped before convergence. The correlation is going to be minimized. Images may be uncorrelated or non-overlapping");
}
const double lambda = (lambda_n/lambda_d);
// estimate the update step delta_p
error = lambda*templateZM - imageWarped;
project_onto_jacobian_ECC(jacobian, error, errorProjection);
deltaP = hessianInv * errorProjection;
// update warping matrix
update_warping_matrix_ECC( map, deltaP, motionType);
}
// return final correlation coefficient
return rho;
}
/* End of file. */

@@ -0,0 +1,401 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "test_precomp.hpp"
using namespace cv;
using namespace std;
class CV_ECC_BaseTest : public cvtest::BaseTest
{
public:
CV_ECC_BaseTest();
protected:
double computeRMS(const Mat& mat1, const Mat& mat2);
bool isMapCorrect(const Mat& mat);
double MAX_RMS_ECC;//upper bound for RMS error
int ntests;//number of tests per motion type
int ECC_iterations;//number of iterations for ECC
double ECC_epsilon; //we choose a negative value, so that
// ECC_iterations are always executed
};
CV_ECC_BaseTest::CV_ECC_BaseTest()
{
MAX_RMS_ECC=0.1;
ntests = 3;
ECC_iterations = 50;
ECC_epsilon = -1; //-> negative value means that ECC_Iterations will be executed
}
bool CV_ECC_BaseTest::isMapCorrect(const Mat& map)
{
bool tr = true;
float mapVal;
for(int i =0; i<map.rows; i++)
for(int j=0; j<map.cols; j++){
mapVal = map.at<float>(i, j);
tr = tr & (!cvIsNaN(mapVal) && (fabs(mapVal) < 1e9));
}
return tr;
}
double CV_ECC_BaseTest::computeRMS(const Mat& mat1, const Mat& mat2){
CV_Assert(mat1.rows == mat2.rows);
CV_Assert(mat1.cols == mat2.cols);
Mat errorMat;
subtract(mat1, mat2, errorMat);
return sqrt(errorMat.dot(errorMat)/(mat1.rows*mat1.cols));
}
class CV_ECC_Test_Translation : public CV_ECC_BaseTest
{
public:
CV_ECC_Test_Translation();
protected:
void run(int);
bool testTranslation(int);
};
CV_ECC_Test_Translation::CV_ECC_Test_Translation() {}
bool CV_ECC_Test_Translation::testTranslation(int from)
{
Mat img = imread( string(ts->get_data_path()) + "shared/fruits.png", 0);
if (img.empty())
{
ts->printf( ts->LOG, "test image can not be read");
ts->set_failed_test_info(cvtest::TS::FAIL_INVALID_TEST_DATA);
return false;
}
Mat testImg;
resize(img, testImg, Size(216, 216));
cv::RNG rng = ts->get_rng();
int progress=0;
for (int k=from; k<ntests; k++){
ts->update_context( this, k, true );
progress = update_progress(progress, k, ntests, 0);
Mat translationGround = (Mat_<float>(2,3) << 1, 0, (rng.uniform(10.f, 20.f)),
0, 1, (rng.uniform(10.f, 20.f)));
Mat warpedImage;
warpAffine(testImg, warpedImage, translationGround,
Size(200,200), CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS+CV_WARP_INVERSE_MAP);
Mat mapTranslation = (Mat_<float>(2,3) << 1, 0, 0, 0, 1, 0);
findTransformECC(warpedImage, testImg, mapTranslation, 0,
TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, ECC_iterations, ECC_epsilon));
if (!isMapCorrect(mapTranslation)){
ts->set_failed_test_info(cvtest::TS::FAIL_INVALID_OUTPUT);
return false;
}
if (computeRMS(mapTranslation, translationGround)>MAX_RMS_ECC){
ts->set_failed_test_info(cvtest::TS::FAIL_BAD_ACCURACY);
ts->printf( ts->LOG, "RMS = %f",
computeRMS(mapTranslation, translationGround));
return false;
}
}
return true;
}
void CV_ECC_Test_Translation::run(int from)
{
if (!testTranslation(from))
return;
ts->set_failed_test_info(cvtest::TS::OK);
}
class CV_ECC_Test_Euclidean : public CV_ECC_BaseTest
{
public:
CV_ECC_Test_Euclidean();
protected:
void run(int);
bool testEuclidean(int);
};
CV_ECC_Test_Euclidean::CV_ECC_Test_Euclidean() { }
bool CV_ECC_Test_Euclidean::testEuclidean(int from)
{
Mat img = imread( string(ts->get_data_path()) + "shared/fruits.png", 0);
if (img.empty())
{
ts->printf( ts->LOG, "test image can not be read");
ts->set_failed_test_info(cvtest::TS::FAIL_INVALID_TEST_DATA);
return false;
}
Mat testImg;
resize(img, testImg, Size(216, 216));
cv::RNG rng = ts->get_rng();
int progress = 0;
for (int k=from; k<ntests; k++){
ts->update_context( this, k, true );
progress = update_progress(progress, k, ntests, 0);
double angle = CV_PI/30 + CV_PI*rng.uniform((double)-2.f, (double)2.f)/180;
Mat euclideanGround = (Mat_<float>(2,3) << cos(angle), -sin(angle), (rng.uniform(10.f, 20.f)),
sin(angle), cos(angle), (rng.uniform(10.f, 20.f)));
Mat warpedImage;
warpAffine(testImg, warpedImage, euclideanGround,
Size(200,200), CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS+CV_WARP_INVERSE_MAP);
Mat mapEuclidean = (Mat_<float>(2,3) << 1, 0, 0, 0, 1, 0);
findTransformECC(warpedImage, testImg, mapEuclidean, 1,
TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, ECC_iterations, ECC_epsilon));
if (!isMapCorrect(mapEuclidean)){
ts->set_failed_test_info(cvtest::TS::FAIL_INVALID_OUTPUT);
return false;
}
if (computeRMS(mapEuclidean, euclideanGround)>MAX_RMS_ECC){
ts->set_failed_test_info(cvtest::TS::FAIL_BAD_ACCURACY);
ts->printf( ts->LOG, "RMS = %f",
computeRMS(mapEuclidean, euclideanGround));
return false;
}
}
return true;
}
void CV_ECC_Test_Euclidean::run(int from)
{
if (!testEuclidean(from))
return;
ts->set_failed_test_info(cvtest::TS::OK);
}
class CV_ECC_Test_Affine : public CV_ECC_BaseTest
{
public:
CV_ECC_Test_Affine();
protected:
void run(int);
bool testAffine(int);
};
CV_ECC_Test_Affine::CV_ECC_Test_Affine() { }
bool CV_ECC_Test_Affine::testAffine(int from)
{
Mat img = imread( string(ts->get_data_path()) + "shared/fruits.png", 0);
if (img.empty())
{
ts->printf( ts->LOG, "test image cannot be read");
ts->set_failed_test_info(cvtest::TS::FAIL_INVALID_TEST_DATA);
return false;
}
Mat testImg;
resize(img, testImg, Size(216, 216));
cv::RNG rng = ts->get_rng();
int progress = 0;
for (int k=from; k<ntests; k++){
ts->update_context( this, k, true );
progress = update_progress(progress, k, ntests, 0);
Mat affineGround = (Mat_<float>(2,3) << (1-rng.uniform(-0.05f, 0.05f)),
(rng.uniform(-0.03f, 0.03f)), (rng.uniform(10.f, 20.f)),
(rng.uniform(-0.03f, 0.03f)), (1-rng.uniform(-0.05f, 0.05f)),
(rng.uniform(10.f, 20.f)));
Mat warpedImage;
warpAffine(testImg, warpedImage, affineGround,
Size(200,200), CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS+CV_WARP_INVERSE_MAP);
Mat mapAffine = (Mat_<float>(2,3) << 1, 0, 0, 0, 1, 0);
findTransformECC(warpedImage, testImg, mapAffine, 2,
TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, ECC_iterations, ECC_epsilon));
if (!isMapCorrect(mapAffine)){
ts->set_failed_test_info(cvtest::TS::FAIL_INVALID_OUTPUT);
return false;
}
if (computeRMS(mapAffine, affineGround)>MAX_RMS_ECC){
ts->set_failed_test_info(cvtest::TS::FAIL_BAD_ACCURACY);
ts->printf( ts->LOG, "RMS = %f",
computeRMS(mapAffine, affineGround));
return false;
}
}
return true;
}
void CV_ECC_Test_Affine::run(int from)
{
if (!testAffine(from))
return;
ts->set_failed_test_info(cvtest::TS::OK);
}
class CV_ECC_Test_Homography : public CV_ECC_BaseTest
{
public:
CV_ECC_Test_Homography();
protected:
void run(int);
bool testHomography(int);
};
CV_ECC_Test_Homography::CV_ECC_Test_Homography() { }
bool CV_ECC_Test_Homography::testHomography(int from)
{
Mat img = imread( string(ts->get_data_path()) + "shared/fruits.png", 0);
if (img.empty())
{
ts->printf( ts->LOG, "test image cannot be read");
ts->set_failed_test_info(cvtest::TS::FAIL_INVALID_TEST_DATA);
return false;
}
Mat testImg;
resize(img, testImg, Size(216, 216));
cv::RNG rng = ts->get_rng();
int progress = 0;
for (int k=from; k<ntests; k++){
ts->update_context( this, k, true );
progress = update_progress(progress, k, ntests, 0);
Mat homoGround = (Mat_<float>(3,3) << (1-rng.uniform(-0.05f, 0.05f)),
(rng.uniform(-0.03f, 0.03f)), (rng.uniform(10.f, 20.f)),
(rng.uniform(-0.03f, 0.03f)), (1-rng.uniform(-0.05f, 0.05f)),(rng.uniform(10.f, 20.f)),
(rng.uniform(0.0001f, 0.0003f)), (rng.uniform(0.0001f, 0.0003f)), 1.f);
Mat warpedImage;
warpPerspective(testImg, warpedImage, homoGround,
Size(200,200), CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS+CV_WARP_INVERSE_MAP);
Mat mapHomography = Mat::eye(3, 3, CV_32F);
findTransformECC(warpedImage, testImg, mapHomography, 3,
TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, ECC_iterations, ECC_epsilon));
if (!isMapCorrect(mapHomography)){
ts->set_failed_test_info(cvtest::TS::FAIL_INVALID_OUTPUT);
return false;
}
if (computeRMS(mapHomography, homoGround)>MAX_RMS_ECC){
ts->set_failed_test_info(cvtest::TS::FAIL_BAD_ACCURACY);
ts->printf( ts->LOG, "RMS = %f",
computeRMS(mapHomography, homoGround));
return false;
}
}
return true;
}
void CV_ECC_Test_Homography::run(int from)
{
if (!testHomography(from))
return;
ts->set_failed_test_info(cvtest::TS::OK);
}
TEST(Video_ECC_Translation, accuracy) { CV_ECC_Test_Translation test; test.safe_run();}
TEST(Video_ECC_Euclidean, accuracy) { CV_ECC_Test_Euclidean test; test.safe_run(); }
TEST(Video_ECC_Affine, accuracy) { CV_ECC_Test_Affine test; test.safe_run(); }
TEST(Video_ECC_Homography, accuracy) { CV_ECC_Test_Homography test; test.safe_run(); }

/*
* This sample demonstrates the use of the function
* findTransformECC that implements the image alignment ECC algorithm
*
*
* The demo loads an image (defaults to fruits.jpg) and artificially creates
* a template image based on the given motion type. When two images are given,
* the first image is the input image and the second one defines the template image.
* In the latter case, you can also pass the warp's initialization.
*
* Input and output warp files consist of the raw warp (transform) elements
*
* Authors: G. Evangelidis, INRIA, Grenoble, France
* M. Asbach, Fraunhofer IAIS, St. Augustin, Germany
*/
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/video/video.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/core/core.hpp>
#include <stdio.h>
#include <string>
#include <time.h>
#include <iostream>
#include <fstream>
using namespace cv;
using namespace std;
static void help(void);
static int readWarp(string iFilename, Mat& warp, int motionType);
static int saveWarp(string fileName, const Mat& warp, int motionType);
static void draw_warped_roi(Mat& image, const int width, const int height, Mat& W);
#define HOMO_VECTOR(H, x, y)\
H.at<float>(0,0) = (float)(x);\
H.at<float>(1,0) = (float)(y);\
H.at<float>(2,0) = 1.;
#define GET_HOMO_VALUES(X, x, y)\
(x) = static_cast<float> (X.at<float>(0,0)/X.at<float>(2,0));\
(y) = static_cast<float> (X.at<float>(1,0)/X.at<float>(2,0));
const std::string keys =
"{@inputImage | fruits.jpg | input image filename }"
"{@templateImage | | template image filename (optional)}"
"{@inputWarp | | input warp (matrix) filename (optional)}"
"{n numOfIter | 50 | ECC's iterations }"
"{e epsilon | 0.0001 | ECC's convergence epsilon }"
"{o outputWarp | outWarp.ecc | output warp (matrix) filename }"
"{m motionType | affine | type of motion (translation, euclidean, affine, homography) }"
"{v verbose | 0 | display initial and final images }"
"{w warpedImfile | warpedECC.png | warped input image }"
;
static void help(void)
{
cout << "\nThis file demonstrates the use of the ECC image alignment algorithm. When one image"
" is given, the template image is artificially formed by a random warp. When both images"
" are given, the initialization of the warp by command line parsing is possible. "
"If inputWarp is missing, the identity transformation initializes the algorithm. \n" << endl;
cout << "\nUsage example (one image): \n./ecc fruits.jpg -o=outWarp.ecc "
"-m=euclidean -e=1e-6 -n=70 -v=1 \n" << endl;
cout << "\nUsage example (two images with initialization): \n./ecc yourInput.png yourTemplate.png "
"yourInitialWarp.ecc -o=outWarp.ecc -m=homography -e=1e-6 -n=70 -v=1 -w=yourFinalImage.png \n" << endl;
}
static int readWarp(string iFilename, Mat& warp, int motionType){
// it reads from file a specific number of raw values:
// 9 values for homography, 6 otherwise
CV_Assert(warp.type()==CV_32FC1);
int numOfElements;
if (motionType==MOTION_HOMOGRAPHY)
numOfElements=9;
else
numOfElements=6;
int i;
int ret_value;
ifstream myfile(iFilename.c_str());
if (myfile.is_open()){
float* matPtr = warp.ptr<float>(0);
for(i=0; i<numOfElements; i++){
myfile >> matPtr[i];
}
ret_value = 1;
}
else {
cout << "Unable to open file " << iFilename.c_str() << endl;
ret_value = 0;
}
return ret_value;
}
static int saveWarp(string fileName, const Mat& warp, int motionType)
{
// it saves the raw matrix elements in a file
CV_Assert(warp.type()==CV_32FC1);
const float* matPtr = warp.ptr<float>(0);
int ret_value;
ofstream outfile(fileName.c_str());
if( !outfile ) {
cerr << "error in saving "
<< "Couldn't open file '" << fileName.c_str() << "'!" << endl;
ret_value = 0;
}
else {//save the warp's elements
outfile << matPtr[0] << " " << matPtr[1] << " " << matPtr[2] << endl;
outfile << matPtr[3] << " " << matPtr[4] << " " << matPtr[5] << endl;
if (motionType==MOTION_HOMOGRAPHY){
outfile << matPtr[6] << " " << matPtr[7] << " " << matPtr[8] << endl;
}
ret_value = 1;
}
return ret_value;
}
static void draw_warped_roi(Mat& image, const int width, const int height, Mat& W)
{
Point2f top_left, top_right, bottom_left, bottom_right;
Mat H = Mat (3, 1, CV_32F);
Mat U = Mat (3, 1, CV_32F);
Mat warp_mat = Mat::eye (3, 3, CV_32F);
for (int y = 0; y < W.rows; y++)
for (int x = 0; x < W.cols; x++)
warp_mat.at<float>(y,x) = W.at<float>(y,x);
//warp the corners of rectangle
// top-left
HOMO_VECTOR(H, 1, 1);
gemm(warp_mat, H, 1, 0, 0, U);
GET_HOMO_VALUES(U, top_left.x, top_left.y);
// top-right
HOMO_VECTOR(H, width, 1);
gemm(warp_mat, H, 1, 0, 0, U);
GET_HOMO_VALUES(U, top_right.x, top_right.y);
// bottom-left
HOMO_VECTOR(H, 1, height);
gemm(warp_mat, H, 1, 0, 0, U);
GET_HOMO_VALUES(U, bottom_left.x, bottom_left.y);
// bottom-right
HOMO_VECTOR(H, width, height);
gemm(warp_mat, H, 1, 0, 0, U);
GET_HOMO_VALUES(U, bottom_right.x, bottom_right.y);
// draw the warped perimeter
line(image, top_left, top_right, Scalar(255,0,255));
line(image, top_right, bottom_right, Scalar(255,0,255));
line(image, bottom_right, bottom_left, Scalar(255,0,255));
line(image, bottom_left, top_left, Scalar(255,0,255));
}
int main (const int argc, const char * argv[])
{
CommandLineParser parser(argc, argv, keys);
parser.about("ECC demo");
if (argc<2) {
parser.printMessage();
help();
return 1;
}
string imgFile = parser.get<string>(0);
string tempImgFile = parser.get<string>(1);
string inWarpFile = parser.get<string>(2);
int number_of_iterations = parser.get<int>("n");
double termination_eps = parser.get<double>("e");
string warpType = parser.get<string>("m");
int verbose = parser.get<int>("v");
string finalWarp = parser.get<string>("o");
string warpedImFile = parser.get<string>("w");
if (!(warpType == "translation" || warpType == "euclidean"
|| warpType == "affine" || warpType == "homography"))
{
cerr << "Invalid motion transformation" << endl;
return -1;
}
int mode_temp;
if (warpType == "translation")
mode_temp = MOTION_TRANSLATION;
else if (warpType == "euclidean")
mode_temp = MOTION_EUCLIDEAN;
else if (warpType == "affine")
mode_temp = MOTION_AFFINE;
else
mode_temp = MOTION_HOMOGRAPHY;
Mat inputImage = imread(imgFile,0);
if (inputImage.empty())
{
cerr << "Unable to load the inputImage" << endl;
return -1;
}
Mat target_image;
Mat template_image;
if (tempImgFile!="") {
inputImage.copyTo(target_image);
template_image = imread(tempImgFile,0);
if (template_image.empty()){
cerr << "Unable to load the template image" << endl;
return -1;
}
}
else{ // apply a random warp to the input image
resize(inputImage, target_image, Size(216, 216));
Mat warpGround;
cv::RNG rng;
double angle;
switch (mode_temp) {
case MOTION_TRANSLATION:
warpGround = (Mat_<float>(2,3) << 1, 0, (rng.uniform(10.f, 20.f)),
0, 1, (rng.uniform(10.f, 20.f)));
warpAffine(target_image, template_image, warpGround,
Size(200,200), CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS+CV_WARP_INVERSE_MAP);
break;
case MOTION_EUCLIDEAN:
angle = CV_PI/30 + CV_PI*rng.uniform((double)-2.f, (double)2.f)/180;
warpGround = (Mat_<float>(2,3) << cos(angle), -sin(angle), (rng.uniform(10.f, 20.f)),
sin(angle), cos(angle), (rng.uniform(10.f, 20.f)));
warpAffine(target_image, template_image, warpGround,
Size(200,200), CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS+CV_WARP_INVERSE_MAP);
break;
case MOTION_AFFINE:
warpGround = (Mat_<float>(2,3) << (1-rng.uniform(-0.05f, 0.05f)),
(rng.uniform(-0.03f, 0.03f)), (rng.uniform(10.f, 20.f)),
(rng.uniform(-0.03f, 0.03f)), (1-rng.uniform(-0.05f, 0.05f)),
(rng.uniform(10.f, 20.f)));
warpAffine(target_image, template_image, warpGround,
Size(200,200), CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS+CV_WARP_INVERSE_MAP);
break;
case MOTION_HOMOGRAPHY:
warpGround = (Mat_<float>(3,3) << (1-rng.uniform(-0.05f, 0.05f)),
(rng.uniform(-0.03f, 0.03f)), (rng.uniform(10.f, 20.f)),
(rng.uniform(-0.03f, 0.03f)), (1-rng.uniform(-0.05f, 0.05f)),(rng.uniform(10.f, 20.f)),
(rng.uniform(0.0001f, 0.0003f)), (rng.uniform(0.0001f, 0.0003f)), 1.f);
warpPerspective(target_image, template_image, warpGround,
Size(200,200), CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS+CV_WARP_INVERSE_MAP);
break;
}
}
const int warp_mode = mode_temp;
// initialize or load the warp matrix
Mat warp_matrix;
if (warpType == "homography")
warp_matrix = Mat::eye(3, 3, CV_32F);
else
warp_matrix = Mat::eye(2, 3, CV_32F);
if (inWarpFile!=""){
int readflag = readWarp(inWarpFile, warp_matrix, warp_mode);
if ((!readflag) || warp_matrix.empty())
{
cerr << "-> Check warp initialization file" << endl << flush;
return -1;
}
}
else {
printf("\n ->Performance Warning: Identity warp ideally assumes images of "
"similar size. If the deformation is strong, the identity warp may not "
"be a good initialization. \n");
}
if (number_of_iterations > 200)
cout << "-> Warning: too many iterations " << endl;
if (warp_mode != MOTION_HOMOGRAPHY)
warp_matrix.rows = 2;
// start timing
const double tic_init = (double) getTickCount ();
double cc = findTransformECC (template_image, target_image, warp_matrix, warp_mode,
TermCriteria (TermCriteria::COUNT+TermCriteria::EPS,
number_of_iterations, termination_eps));
if (cc == -1)
{
cerr << "The execution was interrupted; the resulting correlation value is likely poor." << endl;
cerr << "Check the warp initialization and/or the size of images." << endl << flush;
}
// end timing
const double toc_final = (double) getTickCount ();
const double total_time = (toc_final-tic_init)/(getTickFrequency());
if (verbose){
cout << "Alignment time (" << warpType << " transformation): "
<< total_time << " sec" << endl << flush;
// cout << "Final correlation: " << cc << endl << flush;
}
// save the final warp matrix
saveWarp(finalWarp, warp_matrix, warp_mode);
if (verbose){
cout << "\nThe final warp has been saved in the file: " << finalWarp << endl << flush;
}
// save the final warped image
Mat warped_image = Mat(template_image.rows, template_image.cols, CV_32FC1);
if (warp_mode != MOTION_HOMOGRAPHY)
warpAffine (target_image, warped_image, warp_matrix, warped_image.size(),
CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS+CV_WARP_INVERSE_MAP);
else
warpPerspective (target_image, warped_image, warp_matrix, warped_image.size(),
CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS+CV_WARP_INVERSE_MAP);
//save the warped image
imwrite(warpedImFile, warped_image);
// display resulting images
if (verbose)
{
cout << "The warped image has been saved in the file: " << warpedImFile << endl << flush;
namedWindow ("image", CV_WINDOW_AUTOSIZE);
namedWindow ("template", CV_WINDOW_AUTOSIZE);
namedWindow ("warped image", CV_WINDOW_AUTOSIZE);
namedWindow ("error (black: no error)", CV_WINDOW_AUTOSIZE);
moveWindow ("template", 350, 350);
moveWindow ("warped image", 600, 300);
moveWindow ("error (black: no error)", 900, 300);
// draw boundaries of corresponding regions
Mat identity_matrix = Mat::eye(3,3,CV_32F);
draw_warped_roi (target_image, template_image.cols-2, template_image.rows-2, warp_matrix);
draw_warped_roi (template_image, template_image.cols-2, template_image.rows-2, identity_matrix);
Mat errorImage;
subtract(template_image, warped_image, errorImage);
double max_of_error;
minMaxLoc(errorImage, NULL, &max_of_error);
// show images
cout << "Press any key to exit the demo (you might need to click on the images before)." << endl << flush;
imshow ("image", target_image);
waitKey (200);
imshow ("template", template_image);
waitKey (200);
imshow ("warped image", warped_image);
waitKey(200);
imshow ("error (black: no error)", abs(errorImage)*255/max_of_error);
waitKey(0);
}
// done
return 0;
}