The functions in this section perform various geometrical transformations of 2D images. They do not change the image content but deform the pixel grid and map this deformed grid to the destination image. In fact, to avoid sampling artifacts, the mapping is done in the reverse order, from destination to the source. That is, for each pixel :math:`(x, y)` of the destination image, the functions compute coordinates of the corresponding "donor" pixel in the source image and copy the pixel value:
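
.. math::

    \texttt{dst} (x,y) =  \texttt{src} (f_x(x,y), f_y(x,y))
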
Extrapolation of non-existing pixels. Similarly to the filtering functions described in the previous section, for some :math:`(x,y)` , either one of :math:`f_x(x,y)` , or :math:`f_y(x,y)` , or both of them may fall outside of the image. In this case, an extrapolation method needs to be used. OpenCV provides the same selection of extrapolation methods as in the filtering functions. In addition, it provides the method ``BORDER_TRANSPARENT`` . This means that the corresponding pixels in the destination image will not be modified at all.
Interpolation of pixel values. Usually :math:`f_x(x,y)` and :math:`f_y(x,y)` are floating-point numbers. This means that :math:`\left<f_x, f_y\right>` can be either an affine or perspective transformation, or radial lens distortion correction, and so on. So, a pixel value at fractional coordinates needs to be retrieved. In the simplest case, the coordinates can be just rounded to the nearest integer coordinates and the corresponding pixel can be used. This is called a nearest-neighbor interpolation. However, a better result can be achieved by using more sophisticated `interpolation methods <http://en.wikipedia.org/wiki/Multivariate_interpolation>`_ .
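
As a small illustration of retrieving a pixel value at fractional coordinates, the sketch below uses :ocv:func:`getRectSubPix` , which performs bilinear interpolation; the tiny input matrix and the sampling point are arbitrary: ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <iostream>

    int main()
    {
        // A 2x2 single-channel image with known values.
        cv::Mat img = (cv::Mat_<float>(2,2) << 0.f, 10.f, 20.f, 30.f);

        // Retrieve a 1x1 "patch" centered at the fractional coordinate (0.5, 0.5).
        // Bilinear interpolation of the four neighbors gives (0+10+20+30)/4 = 15.
        cv::Mat patch;
        cv::getRectSubPix(img, cv::Size(1,1), cv::Point2f(0.5f, 0.5f), patch);

        std::cout << "interpolated value: " << patch.at<float>(0,0) << std::endl;
        return 0;
    }
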
The function converts a pair of maps for :ocv:func:`remap` from one representation to another. The following options ( ``(map1.type(), map2.type())`` :math:`\rightarrow` ``(dstmap1.type(), dstmap2.type())`` ) are supported:
:math:`\texttt{(CV\_32FC1, CV\_32FC1)} \rightarrow \texttt{(CV\_16SC2, CV\_16UC1)}` . This is the most frequently used conversion operation, in which the original floating-point maps (see
:ocv:func:`remap` ) are converted to a more compact and much faster fixed-point representation. The first output array contains the rounded coordinates and the second array (created only when ``nninterpolation=false`` ) contains indices in the interpolation tables.
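
A minimal sketch of this conversion, assuming a hand-built floating-point map pair (here, a horizontal flip) rather than one produced by a calibration routine: ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>

    int main()
    {
        cv::Size size(640, 480);

        // Build floating-point maps that mirror the image horizontally.
        cv::Mat mapX(size, CV_32FC1), mapY(size, CV_32FC1);
        for (int y = 0; y < size.height; y++)
            for (int x = 0; x < size.width; x++)
            {
                mapX.at<float>(y, x) = (float)(size.width - 1 - x);
                mapY.at<float>(y, x) = (float)y;
            }

        // Convert (CV_32FC1, CV_32FC1) -> (CV_16SC2, CV_16UC1) for faster remapping.
        cv::Mat fixedMap1, fixedMap2;
        cv::convertMaps(mapX, mapY, fixedMap1, fixedMap2, CV_16SC2, false);

        // Both representations can be passed to remap(); the fixed-point one is faster.
        cv::Mat src(size, CV_8UC3, cv::Scalar::all(128)), dst;
        cv::remap(src, dst, fixedMap1, fixedMap2, cv::INTER_LINEAR);
        return 0;
    }
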
\rho = M \cdot \log{\sqrt{x^2 + y^2}} , \quad \phi = \operatorname{atan}(y/x)
The function emulates the human "foveal" vision and can be used for fast scale- and rotation-invariant template matching, for object tracking, and so forth. The function cannot operate in-place.
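
A minimal sketch using the C interface ``cvLogPolar`` ; the input file name and the magnitude scale ``M`` are arbitrary: ::

    #include <opencv2/imgproc/imgproc_c.h>
    #include <opencv2/highgui/highgui_c.h>

    int main()
    {
        IplImage* src = cvLoadImage("image.jpg", CV_LOAD_IMAGE_COLOR);
        if (!src) return -1;
        IplImage* dst = cvCreateImage(cvGetSize(src), src->depth, src->nChannels);

        // Map the image to log-polar space around its center;
        // M scales the radial (logarithmic) axis.
        cvLogPolar(src, dst, cvPoint2D32f(src->width*0.5f, src->height*0.5f),
                   40, CV_INTER_LINEAR + CV_WARP_FILL_OUTLIERS);

        cvSaveImage("logpolar.jpg", dst, 0);
        cvReleaseImage(&src);
        cvReleaseImage(&dst);
        return 0;
    }
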
:param map1:The first map of either ``(x,y)`` points or just ``x`` values having the type ``CV_16SC2`` , ``CV_32FC1`` , or ``CV_32FC2`` . See :ocv:func:`convertMaps` for details on converting a floating-point representation to fixed-point for speed.
:param map2:The second map of ``y`` values having the type ``CV_16UC1`` , ``CV_32FC1`` , or none (empty map if ``map1`` is ``(x,y)`` points), respectively.
:param borderMode:Pixel extrapolation method (see :ocv:func:`borderInterpolate` ). When \ ``borderMode=BORDER_TRANSPARENT`` , it means that the pixels in the destination image that correspond to the "outliers" in the source image are not modified by the function.
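
A minimal sketch tying these parameters together: the whole map is stored as a single two-channel ``CV_32FC2`` array of ``(x,y)`` source coordinates (so ``map2`` is left empty), and the input file name and the half-pixel shift are arbitrary: ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/highgui/highgui.hpp>

    int main()
    {
        cv::Mat src = cv::imread("image.jpg");
        if (src.empty()) return -1;

        // map1 holds the (x, y) source coordinates for every destination pixel;
        // map2 stays empty in this representation.
        cv::Mat map1(src.size(), CV_32FC2);
        for (int y = 0; y < src.rows; y++)
            for (int x = 0; x < src.cols; x++)
                map1.at<cv::Point2f>(y, x) = cv::Point2f(x + 0.5f, y + 0.5f);

        // Coordinates that fall outside the image are extrapolated with BORDER_REPLICATE.
        cv::Mat dst;
        cv::remap(src, dst, map1, cv::Mat(), cv::INTER_LINEAR, cv::BORDER_REPLICATE);

        cv::imwrite("shifted.jpg", dst);
        return 0;
    }
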
:param dst:Destination image. It has the size ``dsize`` (when it is non-zero) or the size computed from ``src.size()`` , ``fx`` , and ``fy`` . The type of ``dst`` is the same as that of ``src`` .
* **INTER_AREA** - resampling using pixel area relation. It may be a preferred method for image decimation, as it gives moiré-free results. But when the image is zoomed, it is similar to the ``INTER_NEAREST`` method.
The function ``resize`` resizes the image ``src`` down to or up to the specified size.
Note that the initial ``dst`` type or size are not taken into account. Instead, the size and type are derived from the ``src`` , ``dsize`` , ``fx`` , and ``fy`` . If you want to resize ``src`` so that it fits the pre-created ``dst`` , you may call the function as follows: ::
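
    // explicitly specify dsize=dst.size(); fx and fy will be computed from that
    resize(src, dst, dst.size(), 0, 0, interpolation);
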
When shrinking an image, ``CV_INTER_AREA`` interpolation generally gives the best-looking results, whereas when enlarging an image, ``CV_INTER_CUBIC`` (slow) or ``CV_INTER_LINEAR`` (faster but still acceptable) generally look best.
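
For example, a sketch of decimating an image by a factor of two with the area-based interpolation (``src`` is assumed to be already loaded, as in the snippet above): ::

    // specify fx and fy and let the function compute the destination image size
    Mat dst;
    resize(src, dst, Size(), 0.5, 0.5, INTER_AREA);
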
:param flags:Combination of interpolation methods (see :ocv:func:`resize` ) and the optional flag ``WARP_INVERSE_MAP`` that means that ``M`` is the inverse transformation ( :math:`\texttt{dst}\rightarrow\texttt{src}` ).
:param borderMode:Pixel extrapolation method (see :ocv:func:`borderInterpolate` ). When \ ``borderMode=BORDER_TRANSPARENT`` , it means that the pixels in the destination image corresponding to the "outliers" in the source image are not modified by the function.
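
These two parameters are shared by the affine and perspective warping functions. A minimal sketch with :ocv:func:`warpAffine` , rotating an image about its center (the file name, angle, and border settings are arbitrary): ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/highgui/highgui.hpp>

    int main()
    {
        cv::Mat src = cv::imread("image.jpg");
        if (src.empty()) return -1;

        // Rotate 30 degrees around the image center, keeping the original size.
        cv::Point2f center(src.cols * 0.5f, src.rows * 0.5f);
        cv::Mat M = cv::getRotationMatrix2D(center, 30.0, 1.0);

        cv::Mat dst;
        cv::warpAffine(src, dst, M, src.size(), cv::INTER_LINEAR,
                       cv::BORDER_CONSTANT, cv::Scalar(0, 0, 0));

        // If M had been computed as the dst->src mapping instead, the flags
        // would become cv::INTER_LINEAR | cv::WARP_INVERSE_MAP.
        cv::imwrite("rotated.jpg", dst);
        return 0;
    }
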
:param flags:Combination of interpolation methods (see :ocv:func:`resize` ) and the optional flag ``WARP_INVERSE_MAP`` that means that ``M`` is the inverse transformation ( :math:`\texttt{dst}\rightarrow\texttt{src}` ).
:param borderMode:Pixel extrapolation method (see :ocv:func:`borderInterpolate` ). When \ ``borderMode=BORDER_TRANSPARENT`` , it means that the pixels in the destination image that correspond to the "outliers" in the source image are not modified by the function.
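
A similar sketch with :ocv:func:`warpPerspective` , where the 3x3 matrix comes from :ocv:func:`getPerspectiveTransform` and the corner correspondences are made up for illustration: ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/highgui/highgui.hpp>

    int main()
    {
        cv::Mat src = cv::imread("image.jpg");
        if (src.empty()) return -1;

        // Map the four image corners to a new quadrilateral to get a 3x3 homography.
        cv::Point2f srcQuad[4] = {
            cv::Point2f(0.f, 0.f),                      cv::Point2f(src.cols - 1.f, 0.f),
            cv::Point2f(src.cols - 1.f, src.rows - 1.f), cv::Point2f(0.f, src.rows - 1.f) };
        cv::Point2f dstQuad[4] = {
            cv::Point2f(50.f, 0.f),                      cv::Point2f(src.cols - 1.f, 30.f),
            cv::Point2f(src.cols - 50.f, src.rows - 1.f), cv::Point2f(0.f, src.rows - 31.f) };
        cv::Mat M = cv::getPerspectiveTransform(srcQuad, dstQuad);

        cv::Mat dst;
        cv::warpPerspective(src, dst, M, src.size(), cv::INTER_LINEAR,
                            cv::BORDER_REPLICATE);

        cv::imwrite("warped.jpg", dst);
        return 0;
    }
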
:param cameraMatrix:Input camera matrix :math:`A=\vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}` .
:param distCoeffs:Input vector of distortion coefficients :math:`(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]])` of 4, 5, or 8 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.
:param R:Optional rectification transformation in the object space (3x3 matrix). ``R1`` or ``R2`` , computed by :ocv:func:`stereoRectify` , can be passed here. If the matrix is empty, the identity transformation is assumed. In ``cvInitUndistortMap`` , ``R`` is assumed to be an identity matrix.
The function computes the joint undistortion and rectification transformation and represents the result in the form of maps for :ocv:func:`remap` . The undistorted image looks like the original, as if it were captured with a camera using the camera matrix ``newCameraMatrix`` and zero distortion. In case of a monocular camera, ``newCameraMatrix`` is usually equal to ``cameraMatrix`` , or it can be computed by :ocv:func:`getOptimalNewCameraMatrix` for better control over scaling. In case of a stereo camera, ``newCameraMatrix`` is normally set to ``P1`` or ``P2`` computed by :ocv:func:`stereoRectify` . Also, this new camera is oriented differently in the coordinate space, according to ``R`` . That, for example, helps to align the two heads of a stereo camera so that the epipolar lines on both images become horizontal and have the same y-coordinate (in case of a horizontally aligned stereo camera).
The function actually builds the maps for the inverse mapping algorithm that is used by :ocv:func:`remap` . That is, for each pixel :math:`(u, v)` in the destination (corrected and rectified) image, the function computes the corresponding coordinates in the source image (that is, in the original image from the camera). The following process is applied:
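
.. math::

    \begin{array}{l}
    x  \leftarrow (u - {c'}_x)/{f'}_x  \\
    y  \leftarrow (v - {c'}_y)/{f'}_y  \\
    {[X\,Y\,W]}^T  \leftarrow R^{-1} \cdot [x \, y \, 1]^T  \\
    x'  \leftarrow X/W  \\
    y'  \leftarrow Y/W  \\
    x''  \leftarrow x' (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x' y' + p_2 (r^2 + 2 x'^2)  \\
    y''  \leftarrow y' (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2 y'^2) + 2 p_2 x' y'  \\
    \texttt{map}_x(u,v)  \leftarrow x'' f_x + c_x  \\
    \texttt{map}_y(u,v)  \leftarrow y'' f_y + c_y
    \end{array}

where :math:`r^2 = x'^2 + y'^2` , :math:`(k_1, k_2, p_1, p_2[, k_3])` are the distortion coefficients, :math:`(f_x, f_y, c_x, c_y)` come from ``cameraMatrix`` , and :math:`({f'}_x, {f'}_y, {c'}_x, {c'}_y)` come from ``newCameraMatrix`` .

A minimal sketch of the typical monocular workflow follows: the maps are computed once and reused for every frame, and the intrinsics and distortion coefficients below are made-up placeholders that would normally come from :ocv:func:`calibrateCamera` : ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/highgui/highgui.hpp>

    int main()
    {
        cv::Mat src = cv::imread("distorted.jpg");
        if (src.empty()) return -1;

        // Hypothetical intrinsics and distortion coefficients.
        cv::Mat cameraMatrix = (cv::Mat_<double>(3,3) <<
            500, 0,   src.cols*0.5,
            0,   500, src.rows*0.5,
            0,   0,   1);
        cv::Mat distCoeffs = (cv::Mat_<double>(1,5) << 0.1, -0.05, 0.001, 0.001, 0.0);

        // Monocular case: R is empty (identity) and newCameraMatrix == cameraMatrix.
        cv::Mat map1, map2;
        cv::initUndistortRectifyMap(cameraMatrix, distCoeffs, cv::Mat(),
                                    cameraMatrix, src.size(), CV_16SC2, map1, map2);

        cv::Mat dst;
        cv::remap(src, dst, map1, map2, cv::INTER_LINEAR);
        cv::imwrite("undistorted.jpg", dst);
        return 0;
    }
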
:ocv:func:`stereoRectify` , which in turn is called after
:ocv:func:`stereoCalibrate` . But if the stereo camera was not calibrated, it is still possible to compute the rectification transformations directly from the fundamental matrix using
:ocv:func:`stereoRectifyUncalibrated` . For each camera, the function computes the homography ``H`` as the rectification transformation in the pixel domain, not a rotation matrix ``R`` in 3D space. ``R`` can be computed from ``H`` as
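
.. math::

    \texttt{R} = \texttt{cameraMatrix} ^{-1} \cdot \texttt{H} \cdot \texttt{cameraMatrix}

where ``cameraMatrix`` can be chosen arbitrarily.
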
:param centerPrincipalPoint:Location of the principal point in the new camera matrix. The parameter indicates whether this location should be at the image center or not.
The function returns the camera matrix that is either an exact copy of the input ``cameraMatrix`` (when ``centerPrincipalPoint=false`` ), or the modified one (when ``centerPrincipalPoint=true`` ).
By default, the undistortion functions in OpenCV (see :ocv:func:`initUndistortRectifyMap` , :ocv:func:`undistort` ) do not move the principal point. However, when you work with stereo, it is important to move the principal points in both views to the same y-coordinate (which is required by most stereo correspondence algorithms), and maybe to the same x-coordinate too. So, you can form the new camera matrix for each view where the principal points are located at the center.
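
A minimal sketch of the centering behavior, using made-up intrinsics and image size: ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <iostream>

    int main()
    {
        // Hypothetical intrinsics with an off-center principal point.
        cv::Mat cameraMatrix = (cv::Mat_<double>(3,3) <<
            500, 0, 300,
            0, 500, 200,
            0, 0,   1);

        // With centerPrincipalPoint=true, (cx, cy) is moved to the image center.
        cv::Mat newK = cv::getDefaultNewCameraMatrix(cameraMatrix, cv::Size(640, 480), true);
        std::cout << newK << std::endl;  // cx becomes 319.5, cy becomes 239.5
        return 0;
    }
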
:param dst:Output (corrected) image that has the same size and type as ``src`` .
:param cameraMatrix:Input camera matrix :math:`A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}` .
:param distCoeffs:Input vector of distortion coefficients :math:`(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]])` of 4, 5, or 8 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.
:param newCameraMatrix:Camera matrix of the distorted image. By default, it is the same as ``cameraMatrix`` but you may additionally scale and shift the result by using a different matrix.
The function transforms an image to compensate for radial and tangential lens distortion.
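
A minimal sketch of the single-call usage, with made-up intrinsics and file names (for processing many frames from the same camera, building the maps once with :ocv:func:`initUndistortRectifyMap` and reusing them with :ocv:func:`remap` is more efficient): ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/highgui/highgui.hpp>

    int main()
    {
        cv::Mat src = cv::imread("distorted.jpg");
        if (src.empty()) return -1;

        // Hypothetical intrinsics and distortion coefficients.
        cv::Mat cameraMatrix = (cv::Mat_<double>(3,3) <<
            500, 0,   src.cols*0.5,
            0,   500, src.rows*0.5,
            0,   0,   1);
        cv::Mat distCoeffs = (cv::Mat_<double>(1,5) << 0.1, -0.05, 0.001, 0.001, 0.0);

        cv::Mat dst;
        // newCameraMatrix is omitted, so it defaults to cameraMatrix.
        cv::undistort(src, dst, cameraMatrix, distCoeffs);
        cv::imwrite("undistorted.jpg", dst);
        return 0;
    }
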
:param distCoeffs:Input vector of distortion coefficients :math:`(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]])` of 4, 5, or 8 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.
:param R:Rectification transformation in the object space (3x3 matrix). ``R1`` or ``R2`` computed by :ocv:func:`stereoRectify` can be passed here. If the matrix is empty, the identity transformation is used.
:param P:New camera matrix (3x3) or new projection matrix (3x4). ``P1`` or ``P2`` computed by :ocv:func:`stereoRectify` can be passed here. If the matrix is empty, the identity new camera matrix is used.
The function is similar to :ocv:func:`undistort` and :ocv:func:`initUndistortRectifyMap` but it operates on a sparse set of points instead of a raster image. Also, the function performs a reverse transformation to
:ocv:func:`projectPoints` . In case of a 3D object, it does not reconstruct its 3D coordinates, but for a planar object, it does, up to a translation vector, if the proper ``R`` is specified. ::
// (u,v) is the input point, (u', v') is the output point
// camera_matrix=[fx 0 cx; 0 fy cy; 0 0 1]
// P=[fx' 0 cx' tx; 0 fy' cy' ty; 0 0 1 tz]
x" = (u - cx)/fx
y" = (v - cy)/fy
(x',y') = undistort(x",y",dist_coeffs)
[X,Y,W]T = R*[x' y' 1]T
x = X/W, y = Y/W
u' = x*fx' + cx'
v' = y*fy' + cy',
where ``undistort()`` is an approximate iterative algorithm that estimates the normalized original point coordinates out of the normalized distorted point coordinates ("normalized" means that the coordinates do not depend on the camera matrix).
The function can be used for both a stereo camera head and a monocular camera (when ``R`` is empty).
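
A minimal sketch for the monocular case, with made-up intrinsics and a couple of arbitrary pixel coordinates; passing ``cameraMatrix`` as ``P`` makes the output come back in pixel coordinates rather than normalized coordinates: ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <iostream>

    int main()
    {
        // Hypothetical intrinsics and distortion coefficients.
        cv::Mat cameraMatrix = (cv::Mat_<double>(3,3) <<
            500, 0, 320,
            0, 500, 240,
            0, 0,   1);
        cv::Mat distCoeffs = (cv::Mat_<double>(1,5) << 0.1, -0.05, 0.001, 0.001, 0.0);

        // A few distorted pixel coordinates.
        std::vector<cv::Point2f> distorted;
        distorted.push_back(cv::Point2f(100.f, 100.f));
        distorted.push_back(cv::Point2f(320.f, 240.f));

        // With no R and no P, the output would be in normalized image coordinates;
        // passing cameraMatrix as P returns pixel coordinates instead.
        std::vector<cv::Point2f> undistorted;
        cv::undistortPoints(distorted, undistorted, cameraMatrix, distCoeffs,
                            cv::noArray(), cameraMatrix);

        for (size_t i = 0; i < undistorted.size(); i++)
            std::cout << distorted[i] << " -> " << undistorted[i] << std::endl;
        return 0;
    }
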