Merge remote-tracking branch 'upstream/3.4' into merge-3.4

pull/14257/head
Alexander Alekhin
commit 473941c341
Changed files (lines changed):
  110  doc/py_tutorials/py_imgproc/py_thresholding/py_thresholding.markdown
    2  modules/dnn/src/op_inf_engine.cpp
    2  modules/dnn/test/test_ie_models.cpp
  666  modules/imgcodecs/src/grfmt_tiff.cpp
    7  modules/imgcodecs/src/grfmt_tiff.hpp
    5  modules/imgcodecs/src/utils.cpp
    5  modules/imgcodecs/src/utils.hpp
   62  modules/imgcodecs/test/test_tiff.cpp

@@ -4,20 +4,21 @@ Image Thresholding {#tutorial_py_thresholding}
 Goal
 ----
 
-- In this tutorial, you will learn Simple thresholding, Adaptive thresholding, Otsu's thresholding
-  etc.
-- You will learn these functions : **cv.threshold**, **cv.adaptiveThreshold** etc.
+- In this tutorial, you will learn Simple thresholding, Adaptive thresholding and Otsu's thresholding.
+- You will learn the functions **cv.threshold** and **cv.adaptiveThreshold**.
 
 Simple Thresholding
 -------------------
 
-Here, the matter is straight forward. If pixel value is greater than a threshold value, it is
-assigned one value (may be white), else it is assigned another value (may be black). The function
-used is **cv.threshold**. First argument is the source image, which **should be a grayscale
-image**. Second argument is the threshold value which is used to classify the pixel values. Third
-argument is the maxVal which represents the value to be given if pixel value is more than (sometimes
-less than) the threshold value. OpenCV provides different styles of thresholding and it is decided
-by the fourth parameter of the function. Different types are:
+Here, the matter is straight forward. For every pixel, the same threshold value is applied.
+If the pixel value is smaller than the threshold, it is set to 0, otherwise it is set to a maximum value.
+The function **cv.threshold** is used to apply the thresholding.
+The first argument is the source image, which **should be a grayscale image**.
+The second argument is the threshold value which is used to classify the pixel values.
+The third argument is the maximum value which is assigned to pixel values exceeding the threshold.
+OpenCV provides different types of thresholding which is given by the fourth parameter of the function.
+Basic thresholding as described above is done by using the type cv.THRESH_BINARY.
+All simple thresholding types are:
 
 - cv.THRESH_BINARY
 - cv.THRESH_BINARY_INV
@@ -25,12 +26,12 @@ by the fourth parameter of the function. Different types are:
 - cv.THRESH_TOZERO
 - cv.THRESH_TOZERO_INV
 
-Documentation clearly explain what each type is meant for. Please check out the documentation.
+See the documentation of the types for the differences.
 
-Two outputs are obtained. First one is a **retval** which will be explained later. Second output is
-our **thresholded image**.
+The method returns two outputs.
+The first is the threshold that was used and the second output is the **thresholded image**.
 
-Code :
+This code compares the different simple thresholding types:
 
 @code{.py}
 import cv2 as cv
 import numpy as np
@@ -53,34 +54,31 @@ for i in xrange(6):
 plt.show()
 @endcode
 
-@note To plot multiple images, we have used plt.subplot() function. Please checkout Matplotlib docs
-for more details.
+@note To plot multiple images, we have used the plt.subplot() function. Please checkout the matplotlib docs for more details.
 
-Result is given below :
+The code yields this result:
 
 ![image](images/threshold.jpg)
 
 Adaptive Thresholding
 ---------------------
 
-In the previous section, we used a global value as threshold value. But it may not be good in all
-the conditions where image has different lighting conditions in different areas. In that case, we go
-for adaptive thresholding. In this, the algorithm calculate the threshold for a small regions of the
-image. So we get different thresholds for different regions of the same image and it gives us better
-results for images with varying illumination.
+In the previous section, we used one global value as a threshold.
+But this might not be good in all cases, e.g. if an image has different lighting conditions in different areas.
+In that case, adaptive thresholding can help.
+Here, the algorithm determines the threshold for a pixel based on a small region around it.
+So we get different thresholds for different regions of the same image which gives better results for images with varying illumination.
 
-It has three ‘special’ input params and only one output argument.
+Additionally to the parameters described above, the method cv.adaptiveThreshold takes three input parameters:
 
-**Adaptive Method** - It decides how thresholding value is calculated.
-    - cv.ADAPTIVE_THRESH_MEAN_C : threshold value is the mean of neighbourhood area.
-    - cv.ADAPTIVE_THRESH_GAUSSIAN_C : threshold value is the weighted sum of neighbourhood
-      values where weights are a gaussian window.
+The **adaptiveMethod** decides how the threshold value is calculated:
+- cv.ADAPTIVE_THRESH_MEAN_C: The threshold value is the mean of the neighbourhood area minus the constant **C**.
+- cv.ADAPTIVE_THRESH_GAUSSIAN_C: The threshold value is a gaussian-weighted sum of the neighbourhood
+  values minus the constant **C**.
 
-**Block Size** - It decides the size of neighbourhood area.
+The **blockSize** determines the size of the neighbourhood area and **C** is a constant that is subtracted from the mean or weighted sum of the neighbourhood pixels.
 
-**C** - It is just a constant which is subtracted from the mean or weighted mean calculated.
-
-Below piece of code compares global thresholding and adaptive thresholding for an image with varying
+The code below compares global thresholding and adaptive thresholding for an image with varying
 illumination:
 
 @code{.py}
 import cv2 as cv
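
[Review note] A minimal sketch of the adaptive call described in this hunk; blockSize=11 and C=2 are illustrative values, and the file name is a placeholder.

@code{.py}
import cv2 as cv

img = cv.imread('sudoku.png', 0)
img = cv.medianBlur(img, 5)
# per-pixel threshold = mean of the 11x11 neighbourhood minus C=2
th = cv.adaptiveThreshold(img, 255, cv.ADAPTIVE_THRESH_MEAN_C,
                          cv.THRESH_BINARY, 11, 2)
@endcode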
@@ -106,33 +104,30 @@ for i in xrange(4):
     plt.xticks([]),plt.yticks([])
 plt.show()
 @endcode
 
-Result :
+Result:
 
 ![image](images/ada_threshold.jpg)
 
-Otsus Binarization
+Otsu's Binarization
 -------------------
 
-In the first section, I told you there is a second parameter **retVal**. Its use comes when we go
-for Otsu’s Binarization. So what is it?
-
-In global thresholding, we used an arbitrary value for threshold value, right? So, how can we know a
-value we selected is good or not? Answer is, trial and error method. But consider a **bimodal
-image** (*In simple words, bimodal image is an image whose histogram has two peaks*). For that
-image, we can approximately take a value in the middle of those peaks as threshold value, right ?
-That is what Otsu binarization does. So in simple words, it automatically calculates a threshold
-value from image histogram for a bimodal image. (For images which are not bimodal, binarization
-won’t be accurate.)
+In global thresholding, we used an arbitrary chosen value as a threshold.
+In contrast, Otsu's method avoids having to choose a value and determines it automatically.
+
+Consider an image with only two distinct image values (*bimodal image*), where the histogram would only consist of two peaks.
+A good threshold would be in the middle of those two values.
+Similarly, Otsu's method determines an optimal global threshold value from the image histogram.
 
-For this, our cv.threshold() function is used, but pass an extra flag, cv.THRESH_OTSU. **For
-threshold value, simply pass zero**. Then the algorithm finds the optimal threshold value and
-returns you as the second output, retVal. If Otsu thresholding is not used, retVal is same as the
-threshold value you used.
+In order to do so, the cv.threshold() function is used, where cv.THRESH_OTSU is passed as an extra flag.
+The threshold value can be chosen arbitrary.
+The algorithm then finds the optimal threshold value which is returned as the first output.
 
-Check out below example. Input image is a noisy image. In first case, I applied global thresholding
-for a value of 127. In second case, I applied Otsu’s thresholding directly. In third case, I
-filtered image with a 5x5 gaussian kernel to remove the noise, then applied Otsu thresholding. See
-how noise filtering improves the result.
+Check out the example below.
+The input image is a noisy image.
+In the first case, global thresholding with a value of 127 is applied.
+In the second case, Otsu's thresholding is applied directly.
+In the third case, the image is first filtered with a 5x5 gaussian kernel to remove the noise, then Otsu thresholding is applied.
+See how noise filtering improves the result.
 
 @code{.py}
 import cv2 as cv
 import numpy as np
@@ -167,17 +162,17 @@ for i in xrange(3):
     plt.title(titles[i*3+2]), plt.xticks([]), plt.yticks([])
 plt.show()
 @endcode
 
-Result :
+Result:
 
 ![image](images/otsu.jpg)
 
-### How Otsu's Binarization Works?
+### How does Otsu's Binarization work?
 
 This section demonstrates a Python implementation of Otsu's binarization to show how it works
 actually. If you are not interested, you can skip this.
 
 Since we are working with bimodal images, Otsu's algorithm tries to find a threshold value (t) which
-minimizes the **weighted within-class variance** given by the relation :
+minimizes the **weighted within-class variance** given by the relation:
 
 \f[\sigma_w^2(t) = q_1(t)\sigma_1^2(t)+q_2(t)\sigma_2^2(t)\f]
 
@@ -186,7 +181,7 @@ where
 
 \f[q_1(t) = \sum_{i=1}^{t} P(i) \quad \& \quad q_2(t) = \sum_{i=t+1}^{I} P(i)\f]\f[\mu_1(t) = \sum_{i=1}^{t} \frac{iP(i)}{q_1(t)} \quad \& \quad \mu_2(t) = \sum_{i=t+1}^{I} \frac{iP(i)}{q_2(t)}\f]\f[\sigma_1^2(t) = \sum_{i=1}^{t} [i-\mu_1(t)]^2 \frac{P(i)}{q_1(t)} \quad \& \quad \sigma_2^2(t) = \sum_{i=t+1}^{I} [i-\mu_2(t)]^2 \frac{P(i)}{q_2(t)}\f]
 
 It actually finds a value of t which lies in between two peaks such that variances to both classes
-are minimum. It can be simply implemented in Python as follows:
+are minimal. It can be simply implemented in Python as follows:
 
 @code{.py}
 img = cv.imread('noisy2.png',0)
 blur = cv.GaussianBlur(img,(5,5),0)
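
[Review note] The hunk only shows the first lines of the reference implementation; here is a self-contained sketch of the whole search (Python 3 range instead of xrange, histogram normalized to probabilities) that follows the formulas above:

@code{.py}
import cv2 as cv
import numpy as np

img = cv.imread('noisy2.png', 0)
blur = cv.GaussianBlur(img, (5, 5), 0)

# normalized histogram and its cumulative distribution
hist = cv.calcHist([blur], [0], None, [256], [0, 256])
hist_norm = hist.ravel() / hist.sum()
Q = hist_norm.cumsum()

bins = np.arange(256)
fn_min = np.inf
thresh = -1

for i in range(1, 256):
    p1, p2 = np.hsplit(hist_norm, [i])  # probabilities of the two classes
    q1, q2 = Q[i], Q[255] - Q[i]        # class weights q_1(t), q_2(t)
    if q1 < 1.e-6 or q2 < 1.e-6:
        continue
    b1, b2 = np.hsplit(bins, [i])       # intensity values of each class
    # class means and variances
    m1, m2 = np.sum(p1 * b1) / q1, np.sum(p2 * b2) / q2
    v1 = np.sum(((b1 - m1) ** 2) * p1) / q1
    v2 = np.sum(((b2 - m2) ** 2) * p2) / q2
    # weighted within-class variance sigma_w^2(t)
    fn = v1 * q1 + v2 * q2
    if fn < fn_min:
        fn_min, thresh = fn, i

# compare against OpenCV's built-in Otsu threshold
ret, otsu = cv.threshold(blur, 0, 255, cv.THRESH_BINARY + cv.THRESH_OTSU)
print(thresh, ret)
@endcode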
@@ -220,7 +215,6 @@ for i in xrange(1,256):
 ret, otsu = cv.threshold(blur,0,255,cv.THRESH_BINARY+cv.THRESH_OTSU)
 print( "{} {}".format(thresh,ret) )
 @endcode
 
-*(Some of the functions may be new here, but we will cover them in coming chapters)*
 
 Additional Resources
 --------------------

@@ -784,6 +784,8 @@ void InfEngineBackendNet::initPlugin(InferenceEngine::ICNNNetwork& net)
                 continue;
 #ifdef _WIN32
             std::string libName = "cpu_extension" + suffixes[i] + ".dll";
+#elif defined(__APPLE__)
+            std::string libName = "libcpu_extension" + suffixes[i] + ".dylib";
 #else
             std::string libName = "libcpu_extension" + suffixes[i] + ".so";
 #endif  // _WIN32

@@ -172,6 +172,8 @@ void runIE(Target target, const std::string& xmlPath, const std::string& binPath
             continue;
 #ifdef _WIN32
         std::string libName = "cpu_extension" + suffixes[i] + ".dll";
+#elif defined(__APPLE__)
+        std::string libName = "libcpu_extension" + suffixes[i] + ".dylib";
 #else
         std::string libName = "libcpu_extension" + suffixes[i] + ".so";
 #endif  // _WIN32

@@ -48,6 +48,8 @@
 #include "precomp.hpp"
 
 #ifdef HAVE_TIFF
+#include <opencv2/core/utils/logger.hpp>
+
 #include "grfmt_tiff.hpp"
 #include <limits>
@@ -61,23 +63,58 @@ using namespace tiff_dummy_namespace;
 namespace cv
 {
 
+#define CV_TIFF_CHECK_CALL(call) \
+    if (0 == (call)) { \
+        CV_LOG_WARNING(NULL, "OpenCV TIFF(line " << __LINE__ << "): failed " #call); \
+        CV_Error(Error::StsError, "OpenCV TIFF: failed " #call); \
+    }
+
+#define CV_TIFF_CHECK_CALL_INFO(call) \
+    if (0 == (call)) { \
+        CV_LOG_INFO(NULL, "OpenCV TIFF(line " << __LINE__ << "): failed optional call: " #call ", ignoring"); \
+    }
+
+#define CV_TIFF_CHECK_CALL_DEBUG(call) \
+    if (0 == (call)) { \
+        CV_LOG_DEBUG(NULL, "OpenCV TIFF(line " << __LINE__ << "): failed optional call: " #call ", ignoring"); \
+    }
+
+static void cv_tiffCloseHandle(void* handle)
+{
+    TIFFClose((TIFF*)handle);
+}
+
+static void cv_tiffErrorHandler(const char* module, const char* fmt, va_list ap)
+{
+    if (cv::utils::logging::getLogLevel() < cv::utils::logging::LOG_LEVEL_DEBUG)
+        return;
+    // TODO cv::vformat() with va_list parameter
+    fprintf(stderr, "OpenCV TIFF: ");
+    if (module != NULL)
+        fprintf(stderr, "%s: ", module);
+    fprintf(stderr, "Warning, ");
+    vfprintf(stderr, fmt, ap);
+    fprintf(stderr, ".\n");
+}
+
+static bool cv_tiffSetErrorHandler_()
+{
+    TIFFSetErrorHandler(cv_tiffErrorHandler);
+    TIFFSetWarningHandler(cv_tiffErrorHandler);
+    return true;
+}
+
+static bool cv_tiffSetErrorHandler()
+{
+    static bool v = cv_tiffSetErrorHandler_();
+    return v;
+}
+
 static const char fmtSignTiffII[] = "II\x2a\x00";
 static const char fmtSignTiffMM[] = "MM\x00\x2a";
 
-static int grfmt_tiff_err_handler_init = 0;
-static void GrFmtSilentTIFFErrorHandler( const char*, const char*, va_list ) {}
-
 TiffDecoder::TiffDecoder()
 {
-    m_tif = 0;
-    if( !grfmt_tiff_err_handler_init )
-    {
-        grfmt_tiff_err_handler_init = 1;
-        TIFFSetErrorHandler( GrFmtSilentTIFFErrorHandler );
-        TIFFSetWarningHandler( GrFmtSilentTIFFErrorHandler );
-    }
     m_hdr = false;
     m_buf_supported = true;
     m_buf_pos = 0;
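
[Review note] The new cv_tiffErrorHandler only forwards libtiff diagnostics when OpenCV's log level is at least DEBUG. A hedged way to observe this from Python, assuming the build honours the OPENCV_LOG_LEVEL environment variable (the file name is a hypothetical malformed TIFF):

@code{.py}
import os
os.environ['OPENCV_LOG_LEVEL'] = 'DEBUG'  # must be set before cv2 is imported

import cv2 as cv
img = cv.imread('broken.tiff', cv.IMREAD_UNCHANGED)
# libtiff warnings/errors now appear on stderr as "OpenCV TIFF: ... Warning, ..."
@endcode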
@@ -86,12 +123,7 @@ TiffDecoder::TiffDecoder()
 
 void TiffDecoder::close()
 {
-    if( m_tif )
-    {
-        TIFF* tif = (TIFF*)m_tif;
-        TIFFClose( tif );
-        m_tif = 0;
-    }
+    m_tif.release();
 }
 
 TiffDecoder::~TiffDecoder()
@@ -113,11 +145,13 @@ bool TiffDecoder::checkSignature( const String& signature ) const
 
 int TiffDecoder::normalizeChannelsNumber(int channels) const
 {
+    CV_Assert(channels <= 4);
     return channels > 4 ? 4 : channels;
 }
 
 ImageDecoder TiffDecoder::newDecoder() const
 {
+    cv_tiffSetErrorHandler();
     return makePtr<TiffDecoder>();
 }
@@ -201,8 +235,8 @@ bool TiffDecoder::readHeader()
 {
     bool result = false;
 
-    TIFF* tif = static_cast<TIFF*>(m_tif);
-    if (!m_tif)
+    TIFF* tif = static_cast<TIFF*>(m_tif.get());
+    if (!tif)
     {
         // TIFFOpen() mode flags are different to fopen(). A 'b' in mode "rb" has no effect when reading.
         // http://www.remotesensing.org/libtiff/man/TIFFOpen.3tiff.html
@@ -221,25 +255,30 @@ bool TiffDecoder::readHeader()
         {
             tif = TIFFOpen(m_filename.c_str(), "r");
         }
+        if (tif)
+            m_tif.reset(tif, cv_tiffCloseHandle);
+        else
+            m_tif.release();
     }
 
-    if( tif )
+    if (tif)
     {
         uint32 wdth = 0, hght = 0;
         uint16 photometric = 0;
-        m_tif = tif;
 
-        if( TIFFGetField( tif, TIFFTAG_IMAGEWIDTH, &wdth ) &&
-            TIFFGetField( tif, TIFFTAG_IMAGELENGTH, &hght ) &&
-            TIFFGetField( tif, TIFFTAG_PHOTOMETRIC, &photometric ))
+        CV_TIFF_CHECK_CALL(TIFFGetField(tif, TIFFTAG_IMAGEWIDTH, &wdth));
+        CV_TIFF_CHECK_CALL(TIFFGetField(tif, TIFFTAG_IMAGELENGTH, &hght));
+        CV_TIFF_CHECK_CALL(TIFFGetField(tif, TIFFTAG_PHOTOMETRIC, &photometric));
         {
-            uint16 bpp=8, ncn = photometric > 1 ? 3 : 1;
-            TIFFGetField( tif, TIFFTAG_BITSPERSAMPLE, &bpp );
-            TIFFGetField( tif, TIFFTAG_SAMPLESPERPIXEL, &ncn );
+            bool isGrayScale = photometric == PHOTOMETRIC_MINISWHITE || photometric == PHOTOMETRIC_MINISBLACK;
+            uint16 bpp = 8, ncn = isGrayScale ? 1 : 3;
+            CV_TIFF_CHECK_CALL(TIFFGetField(tif, TIFFTAG_BITSPERSAMPLE, &bpp));
+            CV_TIFF_CHECK_CALL_DEBUG(TIFFGetField(tif, TIFFTAG_SAMPLESPERPIXEL, &ncn));
 
             m_width = wdth;
             m_height = hght;
-            if((bpp == 32 && ncn == 3) || photometric == PHOTOMETRIC_LOGLUV)
+            if (ncn == 3 && photometric == PHOTOMETRIC_LOGLUV)
             {
                 m_type = CV_32FC3;
                 m_hdr = true;
@@ -256,23 +295,23 @@ bool TiffDecoder::readHeader()
             switch(bpp)
             {
                 case 1:
-                    m_type = CV_MAKETYPE(CV_8U, photometric > 1 ? wanted_channels : 1);
+                    m_type = CV_MAKETYPE(CV_8U, !isGrayScale ? wanted_channels : 1);
                     result = true;
                     break;
                 case 8:
-                    m_type = CV_MAKETYPE(CV_8U, photometric > 1 ? wanted_channels : 1);
+                    m_type = CV_MAKETYPE(CV_8U, !isGrayScale ? wanted_channels : 1);
                     result = true;
                     break;
                 case 16:
-                    m_type = CV_MAKETYPE(CV_16U, photometric > 1 ? wanted_channels : 1);
+                    m_type = CV_MAKETYPE(CV_16U, !isGrayScale ? wanted_channels : 1);
                     result = true;
                     break;
                 case 32:
-                    m_type = CV_MAKETYPE(CV_32F, photometric > 1 ? 3 : 1);
+                    m_type = CV_MAKETYPE(CV_32F, wanted_channels);
                     result = true;
                     break;
                 case 64:
-                    m_type = CV_MAKETYPE(CV_64F, photometric > 1 ? 3 : 1);
+                    m_type = CV_MAKETYPE(CV_64F, wanted_channels);
                     result = true;
                     break;
                 default:
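
[Review note] The bpp/photometric mapping above decides the Mat type a decoded page gets. A quick way to observe it from Python (file names are placeholders for 8-bit, 16-bit and 32-bit float TIFFs):

@code{.py}
import cv2 as cv

# IMREAD_UNCHANGED keeps the native depth selected by readHeader()
for name in ('gray8.tiff', 'gray16.tiff', 'float32.tiff'):
    m = cv.imread(name, cv.IMREAD_UNCHANGED)
    if m is not None:
        print(name, m.dtype, m.shape)  # uint8 / uint16 / float32
@endcode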
@@ -290,206 +329,210 @@ bool TiffDecoder::readHeader()
 bool TiffDecoder::nextPage()
 {
     // Prepare the next page, if any.
-    return m_tif &&
-           TIFFReadDirectory(static_cast<TIFF*>(m_tif)) &&
+    return !m_tif.empty() &&
+           TIFFReadDirectory(static_cast<TIFF*>(m_tif.get())) &&
            readHeader();
 }
 
 bool TiffDecoder::readData( Mat& img )
 {
-    if(m_hdr && img.type() == CV_32FC3)
-    {
-        return readData_32FC3(img);
-    }
-    if(img.type() == CV_32FC1)
-    {
-        return readData_32FC1(img);
-    }
-    bool result = false;
+    int type = img.type();
+    int depth = CV_MAT_DEPTH(type);
+
+    CV_Assert(!m_tif.empty());
+    TIFF* tif = (TIFF*)m_tif.get();
+
+    uint16 photometric = (uint16)-1;
+    CV_TIFF_CHECK_CALL(TIFFGetField(tif, TIFFTAG_PHOTOMETRIC, &photometric));
+
+    if (m_hdr && depth >= CV_32F)
+    {
+        CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_SGILOGDATAFMT, SGILOGDATAFMT_FLOAT));
+    }
+
     bool color = img.channels() > 1;
 
-    if( img.depth() != CV_8U && img.depth() != CV_16U && img.depth() != CV_32F && img.depth() != CV_64F )
-        return false;
+    CV_CheckType(type, depth == CV_8U || depth == CV_16U || depth == CV_32F || depth == CV_64F, "");
 
-    if( m_tif && m_width && m_height )
+    if (m_width && m_height)
     {
-        TIFF* tif = (TIFF*)m_tif;
-        uint32 tile_width0 = m_width, tile_height0 = 0;
-        int x, y, i;
-        int is_tiled = TIFFIsTiled(tif);
-        uint16 photometric;
-        TIFFGetField( tif, TIFFTAG_PHOTOMETRIC, &photometric );
-        uint16 bpp = 8, ncn = photometric > 1 ? 3 : 1;
-        TIFFGetField( tif, TIFFTAG_BITSPERSAMPLE, &bpp );
-        TIFFGetField( tif, TIFFTAG_SAMPLESPERPIXEL, &ncn );
+        int is_tiled = TIFFIsTiled(tif) != 0;
+        bool isGrayScale = photometric == PHOTOMETRIC_MINISWHITE || photometric == PHOTOMETRIC_MINISBLACK;
+        uint16 bpp = 8, ncn = isGrayScale ? 1 : 3;
+        CV_TIFF_CHECK_CALL(TIFFGetField(tif, TIFFTAG_BITSPERSAMPLE, &bpp));
+        CV_TIFF_CHECK_CALL_DEBUG(TIFFGetField(tif, TIFFTAG_SAMPLESPERPIXEL, &ncn));
         uint16 img_orientation = ORIENTATION_TOPLEFT;
-        TIFFGetField( tif, TIFFTAG_ORIENTATION, &img_orientation);
+        CV_TIFF_CHECK_CALL_DEBUG(TIFFGetField(tif, TIFFTAG_ORIENTATION, &img_orientation));
         bool vert_flip = (img_orientation == ORIENTATION_BOTRIGHT) || (img_orientation == ORIENTATION_RIGHTBOT) ||
                          (img_orientation == ORIENTATION_BOTLEFT) || (img_orientation == ORIENTATION_LEFTBOT);
         const int bitsPerByte = 8;
         int dst_bpp = (int)(img.elemSize1() * bitsPerByte);
         int wanted_channels = normalizeChannelsNumber(img.channels());
 
-        if(dst_bpp == 8)
+        if (dst_bpp == 8)
        {
             char errmsg[1024];
-            if(!TIFFRGBAImageOK( tif, errmsg ))
+            if (!TIFFRGBAImageOK(tif, errmsg))
             {
+                CV_LOG_WARNING(NULL, "OpenCV TIFF: TIFFRGBAImageOK: " << errmsg);
                 close();
                 return false;
             }
         }
 
-        if( (!is_tiled) ||
-            (is_tiled &&
-             TIFFGetField( tif, TIFFTAG_TILEWIDTH, &tile_width0 ) &&
-             TIFFGetField( tif, TIFFTAG_TILELENGTH, &tile_height0 )))
+        uint32 tile_width0 = m_width, tile_height0 = 0;
+
+        if (is_tiled)
+        {
+            CV_TIFF_CHECK_CALL(TIFFGetField(tif, TIFFTAG_TILEWIDTH, &tile_width0));
+            CV_TIFF_CHECK_CALL(TIFFGetField(tif, TIFFTAG_TILELENGTH, &tile_height0));
+        }
+        else
         {
-            if(!is_tiled)
-                TIFFGetField( tif, TIFFTAG_ROWSPERSTRIP, &tile_height0 );
+            // optional
+            CV_TIFF_CHECK_CALL_DEBUG(TIFFGetField(tif, TIFFTAG_ROWSPERSTRIP, &tile_height0));
+        }
 
-            if( tile_width0 <= 0 )
+        {
+            if (tile_width0 == 0)
                 tile_width0 = m_width;
 
-            if( tile_height0 <= 0 ||
+            if (tile_height0 == 0 ||
                 (!is_tiled && tile_height0 == std::numeric_limits<uint32>::max()) )
                 tile_height0 = m_height;
 
-            if(dst_bpp == 8) {
+            if (dst_bpp == 8)
+            {
                 // we will use TIFFReadRGBA* functions, so allocate temporary buffer for 32bit RGBA
                 bpp = 8;
                 ncn = 4;
             }
-            const size_t buffer_size = (bpp/bitsPerByte) * ncn * tile_height0 * tile_width0;
-            AutoBuffer<uchar> _buffer( buffer_size );
+            else if (dst_bpp == 32 || dst_bpp == 64)
+            {
+                CV_Assert(ncn == img.channels());
+                CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_SAMPLEFORMAT, SAMPLEFORMAT_IEEEFP));
+            }
+            const size_t buffer_size = (bpp / bitsPerByte) * ncn * tile_height0 * tile_width0;
+            AutoBuffer<uchar> _buffer(buffer_size);
             uchar* buffer = _buffer.data();
             ushort* buffer16 = (ushort*)buffer;
-            float* buffer32 = (float*)buffer;
-            double* buffer64 = (double*)buffer;
             int tileidx = 0;
 
-            for( y = 0; y < m_height; y += tile_height0 )
+            for (int y = 0; y < m_height; y += (int)tile_height0)
             {
-                int tile_height = tile_height0;
-
-                if( y + tile_height > m_height )
-                    tile_height = m_height - y;
+                int tile_height = std::min((int)tile_height0, m_height - y);
 
-                uchar* data = img.ptr(vert_flip ? m_height - y - tile_height : y);
+                const int img_y = vert_flip ? m_height - y - tile_height : y;
 
-                for( x = 0; x < m_width; x += tile_width0, tileidx++ )
+                for(int x = 0; x < m_width; x += (int)tile_width0, tileidx++)
                 {
-                    int tile_width = tile_width0, ok;
-
-                    if( x + tile_width > m_width )
-                        tile_width = m_width - x;
+                    int tile_width = std::min((int)tile_width0, m_width - x);
 
-                    switch(dst_bpp)
+                    switch (dst_bpp)
                     {
                         case 8:
                         {
-                            uchar * bstart = buffer;
-                            if( !is_tiled )
-                                ok = TIFFReadRGBAStrip( tif, y, (uint32*)buffer );
-                            else
+                            uchar* bstart = buffer;
+                            if (!is_tiled)
                             {
-                                ok = TIFFReadRGBATile( tif, x, y, (uint32*)buffer );
-                                //Tiles fill the buffer from the bottom up
-                                bstart += (tile_height0 - tile_height) * tile_width0 * 4;
+                                CV_TIFF_CHECK_CALL(TIFFReadRGBAStrip(tif, y, (uint32*)buffer));
                             }
-                            if( !ok )
+                            else
                             {
-                                close();
-                                return false;
+                                CV_TIFF_CHECK_CALL(TIFFReadRGBATile(tif, x, y, (uint32*)buffer));
+                                // Tiles fill the buffer from the bottom up
+                                bstart += (tile_height0 - tile_height) * tile_width0 * 4;
                             }
 
-                            for( i = 0; i < tile_height; i++ )
-                                if( color )
+                            for (int i = 0; i < tile_height; i++)
+                            {
+                                if (color)
                                 {
                                     if (wanted_channels == 4)
                                     {
-                                        icvCvt_BGRA2RGBA_8u_C4R( bstart + i*tile_width0*4, 0,
-                                                data + x*4 + img.step*(tile_height - i - 1), 0,
-                                                Size(tile_width,1) );
+                                        icvCvt_BGRA2RGBA_8u_C4R(bstart + i*tile_width0*4, 0,
+                                                img.ptr(img_y + tile_height - i - 1, x), 0,
+                                                Size(tile_width, 1) );
                                     }
                                     else
                                     {
-                                        icvCvt_BGRA2BGR_8u_C4C3R( bstart + i*tile_width0*4, 0,
-                                                data + x*3 + img.step*(tile_height - i - 1), 0,
-                                                Size(tile_width,1), 2 );
+                                        icvCvt_BGRA2BGR_8u_C4C3R(bstart + i*tile_width0*4, 0,
+                                                img.ptr(img_y + tile_height - i - 1, x), 0,
+                                                Size(tile_width, 1), 2);
                                     }
                                 }
                                 else
+                                {
                                     icvCvt_BGRA2Gray_8u_C4C1R( bstart + i*tile_width0*4, 0,
-                                            data + x + img.step*(tile_height - i - 1), 0,
-                                            Size(tile_width,1), 2 );
+                                            img.ptr(img_y + tile_height - i - 1, x), 0,
+                                            Size(tile_width, 1), 2);
+                                }
+                            }
                             break;
                         }
 
                         case 16:
                         {
-                            if( !is_tiled )
-                                ok = (int)TIFFReadEncodedStrip( tif, tileidx, (uint32*)buffer, buffer_size ) >= 0;
+                            if (!is_tiled)
+                            {
+                                CV_TIFF_CHECK_CALL((int)TIFFReadEncodedStrip(tif, tileidx, (uint32*)buffer, buffer_size) >= 0);
+                            }
                            else
-                                ok = (int)TIFFReadEncodedTile( tif, tileidx, (uint32*)buffer, buffer_size ) >= 0;
-
-                            if( !ok )
                             {
-                                close();
-                                return false;
+                                CV_TIFF_CHECK_CALL((int)TIFFReadEncodedTile(tif, tileidx, (uint32*)buffer, buffer_size) >= 0);
                             }
 
-                            for( i = 0; i < tile_height; i++ )
+                            for (int i = 0; i < tile_height; i++)
                             {
-                                if( color )
+                                if (color)
                                 {
-                                    if( ncn == 1 )
+                                    if (ncn == 1)
                                     {
                                         icvCvt_Gray2BGR_16u_C1C3R(buffer16 + i*tile_width0*ncn, 0,
-                                                (ushort*)(data + img.step*i) + x*3, 0,
-                                                Size(tile_width,1) );
+                                                img.ptr<ushort>(img_y + i, x), 0,
+                                                Size(tile_width, 1));
                                     }
-                                    else if( ncn == 3 )
+                                    else if (ncn == 3)
                                     {
                                         icvCvt_RGB2BGR_16u_C3R(buffer16 + i*tile_width0*ncn, 0,
-                                                (ushort*)(data + img.step*i) + x*3, 0,
-                                                Size(tile_width,1) );
+                                                img.ptr<ushort>(img_y + i, x), 0,
+                                                Size(tile_width, 1));
                                     }
                                     else if (ncn == 4)
                                     {
                                         if (wanted_channels == 4)
                                         {
                                             icvCvt_BGRA2RGBA_16u_C4R(buffer16 + i*tile_width0*ncn, 0,
-                                                (ushort*)(data + img.step*i) + x * 4, 0,
+                                                img.ptr<ushort>(img_y + i, x), 0,
                                                 Size(tile_width, 1));
                                         }
                                         else
                                         {
                                             icvCvt_BGRA2BGR_16u_C4C3R(buffer16 + i*tile_width0*ncn, 0,
-                                                (ushort*)(data + img.step*i) + x * 3, 0,
+                                                img.ptr<ushort>(img_y + i, x), 0,
                                                 Size(tile_width, 1), 2);
                                         }
                                     }
                                     else
                                     {
                                         icvCvt_BGRA2BGR_16u_C4C3R(buffer16 + i*tile_width0*ncn, 0,
-                                                (ushort*)(data + img.step*i) + x*3, 0,
-                                                Size(tile_width,1), 2 );
+                                                img.ptr<ushort>(img_y + i, x), 0,
+                                                Size(tile_width, 1), 2);
                                     }
                                 }
                                 else
                                 {
                                     if( ncn == 1 )
                                     {
-                                        memcpy((ushort*)(data + img.step*i)+x,
+                                        memcpy(img.ptr<ushort>(img_y + i, x),
                                                buffer16 + i*tile_width0*ncn,
-                                               tile_width*sizeof(buffer16[0]));
+                                               tile_width*sizeof(ushort));
                                     }
                                     else
                                     {
                                         icvCvt_BGRA2Gray_16u_CnC1R(buffer16 + i*tile_width0*ncn, 0,
-                                                (ushort*)(data + img.step*i) + x, 0,
-                                                Size(tile_width,1), ncn, 2 );
+                                                img.ptr<ushort>(img_y + i, x), 0,
+                                                Size(tile_width, 1), ncn, 2);
                                     }
                                 }
                             }
@@ -500,120 +543,43 @@ bool TiffDecoder::readData( Mat& img )
                         case 64:
                         {
                             if( !is_tiled )
-                                ok = (int)TIFFReadEncodedStrip( tif, tileidx, buffer, buffer_size ) >= 0;
-                            else
-                                ok = (int)TIFFReadEncodedTile( tif, tileidx, buffer, buffer_size ) >= 0;
-
-                            if( !ok || ncn != 1 )
                             {
-                                close();
-                                return false;
+                                CV_TIFF_CHECK_CALL((int)TIFFReadEncodedStrip(tif, tileidx, buffer, buffer_size) >= 0);
                             }
-
-                            for( i = 0; i < tile_height; i++ )
+                            else
                             {
-                                if(dst_bpp == 32)
-                                {
-                                    memcpy((float*)(data + img.step*i)+x,
-                                           buffer32 + i*tile_width0*ncn,
-                                           tile_width*sizeof(buffer32[0]));
-                                }
-                                else
-                                {
-                                    memcpy((double*)(data + img.step*i)+x,
-                                           buffer64 + i*tile_width0*ncn,
-                                           tile_width*sizeof(buffer64[0]));
-                                }
+                                CV_TIFF_CHECK_CALL((int)TIFFReadEncodedTile(tif, tileidx, buffer, buffer_size) >= 0);
                             }
 
+                            Mat m_tile(Size(tile_width0, tile_height0), CV_MAKETYPE((dst_bpp == 32) ? CV_32F : CV_64F, ncn), buffer);
+                            Rect roi_tile(0, 0, tile_width, tile_height);
+                            Rect roi_img(x, img_y, tile_width, tile_height);
+                            if (!m_hdr && ncn == 3)
+                                cvtColor(m_tile(roi_tile), img(roi_img), COLOR_RGB2BGR);
+                            else if (!m_hdr && ncn == 4)
+                                cvtColor(m_tile(roi_tile), img(roi_img), COLOR_RGBA2BGRA);
+                            else
+                                m_tile(roi_tile).copyTo(img(roi_img));
                             break;
                         }
                         default:
                         {
-                            close();
-                            return false;
+                            CV_Assert(0 && "OpenCV TIFF: unsupported depth");
                         }
-                    }
-                }
-            }
-
-            result = true;
+                    }  // switch (dst_bpp)
+                }  // for x
+            }  // for y
         }
     }
 
-    return result;
-}
-
-bool TiffDecoder::readData_32FC3(Mat& img)
-{
-    int rows_per_strip = 0, photometric = 0;
-    if(!m_tif)
-    {
-        return false;
-    }
-
-    TIFF *tif = static_cast<TIFF*>(m_tif);
-    TIFFGetField(tif, TIFFTAG_ROWSPERSTRIP, &rows_per_strip);
-    TIFFGetField( tif, TIFFTAG_PHOTOMETRIC, &photometric );
-    TIFFSetField(tif, TIFFTAG_SGILOGDATAFMT, SGILOGDATAFMT_FLOAT);
-    int size = 3 * m_width * m_height * sizeof (float);
-    tstrip_t strip_size = 3 * m_width * rows_per_strip;
-    float *ptr = img.ptr<float>();
-    for (tstrip_t i = 0; i < TIFFNumberOfStrips(tif); i++, ptr += strip_size)
-    {
-        TIFFReadEncodedStrip(tif, i, ptr, size);
-        size -= strip_size * sizeof(float);
-    }
-    close();
-    if(photometric == PHOTOMETRIC_LOGLUV)
-    {
-        cvtColor(img, img, COLOR_XYZ2BGR);
-    }
-    else
-    {
-        cvtColor(img, img, COLOR_RGB2BGR);
-    }
-    return true;
-}
-
-bool TiffDecoder::readData_32FC1(Mat& img)
-{
-    if(!m_tif)
-    {
-        return false;
-    }
-    TIFF *tif = static_cast<TIFF*>(m_tif);
-
-    uint32 img_width, img_height;
-    TIFFGetField(tif, TIFFTAG_IMAGEWIDTH, &img_width);
-    TIFFGetField(tif, TIFFTAG_IMAGELENGTH, &img_height);
-    if(img.size() != Size(img_width, img_height))
-    {
-        close();
-        return false;
-    }
-    tsize_t scanlength = TIFFScanlineSize(tif);
-    tdata_t buf = _TIFFmalloc(scanlength);
-    float* data;
-    bool result = true;
-    for (uint32 row = 0; row < img_height; row++)
-    {
-        if (TIFFReadScanline(tif, buf, row) != 1)
-        {
-            result = false;
-            break;
-        }
-        data = (float*)buf;
-        for (uint32 i = 0; i < img_width; i++)
-        {
-            img.at<float>(row, i) = data[i];
-        }
-    }
-    _TIFFfree(buf);
-    close();
-    return result;
+    if (m_hdr && depth >= CV_32F)
+    {
+        CV_Assert(photometric == PHOTOMETRIC_LOGLUV);
+        cvtColor(img, img, COLOR_XYZ2BGR);
+    }
+    return true;
 }
 
 //////////////////////////////////////////////////////////////////////////////////////////
 
 TiffEncoder::TiffEncoder()
@@ -633,7 +599,7 @@ ImageEncoder TiffEncoder::newEncoder() const
 
 bool TiffEncoder::isFormatSupported( int depth ) const
 {
-    return depth == CV_8U || depth == CV_16U || depth == CV_32F;
+    return depth == CV_8U || depth == CV_16U || depth == CV_32F || depth == CV_64F;
 }
 
 void TiffEncoder::writeTag( WLByteStream& strm, TiffTag tag,
@@ -656,6 +622,8 @@ public:
 
     TIFF* open ()
     {
+        // do NOT put "wb" as the mode, because the b means "big endian" mode, not "binary" mode.
+        // http://www.remotesensing.org/libtiff/man/TIFFOpen.3tiff.html
         return TIFFClientOpen( "", "w", reinterpret_cast<thandle_t>(this), &TiffEncoderBufHelper::read,
                             &TiffEncoderBufHelper::write, &TiffEncoderBufHelper::seek,
                             &TiffEncoderBufHelper::close, &TiffEncoderBufHelper::size,
@@ -721,35 +689,39 @@ private:
     toff_t m_buf_pos;
 };
 
-static void readParam(const std::vector<int>& params, int key, int& value)
+static bool readParam(const std::vector<int>& params, int key, int& value)
 {
-    for(size_t i = 0; i + 1 < params.size(); i += 2)
-        if(params[i] == key)
+    for (size_t i = 0; i + 1 < params.size(); i += 2)
+    {
+        if (params[i] == key)
         {
-            value = params[i+1];
-            break;
+            value = params[i + 1];
+            return true;
         }
+    }
+    return false;
 }
 
 bool TiffEncoder::writeLibTiff( const std::vector<Mat>& img_vec, const std::vector<int>& params)
 {
     // do NOT put "wb" as the mode, because the b means "big endian" mode, not "binary" mode.
     // http://www.remotesensing.org/libtiff/man/TIFFOpen.3tiff.html
-    TIFF* pTiffHandle;
+    TIFF* tif = NULL;
 
     TiffEncoderBufHelper buf_helper(m_buf);
     if ( m_buf )
     {
-        pTiffHandle = buf_helper.open();
+        tif = buf_helper.open();
     }
     else
     {
-        pTiffHandle = TIFFOpen(m_filename.c_str(), "w");
+        tif = TIFFOpen(m_filename.c_str(), "w");
     }
-    if (!pTiffHandle)
+    if (!tif)
     {
         return false;
     }
+    cv::Ptr<void> tif_cleanup(tif, cv_tiffCloseHandle);
 
     //Settings that matter to all images
     int compression = COMPRESSION_LZW;
@@ -768,7 +740,29 @@ bool TiffEncoder::writeLibTiff( const std::vector<Mat>& img_vec, const std::vect
         const Mat& img = img_vec[page];
         int channels = img.channels();
         int width = img.cols, height = img.rows;
-        int depth = img.depth();
+        int type = img.type();
+        int depth = CV_MAT_DEPTH(type);
+        CV_CheckType(type, depth == CV_8U || depth == CV_16U || depth == CV_32F || depth == CV_64F, "");
+        CV_CheckType(type, channels >= 1 && channels <= 4, "");
+
+        CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, width));
+        CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_IMAGELENGTH, height));
+
+        if (img_vec.size() > 1)
+        {
+            CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_SUBFILETYPE, FILETYPE_PAGE));
+            CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_PAGENUMBER, page, img_vec.size()));
+        }
+
+        int compression_param = -1;  // OPENCV_FUTURE
+        if (type == CV_32FC3 && (!readParam(params, IMWRITE_TIFF_COMPRESSION, compression_param) || compression_param == COMPRESSION_SGILOG))
+        {
+            if (!write_32FC3_SGILOG(img, tif))
+                return false;
+            continue;
+        }
+
+        int page_compression = compression;
 
         int bitsPerChannel = -1;
         switch (depth)
@@ -783,9 +777,20 @@ bool TiffEncoder::writeLibTiff( const std::vector<Mat>& img_vec, const std::vect
                 bitsPerChannel = 16;
                 break;
             }
+            case CV_32F:
+            {
+                bitsPerChannel = 32;
+                page_compression = COMPRESSION_NONE;
+                break;
+            }
+            case CV_64F:
+            {
+                bitsPerChannel = 64;
+                page_compression = COMPRESSION_NONE;
+                break;
+            }
             default:
             {
-                TIFFClose(pTiffHandle);
                 return false;
             }
         }
@@ -795,57 +800,42 @@ bool TiffEncoder::writeLibTiff( const std::vector<Mat>& img_vec, const std::vect
         int rowsPerStrip = (int)((1 << 13) / fileStep);
         readParam(params, TIFFTAG_ROWSPERSTRIP, rowsPerStrip);
+        rowsPerStrip = std::max(1, std::min(height, rowsPerStrip));
 
-        if (rowsPerStrip < 1)
-            rowsPerStrip = 1;
+        int colorspace = channels > 1 ? PHOTOMETRIC_RGB : PHOTOMETRIC_MINISBLACK;
 
-        if (rowsPerStrip > height)
-            rowsPerStrip = height;
+        CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, bitsPerChannel));
+        CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_COMPRESSION, page_compression));
+        CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, colorspace));
+        CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, channels));
+        CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG));
+        CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_ROWSPERSTRIP, rowsPerStrip));
 
-        int colorspace = channels > 1 ? PHOTOMETRIC_RGB : PHOTOMETRIC_MINISBLACK;
+        CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_SAMPLEFORMAT, depth >= CV_32F ? SAMPLEFORMAT_IEEEFP : SAMPLEFORMAT_UINT));
 
-        if (!TIFFSetField(pTiffHandle, TIFFTAG_IMAGEWIDTH, width)
-            || !TIFFSetField(pTiffHandle, TIFFTAG_IMAGELENGTH, height)
-            || !TIFFSetField(pTiffHandle, TIFFTAG_BITSPERSAMPLE, bitsPerChannel)
-            || !TIFFSetField(pTiffHandle, TIFFTAG_COMPRESSION, compression)
-            || !TIFFSetField(pTiffHandle, TIFFTAG_PHOTOMETRIC, colorspace)
-            || !TIFFSetField(pTiffHandle, TIFFTAG_SAMPLESPERPIXEL, channels)
-            || !TIFFSetField(pTiffHandle, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG)
-            || !TIFFSetField(pTiffHandle, TIFFTAG_ROWSPERSTRIP, rowsPerStrip)
-            || (img_vec.size() > 1 && (
-                !TIFFSetField(pTiffHandle, TIFFTAG_SUBFILETYPE, FILETYPE_PAGE)
-                || !TIFFSetField(pTiffHandle, TIFFTAG_PAGENUMBER, page, img_vec.size() )))
-           )
+        if (page_compression != COMPRESSION_NONE)
         {
-            TIFFClose(pTiffHandle);
-            return false;
+            CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_PREDICTOR, predictor));
         }
 
-        if (compression != COMPRESSION_NONE && !TIFFSetField(pTiffHandle, TIFFTAG_PREDICTOR, predictor))
+        if (resUnit >= RESUNIT_NONE && resUnit <= RESUNIT_CENTIMETER)
         {
-            TIFFClose(pTiffHandle);
-            return false;
+            CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_RESOLUTIONUNIT, resUnit));
         }
-
-        if (((resUnit >= RESUNIT_NONE && resUnit <= RESUNIT_CENTIMETER) && !TIFFSetField(pTiffHandle, TIFFTAG_RESOLUTIONUNIT, resUnit))
-            || (dpiX >= 0 && !TIFFSetField(pTiffHandle, TIFFTAG_XRESOLUTION, (float)dpiX))
-            || (dpiY >= 0 && !TIFFSetField(pTiffHandle, TIFFTAG_YRESOLUTION, (float)dpiY))
-           )
+        if (dpiX >= 0)
         {
-            TIFFClose(pTiffHandle);
-            return false;
+            CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_XRESOLUTION, (float)dpiX));
+        }
+        if (dpiY >= 0)
+        {
+            CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_YRESOLUTION, (float)dpiY));
         }
 
         // row buffer, because TIFFWriteScanline modifies the original data!
-        size_t scanlineSize = TIFFScanlineSize(pTiffHandle);
+        size_t scanlineSize = TIFFScanlineSize(tif);
         AutoBuffer<uchar> _buffer(scanlineSize + 32);
-        uchar* buffer = _buffer.data();
-        if (!buffer)
-        {
-            TIFFClose(pTiffHandle);
-            return false;
-        }
+        uchar* buffer = _buffer.data(); CV_DbgAssert(buffer);
+        Mat m_buffer(Size(width, 1), CV_MAKETYPE(depth, channels), buffer, (size_t)scanlineSize);
 
         for (int y = 0; y < height; ++y)
         {
@@ -859,122 +849,54 @@ bool TiffEncoder::writeLibTiff( const std::vector<Mat>& img_vec, const std::vect
                 case 3:
                 {
-                    if (depth == CV_8U)
-                        icvCvt_BGR2RGB_8u_C3R( img.ptr(y), 0, buffer, 0, Size(width, 1));
-                    else
-                        icvCvt_BGR2RGB_16u_C3R( img.ptr<ushort>(y), 0, (ushort*)buffer, 0, Size(width, 1));
+                    cvtColor(img(Rect(0, y, width, 1)), (const Mat&)m_buffer, COLOR_BGR2RGB);
                     break;
                 }
                 case 4:
                 {
-                    if (depth == CV_8U)
-                        icvCvt_BGRA2RGBA_8u_C4R( img.ptr(y), 0, buffer, 0, Size(width, 1));
-                    else
-                        icvCvt_BGRA2RGBA_16u_C4R( img.ptr<ushort>(y), 0, (ushort*)buffer, 0, Size(width, 1));
+                    cvtColor(img(Rect(0, y, width, 1)), (const Mat&)m_buffer, COLOR_BGRA2RGBA);
                     break;
                 }
                 default:
                 {
-                    TIFFClose(pTiffHandle);
-                    return false;
+                    CV_Assert(0);
                 }
             }
 
-            int writeResult = TIFFWriteScanline(pTiffHandle, buffer, y, 0);
-            if (writeResult != 1)
-            {
-                TIFFClose(pTiffHandle);
-                return false;
-            }
+            CV_TIFF_CHECK_CALL(TIFFWriteScanline(tif, buffer, y, 0) == 1);
         }
 
-        TIFFWriteDirectory(pTiffHandle);
+        CV_TIFF_CHECK_CALL(TIFFWriteDirectory(tif));
     }
-    TIFFClose(pTiffHandle);
 
     return true;
 }
 
-bool TiffEncoder::write_32FC3(const Mat& _img)
+bool TiffEncoder::write_32FC3_SGILOG(const Mat& _img, void* tif_)
 {
+    TIFF* tif = (TIFF*)tif_;
+    CV_Assert(tif);
+
     Mat img;
     cvtColor(_img, img, COLOR_BGR2XYZ);
 
-    TIFF* tif;
-
-    TiffEncoderBufHelper buf_helper(m_buf);
-    if ( m_buf )
-    {
-        tif = buf_helper.open();
-    }
-    else
-    {
-        tif = TIFFOpen(m_filename.c_str(), "w");
-    }
-
-    if (!tif)
-    {
-        return false;
-    }
-
-    TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, img.cols);
-    TIFFSetField(tif, TIFFTAG_IMAGELENGTH, img.rows);
-    TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 3);
-    TIFFSetField(tif, TIFFTAG_COMPRESSION, COMPRESSION_SGILOG);
-    TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_LOGLUV);
-    TIFFSetField(tif, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
-    TIFFSetField(tif, TIFFTAG_SGILOGDATAFMT, SGILOGDATAFMT_FLOAT);
-    TIFFSetField(tif, TIFFTAG_ROWSPERSTRIP, 1);
-
-    int strip_size = 3 * img.cols;
-    float *ptr = const_cast<float*>(img.ptr<float>());
-    for (int i = 0; i < img.rows; i++, ptr += strip_size)
-    {
-        TIFFWriteEncodedStrip(tif, i, ptr, strip_size * sizeof(float));
-    }
-    TIFFClose(tif);
-    return true;
-}
-
-bool TiffEncoder::write_32FC1(const Mat& _img)
-{
-    TIFF* tif;
-
-    TiffEncoderBufHelper buf_helper(m_buf);
-    if ( m_buf )
-    {
-        tif = buf_helper.open();
-    }
-    else
-    {
-        tif = TIFFOpen(m_filename.c_str(), "w");
-    }
-    if (!tif)
-    {
-        return false;
-    }
-
-    TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, _img.cols);
-    TIFFSetField(tif, TIFFTAG_IMAGELENGTH, _img.rows);
-    TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 1);
-    TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, 32);
-    TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_MINISBLACK);
-    TIFFSetField(tif, TIFFTAG_SAMPLEFORMAT, SAMPLEFORMAT_IEEEFP);
-    TIFFSetField(tif, TIFFTAG_COMPRESSION, COMPRESSION_NONE);
-    for (uint32 row = 0; row < (uint32)_img.rows; row++)
-    {
-        if (TIFFWriteScanline(tif, (tdata_t)_img.ptr<float>(row), row, 1) != 1)
-        {
-            TIFFClose(tif);
-            return false;
-        }
-    }
-    TIFFWriteDirectory(tif);
-    TIFFClose(tif);
+    //done by caller: CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, img.cols));
+    //done by caller: CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_IMAGELENGTH, img.rows));
+    CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 3));
+    CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, 32));
+    CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_COMPRESSION, COMPRESSION_SGILOG));
+    CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_LOGLUV));
+    CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG));
+    CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_SGILOGDATAFMT, SGILOGDATAFMT_FLOAT));
+    CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_ROWSPERSTRIP, 1));
+
+    const int strip_size = 3 * img.cols;
+    for (int i = 0; i < img.rows; i++)
+    {
+        CV_TIFF_CHECK_CALL(TIFFWriteEncodedStrip(tif, i, (tdata_t)img.ptr<float>(i), strip_size * sizeof(float)) != (tsize_t)-1);
+    }
+    CV_TIFF_CHECK_CALL(TIFFWriteDirectory(tif));
 
     return true;
 }
@@ -985,18 +907,10 @@ bool TiffEncoder::writemulti(const std::vector<Mat>& img_vec, const std::vector<
 
 bool TiffEncoder::write( const Mat& img, const std::vector<int>& params)
 {
-    int depth = img.depth();
-
-    if(img.type() == CV_32FC3)
-    {
-        return write_32FC3(img);
-    }
-    if(img.type() == CV_32FC1)
-    {
-        return write_32FC1(img);
-    }
+    int type = img.type();
+    int depth = CV_MAT_DEPTH(type);
 
-    CV_Assert(depth == CV_8U || depth == CV_16U);
+    CV_CheckType(type, depth == CV_8U || depth == CV_16U || depth == CV_32F || depth == CV_64F, "");
 
     std::vector<Mat> img_vec;
     img_vec.push_back(img);
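
[Review note] With the encoder/decoder changes above, CV_32F and CV_64F images round-trip through imwrite/imread. A minimal sketch (the output file name is arbitrary):

@code{.py}
import cv2 as cv
import numpy as np

img = np.random.rand(64, 64).astype(np.float64)  # CV_64FC1
cv.imwrite('float64.tiff', img)
img2 = cv.imread('float64.tiff', cv.IMREAD_UNCHANGED)
assert img2.dtype == np.float64
assert np.allclose(img, img2)
@endcode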

@@ -106,10 +106,8 @@ public:
     ImageDecoder newDecoder() const CV_OVERRIDE;
 
 protected:
-    void* m_tif;
+    cv::Ptr<void> m_tif;
     int normalizeChannelsNumber(int channels) const;
-    bool readData_32FC3(Mat& img);
-    bool readData_32FC1(Mat& img);
     bool m_hdr;
     size_t m_buf_pos;
 
@@ -139,8 +137,7 @@ protected:
                    int  count, int value );
 
     bool writeLibTiff( const std::vector<Mat>& img_vec, const std::vector<int>& params );
-    bool write_32FC3( const Mat& img );
-    bool write_32FC1( const Mat& img );
+    bool write_32FC3_SGILOG(const Mat& img, void* tif);
 
 private:
     TiffEncoder(const TiffEncoder &); // copy disabled

@@ -42,8 +42,7 @@
 #include "precomp.hpp"
 #include "utils.hpp"
 
-namespace cv
-{
+namespace cv {
 
 int validateToInt(size_t sz)
 {
@@ -601,4 +600,4 @@ uchar* FillGrayRow1( uchar* data, uchar* indices, int len, uchar* palette )
     return data;
 }
 
-}
+}  // namespace

@@ -42,8 +42,7 @@
 #ifndef _UTILS_H_
 #define _UTILS_H_
 
-namespace cv
-{
+namespace cv {
 
 int validateToInt(size_t step);
 
@@ -139,6 +138,6 @@ CV_INLINE bool isBigEndian( void )
     return (((const int*)"\0\x1\x2\x3\x4\x5\x6\x7")[0] & 255) != 0;
 }
 
-}
+}  // namespace
 
 #endif/*_UTILS_H_*/

@@ -158,12 +158,68 @@ TEST(Imgcodecs_Tiff, readWrite_32FC1)
     ASSERT_TRUE(cv::imwrite(filenameOutput, img));
     const Mat img2 = cv::imread(filenameOutput, IMREAD_UNCHANGED);
-    ASSERT_EQ(img2.type(),img.type());
-    ASSERT_EQ(img2.size(),img.size());
-    EXPECT_GE(1e-3, cvtest::norm(img, img2, NORM_INF | NORM_RELATIVE));
+    ASSERT_EQ(img2.type(), img.type());
+    ASSERT_EQ(img2.size(), img.size());
+    EXPECT_LE(cvtest::norm(img, img2, NORM_INF | NORM_RELATIVE), 1e-3);
     EXPECT_EQ(0, remove(filenameOutput.c_str()));
 }
 
+TEST(Imgcodecs_Tiff, readWrite_64FC1)
+{
+    const string root = cvtest::TS::ptr()->get_data_path();
+    const string filenameInput = root + "readwrite/test64FC1.tiff";
+    const string filenameOutput = cv::tempfile(".tiff");
+
+    const Mat img = cv::imread(filenameInput, IMREAD_UNCHANGED);
+    ASSERT_FALSE(img.empty());
+    ASSERT_EQ(CV_64FC1, img.type());
+
+    ASSERT_TRUE(cv::imwrite(filenameOutput, img));
+    const Mat img2 = cv::imread(filenameOutput, IMREAD_UNCHANGED);
+    ASSERT_EQ(img2.type(), img.type());
+    ASSERT_EQ(img2.size(), img.size());
+    EXPECT_LE(cvtest::norm(img, img2, NORM_INF | NORM_RELATIVE), 1e-3);
+    EXPECT_EQ(0, remove(filenameOutput.c_str()));
+}
+
+TEST(Imgcodecs_Tiff, readWrite_32FC3_SGILOG)
+{
+    const string root = cvtest::TS::ptr()->get_data_path();
+    const string filenameInput = root + "readwrite/test32FC3_sgilog.tiff";
+    const string filenameOutput = cv::tempfile(".tiff");
+
+    const Mat img = cv::imread(filenameInput, IMREAD_UNCHANGED);
+    ASSERT_FALSE(img.empty());
+    ASSERT_EQ(CV_32FC3, img.type());
+
+    ASSERT_TRUE(cv::imwrite(filenameOutput, img));
+    const Mat img2 = cv::imread(filenameOutput, IMREAD_UNCHANGED);
+    ASSERT_EQ(img2.type(), img.type());
+    ASSERT_EQ(img2.size(), img.size());
+    EXPECT_LE(cvtest::norm(img, img2, NORM_INF | NORM_RELATIVE), 0.01);
+    EXPECT_EQ(0, remove(filenameOutput.c_str()));
+}
+
+TEST(Imgcodecs_Tiff, readWrite_32FC3_RAW)
+{
+    const string root = cvtest::TS::ptr()->get_data_path();
+    const string filenameInput = root + "readwrite/test32FC3_raw.tiff";
+    const string filenameOutput = cv::tempfile(".tiff");
+
+    const Mat img = cv::imread(filenameInput, IMREAD_UNCHANGED);
+    ASSERT_FALSE(img.empty());
+    ASSERT_EQ(CV_32FC3, img.type());
+
+    std::vector<int> params;
+    params.push_back(IMWRITE_TIFF_COMPRESSION);
+    params.push_back(1/*COMPRESSION_NONE*/);
+
+    ASSERT_TRUE(cv::imwrite(filenameOutput, img, params));
+    const Mat img2 = cv::imread(filenameOutput, IMREAD_UNCHANGED);
+    ASSERT_EQ(img2.type(), img.type());
+    ASSERT_EQ(img2.size(), img.size());
+    EXPECT_LE(cvtest::norm(img, img2, NORM_INF | NORM_RELATIVE), 1e-3);
+    EXPECT_EQ(0, remove(filenameOutput.c_str()));
+}
+
 //==================================================================================================
 
 typedef testing::TestWithParam<int> Imgcodecs_Tiff_Modes;

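[Review note] The readWrite_32FC3_RAW test above forces uncompressed float output via IMWRITE_TIFF_COMPRESSION (value 1 is libtiff's COMPRESSION_NONE); without it a CV_32FC3 image is written as SGILOG/LogLuv. A hedged Python equivalent:

@code{.py}
import cv2 as cv
import numpy as np

img = np.random.rand(32, 32, 3).astype(np.float32)  # CV_32FC3
cv.imwrite('raw32f.tiff', img, [cv.IMWRITE_TIFF_COMPRESSION, 1])
img2 = cv.imread('raw32f.tiff', cv.IMREAD_UNCHANGED)
assert img2.dtype == np.float32
@endcode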