Adding support for GDAL raster images.

pull/2806/head
Marvin Smith 11 years ago
parent 6bb8c46d9a
commit 14548227ca
  1. 7    CMakeLists.txt
  2. 12   cmake/OpenCVFindLibsGrfmt.cmake
  3. 3    cmake/templates/cvconfig.h.in
  4. BIN  doc/tutorials/highgui/raster-gdal/images/flood-zone.jpg
  5. BIN  doc/tutorials/highgui/raster-gdal/images/heat-map.jpg
  6. BIN  doc/tutorials/highgui/raster-gdal/images/output.jpg
  7. 113  doc/tutorials/highgui/raster-gdal/raster_io_gdal.rst
  8. BIN  doc/tutorials/highgui/table_of_content_highgui/images/gdal-io.jpg
  9. 21   doc/tutorials/highgui/table_of_content_highgui/table_of_content_highgui.rst
  10. 5    modules/highgui/CMakeLists.txt
  11. 3    modules/highgui/include/opencv2/highgui.hpp
  12. 560  modules/highgui/src/grfmt_gdal.cpp
  13. 160  modules/highgui/src/grfmt_gdal.hpp
  14. 1    modules/highgui/src/grfmts.hpp
  15. 88   modules/highgui/src/loadsave.cpp
  16. 229  samples/cpp/tutorial_code/HighGUI/GDAL_IO/gdal-image.cpp

@ -159,6 +159,7 @@ OCV_OPTION(WITH_OPENCLAMDBLAS "Include AMD OpenCL BLAS library support" ON
OCV_OPTION(WITH_DIRECTX "Include DirectX support" ON IF WIN32 )
OCV_OPTION(WITH_INTELPERC "Include Intel Perceptual Computing support" OFF IF WIN32 )
OCV_OPTION(WITH_IPP_A "Include Intel IPP_A support" OFF IF (MSVC OR X86 OR X86_64) )
OCV_OPTION(WITH_GDAL "Include GDAL Support" OFF IF (NOT ANDROID AND NOT IOS) )
# OpenCV build components
# ===================================================
@ -808,6 +809,12 @@ else()
status(" OpenEXR:" "NO")
endif()
if( WITH_GDAL )
status(" GDAL:" GDAL_FOUND THEN "${GDAL_LIBRARY}")
else()
status(" GDAL:" "NO")
endif()
# ========================== VIDEO IO ==========================
status("")
status(" Video I/O:")

@ -198,3 +198,15 @@ if(WITH_OPENEXR)
set(HAVE_OPENEXR YES)
endif()
# --- GDAL (optional) ---
if(WITH_GDAL)
find_package(GDAL)
if(NOT GDAL_FOUND)
ocv_clear_vars(GDAL_LIBRARY GDAL_INCLUDE_DIR)
set(HAVE_GDAL NO)
else()
set(HAVE_GDAL YES)
endif()
endif()

@ -76,6 +76,9 @@
/* ffmpeg in Gentoo */
#cmakedefine HAVE_GENTOO_FFMPEG
/* Geospatial Data Abstraction Library */
#cmakedefine HAVE_GDAL
/* GStreamer multimedia framework */
#cmakedefine HAVE_GSTREAMER

Binary file not shown (new file, 111 KiB).

Binary file not shown (new file, 53 KiB).

Binary file not shown (new file, 120 KiB).

@ -0,0 +1,113 @@
.. _Raster_IO_GDAL:
Reading Geospatial Raster files with GDAL
*****************************************
Geospatial raster data is a heavily used product in Geographic Information
Systems and Photogrammetry. Raster data typically represents imagery
and Digital Elevation Models (DEM). The standard library for loading
GIS imagery is the Geospatial Data Abstraction Library (GDAL). In this example, we
will show techniques for loading GIS raster formats using native OpenCV functions.
In addition, we will show an example of how OpenCV can use this data for
novel and interesting purposes.
Goals
=====
The primary objectives for this tutorial:
.. container:: enumeratevisibleitemswithsquare
+ How to use OpenCV imread to load satellite imagery.
+ How to use OpenCV imread to load SRTM Digital Elevation Models
+ Given the corner coordinates of both the image and DEM, correlate the elevation data to the image to find elevations for each pixel.
+ Show a basic, easy-to-implement example of a terrain heat map.
+ Show a basic use of DEM data coupled with ortho-rectified imagery.
To implement these goals, the following code takes a Digital Elevation Model as well as a GeoTiff image of San Francisco as input.
The image and DEM data are processed to generate a terrain heat map of the image and to label areas of the city which would
be affected should the water level of the bay rise 10, 50, and 100 meters.
Code
====
.. literalinclude:: ../../../../samples/cpp/tutorial_code/HighGUI/GDAL_IO/gdal-image.cpp
:language: cpp
:linenos:
:tab-width: 4
How to Read Raster Data using GDAL
======================================
This demonstration uses the default OpenCV :ocv:func:`imread` function. The primary difference is that in order to force GDAL to load the
image, you must use the appropriate flag.
.. code-block:: cpp
cv::Mat image = cv::imread( argv[1], cv::IMREAD_LOAD_GDAL );
When loading digital elevation models, the actual numeric value of each pixel is essential
and cannot be scaled or truncated. For example, with image data a pixel represented as a double with a value of 1 has
an equal appearance to a pixel which is represented as an unsigned character with a value of 255.
With terrain data, the pixel value represents the elevation in meters. In order to ensure that OpenCV preserves the native value,
use the GDAL flag in imread with the ANYDEPTH flag.
.. code-block:: cpp
cv::Mat dem = cv::imread( argv[2], cv::IMREAD_LOAD_GDAL | cv::IMREAD_ANYDEPTH );
If you know beforehand the type of DEM model you are loading, then it may be a safe bet to test the ``Mat::type()`` or ``Mat::depth()``
using an assert or other mechanism. NASA or DOD specification documents can provide the input types for various
elevation models. The major types, SRTM and DTED, are both signed shorts.
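For example, a minimal sanity check (a sketch only, assuming the ``dem`` variable from the snippet above and an SRTM or DTED input) could verify the decoded type before the elevations are used:

.. code-block:: cpp

    // SRTM and DTED elevations are signed 16-bit integers, so a single-channel
    // CV_16S matrix is expected; stop early if GDAL decoded something else.
    CV_Assert( dem.type() == CV_16SC1 );

    // elevation values can now be read directly, in meters
    short elevation = dem.at<short>( 0, 0 );
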
Notes
=====
Lat/Lon (Geodetic) Coordinates should normally be avoided
---------------------------------------------------------
The Geodetic Coordinate System is a spherical coordinate system, meaning that using it with Cartesian mathematics is technically incorrect. This
demo uses it to increase readability and is accurate enough to make the point. A better coordinate system would be Universal Transverse Mercator (UTM); a conversion sketch using GDAL's OGR API is shown below.
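As a rough illustration only (not part of the tutorial sample), GDAL's OGR spatial-reference classes could convert the geodetic corner points to UTM before doing Cartesian math. The header path and the zone number (10N, which covers San Francisco) are assumptions about a typical GDAL 1.x install:

.. code-block:: cpp

    #include <gdal/ogr_spatialref.h>  // header location may vary between installs

    // build a WGS84 geographic source and a UTM zone 10N target
    OGRSpatialReference geographic, utm;
    geographic.SetWellKnownGeogCS( "WGS84" );
    utm.SetWellKnownGeogCS( "WGS84" );
    utm.SetUTM( 10, TRUE );

    OGRCoordinateTransformation* transform = OGRCreateCoordinateTransformation( &geographic, &utm );

    double x = -122.441017, y = 37.815664;   // lon/lat of the image's top-left corner
    if( transform != NULL && transform->Transform( 1, &x, &y ) ){
        // x and y now hold UTM easting and northing, in meters
    }
    OCTDestroyCoordinateTransformation( transform );
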
Finding the corner coordinates
------------------------------
One easy method to find the corner coordinates of an image is to use the command-line tool ``gdalinfo``. For imagery which is ortho-rectified and contains
the projection information, you can use the `USGS EarthExplorer <http://earthexplorer.usgs.gov>`_.
.. code-block:: bash
$> gdalinfo N37W123.hgt
Driver: SRTMHGT/SRTMHGT File Format
Files: N37W123.hgt
Size is 3601, 3601
Coordinate System is:
GEOGCS["WGS 84",
DATUM["WGS_1984",
... more output ...
Corner Coordinates:
Upper Left (-123.0001389, 38.0001389) (123d 0' 0.50"W, 38d 0' 0.50"N)
Lower Left (-123.0001389, 36.9998611) (123d 0' 0.50"W, 36d59'59.50"N)
Upper Right (-121.9998611, 38.0001389) (121d59'59.50"W, 38d 0' 0.50"N)
Lower Right (-121.9998611, 36.9998611) (121d59'59.50"W, 36d59'59.50"N)
Center (-122.5000000, 37.5000000) (122d30' 0.00"W, 37d30' 0.00"N)
... more output ...
Results
=======
Below is the output of the program. Use the first image as the input. For the DEM model, download the SRTM file from the USGS: `http://dds.cr.usgs.gov/srtm/version2_1/SRTM1/Region_04/N37W123.hgt.zip <http://dds.cr.usgs.gov/srtm/version2_1/SRTM1/Region_04/N37W123.hgt.zip>`_
.. image:: images/output.jpg
.. image:: images/heat-map.jpg
.. image:: images/flood-zone.jpg

Binary file not shown.

After

Width:  |  Height:  |  Size: 73 KiB

@ -64,6 +64,26 @@ This section contains valuable tutorials about how to read/save your image/video
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
=============== ======================================================
|hGDAL_IO| *Title:* :ref:`Raster_IO_GDAL`
*Compatibility:* > OpenCV 2.0
*Author:* |Author_MarvinS|
Read common GIS Raster and DEM files to display and manipulate geographic data.
=============== ======================================================
.. |hGDAL_IO| image:: images/gdal-io.jpg
:height: 90pt
:width: 90pt
.. raw:: latex
@ -75,3 +95,4 @@ This section contains valuable tutorials about how to read/save your image/video
../trackbar/trackbar
../video-input-psnr-ssim/video-input-psnr-ssim
../video-write/video-write
../raster-gdal/raster_io_gdal

@ -50,6 +50,11 @@ if(HAVE_OPENEXR)
list(APPEND GRFMT_LIBS ${OPENEXR_LIBRARIES})
endif()
if(HAVE_GDAL)
include_directories(SYSTEM ${GDAL_INCLUDE_DIR})
list(APPEND GRFMT_LIBS ${GDAL_LIBRARY})
endif()
file(GLOB grfmt_hdrs src/grfmt*.hpp)
file(GLOB grfmt_srcs src/grfmt*.cpp)
list(APPEND grfmt_hdrs src/bitstrm.hpp)

@ -212,7 +212,8 @@ enum { IMREAD_UNCHANGED = -1, // 8bit, color or not
IMREAD_GRAYSCALE = 0, // 8bit, gray
IMREAD_COLOR = 1, // ?, color
IMREAD_ANYDEPTH = 2, // any depth, ?
IMREAD_ANYCOLOR = 4 // ?, any color
IMREAD_ANYCOLOR = 4, // ?, any color
IMREAD_LOAD_GDAL = 8 // Use gdal driver
};
enum { IMWRITE_JPEG_QUALITY = 1,

@ -0,0 +1,560 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// Intel License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000, Intel Corporation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of Intel Corporation may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "grfmt_gdal.hpp"
#ifdef HAVE_GDAL
/// C++ Standard Libraries
#include <iostream>
#include <stdexcept>
#include <string>
namespace cv{
/**
* Convert GDAL Palette Interpretation to OpenCV Pixel Type
*/
int gdalPaletteInterpretation2OpenCV( GDALPaletteInterp const& paletteInterp, GDALDataType const& gdalType ){
switch( paletteInterp ){
/// GRAYSCALE
case GPI_Gray:
if( gdalType == GDT_Byte ){ return CV_8UC1; }
if( gdalType == GDT_UInt16 ){ return CV_16UC1; }
if( gdalType == GDT_Int16 ){ return CV_16SC1; }
if( gdalType == GDT_UInt32 ){ return CV_32SC1; }
if( gdalType == GDT_Int32 ){ return CV_32SC1; }
if( gdalType == GDT_Float32 ){ return CV_32FC1; }
if( gdalType == GDT_Float64 ){ return CV_64FC1; }
return -1;
/// RGB
case GPI_RGB:
if( gdalType == GDT_Byte ){ return CV_8UC3; }
if( gdalType == GDT_UInt16 ){ return CV_16UC3; }
if( gdalType == GDT_Int16 ){ return CV_16SC3; }
if( gdalType == GDT_UInt32 ){ return CV_32SC3; }
if( gdalType == GDT_Int32 ){ return CV_32SC3; }
if( gdalType == GDT_Float32 ){ return CV_32FC3; }
if( gdalType == GDT_Float64 ){ return CV_64FC3; }
return -1;
/// otherwise
default:
return -1;
}
}
/**
* Convert gdal type to opencv type
*/
int gdal2opencv( const GDALDataType& gdalType, const int& channels ){
switch( gdalType ){
/// UInt8
case GDT_Byte:
if( channels == 1 ){ return CV_8UC1; }
if( channels == 3 ){ return CV_8UC3; }
if( channels == 4 ){ return CV_8UC4; }
return -1;
/// UInt16
case GDT_UInt16:
if( channels == 1 ){ return CV_16UC1; }
if( channels == 3 ){ return CV_16UC3; }
if( channels == 4 ){ return CV_16UC4; }
return -1;
/// Int16
case GDT_Int16:
if( channels == 1 ){ return CV_16SC1; }
if( channels == 3 ){ return CV_16SC3; }
if( channels == 4 ){ return CV_16SC4; }
return -1;
/// UInt32
case GDT_UInt32:
case GDT_Int32:
if( channels == 1 ){ return CV_32SC1; }
if( channels == 3 ){ return CV_32SC3; }
if( channels == 4 ){ return CV_32SC4; }
return -1;
default:
std::cout << "Unknown GDAL Data Type" << std::endl;
std::cout << "Type: " << GDALGetDataTypeName(gdalType) << std::endl;
return -1;
}
return -1;
}
std::string GetOpenCVTypeName( const int& type ){
switch(type){
case CV_8UC1:
return "CV_8UC1";
case CV_8UC3:
return "CV_8UC3";
case CV_8UC4:
return "CV_8UC4";
case CV_16UC1:
return "CV_16UC1";
case CV_16UC3:
return "CV_16UC3";
case CV_16UC4:
return "CV_16UC4";
case CV_16SC1:
return "CV_16SC1";
case CV_16SC3:
return "CV_16SC3";
case CV_16SC4:
return "CV_16SC4";
default:
return "Unknown";
}
return "Unknown";
}
/**
* GDAL Decoder Constructor
*/
GdalDecoder::GdalDecoder(){
// set a dummy signature
m_signature="0";
for( size_t i=0; i<160; i++ ){
m_signature += "0";
}
/// Register the driver
GDALAllRegister();
m_driver = NULL;
m_dataset = NULL;
}
/**
* GDAL Decoder Destructor
*/
GdalDecoder::~GdalDecoder(){
if( m_dataset != NULL ){
close();
}
}
/**
* Convert data range
*/
double range_cast( const GDALDataType& gdalType, const int& cvDepth, const double& value ){
// uint8 -> uint8
if( gdalType == GDT_Byte && cvDepth == CV_8U ){
return value;
}
// uint8 -> uint16
if( gdalType == GDT_Byte && (cvDepth == CV_16U || cvDepth == CV_16S)){
return (value*256);
}
// uint8 -> uint32
if( gdalType == GDT_Byte && (cvDepth == CV_32F || cvDepth == CV_32S)){
return (value*16777216);
}
// int16 -> uint8
if( (gdalType == GDT_UInt16 || gdalType == GDT_Int16) && cvDepth == CV_8U ){
return std::floor(value/256.0);
}
// int16 -> int16
if( (gdalType == GDT_UInt16 || gdalType == GDT_Int16) &&
( cvDepth == CV_16U || cvDepth == CV_16S )){
return value;
}
std::cout << GDALGetDataTypeName( gdalType ) << std::endl;
std::cout << "warning: unknown range cast requested." << std::endl;
return (value);
}
/**
* There are some better mpl techniques for doing this.
*/
void write_pixel( const double& pixelValue,
const GDALDataType& gdalType,
const int& gdalChannels,
Mat& image,
const int& row,
const int& col,
const int& channel ){
// convert the pixel
double newValue = range_cast(gdalType, image.depth(), pixelValue );
// input: 1 channel, output: 1 channel
if( gdalChannels == 1 && image.channels() == 1 ){
if( image.depth() == CV_8U ){ image.at<uchar>(row,col) = newValue; }
else if( image.depth() == CV_16U ){ image.at<unsigned short>(row,col) = newValue; }
else if( image.depth() == CV_16S ){ image.at<short>(row,col) = newValue; }
else if( image.depth() == CV_32S ){ image.at<int>(row,col) = newValue; }
else if( image.depth() == CV_32F ){ image.at<float>(row,col) = newValue; }
else if( image.depth() == CV_64F ){ image.at<double>(row,col) = newValue; }
else{ throw std::runtime_error("Unknown image depth, gdal: 1, img: 1"); }
}
// input: 1 channel, output: 3 channel
else if( gdalChannels == 1 && image.channels() == 3 ){
if( image.depth() == CV_8U ){ image.at<Vec3b>(row,col) = Vec3b(newValue,newValue,newValue); }
else if( image.depth() == CV_16U ){ image.at<Vec3s>(row,col) = Vec3s(newValue,newValue,newValue); }
else if( image.depth() == CV_16S ){ image.at<Vec3s>(row,col) = Vec3s(newValue,newValue,newValue); }
else if( image.depth() == CV_32S ){ image.at<Vec3i>(row,col) = Vec3i(newValue,newValue,newValue); }
else if( image.depth() == CV_32F ){ image.at<Vec3f>(row,col) = Vec3f(newValue,newValue,newValue); }
else if( image.depth() == CV_64F ){ image.at<Vec3d>(row,col) = Vec3d(newValue,newValue,newValue); }
else{ throw std::runtime_error("Unknown image depth, gdal:1, img: 3"); }
}
// input: 3 channel, output: 1 channel
else if( gdalChannels == 3 && image.channels() == 1 ){
if( image.depth() == CV_8U ){ image.at<uchar>(row,col) += (newValue/3.0); }
else{ throw std::runtime_error("Unknown image depth, gdal:3, img: 1"); }
}
// input: 4 channel, output: 1 channel
else if( gdalChannels == 4 && image.channels() == 1 ){
if( image.depth() == CV_8U ){ image.at<uchar>(row,col) = newValue; }
else{ throw std::runtime_error("Unknown image depth, gdal: 4, image: 1"); }
}
// input: 3 channel, output: 3 channel
else if( gdalChannels == 3 && image.channels() == 3 ){
if( image.depth() == CV_8U ){ image.at<Vec3b>(row,col)[channel] = newValue; }
else if( image.depth() == CV_16U ){ image.at<Vec3s>(row,col)[channel] = newValue; }
else if( image.depth() == CV_16S ){ image.at<Vec3s>(row,col)[channel] = newValue; }
else if( image.depth() == CV_32S ){ image.at<Vec3i>(row,col)[channel] = newValue; }
else if( image.depth() == CV_32F ){ image.at<Vec3f>(row,col)[channel] = newValue; }
else if( image.depth() == CV_64F ){ image.at<Vec3d>(row,col)[channel] = newValue; }
else{ throw std::runtime_error("Unknown image depth, gdal: 3, image: 3"); }
}
// input: 4 channel, output: 3 channel
else if( gdalChannels == 4 && image.channels() == 3 ){
if( channel >= 4 ){ return; }
else if( image.depth() == CV_8U && channel < 4 ){ image.at<Vec3b>(row,col)[channel] = newValue; }
else if( image.depth() == CV_16U && channel < 4 ){ image.at<Vec3s>(row,col)[channel] = newValue; }
else if( image.depth() == CV_16S && channel < 4 ){ image.at<Vec3s>(row,col)[channel] = newValue; }
else if( image.depth() == CV_32S && channel < 4 ){ image.at<Vec3i>(row,col)[channel] = newValue; }
else if( image.depth() == CV_32F && channel < 4 ){ image.at<Vec3f>(row,col)[channel] = newValue; }
else if( image.depth() == CV_64F && channel < 4 ){ image.at<Vec3d>(row,col)[channel] = newValue; }
else{ throw std::runtime_error("Unknown image depth, gdal: 4, image: 3"); }
}
// input: 4 channel, output: 4 channel
else if( gdalChannels == 4 && image.channels() == 4 ){
if( image.depth() == CV_8U ){ image.at<Vec4b>(row,col)[channel] = newValue; }
else{ throw std::runtime_error("Unknown image depth, gdal: 4, image: 4"); }
}
// otherwise, throw an error
else{
throw std::runtime_error("error: can't convert types.");
}
}
void write_ctable_pixel( const double& pixelValue,
const GDALDataType& gdalType,
GDALColorTable const* gdalColorTable,
Mat& image,
const int& y,
const int& x,
const int& c ){
// if there is no color table, write the raw value and return before dereferencing it
if( gdalColorTable == NULL ){
write_pixel( pixelValue, gdalType, 1, image, y, x, c );
return;
}
// if we are Grayscale, then do a straight conversion
if( gdalColorTable->GetPaletteInterpretation() == GPI_Gray ){
write_pixel( pixelValue, gdalType, 1, image, y, x, c );
}
// if we are rgb, then convert here
else if( gdalColorTable->GetPaletteInterpretation() == GPI_RGB ){
// get the pixel
short r = gdalColorTable->GetColorEntry( (int)pixelValue )->c1;
short g = gdalColorTable->GetColorEntry( (int)pixelValue )->c2;
short b = gdalColorTable->GetColorEntry( (int)pixelValue )->c3;
short a = gdalColorTable->GetColorEntry( (int)pixelValue )->c4;
write_pixel( r, gdalType, 4, image, y, x, 2 );
write_pixel( g, gdalType, 4, image, y, x, 1 );
write_pixel( b, gdalType, 4, image, y, x, 0 );
if( image.channels() > 3 ){
write_pixel( a, gdalType, 4, image, y, x, 3 );
}
}
// otherwise, set zeros
else{
write_pixel( pixelValue, gdalType, 1, image, y, x, c );
}
}
/**
* read data
*/
bool GdalDecoder::readData( Mat& img ){
// make sure the image is the proper size
if( img.size().height != m_height ){
return false;
}
if( img.size().width != m_width ){
return false;
}
// make sure the raster is alive
if( m_dataset == NULL || m_driver == NULL ){
return false;
}
// set the image to zero
img = 0;
// iterate over each raster band
// note that OpenCV does bgr rather than rgb
int nChannels = m_dataset->GetRasterCount();
GDALColorTable* gdalColorTable = NULL;
if( m_dataset->GetRasterBand(1)->GetColorTable() != NULL ){
gdalColorTable = m_dataset->GetRasterBand(1)->GetColorTable();
}
const GDALDataType gdalType = m_dataset->GetRasterBand(1)->GetRasterDataType();
int nRows, nCols;
if( nChannels > img.channels() ){
nChannels = img.channels();
}
for( int c = 0; c<nChannels; c++ ){
// get the GDAL Band
GDALRasterBand* band = m_dataset->GetRasterBand(c+1);
// make sure the image band has the same dimensions as the image
if( band->GetXSize() != m_width || band->GetYSize() != m_height ){ return false; }
// grab the raster size
nRows = band->GetYSize();
nCols = band->GetXSize();
// create a temporary scanline pointer to store data
double* scanline = new double[nCols];
// iterate over each row and column
for( int y=0; y<nRows; y++ ){
// get the entire row
band->RasterIO( GF_Read, 0, y, nCols, 1, scanline, nCols, 1, GDT_Float64, 0, 0);
// set inside the image
for( int x=0; x<nCols; x++ ){
// set depending on image types
// given boost, I would use enable_if to speed up. Avoid for now.
if( hasColorTable == false ){
write_pixel( scanline[x], gdalType, nChannels, img, y, x, c );
}
else{
write_ctable_pixel( scanline[x], gdalType, gdalColorTable, img, y, x, c );
}
}
}
// delete our temp pointer
delete [] scanline;
}
return true;
}
/**
* Read image header
*/
bool GdalDecoder::readHeader(){
// load the dataset
m_dataset = (GDALDataset*) GDALOpen( m_filename.c_str(), GA_ReadOnly);
// if dataset is null, then there was a problem
if( m_dataset == NULL ){
return false;
}
// make sure we have pixel data inside the raster
if( m_dataset->GetRasterCount() <= 0 ){
return false;
}
// extract the driver information
m_driver = m_dataset->GetDriver();
// if the driver failed, then exit
if( m_driver == NULL ){
return false;
}
// get the image dimensions
m_width = m_dataset->GetRasterXSize();
m_height= m_dataset->GetRasterYSize();
// make sure we have at least one band/channel
if( m_dataset->GetRasterCount() <= 0 ){
return false;
}
// check if we have a color palette
int tempType;
if( m_dataset->GetRasterBand(1)->GetColorInterpretation() == GCI_PaletteIndex ){
// remember that we have a color palette
hasColorTable = true;
// if the color table does not exist, then we failed
if( m_dataset->GetRasterBand(1)->GetColorTable() == NULL ){
return false;
}
// otherwise, get the pixeltype
else{
// convert the palette interpretation to opencv type
tempType = gdalPaletteInterpretation2OpenCV( m_dataset->GetRasterBand(1)->GetColorTable()->GetPaletteInterpretation(),
m_dataset->GetRasterBand(1)->GetRasterDataType() );
if( tempType == -1 ){
return false;
}
m_type = tempType;
}
}
// otherwise, we have standard channels
else{
// remember that we don't have a color table
hasColorTable = false;
// convert the datatype to opencv
tempType = gdal2opencv( m_dataset->GetRasterBand(1)->GetRasterDataType(), m_dataset->GetRasterCount() );
if( tempType == -1 ){
return false;
}
m_type = tempType;
}
return true;
}
/**
* Close the module
*/
void GdalDecoder::close(){
GDALClose((GDALDatasetH)m_dataset);
m_dataset = NULL;
m_driver = NULL;
}
/**
* Create a new decoder
*/
ImageDecoder GdalDecoder::newDecoder()const{
return makePtr<GdalDecoder>();
}
/**
* Test the file signature
*/
bool GdalDecoder::checkSignature( const String& signature )const{
// look for NITF
std::string str = signature.c_str();
if( str.substr(0,4).find("NITF") != std::string::npos ){
return true;
}
// look for DTED
if( str.substr(140,4) == "DTED" ){
return true;
}
return false;
}
} /// End of cv Namespace
#endif /**< End of HAVE_GDAL Definition */

@ -0,0 +1,160 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// Intel License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000, Intel Corporation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of Intel Corporation may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef __GRFMT_GDAL_HPP__
#define __GRFMT_GDAL_HPP__
/// Macro to make sure we specified GDAL in CMake
#ifdef HAVE_GDAL
/// C++ Libraries
#include <iostream>
/// OpenCV Libraries
#include "grfmt_base.hpp"
#include "precomp.hpp"
/// Geospatial Data Abstraction Library
#include <gdal/cpl_conv.h>
#include <gdal/gdal_priv.h>
#include <gdal/gdal.h>
/// Start of CV Namespace
namespace cv {
/**
* Convert GDAL Palette Interpretation to OpenCV Pixel Type
*/
int gdalPaletteInterpretation2OpenCV( GDALPaletteInterp const& paletteInterp,
GDALDataType const& gdalType );
/**
* Convert a GDAL Raster Type to OpenCV Type
*/
int gdal2opencv( const GDALDataType& gdalType, const int& channels );
/**
* Write an image to pixel
*/
void write_pixel( const double& pixelValue,
GDALDataType const& gdalType,
const int& gdalChannels,
Mat& image,
const int& row,
const int& col,
const int& channel );
/**
* Write a color table pixel to the image
*/
void write_ctable_pixel( const double& pixelValue,
const GDALDataType& gdalType,
const GDALColorTable* gdalColorTable,
Mat& image,
const int& y,
const int& x,
const int& c );
/**
* Loader for GDAL
*/
class GdalDecoder : public BaseImageDecoder{
public:
/**
* Default Constructor
*/
GdalDecoder();
/**
* Destructor
*/
~GdalDecoder();
/**
* Read image data
*/
bool readData( Mat& img );
/**
* Read the image header
*/
bool readHeader();
/**
* Close the module
*/
void close();
/**
* Create a new decoder
*/
ImageDecoder newDecoder() const;
/**
* Test the file signature
*
* In general, this should be avoided as the user should specifically request GDAL.
* The reason is that GDAL tends to overlap with other image formats and it is probably
* safer to use other formats first.
*/
virtual bool checkSignature( const String& signature ) const;
protected:
/// GDAL Dataset
GDALDataset* m_dataset;
/// GDAL Driver
GDALDriver* m_driver;
/// Check if we are reading from a color table
bool hasColorTable;
}; /// End of GdalDecoder Class
} /// End of Namespace cv
#endif/*HAVE_GDAL*/
#endif/*__GRFMT_GDAL_HPP__*/

@ -53,5 +53,6 @@
#include "grfmt_exr.hpp"
#include "grfmt_webp.hpp"
#include "grfmt_hdr.hpp"
#include "grfmt_gdal.hpp"
#endif/*_GRFMTS_H_*/

@ -55,12 +55,22 @@
namespace cv
{
/**
* @struct ImageCodecInitializer
*
* Container which stores the registered codecs to be used by OpenCV
*/
struct ImageCodecInitializer
{
/**
* Default Constructor for the ImageCodecInitializer
*/
ImageCodecInitializer()
{
/// BMP Support
decoders.push_back( makePtr<BmpDecoder>() );
encoders.push_back( makePtr<BmpEncoder>() );
decoders.push_back( makePtr<HdrDecoder>() );
encoders.push_back( makePtr<HdrEncoder>() );
#ifdef HAVE_JPEG
@ -91,6 +101,11 @@ struct ImageCodecInitializer
decoders.push_back( makePtr<ExrDecoder>() );
encoders.push_back( makePtr<ExrEncoder>() );
#endif
#ifdef HAVE_GDAL
/// Attach the GDAL Decoder
decoders.push_back( makePtr<GdalDecoder>() );
#endif/*HAVE_GDAL*/
}
std::vector<ImageDecoder> decoders;
@ -99,29 +114,45 @@ struct ImageCodecInitializer
static ImageCodecInitializer codecs;
static ImageDecoder findDecoder( const String& filename )
{
/**
* Find the decoders
*
* @param[in] filename File to search
*
* @return Image decoder to parse image file.
*/
static ImageDecoder findDecoder( const String& filename ) {
size_t i, maxlen = 0;
/// iterate through list of registered codecs
for( i = 0; i < codecs.decoders.size(); i++ )
{
size_t len = codecs.decoders[i]->signatureLength();
maxlen = std::max(maxlen, len);
}
/// Open the file
FILE* f= fopen( filename.c_str(), "rb" );
/// in the event of a failure, return an empty image decoder
if( !f )
return ImageDecoder();
// read the file signature
String signature(maxlen, ' ');
maxlen = fread( (void*)signature.c_str(), 1, maxlen, f );
fclose(f);
signature = signature.substr(0, maxlen);
/// compare signature against all decoders
for( i = 0; i < codecs.decoders.size(); i++ )
{
if( codecs.decoders[i]->checkSignature(signature) )
return codecs.decoders[i]->newDecoder();
}
/// If no decoder was found, return base type
return ImageDecoder();
}
@ -193,6 +224,18 @@ static ImageEncoder findEncoder( const String& _ext )
enum { LOAD_CVMAT=0, LOAD_IMAGE=1, LOAD_MAT=2 };
/**
* Read an image into memory and return the information
*
* @param[in] filename File to load
* @param[in] flags Flags
* @param[in] hdrtype { LOAD_CVMAT=0,
* LOAD_IMAGE=1,
* LOAD_MAT=2
* }
* @param[in] mat Reference to C++ Mat object (If LOAD_MAT)
*
*/
static void*
imread_( const String& filename, int flags, int hdrtype, Mat* mat=0 )
{
@ -200,16 +243,37 @@ imread_( const String& filename, int flags, int hdrtype, Mat* mat=0 )
CvMat *matrix = 0;
Mat temp, *data = &temp;
ImageDecoder decoder = findDecoder(filename);
if( !decoder )
/// Search for the relevant decoder to handle the imagery
ImageDecoder decoder;
#ifdef HAVE_GDAL
if( (flags & IMREAD_LOAD_GDAL) == IMREAD_LOAD_GDAL ){
decoder = GdalDecoder().newDecoder();
}else{
#endif
decoder = findDecoder(filename);
#ifdef HAVE_GDAL
}
#endif
/// if no decoder was found, return nothing.
if( !decoder ){
return 0;
}
/// set the filename in the driver
decoder->setSource(filename);
if( !decoder->readHeader() )
// read the header to make sure it succeeds
if( !decoder->readHeader() )
return 0;
// establish the required input image size
CvSize size;
size.width = decoder->width();
size.height = decoder->height();
// grab the decoded type
int type = decoder->type();
if( flags != -1 )
{
@ -242,6 +306,7 @@ imread_( const String& filename, int flags, int hdrtype, Mat* mat=0 )
temp = cvarrToMat(image);
}
// read the image data
if( !decoder->readData( *data ))
{
cvReleaseImage( &image );
@ -255,10 +320,23 @@ imread_( const String& filename, int flags, int hdrtype, Mat* mat=0 )
hdrtype == LOAD_IMAGE ? (void*)image : (void*)mat;
}
/**
* Read an image
*
* This function merely calls the actual implementation above and returns the resulting Mat.
*
* @param[in] filename File to load
* @param[in] flags Flags you wish to set.
*/
Mat imread( const String& filename, int flags )
{
/// create the basic container
Mat img;
/// load the data
imread_( filename, flags, LOAD_MAT, &img );
/// return a reference to the data
return img;
}

@ -0,0 +1,229 @@
/**
* gdal-image.cpp -- Load GIS data into OpenCV Containers using the Geospatial Data Abstraction Library
*/
/// OpenCV Headers
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
/// C++ Standard Libraries
#include <cmath>
#include <iostream>
#include <stdexcept>
#include <vector>
using namespace std;
/// define the corner points
/// Note that GDAL can natively determine this
cv::Point2d tl( -122.441017, 37.815664 );
cv::Point2d tr( -122.370919, 37.815311 );
cv::Point2d bl( -122.441533, 37.747167 );
cv::Point2d br( -122.3715, 37.746814 );
/// determine dem corners
cv::Point2d dem_bl( -122.0, 38);
cv::Point2d dem_tr( -123.0, 37);
/// range of the heat map colors
std::vector<std::pair<cv::Vec3b,double> > color_range;
/**
* Linear Interpolation
* p1 - Point 1
* p2 - Point 2
* t - Ratio from Point 1 to Point 2
*/
cv::Point2d lerp( cv::Point2d const& p1, cv::Point2d const& p2, const double& t ){
return cv::Point2d( ((1-t)*p1.x) + (t*p2.x),
((1-t)*p1.y) + (t*p2.y));
}
/**
* Interpolate Colors
*/
template <typename DATATYPE, int N>
cv::Vec<DATATYPE,N> lerp( cv::Vec<DATATYPE,N> const& minColor,
cv::Vec<DATATYPE,N> const& maxColor,
double const& t ){
cv::Vec<DATATYPE,N> output;
for( int i=0; i<N; i++ ){
output[i] = ((1-t)*minColor[i]) + (t * maxColor[i]);
}
return output;
}
/**
* Compute the dem color
*/
cv::Vec3b get_dem_color( const double& elevation ){
// if the elevation is below the minimum, return the minimum
if( elevation < color_range[0].second ){
return color_range[0].first;
}
// if the elevation is above the maximum, return the maximum
if( elevation > color_range.back().second ){
return color_range.back().first;
}
// otherwise, find the proper starting index
int idx=0;
double t = 0;
for( int x=0; x<color_range.size()-1; x++ ){
// if the current elevation is below the next item, then use the current
// two colors as our range
if( elevation < color_range[x+1].second ){
idx=x;
t = (color_range[x+1].second - elevation)/
(color_range[x+1].second - color_range[x].second);
break;
}
}
// interpolate the color
return lerp( color_range[idx].first, color_range[idx+1].first, t);
}
/**
* Given a pixel coordinate and the size of the input image, compute the pixel location
* on the DEM image.
*/
cv::Point2d world2dem( cv::Point2d const& coordinate, const cv::Size& dem_size ){
// relate this to the dem points
// ASSUMING THAT DEM DATA IS ORTHORECTIFIED
double demRatioX = ((dem_tr.x - coordinate.x)/(dem_tr.x - dem_bl.x));
double demRatioY = 1-((dem_tr.y - coordinate.y)/(dem_tr.y - dem_bl.y));
cv::Point2d output;
output.x = demRatioX * dem_size.width;
output.y = demRatioY * dem_size.height;
return output;
}
/**
* Convert a pixel coordinate to world coordinates
*/
cv::Point2d pixel2world( const int& x, const int& y, const cv::Size& size ){
// compute the ratio of the pixel location to its dimension
double rx = (double)x / size.width;
double ry = (double)y / size.height;
// compute LERP of each coordinate
cv::Point2d rightSide = lerp(tr, br, ry);
cv::Point2d leftSide = lerp(tl, bl, ry);
// compute the actual Lat/Lon coordinate of the interpolated coordinate
return lerp( leftSide, rightSide, rx );
}
/**
* Add color to a specific pixel color value
*/
void add_color( cv::Vec3b& pix, const uchar& b, const uchar& g, const uchar& r ){
if( pix[0] + b < 255 && pix[0] + b >= 0 ){ pix[0] += b; }
if( pix[1] + g < 255 && pix[1] + g >= 0 ){ pix[1] += g; }
if( pix[2] + r < 255 && pix[2] + r >= 0 ){ pix[2] += r; }
}
/**
* Main Function
*/
int main( int argc, char* argv[] ){
/**
* Check input arguments
*/
if( argc < 3 ){
cout << "usage: " << argv[0] << " <image> <dem>" << endl;
return 1;
}
/// load the image (note that we don't have the projection information. You will
/// need to load that yourself or use the full GDAL driver. The values are pre-defined
/// at the top of this file
cv::Mat image = cv::imread(argv[1], cv::IMREAD_LOAD_GDAL | cv::IMREAD_COLOR );
/// load the dem model
cv::Mat dem = cv::imread(argv[2], cv::IMREAD_LOAD_GDAL | cv::IMREAD_ANYDEPTH );
/// create our output products
cv::Mat output_dem( image.size(), CV_8UC3 );
cv::Mat output_dem_flood( image.size(), CV_8UC3 );
/// for sanity's sake, make sure GDAL loads it as a signed short
if( dem.type() != CV_16SC1 ){ throw std::runtime_error("DEM image type must be CV_16SC1"); }
/// define the color range to create our output DEM heat map
// Pair format ( Color, elevation ); Push from low to high
// Note: This would be perfect for a configuration file, but is here for a working demo.
color_range.push_back( std::pair<cv::Vec3b,double>(cv::Vec3b( 188, 154, 46), -1));
color_range.push_back( std::pair<cv::Vec3b,double>(cv::Vec3b( 110, 220, 110), 0.25));
color_range.push_back( std::pair<cv::Vec3b,double>(cv::Vec3b( 150, 250, 230), 20));
color_range.push_back( std::pair<cv::Vec3b,double>(cv::Vec3b( 160, 220, 200), 75));
color_range.push_back( std::pair<cv::Vec3b,double>(cv::Vec3b( 220, 190, 170), 100));
color_range.push_back( std::pair<cv::Vec3b,double>(cv::Vec3b( 250, 180, 140), 200));
// define a minimum elevation
double minElevation = -10;
// iterate over each pixel in the image, computing the dem point
for( int y=0; y<image.rows; y++ ){
for( int x=0; x<image.cols; x++ ){
// convert the pixel coordinate to lat/lon coordinates
cv::Point2d coordinate = pixel2world( x, y, image.size() );
// compute the dem image pixel coordinate from lat/lon
cv::Point2d dem_coordinate = world2dem( coordinate, dem.size() );
// extract the elevation
double dz;
if( dem_coordinate.x >= 0 && dem_coordinate.y >= 0 &&
dem_coordinate.x < dem.cols && dem_coordinate.y < dem.rows ){
dz = dem.at<short>(dem_coordinate);
}else{
dz = minElevation;
}
// write the pixel value to the file
output_dem_flood.at<cv::Vec3b>(y,x) = image.at<cv::Vec3b>(y,x);
// compute the color for the heat map output
cv::Vec3b actualColor = get_dem_color(dz);
output_dem.at<cv::Vec3b>(y,x) = actualColor;
// show effect of a 10 meter increase in ocean levels
if( dz < 10 ){
add_color( output_dem_flood.at<cv::Vec3b>(y,x), 90, 0, 0 );
}
// show effect of a 50 meter increase in ocean levels
else if( dz < 50 ){
add_color( output_dem_flood.at<cv::Vec3b>(y,x), 0, 90, 0 );
}
// show effect of a 100 meter increase in ocean levels
else if( dz < 100 ){
add_color( output_dem_flood.at<cv::Vec3b>(y,x), 0, 0, 90 );
}
}}
// write the heat map
cv::imwrite( "heat-map.jpg", output_dem );
// write the flooding effect image
cv::imwrite( "flooded.jpg", output_dem_flood);
return 0;
}