the combined cxts + gtest

Branch: pull/13383/head
Author: Vadim Pisarevsky (14 years ago)
Parent: 77529b1fa6
Commit: e4b91918b1
10 changed files:
  modules/ts/CMakeLists.txt                      +4
  modules/ts/README                            +422
  modules/ts/include/opencv2/ts/ts.hpp         +552
  modules/ts/include/opencv2/ts/ts_gtest.h   +18007
  modules/ts/src/precomp.cpp                     +1
  modules/ts/src/precomp.hpp                     +2
  modules/ts/src/ts.cpp                        +582
  modules/ts/src/ts_arrtest.cpp                +358
  modules/ts/src/ts_func.cpp                  +2899
  modules/ts/src/ts_gtest.cpp                 +8510

modules/ts/CMakeLists.txt:
@@ -0,0 +1,4 @@
if(BUILD_SHARED_LIBS)
add_definitions(-DGTEST_CREATE_SHARED_LIBRARY=1)
endif()
define_opencv_module(ts opencv_core)

modules/ts/README:
@@ -0,0 +1,422 @@
The new OpenCV test engine is based
on the Google C++ Testing Framework (GTest).
Below is the original GTest README.
-----------------------------------
Google C++ Testing Framework
============================
http://code.google.com/p/googletest/
Overview
--------
Google's framework for writing C++ tests on a variety of platforms
(Linux, Mac OS X, Windows, Windows CE, Symbian, etc). Based on the
xUnit architecture. Supports automatic test discovery, a rich set of
assertions, user-defined assertions, death tests, fatal and non-fatal
failures, various options for running the tests, and XML test report
generation.
Please see the project page above for more information as well as the
mailing list for questions, discussions, and development. There is
also an IRC channel on OFTC (irc.oftc.net) #gtest available. Please
join us!
Requirements for End Users
--------------------------
Google Test is designed to have fairly minimal requirements to build
and use with your projects, but there are some. Currently, we support
Linux, Windows, Mac OS X, and Cygwin. We will also make our best
effort to support other platforms (e.g. Solaris, AIX, and z/OS).
However, since core members of the Google Test project have no access
to these platforms, Google Test may have outstanding issues there. If
you notice any problems on your platform, please notify
googletestframework@googlegroups.com. Patches for fixing them are
even more welcome!
### Linux Requirements ###
These are the base requirements to build and use Google Test from a source
package (as described below):
* GNU-compatible Make or gmake
* POSIX-standard shell
* POSIX(-2) Regular Expressions (regex.h)
* A C++98-standard-compliant compiler
### Windows Requirements ###
* Microsoft Visual C++ 7.1 or newer
### Cygwin Requirements ###
* Cygwin 1.5.25-14 or newer
### Mac OS X Requirements ###
* Mac OS X 10.4 Tiger or newer
* Developer Tools Installed
Also, you'll need CMake 2.6.4 or higher if you want to build the
samples using the provided CMake script, regardless of the platform.
Requirements for Contributors
-----------------------------
We welcome patches. If you plan to contribute a patch, you need to
build Google Test and its own tests from an SVN checkout (described
below), which has further requirements:
* Python version 2.3 or newer (for running some of the tests and
re-generating certain source files from templates)
* CMake 2.6.4 or newer
Getting the Source
------------------
There are two primary ways of getting Google Test's source code: you
can download a stable source release in your preferred archive format,
or directly check out the source from our Subversion (SVN) repository.
The SVN checkout requires a few extra steps and some extra software
packages on your system, but lets you track the latest development and
make patches much more easily, so we highly encourage it.
### Source Package ###
Google Test is released in versioned source packages which can be
downloaded from the download page [1]. Several different archive
formats are provided, but the only difference is the tools used to
manipulate them, and the size of the resulting file. Download
whichever you are most comfortable with.
[1] http://code.google.com/p/googletest/downloads/list
Once the package is downloaded, expand it using whichever tools you
prefer for that type. This will result in a new directory with the
name "gtest-X.Y.Z" which contains all of the source code. Here are
some examples on Linux:
tar -xvzf gtest-X.Y.Z.tar.gz
tar -xvjf gtest-X.Y.Z.tar.bz2
unzip gtest-X.Y.Z.zip
### SVN Checkout ###
To check out the main branch (also known as the "trunk") of Google
Test, run the following Subversion command:
svn checkout http://googletest.googlecode.com/svn/trunk/ gtest-svn
Setting up the Build
--------------------
To build Google Test and your tests that use it, you need to tell your
build system where to find its headers and source files. The exact
way to do it depends on which build system you use, and is usually
straightforward.
### Generic Build Instructions ###
Suppose you put Google Test in directory ${GTEST_DIR}. To build it,
create a library build target (or a project as called by Visual Studio
and Xcode) to compile
${GTEST_DIR}/src/gtest-all.cc
with
${GTEST_DIR}/include and ${GTEST_DIR}
in the header search path. Assuming a Linux-like system and gcc,
something like the following will do:
g++ -I${GTEST_DIR}/include -I${GTEST_DIR} -c ${GTEST_DIR}/src/gtest-all.cc
ar -rv libgtest.a gtest-all.o
Next, you should compile your test source file with
${GTEST_DIR}/include in the header search path, and link it with gtest
and any other necessary libraries:
g++ -I${GTEST_DIR}/include path/to/your_test.cc libgtest.a -o your_test
As an example, the make/ directory contains a Makefile that you can
use to build Google Test on systems where GNU make is available
(e.g. Linux, Mac OS X, and Cygwin). It doesn't try to build Google
Test's own tests. Instead, it just builds the Google Test library and
a sample test. You can use it as a starting point for your own build
script.
If the default settings are correct for your environment, the
following commands should succeed:
cd ${GTEST_DIR}/make
make
./sample1_unittest
If you see errors, try to tweak the contents of make/Makefile to make
them go away. There are instructions in make/Makefile on how to do
it.
### Using CMake ###
Google Test comes with a CMake build script (CMakeLists.txt) that can
be used on a wide range of platforms ("C" stands for cross-platform).
If you don't have CMake installed already, you can download it for
free from http://www.cmake.org/.
CMake works by generating native makefiles or build projects that can
be used in the compiler environment of your choice. The typical
workflow starts with:
mkdir mybuild # Create a directory to hold the build output.
cd mybuild
cmake ${GTEST_DIR} # Generate native build scripts.
If you want to build Google Test's samples, you should replace the
last command with
cmake -Dbuild_gtest_samples=ON ${GTEST_DIR}
If you are on a *nix system, you should now see a Makefile in the
current directory. Just type 'make' to build gtest.
If you use Windows and have Visual Studio installed, a gtest.sln file
and several .vcproj files will be created. You can then build them
using Visual Studio.
On Mac OS X with Xcode installed, a .xcodeproj file will be generated.
### Legacy Build Scripts ###
Before settling on CMake, we had been providing hand-maintained build
projects/scripts for Visual Studio, Xcode, and Autotools. While we
continue to provide them for convenience, they are not actively
maintained any more. We highly recommend that you follow the
instructions in the previous two sections to integrate Google Test
with your existing build system.
If you still need to use the legacy build scripts, here's how:
The msvc\ folder contains two solutions with Visual C++ projects.
Open the gtest.sln or gtest-md.sln file using Visual Studio, and you
are ready to build Google Test the same way you build any Visual
Studio project. Files that have names ending with -md use DLL
versions of Microsoft runtime libraries (the /MD or the /MDd compiler
option). Files without that suffix use static versions of the runtime
libraries (the /MT or the /MTd option). Please note that one must use
the same option to compile both gtest and the test code. If you use
Visual Studio 2005 or above, we recommend the -md version as /MD is
the default for new projects in these versions of Visual Studio.
On Mac OS X, open the gtest.xcodeproj in the xcode/ folder using
Xcode. Build the "gtest" target. The universal binary framework will
end up in your selected build directory (selected in the Xcode
"Preferences..." -> "Building" pane and defaults to xcode/build).
Alternatively, at the command line, enter:
xcodebuild
This will build the "Release" configuration of gtest.framework in your
default build location. See the "xcodebuild" man page for more
information about building different configurations and building in
different locations.
Tweaking Google Test
--------------------
Google Test can be used in diverse environments. The default
configuration may not work (or may not work well) out of the box in
some environments. However, you can easily tweak Google Test by
defining control macros on the compiler command line. Generally,
these macros are named like GTEST_XYZ and you define them to either 1
or 0 to enable or disable a certain feature.
We list the most frequently used macros below. For a complete list,
see file include/gtest/internal/gtest-port.h.
### Choosing a TR1 Tuple Library ###
Some Google Test features require the C++ Technical Report 1 (TR1)
tuple library, which is not yet available with all compilers. The
good news is that Google Test implements a subset of TR1 tuple that's
enough for its own need, and will automatically use this when the
compiler doesn't provide TR1 tuple.
Usually you don't need to care about which tuple library Google Test
uses. However, if your project already uses TR1 tuple, you need to
tell Google Test to use the same TR1 tuple library the rest of your
project uses, or the two tuple implementations will clash. To do
that, add
-DGTEST_USE_OWN_TR1_TUPLE=0
to the compiler flags while compiling Google Test and your tests. If
you want to force Google Test to use its own tuple library, just add
-DGTEST_USE_OWN_TR1_TUPLE=1
to the compiler flags instead.
If you don't want Google Test to use tuple at all, add
-DGTEST_HAS_TR1_TUPLE=0
and all features using tuple will be disabled.
### Multi-threaded Tests ###
Google Test is thread-safe where the pthread library is available.
After #include <gtest/gtest.h>, you can check the GTEST_IS_THREADSAFE
macro to see whether this is the case (yes if the macro is #defined to
1, no if it's undefined).
If Google Test doesn't correctly detect whether pthread is available
in your environment, you can force it with
-DGTEST_HAS_PTHREAD=1
or
-DGTEST_HAS_PTHREAD=0
When Google Test uses pthread, you may need to add flags to your
compiler and/or linker to select the pthread library, or you'll get
link errors. If you use the CMake script or the deprecated Autotools
script, this is taken care of for you. If you use your own build
script, you'll need to read your compiler and linker's manual to
figure out what flags to add.
### As a Shared Library (DLL) ###
Google Test is compact, so most users can build and link it as a
static library for simplicity.  You can choose to use Google Test
as a shared library (known as a DLL on Windows) if you prefer.
To compile gtest as a shared library, add
-DGTEST_CREATE_SHARED_LIBRARY=1
to the compiler flags. You'll also need to tell the linker to produce
a shared library instead - consult your linker's manual for how to do
it.
To compile your tests that use the gtest shared library, add
-DGTEST_LINKED_AS_SHARED_LIBRARY=1
to the compiler flags.
### Avoiding Macro Name Clashes ###
In C++, macros don't obey namespaces. Therefore two libraries that
both define a macro of the same name will clash if you #include both
definitions. In case a Google Test macro clashes with another
library, you can force Google Test to rename its macro to avoid the
conflict.
Specifically, if both Google Test and some other code define macro
FOO, you can add
-DGTEST_DONT_DEFINE_FOO=1
to the compiler flags to tell Google Test to change the macro's name
from FOO to GTEST_FOO. Currently FOO can be FAIL, SUCCEED, or TEST.
For example, with -DGTEST_DONT_DEFINE_TEST=1, you'll need to write
GTEST_TEST(SomeTest, DoesThis) { ... }
instead of
TEST(SomeTest, DoesThis) { ... }
in order to define a test.
Upgrading from an Earlier Version
---------------------------------
We strive to keep Google Test releases backward compatible.
Sometimes, though, we have to make some breaking changes for the
users' long-term benefits. This section describes what you'll need to
do if you are upgrading from an earlier version of Google Test.
### Upgrading from 1.3.0 or Earlier ###
You may need to explicitly enable or disable Google Test's own TR1
tuple library. See the instructions in section "Choosing a TR1 Tuple
Library".
### Upgrading from 1.4.0 or Earlier ###
The Autotools build script (configure + make) is no longer officially
supported.  You are encouraged to migrate to your own build system or
use CMake. If you still need to use Autotools, you can find
instructions in the README file from Google Test 1.4.0.
On platforms where the pthread library is available, Google Test uses
it in order to be thread-safe. See the "Multi-threaded Tests" section
for what this means to your build script.
If you use Microsoft Visual C++ 7.1 with exceptions disabled, Google
Test will no longer compile. This should affect very few people, as a
large portion of STL (including <string>) doesn't compile in this mode
anyway. We decided to stop supporting it in order to greatly simplify
Google Test's implementation.
Developing Google Test
----------------------
This section discusses how to make your own changes to Google Test.
### Testing Google Test Itself ###
To make sure your changes work as intended and don't break existing
functionality, you'll want to compile and run Google Test's own tests.
For that you can use CMake:
mkdir mybuild
cd mybuild
cmake -Dbuild_all_gtest_tests=ON ${GTEST_DIR}
Make sure you have Python installed, as some of Google Test's tests
are written in Python. If the cmake command complains about not being
able to find Python ("Could NOT find PythonInterp (missing:
PYTHON_EXECUTABLE)"), try telling it explicitly where your Python
executable can be found:
cmake -DPYTHON_EXECUTABLE=path/to/python -Dbuild_all_gtest_tests=ON \
${GTEST_DIR}
Next, you can build Google Test and all of its own tests. On *nix,
this is usually done by 'make'. To run the tests, do
make test
All tests should pass.
### Regenerating Source Files ###
Some of Google Test's source files are generated from templates (not
in the C++ sense) using a script. A template file is named FOO.pump,
where FOO is the name of the file it will generate. For example, the
file include/gtest/internal/gtest-type-util.h.pump is used to generate
gtest-type-util.h in the same directory.
Normally you don't need to worry about regenerating the source files,
unless you need to modify them. In that case, you should modify the
corresponding .pump files instead and run the pump.py Python script to
regenerate them. You can find pump.py in the scripts/ directory.
Read the Pump manual [2] for how to use it.
[2] http://code.google.com/p/googletest/wiki/PumpManual
### Contributing a Patch ###
We welcome patches. Please read the Google Test developer's guide [3]
for how you can contribute. In particular, make sure you have signed
the Contributor License Agreement, or we won't be able to accept the
patch.
[3] http://code.google.com/p/googletest/wiki/GoogleTestDevGuide
Happy testing!

modules/ts/include/opencv2/ts/ts.hpp:
@@ -0,0 +1,552 @@
#ifndef __OPENCV_GTESTCV_HPP__
#define __OPENCV_GTESTCV_HPP__
#include "opencv2/ts/ts_gtest.h"
#include "opencv2/core/core.hpp"
namespace cvtest
{
using std::vector;
using std::string;
using cv::RNG;
using cv::Mat;
using cv::Scalar;
using cv::Size;
using cv::Point;
using cv::Rect;
class CV_EXPORTS TS;
enum
{
TYPE_MASK_8U = 1 << CV_8U,
TYPE_MASK_8S = 1 << CV_8S,
TYPE_MASK_16U = 1 << CV_16U,
TYPE_MASK_16S = 1 << CV_16S,
TYPE_MASK_32S = 1 << CV_32S,
TYPE_MASK_32F = 1 << CV_32F,
TYPE_MASK_64F = 1 << CV_64F,
TYPE_MASK_ALL = (TYPE_MASK_64F<<1)-1,
TYPE_MASK_ALL_BUT_8S = TYPE_MASK_ALL & ~TYPE_MASK_8S,
TYPE_MASK_FLT = TYPE_MASK_32F + TYPE_MASK_64F
};
CV_EXPORTS int64 readSeed(const char* str);
CV_EXPORTS void randUni( RNG& rng, Mat& a, const Scalar& param1, const Scalar& param2 );
inline unsigned randInt( RNG& rng )
{
return (unsigned)rng;
}
inline double randReal( RNG& rng )
{
return (double)rng;
}
CV_EXPORTS const char* getTypeName( int type );
CV_EXPORTS int typeByName( const char* type_name );
CV_EXPORTS string vec2str(const string& sep, const int* v, size_t nelems);
inline int clipInt( int val, int min_val, int max_val )
{
if( val < min_val )
val = min_val;
if( val > max_val )
val = max_val;
return val;
}
CV_EXPORTS double getMinVal(int depth);
CV_EXPORTS double getMaxVal(int depth);
CV_EXPORTS Size randomSize(RNG& rng, double maxSizeLog);
CV_EXPORTS void randomSize(RNG& rng, int minDims, int maxDims, double maxSizeLog, vector<int>& sz);
CV_EXPORTS int randomType(RNG& rng, int typeMask, int minChannels, int maxChannels);
CV_EXPORTS Mat randomMat(RNG& rng, Size size, int type, double minVal, double maxVal, bool useRoi);
CV_EXPORTS Mat randomMat(RNG& rng, const vector<int>& size, int type, double minVal, double maxVal, bool useRoi);
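Taken together, these generators can produce a full random test input. A minimal sketch (the seed, mask, and bounds below are arbitrary choices, not part of the API):

    cv::RNG rng(0x1234567890abcdefULL);  // fixed seed => reproducible data
    // pick a random type with 1..4 channels, excluding 8S depths
    int type = cvtest::randomType(rng, cvtest::TYPE_MASK_ALL_BUT_8S, 1, 4);
    // random size; maxSizeLog presumably bounds each dimension by ~2^4
    cv::Size sz = cvtest::randomSize(rng, 4.0);
    cv::Mat m = cvtest::randomMat(rng, sz, type, -100., 100., false /* useRoi */);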
CV_EXPORTS void add(const Mat& a, double alpha, const Mat& b, double beta,
Scalar gamma, Mat& c, int ctype, bool calcAbs=false);
CV_EXPORTS void multiply(const Mat& a, const Mat& b, Mat& c, double alpha=1);
CV_EXPORTS void divide(const Mat& a, const Mat& b, Mat& c, double alpha=1);
CV_EXPORTS void convert(const Mat& src, Mat& dst, int dtype, double alpha=1, double beta=0);
CV_EXPORTS void copy(const Mat& src, Mat& dst, const Mat& mask=Mat(), bool invertMask=false);
CV_EXPORTS void set(Mat& dst, const Scalar& gamma, const Mat& mask=Mat());
// working with multi-channel arrays
CV_EXPORTS void extract( const Mat& a, Mat& plane, int coi );
CV_EXPORTS void insert( const Mat& plane, Mat& a, int coi );
// checks that the array does not have NaNs and/or Infs and all the elements are
// within [min_val,max_val). idx is the index of the first "bad" element.
CV_EXPORTS int check( const Mat& data, double min_val, double max_val, vector<int>* idx );
// modifies values that are close to zero
CV_EXPORTS void patchZeros( Mat& mat, double level );
CV_EXPORTS void transpose(const Mat& src, Mat& dst);
CV_EXPORTS void erode(const Mat& src, Mat& dst, const Mat& _kernel, Point anchor=Point(-1,-1),
int borderType=IPL_BORDER_CONSTANT, const Scalar& borderValue=Scalar());
CV_EXPORTS void dilate(const Mat& src, Mat& dst, const Mat& _kernel, Point anchor=Point(-1,-1),
int borderType=IPL_BORDER_CONSTANT, const Scalar& borderValue=Scalar());
CV_EXPORTS void filter2D(const Mat& src, Mat& dst, int ddepth, const Mat& kernel,
Point anchor, double delta, int borderType,
const Scalar& borderValue=Scalar());
CV_EXPORTS void copyMakeBorder(const Mat& src, Mat& dst, int top, int bottom, int left, int right,
int borderType, const Scalar& borderValue=Scalar());
CV_EXPORTS Mat calcSobelKernel2D( int dx, int dy, int apertureSize, int origin=0 );
CV_EXPORTS Mat calcLaplaceKernel2D( int aperture_size );
CV_EXPORTS void initUndistortMap( const Mat& a, const Mat& k, Size sz, Mat& mapx, Mat& mapy );
CV_EXPORTS void minMaxLoc(const Mat& src, double* minval, double* maxval,
vector<int>* minloc, vector<int>* maxloc, const Mat& mask=Mat());
CV_EXPORTS double norm(const Mat& src, int normType, const Mat& mask=Mat());
CV_EXPORTS double norm(const Mat& src1, const Mat& src2, int normType, const Mat& mask=Mat());
CV_EXPORTS Scalar mean(const Mat& src, const Mat& mask=Mat());
CV_EXPORTS bool cmpUlps(const Mat& data, const Mat& refdata, int expMaxDiff, double* realMaxDiff, vector<int>* idx);
// compares two arrays. max_diff is the maximum actual difference,
// success_err_level is the maximum allowed difference, idx is the index of the first
// element for which the difference is > success_err_level
// (or the index of the element with the maximum difference)
CV_EXPORTS int cmpEps( const Mat& data, const Mat& refdata, double* max_diff,
double success_err_level, vector<int>* idx,
bool element_wise_relative_error );
// a wrapper for the previous function; in case of error it prints the message to the log file.
CV_EXPORTS int cmpEps2( TS* ts, const Mat& data, const Mat& refdata, double success_err_level,
bool element_wise_relative_error, const char* desc );
CV_EXPORTS int cmpEps2_64f( TS* ts, const double* val, const double* refval, int len,
double eps, const char* param_name );
CV_EXPORTS void logicOp(const Mat& src1, const Mat& src2, Mat& dst, char c);
CV_EXPORTS void logicOp(const Mat& src, const Scalar& s, Mat& dst, char c);
CV_EXPORTS void min(const Mat& src1, const Mat& src2, Mat& dst);
CV_EXPORTS void min(const Mat& src, double s, Mat& dst);
CV_EXPORTS void max(const Mat& src1, const Mat& src2, Mat& dst);
CV_EXPORTS void max(const Mat& src, double s, Mat& dst);
CV_EXPORTS void compare(const Mat& src1, const Mat& src2, Mat& dst, int cmpop);
CV_EXPORTS void compare(const Mat& src, double s, Mat& dst, int cmpop);
CV_EXPORTS void gemm(const Mat& src1, const Mat& src2, double alpha,
const Mat& src3, double beta, Mat& dst, int flags);
CV_EXPORTS void transform( const Mat& src, Mat& dst, const Mat& transmat, const Mat& shift );
CV_EXPORTS double crossCorr(const Mat& src1, const Mat& src2);
struct CV_EXPORTS MatInfo
{
MatInfo(const Mat& _m) : m(&_m) {}
const Mat* m;
};
CV_EXPORTS std::ostream& operator << (std::ostream& out, const MatInfo& m);
struct CV_EXPORTS MatComparator
{
public:
MatComparator(double maxdiff, int context);
::testing::AssertionResult operator()(const char* expr1, const char* expr2,
const Mat& m1, const Mat& m2);
double maxdiff;
double realmaxdiff;
vector<int> loc0;
int context;
};
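MatComparator's operator() has exactly the shape of a GTest two-argument predicate-formatter, so presumably it is meant to be plugged into EXPECT_PRED_FORMAT2; a hedged sketch (the tolerance is made up):

    cv::Mat result, reference;  // filled in by the test
    // on mismatch this prints both expression strings plus matrix details
    EXPECT_PRED_FORMAT2(cvtest::MatComparator(1e-5, 0), result, reference);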
class BaseTest;
class TS;
class CV_EXPORTS BaseTest
{
public:
// constructor(s) and destructor
BaseTest();
virtual ~BaseTest();
// the main procedure of the test
virtual void run( int start_from );
// the wrapper for run() that takes care of exceptions
virtual void safe_run( int start_from=0 );
const string& get_name() const { return name; }
// returns true if and only if the different test cases do not depend on each other
// (so that the test system can jump right to a problematic test case)
virtual bool can_do_fast_forward();
// deallocates all the memory.
// called by init() (before initialization) and by the destructor
virtual void clear();
protected:
int test_case_count; // the total number of test cases
// read test params
virtual int read_params( CvFileStorage* fs );
// returns the number of tests or -1 if it is unknown a priori
virtual int get_test_case_count();
// prepares data for the next test case. rng seed is updated by the function
virtual int prepare_test_case( int test_case_idx );
// checks if the test output is valid and accurate
virtual int validate_test_results( int test_case_idx );
// calls the tested function. the method is called from run_test_case()
virtual void run_func(); // runs tested func(s)
// updates progress bar
virtual int update_progress( int progress, int test_case_idx, int count, double dt );
// finds test parameter
const CvFileNode* find_param( CvFileStorage* fs, const char* param_name );
// name of the test (it is possible to locate a test by its name)
string name;
// pointer to the system that includes the test
TS* ts;
};
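A typical use is to derive from BaseTest and override the virtual hooks; a minimal sketch under that assumption (all names below are hypothetical):

    class MySanityTest : public cvtest::BaseTest
    {
    public:
        MySanityTest() { test_case_count = 10; }
    protected:
        void run_func()
        {
            // invoke the function under test here
        }
        int validate_test_results( int /*test_case_idx*/ )
        {
            // on failure: ts->set_failed_test_info( cvtest::TS::FAIL_MISMATCH );
            return 0;
        }
    };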
/*****************************************************************************************\
* Information about a failed test *
\*****************************************************************************************/
struct TestInfo
{
TestInfo();
// pointer to the test
BaseTest* test;
// failure code (CV_FAIL*)
int code;
// seed value right before the data for the failed test case is prepared.
uint64 rng_seed;
// seed value right before running the test
uint64 rng_seed0;
// index of the test case; it can then be passed to BaseTest::proceed_to_test_case()
int test_case_idx;
};
/*****************************************************************************************\
* Base Class for test system *
\*****************************************************************************************/
// common parameters:
struct CV_EXPORTS TSParams
{
TSParams();
// RNG seed, passed to and updated by every test executed.
uint64 rng_seed;
// whether to use IPP, MKL etc. or not
bool use_optimized;
// extensivity of the tests, scale factor for test_case_count
double test_case_count_scale;
};
class CV_EXPORTS TS
{
public:
// constructor(s) and destructor
TS();
virtual ~TS();
enum
{
NUL=0,
SUMMARY_IDX=0,
SUMMARY=1 << SUMMARY_IDX,
LOG_IDX=1,
LOG=1 << LOG_IDX,
CSV_IDX=2,
CSV=1 << CSV_IDX,
CONSOLE_IDX=3,
CONSOLE=1 << CONSOLE_IDX,
MAX_IDX=4
};
static TS* ptr();
// initialize test system before running the first test
virtual void init( const string& modulename );
// low-level printing functions that are used by individual tests and by the system itself
virtual void printf( int streams, const char* fmt, ... );
virtual void vprintf( int streams, const char* fmt, va_list arglist );
// updates the context: current test, test case, rng state
virtual void update_context( BaseTest* test, int test_case_idx, bool update_ts_context );
const TestInfo* get_current_test_info() { return &current_test_info; }
// sets information about a failed test
virtual void set_failed_test_info( int fail_code );
virtual void set_gtest_status();
// test error codes
enum
{
// everything is Ok
OK=0,
// generic error: stub value to be used
// temporarily if the error's cause is unknown
FAIL_GENERIC=-1,
// the test is missing some essential data to proceed further
FAIL_MISSING_TEST_DATA=-2,
// the tested function raised an error via cxcore error handler
FAIL_ERROR_IN_CALLED_FUNC=-3,
// an exception has been raised;
// for memory and arithmetic exceptions
// there are two specialized codes (see below)
FAIL_EXCEPTION=-4,
// a memory exception
// (access violation, access to a missing page, stack overflow etc.)
FAIL_MEMORY_EXCEPTION=-5,
// arithmetic exception (overflow, division by zero etc.)
FAIL_ARITHM_EXCEPTION=-6,
// the tested function corrupted memory (no exception has been raised)
FAIL_MEMORY_CORRUPTION_BEGIN=-7,
FAIL_MEMORY_CORRUPTION_END=-8,
// the tested function (or the test itself) does not deallocate some memory
FAIL_MEMORY_LEAK=-9,
// the tested function returned an invalid object, e.g. a matrix containing NaNs
// or a structure with NULL or out-of-range fields (while it should not)
FAIL_INVALID_OUTPUT=-10,
// the tested function returned a valid object, but it does not match
// the original (or the one produced by the test)
FAIL_MISMATCH=-11,
// the tested function returned a valid object (a single number or a numerical array),
// but it differs too much from the original (or the one produced by the test)
FAIL_BAD_ACCURACY=-12,
// the tested function hung; sometimes this can be detected by unexpectedly long
// processing time (in this case there should be a possibility to interrupt such a function)
FAIL_HANG=-13,
// unexpected response to passing bad arguments to the tested function
// (the function crashed, proceeded successfully (while it should not have), or returned
// an error code different from the expected one)
FAIL_BAD_ARG_CHECK=-14,
// the test data (in whole or for the particular test case) is invalid
FAIL_INVALID_TEST_DATA=-15,
// the test has been skipped because it is not in the selected subset of the tests to run,
// because it has been run already within the same run with the same parameters, or because
// of some other reason; this is not considered an error.
// Normally TS::run() (or the overridden method in a derived class) takes care of what
// needs to be run, so this code should not occur.
SKIPPED=1
};
// get file storage
CvFileStorage* get_file_storage();
// get RNG to generate random input data for a test
RNG& get_rng() { return rng; }
// returns the current error code
int get_err_code() { return current_test_info.code; }
// returns the test extensivity scale
double get_test_case_count_scale() { return params.test_case_count_scale; }
const string& get_data_path() const { return data_path; }
// returns textual description of failure code
static string str_from_code( int code );
protected:
// these are allocated within a test to try to keep them valid in case of stack corruption
RNG rng;
// information about the current test
TestInfo current_test_info;
// the path to data files used by tests
string data_path;
TSParams params;
std::string output_buf[MAX_IDX];
};
/*****************************************************************************************\
* Subclass of BaseTest for testing functions that process dense arrays *
\*****************************************************************************************/
class CV_EXPORTS ArrayTest : public BaseTest
{
public:
// constructor(s) and destructor
ArrayTest();
virtual ~ArrayTest();
virtual void clear();
protected:
virtual int read_params( CvFileStorage* fs );
virtual int prepare_test_case( int test_case_idx );
virtual int validate_test_results( int test_case_idx );
virtual void prepare_to_validation( int test_case_idx );
virtual void get_test_array_types_and_sizes( int test_case_idx, vector<vector<Size> >& sizes, vector<vector<int> >& types );
virtual void fill_array( int test_case_idx, int i, int j, Mat& arr );
virtual void get_minmax_bounds( int i, int j, int type, Scalar& low, Scalar& high );
virtual double get_success_error_level( int test_case_idx, int i, int j );
bool cvmat_allowed;
bool iplimage_allowed;
bool optional_mask;
bool element_wise_relative_error;
int min_log_array_size;
int max_log_array_size;
enum { INPUT, INPUT_OUTPUT, OUTPUT, REF_INPUT_OUTPUT, REF_OUTPUT, TEMP, MASK, MAX_ARR };
vector<vector<void*> > test_array;
vector<vector<Mat> > test_mat;
float buf[4];
};
class CV_EXPORTS BadArgTest : public BaseTest
{
public:
// constructor(s) and destructor
BadArgTest();
virtual ~BadArgTest();
protected:
virtual int run_test_case( int expected_code, const string& descr );
virtual void run_func(void) = 0;
int test_case_idx;
int progress;
double t, freq;
template<class F>
int run_test_case( int expected_code, const string& _descr, F f)
{
double new_t = (double)cv::getTickCount(), dt;
if( test_case_idx < 0 )
{
test_case_idx = 0;
progress = 0;
dt = 0;
}
else
{
dt = (new_t - t)/(freq*1000);
t = new_t;
}
progress = update_progress(progress, test_case_idx, 0, dt);
int errcount = 0;
bool thrown = false;
const char* descr = _descr.c_str() ? _descr.c_str() : "";
try
{
f();
}
catch(const cv::Exception& e)
{
thrown = true;
if( e.code != expected_code )
{
ts->printf(TS::LOG, "%s (test case #%d): the error code %d is different from the expected %d\n",
descr, test_case_idx, e.code, expected_code);
errcount = 1;
}
}
catch(...)
{
thrown = true;
ts->printf(TS::LOG, "%s (test case #%d): unknown exception was thrown (the function has likely crashed)\n",
descr, test_case_idx);
errcount = 1;
}
if(!thrown)
{
ts->printf(TS::LOG, "%s (test case #%d): no expected exception was thrown\n",
descr, test_case_idx);
errcount = 1;
}
test_case_idx++;
return errcount;
}
};
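For the functor-based overload above, a derived test could drive a bad-argument case roughly like this (a hypothetical sketch; run_test_case is protected, so the call happens inside a BadArgTest subclass, and the expected error code is an assumption):

    struct EmptyInputCall
    {
        void operator()() const { cv::Mat a, b, c; cv::add(a, b, c); }
    };
    // inside a BadArgTest subclass:
    //     errcount += run_test_case( CV_StsBadSize, "empty inputs", EmptyInputCall() );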
struct CV_EXPORTS DefaultRngAuto
{
const uint64 old_state;
DefaultRngAuto() : old_state(cv::theRNG().state) { cv::theRNG().state = (uint64)-1; }
~DefaultRngAuto() { cv::theRNG().state = old_state; }
DefaultRngAuto& operator=(const DefaultRngAuto&);
};
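DefaultRngAuto is an RAII guard around cv::theRNG(); presumably it is used to make a scope deterministic regardless of what ran before it:

    {
        cvtest::DefaultRngAuto rngGuard;         // theRNG().state = (uint64)-1
        double v = cv::theRNG().uniform(0., 1.); // same value on every run
        (void)v;
    } // destructor restores the previous RNG state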
}
// fills c with zeros
CV_EXPORTS void cvTsZero( CvMat* c, const CvMat* mask=0 );
// copies a to b (whole matrix or only the selected region)
CV_EXPORTS void cvTsCopy( const CvMat* a, CvMat* b, const CvMat* mask=0 );
// converts one array to another
CV_EXPORTS void cvTsConvert( const CvMat* src, CvMat* dst );
CV_EXPORTS void cvTsGEMM( const CvMat* a, const CvMat* b, double alpha,
const CvMat* c, double beta, CvMat* d, int flags );
#define CV_TEST_MAIN(resourcesubdir) \
int main(int argc, char **argv) \
{ \
cvtest::TS::ptr()->init(resourcesubdir); \
::testing::InitGoogleTest(&argc, argv); \
return RUN_ALL_TESTS(); \
}
#endif
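Putting the pieces together, a module test file built on the new engine would plausibly look like this (a hedged sketch; the test name and tolerance are invented):

    #include "opencv2/ts/ts.hpp"

    TEST(Core_Add, accuracy)
    {
        cv::Mat a = cv::Mat::ones(3, 3, CV_32F), b = a.clone(), c;
        cv::add(a, b, c);
        ASSERT_LT( cvtest::norm(c, a*2, cv::NORM_INF), 1e-6 );
    }

    CV_TEST_MAIN("core")  // "core" selects the data subfolder under OPENCV_TEST_DATA_PATH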

File diff suppressed because it is too large (modules/ts/include/opencv2/ts/ts_gtest.h, +18007 lines)

modules/ts/src/precomp.cpp:
@@ -0,0 +1 @@
#include "precomp.hpp"

modules/ts/src/precomp.hpp:
@@ -0,0 +1,2 @@
#include "opencv2/ts/ts.hpp"
#include "opencv2/core/core_c.h"

modules/ts/src/ts.cpp:
@@ -0,0 +1,582 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// Intel License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000, Intel Corporation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of Intel Corporation may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "precomp.hpp"
#include <ctype.h>
#include <stdarg.h>
#include <stdlib.h>
#include <fcntl.h>
#include <time.h>
#if defined WIN32 || defined _WIN32 || defined WIN64 || defined _WIN64
#include <io.h>
#include <windows.h>
#ifdef _MSC_VER
#include <eh.h>
#endif
#else
#include <unistd.h>
#endif
namespace cvtest
{
/*****************************************************************************************\
* Exception and memory handlers *
\*****************************************************************************************/
// a few platform-dependent declarations
#if defined WIN32 || defined _WIN32 || defined WIN64 || defined _WIN64
#ifdef _MSC_VER
// SEH-to-failure-code translator; installed via _set_se_translator() in TS::init()
static void SEHTranslator( unsigned int /*u*/, EXCEPTION_POINTERS* pExp )
{
int code = TS::FAIL_EXCEPTION;
switch( pExp->ExceptionRecord->ExceptionCode )
{
case EXCEPTION_ACCESS_VIOLATION:
case EXCEPTION_ARRAY_BOUNDS_EXCEEDED:
case EXCEPTION_DATATYPE_MISALIGNMENT:
case EXCEPTION_FLT_STACK_CHECK:
case EXCEPTION_STACK_OVERFLOW:
case EXCEPTION_IN_PAGE_ERROR:
code = TS::FAIL_MEMORY_EXCEPTION;
break;
case EXCEPTION_FLT_DENORMAL_OPERAND:
case EXCEPTION_FLT_DIVIDE_BY_ZERO:
case EXCEPTION_FLT_INEXACT_RESULT:
case EXCEPTION_FLT_INVALID_OPERATION:
case EXCEPTION_FLT_OVERFLOW:
case EXCEPTION_FLT_UNDERFLOW:
case EXCEPTION_INT_DIVIDE_BY_ZERO:
case EXCEPTION_INT_OVERFLOW:
code = TS::FAIL_ARITHM_EXCEPTION;
break;
case EXCEPTION_BREAKPOINT:
case EXCEPTION_ILLEGAL_INSTRUCTION:
case EXCEPTION_INVALID_DISPOSITION:
case EXCEPTION_NONCONTINUABLE_EXCEPTION:
case EXCEPTION_PRIV_INSTRUCTION:
case EXCEPTION_SINGLE_STEP:
code = TS::FAIL_EXCEPTION;
}
throw code;
}
#endif
#else
#include <signal.h>
#include <setjmp.h>
static const int tsSigId[] = { SIGSEGV, SIGBUS, SIGFPE, SIGILL, SIGABRT, -1 };
static jmp_buf tsJmpMark;
void signalHandler( int sig_code )
{
int code = TS::FAIL_EXCEPTION;
switch( sig_code )
{
case SIGFPE:
code = TS::FAIL_ARITHM_EXCEPTION;
break;
case SIGSEGV:
case SIGBUS:
code = TS::FAIL_MEMORY_EXCEPTION;
break;
case SIGILL:
code = TS::FAIL_EXCEPTION;
}
longjmp( tsJmpMark, code );
}
#endif
// reads 16-digit hexadecimal number (i.e. 64-bit integer)
int64 readSeed( const char* str )
{
int64 val = 0;
if( str && strlen(str) == 16 )
{
for( int i = 0; str[i]; i++ )
{
int c = tolower(str[i]);
if( !isxdigit(c) )
return 0;
// use the lowercased character so that uppercase hex digits parse correctly
val = val * 16 +
(c < 'a' ? c - '0' : c - 'a' + 10);
}
}
return val;
}
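For example, the seed string printed in the failure report (see set_gtest_status() below) round-trips through this function; a quick illustration:

    int64 seed = cvtest::readSeed("0123456789abcdef"); // == 0x0123456789abcdefLL
    // anything that is not exactly 16 hex digits yields 0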
/*****************************************************************************************\
* Base Class for Tests *
\*****************************************************************************************/
BaseTest::BaseTest()
{
ts = TS::ptr();
test_case_count = -1;
}
BaseTest::~BaseTest()
{
clear();
}
void BaseTest::clear()
{
}
const CvFileNode* BaseTest::find_param( CvFileStorage* fs, const char* param_name )
{
CvFileNode* node = cvGetFileNodeByName(fs, 0, get_name().c_str());
return node ? cvGetFileNodeByName( fs, node, param_name ) : 0;
}
int BaseTest::read_params( CvFileStorage* )
{
return 0;
}
bool BaseTest::can_do_fast_forward()
{
return true;
}
void BaseTest::safe_run( int start_from )
{
read_params( ts->get_file_storage() );
ts->update_context( 0, -1, true );
ts->update_context( this, -1, true );
if( !::testing::GTEST_FLAG(catch_exceptions) )
run( start_from );
else
{
try
{
#if !defined WIN32 && !defined _WIN32
int _code = setjmp( tsJmpMark );
if( !_code )
run( start_from );
else
throw _code;
#else
run( start_from );
#endif
}
catch (const cv::Exception& exc)
{
const char* errorStr = cvErrorStr(exc.code);
char buf[1 << 16];
sprintf( buf, "OpenCV Error: %s (%s) in %s, file %s, line %d",
errorStr, exc.err.c_str(), exc.func.size() > 0 ?
exc.func.c_str() : "unknown function", exc.file.c_str(), exc.line );
ts->printf(TS::LOG, "%s\n", buf);
ts->set_failed_test_info( TS::FAIL_ERROR_IN_CALLED_FUNC );
}
catch (...)
{
ts->set_failed_test_info( TS::FAIL_EXCEPTION );
}
}
ts->set_gtest_status();
}
void BaseTest::run( int start_from )
{
int test_case_idx, count = get_test_case_count();
int64 t_start = cvGetTickCount();
double freq = cv::getTickFrequency();
bool ff = can_do_fast_forward();
int progress = 0, code;
int64 t1 = t_start;
for( test_case_idx = ff && start_from >= 0 ? start_from : 0;
count < 0 || test_case_idx < count; test_case_idx++ )
{
ts->update_context( this, test_case_idx, ff );
progress = update_progress( progress, test_case_idx, count, (double)(t1 - t_start)/(freq*1000) );
code = prepare_test_case( test_case_idx );
if( code < 0 || ts->get_err_code() < 0 )
return;
if( code == 0 )
continue;
run_func();
if( ts->get_err_code() < 0 )
return;
if( validate_test_results( test_case_idx ) < 0 || ts->get_err_code() < 0 )
return;
}
}
void BaseTest::run_func(void)
{
assert(0);
}
int BaseTest::get_test_case_count(void)
{
return test_case_count;
}
int BaseTest::prepare_test_case( int )
{
return 0;
}
int BaseTest::validate_test_results( int )
{
return 0;
}
int BaseTest::update_progress( int progress, int test_case_idx, int count, double dt )
{
int width = 60 - (int)get_name().size();
if( count > 0 )
{
int t = cvRound( ((double)test_case_idx * width)/count );
if( t > progress )
{
ts->printf( TS::CONSOLE, "." );
progress = t;
}
}
else if( cvRound(dt) > progress )
{
ts->printf( TS::CONSOLE, "." );
progress = cvRound(dt);
}
return progress;
}
BadArgTest::BadArgTest()
{
progress = -1;
test_case_idx = -1;
freq = cv::getTickFrequency();
}
BadArgTest::~BadArgTest(void)
{
}
int BadArgTest::run_test_case( int expected_code, const string& _descr )
{
double new_t = (double)cv::getTickCount(), dt;
if( test_case_idx < 0 )
{
test_case_idx = 0;
progress = 0;
dt = 0;
}
else
{
dt = (new_t - t)/(freq*1000);
t = new_t;
}
progress = update_progress(progress, test_case_idx, 0, dt);
int errcount = 0;
bool thrown = false;
const char* descr = _descr.c_str() ? _descr.c_str() : "";
try
{
run_func();
}
catch(const cv::Exception& e)
{
thrown = true;
if( e.code != expected_code )
{
ts->printf(TS::LOG, "%s (test case #%d): the error code %d is different from the expected %d\n",
descr, test_case_idx, e.code, expected_code);
errcount = 1;
}
}
catch(...)
{
thrown = true;
ts->printf(TS::LOG, "%s (test case #%d): unknown exception was thrown (the function has likely crashed)\n",
descr, test_case_idx);
errcount = 1;
}
if(!thrown)
{
ts->printf(TS::LOG, "%s (test case #%d): no expected exception was thrown\n",
descr, test_case_idx);
errcount = 1;
}
test_case_idx++;
return errcount;
}
/*****************************************************************************************\
* Base Class for Test System *
\*****************************************************************************************/
/******************************** Constructors/Destructors ******************************/
TSParams::TSParams()
{
rng_seed = (uint64)-1;
use_optimized = true;
test_case_count_scale = 1;
}
TestInfo::TestInfo()
{
test = 0;
code = 0;
rng_seed = rng_seed0 = 0;
test_case_idx = -1;
}
TS::TS()
{
} // ctor
TS::~TS()
{
} // dtor
string TS::str_from_code( int code )
{
switch( code )
{
case OK: return "Ok";
case FAIL_GENERIC: return "Generic/Unknown";
case FAIL_MISSING_TEST_DATA: return "No test data";
case FAIL_INVALID_TEST_DATA: return "Invalid test data";
case FAIL_ERROR_IN_CALLED_FUNC: return "cvError invoked";
case FAIL_EXCEPTION: return "Hardware/OS exception";
case FAIL_MEMORY_EXCEPTION: return "Invalid memory access";
case FAIL_ARITHM_EXCEPTION: return "Arithmetic exception";
case FAIL_MEMORY_CORRUPTION_BEGIN: return "Corrupted memblock (beginning)";
case FAIL_MEMORY_CORRUPTION_END: return "Corrupted memblock (end)";
case FAIL_MEMORY_LEAK: return "Memory leak";
case FAIL_INVALID_OUTPUT: return "Invalid function output";
case FAIL_MISMATCH: return "Unexpected output";
case FAIL_BAD_ACCURACY: return "Bad accuracy";
case FAIL_HANG: return "Infinite loop(?)";
case FAIL_BAD_ARG_CHECK: return "Incorrect handling of bad arguments";
default:
;
}
return "Generic/Unknown";
}
/************************************** Running tests **********************************/
void TS::init( const string& modulename )
{
char* datapath_dir = getenv("OPENCV_TEST_DATA_PATH");
if( datapath_dir )
{
char buf[1024];
size_t l = strlen(datapath_dir);
bool haveSlash = l > 0 && (datapath_dir[l-1] == '/' || datapath_dir[l-1] == '\\');
sprintf( buf, "%s%s%s/", datapath_dir, haveSlash ? "" : "/", modulename.c_str() );
data_path = string(buf);
}
if( ::testing::GTEST_FLAG(catch_exceptions) )
{
cvSetErrMode( CV_ErrModeParent );
cvRedirectError( cvStdErrReport );
#if defined WIN32 || defined _WIN32
#ifdef _MSC_VER
_set_se_translator( SEHTranslator );
#endif
#else
for( int i = 0; tsSigId[i] >= 0; i++ )
signal( tsSigId[i], signalHandler );
#endif
}
else
{
cvSetErrMode( CV_ErrModeLeaf );
cvRedirectError( cvGuiBoxReport );
#if defined WIN32 || defined _WIN32
#ifdef _MSC_VER
_set_se_translator( 0 );
#endif
#else
for( int i = 0; tsSigId[i] >= 0; i++ )
signal( tsSigId[i], SIG_DFL );
#endif
}
if( params.use_optimized == 0 )
cv::setUseOptimized(false);
rng = RNG(params.rng_seed);
}
void TS::set_gtest_status()
{
int code = get_err_code();
if( code >= 0 )
return SUCCEED();
char seedstr[32];
sprintf(seedstr, "%08x%08x", (unsigned)(current_test_info.rng_seed>>32),
(unsigned)(current_test_info.rng_seed));
string logs = "";
if( !output_buf[SUMMARY_IDX].empty() )
logs += "\n-----------------------------------\n\tSUM: " + output_buf[SUMMARY_IDX];
if( !output_buf[LOG_IDX].empty() )
logs += "\n-----------------------------------\n\tLOG: " + output_buf[LOG_IDX];
if( !output_buf[CONSOLE_IDX].empty() )
logs += "\n-----------------------------------\n\tCONSOLE: " + output_buf[CONSOLE_IDX];
logs += "\n-----------------------------------\n";
FAIL() << "\n\tfailure reason: " << str_from_code(code) <<
"\n\ttest case #" << current_test_info.test_case_idx <<
"\n\tseed: " << seedstr << logs;
}
CvFileStorage* TS::get_file_storage() { return 0; }
void TS::update_context( BaseTest* test, int test_case_idx, bool update_ts_context )
{
if( current_test_info.test != test )
{
for( int i = 0; i <= CONSOLE_IDX; i++ )
output_buf[i] = string();
rng = RNG(params.rng_seed);
current_test_info.rng_seed0 = current_test_info.rng_seed = rng.state;
}
current_test_info.test = test;
current_test_info.test_case_idx = test_case_idx;
current_test_info.code = 0;
cvSetErrStatus( CV_StsOk );
if( update_ts_context )
current_test_info.rng_seed = rng.state;
}
void TS::set_failed_test_info( int fail_code )
{
if( current_test_info.code >= 0 )
current_test_info.code = fail_code;
}
#if defined _MSC_VER && _MSC_VER < 1400
#undef vsnprintf
#define vsnprintf _vsnprintf
#endif
void TS::vprintf( int streams, const char* fmt, va_list l )
{
char str[1 << 14];
vsnprintf( str, sizeof(str)-1, fmt, l );
for( int i = 0; i < MAX_IDX; i++ )
if( (streams & (1 << i)) )
{
output_buf[i] += std::string(str);
// In the new GTest-based framework we do not use any output files
// (except for the automatically generated XML report).
// If a test fails, all the buffers are printed, so to avoid duplicating
// the information we add it to the first matching buffer only and stop.
break;
}
}
void TS::printf( int streams, const char* fmt, ... )
{
if( streams )
{
va_list l;
va_start( l, fmt );
vprintf( streams, fmt, l );
va_end( l );
}
}
TS ts;
TS* TS::ptr() { return &ts; }
}
/* End of file. */

modules/ts/src/ts_arrtest.cpp:
@@ -0,0 +1,358 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// Intel License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000, Intel Corporation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of Intel Corporation may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "precomp.hpp"
namespace cvtest
{
static const int default_test_case_count = 500;
static const int default_max_log_array_size = 9;
ArrayTest::ArrayTest()
{
test_case_count = default_test_case_count;
iplimage_allowed = true;
cvmat_allowed = true;
optional_mask = false;
min_log_array_size = 0;
max_log_array_size = default_max_log_array_size;
element_wise_relative_error = true;
test_array.resize(MAX_ARR);
}
ArrayTest::~ArrayTest()
{
clear();
}
void ArrayTest::clear()
{
for( size_t i = 0; i < test_array.size(); i++ )
{
for( size_t j = 0; j < test_array[i].size(); j++ )
cvRelease( &test_array[i][j] );
}
BaseTest::clear();
}
int ArrayTest::read_params( CvFileStorage* fs )
{
int code = BaseTest::read_params( fs );
if( code < 0 )
return code;
min_log_array_size = cvReadInt( find_param( fs, "min_log_array_size" ), min_log_array_size );
max_log_array_size = cvReadInt( find_param( fs, "max_log_array_size" ), max_log_array_size );
test_case_count = cvReadInt( find_param( fs, "test_case_count" ), test_case_count );
test_case_count = cvRound( test_case_count*ts->get_test_case_count_scale() );
min_log_array_size = clipInt( min_log_array_size, 0, 20 );
max_log_array_size = clipInt( max_log_array_size, min_log_array_size, 20 );
test_case_count = clipInt( test_case_count, 0, 100000 );
return code;
}
void ArrayTest::get_test_array_types_and_sizes( int /*test_case_idx*/, vector<vector<Size> >& sizes, vector<vector<int> >& types )
{
RNG& rng = ts->get_rng();
Size size;
double val;
size_t i, j;
val = randReal(rng) * (max_log_array_size - min_log_array_size) + min_log_array_size;
size.width = cvRound( exp(val*CV_LOG2) );
val = randReal(rng) * (max_log_array_size - min_log_array_size) + min_log_array_size;
size.height = cvRound( exp(val*CV_LOG2) );
for( i = 0; i < test_array.size(); i++ )
{
size_t sizei = test_array[i].size();
for( j = 0; j < sizei; j++ )
{
sizes[i][j] = size;
types[i][j] = CV_8UC1;
}
}
}
static const int icvTsTypeToDepth[] =
{
IPL_DEPTH_8U, IPL_DEPTH_8S, IPL_DEPTH_16U, IPL_DEPTH_16S,
IPL_DEPTH_32S, IPL_DEPTH_32F, IPL_DEPTH_64F
};
int ArrayTest::prepare_test_case( int test_case_idx )
{
int code = 1;
size_t max_arr = test_array.size();
vector<vector<Size> > sizes(max_arr);
vector<vector<Size> > whole_sizes(max_arr);
vector<vector<int> > types(max_arr);
size_t i, j;
RNG& rng = ts->get_rng();
bool is_image = false;
for( i = 0; i < max_arr; i++ )
{
size_t sizei = std::max(test_array[i].size(), (size_t)1);
sizes[i].resize(sizei);
types[i].resize(sizei);
whole_sizes[i].resize(sizei);
}
get_test_array_types_and_sizes( test_case_idx, sizes, types );
for( i = 0; i < max_arr; i++ )
{
size_t sizei = test_array[i].size();
for( j = 0; j < sizei; j++ )
{
unsigned t = randInt(rng);
bool create_mask = true, use_roi = false;
CvSize size = sizes[i][j], whole_size = size;
CvRect roi = {0,0,0,0};
is_image = !cvmat_allowed ? true : iplimage_allowed ? (t & 1) != 0 : false;
create_mask = (t & 6) == 0; // ~ every fourth test case uses a mask
use_roi = (t & 8) != 0;
if( use_roi )
{
whole_size.width += randInt(rng) % 10;
whole_size.height += randInt(rng) % 10;
}
cvRelease( &test_array[i][j] );
if( size.width > 0 && size.height > 0 &&
types[i][j] >= 0 && (i != MASK || create_mask) )
{
if( use_roi )
{
roi.width = size.width;
roi.height = size.height;
if( whole_size.width > size.width )
roi.x = randInt(rng) % (whole_size.width - size.width);
if( whole_size.height > size.height )
roi.y = randInt(rng) % (whole_size.height - size.height);
}
if( is_image )
{
test_array[i][j] = cvCreateImage( whole_size,
icvTsTypeToDepth[CV_MAT_DEPTH(types[i][j])], CV_MAT_CN(types[i][j]) );
if( use_roi )
cvSetImageROI( (IplImage*)test_array[i][j], roi );
}
else
{
test_array[i][j] = cvCreateMat( whole_size.height, whole_size.width, types[i][j] );
if( use_roi )
{
CvMat submat, *mat = (CvMat*)test_array[i][j];
cvGetSubRect( test_array[i][j], &submat, roi );
submat.refcount = mat->refcount;
*mat = submat;
}
}
}
}
}
test_mat.resize(test_array.size());
for( i = 0; i < max_arr; i++ )
{
size_t sizei = test_array[i].size();
test_mat[i].resize(sizei);
for( j = 0; j < sizei; j++ )
{
CvArr* arr = test_array[i][j];
test_mat[i][j] = cv::cvarrToMat(arr);
if( !test_mat[i][j].empty() )
fill_array( test_case_idx, i, j, test_mat[i][j] );
}
}
return code;
}
void ArrayTest::get_minmax_bounds( int i, int /*j*/, int type, Scalar& low, Scalar& high )
{
double l, u;
int depth = CV_MAT_DEPTH(type);
if( i == MASK )
{
l = -2;
u = 2;
}
else if( depth < CV_32S )
{
l = getMinVal(type);
u = getMaxVal(type);
}
else
{
u = depth == CV_32S ? 1000000 : 1000.;
l = -u;
}
low = Scalar::all(l);
high = Scalar::all(u);
}
void ArrayTest::fill_array( int /*test_case_idx*/, int i, int j, Mat& arr )
{
if( i == REF_INPUT_OUTPUT )
cvtest::copy( test_mat[INPUT_OUTPUT][j], arr, Mat() );
else if( i == INPUT || i == INPUT_OUTPUT || i == MASK )
{
Scalar low, high;
get_minmax_bounds( i, j, arr.type(), low, high );
randUni( ts->get_rng(), arr, low, high );
}
}
double ArrayTest::get_success_error_level( int /*test_case_idx*/, int i, int j )
{
int elem_depth = CV_MAT_DEPTH(cvGetElemType(test_array[i][j]));
assert( i == OUTPUT || i == INPUT_OUTPUT );
return elem_depth < CV_32F ? 0 : elem_depth == CV_32F ? FLT_EPSILON*100: DBL_EPSILON*5000;
}
void ArrayTest::prepare_to_validation( int /*test_case_idx*/ )
{
assert(0);
}
int ArrayTest::validate_test_results( int test_case_idx )
{
static const char* arr_names[] = { "input", "input/output", "output",
"ref input/output", "ref output",
"temporary", "mask" };
size_t i, j;
prepare_to_validation( test_case_idx );
for( i = 0; i < 2; i++ )
{
int i0 = i == 0 ? OUTPUT : INPUT_OUTPUT;
int i1 = i == 0 ? REF_OUTPUT : REF_INPUT_OUTPUT;
size_t sizei = test_array[i0].size();
assert( sizei == test_array[i1].size() );
for( j = 0; j < sizei; j++ )
{
double err_level;
vector<int> idx;
double max_diff = 0;
int code;
char msg[100];
if( !test_array[i1][j] )
continue;
err_level = get_success_error_level( test_case_idx, i0, j );
code = cmpEps( test_mat[i0][j], test_mat[i1][j], &max_diff, err_level, &idx, element_wise_relative_error );
switch( code )
{
case -1:
sprintf( msg, "Too big difference (=%g)", max_diff );
code = TS::FAIL_BAD_ACCURACY;
break;
case -2:
strcpy( msg, "Invalid output" );
code = TS::FAIL_INVALID_OUTPUT;
break;
case -3:
strcpy( msg, "Invalid output in the reference array" );
code = TS::FAIL_INVALID_OUTPUT;
break;
default:
continue;
}
string idxstr = vec2str(", ", &idx[0], idx.size());
ts->printf( TS::LOG, "%s in %s array %d at (%s)\n", msg, arr_names[i0], (int)j, idxstr.c_str() );
for( i0 = 0; i0 < (int)test_array.size(); i0++ )
{
size_t sizei0 = test_array[i0].size();
if( i0 == REF_INPUT_OUTPUT || i0 == OUTPUT || i0 == TEMP )
continue;
for( i1 = 0; i1 < (int)sizei0; i1++ )
{
const Mat& arr = test_mat[i0][i1];
if( !arr.empty() )
{
string sizestr = vec2str(", ", &arr.size[0], arr.dims);
ts->printf( TS::LOG, "%s array %d type=%sC%d, size=(%s)\n",
arr_names[i0], i1, getTypeName(arr.depth()),
arr.channels(), sizestr.c_str() );
}
}
}
ts->set_failed_test_info( code );
return code;
}
}
return 0;
}
}
/* End of file. */

File diff suppressed because it is too large (modules/ts/src/ts_func.cpp, +2899 lines)

File diff suppressed because it is too large (modules/ts/src/ts_gtest.cpp, +8510 lines)