Conflicts:
	modules/features2d/include/opencv2/features2d.hpp
	modules/features2d/src/freak.cpp
	modules/features2d/src/stardetector.cpp
pull/1932/head
sprice 11 years ago
commit 75ed2f52f1
  1. .gitignore (1)
  2. .tgitconfig (2)
  3. 3rdparty/include/MultiMon.h (502)
  4. 3rdparty/include/opencl/1.2/CL/cl.hpp (2)
  5. 3rdparty/include/opencl/1.2/CL/cl_platform.h (14)
  6. 3rdparty/libjasper/CMakeLists.txt (2)
  7. 3rdparty/libjpeg/CMakeLists.txt (2)
  8. 3rdparty/libpng/CMakeLists.txt (2)
  9. 3rdparty/libtiff/CMakeLists.txt (2)
  10. 3rdparty/libtiff/tif_config.h.cmakein (12)
  11. 3rdparty/openexr/CMakeLists.txt (2)
  12. 3rdparty/tbb/CMakeLists.txt (6)
  13. 3rdparty/zlib/CMakeLists.txt (2)
  14. CMakeLists.txt (119)
  15. LICENSE (20)
  16. README.md (5)
  17. apps/haartraining/CMakeLists.txt (12)
  18. apps/haartraining/cvclassifier.h (2)
  19. apps/traincascade/CMakeLists.txt (4)
  20. cmake/OpenCVCRTLinkage.cmake (12)
  21. cmake/OpenCVCompilerOptions.cmake (11)
  22. cmake/OpenCVDetectAndroidSDK.cmake (10)
  23. cmake/OpenCVDetectCUDA.cmake (5)
  24. cmake/OpenCVDetectDirectX.cmake (14)
  25. cmake/OpenCVDetectPython.cmake (28)
  26. cmake/OpenCVDetectVTK.cmake (7)
  27. cmake/OpenCVExtraTargets.cmake (2)
  28. cmake/OpenCVFindIPP.cmake (37)
  29. cmake/OpenCVFindIntelPerCSDK.cmake (20)
  30. cmake/OpenCVFindLibsVideo.cmake (5)
  31. cmake/OpenCVGenAndroidMK.cmake (6)
  32. cmake/OpenCVGenConfig.cmake (38)
  33. cmake/OpenCVGenHeaders.cmake (2)
  34. cmake/OpenCVGenPkgconfig.cmake (4)
  35. cmake/OpenCVModule.cmake (25)
  36. cmake/OpenCVPackaging.cmake (110)
  37. cmake/OpenCVUtils.cmake (14)
  38. cmake/checks/directx.cpp (70)
  39. cmake/cl2cpp.cmake (5)
  40. cmake/templates/OpenCV.mk.in (16)
  41. cmake/templates/cvconfig.h.in (11)
  42. cmake/templates/opencv_run_all_tests_android.sh.in (51)
  43. cmake/templates/opencv_run_all_tests_unix.sh.in (25)
  44. cmake/templates/opencv_testing.sh.in (2)
  45. data/CMakeLists.txt (20)
  46. data/haarcascades/haarcascade_eye.xml (16241)
  47. data/haarcascades/haarcascade_eye_tree_eyeglasses.xml (27817)
  48. data/haarcascades/haarcascade_frontalface_alt.xml (42235)
  49. data/haarcascades/haarcascade_frontalface_alt2.xml (44173)
  50. data/haarcascades/haarcascade_frontalface_alt_tree.xml (167335)
  51. data/haarcascades/haarcascade_frontalface_default.xml (58574)
  52. data/haarcascades/haarcascade_fullbody.xml (29860)
  53. data/haarcascades/haarcascade_lefteye_2splits.xml (9173)
  54. data/haarcascades/haarcascade_lowerbody.xml (24017)
  55. data/haarcascades/haarcascade_mcs_eyepair_big.xml (11286)
  56. data/haarcascades/haarcascade_mcs_eyepair_small.xml (13006)
  57. data/haarcascades/haarcascade_mcs_leftear.xml (9593)
  58. data/haarcascades/haarcascade_mcs_lefteye.xml (24525)
  59. data/haarcascades/haarcascade_mcs_mouth.xml (22683)
  60. data/haarcascades/haarcascade_mcs_nose.xml (49700)
  61. data/haarcascades/haarcascade_mcs_rightear.xml (9969)
  62. data/haarcascades/haarcascade_mcs_righteye.xml (43650)
  63. data/haarcascades/haarcascade_mcs_upperbody.xml (47904)
  64. data/haarcascades/haarcascade_profileface.xml (50820)
  65. data/haarcascades/haarcascade_righteye_2splits.xml (9222)
  66. data/haarcascades/haarcascade_smile.xml (8630)
  67. data/haarcascades/haarcascade_upperbody.xml (48205)
  68. doc/CMakeLists.txt (6)
  69. doc/conf.py (2)
  70. doc/haartraining.htm (2)
  71. doc/py_tutorials/py_calib3d/py_calibration/py_calibration.rst (6)
  72. doc/py_tutorials/py_gui/py_drawing_functions/py_drawing_functions.rst (12)
  73. doc/py_tutorials/py_gui/py_image_display/py_image_display.rst (2)
  74. doc/py_tutorials/py_imgproc/py_contours/py_contour_features/py_contour_features.rst (10)
  75. doc/py_tutorials/py_imgproc/py_contours/py_contours_begin/py_contours_begin.rst (10)
  76. doc/py_tutorials/py_objdetect/py_face_detection/py_face_detection.rst (2)
  77. doc/tutorials/bioinspired/retina_model/images/retina_TreeHdr_retina.jpg (BIN)
  78. doc/tutorials/bioinspired/retina_model/images/retina_TreeHdr_small.jpg (BIN)
  79. doc/tutorials/bioinspired/retina_model/images/studentsSample_input.jpg (BIN)
  80. doc/tutorials/bioinspired/retina_model/images/studentsSample_magno.jpg (BIN)
  81. doc/tutorials/bioinspired/retina_model/images/studentsSample_parvo.jpg (BIN)
  82. doc/tutorials/bioinspired/retina_model/retina_model.rst (418)
  83. doc/tutorials/bioinspired/table_of_content_bioinspired/images/retina_TreeHdr_small.jpg (BIN)
  84. doc/tutorials/bioinspired/table_of_content_bioinspired/table_of_content_bioinspired.rst (36)
  85. doc/tutorials/bioinspired/table_of_content_bioinspired/table_of_content_bioinspired.rst~ (36)
  86. doc/tutorials/definitions/tocDefinitions.rst (1)
  87. doc/tutorials/imgproc/histograms/template_matching/template_matching.rst (2)
  88. doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.rst (18)
  89. doc/tutorials/introduction/android_binary_package/O4A_SDK.rst (16)
  90. doc/tutorials/introduction/android_binary_package/dev_with_OCV_on_Android.rst (18)
  91. doc/tutorials/introduction/clojure_dev_intro/clojure_dev_intro.rst (728)
  92. doc/tutorials/introduction/clojure_dev_intro/images/blurred.png (BIN)
  93. doc/tutorials/introduction/clojure_dev_intro/images/lena.png (BIN)
  94. doc/tutorials/introduction/crosscompilation/arm_crosscompile_with_cmake.rst (4)
  95. doc/tutorials/introduction/linux_gcc_cmake/linux_gcc_cmake.rst (17)
  96. doc/tutorials/introduction/linux_install/linux_install.rst (7)
  97. doc/tutorials/introduction/load_save_image/load_save_image.rst (2)
  98. doc/tutorials/introduction/table_of_content_introduction/images/clojure-logo.png (BIN)
  99. doc/tutorials/introduction/table_of_content_introduction/table_of_content_introduction.rst (16)
  100. doc/tutorials/introduction/windows_install/windows_install.rst (4)
Some files were not shown because too many files have changed in this diff.

.gitignore (1)

@@ -1,6 +1,7 @@
*.autosave
*.pyc
*.user
*~
.*.swp
.DS_Store
.sw[a-z]

.tgitconfig
@@ -0,0 +1,2 @@
[tgit]
icon = doc/opencv.ico

3rdparty/include/MultiMon.h
@@ -1,502 +0,0 @@
//=============================================================================
//
// multimon.h -- Stub module that fakes multiple monitor apis on Win32 OSes
// without them.
//
// By using this header your code will get back default values from
// GetSystemMetrics() for new metrics, and the new multimonitor APIs
// will act like only one display is present on a Win32 OS without
// multimonitor APIs.
//
// Exactly one source must include this with COMPILE_MULTIMON_STUBS defined.
//
// Copyright (c) Microsoft Corporation. All rights reserved.
//
//=============================================================================
#ifdef __cplusplus
extern "C" { // Assume C declarations for C++
#endif // __cplusplus
//
// If we are building with Win95/NT4 headers, we need to declare
// the multimonitor-related metrics and APIs ourselves.
//
#ifndef SM_CMONITORS
#define SM_XVIRTUALSCREEN 76
#define SM_YVIRTUALSCREEN 77
#define SM_CXVIRTUALSCREEN 78
#define SM_CYVIRTUALSCREEN 79
#define SM_CMONITORS 80
#define SM_SAMEDISPLAYFORMAT 81
// HMONITOR is already declared if WINVER >= 0x0500 in windef.h
// This is for components built with an older version number.
//
#if !defined(HMONITOR_DECLARED) && (WINVER < 0x0500)
DECLARE_HANDLE(HMONITOR);
#define HMONITOR_DECLARED
#endif
#define MONITOR_DEFAULTTONULL 0x00000000
#define MONITOR_DEFAULTTOPRIMARY 0x00000001
#define MONITOR_DEFAULTTONEAREST 0x00000002
#define MONITORINFOF_PRIMARY 0x00000001
typedef struct tagMONITORINFO
{
DWORD cbSize;
RECT rcMonitor;
RECT rcWork;
DWORD dwFlags;
} MONITORINFO, *LPMONITORINFO;
#ifndef CCHDEVICENAME
#define CCHDEVICENAME 32
#endif
#ifdef __cplusplus
typedef struct tagMONITORINFOEXA : public tagMONITORINFO
{
CHAR szDevice[CCHDEVICENAME];
} MONITORINFOEXA, *LPMONITORINFOEXA;
typedef struct tagMONITORINFOEXW : public tagMONITORINFO
{
WCHAR szDevice[CCHDEVICENAME];
} MONITORINFOEXW, *LPMONITORINFOEXW;
#ifdef UNICODE
typedef MONITORINFOEXW MONITORINFOEX;
typedef LPMONITORINFOEXW LPMONITORINFOEX;
#else
typedef MONITORINFOEXA MONITORINFOEX;
typedef LPMONITORINFOEXA LPMONITORINFOEX;
#endif // UNICODE
#else // ndef __cplusplus
typedef struct tagMONITORINFOEXA
{
MONITORINFO;
CHAR szDevice[CCHDEVICENAME];
} MONITORINFOEXA, *LPMONITORINFOEXA;
typedef struct tagMONITORINFOEXW
{
MONITORINFO;
WCHAR szDevice[CCHDEVICENAME];
} MONITORINFOEXW, *LPMONITORINFOEXW;
#ifdef UNICODE
typedef MONITORINFOEXW MONITORINFOEX;
typedef LPMONITORINFOEXW LPMONITORINFOEX;
#else
typedef MONITORINFOEXA MONITORINFOEX;
typedef LPMONITORINFOEXA LPMONITORINFOEX;
#endif // UNICODE
#endif
typedef BOOL (CALLBACK* MONITORENUMPROC)(HMONITOR, HDC, LPRECT, LPARAM);
#ifndef DISPLAY_DEVICE_ATTACHED_TO_DESKTOP
typedef struct _DISPLAY_DEVICEA {
DWORD cb;
CHAR DeviceName[32];
CHAR DeviceString[128];
DWORD StateFlags;
CHAR DeviceID[128];
CHAR DeviceKey[128];
} DISPLAY_DEVICEA, *PDISPLAY_DEVICEA, *LPDISPLAY_DEVICEA;
typedef struct _DISPLAY_DEVICEW {
DWORD cb;
WCHAR DeviceName[32];
WCHAR DeviceString[128];
DWORD StateFlags;
WCHAR DeviceID[128];
WCHAR DeviceKey[128];
} DISPLAY_DEVICEW, *PDISPLAY_DEVICEW, *LPDISPLAY_DEVICEW;
#ifdef UNICODE
typedef DISPLAY_DEVICEW DISPLAY_DEVICE;
typedef PDISPLAY_DEVICEW PDISPLAY_DEVICE;
typedef LPDISPLAY_DEVICEW LPDISPLAY_DEVICE;
#else
typedef DISPLAY_DEVICEA DISPLAY_DEVICE;
typedef PDISPLAY_DEVICEA PDISPLAY_DEVICE;
typedef LPDISPLAY_DEVICEA LPDISPLAY_DEVICE;
#endif // UNICODE
#define DISPLAY_DEVICE_ATTACHED_TO_DESKTOP 0x00000001
#define DISPLAY_DEVICE_MULTI_DRIVER 0x00000002
#define DISPLAY_DEVICE_PRIMARY_DEVICE 0x00000004
#define DISPLAY_DEVICE_MIRRORING_DRIVER 0x00000008
#define DISPLAY_DEVICE_VGA_COMPATIBLE 0x00000010
#endif
#endif // SM_CMONITORS
#undef GetMonitorInfo
#undef GetSystemMetrics
#undef MonitorFromWindow
#undef MonitorFromRect
#undef MonitorFromPoint
#undef EnumDisplayMonitors
#undef EnumDisplayDevices
//
// Define COMPILE_MULTIMON_STUBS to compile the stubs;
// otherwise, you get the declarations.
//
#ifdef COMPILE_MULTIMON_STUBS
//-----------------------------------------------------------------------------
//
// Implement the API stubs.
//
//-----------------------------------------------------------------------------
#ifndef _MULTIMON_USE_SECURE_CRT
#if defined(__GOT_SECURE_LIB__) && __GOT_SECURE_LIB__ >= 200402L
#define _MULTIMON_USE_SECURE_CRT 1
#else
#define _MULTIMON_USE_SECURE_CRT 0
#endif
#endif
#ifndef MULTIMON_FNS_DEFINED
int (WINAPI* g_pfnGetSystemMetrics)(int) = NULL;
HMONITOR (WINAPI* g_pfnMonitorFromWindow)(HWND, DWORD) = NULL;
HMONITOR (WINAPI* g_pfnMonitorFromRect)(LPCRECT, DWORD) = NULL;
HMONITOR (WINAPI* g_pfnMonitorFromPoint)(POINT, DWORD) = NULL;
BOOL (WINAPI* g_pfnGetMonitorInfo)(HMONITOR, LPMONITORINFO) = NULL;
BOOL (WINAPI* g_pfnEnumDisplayMonitors)(HDC, LPCRECT, MONITORENUMPROC, LPARAM) = NULL;
BOOL (WINAPI* g_pfnEnumDisplayDevices)(PVOID, DWORD, PDISPLAY_DEVICE,DWORD) = NULL;
BOOL g_fMultiMonInitDone = FALSE;
BOOL g_fMultimonPlatformNT = FALSE;
#endif
BOOL IsPlatformNT()
{
OSVERSIONINFOA osvi = {0};
osvi.dwOSVersionInfoSize = sizeof(osvi);
GetVersionExA((OSVERSIONINFOA*)&osvi);
return (VER_PLATFORM_WIN32_NT == osvi.dwPlatformId);
}
BOOL InitMultipleMonitorStubs(void)
{
HMODULE hUser32;
if (g_fMultiMonInitDone)
{
return g_pfnGetMonitorInfo != NULL;
}
g_fMultimonPlatformNT = IsPlatformNT();
hUser32 = GetModuleHandle(TEXT("USER32"));
if (hUser32 &&
(*(FARPROC*)&g_pfnGetSystemMetrics = GetProcAddress(hUser32,"GetSystemMetrics")) != NULL &&
(*(FARPROC*)&g_pfnMonitorFromWindow = GetProcAddress(hUser32,"MonitorFromWindow")) != NULL &&
(*(FARPROC*)&g_pfnMonitorFromRect = GetProcAddress(hUser32,"MonitorFromRect")) != NULL &&
(*(FARPROC*)&g_pfnMonitorFromPoint = GetProcAddress(hUser32,"MonitorFromPoint")) != NULL &&
(*(FARPROC*)&g_pfnEnumDisplayMonitors = GetProcAddress(hUser32,"EnumDisplayMonitors")) != NULL &&
#ifdef UNICODE
(*(FARPROC*)&g_pfnEnumDisplayDevices = GetProcAddress(hUser32,"EnumDisplayDevicesW")) != NULL &&
(*(FARPROC*)&g_pfnGetMonitorInfo = g_fMultimonPlatformNT ? GetProcAddress(hUser32,"GetMonitorInfoW") :
GetProcAddress(hUser32,"GetMonitorInfoA")) != NULL
#else
(*(FARPROC*)&g_pfnGetMonitorInfo = GetProcAddress(hUser32,"GetMonitorInfoA")) != NULL &&
(*(FARPROC*)&g_pfnEnumDisplayDevices = GetProcAddress(hUser32,"EnumDisplayDevicesA")) != NULL
#endif
) {
g_fMultiMonInitDone = TRUE;
return TRUE;
}
else
{
g_pfnGetSystemMetrics = NULL;
g_pfnMonitorFromWindow = NULL;
g_pfnMonitorFromRect = NULL;
g_pfnMonitorFromPoint = NULL;
g_pfnGetMonitorInfo = NULL;
g_pfnEnumDisplayMonitors = NULL;
g_pfnEnumDisplayDevices = NULL;
g_fMultiMonInitDone = TRUE;
return FALSE;
}
}
//-----------------------------------------------------------------------------
//
// fake implementations of Monitor APIs that work with the primary display
// no special parameter validation is made since these run in client code
//
//-----------------------------------------------------------------------------
int WINAPI
xGetSystemMetrics(int nIndex)
{
if (InitMultipleMonitorStubs())
return g_pfnGetSystemMetrics(nIndex);
switch (nIndex)
{
case SM_CMONITORS:
case SM_SAMEDISPLAYFORMAT:
return 1;
case SM_XVIRTUALSCREEN:
case SM_YVIRTUALSCREEN:
return 0;
case SM_CXVIRTUALSCREEN:
nIndex = SM_CXSCREEN;
break;
case SM_CYVIRTUALSCREEN:
nIndex = SM_CYSCREEN;
break;
}
return GetSystemMetrics(nIndex);
}
#define xPRIMARY_MONITOR ((HMONITOR)0x12340042)
HMONITOR WINAPI
xMonitorFromPoint(POINT ptScreenCoords, DWORD dwFlags)
{
if (InitMultipleMonitorStubs())
return g_pfnMonitorFromPoint(ptScreenCoords, dwFlags);
if ((dwFlags & (MONITOR_DEFAULTTOPRIMARY | MONITOR_DEFAULTTONEAREST)) ||
((ptScreenCoords.x >= 0) &&
(ptScreenCoords.x < GetSystemMetrics(SM_CXSCREEN)) &&
(ptScreenCoords.y >= 0) &&
(ptScreenCoords.y < GetSystemMetrics(SM_CYSCREEN))))
{
return xPRIMARY_MONITOR;
}
return NULL;
}
HMONITOR WINAPI
xMonitorFromRect(LPCRECT lprcScreenCoords, DWORD dwFlags)
{
if (InitMultipleMonitorStubs())
return g_pfnMonitorFromRect(lprcScreenCoords, dwFlags);
if ((dwFlags & (MONITOR_DEFAULTTOPRIMARY | MONITOR_DEFAULTTONEAREST)) ||
((lprcScreenCoords->right > 0) &&
(lprcScreenCoords->bottom > 0) &&
(lprcScreenCoords->left < GetSystemMetrics(SM_CXSCREEN)) &&
(lprcScreenCoords->top < GetSystemMetrics(SM_CYSCREEN))))
{
return xPRIMARY_MONITOR;
}
return NULL;
}
HMONITOR WINAPI
xMonitorFromWindow(HWND hWnd, DWORD dwFlags)
{
WINDOWPLACEMENT wp;
if (InitMultipleMonitorStubs())
return g_pfnMonitorFromWindow(hWnd, dwFlags);
if (dwFlags & (MONITOR_DEFAULTTOPRIMARY | MONITOR_DEFAULTTONEAREST))
return xPRIMARY_MONITOR;
if (IsIconic(hWnd) ?
GetWindowPlacement(hWnd, &wp) :
GetWindowRect(hWnd, &wp.rcNormalPosition)) {
return xMonitorFromRect(&wp.rcNormalPosition, dwFlags);
}
return NULL;
}
BOOL WINAPI
xGetMonitorInfo(HMONITOR hMonitor, __inout LPMONITORINFO lpMonitorInfo)
{
RECT rcWork;
if (InitMultipleMonitorStubs())
{
BOOL f = g_pfnGetMonitorInfo(hMonitor, lpMonitorInfo);
#ifdef UNICODE
if (f && !g_fMultimonPlatformNT && (lpMonitorInfo->cbSize >= sizeof(MONITORINFOEX)))
{
MultiByteToWideChar(CP_ACP, 0,
(LPSTR)((MONITORINFOEX*)lpMonitorInfo)->szDevice, -1,
((MONITORINFOEX*)lpMonitorInfo)->szDevice, (sizeof(((MONITORINFOEX*)lpMonitorInfo)->szDevice)/sizeof(TCHAR)));
}
#endif
return f;
}
if ((hMonitor == xPRIMARY_MONITOR) &&
lpMonitorInfo &&
(lpMonitorInfo->cbSize >= sizeof(MONITORINFO)) &&
SystemParametersInfoA(SPI_GETWORKAREA, 0, &rcWork, 0))
{
lpMonitorInfo->rcMonitor.left = 0;
lpMonitorInfo->rcMonitor.top = 0;
lpMonitorInfo->rcMonitor.right = GetSystemMetrics(SM_CXSCREEN);
lpMonitorInfo->rcMonitor.bottom = GetSystemMetrics(SM_CYSCREEN);
lpMonitorInfo->rcWork = rcWork;
lpMonitorInfo->dwFlags = MONITORINFOF_PRIMARY;
if (lpMonitorInfo->cbSize >= sizeof(MONITORINFOEX))
{
#ifdef UNICODE
MultiByteToWideChar(CP_ACP, 0, "DISPLAY", -1, ((MONITORINFOEX*)lpMonitorInfo)->szDevice, (sizeof(((MONITORINFOEX*)lpMonitorInfo)->szDevice)/sizeof(TCHAR)));
#else // UNICODE
#if _MULTIMON_USE_SECURE_CRT
strncpy_s(((MONITORINFOEX*)lpMonitorInfo)->szDevice, (sizeof(((MONITORINFOEX*)lpMonitorInfo)->szDevice)/sizeof(TCHAR)), TEXT("DISPLAY"), (sizeof(((MONITORINFOEX*)lpMonitorInfo)->szDevice)/sizeof(TCHAR)) - 1);
#else
lstrcpyn(((MONITORINFOEX*)lpMonitorInfo)->szDevice, TEXT("DISPLAY"), (sizeof(((MONITORINFOEX*)lpMonitorInfo)->szDevice)/sizeof(TCHAR)));
#endif // _MULTIMON_USE_SECURE_CRT
#endif // UNICODE
}
return TRUE;
}
return FALSE;
}
BOOL WINAPI
xEnumDisplayMonitors(
HDC hdcOptionalForPainting,
LPCRECT lprcEnumMonitorsThatIntersect,
MONITORENUMPROC lpfnEnumProc,
LPARAM dwData)
{
RECT rcLimit;
if (InitMultipleMonitorStubs()) {
return g_pfnEnumDisplayMonitors(
hdcOptionalForPainting,
lprcEnumMonitorsThatIntersect,
lpfnEnumProc,
dwData);
}
if (!lpfnEnumProc)
return FALSE;
rcLimit.left = 0;
rcLimit.top = 0;
rcLimit.right = GetSystemMetrics(SM_CXSCREEN);
rcLimit.bottom = GetSystemMetrics(SM_CYSCREEN);
if (hdcOptionalForPainting)
{
RECT rcClip;
POINT ptOrg;
switch (GetClipBox(hdcOptionalForPainting, &rcClip))
{
default:
if (!GetDCOrgEx(hdcOptionalForPainting, &ptOrg))
return FALSE;
OffsetRect(&rcLimit, -ptOrg.x, -ptOrg.y);
if (IntersectRect(&rcLimit, &rcLimit, &rcClip) &&
(!lprcEnumMonitorsThatIntersect ||
IntersectRect(&rcLimit, &rcLimit, lprcEnumMonitorsThatIntersect))) {
break;
}
//fall thru
case NULLREGION:
return TRUE;
case ERROR:
return FALSE;
}
} else {
if ( lprcEnumMonitorsThatIntersect &&
!IntersectRect(&rcLimit, &rcLimit, lprcEnumMonitorsThatIntersect)) {
return TRUE;
}
}
return lpfnEnumProc(
xPRIMARY_MONITOR,
hdcOptionalForPainting,
&rcLimit,
dwData);
}
BOOL WINAPI
xEnumDisplayDevices(
PVOID Unused,
DWORD iDevNum,
__inout PDISPLAY_DEVICE lpDisplayDevice,
DWORD dwFlags)
{
if (InitMultipleMonitorStubs())
return g_pfnEnumDisplayDevices(Unused, iDevNum, lpDisplayDevice, dwFlags);
if (Unused != NULL)
return FALSE;
if (iDevNum != 0)
return FALSE;
if (lpDisplayDevice == NULL || lpDisplayDevice->cb < sizeof(DISPLAY_DEVICE))
return FALSE;
#ifdef UNICODE
MultiByteToWideChar(CP_ACP, 0, "DISPLAY", -1, lpDisplayDevice->DeviceName, (sizeof(lpDisplayDevice->DeviceName)/sizeof(TCHAR)));
MultiByteToWideChar(CP_ACP, 0, "DISPLAY", -1, lpDisplayDevice->DeviceString, (sizeof(lpDisplayDevice->DeviceString)/sizeof(TCHAR)));
#else // UNICODE
#if _MULTIMON_USE_SECURE_CRT
strncpy_s((LPTSTR)lpDisplayDevice->DeviceName, (sizeof(lpDisplayDevice->DeviceName)/sizeof(TCHAR)), TEXT("DISPLAY"), (sizeof(lpDisplayDevice->DeviceName)/sizeof(TCHAR)) - 1);
strncpy_s((LPTSTR)lpDisplayDevice->DeviceString, (sizeof(lpDisplayDevice->DeviceString)/sizeof(TCHAR)), TEXT("DISPLAY"), (sizeof(lpDisplayDevice->DeviceName)/sizeof(TCHAR)) - 1);
#else
lstrcpyn((LPTSTR)lpDisplayDevice->DeviceName, TEXT("DISPLAY"), (sizeof(lpDisplayDevice->DeviceName)/sizeof(TCHAR)));
lstrcpyn((LPTSTR)lpDisplayDevice->DeviceString, TEXT("DISPLAY"), (sizeof(lpDisplayDevice->DeviceString)/sizeof(TCHAR)));
#endif // _MULTIMON_USE_SECURE_CRT
#endif // UNICODE
lpDisplayDevice->StateFlags = DISPLAY_DEVICE_ATTACHED_TO_DESKTOP | DISPLAY_DEVICE_PRIMARY_DEVICE;
return TRUE;
}
#undef xPRIMARY_MONITOR
#undef COMPILE_MULTIMON_STUBS
#else // COMPILE_MULTIMON_STUBS
extern int WINAPI xGetSystemMetrics(int);
extern HMONITOR WINAPI xMonitorFromWindow(HWND, DWORD);
extern HMONITOR WINAPI xMonitorFromRect(LPCRECT, DWORD);
extern HMONITOR WINAPI xMonitorFromPoint(POINT, DWORD);
extern BOOL WINAPI xGetMonitorInfo(HMONITOR, LPMONITORINFO);
extern BOOL WINAPI xEnumDisplayMonitors(HDC, LPCRECT, MONITORENUMPROC, LPARAM);
extern BOOL WINAPI xEnumDisplayDevices(PVOID, DWORD, PDISPLAY_DEVICE, DWORD);
#endif // COMPILE_MULTIMON_STUBS
//
// build defines that replace the regular APIs with our versions
//
#define GetSystemMetrics xGetSystemMetrics
#define MonitorFromWindow xMonitorFromWindow
#define MonitorFromRect xMonitorFromRect
#define MonitorFromPoint xMonitorFromPoint
#define GetMonitorInfo xGetMonitorInfo
#define EnumDisplayMonitors xEnumDisplayMonitors
#define EnumDisplayDevices xEnumDisplayDevices
#ifdef __cplusplus
}
#endif // __cplusplus
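For context on how the deleted stub header above was meant to be consumed: as its banner comment says, exactly one translation unit defines COMPILE_MULTIMON_STUBS before including it, and every other file includes it plainly; the #define block at the end then routes the ordinary Win32 names (GetMonitorInfo, MonitorFromWindow, and so on) to the x* stubs. A minimal sketch of such a consumer, with hwnd and work_area_of being illustrative names rather than anything from the patch:

    // stubs.cpp -- the one file in the project that provides the stub bodies
    #define COMPILE_MULTIMON_STUBS
    #include <windows.h>
    #include "MultiMon.h"

    // Any other source file just includes the header and uses the usual names;
    // the #defines at the end of the header redirect them to xGetMonitorInfo & co.
    RECT work_area_of(HWND hwnd)
    {
        MONITORINFO mi = { sizeof(MONITORINFO) };   // cbSize must be set before the call
        HMONITOR mon = MonitorFromWindow(hwnd, MONITOR_DEFAULTTOPRIMARY);
        if (GetMonitorInfo(mon, &mi))
            return mi.rcWork;                       // usable area of the (possibly only) display
        RECT empty = { 0, 0, 0, 0 };
        return empty;
    }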

3rdparty/include/opencl/1.2/CL/cl.hpp
@@ -210,7 +210,7 @@
#include <string>
#endif
#if defined(linux) || defined(__APPLE__) || defined(__MACOSX)
#if defined(__linux__) || defined(__APPLE__) || defined(__MACOSX)
#include <alloca.h>
#include <emmintrin.h>

3rdparty/include/opencl/1.2/CL/cl_platform.h
@@ -332,13 +332,13 @@ typedef unsigned int cl_GLenum;
/* Define basic vector types */
#if defined( __VEC__ )
#include <altivec.h> /* may be omitted depending on compiler. AltiVec spec provides no way to detect whether the header is required. */
typedef vector unsigned char __cl_uchar16;
typedef vector signed char __cl_char16;
typedef vector unsigned short __cl_ushort8;
typedef vector signed short __cl_short8;
typedef vector unsigned int __cl_uint4;
typedef vector signed int __cl_int4;
typedef vector float __cl_float4;
typedef __vector unsigned char __cl_uchar16;
typedef __vector signed char __cl_char16;
typedef __vector unsigned short __cl_ushort8;
typedef __vector signed short __cl_short8;
typedef __vector unsigned int __cl_uint4;
typedef __vector signed int __cl_int4;
typedef __vector float __cl_float4;
#define __CL_UCHAR16__ 1
#define __CL_CHAR16__ 1
#define __CL_USHORT8__ 1

3rdparty/libjasper/CMakeLists.txt
@@ -47,5 +47,5 @@ if(ENABLE_SOLUTION_FOLDERS)
endif()
if(NOT BUILD_SHARED_LIBS)
ocv_install_target(${JASPER_LIBRARY} EXPORT OpenCVModules ARCHIVE DESTINATION ${OPENCV_3P_LIB_INSTALL_PATH} COMPONENT main)
ocv_install_target(${JASPER_LIBRARY} EXPORT OpenCVModules ARCHIVE DESTINATION ${OPENCV_3P_LIB_INSTALL_PATH} COMPONENT dev)
endif()

3rdparty/libjpeg/CMakeLists.txt
@@ -46,5 +46,5 @@ if(ENABLE_SOLUTION_FOLDERS)
endif()
if(NOT BUILD_SHARED_LIBS)
ocv_install_target(${JPEG_LIBRARY} EXPORT OpenCVModules ARCHIVE DESTINATION ${OPENCV_3P_LIB_INSTALL_PATH} COMPONENT main)
ocv_install_target(${JPEG_LIBRARY} EXPORT OpenCVModules ARCHIVE DESTINATION ${OPENCV_3P_LIB_INSTALL_PATH} COMPONENT dev)
endif()

3rdparty/libpng/CMakeLists.txt
@@ -55,5 +55,5 @@ if(ENABLE_SOLUTION_FOLDERS)
endif()
if(NOT BUILD_SHARED_LIBS)
ocv_install_target(${PNG_LIBRARY} EXPORT OpenCVModules ARCHIVE DESTINATION ${OPENCV_3P_LIB_INSTALL_PATH} COMPONENT main)
ocv_install_target(${PNG_LIBRARY} EXPORT OpenCVModules ARCHIVE DESTINATION ${OPENCV_3P_LIB_INSTALL_PATH} COMPONENT dev)
endif()

3rdparty/libtiff/CMakeLists.txt
@@ -115,5 +115,5 @@ if(ENABLE_SOLUTION_FOLDERS)
endif()
if(NOT BUILD_SHARED_LIBS)
ocv_install_target(${TIFF_LIBRARY} EXPORT OpenCVModules ARCHIVE DESTINATION ${OPENCV_3P_LIB_INSTALL_PATH} COMPONENT main)
ocv_install_target(${TIFF_LIBRARY} EXPORT OpenCVModules ARCHIVE DESTINATION ${OPENCV_3P_LIB_INSTALL_PATH} COMPONENT dev)
endif()

3rdparty/libtiff/tif_config.h.cmakein
@@ -54,7 +54,7 @@
/* Native cpu byte order: 1 if big-endian (Motorola) or 0 if little-endian
(Intel) */
#define HOST_BIGENDIAN 0
#define HOST_BIGENDIAN @WORDS_BIGENDIAN@
/* Set the native cpu bit order (FILLORDER_LSB2MSB or FILLORDER_MSB2LSB) */
#define HOST_FILLORDER FILLORDER_LSB2MSB
@@ -156,15 +156,7 @@
/* Define WORDS_BIGENDIAN to 1 if your processor stores words with the most
significant byte first (like Motorola and SPARC, unlike Intel). */
#if defined AC_APPLE_UNIVERSAL_BUILD
# if defined __BIG_ENDIAN__
# define WORDS_BIGENDIAN 1
# endif
#else
# ifndef WORDS_BIGENDIAN
/* # undef WORDS_BIGENDIAN */
# endif
#endif
#cmakedefine WORDS_BIGENDIAN 1
/* Support Deflate compression */
#define ZIP_SUPPORT 1

3rdparty/openexr/CMakeLists.txt
@@ -64,7 +64,7 @@ if(ENABLE_SOLUTION_FOLDERS)
endif()
if(NOT BUILD_SHARED_LIBS)
ocv_install_target(IlmImf EXPORT OpenCVModules ARCHIVE DESTINATION ${OPENCV_3P_LIB_INSTALL_PATH} COMPONENT main)
ocv_install_target(IlmImf EXPORT OpenCVModules ARCHIVE DESTINATION ${OPENCV_3P_LIB_INSTALL_PATH} COMPONENT dev)
endif()
set(OPENEXR_INCLUDE_PATHS ${OPENEXR_INCLUDE_PATHS} PARENT_SCOPE)

3rdparty/tbb/CMakeLists.txt
@@ -232,9 +232,9 @@ if(ENABLE_SOLUTION_FOLDERS)
endif()
ocv_install_target(tbb EXPORT OpenCVModules
RUNTIME DESTINATION ${OPENCV_BIN_INSTALL_PATH} COMPONENT main
LIBRARY DESTINATION ${OPENCV_LIB_INSTALL_PATH} COMPONENT main
ARCHIVE DESTINATION ${OPENCV_3P_LIB_INSTALL_PATH} COMPONENT main
RUNTIME DESTINATION ${OPENCV_BIN_INSTALL_PATH} COMPONENT libs
LIBRARY DESTINATION ${OPENCV_LIB_INSTALL_PATH} COMPONENT libs
ARCHIVE DESTINATION ${OPENCV_3P_LIB_INSTALL_PATH} COMPONENT dev
)
# get TBB version

3rdparty/zlib/CMakeLists.txt
@@ -95,5 +95,5 @@ if(ENABLE_SOLUTION_FOLDERS)
endif()
if(NOT BUILD_SHARED_LIBS)
ocv_install_target(${ZLIB_LIBRARY} EXPORT OpenCVModules ARCHIVE DESTINATION ${OPENCV_3P_LIB_INSTALL_PATH} COMPONENT main)
ocv_install_target(${ZLIB_LIBRARY} EXPORT OpenCVModules ARCHIVE DESTINATION ${OPENCV_3P_LIB_INSTALL_PATH} COMPONENT dev)
endif()

CMakeLists.txt
@@ -116,7 +116,7 @@ endif()
OCV_OPTION(WITH_1394 "Include IEEE1394 support" ON IF (NOT ANDROID AND NOT IOS) )
OCV_OPTION(WITH_AVFOUNDATION "Use AVFoundation for Video I/O" ON IF IOS)
OCV_OPTION(WITH_CARBON "Use Carbon for UI instead of Cocoa" OFF IF APPLE )
OCV_OPTION(WITH_VTK "Include VTK library support (and build opencv_viz module either)" OFF IF (NOT ANDROID AND NOT IOS) )
OCV_OPTION(WITH_VTK "Include VTK library support (and build opencv_viz module either)" ON IF (NOT ANDROID AND NOT IOS) )
OCV_OPTION(WITH_CUDA "Include NVidia Cuda Runtime support" ON IF (NOT IOS) )
OCV_OPTION(WITH_CUFFT "Include NVidia Cuda Fast Fourier Transform (FFT) library support" ON IF (NOT IOS) )
OCV_OPTION(WITH_CUBLAS "Include NVidia Cuda Basic Linear Algebra Subprograms (BLAS) library support" OFF IF (NOT IOS) )
@@ -132,7 +132,7 @@ OCV_OPTION(WITH_JASPER "Include JPEG2K support" ON
OCV_OPTION(WITH_JPEG "Include JPEG support" ON)
OCV_OPTION(WITH_WEBP "Include WebP support" ON IF (NOT IOS) )
OCV_OPTION(WITH_OPENEXR "Include ILM support via OpenEXR" ON IF (NOT IOS) )
OCV_OPTION(WITH_OPENGL "Include OpenGL support" OFF IF (NOT ANDROID AND NOT APPLE) )
OCV_OPTION(WITH_OPENGL "Include OpenGL support" OFF IF (NOT ANDROID) )
OCV_OPTION(WITH_OPENNI "Include OpenNI support" OFF IF (NOT ANDROID AND NOT IOS) )
OCV_OPTION(WITH_PNG "Include PNG support" ON)
OCV_OPTION(WITH_PVAPI "Include Prosilica GigE support" ON IF (NOT ANDROID AND NOT IOS) )
@@ -155,6 +155,8 @@ OCV_OPTION(WITH_CLP "Include Clp support (EPL)" OFF
OCV_OPTION(WITH_OPENCL "Include OpenCL Runtime support" ON IF (NOT IOS) )
OCV_OPTION(WITH_OPENCLAMDFFT "Include AMD OpenCL FFT library support" ON IF (NOT ANDROID AND NOT IOS) )
OCV_OPTION(WITH_OPENCLAMDBLAS "Include AMD OpenCL BLAS library support" ON IF (NOT ANDROID AND NOT IOS) )
OCV_OPTION(WITH_DIRECTX "Include DirectX support" ON IF WIN32 )
OCV_OPTION(WITH_INTELPERC "Include Intel Perceptual Computing support" OFF IF WIN32 )
# OpenCV build components
@@ -189,13 +191,14 @@ OCV_OPTION(INSTALL_C_EXAMPLES "Install C examples" OFF )
OCV_OPTION(INSTALL_PYTHON_EXAMPLES "Install Python examples" OFF )
OCV_OPTION(INSTALL_ANDROID_EXAMPLES "Install Android examples" OFF IF ANDROID )
OCV_OPTION(INSTALL_TO_MANGLED_PATHS "Enables mangled install paths, that help with side by side installs." OFF IF (UNIX AND NOT ANDROID AND NOT IOS AND BUILD_SHARED_LIBS) )
OCV_OPTION(INSTALL_TESTS "Install accuracy and performance test binaries and test data" OFF)
# OpenCV build options
# ===================================================
OCV_OPTION(ENABLE_PRECOMPILED_HEADERS "Use precompiled headers" ON IF (NOT IOS) )
OCV_OPTION(ENABLE_SOLUTION_FOLDERS "Solution folder in Visual Studio or in other IDEs" (MSVC_IDE OR CMAKE_GENERATOR MATCHES Xcode) )
OCV_OPTION(ENABLE_PROFILING "Enable profiling in the GCC compiler (Add flags: -g -pg)" OFF IF CMAKE_COMPILER_IS_GNUCXX )
OCV_OPTION(ENABLE_COVERAGE "Enable coverage collection with GCov" OFF IF CMAKE_COMPILER_IS_GNUCXX )
OCV_OPTION(ENABLE_OMIT_FRAME_POINTER "Enable -fomit-frame-pointer for GCC" ON IF CMAKE_COMPILER_IS_GNUCXX AND NOT (APPLE AND CMAKE_COMPILER_IS_CLANGCXX) )
OCV_OPTION(ENABLE_POWERPC "Enable PowerPC for GCC" ON IF (CMAKE_COMPILER_IS_GNUCXX AND CMAKE_SYSTEM_PROCESSOR MATCHES powerpc.*) )
OCV_OPTION(ENABLE_FAST_MATH "Enable -ffast-math (not recommended for GCC 4.6.x)" OFF IF (CMAKE_COMPILER_IS_GNUCXX AND (X86 OR X86_64)) )
@@ -206,10 +209,12 @@ OCV_OPTION(ENABLE_SSSE3 "Enable SSSE3 instructions"
OCV_OPTION(ENABLE_SSE41 "Enable SSE4.1 instructions" OFF IF ((CV_ICC OR CMAKE_COMPILER_IS_GNUCXX) AND (X86 OR X86_64)) )
OCV_OPTION(ENABLE_SSE42 "Enable SSE4.2 instructions" OFF IF (CMAKE_COMPILER_IS_GNUCXX AND (X86 OR X86_64)) )
OCV_OPTION(ENABLE_AVX "Enable AVX instructions" OFF IF ((MSVC OR CMAKE_COMPILER_IS_GNUCXX) AND (X86 OR X86_64)) )
OCV_OPTION(ENABLE_NEON "Enable NEON instructions" OFF IF (CMAKE_COMPILER_IS_GNUCXX AND ARM) )
OCV_OPTION(ENABLE_NEON "Enable NEON instructions" OFF IF CMAKE_COMPILER_IS_GNUCXX AND ARM )
OCV_OPTION(ENABLE_VFPV3 "Enable VFPv3-D32 instructions" OFF IF CMAKE_COMPILER_IS_GNUCXX AND ARM )
OCV_OPTION(ENABLE_NOISY_WARNINGS "Show all warnings even if they are too noisy" OFF )
OCV_OPTION(OPENCV_WARNINGS_ARE_ERRORS "Treat warnings as errors" OFF )
OCV_OPTION(ENABLE_WINRT_MODE "Build with Windows Runtime support" OFF IF WIN32 )
OCV_OPTION(ENABLE_WINRT_MODE_NATIVE "Build with Windows Runtime native C++ support" OFF IF WIN32 )
# ----------------------------------------------------------------------------
@@ -225,6 +230,15 @@ include(cmake/OpenCVVersion.cmake)
# Save libs and executables in the same place
set(EXECUTABLE_OUTPUT_PATH "${CMAKE_BINARY_DIR}/bin" CACHE PATH "Output directory for applications" )
if (ANDROID)
if (ANDROID_ABI MATCHES "NEON")
set(ENABLE_NEON ON)
endif()
if (ANDROID_ABI MATCHES "VFPV3")
set(ENABLE_VFPV3 ON)
endif()
endif()
if(ANDROID OR WIN32)
set(OPENCV_DOC_INSTALL_PATH doc)
elseif(INSTALL_TO_MANGLED_PATHS)
@@ -240,13 +254,27 @@ if(WIN32)
message(STATUS "Can't detect runtime and/or arch")
set(OpenCV_INSTALL_BINARIES_PREFIX "")
endif()
elseif(ANDROID)
set(OpenCV_INSTALL_BINARIES_PREFIX "sdk/native/")
else()
set(OpenCV_INSTALL_BINARIES_PREFIX "")
endif()
set(OPENCV_SAMPLES_BIN_INSTALL_PATH "${OpenCV_INSTALL_BINARIES_PREFIX}samples")
if(ANDROID)
set(OPENCV_SAMPLES_BIN_INSTALL_PATH "${OpenCV_INSTALL_BINARIES_PREFIX}samples/${ANDROID_NDK_ABI_NAME}")
else()
set(OPENCV_SAMPLES_BIN_INSTALL_PATH "${OpenCV_INSTALL_BINARIES_PREFIX}samples")
endif()
if(ANDROID)
set(OPENCV_BIN_INSTALL_PATH "${OpenCV_INSTALL_BINARIES_PREFIX}bin/${ANDROID_NDK_ABI_NAME}")
else()
set(OPENCV_BIN_INSTALL_PATH "${OpenCV_INSTALL_BINARIES_PREFIX}bin")
endif()
set(OPENCV_BIN_INSTALL_PATH "${OpenCV_INSTALL_BINARIES_PREFIX}bin")
if(NOT OPENCV_TEST_INSTALL_PATH)
set(OPENCV_TEST_INSTALL_PATH "${OPENCV_BIN_INSTALL_PATH}")
endif()
if(ANDROID)
set(LIBRARY_OUTPUT_PATH "${OpenCV_BINARY_DIR}/lib/${ANDROID_NDK_ABI_NAME}")
@@ -255,6 +283,7 @@ if(ANDROID)
set(OPENCV_3P_LIB_INSTALL_PATH sdk/native/3rdparty/libs/${ANDROID_NDK_ABI_NAME})
set(OPENCV_CONFIG_INSTALL_PATH sdk/native/jni)
set(OPENCV_INCLUDE_INSTALL_PATH sdk/native/jni/include)
set(OPENCV_SAMPLES_SRC_INSTALL_PATH samples/native)
else()
set(LIBRARY_OUTPUT_PATH "${OpenCV_BINARY_DIR}/lib")
set(3P_LIBRARY_OUTPUT_PATH "${OpenCV_BINARY_DIR}/3rdparty/lib${LIB_SUFFIX}")
@@ -265,9 +294,11 @@ else()
set(OPENCV_LIB_INSTALL_PATH "${OpenCV_INSTALL_BINARIES_PREFIX}lib${LIB_SUFFIX}")
endif()
set(OPENCV_3P_LIB_INSTALL_PATH "${OpenCV_INSTALL_BINARIES_PREFIX}staticlib${LIB_SUFFIX}")
set(OPENCV_SAMPLES_SRC_INSTALL_PATH samples/native)
else()
set(OPENCV_LIB_INSTALL_PATH lib${LIB_SUFFIX})
set(OPENCV_3P_LIB_INSTALL_PATH share/OpenCV/3rdparty/${OPENCV_LIB_INSTALL_PATH})
set(OPENCV_SAMPLES_SRC_INSTALL_PATH share/OpenCV/samples)
endif()
set(OPENCV_INCLUDE_INSTALL_PATH "include")
@@ -372,6 +403,8 @@ if(UNIX)
set(OPENCV_LINKER_LIBS ${OPENCV_LINKER_LIBS} dl m log)
elseif(${CMAKE_SYSTEM_NAME} MATCHES "FreeBSD|NetBSD|DragonFly")
set(OPENCV_LINKER_LIBS ${OPENCV_LINKER_LIBS} m pthread)
elseif(EMSCRIPTEN)
# no need to link to system libs with emscripten
else()
set(OPENCV_LINKER_LIBS ${OPENCV_LINKER_LIBS} dl m pthread rt)
endif()
@@ -383,6 +416,19 @@ endif()
include(cmake/OpenCVPCHSupport.cmake)
include(cmake/OpenCVModule.cmake)
# ----------------------------------------------------------------------------
# Detect endianness of build platform
# ----------------------------------------------------------------------------
if(CMAKE_SYSTEM_NAME STREQUAL iOS)
# test_big_endian needs try_compile, which doesn't work for iOS
# http://public.kitware.com/Bug/view.php?id=12288
set(WORDS_BIGENDIAN 0)
else()
include(TestBigEndian)
test_big_endian(WORDS_BIGENDIAN)
endif()
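The net effect of this detection is that WORDS_BIGENDIAN is substituted into the configured headers shown earlier in the diff (the @WORDS_BIGENDIAN@ and #cmakedefine lines in 3rdparty/libtiff/tif_config.h.cmakein; cmake/templates/cvconfig.h.in, also touched by this PR, presumably picks it up the same way), so C/C++ code can branch on the host byte order at compile time. A small illustrative sketch of such a consumer; the helper below is hypothetical and not taken from the patch:

    #include <stdint.h>
    #include "cvconfig.h"   // assumed to carry a WORDS_BIGENDIAN #cmakedefine, as tif_config.h does

    // Return v with its bytes in little-endian order, whatever the host order is.
    static uint32_t to_little_endian(uint32_t v)
    {
    #ifdef WORDS_BIGENDIAN  // defined only when test_big_endian() reported a big-endian host
        return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
               ((v << 8) & 0x00FF0000u) | (v << 24);
    #else
        return v;           // host is already little-endian; nothing to do
    #endif
    }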
# ----------------------------------------------------------------------------
# Detect 3rd-party libraries
# ----------------------------------------------------------------------------
@@ -428,6 +474,11 @@ if(WITH_OPENCL)
include(cmake/OpenCVDetectOpenCL.cmake)
endif()
# --- DirectX ---
if(WITH_DIRECTX)
include(cmake/OpenCVDetectDirectX.cmake)
endif()
# --- Matlab/Octave ---
include(cmake/OpenCVFindMatlab.cmake)
@@ -515,6 +566,49 @@ include(cmake/OpenCVGenConfig.cmake)
# Generate Info.plist for the IOS framework
include(cmake/OpenCVGenInfoPlist.cmake)
# Generate environment setup file
if(INSTALL_TESTS AND OPENCV_TEST_DATA_PATH AND UNIX)
if(ANDROID)
get_filename_component(TEST_PATH ${OPENCV_TEST_INSTALL_PATH} DIRECTORY)
configure_file("${CMAKE_CURRENT_SOURCE_DIR}/cmake/templates/opencv_run_all_tests_android.sh.in"
"${CMAKE_BINARY_DIR}/unix-install/opencv_run_all_tests.sh" @ONLY)
install(PROGRAMS "${CMAKE_BINARY_DIR}/unix-install/opencv_run_all_tests.sh"
DESTINATION ${CMAKE_INSTALL_PREFIX} COMPONENT tests)
else()
configure_file("${CMAKE_CURRENT_SOURCE_DIR}/cmake/templates/opencv_testing.sh.in"
"${CMAKE_BINARY_DIR}/unix-install/opencv_testing.sh" @ONLY)
install(FILES "${CMAKE_BINARY_DIR}/unix-install/opencv_testing.sh"
DESTINATION /etc/profile.d/ COMPONENT tests)
configure_file("${CMAKE_CURRENT_SOURCE_DIR}/cmake/templates/opencv_run_all_tests_unix.sh.in"
"${CMAKE_BINARY_DIR}/unix-install/opencv_run_all_tests.sh" @ONLY)
install(PROGRAMS "${CMAKE_BINARY_DIR}/unix-install/opencv_run_all_tests.sh"
DESTINATION ${OPENCV_TEST_INSTALL_PATH} COMPONENT tests)
endif()
endif()
if(NOT OPENCV_README_FILE)
if(ANDROID)
set(OPENCV_README_FILE ${CMAKE_CURRENT_SOURCE_DIR}/platforms/android/README.android)
endif()
endif()
if(NOT OPENCV_LICENSE_FILE)
set(OPENCV_LICENSE_FILE ${CMAKE_CURRENT_SOURCE_DIR}/LICENSE)
endif()
# for UNIX it does not make sense as LICENSE and readme will be part of the package automatically
if(ANDROID OR NOT UNIX)
install(FILES ${OPENCV_LICENSE_FILE}
PERMISSIONS OWNER_READ GROUP_READ WORLD_READ
DESTINATION ${CMAKE_INSTALL_PREFIX} COMPONENT libs)
if(OPENCV_README_FILE)
install(FILES ${OPENCV_README_FILE}
PERMISSIONS OWNER_READ GROUP_READ WORLD_READ
DESTINATION ${CMAKE_INSTALL_PREFIX} COMPONENT libs)
endif()
endif()
# ----------------------------------------------------------------------------
# Summary:
# ----------------------------------------------------------------------------
@@ -624,7 +718,7 @@ endif()
if(WIN32)
status("")
status(" Windows RT support:" HAVE_WINRT THEN YES ELSE NO)
if (ENABLE_WINRT_MODE)
if (ENABLE_WINRT_MODE OR ENABLE_WINRT_MODE_NATIVE)
status(" Windows SDK v8.0:" ${WINDOWS_SDK_PATH})
status(" Visual Studio 2012:" ${VISUAL_STUDIO_PATH})
endif()
@@ -814,6 +908,11 @@ if(DEFINED WITH_XINE)
status(" Xine:" HAVE_XINE THEN "YES (ver ${ALIASOF_libxine_VERSION})" ELSE NO)
endif(DEFINED WITH_XINE)
if(DEFINED WITH_INTELPERC)
status(" Intel PerC:" HAVE_INTELPERC THEN "YES" ELSE NO)
endif(DEFINED WITH_INTELPERC)
# ========================== Other third-party libraries ==========================
status("")
status(" Other third-party libraries:")
@@ -946,3 +1045,9 @@ ocv_finalize_status()
if("${CMAKE_CURRENT_SOURCE_DIR}" STREQUAL "${CMAKE_CURRENT_BINARY_DIR}")
message(WARNING "The source directory is the same as binary directory. \"make clean\" may damage the source tree")
endif()
# ----------------------------------------------------------------------------
# CPack stuff
# ----------------------------------------------------------------------------
include(cmake/OpenCVPackaging.cmake)

LICENSE
@@ -1,16 +1,11 @@
IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
By downloading, copying, installing or using the software you agree to this license.
If you do not agree to this license, do not download, install,
copy or use the software.
By downloading, copying, installing or using the software you agree to this license.
If you do not agree to this license, do not download, install,
copy or use the software.
License Agreement
For Open Source Computer Vision Library
Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
Third party copyrights are property of their respective owners.
(3-clause BSD License)
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
@@ -22,13 +17,14 @@ are permitted provided that the following conditions are met:
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* The name of the copyright holders may not be used to endorse or promote products
derived from this software without specific prior written permission.
* Neither the names of the copyright holders nor the names of the contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and
any express or implied warranties, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose are disclaimed.
In no event shall the Intel Corporation or contributors be liable for any direct,
In no event shall copyright holders or contributors be liable for any direct,
indirect, incidental, special, exemplary, or consequential damages
(including, but not limited to, procurement of substitute goods or services;
loss of use, data, or profits; or business interruption) however caused

README.md
@@ -1,5 +1,7 @@
### OpenCV: Open Source Computer Vision Library
[![Gittip](http://img.shields.io/gittip/OpenCV.png)](https://www.gittip.com/OpenCV/)
#### Resources
* Homepage: <http://opencv.org>
@@ -18,6 +20,3 @@ Summary of guidelines:
* Include tests and documentation;
* Clean up "oops" commits before submitting;
* Follow the coding style guide.
[![Donate OpenCV project](http://opencv.org/wp-content/uploads/2013/07/gittip1.png)](https://www.gittip.com/OpenCV/)
[![Donate OpenCV project](http://opencv.org/wp-content/uploads/2013/07/paypal-donate-button.png)](https://www.paypal.com/cgi-bin/webscr?item_name=Donation+to+OpenCV&cmd=_donations&business=accountant%40opencv.org)

apps/haartraining/CMakeLists.txt
@@ -71,14 +71,14 @@ set_target_properties(opencv_performance PROPERTIES
if(INSTALL_CREATE_DISTRIB)
if(BUILD_SHARED_LIBS)
install(TARGETS opencv_haartraining RUNTIME DESTINATION ${OPENCV_BIN_INSTALL_PATH} CONFIGURATIONS Release COMPONENT main)
install(TARGETS opencv_createsamples RUNTIME DESTINATION ${OPENCV_BIN_INSTALL_PATH} CONFIGURATIONS Release COMPONENT main)
install(TARGETS opencv_performance RUNTIME DESTINATION ${OPENCV_BIN_INSTALL_PATH} CONFIGURATIONS Release COMPONENT main)
install(TARGETS opencv_haartraining RUNTIME DESTINATION ${OPENCV_BIN_INSTALL_PATH} CONFIGURATIONS Release COMPONENT dev)
install(TARGETS opencv_createsamples RUNTIME DESTINATION ${OPENCV_BIN_INSTALL_PATH} CONFIGURATIONS Release COMPONENT dev)
install(TARGETS opencv_performance RUNTIME DESTINATION ${OPENCV_BIN_INSTALL_PATH} CONFIGURATIONS Release COMPONENT dev)
endif()
else()
install(TARGETS opencv_haartraining RUNTIME DESTINATION ${OPENCV_BIN_INSTALL_PATH} COMPONENT main)
install(TARGETS opencv_createsamples RUNTIME DESTINATION ${OPENCV_BIN_INSTALL_PATH} COMPONENT main)
install(TARGETS opencv_performance RUNTIME DESTINATION ${OPENCV_BIN_INSTALL_PATH} COMPONENT main)
install(TARGETS opencv_haartraining RUNTIME DESTINATION ${OPENCV_BIN_INSTALL_PATH} COMPONENT dev)
install(TARGETS opencv_createsamples RUNTIME DESTINATION ${OPENCV_BIN_INSTALL_PATH} COMPONENT dev)
install(TARGETS opencv_performance RUNTIME DESTINATION ${OPENCV_BIN_INSTALL_PATH} COMPONENT dev)
endif()
if(ENABLE_SOLUTION_FOLDERS)

apps/haartraining/cvclassifier.h
@@ -340,7 +340,7 @@ typedef enum CvBoostType
CV_LKCLASS = 5, /* classification (K class problem) */
CV_LSREG = 6, /* least squares regression */
CV_LADREG = 7, /* least absolute deviation regression */
CV_MREG = 8, /* M-regression (Huber loss) */
CV_MREG = 8 /* M-regression (Huber loss) */
} CvBoostType;
/****************************************************************************************\

apps/traincascade/CMakeLists.txt
@@ -35,8 +35,8 @@ endif()
if(INSTALL_CREATE_DISTRIB)
if(BUILD_SHARED_LIBS)
install(TARGETS ${the_target} RUNTIME DESTINATION ${OPENCV_BIN_INSTALL_PATH} CONFIGURATIONS Release COMPONENT main)
install(TARGETS ${the_target} RUNTIME DESTINATION ${OPENCV_BIN_INSTALL_PATH} CONFIGURATIONS Release COMPONENT dev)
endif()
else()
install(TARGETS ${the_target} RUNTIME DESTINATION ${OPENCV_BIN_INSTALL_PATH} COMPONENT main)
install(TARGETS ${the_target} RUNTIME DESTINATION ${OPENCV_BIN_INSTALL_PATH} COMPONENT dev)
endif()

cmake/OpenCVCRTLinkage.cmake
@@ -9,7 +9,7 @@ set(HAVE_WINRT FALSE)
# search Windows Platform SDK
message(STATUS "Checking for Windows Platform SDK")
GET_FILENAME_COMPONENT(WINDOWS_SDK_PATH "[HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Microsoft SDKs\\Windows\\v8.0;InstallationFolder]" ABSOLUTE CACHE)
if (WINDOWS_SDK_PATH STREQUAL "")
if(WINDOWS_SDK_PATH STREQUAL "")
set(HAVE_MSPDK FALSE)
message(STATUS "Windows Platform SDK 8.0 was not found")
else()
@@ -19,7 +19,7 @@ endif()
#search for Visual Studio 11.0 install directory
message(STATUS "Checking for Visual Studio 2012")
GET_FILENAME_COMPONENT(VISUAL_STUDIO_PATH [HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\VisualStudio\\11.0\\Setup\\VS;ProductDir] REALPATH CACHE)
if (VISUAL_STUDIO_PATH STREQUAL "")
if(VISUAL_STUDIO_PATH STREQUAL "")
set(HAVE_MSVC2012 FALSE)
message(STATUS "Visual Studio 2012 was not found")
else()
@@ -30,11 +30,15 @@ try_compile(HAVE_WINRT_SDK
"${OpenCV_BINARY_DIR}"
"${OpenCV_SOURCE_DIR}/cmake/checks/winrttest.cpp")
if (ENABLE_WINRT_MODE AND HAVE_WINRT_SDK AND HAVE_MSVC2012 AND HAVE_MSPDK)
if(ENABLE_WINRT_MODE AND HAVE_WINRT_SDK AND HAVE_MSVC2012 AND HAVE_MSPDK)
set(HAVE_WINRT TRUE)
set(HAVE_WINRT_CX TRUE)
elseif(ENABLE_WINRT_MODE_NATIVE AND HAVE_WINRT_SDK AND HAVE_MSVC2012 AND HAVE_MSPDK)
set(HAVE_WINRT TRUE)
set(HAVE_WINRT_CX FALSE)
endif()
if (HAVE_WINRT)
if(HAVE_WINRT)
add_definitions(/DWINVER=0x0602 /DNTDDI_VERSION=NTDDI_WIN8 /D_WIN32_WINNT=0x0602)
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} /appcontainer")
set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} /appcontainer")

cmake/OpenCVCompilerOptions.cmake
@@ -124,6 +124,12 @@ if(CMAKE_COMPILER_IS_GNUCXX)
if(ENABLE_SSE2)
add_extra_compiler_option(-msse2)
endif()
if (ENABLE_NEON)
add_extra_compiler_option("-mfpu=neon")
endif()
if (ENABLE_VFPV3 AND NOT ENABLE_NEON)
add_extra_compiler_option("-mfpu=vfpv3")
endif()
# SSE3 and further should be disabled under MingW because it generates compiler errors
if(NOT MINGW)
@@ -179,6 +185,11 @@ if(CMAKE_COMPILER_IS_GNUCXX)
add_extra_compiler_option(-ffunction-sections)
endif()
if(ENABLE_COVERAGE)
set(OPENCV_EXTRA_C_FLAGS "${OPENCV_EXTRA_C_FLAGS} --coverage")
set(OPENCV_EXTRA_CXX_FLAGS "${OPENCV_EXTRA_CXX_FLAGS} --coverage")
endif()
set(OPENCV_EXTRA_FLAGS_RELEASE "${OPENCV_EXTRA_FLAGS_RELEASE} -DNDEBUG")
set(OPENCV_EXTRA_FLAGS_DEBUG "${OPENCV_EXTRA_FLAGS_DEBUG} -O0 -DDEBUG -D_DEBUG")
endif()

cmake/OpenCVDetectAndroidSDK.cmake
@@ -344,20 +344,20 @@ macro(add_android_project target path)
add_custom_command(TARGET ${target} POST_BUILD COMMAND ${CMAKE_COMMAND} -E copy "${android_proj_bin_dir}/bin/${target}-debug.apk" "${OpenCV_BINARY_DIR}/bin/${target}.apk")
if(INSTALL_ANDROID_EXAMPLES AND "${target}" MATCHES "^example-")
#apk
install(FILES "${OpenCV_BINARY_DIR}/bin/${target}.apk" DESTINATION "samples" COMPONENT main)
install(FILES "${OpenCV_BINARY_DIR}/bin/${target}.apk" DESTINATION "samples" COMPONENT samples)
get_filename_component(sample_dir "${path}" NAME)
#java part
list(REMOVE_ITEM android_proj_files ${ANDROID_MANIFEST_FILE})
foreach(f ${android_proj_files} ${ANDROID_MANIFEST_FILE})
get_filename_component(install_subdir "${f}" PATH)
install(FILES "${android_proj_bin_dir}/${f}" DESTINATION "samples/${sample_dir}/${install_subdir}" COMPONENT main)
install(FILES "${android_proj_bin_dir}/${f}" DESTINATION "samples/${sample_dir}/${install_subdir}" COMPONENT samples)
endforeach()
#jni part + eclipse files
file(GLOB_RECURSE jni_files RELATIVE "${path}" "${path}/jni/*" "${path}/.cproject")
ocv_list_filterout(jni_files "\\\\.svn")
foreach(f ${jni_files} ".classpath" ".project" ".settings/org.eclipse.jdt.core.prefs")
get_filename_component(install_subdir "${f}" PATH)
install(FILES "${path}/${f}" DESTINATION "samples/${sample_dir}/${install_subdir}" COMPONENT main)
install(FILES "${path}/${f}" DESTINATION "samples/${sample_dir}/${install_subdir}" COMPONENT samples)
endforeach()
#update proj
if(android_proj_lib_deps_commands)
@@ -365,9 +365,9 @@ macro(add_android_project target path)
endif()
install(CODE "EXECUTE_PROCESS(COMMAND ${ANDROID_EXECUTABLE} --silent update project --path . --target \"${android_proj_sdk_target}\" --name \"${target}\" ${inst_lib_opt}
WORKING_DIRECTORY \"\$ENV{DESTDIR}\${CMAKE_INSTALL_PREFIX}/samples/${sample_dir}\"
)" COMPONENT main)
)" COMPONENT samples)
#empty 'gen'
install(CODE "MAKE_DIRECTORY(\"\$ENV{DESTDIR}\${CMAKE_INSTALL_PREFIX}/samples/${sample_dir}/gen\")" COMPONENT main)
install(CODE "MAKE_DIRECTORY(\"\$ENV{DESTDIR}\${CMAKE_INSTALL_PREFIX}/samples/${sample_dir}/gen\")" COMPONENT samples)
endif()
endif()
endmacro()

cmake/OpenCVDetectCUDA.cmake
@@ -178,9 +178,8 @@ if(CUDA_FOUND)
# we remove -Wsign-promo as it generates warnings under linux
string(REPLACE "-Wsign-promo" "" ${var} "${${var}}")
# we remove -fvisibility-inlines-hidden because it's used for C++ compiler
# but NVCC uses C compiler by default
string(REPLACE "-fvisibility-inlines-hidden" "" ${var} "${${var}}")
# we remove -Wno-sign-promo as it generates warnings under linux
string(REPLACE "-Wno-sign-promo" "" ${var} "${${var}}")
# we remove -Wno-delete-non-virtual-dtor because it's used for C++ compiler
# but NVCC uses C compiler by default

cmake/OpenCVDetectDirectX.cmake
@@ -0,0 +1,14 @@
if(WIN32)
try_compile(__VALID_DIRECTX
"${OpenCV_BINARY_DIR}"
"${OpenCV_SOURCE_DIR}/cmake/checks/directx.cpp"
OUTPUT_VARIABLE TRY_OUT
)
if(NOT __VALID_DIRECTX)
return()
endif()
set(HAVE_DIRECTX ON)
set(HAVE_D3D11 ON)
set(HAVE_D3D10 ON)
set(HAVE_D3D9 ON)
endif()
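The try_compile above succeeds only if cmake/checks/directx.cpp (added by this PR but not reproduced in this excerpt) builds against the installed Windows SDK; only then are HAVE_DIRECTX and the HAVE_D3D* flags set. As a rough illustration of what such a probe tends to look like, a hypothetical stand-in could be the following; this is not the actual file from the patch:

    // Hypothetical availability probe: compiles and links only when the
    // Direct3D 11 headers and import library are present in the toolchain.
    #include <d3d11.h>
    #pragma comment(lib, "d3d11.lib")

    int main()
    {
        ID3D11Device* device = 0;
        D3D_FEATURE_LEVEL level;
        HRESULT hr = D3D11CreateDevice(0, D3D_DRIVER_TYPE_NULL, 0, 0, 0, 0,
                                       D3D11_SDK_VERSION, &device, &level, 0);
        if (device)
            device->Release();
        return SUCCEEDED(hr) ? 0 : 1;
    }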

cmake/OpenCVDetectPython.cmake
@@ -24,7 +24,8 @@ if(PYTHONINTERP_FOUND)
if(NOT ANDROID AND NOT IOS)
ocv_check_environment_variables(PYTHON_LIBRARY PYTHON_INCLUDE_DIR)
find_host_package(PythonLibs "${PYTHON_VERSION_STRING}" EXACT)
# not using PYTHON_VERSION_STRING here, because it might not conform to the CMake version format
find_host_package(PythonLibs "${PYTHON_VERSION_MAJOR_MINOR}.${PYTHON_VERSION_PATCH}" EXACT)
endif()
if(NOT ANDROID AND NOT IOS)
@@ -59,24 +60,39 @@ if(PYTHONINTERP_FOUND)
SET(PYTHON_PACKAGES_PATH "${_PYTHON_PACKAGES_PATH}" CACHE PATH "Where to install the python packages.")
if(NOT PYTHON_NUMPY_INCLUDE_DIRS)
if(CMAKE_CROSSCOMPILING)
message(STATUS "Cannot probe for Python/Numpy support (because we are cross-compiling OpenCV)")
message(STATUS "If you want to enable Python/Numpy support, set the following variables:")
message(STATUS " PYTHON_INCLUDE_PATH")
message(STATUS " PYTHON_LIBRARIES")
message(STATUS " PYTHON_NUMPY_INCLUDE_DIRS")
else()
# Attempt to discover the NumPy include directory. If this succeeds, then build python API with NumPy
execute_process(COMMAND "${PYTHON_EXECUTABLE}" -c
"import os; os.environ['DISTUTILS_USE_SDK']='1'; import numpy.distutils; print(os.pathsep.join(numpy.distutils.misc_util.get_numpy_include_dirs()))"
execute_process(COMMAND "${PYTHON_EXECUTABLE}" -c "import os; os.environ['DISTUTILS_USE_SDK']='1'; import numpy.distutils; print(os.pathsep.join(numpy.distutils.misc_util.get_numpy_include_dirs()))"
RESULT_VARIABLE PYTHON_NUMPY_PROCESS
OUTPUT_VARIABLE PYTHON_NUMPY_INCLUDE_DIRS
OUTPUT_STRIP_TRAILING_WHITESPACE)
if(PYTHON_NUMPY_PROCESS EQUAL 0)
file(TO_CMAKE_PATH "${PYTHON_NUMPY_INCLUDE_DIRS}" _PYTHON_NUMPY_INCLUDE_DIRS)
set(PYTHON_NUMPY_INCLUDE_DIRS "${_PYTHON_NUMPY_INCLUDE_DIRS}" CACHE PATH "Path to numpy headers")
if(NOT PYTHON_NUMPY_PROCESS EQUAL 0)
unset(PYTHON_NUMPY_INCLUDE_DIRS)
endif()
endif()
endif()
if(PYTHON_NUMPY_INCLUDE_DIRS)
file(TO_CMAKE_PATH "${PYTHON_NUMPY_INCLUDE_DIRS}" _PYTHON_NUMPY_INCLUDE_DIRS)
set(PYTHON_NUMPY_INCLUDE_DIRS ${_PYTHON_NUMPY_INCLUDE_DIRS} CACHE PATH "Path to numpy headers")
if(CMAKE_CROSSCOMPILING)
if(NOT PYTHON_NUMPY_VERSION)
set(PYTHON_NUMPY_VERSION "undefined - cannot be probed because of the cross-compilation")
endif()
else()
execute_process(COMMAND "${PYTHON_EXECUTABLE}" -c "import numpy; print(numpy.version.version)"
RESULT_VARIABLE PYTHON_NUMPY_PROCESS
OUTPUT_VARIABLE PYTHON_NUMPY_VERSION
OUTPUT_STRIP_TRAILING_WHITESPACE)
endif()
endif()
endif(NOT ANDROID AND NOT IOS)
endif()

cmake/OpenCVDetectVTK.cmake
@@ -2,7 +2,12 @@ if(NOT WITH_VTK OR ANDROID OR IOS)
return()
endif()
find_package(VTK 6.0 QUIET COMPONENTS vtkRenderingCore vtkInteractionWidgets vtkInteractionStyle vtkIOLegacy vtkIOPLY vtkRenderingFreeType vtkRenderingLOD vtkFiltersTexture NO_MODULE)
if (HAVE_QT5)
message(STATUS "VTK is disabled because OpenCV is linked with Q5. Some VTK disributives are compiled with Q4 and therefore can't be linked together Qt5.")
return()
endif()
find_package(VTK 6.0 QUIET COMPONENTS vtkRenderingCore vtkInteractionWidgets vtkInteractionStyle vtkIOLegacy vtkIOPLY vtkRenderingFreeType vtkRenderingLOD vtkFiltersTexture vtkIOExport NO_MODULE)
if(NOT DEFINED VTK_FOUND OR NOT VTK_FOUND)
find_package(VTK 5.10 QUIET COMPONENTS vtkCommon vtkFiltering vtkRendering vtkWidgets vtkImaging NO_MODULE)

cmake/OpenCVExtraTargets.cmake
@@ -4,7 +4,7 @@
CONFIGURE_FILE(
"${OpenCV_SOURCE_DIR}/cmake/templates/cmake_uninstall.cmake.in"
"${CMAKE_CURRENT_BINARY_DIR}/cmake_uninstall.cmake"
IMMEDIATE @ONLY)
@ONLY)
ADD_CUSTOM_TARGET(uninstall "${CMAKE_COMMAND}" -P "${CMAKE_CURRENT_BINARY_DIR}/cmake_uninstall.cmake")
if(ENABLE_SOLUTION_FOLDERS)

cmake/OpenCVFindIPP.cmake
@@ -163,9 +163,16 @@ function(set_ipp_new_libraries _LATEST_VERSION)
${IPP_LIB_PREFIX}${IPP_PREFIX}${IPPCV}${IPP_SUFFIX}${IPP_LIB_SUFFIX}
${IPP_LIB_PREFIX}${IPP_PREFIX}${IPPI}${IPP_SUFFIX}${IPP_LIB_SUFFIX}
${IPP_LIB_PREFIX}${IPP_PREFIX}${IPPS}${IPP_SUFFIX}${IPP_LIB_SUFFIX}
${IPP_LIB_PREFIX}${IPP_PREFIX}${IPPCORE}${IPP_SUFFIX}${IPP_LIB_SUFFIX}
PARENT_SCOPE)
${IPP_LIB_PREFIX}${IPP_PREFIX}${IPPCORE}${IPP_SUFFIX}${IPP_LIB_SUFFIX})
if (UNIX)
set(IPP_LIBRARIES
${IPP_LIBRARIES}
${IPP_LIB_PREFIX}irc${CMAKE_SHARED_LIBRARY_SUFFIX}
${IPP_LIB_PREFIX}imf${CMAKE_SHARED_LIBRARY_SUFFIX}
${IPP_LIB_PREFIX}svml${CMAKE_SHARED_LIBRARY_SUFFIX})
endif()
set(IPP_LIBRARIES ${IPP_LIBRARIES} PARENT_SCOPE)
return()
endfunction()
@@ -208,18 +215,38 @@ function(set_ipp_variables _LATEST_VERSION)
set(IPP_INCLUDE_DIRS ${IPP_ROOT_DIR}/include PARENT_SCOPE)
if (APPLE)
set(IPP_LIBRARY_DIRS ${IPP_ROOT_DIR}/lib PARENT_SCOPE)
set(IPP_LIBRARY_DIRS ${IPP_ROOT_DIR}/lib)
elseif (IPP_X64)
if(NOT EXISTS ${IPP_ROOT_DIR}/lib/intel64)
message(SEND_ERROR "IPP EM64T libraries not found")
endif()
set(IPP_LIBRARY_DIRS ${IPP_ROOT_DIR}/lib/intel64 PARENT_SCOPE)
set(IPP_LIBRARY_DIRS ${IPP_ROOT_DIR}/lib/intel64)
else()
if(NOT EXISTS ${IPP_ROOT_DIR}/lib/ia32)
message(SEND_ERROR "IPP IA32 libraries not found")
endif()
set(IPP_LIBRARY_DIRS ${IPP_ROOT_DIR}/lib/ia32 PARENT_SCOPE)
set(IPP_LIBRARY_DIRS ${IPP_ROOT_DIR}/lib/ia32)
endif()
if (UNIX)
get_filename_component(INTEL_COMPILER_LIBRARY_DIR ${IPP_ROOT_DIR}/../lib REALPATH)
if (IPP_X64)
if(NOT EXISTS ${INTEL_COMPILER_LIBRARY_DIR}/intel64)
message(SEND_ERROR "Intel compiler EM64T libraries not found")
endif()
set(IPP_LIBRARY_DIRS
${IPP_LIBRARY_DIRS}
${INTEL_COMPILER_LIBRARY_DIR}/intel64)
else()
if(NOT EXISTS ${INTEL_COMPILER_LIBRARY_DIR}/ia32)
message(SEND_ERROR "Intel compiler IA32 libraries not found")
endif()
set(IPP_LIBRARY_DIRS
${IPP_LIBRARY_DIRS}
${INTEL_COMPILER_LIBRARY_DIR}/ia32)
endif()
endif()
set(IPP_LIBRARY_DIRS ${IPP_LIBRARY_DIRS} PARENT_SCOPE)
# set IPP_LIBRARIES variable (7.x or 8.x lib names)
set_ipp_new_libraries(${_LATEST_VERSION})

cmake/OpenCVFindIntelPerCSDK.cmake
@@ -0,0 +1,20 @@
# Main variables:
# INTELPERC_LIBRARIES and INTELPERC_INCLUDE to link Intel Perceptual Computing SDK modules
# HAVE_INTELPERC for conditional compilation of OpenCV with/without the Intel Perceptual Computing SDK
if(X86_64)
find_path(INTELPERC_INCLUDE_DIR "pxcsession.h" PATHS "$ENV{PCSDK_DIR}include" DOC "Path to Intel Perceptual Computing SDK interface headers")
find_file(INTELPERC_LIBRARIES "libpxc.lib" PATHS "$ENV{PCSDK_DIR}lib/x64" DOC "Path to Intel Perceptual Computing SDK interface libraries")
else()
find_path(INTELPERC_INCLUDE_DIR "pxcsession.h" PATHS "$ENV{PCSDK_DIR}include" DOC "Path to Intel Perceptual Computing SDK interface headers")
find_file(INTELPERC_LIBRARIES "libpxc.lib" PATHS "$ENV{PCSDK_DIR}lib/Win32" DOC "Path to Intel Perceptual Computing SDK interface libraries")
endif()
if(INTELPERC_INCLUDE_DIR AND INTELPERC_LIBRARIES)
set(HAVE_INTELPERC TRUE)
else()
set(HAVE_INTELPERC FALSE)
message(WARNING "Intel Perceptual Computing SDK library directory (set by INTELPERC_LIB_DIR variable) is not found or does not have Intel Perceptual Computing SDK libraries.")
endif() #if(INTELPERC_INCLUDE_DIR AND INTELPERC_LIBRARIES)
mark_as_advanced(FORCE INTELPERC_LIBRARIES INTELPERC_INCLUDE_DIR)
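The comment at the top of this new script spells out its contract: CMake code adds INTELPERC_INCLUDE_DIR to the include path and links against INTELPERC_LIBRARIES, while C++ sources guard SDK-specific code with HAVE_INTELPERC. A hedged sketch of the C++ side of that contract; the function name is illustrative and not from this patch:

    #include "cvconfig.h"        // HAVE_INTELPERC is assumed to be exported here, as OpenCV
                                 // does for similar HAVE_* flags set at configure time
    #ifdef HAVE_INTELPERC
    #include "pxcsession.h"      // header located via INTELPERC_INCLUDE_DIR
    #endif

    bool has_perceptual_computing_backend()
    {
    #ifdef HAVE_INTELPERC
        return true;             // SDK-specific capture code would live behind this guard
    #else
        return false;            // SDK not found at configure time: backend compiled out
    #endif
    }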

cmake/OpenCVFindLibsVideo.cmake
@@ -277,3 +277,8 @@ if (NOT IOS)
set(HAVE_QTKIT YES)
endif()
endif()
# --- Intel Perceptual Computing SDK ---
if(WITH_INTELPERC)
include("${OpenCV_SOURCE_DIR}/cmake/OpenCVFindIntelPerCSDK.cmake")
endif(WITH_INTELPERC)

cmake/OpenCVGenAndroidMK.cmake
@@ -91,7 +91,7 @@ if(ANDROID)
set(OPENCV_LIBS_DIR_CONFIGCMAKE "\$(OPENCV_THIS_DIR)/lib/\$(OPENCV_TARGET_ARCH_ABI)")
set(OPENCV_3RDPARTY_LIBS_DIR_CONFIGCMAKE "\$(OPENCV_THIS_DIR)/3rdparty/lib/\$(OPENCV_TARGET_ARCH_ABI)")
configure_file("${OpenCV_SOURCE_DIR}/cmake/templates/OpenCV.mk.in" "${CMAKE_BINARY_DIR}/OpenCV.mk" IMMEDIATE @ONLY)
configure_file("${OpenCV_SOURCE_DIR}/cmake/templates/OpenCV.mk.in" "${CMAKE_BINARY_DIR}/OpenCV.mk" @ONLY)
# -------------------------------------------------------------------------------------------
# Part 2/2: ${BIN_DIR}/unix-install/OpenCV.mk -> For use with "make install"
@@ -101,6 +101,6 @@ if(ANDROID)
set(OPENCV_LIBS_DIR_CONFIGCMAKE "\$(OPENCV_THIS_DIR)/../libs/\$(OPENCV_TARGET_ARCH_ABI)")
set(OPENCV_3RDPARTY_LIBS_DIR_CONFIGCMAKE "\$(OPENCV_THIS_DIR)/../3rdparty/libs/\$(OPENCV_TARGET_ARCH_ABI)")
configure_file("${OpenCV_SOURCE_DIR}/cmake/templates/OpenCV.mk.in" "${CMAKE_BINARY_DIR}/unix-install/OpenCV.mk" IMMEDIATE @ONLY)
install(FILES ${CMAKE_BINARY_DIR}/unix-install/OpenCV.mk DESTINATION ${OPENCV_CONFIG_INSTALL_PATH})
configure_file("${OpenCV_SOURCE_DIR}/cmake/templates/OpenCV.mk.in" "${CMAKE_BINARY_DIR}/unix-install/OpenCV.mk" @ONLY)
install(FILES ${CMAKE_BINARY_DIR}/unix-install/OpenCV.mk DESTINATION ${OPENCV_CONFIG_INSTALL_PATH} COMPONENT dev)
endif(ANDROID)

cmake/OpenCVGenConfig.cmake
@@ -83,9 +83,9 @@ endif()
export(TARGETS ${OpenCVModules_TARGETS} FILE "${CMAKE_BINARY_DIR}/OpenCVModules${modules_file_suffix}.cmake")
configure_file("${OpenCV_SOURCE_DIR}/cmake/templates/OpenCVConfig.cmake.in" "${CMAKE_BINARY_DIR}/OpenCVConfig.cmake" IMMEDIATE @ONLY)
configure_file("${OpenCV_SOURCE_DIR}/cmake/templates/OpenCVConfig.cmake.in" "${CMAKE_BINARY_DIR}/OpenCVConfig.cmake" @ONLY)
#support for version checking when finding opencv. find_package(OpenCV 2.3.1 EXACT) should now work.
configure_file("${OpenCV_SOURCE_DIR}/cmake/templates/OpenCVConfig-version.cmake.in" "${CMAKE_BINARY_DIR}/OpenCVConfig-version.cmake" IMMEDIATE @ONLY)
configure_file("${OpenCV_SOURCE_DIR}/cmake/templates/OpenCVConfig-version.cmake.in" "${CMAKE_BINARY_DIR}/OpenCVConfig-version.cmake" @ONLY)
# --------------------------------------------------------------------------------------------
# Part 2/3: ${BIN_DIR}/unix-install/OpenCVConfig.cmake -> For use *with* "make install"
@@ -98,8 +98,8 @@ if(INSTALL_TO_MANGLED_PATHS)
set(OpenCV_3RDPARTY_LIB_DIRS_CONFIGCMAKE "\"\${OpenCV_INSTALL_PATH}/${OpenCV_3RDPARTY_LIB_DIRS_CONFIGCMAKE}\"")
endif()
configure_file("${OpenCV_SOURCE_DIR}/cmake/templates/OpenCVConfig.cmake.in" "${CMAKE_BINARY_DIR}/unix-install/OpenCVConfig.cmake" IMMEDIATE @ONLY)
configure_file("${OpenCV_SOURCE_DIR}/cmake/templates/OpenCVConfig-version.cmake.in" "${CMAKE_BINARY_DIR}/unix-install/OpenCVConfig-version.cmake" IMMEDIATE @ONLY)
configure_file("${OpenCV_SOURCE_DIR}/cmake/templates/OpenCVConfig.cmake.in" "${CMAKE_BINARY_DIR}/unix-install/OpenCVConfig.cmake" @ONLY)
configure_file("${OpenCV_SOURCE_DIR}/cmake/templates/OpenCVConfig-version.cmake.in" "${CMAKE_BINARY_DIR}/unix-install/OpenCVConfig-version.cmake" @ONLY)
if(UNIX) # ANDROID configuration is created here also
#http://www.vtk.org/Wiki/CMake/Tutorials/Packaging reference
@@ -109,18 +109,18 @@ if(UNIX) # ANDROID configuration is created here also
# <prefix>/(share|lib)/<name>*/ (U)
# <prefix>/(share|lib)/<name>*/(cmake|CMake)/ (U)
if(INSTALL_TO_MANGLED_PATHS)
install(FILES ${CMAKE_BINARY_DIR}/unix-install/OpenCVConfig.cmake DESTINATION ${OPENCV_CONFIG_INSTALL_PATH}-${OPENCV_VERSION}/)
install(FILES ${CMAKE_BINARY_DIR}/unix-install/OpenCVConfig-version.cmake DESTINATION ${OPENCV_CONFIG_INSTALL_PATH}-${OPENCV_VERSION}/)
install(EXPORT OpenCVModules DESTINATION ${OPENCV_CONFIG_INSTALL_PATH}-${OPENCV_VERSION}/ FILE OpenCVModules${modules_file_suffix}.cmake)
install(FILES ${CMAKE_BINARY_DIR}/unix-install/OpenCVConfig.cmake DESTINATION ${OPENCV_CONFIG_INSTALL_PATH}-${OPENCV_VERSION}/ COMPONENT dev)
install(FILES ${CMAKE_BINARY_DIR}/unix-install/OpenCVConfig-version.cmake DESTINATION ${OPENCV_CONFIG_INSTALL_PATH}-${OPENCV_VERSION}/ COMPONENT dev)
install(EXPORT OpenCVModules DESTINATION ${OPENCV_CONFIG_INSTALL_PATH}-${OPENCV_VERSION}/ FILE OpenCVModules${modules_file_suffix}.cmake COMPONENT dev)
else()
install(FILES "${CMAKE_BINARY_DIR}/unix-install/OpenCVConfig.cmake" DESTINATION ${OPENCV_CONFIG_INSTALL_PATH}/)
install(FILES ${CMAKE_BINARY_DIR}/unix-install/OpenCVConfig-version.cmake DESTINATION ${OPENCV_CONFIG_INSTALL_PATH}/)
install(EXPORT OpenCVModules DESTINATION ${OPENCV_CONFIG_INSTALL_PATH}/ FILE OpenCVModules${modules_file_suffix}.cmake)
install(FILES "${CMAKE_BINARY_DIR}/unix-install/OpenCVConfig.cmake" DESTINATION ${OPENCV_CONFIG_INSTALL_PATH}/ COMPONENT dev)
install(FILES ${CMAKE_BINARY_DIR}/unix-install/OpenCVConfig-version.cmake DESTINATION ${OPENCV_CONFIG_INSTALL_PATH}/ COMPONENT dev)
install(EXPORT OpenCVModules DESTINATION ${OPENCV_CONFIG_INSTALL_PATH}/ FILE OpenCVModules${modules_file_suffix}.cmake COMPONENT dev)
endif()
endif()
if(ANDROID)
install(FILES "${OpenCV_SOURCE_DIR}/platforms/android/android.toolchain.cmake" DESTINATION ${OPENCV_CONFIG_INSTALL_PATH}/)
install(FILES "${OpenCV_SOURCE_DIR}/platforms/android/android.toolchain.cmake" DESTINATION ${OPENCV_CONFIG_INSTALL_PATH}/ COMPONENT dev)
endif()
# --------------------------------------------------------------------------------------------
@ -131,15 +131,15 @@ if(WIN32)
set(OpenCV2_INCLUDE_DIRS_CONFIGCMAKE "\"\"")
exec_program(mkdir ARGS "-p \"${CMAKE_BINARY_DIR}/win-install/\"" OUTPUT_VARIABLE RET_VAL)
configure_file("${OpenCV_SOURCE_DIR}/cmake/templates/OpenCVConfig.cmake.in" "${CMAKE_BINARY_DIR}/win-install/OpenCVConfig.cmake" IMMEDIATE @ONLY)
configure_file("${OpenCV_SOURCE_DIR}/cmake/templates/OpenCVConfig-version.cmake.in" "${CMAKE_BINARY_DIR}/win-install/OpenCVConfig-version.cmake" IMMEDIATE @ONLY)
configure_file("${OpenCV_SOURCE_DIR}/cmake/templates/OpenCVConfig.cmake.in" "${CMAKE_BINARY_DIR}/win-install/OpenCVConfig.cmake" @ONLY)
configure_file("${OpenCV_SOURCE_DIR}/cmake/templates/OpenCVConfig-version.cmake.in" "${CMAKE_BINARY_DIR}/win-install/OpenCVConfig-version.cmake" @ONLY)
if(BUILD_SHARED_LIBS)
install(FILES "${CMAKE_BINARY_DIR}/win-install/OpenCVConfig.cmake" DESTINATION "${OpenCV_INSTALL_BINARIES_PREFIX}lib")
install(EXPORT OpenCVModules DESTINATION "${OpenCV_INSTALL_BINARIES_PREFIX}lib" FILE OpenCVModules${modules_file_suffix}.cmake)
install(FILES "${CMAKE_BINARY_DIR}/win-install/OpenCVConfig.cmake" DESTINATION "${OpenCV_INSTALL_BINARIES_PREFIX}lib" COMPONENT dev)
install(EXPORT OpenCVModules DESTINATION "${OpenCV_INSTALL_BINARIES_PREFIX}lib" FILE OpenCVModules${modules_file_suffix}.cmake COMPONENT dev)
else()
install(FILES "${CMAKE_BINARY_DIR}/win-install/OpenCVConfig.cmake" DESTINATION "${OpenCV_INSTALL_BINARIES_PREFIX}staticlib")
install(EXPORT OpenCVModules DESTINATION "${OpenCV_INSTALL_BINARIES_PREFIX}staticlib" FILE OpenCVModules${modules_file_suffix}.cmake)
install(FILES "${CMAKE_BINARY_DIR}/win-install/OpenCVConfig.cmake" DESTINATION "${OpenCV_INSTALL_BINARIES_PREFIX}staticlib" COMPONENT dev)
install(EXPORT OpenCVModules DESTINATION "${OpenCV_INSTALL_BINARIES_PREFIX}staticlib" FILE OpenCVModules${modules_file_suffix}.cmake COMPONENT dev)
endif()
install(FILES "${CMAKE_BINARY_DIR}/win-install/OpenCVConfig-version.cmake" DESTINATION "${CMAKE_INSTALL_PREFIX}")
install(FILES "${OpenCV_SOURCE_DIR}/cmake/OpenCVConfig.cmake" DESTINATION "${CMAKE_INSTALL_PREFIX}/")
install(FILES "${CMAKE_BINARY_DIR}/win-install/OpenCVConfig-version.cmake" DESTINATION "${CMAKE_INSTALL_PREFIX}" COMPONENT dev)
install(FILES "${OpenCV_SOURCE_DIR}/cmake/OpenCVConfig.cmake" DESTINATION "${CMAKE_INSTALL_PREFIX}/" COMPONENT dev)
endif()

@ -23,4 +23,4 @@ set(OPENCV_MODULE_DEFINITIONS_CONFIGMAKE "${OPENCV_MODULE_DEFINITIONS_CONFIGMAKE
#endforeach()
configure_file("${OpenCV_SOURCE_DIR}/cmake/templates/opencv_modules.hpp.in" "${OPENCV_CONFIG_FILE_INCLUDE_DIR}/opencv2/opencv_modules.hpp")
install(FILES "${OPENCV_CONFIG_FILE_INCLUDE_DIR}/opencv2/opencv_modules.hpp" DESTINATION ${OPENCV_INCLUDE_INSTALL_PATH}/opencv2 COMPONENT main)
install(FILES "${OPENCV_CONFIG_FILE_INCLUDE_DIR}/opencv2/opencv_modules.hpp" DESTINATION ${OPENCV_INCLUDE_INSTALL_PATH}/opencv2 COMPONENT dev)

@ -78,8 +78,8 @@ else()
endif()
configure_file("${OpenCV_SOURCE_DIR}/cmake/templates/opencv-XXX.pc.in"
"${CMAKE_BINARY_DIR}/unix-install/${OPENCV_PC_FILE_NAME}"
@ONLY IMMEDIATE)
@ONLY)
if(UNIX AND NOT ANDROID)
install(FILES ${CMAKE_BINARY_DIR}/unix-install/${OPENCV_PC_FILE_NAME} DESTINATION ${OPENCV_LIB_INSTALL_PATH}/pkgconfig)
install(FILES ${CMAKE_BINARY_DIR}/unix-install/${OPENCV_PC_FILE_NAME} DESTINATION ${OPENCV_LIB_INSTALL_PATH}/pkgconfig COMPONENT dev)
endif()

@ -135,13 +135,13 @@ macro(ocv_add_module _name)
# parse list of dependencies
if("${ARGV1}" STREQUAL "INTERNAL" OR "${ARGV1}" STREQUAL "BINDINGS")
set(OPENCV_MODULE_${the_module}_CLASS "${ARGV1}" CACHE INTERNAL "The cathegory of the module")
set(OPENCV_MODULE_${the_module}_CLASS "${ARGV1}" CACHE INTERNAL "The category of the module")
set(__ocv_argn__ ${ARGN})
list(REMOVE_AT __ocv_argn__ 0)
ocv_add_dependencies(${the_module} ${__ocv_argn__})
unset(__ocv_argn__)
else()
set(OPENCV_MODULE_${the_module}_CLASS "PUBLIC" CACHE INTERNAL "The cathegory of the module")
set(OPENCV_MODULE_${the_module}_CLASS "PUBLIC" CACHE INTERNAL "The category of the module")
ocv_add_dependencies(${the_module} ${ARGN})
if(BUILD_${the_module})
set(OPENCV_MODULES_PUBLIC ${OPENCV_MODULES_PUBLIC} "${the_module}" CACHE INTERNAL "List of OpenCV modules marked for export")
@ -583,9 +583,9 @@ macro(ocv_create_module)
endif()
ocv_install_target(${the_module} EXPORT OpenCVModules
RUNTIME DESTINATION ${OPENCV_BIN_INSTALL_PATH} COMPONENT main
LIBRARY DESTINATION ${OPENCV_LIB_INSTALL_PATH} COMPONENT main
ARCHIVE DESTINATION ${OPENCV_LIB_INSTALL_PATH} COMPONENT main
RUNTIME DESTINATION ${OPENCV_BIN_INSTALL_PATH} COMPONENT libs
LIBRARY DESTINATION ${OPENCV_LIB_INSTALL_PATH} COMPONENT libs
ARCHIVE DESTINATION ${OPENCV_LIB_INSTALL_PATH} COMPONENT dev
)
# only "public" headers need to be installed
@ -593,7 +593,7 @@ macro(ocv_create_module)
foreach(hdr ${OPENCV_MODULE_${the_module}_HEADERS})
string(REGEX REPLACE "^.*opencv2/" "opencv2/" hdr2 "${hdr}")
if(NOT hdr2 MATCHES "opencv2/${the_module}/private.*" AND hdr2 MATCHES "^(opencv2/?.*)/[^/]+.h(..)?$" )
install(FILES ${hdr} DESTINATION "${OPENCV_INCLUDE_INSTALL_PATH}/${CMAKE_MATCH_1}" COMPONENT main)
install(FILES ${hdr} DESTINATION "${OPENCV_INCLUDE_INSTALL_PATH}/${CMAKE_MATCH_1}" COMPONENT dev)
endif()
endforeach()
endif()
@ -717,6 +717,9 @@ function(ocv_add_perf_tests)
else(OCV_DEPENDENCIES_FOUND)
# TODO: warn about unsatisfied dependencies
endif(OCV_DEPENDENCIES_FOUND)
if(INSTALL_TESTS)
install(TARGETS ${the_target} RUNTIME DESTINATION ${OPENCV_TEST_INSTALL_PATH} COMPONENT tests)
endif()
endif()
endfunction()
@ -770,6 +773,10 @@ function(ocv_add_accuracy_tests)
else(OCV_DEPENDENCIES_FOUND)
# TODO: warn about unsatisfied dependencies
endif(OCV_DEPENDENCIES_FOUND)
if(INSTALL_TESTS)
install(TARGETS ${the_target} RUNTIME DESTINATION ${OPENCV_TEST_INSTALL_PATH} COMPONENT tests)
endif()
endif()
endfunction()
@ -801,7 +808,7 @@ function(ocv_add_samples)
endif()
if(WIN32)
install(TARGETS ${the_target} RUNTIME DESTINATION "samples/${module_id}" COMPONENT main)
install(TARGETS ${the_target} RUNTIME DESTINATION "samples/${module_id}" COMPONENT samples)
endif()
endforeach()
endif()
@ -810,8 +817,8 @@ function(ocv_add_samples)
if(INSTALL_C_EXAMPLES AND NOT WIN32 AND EXISTS "${samples_path}")
file(GLOB sample_files "${samples_path}/*")
install(FILES ${sample_files}
DESTINATION share/OpenCV/samples/${module_id}
PERMISSIONS OWNER_READ GROUP_READ WORLD_READ)
DESTINATION ${OPENCV_SAMPLES_SRC_INSTALL_PATH}/${module_id}
PERMISSIONS OWNER_READ GROUP_READ WORLD_READ COMPONENT samples)
endif()
endfunction()

@ -0,0 +1,110 @@
if(EXISTS "${CMAKE_ROOT}/Modules/CPack.cmake")
set(CPACK_set_DESTDIR "on")
if(NOT OPENCV_CUSTOM_PACKAGE_INFO)
set(CPACK_PACKAGE_DESCRIPTION_SUMMARY "Open Computer Vision Library")
set(CPACK_PACKAGE_DESCRIPTION
"OpenCV (Open Source Computer Vision Library) is an open source computer vision
and machine learning software library. OpenCV was built to provide a common
infrastructure for computer vision applications and to accelerate the use of
machine perception in the commercial products. Being a BSD-licensed product,
OpenCV makes it easy for businesses to utilize and modify the code.")
set(CPACK_PACKAGE_VENDOR "OpenCV Foundation")
set(CPACK_RESOURCE_FILE_LICENSE "${CMAKE_CURRENT_SOURCE_DIR}/LICENSE")
set(CPACK_PACKAGE_CONTACT "admin@opencv.org")
set(CPACK_PACKAGE_VERSION_MAJOR "${OPENCV_VERSION_MAJOR}")
set(CPACK_PACKAGE_VERSION_MINOR "${OPENCV_VERSION_MINOR}")
set(CPACK_PACKAGE_VERSION_PATCH "${OPENCV_VERSION_PATCH}")
set(CPACK_PACKAGE_VERSION "${OPENCV_VCSVERSION}")
endif(NOT OPENCV_CUSTOM_PACKAGE_INFO)
#arch
if(X86)
set(CPACK_DEBIAN_ARCHITECTURE "i386")
set(CPACK_RPM_PACKAGE_ARCHITECTURE "i686")
elseif(X86_64)
set(CPACK_DEBIAN_ARCHITECTURE "amd64")
set(CPACK_RPM_PACKAGE_ARCHITECTURE "x86_64")
elseif(ARM)
set(CPACK_DEBIAN_ARCHITECTURE "armhf")
set(CPACK_RPM_PACKAGE_ARCHITECTURE "armhf")
else()
set(CPACK_DEBIAN_ARCHITECTURE ${CMAKE_SYSTEM_PROCESSOR})
set(CPACK_RPM_PACKAGE_ARCHITECTURE ${CMAKE_SYSTEM_PROCESSOR})
endif()
if(CPACK_GENERATOR STREQUAL "DEB")
set(OPENCV_PACKAGE_ARCH_SUFFIX ${CPACK_DEBIAN_ARCHITECTURE})
elseif(CPACK_GENERATOR STREQUAL "RPM")
set(OPENCV_PACKAGE_ARCH_SUFFIX ${CPACK_RPM_PACKAGE_ARCHITECTURE})
else()
set(OPENCV_PACKAGE_ARCH_SUFFIX ${CMAKE_SYSTEM_PROCESSOR})
endif()
set(CPACK_PACKAGE_FILE_NAME "${CMAKE_PROJECT_NAME}-${OPENCV_VCSVERSION}-${OPENCV_PACKAGE_ARCH_SUFFIX}")
set(CPACK_SOURCE_PACKAGE_FILE_NAME "${CMAKE_PROJECT_NAME}-${OPENCV_VCSVERSION}-${OPENCV_PACKAGE_ARCH_SUFFIX}")
#rpm options
set(CPACK_RPM_COMPONENT_INSTALL TRUE)
set(CPACK_RPM_PACKAGE_SUMMARY ${CPACK_PACKAGE_DESCRIPTION_SUMMARY})
set(CPACK_RPM_PACKAGE_DESCRIPTION ${CPACK_PACKAGE_DESCRIPTION})
set(CPACK_RPM_PACKAGE_URL "http://opencv.org")
set(CPACK_RPM_PACKAGE_LICENSE "BSD")
#deb options
set(CPACK_DEB_COMPONENT_INSTALL TRUE)
set(CPACK_DEBIAN_PACKAGE_PRIORITY "optional")
set(CPACK_DEBIAN_PACKAGE_SECTION "libs")
set(CPACK_DEBIAN_PACKAGE_HOMEPAGE "http://opencv.org")
#dependencies
set(CPACK_DEBIAN_PACKAGE_SHLIBDEPS TRUE)
set(CPACK_COMPONENT_samples_DEPENDS libs)
set(CPACK_COMPONENT_dev_DEPENDS libs)
set(CPACK_COMPONENT_docs_DEPENDS libs)
set(CPACK_COMPONENT_java_DEPENDS libs)
set(CPACK_COMPONENT_python_DEPENDS libs)
set(CPACK_COMPONENT_tests_DEPENDS libs)
if(HAVE_CUDA)
string(REPLACE "." "-" cuda_version_suffix ${CUDA_VERSION})
set(CPACK_DEB_libs_PACKAGE_DEPENDS "cuda-core-libs-${cuda_version_suffix}, cuda-extra-libs-${cuda_version_suffix}")
set(CPACK_COMPONENT_dev_DEPENDS libs)
set(CPACK_DEB_dev_PACKAGE_DEPENDS "cuda-headers-${cuda_version_suffix}")
endif()
if(NOT OPENCV_CUSTOM_PACKAGE_INFO)
set(CPACK_COMPONENT_libs_DISPLAY_NAME "lib${CMAKE_PROJECT_NAME}")
set(CPACK_COMPONENT_libs_DESCRIPTION "Open Computer Vision Library")
set(CPACK_COMPONENT_python_DISPLAY_NAME "lib${CMAKE_PROJECT_NAME}-python")
set(CPACK_COMPONENT_python_DESCRIPTION "Python bindings for Open Source Computer Vision Library")
set(CPACK_COMPONENT_java_DISPLAY_NAME "lib${CMAKE_PROJECT_NAME}-java")
set(CPACK_COMPONENT_java_DESCRIPTION "Java bindings for Open Source Computer Vision Library")
set(CPACK_COMPONENT_dev_DISPLAY_NAME "lib${CMAKE_PROJECT_NAME}-dev")
set(CPACK_COMPONENT_dev_DESCRIPTION "Development files for Open Source Computer Vision Library")
set(CPACK_COMPONENT_docs_DISPLAY_NAME "lib${CMAKE_PROJECT_NAME}-docs")
set(CPACK_COMPONENT_docs_DESCRIPTION "Documentation for Open Source Computer Vision Library")
set(CPACK_COMPONENT_samples_DISPLAY_NAME "lib${CMAKE_PROJECT_NAME}-samples")
set(CPACK_COMPONENT_samples_DESCRIPTION "Samples for Open Source Computer Vision Library")
set(CPACK_COMPONENT_tests_DISPLAY_NAME "lib${CMAKE_PROJECT_NAME}-tests")
set(CPACK_COMPONENT_tests_DESCRIPTION "Accuracy and performance tests for Open Source Computer Vision Library")
endif(NOT OPENCV_CUSTOM_PACKAGE_INFO)
if(NOT OPENCV_CUSTOM_PACKAGE_LAYOUT)
set(CPACK_libs_COMPONENT_INSTALL TRUE)
set(CPACK_dev_COMPONENT_INSTALL TRUE)
set(CPACK_docs_COMPONENT_INSTALL TRUE)
set(CPACK_python_COMPONENT_INSTALL TRUE)
set(CPACK_java_COMPONENT_INSTALL TRUE)
set(CPACK_samples_COMPONENT_INSTALL TRUE)
endif(NOT OPENCV_CUSTOM_PACKAGE_LAYOUT)
include(CPack)
endif(EXISTS "${CMAKE_ROOT}/Modules/CPack.cmake")

@ -467,6 +467,20 @@ macro(ocv_convert_to_full_paths VAR)
endmacro()
# convert list of paths to libraries names without lib prefix
macro(ocv_convert_to_lib_name var)
set(__tmp "")
foreach(path ${ARGN})
get_filename_component(__tmp_name "${path}" NAME_WE)
string(REGEX REPLACE "^lib" "" __tmp_name ${__tmp_name})
list(APPEND __tmp "${__tmp_name}")
endforeach()
set(${var} ${__tmp})
unset(__tmp)
unset(__tmp_name)
endmacro()
# add install command
function(ocv_install_target)
install(TARGETS ${ARGN})

@ -0,0 +1,70 @@
#include <windows.h>
#include <d3d11.h>
#pragma comment (lib, "d3d11.lib")
HINSTANCE g_hInst = NULL;
D3D_DRIVER_TYPE g_driverType = D3D_DRIVER_TYPE_NULL;
D3D_FEATURE_LEVEL g_featureLevel = D3D_FEATURE_LEVEL_11_0;
ID3D11Device* g_pd3dDevice = NULL;
ID3D11DeviceContext* g_pImmediateContext = NULL;
IDXGISwapChain* g_pSwapChain = NULL;
static HRESULT InitDevice()
{
HRESULT hr = S_OK;
UINT width = 640;
UINT height = 480;
UINT createDeviceFlags = 0;
D3D_DRIVER_TYPE driverTypes[] =
{
D3D_DRIVER_TYPE_HARDWARE,
D3D_DRIVER_TYPE_WARP,
D3D_DRIVER_TYPE_REFERENCE,
};
UINT numDriverTypes = ARRAYSIZE(driverTypes);
D3D_FEATURE_LEVEL featureLevels[] =
{
D3D_FEATURE_LEVEL_11_0,
D3D_FEATURE_LEVEL_10_1,
D3D_FEATURE_LEVEL_10_0,
};
UINT numFeatureLevels = ARRAYSIZE(featureLevels);
DXGI_SWAP_CHAIN_DESC sd;
ZeroMemory( &sd, sizeof( sd ) );
sd.BufferCount = 1;
sd.BufferDesc.Width = width;
sd.BufferDesc.Height = height;
sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
sd.BufferDesc.RefreshRate.Numerator = 60;
sd.BufferDesc.RefreshRate.Denominator = 1;
sd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
sd.OutputWindow = NULL; //g_hWnd;
sd.SampleDesc.Count = 1;
sd.SampleDesc.Quality = 0;
sd.Windowed = TRUE;
for (UINT driverTypeIndex = 0; driverTypeIndex < numDriverTypes; driverTypeIndex++)
{
g_driverType = driverTypes[driverTypeIndex];
hr = D3D11CreateDeviceAndSwapChain(NULL, g_driverType, NULL, createDeviceFlags, featureLevels, numFeatureLevels,
D3D11_SDK_VERSION, &sd, &g_pSwapChain, &g_pd3dDevice, &g_featureLevel, &g_pImmediateContext);
if (SUCCEEDED(hr))
break;
}
if (FAILED(hr))
return hr;
return S_OK;
}
int main(int /*argc*/, char** /*argv*/)
{
InitDevice();
return 0;
}

@ -29,6 +29,7 @@ ${nested_namespace_start}
set(STR_HPP "// This file is auto-generated. Do not edit!
#include \"opencv2/core/ocl_genbase.hpp\"
#include \"opencv2/core/opencl/ocl_defs.hpp\"
namespace cv
{
@ -64,8 +65,8 @@ foreach(cl ${cl_list})
set(STR_CPP_DECL "const struct ProgramEntry ${cl_filename}={\"${cl_filename}\",\n\"${lines}, \"${hash}\"};\n")
set(STR_HPP_DECL "extern const struct ProgramEntry ${cl_filename};\n")
if(new_mode)
set(STR_CPP_DECL "${STR_CPP_DECL}ProgramSource2 ${cl_filename}_oclsrc(${cl_filename}.programStr);\n")
set(STR_HPP_DECL "${STR_HPP_DECL}extern ProgramSource2 ${cl_filename}_oclsrc;\n")
set(STR_CPP_DECL "${STR_CPP_DECL}ProgramSource ${cl_filename}_oclsrc(${cl_filename}.programStr);\n")
set(STR_HPP_DECL "${STR_HPP_DECL}extern ProgramSource ${cl_filename}_oclsrc;\n")
endif()
set(STR_CPP "${STR_CPP}${STR_CPP_DECL}")

@ -2,6 +2,13 @@
# you might need to define NDK_USE_CYGPATH=1 before calling the ndk-build
USER_LOCAL_PATH:=$(LOCAL_PATH)
USER_LOCAL_C_INCLUDES:=$(LOCAL_C_INCLUDES)
USER_LOCAL_CFLAGS:=$(LOCAL_CFLAGS)
USER_LOCAL_STATIC_LIBRARIES:=$(LOCAL_STATIC_LIBRARIES)
USER_LOCAL_SHARED_LIBRARIES:=$(LOCAL_SHARED_LIBRARIES)
USER_LOCAL_LDLIBS:=$(LOCAL_LDLIBS)
LOCAL_PATH:=$(subst ?,,$(firstword ?$(subst \, ,$(subst /, ,$(call my-dir)))))
OPENCV_TARGET_ARCH_ABI:=$(TARGET_ARCH_ABI)
@ -47,7 +54,7 @@ else
endif
endif
ifeq (${OPENCV_CAMERA_MODULES},on)
ifeq ($(OPENCV_CAMERA_MODULES),on)
ifeq ($(TARGET_ARCH_ABI),armeabi)
OPENCV_CAMERA_MODULES:=@OPENCV_CAMERA_LIBS_ARMEABI_CONFIGCMAKE@
endif
@ -113,6 +120,13 @@ ifeq ($(OPENCV_LOCAL_CFLAGS),)
endif
include $(CLEAR_VARS)
LOCAL_C_INCLUDES:=$(USER_LOCAL_C_INCLUDES)
LOCAL_CFLAGS:=$(USER_LOCAL_CFLAGS)
LOCAL_STATIC_LIBRARIES:=$(USER_LOCAL_STATIC_LIBRARIES)
LOCAL_SHARED_LIBRARIES:=$(USER_LOCAL_SHARED_LIBRARIES)
LOCAL_LDLIBS:=$(USER_LOCAL_LDLIBS)
LOCAL_C_INCLUDES += $(OPENCV_LOCAL_C_INCLUDES)
LOCAL_CFLAGS += $(OPENCV_LOCAL_CFLAGS)

@ -55,6 +55,12 @@
/* IEEE1394 capturing support - libdc1394 v2.x */
#cmakedefine HAVE_DC1394_2
/* DirectX */
#cmakedefine HAVE_DIRECTX
#cmakedefine HAVE_D3D11
#cmakedefine HAVE_D3D10
#cmakedefine HAVE_D3D9
/* DirectShow Video Capture library */
#cmakedefine HAVE_DSHOW
@ -82,6 +88,9 @@
/* Define to 1 if you have the <inttypes.h> header file. */
#cmakedefine HAVE_INTTYPES_H 1
/* Intel Perceptual Computing SDK library */
#cmakedefine HAVE_INTELPERC
/* Intel Integrated Performance Primitives */
#cmakedefine HAVE_IPP
@ -158,6 +167,6 @@
/* Xine video library */
#cmakedefine HAVE_XINE
/* Define to 1 if your processor stores words with the most significant byte
/* Define if your processor stores words with the most significant byte
first (like Motorola and SPARC, unlike Intel and VAX). */
#cmakedefine WORDS_BIGENDIAN

@ -0,0 +1,51 @@
#!/bin/sh
BASE_DIR=`dirname $0`
OPENCV_TEST_PATH=$BASE_DIR/@TEST_PATH@
OPENCV_TEST_DATA_PATH=$BASE_DIR/sdk/etc/testdata/
if [ $# -ne 1 ]; then
echo "Device architecture is not preset in command line"
echo "Tests are available for architectures: `ls -m ${OPENCV_TEST_PATH}`"
echo "Usage: $0 <target_device_arch>"
exit 1
else
TARGET_ARCH=$1
fi
if [ -z `which adb` ]; then
echo "adb command was not found in PATH"
exit 1
fi
adb push $OPENCV_TEST_DATA_PATH /sdcard/opencv_testdata
adb shell "mkdir -p /data/local/tmp/opencv_test"
SUMMARY_STATUS=0
for t in "$OPENCV_TEST_PATH/$TARGET_ARCH/"opencv_test_* "$OPENCV_TEST_PATH/$TARGET_ARCH/"opencv_perf_*;
do
test_name=`basename "$t"`
report="$test_name-`date --rfc-3339=date`.xml"
adb push $t /data/local/tmp/opencv_test/
adb shell "export OPENCV_TEST_DATA_PATH=/sdcard/opencv_testdata && /data/local/tmp/opencv_test/$test_name --perf_min_samples=1 --perf_force_samples=1 --gtest_output=xml:/data/local/tmp/opencv_test/$report"
adb pull "/data/local/tmp/opencv_test/$report" $report
TEST_STATUS=0
if [ -e $report ]; then
if [ `grep -c "<fail" $report` -ne 0 ]; then
TEST_STATUS=2
fi
else
TEST_STATUS=3
fi
if [ $TEST_STATUS -ne 0 ]; then
SUMMARY_STATUS=$TEST_STATUS
fi
done
if [ $SUMMARY_STATUS -eq 0 ]; then
echo "All OpenCV tests finished successfully"
else
echo "OpenCV tests finished with status $SUMMARY_STATUS"
fi
exit $SUMMARY_STATUS

@ -0,0 +1,25 @@
#!/bin/sh
OPENCV_TEST_PATH=@CMAKE_INSTALL_PREFIX@/@OPENCV_TEST_INSTALL_PATH@
export OPENCV_TEST_DATA_PATH=@CMAKE_INSTALL_PREFIX@/share/OpenCV/testdata
SUMMARY_STATUS=0
for t in "$OPENCV_TEST_PATH/"opencv_test_* "$OPENCV_TEST_PATH/"opencv_perf_*;
do
report="`basename "$t"`-`date --rfc-3339=date`.xml"
"$t" --perf_min_samples=1 --perf_force_samples=1 --gtest_output=xml:"$report"
TEST_STATUS=$?
if [ $TEST_STATUS -ne 0 ]; then
SUMMARY_STATUS=$TEST_STATUS
fi
done
rm -f /tmp/__opencv_temp.*
if [ $SUMMARY_STATUS -eq 0 ]; then
echo "All OpenCV tests finished successfully"
else
echo "OpenCV tests finished with status $SUMMARY_STATUS"
fi
exit $SUMMARY_STATUS

@ -0,0 +1,2 @@
# Environment setup for OpenCV testing
export OPENCV_TEST_DATA_PATH=@CMAKE_INSTALL_PREFIX@/share/OpenCV/testdata

@ -2,9 +2,21 @@ file(GLOB HAAR_CASCADES haarcascades/*.xml)
file(GLOB LBP_CASCADES lbpcascades/*.xml)
if(ANDROID)
install(FILES ${HAAR_CASCADES} DESTINATION sdk/etc/haarcascades COMPONENT main)
install(FILES ${LBP_CASCADES} DESTINATION sdk/etc/lbpcascades COMPONENT main)
install(FILES ${HAAR_CASCADES} DESTINATION sdk/etc/haarcascades COMPONENT libs)
install(FILES ${LBP_CASCADES} DESTINATION sdk/etc/lbpcascades COMPONENT libs)
elseif(NOT WIN32)
install(FILES ${HAAR_CASCADES} DESTINATION share/OpenCV/haarcascades COMPONENT main)
install(FILES ${LBP_CASCADES} DESTINATION share/OpenCV/lbpcascades COMPONENT main)
install(FILES ${HAAR_CASCADES} DESTINATION share/OpenCV/haarcascades COMPONENT libs)
install(FILES ${LBP_CASCADES} DESTINATION share/OpenCV/lbpcascades COMPONENT libs)
endif()
if(INSTALL_TESTS AND OPENCV_TEST_DATA_PATH)
if(ANDROID)
install(DIRECTORY ${OPENCV_TEST_DATA_PATH} DESTINATION sdk/etc/testdata COMPONENT tests)
elseif(NOT WIN32)
# CPack does not set correct permissions by default, so we do it explicitly.
install(DIRECTORY ${OPENCV_TEST_DATA_PATH}
DIRECTORY_PERMISSIONS OWNER_WRITE OWNER_READ OWNER_EXECUTE
GROUP_READ GROUP_EXECUTE WORLD_READ WORLD_EXECUTE
DESTINATION share/OpenCV/testdata COMPONENT tests)
endif()
endif()

File diff suppressed because it is too large (22 files)

@ -33,7 +33,7 @@ if(BUILD_DOCS AND HAVE_SPHINX)
endif()
endforeach()
set(FIXED_ORDER_MODULES core imgproc highgui video calib3d features2d objdetect ml flann photo stitching nonfree contrib legacy bioinspired)
set(FIXED_ORDER_MODULES core imgproc highgui video calib3d features2d objdetect ml flann photo stitching nonfree contrib legacy)
list(REMOVE_ITEM BASE_MODULES ${FIXED_ORDER_MODULES})
@ -148,11 +148,11 @@ if(BUILD_DOCS AND HAVE_SPHINX)
endif()
foreach(f ${DOC_LIST})
install(FILES "${f}" DESTINATION "${OPENCV_DOC_INSTALL_PATH}" COMPONENT main)
install(FILES "${f}" DESTINATION "${OPENCV_DOC_INSTALL_PATH}" COMPONENT docs)
endforeach()
foreach(f ${OPTIONAL_DOC_LIST})
install(FILES "${f}" DESTINATION "${OPENCV_DOC_INSTALL_PATH}" OPTIONAL)
install(FILES "${f}" DESTINATION "${OPENCV_DOC_INSTALL_PATH}" OPTIONAL COMPONENT docs)
endforeach()
endif()

@ -54,7 +54,7 @@ master_doc = 'index'
# General information about the project.
project = u'OpenCV'
copyright = u'2011-2013, opencv dev team'
copyright = u'2011-2014, opencv dev team'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the

@ -492,7 +492,7 @@ class=Typewch><span lang=EN-US>- weighttrimming &lt;weight_trimming&gt;</span></
<p class=MsoNormal style='margin-left:17.1pt;text-indent:-17.1pt'><span
class=Typewch><span lang=EN-US>  </span></span><span class=Typewch><span
lang=EN-US style='font-family:"Times New Roman";font-weight:normal'>Specifies
wheter and how much weight trimming should be used. A decent choice is 0.90.</span></span></p>
whether and how much weight trimming should be used. A decent choice is 0.90.</span></span></p>
<p class=MsoNormal style='margin-left:17.1pt;text-indent:-17.1pt'><span
class=Typewch><span lang=EN-US>- eqw</span></span></p>

@ -110,11 +110,11 @@ Once we find the corners, we can increase their accuracy using **cv2.cornerSubPi
if ret == True:
objpoints.append(objp)
corners2 = cv2.cornerSubPix(gray,corners,(11,11),(-1,-1),criteria)
imgpoints.append(corners2)
cv2.cornerSubPix(gray,corners,(11,11),(-1,-1),criteria)
imgpoints.append(corners)
# Draw and display the corners
img = cv2.drawChessboardCorners(img, (7,6), corners2,ret)
cv2.drawChessboardCorners(img, (7,6), corners2,ret)
cv2.imshow('img',img)
cv2.waitKey(500)

@ -33,21 +33,21 @@ To draw a line, you need to pass starting and ending coordinates of line. We wil
img = np.zeros((512,512,3), np.uint8)
# Draw a diagonal blue line with thickness of 5 px
img = cv2.line(img,(0,0),(511,511),(255,0,0),5)
cv2.line(img,(0,0),(511,511),(255,0,0),5)
Drawing Rectangle
-------------------
To draw a rectangle, you need top-left corner and bottom-right corner of rectangle. This time we will draw a green rectangle at the top-right corner of image.
::
img = cv2.rectangle(img,(384,0),(510,128),(0,255,0),3)
cv2.rectangle(img,(384,0),(510,128),(0,255,0),3)
Drawing Circle
----------------
To draw a circle, you need its center coordinates and radius. We will draw a circle inside the rectangle drawn above.
::
img = cv2.circle(img,(447,63), 63, (0,0,255), -1)
cv2.circle(img,(447,63), 63, (0,0,255), -1)
Drawing Ellipse
--------------------
@ -55,7 +55,7 @@ Drawing Ellipse
To draw the ellipse, we need to pass several arguments. One argument is the center location (x,y). The next argument is the axes lengths (major axis length, minor axis length). ``angle`` is the angle of rotation of the ellipse in the anti-clockwise direction. ``startAngle`` and ``endAngle`` denote the start and end of the ellipse arc measured in the clockwise direction from the major axis, i.e. giving values 0 and 360 gives the full ellipse. For more details, check the documentation of **cv2.ellipse()**. The example below draws a half ellipse at the center of the image.
::
img = cv2.ellipse(img,(256,256),(100,50),0,0,180,255,-1)
cv2.ellipse(img,(256,256),(100,50),0,0,180,255,-1)
Drawing Polygon
@ -65,7 +65,7 @@ To draw a polygon, first you need coordinates of vertices. Make those points int
pts = np.array([[10,5],[20,30],[70,20],[50,10]], np.int32)
pts = pts.reshape((-1,1,2))
img = cv2.polylines(img,[pts],True,(0,255,255))
cv2.polylines(img,[pts],True,(0,255,255))
.. Note:: If the third argument is ``False``, you will get polylines joining all the points, not a closed shape; the sketch below illustrates both cases.
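As an illustration, a minimal sketch of the ``isClosed`` flag in action (``img`` and ``pts`` are redefined here so the snippet stands alone):
::

    import numpy as np
    import cv2

    img = np.zeros((512,512,3), np.uint8)
    pts = np.array([[10,5],[20,30],[70,20],[50,10]], np.int32)
    pts = pts.reshape((-1,1,2))

    closed = img.copy()
    cv2.polylines(closed, [pts], True, (0,255,255))   # isClosed=True: last vertex joined back to the first
    opened = img.copy()
    cv2.polylines(opened, [pts], False, (0,255,255))  # isClosed=False: the shape stays open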
@ -103,4 +103,4 @@ Additional Resources
Exercises
==============
#. Try to create the logo of OpenCV using drawing functions available in OpenCV
#. Try to create the logo of OpenCV using drawing functions available in OpenCV.

@ -59,6 +59,8 @@ A screenshot of the window will look like this (in Fedora-Gnome machine):
**cv2.waitKey()** is a keyboard binding function. Its argument is the time in milliseconds. The function waits for the specified milliseconds for any keyboard event. If you press any key in that time, the program continues. If **0** is passed, it waits indefinitely for a key stroke. It can also be used to detect specific key strokes, e.g. whether the key `a` is pressed, which we will discuss below.
.. note:: Besides binding keyboard events this function also processes many other GUI events, so you MUST use it to actually display the image.
**cv2.destroyAllWindows()** simply destroys all the windows we created. If you want to destroy any specific window, use the function **cv2.destroyWindow()** where you pass the exact window name as the argument.
.. note:: There is a special case where you can create a window first and load the image to it later. In that case, you can specify whether the window is resizable or not. It is done with the function **cv2.namedWindow()**. By default, the flag is ``cv2.WINDOW_AUTOSIZE``. But if you specify the flag to be ``cv2.WINDOW_NORMAL``, you can resize the window. This is helpful when the image is too large and when adding trackbars to windows.
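A small sketch combining the two notes above (the file name ``messi5.jpg`` is only a placeholder): it opens a resizable window, waits for a key, and reacts to the `a` key:
::

    import cv2

    img = cv2.imread('messi5.jpg')                 # placeholder file name
    cv2.namedWindow('image', cv2.WINDOW_NORMAL)    # resizable window
    cv2.imshow('image', img)

    k = cv2.waitKey(0) & 0xFF                      # wait indefinitely for a key stroke
    if k == ord('a'):                              # react to a specific key
        print('key a was pressed')
    cv2.destroyAllWindows()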

@ -119,7 +119,7 @@ Let (x,y) be the top-left coordinate of the rectangle and (w,h) be its width and
::
x,y,w,h = cv2.boundingRect(cnt)
img = cv2.rectangle(img,(x,y),(x+w,y+h),(0,255,0),2)
cv2.rectangle(img,(x,y),(x+w,y+h),(0,255,0),2)
7.b. Rotated Rectangle
-----------------------
@ -129,7 +129,7 @@ Here, bounding rectangle is drawn with minimum area, so it considers the rotatio
rect = cv2.minAreaRect(cnt)
box = cv2.boxPoints(rect)
box = np.int0(box)
im = cv2.drawContours(im,[box],0,(0,0,255),2)
cv2.drawContours(img,[box],0,(0,0,255),2)
Both the rectangles are shown in a single image. Green rectangle shows the normal bounding rect. Red rectangle is the rotated rect.
@ -145,7 +145,7 @@ Next we find the circumcircle of an object using the function **cv2.minEnclosing
(x,y),radius = cv2.minEnclosingCircle(cnt)
center = (int(x),int(y))
radius = int(radius)
img = cv2.circle(img,center,radius,(0,255,0),2)
cv2.circle(img,center,radius,(0,255,0),2)
.. image:: images/circumcircle.png
:alt: Minimum Enclosing Circle
@ -158,7 +158,7 @@ Next one is to fit an ellipse to an object. It returns the rotated rectangle in
::
ellipse = cv2.fitEllipse(cnt)
im = cv2.ellipse(im,ellipse,(0,255,0),2)
cv2.ellipse(img,ellipse,(0,255,0),2)
.. image:: images/fitellipse.png
:alt: Fitting an Ellipse
@ -175,7 +175,7 @@ Similarly we can fit a line to a set of points. Below image contains a set of wh
[vx,vy,x,y] = cv2.fitLine(cnt, cv2.DIST_L2,0,0.01,0.01)
lefty = int((-x*vy/vx) + y)
righty = int(((cols-x)*vy/vx)+y)
img = cv2.line(img,(cols-1,righty),(0,lefty),(0,255,0),2)
cv2.line(img,(cols-1,righty),(0,lefty),(0,255,0),2)
.. image:: images/fitline.jpg
:alt: Fitting a Line

@ -28,9 +28,9 @@ Let's see how to find contours of a binary image:
im = cv2.imread('test.jpg')
imgray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(imgray,127,255,0)
image, contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
See, there are three arguments in **cv2.findContours()** function, first one is source image, second is contour retrieval mode, third is contour approximation method. And it outputs the image, contours and hierarchy. ``contours`` is a Python list of all the contours in the image. Each individual contour is a Numpy array of (x,y) coordinates of boundary points of the object.
See, there are three arguments in **cv2.findContours()** function, first one is source image, second is contour retrieval mode, third is contour approximation method. And it outputs the contours and hierarchy. ``contours`` is a Python list of all the contours in the image. Each individual contour is a Numpy array of (x,y) coordinates of boundary points of the object.
.. note:: We will discuss the second and third arguments and the hierarchy in detail later. Until then, the values given to them in the code sample will work fine for all images.
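To make the returned structure concrete, a short sketch continuing from the snippet above (``contours`` is assumed to hold the result of **cv2.findContours()**):
::

    # contours is a plain Python list; each element is a numpy array of boundary points
    print(len(contours))           # number of detected contours
    cnt = contours[0]
    print(cnt.shape, cnt.dtype)    # typically (N, 1, 2), int32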
@ -43,18 +43,18 @@ To draw the contours, ``cv2.drawContours`` function is used. It can also be used
To draw all the contours in an image:
::
img = cv2.drawContour(img, contours, -1, (0,255,0), 3)
cv2.drawContours(img, contours, -1, (0,255,0), 3)
To draw an individual contour, say 4th contour:
::
img = cv2.drawContours(img, contours, 3, (0,255,0), 3)
cv2.drawContours(img, contours, 3, (0,255,0), 3)
But most of the time, below method will be useful:
::
cnt = contours[4]
img = cv2.drawContours(img, [cnt], 0, (0,255,0), 3)
cv2.drawContours(img, [cnt], 0, (0,255,0), 3)
.. note:: The last two methods are the same, but when you go forward, you will see that the last one is more useful.
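As a small illustration of why the list form is handy, the following sketch (the area threshold of 100 is arbitrary) iterates over the contours and draws only the larger ones:
::

    # draw each sufficiently large contour individually
    for cnt in contours:
        if cv2.contourArea(cnt) > 100:
            cv2.drawContours(img, [cnt], 0, (0,255,0), 3)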

@ -73,7 +73,7 @@ Now we find the faces in the image. If faces are found, it returns the positions
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for (x,y,w,h) in faces:
img = cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
roi_gray = gray[y:y+h, x:x+w]
roi_color = img[y:y+h, x:x+w]
eyes = eye_cascade.detectMultiScale(roi_gray)
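For illustration, a minimal sketch of how such a loop would typically continue, drawing each detected eye inside the face region (variable names follow the snippet above):
::

    for (ex,ey,ew,eh) in eyes:
        # eye coordinates are relative to the face ROI, so draw on roi_color
        cv2.rectangle(roi_color, (ex,ey), (ex+ew,ey+eh), (0,255,0), 2)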

Binary file not shown. (before: 147 KiB)
Binary file not shown. (before: 163 KiB)
Binary file not shown. (before: 78 KiB)
Binary file not shown. (before: 28 KiB)
Binary file not shown. (before: 68 KiB)

@ -1,418 +0,0 @@
.. _Retina_Model:
Discovering the human retina and its use for image processing
*************************************************************
Goal
=====
I present here a model of human retina that shows some interesting properties for image preprocessing and enhancement.
In this tutorial you will learn how to:
.. container:: enumeratevisibleitemswithsquare
+ discover the two main channels coming out of your retina
+ see the basics to use the retina model
+ discover some parameters tweaks
General overview
================
The proposed model originates from Jeanny Herault's research [herault2010]_ at `Gipsa <http://www.gipsa-lab.inpg.fr>`_. It is used in image processing applications with the `Listic <http://www.listic.univ-savoie.fr>`_ lab (code maintainer and user). This is not a complete model, but it already presents interesting properties that can be used for an enhanced image processing experience. The model allows the following human retina properties to be used :
* spectral whitening that has 3 important effects: high spatio-temporal frequency signals canceling (noise), mid-frequencies details enhancement and low frequencies luminance energy reduction. This *all in one* property directly allows visual signals cleaning of classical undesired distortions introduced by image sensors and input luminance range.
* local logarithmic luminance compression allows details to be enhanced even in low light conditions.
* decorrelation of the details information (Parvocellular output channel) and transient information (events, motion made available at the Magnocellular output channel).
The first two points are illustrated below :
In the figure below, the OpenEXR image sample *CrissyField.exr*, a High Dynamic Range image, is shown. In order to make it visible on this web page, the original input image is linearly rescaled to the classical image luminance range [0-255] and converted to 8bit/channel format. Such a strong conversion hides many details because of too strong local contrasts. Furthermore, noise energy is also strong and pollutes the visual information.
.. image:: images/retina_TreeHdr_small.jpg
:alt: A High dynamic range image linearly rescaled within range [0-255].
:align: center
In the following image, applying the ideas proposed in [benoit2010]_, as your retina does, local luminance adaptation, spatial noise removal and spectral whitening work together and transmit accurate information on lower range 8bit data channels. On this picture, noise is significantly removed, and local details hidden by strong luminance contrasts are enhanced. The output image keeps its naturalness and the visual content is enhanced. Color processing is based on the color multiplexing/demultiplexing method proposed in [chaix2007]_.
.. image:: images/retina_TreeHdr_retina.jpg
:alt: A High dynamic range image compressed within range [0-255] using the retina.
:align: center
*Note :* image sample can be downloaded from the `OpenEXR website <http://www.openexr.com>`_. Regarding this demonstration, before retina processing, input image has been linearly rescaled within 0-255 keeping its channels float format. 5% of its histogram ends has been cut (mostly removes wrong HDR pixels). Check out the sample *opencv/samples/cpp/OpenEXRimages_HighDynamicRange_Retina_toneMapping.cpp* for similar processing. The following demonstration will only consider classical 8bit/channel images.
The retina model output channels
================================
The retina model presents two outputs that benefit from the above cited behaviors.
* The first one is called the Parvocellular channel. It is mainly active in the foveal retina area (high resolution central vision with color sensitive photo-receptors), its aim is to provide accurate color vision for visual details remaining static on the retina. On the other hand objects moving on the retina projection are blurred.
* The second well known channel is the Magnocellular channel. It is mainly active in the retina's peripheral vision and sends signals related to change events (motion, transient events, etc.). These outgoing signals also help the visual system to focus/center the retina on 'transient'/moving areas for more detailed analysis, thus improving visual scene context and object classification.
**NOTE :** regarding the proposed model, contrary to the real retina, we apply these two channels on the entire input images using the same resolution. This allows enhanced visual details and motion information to be extracted on all the considered images... but remember, that these two channels are complementary. For example, if Magnocellular channel gives strong energy in an area, then, the Parvocellular channel is certainly blurred there since there is a transient event.
As an illustration, we apply in the following the retina model on a webcam video stream of a dark visual scene. In this visual scene, captured in an amphitheater of the university, some students are moving while talking to the teacher.
In this video sequence, because of the dark ambiance, signal to noise ratio is low and color artifacts are present on visual features edges because of the low quality image capture tool-chain.
.. image:: images/studentsSample_input.jpg
:alt: an input video stream extract sample
:align: center
Below is shown the retina foveal vision applied on the entire image. In the used retina configuration, global luminance is preserved and local contrasts are enhanced. Also, signal to noise ratio is improved : since high frequency spatio-temporal noise is reduced, enhanced details are not corrupted by any enhanced noise.
.. image:: images/studentsSample_parvo.jpg
:alt: the retina Parvocellular output. Enhanced details, luminance adaptation and noise removal. A processing tool for image analysis.
:align: center
Below is the output of the Magnocellular channel of the retina model. Its signals are strong where transient events occur. Here, a student is moving at the bottom of the image, thus generating high energy. The rest of the image is static; however, it is corrupted by strong noise. Here, the retina filters out most of the noise, thus generating few false motion-area 'alarms'. This channel can be used as a transient/moving-area detector : it would provide relevant information for a low cost segmentation tool that would highlight areas in which an event is occurring.
.. image:: images/studentsSample_magno.jpg
:alt: the retina Magnocellular output. Enhanced transient signals (motion, etc.). A preprocessing tool for event detection.
:align: center
Retina use case
===============
This model can be used basically for spatio-temporal video effects but also in the aim of :
* performing texture analysis with enhanced signal to noise ratio and enhanced details robust against input images luminance ranges (check out the Parvocellular retina channel output)
* performing motion analysis also taking benefit of the previously cited properties.
Literature
==========
For more information, refer to the following papers :
.. [benoit2010] Benoit A., Caplier A., Durette B., Herault, J., "Using Human Visual System Modeling For Bio-Inspired Low Level Image Processing", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773. DOI <http://dx.doi.org/10.1016/j.cviu.2010.01.011>
* Please have a look at the reference work of Jeanny Herault that you can read in his book :
.. [herault2010] Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
This retina filter code includes the research contributions of PhD/research colleagues from whose work the code has been redrawn by the author :
* take a look at the *retinacolor.hpp* module to discover Brice Chaix de Lavarene's PhD color mosaicing/demosaicing work and his reference paper:
.. [chaix2007] B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
* take a look at *imagelogpolprojection.hpp* to discover the retina spatial log sampling which originates from Barthelemy Durette's PhD work with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions. More information can be found in Jeanny Herault's book cited above.
Code tutorial
=============
Please refer to the original tutorial source code in file *opencv_folder/samples/cpp/tutorial_code/bioinspired/retina_tutorial.cpp*.
**Note :** do not forget that the retina model is included in the following namespace : *cv::bioinspired*.
To compile it, assuming OpenCV is correctly installed, use the following command. It requires the opencv_core *(cv::Mat and friends objects management)*, opencv_highgui *(display and image/video read)* and opencv_bioinspired *(Retina description)* libraries to compile.
.. code-block:: cpp
// compile
gcc retina_tutorial.cpp -o Retina_tuto -lopencv_core -lopencv_highgui -lopencv_bioinspired
// Run commands : add 'log' as a last parameter to apply a spatial log sampling (simulates retina sampling)
// run on webcam
./Retina_tuto -video
// run on video file
./Retina_tuto -video myVideo.avi
// run on an image
./Retina_tuto -image myPicture.jpg
// run on an image with log sampling
./Retina_tuto -image myPicture.jpg log
Here is a code explanation :
The Retina definition is present in the bioinspired package and a simple include allows you to use it. You can instead use the specific header *opencv2/bioinspired.hpp* if you prefer, but then include the other required opencv modules : *opencv2/core.hpp* and *opencv2/highgui.hpp*
.. code-block:: cpp
#include "opencv2/opencv.hpp"
Provide user some hints to run the program with a help function
.. code-block:: cpp
// the help procedure
static void help(std::string errorMessage)
{
std::cout<<"Program init error : "<<errorMessage<<std::endl;
std::cout<<"\nProgram call procedure : retinaDemo [processing mode] [Optional : media target] [Optional LAST parameter: \"log\" to activate retina log sampling]"<<std::endl;
std::cout<<"\t[processing mode] :"<<std::endl;
std::cout<<"\t -image : for still image processing"<<std::endl;
std::cout<<"\t -video : for video stream processing"<<std::endl;
std::cout<<"\t[Optional : media target] :"<<std::endl;
std::cout<<"\t if processing an image or video file, then, specify the path and filename of the target to process"<<std::endl;
std::cout<<"\t leave empty if processing video stream coming from a connected video device"<<std::endl;
std::cout<<"\t[Optional : activate retina log sampling] : an optional last parameter can be specified for retina spatial log sampling"<<std::endl;
std::cout<<"\t set \"log\" without quotes to activate this sampling, output frame size will be divided by 4"<<std::endl;
std::cout<<"\nExamples:"<<std::endl;
std::cout<<"\t-Image processing : ./retinaDemo -image lena.jpg"<<std::endl;
std::cout<<"\t-Image processing with log sampling : ./retinaDemo -image lena.jpg log"<<std::endl;
std::cout<<"\t-Video processing : ./retinaDemo -video myMovie.mp4"<<std::endl;
std::cout<<"\t-Live video processing : ./retinaDemo -video"<<std::endl;
std::cout<<"\nPlease start again with new parameters"<<std::endl;
std::cout<<"****************************************************"<<std::endl;
std::cout<<" NOTE : this program generates the default retina parameters file 'RetinaDefaultParameters.xml'"<<std::endl;
std::cout<<" => you can use this to fine tune parameters and load them if you save to file 'RetinaSpecificParameters.xml'"<<std::endl;
}
Then, start the main program and first declare a *cv::Mat* matrix in which input images will be loaded. Also allocate a *cv::VideoCapture* object ready to load video streams (if necessary)
.. code-block:: cpp
int main(int argc, char* argv[]) {
// declare the retina input buffer... that will be fed differently in regard of the input media
cv::Mat inputFrame;
cv::VideoCapture videoCapture; // in case a video media is used, its manager is declared here
In the main program, before processing, first check input command parameters. Here it loads a first input image coming from a single loaded image (if user chose command *-image*) or from a video stream (if user chose command *-video*). Also, if the user added *log* command at the end of its program call, the spatial logarithmic image sampling performed by the retina is taken into account by the Boolean flag *useLogSampling*.
.. code-block:: cpp
// welcome message
std::cout<<"****************************************************"<<std::endl;
std::cout<<"* Retina demonstration : demonstrates the use of is a wrapper class of the Gipsa/Listic Labs retina model."<<std::endl;
std::cout<<"* This demo will try to load the file 'RetinaSpecificParameters.xml' (if exists).\nTo create it, copy the autogenerated template 'RetinaDefaultParameters.xml'.\nThen twaek it with your own retina parameters."<<std::endl;
// basic input arguments checking
if (argc<2)
{
help("bad number of parameter");
return -1;
}
bool useLogSampling = !strcmp(argv[argc-1], "log"); // check if user wants retina log sampling processing
std::string inputMediaType=argv[1];
//////////////////////////////////////////////////////////////////////////////
// checking input media type (still image, video file, live video acquisition)
if (!strcmp(inputMediaType.c_str(), "-image") && argc >= 3)
{
std::cout<<"RetinaDemo: processing image "<<argv[2]<<std::endl;
// image processing case
inputFrame = cv::imread(std::string(argv[2]), 1); // load image in RGB mode
}else
if (!strcmp(inputMediaType.c_str(), "-video"))
{
if (argc == 2 || (argc == 3 && useLogSampling)) // attempt to grab images from a video capture device
{
videoCapture.open(0);
}else// attempt to grab images from a video filestream
{
std::cout<<"RetinaDemo: processing video stream "<<argv[2]<<std::endl;
videoCapture.open(argv[2]);
}
// grab a first frame to check if everything is ok
videoCapture>>inputFrame;
}else
{
// bad command parameter
help("bad command parameter");
return -1;
}
Once all input parameters are processed, a first image should have been loaded, if not, display error and stop program :
.. code-block:: cpp
if (inputFrame.empty())
{
help("Input media could not be loaded, aborting");
return -1;
}
Now, everything is ready to run the retina model. I propose here to allocate a retina instance and to manage the optional log sampling. The Retina constructor expects at least a cv::Size object that gives the input data size that will have to be managed. One can activate other options such as color and its related color multiplexing strategy (here Bayer multiplexing is chosen using *enum cv::bioinspired::RETINA_COLOR_BAYER*). If using log sampling, the image reduction factor (smaller output images) and log sampling strength can be adjusted.
.. code-block:: cpp
// pointer to a retina object
cv::Ptr<cv::bioinspired::Retina> myRetina;
// if the last parameter is 'log', then activate log sampling (favour foveal vision and subsamples peripheral vision)
if (useLogSampling)
{
myRetina = cv::bioinspired::createRetina(inputFrame.size(), true, cv::bioinspired::RETINA_COLOR_BAYER, true, 2.0, 10.0);
}
else// -> else allocate "classical" retina :
myRetina = cv::bioinspired::createRetina(inputFrame.size());
Once done, the proposed code writes a default xml file that contains the default parameters of the retina. This is useful to make your own config using this template. The generated template xml file is called *RetinaDefaultParameters.xml*.
.. code-block:: cpp
// save default retina parameters file in order to let you see this and maybe modify it and reload using method "setup"
myRetina->write("RetinaDefaultParameters.xml");
In the following line, the retina attempts to load another xml file called *RetinaSpecificParameters.xml*. If you created it and introduced your own setup, it will be loaded; otherwise, the default retina parameters are used.
.. code-block:: cpp
// load parameters if file exists
myRetina->setup("RetinaSpecificParameters.xml");
It is not required here but just to show it is possible, you can reset the retina buffers to zero to force it to forget past events.
.. code-block:: cpp
// reset all retina buffers (imagine you close your eyes for a long time)
myRetina->clearBuffers();
Now, it is time to run the retina ! First create some output buffers ready to receive the two retina channels outputs
.. code-block:: cpp
// declare retina output buffers
cv::Mat retinaOutput_parvo;
cv::Mat retinaOutput_magno;
Then, run retina in a loop, load new frames from video sequence if necessary and get retina outputs back to dedicated buffers.
.. code-block:: cpp
// processing loop with no stop condition
while(true)
{
// if using video stream, then, grabbing a new frame, else, input remains the same
if (videoCapture.isOpened())
videoCapture>>inputFrame;
// run retina filter on the loaded input frame
myRetina->run(inputFrame);
// Retrieve and display retina output
myRetina->getParvo(retinaOutput_parvo);
myRetina->getMagno(retinaOutput_magno);
cv::imshow("retina input", inputFrame);
cv::imshow("Retina Parvo", retinaOutput_parvo);
cv::imshow("Retina Magno", retinaOutput_magno);
cv::waitKey(10);
}
That's done ! But if you want to secure the system, take care and manage Exceptions. The retina can throw some when it sees irrelevant data (no input frame, wrong setup, etc.).
Then, I recommend surrounding all the retina code with a try/catch system like this :
.. code-block:: cpp
try{
// pointer to a retina object
cv::Ptr<cv::Retina> myRetina;
[---]
// processing loop with no stop condition
while(true)
{
[---]
}
}catch(cv::Exception e)
{
std::cerr<<"Error using Retina : "<<e.what()<<std::endl;
}
Retina parameters, what to do ?
===============================
First, it is recommended to read the reference paper :
* Benoit A., Caplier A., Durette B., Herault, J., *"Using Human Visual System Modeling For Bio-Inspired Low Level Image Processing"*, Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773. DOI <http://dx.doi.org/10.1016/j.cviu.2010.01.011>
Once done open the configuration file *RetinaDefaultParameters.xml* generated by the demo and let's have a look at it.
.. code-block:: cpp
<?xml version="1.0"?>
<opencv_storage>
<OPLandIPLparvo>
<colorMode>1</colorMode>
<normaliseOutput>1</normaliseOutput>
<photoreceptorsLocalAdaptationSensitivity>7.5e-01</photoreceptorsLocalAdaptationSensitivity>
<photoreceptorsTemporalConstant>9.0e-01</photoreceptorsTemporalConstant>
<photoreceptorsSpatialConstant>5.7e-01</photoreceptorsSpatialConstant>
<horizontalCellsGain>0.01</horizontalCellsGain>
<hcellsTemporalConstant>0.5</hcellsTemporalConstant>
<hcellsSpatialConstant>7.</hcellsSpatialConstant>
<ganglionCellsSensitivity>7.5e-01</ganglionCellsSensitivity></OPLandIPLparvo>
<IPLmagno>
<normaliseOutput>1</normaliseOutput>
<parasolCells_beta>0.</parasolCells_beta>
<parasolCells_tau>0.</parasolCells_tau>
<parasolCells_k>7.</parasolCells_k>
<amacrinCellsTemporalCutFrequency>2.0e+00</amacrinCellsTemporalCutFrequency>
<V0CompressionParameter>9.5e-01</V0CompressionParameter>
<localAdaptintegration_tau>0.</localAdaptintegration_tau>
<localAdaptintegration_k>7.</localAdaptintegration_k></IPLmagno>
</opencv_storage>
Here are some hints, but actually the best parameter setup depends more on what you want to do with the retina than on the input images that you give to the retina. Apart from the more specific case of High Dynamic Range (HDR) images, which require a more specific setup for a specific luminance compression objective, the retina behaviors should be rather stable from content to content. Note that OpenCV is able to manage such HDR formats thanks to OpenEXR image compatibility.
Then, if the application target requires detail enhancement prior to specific image processing, you need to know if mean luminance information is required or not. If not, the retina can cancel or significantly reduce its energy, thus giving more visibility to higher spatial frequency details.
Basic parameters
----------------
The most simple parameters are the following :
* **colorMode** : let the retina process color information (if 1) or gray scale images (if 0). In this last case, only the first channel of the input will be processed.
* **normaliseOutput** : each channel has this parameter, if value is 1, then the considered channel output is rescaled between 0 and 255. Take care in this case at the Magnocellular output level (motion/transient channel detection). Residual noise will also be rescaled !
**Note :** using color requires color channel multiplexing/demultiplexing which requires more processing. You can expect much faster processing using gray levels : it would require around 30 products per pixel for all the retina processes and it has recently been parallelized for multicore architectures.
Photo-receptors parameters
--------------------------
The following parameters act on the entry point of the retina - photo-receptors - and impact all the following processes. These sensors are low pass spatio-temporal filters that smooth temporal and spatial data and also adjust their sensitivity to local luminance, thus improving detail extraction and high frequency noise canceling.
* **photoreceptorsLocalAdaptationSensitivity** between 0 and 1. Values close to 1 allow high luminance log compression effect at the photo-receptors level. Values closer to 0 give a more linear sensitivity. Increased alone, it can burn the *Parvo (details channel)* output image. If adjusted in collaboration with **ganglionCellsSensitivity** images can be very contrasted whatever the local luminance there is... at the price of a naturalness decrease.
* **photoreceptorsTemporalConstant** this sets up the temporal constant of the low pass filter effect at the entry of the retina. High values lead to a strong temporal smoothing effect : moving objects are blurred and can disappear while static objects are favored. But when starting the retina processing, the stable state is reached later.
* **photoreceptorsSpatialConstant** specifies the spatial constant related to the photo-receptors' low pass filter effect. This parameter specifies the minimum spatial signal period allowed in the following. Typically, this filter should cut high frequency noise. A 0 value doesn't cut any noise, while higher values start to cut high spatial frequencies and progressively lower frequencies... So do not go too high if you want to see some details of the input images ! A good compromise for color images is 0.53 since this won't affect the color spectrum too much. Higher values would lead to gray and blurred output images.
Horizontal cells parameters
---------------------------
This parameter set tunes the neural network connected to the photo-receptors, the horizontal cells. It modulates photo-receptors sensitivity and completes the processing for final spectral whitening (part of the spatial band pass effect thus favoring visual details enhancement).
* **horizontalCellsGain** here is a critical parameter ! If you are not interested in the mean luminance and focus on detail enhancement, then set it to zero. But if you want to keep some environment luminance data, let some low spatial frequencies pass into the system and set a higher value (<1).
* **hcellsTemporalConstant** similar to photo-receptors, this acts on the temporal constant of a low pass temporal filter that smooths input data. Here, a high value generates a strong retina after-effect while a lower value makes the retina more reactive. This value should be lower than **photoreceptorsTemporalConstant** to limit strong retina after-effects.
* **hcellsSpatialConstant** is the spatial constant of these cells' low pass filter. It specifies the lowest spatial frequency allowed in the following. Visually, a high value leads to very low spatial frequency processing and to salient halo effects. Lower values reduce this effect, but the limit is : do not go lower than the value of **photoreceptorsSpatialConstant**. Those 2 parameters actually specify the spatial band-pass of the retina.
**NOTE** after the processing managed by the previous parameters, input data is cleaned from noise and luminance in already partly enhanced. The following parameters act on the last processing stages of the two outing retina signals.
Parvo (details channel) dedicated parameter
-------------------------------------------
* **ganglionCellsSensitivity** specifies the strength of the final local adaptation occurring at the output of this details-dedicated channel. Parameter values remain between 0 and 1. Low values tend to give a linear response while higher values enhance the remaining low-contrast areas.
**Note:** this parameter can correct burned (saturated) images by favoring the low-energy details of the visual scene, even in bright areas.
IPL Magno (motion/transient channel) parameters
-----------------------------------------------
Once the image information is cleaned, this channel acts as a high-pass temporal filter that only retains signals related to transient events (motion, changes, etc.). A low-pass spatial filter smooths the extracted transient data and a final logarithmic compression enhances low-amplitude transient events, thus increasing event sensitivity.
* **parasolCells_beta** can be considered as an amplifier gain at the entry point of this processing stage. Generally set to 0.
* **parasolCells_tau** the temporal smoothing effect that can be added at this stage.
* **parasolCells_k** the spatial constant of the spatial filtering effect; set it to a high value to favor low-spatial-frequency signals, which are less subject to residual noise.
* **amacrinCellsTemporalCutFrequency** specifies the temporal constant of the high-pass filter. High values allow slow transient events to be selected.
* **V0CompressionParameter** specifies the strength of the log compression. Its behavior is similar to the previous description, but here it enhances the sensitivity to transient events.
* **localAdaptintegration_tau** generally set to 0; it has no real use here.
* **localAdaptintegration_k** specifies the size of the area on which local adaptation is performed. Low values lead to short-range local adaptation (higher sensitivity to noise), high values secure the log compression.
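For reference, all of the parameters described above can also be set programmatically on a retina instance. The snippet below is only an illustrative sketch: it assumes the ``cv::bioinspired::Retina`` interface of this period (the same class is exposed as ``cv::Retina`` in the 2.4 *contrib* module) together with its ``setupOPLandIPLParvoChannel`` / ``setupIPLMagnoChannel`` methods; the ``createRetina`` factory call and header paths may differ in your version, so check your installation. The values used are the typical defaults discussed in this tutorial.

.. code-block:: cpp

    #include <opencv2/core/core.hpp>
    #include <opencv2/bioinspired.hpp>  // on plain OpenCV 2.4: <opencv2/contrib/retina.hpp>

    int main()
    {
        // Assumed factory call for this era of the API; adapt it to your version.
        cv::Size inputSize(640, 480);
        cv::Ptr<cv::bioinspired::Retina> retina = cv::bioinspired::createRetina(inputSize);

        // Parvo (details) channel: photo-receptors, horizontal cells, ganglion cells.
        retina->setupOPLandIPLParvoChannel(
            true,   // colorMode
            true,   // normaliseOutput
            0.7f,   // photoreceptorsLocalAdaptationSensitivity
            0.5f,   // photoreceptorsTemporalConstant
            0.53f,  // photoreceptorsSpatialConstant (good compromise for color)
            0.0f,   // horizontalCellsGain (0: mean luminance is cancelled)
            1.0f,   // hcellsTemporalConstant
            7.0f,   // hcellsSpatialConstant
            0.7f);  // ganglionCellsSensitivity

        // IPL Magno (motion/transient) channel.
        retina->setupIPLMagnoChannel(
            true,   // normaliseOutput
            0.0f,   // parasolCells_beta
            0.0f,   // parasolCells_tau
            7.0f,   // parasolCells_k
            1.2f,   // amacrinCellsTemporalCutFrequency
            0.95f,  // V0CompressionParameter
            0.0f,   // localAdaptintegration_tau
            7.0f);  // localAdaptintegration_k

        // Typical per-frame usage once both channels are configured.
        cv::Mat frame(inputSize, CV_8UC3, cv::Scalar::all(0)), parvoOut, magnoOut;
        retina->run(frame);
        retina->getParvo(parvoOut);  // details channel output
        retina->getMagno(magnoOut);  // transient/motion channel output
        return 0;
    }

The retina class also provides ``setup()`` and ``write()`` methods to load and save the same parameter set as an XML/YML file, which is convenient when experimenting with the values described above.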

Binary image file not shown (before: 49 KiB).

@ -1,36 +0,0 @@
.. _Table-Of-Content-Bioinspired:
*bioinspired* module. Algorithms inspired from biological models
----------------------------------------------------------------
Here you will learn how to use additional modules of OpenCV defined in the "bioinspired" module.
.. include:: ../../definitions/tocDefinitions.rst
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
=============== ======================================================
|RetinaDemoImg| **Title:** :ref:`Retina_Model`
*Compatibility:* > OpenCV 2.4
*Author:* |Author_AlexB|
You will learn how to process images and video streams with a model of retina filter for details enhancement, spatio-temporal noise removal, luminance correction and spatio-temporal events detection.
=============== ======================================================
.. |RetinaDemoImg| image:: images/retina_TreeHdr_small.jpg
:height: 90pt
:width: 90pt
.. raw:: latex
\pagebreak
.. toctree::
:hidden:
../retina_model/retina_model

@ -1,36 +0,0 @@
.. _Table-Of-Content-Contrib:
*contrib* module. The additional contributions made available !
----------------------------------------------------------------
Here you will learn how to use additional modules of OpenCV defined in the "contrib" module.
.. include:: ../../definitions/tocDefinitions.rst
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
=============== ======================================================
|RetinaDemoImg| **Title:** :ref:`Retina_Model`
*Compatibility:* > OpenCV 2.4
*Author:* |Author_AlexB|
You will learn how to process images and video streams with a model of retina filter for details enhancement, spatio-temporal noise removal, luminance correction and spatio-temporal events detection.
=============== ======================================================
.. |RetinaDemoImg| image:: images/retina_TreeHdr_small.jpg
:height: 90pt
:width: 90pt
.. raw:: latex
\pagebreak
.. toctree::
:hidden:
../retina_model/retina_model

@ -11,5 +11,6 @@
.. |Author_EricCh| unicode:: Eric U+0020 Christiansen
.. |Author_AndreyP| unicode:: Andrey U+0020 Pavlenko
.. |Author_AlexS| unicode:: Alexander U+0020 Smorkalov
.. |Author_MimmoC| unicode:: Mimmo U+0020 Cosenza
.. |Author_BarisD| unicode:: Bar U+0131 U+015F U+0020 Evrim U+0020 Demir U+00F6 z
.. |Author_DomenicoB| unicode:: Domenico U+0020 Daniele U+0020 Bloisi

@ -85,7 +85,7 @@ d. **method=CV\_TM\_CCORR\_NORMED**
.. math::
R(x,y)= \frac{\sum_{x',y'} (T(x',y') \cdot I'(x+x',y+y'))}{\sqrt{\sum_{x',y'}T(x',y')^2 \cdot \sum_{x',y'} I(x+x',y+y')^2}}
R(x,y)= \frac{\sum_{x',y'} (T(x',y') \cdot I(x+x',y+y'))}{\sqrt{\sum_{x',y'}T(x',y')^2 \cdot \sum_{x',y'} I(x+x',y+y')^2}}
e. **method=CV\_TM\_CCOEFF**

@ -22,13 +22,13 @@ Hough Circle Transform
C : ( x_{center}, y_{center}, r )
where :math:`(x_{center}, y_{center})` define the center position (gree point) and :math:`r` is the radius, which allows us to completely define a circle, as it can be seen below:
where :math:`(x_{center}, y_{center})` define the center position (green point) and :math:`r` is the radius, which allows us to completely define a circle, as it can be seen below:
.. image:: images/Hough_Circle_Tutorial_Theory_0.jpg
:alt: Result of detecting circles with Hough Transform
:align: center
* For sake of efficiency, OpenCV implements a detection method slightly trickier than the standard Hough Transform: *The Hough gradient method*. For more details, please check the book *Learning OpenCV* or your favorite Computer Vision bibliography
* For sake of efficiency, OpenCV implements a detection method slightly trickier than the standard Hough Transform: *The Hough gradient method*, which is made up of two main stages. The first stage involves edge detection and finding the possible circle centers and the second stage finds the best radius for each candidate center. For more details, please check the book *Learning OpenCV* or your favorite Computer Vision bibliography
Code
======
@ -44,7 +44,7 @@ Code
.. |TutorialHoughCirclesFancyDownload| replace:: here
.. _TutorialHoughCirclesFancyDownload: https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/HoughCircle_Demo.cpp
#. The sample code that we will explain can be downloaded from |TutorialHoughCirclesSimpleDownload|_. A slightly fancier version (which shows both Hough standard and probabilistic with trackbars for changing the threshold values) can be found |TutorialHoughCirclesFancyDownload|_.
#. The sample code that we will explain can be downloaded from |TutorialHoughCirclesSimpleDownload|_. A slightly fancier version (which shows trackbars for changing the threshold values) can be found |TutorialHoughCirclesFancyDownload|_.
.. code-block:: cpp
@ -132,15 +132,15 @@ Explanation
with the arguments (an illustrative call is sketched right after this list):
* *src_gray*: Input image (grayscale)
* *src_gray*: Input image (grayscale).
* *circles*: A vector that stores sets of 3 values: :math:`x_{c}, y_{c}, r` for each detected circle.
* *CV_HOUGH_GRADIENT*: Define the detection method. Currently this is the only one available in OpenCV
* *dp = 1*: The inverse ratio of resolution
* *min_dist = src_gray.rows/8*: Minimum distance between detected centers
* *param_1 = 200*: Upper threshold for the internal Canny edge detector
* *CV_HOUGH_GRADIENT*: Define the detection method. Currently this is the only one available in OpenCV.
* *dp = 1*: The inverse ratio of resolution.
* *min_dist = src_gray.rows/8*: Minimum distance between detected centers.
* *param_1 = 200*: Upper threshold for the internal Canny edge detector.
* *param_2* = 100*: Threshold for center detection.
* *min_radius = 0*: Minimum radio to be detected. If unknown, put zero as default.
* *max_radius = 0*: Maximum radius to be detected. If unknown, put zero as default
* *max_radius = 0*: Maximum radius to be detected. If unknown, put zero as default.
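Putting these values together, the call that the listed arguments describe looks essentially like the sketch below; ``src_gray`` and the required includes come from the sample code shown earlier in this tutorial.

.. code-block:: cpp

    // Illustrative call matching the arguments listed above; 'src_gray' is the
    // grayscale, smoothed input image prepared earlier in the sample.
    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(src_gray, circles, CV_HOUGH_GRADIENT,
                     1,                // dp
                     src_gray.rows/8,  // min_dist
                     200,              // param_1: upper threshold of the internal Canny edge detector
                     100,              // param_2: threshold for center detection
                     0, 0);            // min_radius, max_radius (0 = unknown)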
#. Draw the detected circles:

@ -48,10 +48,10 @@ The structure of package contents looks as follows:
::
OpenCV-2.4.7-android-sdk
OpenCV-2.4.8-android-sdk
|_ apk
| |_ OpenCV_2.4.7_binary_pack_armv7a.apk
| |_ OpenCV_2.4.7_Manager_2.14_XXX.apk
| |_ OpenCV_2.4.8_binary_pack_armv7a.apk
| |_ OpenCV_2.4.8_Manager_2.16_XXX.apk
|
|_ doc
|_ samples
@ -66,7 +66,7 @@ The structure of package contents looks as follows:
| |_ armeabi-v7a
| |_ x86
|
|_ license.txt
|_ LICENSE
|_ README.android
* :file:`sdk` folder contains OpenCV API and libraries for Android:
@ -157,10 +157,10 @@ Get the OpenCV4Android SDK
.. code-block:: bash
unzip ~/Downloads/OpenCV-2.4.7-android-sdk.zip
unzip ~/Downloads/OpenCV-2.4.8-android-sdk.zip
.. |opencv_android_bin_pack| replace:: :file:`OpenCV-2.4.7-android-sdk.zip`
.. _opencv_android_bin_pack_url: http://sourceforge.net/projects/opencvlibrary/files/opencv-android/2.4.7/OpenCV-2.4.7-android-sdk.zip/download
.. |opencv_android_bin_pack| replace:: :file:`OpenCV-2.4.8-android-sdk.zip`
.. _opencv_android_bin_pack_url: http://sourceforge.net/projects/opencvlibrary/files/opencv-android/2.4.8/OpenCV-2.4.8-android-sdk.zip/download
.. |opencv_android_bin_pack_url| replace:: |opencv_android_bin_pack|
.. |seven_zip| replace:: 7-Zip
.. _seven_zip: http://www.7-zip.org/
@ -295,7 +295,7 @@ Well, running samples from Eclipse is very simple:
.. code-block:: sh
:linenos:
<Android SDK path>/platform-tools/adb install <OpenCV4Android SDK path>/apk/OpenCV_2.4.7_Manager_2.14_armv7a-neon.apk
<Android SDK path>/platform-tools/adb install <OpenCV4Android SDK path>/apk/OpenCV_2.4.8_Manager_2.16_armv7a-neon.apk
.. note:: ``armeabi``, ``armv7a-neon``, ``arm7a-neon-android8``, ``mips`` and ``x86`` stand for
platform targets:

@ -55,14 +55,14 @@ Manager to access OpenCV libraries externally installed in the target system.
:guilabel:`File -> Import -> Existing project in your workspace`.
Press :guilabel:`Browse` button and locate OpenCV4Android SDK
(:file:`OpenCV-2.4.7-android-sdk/sdk`).
(:file:`OpenCV-2.4.8-android-sdk/sdk`).
.. image:: images/eclipse_opencv_dependency0.png
:alt: Add dependency from OpenCV library
:align: center
#. In application project add a reference to the OpenCV Java SDK in
:guilabel:`Project -> Properties -> Android -> Library -> Add` select ``OpenCV Library - 2.4.7``.
:guilabel:`Project -> Properties -> Android -> Library -> Add` select ``OpenCV Library - 2.4.8``.
.. image:: images/eclipse_opencv_dependency1.png
:alt: Add dependency from OpenCV library
@ -128,27 +128,27 @@ described above.
#. Add the OpenCV library project to your workspace the same way as for the async initialization
above. Use menu :guilabel:`File -> Import -> Existing project in your workspace`,
press :guilabel:`Browse` button and select OpenCV SDK path
(:file:`OpenCV-2.4.7-android-sdk/sdk`).
(:file:`OpenCV-2.4.8-android-sdk/sdk`).
.. image:: images/eclipse_opencv_dependency0.png
:alt: Add dependency from OpenCV library
:align: center
#. In the application project add a reference to the OpenCV4Android SDK in
:guilabel:`Project -> Properties -> Android -> Library -> Add` select ``OpenCV Library - 2.4.7``;
:guilabel:`Project -> Properties -> Android -> Library -> Add` select ``OpenCV Library - 2.4.8``;
.. image:: images/eclipse_opencv_dependency1.png
:alt: Add dependency from OpenCV library
:align: center
#. If your application project **doesn't have a JNI part**, just copy the corresponding OpenCV
native libs from :file:`<OpenCV-2.4.7-android-sdk>/sdk/native/libs/<target_arch>` to your
native libs from :file:`<OpenCV-2.4.8-android-sdk>/sdk/native/libs/<target_arch>` to your
project directory to folder :file:`libs/<target_arch>`.
In case of the application project **with a JNI part**, instead of manual libraries copying you
need to modify your ``Android.mk`` file:
add the following two code lines after the ``"include $(CLEAR_VARS)"`` and before
``"include path_to_OpenCV-2.4.7-android-sdk/sdk/native/jni/OpenCV.mk"``
``"include path_to_OpenCV-2.4.8-android-sdk/sdk/native/jni/OpenCV.mk"``
.. code-block:: make
:linenos:
@ -221,7 +221,7 @@ taken:
.. code-block:: make
include C:\Work\OpenCV4Android\OpenCV-2.4.7-android-sdk\sdk\native\jni\OpenCV.mk
include C:\Work\OpenCV4Android\OpenCV-2.4.8-android-sdk\sdk\native\jni\OpenCV.mk
Should be inserted into the :file:`jni/Android.mk` file **after** this line:
@ -382,7 +382,7 @@ result.
OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_6, this, mLoaderCallback);
}
#. Defines that your activity implements ``CvViewFrameListener2`` interface and fix activity related
#. Defines that your activity implements ``CvCameraViewListener2`` interface and fix activity related
errors by defining missed methods. For this activity define ``onCreate``, ``onDestroy`` and
``onPause`` and implement them according code snippet bellow. Fix errors by adding requited
imports.
@ -432,7 +432,7 @@ result.
Lets discuss some most important steps. Every Android application with UI must implement Activity
and View. By the first steps we create blank activity and default view layout. The simplest
OpenCV-centric application must implement OpenCV initialization, create its own view to show
preview from camera and implements ``CvViewFrameListener2`` interface to get frames from camera and
preview from camera and implements ``CvCameraViewListener2`` interface to get frames from camera and
process it.
First of all we create our application view using xml layout. Our layout consists of the only

@ -0,0 +1,728 @@
.. _clojure_dev_intro:
Introduction to OpenCV Development with Clojure
***********************************************
As of OpenCV 2.4.4, OpenCV supports desktop Java development using
nearly the same interface as for Android development.
`Clojure <http://clojure.org/>`_ is a contemporary LISP dialect hosted
by the Java Virtual Machine and it offers complete interoperability
with the underlying JVM. This means that we should even be able to use
the Clojure REPL (Read Eval Print Loop) as an interactive programmable
interface to the underlying OpenCV engine.
What we'll do in this tutorial
==============================
This tutorial will help you set up a basic Clojure environment
for interactively learning OpenCV within the fully programmable
Clojure REPL.
Tutorial source code
--------------------
You can find a runnable source code of the sample in the
:file:`samples/java/clojure/simple-sample` folder of the OpenCV
repository. After having installed OpenCV and Clojure as explained in
the tutorial, issue the following command to run the sample from the
command line.
.. code:: bash
cd path/to/samples/java/clojure/simple-sample
lein run
Preamble
========
For detailed instructions on installing OpenCV with desktop Java support
refer to the `corresponding tutorial <http://docs.opencv.org/2.4.4-beta/doc/tutorials/introduction/desktop_java/java_dev_intro.html>`_.
If you are in a hurry, here is a minimal quick start guide to install
OpenCV on Mac OS X:
NOTE 1: I'm assuming you have already installed
`xcode <https://developer.apple.com/xcode/>`_,
`jdk <http://www.oracle.com/technetwork/java/javase/downloads/index.html>`_
and `CMake <http://www.cmake.org/cmake/resources/software.html>`_.
.. code:: bash
cd ~/
mkdir opt
git clone https://github.com/Itseez/opencv.git
cd opencv
git checkout 2.4
mkdir build
cd build
cmake -DBUILD_SHARED_LIBS=OFF ..
...
...
make -j8
# optional
# make install
Install Leiningen
=================
Once you have installed OpenCV with desktop Java support, the only other
requirement is to install
`Leiningen <https://github.com/technomancy/leiningen>`_, which allows
you to manage the entire life cycle of your CLJ projects.
The available `installation guide <https://github.com/technomancy/leiningen#installation>`_ is very easy to follow:
1. `Download the script <https://raw.github.com/technomancy/leiningen/stable/bin/lein>`_
2. Place it on your ``$PATH`` (e.g. ``~/bin`` is a good choice if it is
on your ``PATH``).
3. Make the script executable (e.g. ``chmod 755 ~/bin/lein``).
If you work on Windows, follow `these instructions <https://github.com/technomancy/leiningen#windows>`_
You now have both the OpenCV library and a fully installed basic Clojure
environment. What is now needed is to configure the Clojure environment
to interact with the OpenCV library.
Install the localrepo Leiningen plugin
=======================================
The set of commands (tasks in Leiningen parlance) natively supported by
Leiningen can easily be extended by various plugins. One of them is
the `lein-localrepo <https://github.com/kumarshantanu/lein-localrepo>`_
plugin, which allows you to install any jar lib as an artifact in the local
maven repository of your machine (typically in the ``~/.m2/repository``
directory under your home directory).
We're going to use this ``lein`` plugin to add to the local maven
repository the OpenCV components needed by Java and Clojure to use the
OpenCV lib.
Generally speaking, if you want to use a plugin on a per-project basis only, it
can be added directly to a CLJ project created by ``lein``.
Instead, when you want a plugin to be available to any CLJ project in
your user space, you can add it to the ``profiles.clj`` in the
``~/.lein/`` directory.
The ``lein-localrepo`` plugin will be useful to me in other CLJ
projects where I need to call native libs wrapped by a Java interface,
so I decided to make it available to any CLJ project:
.. code:: bash
mkdir ~/.lein
Create a file named ``profiles.clj`` in the ``~/.lein`` directory and
copy into it the following content:
.. code:: clojure
{:user {:plugins [[lein-localrepo "0.5.2"]]}}
Here we're saying that the ``"0.5.2"`` release of the
``lein-localrepo`` plugin will be available to the ``:user`` profile for
any CLJ project created by ``lein``.
You do not need to do anything else to install the plugin because it
will be automatically downloaded from a remote repository the very first
time you issue any ``lein`` task.
Install the java specific libs as local repository
==================================================
If you followed the standard documentation for installing OpenCV on your
computer, you should find the following two libs under the directory
where you built OpenCV:
- the ``build/bin/opencv-247.jar`` java lib
- the ``build/lib/libopencv_java247.dylib`` native lib (or ``.so`` if
you built OpenCV on a GNU/Linux OS)
They are the only opencv libs needed by the JVM to interact with OpenCV.
Set aside the needed opencv libs
---------------------------------
Create a new directory to store the above two libs in. Start by copying
the ``opencv-247.jar`` lib into it.
.. code:: bash
cd ~/opt
mkdir clj-opencv
cd clj-opencv
cp ~/opt/opencv/build/bin/opencv-247.jar .
First lib done.
Now, to be able to add the ``libopencv_java247.dylib`` shared native lib
to the local maven repository, we first need to package it as a jar
file.
The native lib has to be copied into a directory layout which mimics
the names of your operating system and architecture. I'm using a Mac OS
X with an x86 64-bit architecture, so my layout will be the following:
.. code:: bash
mkdir -p native/macosx/x86_64
Copy into the ``x86_64`` directory the ``libopencv_java247.dylib`` lib.
.. code:: bash
cp ~/opt/opencv/build/lib/libopencv_java247.dylib native/macosx/x86_64/
If you're running OpenCV from a different OS/Architecture pair, here
is a summary of the mapping you can choose from.
.. code:: bash
OS
Mac OS X -> macosx
Windows -> windows
Linux -> linux
SunOS -> solaris
Architectures
amd64 -> x86_64
x86_64 -> x86_64
x86 -> x86
i386 -> x86
arm -> arm
sparc -> sparc
Package the native lib as a jar
-------------------------------
Next you need to package the native lib in a jar file by using the
``jar`` command to create a new jar file from a directory.
.. code:: bash
jar -cMf opencv-native-247.jar native
Note that the ``M`` option instructs the ``jar`` command not to create
a MANIFEST file for the artifact.
Your directory layout should look like the following:
.. code:: bash
tree
.
|__ native
|   |__ macosx
|   |__ x86_64
|   |__ libopencv_java247.dylib
|
|__ opencv-247.jar
|__ opencv-native-247.jar
3 directories, 3 files
Locally install the jars
------------------------
We are now ready to add the two jars as artifacts to the local maven
repository with the help of the ``lein-localrepo`` plugin.
.. code:: bash
lein localrepo install opencv-247.jar opencv/opencv 2.4.7
Here the ``localrepo install`` task creates the ``2.4.7`` release of
the ``opencv/opencv`` maven artifact from the ``opencv-247.jar`` lib and
then installs it into the local maven repository. The ``opencv/opencv``
artifact will then be available to any maven-compliant project
(Leiningen is internally based on maven).
Do the same thing with the native lib previously wrapped in a new jar
file.
.. code:: bash
lein localrepo install opencv-native-247.jar opencv/opencv-native 2.4.7
Note that the groupId, ``opencv``, of the two artifacts is the same. We
are now ready to create a new CLJ project to start interacting with
OpenCV.
Create a project
----------------
Create a new CLJ project by using the ``lein new`` task from the
terminal.
.. code:: bash
# cd in the directory where you work with your development projects (e.g. ~/devel)
lein new simple-sample
Generating a project called simple-sample based on the 'default' template.
To see other templates (app, lein plugin, etc), try `lein help new`.
The above task creates the following ``simple-sample`` directory
layout:
.. code:: bash
tree simple-sample/
simple-sample/
|__ LICENSE
|__ README.md
|__ doc
|   |__ intro.md
|
|__ project.clj
|__ resources
|__ src
|   |__ simple_sample
|   |__ core.clj
|__ test
|__ simple_sample
|__ core_test.clj
6 directories, 6 files
We need to add the two ``opencv`` artifacts as dependencies of the newly
created project. Open the ``project.clj`` and modify its dependencies
section as follows:
.. code:: clojure
(defproject simple-sample "0.1.0-SNAPSHOT"
:description "FIXME: write description"
:url "http://example.com/FIXME"
:license {:name "Eclipse Public License"
:url "http://www.eclipse.org/legal/epl-v10.html"}
:dependencies [[org.clojure/clojure "1.5.1"]
[opencv/opencv "2.4.7"] ; added line
[opencv/opencv-native "2.4.7"]]) ;added line
Note that the Clojure programming language is a jar artifact too. This
is why Clojure is called a hosted language.
To verify that everything went right, issue the ``lein deps`` task. The
very first time you run a ``lein`` task it will take some time to
download all the required dependencies before executing the task
itself.
.. code:: bash
cd simple-sample
lein deps
...
The ``deps`` task reads and merges from the ``project.clj`` and the
``~/.lein/profiles.clj`` files all the dependencies of the
``simple-sample`` project and verifies if they have already been
cached in the local maven repository. If the task returns without
messages about not being able to retrieve the two new artifacts your
installation is correct, otherwise go back and double check that you
did everything right.
REPLing with OpenCV
-------------------
Now ``cd`` in the ``simple-sample`` directory and issue the following
``lein`` task:
.. code:: bash
cd simple-sample
lein repl
...
...
nREPL server started on port 50907 on host 127.0.0.1
REPL-y 0.3.0
Clojure 1.5.1
Docs: (doc function-name-here)
(find-doc "part-of-name-here")
Source: (source function-name-here)
Javadoc: (javadoc java-object-or-class-here)
Exit: Control+D or (exit) or (quit)
Results: Stored in vars *1, *2, *3, an exception in *e
user=>
You can immediately interact with the REPL by issuing any CLJ expression
to be evaluated.
.. code:: clojure
user=> (+ 41 1)
42
user=> (println "Hello, OpenCV!")
Hello, OpenCV!
nil
user=> (defn foo [] (str "bar"))
#'user/foo
user=> (foo)
"bar"
When run from the home directory of a lein-based project, the
``lein repl`` task automatically loads all the project dependencies, but you
still need to load the opencv native library to be able to interact with
OpenCV.
.. code:: clojure
user=> (clojure.lang.RT/loadLibrary org.opencv.core.Core/NATIVE_LIBRARY_NAME)
nil
Then you can start interacting with OpenCV by just referencing the fully
qualified names of its classes.
NOTE 2: `Here <http://docs.opencv.org/java/>`_ you can find the
full OpenCV Java API.
.. code:: clojure
user=> (org.opencv.core.Point. 0 0)
#<Point {0.0, 0.0}>
Here we created a two-dimensional opencv ``Point`` instance. Even though all
the java packages included within the java interface to OpenCV are
immediately available from the CLJ REPL, it's very annoying to prefix
the ``Point.`` instance constructors with the fully qualified package
name.
Fortunately CLJ offers a very easy way to overcome this annoyance by
directly importing the ``Point`` class.
.. code:: clojure
user=> (import 'org.opencv.core.Point)
org.opencv.core.Point
user=> (def p1 (Point. 0 0))
#'user/p1
user=> p1
#<Point {0.0, 0.0}>
user=> (def p2 (Point. 100 100))
#'user/p2
We can even inspect the class of an instance and verify whether the value of
a symbol is an instance of the ``Point`` java class.
.. code:: clojure
user=> (class p1)
org.opencv.core.Point
user=> (instance? org.opencv.core.Point p1)
true
If we now want to use the opencv ``Rect`` class to create a rectangle,
we again have to fully qualify its constructor even if it lives in
the same ``org.opencv.core`` package as the ``Point`` class.
.. code:: clojure
user=> (org.opencv.core.Rect. p1 p2)
#<Rect {0, 0, 100x100}>
Again, the CLJ importing facilities are very handy and let you map
several symbols in one shot.
.. code:: clojure
user=> (import '[org.opencv.core Point Rect Size])
org.opencv.core.Size
user=> (def r1 (Rect. p1 p2))
#'user/r1
user=> r1
#<Rect {0, 0, 100x100}>
user=> (class r1)
org.opencv.core.Rect
user=> (instance? org.opencv.core.Rect r1)
true
user=> (Size. 100 100)
#<Size 100x100>
user=> (def sq-100 (Size. 100 100))
#'user/sq-100
user=> (class sq-100)
org.opencv.core.Size
user=> (instance? org.opencv.core.Size sq-100)
true
Obviously you can call methods on instances as well.
.. code:: clojure
user=> (.area r1)
10000.0
user=> (.area sq-100)
10000.0
Or modify the value of a member field.
.. code:: clojure
user=> (set! (.x p1) 10)
10
user=> p1
#<Point {10.0, 0.0}>
user=> (set! (.width sq-100) 10)
10
user=> (set! (.height sq-100) 10)
10
user=> (.area sq-100)
100.0
If you do not remember an OpenCV class's behavior, the
REPL gives you the opportunity to easily search the corresponding
javadoc documentation:
.. code:: clojure
user=> (javadoc Rect)
"http://www.google.com/search?btnI=I%27m%20Feeling%20Lucky&q=allinurl:org/opencv/core/Rect.html"
Mimic the OpenCV Java Tutorial Sample in the REPL
-------------------------------------------------
Let's now try to port to Clojure the `opencv java tutorial sample <http://docs.opencv.org/2.4.4-beta/doc/tutorials/introduction/desktop_java/java_dev_intro.html>`_.
Instead of writing it in a source file we're going to evaluate it at the
REPL.
Following is the original Java source code of the cited sample.
.. code:: java
import org.opencv.core.Mat;
import org.opencv.core.CvType;
import org.opencv.core.Scalar;
class SimpleSample {
static{ System.loadLibrary("opencv_java244"); }
public static void main(String[] args) {
Mat m = new Mat(5, 10, CvType.CV_8UC1, new Scalar(0));
System.out.println("OpenCV Mat: " + m);
Mat mr1 = m.row(1);
mr1.setTo(new Scalar(1));
Mat mc5 = m.col(5);
mc5.setTo(new Scalar(5));
System.out.println("OpenCV Mat data:\n" + m.dump());
}
}
Add injections to the project
-----------------------------
Before we start coding, we'd like to eliminate the tedious need to
interactively load the native opencv lib every time we start a new REPL
to interact with it.
First, stop the REPL by evaluating the ``(exit)`` expression at the REPL
prompt.
.. code:: clojure
user=> (exit)
Bye for now!
Then open your ``project.clj`` file and edit it as follows:
.. code:: clojure
(defproject simple-sample "0.1.0-SNAPSHOT"
...
:injections [(clojure.lang.RT/loadLibrary org.opencv.core.Core/NATIVE_LIBRARY_NAME)])
Here we're telling lein to load the opencv native lib every time we run the REPL,
so that we no longer have to remember to do it manually.
Rerun the ``lein repl`` task:
.. code:: bash
lein repl
nREPL server started on port 51645 on host 127.0.0.1
REPL-y 0.3.0
Clojure 1.5.1
Docs: (doc function-name-here)
(find-doc "part-of-name-here")
Source: (source function-name-here)
Javadoc: (javadoc java-object-or-class-here)
Exit: Control+D or (exit) or (quit)
Results: Stored in vars *1, *2, *3, an exception in *e
user=>
Import the OpenCV java interfaces we are interested in.
.. code:: clojure
user=> (import '[org.opencv.core Mat CvType Scalar])
org.opencv.core.Scalar
We're going to mimic almost verbatim the original OpenCV java tutorial
to:
- create a 5x10 matrix with all its elements initialized to 0
- change the value of every element of the second row to 1
- change the value of every element of the 6th column to 5
- print the content of the obtained matrix
.. code:: clojure
user=> (def m (Mat. 5 10 CvType/CV_8UC1 (Scalar. 0 0)))
#'user/m
user=> (def mr1 (.row m 1))
#'user/mr1
user=> (.setTo mr1 (Scalar. 1 0))
#<Mat Mat [ 1*10*CV_8UC1, isCont=true, isSubmat=true, nativeObj=0x7fc9dac49880, dataAddr=0x7fc9d9c98d5a ]>
user=> (def mc5 (.col m 5))
#'user/mc5
user=> (.setTo mc5 (Scalar. 5 0))
#<Mat Mat [ 5*1*CV_8UC1, isCont=false, isSubmat=true, nativeObj=0x7fc9d9c995a0, dataAddr=0x7fc9d9c98d55 ]>
user=> (println (.dump m))
[0, 0, 0, 0, 0, 5, 0, 0, 0, 0;
1, 1, 1, 1, 1, 5, 1, 1, 1, 1;
0, 0, 0, 0, 0, 5, 0, 0, 0, 0;
0, 0, 0, 0, 0, 5, 0, 0, 0, 0;
0, 0, 0, 0, 0, 5, 0, 0, 0, 0]
nil
If you are accustomed to a functional language, all these abused and
mutating nouns are going to irritate your preference for verbs. Even
though the CLJ interop syntax is very handy and complete, there is still
an impedance mismatch between any OOP language and any FP language
(Scala being a mixed-paradigm programming language).
To exit the REPL type ``(exit)``, ``Ctrl+D`` or ``(quit)`` at the REPL
prompt.
.. code:: clojure
user=> (exit)
Bye for now!
Interactively load and blur an image
------------------------------------
In the next sample you will learn how to interactively load and blur an
image from the REPL by using the following OpenCV methods:
- the ``imread`` static method from the ``Highgui`` class to read an
image from a file
- the ``imwrite`` static method from the ``Highgui`` class to write an
image to a file
- the ``GaussianBlur`` static method from the ``Imgproc`` class to
blur the original image
We're also going to use the ``Mat`` class, which is returned from the
``imread`` method and accepted as the main argument to both the
``GaussianBlur`` and the ``imwrite`` methods.
Add an image to the project
---------------------------
First we want to add an image file to a newly created directory for
storing the static resources of the project.
.. image:: images/lena.png
:alt: Original Image
:align: center
.. code:: bash
mkdir -p resources/images
cp ~/opt/opencv/doc/tutorials/introduction/desktop_java/images/lena.png resources/images/
Read the image
--------------
Now launch the REPL as usual and start by importing all the OpenCV
classes we're going to use:
.. code:: clojure
lein repl
nREPL server started on port 50624 on host 127.0.0.1
REPL-y 0.3.0
Clojure 1.5.1
Docs: (doc function-name-here)
(find-doc "part-of-name-here")
Source: (source function-name-here)
Javadoc: (javadoc java-object-or-class-here)
Exit: Control+D or (exit) or (quit)
Results: Stored in vars *1, *2, *3, an exception in *e
user=> (import '[org.opencv.core Mat Size CvType]
'[org.opencv.highgui Highgui]
'[org.opencv.imgproc Imgproc])
org.opencv.imgproc.Imgproc
Now read the image from the ``resources/images/lena.png`` file.
.. code:: clojure
user=> (def lena (Highgui/imread "resources/images/lena.png"))
#'user/lena
user=> lena
#<Mat Mat [ 512*512*CV_8UC3, isCont=true, isSubmat=false, nativeObj=0x7f9ab3054c40, dataAddr=0x19fea9010 ]>
As you see, by simply evaluating the ``lena`` symbol we know that
``lena.png`` is a ``512x512`` matrix of ``CV_8UC3`` element type. Let's
create a new ``Mat`` instance of the same dimensions and element type.
.. code:: clojure
user=> (def blurred (Mat. 512 512 CvType/CV_8UC3))
#'user/blurred
user=>
Now apply a ``GaussianBlur`` filter using ``lena`` as the source matrix
and ``blurred`` as the destination matrix.
.. code:: clojure
user=> (Imgproc/GaussianBlur lena blurred (Size. 5 5) 3 3)
nil
As a last step just save the ``blurred`` matrix in a new image file.
.. code:: clojure
user=> (Highgui/imwrite "resources/images/blurred.png" blurred)
true
user=> (exit)
Bye for now!
Following is the new blurred image of Lena.
.. image:: images/blurred.png
:alt: Blurred Image
:align: center
Next Steps
==========
This tutorial only introduces the very basic environment setup needed to
interact with OpenCV in a CLJ REPL.
I recommend that any Clojure newbie read the `Clojure Java Interop chapter <http://clojure.org/java_interop>`_ to get all you need to know
to interoperate with any plain java lib that has not been wrapped in
Clojure to make it usable in a more idiomatic and functional way within
Clojure.
The OpenCV Java API does not wrap the ``highgui`` module
functionalities that depend on ``Qt`` (e.g. ``namedWindow`` and
``imshow``). If you want to create windows and show images in them
while interacting with OpenCV from the REPL, at the moment you're on
your own. You could use Java Swing to fill the gap.
License
-------
Copyright © 2013 Giacomo (Mimmo) Cosenza aka Magomimmo
Distributed under the BSD 3-clause License, the same as OpenCV.

Binary image file not shown (after: 351 KiB).

Binary image file not shown (after: 606 KiB).

@ -106,8 +106,8 @@ Enable hardware optimizations
-----------------------------
Depending on target platform architecture different instruction sets can be used. By default
compiler generates code for armv5l without VFPv3 and NEON extensions. Add ``-DUSE_VFPV3=ON``
to cmake command line to enable code generation for VFPv3 and ``-DUSE_NEON=ON`` for using
compiler generates code for armv5l without VFPv3 and NEON extensions. Add ``-DENABLE_VFPV3=ON``
to cmake command line to enable code generation for VFPv3 and ``-DENABLE_NEON=ON`` for using
NEON SIMD extensions.
TBB is supported on multi core ARM SoCs also.

@ -30,19 +30,24 @@ Let's use a simple program such as DisplayImage.cpp shown below.
using namespace cv;
int main( int argc, char** argv )
int main(int argc, char** argv )
{
if ( argc != 2 )
{
printf("usage: DisplayImage.out <Image_Path>\n");
return -1;
}
Mat image;
image = imread( argv[1], 1 );
if( argc != 2 || !image.data )
if ( !image.data )
{
printf( "No image data \n" );
printf("No image data \n");
return -1;
}
namedWindow( "Display Image", CV_WINDOW_AUTOSIZE );
imshow( "Display Image", image );
namedWindow("Display Image", WINDOW_AUTOSIZE );
imshow("Display Image", image);
waitKey(0);

@ -13,10 +13,10 @@ Required Packages
sudo apt-get install build-essential
* CMake 2.6 or higher;
* CMake 2.8.7 or higher;
* Git;
* GTK+2.x or higher, including headers (libgtk2.0-dev);
* pkgconfig;
* pkg-config;
* Python 2.6 or later and Numpy 1.5 or later with developer packages (python-dev, python-numpy);
* ffmpeg or libav development packages: libavcodec-dev, libavformat-dev, libswscale-dev;
* [optional] libdc1394 2.x;
@ -74,7 +74,8 @@ Building OpenCV from Source Using CMake, Using the Command Line
.. code-block:: bash
make
make -j8 # -j8 runs 8 jobs in parallel.
# Change 8 to number of hardware threads available.
sudo make install
.. note::

@ -99,7 +99,7 @@ Explanation
imshow( imageName, image );
imshow( "Gray image", gray_image );
#. Add add the *waitKey(0)* function call for the program to wait forever for an user key press.
#. Add the *waitKey(0)* function call for the program to wait forever for an user key press.
Result

Binary image file not shown (after: 7.7 KiB).

@ -156,6 +156,21 @@ world of the OpenCV.
:height: 90pt
:width: 90pt
================ =================================================
|ClojureLogo| **Title:** :ref:`clojure_dev_intro`
*Compatibility:* > OpenCV 2.4.4
*Author:* |Author_MimmoC|
A tutorial on how to interactively use OpenCV from the Clojure REPL.
================ =================================================
.. |ClojureLogo| image:: images/clojure-logo.png
:height: 90pt
:width: 90pt
* **Android**
.. tabularcolumns:: m{100pt} m{300pt}
@ -314,6 +329,7 @@ world of the OpenCV.
../windows_visual_studio_image_watch/windows_visual_studio_image_watch
../desktop_java/java_dev_intro
../java_eclipse/java_eclipse
../clojure_dev_intro/clojure_dev_intro
../android_binary_package/android_dev_intro
../android_binary_package/O4A_SDK
../android_binary_package/dev_with_OCV_on_Android

@ -81,7 +81,7 @@ Building the OpenCV library from scratch requires a couple of tools installed be
+ An IDE of choice (preferably), or just a C\C++ compiler that will actually make the binary files. Here we will use the `Microsoft Visual Studio <https://www.microsoft.com/visualstudio/en-us>`_. However, you can use any other IDE that has a valid C\C++ compiler.
+ |CMake|_, which is a neat tool to make the project files (for your choosen IDE) from the OpenCV source files. It will also allow an easy configuration of the OpenCV build files, in order to make binary files that fits exactly to your needs.
+ |CMake|_, which is a neat tool to make the project files (for your chosen IDE) from the OpenCV source files. It will also allow an easy configuration of the OpenCV build files, in order to make binary files that fits exactly to your needs.
+ Git to acquire the OpenCV source files. A good tool for this is |TortoiseGit|_. Alternatively, you can just download an archived version of the source files from our `page on Sourceforge <http://sourceforge.net/projects/opencvlibrary/files/opencv-win/>`_
@ -320,7 +320,7 @@ First we set an enviroment variable to make easier our work. This will hold the
Here the directory is where you have your OpenCV binaries (*extracted* or *built*). You can have different platform (e.g. x64 instead of x86) or compiler type, so substitute appropriate value. Inside this you should have two folders called *lib* and *bin*. The -m should be added if you wish to make the settings computer wise, instead of user wise.
If you built static libraries then you are done. Otherwise, you need to add the *bin* folders path to the systems path. This is cause you will use the OpenCV library in form of *\"Dynamic-link libraries\"* (also known as **DLL**). Inside these are stored all the algorithms and information the OpenCV library contains. The operating system will load them only on demand, during runtime. However, to do this he needs to know where they are. The systems **PATH** contains a list of folders where DLLs can be found. Add the OpenCV library path to this and the OS will know where to look if he ever needs the OpenCV binaries. Otherwise, you will need to copy the used DLLs right beside the applications executable file (*exe*) for the OS to find it, which is highly unpleasent if you work on many projects. To do this start up again the |PathEditor|_ and add the following new entry (right click in the application to bring up the menu):
If you built static libraries then you are done. Otherwise, you need to add the *bin* folders path to the systems path. This is because you will use the OpenCV library in form of *\"Dynamic-link libraries\"* (also known as **DLL**). Inside these are stored all the algorithms and information the OpenCV library contains. The operating system will load them only on demand, during runtime. However, to do this the operating system needs to know where they are. The systems **PATH** contains a list of folders where DLLs can be found. Add the OpenCV library path to this and the OS will know where to look if he ever needs the OpenCV binaries. Otherwise, you will need to copy the used DLLs right beside the applications executable file (*exe*) for the OS to find it, which is highly unpleasent if you work on many projects. To do this start up again the |PathEditor|_ and add the following new entry (right click in the application to bring up the menu):
::

Some files were not shown because too many files have changed in this diff.
