Merge remote-tracking branch 'upstream/3.4' into merge-3.4

pull/20516/head
Alexander Alekhin 4 years ago
commit 424eaba4c5
Changed files (18):
  1. 3rdparty/readme.txt (6 changed lines)
  2. doc/js_tutorials/js_imgproc/js_contours/js_contour_features/js_contour_features.markdown (3 changed lines)
  3. doc/js_tutorials/js_imgproc/js_contours/js_contour_properties/js_contour_properties.markdown (3 changed lines)
  4. doc/js_tutorials/js_imgproc/js_contours/js_contours_begin/js_contours_begin.markdown (2 changed lines)
  5. doc/js_tutorials/js_imgproc/js_contours/js_contours_hierarchy/js_contours_hierarchy.markdown (2 changed lines)
  6. doc/js_tutorials/js_imgproc/js_contours/js_contours_more_functions/js_contours_more_functions.markdown (3 changed lines)
  7. doc/py_tutorials/py_imgproc/py_contours/py_contour_features/py_contour_features.markdown (5 changed lines)
  8. doc/py_tutorials/py_imgproc/py_contours/py_contour_properties/py_contour_properties.markdown (3 changed lines)
  9. doc/py_tutorials/py_imgproc/py_contours/py_contours_begin/py_contours_begin.markdown (2 changed lines)
  10. doc/py_tutorials/py_imgproc/py_contours/py_contours_hierarchy/py_contours_hierarchy.markdown (2 changed lines)
  11. doc/py_tutorials/py_imgproc/py_contours/py_contours_more_functions/py_contours_more_functions.markdown (4 changed lines)
  12. doc/tutorials/others/traincascade.markdown (2 changed lines)
  13. modules/core/include/opencv2/core/bindings_utils.hpp (6 changed lines)
  14. modules/dnn/misc/python/test/test_dnn.py (36 changed lines)
  15. modules/python/src2/gen2.py (12 changed lines)
  16. modules/python/src2/hdr_parser.py (6 changed lines)
  17. modules/python/test/test_misc.py (17 changed lines)
  18. platforms/winrt/readme.txt (4 changed lines)

@@ -31,7 +31,7 @@ libpng Portable Network Graphics library.
libtiff Tag Image File Format (TIFF) Software
Copyright (c) 1988-1997 Sam Leffler
Copyright (c) 1991-1997 Silicon Graphics, Inc.
See libtiff home page http://www.remotesensing.org/libtiff/
See libtiff home page http://www.libtiff.org/
for details and links to the source code
WITH_TIFF CMake option must be ON to add libtiff & zlib support to imgcodecs.
@@ -51,7 +51,9 @@ jasper JasPer is a collection of software
Copyright (c) 1999-2000 The University of British Columbia
Copyright (c) 2001-2003 Michael David Adams
The JasPer license can be found in libjasper.
See JasPer official GitHub repository
https://github.com/jasper-software/jasper.git
for details and links to source code
------------------------------------------------------------------------------------
openexr OpenEXR is a high dynamic-range (HDR) image file format developed
by Industrial Light & Magic for use in computer imaging applications.

@@ -1,6 +1,9 @@
Contour Features {#tutorial_js_contour_features}
================
@prev_tutorial{tutorial_js_contours_begin}
@next_tutorial{tutorial_js_contour_properties}
Goal
----

@@ -1,6 +1,9 @@
Contour Properties {#tutorial_js_contour_properties}
==================
@prev_tutorial{tutorial_js_contour_features}
@next_tutorial{tutorial_js_contours_more_functions}
Goal
----

@@ -1,6 +1,8 @@
Contours : Getting Started {#tutorial_js_contours_begin}
==========================
@next_tutorial{tutorial_js_contour_features}
Goal
----

@@ -1,6 +1,8 @@
Contours Hierarchy {#tutorial_js_contours_hierarchy}
==================
@prev_tutorial{tutorial_js_contours_more_functions}
Goal
----

@@ -1,6 +1,9 @@
Contours : More Functions {#tutorial_js_contours_more_functions}
=========================
@prev_tutorial{tutorial_js_contour_properties}
@next_tutorial{tutorial_js_contours_hierarchy}
Goal
----

@@ -1,6 +1,9 @@
Contour Features {#tutorial_py_contour_features}
================
@prev_tutorial{tutorial_py_contours_begin}
@next_tutorial{tutorial_py_contour_properties}
Goal
----
@@ -91,7 +94,7 @@ convexity defects, which are the local maximum deviations of hull from contours.
There are a few things to discuss about its syntax:
@code{.py}
hull = cv.convexHull(points[, hull[, clockwise[, returnPoints]]
hull = cv.convexHull(points[, hull[, clockwise[, returnPoints]]])
@endcode
Arguments details:
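To make the corrected signature concrete, here is a minimal usage sketch (not part of the diff), assuming `cnt` is a single contour obtained earlier from `cv.findContours`:
@code{.py}
hull = cv.convexHull(cnt)                          # hull returned as points (default)
hull_idx = cv.convexHull(cnt, returnPoints=False)  # hull returned as indices into cnt,
                                                   # the form needed by cv.convexityDefects
@endcode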

@@ -1,6 +1,9 @@
Contour Properties {#tutorial_py_contour_properties}
==================
@prev_tutorial{tutorial_py_contour_features}
@next_tutorial{tutorial_py_contours_more_functions}
Here we will learn to extract some frequently used properties of objects like Solidity, Equivalent
Diameter, Mask image, Mean Intensity etc. More features can be found at [Matlab regionprops
documentation](http://www.mathworks.in/help/images/ref/regionprops.html).
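As a brief illustration of two of the properties named above (an editorial sketch, not part of the diff), assuming `cnt` is a contour from `cv.findContours` and `np` is NumPy:
@code{.py}
area = cv.contourArea(cnt)
hull_area = cv.contourArea(cv.convexHull(cnt))
solidity = float(area) / hull_area         # Solidity: contour area / convex hull area
equi_diameter = np.sqrt(4 * area / np.pi)  # Equivalent Diameter: circle with the same area
@endcode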

@@ -1,6 +1,8 @@
Contours : Getting Started {#tutorial_py_contours_begin}
==========================
@next_tutorial{tutorial_py_contour_features}
Goal
----

@@ -1,6 +1,8 @@
Contours Hierarchy {#tutorial_py_contours_hierarchy}
==================
@prev_tutorial{tutorial_py_contours_more_functions}
Goal
----

@@ -1,6 +1,10 @@
Contours : More Functions {#tutorial_py_contours_more_functions}
=========================
@prev_tutorial{tutorial_py_contour_properties}
@next_tutorial{tutorial_py_contours_hierarchy}
Goal
----

@@ -13,6 +13,8 @@ Working with a boosted cascade of weak classifiers includes two major stages: th
To support this tutorial, several official OpenCV applications will be used: [opencv_createsamples](https://github.com/opencv/opencv/tree/master/apps/createsamples), [opencv_annotation](https://github.com/opencv/opencv/tree/master/apps/annotation), [opencv_traincascade](https://github.com/opencv/opencv/tree/master/apps/traincascade) and [opencv_visualisation](https://github.com/opencv/opencv/tree/master/apps/visualisation).
@note The createsamples and traincascade applications are disabled since OpenCV 4.0. Consider using these apps from the 3.4 branch to train a Cascade Classifier; the model format is the same between 3.4 and 4.x.
### Important notes
- If you come across any tutorial mentioning the old opencv_haartraining tool <i>(which is deprecated and still uses the OpenCV 1.x interface)</i>, please ignore that tutorial and stick to the opencv_traincascade tool. This tool is the newer version, written in C++ in accordance with the OpenCV 2.x and OpenCV 3.x API. opencv_traincascade supports both HAAR-like wavelet features @cite Viola01 and LBP (Local Binary Patterns) @cite Liao2007 features. LBP features yield integer precision, in contrast to HAAR features, which yield floating-point precision, so both training and detection with LBP are several times faster than with HAAR features. The detection quality of LBP versus HAAR mainly depends on the training data used and the training parameters selected. It is possible to train an LBP-based classifier that provides almost the same quality as a HAAR-based one in a fraction of the training time.
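Although the note above concerns training, the detection side is identical for HAAR and LBP models: both are loaded through cv::CascadeClassifier. A minimal Python sketch (the file names below are assumptions for illustration, not files shipped with this commit):
@code{.py}
import cv2 as cv

cascade = cv.CascadeClassifier('lbpcascade_frontalface.xml')  # assumed pre-trained LBP model
if cascade.empty():
    raise IOError('Could not load the cascade file')

img = cv.imread('people.jpg')                                 # hypothetical input image
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

# Typical detection call; tune scaleFactor/minNeighbors for the model at hand.
for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3):
    cv.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
@endcode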

@@ -116,6 +116,12 @@ String dumpRange(const Range& argument)
}
}
CV_WRAP static inline
String testReservedKeywordConversion(int positional_argument, int lambda = 2, int from = 3)
{
return format("arg=%d, lambda=%d, from=%d", positional_argument, lambda, from);
}
CV_WRAP static inline
void testRaiseGeneralException()
{

@@ -62,6 +62,12 @@ def printParams(backend, target):
}
print('%s/%s' % (backendNames[backend], targetNames[target]))
def getDefaultThreshold(target):
if target == cv.dnn.DNN_TARGET_OPENCL_FP16 or target == cv.dnn.DNN_TARGET_MYRIAD:
return 4e-3
else:
return 1e-5
testdata_required = bool(os.environ.get('OPENCV_DNN_TEST_REQUIRE_TESTDATA', False))
g_dnnBackendsAndTargets = None
@@ -373,5 +379,35 @@ class dnn_test(NewOpenCVTests):
cv.dnn_unregisterLayer('CropCaffe')
# check that dnn module can work with 3D tensor as input for network
def test_input_3d(self):
model = self.find_dnn_file('dnn/onnx/models/hidden_lstm.onnx')
input_file = self.find_dnn_file('dnn/onnx/data/input_hidden_lstm.npy')
output_file = self.find_dnn_file('dnn/onnx/data/output_hidden_lstm.npy')
if model is None:
raise unittest.SkipTest("Missing DNN test files (dnn/onnx/models/hidden_lstm.onnx). "
"Verify OPENCV_DNN_TEST_DATA_PATH configuration parameter.")
if input_file is None or output_file is None:
raise unittest.SkipTest("Missing DNN test files (dnn/onnx/data/{input/output}_hidden_lstm.npy). "
"Verify OPENCV_DNN_TEST_DATA_PATH configuration parameter.")
net = cv.dnn.readNet(model)
input = np.load(input_file)
# We have to expand the shape of the input tensor because the Python bindings cut 3D tensors to 2D.
# This should be fixed in the future; see https://github.com/opencv/opencv/issues/19091
# Please remove `expand_dims` once that is resolved.
input = np.expand_dims(input, axis=3)
gold_output = np.load(output_file)
net.setInput(input)
for backend, target in self.dnnBackendsAndTargets:
printParams(backend, target)
net.setPreferableBackend(backend)
net.setPreferableTarget(target)
real_output = net.forward()
normAssert(self, real_output, gold_output, "", getDefaultThreshold(target))
if __name__ == '__main__':
NewOpenCVTests.bootstrap()

@@ -214,6 +214,16 @@ simple_argtype_mapping = {
"Stream": ArgTypeInfo("Stream", FormatStrings.object, 'Stream::Null()', True),
}
# Set of reserved keywords for Python. Can be acquired via the following call
# $ python -c "help('keywords')"
# Keywords that are also reserved in C/C++ are excluded because they cannot be
# used as variable identifiers there anyway.
python_reserved_keywords = {
"True", "None", "False", "as", "assert", "def", "del", "elif", "except", "exec",
"finally", "from", "global", "import", "in", "is", "lambda", "nonlocal",
"pass", "print", "raise", "with", "yield"
}
def normalize_class_name(name):
return re.sub(r"^cv\.", "", name).replace(".", "_")
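As a side note on how such a set can be reproduced: besides `help('keywords')`, the standard-library `keyword` module exposes the same list programmatically. A small sketch (the C/C++ exclusion set below is illustrative, not the exact one used by gen2.py):
@code{.py}
import keyword

# Keywords shared with C/C++ can never clash as parameter names, so drop them.
cpp_keywords = {'and', 'or', 'not', 'class', 'if', 'else', 'for', 'while',
                'break', 'continue', 'return', 'try'}
python_only_keywords = set(keyword.kwlist) - cpp_keywords
print(sorted(python_only_keywords))
@endcode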
@@ -371,6 +381,8 @@ class ArgInfo(object):
def __init__(self, arg_tuple):
self.tp = handle_ptr(arg_tuple[0])
self.name = arg_tuple[1]
if self.name in python_reserved_keywords:
self.name += "_"
self.defval = arg_tuple[2]
self.isarray = False
self.arraylen = 0
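The practical effect, confirmed by the new test in modules/python/test/test_misc.py below, is that a C++ parameter named after a Python keyword is exposed with a trailing underscore:
@code{.py}
# C++: String testReservedKeywordConversion(int positional_argument, int lambda = 2, int from = 3)
print(cv.utils.testReservedKeywordConversion(20, lambda_=-4, from_=12))
# -> arg=20, lambda=-4, from=12
@endcode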

@@ -979,7 +979,8 @@ class CppHeaderParser(object):
has_mat = len(list(filter(lambda x: x[0] in {"Mat", "vector_Mat"}, args))) > 0
if has_mat:
_, _, _, gpumat_decl = self.parse_stmt(stmt, token, mat="cuda::GpuMat", docstring=docstring)
decls.append(gpumat_decl)
if gpumat_decl != decl:
decls.append(gpumat_decl)
if self._generate_umat_decls:
# If function takes as one of arguments Mat or vector<Mat> - we want to create the
@@ -988,7 +989,8 @@
has_mat = len(list(filter(lambda x: x[0] in {"Mat", "vector_Mat"}, args))) > 0
if has_mat:
_, _, _, umat_decl = self.parse_stmt(stmt, token, mat="UMat", docstring=docstring)
decls.append(umat_decl)
if umat_decl != decl:
decls.append(umat_decl)
docstring = ""
if stmt_type == "namespace":

@@ -464,6 +464,23 @@ class Arguments(NewOpenCVTests):
with self.assertRaises((TypeError), msg=get_no_exception_msg(not_convertible)):
_ = cv.utils.dumpRange(not_convertible)
def test_reserved_keywords_are_transformed(self):
default_lambda_value = 2
default_from_value = 3
format_str = "arg={}, lambda={}, from={}"
self.assertEqual(
cv.utils.testReservedKeywordConversion(20), format_str.format(20, default_lambda_value, default_from_value)
)
self.assertEqual(
cv.utils.testReservedKeywordConversion(10, lambda_=10), format_str.format(10, 10, default_from_value)
)
self.assertEqual(
cv.utils.testReservedKeywordConversion(10, from_=10), format_str.format(10, default_lambda_value, 10)
)
self.assertEqual(
cv.utils.testReservedKeywordConversion(20, lambda_=-4, from_=12), format_str.format(20, -4, 12)
)
class SamplesFindFile(NewOpenCVTests):

@@ -13,7 +13,7 @@ Install Visual Studio 2013 Community Edition
http://go.microsoft.com/?linkid=9863608
Install Visual Studio Express 2012 for Windows Desktop
http://www.microsoft.com/en-us/download/details.aspx?id=34673
https://devblogs.microsoft.com/visualstudio/visual-studio-express-2012-for-windows-desktop-is-here/
@@ -156,4 +156,4 @@ Manual build
cmake -G "Visual Studio 12 2013 Win64" -DCMAKE_SYSTEM_NAME:String=WindowsStore -DCMAKE_SYSTEM_VERSION:String=8.1 -DCMAKE_VS_EFFECTIVE_PLATFORMS:String=x64 -DCMAKE_INSTALL_PREFIX:PATH=.\install\WS\8.1\x64\ ..
Return to "Running tests for Windows Store", list item 4.
Return to "Running tests for Windows Store", list item 4.
