SVM sigmoid kernel fix (issue #13621) (#13718)
* Added test for sigmoid case for retrieving support vectors
* undo unhelpful test
* add test for sigmoid SVM with data that is easily separable into two concentric circles
* Update sigmoid kernel to use tanh(gamma * <x, y> + coef0) instead of -tanh(gamma * <x, y> + coef0)
* remove unnecessary constraint on coef0
* cleanup
* fixing inappropriate use of doubles
* Add f to float literal
* replace CV_Assert with ASSERT_EQ where appropriate
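The kernel change above flips the sign of the sigmoid kernel value. A minimal standalone sketch of the corrected formula (not OpenCV's internal implementation; names are illustrative):
```
#include <cmath>
#include <vector>

// Corrected sigmoid kernel value for feature vectors x and y:
//   K(x, y) = tanh(gamma * <x, y> + coef0)
// Before the fix the result was negated (-tanh(...)), which broke training
// on data that is otherwise easily separable.
static float sigmoidKernel(const std::vector<float>& x,
                           const std::vector<float>& y,
                           float gamma, float coef0)
{
    float dot = 0.f;
    for (size_t i = 0; i < x.size(); ++i)
        dot += x[i] * y[i];
    return std::tanh(gamma * dot + coef0);  // no leading minus sign
}
```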
* integrated the new C++ persistence; removed the old persistence; most of OpenCV compiles fine! The tests have not been run yet
* fixed multiple bugs in the new C++ persistence
* fixed raw size of the parsed empty sequences
* [temporarily] excluded obsolete applications traincascade and createsamples from build
* fixed several compiler warnings and multiple test failures
* undo changes in cocoa window rendering (that was fixed in another PR)
* fixed more compile warnings and the remaining test failures (hopefully)
* trying to fix the last little warning
* Add HPX backend for OpenCV implementation
Adds hpx backend for cv::parallel_for_() calls respecting the nstripes chunking parameter. C++ code for the backend is added to modules/core/parallel.cpp. Also, the necessary changes to cmake files are introduced.
The backend can operate in two modes (selectable by the CMake build option WITH_HPX_STARTSTOP): hpx (runtime always on) and hpx_startstop (the backend is started and stopped for each cv::parallel_for_() call)
* WIP: Conditionally include hpx_main.hpp to tests in core module
Header hpx_main.hpp is included in both core/perf/perf_main.cpp and core/test/test_main.cpp.
The changes to the CMake files for linking the HPX library to the above-mentioned test executables are proposed but have issues.
* Add conditional inclusion of hpx_main.hpp to cpp cpu modules
* Remove start/stop version of hpx backend
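For context, a sketch of a caller-side cv::parallel_for_() use; the backend (TBB, OpenMP, HPX, ...) is selected at build time, so the same code runs on the HPX backend when OpenCV is configured with the corresponding CMake option (assumed to be WITH_HPX here):
```
#include <opencv2/core.hpp>
#include <vector>

int main()
{
    std::vector<float> data(1000000, 1.f);
    const double nstripes = 16;  // chunking hint respected by the backend

    cv::parallel_for_(cv::Range(0, (int)data.size()),
                      [&](const cv::Range& r)
                      {
                          for (int i = r.start; i < r.end; ++i)
                              data[i] *= 2.f;
                      },
                      nstripes);
    return 0;
}
```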
In the case of regression trees, node risk is computed as the sum of squared
errors. To get a meaningful value to compare against, it needs to be
normalized by the number of samples in the node (or, more generally, by
the sum of sample weights in the node). Otherwise the sum of squared
errors depends heavily on the number of samples in the node, and
comparison with the `regressionAccuracy` parameter is not very meaningful.
After normalization, `node_risk` is in fact the sample variance of all
samples in the node, which makes much more sense and seems to be what
was originally intended by the code, given that node risk is later used as
a split termination criterion by
```
sqrt(node.node_risk) < params.getRegressionAccuracy()
```
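For illustration, a minimal sketch of the normalized risk computation described above (variable names are hypothetical, not the ml module's internals):
```
#include <cmath>
#include <vector>

// With the normalization, node_risk equals the weighted sample variance of
// the responses in the node, so sqrt(node_risk) is directly comparable to
// the regression accuracy threshold.
static double nodeRisk(const std::vector<double>& responses,
                       const std::vector<double>& weights)
{
    double wsum = 0., mean = 0.;
    for (size_t i = 0; i < responses.size(); ++i)
    {
        wsum += weights[i];
        mean += weights[i] * responses[i];
    }
    mean /= wsum;

    double sse = 0.;  // weighted sum of squared errors around the node mean
    for (size_t i = 0; i < responses.size(); ++i)
    {
        double d = responses[i] - mean;
        sse += weights[i] * d * d;
    }
    return sse / wsum;  // normalized: no longer grows with node size
}

// The split termination check then reads:
//   std::sqrt(nodeRisk(responses, weights)) < regressionAccuracy
```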
- removed tr1 usage (dropped in C++17)
- moved includes of vector/map/iostream/limits into ts.hpp
- require opencv_test + anonymous namespace (added compile check)
- fixed norm() usage (must be cvtest::norm for checks) and other conflicting functions
- added missing license headers
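A minimal sketch of the test layout these bullets require (the test_precomp.hpp include follows the usual per-module convention; the test name is illustrative):
```
#include "test_precomp.hpp"  // pulls in opencv2/ts.hpp (vector/map/iostream/limits now come from there)

namespace opencv_test { namespace {

TEST(Sample_Module, norm_usage)
{
    cv::Mat a = cv::Mat::zeros(4, 4, CV_32F);
    cv::Mat b = cv::Mat::ones(4, 4, CV_32F);
    double err = cvtest::norm(a, b, cv::NORM_INF);  // cvtest::norm, not cv::norm
    EXPECT_NEAR(err, 1.0, 1e-6);
}

}} // namespace opencv_test::<anonymous>
```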
* Simulated Annealing for ANN_MLP training method
* EXPECT_LT
* just to test new data
* manage RNG
* Try again
* Just run buildbot with new data
* try to understand
* Test layer
* New data- new test
* Force RNG in backprop
* Use Impl to avoid virtual method
* reset all weights
* try to solve ABI
* retry
* ABI solved?
* still a problem with dynamic_cast
* Something is wrong
* Solved?
* disable backprop test
* remove ANN_MLP_ANNEALImpl
* Disable weight in varmap
* Add example for SimulatedAnnealing
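A hedged sketch of what training an ANN_MLP with the simulated-annealing method looks like; the setAnneal* values below are arbitrary toy settings, not recommended defaults:
```
#include <opencv2/ml.hpp>

int main()
{
    using namespace cv;
    using namespace cv::ml;

    // XOR-style toy data
    Mat samples = (Mat_<float>(4, 2) << 0.f, 0.f,  0.f, 1.f,  1.f, 0.f,  1.f, 1.f);
    Mat responses = (Mat_<float>(4, 1) << 0.f, 1.f, 1.f, 0.f);

    Ptr<ANN_MLP> nn = ANN_MLP::create();
    nn->setLayerSizes((Mat_<int>(3, 1) << 2, 5, 1));
    nn->setActivationFunction(ANN_MLP::SIGMOID_SYM);
    nn->setTrainMethod(ANN_MLP::ANNEAL);   // simulated annealing instead of backprop/RPROP
    nn->setAnnealInitialT(12.0);
    nn->setAnnealFinalT(0.15);
    nn->setAnnealCoolingRatio(0.95);
    nn->setAnnealItePerStep(10);

    nn->train(TrainData::create(samples, ROW_SAMPLE, responses));
    return 0;
}
```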
* export SVM::trainAuto to python #7224
* workaround for ABI compatibility of SVM::trainAuto
* add parameter comments to new SVM::trainAuto function
* Export ParamGrid member variables
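A sketch of calling the new trainAuto overload that takes samples and responses directly (the overload added so the method can be wrapped for Python); grid parameters keep their defaults and the toy data is illustrative:
```
#include <opencv2/ml.hpp>

int main()
{
    using namespace cv;
    using namespace cv::ml;

    Mat samples = (Mat_<float>(8, 2) << 0.f, 0.f,  0.1f, 0.f,  0.f, 0.1f,  0.1f, 0.1f,
                                        1.f, 1.f,  1.1f, 1.f,  1.f, 1.1f,  1.1f, 1.1f);
    Mat labels  = (Mat_<int>(8, 1) << 0, 0, 0, 0, 1, 1, 1, 1);

    Ptr<SVM> svm = SVM::create();
    svm->setKernel(SVM::RBF);
    // cross-validated parameter search over the default grids
    svm->trainAuto(samples, ROW_SAMPLE, labels, /*kFold=*/2);
    return 0;
}
```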
Finished support for multiple samples; needs regression testing
Gave the function a more relevant name (getVotes)
Finished implicit implementation
Removed printf, finished regression testing
Fixed conversion warning
Finished test for Rtrees
Fixed documentation
Initialized variable
Added doxygen documentation
Added parameter name
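A hedged sketch of the getVotes() accessor this series adds: after training an RTrees model, it reports per-class vote counts for each input sample (first row of the output holds the class labels, following rows the votes per sample, per the documentation added here); the toy data is illustrative:
```
#include <opencv2/ml.hpp>
#include <iostream>

int main()
{
    using namespace cv;
    using namespace cv::ml;

    Mat samples = (Mat_<float>(6, 1) << 0.f, 0.1f, 0.2f, 1.f, 1.1f, 1.2f);
    Mat labels  = (Mat_<int>(6, 1) << 0, 0, 0, 1, 1, 1);

    Ptr<RTrees> forest = RTrees::create();
    forest->train(TrainData::create(samples, ROW_SAMPLE, labels));

    Mat votes;
    forest->getVotes(samples, votes, 0);  // flags: same semantics as predict()
    std::cout << votes << std::endl;
    return 0;
}
```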