The number of Gaussians involved in a mixture is supposed
to be adjusted dynamically. Once increased, however, the number
of Gaussians can never be reduced again.
This appears to be a regression, as the legacy code
in modules/legacy/src/bgfg_gaussmix.cpp does allow the number
of Gaussians to be reduced.
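For illustration, a minimal sketch of the kind of pruning step the
legacy code performs (identifiers such as nmodes, weights and prune
are illustrative, not the actual bgfg_gaussmix.cpp names):

    // Sketch only: after the weight update, components whose weight
    // falls below zero are dropped, so the mixture can shrink again.
    static int updateWeights(float* weights, int nmodes, int matchedMode,
                             float alpha, float prune)
    {
        for (int k = nmodes - 1; k >= 0; --k)
        {
            float o = (k == matchedMode) ? 1.f : 0.f; // 1 for the matched mode
            weights[k] += alpha * (o - weights[k]) - prune;
            if (weights[k] < 0.f)
            {
                weights[k] = 0.f;
                nmodes--; // the decrement the regressed code never performs
            }
        }
        return nmodes;
    }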
Tests of the mask are also included.
This is useful for registering a non-square image against a non-square
template.
This also requires relaxing a sanity check, as per
https://github.com/Itseez/opencv/pull/3851
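For reference, a minimal usage sketch of the new mask argument (file
names and parameter values are illustrative):

    #include <opencv2/core.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/video/tracking.hpp>

    using namespace cv;

    int main()
    {
        // Hypothetical inputs: any non-square template/image pair works.
        Mat tmpl  = imread("template.png", IMREAD_GRAYSCALE);
        Mat image = imread("image.png",    IMREAD_GRAYSCALE);
        tmpl.convertTo(tmpl, CV_32F);
        image.convertTo(image, CV_32F);

        // Non-zero mask pixels mark the template region ECC should use.
        Mat mask = Mat::ones(tmpl.size(), CV_8U);

        Mat warp = Mat::eye(2, 3, CV_32F); // initial affine estimate
        TermCriteria crit(TermCriteria::COUNT + TermCriteria::EPS, 50, 1e-4);
        double rho = findTransformECC(tmpl, image, warp, MOTION_AFFINE,
                                      crit, mask);
        return rho > 0 ? 0 : 1; // warp now maps image onto tmpl
    }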
The matrix templateZM needs to be initialized, because otherwise
uninitialized values leak into the correlation in:
const double correlation = templateZM.dot(imageWarped)
In the worst case this causes the correlation to be NaN, ruining the
whole routine. The subtraction does not initialize templateZM because
of the mask: subtract() with a mask leaves masked-out destination
pixels untouched.
Unfortunately, the uninitialized values (by altering the correlation)
have the side effect of dragging the computation out a little longer,
giving a slightly better error bound. This means that fixing this bug
breaks perf_ecc, where
SANITY_CHECK(warpMat, 1e-3);
is just a little too tight and happens to pass only because of the
uninitialized values. Since this is a performance test, not an
accuracy test, I think it is OK to relax the error bound a little
(the tight error bound being, after all, the result of a bug).
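A minimal sketch of the fix described above (templateFloat,
meanOfTemplate and mask are stand-ins for the routine's actual
locals):

    // Zero-initialize templateZM: subtract() with a mask leaves
    // masked-out destination pixels untouched, so without this they
    // stay uninitialized and leak into the dot product below.
    Mat templateZM = Mat::zeros(templateFloat.size(), templateFloat.type());
    subtract(templateFloat, meanOfTemplate, templateZM, mask);
    const double correlation = templateZM.dot(imageWarped); // well-defined now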
The function parameters were different from the ones described below.
P.S. Why is ``flow`` an InputOutputArray; shouldn't it be just an OutputArray? If there is a reason, shouldn't it be documented, so that others can benefit as well (e.g. not allocating memory on every frame)?
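For what it's worth, a small sketch of the pattern that would justify
``InputOutputArray`` (my guess, not confirmed by this patch: keeping
``flow`` alive avoids per-frame reallocation, and
OPTFLOW_USE_INITIAL_FLOW can reuse the previous result as a seed):

    #include <vector>
    #include <opencv2/video/tracking.hpp>

    using namespace cv;

    // One flow buffer reused across the whole sequence: allocated once,
    // then seeded with the previous result on every later call.
    void trackSequence(const std::vector<Mat>& grayFrames)
    {
        Mat flow;
        for (size_t i = 1; i < grayFrames.size(); ++i)
        {
            int flags = flow.empty() ? 0 : OPTFLOW_USE_INITIAL_FLOW;
            calcOpticalFlowFarneback(grayFrames[i - 1], grayFrames[i],
                                     flow, 0.5, 3, 15, 3, 5, 1.2, flags);
        }
    }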