Merge pull request #21276 from No-Plane-Cannot-Be-Detected:5.x-ptcloud

Add support for 3D point cloud segmentation, using the USAC framework.

* Modify the RANSAC framework in usac so that RANSAC can be used for 3D point cloud segmentation.

* 1. Add support for 3D point cloud segmentation, using the USAC framework.
2. Add solvers and error estimators for the plane and sphere models.

* Add code samples to the comments of class SACSegmentation.

* 1. Update the segment interface parameters of SACSegmentation.
2. Fix some errors in variable naming.

* Add tests for plane detection.

* 1. Add tests for sphere segmentation.
2. Fix some bugs found by the tests.
3. Rename "segmentation" to "sac segmentation".
4. Rename "detect" to "segment".
TODO: There is too much duplicate code; the test structure needs to be rebuilt.

* 1. Use SIMD acceleration for plane model and sphere model error estimation.
2. Optimize the RansacQualityImpl#getScore function to avoid multiple calls to the error#getError function.
3. Fix a warning in test_sac_segmentation.cpp.

* 1. Fix the ModelConstraintFunction ambiguity warning.
2. Fix warning: no previous declaration for 'void cv::usac::modelParamsToUsacConfig(cv::Ptr<cv::usac::SimpleUsacConfig>&, const cv::Ptr<const cv::usac::Model>&)'.

* Fix a warning in test_sac_segmentation.cpp about direct comparison of different types of data.

* Add code comments related to the interpretation of model coefficients.

* Update the use of custom model constraint functions.

* Simplify the test code structure.

* Update the method of checking plane models.

* Delete test for cylinder.

* Add some comments about UniversalRANSAC.

* 1. Reference the RANSAC paper in the code comments using BibTeX format.
2. Replace the sample code in the code comments with @snippet.
3. Change the public API class SACSegmentation to an interface.
4. Clean up old, unused code.

* Fix warning (no previous declaration) in 3d_sac_segmentation.cpp.

* Fix compilation errors caused by 3d_sac_segmentation.cpp.

* Move the function sacModelMinimumSampleSize() from ptcloud.hpp to sac_segmentation.cpp.

* 1. Replace the interface for setting the number of threads with an interface for enabling parallelism.
2. Move interface implementation code from ptcloud_utils.hpp to ptcloud_utils.cpp.

* SACSegmentation no longer inherits Algorithm.

* Add the constructor and destructor of SACSegmentation.

* 1. Remove the leading and trailing underscores from parameter names in the public API declarations.
2. Rename the function _getMatFromInputArray -> getPointsMatFromInputArray.
3. Change some CV_CheckDepth calls to CV_CheckDepthEQ.
4. Remove the doxygen flag from the source code.
5. Update the SIMD loop termination condition in the point cloud code of the 3D module.

* Fix warning: passing 'bool' chooses 'int' over 'size_t {aka unsigned int}'.

* Fix warning: passing 'bool' chooses 'int' over 'size_t {aka unsigned int}'.
pull/21371/head
commit 80c5d18f9c (parent 7d05f92855), authored by Ruan, committed via GitHub
Files changed (16):
1. modules/3d/doc/3d.bib (11 lines changed)
2. modules/3d/include/opencv2/3d/ptcloud.hpp (163 lines changed)
3. modules/3d/src/ptcloud/ptcloud_utils.cpp (161 lines changed)
4. modules/3d/src/ptcloud/ptcloud_utils.hpp (75 lines changed)
5. modules/3d/src/ptcloud/ptcloud_wrapper.hpp (46 lines changed)
6. modules/3d/src/ptcloud/sac_segmentation.cpp (273 lines changed)
7. modules/3d/src/ptcloud/sac_segmentation.hpp (143 lines changed)
8. modules/3d/src/ptcloud/sampling.cpp (15 lines changed)
9. modules/3d/src/usac.hpp (161 lines changed)
10. modules/3d/src/usac/estimator.cpp (249 lines changed)
11. modules/3d/src/usac/plane_solver.cpp (149 lines changed)
12. modules/3d/src/usac/quality.cpp (4 lines changed)
13. modules/3d/src/usac/ransac_solvers.cpp (392 lines changed)
14. modules/3d/src/usac/sphere_solver.cpp (271 lines changed)
15. modules/3d/test/test_sac_segmentation.cpp (399 lines changed)
16. samples/cpp/tutorial_code/snippets/3d_sac_segmentation.cpp (90 lines changed)

@ -84,4 +84,15 @@
volume = {},
chapter = {},
isbn = {978-1-4503-0716-1},
}
@article{fischler1981random,
title={Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography},
author={Fischler, Martin A and Bolles, Robert C},
journal={Communications of the ACM},
volume={24},
number={6},
pages={381--395},
year={1981},
publisher={ACM New York, NY, USA}
}

@ -11,6 +11,159 @@ namespace cv {
//! @addtogroup _3d
//! @{
//! type of the robust estimation algorithm
enum SacMethod
{
/** The RANSAC algorithm described in @cite fischler1981random.
*/
SAC_METHOD_RANSAC,
// SAC_METHOD_MAGSAC,
// SAC_METHOD_LMEDS,
// SAC_METHOD_MSAC,
// SAC_METHOD_RRANSAC,
// SAC_METHOD_RMSAC,
// SAC_METHOD_MLESAC,
// SAC_METHOD_PROSAC
};
enum SacModelType
{
/** The 3D PLANE model coefficients in list **[a, b, c, d]**,
corresponding to the coefficients of equation
\f$ ax + by + cz + d = 0 \f$. */
SAC_MODEL_PLANE,
/** The 3D SPHERE model coefficients in list **[center_x, center_y, center_z, radius]**,
corresponding to the coefficients of equation
\f$ (x - center\_x)^2 + (y - center\_y)^2 + (z - center\_z)^2 = radius^2 \f$.*/
SAC_MODEL_SPHERE,
// SAC_MODEL_CYLINDER,
};
/** @brief Sample-consensus-based segmentation of a 3D point cloud into models.
Example of segmenting a plane from a 3D point cloud using the RANSAC algorithm:
@snippet snippets/3d_sac_segmentation.cpp planeSegmentationUsingRANSAC
@see
1. Supported algorithms: enum SacMethod in ptcloud.hpp.
2. Supported models: enum SacModelType in ptcloud.hpp.
*/
class CV_EXPORTS SACSegmentation
{
public:
/** @brief Custom function that takes the model coefficients and returns whether the model is acceptable or not.
Example of constructing SACSegmentation::ModelConstraintFunction:
@snippet snippets/3d_sac_segmentation.cpp usageExampleSacModelConstraintFunction
@note The content of model_coefficients depends on the model.
Refer to the comments inside enumeration type SacModelType.
*/
using ModelConstraintFunction =
std::function<bool(const std::vector<double> &/*model_coefficients*/)>;
//-------------------------- CREATE -----------------------
static Ptr<SACSegmentation> create(SacModelType sac_model_type = SAC_MODEL_PLANE,
SacMethod sac_method = SAC_METHOD_RANSAC,
double threshold = 0.5, int max_iterations = 1000);
// -------------------------- CONSTRUCTOR, DESTRUCTOR --------------------------
SACSegmentation() = default;
virtual ~SACSegmentation() = default;
//-------------------------- SEGMENT -----------------------
/**
* @brief Execute segmentation using the sample consensus method.
*
* @param input_pts Original point cloud, vector of Point3 or Mat of size Nx3/3xN.
* @param[out] labels Each label corresponds to a model number; 0 means the point
* does not belong to any model. The label range is [0, number of final resultant models].
* @param[out] models_coefficients The resultant model coefficients.
* Currently supports passing in cv::Mat. Model coefficients are placed in an NxK matrix,
* where N is the number of models and K is the number of coefficients of one model.
* For the meaning of each model's coefficients, refer to the comments inside enumeration type SacModelType.
* @return Number of final resultant models obtained by segmentation.
*/
virtual int
segment(InputArray input_pts, OutputArray labels, OutputArray models_coefficients) = 0;
//-------------------------- Getter and Setter -----------------------
//! Set the type of sample consensus model to use.
virtual void setSacModelType(SacModelType sac_model_type) = 0;
//! Get the type of sample consensus model used.
virtual SacModelType getSacModelType() const = 0;
//! Set the type of sample consensus method to use.
virtual void setSacMethodType(SacMethod sac_method) = 0;
//! Get the type of sample consensus method used.
virtual SacMethod getSacMethodType() const = 0;
//! Set the distance-to-model threshold.
//! A point is considered an inlier if its distance to the model is less than the threshold.
virtual void setDistanceThreshold(double threshold) = 0;
//! Get the distance-to-model threshold.
virtual double getDistanceThreshold() const = 0;
//! Set the minimum and maximum radius limits for the model.
//! Only used for models whose model parameters include a radius.
virtual void setRadiusLimits(double radius_min, double radius_max) = 0;
//! Get the minimum and maximum radius limits for the model.
virtual void getRadiusLimits(double &radius_min, double &radius_max) const = 0;
//! Set the maximum number of iterations to attempt.
virtual void setMaxIterations(int max_iterations) = 0;
//! Get the maximum number of iterations to attempt.
virtual int getMaxIterations() const = 0;
//! Set the confidence that ensures at least one of the selections is an error-free set of data points.
virtual void setConfidence(double confidence) = 0;
//! Get the confidence that ensures at least one of the selections is an error-free set of data points.
virtual double getConfidence() const = 0;
//! Set the number of models expected.
virtual void setNumberOfModelsExpected(int number_of_models_expected) = 0;
//! Get the expected number of models.
virtual int getNumberOfModelsExpected() const = 0;
//! Set whether to use parallelism or not.
//! The number of threads is set by cv::setNumThreads(int nthreads).
virtual void setParallel(bool is_parallel) = 0;
//! Get whether to use parallelism or not.
virtual bool isParallel() const = 0;
//! Set state used to initialize the RNG(Random Number Generator).
virtual void setRandomGeneratorState(uint64 rng_state) = 0;
//! Get state used to initialize the RNG(Random Number Generator).
virtual uint64 getRandomGeneratorState() const = 0;
//! Set custom model coefficient constraint function.
//! A custom function that takes model coefficients and returns whether the model is acceptable or not.
virtual void
setCustomModelConstraints(const ModelConstraintFunction &custom_model_constraints) = 0;
//! Get custom model coefficient constraint function.
virtual const ModelConstraintFunction &getCustomModelConstraints() const = 0;
};
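For orientation, the interface above can be exercised as follows. This is an illustrative sketch, not the @snippet sample referenced in the docs and not part of this patch; the synthetic cloud, the 0.1 threshold, and the normal-direction constraint are arbitrary choices for the example.
// Illustrative sketch of the SACSegmentation API (not part of this patch).
#include <opencv2/core.hpp>
#include <opencv2/3d/ptcloud.hpp>
#include <cmath>
#include <iostream>
#include <vector>
int main()
{
    // Synthetic cloud: a noisy grid near the plane z = 0 plus one outlier.
    std::vector<cv::Point3f> cloud;
    for (int i = 0; i < 100; ++i)
        cloud.emplace_back((float)(i % 10), (float)(i / 10), 0.01f * (i % 3));
    cloud.emplace_back(0.f, 0.f, 10.f);
    cv::Ptr<cv::SACSegmentation> sac = cv::SACSegmentation::create(
            cv::SAC_MODEL_PLANE, cv::SAC_METHOD_RANSAC,
            /*threshold=*/0.1, /*max_iterations=*/500);
    sac->setNumberOfModelsExpected(1);
    // Optional constraint: only accept planes whose normal is roughly the Z axis.
    sac->setCustomModelConstraints([](const std::vector<double> &c) {
        double norm = std::sqrt(c[0] * c[0] + c[1] * c[1] + c[2] * c[2]);
        return norm > 0 && std::abs(c[2]) / norm > 0.9;
    });
    cv::Mat labels, coefficients;
    int n_models = sac->segment(cloud, labels, coefficients);
    std::cout << n_models << " model(s):\n" << coefficients << std::endl;
    return 0;
}
With number_of_models_expected greater than 1, labels would hold 0 for outliers and 1, 2, ... for the successively extracted models, as described in the segment() documentation above.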
/**
* @brief Point cloud sampling by Voxel Grid filter downsampling.
*
@ -27,7 +180,7 @@ namespace cv {
* @return The number of points actually sampled.
*/
CV_EXPORTS int voxelGridSampling(OutputArray sampled_point_flags, InputArray input_pts,
float length, float width, float height);
float length, float width, float height);
/**
* @brief Point cloud sampling by randomly select points.
@ -43,7 +196,7 @@ CV_EXPORTS int voxelGridSampling(OutputArray sampled_point_flags, InputArray inp
* if it is nullptr, theRNG () is used instead.
*/
CV_EXPORTS void randomSampling(OutputArray sampled_pts, InputArray input_pts,
int sampled_pts_size, RNG *rng = nullptr);
int sampled_pts_size, RNG *rng = nullptr);
/**
* @overload
@ -57,7 +210,7 @@ CV_EXPORTS void randomSampling(OutputArray sampled_pts, InputArray input_pts,
* if it is nullptr, theRNG () is used instead.
*/
CV_EXPORTS void randomSampling(OutputArray sampled_pts, InputArray input_pts,
float sampled_scale, RNG *rng = nullptr);
float sampled_scale, RNG *rng = nullptr);
/**
* @brief Point cloud sampling by Farthest Point Sampling(FPS).
@ -84,7 +237,7 @@ CV_EXPORTS void randomSampling(OutputArray sampled_pts, InputArray input_pts,
* @return The number of points actually sampled.
*/
CV_EXPORTS int farthestPointSampling(OutputArray sampled_point_flags, InputArray input_pts,
int sampled_pts_size, float dist_lower_limit = 0, RNG *rng = nullptr);
int sampled_pts_size, float dist_lower_limit = 0, RNG *rng = nullptr);
/**
* @overload
@ -101,7 +254,7 @@ CV_EXPORTS int farthestPointSampling(OutputArray sampled_point_flags, InputArray
* @return The number of points actually sampled.
*/
CV_EXPORTS int farthestPointSampling(OutputArray sampled_point_flags, InputArray input_pts,
float sampled_scale, float dist_lower_limit = 0, RNG *rng = nullptr);
float sampled_scale, float dist_lower_limit = 0, RNG *rng = nullptr);
//! @} _3d
} //end namespace cv
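The sampling functions declared above follow the same InputArray convention. A quick, illustrative use of randomSampling (not part of this patch); the cloud is synthetic, and only the size of the output is inspected here.
// Illustrative sketch of the sampling API (not part of this patch).
#include <opencv2/core.hpp>
#include <opencv2/3d/ptcloud.hpp>
#include <iostream>
#include <vector>
int main()
{
    // Synthetic cloud: 1000 random points in a cube.
    cv::RNG rng(12345);
    std::vector<cv::Point3f> cloud(1000);
    for (auto &p : cloud)
        p = cv::Point3f(rng.uniform(-1.f, 1.f), rng.uniform(-1.f, 1.f), rng.uniform(-1.f, 1.f));
    // Keep 100 randomly selected points.
    cv::Mat sampled;
    cv::randomSampling(sampled, cloud, 100);
    // Scale overload: keep roughly 10% of the points.
    cv::Mat sampled10;
    cv::randomSampling(sampled10, cloud, 0.1f);
    std::cout << sampled.total() << " and " << sampled10.total()
              << " values written" << std::endl;
    return 0;
}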

@ -0,0 +1,161 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
#include "../precomp.hpp"
#include "ptcloud_utils.hpp"
namespace cv {
void
getPointsMatFromInputArray(InputArray input_pts, Mat &mat, int arrangement_of_points, bool clone_data)
{
CV_Check(input_pts.dims(), input_pts.dims() < 3,
"Only support data with dimension less than 3.");
// Guarantee that the data can form an N×3 point cloud
int rows = input_pts.rows(), cols = input_pts.cols(), channels = input_pts.channels();
size_t total = rows * cols * channels;
CV_Check(total, total % 3 == 0,
"total = input_pts.rows() * input_pts.cols() * input_pts.channels() must be an integer multiple of 3");
/**
Layout of point cloud data in memory space.
arrangement 0 : x1, y1, z1, ..., xn, yn, zn
For example, the input is std::vector<Point3d>, or std::vector<int>,
or cv::Mat with type N×1 CV_32FC3
arrangement 1 : x1, ..., xn, y1, ..., yn, z1, ..., zn
For example, the input is cv::Mat with type 3×N CV_32FC1
*/
int ori_arrangement = (channels == 1 && rows == 3 && cols != 3) ? 1 : 0;
// Convert to single channel without copying the data.
mat = ori_arrangement == 0 ? input_pts.getMat().reshape(1, (int) (total / 3))
: input_pts.getMat();
if (ori_arrangement != arrangement_of_points)
{
Mat tmp;
transpose(mat, tmp);
swap(mat, tmp);
}
if (mat.type() != CV_32F)
{
Mat tmp;
mat.convertTo(tmp, CV_32F); // Use float to store data
swap(mat, tmp);
}
if (clone_data || (!mat.isContinuous()))
{
mat = mat.clone();
}
}
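The arrangement handling above boils down to a reshape plus an optional transpose; a tiny illustration with plain cv::Mat operations (synthetic data, not part of this patch):
// Illustration of the two point layouts handled above (not part of this patch).
#include <opencv2/core.hpp>
#include <vector>
int main()
{
    // Arrangement 0: x1, y1, z1, ..., xn, yn, zn (e.g. std::vector<Point3f>).
    std::vector<cv::Point3f> pts = { {1.f, 2.f, 3.f}, {4.f, 5.f, 6.f}, {7.f, 8.f, 9.f} };
    // View the vector as an Nx3 single-channel CV_32F matrix without copying.
    cv::Mat nx3 = cv::Mat(pts).reshape(1, (int)pts.size());
    // Arrangement 1: x1..xn, y1..yn, z1..zn, i.e. a 3xN CV_32F matrix.
    cv::Mat pts3xN;
    cv::transpose(nx3, pts3xN);   // transpose copies, so the result is continuous
    CV_Assert(pts3xN.rows == 3 && pts3xN.cols == (int)pts.size() && pts3xN.type() == CV_32F);
    return 0;
}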
void
copyPointDataByIdxs(const Mat &src, Mat &dst, const std::vector<int> &idxs, int dst_size,
int arrangement_of_points, bool is_parallel)
{
CV_CheckDepthEQ(src.depth(), CV_32F, "Data with only depth CV_32F are supported");
CV_CheckChannelsEQ(src.channels(), 1, "Data with only one channel are supported");
const int idxs_size = (int) idxs.size();
dst_size = (dst_size < 0 || dst_size > idxs_size) ? idxs_size : dst_size;
if (arrangement_of_points == 1)
{
dst = Mat(3, dst_size, CV_32F);
const int src_size = src.rows * src.cols / 3;
const float *const src_ptr_x = (float *) src.data;
const float *const src_ptr_y = src_ptr_x + src_size;
const float *const src_ptr_z = src_ptr_y + src_size;
float *const dst_ptr_x = (float *) dst.data;
float *const dst_ptr_y = dst_ptr_x + dst_size;
float *const dst_ptr_z = dst_ptr_y + dst_size;
int i = 0;
if (is_parallel) {
int number_of_threads = std::max(getNumThreads(), 1);
int block_size = dst_size / number_of_threads;
parallel_for_(Range(0, number_of_threads), [&](const Range &range) {
int _start = range.start * block_size;
int _end = _start + block_size;
for (int _i = _start; _i < _end; ++_i) {
int _src_idx = idxs[_i];
dst_ptr_x[_i] = src_ptr_x[_src_idx];
dst_ptr_y[_i] = src_ptr_y[_src_idx];
dst_ptr_z[_i] = src_ptr_z[_src_idx];
}
});
i = block_size * number_of_threads;
}
for (; i < dst_size; ++i)
{
int src_idx = idxs[i];
dst_ptr_x[i] = src_ptr_x[src_idx];
dst_ptr_y[i] = src_ptr_y[src_idx];
dst_ptr_z[i] = src_ptr_z[src_idx];
}
}
else if (arrangement_of_points == 0)
{
dst = Mat(dst_size, 3, CV_32F);
const float *const src_ptr = (float *) src.data;
float *const dst_ptr = (float *) dst.data;
int i = 0;
if (is_parallel) {
int number_of_threads = std::max(getNumThreads(), 1);
int block_size = dst_size / number_of_threads;
parallel_for_(Range(0, number_of_threads), [&](const Range &range) {
int _start = range.start * block_size;
int _end = _start + block_size;
for (int _i = _start; _i < _end; ++_i) {
const float *_src_ptr_base = src_ptr + 3 * idxs[_i];
float *_dst_ptr_base = dst_ptr + 3 * _i;
_dst_ptr_base[0] = _src_ptr_base[0];
_dst_ptr_base[1] = _src_ptr_base[1];
_dst_ptr_base[2] = _src_ptr_base[2];
}
});
i = block_size * number_of_threads;
}
for (; i < dst_size; ++i)
{
const float *src_ptr_base = src_ptr + 3 * idxs[i];
float *dst_ptr_base = dst_ptr + 3 * i;
dst_ptr_base[0] = src_ptr_base[0];
dst_ptr_base[1] = src_ptr_base[1];
dst_ptr_base[2] = src_ptr_base[2];
}
}
}
void copyPointDataByFlags(const Mat &src, Mat &dst, const std::vector<bool> &flags,
int arrangement_of_points, bool is_parallel)
{
int pt_size = (int) flags.size();
std::vector<int> idxs;
for (int i = 0; i < pt_size; ++i)
{
if (flags[i])
{
idxs.emplace_back(i);
}
}
copyPointDataByIdxs(src, dst, idxs, -1, arrangement_of_points, is_parallel);
}
}
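For reference, the internal helpers above could be used from within the 3d module like this (illustrative only; they are not exported, and the example data and helper name are made up):
// Illustrative, module-internal usage of copyPointDataByFlags (not part of this patch).
#include <opencv2/core.hpp>
#include "ptcloud_utils.hpp"   // internal header, modules/3d/src/ptcloud/
#include <vector>
// Keep every second column of a 3xN CV_32F cloud stored in column layout (arrangement 1).
static cv::Mat keepEverySecondPoint(const cv::Mat &src3xN)
{
    std::vector<bool> keep((size_t)src3xN.cols);
    for (int i = 0; i < src3xN.cols; ++i)
        keep[i] = (i % 2 == 0);
    cv::Mat dst;
    cv::copyPointDataByFlags(src3xN, dst, keep, /*arrangement_of_points=*/1);
    return dst;   // 3 x ceil(N/2), CV_32F
}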

@ -26,52 +26,39 @@ namespace cv {
* Nx3(x1, y1, z1, ..., xn, yn, zn) to 3xN(x1, ..., xn, y1, ..., yn, z1, ..., zn)
*
*/
inline void _getMatFromInputArray(InputArray input_pts, Mat &mat,
int arrangement_of_points = 1, bool clone_data = false)
{
CV_Check(input_pts.dims(), input_pts.dims() < 3,
"Only support data with dimension less than 3.");
void getPointsMatFromInputArray(InputArray input_pts, Mat &mat,
int arrangement_of_points = 1, bool clone_data = false);
// Guaranteed data can construct N×3 point clouds
int rows = input_pts.rows(), cols = input_pts.cols(), channels = input_pts.channels();
size_t total = rows * cols * channels;
CV_Check(total, total % 3 == 0,
"total = input_pts.rows() * input_pts.cols() * input_pts.channels() must be an integer multiple of 3");
/**
Layout of point cloud data in memory space.
arrangement 0 : x1, y1, z1, ..., xn, yn, zn
For example, the input is std::vector<Point3d>, or std::vector<int>,
or cv::Mat with type N×1 CV_32FC3
arrangement 1 : x1, ..., xn, y1, ..., yn, z1, ..., zn
For example, the input is cv::Mat with type 3×N CV_32FC1
*/
int ori_arrangement = (channels == 1 && rows == 3 && cols != 3) ? 1 : 0;
// Convert to single channel without copying the data.
mat = ori_arrangement == 0 ? input_pts.getMat().reshape(1, (int) (total / 3))
: input_pts.getMat();
if (ori_arrangement != arrangement_of_points)
{
Mat tmp;
transpose(mat, tmp);
swap(mat, tmp);
}
if (mat.type() != CV_32F)
{
Mat tmp;
mat.convertTo(tmp, CV_32F); // Use float to store data
swap(mat, tmp);
}
if (clone_data || (!mat.isContinuous()))
{
mat = mat.clone();
}
/** @brief Copy the xyz data of points at the specified indices.
*
* @param src CV_32F Mat with size Nx3/3xN.
* @param[out] dst CV_32F Mat with size Mx3/3xM.
* @param idxs Indices of the points to copy from src to dst.
* @param dst_size Only the first dst_size entries of idxs are used.
* If it is less than 0 or greater than idxs.size(),
* it is automatically adjusted to idxs.size().
* @param arrangement_of_points The arrangement of point data in the matrix, \n
* 0 by row (Nx3, [x1, y1, z1, ..., xn, yn, zn]), \n
* 1 by column (3xN, [x1, ..., xn, y1, ..., yn, z1, ..., zn]).
* @param is_parallel Whether to enable parallelism.
* The number of threads used is obtained through cv::getNumThreads().
*/
void copyPointDataByIdxs(const Mat &src, Mat &dst, const std::vector<int> &idxs, int dst_size = -1,
int arrangement_of_points = 1, bool is_parallel = false);
}
/** @overload
*
* @param src CV_32F Mat with size Nx3/3xN.
* @param[out] dst CV_32F Mat with size Mx3/3xM.
* @param flags If flags[i] is true, the i-th point will be copied from src to dst.
* @param arrangement_of_points The arrangement of point data in the matrix, \n
* 0 by row (Nx3, [x1, y1, z1, ..., xn, yn, zn]), \n
* 1 by column (3xN, [x1, ..., xn, y1, ..., yn, z1, ..., zn]).
* @param is_parallel Whether to enable parallelism.
* The number of threads used is obtained through cv::getNumThreads().
*/
void copyPointDataByFlags(const Mat &src, Mat &dst, const std::vector<bool> &flags,
int arrangement_of_points = 1, bool is_parallel = false);
}

@ -0,0 +1,46 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
#ifndef OPENCV_3D_PTCLOUD_WRAPPER_HPP
#define OPENCV_3D_PTCLOUD_WRAPPER_HPP
namespace cv {
/** @brief 3D Point Cloud Wrapper.
A wrapper that encapsulates pointers into the point cloud data,
making access to the x/y/z channels simpler.
@note The point cloud data XYZ matrix should be 3xN, single channel, CV_32F, continuous in memory.
*/
class PointCloudWrapper
{
protected:
const Mat *points_mat;
const int pts_cnt;
const float *pts_ptr_x;
const float *pts_ptr_y;
const float *pts_ptr_z;
public:
explicit PointCloudWrapper(const Mat &points_)
: points_mat(&points_), pts_cnt(points_.rows * points_.cols / 3),
pts_ptr_x((float *) points_.data), pts_ptr_y(pts_ptr_x + pts_cnt),
pts_ptr_z(pts_ptr_y + pts_cnt)
{
CV_CheckDepthEQ(points_.depth(), CV_32F,
"Data with only depth CV_32F are supported");
CV_CheckChannelsEQ(points_.channels(), 1,
"Data with only one channel are supported");
CV_CheckEQ(points_.rows, 3,
"Data with only Mat with 3xN are supported");
CV_Assert(points_.isContinuous());
}
};
} // cv::
#endif //OPENCV_3D_PTCLOUD_WRAPPER_HPP
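PointCloudWrapper is intended to be inherited by the solver and error-estimator classes added later in this patch. As a hedged illustration (not part of this patch, class name is made up), a trivial consumer relying only on the protected members declared above might look like this:
// Illustrative subclass of the internal PointCloudWrapper (not part of this patch).
#include <opencv2/core.hpp>
#include "ptcloud_wrapper.hpp"   // internal header, modules/3d/src/ptcloud/
class CentroidDemo : public cv::PointCloudWrapper
{
public:
    explicit CentroidDemo(const cv::Mat &points3xN) : cv::PointCloudWrapper(points3xN) {}
    // Average the x/y/z channels through the wrapped pointers.
    cv::Point3f centroid() const
    {
        double sx = 0, sy = 0, sz = 0;
        for (int i = 0; i < pts_cnt; ++i)
        {
            sx += pts_ptr_x[i];
            sy += pts_ptr_y[i];
            sz += pts_ptr_z[i];
        }
        const double inv = pts_cnt > 0 ? 1.0 / pts_cnt : 0.0;
        return cv::Point3f((float)(sx * inv), (float)(sy * inv), (float)(sz * inv));
    }
};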

@ -0,0 +1,273 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
#include "../precomp.hpp"
#include "sac_segmentation.hpp"
#include "opencv2/3d/ptcloud.hpp"
#include "ptcloud_utils.hpp"
#include "../usac.hpp"
namespace cv {
// namespace _3d {
/**
*
* @param model_type The type of 3D model.
* @return The minimum number of sample points needed to determine a model of this type.
*/
inline int sacModelMinimumSampleSize(SacModelType model_type)
{
switch (model_type)
{
case SAC_MODEL_PLANE:
return 3;
case SAC_MODEL_SPHERE:
return 4;
default:
CV_Error(cv::Error::StsNotImplemented, "SacModel Minimum Sample Size not defined!");
}
}
/////////////////////////////////////////////////////////////////////////////////////////////
//////////////////////////////// SACSegmentationImpl //////////////////////////////////////
//-------------------------- segmentSingle -----------------------
int
SACSegmentationImpl::segmentSingle(Mat &points, std::vector<bool> &label, Mat &model_coefficients)
{
CV_CheckDepthEQ(points.depth(), CV_32F, "Data with only depth CV_32F are supported");
CV_CheckChannelsEQ(points.channels(), 1, "Data with only one channel are supported");
CV_CheckEQ(points.rows, 3, "Data with only Mat with 3xN are supported");
// Since the error function outputs squared distances,
// square the threshold as well
double _threshold = threshold * threshold;
int state = (int) rng_state;
const int points_size = points.rows * points.cols / 3;
const double _radius_min = radius_min, _radius_max = radius_max;
const ModelConstraintFunction &_custom_constraint = custom_model_constraints;
ModelConstraintFunction &constraint_func = custom_model_constraints;
// RANSAC
using namespace usac;
int _max_iterations_before_lo = 100, _max_num_hypothesis_to_test_before_rejection = 15;
SamplingMethod _sampling_method = SamplingMethod::SAMPLING_UNIFORM;
LocalOptimMethod _lo_method = LocalOptimMethod::LOCAL_OPTIM_INNER_LO;
ScoreMethod _score_method = ScoreMethod::SCORE_METHOD_RANSAC;
NeighborSearchMethod _neighbors_search_method = NeighborSearchMethod::NEIGH_GRID;
// Local optimization
int lo_sample_size = 16, lo_inner_iterations = 15, lo_iterative_iterations = 8,
lo_thr_multiplier = 15, lo_iter_sample_size = 30;
Ptr <Sampler> sampler;
Ptr <Quality> quality;
Ptr <ModelVerifier> verifier;
Ptr <LocalOptimization> lo;
Ptr <Degeneracy> degeneracy;
Ptr <TerminationCriteria> termination;
Ptr <FinalModelPolisher> polisher;
Ptr <MinimalSolver> min_solver;
Ptr <NonMinimalSolver> non_min_solver;
Ptr <Estimator> estimator;
Ptr <usac::Error> error;
switch (sac_model_type)
{
case SAC_MODEL_PLANE:
min_solver = PlaneModelMinimalSolver::create(points);
non_min_solver = PlaneModelNonMinimalSolver::create(points);
error = PlaneModelError::create(points);
break;
// case SAC_MODEL_CYLINDER:
// min_solver = CylinderModelMinimalSolver::create(points);
// non_min_solver = CylinderModelNonMinimalSolver::create(points);
// error = CylinderModelError::create(points);
// break;
case SAC_MODEL_SPHERE:
min_solver = SphereModelMinimalSolver::create(points);
non_min_solver = SphereModelNonMinimalSolver::create(points);
error = SphereModelError::create(points);
constraint_func = [_radius_min, _radius_max, _custom_constraint]
(const std::vector<double> &model_coeffs) {
double radius = model_coeffs[3];
return radius >= _radius_min && radius <= _radius_max &&
(!_custom_constraint || _custom_constraint(model_coeffs));
};
break;
default:
CV_Error(cv::Error::StsNotImplemented, "SAC_MODEL type is not implemented!");
}
const int min_sample_size = min_solver->getSampleSize();
if (points_size < min_sample_size)
{
return 0;
}
estimator = PointCloudModelEstimator::create(min_solver, non_min_solver, constraint_func);
sampler = UniformSampler::create(state++, min_sample_size, points_size);
quality = RansacQuality::create(points_size, _threshold, error);
verifier = ModelVerifier::create();
Ptr <RandomGenerator> lo_sampler = UniformRandomGenerator::create(state++, points_size,
lo_sample_size);
lo = InnerIterativeLocalOptimization::create(estimator, quality, lo_sampler, points_size,
_threshold, false, lo_iter_sample_size, lo_inner_iterations,
lo_iterative_iterations, lo_thr_multiplier);
degeneracy = makePtr<Degeneracy>();
termination = StandardTerminationCriteria::create
(confidence, points_size, min_sample_size, max_iterations);
Ptr <SimpleUsacConfig> usacConfig = SimpleUsacConfig::create();
usacConfig->setMaxIterations(max_iterations);
usacConfig->setMaxIterationsBeforeLo(_max_iterations_before_lo);
usacConfig->setMaxNumHypothesisToTestBeforeRejection(
_max_num_hypothesis_to_test_before_rejection);
usacConfig->setRandomGeneratorState(state);
usacConfig->setParallel(is_parallel);
usacConfig->setNeighborsSearchMethod(_neighbors_search_method);
usacConfig->setSamplingMethod(_sampling_method);
usacConfig->setScoreMethod(_score_method);
usacConfig->setLoMethod(_lo_method);
// The mask is needed to remove the inliers of the model that has already been segmented
usacConfig->maskRequired(true);
UniversalRANSAC ransac(usacConfig, points_size, estimator, quality, sampler,
termination, verifier, degeneracy, lo, polisher);
Ptr <usac::RansacOutput> ransac_output;
if (!ransac.run(ransac_output))
{
return 0;
}
model_coefficients = ransac_output->getModel();
label = ransac_output->getInliersMask();
return ransac_output->getNumberOfInliers();
}
//-------------------------- segment -----------------------
int
SACSegmentationImpl::segment(InputArray input_pts, OutputArray labels,
OutputArray models_coefficients)
{
Mat points;
// Get Mat with 3xN CV_32F
getPointsMatFromInputArray(input_pts, points, 1);
int pts_size = points.rows * points.cols / 3;
std::vector<int> _labels(pts_size, 0);
std::vector<Mat> _models_coefficients;
// Keep an index array mapping the current points back to the original points
AutoBuffer<int> ori_pts_idx(pts_size);
int *pts_idx_ptr = ori_pts_idx.data();
for (int i = 0; i < pts_size; ++i)
pts_idx_ptr[i] = i;
int min_sample_size = sacModelMinimumSampleSize(sac_model_type);
for (int model_num = 1;
pts_size >= min_sample_size && model_num <= number_of_models_expected; ++model_num)
{
Mat model_coefficients;
std::vector<bool> label;
int best_inls = segmentSingle(points, label, model_coefficients);
if (best_inls < min_sample_size)
break;
_models_coefficients.emplace_back(model_coefficients);
// Move the outliers to a new point cloud and continue segmenting the next model
if (model_num != number_of_models_expected)
{
cv::Mat tmp_pts(points);
int next_pts_size = pts_size - best_inls;
points = cv::Mat(3, next_pts_size, CV_32F);
// Base pointers for accessing the x, y, z point data
float *const tmp_pts_ptr_x = (float *) tmp_pts.data;
float *const tmp_pts_ptr_y = tmp_pts_ptr_x + pts_size;
float *const tmp_pts_ptr_z = tmp_pts_ptr_y + pts_size;
float *const next_pts_ptr_x = (float *) points.data;
float *const next_pts_ptr_y = next_pts_ptr_x + next_pts_size;
float *const next_pts_ptr_z = next_pts_ptr_y + next_pts_size;
for (int j = 0, k = 0; k < pts_size; ++k)
{
if (label[k])
{
// mark a label on this point
_labels[pts_idx_ptr[k]] = model_num;
}
else
{
// If it is not an inlier of the current model,
// keep it for the next iteration to find a new model
pts_idx_ptr[j] = pts_idx_ptr[k];
next_pts_ptr_x[j] = tmp_pts_ptr_x[k];
next_pts_ptr_y[j] = tmp_pts_ptr_y[k];
next_pts_ptr_z[j] = tmp_pts_ptr_z[k];
++j;
}
}
pts_size = next_pts_size;
}
else
{
for (int k = 0; k < pts_size; ++k)
{
if (label[k])
_labels[pts_idx_ptr[k]] = model_num;
}
}
}
int number_of_models = (int) _models_coefficients.size();
if (labels.needed())
{
if (number_of_models != 0)
{
Mat(_labels).copyTo(labels);
}
else
{
labels.clear();
}
}
if (models_coefficients.needed())
{
if (number_of_models != 0)
{
Mat result;
for (int i = 0; i < number_of_models; i++)
{
result.push_back(_models_coefficients[i]);
}
result.copyTo(models_coefficients);
}
else
{
models_coefficients.clear();
}
}
return number_of_models;
}
// } // _3d::
} // cv::

@ -0,0 +1,143 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
#ifndef OPENCV_3D_SAC_SEGMENTATION_HPP
#define OPENCV_3D_SAC_SEGMENTATION_HPP
#include "opencv2/3d/ptcloud.hpp"
namespace cv {
class SACSegmentationImpl : public SACSegmentation
{
private:
//! The type of sample consensus model used.
SacModelType sac_model_type;
//! The type of sample consensus method used.
SacMethod sac_method;
//! A point is considered an inlier if its distance to the model is less than the threshold.
double threshold;
//! The minimum and maximum radius limits for the model.
//! Only used for models whose model parameters include a radius.
double radius_min, radius_max;
//! The maximum number of iterations to attempt.
int max_iterations;
//! Confidence that ensures at least one of the selections is an error-free set of data points.
double confidence;
//! Expected number of models.
int number_of_models_expected;
bool is_parallel;
//! 64-bit value used to initialize the RNG(Random Number Generator).
uint64 rng_state;
//! A user defined function that takes model coefficients and returns whether the model is acceptable or not.
ModelConstraintFunction custom_model_constraints;
/**
* @brief Execute segmentation of a single model using the sample consensus method.
*
* @param points Point cloud data; it must be a 3xN CV_32F Mat.
* @param label label[i] is true if point i is an inlier of the model.
* @param model_coefficients The resultant model coefficients.
* @return number of model inliers
*/
int segmentSingle(Mat &points, std::vector<bool> &label, Mat &model_coefficients);
public:
//! Constructor taking the model type, method, threshold and iteration limit; other settings use default values
SACSegmentationImpl(SacModelType sac_model_type_, SacMethod sac_method_,
double threshold_, int max_iterations_)
: sac_model_type(sac_model_type_), sac_method(sac_method_), threshold(threshold_),
radius_min(DBL_MIN), radius_max(DBL_MAX),
max_iterations(max_iterations_), confidence(0.999), number_of_models_expected(1),
is_parallel(false), rng_state(0),
custom_model_constraints()
{
}
int segment(InputArray input_pts, OutputArray labels, OutputArray models_coefficients) override;
//-------------------------- Getter and Setter -----------------------
void setSacModelType(SacModelType sac_model_type_) override
{ sac_model_type = sac_model_type_; }
SacModelType getSacModelType() const override
{ return sac_model_type; }
void setSacMethodType(SacMethod sac_method_) override
{ sac_method = sac_method_; }
SacMethod getSacMethodType() const override
{ return sac_method; }
void setDistanceThreshold(double threshold_) override
{ threshold = threshold_; }
double getDistanceThreshold() const override
{ return threshold; }
void setRadiusLimits(double radius_min_, double radius_max_) override
{ radius_min = radius_min_;
radius_max = radius_max_; }
void getRadiusLimits(double &radius_min_, double &radius_max_) const override
{ radius_min_ = radius_min;
radius_max_ = radius_max; }
void setMaxIterations(int max_iterations_) override
{ max_iterations = max_iterations_; }
int getMaxIterations() const override
{ return max_iterations; }
void setConfidence(double confidence_) override
{ confidence = confidence_; }
double getConfidence() const override
{ return confidence; }
void setNumberOfModelsExpected(int number_of_models_expected_) override
{ number_of_models_expected = number_of_models_expected_; }
int getNumberOfModelsExpected() const override
{ return number_of_models_expected; }
void setParallel(bool is_parallel_) override { is_parallel = is_parallel_; }
bool isParallel() const override { return is_parallel; }
void setRandomGeneratorState(uint64 rng_state_) override
{ rng_state = rng_state_; }
uint64 getRandomGeneratorState() const override
{ return rng_state; }
void
setCustomModelConstraints(const ModelConstraintFunction &custom_model_constraints_) override
{ custom_model_constraints = custom_model_constraints_; }
const ModelConstraintFunction &getCustomModelConstraints() const override
{ return custom_model_constraints; }
};
Ptr <SACSegmentation> SACSegmentation::create(SacModelType sac_model_type_, SacMethod sac_method_,
double threshold_, int max_iterations_)
{
return makePtr<SACSegmentationImpl>(sac_model_type_, sac_method_, threshold_, max_iterations_);
}
} //end namespace cv
#endif //OPENCV_3D_SAC_SEGMENTATION_HPP

@ -30,7 +30,7 @@ int voxelGridSampling(OutputArray sampled_point_flags, InputArray input_pts,
// Get input point cloud data
Mat ori_pts;
_getMatFromInputArray(input_pts, ori_pts, 0);
getPointsMatFromInputArray(input_pts, ori_pts, 0);
const int ori_pts_size = ori_pts.rows;
@ -150,7 +150,7 @@ randomSampling(OutputArray sampled_pts, InputArray input_pts, const int sampled_
// Get input point cloud data
Mat ori_pts;
_getMatFromInputArray(input_pts, ori_pts, 0);
getPointsMatFromInputArray(input_pts, ori_pts, 0);
const int ori_pts_size = ori_pts.rows;
CV_CheckLT(sampled_pts_size, ori_pts_size,
@ -193,7 +193,7 @@ randomSampling(OutputArray sampled_pts, InputArray input_pts, const float sample
CV_CheckGT(sampled_scale, 0.0f, "The point cloud sampled scale must greater than 0.");
CV_CheckLT(sampled_scale, 1.0f, "The point cloud sampled scale must less than 1.");
Mat ori_pts;
_getMatFromInputArray(input_pts, ori_pts, 0);
getPointsMatFromInputArray(input_pts, ori_pts, 0);
randomSampling(sampled_pts, input_pts, cvCeil(sampled_scale * ori_pts.rows), rng);
} // randomSampling()
@ -220,7 +220,7 @@ int farthestPointSampling(OutputArray sampled_point_flags, InputArray input_pts,
Mat ori_pts;
// In order to keep the points continuous in memory (which allows better support for SIMD),
// the position of the points may be changed, data copying is mandatory
_getMatFromInputArray(input_pts, ori_pts, 1, true);
getPointsMatFromInputArray(input_pts, ori_pts, 1, true);
const int ori_pts_size = ori_pts.rows * ori_pts.cols / 3;
CV_CheckLT(sampled_pts_size, ori_pts_size,
@ -270,14 +270,11 @@ int farthestPointSampling(OutputArray sampled_point_flags, InputArray input_pts,
int next_pt = sampled_cnt;
int i = sampled_cnt;
#ifdef CV_SIMD
int k = (ori_pts_size - sampled_cnt) / v_float32::nlanes;
int end = sampled_cnt + v_float32::nlanes * k;
v_float32 v_last_p_x = vx_setall_f32(last_pt_x);
v_float32 v_last_p_y = vx_setall_f32(last_pt_y);
v_float32 v_last_p_z = vx_setall_f32(last_pt_z);
for (; i < end; i += v_float32::nlanes)
for (; i <= ori_pts_size - v_float32::nlanes; i += v_float32::nlanes)
{
v_float32 vx_diff = v_last_p_x - vx_load(ori_pts_ptr_x + i);
v_float32 vy_diff = v_last_p_y - vx_load(ori_pts_ptr_y + i);
@ -359,7 +356,7 @@ int farthestPointSampling(OutputArray sampled_point_flags, InputArray input_pts,
CV_CheckGE(dist_lower_limit, 0.0f,
"The distance lower bound must be greater than or equal to 0.");
Mat ori_pts;
_getMatFromInputArray(input_pts, ori_pts, 1);
getPointsMatFromInputArray(input_pts, ori_pts, 1);
return farthestPointSampling(sampled_point_flags, input_pts,
cvCeil(sampled_scale * ori_pts.cols), dist_lower_limit, rng);
} // farthestPointSampling()

@ -11,6 +11,42 @@ enum VerificationMethod { NullVerifier, SprtVerifier };
enum PolishingMethod { NonePolisher, LSQPolisher };
enum ErrorMetric {DIST_TO_LINE, SAMPSON_ERR, SGD_ERR, SYMM_REPR_ERR, FORW_REPR_ERR, RERPOJ};
class UsacConfig : public Algorithm {
public:
virtual int getMaxIterations () const = 0;
virtual int getMaxIterationsBeforeLO () const = 0;
virtual int getMaxNumHypothesisToTestBeforeRejection() const = 0;
virtual int getRandomGeneratorState () const = 0;
virtual bool isParallel() const = 0;
virtual NeighborSearchMethod getNeighborsSearchMethod () const = 0;
virtual SamplingMethod getSamplingMethod () const = 0;
virtual ScoreMethod getScoreMethod () const = 0;
virtual LocalOptimMethod getLOMethod () const = 0;
virtual bool isMaskRequired () const = 0;
};
class SimpleUsacConfig : public UsacConfig {
public:
virtual void setMaxIterations(int max_iterations_) = 0;
virtual void setMaxIterationsBeforeLo(int max_iterations_before_lo_) = 0;
virtual void setMaxNumHypothesisToTestBeforeRejection(int max_num_hypothesis_to_test_before_rejection_) = 0;
virtual void setRandomGeneratorState(int random_generator_state_) = 0;
virtual void setParallel(bool is_parallel) = 0;
virtual void setNeighborsSearchMethod(NeighborSearchMethod neighbors_search_method_) = 0;
virtual void setSamplingMethod(SamplingMethod sampling_method_) = 0;
virtual void setScoreMethod(ScoreMethod score_method_) = 0;
virtual void setLoMethod(LocalOptimMethod lo_method_) = 0;
virtual void maskRequired(bool need_mask_) = 0;
static Ptr<SimpleUsacConfig> create();
};
// Abstract Error class
class Error : public Algorithm {
public:
@ -58,6 +94,18 @@ public:
static Ptr<ReprojectionErrorAffine> create(const Mat &points);
};
// Error for plane model
class PlaneModelError : public Error {
public:
static Ptr<PlaneModelError> create(const Mat &points);
};
// Error for sphere model
class SphereModelError : public Error {
public:
static Ptr<SphereModelError> create(const Mat &points);
};
// Normalizing transformation of data points
class NormTransform : public Algorithm {
public:
@ -125,6 +173,18 @@ public:
static Ptr<AffineMinimalSolver> create(const Mat &points_);
};
//-------------------------- 3D PLANE -----------------------
class PlaneModelMinimalSolver : public MinimalSolver {
public:
static Ptr<PlaneModelMinimalSolver> create(const Mat &points_);
};
//-------------------------- 3D SPHERE -----------------------
class SphereModelMinimalSolver : public MinimalSolver {
public:
static Ptr<SphereModelMinimalSolver> create(const Mat &points_);
};
//////////////////////////////////////// NON MINIMAL SOLVER ///////////////////////////////////////
class NonMinimalSolver : public Algorithm {
public:
@ -173,6 +233,18 @@ public:
static Ptr<AffineNonMinimalSolver> create(const Mat &points_);
};
//-------------------------- 3D PLANE -----------------------
class PlaneModelNonMinimalSolver : public NonMinimalSolver {
public:
static Ptr<PlaneModelNonMinimalSolver> create(const Mat &points_);
};
//-------------------------- 3D SPHERE -----------------------
class SphereModelNonMinimalSolver : public NonMinimalSolver {
public:
static Ptr<SphereModelNonMinimalSolver> create(const Mat &points_);
};
//////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////// SCORE ///////////////////////////////////////////
class Score {
@ -385,6 +457,23 @@ public:
const Ptr<NonMinimalSolver> &non_min_solver_);
};
class PointCloudModelEstimator : public Estimator {
public:
//! Custom function that takes the model coefficients and returns whether the model is acceptable or not.
//! Same as cv::SACSegmentation::ModelConstraintFunction in ptcloud.hpp.
using ModelConstraintFunction = std::function<bool(const std::vector<double> &/*model_coefficients*/)>;
/** @brief Methods for creating PointCloudModelEstimator.
*
* @param min_solver_ Minimal solver for estimating the model from a minimal sample.
* @param non_min_solver_ Non-minimal solver for estimating the model from a non-minimal sample.
* @param custom_model_constraints_ Custom model constraint for filtering the estimated models.
* @return Ptr\<PointCloudModelEstimator\>
*/
static Ptr<PointCloudModelEstimator> create (const Ptr<MinimalSolver> &min_solver_,
const Ptr<NonMinimalSolver> &non_min_solver_,
const ModelConstraintFunction &custom_model_constraints_ = nullptr);
};
//////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////// MODEL VERIFIER ////////////////////////////////////
class ModelVerifier : public Algorithm {
@ -783,6 +872,78 @@ public:
double confidence_=0.95, int max_iterations_=5000, ScoreMethod score_ =ScoreMethod::SCORE_METHOD_MSAC);
};
///////////////////////////////////////// UniversalRANSAC ////////////////////////////////////////
/** Implementation of the Universal RANSAC algorithm.
UniversalRANSAC represents an implementation of the Universal RANSAC
(Universal RANdom SAmple Consensus) algorithm, as described in:
"USAC: A Universal Framework for Random Sample Consensus", Raguram, R., et al.
IEEE Transactions on Pattern Analysis and Machine Intelligence,
vol. 35, no. 8, 2013, pp. 2022-2038.
USAC extends the simple hypothesize-and-verify structure of standard RANSAC
to incorporate a number of important practical and computational considerations.
Optimized RANSAC variants such as NAPSAC, GroupSAC, and MAGSAC
can be considered special cases of the USAC framework.
The algorithm proceeds through the following stages:
+ [Stage 0] Pre-filtering
- [**0. Pre-filtering**] Filtering of the input data, e.g. removing some noise points.
+ [Stage 1] Sample minimal subset
- [**1a. Sampling**] Sample a minimal subset. Prior information may be incorporated
to bias the sampling toward preferentially generating models that are more
likely to be correct, or, like standard RANSAC, the sampling may be uniform at random.
- [**1b. Sample check**] Check whether the sample is suitable for computing model parameters.
Note that this simple test requires very little overhead, particularly when compared to
the expensive model generation and verification stages.
+ [Stage 2] Generate minimal-sample model(s)
- [**2a. Model generation**] Using the data points sampled in the previous step to fit the model
(calculate model parameters).
- [**2b. Model check**] A preliminary test that checks the model based on application-specific
constraints and then performs the verification only if required. For example, fitting
a sphere to a limited radius range.
+ [Stage 3] Is the model interesting?
- [**3a. Verification**] Verify that the current model is likely to maximize the
objective function (in other words, to be better than the current best model). A score can be
used (e.g., the data points' voting support for this model), or a statistical test can be
conducted on a small number of data points, accepting or discarding the model based on the result
of the test. Examples: the T(d,d) test, Bail-Out test, SPRT test, preemptive verification.
- [**3b. Degeneracy**] Determine if sufficient constraints are provided to produce
a unique solution.
+ [Stage 4] Generate non-minimal sample model
- [**4. Model refinement**] Handle the issue of noisy models (by Local Optimization,
Error Propagation, etc).
+ [Stage 5] Confidence in solution achieved?
- [**5. Judgment termination**] Determine whether the specified maximum number of iterations
is reached or whether the desired model is obtained with a certain confidence level.
Stages 1b, 2b, 3a, 3b, and 5 may jump back to Stage 1a.
*/
class UniversalRANSAC {
protected:
const Ptr<const UsacConfig> config;
const Ptr<const Estimator> _estimator;
const Ptr<Quality> _quality;
const Ptr<Sampler> _sampler;
const Ptr<TerminationCriteria> _termination_criteria;
const Ptr<ModelVerifier> _model_verifier;
const Ptr<Degeneracy> _degeneracy;
const Ptr<LocalOptimization> _local_optimization;
const Ptr<FinalModelPolisher> _model_polisher;
const int points_size;
public:
UniversalRANSAC (const Ptr<const UsacConfig> &config_, int points_size_, const Ptr<const Estimator> &estimator_, const Ptr<Quality> &quality_,
const Ptr<Sampler> &sampler_, const Ptr<TerminationCriteria> &termination_criteria_,
const Ptr<ModelVerifier> &model_verifier_, const Ptr<Degeneracy> &degeneracy_,
const Ptr<LocalOptimization> &local_optimization_, const Ptr<FinalModelPolisher> &model_polisher_);
bool run(Ptr<RansacOutput> &ransac_output);
};
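Stage 5 above typically relies on the classical RANSAC iteration bound from @cite fischler1981random: for confidence eta, inlier ratio w, and minimal sample size m, about log(1 - eta) / log(1 - w^m) iterations are needed. The sketch below illustrates that bound only; it is an assumption about the general technique, not the exact code of StandardTerminationCriteria.
// Classical RANSAC iteration bound (illustration only, not this library's exact code).
#include <algorithm>
#include <cmath>
static int ransacIterationBound(double confidence, double inlier_ratio,
                                int sample_size, int max_iterations)
{
    if (confidence >= 1.0)
        return max_iterations;                  // certainty cannot be guaranteed
    // Probability that one random minimal sample is all-inlier.
    const double p_good = std::pow(inlier_ratio, sample_size);
    if (p_good <= 0.0)
        return max_iterations;
    if (p_good >= 1.0)
        return 1;
    const double k = std::log(1.0 - confidence) / std::log(1.0 - p_good);
    return std::min(max_iterations, (int)std::ceil(k));
}
// Example: confidence 0.999, 50% inliers, 3-point plane sample -> roughly 52 iterations.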
Mat findHomography(InputArray srcPoints, InputArray dstPoints, int method,
double ransacReprojThreshold, OutputArray mask,
const int maxIters, const double confidence);

@ -4,6 +4,7 @@
#include "../precomp.hpp"
#include "../usac.hpp"
#include "../ptcloud/ptcloud_wrapper.hpp"
namespace cv { namespace usac {
class HomographyEstimatorImpl : public HomographyEstimator {
@ -217,6 +218,73 @@ Ptr<PnPEstimator> PnPEstimator::create (const Ptr<MinimalSolver> &min_solver_,
return makePtr<PnPEstimatorImpl>(min_solver_, non_min_solver_);
}
/////////////////////////////////////////////////////////////////////////
class PointCloudModelEstimatorImpl : public PointCloudModelEstimator {
private:
const Ptr<MinimalSolver> min_solver;
const Ptr<NonMinimalSolver> non_min_solver;
const ModelConstraintFunction custom_model_constraints;
inline int filteringModel(std::vector<Mat> M, int models_count,
std::vector<Mat> &valid_models) const {
int valid_models_count = 0;
if (!custom_model_constraints) {
valid_models_count = models_count;
for (int i = 0; i < models_count; ++i)
valid_models[i] = M[i];
} else {
for (int i = 0; i < models_count; ++i)
// filtering Models with custom_model_constraints
if (custom_model_constraints(M[i]))
valid_models[valid_models_count++] = M[i];
}
return valid_models_count;
}
public:
explicit PointCloudModelEstimatorImpl (const Ptr<MinimalSolver> &min_solver_,
const Ptr<NonMinimalSolver> &non_min_solver_,
ModelConstraintFunction custom_model_constraints_) :
min_solver(min_solver_), non_min_solver(non_min_solver_),
custom_model_constraints(std::move(custom_model_constraints_)) {}
int estimateModels (const std::vector<int> &sample, std::vector<Mat> &models) const override {
std::vector<Mat> M;
const int models_count = min_solver->estimate(sample, M);
return filteringModel(M, models_count, models);
}
int estimateModelNonMinimalSample (const std::vector<int> &sample, int sample_size,
std::vector<Mat> &models, const std::vector<double> &weights) const override {
std::vector<Mat> M;
const int models_count = non_min_solver->estimate(sample, sample_size, M, weights);
return filteringModel(M, models_count, models);
}
int getMinimalSampleSize() const override {
return min_solver->getSampleSize();
}
int getNonMinimalSampleSize() const override {
return non_min_solver->getMinimumRequiredSampleSize();
}
int getMaxNumSolutions () const override {
return min_solver->getMaxNumberOfSolutions();
}
int getMaxNumSolutionsNonMinimal () const override {
return non_min_solver->getMaxNumberOfSolutions();
}
Ptr<Estimator> clone() const override {
return makePtr<PointCloudModelEstimatorImpl>(min_solver->clone(), non_min_solver->clone(),
custom_model_constraints);
}
};
Ptr<PointCloudModelEstimator> PointCloudModelEstimator::create (const Ptr<MinimalSolver> &min_solver_,
const Ptr<NonMinimalSolver> &non_min_solver_,
const ModelConstraintFunction &custom_model_constraints_) {
return makePtr<PointCloudModelEstimatorImpl>(min_solver_, non_min_solver_, custom_model_constraints_);
}
///////////////////////////////////////////// ERROR /////////////////////////////////////////
// Symmetric Reprojection Error
class ReprojectionErrorSymmetricImpl : public ReprojectionErrorSymmetric {
@ -534,6 +602,187 @@ Ptr<ReprojectionErrorPmatrix> ReprojectionErrorPmatrix::create(const Mat &points
return makePtr<ReprojectionErrorPmatrixImpl>(points);
}
class PlaneModelErrorImpl : public PlaneModelError, public PointCloudWrapper {
private:
//! ax + by + cz + d = 0
float a, b, c, d;
std::vector<float> errors_cache;
bool cache_valid;
public:
explicit PlaneModelErrorImpl(const Mat &points_)
: PointCloudWrapper(points_),
a(0), b(0), c(0), d(0), errors_cache(pts_cnt), cache_valid(false)
{
}
inline void setModelParameters(const Mat &model) override
{
CV_Assert(!model.empty());
CV_CheckTypeEQ(model.depth(), CV_64F, "Model parameter type should use double");
const double * const p = (double *) model.data;
double coeff_a = p[0], coeff_b = p[1], coeff_c = p[2], coeff_d = p[3];
// Normalize (a, b, c) to unit length so that |ax + by + cz + d| is the point-to-plane distance
double magnitude_abc = sqrt(coeff_a * coeff_a + coeff_b * coeff_b + coeff_c * coeff_c);
if (0 != magnitude_abc) {
double inv_magnitude_abc = 1.0 / magnitude_abc;
coeff_a = coeff_a * inv_magnitude_abc;
coeff_b = coeff_b * inv_magnitude_abc;
coeff_c = coeff_c * inv_magnitude_abc;
coeff_d = coeff_d * inv_magnitude_abc;
}
cache_valid = cache_valid && a == (float) coeff_a && b == (float) coeff_b
&& c == (float) coeff_c && d == (float) coeff_d;
a = (float) coeff_a;
b = (float) coeff_b;
c = (float) coeff_c;
d = (float) coeff_d;
}
inline float getError(int point_idx) const override
{
float error = a * pts_ptr_x[point_idx] + b * pts_ptr_y[point_idx]
+ c * pts_ptr_z[point_idx] + d;
return error * error;
}
const std::vector<float> &getErrors(const Mat &model) override
{
setModelParameters(model);
if (cache_valid)
return errors_cache;
int i = 0;
#ifdef CV_SIMD
v_float32 v_a = vx_setall_f32(a);
v_float32 v_b = vx_setall_f32(b);
v_float32 v_c = vx_setall_f32(c);
v_float32 v_d = vx_setall_f32(d);
float* errors_cache_ptr = errors_cache.data();
for (; i <= pts_cnt - v_float32::nlanes; i += v_float32::nlanes)
{
v_float32 v_error = v_a * vx_load(pts_ptr_x + i) + v_b * vx_load(pts_ptr_y + i)
+ v_c * vx_load(pts_ptr_z + i) + v_d;
v_store(errors_cache_ptr + i, v_error * v_error);
}
#endif
for (; i < pts_cnt; ++i) {
float error = a * pts_ptr_x[i] + b * pts_ptr_y[i] + c * pts_ptr_z[i] + d;
errors_cache[i] = error * error;
}
cache_valid = true;
return errors_cache;
}
Ptr<Error> clone () const override {
return makePtr<PlaneModelErrorImpl>(*points_mat);
}
};
Ptr<PlaneModelError> PlaneModelError::create(const Mat &points) {
return makePtr<PlaneModelErrorImpl>(points);
}
class SphereModelErrorImpl : public SphereModelError, public PointCloudWrapper {
private:
//! Center and radius
float center_x, center_y, center_z, radius;
std::vector<float> errors_cache;
bool cache_valid;
public:
explicit SphereModelErrorImpl(const Mat &points_)
: PointCloudWrapper(points_),
center_x(0), center_y(0), center_z(0), radius(0),
errors_cache(pts_cnt), cache_valid(false)
{
}
inline void setModelParameters(const Mat &model) override
{
CV_Assert(!model.empty());
CV_CheckTypeEQ(model.depth(), CV_64F, "Model parameter type should use double");
const double *const p = (double *) model.data;
float coeff_x = (float) p[0], coeff_y = (float) p[1],
coeff_z = (float) p[2], coeff_r = (float) p[3];
cache_valid = cache_valid && center_x == coeff_x && center_y == coeff_y
&& center_z == coeff_z && radius == coeff_r;
center_x = coeff_x;
center_y = coeff_y;
center_z = coeff_z;
radius = coeff_r;
}
inline float getError(int point_idx) const override
{
float diff_x = center_x - pts_ptr_x[point_idx];
float diff_y = center_y - pts_ptr_y[point_idx];
float diff_z = center_z - pts_ptr_z[point_idx];
// A better way to avoid sqrt() ?
float distance_from_center = sqrt(diff_x * diff_x + diff_y * diff_y + diff_z * diff_z);
float diff_dist = distance_from_center - radius;
double distance_square_from_surface = diff_dist * diff_dist;
return (float) distance_square_from_surface;
}
const std::vector<float> &getErrors(const Mat &model) override
{
setModelParameters(model);
if (cache_valid)
return errors_cache;
int i = 0;
#ifdef CV_SIMD
v_float32 v_center_x = vx_setall_f32(center_x);
v_float32 v_center_y = vx_setall_f32(center_y);
v_float32 v_center_z = vx_setall_f32(center_z);
v_float32 v_radius = vx_setall_f32(radius);
float* errors_cache_ptr = errors_cache.data();
for (; i <= pts_cnt - v_float32::nlanes; i += v_float32::nlanes)
{
v_float32 v_diff_x = v_center_x - vx_load(pts_ptr_x + i);
v_float32 v_diff_y = v_center_y - vx_load(pts_ptr_y + i);
v_float32 v_diff_z = v_center_z - vx_load(pts_ptr_z + i);
v_float32 v_distance_from_center = v_sqrt(v_diff_x * v_diff_x +
v_diff_y * v_diff_y + v_diff_z * v_diff_z);
v_float32 v_diff_dist = v_distance_from_center - v_radius;
v_store(errors_cache_ptr + i, v_diff_dist * v_diff_dist);
}
#endif
for (; i < pts_cnt; ++i)
{
float diff_x = center_x - pts_ptr_x[i];
float diff_y = center_y - pts_ptr_y[i];
float diff_z = center_z - pts_ptr_z[i];
// A better way to avoid sqrt() ?
float distance_from_center = sqrt(diff_x * diff_x + diff_y * diff_y + diff_z * diff_z);
float diff_dist = distance_from_center - radius;
errors_cache[i] = diff_dist * diff_dist;
}
cache_valid = true;
return errors_cache;
}
Ptr<Error> clone () const override {
return makePtr<SphereModelErrorImpl>(*points_mat);
}
};
Ptr<SphereModelError> SphereModelError::create(const Mat &points) {
return makePtr<SphereModelErrorImpl>(points);
}
///////////////////////////////////////////////////////////////////////////////////////////////////
// Computes forward reprojection error for affine transformation.
class ReprojectionDistanceAffineImpl : public ReprojectionErrorAffine {

@ -0,0 +1,149 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
#include "../precomp.hpp"
#include "../ptcloud/ptcloud_wrapper.hpp"
#include "../usac.hpp"
#include "../ptcloud/ptcloud_utils.hpp"
namespace cv { namespace usac {
class PlaneModelMinimalSolverImpl : public PlaneModelMinimalSolver, public PointCloudWrapper
{
public:
explicit PlaneModelMinimalSolverImpl(const Mat &points_)
: PointCloudWrapper(points_)
{
}
int getSampleSize() const override
{
return 3;
}
int getMaxNumberOfSolutions() const override
{
return 1;
}
Ptr <MinimalSolver> clone() const override
{
return makePtr<PlaneModelMinimalSolverImpl>(*points_mat);
}
/** [a, b, c, d] <--> ax + by + cz + d =0
Use the cross product of the vectors in the plane to calculate the plane normal vector.
Use the plane normal vector and the points in the plane to calculate the coefficient d.
*/
int estimate(const std::vector<int> &sample, std::vector<Mat> &models) const override
{
models.clear();
// Get point data
const int p1_idx = sample[0], p2_idx = sample[1], p3_idx = sample[2];
float x1 = pts_ptr_x[p1_idx], y1 = pts_ptr_y[p1_idx], z1 = pts_ptr_z[p1_idx];
float x2 = pts_ptr_x[p2_idx], y2 = pts_ptr_y[p2_idx], z2 = pts_ptr_z[p2_idx];
float x3 = pts_ptr_x[p3_idx], y3 = pts_ptr_y[p3_idx], z3 = pts_ptr_z[p3_idx];
// v1 = p1p2 v2 = p1p3
float a1 = x2 - x1;
float b1 = y2 - y1;
float c1 = z2 - z1;
float a2 = x3 - x1;
float b2 = y3 - y1;
float c2 = z3 - z1;
// Get the plane normal vector v = v1 x v2
float a = b1 * c2 - b2 * c1;
float b = a2 * c1 - a1 * c2;
float c = a1 * b2 - b1 * a2;
float d = (-a * x1 - b * y1 - c * z1);
double plane_coeff[4] = {a, b, c, d};
models.emplace_back(cv::Mat(1, 4, CV_64F, plane_coeff).clone());
// models = std::vector<Mat>{Mat_<double>(3, 3)};
// auto *f = (double *) models[0].data;
// f[0] = a, f[1] = b, f[2] = c, f[3] = d;
return 1;
}
};
Ptr <PlaneModelMinimalSolver> PlaneModelMinimalSolver::create(const Mat &points_)
{
return makePtr<PlaneModelMinimalSolverImpl>(points_);
}
class PlaneModelNonMinimalSolverImpl : public PlaneModelNonMinimalSolver, public PointCloudWrapper
{
public:
explicit PlaneModelNonMinimalSolverImpl(const Mat &points_)
: PointCloudWrapper(points_)
{
}
int getMinimumRequiredSampleSize() const override
{
return 3;
}
int getMaxNumberOfSolutions() const override
{
return 1;
}
Ptr <NonMinimalSolver> clone() const override
{
return makePtr<PlaneModelNonMinimalSolverImpl>(*points_mat);
}
/** [a, b, c, d] <--> ax + by + cz + d =0
Use total least squares (PCA can achieve the same calculation) to estimate the plane model equation.
*/
int estimate(const std::vector<int> &sample, int sample_size, std::vector<Mat> &models,
const std::vector<double> &/*weights*/) const override
{
models.clear();
cv::Mat pcaset;
copyPointDataByIdxs(*points_mat, pcaset, sample, sample_size);
cv::PCA pca(pcaset, // pass the data
cv::Mat(), // we do not have a pre-computed mean vector,
// so let the PCA engine to compute it
cv::PCA::DATA_AS_COL, // indicate that the vectors
// are stored as matrix columns (3xN Mat, points are arranged in columns)
3 // specify, how many principal components to retain
);
Mat eigenvectors = pca.eigenvectors;
const float *eig_ptr = (float *) eigenvectors.data;
float a = eig_ptr[6], b = eig_ptr[7], c = eig_ptr[8];
if (std::isinf(a) || std::isinf(b) || std::isinf(c) || (a == 0 && b == 0 && c == 0))
return 0;
Mat mean = pca.mean;
const float *mean_ptr = (float *) mean.data;
float d = (-a * mean_ptr[0] - b * mean_ptr[1] - c * mean_ptr[2]);
double plane_coeffs[4] = {a, b, c, d};
models.emplace_back(cv::Mat(1, 4, CV_64F, plane_coeffs).clone());
return 1;
}
};
Ptr <PlaneModelNonMinimalSolver> PlaneModelNonMinimalSolver::create(const Mat &points_)
{
return makePtr<PlaneModelNonMinimalSolverImpl>(points_);
}
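// Illustrative sketch of the total-least-squares idea used above: for a 3 x N
// matrix with points stored as columns, the eigenvector associated with the
// smallest eigenvalue is the plane normal, and d follows from the mean point.
// The data below are hypothetical example values (four points of the plane z = 0).
static void pcaPlaneFitSketch()
{
    float data[3 * 4] = {
        0.f, 1.f, 0.f, 1.f,   // x coordinates
        0.f, 0.f, 1.f, 1.f,   // y coordinates
        0.f, 0.f, 0.f, 0.f    // z coordinates
    };
    Mat pts(3, 4, CV_32F, data);
    cv::PCA pca(pts, Mat(), cv::PCA::DATA_AS_COL, 3);
    const float *n = pca.eigenvectors.ptr<float>(2); // last eigenvector = plane normal
    const float *m = (const float *) pca.mean.data;
    float d = -(n[0] * m[0] + n[1] * m[1] + n[2] * m[2]);
    // Expect a normal close to (0, 0, +/-1) and d close to 0
    CV_Assert(std::abs(n[2]) > 0.99f && std::abs(d) < 1e-5f);
}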
}}

@@ -39,10 +39,10 @@ public:
}
Score getScore (const Mat &model) const override {
error->setModelParameters(model);
const std::vector<float>& errors = error->getErrors(model);
int inlier_number = 0;
for (int point = 0; point < points_size; point++) {
if (error->getError(point) < threshold)
if (errors[point] < threshold)
inlier_number++;
if (inlier_number + (points_size - point) < -best_score)
break;

@@ -10,6 +10,8 @@ namespace cv { namespace usac {
int mergePoints (InputArray pts1_, InputArray pts2_, Mat &pts, bool ispnp);
void setParameters (int flag, Ptr<Model> &params, EstimationMethod estimator, double thr,
int max_iters, double conf, bool mask_needed);
//! Adapter between SimpleUsacConfig and Model.
void modelParamsToUsacConfig (Ptr<SimpleUsacConfig> &config, const Ptr<const Model> &params);
class RansacOutputImpl : public RansacOutput {
private:
@@ -82,144 +84,178 @@ Ptr<RansacOutput> RansacOutput::create(const Mat &model_, const std::vector<bool
number_iterations_, number_estimated_models_, number_good_models_);
}
class Ransac {
protected:
const Ptr<const Model> params;
const Ptr<const Estimator> _estimator;
const Ptr<Quality> _quality;
const Ptr<Sampler> _sampler;
const Ptr<TerminationCriteria> _termination_criteria;
const Ptr<ModelVerifier> _model_verifier;
const Ptr<Degeneracy> _degeneracy;
const Ptr<LocalOptimization> _local_optimization;
const Ptr<FinalModelPolisher> model_polisher;
const int points_size, state;
const bool parallel;
class SimpleUsacConfigImpl : public SimpleUsacConfig {
public:
SimpleUsacConfigImpl() : max_iterations(2500), max_iterations_before_lo(100),
max_num_hypothesis_to_test_before_rejection(15),
random_generator_state(0), is_parallel(false),
neighbors_search_method(NeighborSearchMethod::NEIGH_GRID),
sampling_method(SamplingMethod::SAMPLING_UNIFORM),
score_method(ScoreMethod::SCORE_METHOD_RANSAC),
lo_method(LocalOptimMethod::LOCAL_OPTIM_INNER_LO), need_mask(true) {}
int getMaxIterations() const override { return max_iterations; }
int getMaxIterationsBeforeLO() const override { return max_iterations_before_lo; }
int getMaxNumHypothesisToTestBeforeRejection() const override { return max_num_hypothesis_to_test_before_rejection; }
int getRandomGeneratorState() const override { return random_generator_state; }
bool isParallel () const override { return is_parallel; }
NeighborSearchMethod getNeighborsSearchMethod() const override { return neighbors_search_method; }
SamplingMethod getSamplingMethod() const override { return sampling_method; }
ScoreMethod getScoreMethod() const override { return score_method; }
LocalOptimMethod getLOMethod() const override { return lo_method; }
bool isMaskRequired() const override { return need_mask; }
void setMaxIterations(int max_iterations_) override { max_iterations = max_iterations_; }
void setMaxIterationsBeforeLo(int max_iterations_before_lo_) override { max_iterations_before_lo = max_iterations_before_lo_; }
void setMaxNumHypothesisToTestBeforeRejection(int max_num_hypothesis_to_test_before_rejection_)
override { max_num_hypothesis_to_test_before_rejection = max_num_hypothesis_to_test_before_rejection_; }
void setRandomGeneratorState(int random_generator_state_) override { random_generator_state = random_generator_state_; }
void setParallel (bool is_parallel_) override { is_parallel = is_parallel_; }
void setNeighborsSearchMethod(NeighborSearchMethod neighbors_search_method_) override { neighbors_search_method = neighbors_search_method_; }
void setSamplingMethod(SamplingMethod sampling_method_) override { sampling_method = sampling_method_; }
void setScoreMethod(ScoreMethod score_method_) override { score_method = score_method_; }
void setLoMethod(LocalOptimMethod lo_method_) override { lo_method = lo_method_; }
void maskRequired (bool need_mask_) override { need_mask = need_mask_; }
Ransac (const Ptr<const Model> &params_, int points_size_, const Ptr<const Estimator> &estimator_, const Ptr<Quality> &quality_,
const Ptr<Sampler> &sampler_, const Ptr<TerminationCriteria> &termination_criteria_,
const Ptr<ModelVerifier> &model_verifier_, const Ptr<Degeneracy> &degeneracy_,
const Ptr<LocalOptimization> &local_optimization_, const Ptr<FinalModelPolisher> &model_polisher_,
bool parallel_=false, int state_ = 0) :
params (params_), _estimator (estimator_), _quality (quality_), _sampler (sampler_),
_termination_criteria (termination_criteria_), _model_verifier (model_verifier_),
_degeneracy (degeneracy_), _local_optimization (local_optimization_),
model_polisher (model_polisher_), points_size (points_size_), state(state_),
parallel(parallel_) {}
bool run(Ptr<RansacOutput> &ransac_output) {
if (points_size < params->getSampleSize())
return false;
const auto begin_time = std::chrono::steady_clock::now();
// check if LO
const bool LO = params->getLO() != LocalOptimMethod::LOCAL_OPTIM_NULL;
const bool is_magsac = params->getLO() == LocalOptimMethod::LOCAL_OPTIM_SIGMA;
const int max_hyp_test_before_ver = params->getMaxNumHypothesisToTestBeforeRejection();
const int repeat_magsac = 10, max_iters_before_LO = params->getMaxItersBeforeLO();
Score best_score;
Mat best_model;
int final_iters;
if (! parallel) {
auto update_best = [&] (const Mat &new_model, const Score &new_score) {
best_score = new_score;
// remember best model
new_model.copyTo(best_model);
// update quality and verifier to save evaluation time of a model
_quality->setBestScore(best_score.score);
// update verifier
_model_verifier->update(best_score.inlier_number);
// update upper bound of iterations
return _termination_criteria->update(best_model, best_score.inlier_number);
};
bool was_LO_run = false;
Mat non_degenerate_model, lo_model;
Score current_score, lo_score, non_denegenerate_model_score;
// reallocate memory for models
std::vector<Mat> models(_estimator->getMaxNumSolutions());
// allocate memory for sample
std::vector<int> sample(_estimator->getMinimalSampleSize());
int iters = 0, max_iters = params->getMaxIters();
for (; iters < max_iters; iters++) {
_sampler->generateSample(sample);
const int number_of_models = _estimator->estimateModels(sample, models);
for (int i = 0; i < number_of_models; i++) {
if (iters < max_hyp_test_before_ver) {
current_score = _quality->getScore(models[i]);
} else {
if (is_magsac && iters % repeat_magsac == 0) {
if (!_local_optimization->refineModel
(models[i], best_score, models[i], current_score))
continue;
} else if (_model_verifier->isModelGood(models[i])) {
if (!_model_verifier->getScore(current_score)) {
if (_model_verifier->hasErrors())
current_score = _quality->getScore(_model_verifier->getErrors());
else current_score = _quality->getScore(models[i]);
}
} else continue;
}
protected:
int max_iterations;
int max_iterations_before_lo;
int max_num_hypothesis_to_test_before_rejection;
int random_generator_state;
bool is_parallel;
NeighborSearchMethod neighbors_search_method;
SamplingMethod sampling_method;
ScoreMethod score_method;
LocalOptimMethod lo_method;
bool need_mask;
};
if (current_score.isBetter(best_score)) {
if (_degeneracy->recoverIfDegenerate(sample, models[i],
non_degenerate_model, non_denegenerate_model_score)) {
// check if best non degenerate model is better than so far the best model
if (non_denegenerate_model_score.isBetter(best_score))
max_iters = update_best(non_degenerate_model, non_denegenerate_model_score);
else continue;
} else max_iters = update_best(models[i], current_score);
if (LO && iters >= max_iters_before_LO) {
// do magsac if it wasn't already run
if (is_magsac && iters % repeat_magsac == 0 && iters >= max_hyp_test_before_ver) continue; // magsac has already run
was_LO_run = true;
// update model by Local optimization
if (_local_optimization->refineModel
(best_model, best_score, lo_model, lo_score)) {
if (lo_score.isBetter(best_score)){
max_iters = update_best(lo_model, lo_score);
}
}
}
if (iters > max_iters)
break;
} // end of if so far the best score
} // end loop of number of models
if (LO && !was_LO_run && iters >= max_iters_before_LO) {
was_LO_run = true;
if (_local_optimization->refineModel(best_model, best_score, lo_model, lo_score))
if (lo_score.isBetter(best_score)){
max_iters = update_best(lo_model, lo_score);
Ptr<SimpleUsacConfig> SimpleUsacConfig::create() {
return makePtr<SimpleUsacConfigImpl>();
}
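// Illustrative sketch: how a caller might assemble a SimpleUsacConfig by hand
// using the setters declared above. The values are arbitrary examples rather
// than recommended defaults; modelParamsToUsacConfig() further below shows how
// the same fields are filled from an existing Model.
static Ptr<SimpleUsacConfig> exampleUsacConfig()
{
    Ptr<SimpleUsacConfig> config = SimpleUsacConfig::create();
    config->setMaxIterations(1000);            // iteration budget of the main loop
    config->setMaxIterationsBeforeLo(100);     // when local optimization may kick in
    config->setRandomGeneratorState(0);        // seed for the samplers
    config->setParallel(false);                // single-threaded run
    config->setSamplingMethod(SamplingMethod::SAMPLING_UNIFORM);
    config->setScoreMethod(ScoreMethod::SCORE_METHOD_RANSAC);
    config->setLoMethod(LocalOptimMethod::LOCAL_OPTIM_NULL); // disable local optimization
    config->maskRequired(true);                // request an inlier mask in the output
    return config;
}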
UniversalRANSAC::UniversalRANSAC (const Ptr<const UsacConfig> &config_, int points_size_,
const Ptr<const Estimator> &estimator_, const Ptr<Quality> &quality_,
const Ptr<Sampler> &sampler_, const Ptr<TerminationCriteria> &termination_criteria_,
const Ptr<ModelVerifier> &model_verifier_, const Ptr<Degeneracy> &degeneracy_,
const Ptr<LocalOptimization> &local_optimization_,
const Ptr<FinalModelPolisher> &model_polisher_) :
config (config_), _estimator (estimator_), _quality (quality_), _sampler (sampler_),
_termination_criteria (termination_criteria_), _model_verifier (model_verifier_),
_degeneracy (degeneracy_), _local_optimization (local_optimization_),
_model_polisher (model_polisher_), points_size (points_size_) {}
bool UniversalRANSAC::run(Ptr<RansacOutput> &ransac_output) {
if (points_size < _estimator->getMinimalSampleSize())
return false;
const auto begin_time = std::chrono::steady_clock::now();
// check if LO
const bool LO = config->getLOMethod() != LocalOptimMethod::LOCAL_OPTIM_NULL;
const bool is_magsac = config->getLOMethod() == LocalOptimMethod::LOCAL_OPTIM_SIGMA;
const int max_hyp_test_before_ver = config->getMaxNumHypothesisToTestBeforeRejection();
const int repeat_magsac = 10, max_iters_before_LO = config->getMaxIterationsBeforeLO();
Score best_score;
Mat best_model;
int final_iters;
if (! config->isParallel()) {
auto update_best = [&] (const Mat &new_model, const Score &new_score) {
best_score = new_score;
// remember best model
new_model.copyTo(best_model);
// update quality and verifier to save evaluation time of a model
_quality->setBestScore(best_score.score);
// update verifier
_model_verifier->update(best_score.inlier_number);
// update upper bound of iterations
return _termination_criteria->update(best_model, best_score.inlier_number);
};
bool was_LO_run = false;
Mat non_degenerate_model, lo_model;
Score current_score, lo_score, non_denegenerate_model_score;
// reallocate memory for models
std::vector<Mat> models(_estimator->getMaxNumSolutions());
// allocate memory for sample
std::vector<int> sample(_estimator->getMinimalSampleSize());
int iters = 0, max_iters = config->getMaxIterations();
for (; iters < max_iters; iters++) {
_sampler->generateSample(sample);
const int number_of_models = _estimator->estimateModels(sample, models);
for (int i = 0; i < number_of_models; i++) {
if (iters < max_hyp_test_before_ver) {
current_score = _quality->getScore(models[i]);
} else {
if (is_magsac && iters % repeat_magsac == 0) {
if (!_local_optimization->refineModel
(models[i], best_score, models[i], current_score))
continue;
} else if (_model_verifier->isModelGood(models[i])) {
if (!_model_verifier->getScore(current_score)) {
if (_model_verifier->hasErrors())
current_score = _quality->getScore(_model_verifier->getErrors());
else current_score = _quality->getScore(models[i]);
}
} else continue;
}
} // end main while loop
final_iters = iters;
} else {
const int MAX_THREADS = getNumThreads();
const bool is_prosac = params->getSampler() == SamplingMethod::SAMPLING_PROSAC;
if (current_score.isBetter(best_score)) {
if (_degeneracy->recoverIfDegenerate(sample, models[i],
non_degenerate_model, non_denegenerate_model_score)) {
// check if best non degenerate model is better than so far the best model
if (non_denegenerate_model_score.isBetter(best_score))
max_iters = update_best(non_degenerate_model, non_denegenerate_model_score);
else continue;
} else max_iters = update_best(models[i], current_score);
if (LO && iters >= max_iters_before_LO) {
// do magsac if it wasn't already run
if (is_magsac && iters % repeat_magsac == 0 && iters >= max_hyp_test_before_ver) continue; // magsac has already run
was_LO_run = true;
// update model by Local optimization
if (_local_optimization->refineModel
(best_model, best_score, lo_model, lo_score)) {
if (lo_score.isBetter(best_score)){
max_iters = update_best(lo_model, lo_score);
}
}
}
if (iters > max_iters)
break;
} // end of if so far the best score
} // end loop of number of models
if (LO && !was_LO_run && iters >= max_iters_before_LO) {
was_LO_run = true;
if (_local_optimization->refineModel(best_model, best_score, lo_model, lo_score))
if (lo_score.isBetter(best_score)){
max_iters = update_best(lo_model, lo_score);
}
}
} // end main while loop
std::atomic_bool success(false);
std::atomic_int num_hypothesis_tested(0);
std::atomic_int thread_cnt(0);
std::vector<Score> best_scores(MAX_THREADS);
std::vector<Mat> best_models(MAX_THREADS);
final_iters = iters;
} else {
const int MAX_THREADS = getNumThreads();
const bool is_prosac = config->getSamplingMethod() == SamplingMethod::SAMPLING_PROSAC;
Mutex mutex; // only for prosac
std::atomic_bool success(false);
std::atomic_int num_hypothesis_tested(0);
std::atomic_int thread_cnt(0);
std::vector<Score> best_scores(MAX_THREADS);
std::vector<Mat> best_models(MAX_THREADS);
///////////////////////////////////////////////////////////////////////////////////////////////////////
parallel_for_(Range(0, MAX_THREADS), [&](const Range & /*range*/) {
Mutex mutex; // only for prosac
///////////////////////////////////////////////////////////////////////////////////////////////////////
parallel_for_(Range(0, MAX_THREADS), [&](const Range & /*range*/) {
if (!success) { // cover all if not success to avoid thread creating new variables
const int thread_rng_id = thread_cnt++;
int thread_state = state + 10*thread_rng_id;
int thread_state = config->getRandomGeneratorState() + 10*thread_rng_id;
Ptr<Estimator> estimator = _estimator->clone();
Ptr<Degeneracy> degeneracy = _degeneracy->clone(thread_state++);
@@ -238,7 +274,7 @@ public:
best_score_all_threads;
std::vector<int> sample(estimator->getMinimalSampleSize());
std::vector<Mat> models(estimator->getMaxNumSolutions());
int iters, max_iters = params->getMaxIters();
int iters, max_iters = config->getMaxIterations();
auto update_best = [&] (const Score &new_score, const Mat &new_model) {
// copy new score to best score
best_score_thread = new_score;
@@ -333,49 +369,48 @@ public:
}
} // end of loop over iters
}}); // end parallel
///////////////////////////////////////////////////////////////////////////////////////////////////////
// find best model from all threads' models
best_score = best_scores[0];
int best_thread_idx = 0;
for (int i = 1; i < MAX_THREADS; i++) {
if (best_scores[i].isBetter(best_score)) {
best_score = best_scores[i];
best_thread_idx = i;
}
///////////////////////////////////////////////////////////////////////////////////////////////////////
// find best model from all threads' models
best_score = best_scores[0];
int best_thread_idx = 0;
for (int i = 1; i < MAX_THREADS; i++) {
if (best_scores[i].isBetter(best_score)) {
best_score = best_scores[i];
best_thread_idx = i;
}
best_model = best_models[best_thread_idx];
final_iters = num_hypothesis_tested;
}
best_model = best_models[best_thread_idx];
final_iters = num_hypothesis_tested;
}
if (best_model.empty())
return false;
// polish final model
if (params->getFinalPolisher() != PolishingMethod::NonePolisher) {
Mat polished_model;
Score polisher_score;
if (model_polisher->polishSoFarTheBestModel(best_model, best_score,
polished_model, polisher_score))
if (polisher_score.isBetter(best_score)) {
best_score = polisher_score;
polished_model.copyTo(best_model);
}
}
// ================= here is ending ransac main implementation ===========================
std::vector<bool> inliers_mask;
if (params->isMaskRequired()) {
inliers_mask = std::vector<bool>(points_size);
// get final inliers from the best model
_quality->getInliers(best_model, inliers_mask);
}
// Store results
ransac_output = RansacOutput::create(best_model, inliers_mask,
static_cast<int>(std::chrono::duration_cast<std::chrono::microseconds>
(std::chrono::steady_clock::now() - begin_time).count()), best_score.score,
best_score.inlier_number, final_iters, -1, -1);
return true;
if (best_model.empty())
return false;
// polish final model
if (!_model_polisher.empty()) {
Mat polished_model;
Score polisher_score;
if (_model_polisher->polishSoFarTheBestModel(best_model, best_score,
polished_model, polisher_score))
if (polisher_score.isBetter(best_score)) {
best_score = polisher_score;
polished_model.copyTo(best_model);
}
}
};
// ================= here is ending ransac main implementation ===========================
std::vector<bool> inliers_mask;
if (config->isMaskRequired()) {
inliers_mask = std::vector<bool>(points_size);
// get final inliers from the best model
_quality->getInliers(best_model, inliers_mask);
}
// Store results
ransac_output = RansacOutput::create(best_model, inliers_mask,
static_cast<int>(std::chrono::duration_cast<std::chrono::microseconds>
(std::chrono::steady_clock::now() - begin_time).count()), best_score.score,
best_score.inlier_number, final_iters, -1, -1);
return true;
}
/*
* pts1, pts2 are matrices either N x a, N x b or a x N or b x N, where N > a and N > b
@@ -757,6 +792,21 @@ Ptr<Model> Model::create(double threshold_, EstimationMethod estimator_, Samplin
max_iterations_, score_);
}
void modelParamsToUsacConfig (Ptr<SimpleUsacConfig> &config, const Ptr<const Model> &params) {
config = SimpleUsacConfig::create();
config->setMaxIterations(params->getMaxIters());
config->setMaxIterationsBeforeLo(params->getMaxItersBeforeLO());
config->setMaxNumHypothesisToTestBeforeRejection(params->getMaxNumHypothesisToTestBeforeRejection());
config->setRandomGeneratorState(params->getRandomGeneratorState());
config->setParallel(params->isParallel());
config->setNeighborsSearchMethod(params->getNeighborsSearch());
config->setSamplingMethod(params->getSampler());
config->setScoreMethod(params->getScore());
config->setLoMethod(params->getLO());
config->maskRequired(params->isMaskRequired());
}
bool run (const Ptr<const Model> &params, InputArray points1, InputArray points2, int state,
Ptr<RansacOutput> &ransac_output, InputArray K1_, InputArray K2_,
InputArray dist_coeff1, InputArray dist_coeff2) {
@@ -1004,8 +1054,12 @@ bool run (const Ptr<const Model> &params, InputArray points1, InputArray points2
if (params->getFinalPolisher() == PolishingMethod::LSQPolisher)
polisher = LeastSquaresPolishing::create(estimator, quality, params->getFinalLSQIterations());
Ransac ransac (params, points_size, estimator, quality, sampler,
termination, verifier, degeneracy, lo, polisher, params->isParallel(), state);
Ptr<SimpleUsacConfig> usacConfig;
modelParamsToUsacConfig(usacConfig, params);
usacConfig->setRandomGeneratorState(state);
UniversalRANSAC ransac (usacConfig, points_size, estimator, quality, sampler,
termination, verifier, degeneracy, lo, polisher);
if (ransac.run(ransac_output)) {
if (params->isPnP()) {
// convert R to rodrigues and back and recalculate inliers which due to numerical

@@ -0,0 +1,271 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
#include "../precomp.hpp"
#include "../ptcloud/ptcloud_wrapper.hpp"
#include "../usac.hpp"
namespace cv { namespace usac {
class SphereModelMinimalSolverImpl : public SphereModelMinimalSolver, public PointCloudWrapper
{
public:
explicit SphereModelMinimalSolverImpl(const Mat &points_)
: PointCloudWrapper(points_)
{
}
int getSampleSize() const override
{
return 4;
}
int getMaxNumberOfSolutions() const override
{
return 1;
}
Ptr <MinimalSolver> clone() const override
{
return makePtr<SphereModelMinimalSolverImpl>(*points_mat);
}
/** [center_x, center_y, center_z, radius] <--> (x - center_x)^2 + (y - center_y)^2 + (z - center_z)^2 = radius^2
Fitting the sphere using Cramer's Rule.
*/
int estimate(const std::vector<int> &sample, std::vector<Mat> &models) const override
{
models.clear();
// Get point data
const int p1_idx = sample[0], p2_idx = sample[1], p3_idx = sample[2], p4_idx = sample[3];
float x1 = pts_ptr_x[p1_idx], y1 = pts_ptr_y[p1_idx], z1 = pts_ptr_z[p1_idx];
float x2 = pts_ptr_x[p2_idx], y2 = pts_ptr_y[p2_idx], z2 = pts_ptr_z[p2_idx];
float x3 = pts_ptr_x[p3_idx], y3 = pts_ptr_y[p3_idx], z3 = pts_ptr_z[p3_idx];
float x4 = pts_ptr_x[p4_idx], y4 = pts_ptr_y[p4_idx], z4 = pts_ptr_z[p4_idx];
double center_x, center_y, center_z, radius; // Cramer's Rule
{
double a11, a12, a13, a21, a22, a23, a31, a32, a33, t1, t2, t3, t4, d, d1, d2, d3;
a11 = x2 - x1;
a12 = y2 - y1;
a13 = z2 - z1;
a21 = x3 - x2;
a22 = y3 - y2;
a23 = z3 - z2;
a31 = x4 - x3;
a32 = y4 - y3;
a33 = z4 - z3;
t1 = x1 * x1 + y1 * y1 + z1 * z1;
t2 = x2 * x2 + y2 * y2 + z2 * z2;
t3 = x3 * x3 + y3 * y3 + z3 * z3;
t4 = x4 * x4 + y4 * y4 + z4 * z4;
double b1 = t2 - t1,
b2 = t3 - t2,
b3 = t4 - t3;
// d = a11 * a22 * a33 + a12 * a23 * a31 + a13 * a21 * a32 - a11 * a23 * a32 -
// a12 * a21 * a33 - a13 * a22 * a31;
// d1 = b1 * a22 * a33 + a12 * a23 * b3 + a13 * b2 * a32 - b1 * a23 * a32 -
// a12 * b2 * a33 - a13 * a22 * b3;
// d2 = a11 * b2 * a33 + b1 * a23 * a31 + a13 * a21 * b3 - a11 * a23 * b3 -
// b1 * a21 * a33 - a13 * b2 * a31;
// d3 = a11 * a22 * b3 + a12 * b2 * a31 + b1 * a21 * a32 - a11 * b2 * a32 -
// a12 * a21 * b3 - b1 * a22 * a31;
double tmp1 = a22 * a33 - a23 * a32,
tmp2 = a23 * a31 - a21 * a33,
tmp3 = a21 * a32 - a22 * a31;
d = a11 * tmp1 + a12 * tmp2 + a13 * tmp3;
d1 = b1 * tmp1 + a12 * (a23 * b3 - b2 * a33) + a13 * (b2 * a32 - a22 * b3);
d2 = a11 * (b2 * a33 - a23 * b3) + b1 * tmp2 + a13 * (a21 * b3 - b2 * a31);
d3 = a11 * (a22 * b3 - b2 * a32) + a12 * (b2 * a31 - a21 * b3) + b1 * tmp3;
if (d == 0) {
return 0;
}
d = 0.5 / d;
center_x = d1 * d;
center_y = d2 * d;
center_z = d3 * d;
tmp1 = center_x - x1;
tmp2 = center_y - y1;
tmp3 = center_z - z1;
radius = sqrt(tmp1 * tmp1 + tmp2 * tmp2 + tmp3 * tmp3);
}
double sphere_coeff[4] = {center_x, center_y, center_z, radius};
models.emplace_back(cv::Mat(1, 4, CV_64F, sphere_coeff).clone());
return 1;
}
};
Ptr <SphereModelMinimalSolver> SphereModelMinimalSolver::create(const Mat &points_)
{
return makePtr<SphereModelMinimalSolverImpl>(points_);
}
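// Illustrative sketch (not wired into the build): a sanity check of the
// four-point sphere fit above, assuming the 3 x N CV_32F layout implied by
// PointCloudWrapper (row 0 = x, row 1 = y, row 2 = z). Four points of the unit
// sphere centered at the origin should give center (0, 0, 0) and radius 1.
static void sphereMinimalSolverSketch()
{
    float data[3 * 4] = {
        1.f, -1.f, 0.f, 0.f,   // x coordinates
        0.f,  0.f, 1.f, 0.f,   // y coordinates
        0.f,  0.f, 0.f, 1.f    // z coordinates
    };
    Mat points(3, 4, CV_32F, data);
    Ptr<SphereModelMinimalSolver> solver = SphereModelMinimalSolver::create(points);
    std::vector<int> sample = {0, 1, 2, 3};
    std::vector<Mat> models;
    if (solver->estimate(sample, models) == 1)
    {
        const double *coeffs = (double *) models[0].data; // [center_x, center_y, center_z, radius]
        CV_Assert(std::abs(coeffs[0]) < 1e-6 && std::abs(coeffs[1]) < 1e-6 &&
                  std::abs(coeffs[2]) < 1e-6 && std::abs(coeffs[3] - 1.0) < 1e-6);
    }
}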
class SphereModelNonMinimalSolverImpl : public SphereModelNonMinimalSolver, public PointCloudWrapper
{
public:
explicit SphereModelNonMinimalSolverImpl(const Mat &points_)
: PointCloudWrapper(points_)
{
}
int getMinimumRequiredSampleSize() const override
{
return 4;
}
int getMaxNumberOfSolutions() const override
{
return 1;
}
Ptr <NonMinimalSolver> clone() const override
{
return makePtr<SphereModelNonMinimalSolverImpl>(*points_mat);
}
/** [center_x, center_y, center_z, radius] <--> (x - center_x)^2 + (y - center_y)^2 + (z - center_z)^2 = radius^2
Fitting Sphere Using Differences of Squared Lengths and Squared Radius.
Reference: https://www.geometrictools.com/Documentation/LeastSquaresFitting.pdf, Section 5.2.
*/
int estimate(const std::vector<int> &sample, int sample_size, std::vector<Mat> &models,
const std::vector<double> &/*weights*/) const override
{
const double inv_sample_size = 1.0 / (double) sample_size;
// Compute the average of the data points.
// Vec3d A(0, 0, 0);
double A0 = 0, A1 = 0, A2 = 0;
for (int i = 0; i < sample_size; ++i)
{
int sample_idx = sample[i];
float x = pts_ptr_x[sample_idx];
float y = pts_ptr_y[sample_idx];
float z = pts_ptr_z[sample_idx];
// A += Vec3d(x, y, z);
A0 += x;
A1 += y;
A2 += z;
}
// A *= inv_sample_size;
A0 *= inv_sample_size;
A1 *= inv_sample_size;
A2 *= inv_sample_size;
// Compute the covariance matrix M of the Y[i] = X[i]-A and the
// right-hand side R of the linear system M*(C-A) = R.
double M00 = 0, M01 = 0, M02 = 0, M11 = 0, M12 = 0, M22 = 0;
// Vec3d R(0, 0, 0);
double R0 = 0, R1 = 0, R2 = 0;
for (int i = 0; i < sample_size; ++i)
{
int sample_idx = sample[i];
float x = pts_ptr_x[sample_idx];
float y = pts_ptr_y[sample_idx];
float z = pts_ptr_z[sample_idx];
// Vec3d Y = Vec3d(x, y, z) - A;
// double Y0Y0 = Y[0] * Y[0];
// double Y0Y1 = Y[0] * Y[1];
// double Y0Y2 = Y[0] * Y[2];
// double Y1Y1 = Y[1] * Y[1];
// double Y1Y2 = Y[1] * Y[2];
// double Y2Y2 = Y[2] * Y[2];
// M00 += Y0Y0;
// M01 += Y0Y1;
// M02 += Y0Y2;
// M11 += Y1Y1;
// M12 += Y1Y2;
// M22 += Y2Y2;
// R += (Y0Y0 + Y1Y1 + Y2Y2) * Y;
double Y0 = x - A0, Y1 = y - A1, Y2 = z - A2;
double Y0Y0 = Y0 * Y0;
double Y1Y1 = Y1 * Y1;
double Y2Y2 = Y2 * Y2;
M00 += Y0Y0;
M01 += Y0 * Y1;
M02 += Y0 * Y2;
M11 += Y1Y1;
M12 += Y1 * Y2;
M22 += Y2Y2;
double sum_diag = Y0Y0 + Y1Y1 + Y2Y2;
R0 += sum_diag * Y0;
R1 += sum_diag * Y1;
R2 += sum_diag * Y2;
}
// R *= 0.5;
R0 *= 0.5;
R1 *= 0.5;
R2 *= 0.5;
double center_x, center_y, center_z, radius;
// Solve the linear system M*(C-A) = R for the center C.
double cof00 = M11 * M22 - M12 * M12;
double cof01 = M02 * M12 - M01 * M22;
double cof02 = M01 * M12 - M02 * M11;
double det = M00 * cof00 + M01 * cof01 + M02 * cof02;
if (det != 0)
{
double cof11 = M00 * M22 - M02 * M02;
double cof12 = M01 * M02 - M00 * M12;
double cof22 = M00 * M11 - M01 * M01;
// center_x = A[0] + (cof00 * R[0] + cof01 * R[1] + cof02 * R[2]) / det;
// center_y = A[1] + (cof01 * R[0] + cof11 * R[1] + cof12 * R[2]) / det;
// center_z = A[2] + (cof02 * R[0] + cof12 * R[1] + cof22 * R[2]) / det;
center_x = A0 + (cof00 * R0 + cof01 * R1 + cof02 * R2) / det;
center_y = A1 + (cof01 * R0 + cof11 * R1 + cof12 * R2) / det;
center_z = A2 + (cof02 * R0 + cof12 * R1 + cof22 * R2) / det;
double rsqr = 0;
for (int i = 0; i < sample_size; ++i)
{
int sample_idx = sample[i];
float x = pts_ptr_x[sample_idx];
float y = pts_ptr_y[sample_idx];
float z = pts_ptr_z[sample_idx];
double diff_x = x - center_x, diff_y = y - center_y, diff_z = z - center_z;
rsqr += diff_x * diff_x + diff_y * diff_y + diff_z * diff_z;
}
rsqr *= inv_sample_size;
radius = std::sqrt(rsqr);
double sphere_coeff[4] = {center_x, center_y, center_z, radius};
models.emplace_back(cv::Mat(1, 4, CV_64F, sphere_coeff).clone());
return 1;
}
else
{
return 0;
}
}
};
Ptr <SphereModelNonMinimalSolver> SphereModelNonMinimalSolver::create(const Mat &points_)
{
return makePtr<SphereModelNonMinimalSolverImpl>(points_);
}
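// Illustrative sketch (not wired into the build): the least-squares fit above
// with more than four samples, again assuming the 3 x N CV_32F layout implied
// by PointCloudWrapper. Eight exact points of the unit sphere centered at the
// origin should be recovered almost exactly.
static void sphereLeastSquaresSketch()
{
    float data[3 * 8] = {
        1.f, -1.f, 0.f,  0.f, 0.f,  0.f, 0.707107f, -0.707107f,   // x coordinates
        0.f,  0.f, 1.f, -1.f, 0.f,  0.f, 0.707107f,  0.f,         // y coordinates
        0.f,  0.f, 0.f,  0.f, 1.f, -1.f, 0.f,        0.707107f    // z coordinates
    };
    Mat points(3, 8, CV_32F, data);
    Ptr<SphereModelNonMinimalSolver> solver = SphereModelNonMinimalSolver::create(points);
    std::vector<int> sample = {0, 1, 2, 3, 4, 5, 6, 7};
    std::vector<Mat> models;
    std::vector<double> weights; // unused by this solver
    if (solver->estimate(sample, (int) sample.size(), models, weights) == 1)
    {
        const double *coeffs = (double *) models[0].data; // [center_x, center_y, center_z, radius]
        CV_Assert(std::abs(coeffs[0]) < 1e-3 && std::abs(coeffs[1]) < 1e-3 &&
                  std::abs(coeffs[2]) < 1e-3 && std::abs(coeffs[3] - 1.0) < 1e-3);
    }
}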
}}

@@ -0,0 +1,399 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
#include "test_precomp.hpp"
namespace opencv_test { namespace {
int countNum(const vector<int> &m, int num)
{
int t = 0;
for (int a: m)
if (a == num) t++;
return t;
}
string getHeader(const Ptr<SACSegmentation> &s){
string r;
if(!s->isParallel())
r += "One thread ";
else
r += std::to_string(getNumThreads()) + "-thread ";
if(s->getNumberOfModelsExpected() == 1)
r += "single model segmentation ";
else
r += std::to_string(s->getNumberOfModelsExpected()) + " models segmentation ";
if(s->getCustomModelConstraints() == nullptr)
r += "without constraint:\n";
else
r += "with constraint:\n";
r += "Confidence: " + std::to_string(s->getConfidence()) + "\n";
r += "Max Iterations: " + std::to_string(s->getMaxIterations()) + "\n";
r += "Expected Models Number: " + std::to_string(s->getNumberOfModelsExpected()) + "\n";
r += "Distance Threshold: " + std::to_string(s->getDistanceThreshold());
return r;
}
class SacSegmentationTest : public ::testing::Test
{
public:
// Used to store the parameters of model generation
vector<vector<float>> models, limits;
vector<float> thrs;
vector<int> pt_nums;
int models_num = 0;
// Used to store the point cloud, the generated points and the segmented models
Mat pt_cloud, generated_pts, segmented_models;
vector<int> label;
Ptr<SACSegmentation> sacSegmentation = SACSegmentation::create();
SACSegmentation::ModelConstraintFunction model_constraint = nullptr;
using CheckDiffFunction = std::function<bool(const Mat &, const Mat &)>;
void singleModelSegmentation(int iter_num, const CheckDiffFunction &checkDiff, int idx)
{
sacSegmentation->setSacMethodType(SAC_METHOD_RANSAC);
sacSegmentation->setConfidence(1);
sacSegmentation->setMaxIterations(iter_num);
sacSegmentation->setNumberOfModelsExpected(1);
// A point whose distance is exactly equal to the threshold is not considered an inlier
sacSegmentation->setDistanceThreshold(thrs[idx] + 0.01);
int num = sacSegmentation->segment(pt_cloud, label, segmented_models);
string header = getHeader(sacSegmentation);
ASSERT_EQ(1, num)
<< header << endl
<< "Model number should be equal to 1.";
ASSERT_EQ(pt_cloud.rows, (int) (label.size()))
<< header << endl
<< "Label size should be equal to point number.";
Mat ans_model, segmented_model;
ans_model = Mat(1, (int) models[0].size(), CV_32F, models[idx].data());
segmented_models.row(0).convertTo(segmented_model, CV_32F);
ASSERT_TRUE(checkDiff(ans_model, segmented_model))
<< header << endl
<< "Initial model is " << ans_model << ". Segmented model is " << segmented_model
<< ". The difference in coefficients should not be too large.";
ASSERT_EQ(pt_nums[idx], countNum(label, 1))
<< header << endl
<< "There are " << pt_nums[idx] << " points need to be marked.";
int start_idx = 0;
for (int i = 0; i < idx; i++) start_idx += pt_nums[i];
for (int i = 0; i < pt_cloud.rows; i++)
{
if (i >= start_idx && i < start_idx + pt_nums[idx])
ASSERT_EQ(1, label[i])
<< header << endl
<< "This index should be marked: " << i
<< ". This point is " << pt_cloud.row(i);
else
ASSERT_EQ(0, label[i])
<< header << endl
<< "This index should not be marked: "
<< i << ". This point is " << pt_cloud.row(i);
}
}
void multiModelSegmentation(int iter_num, const CheckDiffFunction &checkDiff)
{
sacSegmentation->setSacMethodType(SAC_METHOD_RANSAC);
sacSegmentation->setConfidence(1);
sacSegmentation->setMaxIterations(iter_num);
sacSegmentation->setNumberOfModelsExpected(models_num);
sacSegmentation->setDistanceThreshold(thrs[models_num - 1] + 0.01);
int num = sacSegmentation->segment(pt_cloud, label, segmented_models);
string header = getHeader(sacSegmentation);
ASSERT_EQ(models_num, num)
<< header << endl
<< "Model number should be equal to " << models_num << ".";
ASSERT_EQ(pt_cloud.rows, (int) (label.size()))
<< header << endl
<< "Label size should be equal to point number.";
int checked_num = 0;
for (int i = 0; i < models_num; i++)
{
Mat ans_model, segmented_model;
ans_model = Mat(1, (int) models[0].size(), CV_32F, models[models_num - 1 - i].data());
segmented_models.row(i).convertTo(segmented_model, CV_32F);
ASSERT_TRUE(checkDiff(ans_model, segmented_model))
<< header << endl
<< "Initial model is " << ans_model << ". Segmented model is " << segmented_model
<< ". The difference in coefficients should not be too large.";
ASSERT_EQ(pt_nums[models_num - 1 - i], countNum(label, i + 1))
<< header << endl
<< "There are " << pt_nums[i] << " points need to be marked.";
for (int j = checked_num; j < pt_nums[i]; j++)
ASSERT_EQ(models_num - i, label[j])
<< header << endl
<< "This index " << j << " should be marked as " << models_num - i
<< ". This point is " << pt_cloud.row(j);
checked_num += pt_nums[i];
}
}
};
TEST_F(SacSegmentationTest, PlaneSacSegmentation)
{
sacSegmentation->setSacModelType(SAC_MODEL_PLANE);
models = {
{0, 0, 1, 0},
{1, 0, 0, 0},
{0, 1, 0, 0},
{1, 1, 1, -150},
};
thrs = {0.1f, 0.2f, 0.3f, 0.4f};
pt_nums = {100, 200, 300, 400};
limits = {
{5, 55, 5, 55, 0, 0},
{0, 0, 5, 55, 5, 55},
{5, 55, 0, 0, 5, 55},
{10, 50, 10, 50, 0, 0},
};
models_num = (int) models.size();
/**
* Used to generate a specific plane with random points
* model: plane coefficient [a,b,c,d] means ax+by+cz+d=0
* thr: the maximum distance from a generated point to the plane
* limit: the range of the xyz coordinates of the generated plane
**/
auto generatePlane = [](Mat &plane_pts, const vector<float> &model, float thr, int num,
const vector<float> &limit) {
plane_pts = Mat(num, 3, CV_32F);
cv::RNG rng(0);
auto *plane_pts_ptr = (float *) plane_pts.data;
// Some of the points are generated exactly on the specified model
// The rest of the points are used to give the plane some thickness
int std_num = (int) (num / 2);
// Maximum difference in d between two parallel planes separated by distance thr
float d_thr = thr * sqrt(model[0] * model[0] + model[1] * model[1] + model[2] * model[2]);
for (int i = 0; i < num; i++)
{
// Vary d to give the plane thickness
float d = i < std_num ? model[3] : rng.uniform(model[3] - d_thr, model[3] + d_thr);
float x, y, z;
// If c is 0, the plane is parallel to the z-axis (vertical)
if (model[2] == 0)
{
z = rng.uniform(limit[4], limit[5]);
if (model[0] == 0)
{
x = rng.uniform(limit[0], limit[1]);
y = -d / model[1];
}
else if (model[1] == 0)
{
x = -d / model[0];
y = rng.uniform(limit[2], limit[3]);
}
else
{
x = rng.uniform(limit[0], limit[1]);
y = -(model[0] * x + d) / model[1];
}
}
// c is not 0
else
{
x = rng.uniform(limit[0], limit[1]);
y = rng.uniform(limit[2], limit[3]);
z = -(model[0] * x + model[1] * y + d) / model[2];
}
plane_pts_ptr[3 * i] = x;
plane_pts_ptr[3 * i + 1] = y;
plane_pts_ptr[3 * i + 2] = z;
}
};
// 1 * 3.1415926f / 180
float vector_radian_tolerance = 0.0174533f, ratio_tolerance = 0.1f;
CheckDiffFunction planeCheckDiff = [vector_radian_tolerance, ratio_tolerance](const Mat &a,
const Mat &b) -> bool {
Mat m1, m2;
a.convertTo(m1, CV_32F);
b.convertTo(m2, CV_32F);
auto p1 = (float *) m1.data, p2 = (float *) m2.data;
Vec3f n1(p1[0], p1[1], p1[2]);
Vec3f n2(p2[0], p2[1], p2[2]);
float cos_theta_square = n1.dot(n2) * n1.dot(n2) / (n1.dot(n1) * n2.dot(n2));
float r1 = p1[3] * p1[3] / n1.dot(n1);
float r2 = p2[3] * p2[3] / n2.dot(n2);
return cos_theta_square >= cos(vector_radian_tolerance) * cos(vector_radian_tolerance)
&& abs(r1 - r2) <= ratio_tolerance * ratio_tolerance;
};
// Single plane segmentation
for (int i = 0; i < models_num; i++)
{
generatePlane(generated_pts, models[i], thrs[i], pt_nums[i], limits[i]);
pt_cloud.push_back(generated_pts);
singleModelSegmentation(1000, planeCheckDiff, i);
}
// Single plane segmentation with constraint
for (int i = models_num / 2; i < models_num; i++)
{
vector<float> constraint_normal = {models[i][0], models[i][1], models[i][2]};
// Normal vector constraint function
model_constraint = [constraint_normal](const vector<double> &model) -> bool {
// The angle between the model normals and the constraints must be less than 1 degree
// 1 * 3.1415926f / 180
float radian_thr = 0.0174533f;
vector<float> model_normal = {(float) model[0], (float) model[1], (float) model[2]};
float dot12 = constraint_normal[0] * model_normal[0] +
constraint_normal[1] * model_normal[1] +
constraint_normal[2] * model_normal[2];
float m1m1 = constraint_normal[0] * constraint_normal[0] +
constraint_normal[1] * constraint_normal[1] +
constraint_normal[2] * constraint_normal[2];
float m2m2 = model_normal[0] * model_normal[0] +
model_normal[1] * model_normal[1] +
model_normal[2] * model_normal[2];
float square_cos_theta = dot12 * dot12 / (m1m1 * m2m2);
return square_cos_theta >= cos(radian_thr) * cos(radian_thr);
};
sacSegmentation->setCustomModelConstraints(model_constraint);
singleModelSegmentation(5000, planeCheckDiff, i);
}
pt_cloud.release();
sacSegmentation->setCustomModelConstraints(nullptr);
sacSegmentation->setParallel(true);
// Multi-plane segmentation
for (int i = 0; i < models_num; i++)
{
generatePlane(generated_pts, models[i], thrs[models_num - 1], pt_nums[i], limits[i]);
pt_cloud.push_back(generated_pts);
}
multiModelSegmentation(1000, planeCheckDiff);
}
TEST_F(SacSegmentationTest, SphereSacSegmentation)
{
sacSegmentation->setSacModelType(cv::SAC_MODEL_SPHERE);
models = {
{15, 15, 30, 5},
{-15, -15, -30, 8},
{0, 0, -35, 10},
{0, 0, 0, 15},
{0, 0, 0, 20},
};
thrs = {0.1f, 0.2f, 0.3f, 0.4f, 0.5f};
pt_nums = {100, 200, 300, 400, 500};
limits = {
{0, 1, 0, 1, 0, 1},
{-1, 0, -1, 0, -1, 0},
{-1, 1, -1, 1, 0, 1},
{-1, 1, -1, 1, -1, 1},
{-1, 1, -1, 1, -1, 0},
};
models_num = (int) models.size();
/**
* Used to generate a specific sphere with random points
* model: sphere coefficients [center_x, center_y, center_z, r] mean (x-center_x)^2 + (y-center_y)^2 + (z-center_z)^2 = r^2
* thr: the maximum distance from a generated point to the sphere surface
* limit: the range of the random direction vector, used to make the generated sphere incomplete
**/
auto generateSphere = [](Mat &sphere_pts, const vector<float> &model, float thr, int num,
const vector<float> &limit) {
sphere_pts = cv::Mat(num, 3, CV_32F);
cv::RNG rng(0);
auto *sphere_pts_ptr = (float *) sphere_pts.data;
// Some of the points are generated exactly on the specified model
// The rest of the points are used to give the sphere some thickness
int sphere_num = (int) (num / 1.5);
for (int i = 0; i < num; i++)
{
// Vary r to give the sphere thickness
float r = i < sphere_num ? model[3] : rng.uniform(model[3] - thr, model[3] + thr);
// Generate a random direction vector.
Vec3f vec(rng.uniform(limit[0], limit[1]), rng.uniform(limit[2], limit[3]),
rng.uniform(limit[4], limit[5]));
float l = sqrt(vec.dot(vec));
// Scale it to have a magnitude of r
vec /= l / r;
sphere_pts_ptr[3 * i] = model[0] + vec[0];
sphere_pts_ptr[3 * i + 1] = model[1] + vec[1];
sphere_pts_ptr[3 * i + 2] = model[2] + vec[2];
}
};
float distance_tolerance = 0.1f, radius_tolerance = 0.1f;
CheckDiffFunction sphereCheckDiff = [distance_tolerance, radius_tolerance](const Mat &a,
const Mat &b) -> bool {
Mat d = a - b;
auto d_ptr = (float *) d.data;
// Distance square between sphere centers
float d_square = d_ptr[0] * d_ptr[0] + d_ptr[1] * d_ptr[1] + d_ptr[2] * d_ptr[2];
// Difference square between radius of two spheres
float r_square = d_ptr[3] * d_ptr[3];
return d_square <= distance_tolerance * distance_tolerance &&
r_square <= radius_tolerance * radius_tolerance;
};
// Single sphere segmentation
for (int i = 0; i < models_num; i++)
{
generateSphere(generated_pts, models[i], thrs[i], pt_nums[i], limits[i]);
pt_cloud.push_back(generated_pts);
singleModelSegmentation(3000, sphereCheckDiff, i);
}
// Single sphere segmentation with constraint
for (int i = models_num / 2; i < models_num; i++)
{
float constraint_radius = models[i][3] + 0.5f;
// Radius constraint function
model_constraint = [constraint_radius](
const vector<double> &model) -> bool {
auto model_radius = (float) model[3];
return model_radius <= constraint_radius;
};
sacSegmentation->setCustomModelConstraints(model_constraint);
singleModelSegmentation(10000, sphereCheckDiff, i);
}
pt_cloud.release();
sacSegmentation->setCustomModelConstraints(nullptr);
sacSegmentation->setParallel(true);
// Multi-sphere segmentation
for (int i = 0; i < models_num; i++)
{
generateSphere(generated_pts, models[i], thrs[models_num - 1], pt_nums[i], limits[i]);
pt_cloud.push_back(generated_pts);
}
multiModelSegmentation(5000, sphereCheckDiff);
}
} // namespace
} // opencv_test

@@ -0,0 +1,90 @@
/**
* @file 3d_sac_segmentation.cpp
* @brief It demonstrates the usage of cv::SACSegmentation.
*
* It shows how to implement 3D point cloud plane segmentation
* using the RANSAC algorithm via cv::SACSegmentation, and
* the construction of cv::SACSegmentation::ModelConstraintFunction.
*
* @author Yechun Ruan
* @date December 2021
*/
#include <opencv2/core.hpp>
#include <opencv2/3d.hpp>
bool customFunc(const std::vector<double> &model_coefficients);
void usageExampleSacModelConstraintFunction();
int planeSegmentationUsingRANSAC(const cv::Mat &pt_cloud,
std::vector<cv::Vec4d> &planes_coeffs, std::vector<char> &labels);
//! [usageExampleSacModelConstraintFunction]
bool customFunc(const std::vector<double> &model_coefficients)
{
// check model_coefficients
// The plane needs to pass through the origin, i.e. ax+by+cz+d=0 --> d==0
return model_coefficients[3] == 0;
} // end of function customFunc()
void usageExampleSacModelConstraintFunction()
{
using namespace cv;
SACSegmentation::ModelConstraintFunction func_example1 = customFunc;
SACSegmentation::ModelConstraintFunction func_example2 =
[](const std::vector<double> &model_coefficients) {
// check model_coefficients
// The plane needs to pass through the origin, i.e. ax+by+cz+d=0 --> d==0
return model_coefficients[3] == 0;
};
// Using local variables
float x0 = 0.0, y0 = 0.0, z0 = 0.0;
SACSegmentation::ModelConstraintFunction func_example3 =
[x0, y0, z0](const std::vector<double> &model_coeffs) -> bool {
// check model_coefficients
// The plane needs to pass through the point (x0, y0, z0), i.e. ax0+by0+cz0+d == 0
return model_coeffs[0] * x0 + model_coeffs[1] * y0 + model_coeffs[2] * z0
+ model_coeffs[3] == 0;
};
// Next, use the constructed SACSegmentation::ModelConstraintFunction func_example1, 2, 3 ......
}
//! [usageExampleSacModelConstraintFunction]
//! [planeSegmentationUsingRANSAC]
int planeSegmentationUsingRANSAC(const cv::Mat &pt_cloud,
std::vector<cv::Vec4d> &planes_coeffs, std::vector<char> &labels)
{
using namespace cv;
Ptr<SACSegmentation> sacSegmentation =
SACSegmentation::create(SAC_MODEL_PLANE, SAC_METHOD_RANSAC);
sacSegmentation->setDistanceThreshold(0.21);
// The maximum number of iterations to attempt (default: 1000).
sacSegmentation->setMaxIterations(1500);
sacSegmentation->setNumberOfModelsExpected(2);
Mat planes_coeffs_mat;
// Number of final resultant models obtained by segmentation.
int model_cnt = sacSegmentation->segment(pt_cloud,
labels, planes_coeffs_mat);
planes_coeffs.clear();
for (int i = 0; i < model_cnt; ++i)
{
planes_coeffs.push_back(planes_coeffs_mat.row(i));
}
return model_cnt;
}
//! [planeSegmentationUsingRANSAC]
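// Illustrative sketch (hypothetical helper, not referenced by main()):
// building a small synthetic point cloud and passing it to the
// planeSegmentationUsingRANSAC() function defined above.
static void examplePlaneSegmentationCall()
{
    // 100 random points of the plane z = 0 inside the unit square,
    // stored as an N x 3 CV_32F matrix (one point per row), as in the tests of this PR.
    cv::Mat pt_cloud(100, 3, CV_32F);
    cv::RNG rng(0);
    for (int i = 0; i < pt_cloud.rows; ++i)
    {
        pt_cloud.at<float>(i, 0) = rng.uniform(0.0f, 1.0f);
        pt_cloud.at<float>(i, 1) = rng.uniform(0.0f, 1.0f);
        pt_cloud.at<float>(i, 2) = 0.0f;
    }
    std::vector<cv::Vec4d> planes_coeffs;
    std::vector<char> labels;
    int model_cnt = planeSegmentationUsingRANSAC(pt_cloud, planes_coeffs, labels);
    CV_Assert(model_cnt == (int) planes_coeffs.size());
}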
int main()
{
}