s/seperate/separate/g in comments

pull/2320/head
Pavel Grunt 5 years ago
parent 94d1b9bcd8
commit ba08dbbef8
Changed files (2 changed lines each):
1. README.md
2. modules/cnn_3dobj/README.md
3. modules/cnn_3dobj/include/opencv2/cnn_3dobj.hpp
4. modules/ovis/include/opencv2/ovis.hpp
5. modules/saliency/samples/computeSaliency.cpp

@@ -55,4 +55,4 @@ In order to keep a clean overview containing all contributed modules the following
 1. Update the README.md file under the modules folder. Here you add your model with a single line description.
-2. Add a README.md inside your own module folder. This README explains which functionality (seperate functions) is available, links to the corresponding samples and explains in somewhat more detail what the module is expected to do. If any extra requirements are needed to build the module without problems, add them here also.
+2. Add a README.md inside your own module folder. This README explains which functionality (separate functions) is available, links to the corresponding samples and explains in somewhat more detail what the module is expected to do. If any extra requirements are needed to build the module without problems, add them here also.

@@ -79,7 +79,7 @@ $ ./example_cnn_3dobj_classify -mean_file=../data/images_mean/triplet_mean.binar
 ```
 ===========================================================
 ##Demo3: Model performance test
-####This demo will run a performance test of a trained CNN model on several images. If the the model fails on telling different samples from seperate classes apart, or is confused on samples with similar pose but from different classes, this will give some information for model analysis.
+####This demo will run a performance test of a trained CNN model on several images. If the the model fails on telling different samples from separate classes apart, or is confused on samples with similar pose but from different classes, this will give some information for model analysis.
 ```
 $ ./example_cnn_3dobj_model_analysis
 ```

@@ -73,7 +73,7 @@ the use of this software, even if advised of the possibility of such damage.
 As CNN based learning algorithm shows better performance on the classification issues,
 the rich labeled data could be more useful in the training stage. 3D object classification and pose estimation
-is a jointed mission aimming at seperate different posed apart in the descriptor form.
+is a jointed mission aiming at separate different posed apart in the descriptor form.
 In the training stage, we prepare 2D training images generated from our module with their
 class label and pose label. We fully exploit the information lies in their labels

@@ -18,7 +18,7 @@ namespace ovis {
 enum SceneSettings
 {
-/// the window will use a seperate scene. The scene will be shared otherwise.
+/// the window will use a separate scene. The scene will be shared otherwise.
 SCENE_SEPERATE = 1,
 /// allow the user to control the camera.
 SCENE_INTERACTIVE = 2,
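
For context (not part of the patch), here is a minimal sketch of how these SceneSettings flags are typically combined when opening an ovis window. It assumes the `cv::ovis::createWindow(title, size, flags)` and `cv::ovis::waitKey()` declarations from this header and an OpenCV build with the ovis module and its OGRE dependency:
```
#include <opencv2/core.hpp>
#include <opencv2/ovis.hpp>

int main()
{
    using namespace cv;
    // SCENE_SEPERATE gives this window its own scene instead of the shared one;
    // SCENE_INTERACTIVE additionally lets the user move the camera with the mouse.
    Ptr<ovis::WindowScene> win =
        ovis::createWindow("demo", Size(640, 480),
                           ovis::SCENE_SEPERATE | ovis::SCENE_INTERACTIVE);
    // keep the returned handle alive while the window is in use
    ovis::waitKey(0); // block until a key is pressed in the rendering window
    return 0;
}
```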

@@ -157,7 +157,7 @@ int main( int argc, char** argv )
 int ndet = int(saliencyMap.size());
 std::cout << "Objectness done " << ndet << std::endl;
 // The result are sorted by objectness. We only use the first maxd boxes here.
-int maxd = 7, step = 255 / maxd, jitter=9; // jitter to seperate single rects
+int maxd = 7, step = 255 / maxd, jitter=9; // jitter to separate single rects
 Mat draw = image.clone();
 for (int i = 0; i < std::min(maxd, ndet); i++) {
 Vec4i bb = saliencyMap[i];
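
To make the role of `jitter` in this hunk easier to follow, here is a rough sketch of how the rest of that drawing loop could look. The variables `saliencyMap`, `draw`, `ndet`, `maxd`, `step` and `jitter` are the ones from the sample above; the offset, the colour choice and the `imshow`/`waitKey` calls are illustrative, not the sample's exact code:
```
// continuation sketch, not the verbatim sample code
RNG& rng = theRNG();                            // OpenCV's global random generator
for (int i = 0; i < std::min(maxd, ndet); i++)
{
    Vec4i bb = saliencyMap[i];                  // box stored as (x1, y1, x2, y2)
    // small random offset so heavily overlapping boxes remain distinguishable
    Point off(rng.uniform(-jitter, jitter), rng.uniform(-jitter, jitter));
    Scalar col(i * step, 255 - i * step, 0);    // colour varies with objectness rank
    rectangle(draw, Point(bb[0], bb[1]) + off, Point(bb[2], bb[3]) + off, col, 2);
}
imshow("objectness", draw);                     // needs opencv2/highgui.hpp
waitKey(0);
```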
