dnn: fix documentation links

pull/8989/head
Alexander Alekhin 7 years ago
parent 14de8ac951
commit 382e38941c
  1. modules/dnn/test/pascal_semsegm_test_fcn.py (4 changed lines)
  2. modules/dnn/tutorials/tutorial_dnn_googlenet.markdown (22 changed lines)
  3. modules/dnn/tutorials/tutorial_dnn_halide.markdown (20 changed lines)
  4. samples/dnn/squeezenet_halide.cpp (2 changed lines)

@@ -205,9 +205,9 @@ if __name__ == "__main__":
parser.add_argument("--val_names", help="path to file with validation set image names, download it here: "
"https://github.com/shelhamer/fcn.berkeleyvision.org/blob/master/data/pascal/seg11valid.txt")
parser.add_argument("--cls_file", help="path to file with colors for classes, download it here: "
"https://github.com/opencv/opencv_contrib/blob/master/modules/dnn/samples/pascal-classes.txt")
"https://github.com/opencv/opencv/blob/master/modules/samples/data/dnn/pascal-classes.txt")
parser.add_argument("--prototxt", help="path to caffe prototxt, download it here: "
"https://github.com/opencv/opencv_contrib/blob/master/modules/dnn/samples/fcn8s-heavy-pascal.prototxt")
"https://github.com/opencv/opencv/blob/master/modules/samples/data/dnn/fcn8s-heavy-pascal.prototxt")
parser.add_argument("--caffemodel", help="path to caffemodel file, download it here: "
"http://dl.caffe.berkeleyvision.org/fcn8s-heavy-pascal.caffemodel")
parser.add_argument("--log", help="path to logging file")

@@ -13,30 +13,30 @@ We will demonstrate results of this example on the following picture.
Source Code
-----------
-We will be using snippets from the example application, that can be downloaded [here](https://github.com/ludv1x/opencv_contrib/blob/master/modules/dnn/samples/caffe_googlenet.cpp).
+We will be using snippets from the example application, that can be downloaded [here](https://github.com/opencv/opencv/blob/master/modules/samples/dnn/caffe_googlenet.cpp).
-@include dnn/samples/caffe_googlenet.cpp
+@include dnn/caffe_googlenet.cpp
Explanation
-----------
-# Firstly, download GoogLeNet model files:
-[bvlc_googlenet.prototxt ](https://raw.githubusercontent.com/ludv1x/opencv_contrib/master/modules/dnn/samples/bvlc_googlenet.prototxt) and
+[bvlc_googlenet.prototxt ](https://raw.githubusercontent.com/opencv/opencv/master/modules/samples/data/dnn/bvlc_googlenet.prototxt) and
[bvlc_googlenet.caffemodel](http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel)
You also need a file with the names of the [ILSVRC2012](http://image-net.org/challenges/LSVRC/2012/browse-synsets) classes:
-[synset_words.txt](https://raw.githubusercontent.com/ludv1x/opencv_contrib/master/modules/dnn/samples/synset_words.txt).
+[synset_words.txt](https://raw.githubusercontent.com/opencv/opencv/master/modules/samples/data/dnn/synset_words.txt).
Put these files into the working directory of this sample program.
-# Read and initialize network using path to .prototxt and .caffemodel files
-@snippet dnn/samples/caffe_googlenet.cpp Read and initialize network
+@snippet dnn/caffe_googlenet.cpp Read and initialize network
-# Check that network was read successfully
-@snippet dnn/samples/caffe_googlenet.cpp Check that network was read successfully
+@snippet dnn/caffe_googlenet.cpp Check that network was read successfully
-# Read input image and convert to the blob, acceptable by GoogleNet
-@snippet dnn/samples/caffe_googlenet.cpp Prepare blob
+@snippet dnn/caffe_googlenet.cpp Prepare blob
Firstly, we resize the image and change the order of its channels.
Now the image is a 3-dimensional array with 224x224x3 shape.
@@ -44,22 +44,22 @@ Explanation
Next, we convert the image to a 4-dimensional blob (a so-called batch) with 1x3x224x224 shape using the special cv::dnn::blobFromImages constructor.
-# Pass the blob to the network
-@snippet dnn/samples/caffe_googlenet.cpp Set input blob
+@snippet dnn/caffe_googlenet.cpp Set input blob
In bvlc_googlenet.prototxt the network input blob is named "data", therefore this blob is labeled ".data" in the opencv_dnn API.
Other blobs are labeled "name_of_layer.name_of_layer_output".
-# Make forward pass
-@snippet dnn/samples/caffe_googlenet.cpp Make forward pass
+@snippet dnn/caffe_googlenet.cpp Make forward pass
During the forward pass the output of each network layer is computed, but in this example we need the output of the "prob" layer only.
-# Determine the best class
-@snippet dnn/samples/caffe_googlenet.cpp Gather output
+@snippet dnn/caffe_googlenet.cpp Gather output
We put the output of the "prob" layer, which contains probabilities for each of the 1000 ILSVRC2012 image classes, into the `prob` blob,
and find the index of the element with the maximal value. This index corresponds to the class of the image.
-# Print results
-@snippet dnn/samples/caffe_googlenet.cpp Print results
+@snippet dnn/caffe_googlenet.cpp Print results
For our image we get:
> Best class: #812 'space shuttle'
>
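For reference, the flow these snippets assemble looks roughly like the sketch below. This is a minimal sketch against the current cv::dnn API (readNetFromCaffe, blobFromImage, Net::setInput, Net::forward), not the verbatim caffe_googlenet.cpp sample; the input image name, the 224x224 input size and the mean values are assumptions based on the tutorial text.

```cpp
#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/core.hpp>
#include <iostream>

int main(int argc, char** argv)
{
    // [Read and initialize network] from the downloaded .prototxt and .caffemodel
    cv::dnn::Net net = cv::dnn::readNetFromCaffe("bvlc_googlenet.prototxt",
                                                 "bvlc_googlenet.caffemodel");
    // [Check that network was read successfully]
    if (net.empty())
    {
        std::cerr << "Can't load network" << std::endl;
        return -1;
    }
    // [Prepare blob] resize to 224x224, subtract the mean, pack into a 1x3x224x224 batch
    cv::Mat img = cv::imread(argc > 1 ? argv[1] : "space_shuttle.jpg");
    if (img.empty())
    {
        std::cerr << "Can't read input image" << std::endl;
        return -1;
    }
    cv::Mat blob = cv::dnn::blobFromImage(img, 1.0, cv::Size(224, 224),
                                          cv::Scalar(104, 117, 123), false);
    // [Set input blob] "data" is the input blob name from bvlc_googlenet.prototxt
    net.setInput(blob, "data");
    // [Make forward pass] we only need the output of the "prob" layer
    cv::Mat prob = net.forward("prob");
    // [Gather output] the index of the maximal probability is the class id
    cv::Point classId;
    double confidence;
    cv::minMaxLoc(prob.reshape(1, 1), 0, &confidence, 0, &classId);
    // [Print results] classId.x can be mapped to a name via synset_words.txt
    std::cout << "Best class: #" << classId.x
              << ", probability: " << confidence << std::endl;
    return 0;
}
```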

@@ -89,42 +89,42 @@ How to build OpenCV with DNN module you may find in @ref tutorial_dnn_build.
## Sample
-@include dnn/samples/squeezenet_halide.cpp
+@include dnn/squeezenet_halide.cpp
## Explanation
Download the Caffe model from the SqueezeNet repository: [train_val.prototxt](https://github.com/DeepScale/SqueezeNet/blob/master/SqueezeNet_v1.1/train_val.prototxt) and [squeezenet_v1.1.caffemodel](https://github.com/DeepScale/SqueezeNet/blob/master/SqueezeNet_v1.1/squeezenet_v1.1.caffemodel).
You also need a file with the names of the [ILSVRC2012](http://image-net.org/challenges/LSVRC/2012/browse-synsets) classes:
-[synset_words.txt](https://raw.githubusercontent.com/ludv1x/opencv_contrib/master/modules/dnn/samples/synset_words.txt).
+[synset_words.txt](https://raw.githubusercontent.com/opencv/opencv/master/modules/samples/data/dnn/synset_words.txt).
Put these files into the working directory of this sample program.
-# Read and initialize network using path to .prototxt and .caffemodel files
-@snippet dnn/samples/squeezenet_halide.cpp Read and initialize network
+@snippet dnn/squeezenet_halide.cpp Read and initialize network
-# Check that network was read successfully
-@snippet dnn/samples/squeezenet_halide.cpp Check that network was read successfully
+@snippet dnn/squeezenet_halide.cpp Check that network was read successfully
-# Read input image and convert to the 4-dimensional blob, acceptable by SqueezeNet v1.1
-@snippet dnn/samples/squeezenet_halide.cpp Prepare blob
+@snippet dnn/squeezenet_halide.cpp Prepare blob
-# Pass the blob to the network
-@snippet dnn/samples/squeezenet_halide.cpp Set input blob
+@snippet dnn/squeezenet_halide.cpp Set input blob
-# Enable Halide backend for layers where it is implemented
-@snippet dnn/samples/squeezenet_halide.cpp Enable Halide backend
+@snippet dnn/squeezenet_halide.cpp Enable Halide backend
-# Make forward pass
-@snippet dnn/samples/squeezenet_halide.cpp Make forward pass
+@snippet dnn/squeezenet_halide.cpp Make forward pass
Remember that the first forward pass after initialization requires noticeably more
time than the next ones, because the Halide pipelines are compiled at runtime
on the first invocation.
-# Determine the best class
-@snippet dnn/samples/squeezenet_halide.cpp Determine the best class
+@snippet dnn/squeezenet_halide.cpp Determine the best class
-# Print results
-@snippet dnn/samples/squeezenet_halide.cpp Print results
+@snippet dnn/squeezenet_halide.cpp Print results
For our image we get:
> Best class: #812 'space shuttle'
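The Halide-specific part of this tutorial amounts to one extra call plus a warm-up pass. Below is a minimal sketch, assuming OpenCV was built with Halide support and that the SqueezeNet files from the links above sit in the working directory; the 227x227 input size and the input image name are assumptions, and it is not the verbatim squeezenet_halide.cpp sample.

```cpp
#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/core.hpp>
#include <iostream>

int main(int argc, char** argv)
{
    // Read and initialize the network (file names taken from the download links above).
    cv::dnn::Net net = cv::dnn::readNetFromCaffe("train_val.prototxt",
                                                 "squeezenet_v1.1.caffemodel");
    if (net.empty())
    {
        std::cerr << "Can't load network" << std::endl;
        return -1;
    }
    // Prepare the input blob; 227x227 is the assumed SqueezeNet v1.1 input size.
    cv::Mat img = cv::imread(argc > 1 ? argv[1] : "space_shuttle.jpg");
    cv::Mat blob = cv::dnn::blobFromImage(img, 1.0, cv::Size(227, 227),
                                          cv::Scalar(), false);
    net.setInput(blob);
    // Request the Halide backend; layers without a Halide implementation
    // fall back to the default one.
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_HALIDE);
    // The first forward pass also compiles the Halide pipelines,
    // so time a second call to see the steady-state cost.
    cv::Mat out = net.forward();
    double t0 = (double)cv::getTickCount();
    out = net.forward();
    double ms = ((double)cv::getTickCount() - t0) * 1000.0 / cv::getTickFrequency();
    std::cout << "Forward pass: " << ms << " ms" << std::endl;
    // The best class is then extracted from `out` as in the GoogLeNet example.
    return 0;
}
```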

@@ -6,7 +6,7 @@
// Third party copyrights are property of their respective owners.
// Sample of using Halide backend in OpenCV deep learning module.
-// Based on dnn/samples/caffe_googlenet.cpp.
+// Based on caffe_googlenet.cpp.
#include <opencv2/dnn.hpp>
#include <opencv2/imgproc.hpp>
