start to add proper sphinx docs; remove old stuff here

pull/104/head
ahb 10 years ago
parent fcb8e31c53
commit 54c5dacf56
  1. 91
      modules/cvv/doc/Ueberblick-tut.md
  2. 34
      modules/cvv/doc/code_example/CMakeLists.txt
  3. 5
      modules/cvv/doc/code_example/README.md
  4. 137
      modules/cvv/doc/code_example/main.cpp
  5. 10
      modules/cvv/doc/cvv.rst
  6. 10
      modules/cvv/doc/cvv_api/index.rst
  7. 35
      modules/cvv/doc/filterfunction-tut.md
  8. 106
      modules/cvv/doc/filterquery-ref.md
  9. BIN
      modules/cvv/doc/images_example/filter_tab_default.png
  10. BIN
      modules/cvv/doc/images_example/match_tab_line.png
  11. BIN
      modules/cvv/doc/images_example/match_tab_translations_2.png
  12. BIN
      modules/cvv/doc/images_example/match_translations.png
  13. BIN
      modules/cvv/doc/images_example/match_translations_2_70percent.png
  14. BIN
      modules/cvv/doc/images_example/overview_all.png
  15. BIN
      modules/cvv/doc/images_example/overview_matches_filtered.png
  16. BIN
      modules/cvv/doc/images_example/overview_single_call.png
  17. BIN
      modules/cvv/doc/images_example/overview_two_calls.png
  18. BIN
      modules/cvv/doc/images_example/raw_view.png
  19. BIN
      modules/cvv/doc/images_example/single_filter_deep_zoom.png
  20. BIN
      modules/cvv/doc/images_example/single_filter_gray.png
  21. BIN
      modules/cvv/doc/images_example/single_filter_right_two_imgs_unselected.png
  22. BIN
      modules/cvv/doc/images_example/single_image_tab.png
  23. BIN
      modules/cvv/doc/images_tut/dilate_calltab_defaultfview.PNG
  24. BIN
      modules/cvv/doc/images_tut/dilate_overview.PNG
  25. BIN
      modules/cvv/doc/images_ueberblick/DefaultFilterViewTab.PNG
  26. BIN
      modules/cvv/doc/images_ueberblick/DualfilterViewDiffImg.PNG
  27. BIN
      modules/cvv/doc/images_ueberblick/LineMatchViewTab.PNG
  28. BIN
      modules/cvv/doc/images_ueberblick/LineMatchViewZoomed.PNG
  29. BIN
      modules/cvv/doc/images_ueberblick/MainWindow.PNG
  30. BIN
      modules/cvv/doc/images_ueberblick/MainWindowFull.PNG
  31. BIN
      modules/cvv/doc/images_ueberblick/MainwindowTwoCalls.PNG
  32. BIN
      modules/cvv/doc/images_ueberblick/OverviewFilterQueryGroupByID.PNG
  33. BIN
      modules/cvv/doc/images_ueberblick/RawViewTab.PNG
  34. BIN
      modules/cvv/doc/images_ueberblick/SingleImageTab.PNG
  35. BIN
      modules/cvv/doc/images_ueberblick/TranslationMatchViewTab.PNG
  36. 122
      modules/cvv/doc/index.md
  37. 41
      modules/cvv/doc/introduction-tut.md
  38. 9
      modules/cvv/doc/topics.yml
  39. 40
      modules/cvv/doc/views-ref.md

@@ -1,91 +0,0 @@
#About CVVisual
CVVisual is a debug library for OpenCV that offers several ways to visualize images and the results of OpenCV operations such as filter and match operations.
##Usage: Example
Once the library is linked, the `CVVISUAL_DEBUGMODE` macro is defined and the required headers are included in the code, calling a CVVisual function with the data provided by OpenCV as arguments opens the CVV main window.
For example, a piece of code could look like this:
```cpp
//...
cvv::debugDMatch(src, keypoints1, src, keypoints2, match, CVVISUAL_LOCATION);
```
![](../images_ueberblick/MainWindow.PNG)
The images are shown in the overview table together with information and metadata.
Double-clicking an entry opens a tab in which the images and matches are displayed at full size.
![](../images_ueberblick/LineMatchViewTab.PNG)
In this view, called the *Line Match View*, the key points of the matches, i.e. the similar image points reported by OpenCV, are connected by lines. In the accordion menu you can, for example, change their color. `Ctrl + mouse wheel` zooms.
![](../images_ueberblick/LineMatchViewZoomed.PNG)
The kind of visualization can be changed in the `View` drop-down menu; for example, the matches can also be displayed as translation lines.
![](../images_ueberblick/TranslationMatchViewTab.PNG)
For matches there is also the option of displaying the data in a table, in the so-called
*Raw View*. There, the data can be copied to the clipboard as JSON or CSV via a left click.
![](../images_ueberblick/RawViewTab.PNG)
Clicking `Step` resumes the execution of the debugged program, which was halted when the main window was opened, until it reaches the next CVVisual function:
```cpp
//...
cvv::debugFilter(src, dest, CVVISUAL_LOCATION, filename);
```
The main window appears again, and the new data set is added to the table.
![](../images_ueberblick/MainwindowTwoCalls.PNG)
Since this is a filter operation, the view shown in the tab is a different one:
![](../images_ueberblick/DefaultFilterViewTab.PNG)
The available views also differ from those for match operations.
The *Dual Filter View*, for example, additionally allows displaying a difference image of the two images that were passed in.
![](../images_ueberblick/DualfilterViewDiffImg.PNG)
After a *fast-forward* (`>>`) over the remaining steps of the program
```cpp
//...
cvv::debugDMatch(src, keypoints1, src, keypoints2, match, CVVISUAL_LOCATION);
//...
cvv::debugFilter(src, dest, CVVISUAL_LOCATION, filename);
//...
cvv::debugFilter(src, dest, CVVISUAL_LOCATION, filename);
//...
cvv::debugDMatch(src, keypoints1, src, keypoints2, match, CVVISUAL_LOCATION);
//...
cvv::showImage(img, CVVISUAL_LOCATION);
//...
cvv::finalShow();
```
the overview looks like this:
![](../images_ueberblick/MainWindowFull.PNG)
The last regular call (`showImage`) passes only a single image for display:
![](../images_ueberblick/SingleImageTab.PNG)
Using the text field, the data sets can be sorted, filtered and grouped with commands of CVVisual's *Filter Query Language*. Here they were grouped by ID:
![](../images_ueberblick/OverviewFilterQueryGroupByID.PNG)
This also works in the *Raw View*.
As seen above, `finalShow` must be called after the last call to a regular CVVisual function:
```cpp
//...
cvv::finalShow();
//...
```
The main window is shown one more time; when the only remaining button, `Close`, is pressed, the main window closes for good.
This concludes the debug session.
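Putting these calls together, a complete minimal debug session could look roughly like the following sketch. The image file name and the concrete filter are placeholders chosen for illustration; the API calls themselves follow the code example in `doc/code_example/main.cpp`.
```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

#include <opencv2/cvv/debug_mode.hpp>
#include <opencv2/cvv/show_image.hpp>
#include <opencv2/cvv/filter.hpp>
#include <opencv2/cvv/final_show.hpp>

int main()
{
    // "input.png" is only a placeholder file name for this sketch.
    cv::Mat src = cv::imread("input.png");
    cvv::showImage(src, CVVISUAL_LOCATION, "original image");

    // A filter operation: debugFilter() shows the input and the result side by side.
    cv::Mat dest;
    cv::cvtColor(src, dest, CV_BGR2GRAY);
    cvv::debugFilter(src, dest, CVVISUAL_LOCATION, "to gray");

    // finalShow() must be the last CVVisual call of the program.
    cvv::finalShow();
    return 0;
}
```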
[Source of the image used for demonstration.](http://commons.wikimedia.org/wiki/File:PNG-Gradient.png)

@@ -1,34 +0,0 @@
cmake_minimum_required(VERSION 2.8)
project(cvvisual_test)
SET(CMAKE_PREFIX_PATH ~/software/opencv/install)
SET(CMAKE_CXX_COMPILER "g++-4.8")
SET(CMAKE_CXX_FLAGS "-std=c++11 -O2 -pthread -Wall -Werror")
OPTION(CVV_DEBUG_MODE "cvvisual-debug-mode" ON)
if(CVV_DEBUG_MODE MATCHES ON)
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DCVVISUAL_DEBUGMODE")
endif()
FIND_PACKAGE(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})
FIND_PACKAGE(Qt5Core REQUIRED)
include_directories(${Qt5Core_INCLUDE_DIRS})
FIND_PACKAGE(Qt5Widgets REQUIRED)
include_directories(${Qt5Widgets_INCLUDE_DIRS})
add_definitions(${QT_DEFINITIONS})
SET(OpenCVVisual_DIR "$ENV{HOME}/<<<SET ME>>>")
include_directories("${OpenCVVisual_DIR}/include")
link_directories("${OpenCVVisual_DIR}/build/release")
add_executable(cvvt main.cpp)
target_link_libraries(cvvt
opencv_core opencv_highgui opencv_imgproc opencv_features2d
opencv_cvv
Qt5Core Qt5Widgets Qt5Gui
)

@@ -1,5 +0,0 @@
This is a tiny example of how to use CVVisual. It requires a webcam.
Note that the paths in CMakeLists.txt have to be set manually.
cvvisual_test was created by Andreas Bihlmaier.

@@ -1,137 +0,0 @@
// system includes
#include <getopt.h>
#include <algorithm> // std::sort
#include <cstdio>    // printf, sscanf
#include <iostream>
#include <sstream>   // std::stringstream (used by toString)
#include <string>
#include <vector>
// library includes
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/cvv/debug_mode.hpp>
#include <opencv2/cvv/show_image.hpp>
#include <opencv2/cvv/filter.hpp>
#include <opencv2/cvv/dmatch.hpp>
#include <opencv2/cvv/final_show.hpp>

template<class T> std::string toString(const T& p_arg)
{
    std::stringstream ss;
    ss << p_arg;
    return ss.str();
}
void
usage()
{
    printf("usage: cvvt [-r WxH]\n");
    printf("-h      print this help\n");
    printf("-r WxH  change resolution to width W and height H\n");
}

int
main(int argc, char** argv)
{
    cv::Size* resolution = nullptr;

    // parse options
    const char* optstring = "hr:";
    int opt;
    while ((opt = getopt(argc, argv, optstring)) != -1) {
        switch (opt) {
        case 'h':
            usage();
            return 0;
            break;
        case 'r':
            {
                char dummych;
                resolution = new cv::Size();
                if (sscanf(optarg, "%d%c%d", &resolution->width, &dummych, &resolution->height) != 3) {
                    printf("%s not a valid resolution\n", optarg);
                    return 1;
                }
            }
            break;
        default: /* '?' */
            usage();
            return 2;
        }
    }

    // setup video capture
    cv::VideoCapture capture(0);
    if (!capture.isOpened()) {
        std::cout << "Could not open VideoCapture" << std::endl;
        return 3;
    }
    if (resolution) {
        printf("Setting resolution to %dx%d\n", resolution->width, resolution->height);
        capture.set(CV_CAP_PROP_FRAME_WIDTH, resolution->width);
        capture.set(CV_CAP_PROP_FRAME_HEIGHT, resolution->height);
    }

    cv::Mat prevImgGray;
    std::vector<cv::KeyPoint> prevKeypoints;
    cv::Mat prevDescriptors;
    int maxFeatureCount = 500;
    cv::ORB detector(maxFeatureCount);
    cv::BFMatcher matcher(cv::NORM_HAMMING);

    for (int imgId = 0; imgId < 10; imgId++) {
        // capture a frame
        cv::Mat imgRead;
        capture >> imgRead;
        printf("%d: image captured\n", imgId);

        std::string imgIdString{"imgRead"};
        imgIdString += toString(imgId);
        cvv::showImage(imgRead, CVVISUAL_LOCATION, imgIdString.c_str());

        // convert to grayscale
        cv::Mat imgGray;
        cv::cvtColor(imgRead, imgGray, CV_BGR2GRAY);
        cvv::debugFilter(imgRead, imgGray, CVVISUAL_LOCATION, "to gray");

        // detect ORB features
        std::vector<cv::KeyPoint> keypoints;
        cv::Mat descriptors;
        detector(imgGray, cv::noArray(), keypoints, descriptors);
        printf("%d: detected %zu keypoints\n", imgId, keypoints.size());

        // match them to previous image (if available)
        if (!prevImgGray.empty()) {
            std::vector<cv::DMatch> matches;
            matcher.match(prevDescriptors, descriptors, matches);
            printf("%d: all matches size=%zu\n", imgId, matches.size());
            std::string allMatchIdString{"all matches "};
            allMatchIdString += toString(imgId-1) + "<->" + toString(imgId);
            cvv::debugDMatch(prevImgGray, prevKeypoints, imgGray, keypoints, matches, CVVISUAL_LOCATION, allMatchIdString.c_str());

            // keep only the best matches (by match distance), i.e. drop the worst (1 - bestRatio) fraction
            double bestRatio = 0.8;
            std::sort(matches.begin(), matches.end());
            matches.resize(int(bestRatio * matches.size()));
            printf("%d: best matches size=%zu\n", imgId, matches.size());
            std::string bestMatchIdString{"best " + toString(bestRatio) + " matches "};
            bestMatchIdString += toString(imgId-1) + "<->" + toString(imgId);
            cvv::debugDMatch(prevImgGray, prevKeypoints, imgGray, keypoints, matches, CVVISUAL_LOCATION, bestMatchIdString.c_str());
        }

        prevImgGray = imgGray;
        prevKeypoints = keypoints;
        prevDescriptors = descriptors;
    }

    cvv::finalShow();
    return 0;
}

@@ -0,0 +1,10 @@
*********************************************************************
cvv. GUI for Interactive Visual Debugging of Computer Vision Programs
*********************************************************************
The module provides an interactive GUI to debug and incrementally design computer vision algorithms. The debug statements can remain in the code after development and aid in further changes because they have negligible overhead if the program is compiled in release mode.
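For instance, a debug statement can stay right next to the code it inspects. A minimal sketch (the call mirrors the module's code example; with debugging disabled it is expected to reduce to a cheap no-op):

.. code-block:: cpp

    // convert to grayscale and inspect the result in the cvv GUI
    cv::cvtColor(src, gray, CV_BGR2GRAY);
    cvv::debugFilter(src, gray, CVVISUAL_LOCATION, "to gray");
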
.. toctree::
:maxdepth: 2
CVV API Documentation <cvv_api/index>

@@ -0,0 +1,10 @@
CVV : an Interactive Visual Debugging GUI
*****************************************
.. highlight:: cpp
Introduction
++++++++++++
TODO

@@ -1,35 +0,0 @@
#Introduction to filter function widgets
##The class, functions and types
If you want to enable the user to provide input to a filter, you can inherit from the abstract class FilterFunctionWidget<In,Out>.
It provides an interface for a filter that accepts In images as input and produces Out images as output.
The input images are passed using:
```cpp
InputArray = std::array<util::Reference<const cv::Mat>,In>
```
and the output is provided with an output parameter of the type:
```cpp
OutputArray = std::array<util::Reference<cv::Mat>,Out>
```
You should override the following functions:
```cpp
virtual void applyFilter(InputArray in,OutputArray out) const;
virtual std::pair<bool, QString> checkInput(InputArray in) const;
```
`applyFilter` has to apply your filter and `checkInput` should check whether the filter can be applied (the first member of the returned pair).
In case the filter cannot be applied, the second member of the returned pair should contain a message for the user.
If user input changes the settings of the filter, the function _emitSignal()_ of the member _signFilterSettingsChanged_ should be called.
For a detailed example look at _CVVisual/src/qtutil/filter/grayfilterwidget.{hpp, cpp}_
https://github.com/CVVisualPSETeam/CVVisual/blob/master/src/qtutil/filter/grayfilterwidget.hpp
https://github.com/CVVisualPSETeam/CVVisual/blob/master/src/qtutil/filter/grayfilterwidget.cpp
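Below is a rough sketch of such a widget. The include path, the base-class spelling and the accessor used on `util::Reference` are assumptions made for illustration; the `grayfilterwidget` sources linked above show the real interface.
```cpp
// Hypothetical example: a 1-input / 1-output "invert" filter widget.
// Include path, base-class spelling and the Reference accessor are assumptions;
// see src/qtutil/filter/grayfilterwidget.{hpp,cpp} for a real implementation.
#include "filterfunctionwidget.hpp"

#include <utility>
#include <QString>
#include <opencv2/core/core.hpp>

class InvertFilterWidget : public FilterFunctionWidget<1, 1>
{
public:
    void applyFilter(InputArray in, OutputArray out) const override
    {
        // Invert the single input image into the single output image
        // (assuming util::Reference exposes the referenced cv::Mat via get()).
        cv::bitwise_not(in.at(0).get(), out.at(0).get());
    }

    std::pair<bool, QString> checkInput(InputArray in) const override
    {
        if (in.at(0).get().empty())
        {
            return {false, "The input image is empty."};
        }
        return {true, ""};
    }

    // Whenever a control of this widget changes the filter settings, notify
    // the surrounding view, e.g.: signFilterSettingsChanged_.emitSignal();
};
```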

@@ -1,106 +0,0 @@
#Filter query language
The filter query language is the query language used in the overview and the raw match view to simplify the task of filtering, sorting and grouping data sets in a table UI.
The following is a description of the simple syntax and the supported commands.
Just type `#` into the search field to see some supported commands, using the suggestions feature (it's inspired by the awesome z shell).
##Syntax
A query basically consists of a number of subqueries, each starting with a `#`:
`[raw filter subquery] #[subquery 1] [...] #[subquery n]`
The optional first part of the query doesn't start with a `#`; it's short for `#raw [...]`.
There are three different types of subqueries:
###Sort query
A sort query has the following structure:
`sort by [sort subquery 1], [...], [sort subquery n]`
A sort subquery consists of a sort command (i.e. the feature by which you want to sort the table) and a sort order:
- `[command]`: equivalent to `[command] asc`
- `[command] asc`: sorts in ascending order
- `[command] desc`: sorts in descending order
(The sort command is typically a single word.)
For your interest: `[sort subquery n]` has higher priority than `[sort subquery n+1]`.
###Group query
A group query has the following structure:
`group by [command 1], [...], [command n]`
A group command is a single word declaring the feature you want to group the data sets in the table by.
The group header consists of these `n` items.
For your interest: The raw view currently doesn't support group queries.
###Filter query
A filter query is the basic type of query, allowing you to filter the data sets by several criteria.
It has the following structure:
`#[filter command] [argument]`
It also supports several arguments for one filter command (via the comma-separated filters feature):
`#[cs filter command] [argument 1], [...], [argument n]`
####Range filter query
A range filter query is basically a comma-separated filter command with two arguments, allowing you to
filter for a range of elements (`[lower bound]` <= `element` <= `[upper bound]`).
It has the following structure:
`#[filter command] [lower bound], [upper bound]`
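For example, a complete (hypothetical) query combining these pieces, using the overview commands listed below, could look like this: `gray #image_count 2 #line 10, 50 #sort by line desc #group by file` - it keeps only data sets whose description matches "gray", that contain exactly two images and whose call is located between lines 10 and 50, sorts them by line in descending order and groups them by file.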
##Overview
The following commands are supported in the overview:
feature/command | sorting supported | grouping supported | filtering supported | description
-----------------|:-----------------:|:------------------:|:--------------------|:---------------------
id | yes | yes | yes, also range |
raw | yes | yes | only basic filter | alias for description
description | yes | yes | only basic filter |
image_count | yes | yes | yes, also range | number of images
function | yes | yes | yes | calling function
file | yes | yes | yes | file the call was made in
line | yes | yes | yes, also range |
type | yes | yes | yes | call type
##Rawview
The following commands are supported in the raw (match) view:
feature/command | numeric type | description/property
-----------------|:-------------|:---------------------------------------------
match_distance | float | match distance
img_idx | integer | match img idx
query_idx | integer | match query idx
train_idx | integer | match train idx
x_1 | float | x coordinate of the "left" key point
y_1 | float | y coordinate of the "left" key point
size_1 | float | size of the "left" key point
angle_1 | float | angle of the "left" key point
response_1 | float | response (or strength) of the "left" key point
octave_1 | integer | octave of the "left" key point
x_2 | float | x coordinate of the "right" key point
y_2 | float | y coordinate of the "right" key point
size_2 | float | size of the "right" key point
angle_2 | float | angle of the "right" key point
response_2 | float | response (or strength) of the "right" key point
octave_2 | integer | octave of the "right" key point
All commands support range filtering, sorting and grouping; therefore only the numeric type used
(integer or float) is given.
See the OpenCV documentation for more information about these features.
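As a (hypothetical) example built from these commands: `#match_distance 0, 100 #sort by match_distance desc` would keep only matches with a distance between 0 and 100 and show them sorted by descending distance.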

@@ -1,122 +0,0 @@
#CVVisual Example
CVVisual is a debug visualization for OpenCV; its main purpose is to offer different ways to visualize
the results of OpenCV functions, to make it possible to see whether they are what the programmer had in mind,
and also to offer some functionality to try other operations on the images right in the debug window.
This text illustrates the use of CVVisual with a code example.
Imagine we want to debug this program:
[code_example/main.cpp](https://github.com/CVVisualPSETeam/CVVisual/tree/master/doc/code_example/main.cpp)
Note the includes for CVVisual:
10 #include <opencv2/debug_mode.hpp>
11 #include <opencv2/show_image.hpp>
12 #include <opencv2/filter.hpp>
13 #include <opencv2/dmatch.hpp>
14 #include <opencv2/final_show.hpp>
It takes 10 snapshots with the webcam.
With each, it first shows the image alone in the debug window,
97 cvv::showImage(imgRead, CVVISUAL_LOCATION, imgIdString.c_str());
then converts it to grayscale and calls CVVisual with the original and resulting image,
101 cv::cvtColor(imgRead, imgGray, CV_BGR2GRAY);
102 cvv::debugFilter(imgRead, imgGray, CVVISUAL_LOCATION, "to gray");
detects the grayscale image's ORB features
107 detector(imgGray, cv::noArray(), keypoints, descriptors);
and matches them to those of the previous image, if available. It calls cvv::debugDMatch() with the results.
113 matcher.match(prevDescriptors, descriptors, matches);
...
117 cvv::debugDMatch(prevImgGray, prevKeypoints, imgGray, keypoints, matches, CVVISUAL_LOCATION, allMatchIdString.c_str());
Finally, it keeps only the best 80% of the matches (as defined by match distance), discarding the rest, and calls cvv::debugDMatch() again.
121 std::sort(matches.begin(), matches.end());
122 matches.resize(int(bestRatio * matches.size()));
...
126 cvv::debugDMatch(prevImgGray, prevKeypoints, imgGray, keypoints, matches, CVVISUAL_LOCATION, bestMatchIdString.c_str());
After we start the program, the CVVisual Main Window opens with one _Call_, that is, the first image that `cvv::showImage()` was called with (program execution is halted at this call).
![](../images_example/overview_single_call.png)
The image is shown as a small thumbnail in the _Overview table_, together with additional information on it, like the line of the call and the description passed as a parameter.
We double-click it, and a tab opens, where the image is shown bigger. It looks like the webcam worked, so we press `Step` and go to the _Overview_.
![](../images_example/single_image_tab.png)
The window shows up again, this time with the first _Call_ to `cvv::debugFilter()` added.
![](../images_example/overview_two_calls.png)
We open its tab, too, because, say, the grayscale image does not exactly look like what we wanted.
![](../images_example/filter_tab_default.png)
After switching to _SingleFilterView_, which will be more useful to us here, we choose not to show the right two images - the grayscale image and the one below it, in which the results of filter operations applied in this tab are shown.
![](../images_example/single_filter_right_two_imgs_unselected.png)
In `Select a filter`, a gray filter can be applied with different parameters.
![](../images_example/single_filter_gray.png)
This looks more like what we wanted.
Rechecking `Show image` for the previously hidden result image of the actual filter operation and zooming (`Ctrl` + `Mouse wheel`) synchronously into all images beyond 60% shows the individual channel values of the pixels.
![](../images_example/single_filter_deep_zoom.png)
Sadly, we can't do anything about this situation in this session, though, so we just continue.
As stepping through each single _Call_ seems quite tedious, we use the _fast-forward_ button, `>>`.
The program runs until it reaches `finalShow()`, taking images with the webcam along the way.
This saved us some clicking; on the downside, we now have quite an amount of _Calls_ in the table.
![](../images_example/overview_all.png)
Using the [filter query language](http://cvv.mostlynerdless.de/ref/filterquery-ref.html), the table can be narrowed down to the _Calls_ to `debugDMatch()`, as they have the specific type "match".
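(A query along the lines of `#type match`, using the `type` command from the filter query reference, should do the trick here.)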
![](../images_example/overview_matches_filtered.png)
We open the tab of the last such _Call_, and find ourselves greeted with a dense bundle of lines across both images, which represent the matches between the two.
![](../images_example/match_tab_line.png)
It is a bit unclear where there actually are matches in this case, so we switch to _TranslationMatchView_, which is a little bit better (especially after scrolling a bit to the right in the left image).
![](../images_example/match_translations.png)
_TranslationMatchView_ shows how the matching _KeyPoints_ are moved in the respective other image.
It seems more fitting for this debug session than the _LineMatchView_; thus, we `Set` it `as default`.
Still, there are too many matches for our taste.
Back in the _Overview_, we open the _Call_ before the last one, that is, the one made before the matches were reduced to the best 80%.
![](../images_example/match_tab_translations_2.png)
Here, the best 70% of matches can be chosen. The result looks more acceptable, and we take a mental note to change the threshold to 0.7.
![](../images_example/match_translations_2_70percent.png)
The matches can also be shown in a table, the so-called _RawView_:
![](../images_example/raw_view.png)
Here, you could copy a selection of them as CSV, JSON, Ruby or Python to the clipboard.
We don't need that at the moment, though; we just close the window, and the program finishes.
We now know what we might want to change in the program.
Finally, a little note on the `cvv::finalShow()` function:
It needs to be there in every program using CVVisual, after the last call to any other CVVisual function, or else the program will crash at the end.
Hopefully, this example shed some light on how CVVisual can be used.
If you want to learn more, refer to the [API](http://cvv.mostlynerdless.de/api) or other documentation on the [web page](http://cvv.mostlynerdless.de/).
Credit, and special thanks, goes to Andreas Bihlmaier, supervisor of the project, who provided the example code.

@@ -1,41 +0,0 @@
#Introduction to using CVVisual
##Enabling debug mode
Define the CVVISUAL\_DEBUGMODE macro somewhere in the translation unit, before the CVVisual headers are included.
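A minimal sketch of both variants (header path as in the code example under `doc/code_example`; assuming the headers check this macro, as the `-DCVVISUAL_DEBUGMODE` flag in the example CMakeLists.txt suggests):
```cpp
// Either define the macro before including any CVVisual header ...
#define CVVISUAL_DEBUGMODE
#include <opencv2/cvv/debug_mode.hpp>

// ... or leave the source untouched and pass -DCVVISUAL_DEBUGMODE
// on the compiler command line instead.
```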
##Opening the debug window
Open the debug window by putting one of the functions from the [CVVisual API](http://cvv.mostlynerdless.de/api) into your code.
In this example, we want to debug a call to `dilate`, which is a filter operation, so we use debugFilter.
###Example: Code
src, dest and the structuring element elem are of type cv::Mat (see the OpenCV documentation on [dilate()](http://docs.opencv.org/modules/imgproc/doc/filtering.html#dilate));
CVVISUAL\_LOCATION is a special macro that inserts the location of the call, and description and view can be either string literals or std::strings. The latter three are all optional.
```cpp
#include <filter.hpp>
//...
cv::dilate(src, dest, elem);
cvv::debugFilter(src, dest, CVVISUAL_LOCATION, description, view);
```
When executing the code, the debugFilter function will open the window and halt the execution.
##The Overview Tab
![](images_tut/dilate_overview.PNG)
You are now in the overview tab. Each time you call one of the CVVisual functions, a *Call* is added to the table.
You can see the images you passed to the function as well as metadata and additional information.
The text field allows you to sort or group the Calls by different criteria; see the [filter query language documentation](http://cvv.mostlynerdless.de/ref/filterquery-ref.html) on how to use it.
Now double-click on the Call or select `Open in 'CVVisual|main window'` from the context menu.
(You can also choose to remove the Call or open it in a new window there.)
##Debugging a filter operation
![](images_tut/dilate_calltab_defaultfview.PNG)
A *CallTab* opens. In the center, there are the images from the call.
In the `View` drop-down menu you find different *Views* of the Call, that is, different visualizations of it. The accordion menu on the left offers information on the images and additional options depending on the View and the type of the Call.
Important here might be that `ImageInformation` offers the possibility to zoom (you can also use `Ctrl` plus the mouse wheel); if you zoom in more than 60%, the image pixels will be overlaid with the channel values, in a 3-channel image usually in order (top-down) BGR.
As our dilate seems to have produced acceptable results, we want to continue through the code.
So, we push the `Step` button in the upper left.
The window will come up again the next time one of the CVVisual functions is called.
Then, we see two Calls in the Overview table, the one from before and the new one.
You need to put `finalShow()` after the last regular CVVisual function. When the program reaches it, `Step` and the fast-forward button `>>` will vanish, so we press `Close`, which does exactly what it says.
([Source](http://commons.wikimedia.org/wiki/File:PNG-Gradient.png) of the image used for demonstration.
Note that the screenshots were taken during development and may not depict all features of the current version.)

@@ -1,9 +0,0 @@
filterquery: filterquery-ref.md
SingleImageView: views-ref.md#toc_2
DefaultFilterView: views-ref.md#toc_4
DualFilterView: views-ref.md#toc_5
SingleFilterView: views-ref.md#toc_6
DepthMatchView: views-ref.md#toc_8
LineMatchView: views-ref.md#toc_9
RawView: views-ref.md#toc_10
TranslationMatchView: views-ref.md#toc_11

@@ -1,40 +0,0 @@
#Views
##General information:
Most views offer an `ImageInformation` collapsible in their accordion menus.
The zoom can be found here.
`Ctrl`+`Mouse wheel` also zooms; `Ctrl`+`Shift`+`Mouse wheel` zooms more slowly.
If the zoom is deeper than 60%, the image's pixels will be overlaid with their channel values; usually, the order is BGR[+alpha] from the top.
##Single Image View:
Associated with the `debugSingleImage()` function.
Shows one single image with no features other than `Image Information`.
##Filter Views:
Associated with the `debugFilter()` function.
###DefaultFilterView:
Shows two images with only the basic features of `ImageInformation`, synchronized zoom and `Histogram`.
###DualFilterView:
Shows the two images given to the CVVisual function and a _Result Image_ in between,
which represents the result of a filter applied to the other two via the `Filter selection` collapsible,
such as a difference image of the two.
###SingleFilterView:
Allows applying filters to the images it shows via the `Select a filter` collapsible.
##Match Views:
Associated with the `debugDMatch()` function.
###PointMatchView:
Interprets the translation of matches as depth value.
###LineMatchView:
Connects matching key points in the images with lines.
###Rawview:
Shows the data of the matches in a table.
The table entries can be filtered, sorted and grouped by using commands from CVVisual's [filter query language](filterquery-ref.html) in the text box.
###TranslationMatchView:
Shows the distance from a key point in one image to its match in the other as an arrow or a line in one of the images.