From e5084a2f12dee3346ca40d8db3c153fff224919c Mon Sep 17 00:00:00 2001
From: kallaballa
Date: Mon, 10 Apr 2023 09:11:12 +0200
Subject: [PATCH] started on documentation

---
 README.md | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/README.md b/README.md
index af8bd738d..a2bded921 100644
--- a/README.md
+++ b/README.md
@@ -11,6 +11,20 @@ Viz2D is a new way of writing graphical (on- and offscreen) applications with OpenCV
 * Video pipeline: Through a simple Source/Sink system videos can be displayed, edited and saved.
 * Hardware acceleration: Automatic hardware acceleration usage where possible. (e.g. cl-gl sharing and VAAPI). Actually it is possible to write programs to run almost entirely on the GPU, given driver-features are available.
 
+# Documentation
+
+## Basics
+* Viz2D is not thread-safe. However, it is possible to have several Viz2D objects in one or more threads and synchronize them using ```Viz2D::makeNonCurrent()``` and ```Viz2D::makeCurrent()```. This limitation stems from GLFW3.
+* Access to the different subsystems (OpenGL, OpenCL, NanoVG and NanoGUI) is provided through "contexts". A context is simply a function that takes a functor, sets up the subsystem, executes the functor and tears down the subsystem.
+
+For example, to create an OpenGL context and set the GL viewport:
+```C++
+v2d->gl([](const cv::Size sz) {
+    glViewport(0, 0, sz.width, sz.height);
+});
+```
+* Viz2D uses InputArray/OutputArray/InputOutputArray, which gives you the option to work with cv::Mat, std::vector and cv::UMat. However, you should prefer cv::UMat whenever possible, so that hardware capabilities are used automatically where available.
+
 # Attribution
 * The author of the bunny video is **(c) copyright Blender Foundation | www.bigbuckbunny.org**.
 * The author of the dance video is **GNI Dance Company** ([Original video](https://www.youtube.com/watch?v=yg6LZtNeO_8))
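
A minimal sketch illustrating the InputArray/OutputArray/UMat point from the documentation above, using only plain OpenCV (T-API); the helper ```blurInto``` is hypothetical and not part of the Viz2D API. The same call accepts both cv::Mat and cv::UMat, and the cv::UMat variant may be executed through OpenCL where the hardware and drivers support it:
```C++
// Sketch only: plain OpenCV, not Viz2D-specific. blurInto() is a
// hypothetical helper that mirrors the InputArray/OutputArray
// parameter style described in the README.
#include <opencv2/core.hpp>
#include <opencv2/core/ocl.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>

static void blurInto(cv::InputArray src, cv::OutputArray dst) {
    // Works unchanged for cv::Mat and cv::UMat arguments.
    cv::GaussianBlur(src, dst, cv::Size(9, 9), 0);
}

int main() {
    std::cout << "OpenCL available: " << std::boolalpha
              << cv::ocl::haveOpenCL() << std::endl;

    // CPU path: plain cv::Mat in and out.
    cv::Mat m(480, 640, CV_8UC3, cv::Scalar::all(127));
    cv::Mat mOut;
    blurInto(m, mOut);

    // Potentially accelerated path: cv::UMat may be processed via OpenCL.
    cv::UMat u(480, 640, CV_8UC3, cv::Scalar::all(127));
    cv::UMat uOut;
    blurInto(u, uOut);

    return 0;
}
```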