This is a simple, lightweight class that encapsulates pitched memory on the GPU. It is intended to be passed to nvcc-compiled code, that is, to CUDA kernels. It is used internally by OpenCV and by users who write their own device code. Its members can be called from both host and device code. ::
This structure is similar to :cpp:class:`gpu::DevMem2D_` but contains only a pointer and a row step. Width and height fields are excluded for performance reasons. The structure is intended for internal use or for users who write their own device code. ::
This structure is similar to :cpp:class:`gpu::DevMem2D_` but contains only a pointer and a row step in elements. Width and height fields are excluded for performance reasons. This class can only be constructed if ``sizeof(T)`` is a multiple of 256. The structure is intended for internal use or for users who write their own device code. ::
The base storage class for GPU memory with reference counting. Its interface closely matches the :c:type:`Mat` interface with the following limitations: no arbitrary dimensions support (only 2D is supported), no functions that return references to their data (because references on the GPU are not valid for the CPU), and no support for the expression templates technique. Because of the last limitation, take care with overloaded matrix operators: they cause memory allocations. The ``GpuMat`` class is convertible to :cpp:class:`gpu::DevMem2D_` and :cpp:class:`gpu::PtrStep_`, so it can be passed directly to a kernel.
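
As an illustration, the following sketch (not part of the library; the kernel name, launch configuration, and header path are placeholders that may differ between OpenCV versions) shows how a ``GpuMat`` allocated on the host side can be handed directly to a user-written CUDA kernel through the ``DevMem2D_`` conversion: ::

    // user_kernels.cu -- a minimal sketch, compiled with nvcc
    #include <opencv2/gpu/gpu.hpp>           // GpuMat; header layout may differ between versions
    #include <opencv2/core/devmem2d.hpp>     // DevMem2D_; header path may differ between versions

    // Hypothetical kernel: multiplies every element of a single-channel float image by a factor.
    __global__ void scaleKernel(cv::gpu::DevMem2D_<float> img, float factor)
    {
        const int x = blockIdx.x * blockDim.x + threadIdx.x;
        const int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x < img.cols && y < img.rows)
            img.ptr(y)[x] *= factor;         // ptr(y) applies the row step, so padded rows are handled
    }

    void scale(cv::gpu::GpuMat& img, float factor)
    {
        dim3 block(16, 16);
        dim3 grid((img.cols + block.x - 1) / block.x, (img.rows + block.y - 1) / block.y);
        scaleKernel<<<grid, block>>>(img, factor);   // GpuMat converts to DevMem2D_<float> here
        cudaDeviceSynchronize();                      // wait for the kernel to finish
    }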
**Please note:** In contrast with :c:type:`Mat`, ``GpuMat::isContinuous() == false`` in most cases, that is, rows are aligned to a size that depends on the hardware. A single-row ``GpuMat``, however, is always a continuous matrix. ::
**Please note:** It is bad practice to leave static or global ``GpuMat`` variables allocated, that is, to rely on their destructors. The destruction order of such variables and of the CUDA context is undefined, and the GPU memory release function returns an error if the CUDA context has already been destroyed.
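
For example, instead of relying on the destructor of a global ``GpuMat``, release it explicitly while the CUDA context is still alive (a minimal sketch): ::

    #include <opencv2/gpu/gpu.hpp>

    cv::gpu::GpuMat g_buffer;   // global GpuMat: its destructor may run after the CUDA context is gone

    int main()
    {
        g_buffer.create(480, 640, CV_8UC1);
        // ... use g_buffer ...
        g_buffer.release();     // free the GPU memory explicitly before the program (and the context) exits
        return 0;
    }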
This is a class with reference counting that wraps special memory allocation functions from CUDA. Its interface is also :c:type:`Mat`-like but has an additional memory type parameter:
* ``ALLOC_WRITE_COMBINED`` Sets a write-combined buffer that is not cached by the CPU. Such buffers are used to supply the GPU with data when the GPU only reads it. The advantage is better CPU cache utilization.
**Please note:** The allocation size for such memory types is usually limited. For more details, see the "CUDA 2.2 Pinned Memory APIs" document or the "CUDA C Programming Guide". ::
Maps CPU memory to the GPU address space and creates a :cpp:class:`gpu::GpuMat` header for it without reference counting. This can be done only if the memory was allocated with the ``ALLOC_ZEROCOPY`` flag and if the hardware supports it (laptops often share video and CPU memory, so their address spaces can be mapped, which eliminates an extra copy).
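
A minimal sketch of this mapping, assuming the device supports mapped page-locked memory (the helper names follow the 2.x ``gpu`` module and may differ between versions): ::

    #include <opencv2/gpu/gpu.hpp>

    cv::gpu::CudaMem host_buf(480, 640, CV_32FC1, cv::gpu::CudaMem::ALLOC_ZEROCOPY);

    if (cv::gpu::CudaMem::canMapHostMemory())
    {
        cv::Mat         cpu_view = host_buf.createMatHeader();    // CPU view of the buffer
        cv::gpu::GpuMat gpu_view = host_buf.createGpuMatHeader(); // GPU view of the same memory, no copy
        // Data written through cpu_view is visible to kernels operating on gpu_view.
    }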
This class encapsulates a queue of asynchronous calls. Some functions have overloads with an additional ``gpu::Stream`` parameter. These overloads do the initialization work (allocate output buffers, upload constants, and so on), start the GPU kernel, and return before the results are ready. Whether all operations are complete can be checked via :cpp:func:`gpu::Stream::queryIfComplete`. Asynchronous uploads/downloads have to be performed from/to page-locked buffers, that is, using :cpp:class:`gpu::CudaMem` or a :c:type:`Mat` header that points to a region of :cpp:class:`gpu::CudaMem`.
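
A minimal usage sketch (the GPU operations to enqueue are left as a placeholder; only asynchronous upload/download and the completion check are shown): ::

    #include <opencv2/gpu/gpu.hpp>

    cv::gpu::CudaMem src_host(480, 640, CV_32FC1, cv::gpu::CudaMem::ALLOC_PAGE_LOCKED);
    cv::gpu::CudaMem dst_host(480, 640, CV_32FC1, cv::gpu::CudaMem::ALLOC_PAGE_LOCKED);
    cv::gpu::GpuMat  src_dev;
    cv::gpu::Stream  stream;

    stream.enqueueUpload(src_host, src_dev);     // asynchronous: returns before the copy is done
    // ... enqueue GPU operations here using overloads that accept this stream ...
    stream.enqueueDownload(src_dev, dst_host);   // also asynchronous

    while (!stream.queryIfComplete())
    {
        // do useful CPU work while the GPU queue is being processed
    }
    // Alternatively, block until everything is finished:
    // stream.waitForCompletion();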
**Please note the limitation**: it is currently not guaranteed that everything works properly if the same operation is enqueued twice with different data. Some functions use constant GPU memory, and the next call may update that memory before the previous call has finished. Enqueueing different operations asynchronously is safe, however, because each operation has its own constant buffer. Memory copy/upload/download/set operations on buffers held by the user are also safe. ::
This class makes it possible to obtain a ``cudaStream_t`` from a :cpp:class:`gpu::Stream`. It is declared in ``stream_accessor.hpp`` because that is the only public header that depends on the CUDA Runtime API. Including it brings this dependency into your code. ::
Ensures that the matrix is large enough and has the proper type. The function does not reallocate memory if the matrix already has the proper attributes.
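
For example, a buffer that is reused across iterations can be kept allocated like this (a sketch; the processing itself is omitted): ::

    #include <opencv2/gpu/gpu.hpp>

    cv::gpu::GpuMat buf;
    const int frame_count = 100;

    for (int i = 0; i < frame_count; ++i)
    {
        // Reallocates only if buf does not already have the required size and type.
        cv::gpu::ensureSizeIsEnough(480, 640, CV_8UC1, buf);
        // ... fill and use buf ...
    }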