mirror of https://github.com/opencv/opencv.git
parent 891e2ff310
commit 85e5de67e4
3 changed files with 93 additions and 0 deletions
Binary file not shown.
@@ -0,0 +1,88 @@
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                                                                                    %
%                                          C++                                       %
%                                                                                    %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\ifCpp

\section{Using Kinect sensor}

Kinect data acquisition is supported through the \texttt{VideoCapture} class, so the user can retrieve the depth map, the RGB image and several other Kinect outputs via the familiar \texttt{VideoCapture} interface.\par

To use the Kinect support the user should do the following preliminary steps:\newline
1.) Install the OpenNI library and the PrimeSensor Module for OpenNI from \url{http://www.openni.org/downloadfiles}. The installation should be made into the default folders listed in the install instructions of these products:
\begin{lstlisting}
OpenNI:
    Linux & MacOSX:
        Libs into: /usr/lib
        Includes into: /usr/include/ni
    Windows:
        Libs into: c:/Program Files/OpenNI/Lib
        Includes into: c:/Program Files/OpenNI/Include
PrimeSensor Module:
    Linux & MacOSX:
        Libs into: /usr/lib
        Bins into: /usr/bin
    Windows:
        Libs into: c:/Program Files/Prime Sense/Sensor/Lib
        Bins into: c:/Program Files/Prime Sense/Sensor/Bin
\end{lstlisting}
2.) Configure OpenCV with OpenNI support by setting the \texttt{WITH\_OPENNI} flag in CMake. If OpenNI is found in the default install folders, OpenCV will be built with the OpenNI library regardless of whether the PrimeSensor Module is found or not. If the PrimeSensor Module is not found, the user gets a warning about this in the CMake log; OpenCV is still compiled with the OpenNI library in that case, but a \texttt{VideoCapture} object will not be able to grab data from the Kinect sensor. Build OpenCV.\par
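As a rough sketch of this configure-and-build step (assuming a Linux/MacOSX out-of-source build in a fresh \texttt{build} directory next to the OpenCV sources; the exact generator and options depend on the user's environment):
\begin{lstlisting}
cd opencv
mkdir build && cd build
cmake -D WITH_OPENNI=ON ..
make
\end{lstlisting}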

\texttt{VideoCapture} can retrieve the following Kinect data:
\begin{lstlisting}
a.) data given from the depth generator:
    OPENNI_DEPTH_MAP         - depth values in mm (CV_16UC1)
    OPENNI_POINT_CLOUD_MAP   - XYZ in meters (CV_32FC3)
    OPENNI_DISPARITY_MAP     - disparity in pixels (CV_8UC1)
    OPENNI_DISPARITY_MAP_32F - disparity in pixels (CV_32FC1)
    OPENNI_VALID_DEPTH_MASK  - mask of valid pixels (not occluded,
                               not shaded, etc.) (CV_8UC1)
b.) data given from the RGB image generator:
    OPENNI_BGR_IMAGE         - color image (CV_8UC3)
    OPENNI_GRAY_IMAGE        - gray image (CV_8UC1)
\end{lstlisting}
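As an illustration of how these map types correspond to \texttt{Mat} element types, the following is a minimal sketch of reading one XYZ value from a retrieved point cloud map. It assumes a capture object opened as in the snippets below and that the constant is visible under the name used in this listing (depending on the OpenCV version it may carry a \texttt{CV\_CAP\_} prefix):
\begin{lstlisting}
Mat pointCloudMap;
capture.retrieve( pointCloudMap, OPENNI_POINT_CLOUD_MAP ); // CV_32FC3

// Each element holds the (X, Y, Z) position of that pixel in meters.
Point3f p = pointCloudMap.at<Point3f>( pointCloudMap.rows/2,
                                       pointCloudMap.cols/2 );
\end{lstlisting}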

To get the depth map from Kinect the user can use \texttt{VideoCapture::operator >>}, e.g.
\begin{lstlisting}
VideoCapture capture(0); // or CV_CAP_OPENNI
for(;;)
{
    Mat depthMap;

    capture >> depthMap;

    if( waitKey( 30 ) >= 0 )
        break;
}
\end{lstlisting}
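The retrieved depth map holds raw 16-bit values in millimeters and is not directly suitable for display. A minimal sketch of visualizing it, to be placed inside the loop above before the \texttt{waitKey} call (the scale factor is an arbitrary choice for viewing, not part of the API):
\begin{lstlisting}
Mat show;
depthMap.convertTo( show, CV_8UC1, 0.05 ); // compress mm values into 8-bit range
imshow( "depth map", show );
\end{lstlisting}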
To get several Kinect maps the user should use \texttt{VideoCapture::grab} + \texttt{VideoCapture::retrieve}, e.g.
\begin{lstlisting}
VideoCapture capture(0); // or CV_CAP_OPENNI
for(;;)
{
    Mat depthMap;
    Mat bgrImage;

    capture.grab();

    capture.retrieve( depthMap, OPENNI_DEPTH_MAP );
    capture.retrieve( bgrImage, OPENNI_BGR_IMAGE );

    if( waitKey( 30 ) >= 0 )
        break;
}
\end{lstlisting}
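It is also reasonable (not shown in the snippets above) to check inside \texttt{main} that the device was actually opened before entering the grab loop; a minimal sketch:
\begin{lstlisting}
VideoCapture capture( CV_CAP_OPENNI );
if( !capture.isOpened() )
{
    cout << "Can not open a Kinect capture object." << endl;
    return -1;
}
\end{lstlisting}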

For more information see the example \texttt{kinect\_maps.cpp} in the samples folder.

\fi