Add tutorial on how to use Orbbec Astra 3D cameras

pull/18854/head
Igor Murzov 4 years ago
parent e8536c4a0e
commit f8c7862f69
  1. 2
      doc/tutorials/videoio/intelperc.markdown
  2. 2
      doc/tutorials/videoio/kinect_openni.markdown
  3. BIN
      doc/tutorials/videoio/orbbec-astra/images/astra_color.jpg
  4. BIN
      doc/tutorials/videoio/orbbec-astra/images/astra_depth.png
  5. 150
      doc/tutorials/videoio/orbbec-astra/orbbec_astra.markdown
  6. 4
      doc/tutorials/videoio/table_of_content_videoio.markdown
  7. 195
      samples/cpp/tutorial_code/videoio/orbbec_astra/orbbec_astra.cpp
  8. 2
      samples/cpp/videocapture_openni.cpp

@@ -1,7 +1,7 @@
Using Creative Senz3D and other Intel RealSense SDK compatible depth sensors {#tutorial_intelperc}
=======================================================================================
-@prev_tutorial{tutorial_kinect_openni}
+@prev_tutorial{tutorial_orbbec_astra}
**Note**: This tutorial is partially obsolete since PerC SDK has been replaced with RealSense SDK

@@ -2,7 +2,7 @@ Using Kinect and other OpenNI compatible depth sensors {#tutorial_kinect_openni}
======================================================
@prev_tutorial{tutorial_video_write}
-@next_tutorial{tutorial_intelperc}
+@next_tutorial{tutorial_orbbec_astra}
Depth sensors compatible with OpenNI (Kinect, XtionPRO, ...) are supported through VideoCapture

(new image: doc/tutorials/videoio/orbbec-astra/images/astra_color.jpg, 135 KiB — binary file not shown)

(new image: doc/tutorials/videoio/orbbec-astra/images/astra_depth.png, 29 KiB — binary file not shown)

@@ -0,0 +1,150 @@
Using Orbbec Astra 3D cameras {#tutorial_orbbec_astra}
======================================================
@prev_tutorial{tutorial_kinect_openni}
@next_tutorial{tutorial_intelperc}
### Introduction
This tutorial is devoted to the Astra Series of Orbbec 3D cameras (https://orbbec3d.com/product-astra-pro/).
These cameras have a depth sensor in addition to a common color sensor. The depth sensor can be read using
the OpenNI interface with @ref cv::VideoCapture class. The video stream is provided through the regular camera
interface.
### Installation Instructions
In order to use a depth sensor with OpenCV you should do the following steps:
-# Download the latest version of Orbbec OpenNI SDK (from here <https://orbbec3d.com/develop/>).
Unzip the archive, choose the build according to your operating system and follow installation
steps provided in the Readme file. For instance, if you use 64bit GNU/Linux run:
@code{.bash}
$ cd Linux/OpenNI-Linux-x64-2.3.0.63/
$ sudo ./install.sh
@endcode
When you are done with the installation, make sure to replug your device for udev rules to take
effect. The camera should now work as a general camera device. Note that your current user should
belong to group `video` to have access to the camera. Also, make sure to source `OpenNIDevEnvironment` file:
@code{.bash}
$ source OpenNIDevEnvironment
@endcode
-# Run the following commands to verify that the OpenNI library and header files can be found. You should see
something similar in your terminal:
@code{.bash}
$ echo $OPENNI2_INCLUDE
/home/user/OpenNI_2.3.0.63/Linux/OpenNI-Linux-x64-2.3.0.63/Include
$ echo $OPENNI2_REDIST
/home/user/OpenNI_2.3.0.63/Linux/OpenNI-Linux-x64-2.3.0.63/Redist
@endcode
If the above two variables are empty, then you need to source `OpenNIDevEnvironment` again. Now you can
configure OpenCV with OpenNI support enabled by setting the `WITH_OPENNI2` flag in CMake.
You may also want to enable the `BUILD_EXAMPLES` flag to get a code sample working with your Astra camera.
Run the following commands in the directory containing OpenCV source code to enable OpenNI support:
@code{.bash}
$ mkdir build
$ cd build
$ cmake -DWITH_OPENNI2=ON ..
@endcode
If the OpenNI library is found, OpenCV will be built with OpenNI2 support. You can see the status of OpenNI2
support in the CMake log:
@code{.text}
-- Video I/O:
-- DC1394: YES (2.2.6)
-- FFMPEG: YES
-- avcodec: YES (58.91.100)
-- avformat: YES (58.45.100)
-- avutil: YES (56.51.100)
-- swscale: YES (5.7.100)
-- avresample: NO
-- GStreamer: YES (1.18.1)
-- OpenNI2: YES (2.3.0)
-- v4l/v4l2: YES (linux/videodev2.h)
@endcode
-# Build OpenCV:
@code{.bash}
$ make
@endcode
### Code
To get both depth and color frames, two @ref cv::VideoCapture objects should be created:
@snippetlineno samples/cpp/tutorial_code/videoio/orbbec_astra/orbbec_astra.cpp Open streams
The first object uses the regular Video4Linux2 interface to access the color sensor. The second one
uses the OpenNI2 API to retrieve depth data.
Before using the created VideoCapture objects you may want to set up stream parameters by setting the
objects' properties. The most important parameters are frame width, frame height, and FPS:
@snippetlineno samples/cpp/tutorial_code/videoio/orbbec_astra/orbbec_astra.cpp Setup streams
To set and get the properties of the sensor data generators, use the @ref cv::VideoCapture::set and
@ref cv::VideoCapture::get methods respectively, e.g.:
@snippetlineno samples/cpp/tutorial_code/videoio/orbbec_astra/orbbec_astra.cpp Get properties
The following properties of cameras available through OpenNI interfaces are supported for the depth
generator:
- @ref cv::CAP_PROP_FRAME_WIDTH -- Frame width in pixels.
- @ref cv::CAP_PROP_FRAME_HEIGHT -- Frame height in pixels.
- @ref cv::CAP_PROP_FPS -- Frame rate in FPS.
- @ref cv::CAP_PROP_OPENNI_REGISTRATION -- Flag that enables remapping of the depth map to the image map
by changing the depth generator's viewpoint (if the flag is "on") or resets the viewpoint to its
normal one (if the flag is "off"). The registration process produces pixel-aligned images:
every pixel in the color image corresponds to a pixel in the depth image.
- @ref cv::CAP_PROP_OPENNI2_MIRROR -- Flag to enable or disable mirroring for this stream. Set to 0
to disable mirroring.
The following properties are available for reading only:
- @ref cv::CAP_PROP_OPENNI_FRAME_MAX_DEPTH -- A maximum supported depth of the camera in mm.
- @ref cv::CAP_PROP_OPENNI_BASELINE -- Baseline value in mm.
After the VideoCapture objects are set up you can start reading frames from them.
@note
OpenCV's VideoCapture provides a synchronous API, so you have to grab frames in a new thread
to avoid one stream blocking while another stream is being read. VideoCapture is not a
thread-safe class, so you need to be careful to avoid any possible deadlocks or data races.
Example implementation that gets frames from each sensor in a new thread and stores them
in a list along with their timestamps:
@snippetlineno samples/cpp/tutorial_code/videoio/orbbec_astra/orbbec_astra.cpp Read streams
VideoCapture can retrieve the following data:
-# data given from the depth generator:
- @ref cv::CAP_OPENNI_DEPTH_MAP - depth values in mm (CV_16UC1)
- @ref cv::CAP_OPENNI_POINT_CLOUD_MAP - XYZ in meters (CV_32FC3)
- @ref cv::CAP_OPENNI_DISPARITY_MAP - disparity in pixels (CV_8UC1)
- @ref cv::CAP_OPENNI_DISPARITY_MAP_32F - disparity in pixels (CV_32FC1)
- @ref cv::CAP_OPENNI_VALID_DEPTH_MASK - mask of valid pixels (not occluded, not shaded, etc.)
(CV_8UC1)
-# data given from the color sensor is a regular BGR image (CV_8UC3).
When new data is available, a reading thread notifies the main thread. Frames are stored in an
ordered list, with the oldest frame at the front:
@snippetlineno samples/cpp/tutorial_code/videoio/orbbec_astra/orbbec_astra.cpp Show color frame
Depth frames can be picked the same way from the `depthFrames` list.
After that, you'll have two frames: one containing color information and another one -- depth
information. In the sample images below you can see the color frame and the depth frame showing
the same scene. Looking at the color frame it's hard to distinguish plant leaves from leaves painted
on a wall, but the depth data makes it easy.
![Color frame](images/astra_color.jpg)
![Depth frame](images/astra_depth.png)
The complete implementation can be found in
[orbbec_astra.cpp](https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/videoio/orbbec_astra/orbbec_astra.cpp)
in the `samples/cpp/tutorial_code/videoio` directory.

@@ -26,6 +26,10 @@ This section contains tutorials about how to read/save your video files.
*Languages:* C++
- @subpage tutorial_orbbec_astra
*Languages:* C++
- @subpage tutorial_intelperc
*Languages:* C++

@@ -0,0 +1,195 @@
#include <opencv2/videoio/videoio.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>

#include <list>
#include <iostream>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <atomic>

using namespace cv;
using std::cout;
using std::cerr;
using std::endl;

// Stores frames along with their timestamps
struct Frame
{
    int64 timestamp;
    Mat frame;
};

int main()
{
    //! [Open streams]
    // Open color stream
    VideoCapture colorStream(CAP_V4L2);
    // Open depth stream
    VideoCapture depthStream(CAP_OPENNI2_ASTRA);
    //! [Open streams]

    // Check that stream has opened
    if (!colorStream.isOpened())
    {
        cerr << "ERROR: Unable to open color stream" << endl;
        return 1;
    }

    // Check that stream has opened
    if (!depthStream.isOpened())
    {
        cerr << "ERROR: Unable to open depth stream" << endl;
        return 1;
    }

    //! [Setup streams]
    // Set color and depth stream parameters
    colorStream.set(CAP_PROP_FRAME_WIDTH, 640);
    colorStream.set(CAP_PROP_FRAME_HEIGHT, 480);
    depthStream.set(CAP_PROP_FRAME_WIDTH, 640);
    depthStream.set(CAP_PROP_FRAME_HEIGHT, 480);
    depthStream.set(CAP_PROP_OPENNI2_MIRROR, 0);
    //! [Setup streams]

    // Print color stream parameters
    cout << "Color stream: "
         << colorStream.get(CAP_PROP_FRAME_WIDTH) << "x" << colorStream.get(CAP_PROP_FRAME_HEIGHT)
         << " @" << colorStream.get(CAP_PROP_FPS) << " fps" << endl;

    //! [Get properties]
    // Print depth stream parameters
    cout << "Depth stream: "
         << depthStream.get(CAP_PROP_FRAME_WIDTH) << "x" << depthStream.get(CAP_PROP_FRAME_HEIGHT)
         << " @" << depthStream.get(CAP_PROP_FPS) << " fps" << endl;
    //! [Get properties]

    //! [Read streams]
    // Create two lists to store frames
    std::list<Frame> depthFrames, colorFrames;
    std::mutex depthFramesMtx, colorFramesMtx;
    const std::size_t maxFrames = 64;

    // Synchronization objects
    std::mutex mtx;
    std::condition_variable dataReady;
    std::atomic<bool> isFinish(false);

    // Start depth reading thread
    std::thread depthReader([&]
    {
        while (!isFinish)
        {
            // Grab and decode new frame
            if (depthStream.grab())
            {
                Frame f;
                f.timestamp = cv::getTickCount();
                depthStream.retrieve(f.frame, CAP_OPENNI_DEPTH_MAP);
                //depthStream.retrieve(f.frame, CAP_OPENNI_DISPARITY_MAP);
                //depthStream.retrieve(f.frame, CAP_OPENNI_IR_IMAGE);
                if (f.frame.empty())
                {
                    cerr << "ERROR: Failed to decode frame from depth stream" << endl;
                    break;
                }

                {
                    std::lock_guard<std::mutex> lk(depthFramesMtx);
                    if (depthFrames.size() >= maxFrames)
                        depthFrames.pop_front();
                    depthFrames.push_back(f);
                }
                dataReady.notify_one();
            }
        }
    });

    // Start color reading thread
    std::thread colorReader([&]
    {
        while (!isFinish)
        {
            // Grab and decode new frame
            if (colorStream.grab())
            {
                Frame f;
                f.timestamp = cv::getTickCount();
                colorStream.retrieve(f.frame);
                if (f.frame.empty())
                {
                    cerr << "ERROR: Failed to decode frame from color stream" << endl;
                    break;
                }

                {
                    std::lock_guard<std::mutex> lk(colorFramesMtx);
                    if (colorFrames.size() >= maxFrames)
                        colorFrames.pop_front();
                    colorFrames.push_back(f);
                }
                dataReady.notify_one();
            }
        }
    });
    //! [Read streams]

    while (true)
    {
        std::unique_lock<std::mutex> lk(mtx);
        while (depthFrames.empty() && colorFrames.empty())
            dataReady.wait(lk);

        depthFramesMtx.lock();
        if (depthFrames.empty())
        {
            depthFramesMtx.unlock();
        }
        else
        {
            // Get a frame from the list
            Mat depthMap = depthFrames.front().frame;
            depthFrames.pop_front();
            depthFramesMtx.unlock();

            // Show depth frame
            Mat d8, dColor;
            depthMap.convertTo(d8, CV_8U, 255.0 / 2500);
            applyColorMap(d8, dColor, COLORMAP_OCEAN);
            imshow("Depth (colored)", dColor);
        }

        //! [Show color frame]
        colorFramesMtx.lock();
        if (colorFrames.empty())
        {
            colorFramesMtx.unlock();
        }
        else
        {
            // Get a frame from the list
            Mat colorFrame = colorFrames.front().frame;
            colorFrames.pop_front();
            colorFramesMtx.unlock();

            // Show color frame
            imshow("Color", colorFrame);
        }
        //! [Show color frame]

        // Exit on Esc key press
        int key = waitKey(1);
        if (key == 27) // ESC
            break;
    }

    isFinish = true;
    depthReader.join();
    colorReader.join();
    return 0;
}

@@ -61,7 +61,7 @@ static void printCommandLineParams()
cout << "-fmd= Fixed max disparity? (0 or 1; 0 by default) Ignored if disparity map is not colorized (-cd 0)." << endl;
cout << "-mode= image mode: resolution and fps, supported three values: 0 - CAP_OPENNI_VGA_30HZ, 1 - CAP_OPENNI_SXGA_15HZ," << endl;
cout << " 2 - CAP_OPENNI_SXGA_30HZ (0 by default). Ignored if rgb image or gray image are not selected to show." << endl;
-cout << "-m= Mask to set which output images are need. It is a string of size 5. Each element of this is '0' or '1' and" << endl;
+cout << "-m= Mask to set which output images are need. It is a string of size 6. Each element of this is '0' or '1' and" << endl;
cout << " determine: is depth map, disparity map, valid pixels mask, rgb image, gray image need or not (correspondently), ir image" << endl ;
cout << " By default -m=010100 i.e. disparity map and rgb image will be shown." << endl ;
cout << "-r= Filename of .oni video file. The data will grabbed from it." << endl ;
