- Refreshed images, links, and OpenCV API usage.
- Added more details to the Android MobileNet sample.
- Moved the tutorial to its new location and re-linked related tutorials.
@@ -1,107 +1 @@
# How to run deep networks on Android device {#tutorial_dnn_android}

@tableofcontents

@prev_tutorial{tutorial_dnn_openvino}
@next_tutorial{tutorial_dnn_yolo}

|    |    |
| -: | :- |
| Original author | Dmitry Kurtaev |
| Compatibility | OpenCV >= 3.3 |

## Introduction
In this tutorial you will learn how to run deep learning networks on an Android device
using the OpenCV deep learning module.

The tutorial was written for the following versions of the corresponding software:
- Android Studio 2.3.3
- OpenCV 3.3.0+

## Requirements

- Download and install Android Studio from https://developer.android.com/studio.

- Get the latest pre-built OpenCV for Android release from https://github.com/opencv/opencv/releases and unpack it (for example, `opencv-4.X.Y-android-sdk.zip`).

- Download the MobileNet object detection model from https://github.com/chuanqi305/MobileNet-SSD. We need the configuration file `MobileNetSSD_deploy.prototxt` and the weights `MobileNetSSD_deploy.caffemodel`.

## Create an empty Android Studio project
- Open Android Studio. Start a new project. Let's call it `opencv_mobilenet`.
![](1_start_new_project.png)

- Keep the default target settings.
![](2_start_new_project.png)

- Use the "Empty Activity" template. Name the activity `MainActivity` with a
corresponding layout `activity_main`.
![](3_start_new_project.png)

![](4_start_new_project.png)

- Wait until the project is created. Go to `Run->Edit Configurations`.
Choose `USB Device` as the target device for runs.
![](5_setup.png)
Plug in your device and run the project. It should install and launch
successfully before we go further.
@note Read @ref tutorial_android_dev_intro in case of problems.

![](6_run_empty_project.png)

## Add OpenCV dependency

- Go to `File->New->Import module` and provide a path to `unpacked_OpenCV_package/sdk/java`. The module name is detected automatically.
Disable all features that Android Studio suggests in the next window.
![](7_import_module.png)

![](8_import_module.png)

- Open two files:

  1. `AndroidStudioProjects/opencv_mobilenet/app/build.gradle`

  2. `AndroidStudioProjects/opencv_mobilenet/openCVLibrary330/build.gradle`

  Copy both `compileSdkVersion` and `buildToolsVersion` from the first file to
  the second one.

  `compileSdkVersion 14` -> `compileSdkVersion 26`

  `buildToolsVersion "25.0.0"` -> `buildToolsVersion "26.0.1"`

- Build the project. There should be no errors at this point.

- Go to `File->Project Structure`. Add the OpenCV module dependency.
![](9_opencv_dependency.png)

![](10_opencv_dependency.png)

- Install the appropriate OpenCV Manager from `unpacked_OpenCV_package/apk`
onto the target device once:
@code
adb install OpenCV_3.3.0_Manager_3.30_armeabi-v7a.apk
@endcode

- Congratulations! We are now ready to make a sample using OpenCV.

## Make a sample
Our sample takes pictures from a camera, forwards them to a deep network, and
receives a set of rectangles, class identifiers, and confidence values in the
`[0, 1]` range.

- First of all, we need to add the necessary widget which displays processed
frames. Modify `app/src/main/res/layout/activity_main.xml`:
@include android/mobilenet-objdetect/res/layout/activity_main.xml

- Put the downloaded `MobileNetSSD_deploy.prototxt` and `MobileNetSSD_deploy.caffemodel`
into the `app/build/intermediates/assets/debug` folder.

- Modify `/app/src/main/AndroidManifest.xml` to enable full-screen mode, set
the correct screen orientation, and allow the app to use the camera.
@include android/mobilenet-objdetect/gradle/AndroidManifest.xml

- Replace the content of `app/src/main/java/org/opencv/samples/opencv_mobilenet/MainActivity.java`:
@include android/mobilenet-objdetect/src/org/opencv/samples/opencv_mobilenet/MainActivity.java
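  For readers skimming this step without the full `MainActivity.java` source, here is a minimal
  sketch of the model-loading part only, assuming the two Caffe files were bundled as described
  above and copied to a location readable as a regular file. The class, helper name and
  asset-copy step are illustrative, not the sample's exact code:
@code{.java}
// Illustrative sketch: load a Caffe model by file path with the OpenCV Java API.
import android.content.Context;
import org.opencv.dnn.Dnn;
import org.opencv.dnn.Net;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class ModelLoader {
    // Copies an asset to the app cache directory and returns its absolute path,
    // so that OpenCV can read it as a regular file.
    static String assetToCachePath(Context context, String assetName) throws IOException {
        File outFile = new File(context.getCacheDir(), assetName);
        try (InputStream in = context.getAssets().open(assetName);
             FileOutputStream out = new FileOutputStream(outFile)) {
            byte[] buffer = new byte[4096];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
        return outFile.getAbsolutePath();
    }

    static Net loadMobileNetSSD(Context context) throws IOException {
        String proto   = assetToCachePath(context, "MobileNetSSD_deploy.prototxt");
        String weights = assetToCachePath(context, "MobileNetSSD_deploy.caffemodel");
        return Dnn.readNetFromCaffe(proto, weights);
    }
}
@endcode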

- Launch the application and have fun!
![](11_demo.jpg)

The page was moved to @ref tutorial_android_dnn_intro
@@ -0,0 +1,85 @@
# How to run deep networks on Android device {#tutorial_android_dnn_intro}

@tableofcontents

@prev_tutorial{tutorial_dev_with_OCV_on_Android}
@next_tutorial{tutorial_android_ocl_intro}

@see @ref tutorial_table_of_content_dnn

|    |    |
| -: | :- |
| Original author | Dmitry Kurtaev |
| Compatibility | OpenCV >= 4.9 |

## Introduction
In this tutorial you will learn how to run deep learning networks on an Android device
using the OpenCV deep learning module.
The tutorial was written for Android Studio 2022.2.1.

## Requirements

- Download and install Android Studio from https://developer.android.com/studio.

- Get the latest pre-built OpenCV for Android release from https://github.com/opencv/opencv/releases
and unpack it (for example, `opencv-4.X.Y-android-sdk.zip`).

- Download the MobileNet object detection model from https://github.com/chuanqi305/MobileNet-SSD.
The configuration file `MobileNetSSD_deploy.prototxt` and the model weights `MobileNetSSD_deploy.caffemodel`
are required.

## Create an empty Android Studio project and add OpenCV dependency

Use the @ref tutorial_dev_with_OCV_on_Android tutorial to initialize your project and add OpenCV.

## Make an app

Our sample takes pictures from a camera, forwards them to a deep network, and
receives a set of rectangles, class identifiers, and confidence values in the [0, 1] range.

- First of all, we need to add the necessary widget which displays processed
frames. Modify `app/src/main/res/layout/activity_main.xml`:
@include android/mobilenet-objdetect/res/layout/activity_main.xml

- Modify `/app/src/main/AndroidManifest.xml` to enable full-screen mode, set
the correct screen orientation, and allow the app to use the camera.
@code{.xml}
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android">

    <application
        android:label="@string/app_name">
@endcode
@snippet android/mobilenet-objdetect/gradle/AndroidManifest.xml mobilenet_tutorial
|
||||
- Replace content of `app/src/main/java/com/example/myapplication/MainActivity.java` and set a custom package name if necessary: |
||||
|
||||
@snippet android/mobilenet-objdetect/src/org/opencv/samples/opencv_mobilenet/MainActivity.java mobilenet_tutorial_package |
||||
@snippet android/mobilenet-objdetect/src/org/opencv/samples/opencv_mobilenet/MainActivity.java mobilenet_tutorial |
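
  For orientation, here is a condensed, illustrative skeleton of the activity structure such a
  camera-processing sample follows. Class, field and resource names are assumptions, and camera
  permission handling, lifecycle handling (disabling the view in `onPause`), model loading and
  drawing are left out; the snippets in this tutorial show the complete sample:
@code{.java}
// Condensed, illustrative skeleton of a camera-processing activity.
import android.app.Activity;
import android.os.Bundle;

import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener2;
import org.opencv.android.OpenCVLoader;
import org.opencv.core.Mat;

public class MainActivity extends Activity implements CvCameraViewListener2 {
    private CameraBridgeViewBase mCameraView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        OpenCVLoader.initDebug();                     // one way to load the native OpenCV library
        setContentView(R.layout.activity_main);
        mCameraView = findViewById(R.id.CameraView);  // id as declared in your activity_main.xml
        mCameraView.setCvCameraViewListener(this);
        mCameraView.enableView();                     // assumes the CAMERA permission is already granted
    }

    @Override public void onCameraViewStarted(int width, int height) { /* allocate buffers, load the model */ }
    @Override public void onCameraViewStopped()                      { /* release resources */ }

    @Override
    public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
        Mat frame = inputFrame.rgba();                // current camera frame as RGBA
        // run the network on the frame and draw detections here
        return frame;                                 // the returned Mat is rendered on screen
    }
}
@endcode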

- Put the downloaded `deploy.prototxt` and `mobilenet_iter_73000.caffemodel`
into the `app/src/main/res/raw` folder. The OpenCV DNN module is mainly designed to load ML and DNN models
from files. Modern Android does not allow this without extra permissions, but provides a Java API to load
bytes from resources. The sample uses an alternative DNN API that initializes a model from an in-memory
buffer rather than a file. The following function reads the model file from resources and converts it to a
`MatOfByte` (an analog of `std::vector<char>` in the C++ world) object suitable for the OpenCV Java API:

@snippet android/mobilenet-objdetect/src/org/opencv/samples/opencv_mobilenet/MainActivity.java mobilenet_tutorial_resource
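
  As a minimal sketch of the same idea outside the sample's context (the class and method names
  here are illustrative), a raw resource can be read completely into memory and wrapped in a
  `MatOfByte` like this:
@code{.java}
// Illustrative sketch: read a raw resource into memory and wrap it in a MatOfByte.
import android.content.Context;

import org.opencv.core.MatOfByte;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public final class RawResources {
    public static MatOfByte readToMatOfByte(Context context, int rawResourceId) throws IOException {
        try (InputStream in = context.getResources().openRawResource(rawResourceId);
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] chunk = new byte[4096];
            int read;
            while ((read = in.read(chunk)) != -1) {
                out.write(chunk, 0, read);
            }
            // MatOfByte(byte...) copies the bytes into a 1xN OpenCV matrix
            return new MatOfByte(out.toByteArray());
        }
    }
}
@endcode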

And then the network initialization is done with the following lines:

@snippet android/mobilenet-objdetect/src/org/opencv/samples/opencv_mobilenet/MainActivity.java init_model_from_memory
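
In other words, once both the prototxt and the caffemodel are available as in-memory buffers, the
network can be constructed from them. The sketch below assumes the hypothetical
`RawResources.readToMatOfByte` helper from the previous sketch and hypothetical `R.raw` resource ids;
`Dnn.readNetFromCaffe` also accepts two `MatOfByte` buffers in place of file paths:
@code{.java}
// Illustrative sketch: build the network from in-memory buffers instead of file paths.
import android.content.Context;

import org.opencv.core.MatOfByte;
import org.opencv.dnn.Dnn;
import org.opencv.dnn.Net;

import java.io.IOException;

public final class NetFactory {
    public static Net createFromRawResources(Context context) throws IOException {
        MatOfByte proto   = RawResources.readToMatOfByte(context, R.raw.mobilenet_ssd_prototxt);   // hypothetical id
        MatOfByte weights = RawResources.readToMatOfByte(context, R.raw.mobilenet_ssd_caffemodel); // hypothetical id
        return Dnn.readNetFromCaffe(proto, weights);
    }
}
@endcode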

See also [Android documentation on resources](https://developer.android.com/guide/topics/resources/providing-resources.html)

- Take a look at how the DNN model input is prepared and how the inference result is interpreted:

@snippet android/mobilenet-objdetect/src/org/opencv/samples/opencv_mobilenet/MainActivity.java mobilenet_handle_frame

`Dnn.blobFromImage` converts the camera frame to the neural network input tensor; resizing and
statistical normalization are applied. Each row of the network output tensor describes one detected
object in the following order: image (batch) id, class id, confidence in the [0, 1] range, and the
left, top, right, bottom box coordinates. All coordinates are in the [0, 1] range and should be
scaled to the image size before rendering.
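
Putting these pieces together, per-frame processing could look roughly like the following sketch.
The preprocessing constants follow common MobileNet-SSD settings, while the confidence threshold,
colors and label text are illustrative choices rather than values taken from the sample:
@code{.java}
// Illustrative per-frame processing: preprocessing, inference and drawing.
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.dnn.Dnn;
import org.opencv.dnn.Net;
import org.opencv.imgproc.Imgproc;

public final class Detector {
    private static final double IN_SCALE  = 0.007843;        // 1/127.5, common MobileNet-SSD scaling
    private static final Size   IN_SIZE   = new Size(300, 300);
    private static final double MEAN_VAL  = 127.5;
    private static final double THRESHOLD = 0.5;              // illustrative confidence threshold

    // 'net' is assumed to be an already initialized network; 'rgbaFrame' comes from onCameraFrame.
    public static Mat detectAndDraw(Net net, Mat rgbaFrame) {
        Mat rgb = new Mat();
        Imgproc.cvtColor(rgbaFrame, rgb, Imgproc.COLOR_RGBA2RGB);

        // Resize to 300x300, scale pixel values and subtract the mean.
        Mat blob = Dnn.blobFromImage(rgb, IN_SCALE, IN_SIZE,
                                     new Scalar(MEAN_VAL, MEAN_VAL, MEAN_VAL), false, false);
        net.setInput(blob);

        // The detection output is a 1x1xNx7 blob; reshape it to N rows of 7 values:
        // [image id, class id, confidence, left, top, right, bottom].
        Mat detections = net.forward();
        detections = detections.reshape(1, (int) detections.total() / 7);

        int cols = rgbaFrame.cols();
        int rows = rgbaFrame.rows();
        for (int i = 0; i < detections.rows(); ++i) {
            double confidence = detections.get(i, 2)[0];
            if (confidence < THRESHOLD) continue;

            int classId = (int) detections.get(i, 1)[0];
            // Box coordinates are relative and must be scaled to the frame size.
            int left   = (int) (detections.get(i, 3)[0] * cols);
            int top    = (int) (detections.get(i, 4)[0] * rows);
            int right  = (int) (detections.get(i, 5)[0] * cols);
            int bottom = (int) (detections.get(i, 6)[0] * rows);

            Imgproc.rectangle(rgbaFrame, new Point(left, top), new Point(right, bottom),
                              new Scalar(0, 255, 0), 2);
            Imgproc.putText(rgbaFrame, "class " + classId + String.format(" %.2f", confidence),
                            new Point(left, Math.max(top - 5, 0)),
                            Imgproc.FONT_HERSHEY_SIMPLEX, 0.6, new Scalar(0, 255, 0), 2);
        }
        return rgbaFrame;
    }
}
@endcode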

- Launch the application and have fun!
![](images/11_demo.jpg)