# YOLOv8 OpenVINO Inference in C++ 🦾
Welcome to the YOLOv8 OpenVINO Inference example in C++! This guide will help you get started with leveraging powerful YOLOv8 models using the OpenVINO and OpenCV APIs in your C++ projects. Whether you're looking to enhance performance or add flexibility to your applications, this example has you covered.
## 🌟 Features

- 🚀 **Model Format Support**: Compatible with `ONNX` and `OpenVINO IR` formats.
- ⚡ **Precision Options**: Run models in `FP32`, `FP16`, and `INT8` precisions.
- 🔄 **Dynamic Shape Loading**: Easily handle models with dynamic input shapes (see the sketch after this list).
## 📋 Dependencies

To ensure smooth execution, please make sure you have the following dependencies installed:

| Dependency | Version  |
| ---------- | -------- |
| OpenVINO   | >=2023.3 |
| OpenCV     | >=4.5.0  |
| C++        | >=14     |
| CMake      | >=3.12.0 |
## ⚙️ Build Instructions

Follow these steps to build the project:

1. Clone the repository:

   ```bash
   git clone https://github.com/ultralytics/ultralytics.git
   cd ultralytics/YOLOv8-OpenVINO-CPP-Inference
   ```

2. Create a build directory and compile the project:

   ```bash
   mkdir build
   cd build
   cmake ..
   make
   ```
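If CMake cannot locate OpenVINO or OpenCV automatically, you can point it at their config packages explicitly. The paths below are placeholders for a typical archive install of OpenVINO and a local OpenCV build; adjust them to your setup.

```bash
# Placeholder paths; point CMake at the directories containing
# OpenVINOConfig.cmake / OpenCVConfig.cmake if find_package fails.
cmake .. -DOpenVINO_DIR=/opt/intel/openvino/runtime/cmake \
         -DOpenCV_DIR=/usr/local/lib/cmake/opencv4
make
```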
## 🛠️ Usage

Once built, you can run inference on an image using the following command:

```bash
./detect <model_path.{onnx, xml}> <image_path.jpg>
```
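For orientation, the following is a minimal sketch of the OpenVINO and OpenCV calls that such a detector typically wraps. It is not the repository's `inference.cc`; the fixed 640x640 input size, the CPU device, and the raw output handling are simplifying assumptions.

```cpp
// Minimal sketch of running a YOLOv8 OpenVINO/ONNX model on one image.
// Not the repository's inference.cc; the 640x640 input size and CPU device
// are assumptions for illustration.
#include <iostream>
#include <opencv2/dnn.hpp>
#include <opencv2/opencv.hpp>
#include <openvino/openvino.hpp>

int main(int argc, char** argv) {
    if (argc != 3) {
        std::cerr << "Usage: " << argv[0] << " <model_path.{onnx,xml}> <image_path.jpg>\n";
        return 1;
    }

    // OpenVINO reads both IR (.xml) and ONNX models directly.
    ov::Core core;
    ov::CompiledModel compiled = core.compile_model(argv[1], "CPU");
    ov::InferRequest request = compiled.create_infer_request();

    // Resize to the assumed export size, scale pixels to [0, 1],
    // swap BGR -> RGB, and pack as NCHW.
    cv::Mat image = cv::imread(argv[2]);
    cv::Mat blob = cv::dnn::blobFromImage(image, 1.0 / 255.0, cv::Size(640, 640),
                                          cv::Scalar(), /*swapRB=*/true, /*crop=*/false);

    ov::Tensor input(ov::element::f32, {1, 3, 640, 640}, blob.ptr<float>());
    request.set_input_tensor(input);
    request.infer();

    // For the standard 80-class model at 640x640 the raw output is [1, 84, 8400]:
    // 4 box coordinates plus 80 class scores per candidate, before NMS.
    ov::Tensor output = request.get_output_tensor();
    std::cout << "Output shape: " << output.get_shape() << std::endl;
    return 0;
}
```

The raw output still has to be transposed, score-thresholded, and passed through non-maximum suppression to produce final detections; the full example handles those steps as part of its postprocessing.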
## 🔄 Exporting YOLOv8 Models

To use your YOLOv8 model with OpenVINO, you need to export it first. Use the command below to export the model:

```bash
yolo export model=yolov8s.pt imgsz=640 format=openvino
```
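Related to the precision options listed above, half-precision export is available through the standard Ultralytics `half` argument, as sketched below; see the Ultralytics export documentation for INT8 quantization options.

```bash
# 'half=True' requests FP16 weights in the exported OpenVINO IR.
yolo export model=yolov8s.pt imgsz=640 format=openvino half=True
```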
## 📸 Screenshots

### Running Using OpenVINO Model

### Running Using ONNX Model
## ❤️ Contributions
We hope this example helps you integrate YOLOv8 with OpenVINO and OpenCV into your C++ projects effortlessly. Happy coding! 🚀