[Example] YOLO-Series(v5-11) ONNXRuntime Rust (#17311)

Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Files changed:

1. examples/README.md (1 line changed)
2. examples/YOLO-Series-ONNXRuntime-Rust/Cargo.toml (12 lines changed)
3. examples/YOLO-Series-ONNXRuntime-Rust/README.md (94 lines changed)
4. examples/YOLO-Series-ONNXRuntime-Rust/src/main.rs (236 lines changed)
5. examples/YOLOv8-ONNXRuntime-Rust/Cargo.toml (2 lines changed)
6. examples/YOLOv8-ONNXRuntime-Rust/README.md (27 lines changed)
7. examples/YOLOv8-ONNXRuntime-Rust/src/lib.rs (13 lines changed)
8. examples/YOLOv8-ONNXRuntime-Rust/src/model.rs (6 lines changed)

@@ -21,6 +21,7 @@ This directory features a collection of real-world applications and walkthroughs
| [YOLOv8 OpenCV INT8 TFLite Python](./YOLOv8-TFLite-Python) | Python | [Wamiq Raza](https://github.com/wamiqraza) |
| [YOLOv8 All Tasks ONNXRuntime Rust](./YOLOv8-ONNXRuntime-Rust) | Rust/ONNXRuntime | [jamjamjon](https://github.com/jamjamjon) |
| [YOLOv8 OpenVINO CPP](./YOLOv8-OpenVINO-CPP-Inference) | C++/OpenVINO | [Erlangga Yudi Pradana](https://github.com/rlggyp) |
+| [YOLOv5-YOLO11 ONNXRuntime Rust](./YOLO-Series-ONNXRuntime-Rust) | Rust/ONNXRuntime | [jamjamjon](https://github.com/jamjamjon) |
### How to Contribute

@@ -0,0 +1,12 @@
[package]
name = "YOLO-ONNXRuntime-Rust"
version = "0.1.0"
edition = "2021"
authors = ["Jamjamjon <xxyydzml@outlook.com>"]
[dependencies]
anyhow = "1.0.92"
clap = "4.5.20"
tracing = "0.1.40"
tracing-subscriber = "0.3.18"
usls = { version = "0.0.19", features = ["auto"] }

@@ -0,0 +1,94 @@
# YOLO-Series ONNXRuntime Rust Demo for Core YOLO Tasks
This repository provides a Rust demo for key YOLO-Series tasks such as `Classification`, `Segmentation`, `Detection`, `Pose Detection`, and `OBB` using ONNXRuntime. It supports various YOLO models (v5-11) across multiple vision tasks.
## Introduction
- This example leverages the latest versions of both ONNXRuntime and YOLO models.
- We utilize the [usls](https://github.com/jamjamjon/usls/tree/main) crate to streamline YOLO model inference, providing efficient data loading, visualization, and optimized inference performance.
## Features
- **Extensive Model Compatibility**: Supports `YOLOv5`, `YOLOv6`, `YOLOv7`, `YOLOv8`, `YOLOv9`, `YOLOv10`, `YOLO11`, `YOLO-world`, `RTDETR`, and others, covering a wide range of YOLO versions.
- **Versatile Task Coverage**: Includes `Classification`, `Segmentation`, `Detection`, `Pose`, and `OBB`.
- **Precision Flexibility**: Works with `FP16` and `FP32` ONNX models.
- **Execution Providers**: Accelerated support for `CPU`, `CUDA`, `CoreML`, and `TensorRT`.
- **Dynamic Input Shapes**: Dynamically adjusts to variable `batch`, `width`, and `height` dimensions for flexible model input.
- **Flexible Data Loading**: The `DataLoader` handles images, folders, videos, and video streams.
- **Real-Time Display and Video Export**: `Viewer` provides real-time frame visualization and video export functions, similar to OpenCV’s `imshow()` and `imwrite()`.
- **Enhanced Annotation and Visualization**: The `Annotator` facilitates comprehensive result rendering, with support for bounding boxes (HBB), oriented bounding boxes (OBB), polygons, masks, keypoints, and text labels.
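Taken together, these pieces compose into a compact inference pipeline. Below is a minimal sketch, assuming the `usls` 0.0.19 API exercised in `src/main.rs` further down; the model path, class count, and save-out name are placeholders:

```rust
use anyhow::Result;
use usls::{models::YOLO, Annotator, DataLoader, Options, Vision};

fn main() -> Result<()> {
    // Build a detector from a local ONNX file (placeholder path).
    let options = Options::new().with_model("path/to/yolov8n.onnx")?.with_nc(80);
    let mut model = YOLO::new(options)?;

    // The same DataLoader call accepts an image, a folder, a video, or a stream.
    let dl = DataLoader::new("../../ultralytics/assets/bus.jpg")?
        .with_batch(1)
        .build()?;

    // Run inference batch by batch and save the annotated results.
    let annotator = Annotator::default().with_saveout("YOLO-Demo");
    for (xs, _paths) in dl {
        let ys = model.forward(&xs, false)?;
        annotator.plot(&xs, &ys, true)?;
    }
    Ok(())
}
```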
## Setup Instructions
### 1. ONNXRuntime Linking
<details>
<summary>You have two options to link the ONNXRuntime library:</summary>
- **Option 1: Manual Linking**
- For detailed setup, consult the [ONNX Runtime linking documentation](https://ort.pyke.io/setup/linking).
- **Linux or macOS**:
1. Download the ONNX Runtime package from the [Releases page](https://github.com/microsoft/onnxruntime/releases).
2. Set up the library path by exporting the `ORT_DYLIB_PATH` environment variable:
```shell
export ORT_DYLIB_PATH=/path/to/onnxruntime/lib/libonnxruntime.so.1.19.0
```
- **Option 2: Automatic Download**
- The `auto` feature of `usls` (already enabled in this example's `Cargo.toml`) downloads a matching ONNXRuntime library automatically, so no extra flags are needed:
```shell
cargo run -r
```
</details>
### 2. \[Optional\] Install CUDA, cuDNN, and TensorRT
- The CUDA execution provider requires CUDA version `12.x`.
- The TensorRT execution provider requires both CUDA `12.x` and TensorRT `10.x`.
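If you're unsure which versions are installed, these commands report them on a typical Linux setup (illustrative only; paths and package tooling vary by distribution):

```shell
# CUDA toolkit version (requires nvcc on PATH)
nvcc --version
# Driver-reported CUDA version
nvidia-smi
# TensorRT packages on Debian/Ubuntu-style installs
dpkg -l | grep -i tensorrt
```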
### 3. \[Optional\] Install FFmpeg
To display video frames and export inference videos, the `rust-ffmpeg` bindings are required. For build instructions, see:
[https://github.com/zmwangx/rust-ffmpeg/wiki/Notes-on-building#dependencies](https://github.com/zmwangx/rust-ffmpeg/wiki/Notes-on-building#dependencies)
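As an example, on Ubuntu the FFmpeg development packages that `rust-ffmpeg` builds against can be installed as below (package names are an assumption based on the wiki above; consult it for other platforms):

```shell
sudo apt-get update
sudo apt-get install -y ffmpeg pkg-config clang \
  libavcodec-dev libavformat-dev libavutil-dev \
  libavfilter-dev libavdevice-dev libswscale-dev
```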
## Get Started
```shell
# Custom model
cargo run -r -- --task detect --ver v8 --nc 6 --model xxx.onnx # YOLOv8
# Classify
cargo run -r -- --task classify --ver v5 --scale s --width 224 --height 224 --nc 1000 # YOLOv5
cargo run -r -- --task classify --ver v8 --scale n --width 224 --height 224 --nc 1000 # YOLOv8
cargo run -r -- --task classify --ver v11 --scale n --width 224 --height 224 --nc 1000 # YOLO11
# Detect
cargo run -r -- --task detect --ver v5 --scale n # YOLOv5
cargo run -r -- --task detect --ver v6 --scale n # YOLOv6
cargo run -r -- --task detect --ver v7 --scale t # YOLOv7
cargo run -r -- --task detect --ver v8 --scale n # YOLOv8
cargo run -r -- --task detect --ver v9 --scale t # YOLOv9
cargo run -r -- --task detect --ver v10 --scale n # YOLOv10
cargo run -r -- --task detect --ver v11 --scale n # YOLO11
cargo run -r -- --task detect --ver rtdetr --scale l # RTDETR
# Pose
cargo run -r -- --task pose --ver v8 --scale n # YOLOv8-Pose
cargo run -r -- --task pose --ver v11 --scale n # YOLO11-Pose
# Segment
cargo run -r -- --task segment --ver v5 --scale n # YOLOv5-Segment
cargo run -r -- --task segment --ver v8 --scale n # YOLOv8-Segment
cargo run -r -- --task segment --ver v11 --scale n # YOLO11-Segment
cargo run -r -- --task segment --ver v8 --model yolo/FastSAM-s-dyn-f16.onnx # FastSAM
# OBB
cargo run -r -- --ver v8 --task obb --scale n --width 1024 --height 1024 --source images/dota.png # YOLOv8-OBB
cargo run -r -- --ver v11 --task obb --scale n --width 1024 --height 1024 --source images/dota.png # YOLO11-OBB
```
**Run `cargo run -- --help` to see all available options.**
For more details, please refer to [usls-yolo](https://github.com/jamjamjon/usls/tree/main/examples/yolo).

@@ -0,0 +1,236 @@
use anyhow::Result;
use clap::Parser;
use usls::{
models::YOLO, Annotator, DataLoader, Device, Options, Viewer, Vision, YOLOScale, YOLOTask,
YOLOVersion, COCO_SKELETONS_16,
};
#[derive(Parser, Clone)]
#[command(author, version, about, long_about = None)]
pub struct Args {
/// Path to the ONNX model
#[arg(long)]
pub model: Option<String>,
/// Input source path
#[arg(long, default_value_t = String::from("../../ultralytics/assets/bus.jpg"))]
pub source: String,
/// YOLO Task
#[arg(long, value_enum, default_value_t = YOLOTask::Detect)]
pub task: YOLOTask,
/// YOLO Version
#[arg(long, value_enum, default_value_t = YOLOVersion::V8)]
pub ver: YOLOVersion,
/// YOLO Scale
#[arg(long, value_enum, default_value_t = YOLOScale::N)]
pub scale: YOLOScale,
/// Batch size
#[arg(long, default_value_t = 1)]
pub batch_size: usize,
/// Minimum input width
#[arg(long, default_value_t = 224)]
pub width_min: isize,
/// Input width
#[arg(long, default_value_t = 640)]
pub width: isize,
/// Maximum input width
#[arg(long, default_value_t = 1024)]
pub width_max: isize,
/// Minimum input height
#[arg(long, default_value_t = 224)]
pub height_min: isize,
/// Input height
#[arg(long, default_value_t = 640)]
pub height: isize,
/// Maximum input height
#[arg(long, default_value_t = 1024)]
pub height_max: isize,
/// Number of classes
#[arg(long, default_value_t = 80)]
pub nc: usize,
/// Class confidence
#[arg(long)]
pub confs: Vec<f32>,
/// Enable TensorRT support
#[arg(long)]
pub trt: bool,
/// Enable CUDA support
#[arg(long)]
pub cuda: bool,
/// Enable CoreML support
#[arg(long)]
pub coreml: bool,
/// Use TensorRT half precision
#[arg(long)]
pub half: bool,
/// Device ID to use
#[arg(long, default_value_t = 0)]
pub device_id: usize,
/// Enable performance profiling
#[arg(long)]
pub profile: bool,
/// Disable contour drawing to save time
#[arg(long)]
pub no_contours: bool,
/// Show result
#[arg(long)]
pub view: bool,
/// Do not save output
#[arg(long)]
pub nosave: bool,
}
fn main() -> Result<()> {
let args = Args::parse();
// logger
if args.profile {
tracing_subscriber::fmt()
.with_max_level(tracing::Level::INFO)
.init();
}
// model path
let path = match &args.model {
None => format!(
"yolo/{}-{}-{}.onnx",
args.ver.name(),
args.scale.name(),
args.task.name()
),
Some(x) => x.to_string(),
};
// saveout
let saveout = match &args.model {
None => format!(
"{}-{}-{}",
args.ver.name(),
args.scale.name(),
args.task.name()
),
Some(x) => {
let p = std::path::PathBuf::from(&x);
p.file_stem().unwrap().to_str().unwrap().to_string()
}
};
// device
let device = if args.cuda {
Device::Cuda(args.device_id)
} else if args.trt {
Device::Trt(args.device_id)
} else if args.coreml {
Device::CoreML(args.device_id)
} else {
Device::Cpu(args.device_id)
};
// build options
let options = Options::new()
.with_model(&path)?
.with_yolo_version(args.ver)
.with_yolo_task(args.task)
.with_device(device)
.with_trt_fp16(args.half)
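// dynamic input ranges expressed as (min, opt, max); input axes: 0 = batch, 2 = height, 3 = width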
.with_ixx(0, 0, (1, args.batch_size as _, 4).into())
.with_ixx(0, 2, (args.height_min, args.height, args.height_max).into())
.with_ixx(0, 3, (args.width_min, args.width, args.width_max).into())
.with_confs(if args.confs.is_empty() {
&[0.2, 0.15]
} else {
&args.confs
})
.with_nc(args.nc)
.with_find_contours(!args.no_contours) // find contours or not
// .with_names(&COCO_CLASS_NAMES_80) // detection class names
// .with_names2(&COCO_KEYPOINTS_17) // keypoints class names
// .exclude_classes(&[0])
// .retain_classes(&[0, 5])
.with_profile(args.profile);
// build model
let mut model = YOLO::new(options)?;
// build dataloader
let dl = DataLoader::new(&args.source)?
.with_batch(model.batch() as _)
.build()?;
// build annotator
let annotator = Annotator::default()
.with_skeletons(&COCO_SKELETONS_16)
.without_masks(true) // skip mask plotting in segmentation tasks
.with_bboxes_thickness(3)
.with_keypoints_name(false) // do not draw keypoint names
.with_saveout_subs(&["YOLO"])
.with_saveout(&saveout);
// build viewer
let mut viewer = if args.view {
Some(Viewer::new().with_delay(5).with_scale(1.).resizable(true))
} else {
None
};
// run & annotate
for (xs, _paths) in dl {
let ys = model.forward(&xs, args.profile)?;
let images_plotted = annotator.plot(&xs, &ys, !args.nosave)?;
// show image
match &mut viewer {
Some(viewer) => viewer.imshow(&images_plotted)?,
None => continue,
}
// check out window and key event
match &mut viewer {
Some(viewer) => {
if !viewer.is_open() || viewer.is_key_pressed(usls::Key::Escape) {
break;
}
}
None => continue,
}
// write video
if !args.nosave {
match &mut viewer {
Some(viewer) => viewer.write_batch(&images_plotted)?,
None => continue,
}
}
}
// finish video write
if !args.nosave {
if let Some(viewer) = &mut viewer {
viewer.finish_write()?;
}
}
Ok(())
}

@@ -12,7 +12,7 @@ clap = { version = "4.2.4", features = ["derive"] }
image = { version = "0.25.2"}
imageproc = { version = "0.25.0"}
ndarray = { version = "0.16" }
-ort = { version = "2.0.0-rc.5", features = ["cuda", "tensorrt"]}
+ort = { version = "2.0.0-rc.5", features = ["cuda", "tensorrt", "load-dynamic", "copy-dylibs", "half"]}
rusttype = { version = "0.9.3" }
anyhow = { version = "1.0.75" }
regex = { version = "1.5.4" }

@@ -7,7 +7,7 @@ This repository provides a Rust demo for performing YOLOv8 tasks like `Classific
- Add YOLOv8-OBB demo
- Update ONNXRuntime to 1.19.x
-Newly updated YOLOv8 example code is located in this repository (https://github.com/jamjamjon/usls/tree/main/examples/yolo)
+Newly updated YOLOv8 example code is located in [this repository](https://github.com/jamjamjon/usls/tree/main/examples/yolo)
## Features
@@ -22,25 +22,16 @@ Newly updated YOLOv8 example code is located in this repository (https://github.
Please follow the Rust official installation. (https://www.rust-lang.org/tools/install)
-### 2. Install ONNXRuntime
+### 2. ONNXRuntime Linking
-This repository use `ort` crate, which is ONNXRuntime wrapper for Rust. (https://docs.rs/ort/latest/ort/)
+- #### For detailed setup instructions, refer to the [ORT documentation](https://ort.pyke.io/setup/linking).
-You can follow the instruction with `ort` doc or simply do this:
-- step1: Download ONNXRuntime(https://github.com/microsoft/onnxruntime/releases)
-- setp2: Set environment variable `PATH` for linking.
-On ubuntu, You can do like this:
-```bash
-vim ~/.bashrc
-# Add the path of ONNXRUntime lib
-export LD_LIBRARY_PATH=/home/qweasd/Documents/onnxruntime-linux-x64-gpu-1.16.3/lib${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
-source ~/.bashrc
-```
+- #### For Linux or macOS Users:
+- Download the ONNX Runtime package from the [Releases page](https://github.com/microsoft/onnxruntime/releases).
+- Set up the library path by exporting the `ORT_DYLIB_PATH` environment variable:
+```shell
+export ORT_DYLIB_PATH=/path/to/onnxruntime/lib/libonnxruntime.so.1.19.0
+```
### 3. \[Optional\] Install CUDA & CuDNN & TensorRT

@@ -118,16 +118,15 @@ pub fn check_font(font: &str) -> rusttype::Font<'static> {
rusttype::Font::try_from_vec(buffer).unwrap()
}
use ab_glyph::FontArc;
-pub fn load_font() -> FontArc{
+pub fn load_font() -> FontArc {
use std::path::Path;
let font_path = Path::new("./font/Arial.ttf");
match font_path.try_exists() {
Ok(true) => {
let buffer = std::fs::read(font_path).unwrap();
FontArc::try_from_vec(buffer).unwrap()
-},
+}
Ok(false) => {
std::fs::create_dir_all("./font").unwrap();
println!("Downloading font...");
@@ -136,7 +135,7 @@ pub fn load_font() -> FontArc{
.timeout(std::time::Duration::from_secs(500))
.call()
.unwrap_or_else(|err| panic!("> Failed to download font: {source_url}: {err:?}"));
// read to buffer
let mut buffer = vec![];
let total_size = resp
@@ -153,9 +152,9 @@ pub fn load_font() -> FontArc{
fd.write_all(&buffer).unwrap();
println!("Font saved at: {:?}", font_path.display());
FontArc::try_from_vec(buffer).unwrap()
-},
+}
Err(e) => {
panic!("Failed to load font {}", e);
-},
+}
}
}
}

@@ -8,7 +8,7 @@ use rand::{thread_rng, Rng};
use std::path::PathBuf;
use crate::{
-load_font, gen_time_string, non_max_suppression, Args, Batch, Bbox, Embedding, OrtBackend,
+gen_time_string, load_font, non_max_suppression, Args, Batch, Bbox, Embedding, OrtBackend,
OrtConfig, OrtEP, Point2, YOLOResult, YOLOTask, SKELETON,
};
@@ -40,7 +40,7 @@ impl YOLOv8 {
OrtEP::CUDA(config.device_id)
} else {
OrtEP::CPU
-};
+};
// batch
let batch = Batch {
@@ -463,7 +463,7 @@ impl YOLOv8 {
image::Rgb(self.color_palette[bbox.id()].into()),
bbox.xmin() as i32,
(bbox.ymin() - legend_size as f32) as i32,
-legend_size as f32,
+legend_size as f32,
&font,
&legend,
);
