# YOLOv8-TensorRT
`YOLOv8` accelerated with TensorRT!
---
[![Build Status](https://img.shields.io/endpoint.svg?url=https%3A%2F%2Factions-badge.atrox.dev%2Fatrox%2Fsync-dotenv%2Fbadge&style=flat)](https://github.com/triple-Mu/YOLOv8-TensorRT)
[![Python Version](https://img.shields.io/badge/Python-3.8--3.10-FFD43B?logo=python)](https://github.com/triple-Mu/YOLOv8-TensorRT)
[![img](https://badgen.net/badge/icon/tensorrt?icon=azurepipelines&label)](https://developer.nvidia.com/tensorrt)
[![C++](https://img.shields.io/badge/CPP-11%2F14-yellow)](https://github.com/triple-Mu/YOLOv8-TensorRT)
[![img](https://badgen.net/github/license/triple-Mu/YOLOv8-TensorRT)](https://github.com/triple-Mu/YOLOv8-TensorRT/blob/main/LICENSE)
[![img](https://badgen.net/github/prs/triple-Mu/YOLOv8-TensorRT)](https://github.com/triple-Mu/YOLOv8-TensorRT/pulls)
[![img](https://img.shields.io/github/stars/triple-Mu/YOLOv8-TensorRT?style=social&label=Star&maxAge=2592000)](https://github.com/triple-Mu/YOLOv8-TensorRT)
---
# Prepare the environment
1. Install TensorRT by following the [`TensorRT official website`](https://developer.nvidia.com/nvidia-tensorrt-8x-download).
2. Install the Python requirements.
``` shell
pip install -r requirements.txt
```
3. (Optional) Install the [`ultralytics`](https://github.com/ultralytics/ultralytics) package for ONNX export or TensorRT API building.
``` shell
pip install ultralytics
```
You can download a pretrained PyTorch model with:
``` shell
wget https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8n.pt
wget https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8s.pt
wget https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8m.pt
wget https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8l.pt
wget https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8x.pt
```
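If you installed the optional `ultralytics` package, you can also sanity-check a plain ONNX export (without this repo's embedded NMS postprocessing) straight from its API. A minimal sketch, assuming a recent `ultralytics` release:

``` python
# Minimal sketch: plain ONNX export via the ultralytics API (no embedded NMS).
# This is NOT the same as export.py below, which adds NMS postprocessing.
from ultralytics import YOLO

model = YOLO("yolov8s.pt")            # pretrained weights downloaded above
model.export(format="onnx", opset=11, simplify=True)  # writes yolov8s.onnx
```

Note that `export.py` below is still the recommended path, since it embeds the NMS postprocessing into the model.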
# Build TensorRT Engine by ONNX
## Export ONNX by `ultralytics` API
### Export Your Own ONNX model
You can export your own ONNX model with the `ultralytics` API
and embed the postprocessing (NMS) into the model at the same time.
``` shell
python3 export.py \
--weights yolov8s.pt \
--iou-thres 0.65 \
--conf-thres 0.25 \
--topk 100 \
--opset 11 \
--sim \
--input-shape 1 3 640 640 \
--device cuda:0
```
#### Description of all arguments
- `--weights` : The PyTorch model you trained.
- `--iou-thres` : IoU threshold for the NMS plugin.
- `--conf-thres` : Confidence threshold for the NMS plugin.
- `--topk` : Maximum number of detection bboxes.
- `--opset` : ONNX opset version; the default is 11.
- `--sim` : Whether to simplify your ONNX model.
- `--input-shape` : Input shape of your model; it must have 4 dimensions.
- `--device` : The CUDA device on which you export the model.

You will get an ONNX model whose prefix is the same as the input weights.
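Before building an engine, it can help to sanity-check the exported ONNX file. Below is a minimal sketch using `onnxruntime` (not part of this repo's requirements); the output names in the comments are an assumption based on the embedded NMS head and may differ in your export:

``` python
# Hypothetical sanity check for the exported ONNX model with onnxruntime.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("yolov8s.onnx", providers=["CPUExecutionProvider"])
print([i.name for i in session.get_inputs()])   # expect one 1x3x640x640 image input
print([o.name for o in session.get_outputs()])  # e.g. num_dets, bboxes, scores, labels

# Run once on a dummy image to confirm the graph executes end to end.
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
for out in session.run(None, {session.get_inputs()[0].name: dummy}):
    print(out.shape, out.dtype)
```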
### Just Try It First
If you just want a quick taste, you can download one of the ONNX models below, which were exported with the `YOLOv8` package and modified by me.
[**YOLOv8-n**](https://triplemu.oss-cn-beijing.aliyuncs.com/YOLOv8/ONNX/yolov8n_nms.onnx?OSSAccessKeyId=LTAI5tN1dgmZD4PF8AJUXp3J&Expires=1772936700&Signature=r6HgJTTcCSAxQxD9bKO9qBTtigQ%3D)
[**YOLOv8-s**](https://triplemu.oss-cn-beijing.aliyuncs.com/YOLOv8/ONNX/yolov8s_nms.onnx?OSSAccessKeyId=LTAI5tN1dgmZD4PF8AJUXp3J&Expires=1682936722&Signature=JjxQFx1YElcVdsCaMoj81KJ4a5s%3D)
[**YOLOv8-m**](https://triplemu.oss-cn-beijing.aliyuncs.com/YOLOv8/ONNX/yolov8m_nms.onnx?OSSAccessKeyId=LTAI5tN1dgmZD4PF8AJUXp3J&Expires=1682936739&Signature=IRKBELdVFemD7diixxxgzMYqsWg%3D)
[**YOLOv8-l**](https://triplemu.oss-cn-beijing.aliyuncs.com/YOLOv8/ONNX/yolov8l_nms.onnx?OSSAccessKeyId=LTAI5tN1dgmZD4PF8AJUXp3J&Expires=1682936763&Signature=RGkJ4G2XJ4J%2BNiki5cJi3oBkDnA%3D)
[**YOLOv8-x**](https://triplemu.oss-cn-beijing.aliyuncs.com/YOLOv8/ONNX/yolov8x_nms.onnx?OSSAccessKeyId=LTAI5tN1dgmZD4PF8AJUXp3J&Expires=1673936778&Signature=3o%2F7QKhiZg1dW3I6sDrY4ug6MQU%3D)
## Export TensorRT Engine
### 1. Export Engine by TensorRT Python API
You can export a TensorRT engine from ONNX with [`build.py`](build.py).
Usage:
``` shell
python3 build.py \
--weights yolov8s.onnx \
--iou-thres 0.65 \
--conf-thres 0.25 \
--topk 100 \
--fp16 \
--device cuda:0
```
#### Description of all arguments
- `--weights` : The ONNX model you downloaded or exported.
- `--iou-thres` : IoU threshold for the NMS plugin.
- `--conf-thres` : Confidence threshold for the NMS plugin.
- `--topk` : Maximum number of detection bboxes.
- `--fp16` : Whether to build a half-precision engine.
- `--device` : The CUDA device on which you build the engine.

You can adjust `iou-thres`, `conf-thres`, and `topk` to suit your own needs.
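For reference, the core of what `build.py` automates looks roughly like the following TensorRT Python API sketch (TensorRT 8.x is assumed; the NMS plugin wiring that `build.py` performs is omitted here):

``` python
# Rough sketch of an ONNX -> engine build with the TensorRT 8.x Python API.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("yolov8s.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse ONNX")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # same effect as --fp16

serialized = builder.build_serialized_network(network, config)
with open("yolov8s.engine", "wb") as f:
    f.write(serialized)
```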
### 2. Export Engine by `trtexec` Tool
You can also export a TensorRT engine with the [`trtexec`](https://github.com/NVIDIA/TensorRT/tree/main/samples/trtexec) tool.
Usage:
``` shell
/usr/src/tensorrt/bin/trtexec \
--onnx=yolov8s.onnx \
--saveEngine=yolov8s.engine \
--fp16
```
**If you installed TensorRT from a Debian package, `trtexec` is located at `/usr/src/tensorrt/bin/trtexec`.**
**If you installed TensorRT from a tar package, `trtexec` is in the `bin` folder of the directory where you extracted it.**
# Build TensorRT Engine by TensorRT API
Please see more information in [`API-Build.md`](docs/API-Build.md)
***Notice!!!*** The YOLOv8-seg model is not supported yet!!!
# Inference
## 1. Infer with Python script
You can infer images with the engine using [`infer.py`](infer.py).
Usage:
``` shell
python3 infer.py \
--engine yolov8s.engine \
--imgs data \
--show \
--out-dir outputs \
--device cuda:0
```
#### Description of all arguments
- `--engine` : The engine you exported.
- `--imgs` : The path of the images you want to detect.
- `--show` : Whether to show detection results.
- `--out-dir` : Where to save the result images. It does not take effect when the `--show` flag is used.
- `--device` : The CUDA device you use.
- `--profile` : Profile the TensorRT engine.
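If you want to inspect the engine yourself before running `infer.py`, here is a minimal sketch that deserializes it and lists its I/O bindings (the TensorRT 8.x binding API is assumed; newer releases replace it with the tensor-name API):

``` python
# Hypothetical sketch: load a serialized engine and inspect its bindings.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("yolov8s.engine", "rb") as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())

# One image input plus the four NMS outputs are expected.
for i in range(engine.num_bindings):
    kind = "input" if engine.binding_is_input(i) else "output"
    print(engine.get_binding_name(i), engine.get_binding_shape(i), kind)
```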
## 2. Infer with C++
You can run C++ inference with the code in [`csrc/detect`](csrc/detect).
### Build:
Please set your own library paths in [`CMakeLists.txt`](csrc/detect/CMakeLists.txt) and modify your own config in [`config.h`](csrc/detect/include/config.h), such as `CLASS_NAMES` and `COLORS`.
``` shell
export root=${PWD}
cd csrc/detect
mkdir build
cd build
cmake ..
make
mv yolov8 ${root}
cd ${root}
```
Usage:
``` shell
# infer image
./yolov8 yolov8s.engine data/bus.jpg
# infer images
./yolov8 yolov8s.engine data
# infer video
./yolov8 yolov8s.engine data/test.mp4 # the video path
```
# TensorRT Segment Deploy
Please see more information in [`Segment.md`](docs/Segment.md)
# DeepStream Detection Deploy
See more in [`README.md`](csrc/deepstream/README.md)
# Profile your engine
If you want to profile the TensorRT engine, run `infer.py` with the `--profile` flag:
``` shell
python3 infer.py --engine yolov8s.engine --profile
```