YOLOv8-TensorRT
YOLOv8 accelerated with TensorRT!
Prepare the environment
- Install TensorRT by following the TensorRT official website.
- Install the Python requirements.
  pip install -r requirements.txt
- (optional) Install the ultralytics package for ONNX export or TensorRT API building.
  pip install ultralytics
You can download the pretrained PyTorch models by:
wget https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt
wget https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s.pt
wget https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m.pt
wget https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l.pt
wget https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x.pt
Normal Usage
You can export an ONNX model or a TensorRT engine using the original ultralytics repo.
Please see more information in Normal.md.
Build TensorRT Engine by ONNX
Export ONNX by the ultralytics API
Export Your Own ONNX model
You can export your ONNX model with the ultralytics API and add the postprocessing into the model at the same time. A plain-export sketch without the postprocess is shown after the argument list below.
python3 export.py \
--weights yolov8s.pt \
--iou-thres 0.65 \
--conf-thres 0.25 \
--topk 100 \
--opset 11 \
--sim \
--input-shape 1 3 640 640 \
--device cuda:0
Description of all arguments
- --weights : The PyTorch model you trained.
- --iou-thres : IoU threshold for the NMS plugin.
- --conf-thres : Confidence threshold for the NMS plugin.
- --topk : Max number of detection bboxes.
- --opset : ONNX opset version, default is 11.
- --sim : Whether to simplify your ONNX model.
- --input-shape : Input shape for your model, should be 4 dimensions.
- --device : The CUDA device you export the engine on.
You will get an ONNX model whose prefix is the same as the input weights.
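If you only need a plain ONNX model without the baked-in NMS postprocess, a minimal sketch using the optional ultralytics package looks like this (the weight file name is just an example; any YOLOv8 .pt weight works):

```python
from ultralytics import YOLO

# Load the pretrained detector and export a plain ONNX graph.
# Unlike export.py, this does NOT add the NMS postprocess to the model.
model = YOLO("yolov8s.pt")
onnx_path = model.export(format="onnx", opset=11, simplify=True, imgsz=640)
print(f"ONNX model saved to: {onnx_path}")
```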
Just Taste First
If you just want a quick taste, you can download an ONNX model that was exported by the YOLOv8 package and modified by me.
Export TensorRT Engine
1. Export Engine by the TensorRT ONNX Python API
You can export a TensorRT engine from ONNX with build.py. A sketch of the underlying TensorRT API calls is shown after the argument list below.
Usage:
python3 build.py \
--weights yolov8s.onnx \
--iou-thres 0.65 \
--conf-thres 0.25 \
--topk 100 \
--fp16 \
--device cuda:0
Description of all arguments
- --weights : The ONNX model you downloaded or exported.
- --iou-thres : IoU threshold for the NMS plugin.
- --conf-thres : Confidence threshold for the NMS plugin.
- --topk : Max number of detection bboxes.
- --fp16 : Whether to export a half-precision engine.
- --device : The CUDA device you export the engine on.
You can modify iou-thres, conf-thres, and topk yourself.
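For reference, here is a minimal sketch of what an ONNX-to-engine build looks like with the TensorRT Python API (assuming TensorRT 8.x; build.py also takes the NMS parameters listed above, which this sketch omits):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Parse the exported ONNX graph.
with open("yolov8s.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse the ONNX model")

config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB workspace
if builder.platform_has_fast_fp16:
    config.set_flag(trt.BuilderFlag.FP16)  # same effect as --fp16

# Build, serialize, and save the engine.
engine_bytes = builder.build_serialized_network(network, config)
with open("yolov8s.engine", "wb") as f:
    f.write(engine_bytes)
```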
2. Export Engine by Trtexec Tools
You can export a TensorRT engine with the trtexec tool.
Usage:
/usr/src/tensorrt/bin/trtexec \
--onnx=yolov8s.onnx \
--saveEngine=yolov8s.engine \
--fp16
If you installed TensorRT from a Debian package, trtexec is located at /usr/src/tensorrt/bin/trtexec.
If you installed TensorRT from a tar package, trtexec is under the bin folder of the directory you decompressed it to.
Build TensorRT Engine by TensorRT API
Please see more information in API-Build.md.
Notice !!! We don't support the YOLOv8-seg model for now !!!
Inference
1. Infer with the Python script
You can run inference on images with the engine using infer.py.
Usage:
python3 infer.py \
--engine yolov8s.engine \
--imgs data \
--show \
--out-dir outputs \
--device cuda:0
Description of all arguments
- --engine : The engine you exported.
- --imgs : The path of the images you want to detect.
- --show : Whether to show detection results.
- --seg : Whether to infer with a segmentation model.
- --out-dir : Where to save the detection result images. It will not work when the --show flag is used.
- --device : The CUDA device you use.
- --profile : Profile the TensorRT engine.
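As a quick orientation, end2end detection engines commonly expose four outputs per image: the number of valid detections, the boxes, the scores, and the class labels. The layout below is an assumption for illustration (dummy arrays stand in for real engine outputs), so check your own engine's bindings:

```python
import numpy as np

# Hypothetical end2end output layout (verify against your engine's bindings):
# num_dets (1, 1), bboxes (1, topk, 4), scores (1, topk), labels (1, topk)
num_dets = np.array([[2]], dtype=np.int32)
bboxes = np.zeros((1, 100, 4), dtype=np.float32)
scores = np.zeros((1, 100), dtype=np.float32)
labels = np.zeros((1, 100), dtype=np.int32)

# Only the first num_dets entries are valid detections.
n = int(num_dets[0, 0])
for box, score, label in zip(bboxes[0, :n], scores[0, :n], labels[0, :n]):
    x1, y1, x2, y2 = box.tolist()
    print(f"class={int(label)} score={float(score):.2f} "
          f"box=({x1:.1f}, {y1:.1f}, {x2:.1f}, {y2:.1f})")
```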
2. Infer with C++
You can infer with C++ in csrc/detect/end2end.
Build:
Please set your own library paths in CMakeLists.txt and modify CLASS_NAMES and COLORS in main.cpp.
export root=${PWD}
cd csrc/detect/end2end
mkdir build
cd build
cmake ..
make
mv yolov8 ${root}
cd ${root}
Usage:
# infer image
./yolov8 yolov8s.engine data/bus.jpg
# infer images
./yolov8 yolov8s.engine data
# infer video
./yolov8 yolov8s.engine data/test.mp4 # the video path
TensorRT Segment Deploy
Please see more information in Segment.md
DeepStream Detection Deploy
See more in README.md
Profile your engine
If you want to profile the TensorRT engine:
Usage:
python3 infer.py --engine yolov8s.engine --profile
Refuse To Use PyTorch for model inference !!!
If you need to break away from PyTorch and run TensorRT inference directly,
you can find more information in infer-no-torch.py.
The usage is the same as the PyTorch version, but its performance is much worse.
You can use cuda-python or pycuda for inference.
Please install one of them with:
pip install cuda-python
# or
pip install pycuda
Usage:
python3 infer-no-torch.py \
--engine yolov8s.engine \
--imgs data \
--show \
--out-dir outputs \
--method cudart
Description of all arguments
- --engine : The engine you exported.
- --imgs : The path of the images you want to detect.
- --show : Whether to show detection results.
- --seg : Whether to infer with a segmentation model.
- --out-dir : Where to save the detection result images. It will not work when the --show flag is used.
- --method : Choose cudart or pycuda, default is cudart.
- --profile : Profile the TensorRT engine.
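For readers who want to see what Torch-free inference involves, here is a minimal sketch using cuda-python and the TensorRT 8.x binding API. The engine name, binding order, and random input are placeholders, and real code also needs the image preprocessing and result postprocessing that infer-no-torch.py performs:

```python
import numpy as np
import tensorrt as trt
from cuda import cudart  # pip install cuda-python

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)
with open("yolov8s.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate one host buffer and one device buffer per binding (input + outputs).
host_bufs, dev_ptrs = [], []
for i in range(engine.num_bindings):
    shape = tuple(engine.get_binding_shape(i))
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = np.empty(shape, dtype=dtype)
    _, ptr = cudart.cudaMalloc(host.nbytes)
    host_bufs.append(host)
    dev_ptrs.append(ptr)

# Fill the input binding (index 0) with a preprocessed image; random data as a placeholder.
host_bufs[0][...] = np.random.rand(*host_bufs[0].shape).astype(host_bufs[0].dtype)

# Host -> device copy, synchronous execution, device -> host copies.
cudart.cudaMemcpy(dev_ptrs[0], host_bufs[0].ctypes.data, host_bufs[0].nbytes,
                  cudart.cudaMemcpyKind.cudaMemcpyHostToDevice)
context.execute_v2(dev_ptrs)
for i in range(1, engine.num_bindings):
    cudart.cudaMemcpy(host_bufs[i].ctypes.data, dev_ptrs[i], host_bufs[i].nbytes,
                      cudart.cudaMemcpyKind.cudaMemcpyDeviceToHost)
```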