# YOLOv8-TensorRT

YOLOv8 accelerated with TensorRT!
## Prepare the environment

- Install TensorRT by following the TensorRT official website.

- Install the Python requirements:

  ```shell
  pip install -r requirements.txt
  ```

- (optional) Install the `ultralytics` package for TensorRT API building:

  ```shell
  pip install ultralytics
  ```
You can download pretrained PyTorch models by:

```shell
wget https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8n.pt
wget https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8s.pt
wget https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8m.pt
wget https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8l.pt
wget https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8x.pt
```
## Build TensorRT engine by ONNX

### Preprocessed ONNX model

You can download the ONNX models, which are exported by the YOLOv8 package and modified by me.
### 1. By TensorRT ONNX Python API

You can export a TensorRT engine from ONNX with `build.py`.

Usage:

```shell
python build.py \
  --weights yolov8s_nms.onnx \
  --iou-thres 0.65 \
  --conf-thres 0.25 \
  --topk 100 \
  --fp16 \
  --device cuda:0
```
Description of all arguments:

- `--weights`: The ONNX model you downloaded.
- `--iou-thres`: IoU threshold for the NMS plugin.
- `--conf-thres`: Confidence threshold for the NMS plugin.
- `--topk`: Maximum number of detection bboxes.
- `--fp16`: Whether to export a half-precision engine.
- `--device`: The CUDA device on which to export the engine.

You can modify `iou-thres`, `conf-thres`, and `topk` yourself.
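These three values are baked into the engine's NMS plugin at build time, so they cannot be changed after export. As a rough mental model of what the plugin computes (an illustrative NumPy sketch, not the plugin's actual on-device implementation):

```python
import numpy as np

def nms(boxes, scores, iou_thres=0.65, conf_thres=0.25, topk=100):
    """Class-agnostic NMS mirroring the plugin's three parameters.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    """
    mask = scores >= conf_thres               # drop low-confidence boxes
    boxes, scores = boxes[mask], scores[mask]
    order = scores.argsort()[::-1]            # highest confidence first
    keep = []
    while order.size > 0 and len(keep) < topk:
        i = order[0]
        keep.append(i)
        # IoU of the best remaining box against the rest
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou < iou_thres]    # suppress overlapping boxes
    return boxes[keep], scores[keep]
```

Raising `iou-thres` keeps more overlapping boxes; raising `conf-thres` discards more low-scoring ones before suppression even runs.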
### 2. By the trtexec tool

You can export a TensorRT engine with the `trtexec` tool.

Usage:

```shell
/usr/src/tensorrt/bin/trtexec --onnx=yolov8s_nms.onnx --saveEngine=yolov8s_nms.engine --fp16
```

If you installed TensorRT from a Debian package, `trtexec` is located at `/usr/src/tensorrt/bin/trtexec`.
If you installed TensorRT from a tar package, `trtexec` is under the `bin` folder of the directory where you decompressed it.
## Build TensorRT engine by API

When you want to build the engine with the API, you should generate the pickled weight parameters first:

```shell
python gen_pkl.py -w yolov8s.pt -o yolov8s.pkl
```

You will get a `yolov8s.pkl` which contains the operators' parameters, and you can rebuild the `yolov8s` model with the TensorRT API:
```shell
python build.py \
  --weights yolov8s.pkl \
  --iou-thres 0.65 \
  --conf-thres 0.25 \
  --topk 100 \
  --fp16 \
  --input-shape 1 3 640 640 \
  --device cuda:0
```
Notice!!! Currently we only support building static-input-shape models with the TensorRT API. You'd best give a legal `input-shape`.
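The exact layout of the pickle that `gen_pkl.py` writes is not documented here; an assumption consistent with the description above is a plain mapping from operator names to weight arrays, which a builder script can restore and feed to the TensorRT network definition layer by layer. A minimal round-trip sketch (the parameter names below are made up for illustration):

```python
import pickle
import tempfile

import numpy as np

# Hypothetical operator names; the real yolov8s.pkl layout may differ.
weights = {
    "model.0.conv.weight": np.ones((16, 3, 3, 3), dtype=np.float32),
    "model.0.conv.bias": np.zeros(16, dtype=np.float32),
}

# write the parameters to a .pkl file, as gen_pkl.py presumably does
with tempfile.NamedTemporaryFile(suffix=".pkl", delete=False) as f:
    pickle.dump(weights, f)
    path = f.name

# an engine-building script can later restore the mapping by name
with open(path, "rb") as f:
    restored = pickle.load(f)
```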
## Infer images with the engine you exported or built

### 1. Python infer

You can infer images with the engine using `infer.py`.

Usage:

```shell
python3 infer.py --engine yolov8s_nms.engine --imgs data --show --out-dir outputs --device cuda:0
```
Description of all arguments:

- `--engine`: The engine you exported.
- `--imgs`: The path of the images you want to detect.
- `--show`: Whether to show detection results.
- `--out-dir`: Where to save the result images. It does not take effect when the `--show` flag is used.
- `--device`: The CUDA device you use.
- `--profile`: Profile the TensorRT engine.
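Before an image reaches a fixed 640x640 engine, the usual first step is a letterbox resize: scale the image with its aspect ratio preserved and pad the remainder (commonly with the value 114). A dependency-free sketch of that step, assuming the conventional YOLO preprocessing; `infer.py`'s actual code may differ:

```python
import numpy as np

def letterbox(img, new_shape=(640, 640), pad_value=114):
    """Resize keeping aspect ratio, pad the rest (YOLO-style letterbox).

    Uses nearest-neighbour sampling to avoid an OpenCV dependency;
    returns the padded image, the scale ratio, and the (left, top) offsets
    needed to map detections back to the original image.
    """
    h, w = img.shape[:2]
    r = min(new_shape[0] / h, new_shape[1] / w)   # scale ratio
    nh, nw = round(h * r), round(w * r)
    # nearest-neighbour resize via index sampling
    ys = (np.arange(nh) / r).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / r).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    out = np.full((new_shape[0], new_shape[1], img.shape[2]),
                  pad_value, dtype=img.dtype)
    top = (new_shape[0] - nh) // 2
    left = (new_shape[1] - nw) // 2
    out[top:top + nh, left:left + nw] = resized
    return out, r, (left, top)
```

The returned ratio and offsets are what you would use to rescale the engine's output boxes back to the source resolution.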
### 2. C++ infer

You can infer with C++ in `csrc/end2end`.

Build:

Please set your own libraries in `CMakeLists.txt` and modify your own config in `config.h`, such as `CLASS_NAMES` and `COLORS`.
```shell
export root=${PWD}
cd csrc/end2end
mkdir build && cd build
cmake ..
make
mv yolov8 ${root}
cd ${root}
```
Usage:

```shell
# infer an image
./yolov8 yolov8s_nms.engine data/bus.jpg
# infer a folder of images
./yolov8 yolov8s_nms.engine data
# infer a video
./yolov8 yolov8s_nms.engine data/test.mp4 # the video path
```
## Profile your engine

If you want to profile the TensorRT engine:

Usage:

```shell
python3 infer.py --engine yolov8s_nms.engine --profile
```
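Whatever backend is being timed, profiling generally follows the same pattern: a few warm-up runs to exclude one-time setup (CUDA context creation, kernel caching), then an average over many timed iterations. A generic sketch of that pattern, with a dummy workload standing in for engine execution (this is not `infer.py`'s actual implementation):

```python
import time

def profile(fn, warmup=10, iters=100):
    """Return the average wall-clock latency of fn in milliseconds."""
    for _ in range(warmup):       # warm-up calls are not timed
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters * 1e3

# dummy workload standing in for a real engine execution
latency_ms = profile(lambda: sum(i * i for i in range(1000)))
```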