
YOLOv8-TensorRT

YOLOv8 accelerated with TensorRT!




Prepare the environment

  1. Install TensorRT by following the instructions on the official TensorRT website (a quick import check is sketched after this list).

  2. Install the Python requirements.

    pip install -r requirements.txt
    
  3. (optional) Install the ultralytics package for ONNX export or TensorRT API building.

    pip install ultralytics
    

    You can download the pretrained PyTorch models with:

    wget https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8n.pt
    wget https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8s.pt
    wget https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8m.pt
    wget https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8l.pt
    wget https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8x.pt
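
You can quickly verify the environment before building anything. The snippet below is a minimal sketch, not part of this repo:

# Verify that TensorRT imports, and that the optional ultralytics
# package is available for ONNX export / TensorRT API building.
import tensorrt as trt

print(f"TensorRT version: {trt.__version__}")

try:
    import ultralytics
    print(f"ultralytics version: {ultralytics.__version__}")
except ImportError:
    print("ultralytics not installed (only needed for export / API build)")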
    

Build TensorRT Engine by ONNX

Export ONNX by ultralytics API

Export Your Own ONNX model

You can export your ONNX model with the ultralytics API and add the postprocess into the model at the same time.

python3 export.py \
--weights yolov8s.pt \
--iou-thres 0.65 \
--conf-thres 0.25 \
--topk 100 \
--opset 11 \
--sim \
--input-shape 1 3 640 640 \
--device cuda:0

Description of all arguments

  • --weights : The PyTorch model you trained.
  • --iou-thres : IoU threshold for the NMS plugin.
  • --conf-thres : Confidence threshold for the NMS plugin.
  • --topk : Maximum number of detection bboxes.
  • --opset : ONNX opset version; default is 11.
  • --sim : Whether to simplify your ONNX model.
  • --input-shape : Input shape of your model; must be 4-dimensional.
  • --device : The CUDA device on which to export the engine.

You will get an ONNX model whose prefix is the same as the input weights.
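
If you want to confirm that the postprocess was embedded, you can inspect the graph outputs of the exported model. A minimal sketch using the onnx package (the exact output names depend on the export, so treat them as an assumption):

# Print the outputs of the exported model to confirm the NMS
# postprocess was embedded (output names depend on the export).
import onnx

model = onnx.load("yolov8s.onnx")
for output in model.graph.output:
    # each output has a name and a (possibly dynamic) shape
    dims = [d.dim_value or d.dim_param for d in output.type.tensor_type.shape.dim]
    print(output.name, dims)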

Just Taste First

If you just want a quick taste, you can download the ONNX models, which were exported with the YOLOv8 package and modified by me:

YOLOv8-n

YOLOv8-s

YOLOv8-m

YOLOv8-l

YOLOv8-x

Export TensorRT Engine

1. Export Engine by TensorRT ONNX Python API

You can export a TensorRT engine from ONNX with build.py.

Usage:

python3 build.py \
--weights yolov8s.onnx \
--iou-thres 0.65 \
--conf-thres 0.25 \
--topk 100 \
--fp16  \
--device cuda:0

Description of all arguments

  • --weights : The ONNX model you downloaded or exported.
  • --iou-thres : IoU threshold for the NMS plugin.
  • --conf-thres : Confidence threshold for the NMS plugin.
  • --topk : Maximum number of detection bboxes.
  • --fp16 : Whether to export a half-precision engine.
  • --device : The CUDA device on which to export the engine.

You can adjust iou-thres, conf-thres, and topk yourself.
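
After building, you can check that the engine deserializes cleanly. A minimal sketch using the standard TensorRT Python API:

# Deserialize the built engine as a quick sanity check.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("yolov8s.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

assert engine is not None, "failed to deserialize engine"
print("engine deserialized successfully")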

2. Export Engine by Trtexec Tools

You can also export a TensorRT engine with the trtexec tool.

Usage:

/usr/src/tensorrt/bin/trtexec \
--onnx=yolov8s.onnx \
--saveEngine=yolov8s.engine \
--fp16

If you installed TensorRT from a Debian package, trtexec is installed at /usr/src/tensorrt/bin/trtexec.

If you installed TensorRT from a tar package, trtexec is under the bin folder of the directory you extracted.
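
If you are unsure where trtexec lives on your machine, a small sketch like this can locate it (the candidate paths are assumptions for common install layouts):

# Locate the trtexec binary on PATH or in the default Debian location.
import os
import shutil

candidates = [shutil.which("trtexec"), "/usr/src/tensorrt/bin/trtexec"]
for path in candidates:
    if path and os.path.isfile(path):
        print("trtexec found at:", path)
        break
else:
    print("trtexec not found; check your TensorRT installation")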

Build TensorRT Engine by TensorRT API

Please see more information in API-Build.md

Notice !!! The TensorRT API build does not support the YOLOv8-seg model yet !!!

Inference

1. Infer with Python script

You can run inference on images with the engine using infer.py.

Usage:

python3 infer.py \
--engine yolov8s.engine \
--imgs data \
--show \
--out-dir outputs \
--device cuda:0

Description of all arguments

  • --engine : The engine you exported.
  • --imgs : The path of the images you want to detect.
  • --show : Whether to show detection results.
  • --out-dir : Where to save the detection result images. It does not take effect when the --show flag is used.
  • --device : The CUDA device you use.
  • --profile : Profile the TensorRT engine.
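
For reference, inference against a 640x640 engine uses the usual YOLO-style letterbox preprocessing (resize with preserved aspect ratio, then pad). The sketch below only illustrates the idea; infer.py does the equivalent internally:

# YOLO-style letterbox preprocessing (illustrative sketch, not the
# exact code in infer.py).
import cv2
import numpy as np

def letterbox(image, size=640):
    h, w = image.shape[:2]
    scale = size / max(h, w)
    resized = cv2.resize(image, (int(w * scale), int(h * scale)))
    # pad with neutral gray (114) to a square size x size canvas
    canvas = np.full((size, size, 3), 114, dtype=np.uint8)
    canvas[:resized.shape[0], :resized.shape[1]] = resized
    # BGR -> RGB, HWC -> NCHW, scale to [0, 1]
    return canvas[:, :, ::-1].transpose(2, 0, 1)[None].astype(np.float32) / 255.0

blob = letterbox(cv2.imread("data/bus.jpg"))
print(blob.shape)  # (1, 3, 640, 640)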

2. Infer with C++

You can run inference with C++ in csrc/detect.

Build:

Please set your own library paths in CMakeLists.txt and modify your own configuration in config.h, such as CLASS_NAMES and COLORS.

export root=${PWD}
cd csrc/detect
mkdir build && cd build
cmake ..
make
mv yolov8 ${root}
cd ${root}

Usage:

# infer image
./yolov8 yolov8s.engine data/bus.jpg
# infer images
./yolov8 yolov8s.engine data
# infer video
./yolov8 yolov8s.engine data/test.mp4 # the video path

TensorRT Segment Deploy

Please see more information in Segment.md

DeepStream Detection Deploy

See more in README.md

Profile your engine

If you want to profile the TensorRT engine:

Usage:

python3 infer.py --engine yolov8s.engine --profile
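
Under the hood, per-layer timings in TensorRT come from attaching an IProfiler to the execution context. A minimal sketch of that API (not necessarily how infer.py implements --profile):

# Minimal per-layer profiler using TensorRT's IProfiler interface
# (a sketch; infer.py may implement --profile differently).
import tensorrt as trt

class LayerProfiler(trt.IProfiler):
    def __init__(self):
        super().__init__()
        self.timings = {}

    def report_layer_time(self, layer_name, ms):
        # called by TensorRT after each layer when using execute_v2()
        self.timings[layer_name] = self.timings.get(layer_name, 0.0) + ms

# attach before running inference:
#   context = engine.create_execution_context()
#   context.profiler = LayerProfiler()
#   context.execute_v2(bindings)  # per-layer times accumulate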