
# YOLOv8-TensorRT

YOLOv8 accelerated with TensorRT!

## Preprocessed ONNX model

You can download the ONNX models pretrained by https://github.com/ultralytics :

**YOLOv8-n**

**YOLOv8-s**

**YOLOv8-m**

**YOLOv8-l**

**YOLOv8-x**

## Build TensorRT engine from ONNX

### 1. By TensorRT Python API

You can export a TensorRT engine with `build.py`.

Usage:

```shell
python3 build.py --onnx yolov8s_nms.onnx --device cuda:0 --fp16
```

Description of all arguments:

- `--onnx` : The ONNX model you downloaded.
- `--device` : The CUDA device on which to build the engine.
- `--fp16` : Whether to build a half-precision (FP16) engine.
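For reference, building an engine from ONNX with the TensorRT Python API follows a standard pattern: parse the model into a network, set builder options, then build and serialize. A minimal sketch of that pattern (TensorRT 8.4+ assumed; the file names mirror the usage above, and `build.py` itself may differ in details):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# YOLOv8 ONNX exports use an explicit batch dimension.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("yolov8s_nms.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse the ONNX model")

config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB
if builder.platform_has_fast_fp16:
    config.set_flag(trt.BuilderFlag.FP16)  # corresponds to --fp16

serialized = builder.build_serialized_network(network, config)
with open("yolov8s_nms.engine", "wb") as f:
    f.write(serialized)
```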

### 2. By the trtexec tool

You can also export the engine with the `trtexec` tool.

Usage:

```shell
/usr/src/tensorrt/bin/trtexec --onnx=yolov8s_nms.onnx --saveEngine=yolov8s_nms.engine --fp16
```

If you installed TensorRT with the Debian package, `trtexec` is located at `/usr/src/tensorrt/bin/trtexec`.

If you installed TensorRT from a tar archive, `trtexec` is under the `bin` folder of the directory you extracted.
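Once the engine is saved, `trtexec` can also load it back for a quick benchmark before you wire it into `infer.py`, e.g. `/usr/src/tensorrt/bin/trtexec --loadEngine=yolov8s_nms.engine`.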

## Infer images with the exported engine

You can run inference on images with the engine via `infer.py`.

Usage:

```shell
python3 infer.py --engine yolov8s_nms.engine --imgs data --show --out-dir outputs --device cuda:0
```

Description of all arguments:

- `--engine` : The engine you exported.
- `--imgs` : The path to the images you want to run detection on.
- `--show` : Whether to show detection results.
- `--out-dir` : Where to save result images. It has no effect when the `--show` flag is used.
- `--device` : The CUDA device you use.
- `--profile` : Profile the TensorRT engine.
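Under the hood, inference with a serialized engine follows the usual TensorRT pattern: deserialize the engine, allocate buffers for each binding, copy the preprocessed image in, execute, and copy the outputs back. A minimal sketch using pycuda (TensorRT 8.x binding API; the input name `"images"`, the 640x640 input size, and the sample path `data/bus.jpg` are assumptions that depend on the exported ONNX, and `infer.py` itself may differ):

```python
import cv2
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("yolov8s_nms.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Page-locked host buffers and device buffers, one per binding.
host, device, bindings = {}, {}, []
for i in range(engine.num_bindings):
    name = engine.get_binding_name(i)
    dtype = trt.nptype(engine.get_binding_dtype(i))
    size = trt.volume(engine.get_binding_shape(i))
    host[name] = cuda.pagelocked_empty(size, dtype)
    device[name] = cuda.mem_alloc(host[name].nbytes)
    bindings.append(int(device[name]))

# Naive preprocessing: resize, BGR -> RGB, HWC -> CHW, scale to [0, 1].
# (infer.py presumably letterboxes instead of plain resizing.)
img = cv2.imread("data/bus.jpg")  # hypothetical sample image
blob = cv2.resize(img, (640, 640))[:, :, ::-1].transpose(2, 0, 1)
blob = np.ascontiguousarray(blob, dtype=np.float32) / 255.0

np.copyto(host["images"], blob.ravel())  # "images" is the assumed input name
cuda.memcpy_htod(device["images"], host["images"])
context.execute_v2(bindings)

# Copy every output binding back to the host for postprocessing.
for i in range(engine.num_bindings):
    if not engine.binding_is_input(i):
        name = engine.get_binding_name(i)
        cuda.memcpy_dtoh(host[name], device[name])
```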

If you want to profile the TensorRT engine:

Usage:

```shell
python3 infer.py --engine yolov8s_nms.engine --profile
```
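The `--profile` flag presumably relies on TensorRT's `IProfiler` interface, which reports per-layer execution times after each launch. A minimal sketch of that mechanism (the class and attribute names here are illustrative, not `infer.py`'s actual code):

```python
import tensorrt as trt

class LayerProfiler(trt.IProfiler):
    """Accumulates the per-layer times TensorRT reports after each execution."""

    def __init__(self):
        trt.IProfiler.__init__(self)
        self.total_ms = {}

    def report_layer_time(self, layer_name, ms):
        self.total_ms[layer_name] = self.total_ms.get(layer_name, 0.0) + ms

# Attach before running the engine, e.g.:
#   context.profiler = LayerProfiler()
#   context.execute_v2(bindings)  # triggers report_layer_time for each layer
#   print(sorted(context.profiler.total_ms.items(), key=lambda kv: -kv[1])[:10])
```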