Add seg README

pull/6/head
triple-Mu 2 years ago
parent 038856cb61
commit e539828600
Changed files:
1. README.md (79 lines changed)
2. build.py (4 lines changed)
3. docs/API-Build.md (31 lines changed)
4. docs/Segment.md (99 lines changed)
5. models/engine.py (9 lines changed)

@ -40,15 +40,18 @@
wget https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8x.pt
```
# Build TensorRT Engine by ONNX
## Export ONNX by `ultralytics` API
### Export Your Own ONNX Model
You can export your own ONNX model with the `ultralytics` API
and add the postprocessing into the model at the same time.
``` shell
python3 export.py \
--weights yolov8s.pt \
--iou-thres 0.65 \
--conf-thres 0.25 \
@ -72,9 +75,9 @@ python export.py \
You will get an ONNX model whose prefix is the same as the input weights.
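As a quick sanity check, you can load the exported file with the `onnx` package and confirm the graph parses. This is a minimal sketch; the file name `yolov8s.onnx` is just the default output for the `yolov8s.pt` example above.
``` python
import onnx

# Load the exported model and run the structural checker.
model = onnx.load('yolov8s.onnx')  # output name follows the input weights' prefix
onnx.checker.check_model(model)

# Listing the graph outputs helps confirm the postprocess was appended.
print([o.name for o in model.graph.output])
```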
### Just Taste First
If you just want to try it out first, you can download the ONNX models that were exported by the `YOLOv8` package and modified by me.
[**YOLOv8-n**](https://triplemu.oss-cn-beijing.aliyuncs.com/YOLOv8/ONNX/yolov8n_nms.onnx?OSSAccessKeyId=LTAI5tN1dgmZD4PF8AJUXp3J&Expires=1772936700&Signature=r6HgJTTcCSAxQxD9bKO9qBTtigQ%3D)
@ -86,14 +89,14 @@ If you just want to taste first, you can dowload the onnx model which are export
[**YOLOv8-x**](https://triplemu.oss-cn-beijing.aliyuncs.com/YOLOv8/ONNX/yolov8x_nms.onnx?OSSAccessKeyId=LTAI5tN1dgmZD4PF8AJUXp3J&Expires=1673936778&Signature=3o%2F7QKhiZg1dW3I6sDrY4ug6MQU%3D)
## Export Engine by TensorRT Python API
You can export a TensorRT engine from ONNX with [`build.py`](build.py).
Usage:
``` shell
python3 build.py \
--weights yolov8s.onnx \
--iou-thres 0.65 \
--conf-thres 0.25 \
@ -113,14 +116,17 @@ python build.py \
You can modify `iou-thres`, `conf-thres` and `topk` by yourself.
## Export Engine by Trtexec Tools
You can export a TensorRT engine with the [`trtexec`](https://github.com/NVIDIA/TensorRT/tree/main/samples/trtexec) tool.
Usage:
``` shell
/usr/src/tensorrt/bin/trtexec \
--onnx=yolov8s.onnx \
--saveEngine=yolov8s.engine \
--fp16
```
**If you installed TensorRT by a Debian package, then the installation path of `trtexec` is `/usr/src/tensorrt/bin/trtexec`.**
@ -128,41 +134,27 @@
**If you installed TensorRT by a tar package, then the installation path of `trtexec` is under the `bin` folder in the path you decompressed.**
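After exporting by either of the methods above, you can quickly verify that the engine deserializes before wiring it into inference. This is a minimal sketch with the TensorRT Python bindings (TensorRT 8.x binding API assumed; the engine name is taken from the examples above):
``` python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open('yolov8s.engine', 'rb') as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# Print the bindings so you can confirm the expected inputs and outputs exist.
for i in range(engine.num_bindings):
    print(engine.get_binding_name(i), engine.get_binding_shape(i))
```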
# Build TensorRT Engine by TensorRT API
Please see more information in [`API-Build.md`](docs/API-Build.md)
***Notice !!!*** Building the YOLOv8-seg model by API is not supported yet !!!
# Inference
## 1. Infer with Python Script
You can infer images with the engine by [`infer.py`](infer.py).
Usage:
``` shell
python3 infer.py \
--engine yolov8s.engine \
--imgs data \
--show \
--out-dir outputs \
--device cuda:0
```
#### Description of all arguments
@ -174,13 +166,13 @@ python3 infer.py --engine yolov8s.engine --imgs data --show --out-dir outputs --
- `--device` : The CUDA device you use.
- `--profile` : Profile the TensorRT engine.
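If you want to consume the raw engine outputs in your own code instead of `infer.py`, the end-to-end detection engine typically returns `num_dets`, `bboxes`, `scores` and `labels` tensors. The sketch below assumes that layout; the output names and shapes are assumptions, not taken from this repo's code.
``` python
import numpy as np

def parse_detections(num_dets, bboxes, scores, labels):
    # Assumed shapes: num_dets (1, 1), bboxes (1, topk, 4), scores (1, topk), labels (1, topk).
    n = int(num_dets[0, 0])  # number of valid detections after NMS
    return bboxes[0, :n], scores[0, :n], labels[0, :n]
```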
## 2. Infer with C++
You can infer with C++ in [`csrc/detect`](csrc/detect).
### Build:
Please set your own libraries in [`CMakeLists.txt`](csrc/detect/CMakeLists.txt) and modify your own config in [`config.h`](csrc/detect/include/config.h), such as `CLASS_NAMES` and `COLORS`.
``` shell
export root=${PWD}
@ -203,6 +195,15 @@ Usage:
./yolov8 yolov8s.engine data/test.mp4 # the video path
```
# TensorRT Segment Deploy
Please see more information in [`Segment.md`](docs/Segment.md)
# DeepStream Detection Deploy
See more in [`README.md`](csrc/deepstream/README.md)
# Profile Your Engine
If you want to profile the TensorRT engine:
@ -212,7 +213,3 @@ Usage:
``` shell
python3 infer.py --engine yolov8s.engine --profile
```

@ -36,6 +36,9 @@ def parse_args():
                        type=str,
                        default='cuda:0',
                        help='TensorRT builder device')
    parser.add_argument('--seg',
                        action='store_true',
                        help='Build seg model by onnx')
    args = parser.parse_args()
    assert len(args.input_shape) == 4
    return args
@ -43,6 +46,7 @@ def parse_args():
def main(args):
    builder = EngineBuilder(args.weights, args.device)
    builder.seg = args.seg
    builder.build(fp16=args.fp16,
                  input_shape=args.input_shape,
                  iou_thres=args.iou_thres,

@ -0,0 +1,31 @@
# Build TensorRT Engine By TensorRT Python API
When you want to build an engine by API, you should generate the pickle weights file first.
``` shell
python3 gen_pkl.py -w yolov8s.pt -o yolov8s.pkl
```
You will get a `yolov8s.pkl` which contains the operators' parameters.
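If you want to peek at what the pickle file holds before building, you can open it directly. This is a minimal sketch, and the exact layout of the stored parameters is an assumption that depends on how `gen_pkl.py` packs them.
``` python
import pickle

with open('yolov8s.pkl', 'rb') as f:
    state = pickle.load(f)

# The structure depends on gen_pkl.py; print a few entries to see what is stored.
print(type(state))
if isinstance(state, dict):
    for name, value in list(state.items())[:5]:
        print(name, getattr(value, 'shape', type(value)))
```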
You can then rebuild the `yolov8s` model with the TensorRT API:
``` shell
python3 build.py \
--weights yolov8s.pkl \
--iou-thres 0.65 \
--conf-thres 0.25 \
--topk 100 \
--fp16 \
--input-shape 1 3 640 640 \
--device cuda:0
```
***Notice !!!***
Only models with a static input shape can be built by the TensorRT API for now.
Make sure you pass a valid `input-shape`.
***Notice !!!***
Building the YOLOv8-seg model by API is not supported now. It will be supported later.

@ -0,0 +1,99 @@
# YOLOv8-seg Model with TensorRT
Instance segmentation models are currently experimental.
Our conversion route is:
YOLOv8 PyTorch model -> ONNX -> TensorRT Engine
***Notice !!!*** Building this model by the TensorRT API is not supported !!!
# Export Your Own ONNX Model
You can export your own ONNX model with the `ultralytics` API.
``` shell
python3 export_seg.py \
--weights yolov8s-seg.pt \
--opset 11 \
--sim \
--input-shape 1 3 640 640 \
--device cuda:0
```
#### Description of all arguments
- `--weights` : The PyTorch model you trained.
- `--opset` : ONNX opset version, default is 11.
- `--sim` : Whether to simplify your ONNX model.
- `--input-shape` : Input shape for your model, should be 4 dimensions.
- `--device` : The CUDA device you export the engine on.
You will get an ONNX model whose prefix is the same as the input weights.
This ONNX model doesn't contain postprocessing.
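Since the graph carries no postprocessing, the raw outputs have to be decoded after inference. For the masks this usually means combining the prototype tensor with the per-box mask coefficients; the NumPy sketch below assumes the standard YOLOv8-seg head with 32 prototype channels (shapes and names are assumptions, not taken from this repo's code).
``` python
import numpy as np

def decode_masks(proto, coeffs, thr=0.5):
    # proto: (32, mh, mw) prototype masks, coeffs: (num_det, 32) per-box coefficients.
    c, mh, mw = proto.shape
    masks = coeffs @ proto.reshape(c, -1)      # linear combination of prototypes
    masks = 1.0 / (1.0 + np.exp(-masks))       # sigmoid
    return masks.reshape(-1, mh, mw) > thr     # binary masks at prototype resolution
```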
# Export Engine by TensorRT Python API
You can export a TensorRT engine from ONNX with [`build.py`](../build.py).
Usage:
``` shell
python3 build.py \
--weights yolov8s-seg.onnx \
--fp16 \
--device cuda:0 \
--seg
```
#### Description of all arguments
- `--weights` : The ONNX model you exported.
- `--fp16` : Whether to export half-precision engine.
- `--device` : The CUDA device you export the engine on.
- `--seg` : Whether to export seg engine.
# Export Engine by Trtexec Tools
You can export a TensorRT engine with the [`trtexec`](https://github.com/NVIDIA/TensorRT/tree/main/samples/trtexec) tool.
Usage:
``` shell
/usr/src/tensorrt/bin/trtexec \
--onnx=yolov8s-seg.onnx \
--saveEngine=yolov8s-seg.engine \
--fp16
```
# Inference
## Infer with python script
You can infer images with the engine by [`infer.py`](../infer.py) .
Usage:
``` shell
python3 infer.py \
--engine yolov8s-seg.engine \
--imgs data \
--show \
--out-dir outputs \
--device cuda:0 \
--seg
```
#### Description of all arguments
- `--engine` : The Engine you export.
- `--imgs` : The path of the images you want to detect.
- `--show` : Whether to show detection results.
- `--out-dir` : Where to save detection result images. It will not work when the `--show` flag is used.
- `--device` : The CUDA device you use.
- `--profile` : Profile the TensorRT engine.
- `--seg` : Infer with the seg model.
## Infer with C++
***Notice !!!*** COMING SOON !!!

@ -9,6 +9,7 @@ import torch
class EngineBuilder:
    seg = False

    def __init__(
            self,
@ -77,9 +78,10 @@ class EngineBuilder:
                        topk: int = 100):
        parser = trt.OnnxParser(self.network, self.logger)
        onnx_model = onnx.load(str(self.checkpoint))
        # The detection ONNX graph ends with a postprocess node whose topk,
        # conf and iou attributes are patched here; the seg ONNX carries no
        # postprocess node, so it is left untouched.
        if not self.seg:
            onnx_model.graph.node[-1].attribute[2].i = topk
            onnx_model.graph.node[-1].attribute[3].f = conf_thres
            onnx_model.graph.node[-1].attribute[4].f = iou_thres
        if not parser.parse(onnx_model.SerializeToString()):
            raise RuntimeError(
@ -110,6 +112,7 @@ class EngineBuilder:
                  conf_thres: float = 0.25,
                  topk: int = 100,
                  ):
        assert not self.seg
        from .api import SPPF, C2f, Conv, Detect, get_depth, get_width
        with open(self.checkpoint, 'rb') as f:
