---
comments: true
description: Step-by-step guide on exporting your YOLOv8 models to various formats like ONNX, TensorRT, CoreML and more for deployment. Explore now!
keywords: YOLO, YOLOv8, Ultralytics, Model export, ONNX, TensorRT, CoreML, TensorFlow SavedModel, OpenVINO, PyTorch, export model
---
# Model Export with Ultralytics YOLO

## Introduction
The ultimate goal of training a model is to deploy it for real-world applications. Export mode in Ultralytics YOLOv8 offers a versatile range of options for exporting your trained model to different formats, making it deployable across various platforms and devices. This comprehensive guide aims to walk you through the nuances of model exporting, showcasing how to achieve maximum compatibility and performance.
Watch: How To Export Custom Trained Ultralytics YOLOv8 Model and Run Live Inference on Webcam.
## Why Choose YOLOv8's Export Mode?
- Versatility: Export to multiple formats including ONNX, TensorRT, CoreML, and more.
- Performance: Gain up to 5x GPU speedup with TensorRT and 3x CPU speedup with ONNX or OpenVINO.
- Compatibility: Make your model universally deployable across numerous hardware and software environments.
- Ease of Use: Simple CLI and Python API for quick and straightforward model exporting.
## Key Features of Export Mode
Here are some of the standout functionalities:
- One-Click Export: Simple commands for exporting to different formats.
- Batch Export: Export batched-inference capable models.
- Optimized Inference: Exported models are optimized for quicker inference times.
- Tutorial Videos: In-depth guides and tutorials for a smooth exporting experience.
!!! tip "Tip"
* Export to ONNX or OpenVINO for up to 3x CPU speedup.
* Export to TensorRT for up to 5x GPU speedup.
## Usage Examples

Export a YOLOv8n model to a different format like ONNX or TensorRT. See the Arguments section below for a full list of export arguments.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n.pt') # load an official model
model = YOLO('path/to/best.pt') # load a custom trained
# Export the model
model.export(format='onnx')
```
=== "CLI"
```bash
yolo export model=yolov8n.pt format=onnx # export official model
yolo export model=path/to/best.pt format=onnx # export custom trained model
```
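Once exported, the resulting file can be loaded straight back into the same Ultralytics API for inference. A minimal sketch, assuming the ONNX export above has already produced `yolov8n.onnx` in the working directory and that the sample image URL used throughout the Ultralytics docs is reachable:

```python
from ultralytics import YOLO

# Load the exported ONNX model (path assumes the export above was run here)
onnx_model = YOLO('yolov8n.onnx')

# Run inference; bus.jpg is the sample image used in the Ultralytics docs
results = onnx_model('https://ultralytics.com/images/bus.jpg')

# Print the detected boxes for a quick sanity check
print(results[0].boxes)
```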
## Arguments
Export settings for YOLO models refer to the various configurations and options used to save or export the model for use in other environments or platforms. These settings can affect the model's performance, size, and compatibility with different systems. Some common YOLO export settings include the format of the exported model file (e.g. ONNX, TensorFlow SavedModel), the device on which the model will be run (e.g. CPU, GPU), and the presence of additional features such as masks or multiple labels per box. Other factors that may affect the export process include the specific task the model is being used for and the requirements or constraints of the target environment or platform. It is important to carefully consider and configure these settings to ensure that the exported model is optimized for the intended use case and can be used effectively in the target environment.
| Key         | Value           | Description                                           |
|-------------|-----------------|-------------------------------------------------------|
| `format`    | `'torchscript'` | format to export to                                   |
| `imgsz`     | `640`           | image size as scalar or (h, w) list, i.e. (640, 480)  |
| `keras`     | `False`         | use Keras for TF SavedModel export                    |
| `optimize`  | `False`         | TorchScript: optimize for mobile                      |
| `half`      | `False`         | FP16 quantization                                     |
| `int8`      | `False`         | INT8 quantization                                     |
| `dynamic`   | `False`         | ONNX/TensorRT: dynamic axes                           |
| `simplify`  | `False`         | ONNX/TensorRT: simplify model                         |
| `opset`     | `None`          | ONNX: opset version (optional, defaults to latest)    |
| `workspace` | `4`             | TensorRT: workspace size (GB)                         |
| `nms`       | `False`         | CoreML: add NMS                                       |
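Multiple arguments can be combined in a single `export()` call. The sketch below is illustrative rather than a recommendation: it exports to ONNX with the default 640x640 input size, dynamic input axes, and graph simplification enabled, assuming the ONNX dependencies are available in your environment.

```python
from ultralytics import YOLO

# Load the official YOLOv8n model
model = YOLO('yolov8n.pt')

# Combine several export arguments: ONNX format, 640x640 default size,
# dynamic input axes, and ONNX graph simplification
model.export(format='onnx', imgsz=640, dynamic=True, simplify=True)
```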
## Export Formats
Available YOLOv8 export formats are in the table below. You can export to any format using the `format` argument, i.e. `format='onnx'` or `format='engine'`.
| Format        | `format` Argument | Model                     | Metadata | Arguments                                           |
|---------------|-------------------|---------------------------|----------|-----------------------------------------------------|
| PyTorch       | -                 | `yolov8n.pt`              | ✅        | -                                                   |
| TorchScript   | `torchscript`     | `yolov8n.torchscript`     | ✅        | `imgsz`, `optimize`                                 |
| ONNX          | `onnx`            | `yolov8n.onnx`            | ✅        | `imgsz`, `half`, `dynamic`, `simplify`, `opset`     |
| OpenVINO      | `openvino`        | `yolov8n_openvino_model/` | ✅        | `imgsz`, `half`                                     |
| TensorRT      | `engine`          | `yolov8n.engine`          | ✅        | `imgsz`, `half`, `dynamic`, `simplify`, `workspace` |
| CoreML        | `coreml`          | `yolov8n.mlpackage`       | ✅        | `imgsz`, `half`, `int8`, `nms`                      |
| TF SavedModel | `saved_model`     | `yolov8n_saved_model/`    | ✅        | `imgsz`, `keras`                                    |
| TF GraphDef   | `pb`              | `yolov8n.pb`              | ❌        | `imgsz`                                             |
| TF Lite       | `tflite`          | `yolov8n.tflite`          | ✅        | `imgsz`, `half`, `int8`                             |
| TF Edge TPU   | `edgetpu`         | `yolov8n_edgetpu.tflite`  | ✅        | `imgsz`                                             |
| TF.js         | `tfjs`            | `yolov8n_web_model/`      | ✅        | `imgsz`                                             |
| PaddlePaddle  | `paddle`          | `yolov8n_paddle_model/`   | ✅        | `imgsz`                                             |
| ncnn          | `ncnn`            | `yolov8n_ncnn_model/`     | ✅        | `imgsz`, `half`                                     |