---
comments: true
description: From training to tracking, make the most of YOLOv8 with Ultralytics. Get insights and examples for each supported mode including validation, export, and benchmarking.
keywords: Ultralytics, YOLOv8, Machine Learning, Object Detection, Training, Validation, Prediction, Export, Tracking, Benchmarking
---

# Ultralytics YOLOv8 Modes

<img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">

## Introduction

Ultralytics YOLOv8 is not just another object detection model; it's a versatile framework designed to cover the entire lifecycle of machine learning models—from data ingestion and model training to validation, deployment, and real-world tracking. Each mode serves a specific purpose and is engineered to offer you the flexibility and efficiency required for different tasks and use cases.

### Modes at a Glance

Understanding the different **modes** that Ultralytics YOLOv8 supports is critical to getting the most out of your models:

- **Train** mode: Fine-tune your model on custom or preloaded datasets.
- **Val** mode: A post-training checkpoint to validate model performance.
- **Predict** mode: Unleash the predictive power of your model on real-world data.
- **Export** mode: Make your model deployment-ready in various formats.
- **Track** mode: Extend your object detection model into real-time tracking applications.
- **Benchmark** mode: Analyze the speed and accuracy of your model in diverse deployment environments.

This comprehensive guide aims to give you an overview and practical insights into each mode, helping you harness the full potential of YOLOv8.

## [Train](train.md)

Train mode is used for training a YOLOv8 model on a custom dataset. In this mode, the model is trained using the specified dataset and hyperparameters. The training process involves optimizing the model's parameters so that it can accurately predict the classes and locations of objects in an image.

[Train Examples](train.md){ .md-button .md-button--primary}

## [Val](val.md)

Val mode is used for validating a YOLOv8 model after it has been trained. In this mode, the model is evaluated on a validation set to measure its accuracy and generalization performance. This mode can be used to tune the hyperparameters of the model to improve its performance.

[Val Examples](val.md){ .md-button .md-button--primary}

## [Predict](predict.md)

Predict mode is used for making predictions using a trained YOLOv8 model on new images or videos. In this mode, the model is loaded from a checkpoint file, and the user can provide images or videos to perform inference. The model predicts the classes and locations of objects in the input images or videos.

[Predict Examples](predict.md){ .md-button .md-button--primary}

## [Export](export.md)

Export mode is used for exporting a YOLOv8 model to a format that can be used for deployment. In this mode, the model is converted to a format that can be used by other software applications or hardware devices. This mode is useful when deploying the model to production environments.
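
For example, exporting to ONNX might look like this (assuming the `ultralytics` package; other format strings include `"torchscript"`, `"openvino"`, and `"engine"` for TensorRT):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Convert the model to ONNX and return the path of the exported file
path = model.export(format="onnx")
print(path)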

[Export Examples](export.md){ .md-button .md-button--primary}

## [Track](track.md)

Track mode is used for tracking objects in real-time using a YOLOv8 model. In this mode, the model is loaded from a checkpoint file, and the user can provide a live video stream to perform real-time object tracking. This mode is useful for applications such as surveillance systems or self-driving cars.

[Track Examples](track.md){ .md-button .md-button--primary}

## [Benchmark](benchmark.md)

Benchmark mode is used to profile the speed and accuracy of various export formats for YOLOv8. The benchmarks provide information on the size of the exported model, its `mAP50-95` metrics (for object detection, segmentation and pose) or `accuracy_top5` metrics (for classification), and the inference time in milliseconds per image across various export formats like ONNX, OpenVINO, TensorRT and others. This information can help users choose the optimal export format for their specific use case based on their requirements for speed and accuracy.

[Benchmark Examples](benchmark.md){ .md-button .md-button--primary}