`ultralytics 8.0.177` add https://youtube.com/ultralytics videos to Docs (#4875)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Muhammad Rizwan Munawar <62513924+RizwanMunawar@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Glenn Jocher, committed by GitHub
commit dd2262e89a, parent e73447effb, tag v8.0.177
30 changed files:

1. .github/workflows/docker.yaml (6 changed lines)
2. docs/hub/datasets.md (6 changed lines)
3. docs/hub/index.md (20 changed lines)
4. docs/hub/integrations.md (59 changed lines)
5. docs/hub/models.md (2 changed lines)
6. docs/hub/projects.md (2 changed lines)
7. docs/hub/quickstart.md (49 changed lines)
8. docs/index.md (11 changed lines)
9. docs/integrations/openvino.md (15 changed lines)
10. docs/modes/benchmark.md (36 changed lines)
11. docs/modes/export.md (50 changed lines)
12. docs/modes/index.md (49 changed lines)
13. docs/modes/predict.md (37 changed lines)
14. docs/modes/track.md (32 changed lines)
15. docs/modes/train.md (35 changed lines)
16. docs/modes/val.md (24 changed lines)
17. docs/stylesheets/style.css (1 changed line)
18. docs/tasks/classify.md (26 changed lines)
19. docs/tasks/detect.md (15 changed lines)
20. docs/tasks/index.md (25 changed lines)
21. docs/tasks/pose.md (37 changed lines)
22. docs/tasks/segment.md (34 changed lines)
23. examples/YOLOv8-SAHI-Inference-Video/readme.md (9 changed lines)
24. examples/YOLOv8-SAHI-Inference-Video/yolov8_sahi.py (4 changed lines)
25. mkdocs.yml (2 changed lines)
26. setup.py (4 changed lines)
27. tests/conftest.py (14 changed lines)
28. tests/test_cli.py (6 changed lines)
29. ultralytics/__init__.py (2 changed lines)
30. ultralytics/utils/tuner.py (2 changed lines)

.github/workflows/docker.yaml:

@@ -67,13 +67,13 @@ jobs:
           fetch-depth: 0 # copy full .git directory to access full git history in Docker images
       - name: Set up QEMU
-        uses: docker/setup-qemu-action@v2
+        uses: docker/setup-qemu-action@v3
       - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v2
+        uses: docker/setup-buildx-action@v3
       - name: Login to Docker Hub
-        uses: docker/login-action@v2
+        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

docs/hub/datasets.md:

@@ -6,14 +6,13 @@ keywords: Ultralytics, HUB datasets, YOLO model training, upload datasets, datas
 # HUB Datasets

-Ultralytics HUB datasets are a practical solution for managing and leveraging your custom datasets.
+[Ultralytics HUB](https://hub.ultralytics.com/) datasets are a practical solution for managing and leveraging your custom datasets.

 Once uploaded, datasets can be immediately utilized for model training. This integrated approach facilitates a seamless transition from dataset management to model training, significantly simplifying the entire process.

 ## Upload Dataset

-Ultralytics HUB datasets are just like YOLOv5 and YOLOv8 🚀 datasets. They use the same structure and the same label formats to keep
-everything simple.
+Ultralytics HUB datasets are just like YOLOv5 and YOLOv8 🚀 datasets. They use the same structure and the same label formats to keep everything simple.

 Before you upload a dataset to Ultralytics HUB, make sure to **place your dataset YAML file inside the dataset root directory** and that **your dataset YAML, directory and ZIP have the same name**, as shown in the example below, and then zip the dataset directory.

@@ -41,6 +40,7 @@ After zipping your dataset, you should validate it before uploading it to Ultral
     ```py
     from ultralytics.hub import check_dataset
+
     check_dataset('path/to/coco8.zip')
     ```

docs/hub/index.md:

@@ -18,16 +18,22 @@ keywords: Ultralytics HUB, YOLOv5, YOLOv8, model training, model deployment, pre
 </div>
 <br>

-👋 Hello from the [Ultralytics](https://ultralytics.com/) Team! We've been working hard these last few months to
-launch [Ultralytics HUB](https://bit.ly/ultralytics_hub), a new web tool for training and deploying all your YOLOv5 and YOLOv8 🚀
-models from one spot!
+👋 Hello from the [Ultralytics](https://ultralytics.com/) Team! We've been working hard these last few months to launch [Ultralytics HUB](https://bit.ly/ultralytics_hub), a new web tool for training and deploying all your YOLOv5 and YOLOv8 🚀 models from one spot!

 ## Introduction

-HUB is designed to be user-friendly and intuitive, with a drag-and-drop interface that allows users to
-easily upload their data and train new models quickly. It offers a range of pre-trained models and
-templates to choose from, making it easy for users to get started with training their own models. Once a model is
-trained, it can be easily deployed and used for real-time object detection, instance segmentation and classification tasks.
+HUB is designed to be user-friendly and intuitive, with a drag-and-drop interface that allows users to easily upload their data and train new models quickly. It offers a range of pre-trained models and templates to choose from, making it easy for users to get started with training their own models. Once a model is trained, it can be easily deployed and used for real-time object detection, instance segmentation and classification tasks.
+
+<p align="center">
+  <br>
+  <iframe width="720" height="405" src="https://www.youtube.com/embed/lveF9iCMIzc?si=_Q4WB5kMB5qNe7q6"
+    title="YouTube video player" frameborder="0"
+    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
+    allowfullscreen>
+  </iframe>
+  <br>
+  <strong>Watch:</strong> Train Your Custom YOLO Models In A Few Clicks with Ultralytics HUB.
+</p>

 We hope that the resources here will help you get the most out of HUB. Please browse the HUB <a href="https://docs.ultralytics.com/hub">Docs</a> for details, raise an issue on <a href="https://github.com/ultralytics/hub/issues/new/choose">GitHub</a> for support, and join our <a href="https://ultralytics.com/discord">Discord</a> community for questions and discussions!

docs/hub/integrations.md:

@@ -1,7 +1,62 @@
 ---
 comments: true
+description: Explore integration options for Ultralytics HUB. Currently featuring Roboflow for dataset integration and multiple export formats for your trained models.
+keywords: Ultralytics HUB, Integrations, Roboflow, Dataset, Export, YOLOv5, YOLOv8, ONNX, CoreML, TensorRT, TensorFlow
 ---

-# 🚧 Page Under Construction ⚒
+# HUB Integrations

-This page is currently under construction! 👷Please check back later for updates. 😃🔜
+🚧 **Under Construction** 🚧
+
+Welcome to the Integrations guide for [Ultralytics HUB](https://hub.ultralytics.com/)! We are in the process of expanding this section to provide you with comprehensive guidance on integrating your YOLOv5 and YOLOv8 models with various platforms and formats. Currently, Roboflow is our available dataset integration, with a wide range of export integrations for your trained models.
+
+<p align="center">
+  <br>
+  <iframe width="720" height="405" src="https://www.youtube.com/embed/lveF9iCMIzc?si=_Q4WB5kMB5qNe7q6"
+    title="YouTube video player" frameborder="0"
+    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
+    allowfullscreen>
+  </iframe>
+  <br>
+  <strong>Watch:</strong> Train Your Custom YOLO Models In A Few Clicks with Ultralytics HUB.
+</p>
+
+## Available Integrations
+
+### Dataset Integrations
+
+- **Roboflow**: Seamlessly import your datasets for training.
+
+### Export Integrations
+
+| Format                                                             | `format` Argument | Model                     | Metadata | Arguments                                           |
+|--------------------------------------------------------------------|-------------------|---------------------------|----------|-----------------------------------------------------|
+| [PyTorch](https://pytorch.org/)                                    | -                 | `yolov8n.pt`              | ✅        | -                                                   |
+| [TorchScript](https://pytorch.org/docs/stable/jit.html)            | `torchscript`     | `yolov8n.torchscript`     | ✅        | `imgsz`, `optimize`                                 |
+| [ONNX](https://onnx.ai/)                                           | `onnx`            | `yolov8n.onnx`            | ✅        | `imgsz`, `half`, `dynamic`, `simplify`, `opset`     |
+| [OpenVINO](../integrations/openvino.md)                            | `openvino`        | `yolov8n_openvino_model/` | ✅        | `imgsz`, `half`                                     |
+| [TensorRT](https://developer.nvidia.com/tensorrt)                  | `engine`          | `yolov8n.engine`          | ✅        | `imgsz`, `half`, `dynamic`, `simplify`, `workspace` |
+| [CoreML](https://github.com/apple/coremltools)                     | `coreml`          | `yolov8n.mlpackage`       | ✅        | `imgsz`, `half`, `int8`, `nms`                      |
+| [TF SavedModel](https://www.tensorflow.org/guide/saved_model)      | `saved_model`     | `yolov8n_saved_model/`    | ✅        | `imgsz`, `keras`                                    |
+| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb`              | `yolov8n.pb`              | ❌        | `imgsz`                                             |
+| [TF Lite](https://www.tensorflow.org/lite)                         | `tflite`          | `yolov8n.tflite`          | ✅        | `imgsz`, `half`, `int8`                             |
+| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/)         | `edgetpu`         | `yolov8n_edgetpu.tflite`  | ✅        | `imgsz`                                             |
+| [TF.js](https://www.tensorflow.org/js)                             | `tfjs`            | `yolov8n_web_model/`      | ✅        | `imgsz`                                             |
+| [PaddlePaddle](https://github.com/PaddlePaddle)                    | `paddle`          | `yolov8n_paddle_model/`   | ✅        | `imgsz`                                             |
+| [NCNN](https://github.com/Tencent/ncnn)                            | `ncnn`            | `yolov8n_ncnn_model/`     | ✅        | `imgsz`, `half`                                     |
+
+## Coming Soon
+
+- Additional Dataset Integrations
+- Detailed Export Integration Guides
+- Step-by-Step Tutorials for Each Integration
+
+## Need Immediate Assistance?
+
+While we're in the process of creating detailed guides:
+
+- Browse through other [HUB Docs](https://docs.ultralytics.com/hub/) for detailed guides and tutorials.
+- Raise an issue on our [GitHub](https://github.com/ultralytics/hub/) for technical support.
+- Join our [Discord Community](https://ultralytics.com/discord/) for live discussions and community support.
+
+We appreciate your patience as we work to make this section comprehensive and user-friendly. Stay tuned for updates!
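For reference, the `format` values in the table map directly onto the standard `ultralytics` Python API; a minimal sketch (the model and format choice are illustrative):

```py
from ultralytics import YOLO

# Load a pretrained model and export it using a `format` key from the table above
model = YOLO('yolov8n.pt')
model.export(format='onnx')  # creates 'yolov8n.onnx' alongside the weights
```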

docs/hub/models.md:

@@ -6,7 +6,7 @@ keywords: Ultralytics, HUB Models, AI model training, model creation, model trai
 # Ultralytics HUB Models

-Ultralytics HUB models provide a streamlined solution for training vision AI models on your custom datasets.
+[Ultralytics HUB](https://hub.ultralytics.com/) models provide a streamlined solution for training vision AI models on your custom datasets.

 The process is user-friendly and efficient, involving a simple three-step creation and accelerated training powered by Ultralytics YOLOv8. During training, real-time updates on model metrics are available so that you can monitor each step of the progress. Once training is completed, you can preview your model and easily deploy it to real-world applications. Therefore, Ultralytics HUB offers a comprehensive yet straightforward system for model creation, training, evaluation, and deployment.

docs/hub/projects.md:

@@ -6,7 +6,7 @@ keywords: Ultralytics, HUB projects, Create project, Edit project, Share project
 # Ultralytics HUB Projects

-Ultralytics HUB projects provide an effective solution for consolidating and managing your models. If you are working with several models that perform similar tasks or have related purposes, Ultralytics HUB projects allow you to group these models together.
+[Ultralytics HUB](https://hub.ultralytics.com/) projects provide an effective solution for consolidating and managing your models. If you are working with several models that perform similar tasks or have related purposes, Ultralytics HUB projects allow you to group these models together.

 This creates a unified and organized workspace that facilitates easier model management, comparison and development. Having similar models or various iterations together can facilitate rapid benchmarking, as you can compare their effectiveness. This can lead to faster, more insightful iterative development and refinement of your models.

docs/hub/quickstart.md:

@@ -1,7 +1,52 @@
 ---
 comments: true
+description: Kickstart your journey with Ultralytics HUB. Learn how to train and deploy YOLOv5 and YOLOv8 models in seconds with our Quickstart guide.
+keywords: Ultralytics HUB, Quickstart, YOLOv5, YOLOv8, model training, quick deployment, drag-and-drop interface, real-time object detection
 ---

-# 🚧 Page Under Construction ⚒
+# Quickstart Guide for Ultralytics HUB

-This page is currently under construction! 👷Please check back later for updates. 😃🔜
+🚧 **Under Construction** 🚧
+
+Thank you for visiting the Quickstart guide for [Ultralytics HUB](https://hub.ultralytics.com/)! We're currently hard at work building out this page to provide you with step-by-step instructions on how to get up and running with HUB in no time.
+
+<p align="center">
+  <br>
+  <iframe width="720" height="405" src="https://www.youtube.com/embed/lveF9iCMIzc?si=_Q4WB5kMB5qNe7q6"
+    title="YouTube video player" frameborder="0"
+    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
+    allowfullscreen>
+  </iframe>
+  <br>
+  <strong>Watch:</strong> Train Your Custom YOLO Models In A Few Clicks with Ultralytics HUB.
+</p>
+
+In the meantime, here's a brief overview of what you can expect from Ultralytics HUB:
+
+## What is Ultralytics HUB?
+
+Ultralytics HUB is your one-stop solution for training and deploying YOLOv5 and YOLOv8 models. It's designed with user experience in mind, featuring a drag-and-drop interface to make uploading data and training new models a breeze. Whether you're a beginner or an experienced machine learning practitioner, HUB has a range of pre-trained models and templates to accelerate your projects.
+
+## Key Features
+
+- **User-Friendly Interface**: Simply drag and drop your data to start training.
+- **Pre-Trained Models**: Choose from a selection of pre-trained models to kick-start your projects.
+- **Real-Time Object Detection**: Deploy trained models easily for real-time object detection, instance segmentation, and classification tasks.
+
+## Coming Soon
+
+- Detailed Steps to Start Your First Project
+- Guide on Preparing and Uploading Datasets
+- Tutorial on Model Training and Exporting
+- Integration Options and How-To's
+- And much more!
+
+## Need Help Now?
+
+While we're polishing this page, feel free to:
+
+- Browse through other [HUB Docs](https://docs.ultralytics.com/hub/) for detailed guides and tutorials.
+- Raise an issue on our [GitHub](https://github.com/ultralytics/hub/) for technical support.
+- Join our [Discord Community](https://ultralytics.com/discord/) for live discussions and community support.
+
+Stay tuned! We'll be back soon with more detailed information to help you get the most out of Ultralytics HUB. Thank you for your patience and interest!

docs/index.md:

@@ -30,6 +30,17 @@ Explore the YOLOv8 Docs, a comprehensive resource designed to help you understan
 - **Train** a new YOLOv8 model on your own custom dataset &nbsp; [:fontawesome-solid-brain: Train a Model](modes/train.md){ .md-button }
 - **Explore** YOLOv8 tasks like segment, classify, pose and track &nbsp; [:material-magnify-expand: Explore Tasks](tasks/index.md){ .md-button }

+<p align="center">
+  <br>
+  <iframe width="720" height="405" src="https://www.youtube.com/embed/LNwODJXcvt4?si=7n1UvGRLSd9p5wKs"
+    title="YouTube video player" frameborder="0"
+    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
+    allowfullscreen>
+  </iframe>
+  <br>
+  <strong>Watch:</strong> How to Train a YOLOv8 model on Your Custom Dataset in Google Colab.
+</p>
+
 ## YOLO: A Brief History

 [YOLO](https://arxiv.org/abs/1506.02640) (You Only Look Once), a popular object detection and image segmentation model, was developed by Joseph Redmon and Ali Farhadi at the University of Washington. Launched in 2015, YOLO quickly gained popularity for its high speed and accuracy.

docs/integrations/openvino.md:

@@ -4,12 +4,25 @@ description: Discover the power of deploying your Ultralytics YOLOv8 model using
 keywords: ultralytics docs, YOLOv8, export YOLOv8, YOLOv8 model deployment, exporting YOLOv8, OpenVINO, OpenVINO format
 ---

+# Intel OpenVINO Export
+
 <img width="1024" src="https://user-images.githubusercontent.com/26833433/252345644-0cf84257-4b34-404c-b7ce-eb73dfbcaff1.png" alt="OpenVINO Ecosystem">

-**Export mode** is used for exporting a YOLOv8 model to a format that can be used for deployment. In this guide, we specifically cover exporting to OpenVINO, which can provide up to 3x [CPU](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_CPU.html) speedup as well as accelerating on other Intel hardware ([iGPU](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GPU.html), [dGPU](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GPU.html), [VPU](https://docs.openvino.ai/2022.3/openvino_docs_OV_UG_supported_plugins_VPU.html), etc.).
+In this guide, we cover exporting YOLOv8 models to the [OpenVINO](https://docs.openvino.ai/) format, which can provide up to 3x [CPU](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_CPU.html) speedup as well as accelerating on other Intel hardware ([iGPU](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GPU.html), [dGPU](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GPU.html), [VPU](https://docs.openvino.ai/2022.3/openvino_docs_OV_UG_supported_plugins_VPU.html), etc.).

 OpenVINO, short for Open Visual Inference & Neural Network Optimization toolkit, is a comprehensive toolkit for optimizing and deploying AI inference models. Even though the name contains Visual, OpenVINO also supports various additional tasks including language, audio, time series, etc.

+<p align="center">
+  <br>
+  <iframe width="720" height="405" src="https://www.youtube.com/embed/kONm9nE5_Fk?si=kzquuBrxjSbntHoU"
+    title="YouTube video player" frameborder="0"
+    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
+    allowfullscreen>
+  </iframe>
+  <br>
+  <strong>Watch:</strong> How To Export and Optimize an Ultralytics YOLOv8 Model for Inference with OpenVINO.
+</p>
+
 ## Usage Examples

 Export a YOLOv8n model to OpenVINO format and run inference with the exported model.
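A compact sketch of that round trip using the standard `ultralytics` Python API (the inference image URL is illustrative):

```py
from ultralytics import YOLO

# Export a pretrained model to OpenVINO, then reload the exported model for inference
model = YOLO('yolov8n.pt')
model.export(format='openvino')  # creates the 'yolov8n_openvino_model/' directory

ov_model = YOLO('yolov8n_openvino_model/')
results = ov_model('https://ultralytics.com/images/bus.jpg')
```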

docs/modes/benchmark.md:

@@ -4,13 +4,33 @@ description: Learn how to profile speed and accuracy of YOLOv8 across various ex
 keywords: Ultralytics, YOLOv8, benchmarking, speed profiling, accuracy profiling, mAP50-95, accuracy_top5, ONNX, OpenVINO, TensorRT, YOLO export formats
 ---

+# Model Benchmarking with Ultralytics YOLO
+
 <img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">

-**Benchmark mode** is used to profile the speed and accuracy of various export formats for YOLOv8. The benchmarks
-provide information on the size of the exported format, its `mAP50-95` metrics (for object detection, segmentation and pose)
-or `accuracy_top5` metrics (for classification), and the inference time in milliseconds per image across various export
-formats like ONNX, OpenVINO, TensorRT and others. This information can help users choose the optimal export format for
-their specific use case based on their requirements for speed and accuracy.
+## Introduction
+
+Once your model is trained and validated, the next logical step is to evaluate its performance in various real-world scenarios. Benchmark mode in Ultralytics YOLOv8 serves this purpose by providing a robust framework for assessing the speed and accuracy of your model across a range of export formats.
+
+## Why Is Benchmarking Crucial?
+
+- **Informed Decisions:** Gain insights into the trade-offs between speed and accuracy.
+- **Resource Allocation:** Understand how different export formats perform on different hardware.
+- **Optimization:** Learn which export format offers the best performance for your specific use case.
+- **Cost Efficiency:** Make more efficient use of hardware resources based on benchmark results.
+
+### Key Metrics in Benchmark Mode
+
+- **mAP50-95:** For object detection, segmentation, and pose estimation.
+- **accuracy_top5:** For image classification.
+- **Inference Time:** Time taken for each image in milliseconds.
+
+### Supported Export Formats
+
+- **ONNX:** For optimal CPU performance
+- **TensorRT:** For maximal GPU efficiency
+- **OpenVINO:** For Intel hardware optimization
+- **CoreML, TensorFlow SavedModel, and More:** For diverse deployment needs.
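As a reference, a minimal sketch using the `benchmark` helper from `ultralytics.utils.benchmarks` (the dataset and device choice are illustrative):

```py
from ultralytics.utils.benchmarks import benchmark

# Profile YOLOv8n across the supported export formats on GPU 0,
# using the small COCO8 sample dataset at image size 640
benchmark(model='yolov8n.pt', data='coco8.yaml', imgsz=640, half=False, device=0)
```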
!!! tip "Tip" !!! tip "Tip"
@@ -19,8 +39,7 @@ their specific use case based on their requirements for speed and accuracy.
 ## Usage Examples

-Run YOLOv8n benchmarks on all supported export formats including ONNX, TensorRT etc. See Arguments section below for a
-full list of export arguments.
+Run YOLOv8n benchmarks on all supported export formats including ONNX, TensorRT etc. See Arguments section below for a full list of export arguments.

 !!! example ""
@@ -40,8 +59,7 @@ full list of export arguments.
 ## Arguments

-Arguments such as `model`, `data`, `imgsz`, `half`, `device`, and `verbose` provide users with the flexibility to fine-tune
-the benchmarks to their specific needs and compare the performance of different export formats with ease.
+Arguments such as `model`, `data`, `imgsz`, `half`, `device`, and `verbose` provide users with the flexibility to fine-tune the benchmarks to their specific needs and compare the performance of different export formats with ease.

 | Key       | Value   | Description                                                           |
 |-----------|---------|-----------------------------------------------------------------------|

docs/modes/export.md:

@@ -4,11 +4,40 @@ description: Step-by-step guide on exporting your YOLOv8 models to various forma
 keywords: YOLO, YOLOv8, Ultralytics, Model export, ONNX, TensorRT, CoreML, TensorFlow SavedModel, OpenVINO, PyTorch, export model
 ---

+# Model Export with Ultralytics YOLO
+
 <img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">

-**Export mode** is used for exporting a YOLOv8 model to a format that can be used for deployment. In this mode, the
-model is converted to a format that can be used by other software applications or hardware devices. This mode is useful
-when deploying the model to production environments.
+## Introduction
+
+The ultimate goal of training a model is to deploy it for real-world applications. Export mode in Ultralytics YOLOv8 offers a versatile range of options for exporting your trained model to different formats, making it deployable across various platforms and devices. This comprehensive guide aims to walk you through the nuances of model exporting, showcasing how to achieve maximum compatibility and performance.
+
+<p align="center">
+  <br>
+  <iframe width="720" height="405" src="https://www.youtube.com/embed/WbomGeoOT_k?si=aGmuyooWftA0ue9X"
+    title="YouTube video player" frameborder="0"
+    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
+    allowfullscreen>
+  </iframe>
+  <br>
+  <strong>Watch:</strong> How To Export Custom Trained Ultralytics YOLOv8 Model and Run Live Inference on Webcam.
+</p>
+
+## Why Choose YOLOv8's Export Mode?
+
+- **Versatility:** Export to multiple formats including ONNX, TensorRT, CoreML, and more.
+- **Performance:** Gain up to 5x GPU speedup with TensorRT and 3x CPU speedup with ONNX or OpenVINO.
+- **Compatibility:** Make your model universally deployable across numerous hardware and software environments.
+- **Ease of Use:** Simple CLI and Python API for quick and straightforward model exporting.
+
+### Key Features of Export Mode
+
+Here are some of the standout functionalities:
+
+- **One-Click Export:** Simple commands for exporting to different formats.
+- **Batch Export:** Export batched-inference capable models.
+- **Optimized Inference:** Exported models are optimized for quicker inference times.
+- **Tutorial Videos:** In-depth guides and tutorials for a smooth exporting experience.

 !!! tip "Tip"
@@ -17,8 +46,7 @@ when deploying the model to production environments.
 ## Usage Examples

-Export a YOLOv8n model to a different format like ONNX or TensorRT. See Arguments section below for a full list of
-export arguments.
+Export a YOLOv8n model to a different format like ONNX or TensorRT. See Arguments section below for a full list of export arguments.
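A sketch of the Python route (the checkpoint path and argument values are illustrative; the `format` keys come from the Export Formats table further below):

```py
from ultralytics import YOLO

# Load a custom-trained checkpoint and export it; arguments mirror the table below
model = YOLO('path/to/best.pt')
model.export(format='engine', half=True, device=0)  # TensorRT export in FP16 on GPU 0
```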
!!! example "" !!! example ""
@@ -43,14 +71,7 @@ export arguments.
 ## Arguments

-Export settings for YOLO models refer to the various configurations and options used to save or
-export the model for use in other environments or platforms. These settings can affect the model's performance, size,
-and compatibility with different systems. Some common YOLO export settings include the format of the exported model
-file (e.g. ONNX, TensorFlow SavedModel), the device on which the model will be run (e.g. CPU, GPU), and the presence of
-additional features such as masks or multiple labels per box. Other factors that may affect the export process include
-the specific task the model is being used for and the requirements or constraints of the target environment or platform.
-It is important to carefully consider and configure these settings to ensure that the exported model is optimized for
-the intended use case and can be used effectively in the target environment.
+Export settings for YOLO models refer to the various configurations and options used to save or export the model for use in other environments or platforms. These settings can affect the model's performance, size, and compatibility with different systems. Some common YOLO export settings include the format of the exported model file (e.g. ONNX, TensorFlow SavedModel), the device on which the model will be run (e.g. CPU, GPU), and the presence of additional features such as masks or multiple labels per box. Other factors that may affect the export process include the specific task the model is being used for and the requirements or constraints of the target environment or platform. It is important to carefully consider and configure these settings to ensure that the exported model is optimized for the intended use case and can be used effectively in the target environment.

 | Key         | Value           | Description                                          |
 |-------------|-----------------|------------------------------------------------------|
@@ -68,8 +89,7 @@ the intended use case and can be used effectively in the target environment.
 ## Export Formats

-Available YOLOv8 export formats are in the table below. You can export to any format using the `format` argument,
-i.e. `format='onnx'` or `format='engine'`.
+Available YOLOv8 export formats are in the table below. You can export to any format using the `format` argument, i.e. `format='onnx'` or `format='engine'`.

 | Format                                                             | `format` Argument | Model                     | Metadata | Arguments                                           |
 |--------------------------------------------------------------------|-------------------|---------------------------|----------|-----------------------------------------------------|

docs/modes/index.md:

@@ -8,61 +8,56 @@ keywords: Ultralytics, YOLOv8, Machine Learning, Object Detection, Training, Val
 <img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">

-Ultralytics YOLOv8 supports several **modes** that can be used to perform different tasks. These modes are:
-
-- **Train**: For training a YOLOv8 model on a custom dataset.
-- **Val**: For validating a YOLOv8 model after it has been trained.
-- **Predict**: For making predictions using a trained YOLOv8 model on new images or videos.
-- **Export**: For exporting a YOLOv8 model to a format that can be used for deployment.
-- **Track**: For tracking objects in real-time using a YOLOv8 model.
-- **Benchmark**: For benchmarking YOLOv8 exports (ONNX, TensorRT, etc.) speed and accuracy.
+## Introduction
+
+Ultralytics YOLOv8 is not just another object detection model; it's a versatile framework designed to cover the entire lifecycle of machine learning models, from data ingestion and model training to validation, deployment, and real-world tracking. Each mode serves a specific purpose and is engineered to offer you the flexibility and efficiency required for different tasks and use-cases.
+
+### Modes at a Glance
+
+Understanding the different **modes** that Ultralytics YOLOv8 supports is critical to getting the most out of your models:
+
+- **Train** mode: Fine-tune your model on custom or preloaded datasets.
+- **Val** mode: A post-training checkpoint to validate model performance.
+- **Predict** mode: Unleash the predictive power of your model on real-world data.
+- **Export** mode: Make your model deployment-ready in various formats.
+- **Track** mode: Extend your object detection model into real-time tracking applications.
+- **Benchmark** mode: Analyze the speed and accuracy of your model in diverse deployment environments.
+
+This comprehensive guide aims to give you an overview and practical insights into each mode, helping you harness the full potential of YOLOv8.

 ## [Train](train.md)

-Train mode is used for training a YOLOv8 model on a custom dataset. In this mode, the model is trained using the
-specified dataset and hyperparameters. The training process involves optimizing the model's parameters so that it can
-accurately predict the classes and locations of objects in an image.
+Train mode is used for training a YOLOv8 model on a custom dataset. In this mode, the model is trained using the specified dataset and hyperparameters. The training process involves optimizing the model's parameters so that it can accurately predict the classes and locations of objects in an image.

 [Train Examples](train.md){ .md-button .md-button--primary}

 ## [Val](val.md)

-Val mode is used for validating a YOLOv8 model after it has been trained. In this mode, the model is evaluated on a
-validation set to measure its accuracy and generalization performance. This mode can be used to tune the hyperparameters
-of the model to improve its performance.
+Val mode is used for validating a YOLOv8 model after it has been trained. In this mode, the model is evaluated on a validation set to measure its accuracy and generalization performance. This mode can be used to tune the hyperparameters of the model to improve its performance.

 [Val Examples](val.md){ .md-button .md-button--primary}

 ## [Predict](predict.md)

-Predict mode is used for making predictions using a trained YOLOv8 model on new images or videos. In this mode, the
-model is loaded from a checkpoint file, and the user can provide images or videos to perform inference. The model
-predicts the classes and locations of objects in the input images or videos.
+Predict mode is used for making predictions using a trained YOLOv8 model on new images or videos. In this mode, the model is loaded from a checkpoint file, and the user can provide images or videos to perform inference. The model predicts the classes and locations of objects in the input images or videos.

 [Predict Examples](predict.md){ .md-button .md-button--primary}

 ## [Export](export.md)

-Export mode is used for exporting a YOLOv8 model to a format that can be used for deployment. In this mode, the model is
-converted to a format that can be used by other software applications or hardware devices. This mode is useful when
-deploying the model to production environments.
+Export mode is used for exporting a YOLOv8 model to a format that can be used for deployment. In this mode, the model is converted to a format that can be used by other software applications or hardware devices. This mode is useful when deploying the model to production environments.

 [Export Examples](export.md){ .md-button .md-button--primary}

 ## [Track](track.md)

-Track mode is used for tracking objects in real-time using a YOLOv8 model. In this mode, the model is loaded from a
-checkpoint file, and the user can provide a live video stream to perform real-time object tracking. This mode is useful
-for applications such as surveillance systems or self-driving cars.
+Track mode is used for tracking objects in real-time using a YOLOv8 model. In this mode, the model is loaded from a checkpoint file, and the user can provide a live video stream to perform real-time object tracking. This mode is useful for applications such as surveillance systems or self-driving cars.

 [Track Examples](track.md){ .md-button .md-button--primary}

 ## [Benchmark](benchmark.md)

-Benchmark mode is used to profile the speed and accuracy of various export formats for YOLOv8. The benchmarks provide
-information on the size of the exported format, its `mAP50-95` metrics (for object detection, segmentation and pose)
-or `accuracy_top5` metrics (for classification), and the inference time in milliseconds per image across various export
-formats like ONNX, OpenVINO, TensorRT and others. This information can help users choose the optimal export format for
-their specific use case based on their requirements for speed and accuracy.
+Benchmark mode is used to profile the speed and accuracy of various export formats for YOLOv8. The benchmarks provide information on the size of the exported format, its `mAP50-95` metrics (for object detection, segmentation and pose) or `accuracy_top5` metrics (for classification), and the inference time in milliseconds per image across various export formats like ONNX, OpenVINO, TensorRT and others. This information can help users choose the optimal export format for their specific use case based on their requirements for speed and accuracy.

 [Benchmark Examples](benchmark.md){ .md-button .md-button--primary}
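Because every mode hangs off the same `YOLO` object in the Python API, the whole lifecycle can be sketched in a few lines (dataset, source, and epoch count are illustrative):

```py
from ultralytics import YOLO

model = YOLO('yolov8n.pt')                                 # load a pretrained model
model.train(data='coco8.yaml', epochs=3)                   # Train
metrics = model.val()                                      # Val: reuses training settings
results = model('https://ultralytics.com/images/bus.jpg')  # Predict
model.export(format='onnx')                                # Export for deployment
```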

docs/modes/predict.md:

@@ -4,9 +4,44 @@ description: Discover how to use YOLOv8 predict mode for various tasks. Learn ab
 keywords: Ultralytics, YOLOv8, predict mode, inference sources, prediction tasks, streaming mode, image processing, video processing, machine learning, AI
 ---

+# Model Prediction with Ultralytics YOLO
+
 <img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">

-YOLOv8 **predict mode** can generate predictions for various tasks, returning either a list of `Results` objects or a memory-efficient generator of `Results` objects when using the streaming mode. Enable streaming mode by passing `stream=True` in the predictor's call method.
+## Introduction
+
+In the world of machine learning and computer vision, the process of making sense out of visual data is called 'inference' or 'prediction'. Ultralytics YOLOv8 offers a powerful feature known as **predict mode** that is tailored for high-performance, real-time inference on a wide range of data sources.
+
+<p align="center">
+  <br>
+  <iframe width="720" height="405" src="https://www.youtube.com/embed/QtsI0TnwDZs?si=ljesw75cMO2Eas14"
+    title="YouTube video player" frameborder="0"
+    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
+    allowfullscreen>
+  </iframe>
+  <br>
+  <strong>Watch:</strong> How to Extract the Outputs from Ultralytics YOLOv8 Model for Custom Projects.
+</p>
+
+## Why Use Ultralytics YOLO for Inference?
+
+Here's why you should consider YOLOv8's predict mode for your various inference needs:
+
+- **Versatility:** Capable of making inferences on images, videos, and even live streams.
+- **Performance:** Engineered for real-time, high-speed processing without sacrificing accuracy.
+- **Ease of Use:** Intuitive Python and CLI interfaces for rapid deployment and testing.
+- **Highly Customizable:** Various settings and parameters to tune the model's inference behavior according to your specific requirements.
+
+### Key Features of Predict Mode
+
+YOLOv8's predict mode is designed to be robust and versatile, featuring:
+
+- **Multiple Data Source Compatibility:** Whether your data is in the form of individual images, a collection of images, video files, or real-time video streams, predict mode has you covered.
+- **Streaming Mode:** Use the streaming feature to generate a memory-efficient generator of `Results` objects. Enable this by setting `stream=True` in the predictor's call method.
+- **Batch Processing:** The ability to process multiple images or video frames in a single batch, further speeding up inference time.
+- **Integration Friendly:** Easily integrate with existing data pipelines and other software components, thanks to its flexible API.
+
+Ultralytics YOLO models return either a Python list of `Results` objects, or a memory-efficient Python generator of `Results` objects when `stream=True` is passed to the model during inference:
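For instance, a minimal streaming sketch (the video path is illustrative):

```py
from ultralytics import YOLO

model = YOLO('yolov8n.pt')

# stream=True yields Results one frame at a time instead of accumulating a list
for result in model('path/to/video.mp4', stream=True):
    boxes = result.boxes  # detected bounding boxes for this frame
```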
!!! example "Predict" !!! example "Predict"

docs/modes/track.md:

@@ -4,11 +4,39 @@ description: Learn how to use Ultralytics YOLO for object tracking in video stre
 keywords: Ultralytics, YOLO, object tracking, video streams, BoT-SORT, ByteTrack, Python guide, CLI guide
 ---

+# Multi-Object Tracking with Ultralytics YOLO
+
 <img width="1024" src="https://user-images.githubusercontent.com/26833433/243418637-1d6250fd-1515-4c10-a844-a32818ae6d46.png">

-Object tracking is a task that involves identifying the location and class of objects, then assigning a unique ID to that detection in video streams.
+Object tracking in the realm of video analytics is a critical task that not only identifies the location and class of objects within the frame but also maintains a unique ID for each detected object as the video progresses. The applications are limitless, ranging from surveillance and security to real-time sports analytics.
+
+## Why Choose Ultralytics YOLO for Object Tracking?
+
+The output from Ultralytics trackers is consistent with standard object detection but has the added value of object IDs. This makes it easy to track objects in video streams and perform subsequent analytics. Here's why you should consider using Ultralytics YOLO for your object tracking needs:
+
+- **Efficiency:** Process video streams in real-time without compromising accuracy.
+- **Flexibility:** Supports multiple tracking algorithms and configurations.
+- **Ease of Use:** Simple Python API and CLI options for quick integration and deployment.
+- **Customizability:** Easy to use with custom trained YOLO models, allowing integration into domain-specific applications.
+
+<p align="center">
+  <br>
+  <iframe width="720" height="405" src="https://www.youtube.com/embed/hHyHmOtmEgs?si=VNZtXmm45Nb9s-N-"
+    title="YouTube video player" frameborder="0"
+    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
+    allowfullscreen>
+  </iframe>
+  <br>
+  <strong>Watch:</strong> Object Detection and Tracking with Ultralytics YOLOv8.
+</p>
+
+## Features at a Glance
+
+Ultralytics YOLO extends its object detection features to provide robust and versatile object tracking:
+
-The output of tracker is the same as detection with an added object ID.
+- **Real-Time Tracking:** Seamlessly track objects in high-frame-rate videos.
+- **Multiple Tracker Support:** Choose from a variety of established tracking algorithms.
+- **Customizable Tracker Configurations:** Tailor the tracking algorithm to meet specific requirements by adjusting various parameters.
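A minimal sketch of the Python route (the video path is illustrative; `bytetrack.yaml` names one of the bundled tracker configurations):

```py
from ultralytics import YOLO

model = YOLO('yolov8n.pt')

# Track objects across video frames; each detection carries a persistent ID
results = model.track(source='path/to/video.mp4', tracker='bytetrack.yaml')
```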
## Available Trackers

docs/modes/train.md:

@@ -4,9 +4,42 @@ description: Step-by-step guide to train YOLOv8 models with Ultralytics YOLO inc
 keywords: Ultralytics, YOLOv8, YOLO, object detection, train mode, custom dataset, GPU training, multi-GPU, hyperparameters, CLI examples, Python examples
 ---

+# Model Training with Ultralytics YOLO
+
 <img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">

-**Train mode** is used for training a YOLOv8 model on a custom dataset. In this mode, the model is trained using the specified dataset and hyperparameters. The training process involves optimizing the model's parameters so that it can accurately predict the classes and locations of objects in an image.
+## Introduction
+
+Training a deep learning model involves feeding it data and adjusting its parameters so that it can make accurate predictions. Train mode in Ultralytics YOLOv8 is engineered for effective and efficient training of object detection models, fully utilizing modern hardware capabilities. This guide aims to cover all the details you need to get started with training your own models using YOLOv8's robust set of features.
+
+<p align="center">
+  <br>
+  <iframe width="720" height="405" src="https://www.youtube.com/embed/LNwODJXcvt4?si=7n1UvGRLSd9p5wKs"
+    title="YouTube video player" frameborder="0"
+    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
+    allowfullscreen>
+  </iframe>
+  <br>
+  <strong>Watch:</strong> How to Train a YOLOv8 model on Your Custom Dataset in Google Colab.
+</p>
+
+## Why Choose Ultralytics YOLO for Training?
+
+Here are some compelling reasons to opt for YOLOv8's Train mode:
+
+- **Efficiency:** Make the most out of your hardware, whether you're on a single-GPU setup or scaling across multiple GPUs.
+- **Versatility:** Train on custom datasets in addition to readily available ones like COCO, VOC, and ImageNet.
+- **User-Friendly:** Simple yet powerful CLI and Python interfaces for a straightforward training experience.
+- **Hyperparameter Flexibility:** A broad range of customizable hyperparameters to fine-tune model performance.
+
+### Key Features of Train Mode
+
+The following are some notable features of YOLOv8's Train mode:
+
+- **Automatic Dataset Download:** Standard datasets like COCO, VOC, and ImageNet are downloaded automatically on first use.
+- **Multi-GPU Support:** Scale your training efforts seamlessly across multiple GPUs to expedite the process.
+- **Hyperparameter Configuration:** The option to modify hyperparameters through YAML configuration files or CLI arguments.
+- **Visualization and Monitoring:** Real-time tracking of training metrics and visualization of the learning process for better insights.
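A minimal training sketch exercising the automatic dataset download described above (COCO8, a small sample dataset, and the epoch count are illustrative):

```py
from ultralytics import YOLO

# Fine-tune a pretrained model; 'coco8.yaml' downloads automatically on first use
model = YOLO('yolov8n.pt')
results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
```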
!!! tip "Tip" !!! tip "Tip"

docs/modes/val.md:

@@ -4,9 +4,31 @@ description: Guide for Validating YOLOv8 Models. Learn how to evaluate the perfo
 keywords: Ultralytics, YOLO Docs, YOLOv8, validation, model evaluation, hyperparameters, accuracy, metrics, Python, CLI
 ---

+# Model Validation with Ultralytics YOLO
+
 <img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">

-**Val mode** is used for validating a YOLOv8 model after it has been trained. In this mode, the model is evaluated on a validation set to measure its accuracy and generalization performance. This mode can be used to tune the hyperparameters of the model to improve its performance.
+## Introduction
+
+Validation is a critical step in the machine learning pipeline, allowing you to assess the quality of your trained models. Val mode in Ultralytics YOLOv8 provides a robust suite of tools and metrics for evaluating the performance of your object detection models. This guide serves as a complete resource for understanding how to effectively use the Val mode to ensure that your models are both accurate and reliable.
+
+## Why Validate with Ultralytics YOLO?
+
+Here's why using YOLOv8's Val mode is advantageous:
+
+- **Precision:** Get accurate metrics like mAP50, mAP75, and mAP50-95 to comprehensively evaluate your model.
+- **Convenience:** Utilize built-in features that remember training settings, simplifying the validation process.
+- **Flexibility:** Validate your model with the same or different datasets and image sizes.
+- **Hyperparameter Tuning:** Use validation metrics to fine-tune your model for better performance.
+
+### Key Features of Val Mode
+
+These are the notable functionalities offered by YOLOv8's Val mode:
+
+- **Automated Settings:** Models remember their training configurations for straightforward validation.
+- **Multi-Metric Support:** Evaluate your model based on a range of accuracy metrics.
+- **CLI and Python API:** Choose from command-line interface or Python API based on your preference for validation.
+- **Data Compatibility:** Works seamlessly with datasets used during the training phase as well as custom datasets.
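A minimal sketch of the automated-settings behavior (the metric attribute shown assumes a detection model):

```py
from ultralytics import YOLO

model = YOLO('yolov8n.pt')

# No arguments needed: the model retains its training `data` and settings
metrics = model.val()
print(metrics.box.map)  # mAP50-95
```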
!!! tip "Tip" !!! tip "Tip"

docs/stylesheets/style.css:

@@ -37,3 +37,4 @@ div.highlight {
   max-height: 20rem;
   overflow-y: auto; /* for adding a scrollbar when needed */
 }

docs/tasks/classify.md:

@@ -4,14 +4,13 @@ description: Learn about YOLOv8 Classify models for image classification. Get de
 keywords: Ultralytics, YOLOv8, Image Classification, Pretrained Models, YOLOv8n-cls, Training, Validation, Prediction, Model Export
 ---

-Image classification is the simplest of the three tasks and involves classifying an entire image into one of a set of
-predefined classes.
+# Image Classification

 <img width="1024" src="https://user-images.githubusercontent.com/26833433/243418606-adf35c62-2e11-405d-84c6-b84e7d013804.png">

-The output of an image classifier is a single class label and a confidence score. Image
-classification is useful when you need to know only what class an image belongs to and don't need to know where objects
-of that class are located or what their exact shape is.
+Image classification is the simplest of the three tasks and involves classifying an entire image into one of a set of predefined classes.
+
+The output of an image classifier is a single class label and a confidence score. Image classification is useful when you need to know only what class an image belongs to and don't need to know where objects of that class are located or what their exact shape is.

 !!! tip "Tip"
@@ -19,13 +18,9 @@ of that class are located or what their exact shape is.
 ## [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models/v8)

-YOLOv8 pretrained Classify models are shown here. Detect, Segment and Pose models are pretrained on
-the [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml) dataset, while Classify
-models are pretrained on
-the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/ImageNet.yaml) dataset.
+YOLOv8 pretrained Classify models are shown here. Detect, Segment and Pose models are pretrained on the [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml) dataset, while Classify models are pretrained on the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/ImageNet.yaml) dataset.

-[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models) download automatically from the latest
-Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.
+[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models) download automatically from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.

 | Model | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) at 640 |
 |----------------------------------------------------------------------------------------------|-----------------------|------------------|------------------|--------------------------------|-------------------------------------|--------------------|--------------------------|
@@ -43,8 +38,7 @@ Ultralytics [release](https://github.com/ultralytics/assets/releases) on first u
 ## Train

-Train YOLOv8n-cls on the MNIST160 dataset for 100 epochs at image size 64. For a full list of available arguments
-see the [Configuration](../usage/cfg.md) page.
+Train YOLOv8n-cls on the MNIST160 dataset for 100 epochs at image size 64. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
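A sketch of that call in Python (the `mnist160` dataset string downloads automatically on first use):

```py
from ultralytics import YOLO

# Train the classification variant on MNIST160 at image size 64
model = YOLO('yolov8n-cls.pt')
model.train(data='mnist160', epochs=100, imgsz=64)
```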
!!! example "" !!! example ""
@@ -81,8 +75,7 @@ YOLO classification dataset format can be found in detail in the [Dataset Guide]
 ## Val

-Validate trained YOLOv8n-cls model accuracy on the MNIST160 dataset. No argument need to passed as the `model` retains
-it's training `data` and arguments as model attributes.
+Validate trained YOLOv8n-cls model accuracy on the MNIST160 dataset. No arguments need to be passed as the `model` retains its training `data` and arguments as model attributes.

 !!! example ""
@@ -159,8 +152,7 @@ Export a YOLOv8n-cls model to a different format like ONNX, CoreML, etc.
     yolo export model=path/to/best.pt format=onnx  # export custom trained model
     ```

-Available YOLOv8-cls export formats are in the table below. You can predict or validate directly on exported models,
-i.e. `yolo predict model=yolov8n-cls.onnx`. Usage examples are shown for your model after export completes.
+Available YOLOv8-cls export formats are in the table below. You can predict or validate directly on exported models, i.e. `yolo predict model=yolov8n-cls.onnx`. Usage examples are shown for your model after export completes.

 | Format                                                             | `format` Argument | Model                         | Metadata | Arguments                                           |
 |--------------------------------------------------------------------|-------------------|-------------------------------|----------|-----------------------------------------------------|

@ -4,12 +4,25 @@ description: Official documentation for YOLOv8 by Ultralytics. Learn how to trai
keywords: YOLOv8, Ultralytics, object detection, pretrained models, training, validation, prediction, export models, COCO, ImageNet, PyTorch, ONNX, CoreML keywords: YOLOv8, Ultralytics, object detection, pretrained models, training, validation, prediction, export models, COCO, ImageNet, PyTorch, ONNX, CoreML
--- ---
Object detection is a task that involves identifying the location and class of objects in an image or video stream. # Object Detection
<img width="1024" src="https://user-images.githubusercontent.com/26833433/243418624-5785cb93-74c9-4541-9179-d5c6782d491a.png"> <img width="1024" src="https://user-images.githubusercontent.com/26833433/243418624-5785cb93-74c9-4541-9179-d5c6782d491a.png">
Object detection is a task that involves identifying the location and class of objects in an image or video stream.
The output of an object detector is a set of bounding boxes that enclose the objects in the image, along with class labels and confidence scores for each box. Object detection is a good choice when you need to identify objects of interest in a scene, but don't need to know exactly where the object is or its exact shape. The output of an object detector is a set of bounding boxes that enclose the objects in the image, along with class labels and confidence scores for each box. Object detection is a good choice when you need to identify objects of interest in a scene, but don't need to know exactly where the object is or its exact shape.
<p align="center">
<br>
<iframe width="720" height="405" src="https://www.youtube.com/embed/5ku7npMrW40?si=6HQO1dDXunV8gekh"
title="YouTube video player" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Object Detection with Pre-trained Ultralytics YOLOv8 Model.
</p>
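For example, a minimal sketch of running a pretrained detector and reading boxes, class ids and confidences from the results (standard `ultralytics` Python API; the sample image URL is illustrative):

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8n detection model
model = YOLO('yolov8n.pt')

# Run inference on an image
results = model('https://ultralytics.com/images/bus.jpg')

# Each Results object holds the boxes for one image
for box in results[0].boxes:
    print(box.xyxy, box.cls, box.conf)  # box coordinates, class id, confidence
```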
!!! tip "Tip"

    YOLOv8 Detect models are the default YOLOv8 models, i.e. `yolov8n.pt` and are pretrained on [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml).

@@ -6,46 +6,35 @@ keywords: Ultralytics, YOLOv8, Detection, Segmentation, Classification, Pose Est

# Ultralytics YOLOv8 Tasks

<br>
<img width="1024" src="https://raw.githubusercontent.com/ultralytics/assets/main/im/banner-tasks.png">

YOLOv8 is an AI framework that supports multiple computer vision **tasks**. The framework can be used to perform [detection](detect.md), [segmentation](segment.md), [classification](classify.md), and [pose](pose.md) estimation. Each of these tasks has a different objective and use case.
## [Detection](detect.md)

Detection is the primary task supported by YOLOv8. It involves detecting objects in an image or video frame and drawing bounding boxes around them. The detected objects are classified into different categories based on their features. YOLOv8 can detect multiple objects in a single image or video frame with high accuracy and speed.

[Detection Examples](detect.md){ .md-button .md-button--primary}
## [Segmentation](segment.md)

Segmentation is a task that involves segmenting an image into different regions based on the content of the image. Each region is assigned a label based on its content. This task is useful in applications such as image segmentation and medical imaging. YOLOv8 uses a variant of the U-Net architecture to perform segmentation.

[Segmentation Examples](segment.md){ .md-button .md-button--primary}
## [Classification](classify.md)

Classification is a task that involves classifying an image into different categories. YOLOv8 can be used to classify images based on their content. It uses a variant of the EfficientNet architecture to perform classification.

[Classification Examples](classify.md){ .md-button .md-button--primary}
## [Pose](pose.md)

Pose/keypoint detection is a task that involves detecting specific points in an image or video frame. These points are referred to as keypoints and are used to track movement or pose estimation. YOLOv8 can detect keypoints in an image or video frame with high accuracy and speed.

[Pose Examples](pose.md){ .md-button .md-button--primary}
## Conclusion

YOLOv8 supports multiple tasks, including detection, segmentation, classification, and keypoint detection. Each of these tasks has a different objective and use case. By understanding the differences between these tasks, you can choose the appropriate task for your computer vision application.
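In practice, the task is selected simply by loading the corresponding pretrained weights; a minimal sketch using the standard Python API:

```python
from ultralytics import YOLO

# One API, four tasks: the weights file selects the task
detect = YOLO('yolov8n.pt')        # object detection
segment = YOLO('yolov8n-seg.pt')   # instance segmentation
classify = YOLO('yolov8n-cls.pt')  # image classification
pose = YOLO('yolov8n-pose.pt')     # pose/keypoint estimation
```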

@@ -4,16 +4,25 @@ description: Learn how to use Ultralytics YOLOv8 for pose estimation tasks. Find
keywords: Ultralytics, YOLO, YOLOv8, pose estimation, keypoints detection, object detection, pre-trained models, machine learning, artificial intelligence
---
# Pose Estimation

<img width="1024" src="https://user-images.githubusercontent.com/26833433/243418616-9811ac0b-a4a7-452a-8aba-484ba32bb4a8.png">

Pose estimation is a task that involves identifying the location of specific points in an image, usually referred to as keypoints. The keypoints can represent various parts of the object such as joints, landmarks, or other distinctive features. The locations of the keypoints are usually represented as a set of 2D `[x, y]` or 3D `[x, y, visible]` coordinates.

The output of a pose estimation model is a set of points that represent the keypoints on an object in the image, usually along with the confidence scores for each point. Pose estimation is a good choice when you need to identify specific parts of an object in a scene, and their location in relation to each other.
<p align="center">
<br>
<iframe width="720" height="405" src="https://www.youtube.com/embed/Y28xXQmju64?si=pCY4ZwejZFu6Z4kZ"
title="YouTube video player" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Pose Estimation with Ultralytics YOLOv8.
</p>
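For example, a minimal sketch of reading keypoints from a pretrained pose model (standard `ultralytics` Python API; the sample image URL is illustrative):

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8n pose model
model = YOLO('yolov8n-pose.pt')

# Run inference on an image
results = model('https://ultralytics.com/images/bus.jpg')

# Keypoints as [x, y] pixel coordinates with per-point confidences
for result in results:
    print(result.keypoints.xy)    # keypoint coordinates
    print(result.keypoints.conf)  # keypoint confidences (may be None)
```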
!!! tip "Tip" !!! tip "Tip"
@@ -21,13 +30,9 @@ parts of an object in a scene, and their location in relation to each other.

## [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models/v8)
YOLOv8 pretrained Pose models are shown here. Detect, Segment and Pose models are pretrained on the [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml) dataset, while Classify models are pretrained on the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/ImageNet.yaml) dataset.

[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models) download automatically from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.
| Model                                                                                                | size<br><sup>(pixels) | mAP<sup>pose<br>50-95 | mAP<sup>pose<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
|------------------------------------------------------------------------------------------------------|-----------------------|-----------------------|--------------------|--------------------------------|-------------------------------------|--------------------|-------------------|
@@ -84,8 +89,7 @@ YOLO pose dataset format can be found in detail in the [Dataset Guide](../datase

## Val
Validate trained YOLOv8n-pose model accuracy on the COCO128-pose dataset. No arguments need to be passed as the `model` retains its training `data` and arguments as model attributes.

!!! example ""
@@ -164,8 +168,7 @@ Export a YOLOv8n Pose model to a different format like ONNX, CoreML, etc.

        yolo export model=path/to/best.pt format=onnx  # export custom trained model
        ```

Available YOLOv8-pose export formats are in the table below. You can predict or validate directly on exported models, i.e. `yolo predict model=yolov8n-pose.onnx`. Usage examples are shown for your model after export completes.

| Format                                                             | `format` Argument | Model                          | Metadata | Arguments                                            |
|--------------------------------------------------------------------|-------------------|--------------------------------|----------|-----------------------------------------------------|

@@ -4,14 +4,24 @@ description: Learn how to use instance segmentation models with Ultralytics YOLO
keywords: yolov8, instance segmentation, Ultralytics, COCO dataset, image segmentation, object detection, model training, model validation, image prediction, model export
---
# Instance Segmentation

<img width="1024" src="https://user-images.githubusercontent.com/26833433/243418644-7df320b8-098d-47f1-85c5-26604d761286.png">

Instance segmentation goes a step further than object detection and involves identifying individual objects in an image and segmenting them from the rest of the image.

The output of an instance segmentation model is a set of masks or contours that outline each object in the image, along with class labels and confidence scores for each object. Instance segmentation is useful when you need to know not only where objects are in an image, but also what their exact shape is.
<p align="center">
<br>
<iframe width="720" height="405" src="https://www.youtube.com/embed/o4Zd-IeMlSY?si=37nusCzDTd74Obsp"
title="YouTube video player" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Run Segmentation with Pre-Trained Ultralytics YOLOv8 Model in Python.
</p>
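For example, a minimal sketch of reading masks from a pretrained segmentation model (standard `ultralytics` Python API; the sample image URL is illustrative):

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8n segmentation model
model = YOLO('yolov8n-seg.pt')

# Run inference on an image
results = model('https://ultralytics.com/images/bus.jpg')

# Masks outline each object; boxes carry the class and confidence
for result in results:
    if result.masks is not None:
        print(result.masks.xy)  # mask contours in pixel coordinates
    print(result.boxes.cls, result.boxes.conf)
```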
!!! tip "Tip"

@@ -19,13 +29,9 @@ segmentation is useful when you need to know not only where objects are in an im

## [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models/v8)
YOLOv8 pretrained Segment models are shown here. Detect, Segment and Pose models are pretrained on the [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml) dataset, while Classify models are pretrained on the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/ImageNet.yaml) dataset.

[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models) download automatically from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.
| Model                                                                                          | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
|------------------------------------------------------------------------------------------------|-----------------------|----------------------|-----------------------|--------------------------------|-------------------------------------|--------------------|-------------------|
@@ -43,8 +49,7 @@ Ultralytics [release](https://github.com/ultralytics/assets/releases) on first u

## Train
Train YOLOv8n-seg on the COCO128-seg dataset for 100 epochs at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.

!!! example ""
@@ -164,8 +169,7 @@ Export a YOLOv8n-seg model to a different format like ONNX, CoreML, etc.

        yolo export model=path/to/best.pt format=onnx  # export custom trained model
        ```

Available YOLOv8-seg export formats are in the table below. You can predict or validate directly on exported models, i.e. `yolo predict model=yolov8n-seg.onnx`. Usage examples are shown for your model after export completes.

| Format                                                             | `format` Argument | Model                         | Metadata | Arguments                                            |
|--------------------------------------------------------------------|-------------------|-------------------------------|----------|------------------------------------------------------|

@@ -11,10 +11,17 @@

## Step 1: Install the Required Libraries
Clone the repository, install the dependencies and `cd` into this directory to run the commands in Step 2.

```bash
# Clone ultralytics repo
git clone https://github.com/ultralytics/ultralytics

# Install dependencies
pip install sahi ultralytics

# cd to local directory
cd ultralytics/examples/YOLOv8-SAHI-Inference-Video
```
## Step 2: Run the Inference with SAHI using Ultralytics YOLOv8
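Assuming the script's argparse flags mirror the `run()` parameters shown in the snippet below, a typical invocation looks like this (flag names are an assumption; check `python yolov8_sahi.py --help`):

```bash
# Run SAHI sliced inference on a video and save annotated output
python yolov8_sahi.py --source test.mp4 --save-img
```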

@@ -21,6 +21,10 @@ def run(weights='yolov8n.pt', source='test.mp4', view_img=False, save_img=False,
        exist_ok (bool): Overwrite existing files.
    """

    # Check source path
    if not Path(source).exists():
        raise FileNotFoundError(f"Source path '{source}' does not exist.")

    yolov8_model_path = f'models/{weights}'
    download_yolov8s_model(yolov8_model_path)
    detection_model = AutoDetectionModel.from_pretrained(model_type='yolov8',

@@ -255,7 +255,7 @@ nav:
      - Neural Magic's DeepSparse: yolov5/tutorials/neural_magic_pruning_quantization.md
      - Comet Logging: yolov5/tutorials/comet_logging_integration.md
      - Clearml Logging: yolov5/tutorials/clearml_logging_integration.md
  - HUB:
      - hub/index.md
      - Quickstart: hub/quickstart.md
      - Datasets: hub/datasets.md

@@ -4,7 +4,7 @@ import re
from pathlib import Path

import pkg_resources as pkg
from setuptools import find_namespace_packages, setup

# Settings
FILE = Path(__file__).resolve()
@@ -34,7 +34,7 @@ setup(
        'Source': 'https://github.com/ultralytics/ultralytics'},
    author='Ultralytics',
    author_email='hello@ultralytics.com',
    packages=find_namespace_packages(include=['ultralytics.*']),
    include_package_data=True,
    install_requires=REQUIREMENTS,
    extras_require={

@@ -29,18 +29,14 @@ def pytest_configure(config):
    config.addinivalue_line('markers', 'slow: mark test as slow to run')


def pytest_runtest_setup(item):
    """Setup hook to skip tests marked as slow if the --slow option is not provided.

    Args:
        item (pytest.Item): The test item object.
    """
    if 'slow' in item.keywords and not item.config.getoption('--slow'):
        pytest.skip('skip slow tests unless --slow is set')
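With this hook in place, slow tests are skipped by default and run only when the flag is supplied (assuming `--slow` is registered in `pytest_addoption`):

```bash
# Default run: tests marked @pytest.mark.slow are skipped
pytest tests/

# Include slow tests
pytest tests/ --slow
```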
def pytest_sessionstart(session):

@@ -114,9 +114,9 @@ def test_mobilesam():
    # model(source)


# Slow Tests -----------------------------------------------------------------------------------------------------------
@pytest.mark.slow
@pytest.mark.parametrize('task,model,data', TASK_ARGS)
def test_train_gpu(task, model, data):
    run(f'yolo train {task} model={model}.yaml data={data} imgsz=32 epochs=1 device=0')  # single GPU
    run(f'yolo train {task} model={model}.pt data={data} imgsz=32 epochs=1 device=0,1')  # multi GPU

@@ -1,6 +1,6 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license

__version__ = '8.0.177'

from ultralytics.models import RTDETR, SAM, YOLO
from ultralytics.models.fastsam import FastSAM

@@ -37,6 +37,8 @@ def run_ray_tune(model,
        result_grid = model.tune(data='coco8.yaml', use_ray=True)
        ```
    """
    LOGGER.info('💡 Learn about RayTune at https://docs.ultralytics.com/integrations/ray-tune')

    if train_args is None:
        train_args = {}
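For reference, a minimal sketch of reaching this code path through the model API, matching the docstring example above (assumes the `ray[tune]` extras are installed):

```python
from ultralytics import YOLO

# Hyperparameter search with Ray Tune on the COCO8 smoke-test dataset
model = YOLO('yolov8n.pt')
result_grid = model.tune(data='coco8.yaml', use_ray=True)
```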
