`ultralytics 8.2.2` replace COCO128 with COCO8 (#10167)

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Refs: pull/10121/head, tag v8.2.2
Authored by Glenn Jocher 7 months ago, committed by GitHub
parent 626309d221
commit 1110258d37
43 changed files (lines changed in parentheses):

1. README.md (2)
2. README.zh-CN.md (2)
3. docs/README.md (8)
4. docs/en/datasets/detect/index.md (1)
5. docs/en/datasets/pose/index.md (4)
6. docs/en/datasets/segment/index.md (4)
7. docs/en/guides/azureml-quickstart.md (4)
8. docs/en/guides/nvidia-jetson.md (10)
9. docs/en/guides/object-counting.md (32)
10. docs/en/guides/workouts-monitoring.md (3)
11. docs/en/help/CI.md (6)
12. docs/en/integrations/clearml.md (4)
13. docs/en/integrations/comet.md (2)
14. docs/en/integrations/index.md (4)
15. docs/en/integrations/openvino.md (8)
16. docs/en/integrations/ray-tune.md (4)
17. docs/en/integrations/tensorboard.md (2)
18. docs/en/integrations/tfjs.md (2)
19. docs/en/integrations/weights-biases.md (2)
20. docs/en/modes/benchmark.md (2)
21. docs/en/modes/train.md (20)
22. docs/en/modes/val.md (34)
23. docs/en/quickstart.md (8)
24. docs/en/tasks/detect.md (14)
25. docs/en/tasks/segment.md (10)
26. docs/en/usage/cfg.md (34)
27. docs/en/usage/cli.md (16)
28. docs/en/usage/python.md (14)
29. docs/en/usage/simple-utilities.md (2)
30. docs/en/yolov5/tutorials/clearml_logging_integration.md (4)
31. docs/overrides/main.html (12)
32. examples/YOLOv8-ONNXRuntime/main.py (2)
33. examples/YOLOv8-OpenCV-ONNX-Python/main.py (2)
34. examples/YOLOv8-OpenCV-int8-tflite-Python/main.py (2)
35. examples/YOLOv8-Segmentation-ONNXRuntime-Python/main.py (2)
36. examples/tutorial.ipynb (6)
37. ultralytics/__init__.py (2)
38. ultralytics/cfg/__init__.py (4)
39. ultralytics/cfg/default.yaml (2)
40. ultralytics/cfg/models/README.md (2)
41. ultralytics/engine/trainer.py (2)
42. ultralytics/engine/validator.py (2)
43. ultralytics/utils/__init__.py (6)

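The change itself is mechanical: every `coco128` dataset reference becomes `coco8`, preserving the `-seg` and `-pose` suffixes. A minimal sketch of how such a sweep could be scripted (a hypothetical helper, not part of this commit):

```python
# Hypothetical sweep script (not from this commit): replace coco128 dataset
# references with coco8 across docs and sources, preserving -seg/-pose suffixes.
import re
from pathlib import Path

PATTERN = re.compile(r"coco128(-seg|-pose)?")
SUFFIXES = {".md", ".py", ".yaml", ".ipynb", ".html"}

for path in Path(".").rglob("*"):  # assumes the repo root as working directory
    if path.is_file() and path.suffix in SUFFIXES:
        text = path.read_text(encoding="utf-8")
        updated = PATTERN.sub(lambda m: "coco8" + (m.group(1) or ""), text)
        if updated != text:
            path.write_text(updated, encoding="utf-8")
            print(f"updated {path}")
```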
@@ -88,7 +88,7 @@ model = YOLO("yolov8n.yaml") # build a new model from scratch
 model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
 # Use the model
-model.train(data="coco128.yaml", epochs=3) # train the model
+model.train(data="coco8.yaml", epochs=3) # train the model
 metrics = model.val() # evaluate model performance on the validation set
 results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
 path = model.export(format="onnx") # export the model to ONNX format

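With the import line that sits just above this hunk, the updated snippet is runnable as-is; a self-contained version for reference:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)
model.train(data="coco8.yaml", epochs=3)  # train on the small COCO8 dataset
metrics = model.val()  # evaluate model performance on the validation set
results = model("https://ultralytics.com/images/bus.jpg")  # predict on an image
path = model.export(format="onnx")  # export the model to ONNX format
```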
@@ -90,7 +90,7 @@ model = YOLO("yolov8n.yaml") # build a new model from scratch
 model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
 # Use the model
-model.train(data="coco128.yaml", epochs=3) # train the model
+model.train(data="coco8.yaml", epochs=3) # train the model
 metrics = model.val() # evaluate model performance on the validation set
 results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
 success = model.export(format="onnx") # export the model to ONNX format

@@ -43,13 +43,13 @@ mkdocs serve
 - #### Command Breakdown:
 - `mkdocs` is the main MkDocs command-line interface.
 - `serve` is the subcommand to build and locally serve your documentation.
 - 🧐 Note:
 - Grasp changes to the docs in real-time as `mkdocs serve` supports live reloading.
 - To stop the local server, press `CTRL+C`.
 ## 🌍 Building and Serving Multi-Language

@@ -44,7 +44,6 @@ When using the Ultralytics YOLO format, organize your training and validation im
 <p align="center"><img width="800" src="https://github.com/IvorZhu331/ultralytics/assets/26833433/a55ec82d-2bb5-40f9-ac5c-f935e7eb9f07" alt="Example dataset directory structure"></p>
 ## Usage
 Here's how you can use these formats to train your model:

@@ -75,13 +75,13 @@ The `train` and `val` fields specify the paths to the directories containing the
 model = YOLO('yolov8n-pose.pt') # load a pretrained model (recommended for training)
 # Train the model
-results = model.train(data='coco128-pose.yaml', epochs=100, imgsz=640)
+results = model.train(data='coco8-pose.yaml', epochs=100, imgsz=640)
 ```
 === "CLI"
 ```bash
 # Start training from a pretrained *.pt model
-yolo detect train data=coco128-pose.yaml model=yolov8n-pose.pt epochs=100 imgsz=640
+yolo detect train data=coco8-pose.yaml model=yolov8n-pose.pt epochs=100 imgsz=640
 ```
 ## Supported Datasets

@@ -77,13 +77,13 @@ The `train` and `val` fields specify the paths to the directories containing the
 model = YOLO('yolov8n-seg.pt') # load a pretrained model (recommended for training)
 # Train the model
-results = model.train(data='coco128-seg.yaml', epochs=100, imgsz=640)
+results = model.train(data='coco8-seg.yaml', epochs=100, imgsz=640)
 ```
 === "CLI"
 ```bash
 # Start training from a pretrained *.pt model
-yolo detect train data=coco128-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
+yolo detect train data=coco8-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
 ```
 ## Supported Datasets

@@ -74,7 +74,7 @@ yolo predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'
 Train a detection model for 10 epochs with an initial learning_rate of 0.01:
 ```bash
-yolo train data=coco128.yaml model=yolov8n.pt epochs=10 lr0=0.01
+yolo train data=coco8.yaml model=yolov8n.pt epochs=10 lr0=0.01
 ```
 You can find more [instructions to use the Ultralytics CLI here](../quickstart.md#use-ultralytics-with-cli).
@@ -131,7 +131,7 @@ from ultralytics import YOLO
 model = YOLO("yolov8n.pt") # load an official YOLOv8n model
 # Use the model
-model.train(data="coco128.yaml", epochs=3) # train the model
+model.train(data="coco8.yaml", epochs=3) # train the model
 metrics = model.val() # evaluate model performance on the validation set
 results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
 path = model.export(format="onnx") # export the model to ONNX format

@@ -205,17 +205,17 @@ To reproduce the above Ultralytics benchmarks on all export [formats](../modes/e
 # Load a YOLOv8n PyTorch model
 model = YOLO('yolov8n.pt')
-# Benchmark YOLOv8n speed and accuracy on the COCO128 dataset for all export formats
+# Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
-results = model.benchmarks(data='coco128.yaml', imgsz=640)
+results = model.benchmarks(data='coco8.yaml', imgsz=640)
 ```
 === "CLI"
 ```bash
-# Benchmark YOLOv8n speed and accuracy on the COCO128 dataset for all export formats
+# Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
-yolo benchmark model=yolov8n.pt data=coco128.yaml imgsz=640
+yolo benchmark model=yolov8n.pt data=coco8.yaml imgsz=640
 ```
-Note that benchmarking results might vary based on the exact hardware and software configuration of a system, as well as the current workload of the system at the time the benchmarks are run. For the most reliable results use a dataset with a large number of images, i.e. `data='coco128.yaml'` (128 val images), or `data='coco.yaml'` (5000 val images).
+Note that benchmarking results might vary based on the exact hardware and software configuration of a system, as well as the current workload of the system at the time the benchmarks are run. For the most reliable results use a dataset with a large number of images, i.e. `data='coco8.yaml'` (4 val images), or `data='coco.yaml'` (5000 val images).
 !!! Note

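`model.benchmarks(...)` as written in these snippets does not appear to be a method on the `YOLO` class; the documented benchmark entry point lives in `ultralytics.utils.benchmarks`. A hedged sketch of the equivalent call:

```python
# Equivalent benchmark via the utility function (a sketch; assumes a CUDA
# device 0 is available -- use device="cpu" otherwise).
from ultralytics.utils.benchmarks import benchmark

benchmark(model="yolov8n.pt", data="coco8.yaml", imgsz=640, half=False, device=0)
```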
@@ -219,22 +219,22 @@ Object counting with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
 ### Optional Arguments `set_args`
 | Name               | Type        | Default                    | Description                                      |
 |--------------------|-------------|----------------------------|--------------------------------------------------|
 | `view_img`         | `bool`      | `False`                    | Display frames with counts                       |
 | `view_in_counts`   | `bool`      | `True`                     | Display in-counts only on video frame            |
 | `view_out_counts`  | `bool`      | `True`                     | Display out-counts only on video frame           |
 | `line_thickness`   | `int`       | `2`                        | Increase bounding boxes and count text thickness |
 | `reg_pts`          | `list`      | `[(20, 400), (1260, 400)]` | Points defining the Region Area                  |
 | `classes_names`    | `dict`      | `model.model.names`        | Dictionary of Class Names                        |
 | `count_reg_color`  | `RGB Color` | `(255, 0, 255)`            | Color of the Object counting Region or Line      |
 | `track_thickness`  | `int`       | `2`                        | Thickness of Tracking Lines                      |
 | `draw_tracks`      | `bool`      | `False`                    | Enable drawing Track lines                       |
 | `track_color`      | `RGB Color` | `(0, 255, 0)`              | Color for each track line                        |
 | `line_dist_thresh` | `int`       | `15`                       | Euclidean Distance threshold for line counter    |
 | `count_txt_color`  | `RGB Color` | `(255, 255, 255)`          | Foreground color for Object counts text          |
 | `region_thickness` | `int`       | `5`                        | Thickness for object counter region or line      |
 | `count_bg_color`   | `RGB Color` | `(255, 255, 255)`          | Count highlighter color                          |
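A minimal counting loop wired to the defaults above; a sketch following the `set_args` API this table documents (the solutions module layout may differ between releases, and the video path is hypothetical):

```python
import cv2
from ultralytics import YOLO
from ultralytics.solutions import object_counter

model = YOLO("yolov8n.pt")
counter = object_counter.ObjectCounter()
counter.set_args(
    view_img=True,
    reg_pts=[(20, 400), (1260, 400)],  # counting-line endpoints (table defaults)
    classes_names=model.model.names,
    draw_tracks=True,
)

cap = cv2.VideoCapture("traffic.mp4")  # hypothetical input video
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    tracks = model.track(frame, persist=True, show=False)  # track objects frame-to-frame
    frame = counter.start_counting(frame, tracks)  # update and draw in/out counts
cap.release()
```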
 ### Arguments `model.track`

@@ -19,7 +19,6 @@ Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://gi
 <strong>Watch:</strong> Workouts Monitoring using Ultralytics YOLOv8 | Pushups, Pullups, Ab Workouts
 </p>
 ## Advantages of Workouts Monitoring?
 - **Optimized Performance:** Tailoring workouts based on monitoring data for better results.
@@ -157,4 +156,4 @@ Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://gi
 | `conf`    | `float` | `0.3`  | Confidence Threshold                                         |
 | `iou`     | `float` | `0.5`  | IOU Threshold                                                |
 | `classes` | `list`  | `None` | filter results by class, i.e. classes=0, or classes=[0,2,3]  |
 | `verbose` | `bool`  | `True` | Display the object tracking results                          |

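These arguments map directly onto a tracking call; a small sketch (the video file name is hypothetical):

```python
from ultralytics import YOLO

model = YOLO("yolov8n-pose.pt")
# Track people only (class 0) using the defaults from the table above.
results = model.track("workout.mp4", conf=0.3, iou=0.5, classes=[0], verbose=True)
```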
@@ -1,7 +1,7 @@
 ---
 comments: true
 description: Learn how Ultralytics leverages Continuous Integration (CI) for maintaining high-quality code. Explore our CI tests and the status of these tests for our repositories.
-keywords: continuous integration, software development, CI tests, Ultralytics repositories, high-quality code, Docker Deployment, Broken Links, CodeQL, PyPi Publishing
+keywords: continuous integration, software development, CI tests, Ultralytics repositories, high-quality code, Docker Deployment, Broken Links, CodeQL, PyPI Publishing
 ---
 # Continuous Integration (CI)
@@ -16,13 +16,13 @@ Here's a brief description of our CI actions:
 - **[Docker Deployment](https://github.com/ultralytics/ultralytics/actions/workflows/docker.yaml):** This test checks the deployment of the project using Docker to ensure the Dockerfile and related scripts are working correctly.
 - **[Broken Links](https://github.com/ultralytics/ultralytics/actions/workflows/links.yml):** This test scans the codebase for any broken or dead links in our markdown or HTML files.
 - **[CodeQL](https://github.com/ultralytics/ultralytics/actions/workflows/codeql.yaml):** CodeQL is a tool from GitHub that performs semantic analysis on our code, helping to find potential security vulnerabilities and maintain high-quality code.
-- **[PyPi Publishing](https://github.com/ultralytics/ultralytics/actions/workflows/publish.yml):** This test checks if the project can be packaged and published to PyPi without any errors.
+- **[PyPI Publishing](https://github.com/ultralytics/ultralytics/actions/workflows/publish.yml):** This test checks if the project can be packaged and published to PyPI without any errors.
 ### CI Results
 Below is the table showing the status of these CI tests for our main repositories:
-| Repository | CI | Docker Deployment | Broken Links | CodeQL | PyPi and Docs Publishing |
+| Repository | CI | Docker Deployment | Broken Links | CodeQL | PyPI and Docs Publishing |
 |------------|----|-------------------|--------------|--------|--------------------------|
 | [yolov3](https://github.com/ultralytics/yolov3) | [![YOLOv3 CI](https://github.com/ultralytics/yolov3/actions/workflows/ci-testing.yml/badge.svg)](https://github.com/ultralytics/yolov3/actions/workflows/ci-testing.yml) | [![Publish Docker Images](https://github.com/ultralytics/yolov3/actions/workflows/docker.yml/badge.svg)](https://github.com/ultralytics/yolov3/actions/workflows/docker.yml) | [![Check Broken links](https://github.com/ultralytics/yolov3/actions/workflows/links.yml/badge.svg)](https://github.com/ultralytics/yolov3/actions/workflows/links.yml) | [![CodeQL](https://github.com/ultralytics/yolov3/actions/workflows/codeql-analysis.yml/badge.svg)](https://github.com/ultralytics/yolov3/actions/workflows/codeql-analysis.yml) | |
 | [yolov5](https://github.com/ultralytics/yolov5) | [![YOLOv5 CI](https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml/badge.svg)](https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml) | [![Publish Docker Images](https://github.com/ultralytics/yolov5/actions/workflows/docker.yml/badge.svg)](https://github.com/ultralytics/yolov5/actions/workflows/docker.yml) | [![Check Broken links](https://github.com/ultralytics/yolov5/actions/workflows/links.yml/badge.svg)](https://github.com/ultralytics/yolov5/actions/workflows/links.yml) | [![CodeQL](https://github.com/ultralytics/yolov5/actions/workflows/codeql-analysis.yml/badge.svg)](https://github.com/ultralytics/yolov5/actions/workflows/codeql-analysis.yml) | |

@@ -80,7 +80,7 @@ Before diving into the usage instructions, be sure to check out the range of [YO
 model = YOLO(f'{model_variant}.pt')
 # Step 4: Setting Up Training Arguments
-args = dict(data="coco128.yaml", epochs=16)
+args = dict(data="coco8.yaml", epochs=16)
 task.connect(args)
 # Step 5: Initiating Model Training
@@ -97,7 +97,7 @@ Let’s understand the steps showcased in the usage code snippet above.
 **Step 3: Loading the YOLOv8 Model**: The selected YOLOv8 model is loaded using Ultralytics' YOLO class, preparing it for training.
-**Step 4: Setting Up Training Arguments**: Key training arguments like the dataset (`coco128.yaml`) and the number of epochs (`16`) are organized in a dictionary and connected to the ClearML task. This allows for tracking and potential modification via the ClearML UI. For a detailed understanding of the model training process and best practices, refer to our [YOLOv8 Model Training guide](../modes/train.md).
+**Step 4: Setting Up Training Arguments**: Key training arguments like the dataset (`coco8.yaml`) and the number of epochs (`16`) are organized in a dictionary and connected to the ClearML task. This allows for tracking and potential modification via the ClearML UI. For a detailed understanding of the model training process and best practices, refer to our [YOLOv8 Model Training guide](../modes/train.md).
 **Step 5: Initiating Model Training**: The model training is started with the specified arguments. The results of the training process are captured in the `results` variable.

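Assembled from Steps 1-5, a self-contained version of the ClearML flow might look like this (project and task names are placeholders):

```python
from clearml import Task
from ultralytics import YOLO

# Steps 1-2: create the task and record the chosen model variant.
task = Task.init(project_name="my-yolov8-project", task_name="train-yolov8n")  # placeholder names
model_variant = "yolov8n"
task.set_parameter("model_variant", model_variant)

# Step 3: load the model.
model = YOLO(f"{model_variant}.pt")

# Step 4: connect training arguments so they are tracked (and editable) in ClearML.
args = dict(data="coco8.yaml", epochs=16)
task.connect(args)

# Step 5: train.
results = model.train(**args)
```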
@@ -74,7 +74,7 @@ Before diving into the usage instructions, be sure to check out the range of [YO
 # train the model
 results = model.train(
-    data="coco128.yaml",
+    data="coco8.yaml",
     project="comet-example-yolov8-coco128",
     batch=32,
     save_period=1,

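The hunk is truncated mid-call; a plausible self-contained version, with the import and initialization the snippet assumes (the trailing arguments after `save_period` are assumptions):

```python
import comet_ml
from ultralytics import YOLO

comet_ml.init(project_name="comet-example-yolov8-coco128")  # project name kept as in the hunk

model = YOLO("yolov8n.pt")
results = model.train(
    data="coco8.yaml",
    project="comet-example-yolov8-coco128",
    batch=32,
    save_period=1,
    epochs=3,  # assumed value; the original continuation is not shown
)
```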
@@ -71,8 +71,8 @@ Welcome to the Ultralytics Integrations page! This page provides an overview of
 - [TFLite Edge TPU](edge-tpu.md): Developed by [Google](https://www.google.com) for optimizing TensorFlow Lite models on Edge TPUs, this model format ensures high-speed, efficient edge computing.
 - [TF.js](tfjs.md): Developed by [Google](https://www.google.com) to facilitate machine learning in browsers and Node.js, TF.js allows JavaScript-based deployment of ML models.
 - [PaddlePaddle](paddlepaddle.md): An open-source deep learning platform by [Baidu](https://www.baidu.com/), PaddlePaddle enables the efficient deployment of AI models and focuses on the scalability of industrial applications.
 - [NCNN](ncnn.md): Developed by [Tencent](http://www.tencent.com/), NCNN is an efficient neural network inference framework tailored for mobile devices. It enables direct deployment of AI models into apps, optimizing performance across various mobile platforms.

@@ -261,14 +261,14 @@ To reproduce the Ultralytics benchmarks above on all export [formats](../modes/e
 # Load a YOLOv8n PyTorch model
 model = YOLO('yolov8n.pt')
-# Benchmark YOLOv8n speed and accuracy on the COCO128 dataset for all export formats
+# Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
-results = model.benchmarks(data='coco128.yaml')
+results = model.benchmarks(data='coco8.yaml')
 ```
 === "CLI"
 ```bash
-# Benchmark YOLOv8n speed and accuracy on the COCO128 dataset for all export formats
+# Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
-yolo benchmark model=yolov8n.pt data=coco128.yaml
+yolo benchmark model=yolov8n.pt data=coco8.yaml
 ```
 Note that benchmarking results might vary based on the exact hardware and software configuration of a system, as well as the current workload of the system at the time the benchmarks are run. For the most reliable results use a dataset with a large number of images, i.e. `data='coco128.yaml'` (128 val images), or `data='coco.yaml'` (5000 val images).

@@ -112,13 +112,13 @@ In this example, we demonstrate how to use a custom search space for hyperparame
 model = YOLO("yolov8n.pt")
 # Run Ray Tune on the model
-result_grid = model.tune(data="coco128.yaml",
+result_grid = model.tune(data="coco8.yaml",
                          space={"lr0": tune.uniform(1e-5, 1e-1)},
                          epochs=50,
                          use_ray=True)
 ```
-In the code snippet above, we create a YOLO model with the "yolov8n.pt" pretrained weights. Then, we call the `tune()` method, specifying the dataset configuration with "coco128.yaml". We provide a custom search space for the initial learning rate `lr0` using a dictionary with the key "lr0" and the value `tune.uniform(1e-5, 1e-1)`. Finally, we pass additional training arguments, such as the number of epochs, directly to the tune method as `epochs=50`.
+In the code snippet above, we create a YOLO model with the "yolov8n.pt" pretrained weights. Then, we call the `tune()` method, specifying the dataset configuration with "coco8.yaml". We provide a custom search space for the initial learning rate `lr0` using a dictionary with the key "lr0" and the value `tune.uniform(1e-5, 1e-1)`. Finally, we pass additional training arguments, such as the number of epochs, directly to the tune method as `epochs=50`.
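With the imports the snippet assumes, the full example reads:

```python
from ray import tune
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Run Ray Tune with a custom search space for the initial learning rate.
result_grid = model.tune(
    data="coco8.yaml",
    space={"lr0": tune.uniform(1e-5, 1e-1)},
    epochs=50,
    use_ray=True,
)
```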
 ## Processing Ray Tune Results

@@ -67,7 +67,7 @@ Before diving into the usage instructions, be sure to check out the range of [YO
 model = YOLO('yolov8n.pt')
 # Train the model
-results = model.train(data='coco128.yaml', epochs=100, imgsz=640)
+results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
 ```
 Upon running the usage code snippet above, you can expect the following output:

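TensorBoard logging is toggled through the package settings rather than a training argument; a short sketch of the flow this page documents (the `settings.update` call follows the documented settings API):

```python
from ultralytics import YOLO, settings

settings.update({"tensorboard": True})  # ensure the TensorBoard callback is enabled

model = YOLO("yolov8n.pt")
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
# Inspect the logs afterwards with, e.g.:
#   tensorboard --logdir runs/detect/train   (default run directory; actual path may differ)
```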
@@ -32,7 +32,7 @@ Here are the key features that make TF.js a powerful tool for developers:
 ## Deployment Options with TensorFlow.js
 Before we dive into the process of exporting YOLOv8 models to the TF.js format, let's explore some typical deployment scenarios where this format is used.
 TF.js provides a range of options to deploy your machine learning models:

@@ -72,7 +72,7 @@ Before diving into the usage instructions for YOLOv8 model training with Weights
 # Step 2: Define the YOLOv8 Model and Dataset
 model_name = "yolov8n"
-dataset_name = "coco128.yaml"
+dataset_name = "coco8.yaml"
 model = YOLO(f"{model_name}.pt")
 # Step 3: Add W&B Callback for Ultralytics

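Completed past the callback step where the hunk stops, the W&B flow might look like this (project name and epoch count are placeholders; `add_wandb_callback` is W&B's documented Ultralytics integration):

```python
import wandb
from ultralytics import YOLO
from wandb.integration.ultralytics import add_wandb_callback

wandb.init(project="ultralytics", job_type="training")  # placeholder project name

# Step 2: define the model and dataset.
model_name = "yolov8n"
dataset_name = "coco8.yaml"
model = YOLO(f"{model_name}.pt")

# Step 3: attach the W&B callback so metrics and predictions are logged.
add_wandb_callback(model, enable_model_checkpointing=True)

results = model.train(data=dataset_name, epochs=5)  # epoch count assumed
wandb.finish()
```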
@@ -76,7 +76,7 @@ Arguments such as `model`, `data`, `imgsz`, `half`, `device`, and `verbose` prov
 | Key       | Default Value | Description |
 |-----------|---------------|-------------|
 | `model`   | `None`        | Specifies the path to the model file. Accepts both `.pt` and `.yaml` formats, e.g., `"yolov8n.pt"` for pre-trained models or configuration files. |
-| `data`    | `None`        | Path to a YAML file defining the dataset for benchmarking, typically including paths and settings for validation data. Example: `"coco128.yaml"`. |
+| `data`    | `None`        | Path to a YAML file defining the dataset for benchmarking, typically including paths and settings for validation data. Example: `"coco8.yaml"`. |
 | `imgsz`   | `640`         | The input image size for the model. Can be a single integer for square images or a tuple `(width, height)` for non-square, e.g., `(640, 480)`. |
 | `half`    | `False`       | Enables FP16 (half-precision) inference, reducing memory usage and possibly increasing speed on compatible hardware. Use `half=True` to enable. |
 | `int8`    | `False`       | Activates INT8 quantization for further optimized performance on supported devices, especially useful for edge devices. Set `int8=True` to use. |

@@ -47,7 +47,7 @@ The following are some notable features of YOLOv8's Train mode:
 ## Usage Examples
-Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. The training device can be specified using the `device` argument. If no argument is passed, GPU `device=0` will be used if available, otherwise `device=cpu` will be used. See the Arguments section below for a full list of training arguments.
+Train YOLOv8n on the COCO8 dataset for 100 epochs at image size 640. The training device can be specified using the `device` argument. If no argument is passed, GPU `device=0` will be used if available, otherwise `device=cpu` will be used. See the Arguments section below for a full list of training arguments.
 !!! Example "Single-GPU and CPU Training Example"
@@ -64,20 +64,20 @@ Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. The train
 model = YOLO('yolov8n.yaml').load('yolov8n.pt') # build from YAML and transfer weights
 # Train the model
-results = model.train(data='coco128.yaml', epochs=100, imgsz=640)
+results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
 ```
 === "CLI"
 ```bash
 # Build a new model from YAML and start training from scratch
-yolo detect train data=coco128.yaml model=yolov8n.yaml epochs=100 imgsz=640
+yolo detect train data=coco8.yaml model=yolov8n.yaml epochs=100 imgsz=640
 # Start training from a pretrained *.pt model
-yolo detect train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640
+yolo detect train data=coco8.yaml model=yolov8n.pt epochs=100 imgsz=640
 # Build a new model from YAML, transfer pretrained weights to it and start training
-yolo detect train data=coco128.yaml model=yolov8n.yaml pretrained=yolov8n.pt epochs=100 imgsz=640
+yolo detect train data=coco8.yaml model=yolov8n.yaml pretrained=yolov8n.pt epochs=100 imgsz=640
 ```
 ### Multi-GPU Training
@@ -97,14 +97,14 @@ Multi-GPU training allows for more efficient utilization of available hardware r
 model = YOLO('yolov8n.pt') # load a pretrained model (recommended for training)
 # Train the model with 2 GPUs
-results = model.train(data='coco128.yaml', epochs=100, imgsz=640, device=[0, 1])
+results = model.train(data='coco8.yaml', epochs=100, imgsz=640, device=[0, 1])
 ```
 === "CLI"
 ```bash
 # Start training from a pretrained *.pt model using GPUs 0 and 1
-yolo detect train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640 device=0,1
+yolo detect train data=coco8.yaml model=yolov8n.pt epochs=100 imgsz=640 device=0,1
 ```
 ### Apple M1 and M2 MPS Training
@@ -124,14 +124,14 @@ To enable training on Apple M1 and M2 chips, you should specify 'mps' as your de
 model = YOLO('yolov8n.pt') # load a pretrained model (recommended for training)
 # Train the model with MPS
-results = model.train(data='coco128.yaml', epochs=100, imgsz=640, device='mps')
+results = model.train(data='coco8.yaml', epochs=100, imgsz=640, device='mps')
 ```
 === "CLI"
 ```bash
 # Start training from a pretrained *.pt model using MPS
-yolo detect train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640 device=mps
+yolo detect train data=coco8.yaml model=yolov8n.pt epochs=100 imgsz=640 device=mps
 ```
 While leveraging the computational power of the M1/M2 chips, this enables more efficient processing of the training tasks. For more detailed guidance and advanced configuration options, please refer to the [PyTorch MPS documentation](https://pytorch.org/docs/stable/notes/mps.html).
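A small sketch tying the CUDA, multi-GPU, and MPS variants together with explicit device selection (the trainer also auto-selects when `device` is omitted):

```python
import torch
from ultralytics import YOLO

# Prefer CUDA, then Apple MPS, then CPU.
if torch.cuda.is_available():
    device = 0
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

model = YOLO("yolov8n.pt")
results = model.train(data="coco8.yaml", epochs=100, imgsz=640, device=device)
```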
@@ -178,7 +178,7 @@ The training settings for YOLO models encompass various hyperparameters and conf
 | Argument   | Default | Description |
 |------------|---------|-------------|
 | `model`    | `None`  | Specifies the model file for training. Accepts a path to either a `.pt` pretrained model or a `.yaml` configuration file. Essential for defining the model structure or initializing weights. |
-| `data`     | `None`  | Path to the dataset configuration file (e.g., `coco128.yaml`). This file contains dataset-specific parameters, including paths to training and validation data, class names, and number of classes. |
+| `data`     | `None`  | Path to the dataset configuration file (e.g., `coco8.yaml`). This file contains dataset-specific parameters, including paths to training and validation data, class names, and number of classes. |
 | `epochs`   | `100`   | Total number of training epochs. Each epoch represents a full pass over the entire dataset. Adjusting this value can affect training duration and model performance. |
 | `time`     | `None`  | Maximum training time in hours. If set, this overrides the `epochs` argument, allowing training to automatically stop after the specified duration. Useful for time-constrained training scenarios. |
 | `patience` | `100`   | Number of epochs to wait without improvement in validation metrics before early stopping the training. Helps prevent overfitting by stopping training when performance plateaus. |

@@ -47,7 +47,7 @@ These are the notable functionalities offered by YOLOv8's Val mode:
 ## Usage Examples
-Validate trained YOLOv8n model accuracy on the COCO128 dataset. No arguments need to be passed, as the `model` retains its training `data` and arguments as model attributes. See the Arguments section below for a full list of validation arguments.
+Validate trained YOLOv8n model accuracy on the COCO8 dataset. No arguments need to be passed, as the `model` retains its training `data` and arguments as model attributes. See the Arguments section below for a full list of validation arguments.
 !!! Example
@@ -79,22 +79,22 @@ Validate trained YOLOv8n model accuracy on the COCO128 dataset. No argument need
 When validating YOLO models, several arguments can be fine-tuned to optimize the evaluation process. These arguments control aspects such as input image size, batch processing, and performance thresholds. Below is a detailed breakdown of each argument to help you customize your validation settings effectively.
 | Argument      | Type    | Default | Description |
 |---------------|---------|---------|-------------|
-| `data`        | `str`   | `None`  | Specifies the path to the dataset configuration file (e.g., `coco128.yaml`). This file includes paths to validation data, class names, and number of classes. |
+| `data`        | `str`   | `None`  | Specifies the path to the dataset configuration file (e.g., `coco8.yaml`). This file includes paths to validation data, class names, and number of classes. |
 | `imgsz`       | `int`   | `640`   | Defines the size of input images. All images are resized to this dimension before processing. |
 | `batch`       | `int`   | `16`    | Sets the number of images per batch. Use `-1` for AutoBatch, which automatically adjusts based on GPU memory availability. |
 | `save_json`   | `bool`  | `False` | If `True`, saves the results to a JSON file for further analysis or integration with other tools. |
 | `save_hybrid` | `bool`  | `False` | If `True`, saves a hybrid version of labels that combines original annotations with additional model predictions. |
 | `conf`        | `float` | `0.001` | Sets the minimum confidence threshold for detections. Detections with confidence below this threshold are discarded. |
 | `iou`         | `float` | `0.6`   | Sets the Intersection Over Union (IoU) threshold for Non-Maximum Suppression (NMS). Helps in reducing duplicate detections. |
 | `max_det`     | `int`   | `300`   | Limits the maximum number of detections per image. Useful in dense scenes to prevent excessive detections. |
 | `half`        | `bool`  | `True`  | Enables half-precision (FP16) computation, reducing memory usage and potentially increasing speed with minimal impact on accuracy. |
 | `device`      | `str`   | `None`  | Specifies the device for validation (`cpu`, `cuda:0`, etc.). Allows flexibility in utilizing CPU or GPU resources. |
 | `dnn`         | `bool`  | `False` | If `True`, uses the OpenCV DNN module for ONNX model inference, offering an alternative to PyTorch inference methods. |
 | `plots`       | `bool`  | `False` | When set to `True`, generates and saves plots of predictions versus ground truth for visual evaluation of the model's performance. |
 | `rect`        | `bool`  | `False` | If `True`, uses rectangular inference for batching, reducing padding and potentially increasing speed and efficiency. |
 | `split`       | `str`   | `val`   | Determines the dataset split to use for validation (`val`, `test`, or `train`). Allows flexibility in choosing the data segment for performance evaluation. |
 Each of these settings plays a vital role in the validation process, allowing for a customizable and efficient evaluation of YOLO models. Adjusting these parameters according to your specific needs and resources can help achieve the best balance between accuracy and performance.

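A short validation sketch using a few of the arguments above, including how the returned metrics object is typically read:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
metrics = model.val(data="coco8.yaml", imgsz=640, conf=0.001, iou=0.6, plots=True)
print(metrics.box.map)    # mAP50-95
print(metrics.box.map50)  # mAP50
print(metrics.box.map75)  # mAP75
```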
@@ -161,7 +161,7 @@ The Ultralytics command line interface (CLI) allows for simple single-line comma
 Train a detection model for 10 epochs with an initial learning_rate of 0.01
 ```bash
-yolo train data=coco128.yaml model=yolov8n.pt epochs=10 lr0=0.01
+yolo train data=coco8.yaml model=yolov8n.pt epochs=10 lr0=0.01
 ```
 === "Predict"
@@ -175,7 +175,7 @@ The Ultralytics command line interface (CLI) allows for simple single-line comma
 Val a pretrained detection model at batch-size 1 and image size 640:
 ```bash
-yolo val model=yolov8n.pt data=coco128.yaml batch=1 imgsz=640
+yolo val model=yolov8n.pt data=coco8.yaml batch=1 imgsz=640
 ```
 === "Export"
@@ -225,8 +225,8 @@ For example, users can load a model, train it, evaluate its performance on a val
 # Load a pretrained YOLO model (recommended for training)
 model = YOLO('yolov8n.pt')
-# Train the model using the 'coco128.yaml' dataset for 3 epochs
+# Train the model using the 'coco8.yaml' dataset for 3 epochs
-results = model.train(data='coco128.yaml', epochs=3)
+results = model.train(data='coco8.yaml', epochs=3)
 # Evaluate the model's performance on the validation set
 results = model.val()

@@ -42,11 +42,11 @@ YOLOv8 pretrained Detect models are shown here. Detect, Segment and Pose models
 | [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x.pt) | 640 | 53.9 | 479.1 | 3.53 | 68.2 | 257.8 |
 - **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](https://cocodataset.org) dataset. <br>Reproduce by `yolo val detect data=coco.yaml device=0`
-- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val detect data=coco128.yaml batch=1 device=0|cpu`
+- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val detect data=coco8.yaml batch=1 device=0|cpu`
 ## Train
-Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
+Train YOLOv8n on the COCO8 dataset for 100 epochs at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
 !!! Example
@@ -61,19 +61,19 @@ Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. For a ful
 model = YOLO('yolov8n.yaml').load('yolov8n.pt') # build from YAML and transfer weights
 # Train the model
-results = model.train(data='coco128.yaml', epochs=100, imgsz=640)
+results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
 ```
 === "CLI"
 ```bash
 # Build a new model from YAML and start training from scratch
-yolo detect train data=coco128.yaml model=yolov8n.yaml epochs=100 imgsz=640
+yolo detect train data=coco8.yaml model=yolov8n.yaml epochs=100 imgsz=640
 # Start training from a pretrained *.pt model
-yolo detect train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640
+yolo detect train data=coco8.yaml model=yolov8n.pt epochs=100 imgsz=640
 # Build a new model from YAML, transfer pretrained weights to it and start training
-yolo detect train data=coco128.yaml model=yolov8n.yaml pretrained=yolov8n.pt epochs=100 imgsz=640
+yolo detect train data=coco8.yaml model=yolov8n.yaml pretrained=yolov8n.pt epochs=100 imgsz=640
 ```
 ### Dataset format
@@ -82,7 +82,7 @@ YOLO detection dataset format can be found in detail in the [Dataset Guide](../d
 ## Val
-Validate trained YOLOv8n model accuracy on the COCO128 dataset. No arguments need to be passed, as the `model` retains its training `data` and arguments as model attributes.
+Validate trained YOLOv8n model accuracy on the COCO8 dataset. No arguments need to be passed, as the `model` retains its training `data` and arguments as model attributes.
 !!! Example

@@ -42,7 +42,7 @@ YOLOv8 pretrained Segment models are shown here. Detect, Segment and Pose models
 | [YOLOv8x-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-seg.pt) | 640 | 53.4 | 43.4 | 712.1 | 4.02 | 71.8 | 344.1 |
 - **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](https://cocodataset.org) dataset. <br>Reproduce by `yolo val segment data=coco.yaml device=0`
-- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val segment data=coco128-seg.yaml batch=1 device=0|cpu`
+- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val segment data=coco8-seg.yaml batch=1 device=0|cpu`
 ## Train
@@ -61,19 +61,19 @@ Train YOLOv8n-seg on the COCO128-seg dataset for 100 epochs at image size 640. F
 model = YOLO('yolov8n-seg.yaml').load('yolov8n.pt') # build from YAML and transfer weights
 # Train the model
-results = model.train(data='coco128-seg.yaml', epochs=100, imgsz=640)
+results = model.train(data='coco8-seg.yaml', epochs=100, imgsz=640)
 ```
 === "CLI"
 ```bash
 # Build a new model from YAML and start training from scratch
-yolo segment train data=coco128-seg.yaml model=yolov8n-seg.yaml epochs=100 imgsz=640
+yolo segment train data=coco8-seg.yaml model=yolov8n-seg.yaml epochs=100 imgsz=640
 # Start training from a pretrained *.pt model
-yolo segment train data=coco128-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
+yolo segment train data=coco8-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
 # Build a new model from YAML, transfer pretrained weights to it and start training
-yolo segment train data=coco128-seg.yaml model=yolov8n-seg.yaml pretrained=yolov8n-seg.pt epochs=100 imgsz=640
+yolo segment train data=coco8-seg.yaml model=yolov8n-seg.yaml pretrained=yolov8n-seg.pt epochs=100 imgsz=640
 ```
 ### Dataset format

@@ -87,7 +87,7 @@ The training settings for YOLO models encompass various hyperparameters and conf
 | Argument   | Default | Description |
 |------------|---------|-------------|
 | `model`    | `None`  | Specifies the model file for training. Accepts a path to either a `.pt` pretrained model or a `.yaml` configuration file. Essential for defining the model structure or initializing weights. |
-| `data`     | `None`  | Path to the dataset configuration file (e.g., `coco128.yaml`). This file contains dataset-specific parameters, including paths to training and validation data, class names, and number of classes. |
+| `data`     | `None`  | Path to the dataset configuration file (e.g., `coco8.yaml`). This file contains dataset-specific parameters, including paths to training and validation data, class names, and number of classes. |
 | `epochs`   | `100`   | Total number of training epochs. Each epoch represents a full pass over the entire dataset. Adjusting this value can affect training duration and model performance. |
 | `time`     | `None`  | Maximum training time in hours. If set, this overrides the `epochs` argument, allowing training to automatically stop after the specified duration. Useful for time-constrained training scenarios. |
 | `patience` | `100`   | Number of epochs to wait without improvement in validation metrics before early stopping the training. Helps prevent overfitting by stopping training when performance plateaus. |
@@ -182,22 +182,22 @@ Visualization arguments:
 The val (validation) settings for YOLO models involve various hyperparameters and configurations used to evaluate the model's performance on a validation dataset. These settings influence the model's performance, speed, and accuracy. Common YOLO validation settings include batch size, validation frequency during training, and performance evaluation metrics. Other factors affecting the validation process include the validation dataset's size and composition, as well as the specific task the model is employed for.

 | Argument | Type | Default | Description |
 |---------------|---------|---------|----------------|
-| `data` | `str` | `None` | Specifies the path to the dataset configuration file (e.g., `coco128.yaml`). This file includes paths to validation data, class names, and number of classes. |
+| `data` | `str` | `None` | Specifies the path to the dataset configuration file (e.g., `coco8.yaml`). This file includes paths to validation data, class names, and number of classes. |
 | `imgsz` | `int` | `640` | Defines the size of input images. All images are resized to this dimension before processing. |
 | `batch` | `int` | `16` | Sets the number of images per batch. Use `-1` for AutoBatch, which automatically adjusts based on GPU memory availability. |
 | `save_json` | `bool` | `False` | If `True`, saves the results to a JSON file for further analysis or integration with other tools. |
 | `save_hybrid` | `bool` | `False` | If `True`, saves a hybrid version of labels that combines original annotations with additional model predictions. |
 | `conf` | `float` | `0.001` | Sets the minimum confidence threshold for detections. Detections with confidence below this threshold are discarded. |
 | `iou` | `float` | `0.6` | Sets the Intersection Over Union (IoU) threshold for Non-Maximum Suppression (NMS). Helps in reducing duplicate detections. |
 | `max_det` | `int` | `300` | Limits the maximum number of detections per image. Useful in dense scenes to prevent excessive detections. |
 | `half` | `bool` | `True` | Enables half-precision (FP16) computation, reducing memory usage and potentially increasing speed with minimal impact on accuracy. |
 | `device` | `str` | `None` | Specifies the device for validation (`cpu`, `cuda:0`, etc.). Allows flexibility in utilizing CPU or GPU resources. |
 | `dnn` | `bool` | `False` | If `True`, uses the OpenCV DNN module for ONNX model inference, offering an alternative to PyTorch inference methods. |
 | `plots` | `bool` | `False` | When set to `True`, generates and saves plots of predictions versus ground truth for visual evaluation of the model's performance. |
 | `rect` | `bool` | `False` | If `True`, uses rectangular inference for batching, reducing padding and potentially increasing speed and efficiency. |
 | `split` | `str` | `val` | Determines the dataset split to use for validation (`val`, `test`, or `train`). Allows flexibility in choosing the data segment for performance evaluation. |
 Careful tuning and experimentation with these settings are crucial for ensuring optimal performance on the validation dataset and for detecting and preventing overfitting.
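For reference, a minimal sketch of a validation call exercising several of these settings — the values are illustrative and assume a detection model:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Mirror the table settings explicitly; `batch=1` and `half=True` trade
# throughput for a memory-light FP16 validation run.
metrics = model.val(data="coco8.yaml", imgsz=640, batch=1, conf=0.001, iou=0.6, half=True)
print(metrics.box.map)  # mAP50-95 on the `val` split
```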

docs/en/usage/cli.md
@@ -37,7 +37,7 @@ The YOLO command line interface (CLI) allows for simple single-line commands wit
 Train a detection model for 10 epochs with an initial learning_rate of 0.01
 ```bash
-yolo train data=coco128.yaml model=yolov8n.pt epochs=10 lr0=0.01
+yolo train data=coco8.yaml model=yolov8n.pt epochs=10 lr0=0.01
 ```
 === "Predict"
@@ -51,7 +51,7 @@ The YOLO command line interface (CLI) allows for simple single-line commands wit
 Val a pretrained detection model at batch-size 1 and image size 640:
 ```bash
-yolo val model=yolov8n.pt data=coco128.yaml batch=1 imgsz=640
+yolo val model=yolov8n.pt data=coco8.yaml batch=1 imgsz=640
 ```
 === "Export"
@@ -90,15 +90,15 @@ Where:
 ## Train
-Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. For a full list of available arguments see the [Configuration](cfg.md) page.
+Train YOLOv8n on the COCO8 dataset for 100 epochs at image size 640. For a full list of available arguments, see the [Configuration](cfg.md) page.
 !!! Example "Example"
 === "Train"
-Start training YOLOv8n on COCO128 for 100 epochs at image-size 640.
+Start training YOLOv8n on COCO8 for 100 epochs at image-size 640.
 ```bash
-yolo detect train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640
+yolo detect train data=coco8.yaml model=yolov8n.pt epochs=100 imgsz=640
 ```
 === "Resume"
@@ -110,7 +110,7 @@ Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. For a ful
 ## Val
-Validate trained YOLOv8n model accuracy on the COCO128 dataset. No argument need to passed as the `model` retains it's training `data` and arguments as model attributes.
+Validate trained YOLOv8n model accuracy on the COCO8 dataset. No arguments need to be passed, as the `model` retains its training `data` and arguments as model attributes.
 !!! Example "Example"
@@ -196,7 +196,7 @@ Default arguments can be overridden by simply passing them as arguments in the C
 Train a detection model for `10 epochs` with `learning_rate` of `0.01`
 ```bash
-yolo detect train data=coco128.yaml model=yolov8n.pt epochs=10 lr0=0.01
+yolo detect train data=coco8.yaml model=yolov8n.pt epochs=10 lr0=0.01
 ```
 === "Predict"
@@ -210,7 +210,7 @@ Default arguments can be overridden by simply passing them as arguments in the C
 Validate a pretrained detection model at batch-size 1 and image size 640:
 ```bash
-yolo detect val model=yolov8n.pt data=coco128.yaml batch=1 imgsz=640
+yolo detect val model=yolov8n.pt data=coco8.yaml batch=1 imgsz=640
 ```
 ## Overriding default config file

docs/en/usage/python.md
@@ -32,8 +32,8 @@ For example, users can load a model, train it, evaluate its performance on a val
 # Load a pretrained YOLO model (recommended for training)
 model = YOLO('yolov8n.pt')
-# Train the model using the 'coco128.yaml' dataset for 3 epochs
-results = model.train(data='coco128.yaml', epochs=3)
+# Train the model using the 'coco8.yaml' dataset for 3 epochs
+results = model.train(data='coco8.yaml', epochs=3)
 # Evaluate the model's performance on the validation set
 results = model.val()
@@ -66,7 +66,7 @@ Train mode is used for training a YOLOv8 model on a custom dataset. In this mode
 from ultralytics import YOLO
 model = YOLO('yolov8n.yaml')
-results = model.train(data='coco128.yaml', epochs=5)
+results = model.train(data='coco8.yaml', epochs=5)
 ```
 === "Resume"
@@ -90,7 +90,7 @@ Val mode is used for validating a YOLOv8 model after it has been trained. In thi
 from ultralytics import YOLO
 model = YOLO('yolov8n.yaml')
-model.train(data='coco128.yaml', epochs=5)
+model.train(data='coco8.yaml', epochs=5)
 model.val()  # It'll automatically evaluate the data you trained.
 ```
@@ -103,7 +103,7 @@ Val mode is used for validating a YOLOv8 model after it has been trained. In thi
 # It'll use the data YAML file in model.pt if you don't set data.
 model.val()
 # or you can set the data you want to val
-model.val(data='coco128.yaml')
+model.val(data='coco8.yaml')
 ```
 [Val Examples](../modes/val.md){ .md-button }
@@ -259,7 +259,7 @@ Explorer API can be used to explore datasets with advanced semantic, vector-simi
 from ultralytics import Explorer
 # create an Explorer object
-exp = Explorer(data='coco128.yaml', model='yolov8n.pt')
+exp = Explorer(data='coco8.yaml', model='yolov8n.pt')
 exp.create_embeddings_table()
 similar = exp.get_similar(img='https://ultralytics.com/images/bus.jpg', limit=10)
@@ -280,7 +280,7 @@ Explorer API can be used to explore datasets with advanced semantic, vector-simi
 from ultralytics import Explorer
 # create an Explorer object
-exp = Explorer(data='coco128.yaml', model='yolov8n.pt')
+exp = Explorer(data='coco8.yaml', model='yolov8n.pt')
 exp.create_embeddings_table()
 similar = exp.get_similar(idx=1, limit=10)

docs/en/usage/simple-utilities.md
@@ -233,7 +233,7 @@ boxes.bboxes
 See the [`Bboxes` reference section](../reference/utils/instance.md#ultralytics.utils.instance.Bboxes) for more attributes and methods available.
 !!! tip
 Many of the following functions (and more) can be accessed using the [`Bboxes` class](#bounding-box-horizontal-instances), but if you prefer to work with the functions directly, see the next subsections on how to import these independently.
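A minimal sketch of the `Bboxes` workflow the tip refers to — the coordinates are arbitrary example values:

```python
import numpy as np
from ultralytics.utils.instance import Bboxes

# One box in (x1, y1, x2, y2) pixel coordinates
boxes = Bboxes(np.array([[10.0, 20.0, 110.0, 220.0]]), format="xyxy")
boxes.convert("xywh")  # in-place conversion to center-x, center-y, width, height
print(boxes.bboxes)  # [[ 60. 120. 100. 200.]]
```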
 ### Scaling Boxes

docs/en/yolov5/tutorials/clearml_logging_integration.md
@@ -67,13 +67,13 @@ This will enable integration with the YOLOv5 training script. Every training run
 If you want to change the `project_name` or `task_name`, use the `--project` and `--name` arguments of the `train.py` script; by default, the project will be called `YOLOv5` and the task `Training`. PLEASE NOTE: ClearML uses `/` as a delimiter for subprojects, so be careful when using `/` in your project name!
 ```bash
-python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache
+python train.py --img 640 --batch 16 --epochs 3 --data coco8.yaml --weights yolov5s.pt --cache
 ```
 or with custom project and task name:
 ```bash
-python train.py --project my_project --name my_training --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache
+python train.py --project my_project --name my_training --img 640 --batch 16 --epochs 3 --data coco8.yaml --weights yolov5s.pt --cache
 ```
 This will capture:

docs/overrides/main.html
@@ -3,10 +3,10 @@
 {% extends "base.html" %}
 {% block announce %}
 <div style="text-align: center;">
 <a href="https://hub.ultralytics.com/signup?utm_source=docs&utm_medium=banner&utm_campaign=cloud_training_release"
-   target="_blank" style="color: #FFFFFF;">
+   style="color: #FFFFFF;" target="_blank">
 Introducing Ultralytics HUB Cloud Training! ☁ Scalable. Simple. Smart. &nbsp;
 </a>
 </div>
 {% endblock %}

examples/YOLOv8-ONNXRuntime/main.py
@@ -30,7 +30,7 @@ class YOLOv8:
 self.iou_thres = iou_thres
 # Load the class names from the COCO dataset
-self.classes = yaml_load(check_yaml("coco128.yaml"))["names"]
+self.classes = yaml_load(check_yaml("coco8.yaml"))["names"]
 # Generate a color palette for the classes
 self.color_palette = np.random.uniform(0, 255, size=(len(self.classes), 3))

examples/YOLOv8-OpenCV-ONNX-Python/main.py
@@ -8,7 +8,7 @@ import numpy as np
 from ultralytics.utils import ASSETS, yaml_load
 from ultralytics.utils.checks import check_yaml
-CLASSES = yaml_load(check_yaml("coco128.yaml"))["names"]
+CLASSES = yaml_load(check_yaml("coco8.yaml"))["names"]
 colors = np.random.uniform(0, 255, size=(len(CLASSES), 3))
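This swap is safe for the example scripts because `coco8.yaml` carries the same 80-class COCO name map as `coco128.yaml`, so class lookups and color palettes are unchanged. A quick sketch to confirm:

```python
from ultralytics.utils import yaml_load
from ultralytics.utils.checks import check_yaml

names = yaml_load(check_yaml("coco8.yaml"))["names"]  # dict of {index: class name}
print(len(names), names[0])  # 80 person
```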

examples/YOLOv8-OpenCV-int8-tflite-Python/main.py
@@ -102,7 +102,7 @@ class Yolov8TFLite:
 self.iou_thres = iou_thres
 # Load the class names from the COCO dataset
-self.classes = yaml_load(check_yaml("coco128.yaml"))["names"]
+self.classes = yaml_load(check_yaml("coco8.yaml"))["names"]
 # Generate a color palette for the classes
 self.color_palette = np.random.uniform(0, 255, size=(len(self.classes), 3))

examples/YOLOv8-Segmentation-ONNXRuntime-Python/main.py
@@ -37,7 +37,7 @@ class YOLOv8Seg:
 self.model_height, self.model_width = [x.shape for x in self.session.get_inputs()][0][-2:]
 # Load COCO class names
-self.classes = yaml_load(check_yaml("coco128.yaml"))["names"]
+self.classes = yaml_load(check_yaml("coco8.yaml"))["names"]
 # Create color palette
 self.color_palette = Colors()

examples/tutorial.ipynb
@@ -425,7 +425,7 @@
 "model = YOLO('yolov8n.pt') # load a pretrained model (recommended for training)\n",
 "\n",
 "# Use the model\n",
-"results = model.train(data='coco128.yaml', epochs=3) # train the model\n",
+"results = model.train(data='coco8.yaml', epochs=3) # train the model\n",
 "results = model.val() # evaluate model performance on the validation set\n",
 "results = model('https://ultralytics.com/images/bus.jpg') # predict on an image\n",
 "results = model.export(format='onnx') # export the model to ONNX format"
@@ -467,7 +467,7 @@
 "from ultralytics import YOLO\n",
 "\n",
 "model = YOLO('yolov8n.pt') # load a pretrained YOLOv8n detection model\n",
-"model.train(data='coco128.yaml', epochs=3) # train the model\n",
+"model.train(data='coco8.yaml', epochs=3) # train the model\n",
 "model('https://ultralytics.com/images/bus.jpg') # predict on an image"
 ],
 "metadata": {
@@ -494,7 +494,7 @@
 "from ultralytics import YOLO\n",
 "\n",
 "model = YOLO('yolov8n-seg.pt') # load a pretrained YOLOv8n segmentation model\n",
-"model.train(data='coco128-seg.yaml', epochs=3) # train the model\n",
+"model.train(data='coco8-seg.yaml', epochs=3) # train the model\n",
 "model('https://ultralytics.com/images/bus.jpg') # predict on an image"
 ],
 "metadata": {

ultralytics/__init__.py
@@ -1,6 +1,6 @@
 # Ultralytics YOLO 🚀, AGPL-3.0 license
-__version__ = "8.2.1"
+__version__ = "8.2.2"
 from ultralytics.data.explorer.explorer import Explorer
 from ultralytics.models import RTDETR, SAM, YOLO, YOLOWorld

ultralytics/cfg/__init__.py
@@ -66,13 +66,13 @@ CLI_HELP_MSG = f"""
 See all ARGS at https://docs.ultralytics.com/usage/cfg or with 'yolo cfg'
 1. Train a detection model for 10 epochs with an initial learning_rate of 0.01
-yolo train data=coco128.yaml model=yolov8n.pt epochs=10 lr0=0.01
+yolo train data=coco8.yaml model=yolov8n.pt epochs=10 lr0=0.01
 2. Predict a YouTube video using a pretrained segmentation model at image size 320:
 yolo predict model=yolov8n-seg.pt source='https://youtu.be/LNwODJXcvt4' imgsz=320
 3. Val a pretrained detection model at batch-size 1 and image size 640:
-yolo val model=yolov8n.pt data=coco128.yaml batch=1 imgsz=640
+yolo val model=yolov8n.pt data=coco8.yaml batch=1 imgsz=640
 4. Export a YOLOv8n classification model to ONNX format at image size 224 by 128 (no TASK required)
 yolo export model=yolov8n-cls.pt format=onnx imgsz=224,128

ultralytics/cfg/default.yaml
@@ -6,7 +6,7 @@ mode: train # (str) YOLO mode, i.e. train, val, predict, export, track, benchmar
 # Train settings -------------------------------------------------------------------------------------------------------
 model: # (str, optional) path to model file, i.e. yolov8n.pt, yolov8n.yaml
-data: # (str, optional) path to data file, i.e. coco128.yaml
+data: # (str, optional) path to data file, i.e. coco8.yaml
 epochs: 100 # (int) number of epochs to train for
 time: # (float, optional) number of hours to train for, overrides epochs if supplied
 patience: 100 # (int) epochs to wait for no observable improvement for early stopping of training
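For context, `default.yaml` supplies the global defaults; anything passed explicitly to the CLI or Python API overrides them for that run only. A minimal sketch, assuming the keys shown above:

```python
from ultralytics import YOLO

# Unset arguments fall back to ultralytics/cfg/default.yaml
# (e.g., patience=100); explicit keyword arguments override them.
model = YOLO("yolov8n.pt")
model.train(data="coco8.yaml", epochs=10)
```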

ultralytics/cfg/models/README.md
@@ -11,7 +11,7 @@ To get started, simply browse through the models in this directory and find one
 Model `*.yaml` files may be used directly in the Command Line Interface (CLI) with a `yolo` command:
 ```bash
-yolo task=detect mode=train model=yolov8n.yaml data=coco128.yaml epochs=100
+yolo task=detect mode=train model=yolov8n.yaml data=coco8.yaml epochs=100
 ```
 They may also be used directly in a Python environment, and they accept the same [arguments](https://docs.ultralytics.com/usage/cfg/) as in the CLI example above:
@@ -22,7 +22,7 @@ from ultralytics import YOLO
 model = YOLO("model.yaml") # build a YOLOv8n model from scratch
 # YOLO("model.pt") use pre-trained model if available
 model.info() # display model information
-model.train(data="coco128.yaml", epochs=100) # train the model
+model.train(data="coco8.yaml", epochs=100) # train the model
 ```
 ## Pre-trained Model Architectures

ultralytics/engine/trainer.py
@@ -3,7 +3,7 @@
 Train a model on a dataset.
 Usage:
-    $ yolo mode=train model=yolov8n.pt data=coco128.yaml imgsz=640 epochs=100 batch=16
+    $ yolo mode=train model=yolov8n.pt data=coco8.yaml imgsz=640 epochs=100 batch=16
 """
 import gc

ultralytics/engine/validator.py
@@ -3,7 +3,7 @@
 Check a model's accuracy on a test or val split of a dataset.
 Usage:
-    $ yolo mode=val model=yolov8n.pt data=coco128.yaml imgsz=640
+    $ yolo mode=val model=yolov8n.pt data=coco8.yaml imgsz=640
 Usage - formats:
     $ yolo mode=val model=yolov8n.pt # PyTorch

ultralytics/utils/__init__.py
@@ -61,7 +61,7 @@ HELP_MSG = """
 model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
 # Use the model
-results = model.train(data="coco128.yaml", epochs=3) # train the model
+results = model.train(data="coco8.yaml", epochs=3) # train the model
 results = model.val() # evaluate model performance on the validation set
 results = model('https://ultralytics.com/images/bus.jpg') # predict on an image
 success = model.export(format='onnx') # export the model to ONNX format
@@ -78,13 +78,13 @@ HELP_MSG = """
 See all ARGS at https://docs.ultralytics.com/usage/cfg or with 'yolo cfg'
 - Train a detection model for 10 epochs with an initial learning_rate of 0.01
-yolo detect train data=coco128.yaml model=yolov8n.pt epochs=10 lr0=0.01
+yolo detect train data=coco8.yaml model=yolov8n.pt epochs=10 lr0=0.01
 - Predict a YouTube video using a pretrained segmentation model at image size 320:
 yolo segment predict model=yolov8n-seg.pt source='https://youtu.be/LNwODJXcvt4' imgsz=320
 - Val a pretrained detection model at batch-size 1 and image size 640:
-yolo detect val model=yolov8n.pt data=coco128.yaml batch=1 imgsz=640
+yolo detect val model=yolov8n.pt data=coco8.yaml batch=1 imgsz=640
 - Export a YOLOv8n classification model to ONNX format at image size 224 by 128 (no TASK required)
 yolo export model=yolov8n-cls.pt format=onnx imgsz=224,128
