From 1110258d379bed8d623068ff7ceda8c9290f0774 Mon Sep 17 00:00:00 2001
From: Glenn Jocher
Date: Thu, 18 Apr 2024 20:47:21 -0700
Subject: [PATCH] `ultralytics 8.2.2` replace COCO128 with COCO8 (#10167)
Signed-off-by: Glenn Jocher
---
README.md | 2 +-
README.zh-CN.md | 2 +-
docs/README.md | 8 ++---
docs/en/datasets/detect/index.md | 1 -
docs/en/datasets/pose/index.md | 4 +--
docs/en/datasets/segment/index.md | 4 +--
docs/en/guides/azureml-quickstart.md | 4 +--
docs/en/guides/nvidia-jetson.md | 10 +++---
docs/en/guides/object-counting.md | 32 ++++++++---------
docs/en/guides/workouts-monitoring.md | 3 +-
docs/en/help/CI.md | 6 ++--
docs/en/integrations/clearml.md | 4 +--
docs/en/integrations/comet.md | 2 +-
docs/en/integrations/index.md | 4 +--
docs/en/integrations/openvino.md | 8 ++---
docs/en/integrations/ray-tune.md | 4 +--
docs/en/integrations/tensorboard.md | 2 +-
docs/en/integrations/tfjs.md | 2 +-
docs/en/integrations/weights-biases.md | 2 +-
docs/en/modes/benchmark.md | 2 +-
docs/en/modes/train.md | 20 +++++------
docs/en/modes/val.md | 34 +++++++++----------
docs/en/quickstart.md | 8 ++---
docs/en/tasks/detect.md | 14 ++++----
docs/en/tasks/segment.md | 10 +++---
docs/en/usage/cfg.md | 34 +++++++++----------
docs/en/usage/cli.md | 16 ++++-----
docs/en/usage/python.md | 14 ++++----
docs/en/usage/simple-utilities.md | 2 +-
.../tutorials/clearml_logging_integration.md | 4 +--
docs/overrides/main.html | 12 +++----
examples/YOLOv8-ONNXRuntime/main.py | 2 +-
examples/YOLOv8-OpenCV-ONNX-Python/main.py | 2 +-
.../YOLOv8-OpenCV-int8-tflite-Python/main.py | 2 +-
.../main.py | 2 +-
examples/tutorial.ipynb | 6 ++--
ultralytics/__init__.py | 2 +-
ultralytics/cfg/__init__.py | 4 +--
ultralytics/cfg/default.yaml | 2 +-
ultralytics/cfg/models/README.md | 4 +--
ultralytics/engine/trainer.py | 2 +-
ultralytics/engine/validator.py | 2 +-
ultralytics/utils/__init__.py | 6 ++--
43 files changed, 154 insertions(+), 156 deletions(-)
diff --git a/README.md b/README.md
index c5e1f62c54..ac1c0aee18 100644
--- a/README.md
+++ b/README.md
@@ -88,7 +88,7 @@ model = YOLO("yolov8n.yaml") # build a new model from scratch
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
# Use the model
-model.train(data="coco128.yaml", epochs=3) # train the model
+model.train(data="coco8.yaml", epochs=3) # train the model
metrics = model.val() # evaluate model performance on the validation set
results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
path = model.export(format="onnx") # export the model to ONNX format
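As a quick sanity check of the COCO8 default introduced here, a minimal sketch of inspecting the resulting validation metrics (assuming an installed `ultralytics`; the `metrics.box.map*` attributes follow the documented detection metrics object):

```python
from ultralytics import YOLO

# COCO8 is tiny (8 images: 4 train, 4 val), so a 3-epoch smoke test runs in seconds
model = YOLO("yolov8n.pt")
model.train(data="coco8.yaml", epochs=3)

metrics = model.val()  # reuses the COCO8 val split stored on the model
print(metrics.box.map)    # mAP50-95
print(metrics.box.map50)  # mAP50
```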
diff --git a/README.zh-CN.md b/README.zh-CN.md
index 03996f3941..ad39607508 100644
--- a/README.zh-CN.md
+++ b/README.zh-CN.md
@@ -90,7 +90,7 @@ model = YOLO("yolov8n.yaml") # 从头开始构建新模型
model = YOLO("yolov8n.pt") # 加载预训练模型(建议用于训练)
# 使用模型
-model.train(data="coco128.yaml", epochs=3) # 训练模型
+model.train(data="coco8.yaml", epochs=3) # 训练模型
metrics = model.val() # 在验证集上评估模型性能
results = model("https://ultralytics.com/images/bus.jpg") # 对图像进行预测
success = model.export(format="onnx") # 将模型导出为 ONNX 格式
diff --git a/docs/README.md b/docs/README.md
index 954e130cd0..5a972d2246 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -43,13 +43,13 @@ mkdocs serve
- #### Command Breakdown:
- - `mkdocs` is the main MkDocs command-line interface.
- - `serve` is the subcommand to build and locally serve your documentation.
+ - `mkdocs` is the main MkDocs command-line interface.
+ - `serve` is the subcommand to build and locally serve your documentation.
- 🧐 Note:
- - Grasp changes to the docs in real-time as `mkdocs serve` supports live reloading.
- - To stop the local server, press `CTRL+C`.
+ - Grasp changes to the docs in real-time as `mkdocs serve` supports live reloading.
+ - To stop the local server, press `CTRL+C`.
## 🌍 Building and Serving Multi-Language
diff --git a/docs/en/datasets/detect/index.md b/docs/en/datasets/detect/index.md
index 1e11c18573..2e0279060f 100644
--- a/docs/en/datasets/detect/index.md
+++ b/docs/en/datasets/detect/index.md
@@ -44,7 +44,6 @@ When using the Ultralytics YOLO format, organize your training and validation im
-
## Usage
Here's how you can use these formats to train your model:
diff --git a/docs/en/datasets/pose/index.md b/docs/en/datasets/pose/index.md
index f99ec538a0..3b4ad54081 100644
--- a/docs/en/datasets/pose/index.md
+++ b/docs/en/datasets/pose/index.md
@@ -75,13 +75,13 @@ The `train` and `val` fields specify the paths to the directories containing the
model = YOLO('yolov8n-pose.pt') # load a pretrained model (recommended for training)
# Train the model
- results = model.train(data='coco128-pose.yaml', epochs=100, imgsz=640)
+ results = model.train(data='coco8-pose.yaml', epochs=100, imgsz=640)
```
=== "CLI"
```bash
# Start training from a pretrained *.pt model
- yolo detect train data=coco128-pose.yaml model=yolov8n-pose.pt epochs=100 imgsz=640
+ yolo pose train data=coco8-pose.yaml model=yolov8n-pose.pt epochs=100 imgsz=640
```
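As a follow-up to the training example above, a hedged sketch of reading keypoints back at predict time (the `Results.keypoints` container is the documented accessor; the image URL mirrors the one used elsewhere in these docs):

```python
from ultralytics import YOLO

# Run a pretrained pose model on a sample image
model = YOLO("yolov8n-pose.pt")
results = model("https://ultralytics.com/images/bus.jpg")

# One Keypoints object per image; .xy is (num_persons, num_keypoints, 2) in pixels
print(results[0].keypoints.xy)
```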
## Supported Datasets
diff --git a/docs/en/datasets/segment/index.md b/docs/en/datasets/segment/index.md
index cff7f8aa02..5cde021f5d 100644
--- a/docs/en/datasets/segment/index.md
+++ b/docs/en/datasets/segment/index.md
@@ -77,13 +77,13 @@ The `train` and `val` fields specify the paths to the directories containing the
model = YOLO('yolov8n-seg.pt') # load a pretrained model (recommended for training)
# Train the model
- results = model.train(data='coco128-seg.yaml', epochs=100, imgsz=640)
+ results = model.train(data='coco8-seg.yaml', epochs=100, imgsz=640)
```
=== "CLI"
```bash
# Start training from a pretrained *.pt model
- yolo detect train data=coco128-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
+ yolo segment train data=coco8-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
```
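Similarly for segmentation, a short sketch of retrieving the predicted masks at predict time (assuming the documented `Results.masks` accessor):

```python
from ultralytics import YOLO

# Run a pretrained segmentation model on a sample image
model = YOLO("yolov8n-seg.pt")
results = model("https://ultralytics.com/images/bus.jpg")

# .xy is a list of (N, 2) polygon arrays in pixel coordinates, one per instance
print(len(results[0].masks.xy))
```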
## Supported Datasets
diff --git a/docs/en/guides/azureml-quickstart.md b/docs/en/guides/azureml-quickstart.md
index 56b1cea1f7..11fb9c5b10 100644
--- a/docs/en/guides/azureml-quickstart.md
+++ b/docs/en/guides/azureml-quickstart.md
@@ -74,7 +74,7 @@ yolo predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'
Train a detection model for 10 epochs with an initial learning_rate of 0.01:
```bash
-yolo train data=coco128.yaml model=yolov8n.pt epochs=10 lr0=0.01
+yolo train data=coco8.yaml model=yolov8n.pt epochs=10 lr0=0.01
```
You can find more [instructions to use the Ultralytics CLI here](../quickstart.md#use-ultralytics-with-cli).
@@ -131,7 +131,7 @@ from ultralytics import YOLO
model = YOLO("yolov8n.pt") # load an official YOLOv8n model
# Use the model
-model.train(data="coco128.yaml", epochs=3) # train the model
+model.train(data="coco8.yaml", epochs=3) # train the model
metrics = model.val() # evaluate model performance on the validation set
results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
path = model.export(format="onnx") # export the model to ONNX format
diff --git a/docs/en/guides/nvidia-jetson.md b/docs/en/guides/nvidia-jetson.md
index b8d90dffa9..dbf7e261c7 100644
--- a/docs/en/guides/nvidia-jetson.md
+++ b/docs/en/guides/nvidia-jetson.md
@@ -205,17 +205,17 @@ To reproduce the above Ultralytics benchmarks on all export [formats](../modes/e
# Load a YOLOv8n PyTorch model
model = YOLO('yolov8n.pt')
- # Benchmark YOLOv8n speed and accuracy on the COCO128 dataset for all all export formats
- results = model.benchmarks(data='coco128.yaml', imgsz=640)
+ # Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
+ results = model.benchmarks(data='coco8.yaml', imgsz=640)
```
=== "CLI"
```bash
- # Benchmark YOLOv8n speed and accuracy on the COCO128 dataset for all all export formats
- yolo benchmark model=yolov8n.pt data=coco128.yaml imgsz=640
+ # Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
+ yolo benchmark model=yolov8n.pt data=coco8.yaml imgsz=640
```
- Note that benchmarking results might vary based on the exact hardware and software configuration of a system, as well as the current workload of the system at the time the benchmarks are run. For the most reliable results use a dataset with a large number of images, i.e. `data='coco128.yaml' (128 val images), or `data='coco.yaml'` (5000 val images).
+ Note that benchmarking results might vary based on the exact hardware and software configuration of a system, as well as the current workload of the system at the time the benchmarks are run. For the most reliable results use a dataset with a large number of validation images, i.e. `data='coco.yaml'` (5000 val images) rather than `data='coco8.yaml'` (4 val images).
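Where only one export format is of interest, the standalone benchmark helper can be pointed at COCO8 as well; a sketch assuming `ultralytics.utils.benchmarks.benchmark` keeps its documented keyword arguments:

```python
from ultralytics.utils.benchmarks import benchmark

# Benchmark yolov8n.pt on COCO8 at imgsz=640; device=0 assumes a CUDA GPU is available
benchmark(model="yolov8n.pt", data="coco8.yaml", imgsz=640, half=False, device=0)
```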
!!! Note
diff --git a/docs/en/guides/object-counting.md b/docs/en/guides/object-counting.md
index 725b1b51fb..b3cc55d9d0 100644
--- a/docs/en/guides/object-counting.md
+++ b/docs/en/guides/object-counting.md
@@ -219,22 +219,22 @@ Object counting with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
### Optional Arguments `set_args`
-| Name | Type | Default | Description |
-|-----------------------|-------------|----------------------------|--------------------------------------------------|
-| `view_img` | `bool` | `False` | Display frames with counts |
-| `view_in_counts` | `bool` | `True` | Display in-counts only on video frame |
-| `view_out_counts` | `bool` | `True` | Display out-counts only on video frame |
-| `line_thickness` | `int` | `2` | Increase bounding boxes and count text thickness |
-| `reg_pts` | `list` | `[(20, 400), (1260, 400)]` | Points defining the Region Area |
-| `classes_names` | `dict` | `model.model.names` | Dictionary of Class Names |
-| `count_reg_color` | `RGB Color` | `(255, 0, 255)` | Color of the Object counting Region or Line |
-| `track_thickness` | `int` | `2` | Thickness of Tracking Lines |
-| `draw_tracks` | `bool` | `False` | Enable drawing Track lines |
-| `track_color` | `RGB Color` | `(0, 255, 0)` | Color for each track line |
-| `line_dist_thresh` | `int` | `15` | Euclidean Distance threshold for line counter |
-| `count_txt_color` | `RGB Color` | `(255, 255, 255)` | Foreground color for Object counts text |
-| `region_thickness` | `int` | `5` | Thickness for object counter region or line |
-| `count_bg_color` | `RGB Color` | `(255, 255, 255)` | Count highlighter color |
+| Name | Type | Default | Description |
+|--------------------|-------------|----------------------------|--------------------------------------------------|
+| `view_img` | `bool` | `False` | Display frames with counts |
+| `view_in_counts` | `bool` | `True` | Display in-counts only on video frame |
+| `view_out_counts` | `bool` | `True` | Display out-counts only on video frame |
+| `line_thickness` | `int` | `2` | Increase bounding boxes and count text thickness |
+| `reg_pts` | `list` | `[(20, 400), (1260, 400)]` | Points defining the Region Area |
+| `classes_names` | `dict` | `model.model.names` | Dictionary of Class Names |
+| `count_reg_color` | `RGB Color` | `(255, 0, 255)` | Color of the Object counting Region or Line |
+| `track_thickness` | `int` | `2` | Thickness of Tracking Lines |
+| `draw_tracks` | `bool` | `False` | Enable drawing Track lines |
+| `track_color` | `RGB Color` | `(0, 255, 0)` | Color for each track line |
+| `line_dist_thresh` | `int` | `15` | Euclidean Distance threshold for line counter |
+| `count_txt_color` | `RGB Color` | `(255, 255, 255)` | Foreground color for Object counts text |
+| `region_thickness` | `int` | `5` | Thickness for object counter region or line |
+| `count_bg_color` | `RGB Color` | `(255, 255, 255)` | Count highlighter color |
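To show how the `set_args` table above is consumed, a condensed sketch of the counting loop this page documents (the video path is a placeholder; `start_counting` is the entry point assumed from the page's usage snippet):

```python
import cv2
from ultralytics import YOLO
from ultralytics.solutions import object_counter

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video.mp4")  # placeholder input path

# Line counter across the frame; reg_pts mirrors the table default above
counter = object_counter.ObjectCounter()
counter.set_args(view_img=True,
                 reg_pts=[(20, 400), (1260, 400)],
                 classes_names=model.model.names,
                 draw_tracks=True)

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break
    tracks = model.track(im0, persist=True, show=False)
    im0 = counter.start_counting(im0, tracks)

cap.release()
cv2.destroyAllWindows()
```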
### Arguments `model.track`
diff --git a/docs/en/guides/workouts-monitoring.md b/docs/en/guides/workouts-monitoring.md
index 69ea55f015..3c60be8f2f 100644
--- a/docs/en/guides/workouts-monitoring.md
+++ b/docs/en/guides/workouts-monitoring.md
@@ -19,7 +19,6 @@ Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://gi
Watch: Workouts Monitoring using Ultralytics YOLOv8 | Pushups, Pullups, Ab Workouts
-
## Advantages of Workouts Monitoring
- **Optimized Performance:** Tailoring workouts based on monitoring data for better results.
@@ -157,4 +156,4 @@ Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://gi
| `conf` | `float` | `0.3` | Confidence Threshold |
| `iou` | `float` | `0.5` | IOU Threshold |
| `classes` | `list` | `None` | filter results by class, i.e. classes=0, or classes=[0,2,3] |
-| `verbose` | `bool` | `True` | Display the object tracking results |
\ No newline at end of file
+| `verbose` | `bool` | `True` | Display the object tracking results |
diff --git a/docs/en/help/CI.md b/docs/en/help/CI.md
index 62c8d3a8f0..173886fe7d 100644
--- a/docs/en/help/CI.md
+++ b/docs/en/help/CI.md
@@ -1,7 +1,7 @@
---
comments: true
description: Learn how Ultralytics leverages Continuous Integration (CI) for maintaining high-quality code. Explore our CI tests and the status of these tests for our repositories.
-keywords: continuous integration, software development, CI tests, Ultralytics repositories, high-quality code, Docker Deployment, Broken Links, CodeQL, PyPi Publishing
+keywords: continuous integration, software development, CI tests, Ultralytics repositories, high-quality code, Docker Deployment, Broken Links, CodeQL, PyPI Publishing
---
# Continuous Integration (CI)
@@ -16,13 +16,13 @@ Here's a brief description of our CI actions:
- **[Docker Deployment](https://github.com/ultralytics/ultralytics/actions/workflows/docker.yaml):** This test checks the deployment of the project using Docker to ensure the Dockerfile and related scripts are working correctly.
- **[Broken Links](https://github.com/ultralytics/ultralytics/actions/workflows/links.yml):** This test scans the codebase for any broken or dead links in our markdown or HTML files.
- **[CodeQL](https://github.com/ultralytics/ultralytics/actions/workflows/codeql.yaml):** CodeQL is a tool from GitHub that performs semantic analysis on our code, helping to find potential security vulnerabilities and maintain high-quality code.
-- **[PyPi Publishing](https://github.com/ultralytics/ultralytics/actions/workflows/publish.yml):** This test checks if the project can be packaged and published to PyPi without any errors.
+- **[PyPI Publishing](https://github.com/ultralytics/ultralytics/actions/workflows/publish.yml):** This test checks if the project can be packaged and published to PyPI without any errors.
### CI Results
Below is the table showing the status of these CI tests for our main repositories:
-| Repository | CI | Docker Deployment | Broken Links | CodeQL | PyPi and Docs Publishing |
+| Repository | CI | Docker Deployment | Broken Links | CodeQL | PyPI and Docs Publishing |
|-----------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [yolov3](https://github.com/ultralytics/yolov3) | [![YOLOv3 CI](https://github.com/ultralytics/yolov3/actions/workflows/ci-testing.yml/badge.svg)](https://github.com/ultralytics/yolov3/actions/workflows/ci-testing.yml) | [![Publish Docker Images](https://github.com/ultralytics/yolov3/actions/workflows/docker.yml/badge.svg)](https://github.com/ultralytics/yolov3/actions/workflows/docker.yml) | [![Check Broken links](https://github.com/ultralytics/yolov3/actions/workflows/links.yml/badge.svg)](https://github.com/ultralytics/yolov3/actions/workflows/links.yml) | [![CodeQL](https://github.com/ultralytics/yolov3/actions/workflows/codeql-analysis.yml/badge.svg)](https://github.com/ultralytics/yolov3/actions/workflows/codeql-analysis.yml) | |
| [yolov5](https://github.com/ultralytics/yolov5) | [![YOLOv5 CI](https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml/badge.svg)](https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml) | [![Publish Docker Images](https://github.com/ultralytics/yolov5/actions/workflows/docker.yml/badge.svg)](https://github.com/ultralytics/yolov5/actions/workflows/docker.yml) | [![Check Broken links](https://github.com/ultralytics/yolov5/actions/workflows/links.yml/badge.svg)](https://github.com/ultralytics/yolov5/actions/workflows/links.yml) | [![CodeQL](https://github.com/ultralytics/yolov5/actions/workflows/codeql-analysis.yml/badge.svg)](https://github.com/ultralytics/yolov5/actions/workflows/codeql-analysis.yml) | |
diff --git a/docs/en/integrations/clearml.md b/docs/en/integrations/clearml.md
index 92b069a422..3af8d707f0 100644
--- a/docs/en/integrations/clearml.md
+++ b/docs/en/integrations/clearml.md
@@ -80,7 +80,7 @@ Before diving into the usage instructions, be sure to check out the range of [YO
model = YOLO(f'{model_variant}.pt')
# Step 4: Setting Up Training Arguments
- args = dict(data="coco128.yaml", epochs=16)
+ args = dict(data="coco8.yaml", epochs=16)
task.connect(args)
# Step 5: Initiating Model Training
@@ -97,7 +97,7 @@ Let’s understand the steps showcased in the usage code snippet above.
**Step 3: Loading the YOLOv8 Model**: The selected YOLOv8 model is loaded using Ultralytics' YOLO class, preparing it for training.
-**Step 4: Setting Up Training Arguments**: Key training arguments like the dataset (`coco128.yaml`) and the number of epochs (`16`) are organized in a dictionary and connected to the ClearML task. This allows for tracking and potential modification via the ClearML UI. For a detailed understanding of the model training process and best practices, refer to our [YOLOv8 Model Training guide](../modes/train.md).
+**Step 4: Setting Up Training Arguments**: Key training arguments like the dataset (`coco8.yaml`) and the number of epochs (`16`) are organized in a dictionary and connected to the ClearML task. This allows for tracking and potential modification via the ClearML UI. For a detailed understanding of the model training process and best practices, refer to our [YOLOv8 Model Training guide](../modes/train.md).
**Step 5: Initiating Model Training**: The model training is started with the specified arguments. The results of the training process are captured in the `results` variable.
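Putting Steps 3 through 5 together with the task setup they depend on, a minimal sketch assuming the standard `clearml` SDK calls (`Task.init`, `task.connect`); project and task names are illustrative:

```python
from clearml import Task
from ultralytics import YOLO

# Initialize a ClearML task so the run is tracked (names are illustrative)
task = Task.init(project_name="yolov8-experiments", task_name="coco8-baseline")

model_variant = "yolov8n"
task.set_parameter("model_variant", model_variant)

model = YOLO(f"{model_variant}.pt")

args = dict(data="coco8.yaml", epochs=16)
task.connect(args)  # makes the args visible and editable in the ClearML UI

results = model.train(**args)
```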
diff --git a/docs/en/integrations/comet.md b/docs/en/integrations/comet.md
index 95ada28de8..99b376deed 100644
--- a/docs/en/integrations/comet.md
+++ b/docs/en/integrations/comet.md
@@ -74,7 +74,7 @@ Before diving into the usage instructions, be sure to check out the range of [YO
# train the model
results = model.train(
- data="coco128.yaml",
+ data="coco8.yaml",
project="comet-example-yolov8-coco128",
batch=32,
save_period=1,
diff --git a/docs/en/integrations/index.md b/docs/en/integrations/index.md
index 64b5badf38..46a90b0e54 100644
--- a/docs/en/integrations/index.md
+++ b/docs/en/integrations/index.md
@@ -71,8 +71,8 @@ Welcome to the Ultralytics Integrations page! This page provides an overview of
- [TFLite Edge TPU](edge-tpu.md): Developed by [Google](https://www.google.com) for optimizing TensorFlow Lite models on Edge TPUs, this model format ensures high-speed, efficient edge computing.
-- [TF.js](tfjs.md): Developed by [Google](https://www.google.com) to facilitate machine learning in browsers and Node.js, TF.js allows JavaScript-based deployment of ML models.
-
+- [TF.js](tfjs.md): Developed by [Google](https://www.google.com) to facilitate machine learning in browsers and Node.js, TF.js allows JavaScript-based deployment of ML models.
+
- [PaddlePaddle](paddlepaddle.md): An open-source deep learning platform by [Baidu](https://www.baidu.com/), PaddlePaddle enables the efficient deployment of AI models and focuses on the scalability of industrial applications.
- [NCNN](ncnn.md): Developed by [Tencent](http://www.tencent.com/), NCNN is an efficient neural network inference framework tailored for mobile devices. It enables direct deployment of AI models into apps, optimizing performance across various mobile platforms.
diff --git a/docs/en/integrations/openvino.md b/docs/en/integrations/openvino.md
index 4234d78d57..36d2293c7e 100644
--- a/docs/en/integrations/openvino.md
+++ b/docs/en/integrations/openvino.md
@@ -261,14 +261,14 @@ To reproduce the Ultralytics benchmarks above on all export [formats](../modes/e
# Load a YOLOv8n PyTorch model
model = YOLO('yolov8n.pt')
- # Benchmark YOLOv8n speed and accuracy on the COCO128 dataset for all all export formats
- results= model.benchmarks(data='coco128.yaml')
+ # Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
+ results = model.benchmarks(data='coco8.yaml')
```
=== "CLI"
```bash
- # Benchmark YOLOv8n speed and accuracy on the COCO128 dataset for all all export formats
- yolo benchmark model=yolov8n.pt data=coco128.yaml
+ # Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
+ yolo benchmark model=yolov8n.pt data=coco8.yaml
```
Note that benchmarking results might vary based on the exact hardware and software configuration of a system, as well as the current workload of the system at the time the benchmarks are run. For the most reliable results use a dataset with a large number of images, i.e. `data='coco128.yaml'` (128 val images), or `data='coco.yaml'` (5000 val images).
diff --git a/docs/en/integrations/ray-tune.md b/docs/en/integrations/ray-tune.md
index cc39682b23..c59fe2df87 100644
--- a/docs/en/integrations/ray-tune.md
+++ b/docs/en/integrations/ray-tune.md
@@ -112,13 +112,13 @@ In this example, we demonstrate how to use a custom search space for hyperparame
model = YOLO("yolov8n.pt")
# Run Ray Tune on the model
- result_grid = model.tune(data="coco128.yaml",
+ result_grid = model.tune(data="coco8.yaml",
space={"lr0": tune.uniform(1e-5, 1e-1)},
epochs=50,
use_ray=True)
```
-In the code snippet above, we create a YOLO model with the "yolov8n.pt" pretrained weights. Then, we call the `tune()` method, specifying the dataset configuration with "coco128.yaml". We provide a custom search space for the initial learning rate `lr0` using a dictionary with the key "lr0" and the value `tune.uniform(1e-5, 1e-1)`. Finally, we pass additional training arguments, such as the number of epochs directly to the tune method as `epochs=50`.
+In the code snippet above, we create a YOLO model with the "yolov8n.pt" pretrained weights. Then, we call the `tune()` method, specifying the dataset configuration with "coco8.yaml". We provide a custom search space for the initial learning rate `lr0` using a dictionary with the key "lr0" and the value `tune.uniform(1e-5, 1e-1)`. Finally, we pass additional training arguments, such as the number of epochs, directly to the `tune()` method as `epochs=50`.
## Processing Ray Tune Results
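A hedged sketch of the first processing step, assuming Ray's documented `ResultGrid.get_best_result` API (the metric key shown in the comment is illustrative):

```python
from ray import tune
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
result_grid = model.tune(data="coco8.yaml",
                         space={"lr0": tune.uniform(1e-5, 1e-1)},
                         epochs=50,
                         use_ray=True)

# get_best_result() relies on the metric/mode configured by the tuner; pass them
# explicitly if needed, e.g. metric="metrics/mAP50(B)", mode="max"
best_result = result_grid.get_best_result()
print(best_result.config)   # hyperparameters of the best trial, e.g. the sampled lr0
print(best_result.metrics)  # final reported metrics for that trial
```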
diff --git a/docs/en/integrations/tensorboard.md b/docs/en/integrations/tensorboard.md
index 5e0cbf1267..c73bdb3797 100644
--- a/docs/en/integrations/tensorboard.md
+++ b/docs/en/integrations/tensorboard.md
@@ -67,7 +67,7 @@ Before diving into the usage instructions, be sure to check out the range of [YO
model = YOLO('yolov8n.pt')
# Train the model
- results = model.train(data='coco128.yaml', epochs=100, imgsz=640)
+ results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
```
Upon running the usage code snippet above, you can expect the following output:
diff --git a/docs/en/integrations/tfjs.md b/docs/en/integrations/tfjs.md
index a192f6b450..6f80ecc721 100644
--- a/docs/en/integrations/tfjs.md
+++ b/docs/en/integrations/tfjs.md
@@ -32,7 +32,7 @@ Here are the key features that make TF.js a powerful tool for developers:
## Deployment Options with TensorFlow.js
-Before we dive into the process of exporting YOLOv8 models to the TF.js format, let's explore some typical deployment scenarios where this format is used.
+Before we dive into the process of exporting YOLOv8 models to the TF.js format, let's explore some typical deployment scenarios where this format is used.
TF.js provides a range of options to deploy your machine learning models:
diff --git a/docs/en/integrations/weights-biases.md b/docs/en/integrations/weights-biases.md
index 6a69a18b67..3c43b3eaff 100644
--- a/docs/en/integrations/weights-biases.md
+++ b/docs/en/integrations/weights-biases.md
@@ -72,7 +72,7 @@ Before diving into the usage instructions for YOLOv8 model training with Weights
# Step 2: Define the YOLOv8 Model and Dataset
model_name = "yolov8n"
- dataset_name = "coco128.yaml"
+ dataset_name = "coco8.yaml"
model = YOLO(f"{model_name}.pt")
# Step 3: Add W&B Callback for Ultralytics
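For context, a sketch of the callback step that follows Step 2 in this guide (assuming the `wandb` package's documented Ultralytics integration, `add_wandb_callback`; the project name is illustrative):

```python
import wandb
from wandb.integration.ultralytics import add_wandb_callback
from ultralytics import YOLO

# Step 2: define the model and dataset as above
model_name = "yolov8n"
dataset_name = "coco8.yaml"
model = YOLO(f"{model_name}.pt")

# Step 3: attach the W&B callback before training
wandb.init(project="ultralytics", job_type="training")
add_wandb_callback(model, enable_model_checkpointing=True)

model.train(project="ultralytics", data=dataset_name, epochs=5)
wandb.finish()
```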
diff --git a/docs/en/modes/benchmark.md b/docs/en/modes/benchmark.md
index 7f8e4573d7..6842f98cdd 100644
--- a/docs/en/modes/benchmark.md
+++ b/docs/en/modes/benchmark.md
@@ -76,7 +76,7 @@ Arguments such as `model`, `data`, `imgsz`, `half`, `device`, and `verbose` prov
| Key | Default Value | Description |
|-----------|---------------|---------------------------------------------------------------------------------------------------------------------------------------------------|
| `model` | `None` | Specifies the path to the model file. Accepts both `.pt` and `.yaml` formats, e.g., `"yolov8n.pt"` for pre-trained models or configuration files. |
-| `data` | `None` | Path to a YAML file defining the dataset for benchmarking, typically including paths and settings for validation data. Example: `"coco128.yaml"`. |
+| `data` | `None` | Path to a YAML file defining the dataset for benchmarking, typically including paths and settings for validation data. Example: `"coco8.yaml"`. |
| `imgsz` | `640` | The input image size for the model. Can be a single integer for square images or a tuple `(width, height)` for non-square, e.g., `(640, 480)`. |
| `half` | `False` | Enables FP16 (half-precision) inference, reducing memory usage and possibly increasing speed on compatible hardware. Use `half=True` to enable. |
| `int8` | `False` | Activates INT8 quantization for further optimized performance on supported devices, especially useful for edge devices. Set `int8=True` to use. |
diff --git a/docs/en/modes/train.md b/docs/en/modes/train.md
index 1175d2e999..3d83a5149e 100644
--- a/docs/en/modes/train.md
+++ b/docs/en/modes/train.md
@@ -47,7 +47,7 @@ The following are some notable features of YOLOv8's Train mode:
## Usage Examples
-Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. The training device can be specified using the `device` argument. If no argument is passed GPU `device=0` will be used if available, otherwise `device=cpu` will be used. See Arguments section below for a full list of training arguments.
+Train YOLOv8n on the COCO8 dataset for 100 epochs at image size 640. The training device can be specified using the `device` argument. If no argument is passed, GPU `device=0` will be used if available; otherwise `device=cpu` will be used. See the Arguments section below for a full list of training arguments.
!!! Example "Single-GPU and CPU Training Example"
@@ -64,20 +64,20 @@ Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. The train
model = YOLO('yolov8n.yaml').load('yolov8n.pt') # build from YAML and transfer weights
# Train the model
- results = model.train(data='coco128.yaml', epochs=100, imgsz=640)
+ results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
```
=== "CLI"
```bash
# Build a new model from YAML and start training from scratch
- yolo detect train data=coco128.yaml model=yolov8n.yaml epochs=100 imgsz=640
+ yolo detect train data=coco8.yaml model=yolov8n.yaml epochs=100 imgsz=640
# Start training from a pretrained *.pt model
- yolo detect train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640
+ yolo detect train data=coco8.yaml model=yolov8n.pt epochs=100 imgsz=640
# Build a new model from YAML, transfer pretrained weights to it and start training
- yolo detect train data=coco128.yaml model=yolov8n.yaml pretrained=yolov8n.pt epochs=100 imgsz=640
+ yolo detect train data=coco8.yaml model=yolov8n.yaml pretrained=yolov8n.pt epochs=100 imgsz=640
```
### Multi-GPU Training
@@ -97,14 +97,14 @@ Multi-GPU training allows for more efficient utilization of available hardware r
model = YOLO('yolov8n.pt') # load a pretrained model (recommended for training)
# Train the model with 2 GPUs
- results = model.train(data='coco128.yaml', epochs=100, imgsz=640, device=[0, 1])
+ results = model.train(data='coco8.yaml', epochs=100, imgsz=640, device=[0, 1])
```
=== "CLI"
```bash
# Start training from a pretrained *.pt model using GPUs 0 and 1
- yolo detect train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640 device=0,1
+ yolo detect train data=coco8.yaml model=yolov8n.pt epochs=100 imgsz=640 device=0,1
```
### Apple M1 and M2 MPS Training
@@ -124,14 +124,14 @@ To enable training on Apple M1 and M2 chips, you should specify 'mps' as your de
model = YOLO('yolov8n.pt') # load a pretrained model (recommended for training)
# Train the model with MPS
- results = model.train(data='coco128.yaml', epochs=100, imgsz=640, device='mps')
+ results = model.train(data='coco8.yaml', epochs=100, imgsz=640, device='mps')
```
=== "CLI"
```bash
# Start training from a pretrained *.pt model using MPS
- yolo detect train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640 device=mps
+ yolo detect train data=coco8.yaml model=yolov8n.pt epochs=100 imgsz=640 device=mps
```
While leveraging the computational power of the M1/M2 chips, this enables more efficient processing of the training tasks. For more detailed guidance and advanced configuration options, please refer to the [PyTorch MPS documentation](https://pytorch.org/docs/stable/notes/mps.html).
@@ -178,7 +178,7 @@ The training settings for YOLO models encompass various hyperparameters and conf
| Argument | Default | Description |
|-------------------|----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `model` | `None` | Specifies the model file for training. Accepts a path to either a `.pt` pretrained model or a `.yaml` configuration file. Essential for defining the model structure or initializing weights. |
-| `data` | `None` | Path to the dataset configuration file (e.g., `coco128.yaml`). This file contains dataset-specific parameters, including paths to training and validation data, class names, and number of classes. |
+| `data` | `None` | Path to the dataset configuration file (e.g., `coco8.yaml`). This file contains dataset-specific parameters, including paths to training and validation data, class names, and number of classes. |
| `epochs` | `100` | Total number of training epochs. Each epoch represents a full pass over the entire dataset. Adjusting this value can affect training duration and model performance. |
| `time` | `None` | Maximum training time in hours. If set, this overrides the `epochs` argument, allowing training to automatically stop after the specified duration. Useful for time-constrained training scenarios. |
| `patience` | `100` | Number of epochs to wait without improvement in validation metrics before early stopping the training. Helps prevent overfitting by stopping training when performance plateaus. |
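A short sketch combining several rows of the table above (values are illustrative; per the table, `time` overrides `epochs` when set):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# patience=20 stops early if val metrics plateau; time=0.5 caps training at 30 minutes
model.train(data="coco8.yaml", epochs=100, patience=20, time=0.5)
```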
diff --git a/docs/en/modes/val.md b/docs/en/modes/val.md
index 0c77425a18..96703cbac9 100644
--- a/docs/en/modes/val.md
+++ b/docs/en/modes/val.md
@@ -47,7 +47,7 @@ These are the notable functionalities offered by YOLOv8's Val mode:
## Usage Examples
-Validate trained YOLOv8n model accuracy on the COCO128 dataset. No argument need to passed as the `model` retains it's training `data` and arguments as model attributes. See Arguments section below for a full list of export arguments.
+Validate trained YOLOv8n model accuracy on the COCO8 dataset. No arguments need to be passed as the `model` retains its training `data` and arguments as model attributes. See the Arguments section below for a full list of validation arguments.
!!! Example
@@ -79,22 +79,22 @@ Validate trained YOLOv8n model accuracy on the COCO128 dataset. No argument need
When validating YOLO models, several arguments can be fine-tuned to optimize the evaluation process. These arguments control aspects such as input image size, batch processing, and performance thresholds. Below is a detailed breakdown of each argument to help you customize your validation settings effectively.
-| Argument | Type | Default | Description |
-|---------------|---------|---------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `data` | `str` | `None` | Specifies the path to the dataset configuration file (e.g., `coco128.yaml`). This file includes paths to validation data, class names, and number of classes. |
-| `imgsz` | `int` | `640` | Defines the size of input images. All images are resized to this dimension before processing. |
-| `batch` | `int` | `16` | Sets the number of images per batch. Use `-1` for AutoBatch, which automatically adjusts based on GPU memory availability. |
-| `save_json` | `bool` | `False` | If `True`, saves the results to a JSON file for further analysis or integration with other tools. |
-| `save_hybrid` | `bool` | `False` | If `True`, saves a hybrid version of labels that combines original annotations with additional model predictions. |
-| `conf` | `float` | `0.001` | Sets the minimum confidence threshold for detections. Detections with confidence below this threshold are discarded. |
-| `iou` | `float` | `0.6` | Sets the Intersection Over Union (IoU) threshold for Non-Maximum Suppression (NMS). Helps in reducing duplicate detections. |
-| `max_det` | `int` | `300` | Limits the maximum number of detections per image. Useful in dense scenes to prevent excessive detections. |
-| `half` | `bool` | `True` | Enables half-precision (FP16) computation, reducing memory usage and potentially increasing speed with minimal impact on accuracy. |
-| `device` | `str` | `None` | Specifies the device for validation (`cpu`, `cuda:0`, etc.). Allows flexibility in utilizing CPU or GPU resources. |
-| `dnn` | `bool` | `False` | If `True`, uses the OpenCV DNN module for ONNX model inference, offering an alternative to PyTorch inference methods. |
-| `plots` | `bool` | `False` | When set to `True`, generates and saves plots of predictions versus ground truth for visual evaluation of the model's performance. |
-| `rect` | `bool` | `False` | If `True`, uses rectangular inference for batching, reducing padding and potentially increasing speed and efficiency. |
-| `split` | `str` | `val` | Determines the dataset split to use for validation (`val`, `test`, or `train`). Allows flexibility in choosing the data segment for performance evaluation. |
+| Argument | Type | Default | Description |
+|---------------|---------|---------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `data` | `str` | `None` | Specifies the path to the dataset configuration file (e.g., `coco8.yaml`). This file includes paths to validation data, class names, and number of classes. |
+| `imgsz` | `int` | `640` | Defines the size of input images. All images are resized to this dimension before processing. |
+| `batch` | `int` | `16` | Sets the number of images per batch. Use `-1` for AutoBatch, which automatically adjusts based on GPU memory availability. |
+| `save_json` | `bool` | `False` | If `True`, saves the results to a JSON file for further analysis or integration with other tools. |
+| `save_hybrid` | `bool` | `False` | If `True`, saves a hybrid version of labels that combines original annotations with additional model predictions. |
+| `conf` | `float` | `0.001` | Sets the minimum confidence threshold for detections. Detections with confidence below this threshold are discarded. |
+| `iou` | `float` | `0.6` | Sets the Intersection Over Union (IoU) threshold for Non-Maximum Suppression (NMS). Helps in reducing duplicate detections. |
+| `max_det` | `int` | `300` | Limits the maximum number of detections per image. Useful in dense scenes to prevent excessive detections. |
+| `half` | `bool` | `True` | Enables half-precision (FP16) computation, reducing memory usage and potentially increasing speed with minimal impact on accuracy. |
+| `device` | `str` | `None` | Specifies the device for validation (`cpu`, `cuda:0`, etc.). Allows flexibility in utilizing CPU or GPU resources. |
+| `dnn` | `bool` | `False` | If `True`, uses the OpenCV DNN module for ONNX model inference, offering an alternative to PyTorch inference methods. |
+| `plots` | `bool` | `False` | When set to `True`, generates and saves plots of predictions versus ground truth for visual evaluation of the model's performance. |
+| `rect` | `bool` | `False` | If `True`, uses rectangular inference for batching, reducing padding and potentially increasing speed and efficiency. |
+| `split` | `str` | `val` | Determines the dataset split to use for validation (`val`, `test`, or `train`). Allows flexibility in choosing the data segment for performance evaluation. |
Each of these settings plays a vital role in the validation process, allowing for a customizable and efficient evaluation of YOLO models. Adjusting these parameters according to your specific needs and resources can help achieve the best balance between accuracy and performance.
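A minimal sketch exercising several of the arguments above together (values mirror the table defaults; `plots=True` writes prediction-versus-ground-truth plots):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
metrics = model.val(data="coco8.yaml", imgsz=640, batch=16, conf=0.001, iou=0.6,
                    plots=True, split="val")
print(metrics.box.map50)  # mAP50 on the chosen split
```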
diff --git a/docs/en/quickstart.md b/docs/en/quickstart.md
index cf54217534..b8317e38fb 100644
--- a/docs/en/quickstart.md
+++ b/docs/en/quickstart.md
@@ -161,7 +161,7 @@ The Ultralytics command line interface (CLI) allows for simple single-line comma
Train a detection model for 10 epochs with an initial learning_rate of 0.01
```bash
- yolo train data=coco128.yaml model=yolov8n.pt epochs=10 lr0=0.01
+ yolo train data=coco8.yaml model=yolov8n.pt epochs=10 lr0=0.01
```
=== "Predict"
@@ -175,7 +175,7 @@ The Ultralytics command line interface (CLI) allows for simple single-line comma
Val a pretrained detection model at batch-size 1 and image size 640:
```bash
- yolo val model=yolov8n.pt data=coco128.yaml batch=1 imgsz=640
+ yolo val model=yolov8n.pt data=coco8.yaml batch=1 imgsz=640
```
=== "Export"
@@ -225,8 +225,8 @@ For example, users can load a model, train it, evaluate its performance on a val
# Load a pretrained YOLO model (recommended for training)
model = YOLO('yolov8n.pt')
- # Train the model using the 'coco128.yaml' dataset for 3 epochs
- results = model.train(data='coco128.yaml', epochs=3)
+ # Train the model using the 'coco8.yaml' dataset for 3 epochs
+ results = model.train(data='coco8.yaml', epochs=3)
# Evaluate the model's performance on the validation set
results = model.val()
diff --git a/docs/en/tasks/detect.md b/docs/en/tasks/detect.md
index 5aed8c3f79..6a32001bba 100644
--- a/docs/en/tasks/detect.md
+++ b/docs/en/tasks/detect.md
@@ -42,11 +42,11 @@ YOLOv8 pretrained Detect models are shown here. Detect, Segment and Pose models
| [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x.pt) | 640 | 53.9 | 479.1 | 3.53 | 68.2 | 257.8 |
- **mAPval** values are for single-model single-scale on [COCO val2017](https://cocodataset.org) dataset.
Reproduce by `yolo val detect data=coco.yaml device=0`
-- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
Reproduce by `yolo val detect data=coco128.yaml batch=1 device=0|cpu`
+- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
Reproduce by `yolo val detect data=coco8.yaml batch=1 device=0|cpu`
## Train
-Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
+Train YOLOv8n on the COCO8 dataset for 100 epochs at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
!!! Example
@@ -61,19 +61,19 @@ Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. For a ful
model = YOLO('yolov8n.yaml').load('yolov8n.pt') # build from YAML and transfer weights
# Train the model
- results = model.train(data='coco128.yaml', epochs=100, imgsz=640)
+ results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
```
=== "CLI"
```bash
# Build a new model from YAML and start training from scratch
- yolo detect train data=coco128.yaml model=yolov8n.yaml epochs=100 imgsz=640
+ yolo detect train data=coco8.yaml model=yolov8n.yaml epochs=100 imgsz=640
# Start training from a pretrained *.pt model
- yolo detect train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640
+ yolo detect train data=coco8.yaml model=yolov8n.pt epochs=100 imgsz=640
# Build a new model from YAML, transfer pretrained weights to it and start training
- yolo detect train data=coco128.yaml model=yolov8n.yaml pretrained=yolov8n.pt epochs=100 imgsz=640
+ yolo detect train data=coco8.yaml model=yolov8n.yaml pretrained=yolov8n.pt epochs=100 imgsz=640
```
### Dataset format
@@ -82,7 +82,7 @@ YOLO detection dataset format can be found in detail in the [Dataset Guide](../d
## Val
-Validate trained YOLOv8n model accuracy on the COCO128 dataset. No argument need to passed as the `model` retains it's training `data` and arguments as model attributes.
+Validate trained YOLOv8n model accuracy on the COCO8 dataset. No arguments need to be passed as the `model` retains its training `data` and arguments as model attributes.
!!! Example
diff --git a/docs/en/tasks/segment.md b/docs/en/tasks/segment.md
index 4453fb775d..e9d0199b89 100644
--- a/docs/en/tasks/segment.md
+++ b/docs/en/tasks/segment.md
@@ -42,7 +42,7 @@ YOLOv8 pretrained Segment models are shown here. Detect, Segment and Pose models
| [YOLOv8x-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-seg.pt) | 640 | 53.4 | 43.4 | 712.1 | 4.02 | 71.8 | 344.1 |
- **mAPval** values are for single-model single-scale on [COCO val2017](https://cocodataset.org) dataset.
Reproduce by `yolo val segment data=coco.yaml device=0`
-- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
Reproduce by `yolo val segment data=coco128-seg.yaml batch=1 device=0|cpu`
+- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
Reproduce by `yolo val segment data=coco8-seg.yaml batch=1 device=0|cpu`
## Train
@@ -61,19 +61,19 @@ Train YOLOv8n-seg on the COCO128-seg dataset for 100 epochs at image size 640. F
model = YOLO('yolov8n-seg.yaml').load('yolov8n.pt') # build from YAML and transfer weights
# Train the model
- results = model.train(data='coco128-seg.yaml', epochs=100, imgsz=640)
+ results = model.train(data='coco8-seg.yaml', epochs=100, imgsz=640)
```
=== "CLI"
```bash
# Build a new model from YAML and start training from scratch
- yolo segment train data=coco128-seg.yaml model=yolov8n-seg.yaml epochs=100 imgsz=640
+ yolo segment train data=coco8-seg.yaml model=yolov8n-seg.yaml epochs=100 imgsz=640
# Start training from a pretrained *.pt model
- yolo segment train data=coco128-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
+ yolo segment train data=coco8-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
# Build a new model from YAML, transfer pretrained weights to it and start training
- yolo segment train data=coco128-seg.yaml model=yolov8n-seg.yaml pretrained=yolov8n-seg.pt epochs=100 imgsz=640
+ yolo segment train data=coco8-seg.yaml model=yolov8n-seg.yaml pretrained=yolov8n-seg.pt epochs=100 imgsz=640
```
### Dataset format
diff --git a/docs/en/usage/cfg.md b/docs/en/usage/cfg.md
index 833932e870..17d711815e 100644
--- a/docs/en/usage/cfg.md
+++ b/docs/en/usage/cfg.md
@@ -87,7 +87,7 @@ The training settings for YOLO models encompass various hyperparameters and conf
| Argument | Default | Description |
|-------------------|----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `model` | `None` | Specifies the model file for training. Accepts a path to either a `.pt` pretrained model or a `.yaml` configuration file. Essential for defining the model structure or initializing weights. |
-| `data` | `None` | Path to the dataset configuration file (e.g., `coco128.yaml`). This file contains dataset-specific parameters, including paths to training and validation data, class names, and number of classes. |
+| `data` | `None` | Path to the dataset configuration file (e.g., `coco8.yaml`). This file contains dataset-specific parameters, including paths to training and validation data, class names, and number of classes. |
| `epochs` | `100` | Total number of training epochs. Each epoch represents a full pass over the entire dataset. Adjusting this value can affect training duration and model performance. |
| `time` | `None` | Maximum training time in hours. If set, this overrides the `epochs` argument, allowing training to automatically stop after the specified duration. Useful for time-constrained training scenarios. |
| `patience` | `100` | Number of epochs to wait without improvement in validation metrics before early stopping the training. Helps prevent overfitting by stopping training when performance plateaus. |
@@ -182,22 +182,22 @@ Visualization arguments:
The val (validation) settings for YOLO models involve various hyperparameters and configurations used to evaluate the model's performance on a validation dataset. These settings influence the model's performance, speed, and accuracy. Common YOLO validation settings include batch size, validation frequency during training, and performance evaluation metrics. Other factors affecting the validation process include the validation dataset's size and composition, as well as the specific task the model is employed for.
-| Argument | Type | Default | Description |
-|---------------|---------|---------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `data` | `str` | `None` | Specifies the path to the dataset configuration file (e.g., `coco128.yaml`). This file includes paths to validation data, class names, and number of classes. |
-| `imgsz` | `int` | `640` | Defines the size of input images. All images are resized to this dimension before processing. |
-| `batch` | `int` | `16` | Sets the number of images per batch. Use `-1` for AutoBatch, which automatically adjusts based on GPU memory availability. |
-| `save_json` | `bool` | `False` | If `True`, saves the results to a JSON file for further analysis or integration with other tools. |
-| `save_hybrid` | `bool` | `False` | If `True`, saves a hybrid version of labels that combines original annotations with additional model predictions. |
-| `conf` | `float` | `0.001` | Sets the minimum confidence threshold for detections. Detections with confidence below this threshold are discarded. |
-| `iou` | `float` | `0.6` | Sets the Intersection Over Union (IoU) threshold for Non-Maximum Suppression (NMS). Helps in reducing duplicate detections. |
-| `max_det` | `int` | `300` | Limits the maximum number of detections per image. Useful in dense scenes to prevent excessive detections. |
-| `half` | `bool` | `True` | Enables half-precision (FP16) computation, reducing memory usage and potentially increasing speed with minimal impact on accuracy. |
-| `device` | `str` | `None` | Specifies the device for validation (`cpu`, `cuda:0`, etc.). Allows flexibility in utilizing CPU or GPU resources. |
-| `dnn` | `bool` | `False` | If `True`, uses the OpenCV DNN module for ONNX model inference, offering an alternative to PyTorch inference methods. |
-| `plots` | `bool` | `False` | When set to `True`, generates and saves plots of predictions versus ground truth for visual evaluation of the model's performance. |
-| `rect` | `bool` | `False` | If `True`, uses rectangular inference for batching, reducing padding and potentially increasing speed and efficiency. |
-| `split` | `str` | `val` | Determines the dataset split to use for validation (`val`, `test`, or `train`). Allows flexibility in choosing the data segment for performance evaluation. |
+| Argument | Type | Default | Description |
+|---------------|---------|---------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `data` | `str` | `None` | Specifies the path to the dataset configuration file (e.g., `coco8.yaml`). This file includes paths to validation data, class names, and number of classes. |
+| `imgsz` | `int` | `640` | Defines the size of input images. All images are resized to this dimension before processing. |
+| `batch` | `int` | `16` | Sets the number of images per batch. Use `-1` for AutoBatch, which automatically adjusts based on GPU memory availability. |
+| `save_json` | `bool` | `False` | If `True`, saves the results to a JSON file for further analysis or integration with other tools. |
+| `save_hybrid` | `bool` | `False` | If `True`, saves a hybrid version of labels that combines original annotations with additional model predictions. |
+| `conf` | `float` | `0.001` | Sets the minimum confidence threshold for detections. Detections with confidence below this threshold are discarded. |
+| `iou` | `float` | `0.6` | Sets the Intersection Over Union (IoU) threshold for Non-Maximum Suppression (NMS). Helps in reducing duplicate detections. |
+| `max_det` | `int` | `300` | Limits the maximum number of detections per image. Useful in dense scenes to prevent excessive detections. |
+| `half` | `bool` | `True` | Enables half-precision (FP16) computation, reducing memory usage and potentially increasing speed with minimal impact on accuracy. |
+| `device` | `str` | `None` | Specifies the device for validation (`cpu`, `cuda:0`, etc.). Allows flexibility in utilizing CPU or GPU resources. |
+| `dnn` | `bool` | `False` | If `True`, uses the OpenCV DNN module for ONNX model inference, offering an alternative to PyTorch inference methods. |
+| `plots` | `bool` | `False` | When set to `True`, generates and saves plots of predictions versus ground truth for visual evaluation of the model's performance. |
+| `rect` | `bool` | `False` | If `True`, uses rectangular inference for batching, reducing padding and potentially increasing speed and efficiency. |
+| `split` | `str` | `val` | Determines the dataset split to use for validation (`val`, `test`, or `train`). Allows flexibility in choosing the data segment for performance evaluation. |
Careful tuning and experimentation with these settings are crucial to ensure optimal performance on the validation dataset and detect and prevent overfitting.
diff --git a/docs/en/usage/cli.md b/docs/en/usage/cli.md
index c71d7d0662..1a32119756 100644
--- a/docs/en/usage/cli.md
+++ b/docs/en/usage/cli.md
@@ -37,7 +37,7 @@ The YOLO command line interface (CLI) allows for simple single-line commands wit
Train a detection model for 10 epochs with an initial learning_rate of 0.01
```bash
- yolo train data=coco128.yaml model=yolov8n.pt epochs=10 lr0=0.01
+ yolo train data=coco8.yaml model=yolov8n.pt epochs=10 lr0=0.01
```
=== "Predict"
@@ -51,7 +51,7 @@ The YOLO command line interface (CLI) allows for simple single-line commands wit
Val a pretrained detection model at batch-size 1 and image size 640:
```bash
- yolo val model=yolov8n.pt data=coco128.yaml batch=1 imgsz=640
+ yolo val model=yolov8n.pt data=coco8.yaml batch=1 imgsz=640
```
=== "Export"
@@ -90,15 +90,15 @@ Where:
## Train
-Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. For a full list of available arguments see the [Configuration](cfg.md) page.
+Train YOLOv8n on the COCO8 dataset for 100 epochs at image size 640. For a full list of available arguments see the [Configuration](cfg.md) page.
!!! Example "Example"
=== "Train"
- Start training YOLOv8n on COCO128 for 100 epochs at image-size 640.
+ Start training YOLOv8n on COCO8 for 100 epochs at image-size 640.
```bash
- yolo detect train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640
+ yolo detect train data=coco8.yaml model=yolov8n.pt epochs=100 imgsz=640
```
=== "Resume"
@@ -110,7 +110,7 @@ Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. For a ful
## Val
-Validate trained YOLOv8n model accuracy on the COCO128 dataset. No argument need to passed as the `model` retains it's training `data` and arguments as model attributes.
+Validate trained YOLOv8n model accuracy on the COCO8 dataset. No arguments need to be passed as the `model` retains its training `data` and arguments as model attributes.
!!! Example "Example"
@@ -196,7 +196,7 @@ Default arguments can be overridden by simply passing them as arguments in the C
Train a detection model for `10 epochs` with `learning_rate` of `0.01`
```bash
- yolo detect train data=coco128.yaml model=yolov8n.pt epochs=10 lr0=0.01
+ yolo detect train data=coco8.yaml model=yolov8n.pt epochs=10 lr0=0.01
```
=== "Predict"
@@ -210,7 +210,7 @@ Default arguments can be overridden by simply passing them as arguments in the C
Validate a pretrained detection model at batch-size 1 and image size 640:
```bash
- yolo detect val model=yolov8n.pt data=coco128.yaml batch=1 imgsz=640
+ yolo detect val model=yolov8n.pt data=coco8.yaml batch=1 imgsz=640
```
## Overriding default config file
diff --git a/docs/en/usage/python.md b/docs/en/usage/python.md
index db4fd2ede9..755c8e439d 100644
--- a/docs/en/usage/python.md
+++ b/docs/en/usage/python.md
@@ -32,8 +32,8 @@ For example, users can load a model, train it, evaluate its performance on a val
# Load a pretrained YOLO model (recommended for training)
model = YOLO('yolov8n.pt')
- # Train the model using the 'coco128.yaml' dataset for 3 epochs
- results = model.train(data='coco128.yaml', epochs=3)
+ # Train the model using the 'coco8.yaml' dataset for 3 epochs
+ results = model.train(data='coco8.yaml', epochs=3)
# Evaluate the model's performance on the validation set
results = model.val()
@@ -66,7 +66,7 @@ Train mode is used for training a YOLOv8 model on a custom dataset. In this mode
from ultralytics import YOLO
model = YOLO('yolov8n.yaml')
- results = model.train(data='coco128.yaml', epochs=5)
+ results = model.train(data='coco8.yaml', epochs=5)
```
=== "Resume"
@@ -90,7 +90,7 @@ Val mode is used for validating a YOLOv8 model after it has been trained. In thi
from ultralytics import YOLO
model = YOLO('yolov8n.yaml')
- model.train(data='coco128.yaml', epochs=5)
+ model.train(data='coco8.yaml', epochs=5)
model.val() # It'll automatically evaluate the data you trained.
```
@@ -103,7 +103,7 @@ Val mode is used for validating a YOLOv8 model after it has been trained. In thi
# It'll use the data YAML file in model.pt if you don't set data.
model.val()
# or you can set the data you want to val
- model.val(data='coco128.yaml')
+ model.val(data='coco8.yaml')
```
[Val Examples](../modes/val.md){ .md-button }
@@ -259,7 +259,7 @@ Explorer API can be used to explore datasets with advanced semantic, vector-simi
from ultralytics import Explorer
# create an Explorer object
- exp = Explorer(data='coco128.yaml', model='yolov8n.pt')
+ exp = Explorer(data='coco8.yaml', model='yolov8n.pt')
exp.create_embeddings_table()
similar = exp.get_similar(img='https://ultralytics.com/images/bus.jpg', limit=10)
@@ -280,7 +280,7 @@ Explorer API can be used to explore datasets with advanced semantic, vector-simi
from ultralytics import Explorer
# create an Explorer object
- exp = Explorer(data='coco128.yaml', model='yolov8n.pt')
+ exp = Explorer(data='coco8.yaml', model='yolov8n.pt')
exp.create_embeddings_table()
similar = exp.get_similar(idx=1, limit=10)
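A hedged extension of the Explorer snippets above, using the documented `sql_query` method for label-based filtering:

```python
from ultralytics import Explorer

# create an Explorer object over the COCO8 dataset
exp = Explorer(data="coco8.yaml", model="yolov8n.pt")
exp.create_embeddings_table()

# SQL-style filtering over the embeddings table; returns a dataframe-like result
df = exp.sql_query("WHERE labels LIKE '%person%' LIMIT 10")
print(df)
```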
diff --git a/docs/en/usage/simple-utilities.md b/docs/en/usage/simple-utilities.md
index 7492f0ebb8..a2d40d0ffe 100644
--- a/docs/en/usage/simple-utilities.md
+++ b/docs/en/usage/simple-utilities.md
@@ -233,7 +233,7 @@ boxes.bboxes
See the [`Bboxes` reference section](../reference/utils/instance.md#ultralytics.utils.instance.Bboxes) for more attributes and methods available.
!!! tip
-
+
Many of the following functions (and more) can be accessed using the [`Bboxes` class](#bounding-box-horizontal-instances) but if you prefer to work with the functions directly, see the next subsections on how to import these independently.
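For orientation, a hedged sketch of the `Bboxes` class mentioned in the tip; the constructor signature and the in-place `convert()` call are assumptions based on the linked reference section, not verbatim from this page:
```python
import numpy as np

from ultralytics.utils.instance import Bboxes

# One box in xyxy format; Bboxes wraps an (N, 4) array (assumed constructor)
boxes = Bboxes(np.array([[10, 20, 110, 220]], dtype=np.float32), format="xyxy")
boxes.convert("xywh")  # convert in place to another box format (assumed API)
print(boxes.bboxes)    # the underlying array, as accessed above
```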
### Scaling Boxes
diff --git a/docs/en/yolov5/tutorials/clearml_logging_integration.md b/docs/en/yolov5/tutorials/clearml_logging_integration.md
index 48fce1ee2b..baed07dc8b 100644
--- a/docs/en/yolov5/tutorials/clearml_logging_integration.md
+++ b/docs/en/yolov5/tutorials/clearml_logging_integration.md
@@ -67,13 +67,13 @@ This will enable integration with the YOLOv5 training script. Every training run
If you want to change the `project_name` or `task_name`, use the `--project` and `--name` arguments of the `train.py` script; by default the project is called `YOLOv5` and the task `Training`. PLEASE NOTE: ClearML uses `/` as a delimiter for subprojects, so be careful when using `/` in your project name!
```bash
-python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache
+python train.py --img 640 --batch 16 --epochs 3 --data coco8.yaml --weights yolov5s.pt --cache
```
or with custom project and task name:
```bash
-python train.py --project my_project --name my_training --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache
+python train.py --project my_project --name my_training --img 640 --batch 16 --epochs 3 --data coco8.yaml --weights yolov5s.pt --cache
```
This will capture:
diff --git a/docs/overrides/main.html b/docs/overrides/main.html
index bc088cee51..f0e5c109b5 100644
--- a/docs/overrides/main.html
+++ b/docs/overrides/main.html
@@ -3,10 +3,10 @@
{% extends "base.html" %}
{% block announce %}
-
+
{% endblock %}
diff --git a/examples/YOLOv8-ONNXRuntime/main.py b/examples/YOLOv8-ONNXRuntime/main.py
index e1e83f3dcb..41af64b9f3 100644
--- a/examples/YOLOv8-ONNXRuntime/main.py
+++ b/examples/YOLOv8-ONNXRuntime/main.py
@@ -30,7 +30,7 @@ class YOLOv8:
self.iou_thres = iou_thres
# Load the class names from the COCO dataset
- self.classes = yaml_load(check_yaml("coco128.yaml"))["names"]
+ self.classes = yaml_load(check_yaml("coco8.yaml"))["names"]
# Generate a color palette for the classes
self.color_palette = np.random.uniform(0, 255, size=(len(self.classes), 3))
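The pattern changed in this hunk — resolving a dataset YAML for class names and deriving a per-class color palette — shown in isolation, using the same utilities the example scripts import:
```python
import numpy as np

from ultralytics.utils import yaml_load
from ultralytics.utils.checks import check_yaml

# check_yaml resolves the dataset YAML path; "names" maps class indices to labels
classes = yaml_load(check_yaml("coco8.yaml"))["names"]

# One random RGB color per class, used when drawing detections
color_palette = np.random.uniform(0, 255, size=(len(classes), 3))
```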
diff --git a/examples/YOLOv8-OpenCV-ONNX-Python/main.py b/examples/YOLOv8-OpenCV-ONNX-Python/main.py
index 8d9c7a5050..c58b9ced5d 100644
--- a/examples/YOLOv8-OpenCV-ONNX-Python/main.py
+++ b/examples/YOLOv8-OpenCV-ONNX-Python/main.py
@@ -8,7 +8,7 @@ import numpy as np
from ultralytics.utils import ASSETS, yaml_load
from ultralytics.utils.checks import check_yaml
-CLASSES = yaml_load(check_yaml("coco128.yaml"))["names"]
+CLASSES = yaml_load(check_yaml("coco8.yaml"))["names"]
colors = np.random.uniform(0, 255, size=(len(CLASSES), 3))
diff --git a/examples/YOLOv8-OpenCV-int8-tflite-Python/main.py b/examples/YOLOv8-OpenCV-int8-tflite-Python/main.py
index 53fba1f5bc..0a08756bd4 100644
--- a/examples/YOLOv8-OpenCV-int8-tflite-Python/main.py
+++ b/examples/YOLOv8-OpenCV-int8-tflite-Python/main.py
@@ -102,7 +102,7 @@ class Yolov8TFLite:
self.iou_thres = iou_thres
# Load the class names from the COCO dataset
- self.classes = yaml_load(check_yaml("coco128.yaml"))["names"]
+ self.classes = yaml_load(check_yaml("coco8.yaml"))["names"]
# Generate a color palette for the classes
self.color_palette = np.random.uniform(0, 255, size=(len(self.classes), 3))
diff --git a/examples/YOLOv8-Segmentation-ONNXRuntime-Python/main.py b/examples/YOLOv8-Segmentation-ONNXRuntime-Python/main.py
index 141d21b993..923f8c52a5 100644
--- a/examples/YOLOv8-Segmentation-ONNXRuntime-Python/main.py
+++ b/examples/YOLOv8-Segmentation-ONNXRuntime-Python/main.py
@@ -37,7 +37,7 @@ class YOLOv8Seg:
self.model_height, self.model_width = [x.shape for x in self.session.get_inputs()][0][-2:]
# Load COCO class names
- self.classes = yaml_load(check_yaml("coco128.yaml"))["names"]
+ self.classes = yaml_load(check_yaml("coco8.yaml"))["names"]
# Create color palette
self.color_palette = Colors()
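The input-size line above, unpacked — a sketch assuming a standard `onnxruntime` session with a single NCHW image input (the model path is illustrative):
```python
import onnxruntime as ort

session = ort.InferenceSession("yolov8n-seg.onnx")  # illustrative path

# For an NCHW input, the last two shape entries are height and width
model_height, model_width = session.get_inputs()[0].shape[-2:]
```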
diff --git a/examples/tutorial.ipynb b/examples/tutorial.ipynb
index bdb0b27178..69eb09209a 100644
--- a/examples/tutorial.ipynb
+++ b/examples/tutorial.ipynb
@@ -425,7 +425,7 @@
"model = YOLO('yolov8n.pt') # load a pretrained model (recommended for training)\n",
"\n",
"# Use the model\n",
- "results = model.train(data='coco128.yaml', epochs=3) # train the model\n",
+ "results = model.train(data='coco8.yaml', epochs=3) # train the model\n",
"results = model.val() # evaluate model performance on the validation set\n",
"results = model('https://ultralytics.com/images/bus.jpg') # predict on an image\n",
"results = model.export(format='onnx') # export the model to ONNX format"
@@ -467,7 +467,7 @@
"from ultralytics import YOLO\n",
"\n",
"model = YOLO('yolov8n.pt') # load a pretrained YOLOv8n detection model\n",
- "model.train(data='coco128.yaml', epochs=3) # train the model\n",
+ "model.train(data='coco8.yaml', epochs=3) # train the model\n",
"model('https://ultralytics.com/images/bus.jpg') # predict on an image"
],
"metadata": {
@@ -494,7 +494,7 @@
"from ultralytics import YOLO\n",
"\n",
"model = YOLO('yolov8n-seg.pt') # load a pretrained YOLOv8n segmentation model\n",
- "model.train(data='coco128-seg.yaml', epochs=3) # train the model\n",
+ "model.train(data='coco8-seg.yaml', epochs=3) # train the model\n",
"model('https://ultralytics.com/images/bus.jpg') # predict on an image"
],
"metadata": {
diff --git a/ultralytics/__init__.py b/ultralytics/__init__.py
index 8827f65897..044a3045e1 100644
--- a/ultralytics/__init__.py
+++ b/ultralytics/__init__.py
@@ -1,6 +1,6 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
-__version__ = "8.2.1"
+__version__ = "8.2.2"
from ultralytics.data.explorer.explorer import Explorer
from ultralytics.models import RTDETR, SAM, YOLO, YOLOWorld
diff --git a/ultralytics/cfg/__init__.py b/ultralytics/cfg/__init__.py
index b907a8eb13..322abeb668 100644
--- a/ultralytics/cfg/__init__.py
+++ b/ultralytics/cfg/__init__.py
@@ -66,13 +66,13 @@ CLI_HELP_MSG = f"""
See all ARGS at https://docs.ultralytics.com/usage/cfg or with 'yolo cfg'
1. Train a detection model for 10 epochs with an initial learning_rate of 0.01
- yolo train data=coco128.yaml model=yolov8n.pt epochs=10 lr0=0.01
+ yolo train data=coco8.yaml model=yolov8n.pt epochs=10 lr0=0.01
2. Predict a YouTube video using a pretrained segmentation model at image size 320:
yolo predict model=yolov8n-seg.pt source='https://youtu.be/LNwODJXcvt4' imgsz=320
3. Val a pretrained detection model at batch-size 1 and image size 640:
- yolo val model=yolov8n.pt data=coco128.yaml batch=1 imgsz=640
+ yolo val model=yolov8n.pt data=coco8.yaml batch=1 imgsz=640
4. Export a YOLOv8n classification model to ONNX format at image size 224 by 128 (no TASK required)
yolo export model=yolov8n-cls.pt format=onnx imgsz=224,128
diff --git a/ultralytics/cfg/default.yaml b/ultralytics/cfg/default.yaml
index 78b5f43ee6..e1d9ce60a9 100644
--- a/ultralytics/cfg/default.yaml
+++ b/ultralytics/cfg/default.yaml
@@ -6,7 +6,7 @@ mode: train # (str) YOLO mode, i.e. train, val, predict, export, track, benchmar
# Train settings -------------------------------------------------------------------------------------------------------
model: # (str, optional) path to model file, i.e. yolov8n.pt, yolov8n.yaml
-data: # (str, optional) path to data file, i.e. coco128.yaml
+data: # (str, optional) path to data file, i.e. coco8.yaml
epochs: 100 # (int) number of epochs to train for
time: # (float, optional) number of hours to train for, overrides epochs if supplied
patience: 100 # (int) epochs to wait for no observable improvement for early stopping of training
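Any of these defaults can also be overridden per run instead of editing `default.yaml` — a minimal sketch using settings shown in this file (values illustrative):
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Each keyword overrides the corresponding default.yaml entry for this run only
model.train(data="coco8.yaml", epochs=100, patience=50)
```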
diff --git a/ultralytics/cfg/models/README.md b/ultralytics/cfg/models/README.md
index c022fb57a6..5a127626d4 100644
--- a/ultralytics/cfg/models/README.md
+++ b/ultralytics/cfg/models/README.md
@@ -11,7 +11,7 @@ To get started, simply browse through the models in this directory and find one
Model `*.yaml` files may be used directly in the Command Line Interface (CLI) with a `yolo` command:
```bash
-yolo task=detect mode=train model=yolov8n.yaml data=coco128.yaml epochs=100
+yolo task=detect mode=train model=yolov8n.yaml data=coco8.yaml epochs=100
```
They may also be used directly in a Python environment, and accept the same [arguments](https://docs.ultralytics.com/usage/cfg/) as in the CLI example above:
@@ -22,7 +22,7 @@ from ultralytics import YOLO
model = YOLO("model.yaml") # build a YOLOv8n model from scratch
# YOLO("model.pt") use pre-trained model if available
model.info() # display model information
-model.train(data="coco128.yaml", epochs=100) # train the model
+model.train(data="coco8.yaml", epochs=100) # train the model
```
## Pre-trained Model Architectures
diff --git a/ultralytics/engine/trainer.py b/ultralytics/engine/trainer.py
index a33aac6ec1..233ce40b7d 100644
--- a/ultralytics/engine/trainer.py
+++ b/ultralytics/engine/trainer.py
@@ -3,7 +3,7 @@
Train a model on a dataset.
Usage:
- $ yolo mode=train model=yolov8n.pt data=coco128.yaml imgsz=640 epochs=100 batch=16
+ $ yolo mode=train model=yolov8n.pt data=coco8.yaml imgsz=640 epochs=100 batch=16
"""
import gc
diff --git a/ultralytics/engine/validator.py b/ultralytics/engine/validator.py
index 9e7b6c16f8..8a2765c98f 100644
--- a/ultralytics/engine/validator.py
+++ b/ultralytics/engine/validator.py
@@ -3,7 +3,7 @@
Check a model's accuracy on a test or val split of a dataset.
Usage:
- $ yolo mode=val model=yolov8n.pt data=coco128.yaml imgsz=640
+ $ yolo mode=val model=yolov8n.pt data=coco8.yaml imgsz=640
Usage - formats:
$ yolo mode=val model=yolov8n.pt # PyTorch
diff --git a/ultralytics/utils/__init__.py b/ultralytics/utils/__init__.py
index 5a293f5c2d..41baedca93 100644
--- a/ultralytics/utils/__init__.py
+++ b/ultralytics/utils/__init__.py
@@ -61,7 +61,7 @@ HELP_MSG = """
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
# Use the model
- results = model.train(data="coco128.yaml", epochs=3) # train the model
+ results = model.train(data="coco8.yaml", epochs=3) # train the model
results = model.val() # evaluate model performance on the validation set
results = model('https://ultralytics.com/images/bus.jpg') # predict on an image
success = model.export(format='onnx') # export the model to ONNX format
@@ -78,13 +78,13 @@ HELP_MSG = """
See all ARGS at https://docs.ultralytics.com/usage/cfg or with 'yolo cfg'
- Train a detection model for 10 epochs with an initial learning_rate of 0.01
- yolo detect train data=coco128.yaml model=yolov8n.pt epochs=10 lr0=0.01
+ yolo detect train data=coco8.yaml model=yolov8n.pt epochs=10 lr0=0.01
- Predict a YouTube video using a pretrained segmentation model at image size 320:
yolo segment predict model=yolov8n-seg.pt source='https://youtu.be/LNwODJXcvt4' imgsz=320
- Val a pretrained detection model at batch-size 1 and image size 640:
- yolo detect val model=yolov8n.pt data=coco128.yaml batch=1 imgsz=640
+ yolo detect val model=yolov8n.pt data=coco8.yaml batch=1 imgsz=640
- Export a YOLOv8n classification model to ONNX format at image size 224 by 128 (no TASK required)
yolo export model=yolov8n-cls.pt format=onnx imgsz=224,128