diff --git a/README.md b/README.md index c5e1f62c54..ac1c0aee18 100644 --- a/README.md +++ b/README.md @@ -88,7 +88,7 @@ model = YOLO("yolov8n.yaml") # build a new model from scratch model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training) # Use the model -model.train(data="coco128.yaml", epochs=3) # train the model +model.train(data="coco8.yaml", epochs=3) # train the model metrics = model.val() # evaluate model performance on the validation set results = model("https://ultralytics.com/images/bus.jpg") # predict on an image path = model.export(format="onnx") # export the model to ONNX format diff --git a/README.zh-CN.md b/README.zh-CN.md index 03996f3941..ad39607508 100644 --- a/README.zh-CN.md +++ b/README.zh-CN.md @@ -90,7 +90,7 @@ model = YOLO("yolov8n.yaml") # 从头开始构建新模型 model = YOLO("yolov8n.pt") # 加载预训练模型(建议用于训练) # 使用模型 -model.train(data="coco128.yaml", epochs=3) # 训练模型 +model.train(data="coco8.yaml", epochs=3) # 训练模型 metrics = model.val() # 在验证集上评估模型性能 results = model("https://ultralytics.com/images/bus.jpg") # 对图像进行预测 success = model.export(format="onnx") # 将模型导出为 ONNX 格式 diff --git a/docs/README.md b/docs/README.md index 954e130cd0..5a972d2246 100644 --- a/docs/README.md +++ b/docs/README.md @@ -43,13 +43,13 @@ mkdocs serve - #### Command Breakdown: - - `mkdocs` is the main MkDocs command-line interface. - - `serve` is the subcommand to build and locally serve your documentation. + - `mkdocs` is the main MkDocs command-line interface. + - `serve` is the subcommand to build and locally serve your documentation. - 🧐 Note: - - Grasp changes to the docs in real-time as `mkdocs serve` supports live reloading. - - To stop the local server, press `CTRL+C`. + - Grasp changes to the docs in real-time as `mkdocs serve` supports live reloading. + - To stop the local server, press `CTRL+C`. ## 🌍 Building and Serving Multi-Language diff --git a/docs/en/datasets/detect/index.md b/docs/en/datasets/detect/index.md index 1e11c18573..2e0279060f 100644 --- a/docs/en/datasets/detect/index.md +++ b/docs/en/datasets/detect/index.md @@ -44,7 +44,6 @@ When using the Ultralytics YOLO format, organize your training and validation im
-
 ## Usage
 
 Here's how you can use these formats to train your model:
diff --git a/docs/en/datasets/pose/index.md b/docs/en/datasets/pose/index.md
index f99ec538a0..3b4ad54081 100644
--- a/docs/en/datasets/pose/index.md
+++ b/docs/en/datasets/pose/index.md
@@ -75,13 +75,13 @@ The `train` and `val` fields specify the paths to the directories containing the
         model = YOLO('yolov8n-pose.pt')  # load a pretrained model (recommended for training)
 
         # Train the model
-        results = model.train(data='coco128-pose.yaml', epochs=100, imgsz=640)
+        results = model.train(data='coco8-pose.yaml', epochs=100, imgsz=640)
         ```
 
     === "CLI"
 
         ```bash
         # Start training from a pretrained *.pt model
-        yolo detect train data=coco128-pose.yaml model=yolov8n-pose.pt epochs=100 imgsz=640
+        yolo pose train data=coco8-pose.yaml model=yolov8n-pose.pt epochs=100 imgsz=640
         ```
 
 ## Supported Datasets
diff --git a/docs/en/datasets/segment/index.md b/docs/en/datasets/segment/index.md
index cff7f8aa02..5cde021f5d 100644
--- a/docs/en/datasets/segment/index.md
+++ b/docs/en/datasets/segment/index.md
@@ -77,13 +77,13 @@ The `train` and `val` fields specify the paths to the directories containing the
         model = YOLO('yolov8n-seg.pt')  # load a pretrained model (recommended for training)
 
         # Train the model
-        results = model.train(data='coco128-seg.yaml', epochs=100, imgsz=640)
+        results = model.train(data='coco8-seg.yaml', epochs=100, imgsz=640)
         ```
 
     === "CLI"
 
         ```bash
         # Start training from a pretrained *.pt model
-        yolo detect train data=coco128-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
+        yolo segment train data=coco8-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
         ```
 
 ## Supported Datasets
diff --git a/docs/en/guides/azureml-quickstart.md b/docs/en/guides/azureml-quickstart.md
index 56b1cea1f7..11fb9c5b10 100644
--- a/docs/en/guides/azureml-quickstart.md
+++ b/docs/en/guides/azureml-quickstart.md
@@ -74,7 +74,7 @@ yolo predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'
 Train a detection model for 10 epochs with an initial learning_rate of 0.01:
 
 ```bash
-yolo train data=coco128.yaml model=yolov8n.pt epochs=10 lr0=0.01
+yolo train data=coco8.yaml model=yolov8n.pt epochs=10 lr0=0.01
 ```
 
 You can find more [instructions to use the Ultralytics CLI here](../quickstart.md#use-ultralytics-with-cli).
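The AzureML hunk above trains via the CLI; for reference, the equivalent Python-API call is sketched below — a minimal, hedged example assuming the `ultralytics` package is installed in the compute environment (the sketch itself is not part of the patched files).

```python
from ultralytics import YOLO

# Load a pretrained nano detection model
model = YOLO("yolov8n.pt")

# Mirror the CLI command above: 10 epochs on COCO8 with an initial learning rate of 0.01
results = model.train(data="coco8.yaml", epochs=10, lr0=0.01)
```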
@@ -131,7 +131,7 @@ from ultralytics import YOLO
 model = YOLO("yolov8n.pt")  # load an official YOLOv8n model
 
 # Use the model
-model.train(data="coco128.yaml", epochs=3)  # train the model
+model.train(data="coco8.yaml", epochs=3)  # train the model
 metrics = model.val()  # evaluate model performance on the validation set
 results = model("https://ultralytics.com/images/bus.jpg")  # predict on an image
 path = model.export(format="onnx")  # export the model to ONNX format
diff --git a/docs/en/guides/nvidia-jetson.md b/docs/en/guides/nvidia-jetson.md
index b8d90dffa9..dbf7e261c7 100644
--- a/docs/en/guides/nvidia-jetson.md
+++ b/docs/en/guides/nvidia-jetson.md
@@ -205,17 +205,17 @@ To reproduce the above Ultralytics benchmarks on all export [formats](../modes/e
         # Load a YOLOv8n PyTorch model
         model = YOLO('yolov8n.pt')
 
-        # Benchmark YOLOv8n speed and accuracy on the COCO128 dataset for all all export formats
-        results = model.benchmarks(data='coco128.yaml', imgsz=640)
+        # Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
+        results = model.benchmark(data='coco8.yaml', imgsz=640)
         ```
 
     === "CLI"
 
         ```bash
-        # Benchmark YOLOv8n speed and accuracy on the COCO128 dataset for all all export formats
-        yolo benchmark model=yolov8n.pt data=coco128.yaml imgsz=640
+        # Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
+        yolo benchmark model=yolov8n.pt data=coco8.yaml imgsz=640
         ```
 
-    Note that benchmarking results might vary based on the exact hardware and software configuration of a system, as well as the current workload of the system at the time the benchmarks are run. For the most reliable results use a dataset with a large number of images, i.e. `data='coco128.yaml' (128 val images), or `data='coco.yaml'` (5000 val images).
+    Note that benchmarking results might vary based on the exact hardware and software configuration of a system, as well as the current workload of the system at the time the benchmarks are run. For the most reliable results use a dataset with a large number of images, i.e. `data='coco.yaml'` (5000 val images), rather than a small one like `data='coco8.yaml'` (4 val images).
 
!!! 
Note diff --git a/docs/en/guides/object-counting.md b/docs/en/guides/object-counting.md index 725b1b51fb..b3cc55d9d0 100644 --- a/docs/en/guides/object-counting.md +++ b/docs/en/guides/object-counting.md @@ -219,22 +219,22 @@ Object counting with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly ### Optional Arguments `set_args` -| Name | Type | Default | Description | -|-----------------------|-------------|----------------------------|--------------------------------------------------| -| `view_img` | `bool` | `False` | Display frames with counts | -| `view_in_counts` | `bool` | `True` | Display in-counts only on video frame | -| `view_out_counts` | `bool` | `True` | Display out-counts only on video frame | -| `line_thickness` | `int` | `2` | Increase bounding boxes and count text thickness | -| `reg_pts` | `list` | `[(20, 400), (1260, 400)]` | Points defining the Region Area | -| `classes_names` | `dict` | `model.model.names` | Dictionary of Class Names | -| `count_reg_color` | `RGB Color` | `(255, 0, 255)` | Color of the Object counting Region or Line | -| `track_thickness` | `int` | `2` | Thickness of Tracking Lines | -| `draw_tracks` | `bool` | `False` | Enable drawing Track lines | -| `track_color` | `RGB Color` | `(0, 255, 0)` | Color for each track line | -| `line_dist_thresh` | `int` | `15` | Euclidean Distance threshold for line counter | -| `count_txt_color` | `RGB Color` | `(255, 255, 255)` | Foreground color for Object counts text | -| `region_thickness` | `int` | `5` | Thickness for object counter region or line | -| `count_bg_color` | `RGB Color` | `(255, 255, 255)` | Count highlighter color | +| Name | Type | Default | Description | +|--------------------|-------------|----------------------------|--------------------------------------------------| +| `view_img` | `bool` | `False` | Display frames with counts | +| `view_in_counts` | `bool` | `True` | Display in-counts only on video frame | +| `view_out_counts` | `bool` | `True` | Display out-counts only on video frame | +| `line_thickness` | `int` | `2` | Increase bounding boxes and count text thickness | +| `reg_pts` | `list` | `[(20, 400), (1260, 400)]` | Points defining the Region Area | +| `classes_names` | `dict` | `model.model.names` | Dictionary of Class Names | +| `count_reg_color` | `RGB Color` | `(255, 0, 255)` | Color of the Object counting Region or Line | +| `track_thickness` | `int` | `2` | Thickness of Tracking Lines | +| `draw_tracks` | `bool` | `False` | Enable drawing Track lines | +| `track_color` | `RGB Color` | `(0, 255, 0)` | Color for each track line | +| `line_dist_thresh` | `int` | `15` | Euclidean Distance threshold for line counter | +| `count_txt_color` | `RGB Color` | `(255, 255, 255)` | Foreground color for Object counts text | +| `region_thickness` | `int` | `5` | Thickness for object counter region or line | +| `count_bg_color` | `RGB Color` | `(255, 255, 255)` | Count highlighter color | ### Arguments `model.track` diff --git a/docs/en/guides/workouts-monitoring.md b/docs/en/guides/workouts-monitoring.md index 69ea55f015..3c60be8f2f 100644 --- a/docs/en/guides/workouts-monitoring.md +++ b/docs/en/guides/workouts-monitoring.md @@ -19,7 +19,6 @@ Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://gi Watch: Workouts Monitoring using Ultralytics YOLOv8 | Pushups, Pullups, Ab Workouts - ## Advantages of Workouts Monitoring? - **Optimized Performance:** Tailoring workouts based on monitoring data for better results. 
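The `set_args` table in the object-counting hunk above lists the counter's options; the sketch below shows how they are typically wired into the `ObjectCounter` solution. It assumes the `ultralytics.solutions` API of this release and an invented placeholder video path — treat it as illustrative, not as part of the patch.

```python
import cv2
from ultralytics import YOLO
from ultralytics.solutions import object_counter

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video.mp4")  # placeholder source, not from the patch

# Configure the counter using arguments from the table above
counter = object_counter.ObjectCounter()
counter.set_args(
    view_img=True,
    reg_pts=[(20, 400), (1260, 400)],  # endpoints of the counting line
    classes_names=model.names,
    draw_tracks=True,
)

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break
    tracks = model.track(im0, persist=True, show=False)  # track objects frame-to-frame
    im0 = counter.start_counting(im0, tracks)  # update in/out counts and annotate the frame

cap.release()
cv2.destroyAllWindows()
```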
@@ -157,4 +156,4 @@ Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://gi
 | `conf`    | `float` | `0.3`   | Confidence Threshold                                        |
 | `iou`     | `float` | `0.5`   | IOU Threshold                                               |
 | `classes` | `list`  | `None`  | filter results by class, i.e. classes=0, or classes=[0,2,3] |
-| `verbose` | `bool`  | `True`  | Display the object tracking results                         |
\ No newline at end of file
+| `verbose` | `bool`  | `True`  | Display the object tracking results                         |
diff --git a/docs/en/help/CI.md b/docs/en/help/CI.md
index 62c8d3a8f0..173886fe7d 100644
--- a/docs/en/help/CI.md
+++ b/docs/en/help/CI.md
@@ -1,7 +1,7 @@
 ---
 comments: true
 description: Learn how Ultralytics leverages Continuous Integration (CI) for maintaining high-quality code. Explore our CI tests and the status of these tests for our repositories.
-keywords: continuous integration, software development, CI tests, Ultralytics repositories, high-quality code, Docker Deployment, Broken Links, CodeQL, PyPi Publishing
+keywords: continuous integration, software development, CI tests, Ultralytics repositories, high-quality code, Docker Deployment, Broken Links, CodeQL, PyPI Publishing
 ---
 
 # Continuous Integration (CI)
@@ -16,13 +16,13 @@ Here's a brief description of our CI actions:
 - **[Docker Deployment](https://github.com/ultralytics/ultralytics/actions/workflows/docker.yaml):** This test checks the deployment of the project using Docker to ensure the Dockerfile and related scripts are working correctly.
 - **[Broken Links](https://github.com/ultralytics/ultralytics/actions/workflows/links.yml):** This test scans the codebase for any broken or dead links in our markdown or HTML files.
 - **[CodeQL](https://github.com/ultralytics/ultralytics/actions/workflows/codeql.yaml):** CodeQL is a tool from GitHub that performs semantic analysis on our code, helping to find potential security vulnerabilities and maintain high-quality code.
-- **[PyPi Publishing](https://github.com/ultralytics/ultralytics/actions/workflows/publish.yml):** This test checks if the project can be packaged and published to PyPi without any errors.
+- **[PyPI Publishing](https://github.com/ultralytics/ultralytics/actions/workflows/publish.yml):** This test checks if the project can be packaged and published to PyPI without any errors.
### CI Results Below is the table showing the status of these CI tests for our main repositories: -| Repository | CI | Docker Deployment | Broken Links | CodeQL | PyPi and Docs Publishing | +| Repository | CI | Docker Deployment | Broken Links | CodeQL | PyPI and Docs Publishing | |-----------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [yolov3](https://github.com/ultralytics/yolov3) | [![YOLOv3 CI](https://github.com/ultralytics/yolov3/actions/workflows/ci-testing.yml/badge.svg)](https://github.com/ultralytics/yolov3/actions/workflows/ci-testing.yml) | [![Publish Docker Images](https://github.com/ultralytics/yolov3/actions/workflows/docker.yml/badge.svg)](https://github.com/ultralytics/yolov3/actions/workflows/docker.yml) | [![Check Broken links](https://github.com/ultralytics/yolov3/actions/workflows/links.yml/badge.svg)](https://github.com/ultralytics/yolov3/actions/workflows/links.yml) | [![CodeQL](https://github.com/ultralytics/yolov3/actions/workflows/codeql-analysis.yml/badge.svg)](https://github.com/ultralytics/yolov3/actions/workflows/codeql-analysis.yml) | | | [yolov5](https://github.com/ultralytics/yolov5) | [![YOLOv5 CI](https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml/badge.svg)](https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml) | [![Publish Docker Images](https://github.com/ultralytics/yolov5/actions/workflows/docker.yml/badge.svg)](https://github.com/ultralytics/yolov5/actions/workflows/docker.yml) | [![Check Broken links](https://github.com/ultralytics/yolov5/actions/workflows/links.yml/badge.svg)](https://github.com/ultralytics/yolov5/actions/workflows/links.yml) | [![CodeQL](https://github.com/ultralytics/yolov5/actions/workflows/codeql-analysis.yml/badge.svg)](https://github.com/ultralytics/yolov5/actions/workflows/codeql-analysis.yml) | | diff --git a/docs/en/integrations/clearml.md b/docs/en/integrations/clearml.md index 92b069a422..3af8d707f0 100644 --- a/docs/en/integrations/clearml.md +++ b/docs/en/integrations/clearml.md @@ -80,7 +80,7 @@ Before diving into the usage instructions, be sure to check out the range of [YO model = YOLO(f'{model_variant}.pt') # Step 4: Setting Up Training Arguments - args = dict(data="coco128.yaml", epochs=16) + args = dict(data="coco8.yaml", epochs=16) task.connect(args) # Step 5: Initiating Model Training @@ -97,7 +97,7 @@ Let’s understand the steps showcased in the usage code snippet above. 
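Before stepping through them, here is a consolidated, hedged sketch of the full ClearML flow; the project and task names are invented placeholders, and the calls follow the standard `clearml` SDK (`Task.init`, `task.connect`).

```python
from clearml import Task
from ultralytics import YOLO

# Initialize a ClearML task so the run is tracked (placeholder names)
task = Task.init(project_name="yolov8-experiments", task_name="coco8-demo")

# Step 3: load the YOLOv8 model
model = YOLO("yolov8n.pt")

# Step 4: connect training arguments for tracking and UI-side editing
args = dict(data="coco8.yaml", epochs=16)
task.connect(args)

# Step 5: start training with the connected arguments
results = model.train(**args)
```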
**Step 3: Loading the YOLOv8 Model**: The selected YOLOv8 model is loaded using Ultralytics' YOLO class, preparing it for training. -**Step 4: Setting Up Training Arguments**: Key training arguments like the dataset (`coco128.yaml`) and the number of epochs (`16`) are organized in a dictionary and connected to the ClearML task. This allows for tracking and potential modification via the ClearML UI. For a detailed understanding of the model training process and best practices, refer to our [YOLOv8 Model Training guide](../modes/train.md). +**Step 4: Setting Up Training Arguments**: Key training arguments like the dataset (`coco8.yaml`) and the number of epochs (`16`) are organized in a dictionary and connected to the ClearML task. This allows for tracking and potential modification via the ClearML UI. For a detailed understanding of the model training process and best practices, refer to our [YOLOv8 Model Training guide](../modes/train.md). **Step 5: Initiating Model Training**: The model training is started with the specified arguments. The results of the training process are captured in the `results` variable. diff --git a/docs/en/integrations/comet.md b/docs/en/integrations/comet.md index 95ada28de8..99b376deed 100644 --- a/docs/en/integrations/comet.md +++ b/docs/en/integrations/comet.md @@ -74,7 +74,7 @@ Before diving into the usage instructions, be sure to check out the range of [YO # train the model results = model.train( - data="coco128.yaml", + data="coco8.yaml", project="comet-example-yolov8-coco128", batch=32, save_period=1, diff --git a/docs/en/integrations/index.md b/docs/en/integrations/index.md index 64b5badf38..46a90b0e54 100644 --- a/docs/en/integrations/index.md +++ b/docs/en/integrations/index.md @@ -71,8 +71,8 @@ Welcome to the Ultralytics Integrations page! This page provides an overview of - [TFLite Edge TPU](edge-tpu.md): Developed by [Google](https://www.google.com) for optimizing TensorFlow Lite models on Edge TPUs, this model format ensures high-speed, efficient edge computing. -- [TF.js](tfjs.md): Developed by [Google](https://www.google.com) to facilitate machine learning in browsers and Node.js, TF.js allows JavaScript-based deployment of ML models. - +- [TF.js](tfjs.md): Developed by [Google](https://www.google.com) to facilitate machine learning in browsers and Node.js, TF.js allows JavaScript-based deployment of ML models. + - [PaddlePaddle](paddlepaddle.md): An open-source deep learning platform by [Baidu](https://www.baidu.com/), PaddlePaddle enables the efficient deployment of AI models and focuses on the scalability of industrial applications. - [NCNN](ncnn.md): Developed by [Tencent](http://www.tencent.com/), NCNN is an efficient neural network inference framework tailored for mobile devices. It enables direct deployment of AI models into apps, optimizing performance across various mobile platforms. 
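The integrations index above surveys export targets such as TF.js, PaddlePaddle, and NCNN. As a hedged illustration (not part of the patch), exporting to one of them from Python is a single call; `ncnn` is used here purely as an example format.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Export to an integration format from the list above, e.g. NCNN for mobile apps
path = model.export(format="ncnn")  # returns the path to the exported model
```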
diff --git a/docs/en/integrations/openvino.md b/docs/en/integrations/openvino.md
index 4234d78d57..36d2293c7e 100644
--- a/docs/en/integrations/openvino.md
+++ b/docs/en/integrations/openvino.md
@@ -261,14 +261,14 @@ To reproduce the Ultralytics benchmarks above on all export [formats](../modes/e
         # Load a YOLOv8n PyTorch model
         model = YOLO('yolov8n.pt')
 
-        # Benchmark YOLOv8n speed and accuracy on the COCO128 dataset for all all export formats
-        results= model.benchmarks(data='coco128.yaml')
+        # Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
+        results = model.benchmark(data='coco8.yaml')
         ```
 
     === "CLI"
 
         ```bash
-        # Benchmark YOLOv8n speed and accuracy on the COCO128 dataset for all all export formats
-        yolo benchmark model=yolov8n.pt data=coco128.yaml
+        # Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
+        yolo benchmark model=yolov8n.pt data=coco8.yaml
         ```
 
-    Note that benchmarking results might vary based on the exact hardware and software configuration of a system, as well as the current workload of the system at the time the benchmarks are run. For the most reliable results use a dataset with a large number of images, i.e. `data='coco128.yaml' (128 val images), or `data='coco.yaml'` (5000 val images).
+    Note that benchmarking results might vary based on the exact hardware and software configuration of a system, as well as the current workload of the system at the time the benchmarks are run. For the most reliable results use a dataset with a large number of images, i.e. `data='coco.yaml'` (5000 val images), rather than a small one like `data='coco8.yaml'` (4 val images).
diff --git a/docs/en/integrations/ray-tune.md b/docs/en/integrations/ray-tune.md
index cc39682b23..c59fe2df87 100644
--- a/docs/en/integrations/ray-tune.md
+++ b/docs/en/integrations/ray-tune.md
@@ -112,13 +112,13 @@ In this example, we demonstrate how to use a custom search space for hyperparame
     model = YOLO("yolov8n.pt")
 
     # Run Ray Tune on the model
-    result_grid = model.tune(data="coco128.yaml",
+    result_grid = model.tune(data="coco8.yaml",
                              space={"lr0": tune.uniform(1e-5, 1e-1)},
                              epochs=50,
                              use_ray=True)
     ```
 
-In the code snippet above, we create a YOLO model with the "yolov8n.pt" pretrained weights. Then, we call the `tune()` method, specifying the dataset configuration with "coco128.yaml". We provide a custom search space for the initial learning rate `lr0` using a dictionary with the key "lr0" and the value `tune.uniform(1e-5, 1e-1)`. Finally, we pass additional training arguments, such as the number of epochs directly to the tune method as `epochs=50`.
+In the code snippet above, we create a YOLO model with the "yolov8n.pt" pretrained weights. Then, we call the `tune()` method, specifying the dataset configuration with "coco8.yaml". We provide a custom search space for the initial learning rate `lr0` using a dictionary with the key "lr0" and the value `tune.uniform(1e-5, 1e-1)`. Finally, we pass additional training arguments, such as the number of epochs, directly to the `tune()` method as `epochs=50`.
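Building on the Ray Tune paragraph above, a slightly wider search space can be passed the same way. The `momentum` range below is an illustrative assumption rather than something prescribed by the patch, and `ray[tune]` must be installed alongside `ultralytics`.

```python
from ray import tune
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Tune the initial learning rate and momentum together; other args pass through unchanged
result_grid = model.tune(
    data="coco8.yaml",
    space={
        "lr0": tune.uniform(1e-5, 1e-1),      # initial learning rate
        "momentum": tune.uniform(0.6, 0.98),  # illustrative extra search dimension
    },
    epochs=50,
    use_ray=True,
)
```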
## Processing Ray Tune Results diff --git a/docs/en/integrations/tensorboard.md b/docs/en/integrations/tensorboard.md index 5e0cbf1267..c73bdb3797 100644 --- a/docs/en/integrations/tensorboard.md +++ b/docs/en/integrations/tensorboard.md @@ -67,7 +67,7 @@ Before diving into the usage instructions, be sure to check out the range of [YO model = YOLO('yolov8n.pt') # Train the model - results = model.train(data='coco128.yaml', epochs=100, imgsz=640) + results = model.train(data='coco8.yaml', epochs=100, imgsz=640) ``` Upon running the usage code snippet above, you can expect the following output: diff --git a/docs/en/integrations/tfjs.md b/docs/en/integrations/tfjs.md index a192f6b450..6f80ecc721 100644 --- a/docs/en/integrations/tfjs.md +++ b/docs/en/integrations/tfjs.md @@ -32,7 +32,7 @@ Here are the key features that make TF.js a powerful tool for developers: ## Deployment Options with TensorFlow.js -Before we dive into the process of exporting YOLOv8 models to the TF.js format, let's explore some typical deployment scenarios where this format is used. +Before we dive into the process of exporting YOLOv8 models to the TF.js format, let's explore some typical deployment scenarios where this format is used. TF.js provides a range of options to deploy your machine learning models: diff --git a/docs/en/integrations/weights-biases.md b/docs/en/integrations/weights-biases.md index 6a69a18b67..3c43b3eaff 100644 --- a/docs/en/integrations/weights-biases.md +++ b/docs/en/integrations/weights-biases.md @@ -72,7 +72,7 @@ Before diving into the usage instructions for YOLOv8 model training with Weights # Step 2: Define the YOLOv8 Model and Dataset model_name = "yolov8n" - dataset_name = "coco128.yaml" + dataset_name = "coco8.yaml" model = YOLO(f"{model_name}.pt") # Step 3: Add W&B Callback for Ultralytics diff --git a/docs/en/modes/benchmark.md b/docs/en/modes/benchmark.md index 7f8e4573d7..6842f98cdd 100644 --- a/docs/en/modes/benchmark.md +++ b/docs/en/modes/benchmark.md @@ -76,7 +76,7 @@ Arguments such as `model`, `data`, `imgsz`, `half`, `device`, and `verbose` prov | Key | Default Value | Description | |-----------|---------------|---------------------------------------------------------------------------------------------------------------------------------------------------| | `model` | `None` | Specifies the path to the model file. Accepts both `.pt` and `.yaml` formats, e.g., `"yolov8n.pt"` for pre-trained models or configuration files. | -| `data` | `None` | Path to a YAML file defining the dataset for benchmarking, typically including paths and settings for validation data. Example: `"coco128.yaml"`. | +| `data` | `None` | Path to a YAML file defining the dataset for benchmarking, typically including paths and settings for validation data. Example: `"coco8.yaml"`. | | `imgsz` | `640` | The input image size for the model. Can be a single integer for square images or a tuple `(width, height)` for non-square, e.g., `(640, 480)`. | | `half` | `False` | Enables FP16 (half-precision) inference, reducing memory usage and possibly increasing speed on compatible hardware. Use `half=True` to enable. | | `int8` | `False` | Activates INT8 quantization for further optimized performance on supported devices, especially useful for edge devices. Set `int8=True` to use. 
|
diff --git a/docs/en/modes/train.md b/docs/en/modes/train.md
index 1175d2e999..3d83a5149e 100644
--- a/docs/en/modes/train.md
+++ b/docs/en/modes/train.md
@@ -47,7 +47,7 @@ The following are some notable features of YOLOv8's Train mode:
 
 ## Usage Examples
 
-Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. The training device can be specified using the `device` argument. If no argument is passed GPU `device=0` will be used if available, otherwise `device=cpu` will be used. See Arguments section below for a full list of training arguments.
+Train YOLOv8n on the COCO8 dataset for 100 epochs at image size 640. The training device can be specified using the `device` argument. If no argument is passed, GPU `device=0` will be used if available; otherwise `device=cpu` will be used. See the Arguments section below for a full list of training arguments.
 
 !!! Example "Single-GPU and CPU Training Example"
 
@@ -64,20 +64,20 @@ Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. The train
         model = YOLO('yolov8n.yaml').load('yolov8n.pt')  # build from YAML and transfer weights
 
         # Train the model
-        results = model.train(data='coco128.yaml', epochs=100, imgsz=640)
+        results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
         ```
 
     === "CLI"
 
         ```bash
         # Build a new model from YAML and start training from scratch
-        yolo detect train data=coco128.yaml model=yolov8n.yaml epochs=100 imgsz=640
+        yolo detect train data=coco8.yaml model=yolov8n.yaml epochs=100 imgsz=640
 
         # Start training from a pretrained *.pt model
-        yolo detect train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640
+        yolo detect train data=coco8.yaml model=yolov8n.pt epochs=100 imgsz=640
 
         # Build a new model from YAML, transfer pretrained weights to it and start training
-        yolo detect train data=coco128.yaml model=yolov8n.yaml pretrained=yolov8n.pt epochs=100 imgsz=640
+        yolo detect train data=coco8.yaml model=yolov8n.yaml pretrained=yolov8n.pt epochs=100 imgsz=640
         ```
 
 ### Multi-GPU Training
@@ -97,14 +97,14 @@ Multi-GPU training allows for more efficient utilization of available hardware r
         model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)
 
         # Train the model with 2 GPUs
-        results = model.train(data='coco128.yaml', epochs=100, imgsz=640, device=[0, 1])
+        results = model.train(data='coco8.yaml', epochs=100, imgsz=640, device=[0, 1])
         ```
 
     === "CLI"
 
         ```bash
         # Start training from a pretrained *.pt model using GPUs 0 and 1
-        yolo detect train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640 device=0,1
+        yolo detect train data=coco8.yaml model=yolov8n.pt epochs=100 imgsz=640 device=0,1
         ```
 
 ### Apple M1 and M2 MPS Training
@@ -124,14 +124,14 @@ To enable training on Apple M1 and M2 chips, you should specify 'mps' as your de
         model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)
 
-        # Train the model with 2 GPUs
-        results = model.train(data='coco128.yaml', epochs=100, imgsz=640, device='mps')
+        # Train the model with MPS
+        results = model.train(data='coco8.yaml', epochs=100, imgsz=640, device='mps')
         ```
 
     === "CLI"
 
         ```bash
-        # Start training from a pretrained *.pt model using GPUs 0 and 1
-        yolo detect train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640 device=mps
+        # Start training from a pretrained *.pt model using MPS
+        yolo detect train data=coco8.yaml model=yolov8n.pt epochs=100 imgsz=640 device=mps
         ```
 
 While leveraging the computational power of the M1/M2 chips, this enables more efficient processing of the training tasks.
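The MPS hunk above hard-codes `device='mps'`; a defensive variant that falls back to CPU when MPS is unavailable is sketched below, using PyTorch's public `torch.backends.mps.is_available()` check (a suggestion, not part of the patched files).

```python
import torch
from ultralytics import YOLO

# Prefer Apple-silicon MPS when present, otherwise fall back to CPU
device = "mps" if torch.backends.mps.is_available() else "cpu"

model = YOLO("yolov8n.pt")
results = model.train(data="coco8.yaml", epochs=100, imgsz=640, device=device)
```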
For more detailed guidance and advanced configuration options, please refer to the [PyTorch MPS documentation](https://pytorch.org/docs/stable/notes/mps.html).
@@ -178,7 +178,7 @@ The training settings for YOLO models encompass various hyperparameters and conf
 | Argument          | Default  | Description                                                                                                                                                                                                            |
 |-------------------|----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
 | `model`           | `None`   | Specifies the model file for training. Accepts a path to either a `.pt` pretrained model or a `.yaml` configuration file. Essential for defining the model structure or initializing weights.                          |
-| `data`            | `None`   | Path to the dataset configuration file (e.g., `coco128.yaml`). This file contains dataset-specific parameters, including paths to training and validation data, class names, and number of classes.                    |
+| `data`            | `None`   | Path to the dataset configuration file (e.g., `coco8.yaml`). This file contains dataset-specific parameters, including paths to training and validation data, class names, and number of classes.                      |
 | `epochs`          | `100`    | Total number of training epochs. Each epoch represents a full pass over the entire dataset. Adjusting this value can affect training duration and model performance.                                                   |
 | `time`            | `None`   | Maximum training time in hours. If set, this overrides the `epochs` argument, allowing training to automatically stop after the specified duration. Useful for time-constrained training scenarios.                    |
 | `patience`        | `100`    | Number of epochs to wait without improvement in validation metrics before early stopping the training. Helps prevent overfitting by stopping training when performance plateaus.                                       |
diff --git a/docs/en/modes/val.md b/docs/en/modes/val.md
index 0c77425a18..96703cbac9 100644
--- a/docs/en/modes/val.md
+++ b/docs/en/modes/val.md
@@ -47,7 +47,7 @@ These are the notable functionalities offered by YOLOv8's Val mode:
 
 ## Usage Examples
 
-Validate trained YOLOv8n model accuracy on the COCO128 dataset. No argument need to passed as the `model` retains it's training `data` and arguments as model attributes. See Arguments section below for a full list of export arguments.
+Validate trained YOLOv8n model accuracy on the COCO8 dataset. No arguments are needed as the `model` retains its training `data` and arguments as model attributes. See the Arguments section below for a full list of validation arguments.
 
 !!! Example
 
@@ -79,22 +79,22 @@ Validate trained YOLOv8n model accuracy on the COCO128 dataset. No argument need
 
 When validating YOLO models, several arguments can be fine-tuned to optimize the evaluation process. These arguments control aspects such as input image size, batch processing, and performance thresholds. Below is a detailed breakdown of each argument to help you customize your validation settings effectively.
 
-| Argument      | Type    | Default | Description                                                                                                                                                     |
-|---------------|---------|---------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `data`        | `str`   | `None`  | Specifies the path to the dataset configuration file (e.g., `coco128.yaml`). This file includes paths to validation data, class names, and number of classes.  |
-| `imgsz`       | `int`   | `640`   | Defines the size of input images. All images are resized to this dimension before processing.
| -| `batch` | `int` | `16` | Sets the number of images per batch. Use `-1` for AutoBatch, which automatically adjusts based on GPU memory availability. | -| `save_json` | `bool` | `False` | If `True`, saves the results to a JSON file for further analysis or integration with other tools. | -| `save_hybrid` | `bool` | `False` | If `True`, saves a hybrid version of labels that combines original annotations with additional model predictions. | -| `conf` | `float` | `0.001` | Sets the minimum confidence threshold for detections. Detections with confidence below this threshold are discarded. | -| `iou` | `float` | `0.6` | Sets the Intersection Over Union (IoU) threshold for Non-Maximum Suppression (NMS). Helps in reducing duplicate detections. | -| `max_det` | `int` | `300` | Limits the maximum number of detections per image. Useful in dense scenes to prevent excessive detections. | -| `half` | `bool` | `True` | Enables half-precision (FP16) computation, reducing memory usage and potentially increasing speed with minimal impact on accuracy. | -| `device` | `str` | `None` | Specifies the device for validation (`cpu`, `cuda:0`, etc.). Allows flexibility in utilizing CPU or GPU resources. | -| `dnn` | `bool` | `False` | If `True`, uses the OpenCV DNN module for ONNX model inference, offering an alternative to PyTorch inference methods. | -| `plots` | `bool` | `False` | When set to `True`, generates and saves plots of predictions versus ground truth for visual evaluation of the model's performance. | -| `rect` | `bool` | `False` | If `True`, uses rectangular inference for batching, reducing padding and potentially increasing speed and efficiency. | -| `split` | `str` | `val` | Determines the dataset split to use for validation (`val`, `test`, or `train`). Allows flexibility in choosing the data segment for performance evaluation. | +| Argument | Type | Default | Description | +|---------------|---------|---------|-------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `data` | `str` | `None` | Specifies the path to the dataset configuration file (e.g., `coco8.yaml`). This file includes paths to validation data, class names, and number of classes. | +| `imgsz` | `int` | `640` | Defines the size of input images. All images are resized to this dimension before processing. | +| `batch` | `int` | `16` | Sets the number of images per batch. Use `-1` for AutoBatch, which automatically adjusts based on GPU memory availability. | +| `save_json` | `bool` | `False` | If `True`, saves the results to a JSON file for further analysis or integration with other tools. | +| `save_hybrid` | `bool` | `False` | If `True`, saves a hybrid version of labels that combines original annotations with additional model predictions. | +| `conf` | `float` | `0.001` | Sets the minimum confidence threshold for detections. Detections with confidence below this threshold are discarded. | +| `iou` | `float` | `0.6` | Sets the Intersection Over Union (IoU) threshold for Non-Maximum Suppression (NMS). Helps in reducing duplicate detections. | +| `max_det` | `int` | `300` | Limits the maximum number of detections per image. Useful in dense scenes to prevent excessive detections. | +| `half` | `bool` | `True` | Enables half-precision (FP16) computation, reducing memory usage and potentially increasing speed with minimal impact on accuracy. | +| `device` | `str` | `None` | Specifies the device for validation (`cpu`, `cuda:0`, etc.). 
Allows flexibility in utilizing CPU or GPU resources. | +| `dnn` | `bool` | `False` | If `True`, uses the OpenCV DNN module for ONNX model inference, offering an alternative to PyTorch inference methods. | +| `plots` | `bool` | `False` | When set to `True`, generates and saves plots of predictions versus ground truth for visual evaluation of the model's performance. | +| `rect` | `bool` | `False` | If `True`, uses rectangular inference for batching, reducing padding and potentially increasing speed and efficiency. | +| `split` | `str` | `val` | Determines the dataset split to use for validation (`val`, `test`, or `train`). Allows flexibility in choosing the data segment for performance evaluation. | Each of these settings plays a vital role in the validation process, allowing for a customizable and efficient evaluation of YOLO models. Adjusting these parameters according to your specific needs and resources can help achieve the best balance between accuracy and performance. diff --git a/docs/en/quickstart.md b/docs/en/quickstart.md index cf54217534..b8317e38fb 100644 --- a/docs/en/quickstart.md +++ b/docs/en/quickstart.md @@ -161,7 +161,7 @@ The Ultralytics command line interface (CLI) allows for simple single-line comma Train a detection model for 10 epochs with an initial learning_rate of 0.01 ```bash - yolo train data=coco128.yaml model=yolov8n.pt epochs=10 lr0=0.01 + yolo train data=coco8.yaml model=yolov8n.pt epochs=10 lr0=0.01 ``` === "Predict" @@ -175,7 +175,7 @@ The Ultralytics command line interface (CLI) allows for simple single-line comma Val a pretrained detection model at batch-size 1 and image size 640: ```bash - yolo val model=yolov8n.pt data=coco128.yaml batch=1 imgsz=640 + yolo val model=yolov8n.pt data=coco8.yaml batch=1 imgsz=640 ``` === "Export" @@ -225,8 +225,8 @@ For example, users can load a model, train it, evaluate its performance on a val # Load a pretrained YOLO model (recommended for training) model = YOLO('yolov8n.pt') - # Train the model using the 'coco128.yaml' dataset for 3 epochs - results = model.train(data='coco128.yaml', epochs=3) + # Train the model using the 'coco8.yaml' dataset for 3 epochs + results = model.train(data='coco8.yaml', epochs=3) # Evaluate the model's performance on the validation set results = model.val() diff --git a/docs/en/tasks/detect.md b/docs/en/tasks/detect.md index 5aed8c3f79..6a32001bba 100644 --- a/docs/en/tasks/detect.md +++ b/docs/en/tasks/detect.md @@ -42,11 +42,11 @@ YOLOv8 pretrained Detect models are shown here. Detect, Segment and Pose models | [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x.pt) | 640 | 53.9 | 479.1 | 3.53 | 68.2 | 257.8 | - **mAPval** values are for single-model single-scale on [COCO val2017](https://cocodataset.org) dataset.
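As a complement to the validation arguments table patched above, the sketch below shows how the headline detection metrics are read back after `model.val()`; the attribute names follow the Ultralytics results object as documented in the Val guide, and the override values mirror the CLI example above.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Validate; the model remembers its training data/args, but both can be overridden
metrics = model.val(data="coco8.yaml", imgsz=640, batch=1)

print(metrics.box.map)    # mAP50-95
print(metrics.box.map50)  # mAP50
print(metrics.box.map75)  # mAP75
```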