From 51e93d611110c88e230c901ce4b796da53d28827 Mon Sep 17 00:00:00 2001
From: Ultralytics Assistant
<135830346+UltralyticsAssistant@users.noreply.github.com>
Date: Tue, 1 Oct 2024 15:41:15 +0200
Subject: [PATCH] YOLO11 Tasks, Modes, Usage, Macros and Solutions Updates
(#16593)
Signed-off-by: UltralyticsAssistant
- Watch: How to Train [Image Classification](https://www.ultralytics.com/glossary/image-classification) Model using Caltech-256 Dataset with Ultralytics HUB
+ Watch: How to Train Image Classification Model using Caltech-256 Dataset with Ultralytics HUB
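diff --git a/docs/en/modes/benchmark.md b/docs/en/modes/benchmark.md
--- a/docs/en/modes/benchmark.md
+++ b/docs/en/modes/benchmark.md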
@@ -50,7 +50,7 @@ Once your model is trained and validated, the next logical step is to evaluate i
## Usage Examples
-Run YOLOv8n benchmarks on all supported export formats including ONNX, TensorRT etc. See Arguments section below for a full list of export arguments.
+Run YOLO11n benchmarks on all supported export formats, including ONNX, TensorRT, and more. See the Arguments section below for a full list of benchmark arguments.
!!! example
@@ -60,13 +60,13 @@ Run YOLOv8n benchmarks on all supported export formats including ONNX, TensorRT
from ultralytics.utils.benchmarks import benchmark
# Benchmark on GPU
- benchmark(model="yolov8n.pt", data="coco8.yaml", imgsz=640, half=False, device=0)
+ benchmark(model="yolo11n.pt", data="coco8.yaml", imgsz=640, half=False, device=0)
```
=== "CLI"
```bash
- yolo benchmark model=yolov8n.pt data='coco8.yaml' imgsz=640 half=False device=0
+ yolo benchmark model=yolo11n.pt data='coco8.yaml' imgsz=640 half=False device=0
```
## Arguments
@@ -75,7 +75,7 @@ Arguments such as `model`, `data`, `imgsz`, `half`, `device`, and `verbose` prov
| Key | Default Value | Description |
| --------- | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `model` | `None` | Specifies the path to the model file. Accepts both `.pt` and `.yaml` formats, e.g., `"yolov8n.pt"` for pre-trained models or configuration files. |
+| `model` | `None` | Specifies the path to the model file. Accepts both `.pt` and `.yaml` formats, e.g., `"yolo11n.pt"` for pre-trained models or configuration files. |
| `data` | `None` | Path to a YAML file defining the dataset for benchmarking, typically including paths and settings for [validation data](https://www.ultralytics.com/glossary/validation-data). Example: `"coco8.yaml"`. |
| `imgsz` | `640` | The input image size for the model. Can be a single integer for square images or a tuple `(width, height)` for non-square, e.g., `(640, 480)`. |
| `half` | `False` | Enables FP16 (half-precision) inference, reducing memory usage and possibly increasing speed on compatible hardware. Use `half=True` to enable. |
@@ -93,9 +93,9 @@ See full `export` details in the [Export](../modes/export.md) page.
## FAQ
-### How do I benchmark my YOLOv8 model's performance using Ultralytics?
+### How do I benchmark my YOLO11 model's performance using Ultralytics?
-Ultralytics YOLOv8 offers a Benchmark mode to assess your model's performance across different export formats. This mode provides insights into key metrics such as [mean Average Precision](https://www.ultralytics.com/glossary/mean-average-precision-map) (mAP50-95), accuracy, and inference time in milliseconds. To run benchmarks, you can use either Python or CLI commands. For example, to benchmark on a GPU:
+Ultralytics YOLO11 offers a Benchmark mode to assess your model's performance across different export formats. This mode provides insights into key metrics such as [mean Average Precision](https://www.ultralytics.com/glossary/mean-average-precision-map) (mAP50-95), accuracy, and inference time in milliseconds. To run benchmarks, you can use either Python or CLI commands. For example, to benchmark on a GPU:
!!! example
@@ -105,29 +105,29 @@ Ultralytics YOLOv8 offers a Benchmark mode to assess your model's performance ac
from ultralytics.utils.benchmarks import benchmark
# Benchmark on GPU
- benchmark(model="yolov8n.pt", data="coco8.yaml", imgsz=640, half=False, device=0)
+ benchmark(model="yolo11n.pt", data="coco8.yaml", imgsz=640, half=False, device=0)
```
=== "CLI"
```bash
- yolo benchmark model=yolov8n.pt data='coco8.yaml' imgsz=640 half=False device=0
+ yolo benchmark model=yolo11n.pt data='coco8.yaml' imgsz=640 half=False device=0
```
For more details on benchmark arguments, visit the [Arguments](#arguments) section.
-### What are the benefits of exporting YOLOv8 models to different formats?
+### What are the benefits of exporting YOLO11 models to different formats?
-Exporting YOLOv8 models to different formats such as ONNX, TensorRT, and OpenVINO allows you to optimize performance based on your deployment environment. For instance:
+Exporting YOLO11 models to different formats such as ONNX, TensorRT, and OpenVINO allows you to optimize performance based on your deployment environment. For instance:
- **ONNX:** Provides up to 3x CPU speedup.
- **TensorRT:** Offers up to 5x GPU speedup.
- **OpenVINO:** Specifically optimized for Intel hardware.
These formats enhance both the speed and accuracy of your models, making them more efficient for various real-world applications. Visit the [Export](../modes/export.md) page for complete details.
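For instance, a minimal export sketch covering these three formats (assuming the TensorRT and OpenVINO dependencies are installed; TensorRT exports use the `engine` format string):

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# Export the same model to each target format
for fmt in ("onnx", "engine", "openvino"):  # "engine" = TensorRT
    model.export(format=fmt)
```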
-### Why is benchmarking crucial in evaluating YOLOv8 models?
+### Why is benchmarking crucial in evaluating YOLO11 models?
-Benchmarking your YOLOv8 models is essential for several reasons:
+Benchmarking your YOLO11 models is essential for several reasons:
- **Informed Decisions:** Understand the trade-offs between speed and accuracy.
- **Resource Allocation:** Gauge the performance across different hardware options.
@@ -135,9 +135,9 @@ Benchmarking your YOLOv8 models is essential for several reasons:
- **Cost Efficiency:** Optimize hardware usage based on benchmark results.
Key metrics such as mAP50-95, Top-5 accuracy, and inference time help in making these evaluations. Refer to the [Key Metrics](#key-metrics-in-benchmark-mode) section for more information.
-### Which export formats are supported by YOLOv8, and what are their advantages?
+### Which export formats are supported by YOLO11, and what are their advantages?
-YOLOv8 supports a variety of export formats, each tailored for specific hardware and use cases:
+YOLO11 supports a variety of export formats, each tailored for specific hardware and use cases:
- **ONNX:** Best for CPU performance.
- **TensorRT:** Ideal for GPU efficiency.
@@ -145,11 +145,11 @@ YOLOv8 supports a variety of export formats, each tailored for specific hardware
- **CoreML & [TensorFlow](https://www.ultralytics.com/glossary/tensorflow):** Useful for iOS and general ML applications.
For a complete list of supported formats and their respective advantages, check out the [Supported Export Formats](#supported-export-formats) section.
-### What arguments can I use to fine-tune my YOLOv8 benchmarks?
+### What arguments can I use to fine-tune my YOLO11 benchmarks?
When running benchmarks, several arguments can be customized to suit specific needs (combined in the sketch after this list):
-- **model:** Path to the model file (e.g., "yolov8n.pt").
+- **model:** Path to the model file (e.g., "yolo11n.pt").
- **data:** Path to a YAML file defining the dataset (e.g., "coco8.yaml").
- **imgsz:** The input image size, either as a single integer or a tuple.
- **half:** Enable FP16 inference for better performance.
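A minimal sketch combining the arguments above (assumes a CUDA GPU at index 0 and the bundled `coco8.yaml` dataset):

```python
from ultralytics.utils.benchmarks import benchmark

# Benchmark with FP16 enabled on GPU 0
benchmark(model="yolo11n.pt", data="coco8.yaml", imgsz=640, half=True, device=0)
```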
diff --git a/docs/en/modes/export.md b/docs/en/modes/export.md
index 706dd91cdc..048a2cdf3c 100644
--- a/docs/en/modes/export.md
+++ b/docs/en/modes/export.md
@@ -1,7 +1,7 @@
---
comments: true
-description: Learn how to export your YOLOv8 model to various formats like ONNX, TensorRT, and CoreML. Achieve maximum compatibility and performance.
-keywords: YOLOv8, Model Export, ONNX, TensorRT, CoreML, Ultralytics, AI, Machine Learning, Inference, Deployment
+description: Learn how to export your YOLO11 model to various formats like ONNX, TensorRT, and CoreML. Achieve maximum compatibility and performance.
+keywords: YOLO11, Model Export, ONNX, TensorRT, CoreML, Ultralytics, AI, Machine Learning, Inference, Deployment
---
# Model Export with Ultralytics YOLO
@@ -10,7 +10,7 @@ keywords: YOLOv8, Model Export, ONNX, TensorRT, CoreML, Ultralytics, AI, Machine
## Introduction
-The ultimate goal of training a model is to deploy it for real-world applications. Export mode in Ultralytics YOLOv8 offers a versatile range of options for exporting your trained model to different formats, making it deployable across various platforms and devices. This comprehensive guide aims to walk you through the nuances of model exporting, showcasing how to achieve maximum compatibility and performance.
+The ultimate goal of training a model is to deploy it for real-world applications. Export mode in Ultralytics YOLO11 offers a versatile range of options for exporting your trained model to different formats, making it deployable across various platforms and devices. This comprehensive guide aims to walk you through the nuances of model exporting, showcasing how to achieve maximum compatibility and performance.
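As a minimal sketch of what follows, exporting a pretrained model to ONNX takes a single call; `export()` returns the path of the exported file:

```python
from ultralytics import YOLO

# Load a pretrained YOLO11n model and export it to ONNX
model = YOLO("yolo11n.pt")
onnx_path = model.export(format="onnx")
print(onnx_path)  # e.g. 'yolo11n.onnx'
```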
@@ -23,7 +23,7 @@ The ultimate goal of training a model is to deploy it for real-world application
- Watch: How To Export Custom Trained Ultralytics YOLOv8 Model and Run Live Inference on Webcam.
+ Watch: How To Export Custom Trained Ultralytics YOLO Model and Run Live Inference on Webcam.
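diff --git a/docs/en/modes/predict.md b/docs/en/modes/predict.md
--- a/docs/en/modes/predict.md
+++ b/docs/en/modes/predict.md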
@@ -32,7 +32,7 @@ In the world of [machine learning](https://www.ultralytics.com/glossary/machine-
## Why Use Ultralytics YOLO for Inference?
-Here's why you should consider YOLOv8's predict mode for your various inference needs:
+Here's why you should consider YOLO11's predict mode for your various inference needs:
- **Versatility:** Capable of making inferences on images, videos, and even live streams.
- **Performance:** Engineered for real-time, high-speed processing without sacrificing [accuracy](https://www.ultralytics.com/glossary/accuracy).
@@ -41,7 +41,7 @@ Here's why you should consider YOLOv8's predict mode for your various inference
### Key Features of Predict Mode
-YOLOv8's predict mode is designed to be robust and versatile, featuring:
+YOLO11's predict mode is designed to be robust and versatile, featuring:
- **Multiple Data Source Compatibility:** Whether your data is in the form of individual images, a collection of images, video files, or real-time video streams, predict mode has you covered.
- **Streaming Mode:** Use the streaming feature to generate a memory-efficient generator of `Results` objects. Enable this by setting `stream=True` in the predictor's call method.
@@ -58,7 +58,7 @@ Ultralytics YOLO models return either a Python list of `Results` objects, or a m
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n.pt") # pretrained YOLOv8n model
+ model = YOLO("yolo11n.pt") # pretrained YOLO11n model
# Run batched inference on a list of images
results = model(["image1.jpg", "image2.jpg"]) # return a list of Results objects
@@ -80,7 +80,7 @@ Ultralytics YOLO models return either a Python list of `Results` objects, or a m
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n.pt") # pretrained YOLOv8n model
+ model = YOLO("yolo11n.pt") # pretrained YOLO11n model
# Run batched inference on a list of images
results = model(["image1.jpg", "image2.jpg"], stream=True) # return a generator of Results objects
@@ -98,7 +98,7 @@ Ultralytics YOLO models return either a Python list of `Results` objects, or a m
## Inference Sources
-YOLOv8 can process different types of input sources for inference, as shown in the table below. The sources include static images, video streams, and various data formats. The table also indicates whether each source can be used in streaming mode with the argument `stream=True` ✅. Streaming mode is beneficial for processing videos or live streams as it creates a generator of results instead of loading all frames into memory.
+YOLO11 can process different types of input sources for inference, as shown in the table below. The sources include static images, video streams, and various data formats. The table also indicates whether each source can be used in streaming mode with the argument `stream=True` ✅. Streaming mode is beneficial for processing videos or live streams as it creates a generator of results instead of loading all frames into memory.
!!! tip
@@ -131,8 +131,8 @@ Below are code examples for using each source type:
```python
from ultralytics import YOLO
- # Load a pretrained YOLOv8n model
- model = YOLO("yolov8n.pt")
+ # Load a pretrained YOLO11n model
+ model = YOLO("yolo11n.pt")
# Define path to the image file
source = "path/to/image.jpg"
@@ -147,8 +147,8 @@ Below are code examples for using each source type:
```python
from ultralytics import YOLO
- # Load a pretrained YOLOv8n model
- model = YOLO("yolov8n.pt")
+ # Load a pretrained YOLO11n model
+ model = YOLO("yolo11n.pt")
# Define current screenshot as source
source = "screen"
@@ -163,8 +163,8 @@ Below are code examples for using each source type:
```python
from ultralytics import YOLO
- # Load a pretrained YOLOv8n model
- model = YOLO("yolov8n.pt")
+ # Load a pretrained YOLO11n model
+ model = YOLO("yolo11n.pt")
# Define remote image or video URL
source = "https://ultralytics.com/images/bus.jpg"
@@ -181,8 +181,8 @@ Below are code examples for using each source type:
from ultralytics import YOLO
- # Load a pretrained YOLOv8n model
- model = YOLO("yolov8n.pt")
+ # Load a pretrained YOLO11n model
+ model = YOLO("yolo11n.pt")
# Open an image using PIL
source = Image.open("path/to/image.jpg")
@@ -199,8 +199,8 @@ Below are code examples for using each source type:
from ultralytics import YOLO
- # Load a pretrained YOLOv8n model
- model = YOLO("yolov8n.pt")
+ # Load a pretrained YOLO11n model
+ model = YOLO("yolo11n.pt")
# Read an image using OpenCV
source = cv2.imread("path/to/image.jpg")
@@ -217,8 +217,8 @@ Below are code examples for using each source type:
from ultralytics import YOLO
- # Load a pretrained YOLOv8n model
- model = YOLO("yolov8n.pt")
+ # Load a pretrained YOLO11n model
+ model = YOLO("yolo11n.pt")
# Create a random numpy array of HWC shape (640, 640, 3) with values in range [0, 255] and type uint8
source = np.random.randint(low=0, high=255, size=(640, 640, 3), dtype="uint8")
@@ -235,8 +235,8 @@ Below are code examples for using each source type:
from ultralytics import YOLO
- # Load a pretrained YOLOv8n model
- model = YOLO("yolov8n.pt")
+ # Load a pretrained YOLO11n model
+ model = YOLO("yolo11n.pt")
# Create a random torch tensor of BCHW shape (1, 3, 640, 640) with values in range [0, 1] and type float32
source = torch.rand(1, 3, 640, 640, dtype=torch.float32)
@@ -251,8 +251,8 @@ Below are code examples for using each source type:
```python
from ultralytics import YOLO
- # Load a pretrained YOLOv8n model
- model = YOLO("yolov8n.pt")
+ # Load a pretrained YOLO11n model
+ model = YOLO("yolo11n.pt")
# Define a path to a CSV file with images, URLs, videos and directories
source = "path/to/file.csv"
@@ -267,8 +267,8 @@ Below are code examples for using each source type:
```python
from ultralytics import YOLO
- # Load a pretrained YOLOv8n model
- model = YOLO("yolov8n.pt")
+ # Load a pretrained YOLO11n model
+ model = YOLO("yolo11n.pt")
# Define path to video file
source = "path/to/video.mp4"
@@ -283,8 +283,8 @@ Below are code examples for using each source type:
```python
from ultralytics import YOLO
- # Load a pretrained YOLOv8n model
- model = YOLO("yolov8n.pt")
+ # Load a pretrained YOLO11n model
+ model = YOLO("yolo11n.pt")
# Define path to directory containing images and videos for inference
source = "path/to/dir"
@@ -299,8 +299,8 @@ Below are code examples for using each source type:
```python
from ultralytics import YOLO
- # Load a pretrained YOLOv8n model
- model = YOLO("yolov8n.pt")
+ # Load a pretrained YOLO11n model
+ model = YOLO("yolo11n.pt")
# Define a glob search for all JPG files in a directory
source = "path/to/dir/*.jpg"
@@ -318,8 +318,8 @@ Below are code examples for using each source type:
```python
from ultralytics import YOLO
- # Load a pretrained YOLOv8n model
- model = YOLO("yolov8n.pt")
+ # Load a pretrained YOLO11n model
+ model = YOLO("yolo11n.pt")
# Define source as YouTube video URL
source = "https://youtu.be/LNwODJXcvt4"
@@ -335,8 +335,8 @@ Below are code examples for using each source type:
```python
from ultralytics import YOLO
- # Load a pretrained YOLOv8n model
- model = YOLO("yolov8n.pt")
+ # Load a pretrained YOLO11n model
+ model = YOLO("yolo11n.pt")
# Single stream with batch-size 1 inference
source = "rtsp://example.com/media.mp4" # RTSP, RTMP, TCP, or IP streaming address
@@ -354,8 +354,8 @@ Below are code examples for using each source type:
```python
from ultralytics import YOLO
- # Load a pretrained YOLOv8n model
- model = YOLO("yolov8n.pt")
+ # Load a pretrained YOLO11n model
+ model = YOLO("yolo11n.pt")
# Multiple streams with batched inference (e.g., batch-size 8 for 8 streams)
source = "path/to/list.streams" # *.streams text file with one streaming address per line
@@ -385,8 +385,8 @@ Below are code examples for using each source type:
```python
from ultralytics import YOLO
- # Load a pretrained YOLOv8n model
- model = YOLO("yolov8n.pt")
+ # Load a pretrained YOLO11n model
+ model = YOLO("yolo11n.pt")
# Run inference on 'bus.jpg' with arguments
model.predict("bus.jpg", save=True, imgsz=320, conf=0.5)
@@ -402,7 +402,7 @@ Visualization arguments:
## Image and Video Formats
-YOLOv8 supports various image and video formats, as specified in [ultralytics/data/utils.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/data/utils.py). See the tables below for the valid suffixes and example predict commands.
+YOLO11 supports various image and video formats, as specified in [ultralytics/data/utils.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/data/utils.py). See the tables below for the valid suffixes and example predict commands.
### Images
@@ -449,8 +449,8 @@ All Ultralytics `predict()` calls will return a list of `Results` objects:
```python
from ultralytics import YOLO
- # Load a pretrained YOLOv8n model
- model = YOLO("yolov8n.pt")
+ # Load a pretrained YOLO11n model
+ model = YOLO("yolo11n.pt")
# Run inference on an image
results = model("bus.jpg") # list of 1 Results object
@@ -501,8 +501,8 @@ For more details see the [`Results` class documentation](../reference/engine/res
```python
from ultralytics import YOLO
- # Load a pretrained YOLOv8n model
- model = YOLO("yolov8n.pt")
+ # Load a pretrained YOLO11n model
+ model = YOLO("yolo11n.pt")
# Run inference on an image
results = model("bus.jpg") # results list
@@ -540,7 +540,7 @@ For more details see the [`Boxes` class documentation](../reference/engine/resul
from ultralytics import YOLO
- # Load a pretrained YOLOv8n-seg Segment model
- model = YOLO("yolov8n-seg.pt")
+ # Load a pretrained YOLO11n-seg Segment model
+ model = YOLO("yolo11n-seg.pt")
# Run inference on an image
results = model("bus.jpg") # results list
@@ -573,7 +573,7 @@ For more details see the [`Masks` class documentation](../reference/engine/resul
from ultralytics import YOLO
- # Load a pretrained YOLOv8n-pose Pose model
- model = YOLO("yolov8n-pose.pt")
+ # Load a pretrained YOLO11n-pose Pose model
+ model = YOLO("yolo11n-pose.pt")
# Run inference on an image
results = model("bus.jpg") # results list
@@ -607,7 +607,7 @@ For more details see the [`Keypoints` class documentation](../reference/engine/r
from ultralytics import YOLO
- # Load a pretrained YOLOv8n-cls Classify model
- model = YOLO("yolov8n-cls.pt")
+ # Load a pretrained YOLO11n-cls Classify model
+ model = YOLO("yolo11n-cls.pt")
# Run inference on an image
results = model("bus.jpg") # results list
@@ -642,7 +642,7 @@ For more details see the [`Probs` class documentation](../reference/engine/resul
from ultralytics import YOLO
- # Load a pretrained YOLOv8n model
- model = YOLO("yolov8n-obb.pt")
+ # Load a pretrained YOLO11n-obb OBB model
+ model = YOLO("yolo11n-obb.pt")
# Run inference on an image
results = model("bus.jpg") # results list
@@ -682,7 +682,7 @@ The `plot()` method in `Results` objects facilitates visualization of prediction
from ultralytics import YOLO
- # Load a pretrained YOLOv8n model
- model = YOLO("yolov8n.pt")
+ # Load a pretrained YOLO11n model
+ model = YOLO("yolo11n.pt")
# Run inference on 'bus.jpg'
results = model(["bus.jpg", "zidane.jpg"]) # results list
@@ -747,8 +747,8 @@ When using YOLO models in a multi-threaded application, it's important to instan
# Starting threads that each have their own model instance
- Thread(target=thread_safe_predict, args=("yolov8n.pt", "image1.jpg")).start()
- Thread(target=thread_safe_predict, args=("yolov8n.pt", "image2.jpg")).start()
+ Thread(target=thread_safe_predict, args=("yolo11n.pt", "image1.jpg")).start()
+ Thread(target=thread_safe_predict, args=("yolo11n.pt", "image2.jpg")).start()
```
For an in-depth look at thread-safe inference with YOLO models and step-by-step instructions, please refer to our [YOLO Thread-Safe Inference Guide](../guides/yolo-thread-safe-inference.md). This guide will provide you with all the necessary information to avoid common pitfalls and ensure that your multi-threaded inference runs smoothly.
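The `thread_safe_predict` helper started above is defined in that guide; a minimal sketch consistent with its pattern:

```python
from ultralytics import YOLO


def thread_safe_predict(model_name, image_path):
    """Instantiate a fresh model inside the thread so no state is shared between threads."""
    model = YOLO(model_name)
    results = model.predict(image_path)
    return results
```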
@@ -765,7 +765,7 @@ Here's a Python script using OpenCV (`cv2`) and YOLOv8 to run inference on video
from ultralytics import YOLO
- # Load the YOLOv8 model
- model = YOLO("yolov8n.pt")
+ # Load the YOLO11 model
+ model = YOLO("yolo11n.pt")
# Open the video file
video_path = "path/to/your/video/file.mp4"
diff --git a/docs/en/modes/track.md b/docs/en/modes/track.md
index 46c43b0b1a..90a856a049 100644
--- a/docs/en/modes/track.md
+++ b/docs/en/modes/track.md
@@ -60,7 +60,7 @@ The default tracker is BoT-SORT.
If the object confidence score is low, i.e. lower than [`track_high_thresh`](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/trackers/bytetrack.yaml#L5), then no tracks will be successfully returned and updated.
-To run the tracker on video streams, use a trained Detect, Segment or Pose model such as YOLOv8n, YOLOv8n-seg and YOLOv8n-pose.
+To run the tracker on video streams, use a trained Detect, Segment or Pose model such as YOLO11n, YOLO11n-seg and YOLO11n-pose.
!!! example
@@ -70,9 +70,9 @@ To run the tracker on video streams, use a trained Detect, Segment or Pose model
from ultralytics import YOLO
# Load an official or custom model
- model = YOLO("yolov8n.pt") # Load an official Detect model
- model = YOLO("yolov8n-seg.pt") # Load an official Segment model
- model = YOLO("yolov8n-pose.pt") # Load an official Pose model
+ model = YOLO("yolo11n.pt") # Load an official Detect model
+ model = YOLO("yolo11n-seg.pt") # Load an official Segment model
+ model = YOLO("yolo11n-pose.pt") # Load an official Pose model
model = YOLO("path/to/best.pt") # Load a custom trained model
# Perform tracking with the model
@@ -84,9 +84,9 @@ To run the tracker on video streams, use a trained Detect, Segment or Pose model
```bash
# Perform tracking with various models using the command line interface
- yolo track model=yolov8n.pt source="https://youtu.be/LNwODJXcvt4" # Official Detect model
- yolo track model=yolov8n-seg.pt source="https://youtu.be/LNwODJXcvt4" # Official Segment model
- yolo track model=yolov8n-pose.pt source="https://youtu.be/LNwODJXcvt4" # Official Pose model
+ yolo track model=yolo11n.pt source="https://youtu.be/LNwODJXcvt4" # Official Detect model
+ yolo track model=yolo11n-seg.pt source="https://youtu.be/LNwODJXcvt4" # Official Segment model
+ yolo track model=yolo11n-pose.pt source="https://youtu.be/LNwODJXcvt4" # Official Pose model
yolo track model=path/to/best.pt source="https://youtu.be/LNwODJXcvt4" # Custom trained model
# Track using ByteTrack tracker
@@ -113,7 +113,7 @@ Tracking configuration shares properties with Predict mode, such as `conf`, `iou
from ultralytics import YOLO
# Configure the tracking parameters and run the tracker
- model = YOLO("yolov8n.pt")
+ model = YOLO("yolo11n.pt")
results = model.track(source="https://youtu.be/LNwODJXcvt4", conf=0.3, iou=0.5, show=True)
```
@@ -121,7 +121,7 @@ Tracking configuration shares properties with Predict mode, such as `conf`, `iou
```bash
# Configure tracking parameters and run the tracker using the command line interface
- yolo track model=yolov8n.pt source="https://youtu.be/LNwODJXcvt4" conf=0.3, iou=0.5 show
+ yolo track model=yolo11n.pt source="https://youtu.be/LNwODJXcvt4" conf=0.3 iou=0.5 show
```
### Tracker Selection
@@ -136,7 +136,7 @@ Ultralytics also allows you to use a modified tracker configuration file. To do
from ultralytics import YOLO
# Load the model and run the tracker with a custom configuration file
- model = YOLO("yolov8n.pt")
+ model = YOLO("yolo11n.pt")
results = model.track(source="https://youtu.be/LNwODJXcvt4", tracker="custom_tracker.yaml")
```
@@ -144,7 +144,7 @@ Ultralytics also allows you to use a modified tracker configuration file. To do
```bash
# Load the model and run the tracker with a custom configuration file using the command line interface
- yolo track model=yolov8n.pt source="https://youtu.be/LNwODJXcvt4" tracker='custom_tracker.yaml'
+ yolo track model=yolo11n.pt source="https://youtu.be/LNwODJXcvt4" tracker='custom_tracker.yaml'
```
For a comprehensive list of tracking arguments, refer to the [ultralytics/cfg/trackers](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/trackers) page.
@@ -153,7 +153,7 @@ For a comprehensive list of tracking arguments, refer to the [ultralytics/cfg/tr
### Persisting Tracks Loop
-Here is a Python script using [OpenCV](https://www.ultralytics.com/glossary/opencv) (`cv2`) and YOLOv8 to run object tracking on video frames. This script still assumes you have already installed the necessary packages (`opencv-python` and `ultralytics`). The `persist=True` argument tells the tracker that the current image or frame is the next in a sequence and to expect tracks from the previous image in the current image.
+Here is a Python script using [OpenCV](https://www.ultralytics.com/glossary/opencv) (`cv2`) and YOLO11 to run object tracking on video frames. This script still assumes you have already installed the necessary packages (`opencv-python` and `ultralytics`). The `persist=True` argument tells the tracker that the current image or frame is the next in a sequence and to expect tracks from the previous image in the current image.
!!! example "Streaming for-loop with tracking"
@@ -162,8 +162,8 @@ Here is a Python script using [OpenCV](https://www.ultralytics.com/glossary/open
from ultralytics import YOLO
- # Load the YOLOv8 model
- model = YOLO("yolov8n.pt")
+ # Load the YOLO11 model
+ model = YOLO("yolo11n.pt")
# Open the video file
video_path = "path/to/video.mp4"
@@ -175,14 +175,14 @@ Here is a Python script using [OpenCV](https://www.ultralytics.com/glossary/open
success, frame = cap.read()
if success:
- # Run YOLOv8 tracking on the frame, persisting tracks between frames
+ # Run YOLO11 tracking on the frame, persisting tracks between frames
results = model.track(frame, persist=True)
# Visualize the results on the frame
annotated_frame = results[0].plot()
# Display the annotated frame
- cv2.imshow("YOLOv8 Tracking", annotated_frame)
+ cv2.imshow("YOLO11 Tracking", annotated_frame)
# Break the loop if 'q' is pressed
if cv2.waitKey(1) & 0xFF == ord("q"):
@@ -200,9 +200,9 @@ Please note the change from `model(frame)` to `model.track(frame)`, which enable
### Plotting Tracks Over Time
-Visualizing object tracks over consecutive frames can provide valuable insights into the movement patterns and behavior of detected objects within a video. With Ultralytics YOLOv8, plotting these tracks is a seamless and efficient process.
+Visualizing object tracks over consecutive frames can provide valuable insights into the movement patterns and behavior of detected objects within a video. With Ultralytics YOLO11, plotting these tracks is a seamless and efficient process.
-In the following example, we demonstrate how to utilize YOLOv8's tracking capabilities to plot the movement of detected objects across multiple video frames. This script involves opening a video file, reading it frame by frame, and utilizing the YOLO model to identify and track various objects. By retaining the center points of the detected bounding boxes and connecting them, we can draw lines that represent the paths followed by the tracked objects.
+In the following example, we demonstrate how to utilize YOLO11's tracking capabilities to plot the movement of detected objects across multiple video frames. This script involves opening a video file, reading it frame by frame, and utilizing the YOLO model to identify and track various objects. By retaining the center points of the detected bounding boxes and connecting them, we can draw lines that represent the paths followed by the tracked objects.
!!! example "Plotting tracks over multiple video frames"
@@ -214,8 +214,8 @@ In the following example, we demonstrate how to utilize YOLOv8's tracking capabi
from ultralytics import YOLO
- # Load the YOLOv8 model
- model = YOLO("yolov8n.pt")
+ # Load the YOLO11 model
+ model = YOLO("yolo11n.pt")
# Open the video file
video_path = "path/to/video.mp4"
@@ -230,7 +230,7 @@ In the following example, we demonstrate how to utilize YOLOv8's tracking capabi
success, frame = cap.read()
if success:
- # Run YOLOv8 tracking on the frame, persisting tracks between frames
+ # Run YOLO11 tracking on the frame, persisting tracks between frames
results = model.track(frame, persist=True)
# Get the boxes and track IDs
@@ -253,7 +253,7 @@ In the following example, we demonstrate how to utilize YOLOv8's tracking capabi
cv2.polylines(annotated_frame, [points], isClosed=False, color=(230, 230, 230), thickness=10)
# Display the annotated frame
- cv2.imshow("YOLOv8 Tracking", annotated_frame)
+ cv2.imshow("YOLO11 Tracking", annotated_frame)
# Break the loop if 'q' is pressed
if cv2.waitKey(1) & 0xFF == ord("q"):
@@ -275,7 +275,7 @@ In the provided Python script, we make use of Python's `threading` module to run
To ensure that each thread receives the correct parameters (the model to use and the path to its video file), we define a function `run_tracker_in_thread` that accepts these parameters and contains the main tracking loop. This function reads the video frame by frame, runs the tracker, and displays the results.
-Two different models are used in this example: `yolov8n.pt` and `yolov8n-seg.pt`, each tracking objects in a different video file. The video files are specified in `video_file1` and `video_file2`.
+Two different models are used in this example: `yolo11n.pt` and `yolo11n-seg.pt`, each tracking objects in a different video file. The video files are specified in `video_file1` and `video_file2`.
The `daemon=True` parameter in `threading.Thread` means that these threads will be closed as soon as the main program finishes. We then start the threads with `start()` and use `join()` to make the main thread wait until both tracker threads have finished.
@@ -291,7 +291,7 @@ Finally, after all threads have completed their task, the windows displaying the
from ultralytics import YOLO
# Define model names and video sources
- MODEL_NAMES = ["yolov8n.pt", "yolov8n-seg.pt"]
+ MODEL_NAMES = ["yolo11n.pt", "yolo11n-seg.pt"]
SOURCES = ["path/to/video.mp4", "0"] # local video, 0 for webcam
@@ -300,7 +300,7 @@ Finally, after all threads have completed their task, the windows displaying the
Run YOLO tracker in its own thread for concurrent processing.
Args:
- model_name (str): The YOLOv8 model object.
+ model_name (str): The name or path of the YOLO11 model file, e.g. 'yolo11n.pt'.
filename (str): The path to the video file or the identifier for the webcam/external camera source.
"""
model = YOLO(model_name)
@@ -357,14 +357,14 @@ You can configure a custom tracker by copying an existing tracker configuration
```python
from ultralytics import YOLO
- model = YOLO("yolov8n.pt")
+ model = YOLO("yolo11n.pt")
results = model.track(source="https://youtu.be/LNwODJXcvt4", tracker="custom_tracker.yaml")
```
=== "CLI"
```bash
- yolo track model=yolov8n.pt source="https://youtu.be/LNwODJXcvt4" tracker='custom_tracker.yaml'
+ yolo track model=yolo11n.pt source="https://youtu.be/LNwODJXcvt4" tracker='custom_tracker.yaml'
```
### How can I run object tracking on multiple video streams simultaneously?
@@ -381,7 +381,7 @@ To run object tracking on multiple video streams simultaneously, you can use Pyt
from ultralytics import YOLO
# Define model names and video sources
- MODEL_NAMES = ["yolov8n.pt", "yolov8n-seg.pt"]
+ MODEL_NAMES = ["yolo11n.pt", "yolo11n-seg.pt"]
SOURCES = ["path/to/video.mp4", "0"] # local video, 0 for webcam
@@ -390,7 +390,7 @@ To run object tracking on multiple video streams simultaneously, you can use Pyt
Run YOLO tracker in its own thread for concurrent processing.
Args:
- model_name (str): The YOLOv8 model object.
+ model_name (str): The name or path of the YOLO11 model file, e.g. 'yolo11n.pt'.
filename (str): The path to the video file or the identifier for the webcam/external camera source.
"""
model = YOLO(model_name)
@@ -438,7 +438,7 @@ To visualize object tracks over multiple video frames, you can use the YOLO mode
from ultralytics import YOLO
- model = YOLO("yolov8n.pt")
+ model = YOLO("yolo11n.pt")
video_path = "path/to/video.mp4"
cap = cv2.VideoCapture(video_path)
track_history = defaultdict(lambda: [])
@@ -458,7 +458,7 @@ To visualize object tracks over multiple video frames, you can use the YOLO mode
track.pop(0)
points = np.hstack(track).astype(np.int32).reshape((-1, 1, 2))
cv2.polylines(annotated_frame, [points], isClosed=False, color=(230, 230, 230), thickness=10)
- cv2.imshow("YOLOv8 Tracking", annotated_frame)
+ cv2.imshow("YOLO11 Tracking", annotated_frame)
if cv2.waitKey(1) & 0xFF == ord("q"):
break
else:
diff --git a/docs/en/modes/train.md b/docs/en/modes/train.md
index f5722b7280..9cbe791991 100644
--- a/docs/en/modes/train.md
+++ b/docs/en/modes/train.md
@@ -1,7 +1,7 @@
---
comments: true
-description: Learn how to efficiently train object detection models using YOLOv8 with comprehensive instructions on settings, augmentation, and hardware utilization.
-keywords: Ultralytics, YOLOv8, model training, deep learning, object detection, GPU training, dataset augmentation, hyperparameter tuning, model performance, M1 M2 training
+description: Learn how to efficiently train object detection models using YOLO11 with comprehensive instructions on settings, augmentation, and hardware utilization.
+keywords: Ultralytics, YOLO11, model training, deep learning, object detection, GPU training, dataset augmentation, hyperparameter tuning, model performance, M1 M2 training
---
# Model Training with Ultralytics YOLO
@@ -10,7 +10,7 @@ keywords: Ultralytics, YOLOv8, model training, deep learning, object detection,
## Introduction
-Training a [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) model involves feeding it data and adjusting its parameters so that it can make accurate predictions. Train mode in Ultralytics YOLOv8 is engineered for effective and efficient training of object detection models, fully utilizing modern hardware capabilities. This guide aims to cover all the details you need to get started with training your own models using YOLOv8's robust set of features.
+Training a [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) model involves feeding it data and adjusting its parameters so that it can make accurate predictions. Train mode in Ultralytics YOLO11 is engineered for effective and efficient training of object detection models, fully utilizing modern hardware capabilities. This guide aims to cover all the details you need to get started with training your own models using YOLO11's robust set of features.
@@ -20,12 +20,12 @@ Training a [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl
allowfullscreen>
- Watch: How to Train a YOLOv8 model on Your Custom Dataset in Google Colab.
+ Watch: How to Train a YOLO model on Your Custom Dataset in Google Colab.
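diff --git a/docs/en/modes/val.md b/docs/en/modes/val.md
--- a/docs/en/modes/val.md
+++ b/docs/en/modes/val.md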
@@ -25,7 +25,7 @@ Validation is a critical step in the [machine learning](https://www.ultralytics.
## Why Validate with Ultralytics YOLO?
-Here's why using YOLOv8's Val mode is advantageous:
+Here's why using YOLO11's Val mode is advantageous:
- **Precision:** Get accurate metrics like mAP50, mAP75, and mAP50-95 to comprehensively evaluate your model.
- **Convenience:** Utilize built-in features that remember training settings, simplifying the validation process.
@@ -34,7 +34,7 @@ Here's why using YOLOv8's Val mode is advantageous:
### Key Features of Val Mode
-These are the notable functionalities offered by YOLOv8's Val mode:
+These are the notable functionalities offered by YOLO11's Val mode:
- **Automated Settings:** Models remember their training configurations for straightforward validation.
- **Multi-Metric Support:** Evaluate your model based on a range of accuracy metrics.
@@ -43,11 +43,11 @@ These are the notable functionalities offered by YOLOv8's Val mode:
!!! tip
- * YOLOv8 models automatically remember their training settings, so you can validate a model at the same image size and on the original dataset easily with just `yolo val model=yolov8n.pt` or `model('yolov8n.pt').val()`
+ * YOLO11 models automatically remember their training settings, so you can validate a model at the same image size and on the original dataset easily with just `yolo val model=yolo11n.pt` or `model('yolo11n.pt').val()`
## Usage Examples
-Validate trained YOLOv8n model [accuracy](https://www.ultralytics.com/glossary/accuracy) on the COCO8 dataset. No arguments are needed as the `model` retains its training `data` and arguments as model attributes. See Arguments section below for a full list of export arguments.
+Validate trained YOLO11n model [accuracy](https://www.ultralytics.com/glossary/accuracy) on the COCO8 dataset. No arguments are needed as the `model` retains its training `data` and arguments as model attributes. See the Arguments section below for a full list of validation arguments.
!!! example
@@ -57,7 +57,7 @@ Validate trained YOLOv8n model [accuracy](https://www.ultralytics.com/glossary/a
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n.pt") # load an official model
+ model = YOLO("yolo11n.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Validate the model
@@ -71,7 +71,7 @@ Validate trained YOLOv8n model [accuracy](https://www.ultralytics.com/glossary/a
=== "CLI"
```bash
- yolo detect val model=yolov8n.pt # val official model
+ yolo detect val model=yolo11n.pt # val official model
yolo detect val model=path/to/best.pt # val custom model
```
@@ -95,7 +95,7 @@ The below examples showcase YOLO model validation with custom arguments in Pytho
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n.pt")
+ model = YOLO("yolo11n.pt")
# Customize validation settings
validation_results = model.val(data="coco8.yaml", imgsz=640, batch=16, conf=0.25, iou=0.6, device="0")
@@ -104,20 +104,20 @@ The below examples showcase YOLO model validation with custom arguments in Pytho
=== "CLI"
```bash
- yolo val model=yolov8n.pt data=coco8.yaml imgsz=640 batch=16 conf=0.25 iou=0.6 device=0
+ yolo val model=yolo11n.pt data=coco8.yaml imgsz=640 batch=16 conf=0.25 iou=0.6 device=0
```
## FAQ
-### How do I validate my YOLOv8 model with Ultralytics?
+### How do I validate my YOLO11 model with Ultralytics?
-To validate your YOLOv8 model, you can use the Val mode provided by Ultralytics. For example, using the Python API, you can load a model and run validation with:
+To validate your YOLO11 model, you can use the Val mode provided by Ultralytics. For example, using the Python API, you can load a model and run validation with:
```python
from ultralytics import YOLO
# Load a model
-model = YOLO("yolov8n.pt")
+model = YOLO("yolo11n.pt")
# Validate the model
metrics = model.val()
@@ -127,14 +127,14 @@ print(metrics.box.map) # map50-95
Alternatively, you can use the command-line interface (CLI):
```bash
-yolo val model=yolov8n.pt
+yolo val model=yolo11n.pt
```
For further customization, you can adjust various arguments like `imgsz`, `batch`, and `conf` in both Python and CLI modes. Check the [Arguments for YOLO Model Validation](#arguments-for-yolo-model-validation) section for the full list of parameters.
-### What metrics can I get from YOLOv8 model validation?
+### What metrics can I get from YOLO11 model validation?
-YOLOv8 model validation provides several key metrics to assess model performance. These include:
+YOLO11 model validation provides several key metrics to assess model performance (see the sketch after this list). These include:
- mAP50 (mean Average Precision at IoU threshold 0.5)
- mAP75 (mean Average Precision at IoU threshold 0.75)
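A minimal sketch reading these metrics from the object returned by `val()` (attribute names as used in the FAQ code below):

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
metrics = model.val()  # dataset and settings are remembered from training

print(metrics.box.map)  # mAP50-95
print(metrics.box.map50)  # mAP50
print(metrics.box.map75)  # mAP75
```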
@@ -156,16 +156,16 @@ For a complete performance evaluation, it's crucial to review all these metrics.
Using Ultralytics YOLO for validation provides several advantages:
-- **[Precision](https://www.ultralytics.com/glossary/precision):** YOLOv8 offers accurate performance metrics including mAP50, mAP75, and mAP50-95.
+- **[Precision](https://www.ultralytics.com/glossary/precision):** YOLO11 offers accurate performance metrics including mAP50, mAP75, and mAP50-95.
- **Convenience:** The models remember their training settings, making validation straightforward.
- **Flexibility:** You can validate against the same or different datasets and image sizes.
- **Hyperparameter Tuning:** Validation metrics help in fine-tuning models for better performance.
These benefits ensure that your models are evaluated thoroughly and can be optimized for superior results. Learn more about these advantages in the [Why Validate with Ultralytics YOLO](#why-validate-with-ultralytics-yolo) section.
-### Can I validate my YOLOv8 model using a custom dataset?
+### Can I validate my YOLO11 model using a custom dataset?
-Yes, you can validate your YOLOv8 model using a [custom dataset](https://docs.ultralytics.com/datasets/). Specify the `data` argument with the path to your dataset configuration file. This file should include paths to the [validation data](https://www.ultralytics.com/glossary/validation-data), class names, and other relevant details.
+Yes, you can validate your YOLO11 model using a [custom dataset](https://docs.ultralytics.com/datasets/). Specify the `data` argument with the path to your dataset configuration file. This file should include paths to the [validation data](https://www.ultralytics.com/glossary/validation-data), class names, and other relevant details.
Example in Python:
@@ -173,7 +173,7 @@ Example in Python:
from ultralytics import YOLO
# Load a model
-model = YOLO("yolov8n.pt")
+model = YOLO("yolo11n.pt")
# Validate with a custom dataset
metrics = model.val(data="path/to/your/custom_dataset.yaml")
@@ -183,12 +183,12 @@ print(metrics.box.map) # map50-95
Example using CLI:
```bash
-yolo val model=yolov8n.pt data=path/to/your/custom_dataset.yaml
+yolo val model=yolo11n.pt data=path/to/your/custom_dataset.yaml
```
For more customizable options during validation, see the [Example Validation with Arguments](#example-validation-with-arguments) section.
-### How do I save validation results to a JSON file in YOLOv8?
+### How do I save validation results to a JSON file in YOLO11?
To save the validation results to a JSON file, you can set the `save_json` argument to `True` when running validation. This can be done in both the Python API and CLI.
@@ -198,7 +198,7 @@ Example in Python:
from ultralytics import YOLO
# Load a model
-model = YOLO("yolov8n.pt")
+model = YOLO("yolo11n.pt")
# Save validation results to JSON
metrics = model.val(save_json=True)
@@ -207,7 +207,7 @@ metrics = model.val(save_json=True)
Example using CLI:
```bash
-yolo val model=yolov8n.pt save_json=True
+yolo val model=yolo11n.pt save_json=True
```
This functionality is particularly useful for further analysis or integration with other tools. Check the [Arguments for YOLO Model Validation](#arguments-for-yolo-model-validation) for more details.
diff --git a/docs/en/solutions/index.md b/docs/en/solutions/index.md
index 52423c14f5..e5187ed8d4 100644
--- a/docs/en/solutions/index.md
+++ b/docs/en/solutions/index.md
@@ -1,12 +1,12 @@
---
comments: true
-description: Explore Ultralytics Solutions using YOLOv8 for object counting, blurring, security, and more. Enhance efficiency and solve real-world problems with cutting-edge AI.
-keywords: Ultralytics, YOLOv8, object counting, object blurring, security systems, AI solutions, real-time analysis, computer vision applications
+description: Explore Ultralytics Solutions using YOLO11 for object counting, blurring, security, and more. Enhance efficiency and solve real-world problems with cutting-edge AI.
+keywords: Ultralytics, YOLO11, object counting, object blurring, security systems, AI solutions, real-time analysis, computer vision applications
---
-# Ultralytics Solutions: Harness YOLOv8 to Solve Real-World Problems
+# Ultralytics Solutions: Harness YOLO11 to Solve Real-World Problems
-Ultralytics Solutions provide cutting-edge applications of YOLO models, offering real-world solutions like object counting, blurring, and security systems, enhancing efficiency and [accuracy](https://www.ultralytics.com/glossary/accuracy) in diverse industries. Discover the power of YOLOv8 for practical, impactful implementations.
+Ultralytics Solutions provide cutting-edge applications of YOLO models, offering real-world solutions like object counting, blurring, and security systems, enhancing efficiency and [accuracy](https://www.ultralytics.com/glossary/accuracy) in diverse industries. Discover the power of YOLO11 for practical, impactful implementations.
![Ultralytics Solutions Thumbnail](https://github.com/ultralytics/docs/releases/download/0/ultralytics-solutions-thumbnail.avif)
@@ -14,21 +14,21 @@ Ultralytics Solutions provide cutting-edge applications of YOLO models, offering
Here's our curated list of Ultralytics solutions that can be used to create awesome [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) projects.
-- [Object Counting](../guides/object-counting.md) 🚀 NEW: Learn to perform real-time object counting with YOLOv8. Gain the expertise to accurately count objects in live video streams.
-- [Object Cropping](../guides/object-cropping.md) 🚀 NEW: Master object cropping with YOLOv8 for precise extraction of objects from images and videos.
-- [Object Blurring](../guides/object-blurring.md) 🚀 NEW: Apply object blurring using YOLOv8 to protect privacy in image and video processing.
-- [Workouts Monitoring](../guides/workouts-monitoring.md) 🚀 NEW: Discover how to monitor workouts using YOLOv8. Learn to track and analyze various fitness routines in real time.
-- [Objects Counting in Regions](../guides/region-counting.md) 🚀 NEW: Count objects in specific regions using YOLOv8 for accurate detection in varied areas.
-- [Security Alarm System](../guides/security-alarm-system.md) 🚀 NEW: Create a security alarm system with YOLOv8 that triggers alerts upon detecting new objects. Customize the system to fit your specific needs.
+- [Object Counting](../guides/object-counting.md) 🚀 NEW: Learn to perform real-time object counting with YOLO11. Gain the expertise to accurately count objects in live video streams.
+- [Object Cropping](../guides/object-cropping.md) 🚀 NEW: Master object cropping with YOLO11 for precise extraction of objects from images and videos.
+- [Object Blurring](../guides/object-blurring.md) 🚀 NEW: Apply object blurring using YOLO11 to protect privacy in image and video processing.
+- [Workouts Monitoring](../guides/workouts-monitoring.md) 🚀 NEW: Discover how to monitor workouts using YOLO11. Learn to track and analyze various fitness routines in real time.
+- [Objects Counting in Regions](../guides/region-counting.md) 🚀 NEW: Count objects in specific regions using YOLO11 for accurate detection in varied areas.
+- [Security Alarm System](../guides/security-alarm-system.md) 🚀 NEW: Create a security alarm system with YOLO11 that triggers alerts upon detecting new objects. Customize the system to fit your specific needs.
- [Heatmaps](../guides/heatmaps.md) 🚀 NEW: Utilize detection heatmaps to visualize data intensity across a matrix, providing clear insights in computer vision tasks.
-- [Instance Segmentation with Object Tracking](../guides/instance-segmentation-and-tracking.md) 🚀 NEW: Implement [instance segmentation](https://www.ultralytics.com/glossary/instance-segmentation) and object tracking with YOLOv8 to achieve precise object boundaries and continuous monitoring.
+- [Instance Segmentation with Object Tracking](../guides/instance-segmentation-and-tracking.md) 🚀 NEW: Implement [instance segmentation](https://www.ultralytics.com/glossary/instance-segmentation) and object tracking with YOLO11 to achieve precise object boundaries and continuous monitoring.
- [VisionEye View Objects Mapping](../guides/vision-eye.md) 🚀 NEW: Develop systems that mimic human eye focus on specific objects, enhancing the computer's ability to discern and prioritize details.
-- [Speed Estimation](../guides/speed-estimation.md) 🚀 NEW: Estimate object speed using YOLOv8 and object tracking techniques, crucial for applications like autonomous vehicles and traffic monitoring.
-- [Distance Calculation](../guides/distance-calculation.md) 🚀 NEW: Calculate distances between objects using [bounding box](https://www.ultralytics.com/glossary/bounding-box) centroids in YOLOv8, essential for spatial analysis.
-- [Queue Management](../guides/queue-management.md) 🚀 NEW: Implement efficient queue management systems to minimize wait times and improve productivity using YOLOv8.
-- [Parking Management](../guides/parking-management.md) 🚀 NEW: Organize and direct vehicle flow in parking areas with YOLOv8, optimizing space utilization and user experience.
-- [Analytics](../guides/analytics.md) 📊 NEW: Conduct comprehensive data analysis to discover patterns and make informed decisions, leveraging YOLOv8 for descriptive, predictive, and prescriptive analytics.
-- [Live Inference with Streamlit](../guides/streamlit-live-inference.md) 🚀 NEW: Leverage the power of YOLOv8 for real-time [object detection](https://www.ultralytics.com/glossary/object-detection) directly through your web browser with a user-friendly Streamlit interface.
+- [Speed Estimation](../guides/speed-estimation.md) 🚀 NEW: Estimate object speed using YOLO11 and object tracking techniques, crucial for applications like autonomous vehicles and traffic monitoring.
+- [Distance Calculation](../guides/distance-calculation.md) 🚀 NEW: Calculate distances between objects using [bounding box](https://www.ultralytics.com/glossary/bounding-box) centroids in YOLO11, essential for spatial analysis.
+- [Queue Management](../guides/queue-management.md) 🚀 NEW: Implement efficient queue management systems to minimize wait times and improve productivity using YOLO11.
+- [Parking Management](../guides/parking-management.md) 🚀 NEW: Organize and direct vehicle flow in parking areas with YOLO11, optimizing space utilization and user experience.
+- [Analytics](../guides/analytics.md) 📊 NEW: Conduct comprehensive data analysis to discover patterns and make informed decisions, leveraging YOLO11 for descriptive, predictive, and prescriptive analytics.
+- [Live Inference with Streamlit](../guides/streamlit-live-inference.md) 🚀 NEW: Leverage the power of YOLO11 for real-time [object detection](https://www.ultralytics.com/glossary/object-detection) directly through your web browser with a user-friendly Streamlit interface.
## Contribute to Our Solutions
@@ -42,20 +42,20 @@ Let's work together to make the Ultralytics YOLO ecosystem more robust and versa
### How can I use Ultralytics YOLO for real-time object counting?
-Ultralytics YOLOv8 can be used for real-time object counting by leveraging its advanced object detection capabilities. You can follow our detailed guide on [Object Counting](../guides/object-counting.md) to set up YOLOv8 for live video stream analysis. Simply install YOLOv8, load your model, and process video frames to count objects dynamically.
+Ultralytics YOLO11 can be used for real-time object counting by leveraging its advanced object detection capabilities. You can follow our detailed guide on [Object Counting](../guides/object-counting.md) to set up YOLO11 for live video stream analysis. Simply install the `ultralytics` package, load your model, and process video frames to count objects dynamically.
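As a rough sketch (using plain tracking rather than the full Solutions API; the video path is a placeholder), unique objects can be counted from their tracking IDs:

```python
import cv2

from ultralytics import YOLO

model = YOLO("yolo11n.pt")
seen_ids = set()

cap = cv2.VideoCapture("path/to/video.mp4")
while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    # Track objects frame to frame and record every ID seen
    results = model.track(frame, persist=True)
    if results[0].boxes.id is not None:
        seen_ids.update(results[0].boxes.id.int().cpu().tolist())
cap.release()

print(f"Unique objects counted: {len(seen_ids)}")
```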
### What are the benefits of using Ultralytics YOLO for security systems?
-Ultralytics YOLOv8 enhances security systems by offering real-time object detection and alert mechanisms. By employing YOLOv8, you can create a security alarm system that triggers alerts when new objects are detected in the surveillance area. Learn how to set up a [Security Alarm System](../guides/security-alarm-system.md) with YOLOv8 for robust security monitoring.
+Ultralytics YOLO11 enhances security systems by offering real-time object detection and alert mechanisms. By employing YOLO11, you can create a security alarm system that triggers alerts when new objects are detected in the surveillance area. Learn how to set up a [Security Alarm System](../guides/security-alarm-system.md) with YOLO11 for robust security monitoring.
### How can Ultralytics YOLO improve queue management systems?
-Ultralytics YOLOv8 can significantly improve queue management systems by accurately counting and tracking people in queues, thus helping to reduce wait times and optimize service efficiency. Follow our detailed guide on [Queue Management](../guides/queue-management.md) to learn how to implement YOLOv8 for effective queue monitoring and analysis.
+Ultralytics YOLO11 can significantly improve queue management systems by accurately counting and tracking people in queues, thus helping to reduce wait times and optimize service efficiency. Follow our detailed guide on [Queue Management](../guides/queue-management.md) to learn how to implement YOLO11 for effective queue monitoring and analysis.
### Can Ultralytics YOLO be used for workout monitoring?
-Yes, Ultralytics YOLOv8 can be effectively used for monitoring workouts by tracking and analyzing fitness routines in real-time. This allows for precise evaluation of exercise form and performance. Explore our guide on [Workouts Monitoring](../guides/workouts-monitoring.md) to learn how to set up an AI-powered workout monitoring system using YOLOv8.
+Yes, Ultralytics YOLO11 can be effectively used for monitoring workouts by tracking and analyzing fitness routines in real-time. This allows for precise evaluation of exercise form and performance. Explore our guide on [Workouts Monitoring](../guides/workouts-monitoring.md) to learn how to set up an AI-powered workout monitoring system using YOLO11.
### How does Ultralytics YOLO help in creating heatmaps for [data visualization](https://www.ultralytics.com/glossary/data-visualization)?
-Ultralytics YOLOv8 can generate heatmaps to visualize data intensity across a given area, highlighting regions of high activity or interest. This feature is particularly useful in understanding patterns and trends in various computer vision tasks. Learn more about creating and using [Heatmaps](../guides/heatmaps.md) with YOLOv8 for comprehensive data analysis and visualization.
+Ultralytics YOLO11 can generate heatmaps to visualize data intensity across a given area, highlighting regions of high activity or interest. This feature is particularly useful in understanding patterns and trends in various computer vision tasks. Learn more about creating and using [Heatmaps](../guides/heatmaps.md) with YOLO11 for comprehensive data analysis and visualization.
diff --git a/docs/en/tasks/classify.md b/docs/en/tasks/classify.md
index 7674c825ff..1b1af70d90 100644
--- a/docs/en/tasks/classify.md
+++ b/docs/en/tasks/classify.md
@@ -26,16 +26,16 @@ The output of an image classifier is a single class label and a confidence score
!!! tip
- YOLOv8 Classify models use the `-cls` suffix, i.e. `yolov8n-cls.pt` and are pretrained on [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/ImageNet.yaml).
+ YOLO11 Classify models use the `-cls` suffix, i.e. `yolo11n-cls.pt`, and are pretrained on [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/ImageNet.yaml).
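A minimal sketch of loading one of these checkpoints and reading the top-1 prediction (the bus image URL is the one used elsewhere in these docs):

```python
from ultralytics import YOLO

model = YOLO("yolo11n-cls.pt")
results = model("https://ultralytics.com/images/bus.jpg")

top1 = results[0].probs.top1  # index of the highest-confidence class
print(results[0].names[top1])  # its human-readable name
```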
-## [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models/v8)
+## [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models/11)
-YOLOv8 pretrained Classify models are shown here. Detect, Segment and Pose models are pretrained on the [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml) dataset, while Classify models are pretrained on the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/ImageNet.yaml) dataset.
+YOLO11 pretrained Classify models are shown here. Detect, Segment and Pose models are pretrained on the [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml) dataset, while Classify models are pretrained on the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/ImageNet.yaml) dataset.
[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models) download automatically from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.
| Model | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) at 640 |
-|----------------------------------------------------------------------------------------------|-----------------------|------------------|------------------|--------------------------------|-------------------------------------|--------------------|--------------------------|
+| -------------------------------------------------------------------------------------------- | --------------------- | ---------------- | ---------------- | ------------------------------ | ----------------------------------- | ------------------ | ------------------------ |
| [YOLO11n-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-cls.pt) | 224 | 70.0 | 89.4 | 5.0 ± 0.3 | 1.1 ± 0.0 | 1.6 | 3.3 |
| [YOLO11s-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-cls.pt) | 224 | 75.4 | 92.7 | 7.9 ± 0.2 | 1.3 ± 0.0 | 5.5 | 12.1 |
| [YOLO11m-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m-cls.pt) | 224 | 77.3 | 93.9 | 17.2 ± 0.4 | 2.0 ± 0.0 | 10.4 | 39.3 |
@@ -47,7 +47,7 @@ YOLOv8 pretrained Classify models are shown here. Detect, Segment and Pose model
## Train
-Train YOLOv8n-cls on the MNIST160 dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) at image size 64. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
+Train YOLO11n-cls on the MNIST160 dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) at image size 64. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
!!! example
@@ -57,9 +57,9 @@ Train YOLOv8n-cls on the MNIST160 dataset for 100 [epochs](https://www.ultralyti
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n-cls.yaml") # build a new model from YAML
- model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
- model = YOLO("yolov8n-cls.yaml").load("yolov8n-cls.pt") # build from YAML and transfer weights
+ model = YOLO("yolo11n-cls.yaml") # build a new model from YAML
+ model = YOLO("yolo11n-cls.pt") # load a pretrained model (recommended for training)
+ model = YOLO("yolo11n-cls.yaml").load("yolo11n-cls.pt") # build from YAML and transfer weights
# Train the model
results = model.train(data="mnist160", epochs=100, imgsz=64)
@@ -69,13 +69,13 @@ Train YOLOv8n-cls on the MNIST160 dataset for 100 [epochs](https://www.ultralyti
```bash
# Build a new model from YAML and start training from scratch
- yolo classify train data=mnist160 model=yolov8n-cls.yaml epochs=100 imgsz=64
+ yolo classify train data=mnist160 model=yolo11n-cls.yaml epochs=100 imgsz=64
# Start training from a pretrained *.pt model
- yolo classify train data=mnist160 model=yolov8n-cls.pt epochs=100 imgsz=64
+ yolo classify train data=mnist160 model=yolo11n-cls.pt epochs=100 imgsz=64
# Build a new model from YAML, transfer pretrained weights to it and start training
- yolo classify train data=mnist160 model=yolov8n-cls.yaml pretrained=yolov8n-cls.pt epochs=100 imgsz=64
+ yolo classify train data=mnist160 model=yolo11n-cls.yaml pretrained=yolo11n-cls.pt epochs=100 imgsz=64
```
### Dataset format
@@ -84,7 +84,7 @@ YOLO classification dataset format can be found in detail in the [Dataset Guide]
## Val
-Validate trained YOLOv8n-cls model [accuracy](https://www.ultralytics.com/glossary/accuracy) on the MNIST160 dataset. No arguments are needed as the `model` retains its training `data` and arguments as model attributes.
+Validate trained YOLO11n-cls model [accuracy](https://www.ultralytics.com/glossary/accuracy) on the MNIST160 dataset. No arguments are needed as the `model` retains its training `data` and arguments as model attributes.
!!! example
@@ -94,7 +94,7 @@ Validate trained YOLOv8n-cls model [accuracy](https://www.ultralytics.com/glossa
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n-cls.pt") # load an official model
+ model = YOLO("yolo11n-cls.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Validate the model
@@ -106,13 +106,13 @@ Validate trained YOLOv8n-cls model [accuracy](https://www.ultralytics.com/glossa
=== "CLI"
```bash
- yolo classify val model=yolov8n-cls.pt # val official model
+ yolo classify val model=yolo11n-cls.pt # val official model
yolo classify val model=path/to/best.pt # val custom model
```
## Predict
-Use a trained YOLOv8n-cls model to run predictions on images.
+Use a trained YOLO11n-cls model to run predictions on images.
!!! example
@@ -122,7 +122,7 @@ Use a trained YOLOv8n-cls model to run predictions on images.
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n-cls.pt") # load an official model
+ model = YOLO("yolo11n-cls.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Predict with the model
@@ -132,7 +132,7 @@ Use a trained YOLOv8n-cls model to run predictions on images.
=== "CLI"
```bash
- yolo classify predict model=yolov8n-cls.pt source='https://ultralytics.com/images/bus.jpg' # predict with official model
+ yolo classify predict model=yolo11n-cls.pt source='https://ultralytics.com/images/bus.jpg' # predict with official model
yolo classify predict model=path/to/best.pt source='https://ultralytics.com/images/bus.jpg' # predict with custom model
```
@@ -140,7 +140,7 @@ See full `predict` mode details in the [Predict](../modes/predict.md) page.
## Export
-Export a YOLOv8n-cls model to a different format like ONNX, CoreML, etc.
+Export a YOLO11n-cls model to a different format like ONNX, CoreML, etc.
!!! example
@@ -150,7 +150,7 @@ Export a YOLOv8n-cls model to a different format like ONNX, CoreML, etc.
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n-cls.pt") # load an official model
+ model = YOLO("yolo11n-cls.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom trained model
# Export the model
@@ -160,11 +160,11 @@ Export a YOLOv8n-cls model to a different format like ONNX, CoreML, etc.
=== "CLI"
```bash
- yolo export model=yolov8n-cls.pt format=onnx # export official model
+ yolo export model=yolo11n-cls.pt format=onnx # export official model
yolo export model=path/to/best.pt format=onnx # export custom trained model
```
-Available YOLOv8-cls export formats are in the table below. You can export to any format using the `format` argument, i.e. `format='onnx'` or `format='engine'`. You can predict or validate directly on exported models, i.e. `yolo predict model=yolov8n-cls.onnx`. Usage examples are shown for your model after export completes.
+Available YOLO11-cls export formats are in the table below. You can export to any format using the `format` argument, i.e. `format='onnx'` or `format='engine'`. You can predict or validate directly on exported models, i.e. `yolo predict model=yolo11n-cls.onnx`. Usage examples are shown for your model after export completes.
{% include "macros/export-table.md" %}
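As a quick sketch of the "predict directly on exported models" point above, an exported classification model loads with the same `YOLO` class as a `.pt` checkpoint; this assumes `yolo11n-cls.onnx` was produced by the export example:

```python
from ultralytics import YOLO

# Exported weights load exactly like a .pt checkpoint
onnx_model = YOLO("yolo11n-cls.onnx")  # assumes the ONNX export above was run

results = onnx_model("https://ultralytics.com/images/bus.jpg")
print(results[0].probs.top1)  # index of the most likely class
```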
@@ -172,13 +172,13 @@ See full `export` details in the [Export](../modes/export.md) page.
## FAQ
-### What is the purpose of YOLOv8 in image classification?
+### What is the purpose of YOLO11 in image classification?
-YOLOv8 models, such as `yolov8n-cls.pt`, are designed for efficient image classification. They assign a single class label to an entire image along with a confidence score. This is particularly useful for applications where knowing the specific class of an image is sufficient, rather than identifying the location or shape of objects within the image.
+YOLO11 models, such as `yolo11n-cls.pt`, are designed for efficient image classification. They assign a single class label to an entire image along with a confidence score. This is particularly useful for applications where knowing the specific class of an image is sufficient, rather than identifying the location or shape of objects within the image.
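For instance, here is a minimal sketch of reading that label and confidence from a prediction via the standard `Results.probs` accessors; the sample image URL is the one used throughout these docs:

```python
from ultralytics import YOLO

model = YOLO("yolo11n-cls.pt")
results = model("https://ultralytics.com/images/bus.jpg")

probs = results[0].probs  # classification probabilities for the image
print(probs.top1, float(probs.top1conf))  # top class index and its confidence
print([results[0].names[i] for i in probs.top5])  # readable top-5 class names
```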
-### How do I train a YOLOv8 model for image classification?
+### How do I train a YOLO11 model for image classification?
-To train a YOLOv8 model, you can use either Python or CLI commands. For example, to train a `yolov8n-cls` model on the MNIST160 dataset for 100 epochs at an image size of 64:
+To train a YOLO11 model, you can use either Python or CLI commands. For example, to train a `yolo11n-cls` model on the MNIST160 dataset for 100 epochs at an image size of 64:
!!! example
@@ -188,7 +188,7 @@ To train a YOLOv8 model, you can use either Python or CLI commands. For example,
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
+ model = YOLO("yolo11n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="mnist160", epochs=100, imgsz=64)
@@ -197,18 +197,18 @@ To train a YOLOv8 model, you can use either Python or CLI commands. For example,
=== "CLI"
```bash
- yolo classify train data=mnist160 model=yolov8n-cls.pt epochs=100 imgsz=64
+ yolo classify train data=mnist160 model=yolo11n-cls.pt epochs=100 imgsz=64
```
For more configuration options, visit the [Configuration](../usage/cfg.md) page.
-### Where can I find pretrained YOLOv8 classification models?
+### Where can I find pretrained YOLO11 classification models?
-Pretrained YOLOv8 classification models can be found in the [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models/v8) section. Models like `yolov8n-cls.pt`, `yolov8s-cls.pt`, `yolov8m-cls.pt`, etc., are pretrained on the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/ImageNet.yaml) dataset and can be easily downloaded and used for various image classification tasks.
+Pretrained YOLO11 classification models can be found in the [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models/11) section. Models like `yolo11n-cls.pt`, `yolo11s-cls.pt`, `yolo11m-cls.pt`, etc., are pretrained on the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/ImageNet.yaml) dataset and can be easily downloaded and used for various image classification tasks.
-### How can I export a trained YOLOv8 model to different formats?
+### How can I export a trained YOLO11 model to different formats?
-You can export a trained YOLOv8 model to various formats using Python or CLI commands. For instance, to export a model to ONNX format:
+You can export a trained YOLO11 model to various formats using Python or CLI commands. For instance, to export a model to ONNX format:
!!! example
@@ -218,7 +218,7 @@ You can export a trained YOLOv8 model to various formats using Python or CLI com
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n-cls.pt") # load the trained model
+ model = YOLO("yolo11n-cls.pt") # load the trained model
# Export the model to ONNX
model.export(format="onnx")
@@ -227,12 +227,12 @@ You can export a trained YOLOv8 model to various formats using Python or CLI com
=== "CLI"
```bash
- yolo export model=yolov8n-cls.pt format=onnx # export the trained model to ONNX format
+ yolo export model=yolo11n-cls.pt format=onnx # export the trained model to ONNX format
```
For detailed export options, refer to the [Export](../modes/export.md) page.
-### How do I validate a trained YOLOv8 classification model?
+### How do I validate a trained YOLO11 classification model?
To validate a trained model's accuracy on a dataset like MNIST160, you can use the following Python or CLI commands:
@@ -244,7 +244,7 @@ To validate a trained model's accuracy on a dataset like MNIST160, you can use t
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n-cls.pt") # load the trained model
+ model = YOLO("yolo11n-cls.pt") # load the trained model
# Validate the model
metrics = model.val() # no arguments needed, uses the dataset and settings from training
@@ -255,7 +255,7 @@ To validate a trained model's accuracy on a dataset like MNIST160, you can use t
=== "CLI"
```bash
- yolo classify val model=yolov8n-cls.pt # validate the trained model
+ yolo classify val model=yolo11n-cls.pt # validate the trained model
```
For more information, visit the [Validate](#val) section.
diff --git a/docs/en/tasks/detect.md b/docs/en/tasks/detect.md
index 079dc7ecb7..58d58759b1 100644
--- a/docs/en/tasks/detect.md
+++ b/docs/en/tasks/detect.md
@@ -25,16 +25,16 @@ The output of an object detector is a set of bounding boxes that enclose the obj
!!! tip
- YOLOv8 Detect models are the default YOLOv8 models, i.e. `yolov8n.pt` and are pretrained on [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml).
+ YOLO11 Detect models are the default YOLO11 models, i.e. `yolo11n.pt` and are pretrained on [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml).
-## [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models/v8)
+## [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models/11)
-YOLOv8 pretrained Detect models are shown here. Detect, Segment and Pose models are pretrained on the [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml) dataset, while Classify models are pretrained on the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/ImageNet.yaml) dataset.
+YOLO11 pretrained Detect models are shown here. Detect, Segment and Pose models are pretrained on the [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml) dataset, while Classify models are pretrained on the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/ImageNet.yaml) dataset.
[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models) download automatically from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.
| Model | size<br><sup>(pixels) | mAP<sup>val</sup><br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
-|--------------------------------------------------------------------------------------|-----------------------|----------------------|--------------------------------|-------------------------------------|--------------------|-------------------|
+| ------------------------------------------------------------------------------------ | --------------------- | -------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLO11n](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt) | 640 | 39.5 | 56.1 ± 0.8 | 1.5 ± 0.0 | 2.6 | 6.5 |
| [YOLO11s](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s.pt) | 640 | 47.0 | 90.0 ± 1.2 | 2.5 ± 0.0 | 9.4 | 21.5 |
| [YOLO11m](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m.pt) | 640 | 51.5 | 183.2 ± 2.0 | 4.7 ± 0.1 | 20.1 | 68.0 |
@@ -46,7 +46,7 @@ YOLOv8 pretrained Detect models are shown here. Detect, Segment and Pose models
## Train
-Train YOLOv8n on the COCO8 dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
+Train YOLO11n on the COCO8 dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
!!! example
@@ -56,9 +56,9 @@ Train YOLOv8n on the COCO8 dataset for 100 [epochs](https://www.ultralytics.com/
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n.yaml") # build a new model from YAML
- model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
- model = YOLO("yolov8n.yaml").load("yolov8n.pt") # build from YAML and transfer weights
+ model = YOLO("yolo11n.yaml") # build a new model from YAML
+ model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
+ model = YOLO("yolo11n.yaml").load("yolo11n.pt") # build from YAML and transfer weights
# Train the model
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
@@ -68,13 +68,13 @@ Train YOLOv8n on the COCO8 dataset for 100 [epochs](https://www.ultralytics.com/
```bash
# Build a new model from YAML and start training from scratch
- yolo detect train data=coco8.yaml model=yolov8n.yaml epochs=100 imgsz=640
+ yolo detect train data=coco8.yaml model=yolo11n.yaml epochs=100 imgsz=640
# Start training from a pretrained *.pt model
- yolo detect train data=coco8.yaml model=yolov8n.pt epochs=100 imgsz=640
+ yolo detect train data=coco8.yaml model=yolo11n.pt epochs=100 imgsz=640
# Build a new model from YAML, transfer pretrained weights to it and start training
- yolo detect train data=coco8.yaml model=yolov8n.yaml pretrained=yolov8n.pt epochs=100 imgsz=640
+ yolo detect train data=coco8.yaml model=yolo11n.yaml pretrained=yolo11n.pt epochs=100 imgsz=640
```
### Dataset format
@@ -83,7 +83,7 @@ YOLO detection dataset format can be found in detail in the [Dataset Guide](../d
## Val
-Validate trained YOLOv8n model [accuracy](https://www.ultralytics.com/glossary/accuracy) on the COCO8 dataset. No arguments are needed as the `model` retains its training `data` and arguments as model attributes.
+Validate trained YOLO11n model [accuracy](https://www.ultralytics.com/glossary/accuracy) on the COCO8 dataset. No arguments are needed as the `model` retains its training `data` and arguments as model attributes.
!!! example
@@ -93,7 +93,7 @@ Validate trained YOLOv8n model [accuracy](https://www.ultralytics.com/glossary/a
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n.pt") # load an official model
+ model = YOLO("yolo11n.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Validate the model
@@ -107,13 +107,13 @@ Validate trained YOLOv8n model [accuracy](https://www.ultralytics.com/glossary/a
=== "CLI"
```bash
- yolo detect val model=yolov8n.pt # val official model
+ yolo detect val model=yolo11n.pt # val official model
yolo detect val model=path/to/best.pt # val custom model
```
## Predict
-Use a trained YOLOv8n model to run predictions on images.
+Use a trained YOLO11n model to run predictions on images.
!!! example
@@ -123,7 +123,7 @@ Use a trained YOLOv8n model to run predictions on images.
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n.pt") # load an official model
+ model = YOLO("yolo11n.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Predict with the model
@@ -133,7 +133,7 @@ Use a trained YOLOv8n model to run predictions on images.
=== "CLI"
```bash
- yolo detect predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg' # predict with official model
+ yolo detect predict model=yolo11n.pt source='https://ultralytics.com/images/bus.jpg' # predict with official model
yolo detect predict model=path/to/best.pt source='https://ultralytics.com/images/bus.jpg' # predict with custom model
```
@@ -141,7 +141,7 @@ See full `predict` mode details in the [Predict](../modes/predict.md) page.
## Export
-Export a YOLOv8n model to a different format like ONNX, CoreML, etc.
+Export a YOLO11n model to a different format like ONNX, CoreML, etc.
!!! example
@@ -151,7 +151,7 @@ Export a YOLOv8n model to a different format like ONNX, CoreML, etc.
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n.pt") # load an official model
+ model = YOLO("yolo11n.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom trained model
# Export the model
@@ -161,11 +161,11 @@ Export a YOLOv8n model to a different format like ONNX, CoreML, etc.
=== "CLI"
```bash
- yolo export model=yolov8n.pt format=onnx # export official model
+ yolo export model=yolo11n.pt format=onnx # export official model
yolo export model=path/to/best.pt format=onnx # export custom trained model
```
-Available YOLOv8 export formats are in the table below. You can export to any format using the `format` argument, i.e. `format='onnx'` or `format='engine'`. You can predict or validate directly on exported models, i.e. `yolo predict model=yolov8n.onnx`. Usage examples are shown for your model after export completes.
+Available YOLO11 export formats are in the table below. You can export to any format using the `format` argument, i.e. `format='onnx'` or `format='engine'`. You can predict or validate directly on exported models, i.e. `yolo predict model=yolo11n.onnx`. Usage examples are shown for your model after export completes.
{% include "macros/export-table.md" %}
@@ -173,9 +173,9 @@ See full `export` details in the [Export](../modes/export.md) page.
## FAQ
-### How do I train a YOLOv8 model on my custom dataset?
+### How do I train a YOLO11 model on my custom dataset?
-Training a YOLOv8 model on a custom dataset involves a few steps:
+Training a YOLO11 model on a custom dataset involves a few steps:
1. **Prepare the Dataset**: Ensure your dataset is in the YOLO format. For guidance, refer to our [Dataset Guide](../datasets/detect/index.md).
2. **Load the Model**: Use the Ultralytics YOLO library to load a pre-trained model or create a new model from a YAML file.
@@ -189,7 +189,7 @@ Training a YOLOv8 model on a custom dataset involves a few steps:
from ultralytics import YOLO
# Load a pretrained model
- model = YOLO("yolov8n.pt")
+ model = YOLO("yolo11n.pt")
# Train the model on your custom dataset
model.train(data="my_custom_dataset.yaml", epochs=100, imgsz=640)
@@ -198,26 +198,26 @@ Training a YOLOv8 model on a custom dataset involves a few steps:
=== "CLI"
```bash
- yolo detect train data=my_custom_dataset.yaml model=yolov8n.pt epochs=100 imgsz=640
+ yolo detect train data=my_custom_dataset.yaml model=yolo11n.pt epochs=100 imgsz=640
```
For detailed configuration options, visit the [Configuration](../usage/cfg.md) page.
-### What pretrained models are available in YOLOv8?
+### What pretrained models are available in YOLO11?
-Ultralytics YOLOv8 offers various pretrained models for object detection, segmentation, and pose estimation. These models are pretrained on the COCO dataset or ImageNet for classification tasks. Here are some of the available models:
+Ultralytics YOLO11 offers various pretrained models for object detection, segmentation, and pose estimation. These models are pretrained on the COCO dataset or ImageNet for classification tasks. Here are some of the available models:
-- [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n.pt)
-- [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s.pt)
-- [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m.pt)
-- [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l.pt)
-- [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x.pt)
+- [YOLO11n](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt)
+- [YOLO11s](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s.pt)
+- [YOLO11m](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m.pt)
+- [YOLO11l](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l.pt)
+- [YOLO11x](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x.pt)
-For a detailed list and performance metrics, refer to the [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models/v8) section.
+For a detailed list and performance metrics, refer to the [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models/11) section.
-### How can I validate the accuracy of my trained YOLOv8 model?
+### How can I validate the accuracy of my trained YOLO11 model?
-To validate the accuracy of your trained YOLOv8 model, you can use the `.val()` method in Python or the `yolo detect val` command in CLI. This will provide metrics like mAP50-95, mAP50, and more.
+To validate the accuracy of your trained YOLO11 model, you can use the `.val()` method in Python or the `yolo detect val` command in CLI. This will provide metrics like mAP50-95, mAP50, and more.
!!! example
@@ -242,9 +242,9 @@ To validate the accuracy of your trained YOLOv8 model, you can use the `.val()`
For more validation details, visit the [Val](../modes/val.md) page.
-### What formats can I export a YOLOv8 model to?
+### What formats can I export a YOLO11 model to?
-Ultralytics YOLOv8 allows exporting models to various formats such as ONNX, TensorRT, CoreML, and more to ensure compatibility across different platforms and devices.
+Ultralytics YOLO11 allows exporting models to various formats such as ONNX, TensorRT, CoreML, and more to ensure compatibility across different platforms and devices.
!!! example
@@ -254,7 +254,7 @@ Ultralytics YOLOv8 allows exporting models to various formats such as ONNX, Tens
from ultralytics import YOLO
# Load the model
- model = YOLO("yolov8n.pt")
+ model = YOLO("yolo11n.pt")
# Export the model to ONNX format
model.export(format="onnx")
@@ -263,18 +263,18 @@ Ultralytics YOLOv8 allows exporting models to various formats such as ONNX, Tens
=== "CLI"
```bash
- yolo export model=yolov8n.pt format=onnx
+ yolo export model=yolo11n.pt format=onnx
```
Check the full list of supported formats and instructions on the [Export](../modes/export.md) page.
-### Why should I use Ultralytics YOLOv8 for object detection?
+### Why should I use Ultralytics YOLO11 for object detection?
-Ultralytics YOLOv8 is designed to offer state-of-the-art performance for object detection, segmentation, and pose estimation. Here are some key advantages:
+Ultralytics YOLO11 is designed to offer state-of-the-art performance for object detection, segmentation, and pose estimation. Here are some key advantages:
1. **Pretrained Models**: Utilize models pretrained on popular datasets like COCO and ImageNet for faster development.
2. **High Accuracy**: Achieves impressive mAP scores, ensuring reliable object detection.
3. **Speed**: Optimized for real-time inference, making it ideal for applications requiring swift processing.
4. **Flexibility**: Export models to various formats like ONNX and TensorRT for deployment across multiple platforms.
-Explore our [Blog](https://www.ultralytics.com/blog) for use cases and success stories showcasing YOLOv8 in action.
+Explore our [Blog](https://www.ultralytics.com/blog) for use cases and success stories showcasing YOLO11 in action.
diff --git a/docs/en/tasks/index.md b/docs/en/tasks/index.md
index 3ad5f2a0ef..d474800706 100644
--- a/docs/en/tasks/index.md
+++ b/docs/en/tasks/index.md
@@ -19,7 +19,7 @@ YOLO11 is an AI framework that supports multiple [computer vision](https://www.u
allowfullscreen>
- Watch: Explore Ultralytics YOLO Tasks: [Object Detection](https://www.ultralytics.com/glossary/object-detection), Segmentation, OBB, Tracking, and Pose Estimation.
+ Watch: Explore Ultralytics YOLO Tasks: Object Detection, Segmentation, OBB, Tracking, and Pose Estimation.
diff --git a/docs/en/tasks/obb.md b/docs/en/tasks/obb.md
--- a/docs/en/tasks/obb.md
+++ b/docs/en/tasks/obb.md
@@ -36,14 +36,14 @@ The output of an oriented object detector is a set of rotated bounding boxes tha
| :------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------: |
| ![Ships Detection using OBB](https://github.com/ultralytics/docs/releases/download/0/ships-detection-using-obb.avif) | ![Vehicle Detection using OBB](https://github.com/ultralytics/docs/releases/download/0/vehicle-detection-using-obb.avif) |
-## [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models/v8)
+## [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models/11)
-YOLOv8 pretrained OBB models are shown here, which are pretrained on the [DOTAv1](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/DOTAv1.yaml) dataset.
+YOLO11 pretrained OBB models are shown here, which are pretrained on the [DOTAv1](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/DOTAv1.yaml) dataset.
[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models) download automatically from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.
| Model | size<br><sup>(pixels) | mAP<sup>test</sup><br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
-|----------------------------------------------------------------------------------------------|-----------------------|--------------------|--------------------------------|-------------------------------------|--------------------|-------------------|
+| -------------------------------------------------------------------------------------------- | --------------------- | ------------------ | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLO11n-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-obb.pt) | 1024 | 78.4 | 117.6 ± 0.8 | 4.4 ± 0.0 | 2.7 | 17.2 |
| [YOLO11s-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-obb.pt) | 1024 | 79.5 | 219.4 ± 4.0 | 5.1 ± 0.0 | 9.7 | 57.5 |
| [YOLO11m-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m-obb.pt) | 1024 | 80.9 | 562.8 ± 2.9 | 10.1 ± 0.4 | 20.9 | 183.5 |
@@ -55,7 +55,7 @@ YOLOv8 pretrained OBB models are shown here, which are pretrained on the [DOTAv1
## Train
-Train YOLOv8n-obb on the `dota8.yaml` dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
+Train YOLO11n-obb on the `dota8.yaml` dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
!!! example
@@ -65,9 +65,9 @@ Train YOLOv8n-obb on the `dota8.yaml` dataset for 100 [epochs](https://www.ultra
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n-obb.yaml") # build a new model from YAML
- model = YOLO("yolov8n-obb.pt") # load a pretrained model (recommended for training)
- model = YOLO("yolov8n-obb.yaml").load("yolov8n.pt") # build from YAML and transfer weights
+ model = YOLO("yolo11n-obb.yaml") # build a new model from YAML
+ model = YOLO("yolo11n-obb.pt") # load a pretrained model (recommended for training)
+ model = YOLO("yolo11n-obb.yaml").load("yolo11n.pt") # build from YAML and transfer weights
# Train the model
results = model.train(data="dota8.yaml", epochs=100, imgsz=640)
@@ -77,13 +77,13 @@ Train YOLOv8n-obb on the `dota8.yaml` dataset for 100 [epochs](https://www.ultra
```bash
# Build a new model from YAML and start training from scratch
- yolo obb train data=dota8.yaml model=yolov8n-obb.yaml epochs=100 imgsz=640
+ yolo obb train data=dota8.yaml model=yolo11n-obb.yaml epochs=100 imgsz=640
# Start training from a pretrained *.pt model
- yolo obb train data=dota8.yaml model=yolov8n-obb.pt epochs=100 imgsz=640
+ yolo obb train data=dota8.yaml model=yolo11n-obb.pt epochs=100 imgsz=640
# Build a new model from YAML, transfer pretrained weights to it and start training
- yolo obb train data=dota8.yaml model=yolov8n-obb.yaml pretrained=yolov8n-obb.pt epochs=100 imgsz=640
+ yolo obb train data=dota8.yaml model=yolo11n-obb.yaml pretrained=yolo11n-obb.pt epochs=100 imgsz=640
```
@@ -103,7 +103,7 @@ OBB dataset format can be found in detail in the [Dataset Guide](../datasets/obb
## Val
-Validate trained YOLOv8n-obb model [accuracy](https://www.ultralytics.com/glossary/accuracy) on the DOTA8 dataset. No arguments are needed as the `model` retains its training `data` and arguments as model attributes.
+Validate trained YOLO11n-obb model [accuracy](https://www.ultralytics.com/glossary/accuracy) on the DOTA8 dataset. No arguments are needed as the `model` retains its training `data` and arguments as model attributes.
!!! example
@@ -113,7 +113,7 @@ Validate trained YOLOv8n-obb model [accuracy](https://www.ultralytics.com/glossa
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n-obb.pt") # load an official model
+ model = YOLO("yolo11n-obb.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Validate the model
@@ -127,13 +127,13 @@ Validate trained YOLOv8n-obb model [accuracy](https://www.ultralytics.com/glossa
=== "CLI"
```bash
- yolo obb val model=yolov8n-obb.pt data=dota8.yaml # val official model
+ yolo obb val model=yolo11n-obb.pt data=dota8.yaml # val official model
yolo obb val model=path/to/best.pt data=path/to/data.yaml # val custom model
```
## Predict
-Use a trained YOLOv8n-obb model to run predictions on images.
+Use a trained YOLO11n-obb model to run predictions on images.
!!! example
@@ -143,7 +143,7 @@ Use a trained YOLOv8n-obb model to run predictions on images.
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n-obb.pt") # load an official model
+ model = YOLO("yolo11n-obb.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Predict with the model
@@ -153,7 +153,7 @@ Use a trained YOLOv8n-obb model to run predictions on images.
=== "CLI"
```bash
- yolo obb predict model=yolov8n-obb.pt source='https://ultralytics.com/images/bus.jpg' # predict with official model
+ yolo obb predict model=yolo11n-obb.pt source='https://ultralytics.com/images/bus.jpg' # predict with official model
yolo obb predict model=path/to/best.pt source='https://ultralytics.com/images/bus.jpg' # predict with custom model
```
@@ -172,7 +172,7 @@ See full `predict` mode details in the [Predict](../modes/predict.md) page.
## Export
-Export a YOLOv8n-obb model to a different format like ONNX, CoreML, etc.
+Export a YOLO11n-obb model to a different format like ONNX, CoreML, etc.
!!! example
@@ -182,7 +182,7 @@ Export a YOLOv8n-obb model to a different format like ONNX, CoreML, etc.
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n-obb.pt") # load an official model
+ model = YOLO("yolo11n-obb.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom trained model
# Export the model
@@ -192,11 +192,11 @@ Export a YOLOv8n-obb model to a different format like ONNX, CoreML, etc.
=== "CLI"
```bash
- yolo export model=yolov8n-obb.pt format=onnx # export official model
+ yolo export model=yolo11n-obb.pt format=onnx # export official model
yolo export model=path/to/best.pt format=onnx # export custom trained model
```
-Available YOLOv8-obb export formats are in the table below. You can export to any format using the `format` argument, i.e. `format='onnx'` or `format='engine'`. You can predict or validate directly on exported models, i.e. `yolo predict model=yolov8n-obb.onnx`. Usage examples are shown for your model after export completes.
+Available YOLO11-obb export formats are in the table below. You can export to any format using the `format` argument, i.e. `format='onnx'` or `format='engine'`. You can predict or validate directly on exported models, i.e. `yolo predict model=yolo11n-obb.onnx`. Usage examples are shown for your model after export completes.
{% include "macros/export-table.md" %}
@@ -208,9 +208,9 @@ See full `export` details in the [Export](../modes/export.md) page.
Oriented Bounding Boxes (OBB) include an additional angle to enhance object localization accuracy in images. Unlike regular bounding boxes, which are axis-aligned rectangles, OBBs can rotate to fit the orientation of the object better. This is particularly useful for applications requiring precise object placement, such as aerial or satellite imagery ([Dataset Guide](../datasets/obb/index.md)).
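As a small illustration of that extra angle, assuming the standard `Results.obb` accessor, each prediction can be read as a rotated box with an explicit rotation term; the image filename below is a placeholder:

```python
from ultralytics import YOLO

model = YOLO("yolo11n-obb.pt")
results = model("aerial.jpg")  # placeholder aerial image

# Each oriented box is center-x, center-y, width, height, rotation (radians)
for xywhr in results[0].obb.xywhr:
    cx, cy, w, h, r = xywhr.tolist()
    print(f"center=({cx:.0f}, {cy:.0f}), size={w:.0f}x{h:.0f}, angle={r:.2f} rad")
```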
-### How do I train a YOLOv8n-obb model using a custom dataset?
+### How do I train a YOLO11n-obb model using a custom dataset?
-To train a YOLOv8n-obb model with a custom dataset, follow the example below using Python or CLI:
+To train a YOLO11n-obb model with a custom dataset, follow the example below using Python or CLI:
!!! example
@@ -220,7 +220,7 @@ To train a YOLOv8n-obb model with a custom dataset, follow the example below usi
from ultralytics import YOLO
# Load a pretrained model
- model = YOLO("yolov8n-obb.pt")
+ model = YOLO("yolo11n-obb.pt")
# Train the model
results = model.train(data="path/to/custom_dataset.yaml", epochs=100, imgsz=640)
@@ -229,18 +229,18 @@ To train a YOLOv8n-obb model with a custom dataset, follow the example below usi
=== "CLI"
```bash
- yolo obb train data=path/to/custom_dataset.yaml model=yolov8n-obb.pt epochs=100 imgsz=640
+ yolo obb train data=path/to/custom_dataset.yaml model=yolo11n-obb.pt epochs=100 imgsz=640
```
For more training arguments, check the [Configuration](../usage/cfg.md) section.
-### What datasets can I use for training YOLOv8-OBB models?
+### What datasets can I use for training YOLO11-OBB models?
-YOLOv8-OBB models are pretrained on datasets like [DOTAv1](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/DOTAv1.yaml) but you can use any dataset formatted for OBB. Detailed information on OBB dataset formats can be found in the [Dataset Guide](../datasets/obb/index.md).
+YOLO11-OBB models are pretrained on datasets like [DOTAv1](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/DOTAv1.yaml) but you can use any dataset formatted for OBB. Detailed information on OBB dataset formats can be found in the [Dataset Guide](../datasets/obb/index.md).
-### How can I export a YOLOv8-OBB model to ONNX format?
+### How can I export a YOLO11-OBB model to ONNX format?
-Exporting a YOLOv8-OBB model to ONNX format is straightforward using either Python or CLI:
+Exporting a YOLO11-OBB model to ONNX format is straightforward using either Python or CLI:
!!! example
@@ -250,7 +250,7 @@ Exporting a YOLOv8-OBB model to ONNX format is straightforward using either Pyth
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n-obb.pt")
+ model = YOLO("yolo11n-obb.pt")
# Export the model
model.export(format="onnx")
@@ -259,14 +259,14 @@ Exporting a YOLOv8-OBB model to ONNX format is straightforward using either Pyth
=== "CLI"
```bash
- yolo export model=yolov8n-obb.pt format=onnx
+ yolo export model=yolo11n-obb.pt format=onnx
```
For more export formats and details, refer to the [Export](../modes/export.md) page.
-### How do I validate the accuracy of a YOLOv8n-obb model?
+### How do I validate the accuracy of a YOLO11n-obb model?
-To validate a YOLOv8n-obb model, you can use Python or CLI commands as shown below:
+To validate a YOLO11n-obb model, you can use Python or CLI commands as shown below:
!!! example
@@ -276,7 +276,7 @@ To validate a YOLOv8n-obb model, you can use Python or CLI commands as shown bel
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n-obb.pt")
+ model = YOLO("yolo11n-obb.pt")
# Validate the model
metrics = model.val(data="dota8.yaml")
@@ -285,7 +285,7 @@ To validate a YOLOv8n-obb model, you can use Python or CLI commands as shown bel
=== "CLI"
```bash
- yolo obb val model=yolov8n-obb.pt data=dota8.yaml
+ yolo obb val model=yolo11n-obb.pt data=dota8.yaml
```
See full validation details in the [Val](../modes/val.md) section.
diff --git a/docs/en/tasks/pose.md b/docs/en/tasks/pose.md
index 8f5f0b8968..8b3059bc72 100644
--- a/docs/en/tasks/pose.md
+++ b/docs/en/tasks/pose.md
@@ -38,9 +38,9 @@ The output of a pose estimation model is a set of points that represent the keyp
!!! tip
- YOLOv8 _pose_ models use the `-pose` suffix, i.e. `yolov8n-pose.pt`. These models are trained on the [COCO keypoints](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco-pose.yaml) dataset and are suitable for a variety of pose estimation tasks.
+ YOLO11 _pose_ models use the `-pose` suffix, i.e. `yolo11n-pose.pt`. These models are trained on the [COCO keypoints](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco-pose.yaml) dataset and are suitable for a variety of pose estimation tasks.
- In the default YOLOv8 pose model, there are 17 keypoints, each representing a different part of the human body. Here is the mapping of each index to its respective body joint:
+ In the default YOLO11 pose model, there are 17 keypoints, each representing a different part of the human body. Here is the mapping of each index to its respective body joint:
0: Nose
1: Left Eye
@@ -60,14 +60,14 @@ The output of a pose estimation model is a set of points that represent the keyp
15: Left Ankle
16: Right Ankle
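As a short sketch of using this mapping, the example below reads the Nose keypoint (index 0) from a prediction via the standard `Results.keypoints` accessor; it assumes at least one person is detected in the sample image:

```python
from ultralytics import YOLO

model = YOLO("yolo11n-pose.pt")
results = model("https://ultralytics.com/images/bus.jpg")

kpts = results[0].keypoints  # keypoints for every detected person
xy = kpts.xy[0]  # (17, 2) pixel coordinates for the first person
conf = kpts.conf[0]  # per-keypoint confidence scores
nose_x, nose_y = xy[0].tolist()  # index 0 = Nose in the mapping above
print(f"Nose at ({nose_x:.0f}, {nose_y:.0f}), confidence {float(conf[0]):.2f}")
```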
-## [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models/v8)
+## [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models/11)
-YOLOv8 pretrained Pose models are shown here. Detect, Segment and Pose models are pretrained on the [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml) dataset, while Classify models are pretrained on the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/ImageNet.yaml) dataset.
+YOLO11 pretrained Pose models are shown here. Detect, Segment and Pose models are pretrained on the [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml) dataset, while Classify models are pretrained on the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/ImageNet.yaml) dataset.
[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models) download automatically from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.
| Model | size<br><sup>(pixels) | mAP<sup>pose</sup><br>50-95 | mAP<sup>pose</sup><br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
-|------------------------------------------------------------------------------------------------|-----------------------|-----------------------|--------------------|--------------------------------|-------------------------------------|--------------------|-------------------|
+| ---------------------------------------------------------------------------------------------- | --------------------- | --------------------- | ------------------ | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLO11n-pose](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-pose.pt) | 640 | 50.0 | 81.0 | 52.4 ± 0.5 | 1.7 ± 0.0 | 2.9 | 7.6 |
| [YOLO11s-pose](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-pose.pt) | 640 | 58.9 | 86.3 | 90.5 ± 0.6 | 2.6 ± 0.0 | 9.9 | 23.2 |
| [YOLO11m-pose](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m-pose.pt) | 640 | 64.9 | 89.4 | 187.3 ± 0.8 | 4.9 ± 0.1 | 20.9 | 71.7 |
@@ -79,7 +79,7 @@ YOLOv8 pretrained Pose models are shown here. Detect, Segment and Pose models ar
## Train
-Train a YOLOv8-pose model on the COCO128-pose dataset.
+Train a YOLO11-pose model on the COCO8-pose dataset.
!!! example
@@ -89,9 +89,9 @@ Train a YOLOv8-pose model on the COCO128-pose dataset.
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n-pose.yaml") # build a new model from YAML
- model = YOLO("yolov8n-pose.pt") # load a pretrained model (recommended for training)
- model = YOLO("yolov8n-pose.yaml").load("yolov8n-pose.pt") # build from YAML and transfer weights
+ model = YOLO("yolo11n-pose.yaml") # build a new model from YAML
+ model = YOLO("yolo11n-pose.pt") # load a pretrained model (recommended for training)
+ model = YOLO("yolo11n-pose.yaml").load("yolo11n-pose.pt") # build from YAML and transfer weights
# Train the model
results = model.train(data="coco8-pose.yaml", epochs=100, imgsz=640)
@@ -101,13 +101,13 @@ Train a YOLOv8-pose model on the COCO128-pose dataset.
```bash
# Build a new model from YAML and start training from scratch
- yolo pose train data=coco8-pose.yaml model=yolov8n-pose.yaml epochs=100 imgsz=640
+ yolo pose train data=coco8-pose.yaml model=yolo11n-pose.yaml epochs=100 imgsz=640
# Start training from a pretrained *.pt model
- yolo pose train data=coco8-pose.yaml model=yolov8n-pose.pt epochs=100 imgsz=640
+ yolo pose train data=coco8-pose.yaml model=yolo11n-pose.pt epochs=100 imgsz=640
# Build a new model from YAML, transfer pretrained weights to it and start training
- yolo pose train data=coco8-pose.yaml model=yolov8n-pose.yaml pretrained=yolov8n-pose.pt epochs=100 imgsz=640
+ yolo pose train data=coco8-pose.yaml model=yolo11n-pose.yaml pretrained=yolo11n-pose.pt epochs=100 imgsz=640
```
### Dataset format
@@ -116,7 +116,7 @@ YOLO pose dataset format can be found in detail in the [Dataset Guide](../datase
## Val
-Validate trained YOLOv8n-pose model [accuracy](https://www.ultralytics.com/glossary/accuracy) on the COCO128-pose dataset. No arguments are needed as the `model` retains its training `data` and arguments as model attributes.
+Validate trained YOLO11n-pose model [accuracy](https://www.ultralytics.com/glossary/accuracy) on the COCO8-pose dataset. No arguments are needed as the `model` retains its training `data` and arguments as model attributes.
!!! example
@@ -126,7 +126,7 @@ Validate trained YOLOv8n-pose model [accuracy](https://www.ultralytics.com/gloss
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n-pose.pt") # load an official model
+ model = YOLO("yolo11n-pose.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Validate the model
@@ -140,13 +140,13 @@ Validate trained YOLOv8n-pose model [accuracy](https://www.ultralytics.com/gloss
=== "CLI"
```bash
- yolo pose val model=yolov8n-pose.pt # val official model
+ yolo pose val model=yolo11n-pose.pt # val official model
yolo pose val model=path/to/best.pt # val custom model
```
## Predict
-Use a trained YOLOv8n-pose model to run predictions on images.
+Use a trained YOLO11n-pose model to run predictions on images.
!!! example
@@ -156,7 +156,7 @@ Use a trained YOLOv8n-pose model to run predictions on images.
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n-pose.pt") # load an official model
+ model = YOLO("yolo11n-pose.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Predict with the model
@@ -166,7 +166,7 @@ Use a trained YOLOv8n-pose model to run predictions on images.
=== "CLI"
```bash
- yolo pose predict model=yolov8n-pose.pt source='https://ultralytics.com/images/bus.jpg' # predict with official model
+ yolo pose predict model=yolo11n-pose.pt source='https://ultralytics.com/images/bus.jpg' # predict with official model
yolo pose predict model=path/to/best.pt source='https://ultralytics.com/images/bus.jpg' # predict with custom model
```
@@ -174,7 +174,7 @@ See full `predict` mode details in the [Predict](../modes/predict.md) page.
## Export
-Export a YOLOv8n Pose model to a different format like ONNX, CoreML, etc.
+Export a YOLO11n Pose model to a different format like ONNX, CoreML, etc.
!!! example
@@ -184,7 +184,7 @@ Export a YOLOv8n Pose model to a different format like ONNX, CoreML, etc.
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n-pose.pt") # load an official model
+ model = YOLO("yolo11n-pose.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom trained model
# Export the model
@@ -194,11 +194,11 @@ Export a YOLOv8n Pose model to a different format like ONNX, CoreML, etc.
=== "CLI"
```bash
- yolo export model=yolov8n-pose.pt format=onnx # export official model
+ yolo export model=yolo11n-pose.pt format=onnx # export official model
yolo export model=path/to/best.pt format=onnx # export custom trained model
```
-Available YOLOv8-pose export formats are in the table below. You can export to any format using the `format` argument, i.e. `format='onnx'` or `format='engine'`. You can predict or validate directly on exported models, i.e. `yolo predict model=yolov8n-pose.onnx`. Usage examples are shown for your model after export completes.
+Available YOLO11-pose export formats are in the table below. You can export to any format using the `format` argument, i.e. `format='onnx'` or `format='engine'`. You can predict or validate directly on exported models, i.e. `yolo predict model=yolo11n-pose.onnx`. Usage examples are shown for your model after export completes.
{% include "macros/export-table.md" %}
@@ -206,20 +206,20 @@ See full `export` details in the [Export](../modes/export.md) page.
## FAQ
-### What is Pose Estimation with Ultralytics YOLOv8 and how does it work?
+### What is Pose Estimation with Ultralytics YOLO11 and how does it work?
-Pose estimation with Ultralytics YOLOv8 involves identifying specific points, known as keypoints, in an image. These keypoints typically represent joints or other important features of the object. The output includes the `[x, y]` coordinates and confidence scores for each point. YOLOv8-pose models are specifically designed for this task and use the `-pose` suffix, such as `yolov8n-pose.pt`. These models are pre-trained on datasets like [COCO keypoints](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco-pose.yaml) and can be used for various pose estimation tasks. For more information, visit the [Pose Estimation Page](#pose-estimation).
+Pose estimation with Ultralytics YOLO11 involves identifying specific points, known as keypoints, in an image. These keypoints typically represent joints or other important features of the object. The output includes the `[x, y]` coordinates and confidence scores for each point. YOLO11-pose models are specifically designed for this task and use the `-pose` suffix, such as `yolo11n-pose.pt`. These models are pre-trained on datasets like [COCO keypoints](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco-pose.yaml) and can be used for various pose estimation tasks. For more information, visit the [Pose Estimation Page](#pose-estimation).
-### How can I train a YOLOv8-pose model on a custom dataset?
+### How can I train a YOLO11-pose model on a custom dataset?
-Training a YOLOv8-pose model on a custom dataset involves loading a model, either a new model defined by a YAML file or a pre-trained model. You can then start the training process using your specified dataset and parameters.
+Training a YOLO11-pose model on a custom dataset involves loading a model, either a new model defined by a YAML file or a pre-trained model. You can then start the training process using your specified dataset and parameters.
```python
from ultralytics import YOLO
# Load a model
-model = YOLO("yolov8n-pose.yaml") # build a new model from YAML
-model = YOLO("yolov8n-pose.pt") # load a pretrained model (recommended for training)
+model = YOLO("yolo11n-pose.yaml") # build a new model from YAML
+model = YOLO("yolo11n-pose.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="your-dataset.yaml", epochs=100, imgsz=640)
@@ -227,9 +227,9 @@ results = model.train(data="your-dataset.yaml", epochs=100, imgsz=640)
For comprehensive details on training, refer to the [Train Section](#train).
-### How do I validate a trained YOLOv8-pose model?
+### How do I validate a trained YOLO11-pose model?
-Validation of a YOLOv8-pose model involves assessing its accuracy using the same dataset parameters retained during training. Here's an example:
+Validation of a YOLO11-pose model involves assessing its accuracy using the same dataset parameters retained during training. Here's an example:
```python
from ultralytics import YOLO
@@ -244,9 +244,9 @@ metrics = model.val() # no arguments needed, dataset and settings remembered
For more information, visit the [Val Section](#val).
-### Can I export a YOLOv8-pose model to other formats, and how?
+### Can I export a YOLO11-pose model to other formats, and how?
-Yes, you can export a YOLOv8-pose model to various formats like ONNX, CoreML, TensorRT, and more. This can be done using either Python or the Command Line Interface (CLI).
+Yes, you can export a YOLO11-pose model to various formats like ONNX, CoreML, TensorRT, and more. This can be done using either Python or the Command Line Interface (CLI).
```python
from ultralytics import YOLO
@@ -261,6 +261,6 @@ model.export(format="onnx")
Refer to the [Export Section](#export) for more details.
-### What are the available Ultralytics YOLOv8-pose models and their performance metrics?
+### What are the available Ultralytics YOLO11-pose models and their performance metrics?
-Ultralytics YOLOv8 offers various pretrained pose models such as YOLOv8n-pose, YOLOv8s-pose, YOLOv8m-pose, among others. These models differ in size, accuracy (mAP), and speed. For instance, the YOLOv8n-pose model achieves a mAPpose50-95 of 50.4 and an mAPpose50 of 80.1. For a complete list and performance details, visit the [Models Section](#models).
+Ultralytics YOLO11 offers various pretrained pose models such as YOLO11n-pose, YOLO11s-pose, YOLO11m-pose, among others. These models differ in size, accuracy (mAP), and speed. For instance, the YOLO11n-pose model achieves an mAP<sup>pose</sup>50-95 of 50.0 and an mAP<sup>pose</sup>50 of 81.0. For a complete list and performance details, visit the [Models Section](#models).
diff --git a/docs/en/tasks/segment.md b/docs/en/tasks/segment.md
index f7054fb36a..f205bb15fb 100644
--- a/docs/en/tasks/segment.md
+++ b/docs/en/tasks/segment.md
@@ -26,16 +26,16 @@ The output of an instance segmentation model is a set of masks or contours that
!!! tip
- YOLOv8 Segment models use the `-seg` suffix, i.e. `yolov8n-seg.pt` and are pretrained on [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml).
+ YOLO11 Segment models use the `-seg` suffix, i.e. `yolo11n-seg.pt` and are pretrained on [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml).
-## [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models/v8)
+## [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models/11)
-YOLOv8 pretrained Segment models are shown here. Detect, Segment and Pose models are pretrained on the [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml) dataset, while Classify models are pretrained on the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/ImageNet.yaml) dataset.
+YOLO11 pretrained Segment models are shown here. Detect, Segment and Pose models are pretrained on the [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml) dataset, while Classify models are pretrained on the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/ImageNet.yaml) dataset.
[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models) download automatically from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.
| Model | size<br><sup>(pixels) | mAP<sup>box</sup><br>50-95 | mAP<sup>mask</sup><br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
-|----------------------------------------------------------------------------------------------|-----------------------|----------------------|-----------------------|--------------------------------|-------------------------------------|--------------------|-------------------|
+| -------------------------------------------------------------------------------------------- | --------------------- | -------------------- | --------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLO11n-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-seg.pt) | 640 | 38.9 | 32.0 | 65.9 ± 1.1 | 1.8 ± 0.0 | 2.9 | 10.4 |
| [YOLO11s-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-seg.pt) | 640 | 46.6 | 37.8 | 117.6 ± 4.9 | 2.9 ± 0.0 | 10.1 | 35.5 |
| [YOLO11m-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m-seg.pt) | 640 | 51.5 | 41.5 | 281.6 ± 1.2 | 6.3 ± 0.1 | 22.4 | 123.3 |
@@ -47,7 +47,7 @@ YOLOv8 pretrained Segment models are shown here. Detect, Segment and Pose models
## Train
-Train YOLOv8n-seg on the COCO128-seg dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
+Train YOLO11n-seg on the COCO8-seg dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
!!! example
@@ -57,9 +57,9 @@ Train YOLOv8n-seg on the COCO128-seg dataset for 100 [epochs](https://www.ultral
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n-seg.yaml") # build a new model from YAML
- model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
- model = YOLO("yolov8n-seg.yaml").load("yolov8n.pt") # build from YAML and transfer weights
+ model = YOLO("yolo11n-seg.yaml") # build a new model from YAML
+ model = YOLO("yolo11n-seg.pt") # load a pretrained model (recommended for training)
+ model = YOLO("yolo11n-seg.yaml").load("yolo11n.pt") # build from YAML and transfer weights
# Train the model
results = model.train(data="coco8-seg.yaml", epochs=100, imgsz=640)
@@ -69,13 +69,13 @@ Train YOLOv8n-seg on the COCO128-seg dataset for 100 [epochs](https://www.ultral
```bash
# Build a new model from YAML and start training from scratch
- yolo segment train data=coco8-seg.yaml model=yolov8n-seg.yaml epochs=100 imgsz=640
+ yolo segment train data=coco8-seg.yaml model=yolo11n-seg.yaml epochs=100 imgsz=640
# Start training from a pretrained *.pt model
- yolo segment train data=coco8-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
+ yolo segment train data=coco8-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
# Build a new model from YAML, transfer pretrained weights to it and start training
- yolo segment train data=coco8-seg.yaml model=yolov8n-seg.yaml pretrained=yolov8n-seg.pt epochs=100 imgsz=640
+ yolo segment train data=coco8-seg.yaml model=yolo11n-seg.yaml pretrained=yolo11n-seg.pt epochs=100 imgsz=640
```
### Dataset format
@@ -84,7 +84,7 @@ YOLO segmentation dataset format can be found in detail in the [Dataset Guide](.
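+As a hedged illustration of that format, each line of a segmentation `*.txt` label file encodes one object as a class index followed by a normalized polygon; a minimal sketch (the file name and coordinates are illustrative):
+
+```python
+# Each line of a YOLO segmentation *.txt label file encodes one object:
+#   <class-index> <x1> <y1> <x2> <y2> ... <xn> <yn>
+# where the polygon coordinates are normalized to [0, 1].
+label_line = "0 0.681 0.485 0.670 0.487 0.676 0.487"  # illustrative 3-point polygon for class 0
+with open("image_0001.txt", "w") as f:  # hypothetical label file name
+    f.write(label_line + "\n")
+```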
## Val
-Validate trained YOLOv8n-seg model [accuracy](https://www.ultralytics.com/glossary/accuracy) on the COCO128-seg dataset. No arguments are needed as the `model` retains its training `data` and arguments as model attributes.
+Validate trained YOLO11n-seg model [accuracy](https://www.ultralytics.com/glossary/accuracy) on the COCO8-seg dataset. No arguments are needed as the `model` retains its training `data` and arguments as model attributes.
!!! example
@@ -94,7 +94,7 @@ Validate trained YOLOv8n-seg model [accuracy](https://www.ultralytics.com/glossa
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n-seg.pt") # load an official model
+ model = YOLO("yolo11n-seg.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Validate the model
@@ -112,13 +112,13 @@ Validate trained YOLOv8n-seg model [accuracy](https://www.ultralytics.com/glossa
=== "CLI"
```bash
- yolo segment val model=yolov8n-seg.pt # val official model
+ yolo segment val model=yolo11n-seg.pt # val official model
yolo segment val model=path/to/best.pt # val custom model
```
## Predict
-Use a trained YOLOv8n-seg model to run predictions on images.
+Use a trained YOLO11n-seg model to run predictions on images.
!!! example
@@ -128,7 +128,7 @@ Use a trained YOLOv8n-seg model to run predictions on images.
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n-seg.pt") # load an official model
+ model = YOLO("yolo11n-seg.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Predict with the model
@@ -138,7 +138,7 @@ Use a trained YOLOv8n-seg model to run predictions on images.
=== "CLI"
```bash
- yolo segment predict model=yolov8n-seg.pt source='https://ultralytics.com/images/bus.jpg' # predict with official model
+ yolo segment predict model=yolo11n-seg.pt source='https://ultralytics.com/images/bus.jpg' # predict with official model
yolo segment predict model=path/to/best.pt source='https://ultralytics.com/images/bus.jpg' # predict with custom model
```
@@ -146,7 +146,7 @@ See full `predict` mode details in the [Predict](../modes/predict.md) page.
## Export
-Export a YOLOv8n-seg model to a different format like ONNX, CoreML, etc.
+Export a YOLO11n-seg model to a different format like ONNX, CoreML, etc.
!!! example
@@ -156,7 +156,7 @@ Export a YOLOv8n-seg model to a different format like ONNX, CoreML, etc.
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n-seg.pt") # load an official model
+ model = YOLO("yolo11n-seg.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom trained model
# Export the model
@@ -166,11 +166,11 @@ Export a YOLOv8n-seg model to a different format like ONNX, CoreML, etc.
=== "CLI"
```bash
- yolo export model=yolov8n-seg.pt format=onnx # export official model
+ yolo export model=yolo11n-seg.pt format=onnx # export official model
yolo export model=path/to/best.pt format=onnx # export custom trained model
```
-Available YOLOv8-seg export formats are in the table below. You can export to any format using the `format` argument, i.e. `format='onnx'` or `format='engine'`. You can predict or validate directly on exported models, i.e. `yolo predict model=yolov8n-seg.onnx`. Usage examples are shown for your model after export completes.
+Available YOLO11-seg export formats are in the table below. You can export to any format using the `format` argument, i.e. `format='onnx'` or `format='engine'`. You can predict or validate directly on exported models, i.e. `yolo predict model=yolo11n-seg.onnx`. Usage examples are shown for your model after export completes.
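+For instance, a minimal sketch of exporting and then predicting with the exported model (the image URL matches the examples above):
+
+```python
+from ultralytics import YOLO
+
+# Export the pretrained segmentation model to ONNX
+model = YOLO("yolo11n-seg.pt")
+model.export(format="onnx")  # creates 'yolo11n-seg.onnx'
+
+# Load the exported model and predict on it directly
+onnx_model = YOLO("yolo11n-seg.onnx")
+results = onnx_model("https://ultralytics.com/images/bus.jpg")
+```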
{% include "macros/export-table.md" %}
@@ -178,9 +178,9 @@ See full `export` details in the [Export](../modes/export.md) page.
## FAQ
-### How do I train a YOLOv8 segmentation model on a custom dataset?
+### How do I train a YOLO11 segmentation model on a custom dataset?
-To train a YOLOv8 segmentation model on a custom dataset, you first need to prepare your dataset in the YOLO segmentation format. You can use tools like [JSON2YOLO](https://github.com/ultralytics/JSON2YOLO) to convert datasets from other formats. Once your dataset is ready, you can train the model using Python or CLI commands:
+To train a YOLO11 segmentation model on a custom dataset, you first need to prepare your dataset in the YOLO segmentation format. You can use tools like [JSON2YOLO](https://github.com/ultralytics/JSON2YOLO) to convert datasets from other formats. Once your dataset is ready, you can train the model using Python or CLI commands:
!!! example
@@ -189,8 +189,8 @@ To train a YOLOv8 segmentation model on a custom dataset, you first need to prep
```python
from ultralytics import YOLO
- # Load a pretrained YOLOv8 segment model
- model = YOLO("yolov8n-seg.pt")
+ # Load a pretrained YOLO11 segment model
+ model = YOLO("yolo11n-seg.pt")
# Train the model
results = model.train(data="path/to/your_dataset.yaml", epochs=100, imgsz=640)
@@ -199,18 +199,18 @@ To train a YOLOv8 segmentation model on a custom dataset, you first need to prep
=== "CLI"
```bash
- yolo segment train data=path/to/your_dataset.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
+ yolo segment train data=path/to/your_dataset.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
```
Check the [Configuration](../usage/cfg.md) page for more available arguments.
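+If your annotations are in COCO JSON format, one option is the built-in converter; a minimal sketch, assuming an annotation directory path of your own:
+
+```python
+from ultralytics.data.converter import convert_coco
+
+# Convert COCO JSON annotations to YOLO segmentation labels (paths are illustrative)
+convert_coco(labels_dir="path/to/coco/annotations/", use_segments=True)
+```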
-### What is the difference between [object detection](https://www.ultralytics.com/glossary/object-detection) and instance segmentation in YOLOv8?
+### What is the difference between [object detection](https://www.ultralytics.com/glossary/object-detection) and instance segmentation in YOLO11?
-Object detection identifies and localizes objects within an image by drawing bounding boxes around them, whereas instance segmentation not only identifies the bounding boxes but also delineates the exact shape of each object. YOLOv8 instance segmentation models provide masks or contours that outline each detected object, which is particularly useful for tasks where knowing the precise shape of objects is important, such as medical imaging or autonomous driving.
+Object detection identifies and localizes objects within an image by drawing bounding boxes around them, whereas instance segmentation not only identifies the bounding boxes but also delineates the exact shape of each object. YOLO11 instance segmentation models provide masks or contours that outline each detected object, which is particularly useful for tasks where knowing the precise shape of objects is important, such as medical imaging or autonomous driving.
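+The difference shows up directly in the prediction results; a minimal sketch (assumes the `ultralytics` package is installed):
+
+```python
+from ultralytics import YOLO
+
+model = YOLO("yolo11n-seg.pt")
+results = model("https://ultralytics.com/images/bus.jpg")
+
+for r in results:
+    print(r.boxes.xyxy)  # bounding boxes, the detection-style output
+    print(r.masks.xy)  # per-instance polygon contours, the segmentation extra (None if nothing is detected)
+```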
-### Why use YOLOv8 for instance segmentation?
+### Why use YOLO11 for instance segmentation?
-Ultralytics YOLOv8 is a state-of-the-art model recognized for its high accuracy and real-time performance, making it ideal for instance segmentation tasks. YOLOv8 Segment models come pretrained on the [COCO dataset](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml), ensuring robust performance across a variety of objects. Additionally, YOLOv8 supports training, validation, prediction, and export functionalities with seamless integration, making it highly versatile for both research and industry applications.
+Ultralytics YOLO11 is a state-of-the-art model recognized for its high accuracy and real-time performance, making it ideal for instance segmentation tasks. YOLO11 Segment models come pretrained on the [COCO dataset](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml), ensuring robust performance across a variety of objects. Additionally, YOLO11 supports training, validation, prediction, and export functionalities with seamless integration, making it highly versatile for both research and industry applications.
-### How do I load and validate a pretrained YOLOv8 segmentation model?
+### How do I load and validate a pretrained YOLO11 segmentation model?
@@ -224,7 +224,7 @@ Loading and validating a pretrained YOLOv8 segmentation model is straightforward
from ultralytics import YOLO
# Load a pretrained model
- model = YOLO("yolov8n-seg.pt")
+ model = YOLO("yolo11n-seg.pt")
# Validate the model
metrics = model.val()
@@ -235,7 +235,7 @@ Loading and validating a pretrained YOLOv8 segmentation model is straightforward
=== "CLI"
```bash
- yolo segment val model=yolov8n-seg.pt
+ yolo segment val model=yolo11n-seg.pt
```
These steps will provide you with validation metrics like [Mean Average Precision](https://www.ultralytics.com/glossary/mean-average-precision-map) (mAP), crucial for assessing model performance.
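+A minimal sketch of reading those metrics in Python (attribute names as exposed by the validation results object):
+
+```python
+from ultralytics import YOLO
+
+# Validate a pretrained segment model and read the headline metrics
+metrics = YOLO("yolo11n-seg.pt").val()
+print(metrics.box.map)  # box mAP50-95
+print(metrics.seg.map)  # mask mAP50-95
+print(metrics.seg.map50)  # mask mAP50
+```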
@@ -252,7 +252,7 @@ Exporting a YOLOv8 segmentation model to ONNX format is simple and can be done u
from ultralytics import YOLO
# Load a pretrained model
- model = YOLO("yolov8n-seg.pt")
+ model = YOLO("yolo11n-seg.pt")
# Export the model to ONNX format
model.export(format="onnx")
@@ -261,7 +261,7 @@ Exporting a YOLOv8 segmentation model to ONNX format is simple and can be done u
=== "CLI"
```bash
- yolo export model=yolov8n-seg.pt format=onnx
+ yolo export model=yolo11n-seg.pt format=onnx
```
For more details on exporting to various formats, refer to the [Export](../modes/export.md) page.
diff --git a/docs/en/usage/callbacks.md b/docs/en/usage/callbacks.md
index 2886f8f512..16c4718786 100644
--- a/docs/en/usage/callbacks.md
+++ b/docs/en/usage/callbacks.md
@@ -1,7 +1,7 @@
---
comments: true
description: Explore Ultralytics callbacks for training, validation, exporting, and prediction. Learn how to use and customize them for your ML models.
-keywords: Ultralytics, callbacks, training, validation, export, prediction, ML models, YOLOv8, Python, machine learning
+keywords: Ultralytics, callbacks, training, validation, export, prediction, ML models, YOLO11, Python, machine learning
---
## Callbacks
@@ -16,7 +16,7 @@ Ultralytics framework supports callbacks as entry points in strategic stages of
allowfullscreen>
- Watch: Mastering Ultralytics YOLOv8: Callbacks
+ Watch: Mastering Ultralytics YOLO: Callbacks
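+As a minimal sketch of the mechanism (the `on_train_epoch_end` event is one of the documented entry points; the callback body is illustrative):
+
+```python
+from ultralytics import YOLO
+
+def log_epoch(trainer):
+    """Illustrative callback: print the epoch index when each training epoch ends."""
+    print(f"Finished epoch {trainer.epoch}")
+
+model = YOLO("yolo11n.pt")
+model.add_callback("on_train_epoch_end", log_epoch)
+model.train(data="coco8.yaml", epochs=3, imgsz=640)
+```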
- Watch: Mastering Ultralytics YOLOv8: Configuration
+ Watch: Mastering Ultralytics YOLO: Configuration
@@ -16,7 +16,7 @@ Welcome to the YOLOv8 Python Usage documentation! This guide is designed to help
allowfullscreen>
- Watch: Mastering Ultralytics YOLOv8: Python
+ Watch: Mastering Ultralytics YOLO: Python
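+As a minimal sketch of the end-to-end Python workflow this guide covers (assumes the `ultralytics` package is installed):
+
+```python
+from ultralytics import YOLO
+
+model = YOLO("yolo11n.pt")  # load a pretrained model
+model.train(data="coco8.yaml", epochs=3, imgsz=640)  # train
+metrics = model.val()  # validate on the dataset's val split
+results = model("https://ultralytics.com/images/bus.jpg")  # predict
+path = model.export(format="onnx")  # export to ONNX
+```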