diff --git a/docs/en/datasets/classify/caltech101.md b/docs/en/datasets/classify/caltech101.md
index 2462a167fc..b50a51916b 100644
--- a/docs/en/datasets/classify/caltech101.md
+++ b/docs/en/datasets/classify/caltech101.md
@@ -28,7 +28,7 @@ The Caltech-101 dataset is extensively used for training and evaluating deep lea
To train a YOLO model on the Caltech-101 dataset for 100 epochs, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -61,7 +61,7 @@ The example showcases the variety and complexity of the objects in the Caltech-1
If you use the Caltech-101 dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -90,7 +90,7 @@ The [Caltech-101](https://data.caltech.edu/records/mzrjq-6wc02) dataset is widel
To train an Ultralytics YOLO model on the Caltech-101 dataset, you can use the provided code snippets. For example, to train for 100 epochs:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -128,7 +128,7 @@ These features make it an excellent choice for training and evaluating object re
Citing the Caltech-101 dataset in your research acknowledges the creators' contributions and provides a reference for others who might use the dataset. The recommended citation is:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
diff --git a/docs/en/datasets/classify/caltech256.md b/docs/en/datasets/classify/caltech256.md
index a2551b9a60..c337721009 100644
--- a/docs/en/datasets/classify/caltech256.md
+++ b/docs/en/datasets/classify/caltech256.md
@@ -39,7 +39,7 @@ The Caltech-256 dataset is extensively used for training and evaluating deep lea
To train a YOLO model on the Caltech-256 dataset for 100 epochs, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -72,7 +72,7 @@ The example showcases the diversity and complexity of the objects in the Caltech
If you use the Caltech-256 dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -98,7 +98,7 @@ The [Caltech-256](https://data.caltech.edu/records/nyy15-4j048) dataset is a lar
To train a YOLO model on the Caltech-256 dataset for 100 epochs, you can use the following code snippets. Refer to the model [Training](../../modes/train.md) page for additional options.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
diff --git a/docs/en/datasets/classify/cifar10.md b/docs/en/datasets/classify/cifar10.md
index 39762681b2..865c80865f 100644
--- a/docs/en/datasets/classify/cifar10.md
+++ b/docs/en/datasets/classify/cifar10.md
@@ -42,7 +42,7 @@ The CIFAR-10 dataset is widely used for training and evaluating deep learning mo
To train a YOLO model on the CIFAR-10 dataset for 100 epochs with an image size of 32x32, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -75,7 +75,7 @@ The example showcases the variety and complexity of the objects in the CIFAR-10
If you use the CIFAR-10 dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -96,7 +96,7 @@ We would like to acknowledge Alex Krizhevsky for creating and maintaining the CI
To train a YOLO model on the CIFAR-10 dataset using Ultralytics, you can follow the examples provided for both Python and CLI. Here is a basic example to train your model for 100 epochs with an image size of 32x32 pixels:
-!!! Example
+!!! example
=== "Python"
@@ -153,7 +153,7 @@ Each subset comprises images categorized into 10 classes, with their annotations
If you use the CIFAR-10 dataset in your research or development projects, make sure to cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
diff --git a/docs/en/datasets/classify/cifar100.md b/docs/en/datasets/classify/cifar100.md
index 2861c9469f..ca868240f7 100644
--- a/docs/en/datasets/classify/cifar100.md
+++ b/docs/en/datasets/classify/cifar100.md
@@ -31,7 +31,7 @@ The CIFAR-100 dataset is extensively used for training and evaluating deep learn
To train a YOLO model on the CIFAR-100 dataset for 100 epochs with an image size of 32x32, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -64,7 +64,7 @@ The example showcases the variety and complexity of the objects in the CIFAR-100
If you use the CIFAR-100 dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -89,7 +89,7 @@ The [CIFAR-100 dataset](https://www.cs.toronto.edu/~kriz/cifar.html) is a large
You can train a YOLO model on the CIFAR-100 dataset using either Python or CLI commands. Here's how:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
diff --git a/docs/en/datasets/classify/fashion-mnist.md b/docs/en/datasets/classify/fashion-mnist.md
index 2de2a805c3..d1c87e2676 100644
--- a/docs/en/datasets/classify/fashion-mnist.md
+++ b/docs/en/datasets/classify/fashion-mnist.md
@@ -56,7 +56,7 @@ The Fashion-MNIST dataset is widely used for training and evaluating deep learni
To train a CNN model on the Fashion-MNIST dataset for 100 epochs with an image size of 28x28, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -99,7 +99,7 @@ The [Fashion-MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset is
To train an Ultralytics YOLO model on the Fashion-MNIST dataset, you can use both Python and CLI commands. Here's a quick example to get you started:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
diff --git a/docs/en/datasets/classify/imagenet.md b/docs/en/datasets/classify/imagenet.md
index ae1ade9ba3..8c4102caf8 100644
--- a/docs/en/datasets/classify/imagenet.md
+++ b/docs/en/datasets/classify/imagenet.md
@@ -41,7 +41,7 @@ The ImageNet dataset is widely used for training and evaluating deep learning mo
To train a deep learning model on the ImageNet dataset for 100 epochs with an image size of 224x224, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -74,7 +74,7 @@ The example showcases the variety and complexity of the images in the ImageNet d
If you use the ImageNet dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -102,7 +102,7 @@ The [ImageNet dataset](https://www.image-net.org/) is a large-scale database con
To use a pretrained Ultralytics YOLO model for image classification on the ImageNet dataset, follow these steps:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
diff --git a/docs/en/datasets/classify/imagenet10.md b/docs/en/datasets/classify/imagenet10.md
index 38764c89ec..b74e2a7a62 100644
--- a/docs/en/datasets/classify/imagenet10.md
+++ b/docs/en/datasets/classify/imagenet10.md
@@ -27,7 +27,7 @@ The ImageNet10 dataset is useful for quickly testing and debugging computer visi
To test a deep learning model on the ImageNet10 dataset with an image size of 224x224, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Test Example"
+!!! example "Test Example"
=== "Python"
@@ -58,7 +58,7 @@ The ImageNet10 dataset contains a subset of images from the original ImageNet da
If you use the ImageNet10 dataset in your research or development work, please cite the original ImageNet paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -86,7 +86,7 @@ The [ImageNet10](https://github.com/ultralytics/assets/releases/download/v0.0.0/
To test your deep learning model on the ImageNet10 dataset with an image size of 224x224, use the following code snippets.
-!!! Example "Test Example"
+!!! example "Test Example"
=== "Python"
diff --git a/docs/en/datasets/classify/imagenette.md b/docs/en/datasets/classify/imagenette.md
index fa06e0d38f..42368a672a 100644
--- a/docs/en/datasets/classify/imagenette.md
+++ b/docs/en/datasets/classify/imagenette.md
@@ -29,7 +29,7 @@ The ImageNette dataset is widely used for training and evaluating deep learning
To train a model on the ImageNette dataset for 100 epochs with a standard image size of 224x224, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -64,7 +64,7 @@ For faster prototyping and training, the ImageNette dataset is also available in
To use these datasets, simply replace 'imagenette' with 'imagenette160' or 'imagenette320' in the training command. The following code snippets illustrate this:
-!!! Example "Train Example with ImageNette160"
+!!! example "Train Example with ImageNette160"
=== "Python"
@@ -85,7 +85,7 @@ To use these datasets, simply replace 'imagenette' with 'imagenette160' or 'imag
yolo classify train data=imagenette160 model=yolov8n-cls.pt epochs=100 imgsz=160
```
-!!! Example "Train Example with ImageNette320"
+!!! example "Train Example with ImageNette320"
=== "Python"
@@ -122,7 +122,7 @@ The [ImageNette dataset](https://github.com/fastai/imagenette) is a simplified s
To train a YOLO model on the ImageNette dataset for 100 epochs, you can use the following commands. Make sure to have the Ultralytics YOLO environment set up.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -159,7 +159,7 @@ For more details on model training and dataset management, explore the [Dataset
Yes, the ImageNette dataset is also available in two resized versions: ImageNette160 and ImageNette320. These versions help in faster prototyping and are especially useful when computational resources are limited.
-!!! Example "Train Example with ImageNette160"
+!!! example "Train Example with ImageNette160"
=== "Python"
diff --git a/docs/en/datasets/classify/imagewoof.md b/docs/en/datasets/classify/imagewoof.md
index 0f6537453b..3abb512a0d 100644
--- a/docs/en/datasets/classify/imagewoof.md
+++ b/docs/en/datasets/classify/imagewoof.md
@@ -26,7 +26,7 @@ The ImageWoof dataset is widely used for training and evaluating deep learning m
To train a CNN model on the ImageWoof dataset for 100 epochs with an image size of 224x224, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -59,7 +59,7 @@ ImageWoof dataset comes in three different sizes to accommodate various research
To use these variants in your training, simply replace 'imagewoof' in the dataset argument with 'imagewoof320' or 'imagewoof160'. For example:
-!!! Example "Example"
+!!! example "Example"
=== "Python"
@@ -109,7 +109,7 @@ The [ImageWoof](https://github.com/fastai/imagenette) dataset is a challenging s
To train a Convolutional Neural Network (CNN) model on the ImageWoof dataset using Ultralytics YOLO for 100 epochs at an image size of 224x224, you can use the following code:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
diff --git a/docs/en/datasets/classify/index.md b/docs/en/datasets/classify/index.md
index bc3d719124..2b01824a96 100644
--- a/docs/en/datasets/classify/index.md
+++ b/docs/en/datasets/classify/index.md
@@ -78,7 +78,7 @@ This structured approach ensures that the model can effectively learn from well-
## Usage
-!!! Example
+!!! example
=== "Python"
@@ -194,7 +194,7 @@ For additional insights and real-world applications, you can explore [Ultralytic
Training a model using Ultralytics YOLO can be done easily in both Python and CLI. Here's an example:
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/datasets/classify/mnist.md b/docs/en/datasets/classify/mnist.md
index 6fcf5bd436..7ee4531392 100644
--- a/docs/en/datasets/classify/mnist.md
+++ b/docs/en/datasets/classify/mnist.md
@@ -34,7 +34,7 @@ The MNIST dataset is widely used for training and evaluating deep learning model
To train a CNN model on the MNIST dataset for 100 epochs with an image size of 32x32, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -69,7 +69,7 @@ If you use the MNIST dataset in your
research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -95,7 +95,7 @@ The [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, or Modified National Ins
To train a model on the MNIST dataset using Ultralytics YOLO, you can follow these steps:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
diff --git a/docs/en/datasets/detect/african-wildlife.md b/docs/en/datasets/detect/african-wildlife.md
index fce0cf54f3..d0c86a0314 100644
--- a/docs/en/datasets/detect/african-wildlife.md
+++ b/docs/en/datasets/detect/african-wildlife.md
@@ -35,7 +35,7 @@ This dataset can be applied in various computer vision tasks such as object dete
A YAML (YAML Ain't Markup Language) file defines the dataset configuration, including paths, classes, and other pertinent details. For the African wildlife dataset, the `african-wildlife.yaml` file is located at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/african-wildlife.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/african-wildlife.yaml).
-!!! Example "ultralytics/cfg/datasets/african-wildlife.yaml"
+!!! example "ultralytics/cfg/datasets/african-wildlife.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/african-wildlife.yaml"
@@ -45,7 +45,7 @@ A YAML (Yet Another Markup Language) file defines the dataset configuration, inc
To train a YOLOv8n model on the African wildlife dataset for 100 epochs with an image size of 640, use the provided code samples. For a comprehensive list of available parameters, refer to the model's [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -66,7 +66,7 @@ To train a YOLOv8n model on the African wildlife dataset for 100 epochs with an
yolo detect train data=african-wildlife.yaml model=yolov8n.pt epochs=100 imgsz=640
```
-!!! Example "Inference Example"
+!!! example "Inference Example"
=== "Python"
@@ -111,7 +111,7 @@ The African Wildlife Dataset includes images of four common animal species found
You can train a YOLOv8 model on the African Wildlife Dataset by using the `african-wildlife.yaml` configuration file. Below is an example of how to train the YOLOv8n model for 100 epochs with an image size of 640:
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/datasets/detect/argoverse.md b/docs/en/datasets/detect/argoverse.md
index c3a2c6e232..47ef822b08 100644
--- a/docs/en/datasets/detect/argoverse.md
+++ b/docs/en/datasets/detect/argoverse.md
@@ -8,7 +8,7 @@ keywords: Argoverse dataset, autonomous driving, 3D tracking, motion forecasting
The [Argoverse](https://www.argoverse.org/) dataset is a collection of data designed to support research in autonomous driving tasks, such as 3D tracking, motion forecasting, and stereo depth estimation. Developed by Argo AI, the dataset provides a wide range of high-quality sensor data, including high-resolution images, LiDAR point clouds, and map data.
-!!! Note
+!!! note
The Argoverse dataset `*.zip` file required for training was removed from Amazon S3 after the shutdown of Argo AI by Ford, but we have made it available for manual download on [Google Drive](https://drive.google.com/file/d/1st9qW3BeIwQsnR0t8mRpvbsSWIo16ACi/view?usp=drive_link).
@@ -35,7 +35,7 @@ The Argoverse dataset is widely used for training and evaluating deep learning m
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains the dataset's paths, classes, and other relevant details. For the Argoverse dataset, the `Argoverse.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/Argoverse.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/Argoverse.yaml).
-!!! Example "ultralytics/cfg/datasets/Argoverse.yaml"
+!!! example "ultralytics/cfg/datasets/Argoverse.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/Argoverse.yaml"
@@ -45,7 +45,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the Argoverse dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -80,7 +80,7 @@ The example showcases the variety and complexity of the data in the Argoverse da
If you use the Argoverse dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -106,7 +106,7 @@ The [Argoverse](https://www.argoverse.org/) dataset, developed by Argo AI, suppo
To train a YOLOv8 model with the Argoverse dataset, use the provided YAML configuration file and the following code:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
diff --git a/docs/en/datasets/detect/brain-tumor.md b/docs/en/datasets/detect/brain-tumor.md
index 38d69ff61a..aa48c5f9e4 100644
--- a/docs/en/datasets/detect/brain-tumor.md
+++ b/docs/en/datasets/detect/brain-tumor.md
@@ -34,7 +34,7 @@ The application of brain tumor detection using computer vision enables early dia
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains the dataset's paths, classes, and other relevant details. In the case of the brain tumor dataset, the `brain-tumor.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/brain-tumor.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/brain-tumor.yaml).
-!!! Example "ultralytics/cfg/datasets/brain-tumor.yaml"
+!!! example "ultralytics/cfg/datasets/brain-tumor.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/brain-tumor.yaml"
@@ -44,7 +44,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the brain tumor dataset for 100 epochs with an image size of 640, utilize the provided code snippets. For a detailed list of available arguments, consult the model's [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -65,7 +65,7 @@ To train a YOLOv8n model on the brain tumor dataset for 100 epochs with an image
yolo detect train data=brain-tumor.yaml model=yolov8n.pt epochs=100 imgsz=640
```
-!!! Example "Inference Example"
+!!! example "Inference Example"
=== "Python"
@@ -110,7 +110,7 @@ The brain tumor dataset is divided into two subsets: the **training set** consis
You can train a YOLOv8 model on the brain tumor dataset for 100 epochs with an image size of 640px using both Python and CLI methods. Below are the examples for both:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -142,7 +142,7 @@ Using the brain tumor dataset in AI projects enables early diagnosis and treatme
Inference using a fine-tuned YOLOv8 model can be performed with either Python or CLI approaches. Here are the examples:
-!!! Example "Inference Example"
+!!! example "Inference Example"
=== "Python"
diff --git a/docs/en/datasets/detect/coco.md b/docs/en/datasets/detect/coco.md
index b0d42b9bfb..25878ed913 100644
--- a/docs/en/datasets/detect/coco.md
+++ b/docs/en/datasets/detect/coco.md
@@ -52,7 +52,7 @@ The COCO dataset is widely used for training and evaluating deep learning models
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains the dataset's paths, classes, and other relevant details. In the case of the COCO dataset, the `coco.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml).
-!!! Example "ultralytics/cfg/datasets/coco.yaml"
+!!! example "ultralytics/cfg/datasets/coco.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/coco.yaml"
@@ -62,7 +62,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the COCO dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -97,7 +97,7 @@ The example showcases the variety and complexity of the images in the COCO datas
If you use the COCO dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -124,7 +124,7 @@ The [COCO dataset](https://cocodataset.org/#home) (Common Objects in Context) is
To train a YOLOv8 model using the COCO dataset, you can use the following code snippets:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
diff --git a/docs/en/datasets/detect/coco8.md b/docs/en/datasets/detect/coco8.md
index b2ed77b4e5..a0972693ea 100644
--- a/docs/en/datasets/detect/coco8.md
+++ b/docs/en/datasets/detect/coco8.md
@@ -27,7 +27,7 @@ This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains the dataset's paths, classes, and other relevant details. In the case of the COCO8 dataset, the `coco8.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8.yaml).
-!!! Example "ultralytics/cfg/datasets/coco8.yaml"
+!!! example "ultralytics/cfg/datasets/coco8.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/coco8.yaml"
@@ -37,7 +37,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the COCO8 dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -72,7 +72,7 @@ The example showcases the variety and complexity of the images in the COCO8 data
If you use the COCO dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -99,7 +99,7 @@ The Ultralytics COCO8 dataset is a compact yet versatile object detection datase
To train a YOLOv8 model using the COCO8 dataset, you can employ either Python or CLI commands. Here's how you can start:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
diff --git a/docs/en/datasets/detect/globalwheat2020.md b/docs/en/datasets/detect/globalwheat2020.md
index 8b8a0467ad..37b9759f36 100644
--- a/docs/en/datasets/detect/globalwheat2020.md
+++ b/docs/en/datasets/detect/globalwheat2020.md
@@ -30,7 +30,7 @@ The Global Wheat Head Dataset is widely used for training and evaluating deep le
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains the dataset's paths, classes, and other relevant details. For the Global Wheat Head Dataset, the `GlobalWheat2020.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/GlobalWheat2020.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/GlobalWheat2020.yaml).
-!!! Example "ultralytics/cfg/datasets/GlobalWheat2020.yaml"
+!!! example "ultralytics/cfg/datasets/GlobalWheat2020.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/GlobalWheat2020.yaml"
@@ -40,7 +40,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the Global Wheat Head Dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -75,7 +75,7 @@ The example showcases the variety and complexity of the data in the Global Wheat
If you use the Global Wheat Head Dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -100,7 +100,7 @@ The Global Wheat Head Dataset is primarily used for developing and training deep
To train a YOLOv8n model on the Global Wheat Head Dataset, you can use the following code snippets. Make sure you have the `GlobalWheat2020.yaml` configuration file specifying dataset paths and classes:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
diff --git a/docs/en/datasets/detect/index.md b/docs/en/datasets/detect/index.md
index e43cd46104..f76ba40bc2 100644
--- a/docs/en/datasets/detect/index.md
+++ b/docs/en/datasets/detect/index.md
@@ -48,7 +48,7 @@ When using the Ultralytics YOLO format, organize your training and validation im
Here's how you can use these formats to train your model:
-!!! Example
+!!! example
=== "Python"
@@ -100,7 +100,7 @@ If you have your own dataset and would like to use it for training detection mod
You can easily convert labels from the popular COCO dataset format to the YOLO format using the following code snippet:
-!!! Example
+!!! example
=== "Python"
@@ -164,7 +164,7 @@ Each dataset page provides detailed information on the structure and usage tailo
To start training a YOLOv8 model, ensure your dataset is formatted correctly and the paths are defined in a YAML file. Use the following script to begin training:
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/datasets/detect/lvis.md b/docs/en/datasets/detect/lvis.md
index afc91fc680..8c06920fd6 100644
--- a/docs/en/datasets/detect/lvis.md
+++ b/docs/en/datasets/detect/lvis.md
@@ -48,7 +48,7 @@ The LVIS dataset is widely used for training and evaluating deep learning models
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains the dataset's paths, classes, and other relevant details. In the case of the LVIS dataset, the `lvis.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/lvis.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/lvis.yaml).
-!!! Example "ultralytics/cfg/datasets/lvis.yaml"
+!!! example "ultralytics/cfg/datasets/lvis.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/lvis.yaml"
@@ -58,7 +58,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the LVIS dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -93,7 +93,7 @@ The example showcases the variety and complexity of the images in the LVIS datas
If you use the LVIS dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -118,7 +118,7 @@ The [LVIS dataset](https://www.lvisdataset.org/) is a large-scale dataset with f
To train a YOLOv8n model on the LVIS dataset for 100 epochs with an image size of 640, follow the example below. This process utilizes Ultralytics' framework, which offers comprehensive training features.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
diff --git a/docs/en/datasets/detect/objects365.md b/docs/en/datasets/detect/objects365.md
index ea95798fe1..f2b44f245a 100644
--- a/docs/en/datasets/detect/objects365.md
+++ b/docs/en/datasets/detect/objects365.md
@@ -30,7 +30,7 @@ The Objects365 dataset is widely used for training and evaluating deep learning
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains the dataset's paths, classes, and other relevant details. For the Objects365 Dataset, the `Objects365.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/Objects365.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/Objects365.yaml).
-!!! Example "ultralytics/cfg/datasets/Objects365.yaml"
+!!! example "ultralytics/cfg/datasets/Objects365.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/Objects365.yaml"
@@ -40,7 +40,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the Objects365 dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -75,7 +75,7 @@ The example showcases the variety and complexity of the data in the Objects365 d
If you use the Objects365 dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -101,7 +101,7 @@ The [Objects365 dataset](https://www.objects365.org/) is designed for object det
To train a YOLOv8n model using the Objects365 dataset for 100 epochs with an image size of 640, follow these instructions:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
diff --git a/docs/en/datasets/detect/open-images-v7.md b/docs/en/datasets/detect/open-images-v7.md
index 92958773e9..f6b0c63bc4 100644
--- a/docs/en/datasets/detect/open-images-v7.md
+++ b/docs/en/datasets/detect/open-images-v7.md
@@ -61,7 +61,7 @@ Open Images V7 is a cornerstone for training and evaluating state-of-the-art mod
Typically, datasets come with a YAML (YAML Ain't Markup Language) file that delineates the dataset's configuration. For Open Images V7, a hypothetical `OpenImagesV7.yaml` might exist. For accurate paths and configurations, refer to the dataset's official repository or documentation.
-!!! Example "OpenImagesV7.yaml"
+!!! example "OpenImagesV7.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/open-images-v7.yaml"
@@ -71,7 +71,7 @@ Typically, datasets come with a YAML (Yet Another Markup Language) file that del
To train a YOLOv8n model on the Open Images V7 dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Warning
+!!! warning
The complete Open Images V7 dataset comprises 1,743,042 training images and 41,620 validation images, requiring approximately **561 GB of storage space** upon download.
@@ -80,7 +80,7 @@ To train a YOLOv8n model on the Open Images V7 dataset for 100 epochs with an im
- Verify that your device has enough storage capacity.
- Ensure a robust and speedy internet connection.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -115,7 +115,7 @@ Researchers can gain invaluable insights into the array of computer vision chall
For those employing Open Images V7 in their work, it's prudent to cite the relevant papers and acknowledge the creators:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -140,7 +140,7 @@ Open Images V7 is an extensive and versatile dataset created by Google, designed
To train a YOLOv8 model on the Open Images V7 dataset, you can use both Python and CLI commands. Here's an example of training the YOLOv8n model for 100 epochs with an image size of 640:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
diff --git a/docs/en/datasets/detect/roboflow-100.md b/docs/en/datasets/detect/roboflow-100.md
index 870ecb842a..844326c381 100644
--- a/docs/en/datasets/detect/roboflow-100.md
+++ b/docs/en/datasets/detect/roboflow-100.md
@@ -37,11 +37,11 @@ This structure enables a diverse and extensive testing ground for object detecti
Dataset benchmarking evaluates machine learning model performance on specific datasets using standardized metrics like accuracy, mean average precision (mAP), and F1-score.
-!!! Tip "Benchmarking"
+!!! tip "Benchmarking"
Benchmarking results will be stored in "ultralytics-benchmarks/evaluation.txt".
-!!! Example "Benchmarking example"
+!!! example "Benchmarking example"
=== "Python"
@@ -113,7 +113,7 @@ The diversity in the Roboflow 100 benchmark that can be seen above is a signific
If you use the Roboflow 100 dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -139,7 +139,7 @@ The **Roboflow 100** dataset, developed by [Roboflow](https://roboflow.com/?ref=
To use the Roboflow 100 dataset for benchmarking, you can use the `RF100Benchmark` class from the Ultralytics library. Here's a brief example:
-!!! Example "Benchmarking example"
+!!! example "Benchmarking example"
=== "Python"
@@ -203,7 +203,7 @@ The **Roboflow 100** dataset is accessible on [GitHub](https://github.com/robofl
When using the Roboflow 100 dataset in your research, be sure to cite it properly. Here is the recommended citation:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
diff --git a/docs/en/datasets/detect/signature.md b/docs/en/datasets/detect/signature.md
index db82a59712..0d76e11f50 100644
--- a/docs/en/datasets/detect/signature.md
+++ b/docs/en/datasets/detect/signature.md
@@ -23,7 +23,7 @@ This dataset can be applied in various computer vision tasks such as object dete
A YAML (YAML Ain't Markup Language) file defines the dataset configuration, including path and class information. For the signature detection dataset, the `signature.yaml` file is located at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/signature.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/signature.yaml).
-!!! Example "ultralytics/cfg/datasets/signature.yaml"
+!!! example "ultralytics/cfg/datasets/signature.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/signature.yaml"
@@ -33,7 +33,7 @@ A YAML (Yet Another Markup Language) file defines the dataset configuration, inc
To train a YOLOv8n model on the signature detection dataset for 100 epochs with an image size of 640, use the provided code samples. For a comprehensive list of available parameters, refer to the model's [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -54,7 +54,7 @@ To train a YOLOv8n model on the signature detection dataset for 100 epochs with
yolo detect train data=signature.yaml model=yolov8n.pt epochs=100 imgsz=640
```
-!!! Example "Inference Example"
+!!! example "Inference Example"
=== "Python"
@@ -102,7 +102,7 @@ To train a YOLOv8n model on the Signature Detection Dataset, follow these steps:
1. Download the `signature.yaml` dataset configuration file from [signature.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/signature.yaml).
2. Use the following Python script or CLI command to start training:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -140,7 +140,7 @@ To perform inference using a model trained on the Signature Detection Dataset, f
1. Load your fine-tuned model.
2. Use the Python script or CLI command below to perform inference:
-!!! Example "Inference Example"
+!!! example "Inference Example"
=== "Python"
diff --git a/docs/en/datasets/detect/sku-110k.md b/docs/en/datasets/detect/sku-110k.md
index b307e5974e..145468321e 100644
--- a/docs/en/datasets/detect/sku-110k.md
+++ b/docs/en/datasets/detect/sku-110k.md
@@ -43,7 +43,7 @@ The SKU-110k dataset is widely used for training and evaluating deep learning mo
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains the dataset's paths, classes, and other relevant details. For the SKU-110K dataset, the `SKU-110K.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/SKU-110K.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/SKU-110K.yaml).
-!!! Example "ultralytics/cfg/datasets/SKU-110K.yaml"
+!!! example "ultralytics/cfg/datasets/SKU-110K.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/SKU-110K.yaml"
@@ -53,7 +53,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the SKU-110K dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -88,7 +88,7 @@ The example showcases the variety and complexity of the data in the SKU-110k dat
If you use the SKU-110k dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -113,7 +113,7 @@ The SKU-110k dataset consists of densely packed retail shelf images designed to
Training a YOLOv8 model on the SKU-110k dataset is straightforward. Here's an example to train a YOLOv8n model for 100 epochs with an image size of 640:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -165,7 +165,7 @@ These features make the SKU-110k dataset particularly valuable for training and
If you use the SKU-110k dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
diff --git a/docs/en/datasets/detect/visdrone.md b/docs/en/datasets/detect/visdrone.md
index 4473bd2d93..c1060e9989 100644
--- a/docs/en/datasets/detect/visdrone.md
+++ b/docs/en/datasets/detect/visdrone.md
@@ -39,7 +39,7 @@ The VisDrone dataset is widely used for training and evaluating deep learning mo
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains the dataset's paths, classes, and other relevant details. In the case of the Visdrone dataset, the `VisDrone.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/VisDrone.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/VisDrone.yaml).
-!!! Example "ultralytics/cfg/datasets/VisDrone.yaml"
+!!! example "ultralytics/cfg/datasets/VisDrone.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/VisDrone.yaml"
@@ -49,7 +49,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the VisDrone dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -84,7 +84,7 @@ The example showcases the variety and complexity of the data in the VisDrone dat
If you use the VisDrone dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -117,7 +117,7 @@ The [VisDrone Dataset](https://github.com/VisDrone/VisDrone-Dataset) is a large-
To train a YOLOv8 model on the VisDrone dataset for 100 epochs with an image size of 640, you can follow these steps:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -161,7 +161,7 @@ The configuration file for the VisDrone dataset, `VisDrone.yaml`, can be found i
If you use the VisDrone dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
diff --git a/docs/en/datasets/detect/voc.md b/docs/en/datasets/detect/voc.md
index 9efb527990..0afc920529 100644
--- a/docs/en/datasets/detect/voc.md
+++ b/docs/en/datasets/detect/voc.md
@@ -31,7 +31,7 @@ The VOC dataset is widely used for training and evaluating deep learning models
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains the dataset's paths, classes, and other relevant details. In the case of the VOC dataset, the `VOC.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/VOC.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/VOC.yaml).
-!!! Example "ultralytics/cfg/datasets/VOC.yaml"
+!!! example "ultralytics/cfg/datasets/VOC.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/VOC.yaml"
@@ -41,7 +41,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the VOC dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -76,7 +76,7 @@ The example showcases the variety and complexity of the images in the VOC datase
If you use the VOC dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -103,7 +103,7 @@ The [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/) (Visual Object Classes
To train a YOLOv8 model with the VOC dataset, you need the dataset configuration in a YAML file. Here's an example to start training a YOLOv8n model for 100 epochs with an image size of 640:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
diff --git a/docs/en/datasets/detect/xview.md b/docs/en/datasets/detect/xview.md
index 53afa1957a..e7e2f3d3f7 100644
--- a/docs/en/datasets/detect/xview.md
+++ b/docs/en/datasets/detect/xview.md
@@ -34,7 +34,7 @@ The xView dataset is widely used for training and evaluating deep learning model
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains the dataset's paths, classes, and other relevant details. In the case of the xView dataset, the `xView.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/xView.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/xView.yaml).
-!!! Example "ultralytics/cfg/datasets/xView.yaml"
+!!! example "ultralytics/cfg/datasets/xView.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/xView.yaml"
@@ -44,7 +44,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a model on the xView dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -79,7 +79,7 @@ The example showcases the variety and complexity of the data in the xView datase
If you use the xView dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -106,7 +106,7 @@ The [xView](http://xviewdataset.org/) dataset is one of the largest publicly ava
To train a model on the xView dataset using Ultralytics YOLO, follow these steps:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -147,7 +147,7 @@ The xView dataset comprises high-resolution satellite images collected from Worl
If you utilize the xView dataset in your research, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
diff --git a/docs/en/datasets/explorer/api.md b/docs/en/datasets/explorer/api.md
index 87a09d19cb..905142efa7 100644
--- a/docs/en/datasets/explorer/api.md
+++ b/docs/en/datasets/explorer/api.md
@@ -48,7 +48,7 @@ dataframe = explorer.get_similar(img="path/to/image.jpg")
dataframe = explorer.get_similar(idx=0)
```
-!!! Tip "Note"
+!!! note
The embeddings table for a given dataset and model pair is created only once and then reused. These tables use [LanceDB](https://lancedb.github.io/lancedb/) under the hood, which scales on-disk, so you can create and reuse embeddings for large datasets like COCO without running out of memory.
@@ -67,7 +67,7 @@ In case of multiple inputs, the aggregate of their embeddings is used.
You get a pandas dataframe with the `limit` number of most similar data points to the input, along with their distance in the embedding space. You can use this dataframe to perform further filtering.
-!!! Example "Semantic Search"
+!!! example "Semantic Search"
=== "Using Images"
@@ -110,7 +110,7 @@ You get a pandas dataframe with the `limit` number of most similar data points t
You can also plot the similar images using the `plot_similar` method. This method takes the same arguments as `get_similar` and plots the similar images in a grid.
-!!! Example "Plotting Similar Images"
+!!! example "Plotting Similar Images"
=== "Using Images"
@@ -143,7 +143,7 @@ You can also plot the similar images using the `plot_similar` method. This metho
This allows you to describe how you want to filter your dataset in natural language. You don't have to be proficient in writing SQL queries. Our AI-powered query generator will automatically do that under the hood. For example, you can say "show me 100 images with exactly one person and 2 dogs. There can be other objects too" and it'll internally generate the query and show you those results.
Note: This works using LLMs under the hood, so the results are probabilistic and might sometimes get things wrong.
-!!! Example "Ask AI"
+!!! example "Ask AI"
```python
from ultralytics import Explorer
@@ -165,7 +165,7 @@ Note: This works using LLMs under the hood so the results are probabilistic and
You can run SQL queries on your dataset using the `sql_query` method. This method takes a SQL query as input and returns a pandas dataframe with the results.
-!!! Example "SQL Query"
+!!! example "SQL Query"
```python
from ultralytics import Explorer
@@ -182,7 +182,7 @@ You can run SQL queries on your dataset using the `sql_query` method. This metho
You can also plot the results of a SQL query using the `plot_sql_query` method. This method takes the same arguments as `sql_query` and plots the results in a grid.
-!!! Example "Plotting SQL Query Results"
+!!! example "Plotting SQL Query Results"
```python
from ultralytics import Explorer
@@ -199,7 +199,9 @@ You can also plot the results of a SQL query using the `plot_sql_query` method.
You can also work with the embeddings table directly. Once the embeddings table is created, you can access it using the `Explorer.table` attribute.
-!!! Tip "Explorer works on [LanceDB](https://lancedb.github.io/lancedb/) tables internally. You can access this table directly, using `Explorer.table` object and run raw queries, push down pre- and post-filters, etc."
+!!! tip
+
+ Explorer works on [LanceDB](https://lancedb.github.io/lancedb/) tables internally. You can access this table directly using the `Explorer.table` object to run raw queries, push down pre- and post-filters, and more.
```python
from ultralytics import Explorer
@@ -213,7 +215,7 @@ Here are some examples of what you can do with the table:
### Get raw Embeddings
-!!! Example
+!!! example
```python
from ultralytics import Explorer
@@ -228,7 +230,7 @@ Here are some examples of what you can do with the table:
### Advanced Querying with pre- and post-filters
-!!! Example
+!!! example
```python
from ultralytics import Explorer
@@ -270,11 +272,11 @@ It returns a pandas dataframe with the following columns:
- `count`: Number of images in the dataset that are closer than `max_dist` to the current image
- `sim_im_files`: List of paths to the `count` similar images
-!!! Tip
+!!! tip
For a given dataset, model, `max_dist`, and `top_k`, the similarity index, once generated, will be reused. If your dataset has changed, or you simply need to regenerate the similarity index, you can pass `force=True`.
-!!! Example "Similarity Index"
+!!! example "Similarity Index"
```python
from ultralytics import Explorer
diff --git a/docs/en/datasets/index.md b/docs/en/datasets/index.md
index 97603b42f8..f1e154ef7f 100644
--- a/docs/en/datasets/index.md
+++ b/docs/en/datasets/index.md
@@ -127,7 +127,7 @@ Contributing a new dataset involves several steps to ensure that it aligns well
### Example Code to Optimize and Zip a Dataset
-!!! Example "Optimize and Zip a Dataset"
+!!! example "Optimize and Zip a Dataset"
=== "Python"
@@ -205,7 +205,7 @@ Discover more about YOLO on the [Ultralytics YOLO](https://www.ultralytics.com/y
To optimize and zip a dataset using Ultralytics tools, follow this example code:
-!!! Example "Optimize and Zip a Dataset"
+!!! example "Optimize and Zip a Dataset"
=== "Python"
diff --git a/docs/en/datasets/obb/dota-v2.md b/docs/en/datasets/obb/dota-v2.md
index 7de8209a2b..70cac23cd2 100644
--- a/docs/en/datasets/obb/dota-v2.md
+++ b/docs/en/datasets/obb/dota-v2.md
@@ -60,7 +60,7 @@ DOTA serves as a benchmark for training and evaluating models specifically tailo
Typically, datasets incorporate a YAML (YAML Ain't Markup Language) file detailing the dataset's configuration. For DOTA v1 and DOTA v1.5, Ultralytics provides `DOTAv1.yaml` and `DOTAv1.5.yaml` files. For additional details on these, as well as DOTA v2, please consult DOTA's official repository and documentation.
-!!! Example "DOTAv1.yaml"
+!!! example "DOTAv1.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/DOTAv1.yaml"
@@ -70,7 +70,7 @@ Typically, datasets incorporate a YAML (Yet Another Markup Language) file detail
To train on the DOTA dataset, we split the original high-resolution DOTA images into 1024x1024 images in a multiscale fashion.
-!!! Example "Split images"
+!!! example "Split images"
=== "Python"
@@ -97,11 +97,11 @@ To train DOTA dataset, we split original DOTA images with high-resolution into i
To train a model on the DOTA v1 dataset, you can utilize the following code snippets. Always refer to your model's documentation for a thorough list of available arguments.
-!!! Warning
+!!! warning
Please note that all images and associated annotations in the DOTAv1 dataset can be used for academic purposes, but commercial use is prohibited. Your understanding and respect for the dataset creators' wishes are greatly appreciated!
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -136,7 +136,7 @@ The dataset's richness offers invaluable insights into object detection challeng
For those leveraging DOTA in their work, it's important to cite the relevant research papers:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -169,7 +169,7 @@ DOTA utilizes Oriented Bounding Boxes (OBB) for annotation, which are represente
To train a model on the DOTA dataset, you can use the following example with Ultralytics YOLO:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -204,7 +204,7 @@ For a detailed comparison and additional specifics, check the [dataset versions
DOTA images, which can be very large, are split into smaller tiles at manageable resolutions for training. Here's a Python snippet to split images:
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/datasets/obb/dota8.md b/docs/en/datasets/obb/dota8.md
index 5a8bb29535..0188c31785 100644
--- a/docs/en/datasets/obb/dota8.md
+++ b/docs/en/datasets/obb/dota8.md
@@ -16,7 +16,7 @@ This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains the dataset's paths, classes, and other relevant details. In the case of the DOTA8 dataset, the `dota8.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/dota8.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/dota8.yaml).
-!!! Example "ultralytics/cfg/datasets/dota8.yaml"
+!!! example "ultralytics/cfg/datasets/dota8.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/dota8.yaml"
@@ -26,7 +26,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n-obb model on the DOTA8 dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -61,7 +61,7 @@ The example showcases the variety and complexity of the images in the DOTA8 data
If you use the DOTA dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -90,7 +90,7 @@ The DOTA8 dataset is a small, versatile oriented object detection dataset made u
To train a YOLOv8n-obb model on the DOTA8 dataset for 100 epochs with an image size of 640, you can use the following code snippets. For comprehensive argument options, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
diff --git a/docs/en/datasets/obb/index.md b/docs/en/datasets/obb/index.md
index d22d112172..02cd08e311 100644
--- a/docs/en/datasets/obb/index.md
+++ b/docs/en/datasets/obb/index.md
@@ -32,7 +32,7 @@ An example of a `*.txt` label file for the above image, which contains an object
To train a model using these OBB formats:
-!!! Example
+!!! example
=== "Python"
@@ -70,7 +70,7 @@ For those looking to introduce their own datasets with oriented bounding boxes,
Transitioning labels from the DOTA dataset format to the YOLO OBB format can be achieved with this script:
-!!! Example
+!!! example
=== "Python"
@@ -106,7 +106,7 @@ This script will reformat your DOTA annotations into a YOLO-compatible format.
Training a YOLOv8 model with OBBs involves ensuring your dataset is in the YOLO OBB format and then using the Ultralytics API to train the model. Here's an example in both Python and CLI:
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/datasets/pose/coco.md b/docs/en/datasets/pose/coco.md
index 02bdb3e1a0..8addbd96e3 100644
--- a/docs/en/datasets/pose/coco.md
+++ b/docs/en/datasets/pose/coco.md
@@ -43,7 +43,7 @@ The COCO-Pose dataset is specifically used for training and evaluating deep lear
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains the dataset's paths, classes, and other relevant details. In the case of the COCO-Pose dataset, the `coco-pose.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco-pose.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco-pose.yaml).
-!!! Example "ultralytics/cfg/datasets/coco-pose.yaml"
+!!! example "ultralytics/cfg/datasets/coco-pose.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/coco-pose.yaml"
@@ -53,7 +53,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n-pose model on the COCO-Pose dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -88,7 +88,7 @@ The example showcases the variety and complexity of the images in the COCO-Pose
If you use the COCO-Pose dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -115,7 +115,7 @@ The [COCO-Pose](https://cocodataset.org/#keypoints-2017) dataset is a specialize
Training a YOLOv8 model on the COCO-Pose dataset can be accomplished using either Python or CLI commands. For example, to train a YOLOv8n-pose model for 100 epochs with an image size of 640, you can follow the steps below:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
diff --git a/docs/en/datasets/pose/coco8-pose.md b/docs/en/datasets/pose/coco8-pose.md
index eeb0c3ddc3..c5847e1129 100644
--- a/docs/en/datasets/pose/coco8-pose.md
+++ b/docs/en/datasets/pose/coco8-pose.md
@@ -16,7 +16,7 @@ This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO8-Pose dataset, the `coco8-pose.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8-pose.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8-pose.yaml).
-!!! Example "ultralytics/cfg/datasets/coco8-pose.yaml"
+!!! example "ultralytics/cfg/datasets/coco8-pose.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/coco8-pose.yaml"
@@ -26,7 +26,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n-pose model on the COCO8-Pose dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -61,7 +61,7 @@ The example showcases the variety and complexity of the images in the COCO8-Pose
If you use the COCO dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -88,7 +88,7 @@ The COCO8-Pose dataset is a small, versatile pose detection dataset that include
To train a YOLOv8n-pose model on the COCO8-Pose dataset for 100 epochs with an image size of 640, follow these examples:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
diff --git a/docs/en/datasets/pose/index.md b/docs/en/datasets/pose/index.md
index 700713d75e..4288497174 100644
--- a/docs/en/datasets/pose/index.md
+++ b/docs/en/datasets/pose/index.md
@@ -64,7 +64,7 @@ The `train` and `val` fields specify the paths to the directories containing the
## Usage
-!!! Example
+!!! example
=== "Python"
@@ -126,7 +126,7 @@ If you have your own dataset and would like to use it for training pose estimati
Ultralytics provides a convenient conversion tool to convert labels from the popular COCO dataset format to YOLO format:
-!!! Example
+!!! example
=== "Python"
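A sketch of the conversion call, assuming the `convert_coco` helper in `ultralytics.data.converter`; the annotations directory is a placeholder:

```python
from ultralytics.data.converter import convert_coco

# Convert COCO keypoint annotations into YOLO pose label files
convert_coco(labels_dir="path/to/coco/annotations/", use_keypoints=True)  # placeholder path
```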
diff --git a/docs/en/datasets/pose/tiger-pose.md b/docs/en/datasets/pose/tiger-pose.md
index 2462cf108c..13f62232d6 100644
--- a/docs/en/datasets/pose/tiger-pose.md
+++ b/docs/en/datasets/pose/tiger-pose.md
@@ -29,7 +29,7 @@ This dataset is intended for use with [Ultralytics HUB](https://hub.ultralytics.
A YAML (YAML Ain't Markup Language) file serves as the means to specify the configuration details of a dataset. It encompasses crucial data such as file paths, class definitions, and other pertinent information. Specifically, for the `tiger-pose.yaml` file, you can check [Ultralytics Tiger-Pose Dataset Configuration File](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/tiger-pose.yaml).
-!!! Example "ultralytics/cfg/datasets/tiger-pose.yaml"
+!!! example "ultralytics/cfg/datasets/tiger-pose.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/tiger-pose.yaml"
@@ -39,7 +39,7 @@ A YAML (Yet Another Markup Language) file serves as the means to specify the con
To train a YOLOv8n-pose model on the Tiger-Pose dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -72,7 +72,7 @@ The example showcases the variety and complexity of the images in the Tiger-Pose
## Inference Example
-!!! Example "Inference Example"
+!!! example "Inference Example"
=== "Python"
@@ -107,7 +107,7 @@ The Ultralytics Tiger-Pose dataset is designed for pose estimation tasks, consis
To train a YOLOv8n-pose model on the Tiger-Pose dataset for 100 epochs with an image size of 640, use the following code snippets. For more details, visit the [Training](../../modes/train.md) page:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -137,7 +137,7 @@ The `tiger-pose.yaml` file is used to specify the configuration details of the T
To perform inference using a YOLOv8 model trained on the Tiger-Pose dataset, you can use the following code snippets. For a detailed guide, visit the [Prediction](../../modes/predict.md) page:
-!!! Example "Inference Example"
+!!! example "Inference Example"
=== "Python"
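A minimal inference sketch for this tab; the checkpoint path stands in for a model fine-tuned on Tiger-Pose:

```python
from ultralytics import YOLO

# Load a fine-tuned pose model (placeholder path)
model = YOLO("path/to/best.pt")

# Predict keypoints on an image and render the result
results = model("https://ultralytics.com/images/bus.jpg")
annotated = results[0].plot()  # BGR image with keypoints drawn
```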
diff --git a/docs/en/datasets/segment/carparts-seg.md b/docs/en/datasets/segment/carparts-seg.md
index 621fe9f2ba..6283cae8ce 100644
--- a/docs/en/datasets/segment/carparts-seg.md
+++ b/docs/en/datasets/segment/carparts-seg.md
@@ -37,7 +37,7 @@ Carparts Segmentation finds applications in automotive quality control, auto rep
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the Carparts Segmentation dataset, the `carparts-seg.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/carparts-seg.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/carparts-seg.yaml).
-!!! Example "ultralytics/cfg/datasets/carparts-seg.yaml"
+!!! example "ultralytics/cfg/datasets/carparts-seg.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/carparts-seg.yaml"
@@ -47,7 +47,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train an Ultralytics YOLOv8n model on the Carparts Segmentation dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -81,7 +81,7 @@ The Carparts Segmentation dataset includes a diverse array of images and videos
If you integrate the Carparts Segmentation dataset into your research or development projects, please make reference to the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -112,7 +112,7 @@ The [Roboflow Carparts Segmentation Dataset](https://universe.roboflow.com/gianm
To train a YOLOv8 model on the Carparts Segmentation dataset, you can follow these steps:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
diff --git a/docs/en/datasets/segment/coco.md b/docs/en/datasets/segment/coco.md
index ad372675fa..5c6b56402d 100644
--- a/docs/en/datasets/segment/coco.md
+++ b/docs/en/datasets/segment/coco.md
@@ -41,7 +41,7 @@ COCO-Seg is widely used for training and evaluating deep learning models in inst
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO-Seg dataset, the `coco.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml).
-!!! Example "ultralytics/cfg/datasets/coco.yaml"
+!!! example "ultralytics/cfg/datasets/coco.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/coco.yaml"
@@ -51,7 +51,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -86,7 +86,7 @@ The example showcases the variety and complexity of the images in the COCO-Seg d
If you use the COCO-Seg dataset in your research or development work, please cite the original COCO paper and acknowledge the extension to COCO-Seg:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -113,7 +113,7 @@ The [COCO-Seg](https://cocodataset.org/#home) dataset is an extension of the ori
To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a detailed list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
diff --git a/docs/en/datasets/segment/coco8-seg.md b/docs/en/datasets/segment/coco8-seg.md
index 15128ed24e..fb505a5615 100644
--- a/docs/en/datasets/segment/coco8-seg.md
+++ b/docs/en/datasets/segment/coco8-seg.md
@@ -16,7 +16,7 @@ This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO8-Seg dataset, the `coco8-seg.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8-seg.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8-seg.yaml).
-!!! Example "ultralytics/cfg/datasets/coco8-seg.yaml"
+!!! example "ultralytics/cfg/datasets/coco8-seg.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/coco8-seg.yaml"
@@ -26,7 +26,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n-seg model on the COCO8-Seg dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
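A minimal training sketch for this tab, using the packaged `coco8-seg.yaml` config:

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8n segmentation model
model = YOLO("yolov8n-seg.pt")

# Train on COCO8-Seg for 100 epochs at an image size of 640
results = model.train(data="coco8-seg.yaml", epochs=100, imgsz=640)
```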
@@ -61,7 +61,7 @@ The example showcases the variety and complexity of the images in the COCO8-Seg
If you use the COCO dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -88,7 +88,7 @@ The **COCO8-Seg dataset** is a compact instance segmentation dataset by Ultralyt
To train a **YOLOv8n-seg** model on the COCO8-Seg dataset for 100 epochs with an image size of 640, you can use Python or CLI commands. Here's a quick example:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
diff --git a/docs/en/datasets/segment/crack-seg.md b/docs/en/datasets/segment/crack-seg.md
index 32113dfc6d..ed66d7cf94 100644
--- a/docs/en/datasets/segment/crack-seg.md
+++ b/docs/en/datasets/segment/crack-seg.md
@@ -26,7 +26,7 @@ Crack segmentation finds practical applications in infrastructure maintenance, a
A YAML (YAML Ain't Markup Language) file is employed to outline the configuration of the dataset, encompassing details about paths, classes, and other pertinent information. Specifically, for the Crack Segmentation dataset, the `crack-seg.yaml` file is managed and accessible at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/crack-seg.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/crack-seg.yaml).
-!!! Example "ultralytics/cfg/datasets/crack-seg.yaml"
+!!! example "ultralytics/cfg/datasets/crack-seg.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/crack-seg.yaml"
@@ -36,7 +36,7 @@ A YAML (Yet Another Markup Language) file is employed to outline the configurati
To train an Ultralytics YOLOv8n model on the Crack Segmentation dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -71,7 +71,7 @@ The Crack Segmentation dataset comprises a varied collection of images and video
If you incorporate the crack segmentation dataset into your research or development endeavors, kindly reference the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -102,7 +102,7 @@ The [Roboflow Crack Segmentation Dataset](https://universe.roboflow.com/universi
To train an Ultralytics YOLOv8 model on the Crack Segmentation dataset, use the following code snippets. Detailed instructions and further parameters can be found on the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -135,7 +135,7 @@ Ultralytics YOLO offers advanced real-time object detection, segmentation, and c
If you incorporate the Crack Segmentation Dataset into your research, please use the following BibTeX reference:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
diff --git a/docs/en/datasets/segment/index.md b/docs/en/datasets/segment/index.md
index 09c9279715..838dda2c7a 100644
--- a/docs/en/datasets/segment/index.md
+++ b/docs/en/datasets/segment/index.md
@@ -33,7 +33,7 @@ Here is an example of the YOLO dataset format for a single image with two object
1 0.504 0.000 0.501 0.004 0.498 0.004 0.493 0.010 0.492 0.0104
```
-!!! Tip "Tip"
+!!! tip "Tip"
- The length of each row does **not** have to be equal.
- Each segmentation label must have a **minimum of 3 xy points**: `<class-index> <x1> <y1> <x2> <y2> <x3> <y3>`
@@ -66,7 +66,7 @@ The `train` and `val` fields specify the paths to the directories containing the
## Usage
-!!! Example
+!!! example
=== "Python"
@@ -108,7 +108,7 @@ If you have your own dataset and would like to use it for training segmentation
You can easily convert labels from the popular COCO dataset format to the YOLO format using the following code snippet:
-!!! Example
+!!! example
=== "Python"
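A sketch of the segmentation variant of the converter, again via `convert_coco`; the directory is a placeholder:

```python
from ultralytics.data.converter import convert_coco

# Convert COCO instance masks into YOLO segmentation label files
convert_coco(labels_dir="path/to/coco/annotations/", use_segments=True)  # placeholder path
```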
@@ -130,7 +130,7 @@ Auto-annotation is an essential feature that allows you to generate a segmentati
To auto-annotate your dataset using the Ultralytics framework, you can use the `auto_annotate` function as shown below:
-!!! Example
+!!! example
=== "Python"
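A sketch of the call, assuming the `auto_annotate` helper in `ultralytics.data.annotator`; the image directory is a placeholder:

```python
from ultralytics.data.annotator import auto_annotate

# Detect objects with YOLOv8, then refine each box into a mask with SAM
auto_annotate(data="path/to/images", det_model="yolov8x.pt", sam_model="sam_b.pt")  # placeholder data path
```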
diff --git a/docs/en/datasets/segment/package-seg.md b/docs/en/datasets/segment/package-seg.md
index e228c4c505..ebdf4e6db2 100644
--- a/docs/en/datasets/segment/package-seg.md
+++ b/docs/en/datasets/segment/package-seg.md
@@ -26,7 +26,7 @@ Package segmentation, facilitated by the Package Segmentation Dataset, is crucia
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the Package Segmentation dataset, the `package-seg.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/package-seg.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/package-seg.yaml).
-!!! Example "ultralytics/cfg/datasets/package-seg.yaml"
+!!! example "ultralytics/cfg/datasets/package-seg.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/package-seg.yaml"
@@ -36,7 +36,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train an Ultralytics YOLOv8n model on the Package Segmentation dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -70,7 +70,7 @@ The Package Segmentation dataset comprises a varied collection of images and vid
If you integrate the Package Segmentation dataset into your research or development initiatives, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -101,7 +101,7 @@ The [Roboflow Package Segmentation Dataset](https://universe.roboflow.com/factor
You can train an Ultralytics YOLOv8n model using both Python and CLI methods. Use the snippets below:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
diff --git a/docs/en/datasets/track/index.md b/docs/en/datasets/track/index.md
index 2e7735d1df..a2e5c13107 100644
--- a/docs/en/datasets/track/index.md
+++ b/docs/en/datasets/track/index.md
@@ -12,7 +12,7 @@ Multi-Object Detector doesn't need standalone training and directly supports pre
## Usage
-!!! Example
+!!! example
=== "Python"
@@ -35,7 +35,7 @@ Multi-Object Detector doesn't need standalone training and directly supports pre
To use Multi-Object Tracking with Ultralytics YOLO, you can start by using the Python or CLI examples provided. Here is how you can get started:
-!!! Example
+!!! example
=== "Python"
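A minimal tracking sketch for this tab, assuming the standard `model.track()` API; the video source is a placeholder:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Track objects across frames; no tracker-specific training is required
results = model.track(source="path/to/video.mp4", show=True)  # placeholder source
```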
diff --git a/docs/en/guides/analytics.md b/docs/en/guides/analytics.md
index a29e6abe45..d5e74038af 100644
--- a/docs/en/guides/analytics.md
+++ b/docs/en/guides/analytics.md
@@ -22,7 +22,7 @@ This guide provides a comprehensive overview of three fundamental types of data
- Bar plots, on the other hand, are suitable for comparing quantities across different categories and showing relationships between a category and its numerical value.
- Lastly, pie charts are effective for illustrating proportions among categories and showing parts of a whole.
-!!! Analytics "Analytics Examples"
+!!! analytics "Analytics Examples"
=== "Line Graph"
diff --git a/docs/en/guides/coral-edge-tpu-on-raspberry-pi.md b/docs/en/guides/coral-edge-tpu-on-raspberry-pi.md
index 4e5a02d3f7..ca09169000 100644
--- a/docs/en/guides/coral-edge-tpu-on-raspberry-pi.md
+++ b/docs/en/guides/coral-edge-tpu-on-raspberry-pi.md
@@ -85,7 +85,7 @@ After installing the runtime, you need to plug in your Coral Edge TPU into a USB
To use the Edge TPU, you need to convert your model into a compatible format. It is recommended that you run the export on Google Colab, an x86_64 Linux machine, using the official [Ultralytics Docker container](docker-quickstart.md), or using [Ultralytics HUB](../hub/quickstart.md), since the Edge TPU compiler is not available on ARM. See the [Export Mode](../modes/export.md) for the available arguments.
-!!! Exporting the model
+!!! example "Exporting the model"
=== "Python"
@@ -111,7 +111,7 @@ The exported model will be saved in the `_saved_model/` folder with
After exporting your model, you can run inference with it using the following code:
-!!! Running the model
+!!! example "Running the model"
=== "Python"
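A minimal inference sketch for this tab; the filename is representative of what the Edge TPU export produces:

```python
from ultralytics import YOLO

# Load the Edge TPU-compiled TFLite model (representative filename)
model = YOLO("yolov8n_full_integer_quant_edgetpu.tflite")

# Run inference on an image
results = model("https://ultralytics.com/images/bus.jpg")
```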
@@ -170,7 +170,7 @@ Make sure to uninstall any previous Coral Edge TPU runtime versions by following
Yes, you can export your Ultralytics YOLOv8 model to be compatible with the Coral Edge TPU. It is recommended to perform the export on Google Colab, an x86_64 Linux machine, or using the [Ultralytics Docker container](docker-quickstart.md). You can also use Ultralytics HUB for exporting. Here is how you can export your model using Python and CLI:
-!!! Exporting the model
+!!! example "Exporting the model"
=== "Python"
@@ -212,7 +212,7 @@ For a specific wheel, such as TensorFlow 2.15.0 `tflite-runtime`, you can downlo
After exporting your YOLOv8 model to an Edge TPU-compatible format, you can run inference using the following code snippets:
-!!! Running the model
+!!! example "Running the model"
=== "Python"
diff --git a/docs/en/guides/deepstream-nvidia-jetson.md b/docs/en/guides/deepstream-nvidia-jetson.md
index 86cb76330c..cd114b8690 100644
--- a/docs/en/guides/deepstream-nvidia-jetson.md
+++ b/docs/en/guides/deepstream-nvidia-jetson.md
@@ -21,7 +21,7 @@ This comprehensive guide provides a detailed walkthrough for deploying Ultralyti
-!!! Note
+!!! note
This guide has been tested with both [Seeed Studio reComputer J4012](https://www.seeedstudio.com/reComputer-J4012-p-5586.html), which is based on NVIDIA Jetson Orin NX 16GB running the JetPack release [JP5.1.3](https://developer.nvidia.com/embedded/jetpack-sdk-513), and [Seeed Studio reComputer J1020 v2](https://www.seeedstudio.com/reComputer-J1020-v2-p-5498.html), which is based on NVIDIA Jetson Nano 4GB running the JetPack release [JP4.6.4](https://developer.nvidia.com/jetpack-sdk-464). It is expected to work across the entire NVIDIA Jetson hardware lineup, including the latest and legacy devices.
@@ -39,7 +39,7 @@ Before you start to follow this guide:
- For JetPack 4.6.4, install [DeepStream 6.0.1](https://docs.nvidia.com/metropolis/deepstream/6.0.1/dev-guide/text/DS_Quickstart.html)
- For JetPack 5.1.3, install [DeepStream 6.3](https://docs.nvidia.com/metropolis/deepstream/6.3/dev-guide/text/DS_Quickstart.html)
-!!! Tip
+!!! tip
In this guide, we have used the Debian package method of installing the DeepStream SDK on the Jetson device. You can also visit the [DeepStream SDK on Jetson (Archived)](https://developer.nvidia.com/embedded/deepstream-on-jetson-downloads-archived) to access legacy versions of DeepStream.
@@ -67,7 +67,7 @@ Here we are using [marcoslucianops/DeepStream-Yolo](https://github.com/marcosluc
wget https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s.pt
```
- !!! Note
+ !!! note
You can also use a [custom trained YOLOv8 model](https://docs.ultralytics.com/modes/train/).
@@ -77,7 +77,7 @@ Here we are using [marcoslucianops/DeepStream-Yolo](https://github.com/marcosluc
python3 utils/export_yoloV8.py -w yolov8s.pt
```
- !!! Note "Pass the below arguments to the above command"
+ !!! note "Pass the below arguments to the above command"
For DeepStream 6.0.1, use opset 12 or lower. The default opset is 16.
@@ -175,13 +175,13 @@ Here we are using [marcoslucianops/DeepStream-Yolo](https://github.com/marcosluc
deepstream-app -c deepstream_app_config.txt
```
-!!! Note
+!!! note
Generating the TensorRT engine file before starting the inference will take a long time, so please be patient.
-!!! Tip
+!!! tip
If you want to convert the model to FP16 precision, simply set `model-engine-file=model_b1_gpu0_fp16.engine` and `network-mode=2` inside `config_infer_primary_yoloV8.txt`
@@ -217,7 +217,7 @@ If you want to use INT8 precision for inference, you need to follow the steps be
done
```
- !!! Note
+ !!! note
NVIDIA recommends at least 500 images to get good accuracy. In this example, 1000 images are chosen to get better accuracy (more images = more accuracy). You can set the count with **head -1000**; for example, use **head -2000** for 2000 images. This process can take a long time.
@@ -234,7 +234,7 @@ If you want to use INT8 precision for inference, you need to follow the steps be
export INT8_CALIB_BATCH_SIZE=1
```
- !!! Note
+ !!! note
Higher INT8_CALIB_BATCH_SIZE values will result in more accuracy and faster calibration speed. Set it according to your GPU memory.
diff --git a/docs/en/guides/distance-calculation.md b/docs/en/guides/distance-calculation.md
index 761d4ea9cf..00af619bf2 100644
--- a/docs/en/guides/distance-calculation.md
+++ b/docs/en/guides/distance-calculation.md
@@ -36,7 +36,7 @@ Measuring the gap between two objects is known as distance calculation within a
- Click on any two bounding boxes with the left mouse button to calculate the distance
-!!! Example "Distance Calculation using YOLOv8 Example"
+!!! example "Distance Calculation using YOLOv8 Example"
=== "Video Stream"
diff --git a/docs/en/guides/heatmaps.md b/docs/en/guides/heatmaps.md
index 4073f45371..8397269510 100644
--- a/docs/en/guides/heatmaps.md
+++ b/docs/en/guides/heatmaps.md
@@ -39,7 +39,7 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
- `heatmap_alpha`: Ensure this value is within the range (0.0 - 1.0).
- `decay_factor`: Used to remove the heatmap after an object is no longer in the frame; its value should also be in the range (0.0 - 1.0).
-!!! Example "Heatmaps using Ultralytics YOLOv8 Example"
+!!! example "Heatmaps using Ultralytics YOLOv8 Example"
=== "Heatmap"
diff --git a/docs/en/guides/hyperparameter-tuning.md b/docs/en/guides/hyperparameter-tuning.md
index ec95d2b8e1..b9b1723d32 100644
--- a/docs/en/guides/hyperparameter-tuning.md
+++ b/docs/en/guides/hyperparameter-tuning.md
@@ -69,7 +69,7 @@ The process is repeated until either the set number of iterations is reached or
Here's how to use the `model.tune()` method to utilize the `Tuner` class for hyperparameter tuning of YOLOv8n on COCO8 for 30 epochs with an AdamW optimizer, skipping plotting, checkpointing, and validation except on the final epoch for faster tuning.
-!!! Example
+!!! example
=== "Python"
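A sketch of the tuning call for this tab, mirroring the parameters described above:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Tune on COCO8: 30 epochs per iteration with AdamW, disabling plots,
# checkpointing, and per-epoch validation for a faster search
model.tune(data="coco8.yaml", epochs=30, iterations=300, optimizer="AdamW", plots=False, save=False, val=False)
```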
@@ -212,7 +212,7 @@ For deeper insights, you can explore the `Tuner` class source code and accompany
To optimize the learning rate for Ultralytics YOLO, start by setting an initial learning rate using the `lr0` parameter. Common values range from `0.001` to `0.01`. During the hyperparameter tuning process, this value will be mutated to find the optimal setting. You can utilize the `model.tune()` method to automate this process. For example:
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/guides/index.md b/docs/en/guides/index.md
index e9c046a929..e1cb5341c0 100644
--- a/docs/en/guides/index.md
+++ b/docs/en/guides/index.md
@@ -68,7 +68,7 @@ Let's work together to make the Ultralytics YOLO ecosystem more robust and versa
Training a custom object detection model with Ultralytics YOLO is straightforward. Start by preparing your dataset in the correct format and installing the Ultralytics package. Use the following code to initiate training:
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/guides/instance-segmentation-and-tracking.md b/docs/en/guides/instance-segmentation-and-tracking.md
index 54127140a6..52a11a3ae0 100644
--- a/docs/en/guides/instance-segmentation-and-tracking.md
+++ b/docs/en/guides/instance-segmentation-and-tracking.md
@@ -34,7 +34,7 @@ There are two types of instance segmentation tracking available in the Ultralyti
| ![Ultralytics Instance Segmentation](https://github.com/ultralytics/docs/releases/download/0/ultralytics-instance-segmentation.avif) | ![Ultralytics Instance Segmentation with Object Tracking](https://github.com/ultralytics/docs/releases/download/0/ultralytics-instance-segmentation-object-tracking.avif) |
| Ultralytics Instance Segmentation 😍 | Ultralytics Instance Segmentation with Object Tracking 🔥 |
-!!! Example "Instance Segmentation and Tracking"
+!!! example "Instance Segmentation and Tracking"
=== "Instance Segmentation"
@@ -146,7 +146,7 @@ For any inquiries, feel free to post your questions in the [Ultralytics Issue Se
To perform instance segmentation using Ultralytics YOLOv8, initialize the YOLO model with a segmentation version of YOLOv8 and process video frames through it. Here's a simplified code example:
-!!! Example
+!!! example
=== "Python"
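A minimal sketch of such a frame loop, assuming OpenCV for video I/O; the video path is a placeholder:

```python
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")
cap = cv2.VideoCapture("path/to/video.mp4")  # placeholder source

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame)         # per-frame instance segmentation
    annotated = results[0].plot()  # draw masks and boxes
    cv2.imshow("Instance Segmentation", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```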
@@ -200,7 +200,7 @@ Ultralytics YOLOv8 offers real-time performance, superior accuracy, and ease of
To implement object tracking, use the `model.track` method and ensure that each object's ID is consistently assigned across frames. Below is a simple example:
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/guides/model-deployment-options.md b/docs/en/guides/model-deployment-options.md
index 353dcabd68..635716e581 100644
--- a/docs/en/guides/model-deployment-options.md
+++ b/docs/en/guides/model-deployment-options.md
@@ -331,7 +331,7 @@ For more insights, check out our [blog post](https://www.ultralytics.com/blog/ac
Yes, YOLOv8 models can be deployed on mobile devices using TensorFlow Lite (TF Lite) for both Android and iOS platforms. TF Lite is designed for mobile and embedded devices, providing efficient on-device inference.
-!!! Example
+!!! example
=== "Python"
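A minimal export sketch for this tab, assuming the standard `model.export()` API:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Export to TensorFlow Lite for on-device mobile inference
model.export(format="tflite")
```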
diff --git a/docs/en/guides/model-evaluation-insights.md b/docs/en/guides/model-evaluation-insights.md
index 08c3da64e1..22c44e5df0 100644
--- a/docs/en/guides/model-evaluation-insights.md
+++ b/docs/en/guides/model-evaluation-insights.md
@@ -63,7 +63,7 @@ The `imgsz` validation parameter sets the maximum dimension for image resizing,
If you want to get a deeper understanding of your YOLOv8 model's performance, you can easily access specific evaluation metrics with a few lines of Python code. The code snippet below will let you load your model, run an evaluation, and print out various metrics that show how well your model is doing.
-!!! Example "Usage"
+!!! example "Usage"
=== "Python"
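A sketch of the metrics access this tab describes; `metrics.box` holds the detection mAP values:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # or a custom-trained checkpoint

# Validate and print the key detection metrics
metrics = model.val()
print(metrics.box.map)    # mAP50-95
print(metrics.box.map50)  # mAP50
print(metrics.box.map75)  # mAP75
```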
@@ -165,7 +165,7 @@ Improving mean average precision (mAP) for a YOLOv8 model involves several steps
You can access YOLOv8 model evaluation metrics using Python with the following steps:
-!!! Example "Usage"
+!!! example "Usage"
=== "Python"
diff --git a/docs/en/guides/nvidia-jetson.md b/docs/en/guides/nvidia-jetson.md
index 7e6bd41181..9f5dbc9ba7 100644
--- a/docs/en/guides/nvidia-jetson.md
+++ b/docs/en/guides/nvidia-jetson.md
@@ -21,7 +21,7 @@ This comprehensive guide provides a detailed walkthrough for deploying Ultralyti
-!!! Note
+!!! note
This guide has been tested with both [Seeed Studio reComputer J4012](https://www.seeedstudio.com/reComputer-J4012-p-5586.html), which is based on NVIDIA Jetson Orin NX 16GB running the latest stable JetPack release [JP6.0](https://developer.nvidia.com/embedded/jetpack-sdk-60) as well as the JetPack release [JP5.1.3](https://developer.nvidia.com/embedded/jetpack-sdk-513), and [Seeed Studio reComputer J1020 v2](https://www.seeedstudio.com/reComputer-J1020-v2-p-5498.html), which is based on NVIDIA Jetson Nano 4GB running the JetPack release [JP4.6.1](https://developer.nvidia.com/embedded/jetpack-sdk-461). It is expected to work across the entire NVIDIA Jetson hardware lineup, including the latest and legacy devices.
@@ -57,7 +57,7 @@ The first step after getting your hands on an NVIDIA Jetson device is to flash N
3. If you own a Seeed Studio reComputer J4012 device, you can [flash JetPack to the included SSD](https://wiki.seeedstudio.com/reComputer_J4012_Flash_Jetpack/), and if you own a Seeed Studio reComputer J1020 v2 device, you can [flash JetPack to the eMMC/SSD](https://wiki.seeedstudio.com/reComputer_J2021_J202_Flash_Jetpack/).
4. If you own any other third-party device powered by the NVIDIA Jetson module, it is recommended to follow [command-line flashing](https://docs.nvidia.com/jetson/archives/r35.5.0/DeveloperGuide/IN/QuickStart.html).
-!!! Note
+!!! note
For methods 3 and 4 above, after flashing the system and booting the device, please enter `sudo apt update && sudo apt install nvidia-jetpack -y` in the device terminal to install all the remaining JetPack components needed.
@@ -157,7 +157,7 @@ wget https://nvidia.box.com/shared/static/48dtuob7meiw6ebgfsfqakc9vse62sg4.whl -
pip install onnxruntime_gpu-1.18.0-cp310-cp310-linux_aarch64.whl
```
-!!! Note
+!!! note
`onnxruntime-gpu` will automatically revert the numpy version to the latest. So we need to reinstall numpy to `1.23.5` to fix an issue by executing:
@@ -230,7 +230,7 @@ wget https://nvidia.box.com/shared/static/zostg6agm00fb6t5uisw51qi6kpcuwzd.whl -
pip install onnxruntime_gpu-1.17.0-cp38-cp38-linux_aarch64.whl
```
-!!! Note
+!!! note
`onnxruntime-gpu` will automatically revert the numpy version to the latest. So we need to reinstall numpy to `1.23.5` to fix an issue by executing:
@@ -244,7 +244,7 @@ Out of all the model export formats supported by Ultralytics, TensorRT delivers
The YOLOv8n model in PyTorch format is converted to TensorRT to run inference with the exported model.
-!!! Example
+!!! example
=== "Python"
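A minimal export-and-inference sketch for this tab:

```python
from ultralytics import YOLO

# Export the PyTorch model to a TensorRT engine (runs on the Jetson GPU)
model = YOLO("yolov8n.pt")
model.export(format="engine")  # creates 'yolov8n.engine'

# Reload the engine and run inference
trt_model = YOLO("yolov8n.engine")
results = trt_model("https://ultralytics.com/images/bus.jpg")
```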
@@ -274,7 +274,7 @@ The YOLOv8n model in PyTorch format is converted to TensorRT to run inference wi
yolo predict model=yolov8n.engine source='https://ultralytics.com/images/bus.jpg'
```
-!!! Note
+!!! note
Visit the [Export page](../modes/export.md#arguments) to access additional arguments when exporting models to different model formats
@@ -294,7 +294,7 @@ Even though all model exports are working with NVIDIA Jetson, we have only inclu
The table below presents the benchmark results for five different models (YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, YOLOv8x) across ten different formats (PyTorch, TorchScript, ONNX, OpenVINO, TensorRT, TF SavedModel, TF GraphDef, TF Lite, PaddlePaddle, NCNN), giving us the status, size, mAP50-95(B) metric, and inference time for each combination.
-!!! Performance
+!!! performance
=== "YOLOv8n"
@@ -377,7 +377,7 @@ The below table represents the benchmark results for five different models (YOLO
To reproduce the above Ultralytics benchmarks on all export [formats](../modes/export.md) run this code:
-!!! Example
+!!! example
=== "Python"
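A sketch of the benchmark call, assuming the documented `benchmark` utility:

```python
from ultralytics.utils.benchmarks import benchmark

# Benchmark YOLOv8n across all export formats on GPU device 0
benchmark(model="yolov8n.pt", data="coco8.yaml", imgsz=640, half=False, device=0)
```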
diff --git a/docs/en/guides/object-blurring.md b/docs/en/guides/object-blurring.md
index e6a21338da..48a4a04f2a 100644
--- a/docs/en/guides/object-blurring.md
+++ b/docs/en/guides/object-blurring.md
@@ -27,7 +27,7 @@ Object blurring with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
- **Selective Focus**: YOLOv8 allows for selective blurring, enabling users to target specific objects, ensuring a balance between privacy and retaining relevant visual information.
- **Real-time Processing**: YOLOv8's efficiency enables object blurring in real-time, making it suitable for applications requiring on-the-fly privacy enhancements in dynamic environments.
-!!! Example "Object Blurring using YOLOv8 Example"
+!!! example "Object Blurring using YOLOv8 Example"
=== "Object Blurring"
diff --git a/docs/en/guides/object-counting.md b/docs/en/guides/object-counting.md
index 00aa917454..1204dfce51 100644
--- a/docs/en/guides/object-counting.md
+++ b/docs/en/guides/object-counting.md
@@ -46,7 +46,7 @@ Object counting with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
| ![Conveyor Belt Packets Counting Using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/conveyor-belt-packets-counting.avif) | ![Fish Counting in Sea using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/fish-counting-in-sea-using-ultralytics-yolov8.avif) |
| Conveyor Belt Packets Counting Using Ultralytics YOLOv8 | Fish Counting in Sea using Ultralytics YOLOv8 |
-!!! Example "Object Counting using YOLOv8 Example"
+!!! example "Object Counting using YOLOv8 Example"
=== "Count in Region"
diff --git a/docs/en/guides/object-cropping.md b/docs/en/guides/object-cropping.md
index 3efaba93e1..caf831fe88 100644
--- a/docs/en/guides/object-cropping.md
+++ b/docs/en/guides/object-cropping.md
@@ -34,7 +34,7 @@ Object cropping with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
| ![Conveyor Belt at Airport Suitcases Cropping using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/suitcases-cropping-airport-conveyor-belt.avif) |
| Suitcases Cropping at airport conveyor belt using Ultralytics YOLOv8 |
-!!! Example "Object Cropping using YOLOv8 Example"
+!!! example "Object Cropping using YOLOv8 Example"
=== "Object Cropping"
diff --git a/docs/en/guides/parking-management.md b/docs/en/guides/parking-management.md
index e25936fbd3..bd5b0edd41 100644
--- a/docs/en/guides/parking-management.md
+++ b/docs/en/guides/parking-management.md
@@ -38,18 +38,18 @@ Parking management with [Ultralytics YOLOv8](https://github.com/ultralytics/ultr
### Selection of Points
-!!! Tip "Point Selection is now Easy"
+!!! tip "Point Selection is now Easy"
Choosing parking points is a critical and complex task in parking management systems. Ultralytics streamlines this process by providing a tool that lets you define parking lot areas, which can be utilized later for additional processing.
- Capture a frame from the video or camera stream where you want to manage the parking lot.
- Use the provided code to launch a graphical interface, where you can select an image and start outlining parking regions by mouse click to create polygons.
-!!! Warning "Image Size"
+!!! warning "Image Size"
A maximum image size of 1920 × 1080 is supported
-!!! Example "Parking slots Annotator Ultralytics YOLOv8"
+!!! example "Parking Slots Annotator using Ultralytics YOLOv8"
=== "Parking Annotator"
@@ -65,7 +65,7 @@ Parking management with [Ultralytics YOLOv8](https://github.com/ultralytics/ultr
### Python Code for Parking Management
-!!! Example "Parking management using YOLOv8 Example"
+!!! example "Parking management using YOLOv8 Example"
=== "Parking Management"
diff --git a/docs/en/guides/queue-management.md b/docs/en/guides/queue-management.md
index ad599770b2..7abc31d31c 100644
--- a/docs/en/guides/queue-management.md
+++ b/docs/en/guides/queue-management.md
@@ -33,7 +33,7 @@ Queue management using [Ultralytics YOLOv8](https://github.com/ultralytics/ultra
| ![Queue management at airport ticket counter using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/queue-management-airport-ticket-counter-ultralytics-yolov8.avif) | ![Queue monitoring in crowd using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/queue-monitoring-crowd-ultralytics-yolov8.avif) |
| Queue management at airport ticket counter using Ultralytics YOLOv8 | Queue monitoring in crowd using Ultralytics YOLOv8 |
-!!! Example "Queue Management using YOLOv8 Example"
+!!! example "Queue Management using YOLOv8 Example"
=== "Queue Manager"
diff --git a/docs/en/guides/raspberry-pi.md b/docs/en/guides/raspberry-pi.md
index 997c08547b..674bc847b3 100644
--- a/docs/en/guides/raspberry-pi.md
+++ b/docs/en/guides/raspberry-pi.md
@@ -19,7 +19,7 @@ This comprehensive guide provides a detailed walkthrough for deploying Ultralyti
Watch: Raspberry Pi 5 updates and improvements.
-!!! Note
+!!! note
This guide has been tested with Raspberry Pi 4 and Raspberry Pi 5 running the latest [Raspberry Pi OS Bookworm (Debian 12)](https://www.raspberrypi.com/software/operating-systems/). Using this guide for older Raspberry Pi devices such as the Raspberry Pi 3 is expected to work as long as the same Raspberry Pi OS Bookworm is installed.
@@ -100,7 +100,7 @@ Out of all the model export formats supported by Ultralytics, [NCNN](https://doc
The YOLOv8n model in PyTorch format is converted to NCNN to run inference with the exported model.
-!!! Example
+!!! example
=== "Python"
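A minimal NCNN export-and-inference sketch for this tab:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.export(format="ncnn")  # creates the 'yolov8n_ncnn_model' folder

# Reload the exported model and run inference
ncnn_model = YOLO("./yolov8n_ncnn_model")
results = ncnn_model("https://ultralytics.com/images/bus.jpg")
```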
@@ -130,7 +130,7 @@ The YOLOv8n model in PyTorch format is converted to NCNN to run inference with t
yolo predict model='yolov8n_ncnn_model' source='https://ultralytics.com/images/bus.jpg'
```
-!!! Tip
+!!! tip
For more details about supported export options, visit the [Ultralytics documentation page on deployment options](https://docs.ultralytics.com/guides/model-deployment-options).
@@ -138,7 +138,7 @@ The YOLOv8n model in PyTorch format is converted to NCNN to run inference with t
YOLOv8 benchmarks were run by the Ultralytics team on nine different model formats measuring speed and accuracy: PyTorch, TorchScript, ONNX, OpenVINO, TF SavedModel, TF GraphDef, TF Lite, PaddlePaddle, NCNN. Benchmarks were run on both Raspberry Pi 5 and Raspberry Pi 4 at FP32 precision with the default input image size of 640.
-!!! Note
+!!! note
We have only included benchmarks for YOLOv8n and YOLOv8s models because other model sizes are too big to run on the Raspberry Pis and do not offer decent performance.
@@ -224,7 +224,7 @@ The below table represents the benchmark results for two different models (YOLOv
To reproduce the above Ultralytics benchmarks on all [export formats](../modes/export.md), run this code:
-!!! Example
+!!! example
=== "Python"
@@ -251,11 +251,11 @@ To reproduce the above Ultralytics benchmarks on all [export formats](../modes/e
When using Raspberry Pi for Computer Vision projects, it can be essential to grab real-time video feeds to perform inference. The onboard MIPI CSI connector on the Raspberry Pi allows you to connect official Raspberry Pi camera modules. In this guide, we have used a [Raspberry Pi Camera Module 3](https://www.raspberrypi.com/products/camera-module-3) to grab the video feeds and perform inference using YOLOv8 models.
-!!! Tip
+!!! tip
Learn more about the [different camera modules offered by Raspberry Pi](https://www.raspberrypi.com/documentation/accessories/camera.html) and also [how to get started with the Raspberry Pi camera modules](https://www.raspberrypi.com/documentation/computers/camera_software.html#introducing-the-raspberry-pi-cameras).
-!!! Note
+!!! note
Raspberry Pi 5 uses smaller CSI connectors than the Raspberry Pi 4 (15-pin vs 22-pin), so you will need a [15-pin to 22-pin adapter cable](https://www.raspberrypi.com/products/camera-cable) to connect to a Raspberry Pi Camera.
@@ -267,7 +267,7 @@ Execute the following command after connecting the camera to the Raspberry Pi. Y
rpicam-hello
```
-!!! Tip
+!!! tip
Learn more about [`rpicam-hello` usage on official Raspberry Pi documentation](https://www.raspberrypi.com/documentation/computers/camera_software.html#rpicam-hello)
@@ -275,13 +275,13 @@ rpicam-hello
There are two methods of using the Raspberry Pi Camera to run inference with YOLOv8 models.
-!!! Usage
+!!! usage
=== "Method 1"
We can use `picamera2`, which comes pre-installed with Raspberry Pi OS, to access the camera and run inference with YOLOv8 models.
- !!! Example
+ !!! example
=== "Python"
@@ -333,7 +333,7 @@ There are 2 methods of using the Raspberry Pi Camera to inference YOLOv8 models.
Learn more about [`rpicam-vid` usage on official Raspberry Pi documentation](https://www.raspberrypi.com/documentation/computers/camera_software.html#rpicam-vid)
- !!! Example
+ !!! example
=== "Python"
@@ -353,7 +353,7 @@ There are 2 methods of using the Raspberry Pi Camera to inference YOLOv8 models.
yolo predict model=yolov8n.pt source="tcp://127.0.0.1:8888"
```
-!!! Tip
+!!! tip
Check our document on [Inference Sources](https://docs.ultralytics.com/modes/predict/#inference-sources) if you want to change the image/video input type.
@@ -410,7 +410,7 @@ Ultralytics YOLOv8's NCNN format is highly optimized for mobile and embedded pla
You can convert a PyTorch YOLOv8 model to NCNN format using either Python or CLI commands:
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/guides/sahi-tiled-inference.md b/docs/en/guides/sahi-tiled-inference.md
index 5795ab6475..6b23b21c7f 100644
--- a/docs/en/guides/sahi-tiled-inference.md
+++ b/docs/en/guides/sahi-tiled-inference.md
@@ -187,7 +187,7 @@ That's it! Now you're equipped to use YOLOv8 with SAHI for both standard and sli
If you use SAHI in your research or development work, please cite the original SAHI paper and acknowledge the authors:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
diff --git a/docs/en/guides/speed-estimation.md b/docs/en/guides/speed-estimation.md
index 9e9ddce5ee..4342d108e6 100644
--- a/docs/en/guides/speed-estimation.md
+++ b/docs/en/guides/speed-estimation.md
@@ -38,7 +38,7 @@ keywords: Ultralytics YOLOv8, speed estimation, object tracking, computer vision
| ![Speed Estimation on Road using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/speed-estimation-on-road-using-ultralytics-yolov8.avif) | ![Speed Estimation on Bridge using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/speed-estimation-on-bridge-using-ultralytics-yolov8.avif) |
| Speed Estimation on Road using Ultralytics YOLOv8 | Speed Estimation on Bridge using Ultralytics YOLOv8 |
-!!! Example "Speed Estimation using YOLOv8 Example"
+!!! example "Speed Estimation using YOLOv8 Example"
=== "Speed Estimation"
diff --git a/docs/en/guides/streamlit-live-inference.md b/docs/en/guides/streamlit-live-inference.md
index 24388eb302..3a89f123f4 100644
--- a/docs/en/guides/streamlit-live-inference.md
+++ b/docs/en/guides/streamlit-live-inference.md
@@ -38,7 +38,7 @@ Streamlit makes it simple to build and deploy interactive web applications. Comb
Before you start building the application, ensure you have the Ultralytics Python Package installed. You can install it using the command **pip install ultralytics**
-!!! Example "Streamlit Application"
+!!! example "Streamlit Application"
=== "Python"
@@ -60,7 +60,7 @@ This will launch the Streamlit application in your default web browser. You will
You can optionally supply a specific model in Python:
-!!! Example "Streamlit Application with a custom model"
+!!! example "Streamlit Application with a custom model"
=== "Python"
@@ -104,7 +104,7 @@ pip install ultralytics
Then, you can create a basic Streamlit application to run live inference:
-!!! Example "Streamlit Application"
+!!! example "Streamlit Application"
=== "Python"
diff --git a/docs/en/guides/vision-eye.md b/docs/en/guides/vision-eye.md
index 9fc1e0a71f..891a589b24 100644
--- a/docs/en/guides/vision-eye.md
+++ b/docs/en/guides/vision-eye.md
@@ -17,7 +17,7 @@ keywords: VisionEye, YOLOv8, Ultralytics, object mapping, object tracking, dista
| ![VisionEye View Object Mapping using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/visioneye-view-object-mapping-yolov8.avif) | ![VisionEye View Object Mapping with Object Tracking using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/visioneye-object-mapping-with-tracking.avif) | ![VisionEye View with Distance Calculation using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/visioneye-distance-calculation-yolov8.avif) |
| VisionEye View Object Mapping using Ultralytics YOLOv8 | VisionEye View Object Mapping with Object Tracking using Ultralytics YOLOv8 | VisionEye View with Distance Calculation using Ultralytics YOLOv8 |
-!!! Example "VisionEye Object Mapping using YOLOv8"
+!!! example "VisionEye Object Mapping using YOLOv8"
=== "VisionEye Object Mapping"
diff --git a/docs/en/guides/workouts-monitoring.md b/docs/en/guides/workouts-monitoring.md
index 9140555149..7134a56675 100644
--- a/docs/en/guides/workouts-monitoring.md
+++ b/docs/en/guides/workouts-monitoring.md
@@ -34,7 +34,7 @@ Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://gi
| ![PushUps Counting](https://github.com/ultralytics/docs/releases/download/0/pushups-counting.avif) | ![PullUps Counting](https://github.com/ultralytics/docs/releases/download/0/pullups-counting.avif) |
| PushUps Counting | PullUps Counting |
-!!! Example "Workouts Monitoring Example"
+!!! example "Workouts Monitoring Example"
=== "Workouts Monitoring"
diff --git a/docs/en/help/contributing.md b/docs/en/help/contributing.md
index 637c1ae86e..1d02ba23af 100644
--- a/docs/en/help/contributing.md
+++ b/docs/en/help/contributing.md
@@ -57,7 +57,7 @@ I have read the CLA Document and I sign the CLA
When adding new functions or classes, please include [Google-style docstrings](https://google.github.io/styleguide/pyguide.html). These docstrings provide clear, standardized documentation that helps other developers understand and maintain your code.
-!!! Example "Example Docstrings"
+!!! example "Example Docstrings"
=== "Google-style"
diff --git a/docs/en/help/privacy.md b/docs/en/help/privacy.md
index a053f199fe..3683b1aea5 100644
--- a/docs/en/help/privacy.md
+++ b/docs/en/help/privacy.md
@@ -39,7 +39,7 @@ We take several measures to ensure the privacy and security of the data you entr
[Sentry](https://sentry.io/welcome/) is a developer-centric error tracking software that aids in identifying, diagnosing, and resolving issues in real-time, ensuring the robustness and reliability of applications. Within our package, it plays a crucial role by providing insights through crash reporting, significantly contributing to the stability and ongoing refinement of our software.
-!!! Note
+!!! note
Crash reporting via Sentry is activated only if the `sentry-sdk` Python package is pre-installed on your system. This package isn't included in the `ultralytics` prerequisites and won't be installed automatically by Ultralytics.
@@ -74,7 +74,7 @@ To opt out of sending analytics and crash reports, you can simply set `sync=Fals
To gain insight into the current configuration of your settings, you can view them directly:
-!!! Example "View settings"
+!!! example "View settings"
=== "Python"
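A sketch of viewing the settings, assuming the `settings` object exported by the package:

```python
from ultralytics import settings

# Print the full settings dictionary
print(settings)

# Read an individual value, e.g. whether analytics sync is enabled
print(settings["sync"])
```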
@@ -100,7 +100,7 @@ To gain insight into the current configuration of your settings, you can view th
Ultralytics allows users to easily modify their settings. Changes can be performed in the following ways:
-!!! Example "Update settings"
+!!! example "Update settings"
=== "Python"
@@ -159,7 +159,7 @@ Ultralytics collects three primary types of data using Google Analytics:
To opt out of data collection, you can simply set `sync=False` in your YOLO settings. This action stops the transmission of any analytics or crash reports. You can disable data collection using Python or CLI methods:
-!!! Example "Update settings"
+!!! example "Update settings"
=== "Python"
@@ -193,7 +193,7 @@ If the `sentry-sdk` package is pre-installed, Sentry collects detailed crash log
Yes, you can easily view your current settings to understand the configuration of your data collection preferences. Use the following methods to inspect these settings:
-!!! Example "View settings"
+!!! example "View settings"
=== "Python"
diff --git a/docs/en/hub/app/android.md b/docs/en/hub/app/android.md
index 365180545d..0f3f0b820f 100644
--- a/docs/en/hub/app/android.md
+++ b/docs/en/hub/app/android.md
@@ -54,7 +54,7 @@ FP16 (or half-precision) quantization converts the model's 32-bit floating-point
INT8 (or 8-bit integer) quantization further reduces the model's size and computation requirements by converting its 32-bit floating-point numbers to 8-bit integers. This quantization method can result in a significant speedup, but it may lead to a slight reduction in mean average precision (mAP) due to the lower numerical precision.
-!!! Tip "mAP Reduction in INT8 Models"
+!!! tip "mAP Reduction in INT8 Models"
The reduced numerical precision in INT8 models can lead to some loss of information during the quantization process, which may result in a slight decrease in mAP. However, this trade-off is often acceptable considering the substantial performance gains offered by INT8 quantization.
diff --git a/docs/en/hub/datasets.md b/docs/en/hub/datasets.md
index 3546698085..a323508b93 100644
--- a/docs/en/hub/datasets.md
+++ b/docs/en/hub/datasets.md
@@ -40,7 +40,7 @@ You can download our [COCO8](https://github.com/ultralytics/hub/blob/main/exampl
The dataset YAML follows the same standard YOLOv5 and YOLOv8 YAML format.
-!!! Example "coco8.yaml"
+!!! example "coco8.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/coco8.yaml"
diff --git a/docs/en/hub/inference-api.md b/docs/en/hub/inference-api.md
index a52914fa61..845802a817 100644
--- a/docs/en/hub/inference-api.md
+++ b/docs/en/hub/inference-api.md
@@ -125,7 +125,7 @@ The [Ultralytics HUB](https://www.ultralytics.com/hub) Inference API returns a J
### Classification
-!!! Example "Classification Model"
+!!! example "Classification Model"
=== "`ultralytics`"
@@ -205,7 +205,7 @@ The [Ultralytics HUB](https://www.ultralytics.com/hub) Inference API returns a J
### Detection
-!!! Example "Detection Model"
+!!! example "Detection Model"
=== "`ultralytics`"
@@ -291,7 +291,7 @@ The [Ultralytics HUB](https://www.ultralytics.com/hub) Inference API returns a J
### OBB
-!!! Example "OBB Model"
+!!! example "OBB Model"
=== "`ultralytics`"
@@ -381,7 +381,7 @@ The [Ultralytics HUB](https://www.ultralytics.com/hub) Inference API returns a J
### Segmentation
-!!! Example "Segmentation Model"
+!!! example "Segmentation Model"
=== "`ultralytics`"
@@ -481,7 +481,7 @@ The [Ultralytics HUB](https://www.ultralytics.com/hub) Inference API returns a J
### Pose
-!!! Example "Pose Model"
+!!! example "Pose Model"
=== "`ultralytics`"
diff --git a/docs/en/hub/models.md b/docs/en/hub/models.md
index eb1e9e567a..60b0234259 100644
--- a/docs/en/hub/models.md
+++ b/docs/en/hub/models.md
@@ -64,7 +64,7 @@ In this step, you have to choose the project in which you want to create your mo
If you don't have a project created yet, you can set the name of your project in this step, and it will be created together with your model.
-!!! Info "Info"
+!!! info "Info"
You can read more about the available [YOLOv8](https://docs.ultralytics.com/models/yolov8) (and [YOLOv5](https://docs.ultralytics.com/models/yolov5)) architectures in our documentation.
diff --git a/docs/en/hub/projects.md b/docs/en/hub/projects.md
index 2f169abd06..253f64bcad 100644
--- a/docs/en/hub/projects.md
+++ b/docs/en/hub/projects.md
@@ -76,7 +76,7 @@ Set the general access to "Unlisted" and click **Save**.
![Ultralytics HUB screenshot of the Share Project dialog with an arrow pointing to the dropdown and one to the Save button](https://github.com/ultralytics/docs/releases/download/0/hub-share-project-dialog.avif)
-!!! Warning "Warning"
+!!! warning "Warning"
When changing the general access of a project, the general access of the models inside the project will be changed as well.
@@ -116,7 +116,7 @@ Navigate to the Project page of the project you want to delete, open the project
![Ultralytics HUB screenshot of the Projects page with an arrow pointing to the Delete option of one of the projects](https://github.com/ultralytics/docs/releases/download/0/hub-delete-project-option-1.avif)
-!!! Warning "Warning"
+!!! warning "Warning"
When deleting a project, the models inside the project will be deleted as well.
diff --git a/docs/en/hub/teams.md b/docs/en/hub/teams.md
index 83480584b0..1fee1cf59b 100644
--- a/docs/en/hub/teams.md
+++ b/docs/en/hub/teams.md
@@ -56,7 +56,7 @@ Navigate to the [Teams](https://hub.ultralytics.com/settings?tab=teams) page, op
![Ultralytics HUB screenshot of the Teams page with an arrow pointing to the Delete option of one of the teams](https://github.com/ultralytics/docs/releases/download/0/hub-delete-team-option.avif)
-!!! Warning "Warning"
+!!! warning "Warning"
When deleting a team, the team can't be restored.
diff --git a/docs/en/integrations/clearml.md b/docs/en/integrations/clearml.md
index 566a6db8bd..8296c15533 100644
--- a/docs/en/integrations/clearml.md
+++ b/docs/en/integrations/clearml.md
@@ -26,7 +26,7 @@ You can bring automation and efficiency to your machine learning workflow by imp
To install the required packages, run:
-!!! Tip "Installation"
+!!! tip "Installation"
=== "CLI"
@@ -43,7 +43,7 @@ Once you have installed the necessary packages, the next step is to initialize a
Begin by initializing the ClearML SDK in your environment. The `clearml-init` command starts the setup process and prompts you for the necessary credentials.
-!!! Tip "Initial SDK Setup"
+!!! tip "Initial SDK Setup"
=== "CLI"
@@ -58,7 +58,7 @@ After executing this command, visit the [ClearML Settings page](https://app.clea
Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
-!!! Example "Usage"
+!!! example "Usage"
=== "Python"
diff --git a/docs/en/integrations/comet.md b/docs/en/integrations/comet.md
index a705985449..0cd1959481 100644
--- a/docs/en/integrations/comet.md
+++ b/docs/en/integrations/comet.md
@@ -26,7 +26,7 @@ By combining Ultralytics YOLOv8 with Comet ML, you unlock a range of benefits. T
To install the required packages, run:
-!!! Tip "Installation"
+!!! tip "Installation"
=== "CLI"
@@ -39,7 +39,7 @@ To install the required packages, run:
After installing the required packages, you'll need to sign up, get a [Comet API Key](https://www.comet.com/signup), and configure it.
-!!! Tip "Configuring Comet ML"
+!!! tip "Configuring Comet ML"
=== "CLI"
@@ -62,7 +62,7 @@ If you are using a Google Colab notebook, the code above will prompt you to ente
Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
-!!! Example "Usage"
+!!! example "Usage"
=== "Python"
diff --git a/docs/en/integrations/coreml.md b/docs/en/integrations/coreml.md
index 2f22e6c444..b242486d1b 100644
--- a/docs/en/integrations/coreml.md
+++ b/docs/en/integrations/coreml.md
@@ -60,7 +60,7 @@ Exporting YOLOv8 to CoreML enables optimized, on-device machine learning perform
To install the required package, run:
-!!! Tip "Installation"
+!!! tip "Installation"
=== "CLI"
@@ -75,7 +75,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
-!!! Example "Usage"
+!!! example "Usage"
=== "Python"
@@ -131,7 +131,7 @@ Also, if you'd like to know more about other Ultralytics YOLOv8 integrations, vi
To export your [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models to CoreML format, you'll first need to ensure you have the `ultralytics` package installed. You can install it using:
-!!! Example "Installation"
+!!! example "Installation"
=== "CLI"
@@ -141,7 +141,7 @@ To export your [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics)
Next, you can export the model using the following Python or CLI commands:
-!!! Example "Usage"
+!!! example "Usage"
=== "Python"
@@ -198,7 +198,7 @@ For more information on performance optimization, visit the [CoreML official doc
Yes, you can run inference directly using the exported CoreML model. Below are the commands for Python and CLI:
-!!! Example "Running Inference"
+!!! example "Running Inference"
=== "Python"
diff --git a/docs/en/integrations/dvc.md b/docs/en/integrations/dvc.md
index 7e1325b9d1..11cbde8601 100644
--- a/docs/en/integrations/dvc.md
+++ b/docs/en/integrations/dvc.md
@@ -26,7 +26,7 @@ YOLOv8 training sessions can be effectively monitored with DVCLive. Additionally
To install the required packages, run:
-!!! Tip "Installation"
+!!! tip "Installation"
=== "CLI"
@@ -43,7 +43,7 @@ Once you have installed the necessary packages, the next step is to set up and c
Begin by initializing a Git repository, as Git plays a crucial role in version control for both your code and DVCLive configurations.
-!!! Tip "Initial Environment Setup"
+!!! tip "Initial Environment Setup"
=== "CLI"
@@ -176,7 +176,7 @@ Additionally, explore more integrations and capabilities of Ultralytics by visit
Integrating DVCLive with Ultralytics YOLOv8 is straightforward. Start by installing the necessary packages:
-!!! Example "Installation"
+!!! example "Installation"
=== "CLI"
@@ -186,7 +186,7 @@ Integrating DVCLive with Ultralytics YOLOv8 is straightforward. Start by install
Next, initialize a Git repository and configure DVCLive in your project:
-!!! Example "Initial Environment Setup"
+!!! example "Initial Environment Setup"
=== "CLI"
@@ -258,7 +258,7 @@ These steps ensure proper version control and setup for experiment tracking. For
DVCLive offers powerful tools to visualize the results of YOLOv8 experiments. Here's how you can generate comparative plots:
-!!! Example "Generate Comparative Plots"
+!!! example "Generate Comparative Plots"
=== "CLI"
diff --git a/docs/en/integrations/edge-tpu.md b/docs/en/integrations/edge-tpu.md
index 71941ef337..d1caf90f5c 100644
--- a/docs/en/integrations/edge-tpu.md
+++ b/docs/en/integrations/edge-tpu.md
@@ -50,7 +50,7 @@ You can expand model compatibility and deployment flexibility by converting YOLO
To install the required package, run:
-!!! Tip "Installation"
+!!! tip "Installation"
=== "CLI"
@@ -65,7 +65,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can ensure that the model you select supports export functionality [here](../modes/export.md).
-!!! Example "Usage"
+!!! example "Usage"
=== "Python"
@@ -123,7 +123,7 @@ Also, for more information on other Ultralytics YOLOv8 integrations, please visi
To export a YOLOv8 model to TFLite Edge TPU format, you can follow these steps:
-!!! Example "Usage"
+!!! example "Usage"
=== "Python"
diff --git a/docs/en/integrations/ibm-watsonx.md b/docs/en/integrations/ibm-watsonx.md
index ccde685666..024a6d3c5e 100644
--- a/docs/en/integrations/ibm-watsonx.md
+++ b/docs/en/integrations/ibm-watsonx.md
@@ -56,7 +56,7 @@ Once you do so, a notebook environment will open for you to load your data set.
Next, you can install and import the necessary Python libraries.
-!!! Tip "Installation"
+!!! tip "Installation"
=== "CLI"
@@ -71,7 +71,7 @@ For detailed instructions and best practices related to the installation process
Then, you can import the needed packages.
-!!! Example "Import Relevant Libraries"
+!!! example "Import Relevant Libraries"
=== "Python"
@@ -92,7 +92,7 @@ We can load the dataset directly into the notebook using the Kaggle API. First,
Copy and paste your Kaggle username and API key into the following code. Then run the code to install the API and load the dataset into Watsonx.
-!!! Tip "Installation"
+!!! tip "Installation"
=== "CLI"
@@ -103,7 +103,7 @@ Copy and paste your Kaggle username and API key into the following code. Then ru
After installing Kaggle, we can load the dataset into Watsonx.
-!!! Example "Load the Data"
+!!! example "Load the Data"
=== "Python"
@@ -155,7 +155,7 @@ But, YOLO models by default require separate images and labels in subdirectories
To reorganize the data set directory, we can run the following script:
-!!! Example "Preprocess the Data"
+!!! example "Preprocess the Data"
=== "Python"
@@ -207,7 +207,7 @@ names:
Run the following script to delete the current contents of config.yaml and replace them with the above contents, which reflect our new dataset directory structure. Be certain to replace the work_dir portion of the root directory path in line 4 with the working directory path we retrieved earlier. Leave the train, val, and test subdirectory definitions unchanged. Also, do not change {work_dir} in line 23 of the code.
-!!! Example "Edit the .yaml File"
+!!! example "Edit the .yaml File"
=== "Python"
@@ -240,7 +240,7 @@ Run the following script to delete the current contents of config.yaml and repla
Run the following command-line code to fine-tune a pretrained default YOLOv8 model.
-!!! Example "Train the YOLOv8 model"
+!!! example "Train the YOLOv8 model"
=== "CLI"
@@ -263,7 +263,7 @@ For a detailed understanding of the model training process and best practices, r
We can now run inference to test the performance of our fine-tuned model:
-!!! Example "Test the YOLOv8 model"
+!!! example "Test the YOLOv8 model"
=== "CLI"
@@ -279,7 +279,7 @@ The parameter `conf=0.5` informs the model to ignore all predictions with a conf
Lastly, `iou=.5` directs the model to ignore boxes in the same class with an overlap of 50% or greater, which helps reduce potential duplicate boxes generated for the same object.
We can load the images with predicted bounding-box overlays to view how our model performs on a handful of images.
-!!! Example "Display Predictions"
+!!! example "Display Predictions"
=== "Python"
diff --git a/docs/en/integrations/jupyterlab.md b/docs/en/integrations/jupyterlab.md
index aa7ec76885..fa343fddd8 100644
--- a/docs/en/integrations/jupyterlab.md
+++ b/docs/en/integrations/jupyterlab.md
@@ -54,7 +54,7 @@ JupyterLab makes it easy to experiment with YOLOv8. To get started, follow these
First, you need to install JupyterLab. Open your terminal and run the command:
-!!! Tip "Installation"
+!!! tip "Installation"
=== "CLI"
@@ -71,7 +71,7 @@ Next, download the [tutorial.ipynb](https://github.com/ultralytics/ultralytics/b
Navigate to the directory where you saved the notebook file using your terminal. Then, run the following command to launch JupyterLab:
-!!! Example "Usage"
+!!! example "Usage"
=== "CLI"
diff --git a/docs/en/integrations/mlflow.md b/docs/en/integrations/mlflow.md
index 23a8ad47fb..727b79e0bb 100644
--- a/docs/en/integrations/mlflow.md
+++ b/docs/en/integrations/mlflow.md
@@ -34,7 +34,7 @@ pip install mlflow
Make sure that MLflow logging is enabled in Ultralytics settings. Usually, this is controlled by the `mlflow` settings key. See the [settings](../quickstart.md#ultralytics-settings) page for more info.
-!!! Example "Update Ultralytics MLflow Settings"
+!!! example "Update Ultralytics MLflow Settings"
=== "Python"
@@ -130,7 +130,7 @@ pip install mlflow
Next, enable MLflow logging in Ultralytics settings. This can be controlled using the `mlflow` key. For more information, see the [settings guide](../quickstart.md#ultralytics-settings).
-!!! Example "Update Ultralytics MLflow Settings"
+!!! example "Update Ultralytics MLflow Settings"
=== "Python"
diff --git a/docs/en/integrations/ncnn.md b/docs/en/integrations/ncnn.md
index 90481a5462..cc64a0f84c 100644
--- a/docs/en/integrations/ncnn.md
+++ b/docs/en/integrations/ncnn.md
@@ -52,7 +52,7 @@ You can expand model compatibility and deployment flexibility by converting YOLO
To install the required packages, run:
-!!! Tip "Installation"
+!!! tip "Installation"
=== "CLI"
@@ -67,7 +67,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can ensure that the model you select supports export functionality [here](../modes/export.md).
-!!! Example "Usage"
+!!! example "Usage"
=== "Python"
diff --git a/docs/en/integrations/neural-magic.md b/docs/en/integrations/neural-magic.md
index 7d4d5bed74..31eebda53a 100644
--- a/docs/en/integrations/neural-magic.md
+++ b/docs/en/integrations/neural-magic.md
@@ -70,7 +70,7 @@ Deploying YOLOv8 with Neural Magic's DeepSparse involves a few straightforward s
To install the required packages, run:
-!!! Tip "Installation"
+!!! tip "Installation"
=== "CLI"
@@ -83,7 +83,7 @@ To install the required packages, run:
DeepSparse Engine requires YOLOv8 models in ONNX format. Exporting your model to this format is essential for compatibility with DeepSparse. Use the following command to export YOLOv8 models:
-!!! Tip "Model Export"
+!!! tip "Model Export"
=== "CLI"
@@ -98,7 +98,7 @@ This command will save the `yolov8n.onnx` model to your disk.
With your YOLOv8 model in ONNX format, you can deploy and run inferences using DeepSparse. This can be done easily with their intuitive Python API:
-!!! Tip "Deploying and Running Inferences"
+!!! tip "Deploying and Running Inferences"
=== "Python"
@@ -120,7 +120,7 @@ With your YOLOv8 model in ONNX format, you can deploy and run inferences using D
It's important to check that your YOLOv8 model is performing optimally on DeepSparse. You can benchmark your model's performance to analyze throughput and latency:
-!!! Tip "Benchmarking"
+!!! tip "Benchmarking"
=== "CLI"
@@ -133,7 +133,7 @@ It's important to check that your YOLOv8 model is performing optimally on DeepSp
DeepSparse provides additional features for practical integration of YOLOv8 in applications, such as image annotation and dataset evaluation.
-!!! Tip "Additional Features"
+!!! tip "Additional Features"
=== "CLI"
diff --git a/docs/en/integrations/onnx.md b/docs/en/integrations/onnx.md
index 36f9febb0f..766757cde3 100644
--- a/docs/en/integrations/onnx.md
+++ b/docs/en/integrations/onnx.md
@@ -68,7 +68,7 @@ You can expand model compatibility and deployment flexibility by converting YOLO
To install the required package, run:
-!!! Tip "Installation"
+!!! tip "Installation"
=== "CLI"
@@ -83,7 +83,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
-!!! Example "Usage"
+!!! example "Usage"
=== "Python"
@@ -139,7 +139,7 @@ Also, if you'd like to know more about other Ultralytics YOLOv8 integrations, vi
To export your YOLOv8 models to ONNX format using Ultralytics, follow these steps:
-!!! Example "Usage"
+!!! example "Usage"
=== "Python"
diff --git a/docs/en/integrations/openvino.md b/docs/en/integrations/openvino.md
index f917efa63e..1278091f6e 100644
--- a/docs/en/integrations/openvino.md
+++ b/docs/en/integrations/openvino.md
@@ -27,7 +27,7 @@ OpenVINO, short for Open Visual Inference & Neural Network Optimization toolkit,
Export a YOLOv8n model to OpenVINO format and run inference with the exported model.
-!!! Example
+!!! example
=== "Python"
@@ -105,7 +105,7 @@ For more detailed steps and code snippets, refer to the [OpenVINO documentation]
The YOLOv8 benchmarks below were run by the Ultralytics team across four model formats, measuring speed and accuracy: PyTorch, TorchScript, ONNX, and OpenVINO. Benchmarks were run on Intel Flex and Arc GPUs, and on Intel Xeon CPUs, at FP32 precision (with the `half=False` argument).
-!!! Note
+!!! note
The benchmarking results below are for reference and might vary based on the exact hardware and software configuration of a system, as well as the current workload of the system at the time the benchmarks are run.
@@ -255,7 +255,7 @@ Benchmarks below run on 13th Gen Intel® Core® i7-13700H CPU at FP32 precision.
To reproduce the Ultralytics benchmarks above on all export [formats](../modes/export.md) run this code:
-!!! Example
+!!! example
=== "Python"
@@ -294,7 +294,7 @@ For more detailed information and instructions on using OpenVINO, refer to the [
Exporting YOLOv8 models to the OpenVINO format can significantly enhance CPU speed and enable GPU and NPU accelerations on Intel hardware. To export, you can use either Python or CLI as shown below:
-!!! Example
+!!! example
=== "Python"
@@ -332,7 +332,7 @@ For detailed performance comparisons, visit our [benchmarks section](#openvino-y
After exporting a YOLOv8 model to OpenVINO format, you can run inference using Python or CLI:
-!!! Example
+!!! example
=== "Python"
@@ -369,7 +369,7 @@ For in-depth performance analysis, check our detailed [YOLOv8 benchmarks](#openv
Yes, you can benchmark YOLOv8 models in various formats including PyTorch, TorchScript, ONNX, and OpenVINO. Use the following code snippet to run benchmarks on your chosen dataset:
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/integrations/paddlepaddle.md b/docs/en/integrations/paddlepaddle.md
index ce2dfb809e..700c24f62e 100644
--- a/docs/en/integrations/paddlepaddle.md
+++ b/docs/en/integrations/paddlepaddle.md
@@ -54,7 +54,7 @@ Converting YOLOv8 models to the PaddlePaddle format can improve execution flexib
To install the required package, run:
-!!! Tip "Installation"
+!!! tip "Installation"
=== "CLI"
@@ -69,7 +69,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can ensure that the model you select supports export functionality [here](../modes/export.md).
-!!! Example "Usage"
+!!! example "Usage"
=== "Python"
@@ -127,7 +127,7 @@ Want to explore more ways to integrate your Ultralytics YOLOv8 models? Our [inte
Exporting Ultralytics YOLOv8 models to PaddlePaddle format is straightforward. You can use the `export` method of the YOLO class to perform this export. Here is an example using Python:
-!!! Example "Usage"
+!!! example "Usage"
=== "Python"
diff --git a/docs/en/integrations/ray-tune.md b/docs/en/integrations/ray-tune.md
index 55d77a64ed..d216cf38e9 100644
--- a/docs/en/integrations/ray-tune.md
+++ b/docs/en/integrations/ray-tune.md
@@ -28,7 +28,7 @@ YOLOv8 also allows optional integration with [Weights & Biases](https://wandb.ai
To install the required packages, run:
-!!! Tip "Installation"
+!!! tip "Installation"
=== "CLI"
@@ -42,7 +42,7 @@ To install the required packages, run:
## Usage
-!!! Example "Usage"
+!!! example "Usage"
=== "Python"
@@ -103,7 +103,7 @@ The following table lists the default search space parameters for hyperparameter
In this example, we demonstrate how to use a custom search space for hyperparameter tuning with Ray Tune and YOLOv8. By providing a custom search space, you can focus the tuning process on specific hyperparameters of interest.
-!!! Example "Usage"
+!!! example "Usage"
```python
from ultralytics import YOLO
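from ray import tune

# A minimal sketch (assumed search space): tune only the initial learning rate
model = YOLO("yolov8n.pt")
result_grid = model.tune(data="coco8.yaml", space={"lr0": tune.uniform(1e-5, 1e-1)}, use_ray=True)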
diff --git a/docs/en/integrations/roboflow.md b/docs/en/integrations/roboflow.md
index a60081c83b..e851e1bde7 100644
--- a/docs/en/integrations/roboflow.md
+++ b/docs/en/integrations/roboflow.md
@@ -8,7 +8,7 @@ keywords: Roboflow, YOLOv8, data labeling, computer vision, model training, mode
[Roboflow](https://roboflow.com/?ref=ultralytics) has everything you need to build and deploy computer vision models. Connect Roboflow at any step in your pipeline with APIs and SDKs, or use the end-to-end interface to automate the entire process from image to inference. Whether you're in need of [data labeling](https://roboflow.com/annotate?ref=ultralytics), [model training](https://roboflow.com/train?ref=ultralytics), or [model deployment](https://roboflow.com/deploy?ref=ultralytics), Roboflow gives you building blocks to bring custom computer vision solutions to your project.
-!!! Question "Licensing"
+!!! question "Licensing"
Ultralytics offers two licensing options:
diff --git a/docs/en/integrations/tensorboard.md b/docs/en/integrations/tensorboard.md
index 333a3085ec..59add9258f 100644
--- a/docs/en/integrations/tensorboard.md
+++ b/docs/en/integrations/tensorboard.md
@@ -26,7 +26,7 @@ Using TensorBoard while training YOLOv8 models is straightforward and offers sig
To install the required package, run:
-!!! Tip "Installation"
+!!! tip "Installation"
=== "CLI"
@@ -43,7 +43,7 @@ For detailed instructions and best practices related to the installation process
When using Google Colab, it's important to set up TensorBoard before starting your training code:
-!!! Example "Configure TensorBoard for Google Colab"
+!!! example "Configure TensorBoard for Google Colab"
=== "Python"
@@ -56,7 +56,7 @@ When using Google Colab, it's important to set up TensorBoard before starting yo
Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
-!!! Example "Usage"
+!!! example "Usage"
=== "Python"
@@ -189,7 +189,7 @@ These visualizations are essential for tracking model performance and making nec
Yes, you can use TensorBoard in a Google Colab environment to train YOLOv8 models. Here's a quick setup:
-!!! Example "Configure TensorBoard for Google Colab"
+!!! example "Configure TensorBoard for Google Colab"
=== "Python"
diff --git a/docs/en/integrations/tensorrt.md b/docs/en/integrations/tensorrt.md
index 798dcd2bb8..c40b9f654c 100644
--- a/docs/en/integrations/tensorrt.md
+++ b/docs/en/integrations/tensorrt.md
@@ -62,7 +62,7 @@ You can improve execution efficiency and optimize performance by converting YOLO
To install the required package, run:
-!!! Tip "Installation"
+!!! tip "Installation"
=== "CLI"
@@ -77,7 +77,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
-!!! Example "Usage"
+!!! example "Usage"
=== "Python"
diff --git a/docs/en/integrations/tf-graphdef.md b/docs/en/integrations/tf-graphdef.md
index 1070e3bd11..24ae0dd980 100644
--- a/docs/en/integrations/tf-graphdef.md
+++ b/docs/en/integrations/tf-graphdef.md
@@ -58,7 +58,7 @@ You can convert your YOLOv8 object detection model to the TF GraphDef format, wh
To install the required package, run:
-!!! Tip "Installation"
+!!! tip "Installation"
=== "CLI"
@@ -73,7 +73,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can ensure that the model you select supports export functionality [here](../modes/export.md).
-!!! Example "Usage"
+!!! example "Usage"
=== "Python"
@@ -131,7 +131,7 @@ For more information on integrating Ultralytics YOLOv8 with other platforms and
Ultralytics YOLOv8 models can be exported to TensorFlow GraphDef (TF GraphDef) format seamlessly. This format provides a serialized, platform-independent representation of the model, ideal for deploying in varied environments like mobile and web. To export a YOLOv8 model to TF GraphDef, follow these steps:
-!!! Example "Usage"
+!!! example "Usage"
=== "Python"
diff --git a/docs/en/integrations/tf-savedmodel.md b/docs/en/integrations/tf-savedmodel.md
index cb22c21a57..5de706a8e0 100644
--- a/docs/en/integrations/tf-savedmodel.md
+++ b/docs/en/integrations/tf-savedmodel.md
@@ -52,7 +52,7 @@ By exporting YOLOv8 models to the TF SavedModel format, you enhance their adapta
To install the required package, run:
-!!! Tip "Installation"
+!!! tip "Installation"
=== "CLI"
@@ -67,7 +67,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can ensure that the model you select supports export functionality [here](../modes/export.md).
-!!! Example "Usage"
+!!! example "Usage"
=== "Python"
@@ -125,7 +125,7 @@ For more information on integrating Ultralytics YOLOv8 with other platforms and
Exporting an Ultralytics YOLO model to the TensorFlow SavedModel format is straightforward. You can use either Python or CLI to achieve this:
-!!! Example "Exporting YOLOv8 to TF SavedModel"
+!!! example "Exporting YOLOv8 to TF SavedModel"
=== "Python"
diff --git a/docs/en/integrations/tfjs.md b/docs/en/integrations/tfjs.md
index acb5f74ae3..726bba251e 100644
--- a/docs/en/integrations/tfjs.md
+++ b/docs/en/integrations/tfjs.md
@@ -50,7 +50,7 @@ You can expand model compatibility and deployment flexibility by converting YOLO
To install the required package, run:
-!!! Tip "Installation"
+!!! tip "Installation"
=== "CLI"
@@ -65,7 +65,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can ensure that the model you select supports export functionality [here](../modes/export.md).
-!!! Example "Usage"
+!!! example "Usage"
=== "Python"
@@ -123,7 +123,7 @@ For more information on integrating Ultralytics YOLOv8 with other platforms and
Exporting Ultralytics YOLOv8 models to TensorFlow.js (TF.js) format is straightforward. You can follow these steps:
-!!! Example "Usage"
+!!! example "Usage"
=== "Python"
diff --git a/docs/en/integrations/tflite.md b/docs/en/integrations/tflite.md
index 01c40df2e2..db8b033844 100644
--- a/docs/en/integrations/tflite.md
+++ b/docs/en/integrations/tflite.md
@@ -56,7 +56,7 @@ You can improve on-device model execution efficiency and optimize performance by
To install the required packages, run:
-!!! Tip "Installation"
+!!! tip "Installation"
=== "CLI"
@@ -71,7 +71,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can ensure that the model you select supports export functionality [here](../modes/export.md).
-!!! Example "Usage"
+!!! example "Usage"
=== "Python"
diff --git a/docs/en/integrations/torchscript.md b/docs/en/integrations/torchscript.md
index d109fa2a09..138f975e1c 100644
--- a/docs/en/integrations/torchscript.md
+++ b/docs/en/integrations/torchscript.md
@@ -60,7 +60,7 @@ Exporting YOLOv8 models to TorchScript makes it easier to use them in different
To install the required package, run:
-!!! Tip "Installation"
+!!! tip "Installation"
=== "CLI"
@@ -75,7 +75,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can ensure that the model you select supports export functionality [here](../modes/export.md).
-!!! Example "Usage"
+!!! example "Usage"
=== "Python"
@@ -135,7 +135,7 @@ Exporting an Ultralytics YOLOv8 model to TorchScript allows for flexible, cross-
To export a YOLOv8 model to TorchScript, you can use the following example code:
-!!! Example "Usage"
+!!! example "Usage"
=== "Python"
@@ -182,7 +182,7 @@ For more insights into deployment, visit the [PyTorch Mobile Documentation](http
To install the required package for exporting YOLOv8 models, use the following command:
-!!! Tip "Installation"
+!!! tip "Installation"
=== "CLI"
diff --git a/docs/en/integrations/vscode.md b/docs/en/integrations/vscode.md
index d634bd5722..71c4dce808 100644
--- a/docs/en/integrations/vscode.md
+++ b/docs/en/integrations/vscode.md
@@ -39,7 +39,7 @@ Want to let us know what you use for developing code? Head over to our Discourse
## Installing the Extension
-!!! Note
+!!! note
Any code environment that allows installing VS Code extensions _should be_ compatible with the Ultralytics-snippets extension. After publishing the extension, it was discovered that [neovim](https://neovim.io/) can be made compatible with VS Code extensions. To learn more, see the [`neovim` install section][neovim install] of the Readme in the [Ultralytics-Snippets repository][repo].
@@ -127,7 +127,7 @@ These are the current snippet categories available to the Ultralytics-snippets e
The `ultra.examples` snippets are useful for anyone looking to learn the basics of working with Ultralytics YOLO. Example snippets are intended to run once inserted (some have dropdown options as well). An example of this is shown in the animation at the [top] of this page, where after the snippet is inserted, all code is selected and run interactively using Shift ⇑+Enter ↵.
-!!! Example
+!!! example
Just like the animation shows at the [top] of this page, you can use the snippet `ultra.example-yolo-predict` to insert the following code example. Once inserted, the only configurable option is the model scale, which can be any one of: `n`, `s`, `m`, `l`, or `x`.
@@ -146,7 +146,7 @@ The `ultra.examples` snippets are to useful for anyone looking to learn how to g
The aim of the snippets other than `ultra.examples` is to make development easier and quicker when working with Ultralytics. A common code block used in many projects is iterating over the list of `Results` returned from the model [predict] method. The `ultra.result-loop` snippet can help with this.
-!!! Example
+!!! example
Using the `ultra.result-loop` will insert the following default code (including comments).
@@ -170,7 +170,7 @@ However, since Ultralytics supports numerous [tasks], when [working with inferen
There are over 💯 keyword arguments for all of the various Ultralytics [tasks] and [modes]! That's a lot to remember, and it can be easy to forget whether the argument is `save_frame` or `save_frames` (it's definitely `save_frames`, by the way). This is where the `ultra.kwargs` snippets can help out!
-!!! Example
+!!! example
To insert the [predict] method, including all [inference arguments], use `ultra.kwargs-predict`, which will insert the following code (including comments).
diff --git a/docs/en/integrations/weights-biases.md b/docs/en/integrations/weights-biases.md
index d80a1c15bd..232860f92f 100644
--- a/docs/en/integrations/weights-biases.md
+++ b/docs/en/integrations/weights-biases.md
@@ -37,7 +37,7 @@ You can use Weights & Biases to bring efficiency and automation to your YOLOv8 t
To install the required packages, run:
-!!! Tip "Installation"
+!!! tip "Installation"
=== "CLI"
@@ -54,7 +54,7 @@ After installing the necessary packages, the next step is to set up your Weights
Start by initializing the Weights & Biases environment in your workspace. You can do this by running the following command and following the prompted instructions.
-!!! Tip "Initial SDK Setup"
+!!! tip "Initial SDK Setup"
=== "CLI"
@@ -70,7 +70,7 @@ Navigate to the Weights & Biases authorization page to create and retrieve your
Before diving into the usage instructions for YOLOv8 model training with Weights & Biases, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
-!!! Example "Usage: Training YOLOv8 with Weights & Biases"
+!!! example "Usage: Training YOLOv8 with Weights & Biases"
=== "Python"
diff --git a/docs/en/models/fast-sam.md b/docs/en/models/fast-sam.md
index 19577b8be1..e3f889c3b5 100644
--- a/docs/en/models/fast-sam.md
+++ b/docs/en/models/fast-sam.md
@@ -60,7 +60,7 @@ The FastSAM models are easy to integrate into your Python applications. Ultralyt
To perform object detection on an image, use the `predict` method as shown below:
-!!! Example
+!!! example
=== "Python"
@@ -98,7 +98,7 @@ To perform object detection on an image, use the `predict` method as shown below
This snippet demonstrates the simplicity of loading a pre-trained model and running a prediction on an image.
-!!! Example "FastSAMPredictor example"
+!!! example "FastSAMPredictor example"
This way you can run inference on an image once, get all the segment `results`, and then run prompt inference multiple times without re-running the full inference.
@@ -120,7 +120,7 @@ This snippet demonstrates the simplicity of loading a pre-trained model and runn
text_results = predictor.prompt(everything_results, texts="a photo of a dog")
```
-!!! Note
+!!! note
All the `results` returned in the above examples are [Results](../modes/predict.md#working-with-results) objects, which allow easy access to the predicted masks and source image.
@@ -128,7 +128,7 @@ This snippet demonstrates the simplicity of loading a pre-trained model and runn
Validation of the model on a dataset can be done as follows:
-!!! Example
+!!! example
=== "Python"
@@ -155,7 +155,7 @@ Please note that FastSAM only supports detection and segmentation of a single cl
To perform object tracking on an image, use the `track` method as shown below:
-!!! Example
+!!! example
=== "Python"
@@ -241,7 +241,7 @@ Additionally, you can try FastSAM through a [Colab demo](https://colab.research.
We would like to acknowledge the FastSAM authors for their significant contributions in the field of real-time instance segmentation:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
diff --git a/docs/en/models/index.md b/docs/en/models/index.md
index 0b294801d5..0753023892 100644
--- a/docs/en/models/index.md
+++ b/docs/en/models/index.md
@@ -45,7 +45,7 @@ This example provides simple YOLO training and inference examples. For full docu
Note the below example is for YOLOv8 [Detect](../tasks/detect.md) models for object detection. For additional supported tasks see the [Segment](../tasks/segment.md), [Classify](../tasks/classify.md) and [Pose](../tasks/pose.md) docs.
-!!! Example
+!!! example
=== "Python"
@@ -107,7 +107,7 @@ Ultralytics YOLOv8 offers enhanced capabilities such as real-time object detecti
Training a YOLOv8 model on custom data can be easily accomplished using Ultralytics' libraries. Here's a quick example:
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/models/mobile-sam.md b/docs/en/models/mobile-sam.md
index d57f96df6a..ed3ab1c950 100644
--- a/docs/en/models/mobile-sam.md
+++ b/docs/en/models/mobile-sam.md
@@ -69,7 +69,7 @@ You can download the model [here](https://github.com/ChaoningZhang/MobileSAM/blo
### Point Prompt
-!!! Example
+!!! example
=== "Python"
@@ -85,7 +85,7 @@ You can download the model [here](https://github.com/ChaoningZhang/MobileSAM/blo
### Box Prompt
-!!! Example
+!!! example
=== "Python"
@@ -105,7 +105,7 @@ We have implemented `MobileSAM` and `SAM` using the same API. For more usage inf
If you find MobileSAM useful in your research or development work, please consider citing our paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
diff --git a/docs/en/models/rtdetr.md b/docs/en/models/rtdetr.md
index f23f29da48..46269e4458 100644
--- a/docs/en/models/rtdetr.md
+++ b/docs/en/models/rtdetr.md
@@ -40,7 +40,7 @@ The Ultralytics Python API provides pre-trained PaddlePaddle RT-DETR models with
This example provides simple RT-DETR training and inference examples. For full documentation on these and other [modes](../modes/index.md) see the [Predict](../modes/predict.md), [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md) docs pages.
-!!! Example
+!!! example
=== "Python"
@@ -83,7 +83,7 @@ This table presents the model types, the specific pre-trained weights, the tasks
If you use Baidu's RT-DETR in your research or development work, please cite the [original paper](https://arxiv.org/abs/2304.08069):
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -110,7 +110,7 @@ Baidu's RT-DETR (Real-Time Detection Transformer) is an advanced real-time objec
You can leverage the Ultralytics Python API to use pre-trained PaddlePaddle RT-DETR models. For instance, to load an RT-DETR-l model pre-trained on COCO val2017 and achieve high FPS on a T4 GPU, you can use the following example:
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/models/sam-2.md b/docs/en/models/sam-2.md
index ac60ec14fd..336b294e0f 100644
--- a/docs/en/models/sam-2.md
+++ b/docs/en/models/sam-2.md
@@ -116,7 +116,7 @@ SAM 2 can be utilized across a broad spectrum of tasks, including real-time vide
#### Segment with Prompts
-!!! Example "Segment with Prompts"
+!!! example "Segment with Prompts"
Use prompts to segment specific objects in images or videos.
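For illustration, a minimal sketch (the weights name and prompt coordinates are illustrative):

```python
from ultralytics import SAM

# Load a SAM 2 model
model = SAM("sam2_b.pt")

# Segment with a bounding-box prompt
results = model("path/to/image.jpg", bboxes=[100, 100, 200, 200])

# Segment with a point prompt
results = model("path/to/image.jpg", points=[150, 150], labels=[1])
```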
@@ -140,7 +140,7 @@ SAM 2 can be utilized across a broad spectrum of tasks, including real-time vide
#### Segment Everything
-!!! Example "Segment Everything"
+!!! example "Segment Everything"
Segment the entire image or video content without specific prompts.
@@ -185,7 +185,7 @@ This comparison shows the order-of-magnitude differences in the model sizes and
Tests run on a 2023 Apple M2 MacBook with 16GB of RAM using `torch==2.3.1` and `ultralytics==8.3.82`. To reproduce this test:
-!!! Example
+!!! example
=== "Python"
@@ -217,7 +217,7 @@ Auto-annotation is a powerful feature of SAM 2, enabling users to generate segme
To auto-annotate your dataset using SAM 2, follow this example:
-!!! Example "Auto-Annotation Example"
+!!! example "Auto-Annotation Example"
```python
from ultralytics.data.annotator import auto_annotate
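
# A minimal sketch (paths and weights names are illustrative)
auto_annotate(data="path/to/images", det_model="yolov8x.pt", sam_model="sam2_b.pt")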
@@ -248,7 +248,7 @@ Despite its strengths, SAM 2 has certain limitations:
If SAM 2 is a crucial part of your research or development work, please cite it using the following reference:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -281,7 +281,7 @@ For more details on SAM 2's architecture and capabilities, explore the [SAM 2 re
SAM 2 can be utilized for real-time video segmentation by leveraging its promptable interface and real-time inference capabilities. Here's a basic example:
-!!! Example "Segment with Prompts"
+!!! example "Segment with Prompts"
Use prompts to segment specific objects in images or videos.
diff --git a/docs/en/models/sam.md b/docs/en/models/sam.md
index f7fc410390..3768041383 100644
--- a/docs/en/models/sam.md
+++ b/docs/en/models/sam.md
@@ -40,7 +40,7 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
### SAM prediction example
-!!! Example "Segment with prompts"
+!!! example "Segment with prompts"
Segment image with given prompts.
@@ -62,7 +62,7 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
results = model("ultralytics/assets/zidane.jpg", points=[900, 370], labels=[1])
```
-!!! Example "Segment everything"
+!!! example "Segment everything"
Segment the whole image.
@@ -90,7 +90,7 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
- The logic here is to segment the whole image if you don't pass any prompts (bboxes/points/masks).
-!!! Example "SAMPredictor example"
+!!! example "SAMPredictor example"
This way you can set the image once and run prompt inference multiple times without running the image encoder multiple times.
@@ -128,7 +128,7 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
results = predictor(source="ultralytics/assets/zidane.jpg", crop_n_layers=1, points_stride=64)
```
-!!! Note
+!!! note
All the `results` returned in the above examples are [Results](../modes/predict.md#working-with-results) objects, which allow easy access to the predicted masks and source image.
@@ -149,7 +149,7 @@ This comparison shows the order-of-magnitude differences in the model sizes and
Tests run on a 2023 Apple M2 MacBook with 16GB of RAM. To reproduce this test:
-!!! Example
+!!! example
=== "Python"
@@ -181,7 +181,7 @@ Auto-annotation is a key feature of SAM, allowing users to generate a [segmentat
To auto-annotate your dataset with the Ultralytics framework, use the `auto_annotate` function as shown below:
-!!! Example
+!!! example
=== "Python"
@@ -207,7 +207,7 @@ Auto-annotation with pre-trained models can dramatically cut down the time and e
If you find SAM useful in your research or development work, please consider citing our paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
diff --git a/docs/en/models/yolo-nas.md b/docs/en/models/yolo-nas.md
index 5e0b1e7352..f0aff53b1e 100644
--- a/docs/en/models/yolo-nas.md
+++ b/docs/en/models/yolo-nas.md
@@ -43,7 +43,7 @@ The following examples show how to use YOLO-NAS models with the `ultralytics` pa
In this example we validate YOLO-NAS-s on the COCO8 dataset.
-!!! Example
+!!! example
This example provides simple inference and validation code for YOLO-NAS. For handling inference results see [Predict](../modes/predict.md) mode. For using YOLO-NAS with additional modes see [Val](../modes/val.md) and [Export](../modes/export.md). YOLO-NAS in the `ultralytics` package does not support training.
@@ -99,7 +99,7 @@ Below is a detailed overview of each model, including links to their pre-trained
If you employ YOLO-NAS in your research or development work, please cite SuperGradients:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
diff --git a/docs/en/models/yolo-world.md b/docs/en/models/yolo-world.md
index b5521694df..06b69798f3 100644
--- a/docs/en/models/yolo-world.md
+++ b/docs/en/models/yolo-world.md
@@ -43,7 +43,7 @@ YOLO-World tackles the challenges faced by traditional Open-Vocabulary detection
This section details the models available with their specific pre-trained weights, the tasks they support, and their compatibility with various operating modes such as [Inference](../modes/predict.md), [Validation](../modes/val.md), [Training](../modes/train.md), and [Export](../modes/export.md), denoted by ✅ for supported modes and ❌ for unsupported modes.
-!!! Note
+!!! note
All the YOLOv8-World weights have been directly migrated from the official [YOLO-World](https://github.com/AILab-CVC/YOLO-World) repository, highlighting their excellent contributions.
@@ -77,13 +77,13 @@ The YOLO-World models are easy to integrate into your Python applications. Ultra
### Train Usage
-!!! Tip "Tip"
+!!! tip "Tip"
We strongly recommend using the `yolov8-worldv2` model for custom training, because it supports deterministic training and is easy to export to other formats, e.g. ONNX/TensorRT.
Training is straightforward with the `train` method, as illustrated below:
-!!! Example
+!!! example
=== "Python"
@@ -113,7 +113,7 @@ Object detection is straightforward with the `train` method, as illustrated belo
Object detection is straightforward with the `predict` method, as illustrated below:
-!!! Example
+!!! example
=== "Python"
@@ -143,7 +143,7 @@ This snippet demonstrates the simplicity of loading a pre-trained model and runn
Model validation on a dataset is streamlined as follows:
-!!! Example
+!!! example
=== "Python"
@@ -168,7 +168,7 @@ Model validation on a dataset is streamlined as follows:
Object tracking with a YOLO-World model on videos/images is streamlined as follows:
-!!! Example
+!!! example
=== "Python"
@@ -189,7 +189,7 @@ Object tracking with YOLO-World model on a video/images is streamlined as follow
yolo track model=yolov8s-world.pt imgsz=640 source="path/to/video/file.mp4"
```
-!!! Note
+!!! note
The YOLO-World models provided by Ultralytics come pre-configured with [COCO dataset](../datasets/detect/coco.md) categories as part of their offline vocabulary, enhancing efficiency for immediate application. This integration allows the YOLOv8-World models to directly recognize and predict the 80 standard categories defined in the COCO dataset without requiring additional setup or customization.
@@ -201,7 +201,7 @@ The YOLO-World framework allows for the dynamic specification of classes through
For instance, if your application only requires detecting 'person' and 'bus' objects, you can specify these classes directly:
-!!! Example
+!!! example
=== "Custom Inference Prompts"
@@ -223,7 +223,7 @@ For instance, if your application only requires detecting 'person' and 'bus' obj
You can also save a model after setting custom classes. By doing this you create a version of the YOLO-World model that is specialized for your specific use case. This process embeds your custom class definitions directly into the model file, making the model ready to use with your specified classes without further adjustments. Follow these steps to save and load your custom YOLOv8 model:
-!!! Example
+!!! example
=== "Persisting Models with Custom Vocabulary"
@@ -286,11 +286,11 @@ This approach provides a powerful means of customizing state-of-the-art object d
### Launch training from scratch
-!!! Note
+!!! note
`WorldTrainerFromScratch` is highly customized to allow training YOLO-World models on both detection datasets and grounding datasets simultaneously. For more details, please check out [ultralytics.model.yolo.world.train_world.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/models/yolo/world/train_world.py).
-!!! Example
+!!! example
=== "Python"
@@ -322,7 +322,7 @@ This approach provides a powerful means of customizing state-of-the-art object d
We extend our gratitude to the [Tencent AILab Computer Vision Center](https://ai.tencent.com/) for their pioneering work in real-time open-vocabulary object detection with YOLO-World:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
diff --git a/docs/en/models/yolov10.md b/docs/en/models/yolov10.md
index fb99f4d1ad..babf972688 100644
--- a/docs/en/models/yolov10.md
+++ b/docs/en/models/yolov10.md
@@ -140,7 +140,7 @@ Here is a detailed comparison of YOLOv10 variants with other state-of-the-art mo
For predicting new images with YOLOv10:
-!!! Example
+!!! example
=== "Python"
@@ -166,7 +166,7 @@ For predicting new images with YOLOv10:
For training YOLOv10 on a custom dataset:
-!!! Example
+!!! example
=== "Python"
@@ -225,7 +225,7 @@ YOLOv10 sets a new standard in real-time object detection by addressing the shor
We would like to acknowledge the YOLOv10 authors from [Tsinghua University](https://www.tsinghua.edu.cn/en/) for their extensive research and significant contributions to the [Ultralytics](https://www.ultralytics.com/) framework:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -252,7 +252,7 @@ YOLOv10, developed by researchers at [Tsinghua University](https://www.tsinghua.
For easy inference, you can use the Ultralytics YOLO Python library or the command line interface (CLI). Below are examples of predicting new images using YOLOv10:
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/models/yolov3.md b/docs/en/models/yolov3.md
index 168e590dee..182ccbcd21 100644
--- a/docs/en/models/yolov3.md
+++ b/docs/en/models/yolov3.md
@@ -44,7 +44,7 @@ This table provides an at-a-glance view of the capabilities of each YOLOv3 varia
This example provides simple YOLOv3 training and inference examples. For full documentation on these and other [modes](../modes/index.md) see the [Predict](../modes/predict.md), [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md) docs pages.
-!!! Example
+!!! example
=== "Python"
@@ -82,7 +82,7 @@ This example provides simple YOLOv3 training and inference examples. For full do
If you use YOLOv3 in your research, please cite the original YOLO papers and the Ultralytics YOLOv3 repository:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -107,7 +107,7 @@ YOLOv3 is the third iteration of the YOLO (You Only Look Once) object detection
Training a YOLOv3 model with Ultralytics is straightforward. You can train the model using either Python or CLI:
-!!! Example
+!!! example
=== "Python"
@@ -138,7 +138,7 @@ YOLOv3u improves upon YOLOv3 and YOLOv3-Ultralytics by incorporating the anchor-
You can perform inference using YOLOv3 models by either Python scripts or CLI commands:
-!!! Example
+!!! example
=== "Python"
@@ -169,7 +169,7 @@ YOLOv3, YOLOv3-Ultralytics, and YOLOv3u primarily support object detection tasks
If you use YOLOv3 in your research, please cite the original YOLO papers and the Ultralytics YOLOv3 repository. Example BibTeX citation:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
diff --git a/docs/en/models/yolov4.md b/docs/en/models/yolov4.md
index 2137adabef..c6f7d0da19 100644
--- a/docs/en/models/yolov4.md
+++ b/docs/en/models/yolov4.md
@@ -52,7 +52,7 @@ YOLOv4 is a powerful and efficient object detection model that strikes a balance
We would like to acknowledge the YOLOv4 authors for their significant contributions in the field of real-time object detection:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
diff --git a/docs/en/models/yolov5.md b/docs/en/models/yolov5.md
index 57b562423a..4baa08e960 100644
--- a/docs/en/models/yolov5.md
+++ b/docs/en/models/yolov5.md
@@ -32,7 +32,7 @@ This table provides a detailed overview of the YOLOv5u model variants, highlight
## Performance Metrics
-!!! Performance
+!!! performance
=== "Detection"
@@ -56,7 +56,7 @@ This table provides a detailed overview of the YOLOv5u model variants, highlight
This example provides simple YOLOv5 training and inference examples. For full documentation on these and other [modes](../modes/index.md) see the [Predict](../modes/predict.md), [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md) docs pages.
-!!! Example
+!!! example
=== "Python"
@@ -94,7 +94,7 @@ This example provides simple YOLOv5 training and inference examples. For full do
If you use YOLOv5 or YOLOv5u in your research, please cite the Ultralytics YOLOv5 repository as follows:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -135,7 +135,7 @@ The performance metrics of YOLOv5u models vary depending on the platform and har
You can train a YOLOv5u model by loading a pre-trained model and running the training command with your dataset. Here's a quick example:
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/models/yolov6.md b/docs/en/models/yolov6.md
index 11d016e6f6..5cf52931f7 100644
--- a/docs/en/models/yolov6.md
+++ b/docs/en/models/yolov6.md
@@ -36,7 +36,7 @@ YOLOv6 also provides quantized models for different precisions and models optimi
This example provides simple YOLOv6 training and inference examples. For full documentation on these and other [modes](../modes/index.md) see the [Predict](../modes/predict.md), [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md) docs pages.
-!!! Example
+!!! example
=== "Python"
@@ -88,7 +88,7 @@ This table provides a detailed overview of the YOLOv6 model variants, highlighti
We would like to acknowledge the authors for their significant contributions in the field of real-time object detection:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -119,7 +119,7 @@ The Bi-directional Concatenation (BiC) module in YOLOv6 enhances localization si
You can train a YOLOv6 model using Ultralytics with simple Python or CLI commands. For instance:
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/models/yolov7.md b/docs/en/models/yolov7.md
index 54e9ea192c..b13467a974 100644
--- a/docs/en/models/yolov7.md
+++ b/docs/en/models/yolov7.md
@@ -98,7 +98,7 @@ We regret any inconvenience this may cause and will strive to update this docume
We would like to acknowledge the YOLOv7 authors for their significant contributions in the field of real-time object detection:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
diff --git a/docs/en/models/yolov8.md b/docs/en/models/yolov8.md
index aecbc157c0..c95902beba 100644
--- a/docs/en/models/yolov8.md
+++ b/docs/en/models/yolov8.md
@@ -48,7 +48,7 @@ This table provides an overview of the YOLOv8 model variants, highlighting their
## Performance Metrics
-!!! Performance
+!!! performance
=== "Detection (COCO)"
@@ -129,7 +129,7 @@ This example provides simple YOLOv8 training and inference examples. For full do
Note the below example is for YOLOv8 [Detect](../tasks/detect.md) models for object detection. For additional supported tasks see the [Segment](../tasks/segment.md), [Classify](../tasks/classify.md), [OBB](../tasks/obb.md) docs and [Pose](../tasks/pose.md) docs.
-!!! Example
+!!! example
=== "Python"
@@ -167,7 +167,7 @@ Note the below example is for YOLOv8 [Detect](../tasks/detect.md) models for obj
If you use the YOLOv8 model or any other software from this repository in your work, please cite it using the following format:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -203,7 +203,7 @@ YOLOv8 models achieve state-of-the-art performance across various benchmarking d
Training a YOLOv8 model can be done using either Python or CLI. Below are examples for training a model using a COCO-pretrained YOLOv8 model on the COCO8 dataset for 100 epochs:
-!!! Example
+!!! example
=== "Python"
@@ -229,7 +229,7 @@ For further details, visit the [Training](../modes/train.md) documentation.
Yes, YOLOv8 models can be benchmarked for performance in terms of speed and accuracy across various export formats. You can use PyTorch, ONNX, TensorRT, and more for benchmarking. Below are example commands for benchmarking using Python and CLI:
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/models/yolov9.md b/docs/en/models/yolov9.md
index 2a32176086..a83f85d3ee 100644
--- a/docs/en/models/yolov9.md
+++ b/docs/en/models/yolov9.md
@@ -128,7 +128,7 @@ YOLOv9 represents a pivotal development in real-time object detection, offering
This example provides simple YOLOv9 training and inference examples. For full documentation on these and other [modes](../modes/index.md) see the [Predict](../modes/predict.md), [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md) docs pages.
-!!! Example
+!!! example
=== "Python"
@@ -184,7 +184,7 @@ This table provides a detailed overview of the YOLOv9 model variants, highlighti
We would like to acknowledge the YOLOv9 authors for their significant contributions in the field of real-time object detection:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
diff --git a/docs/en/modes/benchmark.md b/docs/en/modes/benchmark.md
index 6b484dc429..8e24c0a0c2 100644
--- a/docs/en/modes/benchmark.md
+++ b/docs/en/modes/benchmark.md
@@ -43,7 +43,7 @@ Once your model is trained and validated, the next logical step is to evaluate i
- **OpenVINO:** For Intel hardware optimization
- **CoreML, TensorFlow SavedModel, and More:** For diverse deployment needs.
-!!! Tip "Tip"
+!!! tip "Tip"
* Export to ONNX or OpenVINO for up to 3x CPU speedup.
* Export to TensorRT for up to 5x GPU speedup.
@@ -52,7 +52,7 @@ Once your model is trained and validated, the next logical step is to evaluate i
Run YOLOv8n benchmarks on all supported export formats, including ONNX, TensorRT, etc. See the Arguments section below for a full list of export arguments.
-!!! Example
+!!! example
=== "Python"
@@ -97,7 +97,7 @@ See full `export` details in the [Export](../modes/export.md) page.
Ultralytics YOLOv8 offers a Benchmark mode to assess your model's performance across different export formats. This mode provides insights into key metrics such as mean Average Precision (mAP50-95), accuracy, and inference time in milliseconds. To run benchmarks, you can use either Python or CLI commands. For example, to benchmark on a GPU:
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/modes/export.md b/docs/en/modes/export.md
index 70305351b2..bd14bfd9b8 100644
--- a/docs/en/modes/export.md
+++ b/docs/en/modes/export.md
@@ -39,7 +39,7 @@ Here are some of the standout functionalities:
- **Optimized Inference:** Exported models are optimized for quicker inference times.
- **Tutorial Videos:** In-depth guides and tutorials for a smooth exporting experience.
-!!! Tip "Tip"
+!!! tip "Tip"
* Export to [ONNX](../integrations/onnx.md) or [OpenVINO](../integrations/openvino.md) for up to 3x CPU speedup.
* Export to [TensorRT](../integrations/tensorrt.md) for up to 5x GPU speedup.
@@ -48,7 +48,7 @@ Here are some of the standout functionalities:
Export a YOLOv8n model to a different format like ONNX or TensorRT. See Arguments section below for a full list of export arguments.
-!!! Example
+!!! example
=== "Python"
@@ -90,7 +90,7 @@ Available YOLOv8 export formats are in the table below. You can export to any fo
Exporting a YOLOv8 model to ONNX format is straightforward with Ultralytics. It provides both Python and CLI methods for exporting models.
-!!! Example
+!!! example
=== "Python"
@@ -128,7 +128,7 @@ To learn more about integrating TensorRT, see the [TensorRT](../integrations/ten
INT8 quantization is an excellent way to compress the model and speed up inference, especially on edge devices. Here's how you can enable INT8 quantization:
-!!! Example
+!!! example
=== "Python"
@@ -153,7 +153,7 @@ Dynamic input size allows the exported model to handle varying image dimensions,
To enable this feature, use the `dynamic=True` flag during export:
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/modes/index.md b/docs/en/modes/index.md
index 63871ee695..270ce4a8ee 100644
--- a/docs/en/modes/index.md
+++ b/docs/en/modes/index.md
@@ -78,7 +78,7 @@ Benchmark mode is used to profile the speed and accuracy of various export forma
Training a custom object detection model with Ultralytics YOLOv8 involves using the train mode. You need a dataset formatted in YOLO format, containing images and corresponding annotation files. Use the following command to start the training process:
-!!! Example
+!!! example
=== "Python"
@@ -108,7 +108,7 @@ Ultralytics YOLOv8 uses various metrics during the validation process to assess
You can run the following command to start the validation:
-!!! Example
+!!! example
=== "Python"
@@ -132,7 +132,7 @@ Refer to the [Validation Guide](../modes/val.md) for further details.
Ultralytics YOLOv8 offers export functionality to convert your trained model into various deployment formats such as ONNX, TensorRT, CoreML, and more. Use the following example to export your model:
-!!! Example
+!!! example
=== "Python"
@@ -156,7 +156,7 @@ Detailed steps for each export format can be found in the [Export Guide](../mode
Benchmark mode in Ultralytics YOLOv8 is used to analyze the speed and accuracy of various export formats such as ONNX, TensorRT, and OpenVINO. It provides metrics like model size, `mAP50-95` for object detection, and inference time across different hardware setups, helping you choose the most suitable format for your deployment needs.
-!!! Example
+!!! example
=== "Python"
@@ -179,7 +179,7 @@ For more details, refer to the [Benchmark Guide](../modes/benchmark.md).
Real-time object tracking can be achieved using the track mode in Ultralytics YOLOv8. This mode extends object detection capabilities to track objects across video frames or live feeds. Use the following example to enable tracking:
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/modes/predict.md b/docs/en/modes/predict.md
index c5e8bdb1d9..34741f94e2 100644
--- a/docs/en/modes/predict.md
+++ b/docs/en/modes/predict.md
@@ -50,7 +50,7 @@ YOLOv8's predict mode is designed to be robust and versatile, featuring:
Ultralytics YOLO models return either a Python list of `Results` objects, or a memory-efficient Python generator of `Results` objects when `stream=True` is passed to the model during inference:
-!!! Example "Predict"
+!!! example "Predict"
=== "Return a list with `stream=False`"
@@ -100,7 +100,7 @@ Ultralytics YOLO models return either a Python list of `Results` objects, or a m
YOLOv8 can process different types of input sources for inference, as shown in the table below. The sources include static images, video streams, and various data formats. The table also indicates whether each source can be used in streaming mode with the argument `stream=True` ✅. Streaming mode is beneficial for processing videos or live streams as it creates a generator of results instead of loading all frames into memory.
-!!! Tip "Tip"
+!!! tip "Tip"
Use `stream=True` for processing long videos or large datasets to efficiently manage memory. When `stream=False`, the results for all frames or data points are stored in memory, which can quickly add up and cause out-of-memory errors for large inputs. In contrast, `stream=True` utilizes a generator, which only keeps the results of the current frame or data point in memory, significantly reducing memory consumption and preventing out-of-memory issues.
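For illustration, a minimal sketch of the generator pattern (the video path is illustrative):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# stream=True yields one Results object at a time instead of a full list
for result in model("path/to/video.mp4", stream=True):
    boxes = result.boxes  # only the current frame's results are held in memory
```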
@@ -123,7 +123,7 @@ YOLOv8 can process different types of input sources for inference, as shown in t
Below are code examples for using each source type:
-!!! Example "Prediction sources"
+!!! example "Prediction sources"
=== "image"
@@ -351,7 +351,7 @@ Below are code examples for using each source type:
`model.predict()` accepts multiple arguments that can be passed at inference time to override defaults:
-!!! Example
+!!! example
```python
from ultralytics import YOLO
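# A sketch of overriding defaults at inference time (values are illustrative):
model = YOLO("yolov8n.pt")
results = model.predict(source="bus.jpg", conf=0.5, iou=0.7, imgsz=640, save=True)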
@@ -442,7 +442,7 @@ The below table contains valid Ultralytics video formats.
All Ultralytics `predict()` calls will return a list of `Results` objects:
-!!! Example "Results"
+!!! example "Results"
```python
from ultralytics import YOLO
@@ -494,7 +494,7 @@ For more details see the [`Results` class documentation](../reference/engine/res
`Boxes` object can be used to index, manipulate, and convert bounding boxes to different formats.
-!!! Example "Boxes"
+!!! example "Boxes"
```python
from ultralytics import YOLO
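# A sketch of common Boxes accessors (image path is illustrative):
model = YOLO("yolov8n.pt")
results = model("bus.jpg")
boxes = results[0].boxes
print(boxes.xyxy)   # boxes in (x1, y1, x2, y2) format
print(boxes.xywhn)  # boxes in normalized (x, y, width, height) format
print(boxes.conf)   # confidence scores
print(boxes.cls)    # class indices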
@@ -532,7 +532,7 @@ For more details see the [`Boxes` class documentation](../reference/engine/resul
`Masks` object can be used to index, manipulate, and convert masks to segments.
-!!! Example "Masks"
+!!! example "Masks"
```python
from ultralytics import YOLO
@@ -565,7 +565,7 @@ For more details see the [`Masks` class documentation](../reference/engine/resul
`Keypoints` object can be used to index, manipulate, and normalize coordinates.
-!!! Example "Keypoints"
+!!! example "Keypoints"
```python
from ultralytics import YOLO
@@ -599,7 +599,7 @@ For more details see the [`Keypoints` class documentation](../reference/engine/r
`Probs` object can be used to index, and to get `top1` and `top5` indices and scores of classification.
-!!! Example "Probs"
+!!! example "Probs"
```python
from ultralytics import YOLO
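# A sketch of Probs accessors with a classification model (image is illustrative):
model = YOLO("yolov8n-cls.pt")
results = model("bus.jpg")
probs = results[0].probs
print(probs.top1, probs.top1conf)  # best class index and its confidence
print(probs.top5, probs.top5conf)  # top-5 class indices and confidences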
@@ -634,7 +634,7 @@ For more details see the [`Probs` class documentation](../reference/engine/resul
`OBB` object can be used to index, manipulate, and convert oriented bounding boxes to different formats.
-!!! Example "OBB"
+!!! example "OBB"
```python
from ultralytics import YOLO
@@ -672,7 +672,7 @@ For more details see the [`OBB` class documentation](../reference/engine/results
The `plot()` method in `Results` objects facilitates visualization of predictions by overlaying detected objects (such as bounding boxes, masks, keypoints, and probabilities) onto the original image. This method returns the annotated image as a NumPy array, allowing for easy display or saving.
-!!! Example "Plotting"
+!!! example "Plotting"
```python
from PIL import Image
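from ultralytics import YOLO

# A sketch of plot() usage (image path is illustrative):
model = YOLO("yolov8n.pt")
results = model("bus.jpg")
for r in results:
    im_bgr = r.plot()  # annotated image as a BGR-order NumPy array
    im_rgb = Image.fromarray(im_bgr[..., ::-1])  # reverse channels for an RGB PIL image
    im_rgb.show()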
@@ -728,7 +728,7 @@ Ensuring thread safety during inference is crucial when you are running multiple
When using YOLO models in a multi-threaded application, it's important to instantiate separate model objects for each thread or employ thread-local storage to prevent conflicts:
-!!! Example "Thread-Safe Inference"
+!!! example "Thread-Safe Inference"
Instantiate a single model inside each thread for thread-safe inference:
```python
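# A minimal sketch: each thread constructs its own model (paths illustrative)
from threading import Thread

from ultralytics import YOLO


def thread_safe_predict(image_path):
    """Instantiate the model inside the thread so no state is shared."""
    model = YOLO("yolov8n.pt")
    results = model.predict(image_path)
    return results


Thread(target=thread_safe_predict, args=("image1.jpg",)).start()
Thread(target=thread_safe_predict, args=("image2.jpg",)).start()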
@@ -755,7 +755,7 @@ For an in-depth look at thread-safe inference with YOLO models and step-by-step
Here's a Python script using OpenCV (`cv2`) and YOLOv8 to run inference on video frames. This script assumes you have already installed the necessary packages (`opencv-python` and `ultralytics`).
-!!! Example "Streaming for-loop"
+!!! example "Streaming for-loop"
```python
import cv2
diff --git a/docs/en/modes/track.md b/docs/en/modes/track.md
index d79e7c7a06..cfeb8c9084 100644
--- a/docs/en/modes/track.md
+++ b/docs/en/modes/track.md
@@ -56,13 +56,13 @@ The default tracker is BoT-SORT.
## Tracking
-!!! Warning "Tracker Threshold Information"
+!!! warning "Tracker Threshold Information"
If the object confidence score is low, i.e. lower than [`track_high_thresh`](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/trackers/bytetrack.yaml#L5), then no tracks will be successfully returned or updated.
To run the tracker on video streams, use a trained Detect, Segment or Pose model such as YOLOv8n, YOLOv8n-seg and YOLOv8n-pose.
-!!! Example
+!!! example
=== "Python"
@@ -97,7 +97,7 @@ As can be seen in the above usage, tracking is available for all Detect, Segment
## Configuration
-!!! Warning "Tracker Threshold Information"
+!!! warning "Tracker Threshold Information"
If the object confidence score is low, i.e. lower than [`track_high_thresh`](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/trackers/bytetrack.yaml#L5), then no tracks will be successfully returned or updated.
@@ -105,7 +105,7 @@ As can be seen in the above usage, tracking is available for all Detect, Segment
Tracking configuration shares properties with Predict mode, such as `conf`, `iou`, and `show`. For further configurations, refer to the [Predict](../modes/predict.md#inference-arguments) model page.
-!!! Example
+!!! example
=== "Python"
@@ -128,7 +128,7 @@ Tracking configuration shares properties with Predict mode, such as `conf`, `iou
Ultralytics also allows you to use a modified tracker configuration file. To do this, simply make a copy of a tracker config file (for example, `custom_tracker.yaml`) from [ultralytics/cfg/trackers](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/trackers) and modify any configurations (except the `tracker_type`) as per your needs.
-!!! Example
+!!! example
=== "Python"
@@ -155,7 +155,7 @@ For a comprehensive list of tracking arguments, refer to the [ultralytics/cfg/tr
Here is a Python script using OpenCV (`cv2`) and YOLOv8 to run object tracking on video frames. This script assumes you have already installed the necessary packages (`opencv-python` and `ultralytics`). The `persist=True` argument tells the tracker that the current image or frame is the next in a sequence, so it should expect tracks from the previous image in the current image.
-!!! Example "Streaming for-loop with tracking"
+!!! example "Streaming for-loop with tracking"
```python
import cv2
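# A possible continuation of this script (a sketch; the video path is hypothetical):
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video.mp4")

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    # persist=True tells the tracker this frame follows the previous one
    results = model.track(frame, persist=True)
    annotated_frame = results[0].plot()
    cv2.imshow("YOLOv8 Tracking", annotated_frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()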
@@ -204,7 +204,7 @@ Visualizing object tracks over consecutive frames can provide valuable insights
In the following example, we demonstrate how to utilize YOLOv8's tracking capabilities to plot the movement of detected objects across multiple video frames. This script involves opening a video file, reading it frame by frame, and utilizing the YOLO model to identify and track various objects. By retaining the center points of the detected bounding boxes and connecting them, we can draw lines that represent the paths followed by the tracked objects.
-!!! Example "Plotting tracks over multiple video frames"
+!!! example "Plotting tracks over multiple video frames"
```python
from collections import defaultdict
@@ -281,7 +281,7 @@ The `daemon=True` parameter in `threading.Thread` means that these threads will
Finally, after all threads have completed their task, the windows displaying the results are closed using `cv2.destroyAllWindows()`.
-!!! Example "Streaming for-loop with tracking"
+!!! example "Streaming for-loop with tracking"
```python
import threading
@@ -378,7 +378,7 @@ Multi-object tracking in video analytics involves both identifying objects and m
You can configure a custom tracker by copying an existing tracker configuration file (e.g., `custom_tracker.yaml`) from the [Ultralytics tracker configuration directory](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/trackers) and modifying parameters as needed, except for the `tracker_type`. Use this file in your tracking model like so:
-!!! Example
+!!! example
=== "Python"
@@ -399,7 +399,7 @@ You can configure a custom tracker by copying an existing tracker configuration
To run object tracking on multiple video streams simultaneously, you can use Python's `threading` module. Each thread will handle a separate video stream. Here's an example of how you can set this up:
-!!! Example "Multithreaded Tracking"
+!!! example "Multithreaded Tracking"
```python
import threading
@@ -454,7 +454,7 @@ These applications benefit from Ultralytics YOLO's ability to process high-frame
To visualize object tracks over multiple video frames, you can use the YOLO model's tracking features along with OpenCV to draw the paths of detected objects. Here's an example script that demonstrates this:
-!!! Example "Plotting tracks over multiple video frames"
+!!! example "Plotting tracks over multiple video frames"
```python
from collections import defaultdict
diff --git a/docs/en/modes/train.md b/docs/en/modes/train.md
index 56d6004fa9..4e5fcb05b5 100644
--- a/docs/en/modes/train.md
+++ b/docs/en/modes/train.md
@@ -41,7 +41,7 @@ The following are some notable features of YOLOv8's Train mode:
- **Hyperparameter Configuration:** The option to modify hyperparameters through YAML configuration files or CLI arguments.
- **Visualization and Monitoring:** Real-time tracking of training metrics and visualization of the learning process for better insights.
-!!! Tip "Tip"
+!!! tip "Tip"
* YOLOv8 datasets like COCO, VOC, ImageNet and many others automatically download on first use, e.g. `yolo train data=coco.yaml`
@@ -49,7 +49,7 @@ The following are some notable features of YOLOv8's Train mode:
Train YOLOv8n on the COCO8 dataset for 100 epochs at image size 640. The training device can be specified using the `device` argument. If no argument is passed, GPU `device=0` will be used if available; otherwise `device='cpu'` will be used. See the Arguments section below for a full list of training arguments.
-!!! Example "Single-GPU and CPU Training Example"
+!!! example "Single-GPU and CPU Training Example"
Device is determined automatically. If a GPU is available then it will be used, otherwise training will start on CPU.
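A sketch of explicit device selection:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# device=0 targets the first GPU; device="cpu" forces CPU training
results = model.train(data="coco8.yaml", epochs=100, imgsz=640, device=0)
```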
@@ -84,7 +84,7 @@ Train YOLOv8n on the COCO8 dataset for 100 epochs at image size 640. The trainin
Multi-GPU training allows for more efficient utilization of available hardware resources by distributing the training load across multiple GPUs. This feature is available through both the Python API and the command-line interface. To enable multi-GPU training, specify the GPU device IDs you wish to use.
-!!! Example "Multi-GPU Training Example"
+!!! example "Multi-GPU Training Example"
To train with 2 GPUs, CUDA devices 0 and 1, use the following commands. Expand to additional GPUs as required.
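In Python this might look like the following sketch:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Distribute training across CUDA devices 0 and 1
results = model.train(data="coco8.yaml", epochs=100, imgsz=640, device=[0, 1])
```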
@@ -113,7 +113,7 @@ With the support for Apple M1 and M2 chips integrated in the Ultralytics YOLO mo
To enable training on Apple M1 and M2 chips, you should specify 'mps' as your device when initiating the training process. Below is an example of how you could do this in Python and via the command line:
-!!! Example "MPS Training Example"
+!!! example "MPS Training Example"
=== "Python"
@@ -146,7 +146,7 @@ You can easily resume training in Ultralytics YOLO by setting the `resume` argum
Below is an example of how to resume an interrupted training using Python and via the command line:
-!!! Example "Resume Training Example"
+!!! example "Resume Training Example"
=== "Python"
@@ -276,7 +276,7 @@ To use a logger, select it from the dropdown menu in the code snippet above and
To use Comet:
-!!! Example
+!!! example
=== "Python"
@@ -295,7 +295,7 @@ Remember to sign in to your Comet account on their website and get your API key.
To use ClearML:
-!!! Example
+!!! example
=== "Python"
@@ -314,7 +314,7 @@ After running this script, you will need to sign in to your ClearML account on t
To use TensorBoard in [Google Colab](https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb):
-!!! Example
+!!! example
=== "CLI"
@@ -325,7 +325,7 @@ To use TensorBoard in [Google Colab](https://colab.research.google.com/github/ul
To use TensorBoard locally, run the command below and view results at http://localhost:6006/.
-!!! Example
+!!! example
=== "CLI"
@@ -343,7 +343,7 @@ After setting up your logger, you can then proceed with your model training. All
To train an object detection model using Ultralytics YOLOv8, you can either use the Python API or the CLI. Below is an example for both:
-!!! Example "Single-GPU and CPU Training Example"
+!!! example "Single-GPU and CPU Training Example"
=== "Python"
@@ -380,7 +380,7 @@ These features make training efficient and customizable to your needs. For more
To resume training from an interrupted session, set the `resume` argument to `True` and specify the path to the last saved checkpoint.
-!!! Example "Resume Training Example"
+!!! example "Resume Training Example"
=== "Python"
@@ -406,7 +406,7 @@ Check the section on [Resuming Interrupted Trainings](#resuming-interrupted-trai
Yes, Ultralytics YOLOv8 supports training on Apple M1 and M2 chips utilizing the Metal Performance Shaders (MPS) framework. Specify 'mps' as your training device.
-!!! Example "MPS Training Example"
+!!! example "MPS Training Example"
=== "Python"
diff --git a/docs/en/modes/val.md b/docs/en/modes/val.md
index b24f82dee0..c5c496a630 100644
--- a/docs/en/modes/val.md
+++ b/docs/en/modes/val.md
@@ -41,7 +41,7 @@ These are the notable functionalities offered by YOLOv8's Val mode:
- **CLI and Python API:** Choose from command-line interface or Python API based on your preference for validation.
- **Data Compatibility:** Works seamlessly with datasets used during the training phase as well as custom datasets.
-!!! Tip "Tip"
+!!! tip "Tip"
* YOLOv8 models automatically remember their training settings, so you can validate a model at the same image size and on the original dataset easily with just `yolo val model=yolov8n.pt` or `model('yolov8n.pt').val()`
@@ -49,7 +49,7 @@ These are the notable functionalities offered by YOLOv8's Val mode:
Validate trained YOLOv8n model accuracy on the COCO8 dataset. No arguments need to be passed, as the `model` retains its training `data` and arguments as model attributes. See the Arguments section below for a full list of validation arguments.
-!!! Example
+!!! example
=== "Python"
@@ -102,7 +102,7 @@ Each of these settings plays a vital role in the validation process, allowing fo
The below examples showcase YOLO model validation with custom arguments in Python and CLI.
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/quickstart.md b/docs/en/quickstart.md
index 5f41a7a0bb..f8d3a7cf85 100644
--- a/docs/en/quickstart.md
+++ b/docs/en/quickstart.md
@@ -19,7 +19,7 @@ Ultralytics provides various installation methods including pip, conda, and Dock
Watch: Ultralytics YOLO Quick Start Guide
-!!! Example "Install"
+!!! example "Install"
![PyPI - Python Version](https://img.shields.io/pypi/pyversions/ultralytics?logo=python&logoColor=gold)
@@ -56,7 +56,7 @@ Ultralytics provides various installation methods including pip, conda, and Dock
conda install -c conda-forge ultralytics
```
- !!! Note
+ !!! note
If you are installing in a CUDA environment, the best practice is to install `ultralytics`, `pytorch` and `pytorch-cuda` in the same command so that the conda package manager can resolve any conflicts, or else to install `pytorch-cuda` last to allow it to override the CPU-specific `pytorch` package if necessary.
```bash
@@ -141,7 +141,7 @@ Ultralytics provides various installation methods including pip, conda, and Dock
See the `ultralytics` [pyproject.toml](https://github.com/ultralytics/ultralytics/blob/main/pyproject.toml) file for a list of dependencies. Note that all examples above install all required dependencies.
-!!! Tip "Tip"
+!!! tip "Tip"
PyTorch requirements vary by operating system and CUDA requirements, so it's recommended to install PyTorch first following instructions at [https://pytorch.org/get-started/locally](https://pytorch.org/get-started/locally/).
@@ -153,7 +153,7 @@ See the `ultralytics` [pyproject.toml](https://github.com/ultralytics/ultralytic
The Ultralytics command line interface (CLI) allows for simple single-line commands without the need for a Python environment. CLI requires no customization or Python code. You can simply run all tasks from the terminal with the `yolo` command. Check out the [CLI Guide](usage/cli.md) to learn more about using YOLOv8 from the command line.
-!!! Example
+!!! example
=== "Syntax"
@@ -208,7 +208,7 @@ The Ultralytics command line interface (CLI) allows for simple single-line comma
yolo cfg
```
-!!! Warning "Warning"
+!!! warning "Warning"
Arguments must be passed as `arg=val` pairs, split by an equals `=` sign and delimited by spaces between pairs. Do not use `--` argument prefixes or commas `,` between arguments.
@@ -225,7 +225,7 @@ YOLOv8's Python interface allows for seamless integration into your Python proje
For example, users can load a model, train it, evaluate its performance on a validation set, and even export it to ONNX format with just a few lines of code. Check out the [Python Guide](usage/python.md) to learn more about using YOLOv8 within your Python projects.
-!!! Example
+!!! example
```python
from ultralytics import YOLO
@@ -259,7 +259,7 @@ The Ultralytics library provides a powerful settings management system to enable
To gain insight into the current configuration of your settings, you can view them directly:
-!!! Example "View settings"
+!!! example "View settings"
=== "Python"
@@ -285,7 +285,7 @@ To gain insight into the current configuration of your settings, you can view th
Ultralytics allows users to easily modify their settings. Changes can be performed in the following ways:
-!!! Example "Update settings"
+!!! example "Update settings"
=== "Python"
diff --git a/docs/en/tasks/classify.md b/docs/en/tasks/classify.md
index 6fa0fabbfe..07a9cc4ecb 100644
--- a/docs/en/tasks/classify.md
+++ b/docs/en/tasks/classify.md
@@ -24,7 +24,7 @@ The output of an image classifier is a single class label and a confidence score
Watch: Explore Ultralytics YOLO Tasks: Image Classification using Ultralytics HUB
-!!! Tip "Tip"
+!!! tip
YOLOv8 Classify models use the `-cls` suffix, i.e. `yolov8n-cls.pt`, and are pretrained on [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/ImageNet.yaml).
@@ -49,7 +49,7 @@ YOLOv8 pretrained Classify models are shown here. Detect, Segment and Pose model
Train YOLOv8n-cls on the MNIST160 dataset for 100 epochs at image size 64. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
-!!! Example
+!!! example
=== "Python"
@@ -86,7 +86,7 @@ YOLO classification dataset format can be found in detail in the [Dataset Guide]
Validate trained YOLOv8n-cls model accuracy on the MNIST160 dataset. No arguments need to be passed, as the `model` retains its training `data` and arguments as model attributes.
-!!! Example
+!!! example
=== "Python"
@@ -114,7 +114,7 @@ Validate trained YOLOv8n-cls model accuracy on the MNIST160 dataset. No argument
Use a trained YOLOv8n-cls model to run predictions on images.
-!!! Example
+!!! example
=== "Python"
@@ -142,7 +142,7 @@ See full `predict` mode details in the [Predict](../modes/predict.md) page.
Export a YOLOv8n-cls model to a different format like ONNX, CoreML, etc.
-!!! Example
+!!! example
=== "Python"
@@ -180,7 +180,7 @@ YOLOv8 models, such as `yolov8n-cls.pt`, are designed for efficient image classi
To train a YOLOv8 model, you can use either Python or CLI commands. For example, to train a `yolov8n-cls` model on the MNIST160 dataset for 100 epochs at an image size of 64:
-!!! Example
+!!! example
=== "Python"
@@ -210,7 +210,7 @@ Pretrained YOLOv8 classification models can be found in the [Models](https://git
You can export a trained YOLOv8 model to various formats using Python or CLI commands. For instance, to export a model to ONNX format:
-!!! Example
+!!! example
=== "Python"
@@ -236,7 +236,7 @@ For detailed export options, refer to the [Export](../modes/export.md) page.
To validate a trained model's accuracy on a dataset like MNIST160, you can use the following Python or CLI commands:
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/tasks/detect.md b/docs/en/tasks/detect.md
index d5b4e1f0dc..d3542f184d 100644
--- a/docs/en/tasks/detect.md
+++ b/docs/en/tasks/detect.md
@@ -23,7 +23,7 @@ The output of an object detector is a set of bounding boxes that enclose the obj
Watch: Object Detection with Pre-trained Ultralytics YOLOv8 Model.
-!!! Tip "Tip"
+!!! tip
YOLOv8 Detect models are the default YOLOv8 models, i.e. `yolov8n.pt`, and are pretrained on [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml).
@@ -48,7 +48,7 @@ YOLOv8 pretrained Detect models are shown here. Detect, Segment and Pose models
Train YOLOv8n on the COCO8 dataset for 100 epochs at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
-!!! Example
+!!! example
=== "Python"
@@ -85,7 +85,7 @@ YOLO detection dataset format can be found in detail in the [Dataset Guide](../d
Validate trained YOLOv8n model accuracy on the COCO8 dataset. No arguments need to be passed, as the `model` retains its training `data` and arguments as model attributes.
-!!! Example
+!!! example
=== "Python"
@@ -115,7 +115,7 @@ Validate trained YOLOv8n model accuracy on the COCO8 dataset. No argument need t
Use a trained YOLOv8n model to run predictions on images.
-!!! Example
+!!! example
=== "Python"
@@ -143,7 +143,7 @@ See full `predict` mode details in the [Predict](../modes/predict.md) page.
Export a YOLOv8n model to a different format like ONNX, CoreML, etc.
-!!! Example
+!!! example
=== "Python"
@@ -181,7 +181,7 @@ Training a YOLOv8 model on a custom dataset involves a few steps:
2. **Load the Model**: Use the Ultralytics YOLO library to load a pre-trained model or create a new model from a YAML file.
3. **Train the Model**: Execute the `train` method in Python or the `yolo detect train` command in CLI.
-!!! Example
+!!! example
=== "Python"
@@ -219,7 +219,7 @@ For a detailed list and performance metrics, refer to the [Models](https://githu
To validate the accuracy of your trained YOLOv8 model, you can use the `.val()` method in Python or the `yolo detect val` command in CLI. This will provide metrics like mAP50-95, mAP50, and more.
-!!! Example
+!!! example
=== "Python"
@@ -246,7 +246,7 @@ For more validation details, visit the [Val](../modes/val.md) page.
Ultralytics YOLOv8 allows exporting models to various formats such as ONNX, TensorRT, CoreML, and more to ensure compatibility across different platforms and devices.
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/tasks/index.md b/docs/en/tasks/index.md
index e52fd53700..5b7b75afd6 100644
--- a/docs/en/tasks/index.md
+++ b/docs/en/tasks/index.md
@@ -76,7 +76,7 @@ To use Ultralytics YOLOv8 for object detection, follow these steps:
2. Train the YOLOv8 model using the detection task.
3. Use the model to make predictions by feeding in new images or video frames.
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/tasks/obb.md b/docs/en/tasks/obb.md
index d49a289beb..a603ab1a84 100644
--- a/docs/en/tasks/obb.md
+++ b/docs/en/tasks/obb.md
@@ -15,7 +15,7 @@ The output of an oriented object detector is a set of rotated bounding boxes tha
-!!! Tip "Tip"
+!!! tip
YOLOv8 OBB models use the `-obb` suffix, i.e. `yolov8n-obb.pt`, and are pretrained on [DOTAv1](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/DOTAv1.yaml).
@@ -69,7 +69,7 @@ YOLOv8 pretrained OBB models are shown here, which are pretrained on the [DOTAv1
Train YOLOv8n-obb on the `dota8.yaml` dataset for 100 epochs at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
-!!! Example
+!!! example
=== "Python"
@@ -107,7 +107,7 @@ OBB dataset format can be found in detail in the [Dataset Guide](../datasets/obb
Validate trained YOLOv8n-obb model accuracy on the DOTA8 dataset. No arguments need to be passed, as the `model` retains its training `data` and arguments as model attributes.
-!!! Example
+!!! example
=== "Python"
@@ -137,7 +137,7 @@ retains its training `data` and arguments as model attributes.
Use a trained YOLOv8n-obb model to run predictions on images.
-!!! Example
+!!! example
=== "Python"
@@ -165,7 +165,7 @@ See full `predict` mode details in the [Predict](../modes/predict.md) page.
Export a YOLOv8n-obb model to a different format like ONNX, CoreML, etc.
-!!! Example
+!!! example
=== "Python"
@@ -203,7 +203,7 @@ Oriented Bounding Boxes (OBB) include an additional angle to enhance object loca
To train a YOLOv8n-obb model with a custom dataset, follow the example below using Python or CLI:
-!!! Example
+!!! example
=== "Python"
@@ -233,7 +233,7 @@ YOLOv8-OBB models are pretrained on datasets like [DOTAv1](https://github.com/ul
Exporting a YOLOv8-OBB model to ONNX format is straightforward using either Python or CLI:
-!!! Example
+!!! example
=== "Python"
@@ -259,7 +259,7 @@ For more export formats and details, refer to the [Export](../modes/export.md) p
To validate a YOLOv8n-obb model, you can use Python or CLI commands as shown below:
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/tasks/pose.md b/docs/en/tasks/pose.md
index ffa0a39ffb..81384cc951 100644
--- a/docs/en/tasks/pose.md
+++ b/docs/en/tasks/pose.md
@@ -36,7 +36,7 @@ The output of a pose estimation model is a set of points that represent the keyp
-!!! Tip "Tip"
+!!! tip
YOLOv8 _pose_ models use the `-pose` suffix, i.e. `yolov8n-pose.pt`. These models are trained on the [COCO keypoints](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco-pose.yaml) dataset and are suitable for a variety of pose estimation tasks.
@@ -82,7 +82,7 @@ YOLOv8 pretrained Pose models are shown here. Detect, Segment and Pose models ar
Train a YOLOv8-pose model on the COCO128-pose dataset.
-!!! Example
+!!! example
=== "Python"
@@ -120,7 +120,7 @@ YOLO pose dataset format can be found in detail in the [Dataset Guide](../datase
Validate trained YOLOv8n-pose model accuracy on the COCO128-pose dataset. No arguments need to be passed, as the `model` retains its training `data` and arguments as model attributes.
-!!! Example
+!!! example
=== "Python"
@@ -150,7 +150,7 @@ retains its training `data` and arguments as model attributes.
Use a trained YOLOv8n-pose model to run predictions on images.
-!!! Example
+!!! example
=== "Python"
@@ -178,7 +178,7 @@ See full `predict` mode details in the [Predict](../modes/predict.md) page.
Export a YOLOv8n Pose model to a different format like ONNX, CoreML, etc.
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/tasks/segment.md b/docs/en/tasks/segment.md
index 96090fb2ee..56a3b3ba24 100644
--- a/docs/en/tasks/segment.md
+++ b/docs/en/tasks/segment.md
@@ -24,7 +24,7 @@ The output of an instance segmentation model is a set of masks or contours that
Watch: Run Segmentation with Pre-Trained Ultralytics YOLOv8 Model in Python.
-!!! Tip "Tip"
+!!! tip
YOLOv8 Segment models use the `-seg` suffix, i.e. `yolov8n-seg.pt`, and are pretrained on [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml).
@@ -49,7 +49,7 @@ YOLOv8 pretrained Segment models are shown here. Detect, Segment and Pose models
Train YOLOv8n-seg on the COCO128-seg dataset for 100 epochs at image size 640. For a full list of available arguments see the [Configuration](../usage/cfg.md) page.
-!!! Example
+!!! example
=== "Python"
@@ -87,7 +87,7 @@ YOLO segmentation dataset format can be found in detail in the [Dataset Guide](.
Validate trained YOLOv8n-seg model accuracy on the COCO128-seg dataset. No arguments need to be passed, as the `model` retains its training `data` and arguments as model attributes.
-!!! Example
+!!! example
=== "Python"
@@ -121,7 +121,7 @@ retains its training `data` and arguments as model attributes.
Use a trained YOLOv8n-seg model to run predictions on images.
-!!! Example
+!!! example
=== "Python"
@@ -149,7 +149,7 @@ See full `predict` mode details in the [Predict](../modes/predict.md) page.
Export a YOLOv8n-seg model to a different format like ONNX, CoreML, etc.
-!!! Example
+!!! example
=== "Python"
@@ -183,7 +183,7 @@ See full `export` details in the [Export](../modes/export.md) page.
To train a YOLOv8 segmentation model on a custom dataset, you first need to prepare your dataset in the YOLO segmentation format. You can use tools like [JSON2YOLO](https://github.com/ultralytics/JSON2YOLO) to convert datasets from other formats. Once your dataset is ready, you can train the model using Python or CLI commands:
-!!! Example
+!!! example
=== "Python"
@@ -217,7 +217,7 @@ Ultralytics YOLOv8 is a state-of-the-art model recognized for its high accuracy
Loading and validating a pretrained YOLOv8 segmentation model is straightforward. Here's how you can do it using both Python and CLI:
-!!! Example
+!!! example
=== "Python"
@@ -245,7 +245,7 @@ These steps will provide you with validation metrics like Mean Average Precision
Exporting a YOLOv8 segmentation model to ONNX format is simple and can be done using Python or CLI commands:
-!!! Example
+!!! example
=== "Python"
diff --git a/docs/en/usage/cfg.md b/docs/en/usage/cfg.md
index 299c8acd7a..0f8776cd21 100644
--- a/docs/en/usage/cfg.md
+++ b/docs/en/usage/cfg.md
@@ -19,7 +19,7 @@ YOLO settings and hyperparameters play a critical role in the model's performanc
Ultralytics commands use the following syntax:
-!!! Example
+!!! example
=== "CLI"
diff --git a/docs/en/usage/cli.md b/docs/en/usage/cli.md
index ba649cbf82..d2ba49277c 100644
--- a/docs/en/usage/cli.md
+++ b/docs/en/usage/cli.md
@@ -19,7 +19,7 @@ The YOLO command line interface (CLI) allows for simple single-line commands wit
Watch: Mastering Ultralytics YOLOv8: CLI
-!!! Example
+!!! example
=== "Syntax"
@@ -79,7 +79,7 @@ Where:
- `MODE` (required) is one of `[train, val, predict, export, track, benchmark]`
- `ARGS` (optional) are any number of custom `arg=value` pairs like `imgsz=320` that override defaults. For a full list of available `ARGS` see the [Configuration](cfg.md) page and `defaults.yaml`
-!!! Warning "Warning"
+!!! warning "Warning"
Arguments must be passed as `arg=val` pairs, split by an equals `=` sign and delimited by spaces ` ` between pairs. Do not use `--` argument prefixes or commas `,` between arguments.
@@ -91,7 +91,7 @@ Where:
Train YOLOv8n on the COCO8 dataset for 100 epochs at image size 640. For a full list of available arguments see the [Configuration](cfg.md) page.
-!!! Example "Example"
+!!! example "Example"
=== "Train"
@@ -111,7 +111,7 @@ Train YOLOv8n on the COCO8 dataset for 100 epochs at image size 640. For a full
Validate trained YOLOv8n model accuracy on the COCO8 dataset. No arguments need to be passed, as the `model` retains its training `data` and arguments as model attributes.
-!!! Example "Example"
+!!! example "Example"
=== "Official"
@@ -131,7 +131,7 @@ Validate trained YOLOv8n model accuracy on the COCO8 dataset. No argument need t
Use a trained YOLOv8n model to run predictions on images.
-!!! Example "Example"
+!!! example "Example"
=== "Official"
@@ -151,7 +151,7 @@ Use a trained YOLOv8n model to run predictions on images.
Export a YOLOv8n model to a different format like ONNX, CoreML, etc.
-!!! Example "Example"
+!!! example "Example"
=== "Official"
@@ -177,7 +177,7 @@ See full `export` details in the [Export](../modes/export.md) page.
Default arguments can be overridden by simply passing them as arguments in the CLI in `arg=value` pairs.
-!!! Tip ""
+!!! tip ""
=== "Train"
@@ -208,7 +208,7 @@ To do this first create a copy of `default.yaml` in your current working dir wit
This will create `default_copy.yaml`, which you can then pass as `cfg=default_copy.yaml` along with any additional args, like `imgsz=320` in this example:
-!!! Example
+!!! example
=== "CLI"
diff --git a/docs/en/usage/python.md b/docs/en/usage/python.md
index 90b2fcfdac..66b2e6c61d 100644
--- a/docs/en/usage/python.md
+++ b/docs/en/usage/python.md
@@ -21,7 +21,7 @@ Welcome to the YOLOv8 Python Usage documentation! This guide is designed to help
For example, users can load a model, train it, evaluate its performance on a validation set, and even export it to ONNX format with just a few lines of code.
-!!! Example "Python"
+!!! example "Python"
```python
from ultralytics import YOLO
@@ -49,7 +49,7 @@ For example, users can load a model, train it, evaluate its performance on a val
Train mode is used for training a YOLOv8 model on a custom dataset. In this mode, the model is trained using the specified dataset and hyperparameters. The training process involves optimizing the model's parameters so that it can accurately predict the classes and locations of objects in an image.
-!!! Example "Train"
+!!! example "Train"
=== "From pretrained(recommended)"
@@ -82,7 +82,7 @@ Train mode is used for training a YOLOv8 model on a custom dataset. In this mode
Val mode is used for validating a YOLOv8 model after it has been trained. In this mode, the model is evaluated on a validation set to measure its accuracy and generalization performance. This mode can be used to tune the hyperparameters of the model to improve its performance.
-!!! Example "Val"
+!!! example "Val"
=== "Val after training"
@@ -120,7 +120,7 @@ Val mode is used for validating a YOLOv8 model after it has been trained. In thi
Predict mode is used for making predictions using a trained YOLOv8 model on new images or videos. In this mode, the model is loaded from a checkpoint file, and the user can provide images or videos to perform inference. The model predicts the classes and locations of objects in the input images or videos.
-!!! Example "Predict"
+!!! example "Predict"
=== "From source"
@@ -191,7 +191,7 @@ Predict mode is used for making predictions using a trained YOLOv8 model on new
Export mode is used for exporting a YOLOv8 model to a format that can be used for deployment. In this mode, the model is converted to a format that can be used by other software applications or hardware devices. This mode is useful when deploying the model to production environments.
-!!! Example "Export"
+!!! example "Export"
=== "Export to ONNX"
@@ -219,7 +219,7 @@ Export mode is used for exporting a YOLOv8 model to a format that can be used fo
Track mode is used for tracking objects in real-time using a YOLOv8 model. In this mode, the model is loaded from a checkpoint file, and the user can provide a live video stream to perform real-time object tracking. This mode is useful for applications such as surveillance systems or self-driving cars.
-!!! Example "Track"
+!!! example "Track"
=== "Python"
@@ -242,7 +242,7 @@ Track mode is used for tracking objects in real-time using a YOLOv8 model. In th
Benchmark mode is used to profile the speed and accuracy of various export formats for YOLOv8. The benchmarks provide information on the size of the exported format, its `mAP50-95` metrics (for object detection and segmentation) or `accuracy_top5` metrics (for classification), and the inference time in milliseconds per image across various export formats like ONNX, OpenVINO, TensorRT and others. This information can help users choose the optimal export format for their specific use case based on their requirements for speed and accuracy.
-!!! Example "Benchmark"
+!!! example "Benchmark"
=== "Python"
@@ -260,7 +260,7 @@ Benchmark mode is used to profile the speed and accuracy of various export forma
The Explorer API can be used to explore datasets with advanced semantic, vector-similarity, and SQL search, among other features. It also enables searching for images based on their content using natural language, utilizing the power of LLMs. The Explorer API allows you to write your own dataset exploration notebooks or scripts to get insights into your datasets.
-!!! Example "Semantic Search Using Explorer"
+!!! example "Semantic Search Using Explorer"
=== "Using Images"
@@ -304,7 +304,7 @@ Explorer API can be used to explore datasets with advanced semantic, vector-simi
The `YOLO` model class is a high-level wrapper around the Trainer classes. Each YOLO task has its own trainer that inherits from `BaseTrainer`.
-!!! Tip "Detection Trainer Example"
+!!! tip "Detection Trainer Example"
```python
from ultralytics.models.yolo import DetectionPredictor, DetectionTrainer, DetectionValidator
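# A possible continuation (sketch): run a task Trainer directly with overrides
args = dict(model="yolov8n.pt", data="coco8.yaml", epochs=3)
trainer = DetectionTrainer(overrides=args)
trainer.train()
best_checkpoint = trainer.best  # path to the best checkpoint after training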
diff --git a/docs/en/yolov5/tutorials/roboflow_datasets_integration.md b/docs/en/yolov5/tutorials/roboflow_datasets_integration.md
index ac1454b201..3047e48c2e 100644
--- a/docs/en/yolov5/tutorials/roboflow_datasets_integration.md
+++ b/docs/en/yolov5/tutorials/roboflow_datasets_integration.md
@@ -8,7 +8,7 @@ keywords: Roboflow, YOLOv5, data management, dataset labeling, dataset versionin
You can now use Roboflow to organize, label, prepare, version, and host your datasets for training YOLOv5 🚀 models. Roboflow is free to use with YOLOv5 if you make your workspace public.
-!!! Question "Licensing"
+!!! question "Licensing"
Ultralytics offers two licensing options:
diff --git a/docs/en/yolov5/tutorials/train_custom_data.md b/docs/en/yolov5/tutorials/train_custom_data.md
index 69888785cd..f7cc6f098c 100644
--- a/docs/en/yolov5/tutorials/train_custom_data.md
+++ b/docs/en/yolov5/tutorials/train_custom_data.md
@@ -25,7 +25,7 @@ pip install -r requirements.txt # install
Creating a custom model to detect your objects is an iterative process of collecting and organizing images, labeling your objects of interest, training a model, deploying it into the wild to make predictions, and then using that deployed model to collect examples of edge cases to repeat and improve.
-!!! Question "Licensing"
+!!! question "Licensing"
Ultralytics offers two licensing options:
@@ -137,11 +137,11 @@ Train a YOLOv5s model on COCO128 by specifying dataset, batch-size, image size a
python train.py --img 640 --epochs 3 --data coco128.yaml --weights yolov5s.pt
```
-!!! Tip "Tip"
+!!! tip "Tip"
💡 Add `--cache ram` or `--cache disk` to speed up training (requires significant RAM/disk resources).
-!!! Tip "Tip"
+!!! tip "Tip"
💡 Always train from a local dataset. Mounted or network drives like Google Drive will be very slow.
diff --git a/docs/mkdocs_github_authors.yaml b/docs/mkdocs_github_authors.yaml
index 1f47cd8200..a40aa79a22 100644
--- a/docs/mkdocs_github_authors.yaml
+++ b/docs/mkdocs_github_authors.yaml
@@ -4,6 +4,9 @@
130829914+IvorZhu331@users.noreply.github.com:
avatar: https://avatars.githubusercontent.com/u/130829914?v=4
username: IvorZhu331
+131261051+MatthewNoyce@users.noreply.github.com:
+ avatar: https://avatars.githubusercontent.com/u/131261051?v=4
+ username: MatthewNoyce
135830346+UltralyticsAssistant@users.noreply.github.com:
avatar: https://avatars.githubusercontent.com/u/135830346?v=4
username: UltralyticsAssistant
@@ -97,6 +100,9 @@ lakshantha@ultralytics.com:
lakshanthad@yahoo.com:
avatar: https://avatars.githubusercontent.com/u/20147381?v=4
username: lakshanthad
+matthewnoyce@icloud.com:
+ avatar: https://avatars.githubusercontent.com/u/131261051?v=4
+ username: MatthewNoyce
muhammadrizwanmunawar123@gmail.com:
avatar: https://avatars.githubusercontent.com/u/62513924?v=4
username: RizwanMunawar