@@ -28,7 +28,7 @@ The Caltech-101 dataset is extensively used for training and evaluating deep lea
To train a YOLO model on the Caltech-101 dataset for 100 epochs, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -61,7 +61,7 @@ The example showcases the variety and complexity of the objects in the Caltech-1
If you use the Caltech-101 dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -90,7 +90,7 @@ The [Caltech-101](https://data.caltech.edu/records/mzrjq-6wc02) dataset is widel
To train an Ultralytics YOLO model on the Caltech-101 dataset, you can use the provided code snippets. For example, to train for 100 epochs:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -110,11 +110,13 @@ To train an Ultralytics YOLO model on the Caltech-101 dataset, you can use the p
For more detailed arguments and options, refer to the model [Training](../../modes/train.md) page.
### What are the key features of the Caltech-101 dataset?
The Caltech-101 dataset includes:
- Around 9,000 color images across 101 categories.
- Categories covering a diverse range of objects, including animals, vehicles, and household items.
- Variable number of images per category, typically between 40 and 800.
@@ -126,7 +128,7 @@ These features make it an excellent choice for training and evaluating object re
Citing the Caltech-101 dataset in your research acknowledges the creators' contributions and provides a reference for others who might use the dataset. The recommended citation is:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -142,6 +144,7 @@ Citing the Caltech-101 dataset in your research acknowledges the creators' contr
publisher={Elsevier}
}
```
Citing helps in maintaining the integrity of academic work and assists peers in locating the original resource.
### Can I use Ultralytics HUB for training models on the Caltech-101 dataset?
@@ -39,7 +39,7 @@ The Caltech-256 dataset is extensively used for training and evaluating deep lea
To train a YOLO model on the Caltech-256 dataset for 100 epochs, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -72,7 +72,7 @@ The example showcases the diversity and complexity of the objects in the Caltech
If you use the Caltech-256 dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -98,7 +98,7 @@ The [Caltech-256](https://data.caltech.edu/records/nyy15-4j048) dataset is a lar
To train a YOLO model on the Caltech-256 dataset for 100 epochs, you can use the following code snippets. Refer to the model [Training](../../modes/train.md) page for additional options.
@@ -42,7 +42,7 @@ The CIFAR-10 dataset is widely used for training and evaluating deep learning mo
To train a YOLO model on the CIFAR-10 dataset for 100 epochs with an image size of 32x32, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -75,7 +75,7 @@ The example showcases the variety and complexity of the objects in the CIFAR-10
If you use the CIFAR-10 dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -96,7 +96,7 @@ We would like to acknowledge Alex Krizhevsky for creating and maintaining the CI
To train a YOLO model on the CIFAR-10 dataset using Ultralytics, you can follow the examples provided for both Python and CLI. Here is a basic example to train your model for 100 epochs with an image size of 32x32 pixels:
-!!! Example
+!!! example
=== "Python"
@@ -153,7 +153,7 @@ Each subset comprises images categorized into 10 classes, with their annotations
If you use the CIFAR-10 dataset in your research or development projects, make sure to cite the following paper:
@@ -31,7 +31,7 @@ The CIFAR-100 dataset is extensively used for training and evaluating deep learn
To train a YOLO model on the CIFAR-100 dataset for 100 epochs with an image size of 32x32, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -64,7 +64,7 @@ The example showcases the variety and complexity of the objects in the CIFAR-100
If you use the CIFAR-100 dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -89,10 +89,10 @@ The [CIFAR-100 dataset](https://www.cs.toronto.edu/~kriz/cifar.html) is a large
You can train a YOLO model on the CIFAR-100 dataset using either Python or CLI commands. Here's how:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@@ -104,7 +104,7 @@ You can train a YOLO model on the CIFAR-100 dataset using either Python or CLI c
@@ -56,7 +56,7 @@ The Fashion-MNIST dataset is widely used for training and evaluating deep learni
To train a CNN model on the Fashion-MNIST dataset for 100 epochs with an image size of 28x28, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -99,10 +99,10 @@ The [Fashion-MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset is
To train an Ultralytics YOLO model on the Fashion-MNIST dataset, you can use both Python and CLI commands. Here's a quick example to get you started:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@@ -112,10 +112,10 @@ To train an Ultralytics YOLO model on the Fashion-MNIST dataset, you can use bot
@@ -41,7 +41,7 @@ The ImageNet dataset is widely used for training and evaluating deep learning mo
To train a deep learning model on the ImageNet dataset for 100 epochs with an image size of 224x224, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -74,7 +74,7 @@ The example showcases the variety and complexity of the images in the ImageNet d
If you use the ImageNet dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -102,10 +102,10 @@ The [ImageNet dataset](https://www.image-net.org/) is a large-scale database con
To use a pretrained Ultralytics YOLO model for image classification on the ImageNet dataset, follow these steps:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@@ -117,7 +117,7 @@ To use a pretrained Ultralytics YOLO model for image classification on the Image
@@ -27,7 +27,7 @@ The ImageNet10 dataset is useful for quickly testing and debugging computer visi
To test a deep learning model on the ImageNet10 dataset with an image size of 224x224, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Test Example"
+!!! example "Test Example"
=== "Python"
@@ -58,7 +58,7 @@ The ImageNet10 dataset contains a subset of images from the original ImageNet da
If you use the ImageNet10 dataset in your research or development work, please cite the original ImageNet paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -86,7 +86,7 @@ The [ImageNet10](https://github.com/ultralytics/assets/releases/download/v0.0.0/
To test your deep learning model on the ImageNet10 dataset with an image size of 224x224, use the following code snippets.
@@ -29,7 +29,7 @@ The ImageNette dataset is widely used for training and evaluating deep learning
To train a model on the ImageNette dataset for 100 epochs with a standard image size of 224x224, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -64,7 +64,7 @@ For faster prototyping and training, the ImageNette dataset is also available in
To use these datasets, simply replace 'imagenette' with 'imagenette160' or 'imagenette320' in the training command. The following code snippets illustrate this:
-!!! Example "Train Example with ImageNette160"
+!!! example "Train Example with ImageNette160"
=== "Python"
@@ -85,7 +85,7 @@ To use these datasets, simply replace 'imagenette' with 'imagenette160' or 'imag
@@ -122,7 +122,7 @@ The [ImageNette dataset](https://github.com/fastai/imagenette) is a simplified s
To train a YOLO model on the ImageNette dataset for 100 epochs, you can use the following commands. Make sure to have the Ultralytics YOLO environment set up.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -152,14 +152,14 @@ The ImageNette dataset is advantageous for several reasons:
- **Quick and Simple**: It contains only 10 classes, making it less complex and time-consuming compared to larger datasets.
- **Educational Use**: Ideal for learning and teaching the basics of image classification since it requires less computational power and time.
- **Versatility**: Widely used to train and benchmark various machine learning models, especially in image classification.
For more details on model training and dataset management, explore the [Dataset Structure](#dataset-structure) section.
### Can the ImageNette dataset be used with different image sizes?
-Yes, the ImageNette dataset is also available in two resized versions: ImageNette160 and ImageNette320. These versions help in faster prototyping and are especially useful when computational resources are limited.
+Yes, the ImageNette dataset is also available in two resized versions: ImageNette160 and ImageNette320. These versions help in faster prototyping and are especially useful when computational resources are limited.
-!!! Example "Train Example with ImageNette160"
+!!! example "Train Example with ImageNette160"
=== "Python"
@@ -174,7 +174,7 @@ Yes, the ImageNette dataset is also available in two resized versions: ImageNett
```
=== "CLI"
```bash
# Start training from a pretrained *.pt model with ImageNette160
@@ -26,7 +26,7 @@ The ImageWoof dataset is widely used for training and evaluating deep learning m
To train a CNN model on the ImageWoof dataset for 100 epochs with an image size of 224x224, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -59,7 +59,7 @@ ImageWoof dataset comes in three different sizes to accommodate various research
To use these variants in your training, simply replace 'imagewoof' in the dataset argument with 'imagewoof320' or 'imagewoof160'. For example:
-!!! Example "Example"
+!!! example "Example"
=== "Python"
@@ -109,20 +109,20 @@ The [ImageWoof](https://github.com/fastai/imagenette) dataset is a challenging s
To train a Convolutional Neural Network (CNN) model on the ImageWoof dataset using Ultralytics YOLO for 100 epochs at an image size of 224x224, you can use the following code:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
model = YOLO("yolov8n-cls.pt") # Load a pretrained model
@@ -34,7 +34,7 @@ The MNIST dataset is widely used for training and evaluating deep learning model
To train a CNN model on the MNIST dataset for 100 epochs with an image size of 32x32, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -69,7 +69,7 @@ If you use the MNIST dataset in your
research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -95,10 +95,10 @@ The [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, or Modified National Ins
To train a model on the MNIST dataset using Ultralytics YOLO, you can follow these steps:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@@ -110,7 +110,7 @@ To train a model on the MNIST dataset using Ultralytics YOLO, you can follow the
@@ -35,7 +35,7 @@ This dataset can be applied in various computer vision tasks such as object dete
A YAML (Yet Another Markup Language) file defines the dataset configuration, including paths, classes, and other pertinent details. For the African wildlife dataset, the `african-wildlife.yaml` file is located at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/african-wildlife.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/african-wildlife.yaml).
-!!! Example "ultralytics/cfg/datasets/african-wildlife.yaml"
+!!! example "ultralytics/cfg/datasets/african-wildlife.yaml"
@@ -45,7 +45,7 @@ A YAML (Yet Another Markup Language) file defines the dataset configuration, inc
To train a YOLOv8n model on the African wildlife dataset for 100 epochs with an image size of 640, use the provided code samples. For a comprehensive list of available parameters, refer to the model's [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -66,7 +66,7 @@ To train a YOLOv8n model on the African wildlife dataset for 100 epochs with an
@@ -111,10 +111,10 @@ The African Wildlife Dataset includes images of four common animal species found
You can train a YOLOv8 model on the African Wildlife Dataset by using the `african-wildlife.yaml` configuration file. Below is an example of how to train the YOLOv8n model for 100 epochs with an image size of 640:
-!!! Example
+!!! example
=== "Python"
```python
from ultralytics import YOLO
@@ -126,7 +126,7 @@ You can train a YOLOv8 model on the African Wildlife Dataset by using the `afric
The [Argoverse](https://www.argoverse.org/) dataset is a collection of data designed to support research in autonomous driving tasks, such as 3D tracking, motion forecasting, and stereo depth estimation. Developed by Argo AI, the dataset provides a wide range of high-quality sensor data, including high-resolution images, LiDAR point clouds, and map data.
-!!! Note
+!!! note
The Argoverse dataset `*.zip` file required for training was removed from Amazon S3 after the shutdown of Argo AI by Ford, but we have made it available for manual download on [Google Drive](https://drive.google.com/file/d/1st9qW3BeIwQsnR0t8mRpvbsSWIo16ACi/view?usp=drive_link).
@@ -35,7 +35,7 @@ The Argoverse dataset is widely used for training and evaluating deep learning m
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. For the case of the Argoverse dataset, the `Argoverse.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/Argoverse.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/Argoverse.yaml).
-!!! Example "ultralytics/cfg/datasets/Argoverse.yaml"
+!!! example "ultralytics/cfg/datasets/Argoverse.yaml"
```yaml
--8<--"ultralytics/cfg/datasets/Argoverse.yaml"
@@ -45,7 +45,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the Argoverse dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -80,7 +80,7 @@ The example showcases the variety and complexity of the data in the Argoverse da
If you use the Argoverse dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -106,10 +106,10 @@ The [Argoverse](https://www.argoverse.org/) dataset, developed by Argo AI, suppo
To train a YOLOv8 model with the Argoverse dataset, use the provided YAML configuration file and the following code:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@@ -119,10 +119,10 @@ To train a YOLOv8 model with the Argoverse dataset, use the provided YAML config
@@ -34,7 +34,7 @@ The application of brain tumor detection using computer vision enables early dia
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the brain tumor dataset, the `brain-tumor.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/brain-tumor.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/brain-tumor.yaml).
-!!! Example "ultralytics/cfg/datasets/brain-tumor.yaml"
+!!! example "ultralytics/cfg/datasets/brain-tumor.yaml"
```yaml
--8<--"ultralytics/cfg/datasets/brain-tumor.yaml"
@@ -44,7 +44,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the brain tumor dataset for 100 epochs with an image size of 640, utilize the provided code snippets. For a detailed list of available arguments, consult the model's [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -65,7 +65,7 @@ To train a YOLOv8n model on the brain tumor dataset for 100 epochs with an image
@@ -110,10 +110,10 @@ The brain tumor dataset is divided into two subsets: the **training set** consis
You can train a YOLOv8 model on the brain tumor dataset for 100 epochs with an image size of 640px using both Python and CLI methods. Below are the examples for both:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@@ -123,10 +123,10 @@ You can train a YOLOv8 model on the brain tumor dataset for 100 epochs with an i
@@ -52,7 +52,7 @@ The COCO dataset is widely used for training and evaluating deep learning models
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO dataset, the `coco.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml).
-!!! Example "ultralytics/cfg/datasets/coco.yaml"
+!!! example "ultralytics/cfg/datasets/coco.yaml"
```yaml
--8<--"ultralytics/cfg/datasets/coco.yaml"
@@ -62,7 +62,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the COCO dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -97,7 +97,7 @@ The example showcases the variety and complexity of the images in the COCO datas
If you use the COCO dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -124,10 +124,10 @@ The [COCO dataset](https://cocodataset.org/#home) (Common Objects in Context) is
To train a YOLOv8 model using the COCO dataset, you can use the following code snippets:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@@ -139,7 +139,7 @@ To train a YOLOv8 model using the COCO dataset, you can use the following code s
@@ -27,7 +27,7 @@ This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO8 dataset, the `coco8.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8.yaml).
-!!! Example "ultralytics/cfg/datasets/coco8.yaml"
+!!! example "ultralytics/cfg/datasets/coco8.yaml"
```yaml
--8<--"ultralytics/cfg/datasets/coco8.yaml"
@@ -37,7 +37,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the COCO8 dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -72,7 +72,7 @@ The example showcases the variety and complexity of the images in the COCO8 data
If you use the COCO dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -99,10 +99,10 @@ The Ultralytics COCO8 dataset is a compact yet versatile object detection datase
To train a YOLOv8 model using the COCO8 dataset, you can employ either Python or CLI commands. Here's how you can start:
@@ -30,7 +30,7 @@ The Global Wheat Head Dataset is widely used for training and evaluating deep le
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. For the case of the Global Wheat Head Dataset, the `GlobalWheat2020.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/GlobalWheat2020.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/GlobalWheat2020.yaml).
-!!! Example "ultralytics/cfg/datasets/GlobalWheat2020.yaml"
+!!! example "ultralytics/cfg/datasets/GlobalWheat2020.yaml"
@@ -40,7 +40,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the Global Wheat Head Dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -75,7 +75,7 @@ The example showcases the variety and complexity of the data in the Global Wheat
If you use the Global Wheat Head Dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -100,10 +100,10 @@ The Global Wheat Head Dataset is primarily used for developing and training deep
To train a YOLOv8n model on the Global Wheat Head Dataset, you can use the following code snippets. Make sure you have the `GlobalWheat2020.yaml` configuration file specifying dataset paths and classes:
@@ -16,20 +16,20 @@ The Ultralytics YOLO format is a dataset configuration format that allows you to
```yaml
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
-path: ../datasets/coco8 # dataset root dir
-train: images/train # train images (relative to 'path') 4 images
-val: images/val # val images (relative to 'path') 4 images
-test: # test images (optional)
+path: ../datasets/coco8 # dataset root dir
+train: images/train # train images (relative to 'path') 4 images
+val: images/val # val images (relative to 'path') 4 images
+test: # test images (optional)
# Classes (80 COCO classes)
names:
-0: person
-1: bicycle
-2: car
-# ...
-77: teddy bear
-78: hair drier
-79: toothbrush
+0: person
+1: bicycle
+2: car
+# ...
+77: teddy bear
+78: hair drier
+79: toothbrush
```
Labels for this format should be exported to YOLO format with one `*.txt` file per image. If there are no objects in an image, no `*.txt` file is required. The `*.txt` file should be formatted with one row per object in `class x_center y_center width height` format. Box coordinates must be in **normalized xywh** format (from 0 to 1). If your boxes are in pixels, you should divide `x_center` and `width` by image width, and `y_center` and `height` by image height. Class numbers should be zero-indexed (start with 0).
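For illustration, here is a minimal sketch of that normalization step (the image size and box values below are hypothetical):

```python
# Convert a pixel-space box (x_center, y_center, width, height) to normalized YOLO format
img_w, img_h = 640, 480  # hypothetical image dimensions
cls, x_c, y_c, w, h = 0, 320, 240, 128, 96  # hypothetical zero-indexed class and pixel box

# Divide x_center and width by image width, y_center and height by image height
line = f"{cls} {x_c / img_w:.6f} {y_c / img_h:.6f} {w / img_w:.6f} {h / img_h:.6f}"
print(line)  # -> "0 0.500000 0.500000 0.200000 0.200000"
```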
@@ -48,7 +48,7 @@ When using the Ultralytics YOLO format, organize your training and validation im
Here's how you can use these formats to train your model:
-!!! Example
+!!! example
=== "Python"
@@ -100,7 +100,7 @@ If you have your own dataset and would like to use it for training detection mod
You can easily convert labels from the popular COCO dataset format to the YOLO format using the following code snippet:
-!!! Example
+!!! example
=== "Python"
@@ -121,15 +121,15 @@ Remember to double-check if the dataset you want to use is compatible with your
The Ultralytics YOLO format is a structured configuration for defining datasets in your training projects. It involves setting paths to your training, validation, and testing images and corresponding labels. For example:
```yaml
-path: ../datasets/coco8 # dataset root directory
-train: images/train # training images (relative to 'path')
-val: images/val # validation images (relative to 'path')
-test: # optional test images
+path: ../datasets/coco8 # dataset root directory
+train: images/train # training images (relative to 'path')
+val: images/val # validation images (relative to 'path')
+test: # optional test images
names:
-0: person
-1: bicycle
-2: car
-# ...
+0: person
+1: bicycle
+2: car
+# ...
```
Labels are saved in `*.txt` files with one file per image, formatted as `class x_center y_center width height` with normalized coordinates. For a detailed guide, see the [COCO8 dataset example](coco8.md).
@@ -164,10 +164,10 @@ Each dataset page provides detailed information on the structure and usage tailo
To start training a YOLOv8 model, ensure your dataset is formatted correctly and the paths are defined in a YAML file. Use the following script to begin training:
-!!! Example
+!!! example
=== "Python"
```python
from ultralytics import YOLO
@@ -176,7 +176,7 @@ To start training a YOLOv8 model, ensure your dataset is formatted correctly and
@@ -48,7 +48,7 @@ The LVIS dataset is widely used for training and evaluating deep learning models
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the LVIS dataset, the `lvis.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/lvis.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/lvis.yaml).
-!!! Example "ultralytics/cfg/datasets/lvis.yaml"
+!!! example "ultralytics/cfg/datasets/lvis.yaml"
```yaml
--8<--"ultralytics/cfg/datasets/lvis.yaml"
@@ -58,7 +58,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the LVIS dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -93,7 +93,7 @@ The example showcases the variety and complexity of the images in the LVIS datas
If you use the LVIS dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -118,10 +118,10 @@ The [LVIS dataset](https://www.lvisdataset.org/) is a large-scale dataset with f
To train a YOLOv8n model on the LVIS dataset for 100 epochs with an image size of 640, follow the example below. This process utilizes Ultralytics' framework, which offers comprehensive training features.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@@ -131,10 +131,10 @@ To train a YOLOv8n model on the LVIS dataset for 100 epochs with an image size o
@@ -30,7 +30,7 @@ The Objects365 dataset is widely used for training and evaluating deep learning
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. For the case of the Objects365 Dataset, the `Objects365.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/Objects365.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/Objects365.yaml).
-!!! Example "ultralytics/cfg/datasets/Objects365.yaml"
+!!! example "ultralytics/cfg/datasets/Objects365.yaml"
```yaml
--8<--"ultralytics/cfg/datasets/Objects365.yaml"
@@ -40,7 +40,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the Objects365 dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -75,7 +75,7 @@ The example showcases the variety and complexity of the data in the Objects365 d
If you use the Objects365 dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -101,7 +101,7 @@ The [Objects365 dataset](https://www.objects365.org/) is designed for object det
To train a YOLOv8n model using the Objects365 dataset for 100 epochs with an image size of 640, follow these instructions:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -127,6 +127,7 @@ Refer to the [Training](../../modes/train.md) page for a comprehensive list of a
### Why should I use the Objects365 dataset for my object detection projects?
The Objects365 dataset offers several advantages for object detection tasks:
1. **Diversity**: It includes 2 million images with objects in diverse scenarios, covering 365 categories.
2. **High-quality Annotations**: Over 30 million bounding boxes provide comprehensive ground truth data.
3. **Performance**: Models pre-trained on Objects365 significantly outperform those trained on datasets like ImageNet, leading to better generalization.
@@ -61,7 +61,7 @@ Open Images V7 is a cornerstone for training and evaluating state-of-the-art mod
Typically, datasets come with a YAML (Yet Another Markup Language) file that delineates the dataset's configuration. For the case of Open Images V7, a hypothetical `OpenImagesV7.yaml` might exist. For accurate paths and configurations, one should refer to the dataset's official repository or documentation.
@@ -71,7 +71,7 @@ Typically, datasets come with a YAML (Yet Another Markup Language) file that del
To train a YOLOv8n model on the Open Images V7 dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Warning
+!!! warning
The complete Open Images V7 dataset comprises 1,743,042 training images and 41,620 validation images, requiring approximately **561 GB of storage space** upon download.
@@ -80,7 +80,7 @@ To train a YOLOv8n model on the Open Images V7 dataset for 100 epochs with an im
- Verify that your device has enough storage capacity.
- Ensure a robust and speedy internet connection.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -115,7 +115,7 @@ Researchers can gain invaluable insights into the array of computer vision chall
For those employing Open Images V7 in their work, it's prudent to cite the relevant papers and acknowledge the creators:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -140,11 +140,10 @@ Open Images V7 is an extensive and versatile dataset created by Google, designed
To train a YOLOv8 model on the Open Images V7 dataset, you can use both Python and CLI commands. Here's an example of training the YOLOv8n model for 100 epochs with an image size of 640:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@@ -154,10 +153,10 @@ To train a YOLOv8 model on the Open Images V7 dataset, you can use both Python a
@@ -37,11 +37,11 @@ This structure enables a diverse and extensive testing ground for object detecti
Dataset benchmarking evaluates machine learning model performance on specific datasets using standardized metrics like accuracy, mean average precision and F1-score.
-!!! Tip "Benchmarking"
+!!! tip "Benchmarking"
Benchmarking results will be stored in "ultralytics-benchmarks/evaluation.txt"
-!!! Example "Benchmarking example"
+!!! example "Benchmarking example"
=== "Python"
@@ -113,7 +113,7 @@ The diversity in the Roboflow 100 benchmark that can be seen above is a signific
If you use the Roboflow 100 dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -139,10 +139,10 @@ The **Roboflow 100** dataset, developed by [Roboflow](https://roboflow.com/?ref=
To use the Roboflow 100 dataset for benchmarking, you can implement the RF100Benchmark class from the Ultralytics library. Here's a brief example:
-!!! Example "Benchmarking example"
+!!! example "Benchmarking example"
=== "Python"
```python
import os
import shutil
@@ -203,7 +203,7 @@ The **Roboflow 100** dataset is accessible on [GitHub](https://github.com/robofl
When using the Roboflow 100 dataset in your research, ensure to properly cite it. Here is the recommended citation:
@@ -23,7 +23,7 @@ This dataset can be applied in various computer vision tasks such as object dete
A YAML (Yet Another Markup Language) file defines the dataset configuration, including paths and classes information. For the signature detection dataset, the `signature.yaml` file is located at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/signature.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/signature.yaml).
-!!! Example "ultralytics/cfg/datasets/signature.yaml"
+!!! example "ultralytics/cfg/datasets/signature.yaml"
```yaml
--8<--"ultralytics/cfg/datasets/signature.yaml"
@@ -33,7 +33,7 @@ A YAML (Yet Another Markup Language) file defines the dataset configuration, inc
To train a YOLOv8n model on the signature detection dataset for 100 epochs with an image size of 640, use the provided code samples. For a comprehensive list of available parameters, refer to the model's [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -54,7 +54,7 @@ To train a YOLOv8n model on the signature detection dataset for 100 epochs with
@@ -102,7 +102,7 @@ To train a YOLOv8n model on the Signature Detection Dataset, follow these steps:
1. Download the `signature.yaml` dataset configuration file from [signature.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/signature.yaml).
2. Use the following Python script or CLI command to start training:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -140,7 +140,7 @@ To perform inference using a model trained on the Signature Detection Dataset, f
1. Load your fine-tuned model.
2. Use the below Python script or CLI command to perform inference:
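For reference, a minimal sketch of such an inference call might look like this (the weights and image paths are hypothetical):

```python
from ultralytics import YOLO

model = YOLO("path/to/best.pt")  # load your fine-tuned model (hypothetical path)
results = model("path/to/signature.jpg")  # run inference on a document image
results[0].show()  # visualize the detected signatures
```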
@@ -43,7 +43,7 @@ The SKU-110k dataset is widely used for training and evaluating deep learning mo
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. For the case of the SKU-110K dataset, the `SKU-110K.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/SKU-110K.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/SKU-110K.yaml).
-!!! Example "ultralytics/cfg/datasets/SKU-110K.yaml"
+!!! example "ultralytics/cfg/datasets/SKU-110K.yaml"
```yaml
--8<--"ultralytics/cfg/datasets/SKU-110K.yaml"
@@ -53,7 +53,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the SKU-110K dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -88,7 +88,7 @@ The example showcases the variety and complexity of the data in the SKU-110k dat
If you use the SKU-110k dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -113,10 +113,10 @@ The SKU-110k dataset consists of densely packed retail shelf images designed to
Training a YOLOv8 model on the SKU-110k dataset is straightforward. Here's an example to train a YOLOv8n model for 100 epochs with an image size of 640:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@@ -126,10 +126,10 @@ Training a YOLOv8 model on the SKU-110k dataset is straightforward. Here's an ex
@@ -39,7 +39,7 @@ The VisDrone dataset is widely used for training and evaluating deep learning mo
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the Visdrone dataset, the `VisDrone.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/VisDrone.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/VisDrone.yaml).
-!!! Example "ultralytics/cfg/datasets/VisDrone.yaml"
+!!! example "ultralytics/cfg/datasets/VisDrone.yaml"
```yaml
--8<--"ultralytics/cfg/datasets/VisDrone.yaml"
@@ -49,7 +49,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the VisDrone dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -84,7 +84,7 @@ The example showcases the variety and complexity of the data in the VisDrone dat
If you use the VisDrone dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -107,6 +107,7 @@ We would like to acknowledge the AISKYEYE team at the Lab of Machine Learning an
### What is the VisDrone Dataset and what are its key features?
The [VisDrone Dataset](https://github.com/VisDrone/VisDrone-Dataset) is a large-scale benchmark created by the AISKYEYE team at Tianjin University, China. It is designed for various computer vision tasks related to drone-based image and video analysis. Key features include:
- **Composition**: 288 video clips with 261,908 frames and 10,209 static images.
- **Annotations**: Over 2.6 million bounding boxes for objects like pedestrians, cars, bicycles, and tricycles.
- **Diversity**: Collected across 14 cities, in urban and rural settings, under different weather and lighting conditions.
@@ -116,10 +117,10 @@ The [VisDrone Dataset](https://github.com/VisDrone/VisDrone-Dataset) is a large-
To train a YOLOv8 model on the VisDrone dataset for 100 epochs with an image size of 640, you can follow these steps:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@@ -131,7 +132,7 @@ To train a YOLOv8 model on the VisDrone dataset for 100 epochs with an image siz
@@ -31,7 +31,7 @@ The VOC dataset is widely used for training and evaluating deep learning models
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the VOC dataset, the `VOC.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/VOC.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/VOC.yaml).
-!!! Example "ultralytics/cfg/datasets/VOC.yaml"
+!!! example "ultralytics/cfg/datasets/VOC.yaml"
```yaml
--8<--"ultralytics/cfg/datasets/VOC.yaml"
@@ -41,7 +41,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the VOC dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -76,7 +76,7 @@ The example showcases the variety and complexity of the images in the VOC datase
If you use the VOC dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -103,7 +103,7 @@ The [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/) (Visual Object Classes
To train a YOLOv8 model with the VOC dataset, you need the dataset configuration in a YAML file. Here's an example to start training a YOLOv8n model for 100 epochs with an image size of 640:
@@ -34,7 +34,7 @@ The xView dataset is widely used for training and evaluating deep learning model
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the xView dataset, the `xView.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/xView.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/xView.yaml).
-!!! Example "ultralytics/cfg/datasets/xView.yaml"
+!!! example "ultralytics/cfg/datasets/xView.yaml"
```yaml
--8<--"ultralytics/cfg/datasets/xView.yaml"
@@ -44,7 +44,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a model on the xView dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -79,7 +79,7 @@ The example showcases the variety and complexity of the data in the xView datase
If you use the xView dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -106,10 +106,10 @@ The [xView](http://xviewdataset.org/) dataset is one of the largest publicly ava
To train a model on the xView dataset using Ultralytics YOLO, follow these steps:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@@ -119,10 +119,10 @@ To train a model on the xView dataset using Ultralytics YOLO, follow these steps
The embeddings table for a given dataset and model pair is created only once and then reused. These tables use [LanceDB](https://lancedb.github.io/lancedb/) under the hood, which scales on-disk, so you can create and reuse embeddings for large datasets like COCO without running out of memory.
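As a rough sketch, creating (or reusing) an embeddings table looks like the following; the dataset and model names here are just examples:

```python
from ultralytics import Explorer

# Build the embeddings table once; later runs reuse the on-disk LanceDB table
exp = Explorer(data="coco8.yaml", model="yolov8n.pt")
exp.create_embeddings_table()
```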
@@ -67,7 +67,7 @@ In case of multiple inputs, the aggregate of their embeddings is used.
You get a pandas dataframe with the `limit` number of most similar data points to the input, along with their distance in the embedding space. You can use this dataset to perform further filtering.
-!!! Example "Semantic Search"
+!!! example "Semantic Search"
=== "Using Images"
@@ -110,7 +110,7 @@ You get a pandas dataframe with the `limit` number of most similar data points t
You can also plot the similar images using the `plot_similar` method. This method takes the same arguments as `get_similar` and plots the similar images in a grid.
-!!! Example "Plotting Similar Images"
+!!! example "Plotting Similar Images"
=== "Using Images"
@@ -143,7 +143,7 @@ You can also plot the similar images using the `plot_similar` method. This metho
This allows you to describe how you want to filter your dataset using natural language, so you don't have to be proficient in writing SQL queries. Our AI-powered query generator will automatically build the query under the hood. For example, you can say "show me 100 images with exactly one person and 2 dogs. There can be other objects too", and it will internally generate the query and show you those results.
Note: This works using LLMs under the hood, so the results are probabilistic and might occasionally get things wrong.
-!!! Example "Ask AI"
+!!! example "Ask AI"
```python
from ultralytics import Explorer
@@ -165,7 +165,7 @@ Note: This works using LLMs under the hood so the results are probabilistic and
You can run SQL queries on your dataset using the `sql_query` method. This method takes a SQL query as input and returns a pandas dataframe with the results.
-!!! Example "SQL Query"
+!!! example "SQL Query"
```python
from ultralytics import Explorer
@@ -182,7 +182,7 @@ You can run SQL queries on your dataset using the `sql_query` method. This metho
You can also plot the results of a SQL query using the `plot_sql_query` method. This method takes the same arguments as `sql_query` and plots the results in a grid.
-!!! Example "Plotting SQL Query Results"
+!!! example "Plotting SQL Query Results"
```python
from ultralytics import Explorer
@@ -199,7 +199,9 @@ You can also plot the results of a SQL query using the `plot_sql_query` method.
You can also work with the embeddings table directly. Once the embeddings table is created, you can access it using the `Explorer.table`
-!!! Tip "Explorer works on [LanceDB](https://lancedb.github.io/lancedb/) tables internally. You can access this table directly, using `Explorer.table` object and run raw queries, push down pre- and post-filters, etc."
+!!! tip
+    Explorer works on [LanceDB](https://lancedb.github.io/lancedb/) tables internally. You can access this table directly, using `Explorer.table` object and run raw queries, push down pre- and post-filters, etc.
```python
from ultralytics import Explorer
@@ -213,7 +215,7 @@ Here are some examples of what you can do with the table:
### Get raw Embeddings
-!!! Example
+!!! example
```python
from ultralytics import Explorer
@@ -228,7 +230,7 @@ Here are some examples of what you can do with the table:
### Advanced Querying with pre- and post-filters
-!!! Example
+!!! example
```python
from ultralytics import Explorer
@@ -270,11 +272,11 @@ It returns a pandas dataframe with the following columns:
- `count`: Number of images in the dataset that are closer than `max_dist` to the current image
- `sim_im_files`: List of paths to the `count` similar images
-!!! Tip
+!!! tip
For a given dataset, model, `max_dist`, and `top_k`, the similarity index, once generated, will be reused. If your dataset has changed, or you simply need to regenerate the similarity index, you can pass `force=True`.
-!!! Example "Similarity Index"
+!!! example "Similarity Index"
```python
from ultralytics import Explorer
@@ -342,14 +344,17 @@ The Ultralytics Explorer API is designed for comprehensive dataset exploration.
### How do I install the Ultralytics Explorer API?
To install the Ultralytics Explorer API along with its dependencies, use the following command:
```bash
pip install ultralytics[explorer]
```
This will automatically install all necessary external libraries for the Explorer API functionality. For additional setup details, refer to the [installation section](#installation) of our documentation.
### How can I use the Ultralytics Explorer API for similarity search?
You can use the Ultralytics Explorer API to perform similarity searches by creating an embeddings table and querying it for similar images. Here's a basic example:
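A minimal sketch of that flow, assuming the example dataset and model below:

```python
from ultralytics import Explorer

exp = Explorer(data="coco8.yaml", model="yolov8n.pt")
exp.create_embeddings_table()  # build (or reuse) the embeddings table

# Retrieve the 10 most similar images to a query image as a pandas dataframe
similar = exp.get_similar(img="https://ultralytics.com/images/bus.jpg", limit=10)
print(similar.head())
```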
@@ -52,6 +54,7 @@ On performing similarity search, you should see a similar result:
## Ask AI
This allows you to describe how you want to filter your dataset using natural language, so you don't have to be proficient in writing SQL queries. Our AI-powered query generator will automatically build the query under the hood. For example, you can say "show me 100 images with exactly one person and 2 dogs. There can be other objects too", and it will internally generate the query and show you those results. When asked to "Show 10 images with exactly 5 persons", you'll see a result like this:
@@ -76,7 +79,7 @@ This is a Demo build using the Explorer API. You can use the API to build your o
### What is Ultralytics Explorer GUI and how do I install it?
-Ultralytics Explorer GUI is a powerful interface that unlocks advanced data exploration capabilities using the [Ultralytics Explorer API](api.md). It allows you to run semantic/vector similarity search, SQL queries, and natural language queries using the Ask AI feature powered by Large Language Models (LLMs).
+Ultralytics Explorer GUI is a powerful interface that unlocks advanced data exploration capabilities using the [Ultralytics Explorer API](api.md). It allows you to run semantic/vector similarity search, SQL queries, and natural language queries using the Ask AI feature powered by Large Language Models (LLMs).
To install the Explorer GUI, you can use pip:
@@ -106,13 +109,14 @@ Ultralytics Explorer GUI allows you to run SQL queries directly on your dataset
WHERE labels LIKE '%person%' AND labels LIKE '%dog%'
```
-You can also provide only the WHERE clause, making the querying process more flexible.
+You can also provide only the WHERE clause, making the querying process more flexible.
For more details, refer to the [SQL Queries Section](#run-sql-queries-on-your-cv-datasets).
### What are the benefits of using Ultralytics Explorer GUI for data exploration?
Ultralytics Explorer GUI enhances data exploration with features like semantic search, SQL querying, and natural language interactions through the Ask AI feature. These capabilities allow users to:
- Efficiently find visually similar images.
- Filter datasets using complex SQL queries.
- Utilize AI to perform natural language searches, eliminating the need for advanced SQL expertise.
@@ -60,7 +60,7 @@ DOTA serves as a benchmark for training and evaluating models specifically tailo
Typically, datasets incorporate a YAML (Yet Another Markup Language) file detailing the dataset's configuration. For DOTA v1 and DOTA v1.5, Ultralytics provides `DOTAv1.yaml` and `DOTAv1.5.yaml` files. For additional details on these as well as DOTA v2 please consult DOTA's official repository and documentation.
-!!! Example "DOTAv1.yaml"
+!!! example "DOTAv1.yaml"
```yaml
--8<--"ultralytics/cfg/datasets/DOTAv1.yaml"
@@ -70,7 +70,7 @@ Typically, datasets incorporate a YAML (Yet Another Markup Language) file detail
To train on the DOTA dataset, we split the original high-resolution DOTA images into 1024x1024 images in a multiscale manner.
-!!! Example "Split images"
+!!! example "Split images"
=== "Python"
@@ -97,11 +97,11 @@ To train DOTA dataset, we split original DOTA images with high-resolution into i
To train a model on the DOTA v1 dataset, you can utilize the following code snippets. Always refer to your model's documentation for a thorough list of available arguments.
-!!! Warning
+!!! warning
Please note that all images and associated annotations in the DOTAv1 dataset can be used for academic purposes, but commercial use is prohibited. Your understanding and respect for the dataset creators' wishes are greatly appreciated!
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -136,7 +136,7 @@ The dataset's richness offers invaluable insights into object detection challeng
For those leveraging DOTA in their endeavors, it's pertinent to cite the relevant research papers:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -169,7 +169,7 @@ DOTA utilizes Oriented Bounding Boxes (OBB) for annotation, which are represente
To train a model on the DOTA dataset, you can use the following example with Ultralytics YOLO:
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -195,9 +195,7 @@ For more details on how to split and preprocess the DOTA images, refer to the [s
### What are the differences between DOTA-v1.0, DOTA-v1.5, and DOTA-v2.0?
- **DOTA-v1.0**: Includes 15 common categories across 2,806 images with 188,282 instances. The dataset is split into training, validation, and testing sets.
- **DOTA-v1.5**: Builds upon DOTA-v1.0 by annotating very small instances (less than 10 pixels) and adding a new category, "container crane," totaling 403,318 instances.
- **DOTA-v2.0**: Expands further with annotations from Google Earth and GF-2 Satellite, featuring 11,268 images and 1,793,658 instances. It includes new categories like "airport" and "helipad."
For a detailed comparison and additional specifics, check the [dataset versions section](#dataset-versions).
@@ -206,7 +204,7 @@ For a detailed comparison and additional specifics, check the [dataset versions
DOTA images, which can be very large, are split into smaller resolutions for manageable training. Here's a Python snippet to split images:
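A sketch of that splitting step using the `ultralytics.data.split_dota` helpers (the paths below are hypothetical):

```python
from ultralytics.data.split_dota import split_test, split_trainval

# Split train/val images (and labels) into 1024x1024 crops at multiple scales
split_trainval(
    data_root="path/to/DOTAv1.0/",
    save_dir="path/to/DOTAv1.0-split/",
    rates=[0.5, 1.0, 1.5],  # multiscale rates
    gap=500,  # pixel overlap between adjacent crops
)

# Split test images (no labels)
split_test(
    data_root="path/to/DOTAv1.0/",
    save_dir="path/to/DOTAv1.0-split/",
    rates=[0.5, 1.0, 1.5],
    gap=500,
)
```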
@@ -16,7 +16,7 @@ This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the DOTA8 dataset, the `dota8.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/dota8.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/dota8.yaml).
-!!! Example "ultralytics/cfg/datasets/dota8.yaml"
+!!! example "ultralytics/cfg/datasets/dota8.yaml"
```yaml
--8<--"ultralytics/cfg/datasets/dota8.yaml"
@@ -26,7 +26,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n-obb model on the DOTA8 dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
@@ -61,7 +61,7 @@ The example showcases the variety and complexity of the images in the DOTA8 data
If you use the DOTA dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
=== "BibTeX"
@@ -90,10 +90,10 @@ The DOTA8 dataset is a small, versatile oriented object detection dataset made u
To train a YOLOv8n-obb model on the DOTA8 dataset for 100 epochs with an image size of 640, you can use the following code snippets. For comprehensive argument options, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@@ -105,7 +105,7 @@ To train a YOLOv8n-obb model on the DOTA8 dataset for 100 epochs with an image s
@@ -32,7 +32,7 @@ An example of a `*.txt` label file for the above image, which contains an object
To train a model using these OBB formats:
-!!! Example
+!!! example
=== "Python"
@@ -70,7 +70,7 @@ For those looking to introduce their own datasets with oriented bounding boxes,
Transitioning labels from the DOTA dataset format to the YOLO OBB format can be achieved with this script:
-!!! Example
+!!! example
=== "Python"
@@ -106,10 +106,10 @@ This script will reformat your DOTA annotations into a YOLO-compatible format.
Training a YOLOv8 model with OBBs involves ensuring your dataset is in the YOLO OBB format and then using the Ultralytics API to train the model. Here's an example in both Python and CLI:
-!!! Example
+!!! example
=== "Python"
```python
from ultralytics import YOLO
@@ -119,15 +119,14 @@ Training a YOLOv8 model with OBBs involves ensuring your dataset is in the YOLO
@@ -43,7 +43,7 @@ The COCO-Pose dataset is specifically used for training and evaluating deep lear
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO-Pose dataset, the `coco-pose.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco-pose.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco-pose.yaml).
!!! Example "ultralytics/cfg/datasets/coco-pose.yaml"
!!! example "ultralytics/cfg/datasets/coco-pose.yaml"
```yaml
--8<--"ultralytics/cfg/datasets/coco-pose.yaml"
@ -53,7 +53,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n-pose model on the COCO-Pose dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -88,7 +88,7 @@ The example showcases the variety and complexity of the images in the COCO-Pose
If you use the COCO-Pose dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -115,7 +115,7 @@ The [COCO-Pose](https://cocodataset.org/#keypoints-2017) dataset is a specialize
Training a YOLOv8 model on the COCO-Pose dataset can be accomplished using either Python or CLI commands. For example, to train a YOLOv8n-pose model for 100 epochs with an image size of 640, you can follow the steps below:
@ -16,7 +16,7 @@ This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO8-Pose dataset, the `coco8-pose.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8-pose.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8-pose.yaml).
!!! Example "ultralytics/cfg/datasets/coco8-pose.yaml"
!!! example "ultralytics/cfg/datasets/coco8-pose.yaml"
```yaml
--8<--"ultralytics/cfg/datasets/coco8-pose.yaml"
@ -26,7 +26,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n-pose model on the COCO8-Pose dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -61,7 +61,7 @@ The example showcases the variety and complexity of the images in the COCO8-Pose
If you use the COCO dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -88,10 +88,10 @@ The COCO8-Pose dataset is a small, versatile pose detection dataset that include
To train a YOLOv8n-pose model on the COCO8-Pose dataset for 100 epochs with an image size of 640, follow these examples:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
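# A minimal sketch of the documented call: 100 epochs at image size 640
model = YOLO("yolov8n-pose.pt")
results = model.train(data="coco8-pose.yaml", epochs=100, imgsz=640)
```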
@ -103,7 +103,7 @@ To train a YOLOv8n-pose model on the COCO8-Pose dataset for 100 epochs with an i
The `train` and `val` fields specify the paths to the directories containing the training and validation images, respectively.
@ -64,7 +64,7 @@ The `train` and `val` fields specify the paths to the directories containing the
## Usage
!!! Example
!!! example
=== "Python"
@ -126,7 +126,7 @@ If you have your own dataset and would like to use it for training pose estimati
Ultralytics provides a convenient conversion tool to convert labels from the popular COCO dataset format to YOLO format:
!!! Example
!!! example
=== "Python"
@ -142,7 +142,7 @@ This conversion tool can be used to convert the COCO dataset or any dataset in t
### What is the Ultralytics YOLO format for pose estimation?
The Ultralytics YOLO format for pose estimation datasets involves labeling each image with a corresponding text file. Each row of the text file stores information about an object instance:
- Object class index
- Object center coordinates (normalized x and y)
@ -154,6 +154,7 @@ For 2D poses, keypoints include pixel coordinates. For 3D, each keypoint also ha
### How do I use the COCO-Pose dataset with Ultralytics YOLO?
To use the COCO-Pose dataset with Ultralytics YOLO:
1. Download the dataset and prepare your label files in the YOLO format.
2. Create a YAML configuration file specifying paths to training and validation images, keypoint shape, and class names.
3. Use the configuration file for training:
@ -164,12 +165,13 @@ To use the COCO-Pose dataset with Ultralytics YOLO:
model = YOLO("yolov8n-pose.pt") # load pretrained model
@ -29,7 +29,7 @@ This dataset is intended for use with [Ultralytics HUB](https://hub.ultralytics.
A YAML (Yet Another Markup Language) file serves as the means to specify the configuration details of a dataset. It encompasses crucial data such as file paths, class definitions, and other pertinent information. Specifically, for the `tiger-pose.yaml` file, you can check [Ultralytics Tiger-Pose Dataset Configuration File](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/tiger-pose.yaml).
!!! Example "ultralytics/cfg/datasets/tiger-pose.yaml"
!!! example "ultralytics/cfg/datasets/tiger-pose.yaml"
```yaml
--8<--"ultralytics/cfg/datasets/tiger-pose.yaml"
@ -39,7 +39,7 @@ A YAML (Yet Another Markup Language) file serves as the means to specify the con
To train a YOLOv8n-pose model on the Tiger-Pose dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -72,7 +72,7 @@ The example showcases the variety and complexity of the images in the Tiger-Pose
## Inference Example
!!! Example "Inference Example"
!!! example "Inference Example"
=== "Python"
@ -107,10 +107,10 @@ The Ultralytics Tiger-Pose dataset is designed for pose estimation tasks, consis
To train a YOLOv8n-pose model on the Tiger-Pose dataset for 100 epochs with an image size of 640, use the following code snippets. For more details, visit the [Training](../../modes/train.md) page:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
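# A minimal sketch of the training call described above
model = YOLO("yolov8n-pose.pt")  # load a pretrained pose model
results = model.train(data="tiger-pose.yaml", epochs=100, imgsz=640)
```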
@ -120,10 +120,10 @@ To train a YOLOv8n-pose model on the Tiger-Pose dataset for 100 epochs with an i
@ -137,10 +137,10 @@ The `tiger-pose.yaml` file is used to specify the configuration details of the T
To perform inference using a YOLOv8 model trained on the Tiger-Pose dataset, you can use the following code snippets. For a detailed guide, visit the [Prediction](../../modes/predict.md) page:
!!! Example "Inference Example"
!!! example "Inference Example"
=== "Python"
```python
from ultralytics import YOLO
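# A minimal sketch; the weights path and image source are assumptions
model = YOLO("path/to/tiger-pose-best.pt")  # load your trained weights
results = model("path/to/image.jpg")  # run inference
results[0].show()  # visualize the predictions
```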
@ -150,10 +150,10 @@ To perform inference using a YOLOv8 model trained on the Tiger-Pose dataset, you
@ -37,7 +37,7 @@ Carparts Segmentation finds applications in automotive quality control, auto rep
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the Carparts Segmentation dataset, the `carparts-seg.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/carparts-seg.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/carparts-seg.yaml).
!!! Example "ultralytics/cfg/datasets/carparts-seg.yaml"
!!! example "ultralytics/cfg/datasets/carparts-seg.yaml"
@ -47,7 +47,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train an Ultralytics YOLOv8n model on the Carparts Segmentation dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -81,7 +81,7 @@ The Carparts Segmentation dataset includes a diverse array of images and videos
If you integrate the Carparts Segmentation dataset into your research or development projects, please make reference to the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -112,10 +112,10 @@ The [Roboflow Carparts Segmentation Dataset](https://universe.roboflow.com/gianm
To train a YOLOv8 model on the Carparts Segmentation dataset, you can follow these steps:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
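# A minimal sketch; the pretrained segmentation starting weights are assumed
model = YOLO("yolov8n-seg.pt")
results = model.train(data="carparts-seg.yaml", epochs=100, imgsz=640)
```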
@ -138,6 +138,7 @@ For more details, refer to the [Training](../../modes/train.md) documentation.
### What are some applications of Carparts Segmentation?
Carparts Segmentation can be widely applied in various fields such as:
- **Automotive quality control**
- **Auto repair and maintenance**
- **E-commerce cataloging**
@ -155,6 +156,6 @@ The dataset configuration file for the Carparts Segmentation dataset, `carparts-
### Why should I use the Carparts Segmentation Dataset?
The Carparts Segmentation Dataset provides rich, annotated data essential for developing high-accuracy segmentation models in automotive computer vision. This dataset's diversity and detailed annotations improve model training, making it ideal for applications like vehicle maintenance automation, enhancing vehicle safety systems, and supporting autonomous driving technologies. Partnering with a robust dataset accelerates AI development and ensures better model performance.
For more details, visit the [CarParts Segmentation Dataset Page](https://universe.roboflow.com/gianmarco-russo-vt9xr/car-seg-un1pm?ref=ultralytics).
@ -41,7 +41,7 @@ COCO-Seg is widely used for training and evaluating deep learning models in inst
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO-Seg dataset, the `coco.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml).
!!! Example "ultralytics/cfg/datasets/coco.yaml"
!!! example "ultralytics/cfg/datasets/coco.yaml"
```yaml
--8<--"ultralytics/cfg/datasets/coco.yaml"
@ -51,7 +51,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -86,7 +86,7 @@ The example showcases the variety and complexity of the images in the COCO-Seg d
If you use the COCO-Seg dataset in your research or development work, please cite the original COCO paper and acknowledge the extension to COCO-Seg:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -113,10 +113,10 @@ The [COCO-Seg](https://cocodataset.org/#home) dataset is an extension of the ori
To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a detailed list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
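# A minimal sketch of the training call described above
model = YOLO("yolov8n-seg.pt")
# coco.yaml is the dataset configuration referenced above; swap in your own config if it differs
results = model.train(data="coco.yaml", epochs=100, imgsz=640)
```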
@ -128,7 +128,7 @@ To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 epochs with an imag
@ -148,7 +148,7 @@ The COCO-Seg dataset includes several key features:
The COCO-Seg dataset supports multiple pretrained YOLOv8 segmentation models with varying performance metrics. Here's a summary of the available models and their key metrics:
@ -16,7 +16,7 @@ This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO8-Seg dataset, the `coco8-seg.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8-seg.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8-seg.yaml).
!!! Example "ultralytics/cfg/datasets/coco8-seg.yaml"
!!! example "ultralytics/cfg/datasets/coco8-seg.yaml"
```yaml
--8<--"ultralytics/cfg/datasets/coco8-seg.yaml"
@ -26,7 +26,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n-seg model on the COCO8-Seg dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -61,7 +61,7 @@ The example showcases the variety and complexity of the images in the COCO8-Seg
If you use the COCO dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -88,10 +88,10 @@ The **COCO8-Seg dataset** is a compact instance segmentation dataset by Ultralyt
To train a **YOLOv8n-seg** model on the COCO8-Seg dataset for 100 epochs with an image size of 640, you can use Python or CLI commands. Here's a quick example:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
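# A minimal sketch of the documented call: 100 epochs at image size 640
model = YOLO("yolov8n-seg.pt")
results = model.train(data="coco8-seg.yaml", epochs=100, imgsz=640)
```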
@ -103,7 +103,7 @@ To train a **YOLOv8n-seg** model on the COCO8-Seg dataset for 100 epochs with an
@ -26,7 +26,7 @@ Crack segmentation finds practical applications in infrastructure maintenance, a
A YAML (Yet Another Markup Language) file is employed to outline the configuration of the dataset, encompassing details about paths, classes, and other pertinent information. Specifically, for the Crack Segmentation dataset, the `crack-seg.yaml` file is managed and accessible at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/crack-seg.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/crack-seg.yaml).
!!! Example "ultralytics/cfg/datasets/crack-seg.yaml"
!!! example "ultralytics/cfg/datasets/crack-seg.yaml"
```yaml
--8<--"ultralytics/cfg/datasets/crack-seg.yaml"
@ -36,7 +36,7 @@ A YAML (Yet Another Markup Language) file is employed to outline the configurati
To train an Ultralytics YOLOv8n model on the Crack Segmentation dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -71,7 +71,7 @@ The Crack Segmentation dataset comprises a varied collection of images and video
If you incorporate the crack segmentation dataset into your research or development endeavors, kindly reference the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -102,7 +102,7 @@ The [Roboflow Crack Segmentation Dataset](https://universe.roboflow.com/universi
To train an Ultralytics YOLOv8 model on the Crack Segmentation dataset, use the following code snippets. Detailed instructions and further parameters can be found on the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -135,7 +135,7 @@ Ultralytics YOLO offers advanced real-time object detection, segmentation, and c
If you incorporate the Crack Segmentation Dataset into your research, please use the following BibTeX reference:
| `sam_model` | `str, optional` | Pre-trained SAM segmentation model. Defaults to `'sam_b.pt'`. | `'sam_b.pt'` |
@ -175,15 +175,15 @@ This script converts your COCO dataset annotations to the required YOLO format,
To prepare a YAML file for training YOLO models with Ultralytics, you need to define the dataset paths and class names. Here's an example YAML configuration:
```yaml
path: ../datasets/coco8-seg # dataset root dir
train: images/train # train images (relative to 'path')
val: images/val # val images (relative to 'path')
names:
    0: person
    1: bicycle
    2: car
    # ...
```
Ensure you update the paths and class names according to your dataset. For more information, check the [Dataset YAML Format](#dataset-yaml-format) section.
@ -26,7 +26,7 @@ Package segmentation, facilitated by the Package Segmentation Dataset, is crucia
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the Package Segmentation dataset, the `package-seg.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/package-seg.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/package-seg.yaml).
!!! Example "ultralytics/cfg/datasets/package-seg.yaml"
!!! example "ultralytics/cfg/datasets/package-seg.yaml"
```yaml
--8<--"ultralytics/cfg/datasets/package-seg.yaml"
@ -36,7 +36,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train an Ultralytics YOLOv8n model on the Package Segmentation dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -70,7 +70,7 @@ The Package Segmentation dataset comprises a varied collection of images and vid
If you integrate the Package Segmentation dataset into your research or development initiatives, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -101,10 +101,10 @@ The [Roboflow Package Segmentation Dataset](https://universe.roboflow.com/factor
You can train an Ultralytics YOLOv8n model using both Python and CLI methods. Use the snippets below:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
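# A minimal sketch; the pretrained segmentation starting weights are assumed
model = YOLO("yolov8n-seg.pt")
results = model.train(data="package-seg.yaml", epochs=100, imgsz=640)
```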
@ -116,7 +116,7 @@ You can train an Ultralytics YOLOv8n model using both Python and CLI methods. Us
@ -127,6 +127,7 @@ Refer to the model [Training](../../modes/train.md) page for more details.
### What are the components of the Package Segmentation Dataset, and how is it structured?
The dataset is structured into three main components:
- **Training set**: Contains 1920 images with annotations.
- **Testing set**: Comprises 89 images with corresponding annotations.
- **Validation set**: Includes 188 images with annotations.
@ -139,6 +140,6 @@ Ultralytics YOLOv8 provides state-of-the-art accuracy and speed for real-time ob
### How can I access and use the package-seg.yaml file for the Package Segmentation Dataset?
The `package-seg.yaml` file is hosted on Ultralytics' GitHub repository and contains essential information about the dataset's paths, classes, and configuration. You can download it from [here](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/package-seg.yaml). This file is crucial for configuring your models to utilize the dataset efficiently.
For more insights and practical examples, explore our [Usage](https://docs.ultralytics.com/usage/python/) section.
@ -12,7 +12,7 @@ Multi-Object Detector doesn't need standalone training and directly supports pre
## Usage
!!! Example
!!! example
=== "Python"
@ -35,10 +35,10 @@ Multi-Object Detector doesn't need standalone training and directly supports pre
To use Multi-Object Tracking with Ultralytics YOLO, you can start by using the Python or CLI examples provided. Here is how you can get started:
!!! Example
!!! example
=== "Python"
```python
from ultralytics import YOLO
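# A minimal sketch mirroring the CLI example below: track objects in a video
model = YOLO("yolov8n.pt")
results = model.track(source="https://youtu.be/LNwODJXcvt4", conf=0.3, iou=0.5, show=True)
```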
@ -51,7 +51,7 @@ To use Multi-Object Tracking with Ultralytics YOLO, you can start by using the P
```bash
yolo track model=yolov8n.pt source="https://youtu.be/LNwODJXcvt4" conf=0.3 iou=0.5 show
```
These commands load the YOLOv8 model and use it for tracking objects in the given video source with specific confidence (`conf`) and Intersection over Union (`iou`) thresholds. For more details, refer to the [track mode documentation](../../modes/track.md).
### What are the upcoming features for training trackers in Ultralytics?
@ -22,7 +22,7 @@ This guide provides a comprehensive overview of three fundamental types of data
- Bar plots, on the other hand, are suitable for comparing quantities across different categories and showing relationships between a category and its numerical value.
- Lastly, pie charts are effective for illustrating proportions among categories and showing parts of a whole.
@ -85,7 +85,7 @@ After installing the runtime, you need to plug in your Coral Edge TPU into a USB
To use the Edge TPU, you need to convert your model into a compatible format. It is recommended that you run export on Google Colab, x86_64 Linux machine, using the official [Ultralytics Docker container](docker-quickstart.md), or using [Ultralytics HUB](../hub/quickstart.md), since the Edge TPU compiler is not available on ARM. See the [Export Mode](../modes/export.md) for the available arguments.
!!! Exporting the model
!!! example "Exporting the model"
=== "Python"
@ -111,7 +111,7 @@ The exported model will be saved in the `<model_name>_saved_model/` folder with
After exporting your model, you can run inference with it using the following code:
!!! Running the model
!!! example "Running the model"
=== "Python"
@ -170,7 +170,7 @@ Make sure to uninstall any previous Coral Edge TPU runtime versions by following
Yes, you can export your Ultralytics YOLOv8 model to be compatible with the Coral Edge TPU. It is recommended to perform the export on Google Colab, an x86_64 Linux machine, or using the [Ultralytics Docker container](docker-quickstart.md). You can also use Ultralytics HUB for exporting. Here is how you can export your model using Python and CLI:
!!! Exporting the model
!!! example "Exporting the model"
=== "Python"
@ -212,7 +212,7 @@ For a specific wheel, such as TensorFlow 2.15.0 `tflite-runtime`, you can downlo
After exporting your YOLOv8 model to an Edge TPU-compatible format, you can run inference using the following code snippets:
<strong>Watch:</strong> How to Run Multiple Streams with DeepStream SDK on Jetson Nano using Ultralytics YOLOv8
</p>
This comprehensive guide provides a detailed walkthrough for deploying Ultralytics YOLOv8 on [NVIDIA Jetson](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/) devices using DeepStream SDK and TensorRT. Here we use TensorRT to maximize the inference performance on the Jetson platform.
<imgwidth="1024"src="https://github.com/ultralytics/docs/releases/download/0/deepstream-nvidia-jetson.avif"alt="DeepStream on NVIDIA Jetson">
!!! Note
!!! note
This guide has been tested with both the [Seeed Studio reComputer J4012](https://www.seeedstudio.com/reComputer-J4012-p-5586.html), which is based on the NVIDIA Jetson Orin NX 16GB running JetPack release [JP5.1.3](https://developer.nvidia.com/embedded/jetpack-sdk-513), and the [Seeed Studio reComputer J1020 v2](https://www.seeedstudio.com/reComputer-J1020-v2-p-5498.html), which is based on the NVIDIA Jetson Nano 4GB running JetPack release [JP4.6.4](https://developer.nvidia.com/jetpack-sdk-464). It is expected to work across the entire NVIDIA Jetson hardware lineup, including the latest and legacy devices.
@ -28,7 +39,7 @@ Before you start to follow this guide:
- For JetPack 4.6.4, install [DeepStream 6.0.1](https://docs.nvidia.com/metropolis/deepstream/6.0.1/dev-guide/text/DS_Quickstart.html)
- For JetPack 5.1.3, install [DeepStream 6.3](https://docs.nvidia.com/metropolis/deepstream/6.3/dev-guide/text/DS_Quickstart.html)
!!! Tip
!!! tip
In this guide we have used the Debian package method of installing DeepStream SDK to the Jetson device. You can also visit the [DeepStream SDK on Jetson (Archived)](https://developer.nvidia.com/embedded/deepstream-on-jetson-downloads-archived) to access legacy versions of DeepStream.
@ -56,7 +67,7 @@ Here we are using [marcoslucianops/DeepStream-Yolo](https://github.com/marcosluc
You can also use a [custom trained YOLOv8 model](https://docs.ultralytics.com/modes/train/).
@ -66,7 +77,7 @@ Here we are using [marcoslucianops/DeepStream-Yolo](https://github.com/marcosluc
python3 utils/export_yoloV8.py -w yolov8s.pt
```
!!! Note "Pass the below arguments to the above command"
!!! note "Pass the below arguments to the above command"
For DeepStream 6.0.1, use opset 12 or lower. The default opset is 16.
@ -164,13 +175,13 @@ Here we are using [marcoslucianops/DeepStream-Yolo](https://github.com/marcosluc
deepstream-app -c deepstream_app_config.txt
```
!!! Note
!!! note
It will take a long time to generate the TensorRT engine file before starting the inference. So please be patient.
<div align="center"><img width="1000" src="https://github.com/ultralytics/docs/releases/download/0/yolov8-with-deepstream.avif" alt="YOLOv8 with deepstream"></div>
!!! Tip
!!! tip
If you want to convert the model to FP16 precision, simply set `model-engine-file=model_b1_gpu0_fp16.engine` and `network-mode=2` inside `config_infer_primary_yoloV8.txt`
@ -206,7 +217,7 @@ If you want to use INT8 precision for inference, you need to follow the steps be
done
```
!!! Note
!!! note
NVIDIA recommends at least 500 images to get good accuracy. In this example, 1000 images are chosen for better accuracy (more images yield higher accuracy). You can set the count with **head -1000**; for example, for 2000 images, use **head -2000**. This process can take a long time.
@ -223,7 +234,7 @@ If you want to use INT8 precision for inference, you need to follow the steps be
export INT8_CALIB_BATCH_SIZE=1
```
!!! Note
!!! note
Higher INT8_CALIB_BATCH_SIZE values will result in higher accuracy and faster calibration speed. Set it according to your GPU memory.
@ -69,7 +69,7 @@ The process is repeated until either the set number of iterations is reached or
Here's how to use the `model.tune()` method, which wraps the `Tuner` class, to tune the hyperparameters of YOLOv8n on COCO8 for 30 epochs with an AdamW optimizer, skipping plotting, checkpointing, and validation except on the final epoch for faster tuning.
!!! Example
!!! example
=== "Python"
@ -212,7 +212,7 @@ For deeper insights, you can explore the `Tuner` class source code and accompany
To optimize the learning rate for Ultralytics YOLO, start by setting an initial learning rate using the `lr0` parameter. Common values range from `0.001` to `0.01`. During the hyperparameter tuning process, this value will be mutated to find the optimal setting. You can utilize the `model.tune()` method to automate this process. For example:
@ -68,7 +68,7 @@ Let's work together to make the Ultralytics YOLO ecosystem more robust and versa
Training a custom object detection model with Ultralytics YOLO is straightforward. Start by preparing your dataset in the correct format and installing the Ultralytics package. Use the following code to initiate training:
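A minimal sketch, with the dataset YAML path as an assumption:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # start from pretrained weights
results = model.train(data="path/to/your_dataset.yaml", epochs=100, imgsz=640)
```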
@ -146,7 +146,7 @@ For any inquiries, feel free to post your questions in the [Ultralytics Issue Se
To perform instance segmentation using Ultralytics YOLOv8, initialize the YOLO model with a segmentation version of YOLOv8 and process video frames through it. Here's a simplified code example:
!!! Example
!!! example
=== "Python"
@ -200,7 +200,7 @@ Ultralytics YOLOv8 offers real-time performance, superior accuracy, and ease of
To implement object tracking, use the `model.track` method and ensure that each object's ID is consistently assigned across frames. Below is a simple example:
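A minimal sketch; the video path and tracker configuration are assumptions:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# The underlying tracker keeps object IDs consistent across frames
results = model.track(source="path/to/video.mp4", tracker="bytetrack.yaml", show=True)
```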
@ -331,7 +331,7 @@ For more insights, check out our [blog post](https://www.ultralytics.com/blog/ac
Yes, YOLOv8 models can be deployed on mobile devices using TensorFlow Lite (TF Lite) for both Android and iOS platforms. TF Lite is designed for mobile and embedded devices, providing efficient on-device inference.
@ -63,7 +63,7 @@ The `imgsz` validation parameter sets the maximum dimension for image resizing,
If you want to get a deeper understanding of your YOLOv8 model's performance, you can easily access specific evaluation metrics with a few lines of Python code. The code snippet below will let you load your model, run an evaluation, and print out various metrics that show how well your model is doing.
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -165,7 +165,7 @@ Improving mean average precision (mAP) for a YOLOv8 model involves several steps
You can access YOLOv8 model evaluation metrics using Python with the following steps:
This guide has been tested with both the [Seeed Studio reComputer J4012](https://www.seeedstudio.com/reComputer-J4012-p-5586.html), which is based on the NVIDIA Jetson Orin NX 16GB running the latest stable JetPack release [JP6.0](https://developer.nvidia.com/embedded/jetpack-sdk-60) as well as JetPack release [JP5.1.3](https://developer.nvidia.com/embedded/jetpack-sdk-513), and the [Seeed Studio reComputer J1020 v2](https://www.seeedstudio.com/reComputer-J1020-v2-p-5498.html), which is based on the NVIDIA Jetson Nano 4GB running JetPack release [JP4.6.1](https://developer.nvidia.com/embedded/jetpack-sdk-461). It is expected to work across the entire NVIDIA Jetson hardware lineup, including the latest and legacy devices.
@ -57,7 +57,7 @@ The first step after getting your hands on an NVIDIA Jetson device is to flash N
3. If you own a Seeed Studio reComputer J4012 device, you can [flash JetPack to the included SSD](https://wiki.seeedstudio.com/reComputer_J4012_Flash_Jetpack/), and if you own a Seeed Studio reComputer J1020 v2 device, you can [flash JetPack to the eMMC/SSD](https://wiki.seeedstudio.com/reComputer_J2021_J202_Flash_Jetpack/).
4. If you own any other third party device powered by the NVIDIA Jetson module, it is recommended to follow [command-line flashing](https://docs.nvidia.com/jetson/archives/r35.5.0/DeveloperGuide/IN/QuickStart.html).
!!! Note
!!! note
For methods 3 and 4 above, after flashing the system and booting the device, please enter "sudo apt update && sudo apt install nvidia-jetpack -y" on the device terminal to install all the remaining JetPack components needed.
Visit the [Export page](../modes/export.md#arguments) to access additional arguments when exporting models to different model formats
@ -294,7 +294,7 @@ Even though all model exports are working with NVIDIA Jetson, we have only inclu
The table below presents the benchmark results for five different models (YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, YOLOv8x) across ten different formats (PyTorch, TorchScript, ONNX, OpenVINO, TensorRT, TF SavedModel, TF GraphDef, TF Lite, PaddlePaddle, NCNN), giving us the status, size, mAP50-95(B) metric, and inference time for each combination.
!!! Performance
!!! performance
=== "YOLOv8n"
@ -377,7 +377,7 @@ The below table represents the benchmark results for five different models (YOLO
To reproduce the above Ultralytics benchmarks on all export [formats](../modes/export.md) run this code:
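A minimal sketch, assuming the `benchmark` utility in `ultralytics.utils.benchmarks` and a GPU device:

```python
from ultralytics.utils.benchmarks import benchmark

# Benchmark YOLOv8n across all export formats on GPU device 0
benchmark(model="yolov8n.pt", data="coco8.yaml", imgsz=640, half=False, device=0)
```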
@ -27,7 +27,7 @@ Object blurring with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
- **Selective Focus**: YOLOv8 allows for selective blurring, enabling users to target specific objects, ensuring a balance between privacy and retaining relevant visual information.
- **Real-time Processing**: YOLOv8's efficiency enables object blurring in real-time, making it suitable for applications requiring on-the-fly privacy enhancements in dynamic environments.
!!! Example "Object Blurring using YOLOv8 Example"
!!! example "Object Blurring using YOLOv8 Example"
@ -46,7 +46,7 @@ Object counting with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
| ![Conveyor Belt Packets Counting Using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/conveyor-belt-packets-counting.avif) | ![Fish Counting in Sea using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/fish-counting-in-sea-using-ultralytics-yolov8.avif) |
| Conveyor Belt Packets Counting Using Ultralytics YOLOv8 | Fish Counting in Sea using Ultralytics YOLOv8 |
!!! Example "Object Counting using YOLOv8 Example"
!!! example "Object Counting using YOLOv8 Example"
=== "Count in Region"
@ -224,23 +224,15 @@ Object counting with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
Here's a table with the `ObjectCounter` arguments:
@ -34,7 +34,7 @@ Object cropping with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
| ![Conveyor Belt at Airport Suitcases Cropping using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/suitcases-cropping-airport-conveyor-belt.avif) |
| Suitcases Cropping at airport conveyor belt using Ultralytics YOLOv8 |
!!! Example "Object Cropping using YOLOv8 Example"
!!! example "Object Cropping using YOLOv8 Example"
@ -38,18 +38,18 @@ Parking management with [Ultralytics YOLOv8](https://github.com/ultralytics/ultr
### Selection of Points
!!! Tip "Point Selection is now Easy"
!!! tip "Point Selection is now Easy"
Choosing parking points is a critical and complex task in parking management systems. Ultralytics streamlines this process by providing a tool that lets you define parking lot areas, which can be utilized later for additional processing.
- Capture a frame from the video or camera stream where you want to manage the parking lot.
- Use the provided code to launch a graphical interface, where you can select an image and start outlining parking regions by mouse click to create polygons.
!!! Warning "Image Size"
!!! warning "Image Size"
A maximum image size of 1920 × 1080 is supported.
!!! Example "Parking slots Annotator Ultralytics YOLOv8"
!!! example "Parking slots Annotator Ultralytics YOLOv8"
=== "Parking Annotator"
@ -65,7 +65,7 @@ Parking management with [Ultralytics YOLOv8](https://github.com/ultralytics/ultr
### Python Code for Parking Management
!!! Example "Parking management using YOLOv8 Example"
!!! example "Parking management using YOLOv8 Example"
@ -33,7 +33,7 @@ Queue management using [Ultralytics YOLOv8](https://github.com/ultralytics/ultra
| ![Queue management at airport ticket counter using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/queue-management-airport-ticket-counter-ultralytics-yolov8.avif) | ![Queue monitoring in crowd using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/queue-monitoring-crowd-ultralytics-yolov8.avif) |
| Queue management at airport ticket counter Using Ultralytics YOLOv8 | Queue monitoring in crowd Ultralytics YOLOv8 |
!!! Example "Queue Management using YOLOv8 Example"
!!! example "Queue Management using YOLOv8 Example"
@ -19,7 +19,7 @@ This comprehensive guide provides a detailed walkthrough for deploying Ultralyti
<strong>Watch:</strong> Raspberry Pi 5 updates and improvements.
</p>
!!! Note
!!! note
This guide has been tested with Raspberry Pi 4 and Raspberry Pi 5 running the latest [Raspberry Pi OS Bookworm (Debian 12)](https://www.raspberrypi.com/software/operating-systems/). Using this guide for older Raspberry Pi devices such as the Raspberry Pi 3 is expected to work as long as the same Raspberry Pi OS Bookworm is installed.
@ -100,7 +100,7 @@ Out of all the model export formats supported by Ultralytics, [NCNN](https://doc
The YOLOv8n model in PyTorch format is converted to NCNN to run inference with the exported model.
!!! Example
!!! example
=== "Python"
@ -130,7 +130,7 @@ The YOLOv8n model in PyTorch format is converted to NCNN to run inference with t
For more details about supported export options, visit the [Ultralytics documentation page on deployment options](https://docs.ultralytics.com/guides/model-deployment-options).
@ -138,7 +138,7 @@ The YOLOv8n model in PyTorch format is converted to NCNN to run inference with t
YOLOv8 benchmarks were run by the Ultralytics team on nine different model formats measuring speed and accuracy: PyTorch, TorchScript, ONNX, OpenVINO, TF SavedModel, TF GraphDef, TF Lite, PaddlePaddle, NCNN. Benchmarks were run on both Raspberry Pi 5 and Raspberry Pi 4 at FP32 precision with default input image size of 640.
!!! Note
!!! note
We have only included benchmarks for the YOLOv8n and YOLOv8s models because other model sizes are too big to run on the Raspberry Pis and do not offer decent performance.
@ -224,7 +224,7 @@ The below table represents the benchmark results for two different models (YOLOv
To reproduce the above Ultralytics benchmarks on all [export formats](../modes/export.md), run this code:
!!! Example
!!! example
=== "Python"
@ -251,11 +251,11 @@ To reproduce the above Ultralytics benchmarks on all [export formats](../modes/e
When using a Raspberry Pi for computer vision projects, it can be essential to grab real-time video feeds to perform inference. The onboard MIPI CSI connector on the Raspberry Pi allows you to connect official Raspberry Pi camera modules. In this guide, we have used a [Raspberry Pi Camera Module 3](https://www.raspberrypi.com/products/camera-module-3) to grab video feeds and perform inference using YOLOv8 models.
!!! Tip
!!! tip
Learn more about the [different camera modules offered by Raspberry Pi](https://www.raspberrypi.com/documentation/accessories/camera.html) and also [how to get started with the Raspberry Pi camera modules](https://www.raspberrypi.com/documentation/computers/camera_software.html#introducing-the-raspberry-pi-cameras).
!!! Note
!!! note
Raspberry Pi 5 uses smaller CSI connectors than the Raspberry Pi 4 (15-pin vs 22-pin), so you will need a [15-pin to 22-pin adapter cable](https://www.raspberrypi.com/products/camera-cable) to connect a Raspberry Pi Camera.
@ -267,7 +267,7 @@ Execute the following command after connecting the camera to the Raspberry Pi. Y
rpicam-hello
```
!!! Tip
!!! tip
Learn more about [`rpicam-hello` usage on official Raspberry Pi documentation](https://www.raspberrypi.com/documentation/computers/camera_software.html#rpicam-hello)
@ -275,13 +275,13 @@ rpicam-hello
There are two methods of using the Raspberry Pi Camera to run inference with YOLOv8 models.
!!! Usage
!!! usage
=== "Method 1"
We can use `picamera2`, which comes pre-installed with Raspberry Pi OS, to access the camera and run inference with YOLOv8 models.
!!! Example
!!! example
=== "Python"
@ -333,7 +333,7 @@ There are 2 methods of using the Raspberry Pi Camera to inference YOLOv8 models.
Learn more about [`rpicam-vid` usage on official Raspberry Pi documentation](https://www.raspberrypi.com/documentation/computers/camera_software.html#rpicam-vid)
!!! Example
!!! example
=== "Python"
@ -353,7 +353,7 @@ There are 2 methods of using the Raspberry Pi Camera to inference YOLOv8 models.
Check our document on [Inference Sources](https://docs.ultralytics.com/modes/predict/#inference-sources) if you want to change the image/video input type.
@ -410,7 +410,7 @@ Ultralytics YOLOv8's NCNN format is highly optimized for mobile and embedded pla
You can convert a PyTorch YOLOv8 model to NCNN format using either Python or CLI commands:
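A minimal sketch of the conversion plus a follow-up inference call; the sample image URL is an assumption:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.export(format="ncnn")  # creates 'yolov8n_ncnn_model'

# Load the exported NCNN model and run inference
ncnn_model = YOLO("yolov8n_ncnn_model")
results = ncnn_model("https://ultralytics.com/images/bus.jpg")
```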
| ![Speed Estimation on Road using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/speed-estimation-on-road-using-ultralytics-yolov8.avif) | ![Speed Estimation on Bridge using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/speed-estimation-on-bridge-using-ultralytics-yolov8.avif) |
| Speed Estimation on Road using Ultralytics YOLOv8 | Speed Estimation on Bridge using Ultralytics YOLOv8 |
!!! Example "Speed Estimation using YOLOv8 Example"
!!! example "Speed Estimation using YOLOv8 Example"
@ -38,7 +38,7 @@ Streamlit makes it simple to build and deploy interactive web applications. Comb
Before you start building the application, ensure you have the Ultralytics Python package installed. You can install it with `pip install ultralytics`.
!!! Example "Streamlit Application"
!!! example "Streamlit Application"
=== "Python"
@ -60,7 +60,7 @@ This will launch the Streamlit application in your default web browser. You will
You can optionally supply a specific model in Python:
!!! Example "Streamlit Application with a custom model"
!!! example "Streamlit Application with a custom model"
=== "Python"
@ -104,7 +104,7 @@ pip install ultralytics
Then, you can create a basic Streamlit application to run live inference:
@ -57,7 +57,7 @@ I have read the CLA Document and I sign the CLA
When adding new functions or classes, please include [Google-style docstrings](https://google.github.io/styleguide/pyguide.html). These docstrings provide clear, standardized documentation that helps other developers understand and maintain your code.
@ -39,7 +39,7 @@ We take several measures to ensure the privacy and security of the data you entr
[Sentry](https://sentry.io/welcome/) is a developer-centric error tracking software that aids in identifying, diagnosing, and resolving issues in real-time, ensuring the robustness and reliability of applications. Within our package, it plays a crucial role by providing insights through crash reporting, significantly contributing to the stability and ongoing refinement of our software.
!!! Note
!!! note
Crash reporting via Sentry is activated only if the `sentry-sdk` Python package is pre-installed on your system. This package isn't included in the `ultralytics` prerequisites and won't be installed automatically by Ultralytics.
@ -74,7 +74,7 @@ To opt out of sending analytics and crash reports, you can simply set `sync=Fals
To gain insight into the current configuration of your settings, you can view them directly:
!!! Example "View settings"
!!! example "View settings"
=== "Python"
@ -100,7 +100,7 @@ To gain insight into the current configuration of your settings, you can view th
Ultralytics allows users to easily modify their settings. Changes can be performed in the following ways:
!!! Example "Update settings"
!!! example "Update settings"
=== "Python"
@ -159,7 +159,7 @@ Ultralytics collects three primary types of data using Google Analytics:
To opt out of data collection, you can simply set `sync=False` in your YOLO settings. This action stops the transmission of any analytics or crash reports. You can disable data collection using Python or CLI methods:
!!! Example "Update settings"
!!! example "Update settings"
=== "Python"
@ -193,7 +193,7 @@ If the `sentry-sdk` package is pre-installed, Sentry collects detailed crash log
Yes, you can easily view your current settings to understand the configuration of your data collection preferences. Use the following methods to inspect these settings:
INT8 (or 8-bit integer) quantization further reduces the model's size and computation requirements by converting its 32-bit floating-point numbers to 8-bit integers. This quantization method can result in a significant speedup, but it may lead to a slight reduction in mean average precision (mAP) due to the lower numerical precision.
!!! Tip "mAP Reduction in INT8 Models"
!!! tip "mAP Reduction in INT8 Models"
The reduced numerical precision in INT8 models can lead to some loss of information during the quantization process, which may result in a slight decrease in mAP. However, this trade-off is often acceptable considering the substantial performance gains offered by INT8 quantization.
@ -64,7 +64,7 @@ In this step, you have to choose the project in which you want to create your mo
In case you don't have a project created yet, you can set the name of your project in this step and it will be created together with your model.
!!! Info "Info"
!!! info "Info"
You can read more about the available [YOLOv8](https://docs.ultralytics.com/models/yolov8) (and [YOLOv5](https://docs.ultralytics.com/models/yolov5)) architectures in our documentation.
@ -76,7 +76,7 @@ Set the general access to "Unlisted" and click **Save**.
![Ultralytics HUB screenshot of the Share Project dialog with an arrow pointing to the dropdown and one to the Save button](https://github.com/ultralytics/docs/releases/download/0/hub-share-project-dialog.avif)
!!! Warning "Warning"
!!! warning "Warning"
When changing the general access of a project, the general access of the models inside the project will be changed as well.
@ -116,7 +116,7 @@ Navigate to the Project page of the project you want to delete, open the project
![Ultralytics HUB screenshot of the Projects page with an arrow pointing to the Delete option of one of the projects](https://github.com/ultralytics/docs/releases/download/0/hub-delete-project-option-1.avif)
!!! Warning "Warning"
!!! warning "Warning"
When deleting a project, the models inside the project will be deleted as well.
@ -56,7 +56,7 @@ Navigate to the [Teams](https://hub.ultralytics.com/settings?tab=teams) page, op
![Ultralytics HUB screenshot of the Teams page with an arrow pointing to the Delete option of one of the teams](https://github.com/ultralytics/docs/releases/download/0/hub-delete-team-option.avif)
@ -26,7 +26,7 @@ You can bring automation and efficiency to your machine learning workflow by imp
To install the required packages, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -43,7 +43,7 @@ Once you have installed the necessary packages, the next step is to initialize a
Begin by initializing the ClearML SDK in your environment. The `clearml-init` command starts the setup process and prompts you for the necessary credentials.
!!! Tip "Initial SDK Setup"
!!! tip "Initial SDK Setup"
=== "CLI"
@ -58,7 +58,7 @@ After executing this command, visit the [ClearML Settings page](https://app.clea
Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
@ -26,7 +26,7 @@ By combining Ultralytics YOLOv8 with Comet ML, you unlock a range of benefits. T
To install the required packages, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -39,7 +39,7 @@ To install the required packages, run:
After installing the required packages, you'll need to sign up, get a [Comet API Key](https://www.comet.com/signup), and configure it.
!!! Tip "Configuring Comet ML"
!!! tip "Configuring Comet ML"
=== "CLI"
@ -62,7 +62,7 @@ If you are using a Google Colab notebook, the code above will prompt you to ente
Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
@ -75,7 +75,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -131,7 +131,7 @@ Also, if you'd like to know more about other Ultralytics YOLOv8 integrations, vi
To export your [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models to CoreML format, you'll first need to ensure you have the `ultralytics` package installed. You can install it using:
!!! Example "Installation"
!!! example "Installation"
=== "CLI"
@ -141,7 +141,7 @@ To export your [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics)
Next, you can export the model using the following Python or CLI commands:
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -198,7 +198,7 @@ For more information on performance optimization, visit the [CoreML official doc
Yes, you can run inference directly using the exported CoreML model. Below are the commands for Python and CLI:
@ -50,7 +50,7 @@ You can expand model compatibility and deployment flexibility by converting YOLO
To install the required package, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -65,7 +65,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can ensure that the model you select supports export functionality [here](../modes/export.md).
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -123,7 +123,7 @@ Also, for more information on other Ultralytics YOLOv8 integrations, please visi
To export a YOLOv8 model to TFLite Edge TPU format, you can follow these steps:
@ -56,7 +56,7 @@ Once you do so, a notebook environment will open for you to load your data set.
Next, you can install and import the necessary Python libraries.
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -71,7 +71,7 @@ For detailed instructions and best practices related to the installation process
Then, you can import the needed packages.
!!! Example "Import Relevant Libraries"
!!! example "Import Relevant Libraries"
=== "Python"
@ -92,7 +92,7 @@ We can load the dataset directly into the notebook using the Kaggle API. First,
Copy and paste your Kaggle username and API key into the following code. Then run the code to install the API and load the dataset into Watsonx.
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -103,7 +103,7 @@ Copy and paste your Kaggle username and API key into the following code. Then ru
After installing Kaggle, we can load the dataset into Watsonx.
!!! Example "Load the Data"
!!! example "Load the Data"
=== "Python"
@ -155,7 +155,7 @@ But, YOLO models by default require separate images and labels in subdirectories
To reorganize the data set directory, we can run the following script:
!!! Example "Preprocess the Data"
!!! example "Preprocess the Data"
=== "Python"
@ -207,7 +207,7 @@ names:
Run the following script to delete the current contents of `config.yaml` and replace them with the above contents, which reflect our new dataset directory structure. Be certain to replace the `work_dir` portion of the root directory path in line 4 with your own working directory path retrieved earlier. Leave the train, val, and test subdirectory definitions unchanged. Also, do not change `{work_dir}` in line 23 of the code.
!!! Example "Edit the .yaml File"
!!! example "Edit the .yaml File"
=== "Python"
@ -240,7 +240,7 @@ Run the following script to delete the current contents of config.yaml and repla
Run the following command-line code to fine-tune a pretrained default YOLOv8 model.
!!! Example "Train the YOLOv8 model"
!!! example "Train the YOLOv8 model"
=== "CLI"
@ -263,7 +263,7 @@ For a detailed understanding of the model training process and best practices, r
We can now run inference to test the performance of our fine-tuned model:
!!! Example "Test the YOLOv8 model"
!!! example "Test the YOLOv8 model"
=== "CLI"
@ -279,7 +279,7 @@ The parameter `conf=0.5` informs the model to ignore all predictions with a conf
Lastly, `iou=.5` directs the model to ignore boxes in the same class with an overlap of 50% or greater. It helps to reduce potential duplicate boxes generated for the same object.
We can load the images with predicted bounding box overlays to view how our model performs on a handful of images.
Make sure that MLflow logging is enabled in Ultralytics settings. Usually, this is controlled by the `mlflow` settings key. See the [settings](../quickstart.md#ultralytics-settings) page for more info.
!!! Example "Update Ultralytics MLflow Settings"
!!! example "Update Ultralytics MLflow Settings"
=== "Python"
@ -130,7 +130,7 @@ pip install mlflow
Next, enable MLflow logging in Ultralytics settings. This can be controlled using the `mlflow` key. For more information, see the [settings guide](../quickstart.md#ultralytics-settings).
@ -52,7 +52,7 @@ You can expand model compatibility and deployment flexibility by converting YOLO
To install the required packages, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -67,7 +67,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can ensure that the model you select supports export functionality [here](../modes/export.md).
@ -70,7 +70,7 @@ Deploying YOLOv8 with Neural Magic's DeepSparse involves a few straightforward s
To install the required packages, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -83,7 +83,7 @@ To install the required packages, run:
DeepSparse Engine requires YOLOv8 models in ONNX format. Exporting your model to this format is essential for compatibility with DeepSparse. Use the following command to export YOLOv8 models:
!!! Tip "Model Export"
!!! tip "Model Export"
=== "CLI"
@ -98,7 +98,7 @@ This command will save the `yolov8n.onnx` model to your disk.
With your YOLOv8 model in ONNX format, you can deploy and run inferences using DeepSparse. This can be done easily with their intuitive Python API:
!!! Tip "Deploying and Running Inferences"
!!! tip "Deploying and Running Inferences"
=== "Python"
@ -120,7 +120,7 @@ With your YOLOv8 model in ONNX format, you can deploy and run inferences using D
It's important to check that your YOLOv8 model is performing optimally on DeepSparse. You can benchmark your model's performance to analyze throughput and latency:
!!! Tip "Benchmarking"
!!! tip "Benchmarking"
=== "CLI"
@ -133,7 +133,7 @@ It's important to check that your YOLOv8 model is performing optimally on DeepSp
DeepSparse provides additional features for practical integration of YOLOv8 in applications, such as image annotation and dataset evaluation.
@ -68,7 +68,7 @@ You can expand model compatibility and deployment flexibility by converting YOLO
To install the required package, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -83,7 +83,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -139,7 +139,7 @@ Also, if you'd like to know more about other Ultralytics YOLOv8 integrations, vi
To export your YOLOv8 models to ONNX format using Ultralytics, follow these steps:
@ -27,7 +27,7 @@ OpenVINO, short for Open Visual Inference & Neural Network Optimization toolkit,
Export a YOLOv8n model to OpenVINO format and run inference with the exported model.
!!! Example
!!! example
=== "Python"
@ -105,7 +105,7 @@ For more detailed steps and code snippets, refer to the [OpenVINO documentation]
YOLOv8 benchmarks below were run by the Ultralytics team on 4 different model formats measuring speed and accuracy: PyTorch, TorchScript, ONNX and OpenVINO. Benchmarks were run on Intel Flex and Arc GPUs, and on Intel Xeon CPUs at FP32 precision (with the `half=False` argument).
!!! Note
!!! note
The benchmarking results below are for reference and might vary based on the exact hardware and software configuration of a system, as well as the current workload of the system at the time the benchmarks are run.
@ -255,7 +255,7 @@ Benchmarks below run on 13th Gen Intel® Core® i7-13700H CPU at FP32 precision.
To reproduce the Ultralytics benchmarks above on all export [formats](../modes/export.md) run this code:
!!! Example
!!! example
=== "Python"
@ -294,7 +294,7 @@ For more detailed information and instructions on using OpenVINO, refer to the [
Exporting YOLOv8 models to the OpenVINO format can significantly enhance CPU speed and enable GPU and NPU accelerations on Intel hardware. To export, you can use either Python or CLI as shown below:
Yes, you can benchmark YOLOv8 models in various formats including PyTorch, TorchScript, ONNX, and OpenVINO. Use the following code snippet to run benchmarks on your chosen dataset:
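A minimal sketch, assuming the `benchmark` utility in `ultralytics.utils.benchmarks` and CPU execution:

```python
from ultralytics.utils.benchmarks import benchmark

# Benchmark YOLOv8n across export formats on CPU
benchmark(model="yolov8n.pt", data="coco8.yaml", imgsz=640, half=False, device="cpu")
```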
@ -54,7 +54,7 @@ Converting YOLOv8 models to the PaddlePaddle format can improve execution flexib
To install the required package, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -69,7 +69,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can check [here](../modes/export.md) whether the model you select supports export functionality.
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -127,7 +127,7 @@ Want to explore more ways to integrate your Ultralytics YOLOv8 models? Our [inte
Exporting Ultralytics YOLOv8 models to PaddlePaddle format is straightforward. You can use the `export` method of the YOLO class to perform this exportation. Here is an example using Python:
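A minimal sketch, assuming the standard `ultralytics` export API:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# Export to PaddlePaddle format; the output is written to a
# 'yolov8n_paddle_model/' directory alongside the source weights
model.export(format="paddle")
```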
@ -28,7 +28,7 @@ YOLOv8 also allows optional integration with [Weights & Biases](https://wandb.ai
To install the required packages, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -42,7 +42,7 @@ To install the required packages, run:
## Usage
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -103,7 +103,7 @@ The following table lists the default search space parameters for hyperparameter
In this example, we demonstrate how to use a custom search space for hyperparameter tuning with Ray Tune and YOLOv8. By providing a custom search space, you can focus the tuning process on specific hyperparameters of interest.
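A hedged sketch of what such a custom search space can look like, assuming `ray[tune]` is installed and using `coco8.yaml` as a placeholder dataset:

```python
from ray import tune

from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Restrict tuning to a custom search space over learning rate and momentum
result_grid = model.tune(
    data="coco8.yaml",
    space={"lr0": tune.uniform(1e-5, 1e-1), "momentum": tune.uniform(0.6, 0.98)},
    epochs=10,
    use_ray=True,
)
```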
@ -8,7 +8,7 @@ keywords: Roboflow, YOLOv8, data labeling, computer vision, model training, mode
[Roboflow](https://roboflow.com/?ref=ultralytics) has everything you need to build and deploy computer vision models. Connect Roboflow at any step in your pipeline with APIs and SDKs, or use the end-to-end interface to automate the entire process from image to inference. Whether you're in need of [data labeling](https://roboflow.com/annotate?ref=ultralytics), [model training](https://roboflow.com/train?ref=ultralytics), or [model deployment](https://roboflow.com/deploy?ref=ultralytics), Roboflow gives you building blocks to bring custom computer vision solutions to your project.
@ -26,7 +26,7 @@ Using TensorBoard while training YOLOv8 models is straightforward and offers sig
To install the required package, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -43,7 +43,7 @@ For detailed instructions and best practices related to the installation process
When using Google Colab, it's important to set up TensorBoard before starting your training code:
!!! Example "Configure TensorBoard for Google Colab"
!!! example "Configure TensorBoard for Google Colab"
=== "Python"
@ -56,7 +56,7 @@ When using Google Colab, it's important to set up TensorBoard before starting yo
Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -189,7 +189,7 @@ These visualizations are essential for tracking model performance and making nec
Yes, you can use TensorBoard in a Google Colab environment to train YOLOv8 models. Here's a quick setup:
!!! Example "Configure TensorBoard for Google Colab"
!!! example "Configure TensorBoard for Google Colab"
@ -62,7 +62,7 @@ You can improve execution efficiency and optimize performance by converting YOLO
To install the required package, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -77,7 +77,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
@ -58,7 +58,7 @@ You can convert your YOLOv8 object detection model to the TF GraphDef format, wh
To install the required package, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -73,7 +73,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can check [here](../modes/export.md) whether the model you select supports export functionality.
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -131,7 +131,7 @@ For more information on integrating Ultralytics YOLOv8 with other platforms and
Ultralytics YOLOv8 models can be exported to TensorFlow GraphDef (TF GraphDef) format seamlessly. This format provides a serialized, platform-independent representation of the model, ideal for deploying in varied environments like mobile and web. To export a YOLOv8 model to TF GraphDef, follow these steps:
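A minimal sketch, assuming the standard `ultralytics` export API, where `format="pb"` selects TF GraphDef:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# Export to TensorFlow GraphDef; 'pb' produces a frozen 'yolov8n.pb' graph
model.export(format="pb")
```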
@ -52,7 +52,7 @@ By exporting YOLOv8 models to the TF SavedModel format, you enhance their adapta
To install the required package, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -67,7 +67,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can check [here](../modes/export.md) whether the model you select supports export functionality.
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -125,7 +125,7 @@ For more information on integrating Ultralytics YOLOv8 with other platforms and
Exporting an Ultralytics YOLO model to the TensorFlow SavedModel format is straightforward. You can use either Python or CLI to achieve this:
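A minimal Python sketch, assuming the standard `ultralytics` export API; the image path is a placeholder:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.export(format="saved_model")  # creates 'yolov8n_saved_model/'

# The exported directory can be loaded back for inference
tf_model = YOLO("yolov8n_saved_model/")
results = tf_model("path/to/image.jpg")
```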
@ -50,7 +50,7 @@ You can expand model compatibility and deployment flexibility by converting YOLO
To install the required package, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -65,7 +65,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can check [here](../modes/export.md) whether the model you select supports export functionality.
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -123,7 +123,7 @@ For more information on integrating Ultralytics YOLOv8 with other platforms and
Exporting Ultralytics YOLOv8 models to TensorFlow.js (TF.js) format is straightforward. You can follow these steps:
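As a hedged sketch with the standard `ultralytics` export API:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# Export to TensorFlow.js; the output lands in 'yolov8n_web_model/'
model.export(format="tfjs")
```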
@ -56,7 +56,7 @@ You can improve on-device model execution efficiency and optimize performance by
To install the required packages, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -71,7 +71,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can check [here](../modes/export.md) whether the model you select supports export functionality.
@ -60,7 +60,7 @@ Exporting YOLOv8 models to TorchScript makes it easier to use them in different
To install the required package, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -75,7 +75,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can check [here](../modes/export.md) whether the model you select supports export functionality.
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -135,7 +135,7 @@ Exporting an Ultralytics YOLOv8 model to TorchScript allows for flexible, cross-
To export a YOLOv8 model to TorchScript, you can use the following example code:
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -182,7 +182,7 @@ For more insights into deployment, visit the [PyTorch Mobile Documentation](http
To install the required package for exporting YOLOv8 models, use the following command:
@ -39,7 +39,7 @@ Want to let us know what you use for developing code? Head over to our Discourse
## Installing the Extension
!!! Note
!!! note
Any code environment that allows installing VS Code extensions _should be_ compatible with the Ultralytics-snippets extension. After the extension was published, it was discovered that [neovim](https://neovim.io/) can be made compatible with VS Code extensions. To learn more, see the [`neovim` install section][neovim install] of the Readme in the [Ultralytics-Snippets repository][repo].
@ -127,7 +127,7 @@ These are the current snippet categories available to the Ultralytics-snippets e
The `ultra.examples` snippets are useful for anyone looking to learn the basics of working with Ultralytics YOLO. Example snippets are intended to run once inserted (some have dropdown options as well). An example of this is shown in the animation at the [top] of this page, where, after the snippet is inserted, all code is selected and run interactively using <kbd>Shift ⇑</kbd>+<kbd>Enter ↵</kbd>.
!!! Example
!!! example
Just like the animation shows at the [top] of this page, you can use the snippet `ultra.example-yolo-predict` to insert the following code example. Once inserted, the only configurable option is for the model scale which can be any one of: `n`, `s`, `m`, `l`, or `x`.
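The exact inserted text may differ between extension versions; a hedged sketch of the kind of example the snippet produces (the sample image URL is a placeholder):

```python
from ultralytics import YOLO

# Load a pretrained model; the snippet's dropdown selects the scale (n/s/m/l/x)
model = YOLO("yolov8n.pt")

# Run inference on a sample image and display the annotated result
results = model.predict(source="https://ultralytics.com/images/bus.jpg")
results[0].show()
```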
@ -146,7 +146,7 @@ The `ultra.examples` snippets are useful for anyone looking to learn how to g
The aim of the snippets other than `ultra.examples` is to make development easier and quicker when working with Ultralytics. A common code block in many projects is iterating over the list of `Results` returned by the model [predict] method. The `ultra.result-loop` snippet can help with this.
!!! Example
!!! example
Using the `ultra.result-loop` will insert the following default code (including comments).
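A self-contained sketch of the standard results loop this snippet is built around (the image path is a placeholder; which attributes are populated depends on the task):

```python
from ultralytics import YOLO

# Run inference to get a list of Results objects
results = YOLO("yolov8n.pt")("path/to/image.jpg")

# Iterate the Results list
for result in results:
    boxes = result.boxes  # bounding boxes (detect)
    masks = result.masks  # segmentation masks (segment)
    keypoints = result.keypoints  # pose keypoints (pose)
    probs = result.probs  # class probabilities (classify)
```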
@ -170,7 +170,7 @@ However, since Ultralytics supports numerous [tasks], when [working with inferen
There are over 💯 keyword arguments for all of the various Ultralytics [tasks] and [modes]! That's a lot to remember, and it can be easy to forget whether the argument is `save_frame` or `save_frames` (it's definitely `save_frames`, by the way). This is where the `ultra.kwargs` snippets can help out!
!!! Example
!!! example
To insert the [predict] method, including all [inference arguments], use `ultra.kwargs-predict`, which will insert the following code (including comments).
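An abbreviated, hedged sketch: the actual snippet inserts the full argument list with comments, whereas this shows only a handful of common inference arguments:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Abbreviated selection of inference arguments
results = model.predict(
    source="path/to/source",  # image, video, directory, URL, or stream
    conf=0.25,  # minimum confidence threshold
    iou=0.7,  # NMS IoU threshold
    imgsz=640,  # inference image size
    half=False,  # use FP16 half-precision inference
    save=True,  # save annotated results to disk
)
```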
@ -37,7 +37,7 @@ You can use Weights & Biases to bring efficiency and automation to your YOLOv8 t
To install the required packages, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -54,7 +54,7 @@ After installing the necessary packages, the next step is to set up your Weights
Start by initializing the Weights & Biases environment in your workspace. You can do this by running the following command and following the prompted instructions.
!!! Tip "Initial SDK Setup"
!!! tip "Initial SDK Setup"
=== "CLI"
@ -70,7 +70,7 @@ Navigate to the Weights & Biases authorization page to create and retrieve your
Before diving into the usage instructions for YOLOv8 model training with Weights & Biases, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
!!! Example "Usage: Training YOLOv8 with Weights & Biases"
!!! example "Usage: Training YOLOv8 with Weights & Biases"
=== "Python"