Merge branch 'main' into augment-fix

augment-fix
Ultralytics Assistant committed 3 months ago via GitHub
commit 0c4a5dc2b2
Files changed (changed line count in parentheses):

1. .gitignore (5)
2. docs/en/datasets/classify/caltech101.md (11)
3. docs/en/datasets/classify/caltech256.md (6)
4. docs/en/datasets/classify/cifar10.md (8)
5. docs/en/datasets/classify/cifar100.md (10)
6. docs/en/datasets/classify/fashion-mnist.md (10)
7. docs/en/datasets/classify/imagenet.md (12)
8. docs/en/datasets/classify/imagenet10.md (6)
9. docs/en/datasets/classify/imagenette.md (16)
10. docs/en/datasets/classify/imagewoof.md (12)
11. docs/en/datasets/classify/index.md (10)
12. docs/en/datasets/classify/mnist.md (10)
13. docs/en/datasets/detect/african-wildlife.md (12)
14. docs/en/datasets/detect/argoverse.md (16)
15. docs/en/datasets/detect/brain-tumor.md (18)
16. docs/en/datasets/detect/coco.md (14)
17. docs/en/datasets/detect/coco8.md (10)
18. docs/en/datasets/detect/globalwheat2020.md (10)
19. docs/en/datasets/detect/index.md (48)
20. docs/en/datasets/detect/lvis.md (14)
21. docs/en/datasets/detect/objects365.md (9)
22. docs/en/datasets/detect/open-images-v7.md (35)
23. docs/en/datasets/detect/roboflow-100.md (12)
24. docs/en/datasets/detect/signature.md (10)
25. docs/en/datasets/detect/sku-110k.md (16)
26. docs/en/datasets/detect/visdrone.md (16)
27. docs/en/datasets/detect/voc.md (8)
28. docs/en/datasets/detect/xview.md (19)
29. docs/en/datasets/explorer/api.md (28)
30. docs/en/datasets/explorer/dashboard.md (8)
31. docs/en/datasets/explorer/explorer.ipynb (1201)
32. docs/en/datasets/index.md (10)
33. docs/en/datasets/obb/dota-v2.md (16)
34. docs/en/datasets/obb/dota8.md (12)
35. docs/en/datasets/obb/index.md (13)
36. docs/en/datasets/pose/coco.md (10)
37. docs/en/datasets/pose/coco8-pose.md (12)
38. docs/en/datasets/pose/index.md (26)
39. docs/en/datasets/pose/tiger-pose.md (22)
40. docs/en/datasets/segment/carparts-seg.md (13)
41. docs/en/datasets/segment/coco.md (16)
42. docs/en/datasets/segment/coco8-seg.md (12)
43. docs/en/datasets/segment/crack-seg.md (10)
44. docs/en/datasets/segment/index.md (46)
45. docs/en/datasets/segment/package-seg.md (15)
46. docs/en/datasets/track/index.md (8)
47. docs/en/guides/analytics.md (2)
48. docs/en/guides/coral-edge-tpu-on-raspberry-pi.md (8)
49. docs/en/guides/deepstream-nvidia-jetson.md (27)
50. docs/en/guides/distance-calculation.md (2)
51. docs/en/guides/heatmaps.md (2)
52. docs/en/guides/hyperparameter-tuning.md (4)
53. docs/en/guides/index.md (2)
54. docs/en/guides/instance-segmentation-and-tracking.md (6)
55. docs/en/guides/model-deployment-options.md (2)
56. docs/en/guides/model-evaluation-insights.md (4)
57. docs/en/guides/nvidia-jetson.md (16)
58. docs/en/guides/object-blurring.md (2)
59. docs/en/guides/object-counting.md (28)
60. docs/en/guides/object-cropping.md (2)
61. docs/en/guides/parking-management.md (8)
62. docs/en/guides/queue-management.md (2)
63. docs/en/guides/raspberry-pi.md (26)
64. docs/en/guides/sahi-tiled-inference.md (2)
65. docs/en/guides/speed-estimation.md (2)
66. docs/en/guides/streamlit-live-inference.md (6)
67. docs/en/guides/vision-eye.md (2)
68. docs/en/guides/workouts-monitoring.md (2)
69. docs/en/help/contributing.md (2)
70. docs/en/help/privacy.md (10)
71. docs/en/hub/app/android.md (2)
72. docs/en/hub/datasets.md (2)
73. docs/en/hub/inference-api.md (10)
74. docs/en/hub/models.md (2)
75. docs/en/hub/projects.md (4)
76. docs/en/hub/teams.md (2)
77. docs/en/integrations/clearml.md (6)
78. docs/en/integrations/comet.md (6)
79. docs/en/integrations/coreml.md (10)
80. docs/en/integrations/dvc.md (10)
81. docs/en/integrations/edge-tpu.md (6)
82. docs/en/integrations/ibm-watsonx.md (18)
83. docs/en/integrations/jupyterlab.md (4)
84. docs/en/integrations/mlflow.md (4)
85. docs/en/integrations/ncnn.md (4)
86. docs/en/integrations/neural-magic.md (10)
87. docs/en/integrations/onnx.md (6)
88. docs/en/integrations/openvino.md (12)
89. docs/en/integrations/paddlepaddle.md (6)
90. docs/en/integrations/ray-tune.md (6)
91. docs/en/integrations/roboflow.md (2)
92. docs/en/integrations/tensorboard.md (8)
93. docs/en/integrations/tensorrt.md (4)
94. docs/en/integrations/tf-graphdef.md (6)
95. docs/en/integrations/tf-savedmodel.md (6)
96. docs/en/integrations/tfjs.md (6)
97. docs/en/integrations/tflite.md (4)
98. docs/en/integrations/torchscript.md (8)
99. docs/en/integrations/vscode.md (8)
100. docs/en/integrations/weights-biases.md (6)

Some files were not shown because too many files have changed in this diff.

.gitignore (vendored, 5 changed lines)

@@ -130,7 +130,6 @@ venv.bak/
 # mkdocs documentation
 /site
-mkdocs_github_authors.yaml
 # mypy
 .mypy_cache/
@@ -140,8 +139,8 @@ dmypy.json
 # Pyre type checker
 .pyre/
-# datasets and projects
-datasets/
+# datasets and projects (ignore /datasets dir at root only to allow for docs/en/datasets dir)
+/datasets
 runs/
 wandb/
 .DS_Store
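The substance of this .gitignore change is gitignore pattern semantics: `datasets/` matches a directory named `datasets` at any depth, which also swallowed the documentation tree under `docs/en/datasets`, while a leading slash anchors the pattern to the repository root. A minimal sketch of the behavior being relied on (comments are editorial, not part of the commit):

```
# old rule:  datasets/   -> ignores datasets/ anywhere, including docs/en/datasets/
# new rule:  /datasets   -> ignores only <repo-root>/datasets; docs/en/datasets stays tracked
/datasets
```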

docs/en/datasets/classify/caltech101.md (11 changed lines)

@@ -28,7 +28,7 @@ The Caltech-101 dataset is extensively used for training and evaluating deep lea
 To train a YOLO model on the Caltech-101 dataset for 100 epochs, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
@@ -61,7 +61,7 @@ The example showcases the variety and complexity of the objects in the Caltech-1
 If you use the Caltech-101 dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"
@@ -90,7 +90,7 @@ The [Caltech-101](https://data.caltech.edu/records/mzrjq-6wc02) dataset is widel
 To train an Ultralytics YOLO model on the Caltech-101 dataset, you can use the provided code snippets. For example, to train for 100 epochs:
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
@@ -110,11 +110,13 @@ To train an Ultralytics YOLO model on the Caltech-101 dataset, you can use the p
 # Start training from a pretrained *.pt model
 yolo classify train data=caltech101 model=yolov8n-cls.pt epochs=100 imgsz=416
 ```
 For more detailed arguments and options, refer to the model [Training](../../modes/train.md) page.
 ### What are the key features of the Caltech-101 dataset?
 The Caltech-101 dataset includes:
 - Around 9,000 color images across 101 categories.
 - Categories covering a diverse range of objects, including animals, vehicles, and household items.
 - Variable number of images per category, typically between 40 and 800.
@@ -126,7 +128,7 @@ These features make it an excellent choice for training and evaluating object re
 Citing the Caltech-101 dataset in your research acknowledges the creators' contributions and provides a reference for others who might use the dataset. The recommended citation is:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"
@@ -142,6 +144,7 @@ Citing the Caltech-101 dataset in your research acknowledges the creators' contr
 publisher={Elsevier}
 }
 ```
 Citing helps in maintaining the integrity of academic work and assists peers in locating the original resource.
 ### Can I use Ultralytics HUB for training models on the Caltech-101 dataset?

docs/en/datasets/classify/caltech256.md (6 changed lines)

@@ -39,7 +39,7 @@ The Caltech-256 dataset is extensively used for training and evaluating deep lea
 To train a YOLO model on the Caltech-256 dataset for 100 epochs, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
@@ -72,7 +72,7 @@ The example showcases the diversity and complexity of the objects in the Caltech
 If you use the Caltech-256 dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"
@@ -98,7 +98,7 @@ The [Caltech-256](https://data.caltech.edu/records/nyy15-4j048) dataset is a lar
 To train a YOLO model on the Caltech-256 dataset for 100 epochs, you can use the following code snippets. Refer to the model [Training](../../modes/train.md) page for additional options.
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"

docs/en/datasets/classify/cifar10.md (8 changed lines)

@@ -42,7 +42,7 @@ The CIFAR-10 dataset is widely used for training and evaluating deep learning mo
 To train a YOLO model on the CIFAR-10 dataset for 100 epochs with an image size of 32x32, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
@@ -75,7 +75,7 @@ The example showcases the variety and complexity of the objects in the CIFAR-10
 If you use the CIFAR-10 dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"
@@ -96,7 +96,7 @@ We would like to acknowledge Alex Krizhevsky for creating and maintaining the CI
 To train a YOLO model on the CIFAR-10 dataset using Ultralytics, you can follow the examples provided for both Python and CLI. Here is a basic example to train your model for 100 epochs with an image size of 32x32 pixels:
-!!! Example
+!!! example
 === "Python"
@@ -153,7 +153,7 @@ Each subset comprises images categorized into 10 classes, with their annotations
 If you use the CIFAR-10 dataset in your research or development projects, make sure to cite the following paper:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"

docs/en/datasets/classify/cifar100.md (10 changed lines)

@@ -31,7 +31,7 @@ The CIFAR-100 dataset is extensively used for training and evaluating deep learn
 To train a YOLO model on the CIFAR-100 dataset for 100 epochs with an image size of 32x32, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
@@ -64,7 +64,7 @@ The example showcases the variety and complexity of the objects in the CIFAR-100
 If you use the CIFAR-100 dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"
@@ -89,10 +89,10 @@ The [CIFAR-100 dataset](https://www.cs.toronto.edu/~kriz/cifar.html) is a large
 You can train a YOLO model on the CIFAR-100 dataset using either Python or CLI commands. Here's how:
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
 ```python
 from ultralytics import YOLO
@@ -104,7 +104,7 @@ You can train a YOLO model on the CIFAR-100 dataset using either Python or CLI c
 ```
 === "CLI"
 ```bash
 # Start training from a pretrained *.pt model
 yolo classify train data=cifar100 model=yolov8n-cls.pt epochs=100 imgsz=32

docs/en/datasets/classify/fashion-mnist.md (10 changed lines)

@@ -56,7 +56,7 @@ The Fashion-MNIST dataset is widely used for training and evaluating deep learni
 To train a CNN model on the Fashion-MNIST dataset for 100 epochs with an image size of 28x28, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
@@ -99,10 +99,10 @@ The [Fashion-MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset is
 To train an Ultralytics YOLO model on the Fashion-MNIST dataset, you can use both Python and CLI commands. Here's a quick example to get you started:
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
 ```python
 from ultralytics import YOLO
@@ -112,10 +112,10 @@ To train an Ultralytics YOLO model on the Fashion-MNIST dataset, you can use bot
 # Train the model on Fashion-MNIST
 results = model.train(data="fashion-mnist", epochs=100, imgsz=28)
 ```
 === "CLI"
 ```bash
 yolo classify train data=fashion-mnist model=yolov8n-cls.pt epochs=100 imgsz=28
 ```

docs/en/datasets/classify/imagenet.md (12 changed lines)

@@ -11,7 +11,7 @@ keywords: ImageNet, deep learning, visual recognition, computer vision, pretrain
 ## ImageNet Pretrained Models
 | Model | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) at 640 |
-|----------------------------------------------------------------------------------------------|-----------------------|------------------|------------------|--------------------------------|-------------------------------------|--------------------|--------------------------|
+| -------------------------------------------------------------------------------------------- | --------------------- | ---------------- | ---------------- | ------------------------------ | ----------------------------------- | ------------------ | ------------------------ |
 | [YOLOv8n-cls](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-cls.pt) | 224 | 69.0 | 88.3 | 12.9 | 0.31 | 2.7 | 4.3 |
 | [YOLOv8s-cls](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-cls.pt) | 224 | 73.8 | 91.7 | 23.4 | 0.35 | 6.4 | 13.5 |
 | [YOLOv8m-cls](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-cls.pt) | 224 | 76.8 | 93.5 | 85.4 | 0.62 | 17.0 | 42.7 |
@@ -41,7 +41,7 @@ The ImageNet dataset is widely used for training and evaluating deep learning mo
 To train a deep learning model on the ImageNet dataset for 100 epochs with an image size of 224x224, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
@@ -74,7 +74,7 @@ The example showcases the variety and complexity of the images in the ImageNet d
 If you use the ImageNet dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"
@@ -102,10 +102,10 @@ The [ImageNet dataset](https://www.image-net.org/) is a large-scale database con
 To use a pretrained Ultralytics YOLO model for image classification on the ImageNet dataset, follow these steps:
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
 ```python
 from ultralytics import YOLO
@@ -117,7 +117,7 @@ To use a pretrained Ultralytics YOLO model for image classification on the Image
 ```
 === "CLI"
 ```bash
 # Start training from a pretrained *.pt model
 yolo classify train data=imagenet model=yolov8n-cls.pt epochs=100 imgsz=224

docs/en/datasets/classify/imagenet10.md (6 changed lines)

@@ -27,7 +27,7 @@ The ImageNet10 dataset is useful for quickly testing and debugging computer visi
 To test a deep learning model on the ImageNet10 dataset with an image size of 224x224, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Test Example"
+!!! example "Test Example"
 === "Python"
@@ -58,7 +58,7 @@ The ImageNet10 dataset contains a subset of images from the original ImageNet da
 If you use the ImageNet10 dataset in your research or development work, please cite the original ImageNet paper:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"
@@ -86,7 +86,7 @@ The [ImageNet10](https://github.com/ultralytics/assets/releases/download/v0.0.0/
 To test your deep learning model on the ImageNet10 dataset with an image size of 224x224, use the following code snippets.
-!!! Example "Test Example"
+!!! example "Test Example"
 === "Python"

docs/en/datasets/classify/imagenette.md (16 changed lines)

@@ -29,7 +29,7 @@ The ImageNette dataset is widely used for training and evaluating deep learning
 To train a model on the ImageNette dataset for 100 epochs with a standard image size of 224x224, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
@@ -64,7 +64,7 @@ For faster prototyping and training, the ImageNette dataset is also available in
 To use these datasets, simply replace 'imagenette' with 'imagenette160' or 'imagenette320' in the training command. The following code snippets illustrate this:
-!!! Example "Train Example with ImageNette160"
+!!! example "Train Example with ImageNette160"
 === "Python"
@@ -85,7 +85,7 @@ To use these datasets, simply replace 'imagenette' with 'imagenette160' or 'imag
 yolo classify train data=imagenette160 model=yolov8n-cls.pt epochs=100 imgsz=160
 ```
-!!! Example "Train Example with ImageNette320"
+!!! example "Train Example with ImageNette320"
 === "Python"
@@ -122,7 +122,7 @@ The [ImageNette dataset](https://github.com/fastai/imagenette) is a simplified s
 To train a YOLO model on the ImageNette dataset for 100 epochs, you can use the following commands. Make sure to have the Ultralytics YOLO environment set up.
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
@@ -152,14 +152,14 @@ The ImageNette dataset is advantageous for several reasons:
 - **Quick and Simple**: It contains only 10 classes, making it less complex and time-consuming compared to larger datasets.
 - **Educational Use**: Ideal for learning and teaching the basics of image classification since it requires less computational power and time.
 - **Versatility**: Widely used to train and benchmark various machine learning models, especially in image classification.
 For more details on model training and dataset management, explore the [Dataset Structure](#dataset-structure) section.
 ### Can the ImageNette dataset be used with different image sizes?
-Yes, the ImageNette dataset is also available in two resized versions: ImageNette160 and ImageNette320. These versions help in faster prototyping and are especially useful when computational resources are limited.
+Yes, the ImageNette dataset is also available in two resized versions: ImageNette160 and ImageNette320. These versions help in faster prototyping and are especially useful when computational resources are limited.
-!!! Example "Train Example with ImageNette160"
+!!! example "Train Example with ImageNette160"
 === "Python"
@@ -174,7 +174,7 @@ Yes, the ImageNette dataset is also available in two resized versions: ImageNett
 ```
 === "CLI"
 ```bash
 # Start training from a pretrained *.pt model with ImageNette160
 yolo detect train data=imagenette160 model=yolov8n-cls.pt epochs=100 imgsz=160

docs/en/datasets/classify/imagewoof.md (12 changed lines)

@@ -26,7 +26,7 @@ The ImageWoof dataset is widely used for training and evaluating deep learning m
 To train a CNN model on the ImageWoof dataset for 100 epochs with an image size of 224x224, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
@@ -59,7 +59,7 @@ ImageWoof dataset comes in three different sizes to accommodate various research
 To use these variants in your training, simply replace 'imagewoof' in the dataset argument with 'imagewoof320' or 'imagewoof160'. For example:
-!!! Example "Example"
+!!! example "Example"
 === "Python"
@@ -109,20 +109,20 @@ The [ImageWoof](https://github.com/fastai/imagenette) dataset is a challenging s
 To train a Convolutional Neural Network (CNN) model on the ImageWoof dataset using Ultralytics YOLO for 100 epochs at an image size of 224x224, you can use the following code:
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
 ```python
 from ultralytics import YOLO
 model = YOLO("yolov8n-cls.pt") # Load a pretrained model
 results = model.train(data="imagewoof", epochs=100, imgsz=224)
 ```
 === "CLI"
 ```bash
 yolo classify train data=imagewoof model=yolov8n-cls.pt epochs=100 imgsz=224
 ```

docs/en/datasets/classify/index.md (10 changed lines)

@@ -78,7 +78,7 @@ This structured approach ensures that the model can effectively learn from well-
 ## Usage
-!!! Example
+!!! example
 === "Python"
@@ -194,10 +194,10 @@ For additional insights and real-world applications, you can explore [Ultralytic
 Training a model using Ultralytics YOLO can be done easily in both Python and CLI. Here's an example:
-!!! Example
+!!! example
 === "Python"
 ```python
 from ultralytics import YOLO
@@ -207,10 +207,10 @@ Training a model using Ultralytics YOLO can be done easily in both Python and CL
 # Train the model
 results = model.train(data="path/to/dataset", epochs=100, imgsz=640)
 ```
 === "CLI"
 ```bash
 # Start training from a pretrained *.pt model
 yolo detect train data=path/to/data model=yolov8n-cls.pt epochs=100 imgsz=640

docs/en/datasets/classify/mnist.md (10 changed lines)

@@ -34,7 +34,7 @@ The MNIST dataset is widely used for training and evaluating deep learning model
 To train a CNN model on the MNIST dataset for 100 epochs with an image size of 32x32, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
@@ -69,7 +69,7 @@ If you use the MNIST dataset in your
 research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"
@@ -95,10 +95,10 @@ The [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, or Modified National Ins
 To train a model on the MNIST dataset using Ultralytics YOLO, you can follow these steps:
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
 ```python
 from ultralytics import YOLO
@@ -110,7 +110,7 @@ To train a model on the MNIST dataset using Ultralytics YOLO, you can follow the
 ```
 === "CLI"
 ```bash
 # Start training from a pretrained *.pt model
 yolo classify train data=mnist model=yolov8n-cls.pt epochs=100 imgsz=28

docs/en/datasets/detect/african-wildlife.md (12 changed lines)

@@ -35,7 +35,7 @@ This dataset can be applied in various computer vision tasks such as object dete
 A YAML (Yet Another Markup Language) file defines the dataset configuration, including paths, classes, and other pertinent details. For the African wildlife dataset, the `african-wildlife.yaml` file is located at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/african-wildlife.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/african-wildlife.yaml).
-!!! Example "ultralytics/cfg/datasets/african-wildlife.yaml"
+!!! example "ultralytics/cfg/datasets/african-wildlife.yaml"
 ```yaml
 --8<-- "ultralytics/cfg/datasets/african-wildlife.yaml"
@@ -45,7 +45,7 @@ A YAML (Yet Another Markup Language) file defines the dataset configuration, inc
 To train a YOLOv8n model on the African wildlife dataset for 100 epochs with an image size of 640, use the provided code samples. For a comprehensive list of available parameters, refer to the model's [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
@@ -66,7 +66,7 @@ To train a YOLOv8n model on the African wildlife dataset for 100 epochs with an
 yolo detect train data=african-wildlife.yaml model=yolov8n.pt epochs=100 imgsz=640
 ```
-!!! Example "Inference Example"
+!!! example "Inference Example"
 === "Python"
@@ -111,10 +111,10 @@ The African Wildlife Dataset includes images of four common animal species found
 You can train a YOLOv8 model on the African Wildlife Dataset by using the `african-wildlife.yaml` configuration file. Below is an example of how to train the YOLOv8n model for 100 epochs with an image size of 640:
-!!! Example
+!!! example
 === "Python"
 ```python
 from ultralytics import YOLO
@@ -126,7 +126,7 @@ You can train a YOLOv8 model on the African Wildlife Dataset by using the `afric
 ```
 === "CLI"
 ```bash
 # Start training from a pretrained *.pt model
 yolo detect train data=african-wildlife.yaml model=yolov8n.pt epochs=100 imgsz=640

docs/en/datasets/detect/argoverse.md (16 changed lines)

@@ -8,7 +8,7 @@ keywords: Argoverse dataset, autonomous driving, 3D tracking, motion forecasting
 The [Argoverse](https://www.argoverse.org/) dataset is a collection of data designed to support research in autonomous driving tasks, such as 3D tracking, motion forecasting, and stereo depth estimation. Developed by Argo AI, the dataset provides a wide range of high-quality sensor data, including high-resolution images, LiDAR point clouds, and map data.
-!!! Note
+!!! note
 The Argoverse dataset `*.zip` file required for training was removed from Amazon S3 after the shutdown of Argo AI by Ford, but we have made it available for manual download on [Google Drive](https://drive.google.com/file/d/1st9qW3BeIwQsnR0t8mRpvbsSWIo16ACi/view?usp=drive_link).
@@ -35,7 +35,7 @@ The Argoverse dataset is widely used for training and evaluating deep learning m
 A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. For the case of the Argoverse dataset, the `Argoverse.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/Argoverse.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/Argoverse.yaml).
-!!! Example "ultralytics/cfg/datasets/Argoverse.yaml"
+!!! example "ultralytics/cfg/datasets/Argoverse.yaml"
 ```yaml
 --8<-- "ultralytics/cfg/datasets/Argoverse.yaml"
@@ -45,7 +45,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
 To train a YOLOv8n model on the Argoverse dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
@@ -80,7 +80,7 @@ The example showcases the variety and complexity of the data in the Argoverse da
 If you use the Argoverse dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"
@@ -106,10 +106,10 @@ The [Argoverse](https://www.argoverse.org/) dataset, developed by Argo AI, suppo
 To train a YOLOv8 model with the Argoverse dataset, use the provided YAML configuration file and the following code:
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
 ```python
 from ultralytics import YOLO
@@ -119,10 +119,10 @@ To train a YOLOv8 model with the Argoverse dataset, use the provided YAML config
 # Train the model
 results = model.train(data="Argoverse.yaml", epochs=100, imgsz=640)
 ```
 === "CLI"
 ```bash
 # Start training from a pretrained *.pt model
 yolo detect train data=Argoverse.yaml model=yolov8n.pt epochs=100 imgsz=640

docs/en/datasets/detect/brain-tumor.md (18 changed lines)

@@ -34,7 +34,7 @@ The application of brain tumor detection using computer vision enables early dia
 A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the brain tumor dataset, the `brain-tumor.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/brain-tumor.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/brain-tumor.yaml).
-!!! Example "ultralytics/cfg/datasets/brain-tumor.yaml"
+!!! example "ultralytics/cfg/datasets/brain-tumor.yaml"
 ```yaml
 --8<-- "ultralytics/cfg/datasets/brain-tumor.yaml"
@@ -44,7 +44,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
 To train a YOLOv8n model on the brain tumor dataset for 100 epochs with an image size of 640, utilize the provided code snippets. For a detailed list of available arguments, consult the model's [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
@@ -65,7 +65,7 @@ To train a YOLOv8n model on the brain tumor dataset for 100 epochs with an image
 yolo detect train data=brain-tumor.yaml model=yolov8n.pt epochs=100 imgsz=640
 ```
-!!! Example "Inference Example"
+!!! example "Inference Example"
 === "Python"
@@ -110,10 +110,10 @@ The brain tumor dataset is divided into two subsets: the **training set** consis
 You can train a YOLOv8 model on the brain tumor dataset for 100 epochs with an image size of 640px using both Python and CLI methods. Below are the examples for both:
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
 ```python
 from ultralytics import YOLO
@@ -123,10 +123,10 @@ You can train a YOLOv8 model on the brain tumor dataset for 100 epochs with an i
 # Train the model
 results = model.train(data="brain-tumor.yaml", epochs=100, imgsz=640)
 ```
 === "CLI"
 ```bash
 # Start training from a pretrained *.pt model
 yolo detect train data=brain-tumor.yaml model=yolov8n.pt epochs=100 imgsz=640
@@ -142,7 +142,7 @@ Using the brain tumor dataset in AI projects enables early diagnosis and treatme
 Inference using a fine-tuned YOLOv8 model can be performed with either Python or CLI approaches. Here are the examples:
-!!! Example "Inference Example"
+!!! example "Inference Example"
 === "Python"
@@ -157,7 +157,7 @@ Inference using a fine-tuned YOLOv8 model can be performed with either Python or
 ```
 === "CLI"
 ```bash
 # Start prediction with a finetuned *.pt model
 yolo detect predict model='path/to/best.pt' imgsz=640 source="https://ultralytics.com/assets/brain-tumor-sample.jpg"

docs/en/datasets/detect/coco.md (14 changed lines)

@@ -22,7 +22,7 @@ The [COCO](https://cocodataset.org/#home) (Common Objects in Context) dataset is
 ## COCO Pretrained Models
 | Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
-|--------------------------------------------------------------------------------------|-----------------------|----------------------|--------------------------------|-------------------------------------|--------------------|-------------------|
+| ------------------------------------------------------------------------------------ | --------------------- | -------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
 | [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n.pt) | 640 | 37.3 | 80.4 | 0.99 | 3.2 | 8.7 |
 | [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s.pt) | 640 | 44.9 | 128.4 | 1.20 | 11.2 | 28.6 |
 | [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m.pt) | 640 | 50.2 | 234.7 | 1.83 | 25.9 | 78.9 |
@@ -52,7 +52,7 @@ The COCO dataset is widely used for training and evaluating deep learning models
 A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO dataset, the `coco.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml).
-!!! Example "ultralytics/cfg/datasets/coco.yaml"
+!!! example "ultralytics/cfg/datasets/coco.yaml"
 ```yaml
 --8<-- "ultralytics/cfg/datasets/coco.yaml"
@@ -62,7 +62,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
 To train a YOLOv8n model on the COCO dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
@@ -97,7 +97,7 @@ The example showcases the variety and complexity of the images in the COCO datas
 If you use the COCO dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"
@@ -124,10 +124,10 @@ The [COCO dataset](https://cocodataset.org/#home) (Common Objects in Context) is
 To train a YOLOv8 model using the COCO dataset, you can use the following code snippets:
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
 ```python
 from ultralytics import YOLO
@@ -139,7 +139,7 @@ To train a YOLOv8 model using the COCO dataset, you can use the following code s
 ```
 === "CLI"
 ```bash
 # Start training from a pretrained *.pt model
 yolo detect train data=coco.yaml model=yolov8n.pt epochs=100 imgsz=640

docs/en/datasets/detect/coco8.md (10 changed lines)

@@ -27,7 +27,7 @@ This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.
 A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO8 dataset, the `coco8.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8.yaml).
-!!! Example "ultralytics/cfg/datasets/coco8.yaml"
+!!! example "ultralytics/cfg/datasets/coco8.yaml"
 ```yaml
 --8<-- "ultralytics/cfg/datasets/coco8.yaml"
@@ -37,7 +37,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
 To train a YOLOv8n model on the COCO8 dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
@@ -72,7 +72,7 @@ The example showcases the variety and complexity of the images in the COCO8 data
 If you use the COCO dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"
@@ -99,10 +99,10 @@ The Ultralytics COCO8 dataset is a compact yet versatile object detection datase
 To train a YOLOv8 model using the COCO8 dataset, you can employ either Python or CLI commands. Here's how you can start:
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
 ```python
 from ultralytics import YOLO

docs/en/datasets/detect/globalwheat2020.md (10 changed lines)

@@ -30,7 +30,7 @@ The Global Wheat Head Dataset is widely used for training and evaluating deep le
 A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. For the case of the Global Wheat Head Dataset, the `GlobalWheat2020.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/GlobalWheat2020.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/GlobalWheat2020.yaml).
-!!! Example "ultralytics/cfg/datasets/GlobalWheat2020.yaml"
+!!! example "ultralytics/cfg/datasets/GlobalWheat2020.yaml"
 ```yaml
 --8<-- "ultralytics/cfg/datasets/GlobalWheat2020.yaml"
@@ -40,7 +40,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
 To train a YOLOv8n model on the Global Wheat Head Dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
@@ -75,7 +75,7 @@ The example showcases the variety and complexity of the data in the Global Wheat
 If you use the Global Wheat Head Dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"
@@ -100,10 +100,10 @@ The Global Wheat Head Dataset is primarily used for developing and training deep
 To train a YOLOv8n model on the Global Wheat Head Dataset, you can use the following code snippets. Make sure you have the `GlobalWheat2020.yaml` configuration file specifying dataset paths and classes:
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
 ```python
 from ultralytics import YOLO

docs/en/datasets/detect/index.md (48 changed lines)

@@ -16,20 +16,20 @@ The Ultralytics YOLO format is a dataset configuration format that allows you to
 ```yaml
 # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
-path: ../datasets/coco8 # dataset root dir
-train: images/train # train images (relative to 'path') 4 images
-val: images/val # val images (relative to 'path') 4 images
-test: # test images (optional)
+path: ../datasets/coco8 # dataset root dir
+train: images/train # train images (relative to 'path') 4 images
+val: images/val # val images (relative to 'path') 4 images
+test: # test images (optional)
 # Classes (80 COCO classes)
 names:
-0: person
-1: bicycle
-2: car
-# ...
-77: teddy bear
-78: hair drier
-79: toothbrush
+0: person
+1: bicycle
+2: car
+# ...
+77: teddy bear
+78: hair drier
+79: toothbrush
 ```
 Labels for this format should be exported to YOLO format with one `*.txt` file per image. If there are no objects in an image, no `*.txt` file is required. The `*.txt` file should be formatted with one row per object in `class x_center y_center width height` format. Box coordinates must be in **normalized xywh** format (from 0 to 1). If your boxes are in pixels, you should divide `x_center` and `width` by image width, and `y_center` and `height` by image height. Class numbers should be zero-indexed (start with 0).
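As a concrete illustration of that label format, a `*.txt` file for an image containing one person and two cars might hold rows like the following (values are hypothetical and normalized to the image width and height):

```
0 0.481719 0.634028 0.690625 0.713278
2 0.736400 0.247188 0.498125 0.476417
2 0.339843 0.418750 0.266406 0.237500
```

Each row is `class x_center y_center width height`, with class `0` = person and `2` = car under the 80-class COCO names above.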
@@ -48,7 +48,7 @@ When using the Ultralytics YOLO format, organize your training and validation im
 Here's how you can use these formats to train your model:
-!!! Example
+!!! example
 === "Python"
@@ -100,7 +100,7 @@ If you have your own dataset and would like to use it for training detection mod
 You can easily convert labels from the popular COCO dataset format to the YOLO format using the following code snippet:
-!!! Example
+!!! example
 === "Python"
@@ -121,15 +121,15 @@ Remember to double-check if the dataset you want to use is compatible with your
 The Ultralytics YOLO format is a structured configuration for defining datasets in your training projects. It involves setting paths to your training, validation, and testing images and corresponding labels. For example:
 ```yaml
-path: ../datasets/coco8 # dataset root directory
-train: images/train # training images (relative to 'path')
-val: images/val # validation images (relative to 'path')
-test: # optional test images
+path: ../datasets/coco8 # dataset root directory
+train: images/train # training images (relative to 'path')
+val: images/val # validation images (relative to 'path')
+test: # optional test images
 names:
-0: person
-1: bicycle
-2: car
-# ...
+0: person
+1: bicycle
+2: car
+# ...
 ```
 Labels are saved in `*.txt` files with one file per image, formatted as `class x_center y_center width height` with normalized coordinates. For a detailed guide, see the [COCO8 dataset example](coco8.md).
@@ -164,10 +164,10 @@ Each dataset page provides detailed information on the structure and usage tailo
 To start training a YOLOv8 model, ensure your dataset is formatted correctly and the paths are defined in a YAML file. Use the following script to begin training:
-!!! Example
+!!! example
 === "Python"
 ```python
 from ultralytics import YOLO
@@ -176,7 +176,7 @@ To start training a YOLOv8 model, ensure your dataset is formatted correctly and
 ```
 === "CLI"
 ```bash
 yolo detect train data=path/to/your_dataset.yaml model=yolov8n.pt epochs=100 imgsz=640
 ```

docs/en/datasets/detect/lvis.md (14 changed lines)

@@ -48,7 +48,7 @@ The LVIS dataset is widely used for training and evaluating deep learning models
 A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the LVIS dataset, the `lvis.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/lvis.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/lvis.yaml).
-!!! Example "ultralytics/cfg/datasets/lvis.yaml"
+!!! example "ultralytics/cfg/datasets/lvis.yaml"
 ```yaml
 --8<-- "ultralytics/cfg/datasets/lvis.yaml"
@@ -58,7 +58,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
 To train a YOLOv8n model on the LVIS dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
@@ -93,7 +93,7 @@ The example showcases the variety and complexity of the images in the LVIS datas
 If you use the LVIS dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"
@@ -118,10 +118,10 @@ The [LVIS dataset](https://www.lvisdataset.org/) is a large-scale dataset with f
 To train a YOLOv8n model on the LVIS dataset for 100 epochs with an image size of 640, follow the example below. This process utilizes Ultralytics' framework, which offers comprehensive training features.
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
 ```python
 from ultralytics import YOLO
@@ -131,10 +131,10 @@ To train a YOLOv8n model on the LVIS dataset for 100 epochs with an image size o
 # Train the model
 results = model.train(data="lvis.yaml", epochs=100, imgsz=640)
 ```
 === "CLI"
 ```bash
 # Start training from a pretrained *.pt model
 yolo detect train data=lvis.yaml model=yolov8n.pt epochs=100 imgsz=640

docs/en/datasets/detect/objects365.md (9 changed lines)

@@ -30,7 +30,7 @@ The Objects365 dataset is widely used for training and evaluating deep learning
 A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. For the case of the Objects365 Dataset, the `Objects365.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/Objects365.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/Objects365.yaml).
-!!! Example "ultralytics/cfg/datasets/Objects365.yaml"
+!!! example "ultralytics/cfg/datasets/Objects365.yaml"
 ```yaml
 --8<-- "ultralytics/cfg/datasets/Objects365.yaml"
@@ -40,7 +40,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
 To train a YOLOv8n model on the Objects365 dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
@@ -75,7 +75,7 @@ The example showcases the variety and complexity of the data in the Objects365 d
 If you use the Objects365 dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"
@@ -101,7 +101,7 @@ The [Objects365 dataset](https://www.objects365.org/) is designed for object det
 To train a YOLOv8n model using the Objects365 dataset for 100 epochs with an image size of 640, follow these instructions:
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
@@ -127,6 +127,7 @@ Refer to the [Training](../../modes/train.md) page for a comprehensive list of a
 ### Why should I use the Objects365 dataset for my object detection projects?
 The Objects365 dataset offers several advantages for object detection tasks:
 1. **Diversity**: It includes 2 million images with objects in diverse scenarios, covering 365 categories.
 2. **High-quality Annotations**: Over 30 million bounding boxes provide comprehensive ground truth data.
 3. **Performance**: Models pre-trained on Objects365 significantly outperform those trained on datasets like ImageNet, leading to better generalization.

docs/en/datasets/detect/open-images-v7.md (35 changed lines)

@@ -22,7 +22,7 @@ keywords: Open Images V7, Google dataset, computer vision, YOLOv8 models, object
 ## Open Images V7 Pretrained Models
 | Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
-|-------------------------------------------------------------------------------------------|-----------------------|----------------------|--------------------------------|-------------------------------------|--------------------|-------------------|
+| ----------------------------------------------------------------------------------------- | --------------------- | -------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
 | [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-oiv7.pt) | 640 | 18.4 | 142.4 | 1.21 | 3.5 | 10.5 |
 | [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-oiv7.pt) | 640 | 27.7 | 183.1 | 1.40 | 11.4 | 29.7 |
 | [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-oiv7.pt) | 640 | 33.6 | 408.5 | 2.26 | 26.2 | 80.6 |
@@ -61,7 +61,7 @@ Open Images V7 is a cornerstone for training and evaluating state-of-the-art mod
 Typically, datasets come with a YAML (Yet Another Markup Language) file that delineates the dataset's configuration. For the case of Open Images V7, a hypothetical `OpenImagesV7.yaml` might exist. For accurate paths and configurations, one should refer to the dataset's official repository or documentation.
-!!! Example "OpenImagesV7.yaml"
+!!! example "OpenImagesV7.yaml"
 ```yaml
 --8<-- "ultralytics/cfg/datasets/open-images-v7.yaml"
@@ -71,7 +71,7 @@ Typically, datasets come with a YAML (Yet Another Markup Language) file that del
 To train a YOLOv8n model on the Open Images V7 dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Warning
+!!! warning
 The complete Open Images V7 dataset comprises 1,743,042 training images and 41,620 validation images, requiring approximately **561 GB of storage space** upon download.
@@ -80,7 +80,7 @@ To train a YOLOv8n model on the Open Images V7 dataset for 100 epochs with an im
 - Verify that your device has enough storage capacity.
 - Ensure a robust and speedy internet connection.
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
@@ -115,7 +115,7 @@ Researchers can gain invaluable insights into the array of computer vision chall
 For those employing Open Images V7 in their work, it's prudent to cite the relevant papers and acknowledge the creators:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"
@@ -140,11 +140,10 @@ Open Images V7 is an extensive and versatile dataset created by Google, designed
 To train a YOLOv8 model on the Open Images V7 dataset, you can use both Python and CLI commands. Here's an example of training the YOLOv8n model for 100 epochs with an image size of 640:
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
 ```python
 from ultralytics import YOLO
@@ -154,10 +153,10 @@ To train a YOLOv8 model on the Open Images V7 dataset, you can use both Python a
 # Train the model on the Open Images V7 dataset
 results = model.train(data="open-images-v7.yaml", epochs=100, imgsz=640)
 ```
 === "CLI"
 ```bash
 # Train a COCO-pretrained YOLOv8n model on the Open Images V7 dataset
 yolo detect train data=open-images-v7.yaml model=yolov8n.pt epochs=100 imgsz=640
@@ -168,6 +167,7 @@ For more details on arguments and settings, refer to the [Training](../../modes/
 ### What are some key features of the Open Images V7 dataset?
 The Open Images V7 dataset includes approximately 9 million images with various annotations:
 - **Bounding Boxes**: 16 million bounding boxes across 600 object classes.
 - **Segmentation Masks**: Masks for 2.8 million objects across 350 classes.
 - **Visual Relationships**: 3.3 million annotations indicating relationships, properties, and actions.
@@ -179,17 +179,18 @@ The Open Images V7 dataset includes approximately 9 million images with various
 Ultralytics provides several YOLOv8 pretrained models for the Open Images V7 dataset, each with different sizes and performance metrics:
-| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
-|-------|-----------------------|----------------------|--------------------------------|-------------------------------------|--------------------|-------------------|
-| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-oiv7.pt) | 640 | 18.4 | 142.4 | 1.21 | 3.5 | 10.5 |
-| [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-oiv7.pt) | 640 | 27.7 | 183.1 | 1.40 | 11.4 | 29.7 |
-| [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-oiv7.pt) | 640 | 33.6 | 408.5 | 2.26 | 26.2 | 80.6 |
-| [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l-oiv7.pt) | 640 | 34.9 | 596.9 | 2.43 | 44.1 | 167.4 |
-| [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-oiv7.pt) | 640 | 36.3 | 860.6 | 3.56 | 68.7 | 260.6 |
+| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
+| ----------------------------------------------------------------------------------------- | --------------------- | -------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
+| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-oiv7.pt) | 640 | 18.4 | 142.4 | 1.21 | 3.5 | 10.5 |
+| [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-oiv7.pt) | 640 | 27.7 | 183.1 | 1.40 | 11.4 | 29.7 |
+| [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-oiv7.pt) | 640 | 33.6 | 408.5 | 2.26 | 26.2 | 80.6 |
+| [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l-oiv7.pt) | 640 | 34.9 | 596.9 | 2.43 | 44.1 | 167.4 |
+| [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-oiv7.pt) | 640 | 36.3 | 860.6 | 3.56 | 68.7 | 260.6 |
 ### What applications can the Open Images V7 dataset be used for?
 The Open Images V7 dataset supports a variety of computer vision tasks including:
 - **Image Classification**
 - **Object Detection**
 - **Instance Segmentation**

docs/en/datasets/detect/roboflow-100.md (12 changed lines)

@@ -37,11 +37,11 @@ This structure enables a diverse and extensive testing ground for object detecti
 Dataset benchmarking evaluates machine learning model performance on specific datasets using standardized metrics like accuracy, mean average precision and F1-score.
-!!! Tip "Benchmarking"
+!!! tip "Benchmarking"
 Benchmarking results will be stored in "ultralytics-benchmarks/evaluation.txt"
-!!! Example "Benchmarking example"
+!!! example "Benchmarking example"
 === "Python"
@@ -113,7 +113,7 @@ The diversity in the Roboflow 100 benchmark that can be seen above is a signific
 If you use the Roboflow 100 dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"
@@ -139,10 +139,10 @@ The **Roboflow 100** dataset, developed by [Roboflow](https://roboflow.com/?ref=
 To use the Roboflow 100 dataset for benchmarking, you can implement the RF100Benchmark class from the Ultralytics library. Here's a brief example:
-!!! Example "Benchmarking example"
+!!! example "Benchmarking example"
 === "Python"
 ```python
 import os
 import shutil
@@ -203,7 +203,7 @@ The **Roboflow 100** dataset is accessible on [GitHub](https://github.com/robofl
 When using the Roboflow 100 dataset in your research, ensure to properly cite it. Here is the recommended citation:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"

docs/en/datasets/detect/signature.md (10 changed lines)

@@ -23,7 +23,7 @@ This dataset can be applied in various computer vision tasks such as object dete
 A YAML (Yet Another Markup Language) file defines the dataset configuration, including paths and classes information. For the signature detection dataset, the `signature.yaml` file is located at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/signature.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/signature.yaml).
-!!! Example "ultralytics/cfg/datasets/signature.yaml"
+!!! example "ultralytics/cfg/datasets/signature.yaml"
 ```yaml
 --8<-- "ultralytics/cfg/datasets/signature.yaml"
@@ -33,7 +33,7 @@ A YAML (Yet Another Markup Language) file defines the dataset configuration, inc
 To train a YOLOv8n model on the signature detection dataset for 100 epochs with an image size of 640, use the provided code samples. For a comprehensive list of available parameters, refer to the model's [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
@@ -54,7 +54,7 @@ To train a YOLOv8n model on the signature detection dataset for 100 epochs with
 yolo detect train data=signature.yaml model=yolov8n.pt epochs=100 imgsz=640
 ```
-!!! Example "Inference Example"
+!!! example "Inference Example"
 === "Python"
@@ -102,7 +102,7 @@ To train a YOLOv8n model on the Signature Detection Dataset, follow these steps:
 1. Download the `signature.yaml` dataset configuration file from [signature.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/signature.yaml).
 2. Use the following Python script or CLI command to start training:
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
@@ -140,7 +140,7 @@ To perform inference using a model trained on the Signature Detection Dataset, f
 1. Load your fine-tuned model.
 2. Use the below Python script or CLI command to perform inference:
-!!! Example "Inference Example"
+!!! example "Inference Example"
 === "Python"

docs/en/datasets/detect/sku-110k.md (16 changed lines)

@@ -43,7 +43,7 @@ The SKU-110k dataset is widely used for training and evaluating deep learning mo
 A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. For the case of the SKU-110K dataset, the `SKU-110K.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/SKU-110K.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/SKU-110K.yaml).
-!!! Example "ultralytics/cfg/datasets/SKU-110K.yaml"
+!!! example "ultralytics/cfg/datasets/SKU-110K.yaml"
 ```yaml
 --8<-- "ultralytics/cfg/datasets/SKU-110K.yaml"
@@ -53,7 +53,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
 To train a YOLOv8n model on the SKU-110K dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
@@ -88,7 +88,7 @@ The example showcases the variety and complexity of the data in the SKU-110k dat
 If you use the SKU-110k dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"
@@ -113,10 +113,10 @@ The SKU-110k dataset consists of densely packed retail shelf images designed to
 Training a YOLOv8 model on the SKU-110k dataset is straightforward. Here's an example to train a YOLOv8n model for 100 epochs with an image size of 640:
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
 ```python
 from ultralytics import YOLO
@@ -126,10 +126,10 @@ Training a YOLOv8 model on the SKU-110k dataset is straightforward. Here's an ex
 # Train the model
 results = model.train(data="SKU-110K.yaml", epochs=100, imgsz=640)
 ```
 === "CLI"
 ```bash
 # Start training from a pretrained *.pt model
 yolo detect train data=SKU-110K.yaml model=yolov8n.pt epochs=100 imgsz=640
@@ -165,7 +165,7 @@ These features make the SKU-110k dataset particularly valuable for training and
 If you use the SKU-110k dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"

docs/en/datasets/detect/visdrone.md (16 changed lines)

@@ -39,7 +39,7 @@ The VisDrone dataset is widely used for training and evaluating deep learning mo
 A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the Visdrone dataset, the `VisDrone.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/VisDrone.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/VisDrone.yaml).
-!!! Example "ultralytics/cfg/datasets/VisDrone.yaml"
+!!! example "ultralytics/cfg/datasets/VisDrone.yaml"
 ```yaml
 --8<-- "ultralytics/cfg/datasets/VisDrone.yaml"
@@ -49,7 +49,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
 To train a YOLOv8n model on the VisDrone dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
@@ -84,7 +84,7 @@ The example showcases the variety and complexity of the data in the VisDrone dat
 If you use the VisDrone dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"
@@ -107,6 +107,7 @@ We would like to acknowledge the AISKYEYE team at the Lab of Machine Learning an
 ### What is the VisDrone Dataset and what are its key features?
 The [VisDrone Dataset](https://github.com/VisDrone/VisDrone-Dataset) is a large-scale benchmark created by the AISKYEYE team at Tianjin University, China. It is designed for various computer vision tasks related to drone-based image and video analysis. Key features include:
 - **Composition**: 288 video clips with 261,908 frames and 10,209 static images.
 - **Annotations**: Over 2.6 million bounding boxes for objects like pedestrians, cars, bicycles, and tricycles.
 - **Diversity**: Collected across 14 cities, in urban and rural settings, under different weather and lighting conditions.
@@ -116,10 +117,10 @@ The [VisDrone Dataset](https://github.com/VisDrone/VisDrone-Dataset) is a large-
 To train a YOLOv8 model on the VisDrone dataset for 100 epochs with an image size of 640, you can follow these steps:
-!!! Example "Train Example"
+!!! example "Train Example"
 === "Python"
 ```python
 from ultralytics import YOLO
@@ -131,7 +132,7 @@ To train a YOLOv8 model on the VisDrone dataset for 100 epochs with an image siz
 ```
 === "CLI"
 ```bash
 # Start training from a pretrained *.pt model
 yolo detect train data=VisDrone.yaml model=yolov8n.pt epochs=100 imgsz=640
@@ -142,6 +143,7 @@ For additional configuration options, please refer to the model [Training](../..
 ### What are the main subsets of the VisDrone dataset and their applications?
 The VisDrone dataset is divided into five main subsets, each tailored for a specific computer vision task:
 1. **Task 1**: Object detection in images.
 2. **Task 2**: Object detection in videos.
 3. **Task 3**: Single-object tracking.
@@ -159,7 +161,7 @@ The configuration file for the VisDrone dataset, `VisDrone.yaml`, can be found i
 If you use the VisDrone dataset in your research or development work, please cite the following paper:
-!!! Quote ""
+!!! quote ""
 === "BibTeX"

@ -31,7 +31,7 @@ The VOC dataset is widely used for training and evaluating deep learning models
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant details. In the case of the VOC dataset, the `VOC.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/VOC.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/VOC.yaml).
!!! Example "ultralytics/cfg/datasets/VOC.yaml"
!!! example "ultralytics/cfg/datasets/VOC.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/VOC.yaml"
@ -41,7 +41,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n model on the VOC dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -76,7 +76,7 @@ The example showcases the variety and complexity of the images in the VOC datase
If you use the VOC dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -103,7 +103,7 @@ The [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/) (Visual Object Classes
To train a YOLOv8 model with the VOC dataset, you need the dataset configuration in a YAML file. Here's an example to start training a YOLOv8n model for 100 epochs with an image size of 640:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"

@ -34,7 +34,7 @@ The xView dataset is widely used for training and evaluating deep learning model
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant details. In the case of the xView dataset, the `xView.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/xView.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/xView.yaml).
!!! Example "ultralytics/cfg/datasets/xView.yaml"
!!! example "ultralytics/cfg/datasets/xView.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/xView.yaml"
@ -44,7 +44,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a model on the xView dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -79,7 +79,7 @@ The example showcases the variety and complexity of the data in the xView datase
If you use the xView dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -106,10 +106,10 @@ The [xView](http://xviewdataset.org/) dataset is one of the largest publicly ava
To train a model on the xView dataset using Ultralytics YOLO, follow these steps:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@ -119,10 +119,10 @@ To train a model on the xView dataset using Ultralytics YOLO, follow these steps
# Train the model
results = model.train(data="xView.yaml", epochs=100, imgsz=640)
```
=== "CLI"
```bash
# Start training from a pretrained *.pt model
yolo detect train data=xView.yaml model=yolov8n.pt epochs=100 imgsz=640
@ -133,6 +133,7 @@ For detailed arguments and settings, refer to the model [Training](../../modes/t
### What are the key features of the xView dataset?
The xView dataset stands out due to its comprehensive set of features:
- Over 1 million object instances across 60 distinct classes.
- High-resolution imagery at 0.3 m ground sample distance.
- Diverse object types including small, rare, and fine-grained objects, all annotated with bounding boxes.
@ -146,7 +147,7 @@ The xView dataset comprises high-resolution satellite images collected from Worl
If you utilize the xView dataset in your research, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -160,5 +161,5 @@ If you utilize the xView dataset in your research, please cite the following pap
primaryClass={cs.CV}
}
```
For more information about the xView dataset, visit the official [xView dataset website](http://xviewdataset.org/).

@ -48,7 +48,7 @@ dataframe = explorer.get_similar(img="path/to/image.jpg")
dataframe = explorer.get_similar(idx=0)
```
!!! Tip "Note"
!!! note
The embeddings table for a given dataset and model pair is created only once and then reused. These use [LanceDB](https://lancedb.github.io/lancedb/) under the hood, which scales on-disk, so you can create and reuse embeddings for large datasets like COCO without running out of memory.
@ -67,7 +67,7 @@ In case of multiple inputs, the aggregate of their embeddings is used.
You get a pandas dataframe with the `limit` most similar data points to the input, along with their distance in the embedding space. You can use this dataframe to perform further filtering.
!!! Example "Semantic Search"
!!! example "Semantic Search"
=== "Using Images"
@ -110,7 +110,7 @@ You get a pandas dataframe with the `limit` number of most similar data points t
You can also plot the similar images using the `plot_similar` method. This method takes the same arguments as `get_similar` and plots the similar images in a grid.
!!! Example "Plotting Similar Images"
!!! example "Plotting Similar Images"
=== "Using Images"
@ -143,7 +143,7 @@ You can also plot the similar images using the `plot_similar` method. This metho
This allows you to describe how you want to filter your dataset in natural language; you don't have to be proficient in writing SQL queries. Our AI-powered query generator handles that under the hood. For example, you can say "show me 100 images with exactly one person and 2 dogs. There can be other objects too", and it will internally generate the query and show you those results.
Note: This works using LLMs under the hood, so the results are probabilistic and may occasionally be wrong.
!!! Example "Ask AI"
!!! example "Ask AI"
```python
from ultralytics import Explorer
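# Sketch of the elided remainder of this snippet (assumed Explorer API):
exp = Explorer()
exp.create_embeddings_table()

# ask_ai returns a pandas dataframe of entries matching the natural-language query
df = exp.ask_ai("show me 100 images with exactly one person and 2 dogs")
print(df.head())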
@ -165,7 +165,7 @@ Note: This works using LLMs under the hood so the results are probabilistic and
You can run SQL queries on your dataset using the `sql_query` method. This method takes a SQL query as input and returns a pandas dataframe with the results.
!!! Example "SQL Query"
!!! example "SQL Query"
```python
from ultralytics import Explorer
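# Sketch of the elided remainder of this snippet (assumed Explorer API):
exp = Explorer()
exp.create_embeddings_table()

# A full SQL query or just a WHERE clause is accepted
df = exp.sql_query("WHERE labels LIKE '%person%' LIMIT 10")
print(df.head())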
@ -182,7 +182,7 @@ You can run SQL queries on your dataset using the `sql_query` method. This metho
You can also plot the results of a SQL query using the `plot_sql_query` method. This method takes the same arguments as `sql_query` and plots the results in a grid.
!!! Example "Plotting SQL Query Results"
!!! example "Plotting SQL Query Results"
```python
from ultralytics import Explorer
@ -199,7 +199,9 @@ You can also plot the results of a SQL query using the `plot_sql_query` method.
You can also work with the embeddings table directly. Once the embeddings table is created, you can access it via the `Explorer.table` attribute.
!!! Tip "Explorer works on [LanceDB](https://lancedb.github.io/lancedb/) tables internally. You can access this table directly, using `Explorer.table` object and run raw queries, push down pre- and post-filters, etc."
!!! tip
Explorer works on [LanceDB](https://lancedb.github.io/lancedb/) tables internally. You can access this table directly using the `Explorer.table` object to run raw queries, push down pre- and post-filters, and more.
```python
from ultralytics import Explorer
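# Sketch of the elided remainder of this snippet (assumed Explorer API):
exp = Explorer()
exp.create_embeddings_table()

# The underlying LanceDB table supports raw queries and pre-/post-filters
table = exp.table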
@ -213,7 +215,7 @@ Here are some examples of what you can do with the table:
### Get raw Embeddings
!!! Example
!!! example
```python
from ultralytics import Explorer
@ -228,7 +230,7 @@ Here are some examples of what you can do with the table:
### Advanced Querying with pre- and post-filters
!!! Example
!!! example
```python
from ultralytics import Explorer
@ -270,11 +272,11 @@ It returns a pandas dataframe with the following columns:
- `count`: Number of images in the dataset that are closer than `max_dist` to the current image
- `sim_im_files`: List of paths to the `count` similar images
!!! Tip
!!! tip
For a given dataset, model, `max_dist`, and `top_k`, the similarity index, once generated, will be reused. If your dataset has changed or you simply need to regenerate the similarity index, you can pass `force=True`.
!!! Example "Similarity Index"
!!! example "Similarity Index"
```python
from ultralytics import Explorer
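# Sketch of the elided remainder of this snippet (assumed Explorer API):
exp = Explorer()
exp.create_embeddings_table()

# Pass force=True to regenerate the index after the dataset changes
sim_idx = exp.similarity_index(max_dist=0.2, top_k=0.01, force=False)
print(sim_idx.head())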
@ -342,14 +344,17 @@ The Ultralytics Explorer API is designed for comprehensive dataset exploration.
### How do I install the Ultralytics Explorer API?
To install the Ultralytics Explorer API along with its dependencies, use the following command:
```bash
pip install ultralytics[explorer]
```
This will automatically install all necessary external libraries for the Explorer API functionality. For additional setup details, refer to the [installation section](#installation) of our documentation.
### How can I use the Ultralytics Explorer API for similarity search?
You can use the Ultralytics Explorer API to perform similarity searches by creating an embeddings table and querying it for similar images. Here's a basic example:
```python
from ultralytics import Explorer
@ -361,6 +366,7 @@ explorer.create_embeddings_table()
similar_images_df = explorer.get_similar(img="path/to/image.jpg")
print(similar_images_df.head())
```
For more details, please visit the [Similarity Search section](#1-similarity-search).
### What are the benefits of using LanceDB with Ultralytics Explorer?

@ -40,11 +40,13 @@ Semantic search is a technique for finding similar images to a given image. It i
For example:
In this VOC Exploration dashboard, the user selects a couple of airplane images like this:
<p>
<img width="1710" alt="Explorer Dashboard Screenshot 2" src="https://github.com/ultralytics/docs/releases/download/0/explorer-dashboard-screenshot-2.avif">
</p>
After running the similarity search, you should see results like this:
<p>
<img width="1710" alt="Explorer Dashboard Screenshot 3" src="https://github.com/ultralytics/docs/releases/download/0/explorer-dashboard-screenshot-3.avif">
</p>
@ -52,6 +54,7 @@ On performing similarity search, you should see a similar result:
## Ask AI
This allows you to describe how you want to filter your dataset in natural language; you don't have to be proficient in writing SQL queries. Our AI-powered query generator handles that under the hood. For example, you can say "show me 100 images with exactly one person and 2 dogs. There can be other objects too", and it will internally generate the query and show you those results. Here's an example output when asked to "Show 10 images with exactly 5 persons":
<p>
<img width="1709" alt="Explorer Dashboard Screenshot 4" src="https://github.com/ultralytics/docs/releases/download/0/explorer-dashboard-screenshot-4.avif">
</p>
@ -76,7 +79,7 @@ This is a Demo build using the Explorer API. You can use the API to build your o
### What is Ultralytics Explorer GUI and how do I install it?
Ultralytics Explorer GUI is a powerful interface that unlocks advanced data exploration capabilities using the [Ultralytics Explorer API](api.md). It allows you to run semantic/vector similarity search, SQL queries, and natural language queries using the Ask AI feature powered by Large Language Models (LLMs).
To install the Explorer GUI, you can use pip:
@ -106,13 +109,14 @@ Ultralytics Explorer GUI allows you to run SQL queries directly on your dataset
WHERE labels LIKE '%person%' AND labels LIKE '%dog%'
```
You can also provide only the WHERE clause, making the querying process more flexible.
For more details, refer to the [SQL Queries Section](#run-sql-queries-on-your-cv-datasets).
### What are the benefits of using Ultralytics Explorer GUI for data exploration?
Ultralytics Explorer GUI enhances data exploration with features like semantic search, SQL querying, and natural language interactions through the Ask AI feature. These capabilities allow users to:
- Efficiently find visually similar images.
- Filter datasets using complex SQL queries.
- Utilize AI to perform natural language searches, eliminating the need for advanced SQL expertise.

File diff suppressed because it is too large

@ -127,7 +127,7 @@ Contributing a new dataset involves several steps to ensure that it aligns well
### Example Code to Optimize and Zip a Dataset
!!! Example "Optimize and Zip a Dataset"
!!! example "Optimize and Zip a Dataset"
=== "Python"
@ -155,6 +155,7 @@ By following these steps, you can contribute a new dataset that integrates well
### What datasets does Ultralytics support for object detection?
Ultralytics supports a wide variety of datasets for object detection, including:
- [COCO](detect/coco.md): A large-scale object detection, segmentation, and captioning dataset with 80 object categories.
- [LVIS](detect/lvis.md): An extensive dataset with 1203 object categories, designed for more fine-grained object detection and segmentation.
- [Argoverse](detect/argoverse.md): A dataset containing 3D tracking and motion forecasting data from urban environments with rich annotations.
@ -166,6 +167,7 @@ These datasets facilitate training robust models for various object detection ap
### How do I contribute a new dataset to Ultralytics?
Contributing a new dataset involves several steps:
1. **Collect Images**: Gather images from public databases or personal collections.
2. **Annotate Images**: Apply bounding boxes, segments, or keypoints, depending on the task.
3. **Export Annotations**: Convert annotations into the YOLO `*.txt` format.
@ -180,6 +182,7 @@ Visit [Contribute New Datasets](#contribute-new-datasets) for a comprehensive gu
### Why should I use Ultralytics Explorer for my dataset?
Ultralytics Explorer offers powerful features for dataset analysis, including:
- **Embeddings Generation**: Create vector embeddings for images.
- **Semantic Search**: Search for similar images using embeddings or AI.
- **SQL Queries**: Run advanced SQL queries for detailed data analysis.
@ -190,6 +193,7 @@ Explore the [Ultralytics Explorer](explorer/index.md) for more information and t
### What are the unique features of Ultralytics YOLO models for computer vision?
Ultralytics YOLO models provide several unique features:
- **Real-time Performance**: High-speed inference and training.
- **Versatility**: Suitable for detection, segmentation, classification, and pose estimation tasks.
- **Pretrained Models**: Access to high-performing, pretrained models for various applications.
@ -201,10 +205,10 @@ Discover more about YOLO on the [Ultralytics YOLO](https://www.ultralytics.com/y
To optimize and zip a dataset using Ultralytics tools, follow this example code:
!!! Example "Optimize and Zip a Dataset"
!!! example "Optimize and Zip a Dataset"
=== "Python"
```python
from pathlib import Path

@ -60,7 +60,7 @@ DOTA serves as a benchmark for training and evaluating models specifically tailo
Typically, datasets incorporate a YAML (YAML Ain't Markup Language) file detailing the dataset's configuration. For DOTA v1 and DOTA v1.5, Ultralytics provides `DOTAv1.yaml` and `DOTAv1.5.yaml` files. For additional details on these, as well as DOTA v2, please consult DOTA's official repository and documentation.
!!! Example "DOTAv1.yaml"
!!! example "DOTAv1.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/DOTAv1.yaml"
@ -70,7 +70,7 @@ Typically, datasets incorporate a YAML (Yet Another Markup Language) file detail
To train on the DOTA dataset, we split the original high-resolution DOTA images into 1024x1024 images in a multiscale fashion.
!!! Example "Split images"
!!! example "Split images"
=== "Python"
@ -97,11 +97,11 @@ To train DOTA dataset, we split original DOTA images with high-resolution into i
To train a model on the DOTA v1 dataset, you can utilize the following code snippets. Always refer to your model's documentation for a thorough list of available arguments.
!!! Warning
!!! warning
Please note that all images and associated annotations in the DOTAv1 dataset can be used for academic purposes, but commercial use is prohibited. Your understanding and respect for the dataset creators' wishes are greatly appreciated!
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -136,7 +136,7 @@ The dataset's richness offers invaluable insights into object detection challeng
For those leveraging DOTA in their endeavors, it's pertinent to cite the relevant research papers:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -169,7 +169,7 @@ DOTA utilizes Oriented Bounding Boxes (OBB) for annotation, which are represente
To train a model on the DOTA dataset, you can use the following example with Ultralytics YOLO:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -195,9 +195,7 @@ For more details on how to split and preprocess the DOTA images, refer to the [s
### What are the differences between DOTA-v1.0, DOTA-v1.5, and DOTA-v2.0?
- **DOTA-v1.0**: Includes 15 common categories across 2,806 images with 188,282 instances. The dataset is split into training, validation, and testing sets.
- **DOTA-v1.5**: Builds upon DOTA-v1.0 by annotating very small instances (less than 10 pixels) and adding a new category, "container crane," totaling 403,318 instances.
- **DOTA-v2.0**: Expands further with annotations from Google Earth and GF-2 Satellite, featuring 11,268 images and 1,793,658 instances. It includes new categories like "airport" and "helipad."
For a detailed comparison and additional specifics, check the [dataset versions section](#dataset-versions).
@ -206,7 +204,7 @@ For a detailed comparison and additional specifics, check the [dataset versions
DOTA images, which can be very large, are split into smaller resolutions for manageable training. Here's a Python snippet to split images:
!!! Example
!!! example
=== "Python"

@ -16,7 +16,7 @@ This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant details. In the case of the DOTA8 dataset, the `dota8.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/dota8.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/dota8.yaml).
!!! Example "ultralytics/cfg/datasets/dota8.yaml"
!!! example "ultralytics/cfg/datasets/dota8.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/dota8.yaml"
@ -26,7 +26,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n-obb model on the DOTA8 dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -61,7 +61,7 @@ The example showcases the variety and complexity of the images in the DOTA8 data
If you use the DOTA dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -90,10 +90,10 @@ The DOTA8 dataset is a small, versatile oriented object detection dataset made u
To train a YOLOv8n-obb model on the DOTA8 dataset for 100 epochs with an image size of 640, you can use the following code snippets. For comprehensive argument options, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@ -105,7 +105,7 @@ To train a YOLOv8n-obb model on the DOTA8 dataset for 100 epochs with an image s
```
=== "CLI"
```bash
# Start training from a pretrained *.pt model
yolo obb train data=dota8.yaml model=yolov8n-obb.pt epochs=100 imgsz=640

@ -32,7 +32,7 @@ An example of a `*.txt` label file for the above image, which contains an object
To train a model using these OBB formats:
!!! Example
!!! example
=== "Python"
@ -70,7 +70,7 @@ For those looking to introduce their own datasets with oriented bounding boxes,
Transitioning labels from the DOTA dataset format to the YOLO OBB format can be achieved with this script:
!!! Example
!!! example
=== "Python"
@ -106,10 +106,10 @@ This script will reformat your DOTA annotations into a YOLO-compatible format.
Training a YOLOv8 model with OBBs involves ensuring your dataset is in the YOLO OBB format and then using the Ultralytics API to train the model. Here's an example in both Python and CLI:
!!! Example
!!! example
=== "Python"
```python
from ultralytics import YOLO
@ -119,15 +119,14 @@ Training a YOLOv8 model with OBBs involves ensuring your dataset is in the YOLO
# Train the model on the custom dataset
results = model.train(data="your_dataset.yaml", epochs=100, imgsz=640)
```
=== "CLI"
```bash
# Train a new YOLOv8n-OBB model on the custom dataset
yolo obb train data=your_dataset.yaml model=yolov8n-obb.yaml epochs=100 imgsz=640
```
This ensures your model leverages the detailed OBB annotations for improved detection accuracy.
### What datasets are currently supported for OBB training in Ultralytics YOLO models?

@ -13,7 +13,7 @@ The [COCO-Pose](https://cocodataset.org/#keypoints-2017) dataset is a specialize
## COCO-Pose Pretrained Models
| Model | size<br><sup>(pixels) | mAP<sup>pose<br>50-95 | mAP<sup>pose<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
|------------------------------------------------------------------------------------------------------|-----------------------|-----------------------|--------------------|--------------------------------|-------------------------------------|--------------------|-------------------|
| ---------------------------------------------------------------------------------------------------- | --------------------- | --------------------- | ------------------ | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n-pose](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-pose.pt) | 640 | 50.4 | 80.1 | 131.8 | 1.18 | 3.3 | 9.2 |
| [YOLOv8s-pose](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-pose.pt) | 640 | 60.0 | 86.2 | 233.2 | 1.42 | 11.6 | 30.2 |
| [YOLOv8m-pose](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-pose.pt) | 640 | 65.0 | 88.8 | 456.3 | 2.00 | 26.4 | 81.0 |
@ -43,7 +43,7 @@ The COCO-Pose dataset is specifically used for training and evaluating deep lear
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant details. In the case of the COCO-Pose dataset, the `coco-pose.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco-pose.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco-pose.yaml).
!!! Example "ultralytics/cfg/datasets/coco-pose.yaml"
!!! example "ultralytics/cfg/datasets/coco-pose.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/coco-pose.yaml"
@ -53,7 +53,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n-pose model on the COCO-Pose dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -88,7 +88,7 @@ The example showcases the variety and complexity of the images in the COCO-Pose
If you use the COCO-Pose dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -115,7 +115,7 @@ The [COCO-Pose](https://cocodataset.org/#keypoints-2017) dataset is a specialize
Training a YOLOv8 model on the COCO-Pose dataset can be accomplished using either Python or CLI commands. For example, to train a YOLOv8n-pose model for 100 epochs with an image size of 640, you can follow the steps below:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"

@ -16,7 +16,7 @@ This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant details. In the case of the COCO8-Pose dataset, the `coco8-pose.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8-pose.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8-pose.yaml).
!!! Example "ultralytics/cfg/datasets/coco8-pose.yaml"
!!! example "ultralytics/cfg/datasets/coco8-pose.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/coco8-pose.yaml"
@ -26,7 +26,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n-pose model on the COCO8-Pose dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -61,7 +61,7 @@ The example showcases the variety and complexity of the images in the COCO8-Pose
If you use the COCO dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -88,10 +88,10 @@ The COCO8-Pose dataset is a small, versatile pose detection dataset that include
To train a YOLOv8n-pose model on the COCO8-Pose dataset for 100 epochs with an image size of 640, follow these examples:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@ -103,7 +103,7 @@ To train a YOLOv8n-pose model on the COCO8-Pose dataset for 100 epochs with an i
```
=== "CLI"
```bash
yolo pose train data=coco8-pose.yaml model=yolov8n-pose.pt epochs=100 imgsz=640
```

@ -42,18 +42,18 @@ The Ultralytics framework uses a YAML file format to define the dataset and mode
```yaml
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/coco8-pose # dataset root dir
train: images/train # train images (relative to 'path') 4 images
val: images/val # val images (relative to 'path') 4 images
test: # test images (optional)
# Keypoints
kpt_shape: [17, 3] # number of keypoints, number of dims (2 for x,y or 3 for x,y,visible)
flip_idx: [0, 2, 1, 4, 3, 6, 5, 8, 7, 10, 9, 12, 11, 14, 13, 16, 15]
# Classes dictionary
names:
0: person
```
The `train` and `val` fields specify the paths to the directories containing the training and validation images, respectively.
@ -64,7 +64,7 @@ The `train` and `val` fields specify the paths to the directories containing the
## Usage
!!! Example
!!! example
=== "Python"
@ -126,7 +126,7 @@ If you have your own dataset and would like to use it for training pose estimati
Ultralytics provides a convenient conversion tool to convert labels from the popular COCO dataset format to YOLO format:
!!! Example
!!! example
=== "Python"
@ -142,7 +142,7 @@ This conversion tool can be used to convert the COCO dataset or any dataset in t
### What is the Ultralytics YOLO format for pose estimation?
The Ultralytics YOLO format for pose estimation datasets involves labeling each image with a corresponding text file. Each row of the text file stores information about an object instance:
- Object class index
- Object center coordinates (normalized x and y)
@ -154,6 +154,7 @@ For 2D poses, keypoints include pixel coordinates. For 3D, each keypoint also ha
### How do I use the COCO-Pose dataset with Ultralytics YOLO?
To use the COCO-Pose dataset with Ultralytics YOLO:
1. Download the dataset and prepare your label files in the YOLO format.
2. Create a YAML configuration file specifying paths to training and validation images, keypoint shape, and class names.
3. Use the configuration file for training:
@ -164,12 +165,13 @@ To use the COCO-Pose dataset with Ultralytics YOLO:
model = YOLO("yolov8n-pose.pt") # load pretrained model
results = model.train(data="coco-pose.yaml", epochs=100, imgsz=640)
```
For more information, visit [COCO-Pose](coco.md) and [train](../../modes/train.md) sections.
### How can I add my own dataset for pose estimation in Ultralytics YOLO?
To add your dataset:
1. Convert your annotations to the Ultralytics YOLO format.
2. Create a YAML configuration file specifying the dataset paths, number of classes, and class names.
3. Use the configuration file to train your model:
@ -180,7 +182,7 @@ To add your dataset:
model = YOLO("yolov8n-pose.pt")
results = model.train(data="your-dataset.yaml", epochs=100, imgsz=640)
```
For complete steps, check the [Adding your own dataset](#adding-your-own-dataset) section.
### What is the purpose of the dataset YAML file in Ultralytics YOLO?
@ -192,7 +194,7 @@ path: ../datasets/coco8-pose
train: images/train
val: images/val
names:
0: person
```
Read more about creating YAML configuration files in [Dataset YAML format](#dataset-yaml-format).

@ -29,7 +29,7 @@ This dataset is intended for use with [Ultralytics HUB](https://hub.ultralytics.
A YAML (YAML Ain't Markup Language) file serves as the means to specify the configuration details of a dataset. It encompasses crucial data such as file paths, class definitions, and other pertinent information. Specifically, for the `tiger-pose.yaml` file, you can check [Ultralytics Tiger-Pose Dataset Configuration File](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/tiger-pose.yaml).
!!! Example "ultralytics/cfg/datasets/tiger-pose.yaml"
!!! example "ultralytics/cfg/datasets/tiger-pose.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/tiger-pose.yaml"
@ -39,7 +39,7 @@ A YAML (Yet Another Markup Language) file serves as the means to specify the con
To train a YOLOv8n-pose model on the Tiger-Pose dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -72,7 +72,7 @@ The example showcases the variety and complexity of the images in the Tiger-Pose
## Inference Example
!!! Example "Inference Example"
!!! example "Inference Example"
=== "Python"
@ -107,10 +107,10 @@ The Ultralytics Tiger-Pose dataset is designed for pose estimation tasks, consis
To train a YOLOv8n-pose model on the Tiger-Pose dataset for 100 epochs with an image size of 640, use the following code snippets. For more details, visit the [Training](../../modes/train.md) page:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@ -120,10 +120,10 @@ To train a YOLOv8n-pose model on the Tiger-Pose dataset for 100 epochs with an i
# Train the model
results = model.train(data="tiger-pose.yaml", epochs=100, imgsz=640)
```
=== "CLI"
```bash
# Start training from a pretrained *.pt model
yolo task=pose mode=train data=tiger-pose.yaml model=yolov8n-pose.pt epochs=100 imgsz=640
@ -137,10 +137,10 @@ The `tiger-pose.yaml` file is used to specify the configuration details of the T
To perform inference using a YOLOv8 model trained on the Tiger-Pose dataset, you can use the following code snippets. For a detailed guide, visit the [Prediction](../../modes/predict.md) page:
!!! Example "Inference Example"
!!! example "Inference Example"
=== "Python"
```python
from ultralytics import YOLO
@ -150,10 +150,10 @@ To perform inference using a YOLOv8 model trained on the Tiger-Pose dataset, you
# Run inference
results = model.predict(source="https://youtu.be/MIBAT6BGE6U", show=True)
```
=== "CLI"
```bash
# Run inference using a tiger-pose trained model
yolo task=pose mode=predict source="https://youtu.be/MIBAT6BGE6U" show=True model="path/to/best.pt"

@ -37,7 +37,7 @@ Carparts Segmentation finds applications in automotive quality control, auto rep
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant details. In the case of the Carparts Segmentation dataset, the `carparts-seg.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/carparts-seg.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/carparts-seg.yaml).
!!! Example "ultralytics/cfg/datasets/carparts-seg.yaml"
!!! example "ultralytics/cfg/datasets/carparts-seg.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/carparts-seg.yaml"
@ -47,7 +47,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train an Ultralytics YOLOv8n model on the Carparts Segmentation dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -81,7 +81,7 @@ The Carparts Segmentation dataset includes a diverse array of images and videos
If you integrate the Carparts Segmentation dataset into your research or development projects, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -112,10 +112,10 @@ The [Roboflow Carparts Segmentation Dataset](https://universe.roboflow.com/gianm
To train a YOLOv8 model on the Carparts Segmentation dataset, you can follow these steps:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@ -138,6 +138,7 @@ For more details, refer to the [Training](../../modes/train.md) documentation.
### What are some applications of Carparts Segmentation?
Carparts Segmentation can be widely applied in various fields such as:
- **Automotive quality control**
- **Auto repair and maintenance**
- **E-commerce cataloging**
@ -155,6 +156,6 @@ The dataset configuration file for the Carparts Segmentation dataset, `carparts-
### Why should I use the Carparts Segmentation Dataset?
The Carparts Segmentation Dataset provides rich, annotated data essential for developing high-accuracy segmentation models in automotive computer vision. This dataset's diversity and detailed annotations improve model training, making it ideal for applications like vehicle maintenance automation, enhancing vehicle safety systems, and supporting autonomous driving technologies. Partnering with a robust dataset accelerates AI development and ensures better model performance.
For more details, visit the [CarParts Segmentation Dataset Page](https://universe.roboflow.com/gianmarco-russo-vt9xr/car-seg-un1pm?ref=ultralytics).

@ -11,7 +11,7 @@ The [COCO-Seg](https://cocodataset.org/#home) dataset, an extension of the COCO
## COCO-Seg Pretrained Models
| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
|----------------------------------------------------------------------------------------------|-----------------------|----------------------|-----------------------|--------------------------------|-------------------------------------|--------------------|-------------------|
| -------------------------------------------------------------------------------------------- | --------------------- | -------------------- | --------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-seg.pt) | 640 | 36.7 | 30.5 | 96.1 | 1.21 | 3.4 | 12.6 |
| [YOLOv8s-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-seg.pt) | 640 | 44.6 | 36.8 | 155.7 | 1.47 | 11.8 | 42.6 |
| [YOLOv8m-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-seg.pt) | 640 | 49.9 | 40.8 | 317.0 | 2.18 | 27.3 | 110.2 |
@ -41,7 +41,7 @@ COCO-Seg is widely used for training and evaluating deep learning models in inst
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant details. In the case of the COCO-Seg dataset, the `coco.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml).
!!! Example "ultralytics/cfg/datasets/coco.yaml"
!!! example "ultralytics/cfg/datasets/coco.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/coco.yaml"
@ -51,7 +51,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -86,7 +86,7 @@ The example showcases the variety and complexity of the images in the COCO-Seg d
If you use the COCO-Seg dataset in your research or development work, please cite the original COCO paper and acknowledge the extension to COCO-Seg:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -113,10 +113,10 @@ The [COCO-Seg](https://cocodataset.org/#home) dataset is an extension of the ori
To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a detailed list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@ -128,7 +128,7 @@ To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 epochs with an imag
```
=== "CLI"
```bash
# Start training from a pretrained *.pt model
yolo segment train data=coco-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
@ -148,7 +148,7 @@ The COCO-Seg dataset includes several key features:
The COCO-Seg dataset supports multiple pretrained YOLOv8 segmentation models with varying performance metrics. Here's a summary of the available models and their key metrics:
| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
|----------------------------------------------------------------------------------------------|-----------------------|----------------------|-----------------------|--------------------------------|-------------------------------------|--------------------|-------------------|
| -------------------------------------------------------------------------------------------- | --------------------- | -------------------- | --------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-seg.pt) | 640 | 36.7 | 30.5 | 96.1 | 1.21 | 3.4 | 12.6 |
| [YOLOv8s-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-seg.pt) | 640 | 44.6 | 36.8 | 155.7 | 1.47 | 11.8 | 42.6 |
| [YOLOv8m-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-seg.pt) | 640 | 49.9 | 40.8 | 317.0 | 2.18 | 27.3 | 110.2 |

@ -16,7 +16,7 @@ This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant details. In the case of the COCO8-Seg dataset, the `coco8-seg.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8-seg.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8-seg.yaml).
!!! Example "ultralytics/cfg/datasets/coco8-seg.yaml"
!!! example "ultralytics/cfg/datasets/coco8-seg.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/coco8-seg.yaml"
@ -26,7 +26,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train a YOLOv8n-seg model on the COCO8-Seg dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -61,7 +61,7 @@ The example showcases the variety and complexity of the images in the COCO8-Seg
If you use the COCO dataset in your research or development work, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -88,10 +88,10 @@ The **COCO8-Seg dataset** is a compact instance segmentation dataset by Ultralyt
To train a **YOLOv8n-seg** model on the COCO8-Seg dataset for 100 epochs with an image size of 640, you can use Python or CLI commands. Here's a quick example:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@ -103,7 +103,7 @@ To train a **YOLOv8n-seg** model on the COCO8-Seg dataset for 100 epochs with an
```
=== "CLI"
```bash
# Start training from a pretrained *.pt model
yolo segment train data=coco8-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640

@ -26,7 +26,7 @@ Crack segmentation finds practical applications in infrastructure maintenance, a
A YAML (YAML Ain't Markup Language) file is employed to outline the configuration of the dataset, encompassing details about paths, classes, and other pertinent information. Specifically, for the Crack Segmentation dataset, the `crack-seg.yaml` file is managed and accessible at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/crack-seg.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/crack-seg.yaml).
!!! Example "ultralytics/cfg/datasets/crack-seg.yaml"
!!! example "ultralytics/cfg/datasets/crack-seg.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/crack-seg.yaml"
@ -36,7 +36,7 @@ A YAML (Yet Another Markup Language) file is employed to outline the configurati
To train an Ultralytics YOLOv8n model on the Crack Segmentation dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -71,7 +71,7 @@ The Crack Segmentation dataset comprises a varied collection of images and video
If you incorporate the Crack Segmentation dataset into your research or development endeavors, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -102,7 +102,7 @@ The [Roboflow Crack Segmentation Dataset](https://universe.roboflow.com/universi
To train an Ultralytics YOLOv8 model on the Crack Segmentation dataset, use the following code snippets. Detailed instructions and further parameters can be found on the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -135,7 +135,7 @@ Ultralytics YOLO offers advanced real-time object detection, segmentation, and c
If you incorporate the Crack Segmentation Dataset into your research, please use the following BibTeX reference:
!!! Quote ""
!!! quote ""
=== "BibTeX"

@ -33,7 +33,7 @@ Here is an example of the YOLO dataset format for a single image with two object
1 0.504 0.000 0.501 0.004 0.498 0.004 0.493 0.010 0.492 0.0104
```
!!! Tip "Tip"
!!! tip "Tip"
- The length of each row does **not** have to be equal.
- Each segmentation label must have a **minimum of 3 xy points**: `<class-index> <x1> <y1> <x2> <y2> <x3> <y3>`
@ -44,20 +44,20 @@ The Ultralytics framework uses a YAML file format to define the dataset and mode
```yaml
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/coco8-seg # dataset root dir
train: images/train # train images (relative to 'path') 4 images
val: images/val # val images (relative to 'path') 4 images
test: # test images (optional)
# Classes (80 COCO classes)
names:
0: person
1: bicycle
2: car
# ...
77: teddy bear
78: hair drier
79: toothbrush
```
The `train` and `val` fields specify the paths to the directories containing the training and validation images, respectively.
@ -66,7 +66,7 @@ The `train` and `val` fields specify the paths to the directories containing the
## Usage
!!! Example
!!! example
=== "Python"
@ -108,7 +108,7 @@ If you have your own dataset and would like to use it for training segmentation
You can easily convert labels from the popular COCO dataset format to the YOLO format using the following code snippet:
!!! Example
!!! example
=== "Python"
@ -130,7 +130,7 @@ Auto-annotation is an essential feature that allows you to generate a segmentati
To auto-annotate your dataset using the Ultralytics framework, you can use the `auto_annotate` function as shown below:
!!! Example
!!! example
=== "Python"
@ -141,7 +141,7 @@ To auto-annotate your dataset using the Ultralytics framework, you can use the `
```
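The call itself is elided above; a minimal sketch using the defaults from the argument table below (the image folder is illustrative):

```python
from ultralytics.data.annotator import auto_annotate

# Detect objects with a YOLO model, then segment them with SAM to produce labels
auto_annotate(data="path/to/images", det_model="yolov8x.pt", sam_model="sam_b.pt")
```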
| Argument | Type | Description | Default |
|--------------|-------------------------|-------------------------------------------------------------------------------------------------------------|----------------|
| ------------ | ----------------------- | ----------------------------------------------------------------------------------------------------------- | -------------- |
| `data` | `str` | Path to a folder containing images to be annotated. | `None` |
| `det_model` | `str, optional` | Pre-trained YOLO detection model. Defaults to `'yolov8x.pt'`. | `'yolov8x.pt'` |
| `sam_model` | `str, optional` | Pre-trained SAM segmentation model. Defaults to `'sam_b.pt'`. | `'sam_b.pt'` |
@ -175,15 +175,15 @@ This script converts your COCO dataset annotations to the required YOLO format,
To prepare a YAML file for training YOLO models with Ultralytics, you need to define the dataset paths and class names. Here's an example YAML configuration:
```yaml
path: ../datasets/coco8-seg # dataset root dir
train: images/train # train images (relative to 'path')
val: images/val # val images (relative to 'path')
names:
0: person
1: bicycle
2: car
# ...
```
Ensure you update the paths and class names according to your dataset. For more information, check the [Dataset YAML Format](#dataset-yaml-format) section.

@ -26,7 +26,7 @@ Package segmentation, facilitated by the Package Segmentation Dataset, is crucia
A YAML (YAML Ain't Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant details. In the case of the Package Segmentation dataset, the `package-seg.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/package-seg.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/package-seg.yaml).
!!! Example "ultralytics/cfg/datasets/package-seg.yaml"
!!! example "ultralytics/cfg/datasets/package-seg.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/package-seg.yaml"
@ -36,7 +36,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
To train an Ultralytics YOLOv8n model on the Package Segmentation dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
@ -70,7 +70,7 @@ The Package Segmentation dataset comprises a varied collection of images and vid
If you integrate the Package Segmentation dataset into your research or development initiatives, please cite the following paper:
!!! Quote ""
!!! quote ""
=== "BibTeX"
@ -101,10 +101,10 @@ The [Roboflow Package Segmentation Dataset](https://universe.roboflow.com/factor
You can train an Ultralytics YOLOv8n model using both Python and CLI methods. Use the snippets below:
!!! Example "Train Example"
!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@ -116,7 +116,7 @@ You can train an Ultralytics YOLOv8n model using both Python and CLI methods. Us
```
=== "CLI"
```bash
# Start training from a pretrained *.pt model
yolo segment train data=package-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
@ -127,6 +127,7 @@ Refer to the model [Training](../../modes/train.md) page for more details.
### What are the components of the Package Segmentation Dataset, and how is it structured?
The dataset is structured into three main components:
- **Training set**: Contains 1920 images with annotations.
- **Testing set**: Comprises 89 images with corresponding annotations.
- **Validation set**: Includes 188 images with annotations.
@ -139,6 +140,6 @@ Ultralytics YOLOv8 provides state-of-the-art accuracy and speed for real-time ob
### How can I access and use the package-seg.yaml file for the Package Segmentation Dataset?
The `package-seg.yaml` file is hosted on Ultralytics' GitHub repository and contains essential information about the dataset's paths, classes, and configuration. You can download it from [here](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/package-seg.yaml). This file is crucial for configuring your models to utilize the dataset efficiently.
For more insights and practical examples, explore our [Usage](https://docs.ultralytics.com/usage/python/) section.

@ -12,7 +12,7 @@ Multi-Object Detector doesn't need standalone training and directly supports pre
## Usage
!!! Example
!!! example
=== "Python"
@ -35,10 +35,10 @@ Multi-Object Detector doesn't need standalone training and directly supports pre
To use Multi-Object Tracking with Ultralytics YOLO, you can start by using the Python or CLI examples provided. Here is how you can get started:
!!! Example
!!! example
=== "Python"
```python
from ultralytics import YOLO
@ -51,7 +51,7 @@ To use Multi-Object Tracking with Ultralytics YOLO, you can start by using the P
```bash
yolo track model=yolov8n.pt source="https://youtu.be/LNwODJXcvt4" conf=0.3 iou=0.5 show
```
These commands load the YOLOv8 model and use it for tracking objects in the given video source with specific confidence (`conf`) and Intersection over Union (`iou`) thresholds. For more details, refer to the [track mode documentation](../../modes/track.md).
### What are the upcoming features for training trackers in Ultralytics?

@ -22,7 +22,7 @@ This guide provides a comprehensive overview of three fundamental types of data
- Bar plots, on the other hand, are suitable for comparing quantities across different categories and showing relationships between a category and its numerical value.
- Lastly, pie charts are effective for illustrating proportions among categories and showing parts of a whole.
!!! Analytics "Analytics Examples"
!!! analytics "Analytics Examples"
=== "Line Graph"

@ -85,7 +85,7 @@ After installing the runtime, you need to plug in your Coral Edge TPU into a USB
To use the Edge TPU, you need to convert your model into a compatible format. It is recommended that you run the export on Google Colab, an x86_64 Linux machine, the official [Ultralytics Docker container](docker-quickstart.md), or [Ultralytics HUB](../hub/quickstart.md), since the Edge TPU compiler is not available on ARM. See [Export Mode](../modes/export.md) for the available arguments.
!!! Exporting the model
!!! exporting the model
=== "Python"
@ -111,7 +111,7 @@ The exported model will be saved in the `<model_name>_saved_model/` folder with
After exporting your model, you can run inference with it using the following code:
!!! Running the model
!!! running the model
=== "Python"
@ -170,7 +170,7 @@ Make sure to uninstall any previous Coral Edge TPU runtime versions by following
Yes, you can export your Ultralytics YOLOv8 model to be compatible with the Coral Edge TPU. It is recommended to perform the export on Google Colab, an x86_64 Linux machine, or using the [Ultralytics Docker container](docker-quickstart.md). You can also use Ultralytics HUB for exporting. Here is how you can export your model using Python and CLI:
!!! Exporting the model
!!! exporting the model
=== "Python"
@ -212,7 +212,7 @@ For a specific wheel, such as TensorFlow 2.15.0 `tflite-runtime`, you can downlo
After exporting your YOLOv8 model to an Edge TPU-compatible format, you can run inference using the following code snippets:
!!! Running the model
!!! running the model
=== "Python"

@ -6,11 +6,22 @@ keywords: Ultralytics, YOLOv8, NVIDIA Jetson, JetPack, AI deployment, embedded s
# Ultralytics YOLOv8 on NVIDIA Jetson using DeepStream SDK and TensorRT
<p align="center">
<br>
<iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/wWmXKIteRLA"
title="YouTube video player" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> How to Run Multiple Streams with DeepStream SDK on Jetson Nano using Ultralytics YOLOv8
</p>
This comprehensive guide provides a detailed walkthrough for deploying Ultralytics YOLOv8 on [NVIDIA Jetson](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/) devices using DeepStream SDK and TensorRT. Here we use TensorRT to maximize the inference performance on the Jetson platform.
<img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/deepstream-nvidia-jetson.avif" alt="DeepStream on NVIDIA Jetson">
!!! Note
!!! note
This guide has been tested with both [Seeed Studio reComputer J4012](https://www.seeedstudio.com/reComputer-J4012-p-5586.html), which is based on NVIDIA Jetson Orin NX 16GB running JetPack release [JP5.1.3](https://developer.nvidia.com/embedded/jetpack-sdk-513), and [Seeed Studio reComputer J1020 v2](https://www.seeedstudio.com/reComputer-J1020-v2-p-5498.html), which is based on NVIDIA Jetson Nano 4GB running JetPack release [JP4.6.4](https://developer.nvidia.com/jetpack-sdk-464). It is expected to work across the entire NVIDIA Jetson hardware lineup, including the latest and legacy devices.
@ -28,7 +39,7 @@ Before you start to follow this guide:
- For JetPack 4.6.4, install [DeepStream 6.0.1](https://docs.nvidia.com/metropolis/deepstream/6.0.1/dev-guide/text/DS_Quickstart.html)
- For JetPack 5.1.3, install [DeepStream 6.3](https://docs.nvidia.com/metropolis/deepstream/6.3/dev-guide/text/DS_Quickstart.html)
!!! Tip
!!! tip
In this guide, we have used the Debian package method of installing the DeepStream SDK on the Jetson device. You can also visit the [DeepStream SDK on Jetson (Archived)](https://developer.nvidia.com/embedded/deepstream-on-jetson-downloads-archived) to access legacy versions of DeepStream.
@ -56,7 +67,7 @@ Here we are using [marcoslucianops/DeepStream-Yolo](https://github.com/marcosluc
wget https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s.pt
```
!!! Note
!!! note
You can also use a [custom trained YOLOv8 model](https://docs.ultralytics.com/modes/train/).
@ -66,7 +77,7 @@ Here we are using [marcoslucianops/DeepStream-Yolo](https://github.com/marcosluc
python3 utils/export_yoloV8.py -w yolov8s.pt
```
!!! Note "Pass the below arguments to the above command"
!!! note "Pass the below arguments to the above command"
For DeepStream 6.0.1, use opset 12 or lower. The default opset is 16.
@ -164,13 +175,13 @@ Here we are using [marcoslucianops/DeepStream-Yolo](https://github.com/marcosluc
deepstream-app -c deepstream_app_config.txt
```
!!! Note
!!! note
Generating the TensorRT engine file before inference starts can take a long time, so please be patient.
<div align=center><img width=1000 src="https://github.com/ultralytics/docs/releases/download/0/yolov8-with-deepstream.avif" alt="YOLOv8 with deepstream"></div>
!!! Tip
!!! tip
If you want to convert the model to FP16 precision, simply set `model-engine-file=model_b1_gpu0_fp16.engine` and `network-mode=2` inside `config_infer_primary_yoloV8.txt`
@ -206,7 +217,7 @@ If you want to use INT8 precision for inference, you need to follow the steps be
done
```
!!! Note
!!! note
NVIDIA recommends at least 500 images for good accuracy. In this example, 1000 images are chosen for better accuracy (more images = more accuracy). You can set the number with **head -1000**; for example, for 2000 images, use **head -2000**. This process can take a long time.
@ -223,7 +234,7 @@ If you want to use INT8 precision for inference, you need to follow the steps be
export INT8_CALIB_BATCH_SIZE=1
```
!!! Note
!!! note
Higher INT8_CALIB_BATCH_SIZE values result in better accuracy and faster calibration. Set it according to your GPU memory.

@ -36,7 +36,7 @@ Measuring the gap between two objects is known as distance calculation within a
- Click on any two bounding boxes with Left Mouse click for distance calculation
!!! Example "Distance Calculation using YOLOv8 Example"
!!! example "Distance Calculation using YOLOv8 Example"
=== "Video Stream"

@ -39,7 +39,7 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
- `heatmap_alpha`: Ensure this value is within the range (0.0 - 1.0).
- `decay_factor`: Used for removing heatmap after an object is no longer in the frame, its value should also be in the range (0.0 - 1.0).
!!! Example "Heatmaps using Ultralytics YOLOv8 Example"
!!! example "Heatmaps using Ultralytics YOLOv8 Example"
=== "Heatmap"

@ -69,7 +69,7 @@ The process is repeated until either the set number of iterations is reached or
Here's how to use the `model.tune()` method to utilize the `Tuner` class for hyperparameter tuning of YOLOv8n on COCO8 for 30 epochs with an AdamW optimizer, skipping plotting, checkpointing, and validation (except on the final epoch) for faster tuning.
!!! Example
!!! example
=== "Python"
@ -212,7 +212,7 @@ For deeper insights, you can explore the `Tuner` class source code and accompany
To optimize the learning rate for Ultralytics YOLO, start by setting an initial learning rate using the `lr0` parameter. Common values range from `0.001` to `0.01`. During the hyperparameter tuning process, this value will be mutated to find the optimal setting. You can utilize the `model.tune()` method to automate this process. For example:
!!! Example
!!! example
=== "Python"

@ -68,7 +68,7 @@ Let's work together to make the Ultralytics YOLO ecosystem more robust and versa
Training a custom object detection model with Ultralytics YOLO is straightforward. Start by preparing your dataset in the correct format and installing the Ultralytics package. Use the following code to initiate training:
!!! Example
!!! example
=== "Python"

@ -34,7 +34,7 @@ There are two types of instance segmentation tracking available in the Ultralyti
| ![Ultralytics Instance Segmentation](https://github.com/ultralytics/docs/releases/download/0/ultralytics-instance-segmentation.avif) | ![Ultralytics Instance Segmentation with Object Tracking](https://github.com/ultralytics/docs/releases/download/0/ultralytics-instance-segmentation-object-tracking.avif) |
| Ultralytics Instance Segmentation 😍 | Ultralytics Instance Segmentation with Object Tracking 🔥 |
!!! Example "Instance Segmentation and Tracking"
!!! example "Instance Segmentation and Tracking"
=== "Instance Segmentation"
@ -146,7 +146,7 @@ For any inquiries, feel free to post your questions in the [Ultralytics Issue Se
To perform instance segmentation using Ultralytics YOLOv8, initialize the YOLO model with a segmentation version of YOLOv8 and process video frames through it. Here's a simplified code example:
!!! Example
!!! example
=== "Python"
@ -200,7 +200,7 @@ Ultralytics YOLOv8 offers real-time performance, superior accuracy, and ease of
To implement object tracking, use the `model.track` method and ensure that each object's ID is consistently assigned across frames. Below is a simple example:
!!! Example
!!! example
=== "Python"

@ -331,7 +331,7 @@ For more insights, check out our [blog post](https://www.ultralytics.com/blog/ac
Yes, YOLOv8 models can be deployed on mobile devices using TensorFlow Lite (TF Lite) for both Android and iOS platforms. TF Lite is designed for mobile and embedded devices, providing efficient on-device inference.
!!! Example
!!! example
=== "Python"

@ -63,7 +63,7 @@ The `imgsz` validation parameter sets the maximum dimension for image resizing,
If you want to get a deeper understanding of your YOLOv8 model's performance, you can easily access specific evaluation metrics with a few lines of Python code. The code snippet below will let you load your model, run an evaluation, and print out various metrics that show how well your model is doing.
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -165,7 +165,7 @@ Improving mean average precision (mAP) for a YOLOv8 model involves several steps
You can access YOLOv8 model evaluation metrics using Python with the following steps:
!!! Example "Usage"
!!! example "Usage"
=== "Python"

@ -21,7 +21,7 @@ This comprehensive guide provides a detailed walkthrough for deploying Ultralyti
<img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/nvidia-jetson-ecosystem.avif" alt="NVIDIA Jetson Ecosystem">
!!! Note
!!! note
This guide has been tested with both [Seeed Studio reComputer J4012](https://www.seeedstudio.com/reComputer-J4012-p-5586.html), which is based on NVIDIA Jetson Orin NX 16GB running the latest stable JetPack release [JP6.0](https://developer.nvidia.com/embedded/jetpack-sdk-60) and JetPack release [JP5.1.3](https://developer.nvidia.com/embedded/jetpack-sdk-513), and [Seeed Studio reComputer J1020 v2](https://www.seeedstudio.com/reComputer-J1020-v2-p-5498.html), which is based on NVIDIA Jetson Nano 4GB running JetPack release [JP4.6.1](https://developer.nvidia.com/embedded/jetpack-sdk-461). It is expected to work across the entire NVIDIA Jetson hardware lineup, including the latest and legacy devices.
@ -57,7 +57,7 @@ The first step after getting your hands on an NVIDIA Jetson device is to flash N
3. If you own a Seeed Studio reComputer J4012 device, you can [flash JetPack to the included SSD](https://wiki.seeedstudio.com/reComputer_J4012_Flash_Jetpack/), and if you own a Seeed Studio reComputer J1020 v2 device, you can [flash JetPack to the eMMC/SSD](https://wiki.seeedstudio.com/reComputer_J2021_J202_Flash_Jetpack/).
4. If you own any other third-party device powered by the NVIDIA Jetson module, it is recommended to follow [command-line flashing](https://docs.nvidia.com/jetson/archives/r35.5.0/DeveloperGuide/IN/QuickStart.html).
!!! Note
!!! note
For methods 3 and 4 above, after flashing the system and booting the device, please enter "sudo apt update && sudo apt install nvidia-jetpack -y" on the device terminal to install all the remaining JetPack components.
@ -157,7 +157,7 @@ wget https://nvidia.box.com/shared/static/48dtuob7meiw6ebgfsfqakc9vse62sg4.whl -
pip install onnxruntime_gpu-1.18.0-cp310-cp310-linux_aarch64.whl
```
!!! Note
!!! note
`onnxruntime-gpu` automatically reverts the numpy version to the latest, so we need to reinstall numpy `1.23.5` to fix an issue by executing:
@ -230,7 +230,7 @@ wget https://nvidia.box.com/shared/static/zostg6agm00fb6t5uisw51qi6kpcuwzd.whl -
pip install onnxruntime_gpu-1.17.0-cp38-cp38-linux_aarch64.whl
```
!!! Note
!!! note
`onnxruntime-gpu` automatically reverts the numpy version to the latest, so we need to reinstall numpy `1.23.5` to fix an issue by executing:
@ -244,7 +244,7 @@ Out of all the model export formats supported by Ultralytics, TensorRT delivers
The YOLOv8n model in PyTorch format is converted to TensorRT to run inference with the exported model.
!!! Example
!!! example
=== "Python"
@ -274,7 +274,7 @@ The YOLOv8n model in PyTorch format is converted to TensorRT to run inference wi
yolo predict model=yolov8n.engine source='https://ultralytics.com/images/bus.jpg'
```
!!! Note
!!! note
Visit the [Export page](../modes/export.md#arguments) to access additional arguments when exporting models to different model formats.
@ -294,7 +294,7 @@ Even though all model exports are working with NVIDIA Jetson, we have only inclu
The table below presents the benchmark results for five different models (YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, YOLOv8x) across ten different formats (PyTorch, TorchScript, ONNX, OpenVINO, TensorRT, TF SavedModel, TF GraphDef, TF Lite, PaddlePaddle, NCNN), giving us the status, size, mAP50-95(B) metric, and inference time for each combination.
!!! Performance
!!! performance
=== "YOLOv8n"
@ -377,7 +377,7 @@ The below table represents the benchmark results for five different models (YOLO
To reproduce the above Ultralytics benchmarks on all export [formats](../modes/export.md) run this code:
!!! Example
!!! example
=== "Python"

@ -27,7 +27,7 @@ Object blurring with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
- **Selective Focus**: YOLOv8 allows for selective blurring, enabling users to target specific objects, ensuring a balance between privacy and retaining relevant visual information.
- **Real-time Processing**: YOLOv8's efficiency enables object blurring in real-time, making it suitable for applications requiring on-the-fly privacy enhancements in dynamic environments.
!!! Example "Object Blurring using YOLOv8 Example"
!!! example "Object Blurring using YOLOv8 Example"
=== "Object Blurring"

@ -46,7 +46,7 @@ Object counting with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
| ![Conveyor Belt Packets Counting Using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/conveyor-belt-packets-counting.avif) | ![Fish Counting in Sea using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/fish-counting-in-sea-using-ultralytics-yolov8.avif) |
| Conveyor Belt Packets Counting Using Ultralytics YOLOv8 | Fish Counting in Sea using Ultralytics YOLOv8 |
!!! Example "Object Counting using YOLOv8 Example"
!!! example "Object Counting using YOLOv8 Example"
=== "Count in Region"
@ -224,23 +224,15 @@ Object counting with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
Here's a table with the `ObjectCounter` arguments:
| Name | Type | Default | Description |
| -------------------- | ------- | -------------------------- | ---------------------------------------------------------------------- |
| `names` | `dict` | `None` | Dictionary of classes names. |
| `reg_pts` | `list` | `[(20, 400), (1260, 400)]` | List of points defining the counting region. |
| `count_reg_color` | `tuple` | `(255, 0, 255)` | RGB color of the counting region. |
| `count_txt_color` | `tuple` | `(0, 0, 0)` | RGB color of the count text. |
| `count_bg_color` | `tuple` | `(255, 255, 255)` | RGB color of the count text background. |
| `line_thickness` | `int` | `2` | Line thickness for bounding boxes. |
| `track_thickness` | `int` | `2` | Thickness of the track lines. |
| `view_img` | `bool` | `False` | Flag to control whether to display the video stream. |
| `view_in_counts` | `bool` | `True` | Flag to control whether to display the in counts on the video stream. |
| `view_out_counts` | `bool` | `True` | Flag to control whether to display the out counts on the video stream. |
| `draw_tracks` | `bool` | `False` | Flag to control whether to draw the object tracks. |
| `track_color` | `tuple` | `None` | RGB color of the tracks. |
| `region_thickness` | `int` | `5` | Thickness of the object counting region. |
| `line_dist_thresh` | `int` | `15` | Euclidean distance threshold for line counter. |
| `cls_txtdisplay_gap` | `int` | `50` | Display gap between each class count. |
| Name | Type | Default | Description |
| ----------------- | ------ | -------------------------- | ---------------------------------------------------------------------- |
| `names`           | `dict` | `None`                     | Dictionary of class names.                                              |
| `reg_pts` | `list` | `[(20, 400), (1260, 400)]` | List of points defining the counting region. |
| `line_thickness` | `int` | `2` | Line thickness for bounding boxes. |
| `view_img` | `bool` | `False` | Flag to control whether to display the video stream. |
| `view_in_counts` | `bool` | `True` | Flag to control whether to display the in counts on the video stream. |
| `view_out_counts` | `bool` | `True` | Flag to control whether to display the out counts on the video stream. |
| `draw_tracks` | `bool` | `False` | Flag to control whether to draw the object tracks. |
### Arguments `model.track`

@ -34,7 +34,7 @@ Object cropping with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
| ![Conveyor Belt at Airport Suitcases Cropping using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/suitcases-cropping-airport-conveyor-belt.avif) |
| Suitcases Cropping at airport conveyor belt using Ultralytics YOLOv8 |
!!! Example "Object Cropping using YOLOv8 Example"
!!! example "Object Cropping using YOLOv8 Example"
=== "Object Cropping"

@ -38,18 +38,18 @@ Parking management with [Ultralytics YOLOv8](https://github.com/ultralytics/ultr
### Selection of Points
!!! Tip "Point Selection is now Easy"
!!! tip "Point Selection is now Easy"
Choosing parking points is a critical and complex task in parking management systems. Ultralytics streamlines this process by providing a tool that lets you define parking lot areas, which can be utilized later for additional processing.
- Capture a frame from the video or camera stream where you want to manage the parking lot.
- Use the provided code to launch a graphical interface, where you can select an image and start outlining parking regions by mouse click to create polygons.
!!! Warning "Image Size"
!!! warning "Image Size"
A maximum image size of 1920 × 1080 is supported.
!!! Example "Parking slots Annotator Ultralytics YOLOv8"
!!! example "Parking slots Annotator Ultralytics YOLOv8"
=== "Parking Annotator"
@ -65,7 +65,7 @@ Parking management with [Ultralytics YOLOv8](https://github.com/ultralytics/ultr
### Python Code for Parking Management
!!! Example "Parking management using YOLOv8 Example"
!!! example "Parking management using YOLOv8 Example"
=== "Parking Management"

@ -33,7 +33,7 @@ Queue management using [Ultralytics YOLOv8](https://github.com/ultralytics/ultra
| ![Queue management at airport ticket counter using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/queue-management-airport-ticket-counter-ultralytics-yolov8.avif) | ![Queue monitoring in crowd using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/queue-monitoring-crowd-ultralytics-yolov8.avif) |
| Queue management at airport ticket counter using Ultralytics YOLOv8 | Queue monitoring in crowd using Ultralytics YOLOv8 |
!!! Example "Queue Management using YOLOv8 Example"
!!! example "Queue Management using YOLOv8 Example"
=== "Queue Manager"

@ -19,7 +19,7 @@ This comprehensive guide provides a detailed walkthrough for deploying Ultralyti
<strong>Watch:</strong> Raspberry Pi 5 updates and improvements.
</p>
!!! Note
!!! note
This guide has been tested with Raspberry Pi 4 and Raspberry Pi 5 running the latest [Raspberry Pi OS Bookworm (Debian 12)](https://www.raspberrypi.com/software/operating-systems/). Using this guide for older Raspberry Pi devices such as the Raspberry Pi 3 is expected to work as long as the same Raspberry Pi OS Bookworm is installed.
@ -100,7 +100,7 @@ Out of all the model export formats supported by Ultralytics, [NCNN](https://doc
The YOLOv8n model in PyTorch format is converted to NCNN to run inference with the exported model.
!!! Example
!!! example
=== "Python"
@ -130,7 +130,7 @@ The YOLOv8n model in PyTorch format is converted to NCNN to run inference with t
yolo predict model='yolov8n_ncnn_model' source='https://ultralytics.com/images/bus.jpg'
```
!!! Tip
!!! tip
For more details about supported export options, visit the [Ultralytics documentation page on deployment options](https://docs.ultralytics.com/guides/model-deployment-options).
@ -138,7 +138,7 @@ The YOLOv8n model in PyTorch format is converted to NCNN to run inference with t
YOLOv8 benchmarks were run by the Ultralytics team on nine different model formats measuring speed and accuracy: PyTorch, TorchScript, ONNX, OpenVINO, TF SavedModel, TF GraphDef, TF Lite, PaddlePaddle, and NCNN. Benchmarks were run on both Raspberry Pi 5 and Raspberry Pi 4 at FP32 precision with a default input image size of 640.
!!! Note
!!! note
We have only included benchmarks for the YOLOv8n and YOLOv8s models because other model sizes are too big to run on the Raspberry Pis and do not offer decent performance.
@ -224,7 +224,7 @@ The below table represents the benchmark results for two different models (YOLOv
To reproduce the above Ultralytics benchmarks on all [export formats](../modes/export.md), run this code:
!!! Example
!!! example
=== "Python"
@ -251,11 +251,11 @@ To reproduce the above Ultralytics benchmarks on all [export formats](../modes/e
When using a Raspberry Pi for computer vision projects, it can be essential to grab real-time video feeds to perform inference. The onboard MIPI CSI connector on the Raspberry Pi allows you to connect official Raspberry Pi camera modules. In this guide, we have used a [Raspberry Pi Camera Module 3](https://www.raspberrypi.com/products/camera-module-3) to grab the video feeds and perform inference using YOLOv8 models.
!!! Tip
!!! tip
Learn more about the [different camera modules offered by Raspberry Pi](https://www.raspberrypi.com/documentation/accessories/camera.html) and also [how to get started with the Raspberry Pi camera modules](https://www.raspberrypi.com/documentation/computers/camera_software.html#introducing-the-raspberry-pi-cameras).
!!! Note
!!! note
Raspberry Pi 5 uses smaller CSI connectors than the Raspberry Pi 4 (15-pin vs 22-pin), so you will need a [15-pin to 22-pin adapter cable](https://www.raspberrypi.com/products/camera-cable) to connect to a Raspberry Pi Camera.
@ -267,7 +267,7 @@ Execute the following command after connecting the camera to the Raspberry Pi. Y
rpicam-hello
```
!!! Tip
!!! tip
Learn more about [`rpicam-hello` usage on official Raspberry Pi documentation](https://www.raspberrypi.com/documentation/computers/camera_software.html#rpicam-hello)
@ -275,13 +275,13 @@ rpicam-hello
There are two methods of using the Raspberry Pi Camera to run inference with YOLOv8 models.
!!! Usage
!!! usage
=== "Method 1"
We can use `picamera2`, which comes pre-installed with Raspberry Pi OS, to access the camera and run inference with YOLOv8 models.
!!! Example
!!! example
=== "Python"
@ -333,7 +333,7 @@ There are 2 methods of using the Raspberry Pi Camera to inference YOLOv8 models.
Learn more about [`rpicam-vid` usage on official Raspberry Pi documentation](https://www.raspberrypi.com/documentation/computers/camera_software.html#rpicam-vid)
!!! Example
!!! example
=== "Python"
@ -353,7 +353,7 @@ There are 2 methods of using the Raspberry Pi Camera to inference YOLOv8 models.
yolo predict model=yolov8n.pt source="tcp://127.0.0.1:8888"
```
!!! Tip
!!! tip
Check our document on [Inference Sources](https://docs.ultralytics.com/modes/predict/#inference-sources) if you want to change the image/video input type.
@ -410,7 +410,7 @@ Ultralytics YOLOv8's NCNN format is highly optimized for mobile and embedded pla
You can convert a PyTorch YOLOv8 model to NCNN format using either Python or CLI commands:
!!! Example
!!! example
=== "Python"

@ -187,7 +187,7 @@ That's it! Now you're equipped to use YOLOv8 with SAHI for both standard and sli
If you use SAHI in your research or development work, please cite the original SAHI paper and acknowledge the authors:
!!! Quote ""
!!! quote ""
=== "BibTeX"

@ -38,7 +38,7 @@ keywords: Ultralytics YOLOv8, speed estimation, object tracking, computer vision
| ![Speed Estimation on Road using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/speed-estimation-on-road-using-ultralytics-yolov8.avif) | ![Speed Estimation on Bridge using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/speed-estimation-on-bridge-using-ultralytics-yolov8.avif) |
| Speed Estimation on Road using Ultralytics YOLOv8 | Speed Estimation on Bridge using Ultralytics YOLOv8 |
!!! Example "Speed Estimation using YOLOv8 Example"
!!! example "Speed Estimation using YOLOv8 Example"
=== "Speed Estimation"

@ -38,7 +38,7 @@ Streamlit makes it simple to build and deploy interactive web applications. Comb
Before you start building the application, ensure you have the Ultralytics Python Package installed. You can install it using the command **pip install ultralytics**.
!!! Example "Streamlit Application"
!!! example "Streamlit Application"
=== "Python"
@ -60,7 +60,7 @@ This will launch the Streamlit application in your default web browser. You will
You can optionally supply a specific model in Python:
!!! Example "Streamlit Application with a custom model"
!!! example "Streamlit Application with a custom model"
=== "Python"
@ -104,7 +104,7 @@ pip install ultralytics
Then, you can create a basic Streamlit application to run live inference:
!!! Example "Streamlit Application"
!!! example "Streamlit Application"
=== "Python"

@ -17,7 +17,7 @@ keywords: VisionEye, YOLOv8, Ultralytics, object mapping, object tracking, dista
| ![VisionEye View Object Mapping using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/visioneye-view-object-mapping-yolov8.avif) | ![VisionEye View Object Mapping with Object Tracking using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/visioneye-object-mapping-with-tracking.avif) | ![VisionEye View with Distance Calculation using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/visioneye-distance-calculation-yolov8.avif) |
| VisionEye View Object Mapping using Ultralytics YOLOv8 | VisionEye View Object Mapping with Object Tracking using Ultralytics YOLOv8 | VisionEye View with Distance Calculation using Ultralytics YOLOv8 |
!!! Example "VisionEye Object Mapping using YOLOv8"
!!! example "VisionEye Object Mapping using YOLOv8"
=== "VisionEye Object Mapping"

@ -34,7 +34,7 @@ Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://gi
| ![PushUps Counting](https://github.com/ultralytics/docs/releases/download/0/pushups-counting.avif) | ![PullUps Counting](https://github.com/ultralytics/docs/releases/download/0/pullups-counting.avif) |
| PushUps Counting | PullUps Counting |
!!! Example "Workouts Monitoring Example"
!!! example "Workouts Monitoring Example"
=== "Workouts Monitoring"

@ -57,7 +57,7 @@ I have read the CLA Document and I sign the CLA
When adding new functions or classes, please include [Google-style docstrings](https://google.github.io/styleguide/pyguide.html). These docstrings provide clear, standardized documentation that helps other developers understand and maintain your code.
!!! Example "Example Docstrings"
!!! example "Example Docstrings"
=== "Google-style"

@ -39,7 +39,7 @@ We take several measures to ensure the privacy and security of the data you entr
[Sentry](https://sentry.io/welcome/) is a developer-centric error tracking software that aids in identifying, diagnosing, and resolving issues in real-time, ensuring the robustness and reliability of applications. Within our package, it plays a crucial role by providing insights through crash reporting, significantly contributing to the stability and ongoing refinement of our software.
!!! Note
!!! note
Crash reporting via Sentry is activated only if the `sentry-sdk` Python package is pre-installed on your system. This package isn't included in the `ultralytics` prerequisites and won't be installed automatically by Ultralytics.
@ -74,7 +74,7 @@ To opt out of sending analytics and crash reports, you can simply set `sync=Fals
To gain insight into the current configuration of your settings, you can view them directly:
!!! Example "View settings"
!!! example "View settings"
=== "Python"
@ -100,7 +100,7 @@ To gain insight into the current configuration of your settings, you can view th
Ultralytics allows users to easily modify their settings. Changes can be performed in the following ways:
!!! Example "Update settings"
!!! example "Update settings"
=== "Python"
@ -159,7 +159,7 @@ Ultralytics collects three primary types of data using Google Analytics:
To opt out of data collection, you can simply set `sync=False` in your YOLO settings. This action stops the transmission of any analytics or crash reports. You can disable data collection using Python or CLI methods:
!!! Example "Update settings"
!!! example "Update settings"
=== "Python"
@ -193,7 +193,7 @@ If the `sentry-sdk` package is pre-installed, Sentry collects detailed crash log
Yes, you can easily view your current settings to understand the configuration of your data collection preferences. Use the following methods to inspect these settings:
!!! Example "View settings"
!!! example "View settings"
=== "Python"

@ -54,7 +54,7 @@ FP16 (or half-precision) quantization converts the model's 32-bit floating-point
INT8 (or 8-bit integer) quantization further reduces the model's size and computation requirements by converting its 32-bit floating-point numbers to 8-bit integers. This quantization method can result in a significant speedup, but it may lead to a slight reduction in mean average precision (mAP) due to the lower numerical precision.
!!! Tip "mAP Reduction in INT8 Models"
!!! tip "mAP Reduction in INT8 Models"
The reduced numerical precision in INT8 models can lead to some loss of information during the quantization process, which may result in a slight decrease in mAP. However, this trade-off is often acceptable considering the substantial performance gains offered by INT8 quantization.

@ -40,7 +40,7 @@ You can download our [COCO8](https://github.com/ultralytics/hub/blob/main/exampl
The dataset YAML is the same standard YOLOv5 and YOLOv8 YAML format.
!!! Example "coco8.yaml"
!!! example "coco8.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/coco8.yaml"

@ -125,7 +125,7 @@ The [Ultralytics HUB](https://www.ultralytics.com/hub) Inference API returns a J
### Classification
!!! Example "Classification Model"
!!! example "Classification Model"
=== "`ultralytics`"
@ -205,7 +205,7 @@ The [Ultralytics HUB](https://www.ultralytics.com/hub) Inference API returns a J
### Detection
!!! Example "Detection Model"
!!! example "Detection Model"
=== "`ultralytics`"
@ -291,7 +291,7 @@ The [Ultralytics HUB](https://www.ultralytics.com/hub) Inference API returns a J
### OBB
!!! Example "OBB Model"
!!! example "OBB Model"
=== "`ultralytics`"
@ -381,7 +381,7 @@ The [Ultralytics HUB](https://www.ultralytics.com/hub) Inference API returns a J
### Segmentation
!!! Example "Segmentation Model"
!!! example "Segmentation Model"
=== "`ultralytics`"
@ -481,7 +481,7 @@ The [Ultralytics HUB](https://www.ultralytics.com/hub) Inference API returns a J
### Pose
!!! Example "Pose Model"
!!! example "Pose Model"
=== "`ultralytics`"

@ -64,7 +64,7 @@ In this step, you have to choose the project in which you want to create your mo
In case you don't have a project created yet, you can set the name of your project in this step and it will be created together with your model.
!!! Info "Info"
!!! info "Info"
You can read more about the available [YOLOv8](https://docs.ultralytics.com/models/yolov8) (and [YOLOv5](https://docs.ultralytics.com/models/yolov5)) architectures in our documentation.

@ -76,7 +76,7 @@ Set the general access to "Unlisted" and click **Save**.
![Ultralytics HUB screenshot of the Share Project dialog with an arrow pointing to the dropdown and one to the Save button](https://github.com/ultralytics/docs/releases/download/0/hub-share-project-dialog.avif)
!!! Warning "Warning"
!!! warning "Warning"
When changing the general access of a project, the general access of the models inside the project will be changed as well.
@ -116,7 +116,7 @@ Navigate to the Project page of the project you want to delete, open the project
![Ultralytics HUB screenshot of the Projects page with an arrow pointing to the Delete option of one of the projects](https://github.com/ultralytics/docs/releases/download/0/hub-delete-project-option-1.avif)
!!! Warning "Warning"
!!! warning "Warning"
When deleting a project, the models inside the project will be deleted as well.

@ -56,7 +56,7 @@ Navigate to the [Teams](https://hub.ultralytics.com/settings?tab=teams) page, op
![Ultralytics HUB screenshot of the Teams page with an arrow pointing to the Delete option of one of the teams](https://github.com/ultralytics/docs/releases/download/0/hub-delete-team-option.avif)
!!! Warning "Warning"
!!! warning "Warning"
When deleting a team, the team can't be restored.

@ -26,7 +26,7 @@ You can bring automation and efficiency to your machine learning workflow by imp
To install the required packages, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -43,7 +43,7 @@ Once you have installed the necessary packages, the next step is to initialize a
Begin by initializing the ClearML SDK in your environment. The 'clearml-init' command starts the setup process and prompts you for the necessary credentials.
!!! Tip "Initial SDK Setup"
!!! tip "Initial SDK Setup"
=== "CLI"
@ -58,7 +58,7 @@ After executing this command, visit the [ClearML Settings page](https://app.clea
Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
!!! Example "Usage"
!!! example "Usage"
=== "Python"

@ -26,7 +26,7 @@ By combining Ultralytics YOLOv8 with Comet ML, you unlock a range of benefits. T
To install the required packages, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -39,7 +39,7 @@ To install the required packages, run:
After installing the required packages, you'll need to sign up, get a [Comet API Key](https://www.comet.com/signup), and configure it.
!!! Tip "Configuring Comet ML"
!!! tip "Configuring Comet ML"
=== "CLI"
@ -62,7 +62,7 @@ If you are using a Google Colab notebook, the code above will prompt you to ente
Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
!!! Example "Usage"
!!! example "Usage"
=== "Python"

@ -60,7 +60,7 @@ Exporting YOLOv8 to CoreML enables optimized, on-device machine learning perform
To install the required package, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -75,7 +75,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -131,7 +131,7 @@ Also, if you'd like to know more about other Ultralytics YOLOv8 integrations, vi
To export your [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models to CoreML format, you'll first need to ensure you have the `ultralytics` package installed. You can install it using:
!!! Example "Installation"
!!! example "Installation"
=== "CLI"
@ -141,7 +141,7 @@ To export your [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics)
Next, you can export the model using the following Python or CLI commands:
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -198,7 +198,7 @@ For more information on performance optimization, visit the [CoreML official doc
Yes, you can run inference directly using the exported CoreML model. Below are the commands for Python and CLI:
!!! Example "Running Inference"
!!! example "Running Inference"
=== "Python"

@ -26,7 +26,7 @@ YOLOv8 training sessions can be effectively monitored with DVCLive. Additionally
To install the required packages, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -43,7 +43,7 @@ Once you have installed the necessary packages, the next step is to set up and c
Begin by initializing a Git repository, as Git plays a crucial role in version control for both your code and DVCLive configurations.
!!! Tip "Initial Environment Setup"
!!! tip "Initial Environment Setup"
=== "CLI"
@ -176,7 +176,7 @@ Additionally, explore more integrations and capabilities of Ultralytics by visit
Integrating DVCLive with Ultralytics YOLOv8 is straightforward. Start by installing the necessary packages:
!!! Example "Installation"
!!! example "Installation"
=== "CLI"
@ -186,7 +186,7 @@ Integrating DVCLive with Ultralytics YOLOv8 is straightforward. Start by install
Next, initialize a Git repository and configure DVCLive in your project:
!!! Example "Initial Environment Setup"
!!! example "Initial Environment Setup"
=== "CLI"
@ -258,7 +258,7 @@ These steps ensure proper version control and setup for experiment tracking. For
DVCLive offers powerful tools to visualize the results of YOLOv8 experiments. Here's how you can generate comparative plots:
!!! Example "Generate Comparative Plots"
!!! example "Generate Comparative Plots"
=== "CLI"

@ -50,7 +50,7 @@ You can expand model compatibility and deployment flexibility by converting YOLO
To install the required package, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -65,7 +65,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can ensure that the model you select supports export functionality [here](../modes/export.md).
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -123,7 +123,7 @@ Also, for more information on other Ultralytics YOLOv8 integrations, please visi
To export a YOLOv8 model to TFLite Edge TPU format, you can follow these steps:
!!! Example "Usage"
!!! example "Usage"
=== "Python"

@ -56,7 +56,7 @@ Once you do so, a notebook environment will open for you to load your data set.
Next, you can install and import the necessary Python libraries.
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -71,7 +71,7 @@ For detailed instructions and best practices related to the installation process
Then, you can import the needed packages.
!!! Example "Import Relevant Libraries"
!!! example "Import Relevant Libraries"
=== "Python"
@ -92,7 +92,7 @@ We can load the dataset directly into the notebook using the Kaggle API. First,
Copy and paste your Kaggle username and API key into the following code. Then run the code to install the API and load the dataset into Watsonx.
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -103,7 +103,7 @@ Copy and paste your Kaggle username and API key into the following code. Then ru
After installing Kaggle, we can load the dataset into Watsonx.
!!! Example "Load the Data"
!!! example "Load the Data"
=== "Python"
@ -155,7 +155,7 @@ But, YOLO models by default require separate images and labels in subdirectories
To reorganize the data set directory, we can run the following script:
!!! Example "Preprocess the Data"
!!! example "Preprocess the Data"
=== "Python"
@ -207,7 +207,7 @@ names:
Run the following script to delete the current contents of `config.yaml` and replace them with the above contents, which reflect our new dataset directory structure. Be certain to replace the `work_dir` portion of the root directory path in line 4 with your own working directory path retrieved earlier. Leave the train, val, and test subdirectory definitions unchanged. Also, do not change `{work_dir}` in line 23 of the code.
!!! Example "Edit the .yaml File"
!!! example "Edit the .yaml File"
=== "Python"
@ -240,7 +240,7 @@ Run the following script to delete the current contents of config.yaml and repla
Run the following command-line code to fine-tune a pretrained default YOLOv8 model.
!!! Example "Train the YOLOv8 model"
!!! example "Train the YOLOv8 model"
=== "CLI"
@ -263,7 +263,7 @@ For a detailed understanding of the model training process and best practices, r
We can now run inference to test the performance of our fine-tuned model:
!!! Example "Test the YOLOv8 model"
!!! example "Test the YOLOv8 model"
=== "CLI"
@ -279,7 +279,7 @@ The parameter `conf=0.5` informs the model to ignore all predictions with a conf
Lastly, `iou=.5` directs the model to ignore boxes in the same class with an overlap of 50% or greater. It helps to reduce potential duplicate boxes generated for the same object.
We can load the images with predicted bounding-box overlays to view how our model performs on a handful of images.
!!! Example "Display Predictions"
!!! example "Display Predictions"
=== "Python"

@ -54,7 +54,7 @@ JupyterLab makes it easy to experiment with YOLOv8. To get started, follow these
First, you need to install JupyterLab. Open your terminal and run the command:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -71,7 +71,7 @@ Next, download the [tutorial.ipynb](https://github.com/ultralytics/ultralytics/b
Navigate to the directory where you saved the notebook file using your terminal. Then, run the following command to launch JupyterLab:
!!! Example "Usage"
!!! example "Usage"
=== "CLI"

@ -34,7 +34,7 @@ pip install mlflow
Make sure that MLflow logging is enabled in Ultralytics settings. Usually, this is controlled by the settings `mlflow` key. See the [settings](../quickstart.md#ultralytics-settings) page for more info.
!!! Example "Update Ultralytics MLflow Settings"
!!! example "Update Ultralytics MLflow Settings"
=== "Python"
@ -130,7 +130,7 @@ pip install mlflow
Next, enable MLflow logging in Ultralytics settings. This can be controlled using the `mlflow` key. For more information, see the [settings guide](../quickstart.md#ultralytics-settings).
!!! Example "Update Ultralytics MLflow Settings"
!!! example "Update Ultralytics MLflow Settings"
=== "Python"

@ -52,7 +52,7 @@ You can expand model compatibility and deployment flexibility by converting YOLO
To install the required packages, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -67,7 +67,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can ensure that the model you select supports export functionality [here](../modes/export.md).
!!! Example "Usage"
!!! example "Usage"
=== "Python"

@ -70,7 +70,7 @@ Deploying YOLOv8 with Neural Magic's DeepSparse involves a few straightforward s
To install the required packages, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -83,7 +83,7 @@ To install the required packages, run:
DeepSparse Engine requires YOLOv8 models in ONNX format. Exporting your model to this format is essential for compatibility with DeepSparse. Use the following command to export YOLOv8 models:
!!! Tip "Model Export"
!!! tip "Model Export"
=== "CLI"
@ -98,7 +98,7 @@ This command will save the `yolov8n.onnx` model to your disk.
With your YOLOv8 model in ONNX format, you can deploy and run inferences using DeepSparse. This can be done easily with their intuitive Python API:
!!! Tip "Deploying and Running Inferences"
!!! tip "Deploying and Running Inferences"
=== "Python"
@ -120,7 +120,7 @@ With your YOLOv8 model in ONNX format, you can deploy and run inferences using D
It's important to check that your YOLOv8 model is performing optimally on DeepSparse. You can benchmark your model's performance to analyze throughput and latency:
!!! Tip "Benchmarking"
!!! tip "Benchmarking"
=== "CLI"
@ -133,7 +133,7 @@ It's important to check that your YOLOv8 model is performing optimally on DeepSp
DeepSparse provides additional features for practical integration of YOLOv8 in applications, such as image annotation and dataset evaluation.
!!! Tip "Additional Features"
!!! tip "Additional Features"
=== "CLI"

@ -68,7 +68,7 @@ You can expand model compatibility and deployment flexibility by converting YOLO
To install the required package, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -83,7 +83,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -139,7 +139,7 @@ Also, if you'd like to know more about other Ultralytics YOLOv8 integrations, vi
To export your YOLOv8 models to ONNX format using Ultralytics, follow these steps:
!!! Example "Usage"
!!! example "Usage"
=== "Python"

@ -27,7 +27,7 @@ OpenVINO, short for Open Visual Inference & Neural Network Optimization toolkit,
Export a YOLOv8n model to OpenVINO format and run inference with the exported model.
!!! Example
!!! example
=== "Python"
@ -105,7 +105,7 @@ For more detailed steps and code snippets, refer to the [OpenVINO documentation]
YOLOv8 benchmarks below were run by the Ultralytics team on 4 different model formats measuring speed and accuracy: PyTorch, TorchScript, ONNX and OpenVINO. Benchmarks were run on Intel Flex and Arc GPUs, and on Intel Xeon CPUs at FP32 precision (with the `half=False` argument).
!!! Note
!!! note
The benchmarking results below are for reference and might vary based on the exact hardware and software configuration of a system, as well as the current workload of the system at the time the benchmarks are run.
@ -255,7 +255,7 @@ Benchmarks below run on 13th Gen Intel® Core® i7-13700H CPU at FP32 precision.
To reproduce the Ultralytics benchmarks above on all export [formats](../modes/export.md) run this code:
!!! Example
!!! example
=== "Python"
@ -294,7 +294,7 @@ For more detailed information and instructions on using OpenVINO, refer to the [
Exporting YOLOv8 models to the OpenVINO format can significantly enhance CPU speed and enable GPU and NPU accelerations on Intel hardware. To export, you can use either Python or CLI as shown below:
!!! Example
!!! example
=== "Python"
@ -332,7 +332,7 @@ For detailed performance comparisons, visit our [benchmarks section](#openvino-y
After exporting a YOLOv8 model to OpenVINO format, you can run inference using Python or CLI:
!!! Example
!!! example
=== "Python"
@ -369,7 +369,7 @@ For in-depth performance analysis, check our detailed [YOLOv8 benchmarks](#openv
Yes, you can benchmark YOLOv8 models in various formats including PyTorch, TorchScript, ONNX, and OpenVINO. Use the following code snippet to run benchmarks on your chosen dataset:
!!! Example
!!! example
=== "Python"

@ -54,7 +54,7 @@ Converting YOLOv8 models to the PaddlePaddle format can improve execution flexib
To install the required package, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -69,7 +69,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can ensure that the model you select supports export functionality [here](../modes/export.md).
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -127,7 +127,7 @@ Want to explore more ways to integrate your Ultralytics YOLOv8 models? Our [inte
Exporting Ultralytics YOLOv8 models to PaddlePaddle format is straightforward. You can use the `export` method of the YOLO class to perform this exportation. Here is an example using Python:
!!! Example "Usage"
!!! example "Usage"
=== "Python"

@ -28,7 +28,7 @@ YOLOv8 also allows optional integration with [Weights & Biases](https://wandb.ai
To install the required packages, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -42,7 +42,7 @@ To install the required packages, run:
## Usage
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -103,7 +103,7 @@ The following table lists the default search space parameters for hyperparameter
In this example, we demonstrate how to use a custom search space for hyperparameter tuning with Ray Tune and YOLOv8. By providing a custom search space, you can focus the tuning process on specific hyperparameters of interest.
!!! Example "Usage"
!!! example "Usage"
```python
from ultralytics import YOLO
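# (The remainder of this example is elided in the diff. The lines below are a sketch of
# the documented Ray Tune integration; the `space` contents and dataset are illustrative.)
from ray import tune

model = YOLO("yolov8n.pt")
result_grid = model.tune(
    data="coco8.yaml",
    space={"lr0": tune.uniform(1e-5, 1e-1)},  # custom search space for the initial learning rate
    use_ray=True,
)
```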

@ -8,7 +8,7 @@ keywords: Roboflow, YOLOv8, data labeling, computer vision, model training, mode
[Roboflow](https://roboflow.com/?ref=ultralytics) has everything you need to build and deploy computer vision models. Connect Roboflow at any step in your pipeline with APIs and SDKs, or use the end-to-end interface to automate the entire process from image to inference. Whether you're in need of [data labeling](https://roboflow.com/annotate?ref=ultralytics), [model training](https://roboflow.com/train?ref=ultralytics), or [model deployment](https://roboflow.com/deploy?ref=ultralytics), Roboflow gives you building blocks to bring custom computer vision solutions to your project.
!!! Question "Licensing"
!!! question "Licensing"
Ultralytics offers two licensing options:

@ -26,7 +26,7 @@ Using TensorBoard while training YOLOv8 models is straightforward and offers sig
To install the required package, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -43,7 +43,7 @@ For detailed instructions and best practices related to the installation process
When using Google Colab, it's important to set up TensorBoard before starting your training code:
!!! Example "Configure TensorBoard for Google Colab"
!!! example "Configure TensorBoard for Google Colab"
=== "Python"
@ -56,7 +56,7 @@ When using Google Colab, it's important to set up TensorBoard before starting yo
Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -189,7 +189,7 @@ These visualizations are essential for tracking model performance and making nec
Yes, you can use TensorBoard in a Google Colab environment to train YOLOv8 models. Here's a quick setup:
!!! Example "Configure TensorBoard for Google Colab"
!!! example "Configure TensorBoard for Google Colab"
=== "Python"

@ -62,7 +62,7 @@ You can improve execution efficiency and optimize performance by converting YOLO
To install the required package, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -77,7 +77,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
!!! Example "Usage"
!!! example "Usage"
=== "Python"

@ -58,7 +58,7 @@ You can convert your YOLOv8 object detection model to the TF GraphDef format, wh
To install the required package, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -73,7 +73,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can ensure that the model you select supports export functionality [here](../modes/export.md).
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -131,7 +131,7 @@ For more information on integrating Ultralytics YOLOv8 with other platforms and
Ultralytics YOLOv8 models can be exported to TensorFlow GraphDef (TF GraphDef) format seamlessly. This format provides a serialized, platform-independent representation of the model, ideal for deploying in varied environments like mobile and web. To export a YOLOv8 model to TF GraphDef, follow these steps:
!!! Example "Usage"
!!! example "Usage"
=== "Python"

@ -52,7 +52,7 @@ By exporting YOLOv8 models to the TF SavedModel format, you enhance their adapta
To install the required package, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -67,7 +67,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can ensure that the model you select supports export functionality [here](../modes/export.md).
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -125,7 +125,7 @@ For more information on integrating Ultralytics YOLOv8 with other platforms and
Exporting an Ultralytics YOLO model to the TensorFlow SavedModel format is straightforward. You can use either Python or CLI to achieve this:
!!! Example "Exporting YOLOv8 to TF SavedModel"
!!! example "Exporting YOLOv8 to TF SavedModel"
=== "Python"

@ -50,7 +50,7 @@ You can expand model compatibility and deployment flexibility by converting YOLO
To install the required package, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -65,7 +65,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can ensure that the model you select supports export functionality [here](../modes/export.md).
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -123,7 +123,7 @@ For more information on integrating Ultralytics YOLOv8 with other platforms and
Exporting Ultralytics YOLOv8 models to TensorFlow.js (TF.js) format is straightforward. You can follow these steps:
!!! Example "Usage"
!!! example "Usage"
=== "Python"

@ -56,7 +56,7 @@ You can improve on-device model execution efficiency and optimize performance by
To install the required packages, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -71,7 +71,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can ensure that the model you select supports export functionality [here](../modes/export.md).
!!! Example "Usage"
!!! example "Usage"
=== "Python"

@ -60,7 +60,7 @@ Exporting YOLOv8 models to TorchScript makes it easier to use them in different
To install the required package, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -75,7 +75,7 @@ For detailed instructions and best practices related to the installation process
Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can ensure that the model you select supports export functionality [here](../modes/export.md).
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -135,7 +135,7 @@ Exporting an Ultralytics YOLOv8 model to TorchScript allows for flexible, cross-
To export a YOLOv8 model to TorchScript, you can use the following example code:
!!! Example "Usage"
!!! example "Usage"
=== "Python"
@ -182,7 +182,7 @@ For more insights into deployment, visit the [PyTorch Mobile Documentation](http
To install the required package for exporting YOLOv8 models, use the following command:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"

@ -39,7 +39,7 @@ Want to let us know what you use for developing code? Head over to our Discourse
## Installing the Extension
!!! Note
!!! note
Any code environment that allows installing VS Code extensions _should be_ compatible with the Ultralytics-snippets extension. After publishing the extension, it was discovered that [neovim](https://neovim.io/) can be made compatible with VS Code extensions. To learn more, see the [`neovim` install section][neovim install] of the Readme in the [Ultralytics-Snippets repository][repo].
@ -127,7 +127,7 @@ These are the current snippet categories available to the Ultralytics-snippets e
The `ultra.examples` snippets are useful for anyone looking to learn the basics of working with Ultralytics YOLO. Example snippets are intended to run once inserted (some have dropdown options as well). An example of this is shown in the animation at the [top] of this page, where, after the snippet is inserted, all code is selected and run interactively using <kbd>Shift ⇑</kbd>+<kbd>Enter ↵</kbd>.
!!! Example
!!! example
Just like the animation shows at the [top] of this page, you can use the snippet `ultra.example-yolo-predict` to insert the following code example. Once inserted, the only configurable option is the model scale, which can be any one of: `n`, `s`, `m`, `l`, or `x`.
@ -146,7 +146,7 @@ The `ultra.examples` snippets are to useful for anyone looking to learn how to g
The aim of snippets other than `ultra.examples` is to make development easier and quicker when working with Ultralytics. A common code block used in many projects is to iterate over the list of `Results` returned from the model [predict] method. The `ultra.result-loop` snippet can help with this.
!!! Example
!!! example
Using the `ultra.result-loop` will insert the following default code (including comments).
@ -170,7 +170,7 @@ However, since Ultralytics supports numerous [tasks], when [working with inferen
There are over 💯 keyword arguments for all of the various Ultralytics [tasks] and [modes]! That's a lot to remember, and it can be easy to forget whether the argument is `save_frame` or `save_frames` (it's definitely `save_frames`, by the way). This is where the `ultra.kwargs` snippets can help out!
!!! Example
!!! example
To insert the [predict] method, including all [inference arguments], use `ultra.kwargs-predict`, which will insert the following code (including comments).

@ -37,7 +37,7 @@ You can use Weights & Biases to bring efficiency and automation to your YOLOv8 t
To install the required packages, run:
!!! Tip "Installation"
!!! tip "Installation"
=== "CLI"
@ -54,7 +54,7 @@ After installing the necessary packages, the next step is to set up your Weights
Start by initializing the Weights & Biases environment in your workspace. You can do this by running the following command and following the prompted instructions.
!!! Tip "Initial SDK Setup"
!!! tip "Initial SDK Setup"
=== "CLI"
@ -70,7 +70,7 @@ Navigate to the Weights & Biases authorization page to create and retrieve your
Before diving into the usage instructions for YOLOv8 model training with Weights & Biases, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
!!! Example "Usage: Training YOLOv8 with Weights & Biases"
!!! example "Usage: Training YOLOv8 with Weights & Biases"
=== "Python"

Some files were not shown because too many files have changed in this diff.
