Optimize Docs images (#15900)

Signed-off-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
pull/15182/merge
Muhammad Rizwan Munawar 3 months ago committed by GitHub
parent 0f9f7b806c
commit cfebb5f26b
1. docs/README.md (4 changes)
2. docs/en/datasets/classify/caltech101.md (2 changes)
3. docs/en/datasets/classify/caltech256.md (2 changes)
4. docs/en/datasets/classify/cifar10.md (2 changes)
5. docs/en/datasets/classify/cifar100.md (2 changes)
6. docs/en/datasets/classify/fashion-mnist.md (2 changes)
7. docs/en/datasets/classify/imagenet.md (2 changes)
8. docs/en/datasets/classify/imagenet10.md (2 changes)
9. docs/en/datasets/classify/imagenette.md (2 changes)
10. docs/en/datasets/classify/imagewoof.md (2 changes)
11. docs/en/datasets/detect/african-wildlife.md (2 changes)
12. docs/en/datasets/detect/argoverse.md (2 changes)
13. docs/en/datasets/detect/brain-tumor.md (2 changes)
14. docs/en/datasets/detect/coco.md (2 changes)
15. docs/en/datasets/detect/coco8.md (2 changes)
16. docs/en/datasets/detect/globalwheat2020.md (2 changes)
17. docs/en/datasets/detect/index.md (6 changes)
18. docs/en/datasets/detect/lvis.md (6 changes)
19. docs/en/datasets/detect/objects365.md (2 changes)
20. docs/en/datasets/detect/open-images-v7.md (4 changes)
21. docs/en/datasets/detect/roboflow-100.md (4 changes)
22. docs/en/datasets/detect/signature.md (2 changes)
23. docs/en/datasets/detect/sku-110k.md (4 changes)
24. docs/en/datasets/detect/visdrone.md (2 changes)
25. docs/en/datasets/detect/voc.md (2 changes)
26. docs/en/datasets/detect/xview.md (2 changes)
27. docs/en/datasets/explorer/dashboard.md (10 changes)
28. docs/en/datasets/explorer/index.md (4 changes)
29. docs/en/datasets/index.md (2 changes)
30. docs/en/datasets/obb/dota-v2.md (4 changes)
31. docs/en/datasets/obb/dota8.md (2 changes)
32. docs/en/datasets/obb/index.md (2 changes)
33. docs/en/datasets/pose/coco.md (4 changes)
34. docs/en/datasets/pose/coco8-pose.md (2 changes)
35. docs/en/datasets/pose/tiger-pose.md (2 changes)
36. docs/en/datasets/segment/carparts-seg.md (2 changes)
37. docs/en/datasets/segment/coco.md (2 changes)
38. docs/en/datasets/segment/coco8-seg.md (2 changes)
39. docs/en/datasets/segment/crack-seg.md (2 changes)
40. docs/en/datasets/segment/package-seg.md (2 changes)
41. docs/en/guides/analytics.md (6 changes)
42. docs/en/guides/azureml-quickstart.md (6 changes)
43. docs/en/guides/conda-quickstart.md (2 changes)
44. docs/en/guides/coral-edge-tpu-on-raspberry-pi.md (2 changes)
45. docs/en/guides/data-collection-and-annotation.md (6 changes)
46. docs/en/guides/deepstream-nvidia-jetson.md (6 changes)
47. docs/en/guides/defining-project-goals.md (8 changes)
48. docs/en/guides/distance-calculation.md (6 changes)
49. docs/en/guides/docker-quickstart.md (2 changes)
50. docs/en/guides/heatmaps.md (8 changes)
51. docs/en/guides/hyperparameter-tuning.md (6 changes)
52. docs/en/guides/instance-segmentation-and-tracking.md (8 changes)
53. docs/en/guides/isolating-segmentation-objects.md (10 changes)
54. docs/en/guides/kfold-cross-validation.md (2 changes)
55. docs/en/guides/model-deployment-practices.md (4 changes)
56. docs/en/guides/model-evaluation-insights.md (4 changes)
57. docs/en/guides/model-monitoring-and-maintenance.md (8 changes)
58. docs/en/guides/model-testing.md (2 changes)
59. docs/en/guides/model-training-tips.md (6 changes)
60. docs/en/guides/nvidia-jetson.md (6 changes)
61. docs/en/guides/object-counting.md (8 changes)
62. docs/en/guides/object-cropping.md (8 changes)
63. docs/en/guides/optimizing-openvino-latency-vs-throughput-modes.md (2 changes)
64. docs/en/guides/parking-management.md (10 changes)
65. docs/en/guides/preprocessing_annotated_data.md (4 changes)
66. docs/en/guides/queue-management.md (8 changes)
67. docs/en/guides/raspberry-pi.md (4 changes)
68. docs/en/guides/region-counting.md (8 changes)
69. docs/en/guides/ros-quickstart.md (8 changes)
70. docs/en/guides/sahi-tiled-inference.md (6 changes)
71. docs/en/guides/security-alarm-system.md (4 changes)
72. docs/en/guides/speed-estimation.md (8 changes)
73. docs/en/guides/steps-of-a-cv-project.md (12 changes)
74. docs/en/guides/streamlit-live-inference.md (8 changes)
75. docs/en/guides/view-results-in-terminal.md (6 changes)
76. docs/en/guides/vision-eye.md (8 changes)
77. docs/en/guides/workouts-monitoring.md (10 changes)
78. docs/en/guides/yolo-common-issues.md (2 changes)
79. docs/en/guides/yolo-thread-safe-inference.md (2 changes)
80. docs/en/help/contributing.md (2 changes)
81. docs/en/hub/app/android.md (2 changes)
82. docs/en/hub/app/index.md (2 changes)
83. docs/en/hub/app/ios.md (2 changes)
84. docs/en/hub/cloud-training.md (22 changes)
85. docs/en/hub/datasets.md (44 changes)
86. docs/en/hub/index.md (2 changes)
87. docs/en/hub/inference-api.md (6 changes)
88. docs/en/hub/integrations.md (24 changes)
89. docs/en/hub/models.md (78 changes)
90. docs/en/hub/pro.md (16 changes)
91. docs/en/hub/projects.md (50 changes)
92. docs/en/hub/quickstart.md (22 changes)
93. docs/en/hub/teams.md (60 changes)
94. docs/en/index.md (2 changes)
95. docs/en/integrations/amazon-sagemaker.md (6 changes)
96. docs/en/integrations/clearml.md (4 changes)
97. docs/en/integrations/comet.md (8 changes)
98. docs/en/integrations/coreml.md (4 changes)
99. docs/en/integrations/dvc.md (4 changes)
100. docs/en/integrations/edge-tpu.md (2 changes)

Some files were not shown because too many files have changed in this diff.

@@ -107,7 +107,7 @@ Choose a hosting provider and deployment method for your MkDocs documentation:
 - Update the "Custom domain" in your repository's settings for a personalized URL.
-![196814117-fc16e711-d2be-4722-9536-b7c6d78fd167](https://user-images.githubusercontent.com/26833433/210150206-9e86dcd7-10af-43e4-9eb2-9518b3799eac.png)
+![MkDocs deployment example](https://github.com/ultralytics/docs/releases/download/0/mkdocs-deployment-example.avif)
 - For detailed deployment guidance, consult the [MkDocs documentation](https://www.mkdocs.org/user-guide/deploying-your-docs/).
@@ -115,7 +115,7 @@ Choose a hosting provider and deployment method for your MkDocs documentation:
 We cherish the community's input as it drives Ultralytics open-source initiatives. Dive into the [Contributing Guide](https://docs.ultralytics.com/help/contributing) and share your thoughts via our [Survey](https://ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey). A heartfelt thank you 🙏 to each contributor!
-![Ultralytics open-source contributors](https://github.com/ultralytics/assets/raw/main/im/image-contributors.png)
+![Ultralytics open-source contributors](https://github.com/ultralytics/docs/releases/download/0/ultralytics-open-source-contributors.avif)
 ## 📜 License

@@ -53,7 +53,7 @@ To train a YOLO model on the Caltech-101 dataset for 100 epochs, you can use the
 The Caltech-101 dataset contains high-quality color images of various objects, providing a well-structured dataset for object recognition tasks. Here are some examples of images from the dataset:
-![Dataset sample image](https://user-images.githubusercontent.com/26833433/239366386-44171121-b745-4206-9b59-a3be41e16089.png)
+![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/caltech101-sample-image.avif)
 The example showcases the variety and complexity of the objects in the Caltech-101 dataset, emphasizing the significance of a diverse dataset for training robust object recognition models.

@@ -64,7 +64,7 @@ To train a YOLO model on the Caltech-256 dataset for 100 epochs, you can use the
 The Caltech-256 dataset contains high-quality color images of various objects, providing a comprehensive dataset for object recognition tasks. Here are some examples of images from the dataset ([credit](https://ml4a.github.io/demos/tsne_viewer.html)):
-![Dataset sample image](https://user-images.githubusercontent.com/26833433/239365061-1e5f7857-b1e8-44ca-b3d7-d0befbcd33f9.jpg)
+![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/caltech256-sample-image.avif)
 The example showcases the diversity and complexity of the objects in the Caltech-256 dataset, emphasizing the importance of a varied dataset for training robust object recognition models.

@@ -67,7 +67,7 @@ To train a YOLO model on the CIFAR-10 dataset for 100 epochs with an image size
 The CIFAR-10 dataset contains color images of various objects, providing a well-structured dataset for image classification tasks. Here are some examples of images from the dataset:
-![Dataset sample image](https://miro.medium.com/max/1100/1*SZnidBt7CQ4Xqcag6rd8Ew.png)
+![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/cifar10-sample-image.avif)
 The example showcases the variety and complexity of the objects in the CIFAR-10 dataset, highlighting the importance of a diverse dataset for training robust image classification models.

@@ -56,7 +56,7 @@ To train a YOLO model on the CIFAR-100 dataset for 100 epochs with an image size
 The CIFAR-100 dataset contains color images of various objects, providing a well-structured dataset for image classification tasks. Here are some examples of images from the dataset:
-![Dataset sample image](https://user-images.githubusercontent.com/26833433/239363319-62ebf02f-7469-4178-b066-ccac3cd334db.jpg)
+![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/cifar100-sample-image.avif)
 The example showcases the variety and complexity of the objects in the CIFAR-100 dataset, highlighting the importance of a diverse dataset for training robust image classification models.

@@ -81,7 +81,7 @@ To train a CNN model on the Fashion-MNIST dataset for 100 epochs with an image s
 The Fashion-MNIST dataset contains grayscale images of Zalando's article images, providing a well-structured dataset for image classification tasks. Here are some examples of images from the dataset:
-![Dataset sample image](https://user-images.githubusercontent.com/26833433/239359139-ce0a434e-9056-43e0-a306-3214f193dcce.png)
+![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/fashion-mnist-sample.avif)
 The example showcases the variety and complexity of the images in the Fashion-MNIST dataset, highlighting the importance of a diverse dataset for training robust image classification models.

@@ -66,7 +66,7 @@ To train a deep learning model on the ImageNet dataset for 100 epochs with an im
 The ImageNet dataset contains high-resolution images spanning thousands of object categories, providing a diverse and extensive dataset for training and evaluating computer vision models. Here are some examples of images from the dataset:
-![Dataset sample images](https://user-images.githubusercontent.com/26833433/239280348-3d8f30c7-6f05-4dda-9cfe-d62ad9faecc9.png)
+![Dataset sample images](https://github.com/ultralytics/docs/releases/download/0/imagenet-sample-images.avif)
 The example showcases the variety and complexity of the images in the ImageNet dataset, highlighting the importance of a diverse dataset for training robust computer vision models.

@@ -52,7 +52,7 @@ To test a deep learning model on the ImageNet10 dataset with an image size of 22
 The ImageNet10 dataset contains a subset of images from the original ImageNet dataset. These images are chosen to represent the first 10 classes in the dataset, providing a diverse yet compact dataset for quick testing and evaluation.
-![Dataset sample images](https://user-images.githubusercontent.com/26833433/239689723-16f9b4a7-becc-4deb-b875-d3e5c28eb03b.png) The example showcases the variety and complexity of the images in the ImageNet10 dataset, highlighting its usefulness for sanity checks and quick testing of computer vision models.
+![Dataset sample images](https://github.com/ultralytics/docs/releases/download/0/imagenet10-sample-images.avif) The example showcases the variety and complexity of the images in the ImageNet10 dataset, highlighting its usefulness for sanity checks and quick testing of computer vision models.
 ## Citations and Acknowledgments

@@ -54,7 +54,7 @@ To train a model on the ImageNette dataset for 100 epochs with a standard image
 The ImageNette dataset contains colored images of various objects and scenes, providing a diverse dataset for image classification tasks. Here are some examples of images from the dataset:
-![Dataset sample image](https://docs.fast.ai/22_tutorial.imagenette_files/figure-html/cell-21-output-1.png)
+![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/imagenette-sample-image.avif)
 The example showcases the variety and complexity of the images in the ImageNette dataset, highlighting the importance of a diverse dataset for training robust image classification models.

@@ -89,7 +89,7 @@ It's important to note that using smaller images will likely yield lower perform
 The ImageWoof dataset contains colorful images of various dog breeds, providing a challenging dataset for image classification tasks. Here are some examples of images from the dataset:
-![Dataset sample image](https://user-images.githubusercontent.com/26833433/239357533-ec833254-4351-491b-8cb3-59578ea5d0b2.png)
+![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/imagewoof-dataset-sample.avif)
 The example showcases the subtle differences and similarities among the different dog breeds in the ImageWoof dataset, highlighting the complexity and difficulty of the classification task.

@@ -91,7 +91,7 @@ To train a YOLOv8n model on the African wildlife dataset for 100 epochs with an
 The African wildlife dataset comprises a wide variety of images showcasing diverse animal species and their natural habitats. Below are examples of images from the dataset, each accompanied by its corresponding annotations.
-![African wildlife dataset sample image](https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/919f8190-ccf3-4a96-a5f1-55d9eebc77ec)
+![African wildlife dataset sample image](https://github.com/ultralytics/docs/releases/download/0/african-wildlife-dataset-sample.avif)
 - **Mosaiced Image**: Here, we present a training batch consisting of mosaiced dataset images. Mosaicing, a training technique, combines multiple images into one, enriching batch diversity. This method helps enhance the model's ability to generalize across different object sizes, aspect ratios, and contexts.

@@ -70,7 +70,7 @@ To train a YOLOv8n model on the Argoverse dataset for 100 epochs with an image s
 The Argoverse dataset contains a diverse set of sensor data, including camera images, LiDAR point clouds, and HD map information, providing rich context for autonomous driving tasks. Here are some examples of data from the dataset, along with their corresponding annotations:
-![Dataset sample image](https://www.argoverse.org/assets/images/reference_images/av2_ground_height.png)
+![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/argoverse-3d-tracking-sample.avif)
 - **Argoverse 3D Tracking**: This image demonstrates an example of 3D object tracking, where objects are annotated with 3D bounding boxes. The dataset provides LiDAR point clouds and camera images to facilitate the development of models for this task.

@@ -90,7 +90,7 @@ To train a YOLOv8n model on the brain tumor dataset for 100 epochs with an image
 The brain tumor dataset encompasses a wide array of images featuring diverse object categories and intricate scenes. Presented below are examples of images from the dataset, accompanied by their respective annotations.
-![Brain tumor dataset sample image](https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/1741cbf5-2462-4e9a-b0b9-4a07d76cf7ef)
+![Brain tumor dataset sample image](https://github.com/ultralytics/docs/releases/download/0/brain-tumor-dataset-sample-image.avif)
 - **Mosaiced Image**: Displayed here is a training batch comprising mosaiced dataset images. Mosaicing, a training technique, consolidates multiple images into one, enhancing batch diversity. This approach aids in improving the model's capacity to generalize across various object sizes, aspect ratios, and contexts.

@@ -87,7 +87,7 @@ To train a YOLOv8n model on the COCO dataset for 100 epochs with an image size o
 The COCO dataset contains a diverse set of images with various object categories and complex scenes. Here are some examples of images from the dataset, along with their corresponding annotations:
-![Dataset sample image](https://user-images.githubusercontent.com/26833433/236811818-5b566576-1e92-42fa-9462-4b6a848abe89.jpg)
+![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/mosaiced-coco-dataset-sample.avif)
 - **Mosaiced Image**: This image demonstrates a training batch composed of mosaiced dataset images. Mosaicing is a technique used during training that combines multiple images into a single image to increase the variety of objects and scenes within each training batch. This helps improve the model's ability to generalize to different object sizes, aspect ratios, and contexts.
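Reviewer note: many hunks in this diff share the same context line about training for 100 epochs at an image size of 640. For readers skimming the diff, a minimal sketch of that standard Ultralytics training call (as documented on these pages) looks like:

```python
from ultralytics import YOLO

# Fine-tune a pretrained YOLOv8n model on COCO for 100 epochs at 640 pixels,
# the recipe referenced throughout the dataset pages touched by this diff.
model = YOLO("yolov8n.pt")
results = model.train(data="coco.yaml", epochs=100, imgsz=640)
```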

@@ -62,7 +62,7 @@ To train a YOLOv8n model on the COCO8 dataset for 100 epochs with an image size
 Here are some examples of images from the COCO8 dataset, along with their corresponding annotations:
-<img src="https://user-images.githubusercontent.com/26833433/236818348-e6260a3d-0454-436b-83a9-de366ba07235.jpg" alt="Dataset sample image" width="800">
+<img src="https://github.com/ultralytics/docs/releases/download/0/mosaiced-training-batch-1.avif" alt="Dataset sample image" width="800">
 - **Mosaiced Image**: This image demonstrates a training batch composed of mosaiced dataset images. Mosaicing is a technique used during training that combines multiple images into a single image to increase the variety of objects and scenes within each training batch. This helps improve the model's ability to generalize to different object sizes, aspect ratios, and contexts.

@@ -65,7 +65,7 @@ To train a YOLOv8n model on the Global Wheat Head Dataset for 100 epochs with an
 The Global Wheat Head Dataset contains a diverse set of outdoor field images, capturing the natural variability in wheat head appearances, environments, and conditions. Here are some examples of data from the dataset, along with their corresponding annotations:
-![Dataset sample image](https://i.ytimg.com/vi/yqvMuw-uedU/maxresdefault.jpg)
+![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/wheat-head-detection-sample.avif)
 - **Wheat Head Detection**: This image demonstrates an example of wheat head detection, where wheat heads are annotated with bounding boxes. The dataset provides a variety of images to facilitate the development of models for this task.

@@ -34,15 +34,15 @@ names:
 Labels for this format should be exported to YOLO format with one `*.txt` file per image. If there are no objects in an image, no `*.txt` file is required. The `*.txt` file should be formatted with one row per object in `class x_center y_center width height` format. Box coordinates must be in **normalized xywh** format (from 0 to 1). If your boxes are in pixels, you should divide `x_center` and `width` by image width, and `y_center` and `height` by image height. Class numbers should be zero-indexed (start with 0).
-<p align="center"><img width="750" src="https://user-images.githubusercontent.com/26833433/91506361-c7965000-e886-11ea-8291-c72b98c25eec.jpg" alt="Example labelled image"></p>
+<p align="center"><img width="750" src="https://github.com/ultralytics/docs/releases/download/0/two-persons-tie.avif" alt="Example labelled image"></p>
 The label file corresponding to the above image contains 2 persons (class `0`) and a tie (class `27`):
-<p align="center"><img width="428" src="https://user-images.githubusercontent.com/26833433/112467037-d2568c00-8d66-11eb-8796-55402ac0d62f.png" alt="Example label file"></p>
+<p align="center"><img width="428" src="https://github.com/ultralytics/docs/releases/download/0/two-persons-tie-1.avif" alt="Example label file"></p>
 When using the Ultralytics YOLO format, organize your training and validation images and labels as shown in the [COCO8 dataset](coco8.md) example below.
-<p align="center"><img width="800" src="https://github.com/IvorZhu331/ultralytics/assets/26833433/a55ec82d-2bb5-40f9-ac5c-f935e7eb9f07" alt="Example dataset directory structure"></p>
+<p align="center"><img width="800" src="https://github.com/ultralytics/docs/releases/download/0/two-persons-tie-2.avif" alt="Example dataset directory structure"></p>
 ## Usage
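Reviewer note: to make the normalization rule in the context line above concrete, here is a small sketch (image size and box values invented) that converts one pixel-space box into a YOLO label row:

```python
# Convert a pixel-space box into one YOLO label row:
# class x_center y_center width height, all normalized to [0, 1].
img_w, img_h = 640, 480                      # image dimensions (assumed)
cls, x1, y1, x2, y2 = 0, 98, 345, 420, 462   # class and pixel corners (invented)

x_c = (x1 + x2) / 2 / img_w  # divide x_center by image width
y_c = (y1 + y2) / 2 / img_h  # divide y_center by image height
w = (x2 - x1) / img_w
h = (y2 - y1) / img_h
print(f"{cls} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}")
```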

@@ -20,7 +20,7 @@ The [LVIS dataset](https://www.lvisdataset.org/) is a large-scale, fine-grained
 </p>
 <p align="center">
-<img width="640" src="https://github.com/ultralytics/ultralytics/assets/26833433/40230a80-e7bc-4310-a860-4cc0ef4bb02a" alt="LVIS Dataset example images">
+<img width="640" src="https://github.com/ultralytics/docs/releases/download/0/lvis-dataset-example-images.avif" alt="LVIS Dataset example images">
 </p>
 ## Key Features
@@ -83,7 +83,7 @@ To train a YOLOv8n model on the LVIS dataset for 100 epochs with an image size o
 The LVIS dataset contains a diverse set of images with various object categories and complex scenes. Here are some examples of images from the dataset, along with their corresponding annotations:
-![LVIS Dataset sample image](https://github.com/ultralytics/ultralytics/assets/26833433/38cc033a-68b0-47f3-a5b8-4ef554362e40)
+![LVIS Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/lvis-mosaiced-training-batch.avif)
 - **Mosaiced Image**: This image demonstrates a training batch composed of mosaiced dataset images. Mosaicing is a technique used during training that combines multiple images into a single image to increase the variety of objects and scenes within each training batch. This helps improve the model's ability to generalize to different object sizes, aspect ratios, and contexts.
@@ -154,6 +154,6 @@ Ultralytics YOLO models, including the latest YOLOv8, are optimized for real-tim
 Yes, the LVIS dataset includes a variety of images with diverse object categories and complex scenes. Here is an example of a sample image along with its annotations:
-![LVIS Dataset sample image](https://github.com/ultralytics/ultralytics/assets/26833433/38cc033a-68b0-47f3-a5b8-4ef554362e40)
+![LVIS Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/lvis-mosaiced-training-batch.avif)
 This mosaiced image demonstrates a training batch composed of multiple dataset images combined into one. Mosaicing increases the variety of objects and scenes within each training batch, enhancing the model's ability to generalize across different contexts. For more details on the LVIS dataset, explore the [LVIS dataset documentation](#key-features).

@@ -65,7 +65,7 @@ To train a YOLOv8n model on the Objects365 dataset for 100 epochs with an image
 The Objects365 dataset contains a diverse set of high-resolution images with objects from 365 categories, providing rich context for object detection tasks. Here are some examples of the images in the dataset:
-![Dataset sample image](https://user-images.githubusercontent.com/26833433/238215467-caf757dd-0b87-4b0d-bb19-d94a547f7fbf.jpg)
+![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/objects365-sample-image.avif)
 - **Objects365**: This image demonstrates an example of object detection, where objects are annotated with bounding boxes. The dataset provides a wide range of images to facilitate the development of models for this task.

@@ -29,7 +29,7 @@ keywords: Open Images V7, Google dataset, computer vision, YOLOv8 models, object
 | [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l-oiv7.pt) | 640 | 34.9 | 596.9 | 2.43 | 44.1 | 167.4 |
 | [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-oiv7.pt) | 640 | 36.3 | 860.6 | 3.56 | 68.7 | 260.6 |
-![Open Images V7 classes visual](https://user-images.githubusercontent.com/26833433/258660358-2dc07771-ec08-4d11-b24a-f66e07550050.png)
+![Open Images V7 classes visual](https://github.com/ultralytics/docs/releases/download/0/open-images-v7-classes-visual.avif)
 ## Key Features
@@ -105,7 +105,7 @@ To train a YOLOv8n model on the Open Images V7 dataset for 100 epochs with an im
 Illustrations of the dataset help provide insights into its richness:
-![Dataset sample image](https://storage.googleapis.com/openimages/web/images/oidv7_all-in-one_example_ab.jpg)
+![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/oidv7-all-in-one-example-ab.avif)
 - **Open Images V7**: This image exemplifies the depth and detail of annotations available, including bounding boxes, relationships, and segmentation masks.

@@ -9,7 +9,7 @@ keywords: Roboflow 100, Ultralytics, object detection, dataset, benchmarking, ma
 Roboflow 100, developed by [Roboflow](https://roboflow.com/?ref=ultralytics) and sponsored by Intel, is a groundbreaking [object detection](../../tasks/detect.md) benchmark. It includes 100 diverse datasets sampled from over 90,000 public datasets. This benchmark is designed to test the adaptability of models to various domains, including healthcare, aerial imagery, and video games.
 <p align="center">
-<img width="640" src="https://user-images.githubusercontent.com/15908060/202452898-9ca6b8f7-4805-4e8e-949a-6e080d7b94d2.jpg" alt="Roboflow 100 Overview">
+<img width="640" src="https://github.com/ultralytics/docs/releases/download/0/roboflow-100-overview.avif" alt="Roboflow 100 Overview">
 </p>
 ## Key Features
@@ -104,7 +104,7 @@ You can access it directly from the Roboflow 100 GitHub repository. In addition,
 Roboflow 100 consists of datasets with diverse images and videos captured from various angles and domains. Here's a look at examples of annotated images in the RF100 benchmark.
 <p align="center">
-<img width="640" src="https://blog.roboflow.com/content/images/2022/11/image-2.png" alt="Sample Data and Annotations">
+<img width="640" src="https://github.com/ultralytics/docs/releases/download/0/sample-data-annotations.avif" alt="Sample Data and Annotations">
 </p>
 The diversity in the Roboflow 100 benchmark that can be seen above is a significant advancement from traditional benchmarks which often focus on optimizing a single metric within a limited domain.

@@ -79,7 +79,7 @@ To train a YOLOv8n model on the signature detection dataset for 100 epochs with
 The signature detection dataset comprises a wide variety of images showcasing different document types and annotated signatures. Below are examples of images from the dataset, each accompanied by its corresponding annotations.
-![Signature detection dataset sample image](https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/88a453da-3110-4835-9ae4-97bfb8b19046)
+![Signature detection dataset sample image](https://github.com/ultralytics/docs/releases/download/0/signature-detection-mosaiced-sample.avif)
 - **Mosaiced Image**: Here, we present a training batch consisting of mosaiced dataset images. Mosaicing, a training technique, combines multiple images into one, enriching batch diversity. This method helps enhance the model's ability to generalize across different signature sizes, aspect ratios, and contexts.

@@ -19,7 +19,7 @@ The [SKU-110k](https://github.com/eg4000/SKU110K_CVPR19) dataset is a collection
 <strong>Watch:</strong> How to Train YOLOv10 on SKU-110k Dataset using Ultralytics | Retail Dataset
 </p>
-![Dataset sample image](https://user-images.githubusercontent.com/26833433/277141199-e7cdd803-237e-4b4a-9171-f95cba9388f9.jpg)
+![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/densely-packed-retail-shelf.avif)
 ## Key Features
@@ -78,7 +78,7 @@ To train a YOLOv8n model on the SKU-110K dataset for 100 epochs with an image si
 The SKU-110k dataset contains a diverse set of retail shelf images with densely packed objects, providing rich context for object detection tasks. Here are some examples of data from the dataset, along with their corresponding annotations:
-![Dataset sample image](https://user-images.githubusercontent.com/26833433/277141197-b63e4aa5-12f6-4673-96a7-9a5207363c59.jpg)
+![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/densely-packed-retail-shelf-1.avif)
 - **Densely packed retail shelf image**: This image demonstrates an example of densely packed objects in a retail shelf setting. Objects are annotated with bounding boxes and SKU category labels.

@@ -74,7 +74,7 @@ To train a YOLOv8n model on the VisDrone dataset for 100 epochs with an image si
 The VisDrone dataset contains a diverse set of images and videos captured by drone-mounted cameras. Here are some examples of data from the dataset, along with their corresponding annotations:
-![Dataset sample image](https://user-images.githubusercontent.com/26833433/238217600-df0b7334-4c9e-4c77-81a5-c70cd33429cc.jpg)
+![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/visdrone-object-detection-sample.avif)
 - **Task 1**: Object detection in images - This image demonstrates an example of object detection in images, where objects are annotated with bounding boxes. The dataset provides a wide variety of images taken from different locations, environments, and densities to facilitate the development of models for this task.

@@ -66,7 +66,7 @@ To train a YOLOv8n model on the VOC dataset for 100 epochs with an image size of
 The VOC dataset contains a diverse set of images with various object categories and complex scenes. Here are some examples of images from the dataset, along with their corresponding annotations:
-![Dataset sample image](https://github.com/ultralytics/ultralytics/assets/26833433/7d4c18f4-774e-43f8-a5f3-9467cda7de4a)
+![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/mosaiced-voc-dataset-sample.avif)
 - **Mosaiced Image**: This image demonstrates a training batch composed of mosaiced dataset images. Mosaicing is a technique used during training that combines multiple images into a single image to increase the variety of objects and scenes within each training batch. This helps improve the model's ability to generalize to different object sizes, aspect ratios, and contexts.

@@ -69,7 +69,7 @@ To train a model on the xView dataset for 100 epochs with an image size of 640,
 The xView dataset contains high-resolution satellite images with a diverse set of objects annotated using bounding boxes. Here are some examples of data from the dataset, along with their corresponding annotations:
-![Dataset sample image](https://user-images.githubusercontent.com/26833433/277141257-ae6ba4de-5dcb-4c76-bc05-bc1e386361ba.jpg)
+![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/overhead-imagery-object-detection.avif)
 - **Overhead Imagery**: This image demonstrates an example of object detection in overhead imagery, where objects are annotated with bounding boxes. The dataset provides high-resolution satellite images to facilitate the development of models for this task.

@@ -9,7 +9,7 @@ keywords: Ultralytics Explorer GUI, semantic search, vector similarity, SQL quer
 Explorer GUI is like a playground built using the [Ultralytics Explorer API](api.md). It allows you to run semantic/vector similarity search, SQL queries, and even natural-language search via our Ask AI feature powered by LLMs.
 <p>
-<img width="1709" alt="Explorer Dashboard Screenshot 1" src="https://github.com/ultralytics/ultralytics/assets/15766192/feb1fe05-58c5-4173-a9ff-e611e3bba3d0">
+<img width="1709" alt="Explorer Dashboard Screenshot 1" src="https://github.com/ultralytics/docs/releases/download/0/explorer-dashboard-screenshot-1.avif">
 </p>
 <p align="center">
<p align="center"> <p align="center">
@@ -41,19 +41,19 @@ Semantic search is a technique for finding similar images to a given image. It i
 For example:
 In this VOC Exploration dashboard, the user selects a couple of airplane images like this:
 <p>
-<img width="1710" alt="Explorer Dashboard Screenshot 2" src="https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/3becdc1d-45dc-43b7-88ff-84ff0b443894">
+<img width="1710" alt="Explorer Dashboard Screenshot 2" src="https://github.com/ultralytics/docs/releases/download/0/explorer-dashboard-screenshot-2.avif">
 </p>
 On performing similarity search, you should see a similar result:
 <p>
-<img width="1710" alt="Explorer Dashboard Screenshot 3" src="https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/aeea2e16-bc2b-41bb-9aef-4a33bfa1a800">
+<img width="1710" alt="Explorer Dashboard Screenshot 3" src="https://github.com/ultralytics/docs/releases/download/0/explorer-dashboard-screenshot-3.avif">
 </p>
 ## Ask AI
 This allows you to filter your dataset using natural language. You don't have to be proficient in writing SQL queries; our AI-powered query generator will automatically do that under the hood. For example, you can say "show me 100 images with exactly one person and 2 dogs. There can be other objects too" and it'll internally generate the query and show you those results. Here's an example output when asked to "Show 10 images with exactly 5 persons":
 <p>
-<img width="1709" alt="Explorer Dashboard Screenshot 4" src="https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/55a67181-3b25-4d2f-b786-2a6a08a0cb6b">
+<img width="1709" alt="Explorer Dashboard Screenshot 4" src="https://github.com/ultralytics/docs/releases/download/0/explorer-dashboard-screenshot-4.avif">
 </p>
 Note: This works using LLMs under the hood, so the results are probabilistic and might sometimes get things wrong.
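Reviewer note: the GUI features above wrap the Explorer Python API. A rough sketch of the equivalent calls (method names follow the Explorer API docs of this era; treat signatures and the image path as indicative, not authoritative):

```python
from ultralytics import Explorer

# Build the embeddings table once, then query it.
exp = Explorer(data="VOC.yaml", model="yolov8n.pt")
exp.create_embeddings_table()

# Vector similarity search, like selecting airplane images in the dashboard.
similar = exp.get_similar(img="path/to/airplane.jpg", limit=25)

# Ask AI: natural-language filtering (probabilistic, per the note above).
df = exp.ask_ai("Show 10 images with exactly 5 persons")
```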
@@ -67,7 +67,7 @@ WHERE labels LIKE '%person%' AND labels LIKE '%dog%'
 ```
 <p>
-<img width="1707" alt="Explorer Dashboard Screenshot 5" src="https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/14fbb237-0b2d-4b7c-8f62-2fca4e6cc26f">
+<img width="1707" alt="Explorer Dashboard Screenshot 5" src="https://github.com/ultralytics/docs/releases/download/0/explorer-dashboard-screenshot-5.avif">
 </p>
 This is a demo built using the Explorer API. You can use the API to build your own exploratory notebooks or scripts to get insights into your datasets. Learn more about the Explorer API [here](api.md).
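Reviewer note: the WHERE clause in this hunk's header can also be run through the API directly; a minimal sketch, assuming the Explorer SQL interface of this era:

```python
from ultralytics import Explorer

# Run the dashboard's SQL filter programmatically instead of via the GUI.
exp = Explorer(data="VOC.yaml", model="yolov8n.pt")
exp.create_embeddings_table()
df = exp.sql_query("WHERE labels LIKE '%person%' AND labels LIKE '%dog%' LIMIT 10")
print(df.head())
```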

@@ -7,7 +7,7 @@ keywords: Ultralytics Explorer, CV datasets, semantic search, SQL queries, vecto
 # Ultralytics Explorer
 <p>
-<img width="1709" alt="Ultralytics Explorer Screenshot 1" src="https://github.com/ultralytics/ultralytics/assets/15766192/feb1fe05-58c5-4173-a9ff-e611e3bba3d0">
+<img width="1709" alt="Ultralytics Explorer Screenshot 1" src="https://github.com/ultralytics/docs/releases/download/0/explorer-dashboard-screenshot-1.avif">
 </p>
 <a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/docs/en/datasets/explorer/explorer.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
@@ -56,7 +56,7 @@ yolo explorer
 You can set it like this - `yolo settings openai_api_key="..."`
 <p>
-<img width="1709" alt="Ultralytics Explorer OpenAI Integration" src="https://github.com/AyushExel/assets/assets/15766192/1b5f3708-be3e-44c5-9ea3-adcd522dfc75">
+<img width="1709" alt="Ultralytics Explorer OpenAI Integration" src="https://github.com/ultralytics/docs/releases/download/0/ultralytics-explorer-openai-integration.avif">
 </p>
 ## FAQ

@@ -24,7 +24,7 @@ Ultralytics provides support for various datasets to facilitate computer vision
 Create embeddings for your dataset, search for similar images, run SQL queries, perform semantic search and even search using natural language! You can get started with our GUI app or build your own using the API. Learn more [here](explorer/index.md).
 <p>
-<img alt="Ultralytics Explorer Screenshot" src="https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/d2ebaffd-e065-4d88-983a-33cb6f593785">
+<img alt="Ultralytics Explorer Screenshot" src="https://github.com/ultralytics/docs/releases/download/0/ultralytics-explorer-screenshot.avif">
 </p>
 - Try the [GUI Demo](explorer/index.md)

@@ -8,7 +8,7 @@ keywords: DOTA dataset, object detection, aerial images, oriented bounding boxes
 [DOTA](https://captain-whu.github.io/DOTA/index.html) stands as a specialized dataset, emphasizing object detection in aerial images. Originating from the DOTA series of datasets, it offers annotated images capturing a diverse array of aerial scenes with Oriented Bounding Boxes (OBB).
-![DOTA classes visual](https://user-images.githubusercontent.com/26833433/259461765-72fdd0d8-266b-44a9-8199-199329bf5ca9.jpg)
+![DOTA classes visual](https://github.com/ultralytics/docs/releases/download/0/dota-classes-visual.avif)
 ## Key Features
@@ -126,7 +126,7 @@ To train a model on the DOTA v1 dataset, you can utilize the following code snip
 Having a glance at the dataset illustrates its depth:
-![Dataset sample image](https://captain-whu.github.io/DOTA/images/instances-DOTA.jpg)
+![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/instances-DOTA.avif)
 - **DOTA examples**: This snapshot underlines the complexity of aerial scenes and the significance of Oriented Bounding Box annotations, capturing objects in their natural orientation.

@@ -51,7 +51,7 @@ To train a YOLOv8n-obb model on the DOTA8 dataset for 100 epochs with an image s
 Here are some examples of images from the DOTA8 dataset, along with their corresponding annotations:
-<img src="https://github.com/Laughing-q/assets/assets/61612323/965d3ff7-5b9b-4add-b62e-9795921b60de" alt="Dataset sample image" width="800">
+<img src="https://github.com/ultralytics/docs/releases/download/0/mosaiced-training-batch.avif" alt="Dataset sample image" width="800">
 - **Mosaiced Image**: This image demonstrates a training batch composed of mosaiced dataset images. Mosaicing is a technique used during training that combines multiple images into a single image to increase the variety of objects and scenes within each training batch. This helps improve the model's ability to generalize to different object sizes, aspect ratios, and contexts.

@@ -20,7 +20,7 @@ class_index x1 y1 x2 y2 x3 y3 x4 y4
 Internally, YOLO processes losses and outputs in the `xywhr` format, which represents the bounding box's center point (xy), width, height, and rotation.
-<p align="center"><img width="800" src="https://user-images.githubusercontent.com/26833433/259471881-59020fe2-09a4-4dcc-acce-9b0f7cfa40ee.png" alt="OBB format examples"></p>
+<p align="center"><img width="800" src="https://github.com/ultralytics/docs/releases/download/0/obb-format-examples.avif" alt="OBB format examples"></p>
 An example of a `*.txt` label file for the above image, which contains an object of class `0` in OBB format, could look like:
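Reviewer note: the actual example row is outside this hunk, so here is a rough illustration (values invented) of one OBB label row — a class index followed by four normalized corner points — plus the polygon-to-`xywhr` conversion sketched with OpenCV:

```python
import cv2
import numpy as np

# Hypothetical OBB label row: class index plus four (x, y) corners in [0, 1].
row = "0 0.30 0.40 0.70 0.30 0.75 0.50 0.35 0.60"

# Recover the xywhr form YOLO uses internally from the 4-point polygon.
pts = np.array(row.split()[1:], dtype=np.float32).reshape(4, 2)
(cx, cy), (w, h), angle_deg = cv2.minAreaRect(pts)
print(f"xywhr: {cx:.3f} {cy:.3f} {w:.3f} {h:.3f} {np.deg2rad(angle_deg):.3f}")
```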

@@ -8,7 +8,7 @@ keywords: COCO-Pose, pose estimation, dataset, keypoints, COCO Keypoints 2017, Y
 The [COCO-Pose](https://cocodataset.org/#keypoints-2017) dataset is a specialized version of the COCO (Common Objects in Context) dataset, designed for pose estimation tasks. It leverages the COCO Keypoints 2017 images and labels to enable the training of models like YOLO for pose estimation tasks.
-![Pose sample image](https://user-images.githubusercontent.com/26833433/277141128-cd62d09e-1eb0-4d20-9938-c55239a5cb76.jpg)
+![Pose sample image](https://github.com/ultralytics/docs/releases/download/0/pose-sample-image.avif)
 ## COCO-Pose Pretrained Models
@@ -78,7 +78,7 @@ To train a YOLOv8n-pose model on the COCO-Pose dataset for 100 epochs with an im
 The COCO-Pose dataset contains a diverse set of images with human figures annotated with keypoints. Here are some examples of images from the dataset, along with their corresponding annotations:
-![Dataset sample image](https://user-images.githubusercontent.com/26833433/239690150-a9dc0bd0-7ad9-4b78-a30f-189ed727ea0e.jpg)
+![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/mosaiced-training-batch-6.avif)
 - **Mosaiced Image**: This image demonstrates a training batch composed of mosaiced dataset images. Mosaicing is a technique used during training that combines multiple images into a single image to increase the variety of objects and scenes within each training batch. This helps improve the model's ability to generalize to different object sizes, aspect ratios, and contexts.

@@ -51,7 +51,7 @@ To train a YOLOv8n-pose model on the COCO8-Pose dataset for 100 epochs with an i
Here are some examples of images from the COCO8-Pose dataset, along with their corresponding annotations:
-<img src="https://user-images.githubusercontent.com/26833433/236818283-52eecb96-fc6a-420d-8a26-d488b352dd4c.jpg" alt="Dataset sample image" width="800">
+<img src="https://github.com/ultralytics/docs/releases/download/0/mosaiced-training-batch-5.avif" alt="Dataset sample image" width="800">
- **Mosaiced Image**: This image demonstrates a training batch composed of mosaiced dataset images. Mosaicing is a technique used during training that combines multiple images into a single image to increase the variety of objects and scenes within each training batch. This helps improve the model's ability to generalize to different object sizes, aspect ratios, and contexts.

@@ -64,7 +64,7 @@ To train a YOLOv8n-pose model on the Tiger-Pose dataset for 100 epochs with an i
Here are some examples of images from the Tiger-Pose dataset, along with their corresponding annotations:
-<img src="https://user-images.githubusercontent.com/62513924/272491921-c963d2bf-505f-4a15-abd7-259de302cffa.jpg" alt="Dataset sample image" width="100%">
+<img src="https://github.com/ultralytics/docs/releases/download/0/mosaiced-training-batch-4.avif" alt="Dataset sample image" width="100%">
- **Mosaiced Image**: This image demonstrates a training batch composed of mosaiced dataset images. Mosaicing is a technique used during training that combines multiple images into a single image to increase the variety of objects and scenes within each training batch. This helps improve the model's ability to generalize to different object sizes, aspect ratios, and contexts.

@@ -72,7 +72,7 @@ To train Ultralytics YOLOv8n model on the Carparts Segmentation dataset for 100
The Carparts Segmentation dataset includes a diverse array of images and videos taken from various perspectives. Below, you'll find examples of data from the dataset along with their corresponding annotations:
-![Dataset sample image](https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/55da8284-a637-4858-aa1c-fc22d33a9c43)
+![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/dataset-sample-image.avif)
- This image illustrates object segmentation within a sample, featuring annotated bounding boxes with masks surrounding identified objects. The dataset consists of a varied set of images captured in various locations, environments, and densities, serving as a comprehensive resource for crafting models specific to this task.
- This instance highlights the diversity and complexity inherent in the dataset, emphasizing the crucial role of high-quality data in computer vision tasks, particularly in the realm of car parts segmentation.

@@ -76,7 +76,7 @@ To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 epochs with an imag
COCO-Seg, like its predecessor COCO, contains a diverse set of images with various object categories and complex scenes. However, COCO-Seg introduces more detailed instance segmentation masks for each object in the images. Here are some examples of images from the dataset, along with their corresponding instance segmentation masks:
-![Dataset sample image](https://user-images.githubusercontent.com/26833433/239690696-93fa8765-47a2-4b34-a6e5-516d0d1c725b.jpg)
+![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/mosaiced-training-batch-3.avif)
- **Mosaiced Image**: This image demonstrates a training batch composed of mosaiced dataset images. Mosaicing is a technique used during training that combines multiple images into a single image to increase the variety of objects and scenes within each training batch. This aids the model's ability to generalize to different object sizes, aspect ratios, and contexts.

@@ -51,7 +51,7 @@ To train a YOLOv8n-seg model on the COCO8-Seg dataset for 100 epochs with an ima
Here are some examples of images from the COCO8-Seg dataset, along with their corresponding annotations:
-<img src="https://user-images.githubusercontent.com/26833433/236818387-f7bde7df-caaa-46d1-8341-1f7504cd11a1.jpg" alt="Dataset sample image" width="800">
+<img src="https://github.com/ultralytics/docs/releases/download/0/mosaiced-training-batch-2.avif" alt="Dataset sample image" width="800">
- **Mosaiced Image**: This image demonstrates a training batch composed of mosaiced dataset images. Mosaicing is a technique used during training that combines multiple images into a single image to increase the variety of objects and scenes within each training batch. This helps improve the model's ability to generalize to different object sizes, aspect ratios, and contexts.

@@ -61,7 +61,7 @@ To train Ultralytics YOLOv8n model on the Crack Segmentation dataset for 100 epo
The Crack Segmentation dataset comprises a varied collection of images and videos captured from multiple perspectives. Below are instances of data from the dataset, accompanied by their respective annotations:
-![Dataset sample image](https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/40ccc20a-9593-412f-b028-643d4a904d0e)
+![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/crack-segmentation-sample.avif)
- This image presents an example of image object segmentation, featuring annotated bounding boxes with masks outlining identified objects. The dataset includes a diverse array of images taken in different locations, environments, and densities, making it a comprehensive resource for developing models designed for this particular task.

@@ -61,7 +61,7 @@ To train Ultralytics YOLOv8n model on the Package Segmentation dataset for 100 e
The Package Segmentation dataset comprises a varied collection of images and videos captured from multiple perspectives. Below are instances of data from the dataset, accompanied by their respective annotations:
-![Dataset sample image](https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/55bdf5c8-4ae4-4824-8d08-63c15bdd9a92)
+![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/dataset-sample-image-1.avif)
- This image displays an instance of image object detection, featuring annotated bounding boxes with masks outlining recognized objects. The dataset incorporates a diverse collection of images taken in different locations, environments, and densities. It serves as a comprehensive resource for developing models specific to this task.
- The example emphasizes the diversity and complexity present in the Package Segmentation dataset, underscoring the significance of high-quality data for computer vision tasks involving package identification.

@@ -12,9 +12,9 @@ This guide provides a comprehensive overview of three fundamental types of data
### Visual Samples
| Line Graph | Bar Plot | Pie Chart |
| :-------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------: | :-----------------------------------------------------------------------------------: |
-| ![Line Graph](https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/eeabd90c-04fd-4e5b-aac9-c7777f892200) | ![Bar Plot](https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/c1da2d6a-99ff-43a8-b5dc-ca93127917f8) | ![Pie Chart](https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/9d8acce6-d9e4-4685-949d-cd4851483187) |
+| ![Line Graph](https://github.com/ultralytics/docs/releases/download/0/line-graph.avif) | ![Bar Plot](https://github.com/ultralytics/docs/releases/download/0/bar-plot.avif) | ![Pie Chart](https://github.com/ultralytics/docs/releases/download/0/pie-chart.avif) |
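For instance, all three chart types can be produced from raw counts with a few lines of Matplotlib (a generic sketch with made-up data, independent of the Ultralytics analytics solution):

```python
import matplotlib.pyplot as plt

# Hypothetical per-frame object counts from a detector
frames = [1, 2, 3, 4, 5]
counts = [3, 5, 4, 7, 6]
class_totals = {"car": 14, "person": 8, "truck": 3}

fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 3))
ax1.plot(frames, counts)  # line graph: counts over time
ax2.bar(list(class_totals), list(class_totals.values()))  # bar plot: totals per class
ax3.pie(list(class_totals.values()), labels=list(class_totals))  # pie chart: class share
plt.show()
```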
### Why Graphs are Important

@@ -33,7 +33,7 @@ Before you can get started, make sure you have access to an AzureML workspace. I
From your AzureML workspace, select Compute > Compute instances > New, then choose the instance with the resources you need.
<p align="center">
-<img width="1280" src="https://github.com/ouphi/ultralytics/assets/17216799/3e92fcc0-a08e-41a4-af81-d289cfe3b8f2" alt="Create Azure Compute Instance">
+<img width="1280" src="https://github.com/ultralytics/docs/releases/download/0/create-compute-arrow.avif" alt="Create Azure Compute Instance">
</p>
## Quickstart from Terminal
@@ -41,7 +41,7 @@ From your AzureML workspace, select Compute > Compute instances > New, select th
Start your compute and open a Terminal:
<p align="center">
-<img width="480" src="https://github.com/ouphi/ultralytics/assets/17216799/635152f1-f4a3-4261-b111-d416cb5ef357" alt="Open Terminal">
+<img width="480" src="https://github.com/ultralytics/docs/releases/download/0/open-terminal.avif" alt="Open Terminal">
</p>
### Create virtualenv
@@ -86,7 +86,7 @@ You can find more [instructions to use the Ultralytics CLI here](../quickstart.m
Open the compute Terminal.
<p align="center">
-<img width="480" src="https://github.com/ouphi/ultralytics/assets/17216799/635152f1-f4a3-4261-b111-d416cb5ef357" alt="Open Terminal">
+<img width="480" src="https://github.com/ultralytics/docs/releases/download/0/open-terminal.avif" alt="Open Terminal">
</p>
From your compute terminal, you need to create a new ipykernel that will be used by your notebook to manage your dependencies:

@@ -7,7 +7,7 @@ keywords: Ultralytics, Conda, setup, installation, environment, guide, machine l
# Conda Quickstart Guide for Ultralytics
<p align="center">
-<img width="800" src="https://user-images.githubusercontent.com/26833433/266324397-32119e21-8c86-43e5-a00e-79827d303d10.png" alt="Ultralytics Conda Package Visual">
+<img width="800" src="https://github.com/ultralytics/docs/releases/download/0/ultralytics-conda-package-visual.avif" alt="Ultralytics Conda Package Visual">
</p>
This guide provides a comprehensive introduction to setting up a Conda environment for your Ultralytics projects. Conda is an open-source package and environment management system that offers an excellent alternative to pip for installing packages and dependencies. Its isolated environments make it particularly well-suited for data science and machine learning endeavors. For more details, visit the Ultralytics Conda package on [Anaconda](https://anaconda.org/conda-forge/ultralytics) and check out the Ultralytics feedstock repository for package updates on [GitHub](https://github.com/conda-forge/ultralytics-feedstock/).

@@ -7,7 +7,7 @@ keywords: Coral Edge TPU, Raspberry Pi, YOLOv8, Ultralytics, TensorFlow Lite, ML
# Coral Edge TPU on a Raspberry Pi with Ultralytics YOLOv8 🚀
<p align="center">
-<img width="800" src="https://images.ctfassets.net/2lpsze4g694w/5XK2dV0w55U0TefijPli1H/bf0d119d77faef9a5d2cc0dad2aa4b42/Edge-TPU-USB-Accelerator-and-Pi.jpg" alt="Raspberry Pi single board computer with USB Edge TPU accelerator">
+<img width="800" src="https://github.com/ultralytics/docs/releases/download/0/edge-tpu-usb-accelerator-and-pi.avif" alt="Raspberry Pi single board computer with USB Edge TPU accelerator">
</p>
## What is a Coral Edge TPU?

@@ -62,7 +62,7 @@ Depending on the specific requirements of a [computer vision task](../tasks/inde
- **Keypoints**: Specific points marked within an image to identify locations of interest. Keypoints are used in tasks like pose estimation and facial landmark detection.
<p align="center">
-<img width="100%" src="https://labelyourdata.com/img/article-illustrations/types_of_da_light.jpg" alt="Types of Data Annotation">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/types-of-data-annotation.avif" alt="Types of Data Annotation">
</p>
### Common Annotation Formats
@@ -91,7 +91,7 @@ Let's say you are ready to annotate now. There are several open-source tools ava
- **[Labelme](https://github.com/labelmeai/labelme)**: A simple and easy-to-use tool that allows for quick annotation of images with polygons, making it ideal for straightforward tasks.
<p align="center">
-<img width="100%" src="https://github.com/labelmeai/labelme/raw/main/examples/instance_segmentation/.readme/annotation.jpg" alt="LabelMe Overview">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/labelme-instance-segmentation-annotation.avif" alt="LabelMe Overview">
</p>
These open-source tools are budget-friendly and provide a range of features to meet different annotation needs.
@@ -105,7 +105,7 @@ Before you dive into annotating your data, there are a few more things to keep i
It's important to understand the difference between accuracy and precision and how they relate to annotation. Accuracy refers to how close the annotated data is to the true values. It helps us measure how closely the labels reflect real-world scenarios. Precision indicates the consistency of annotations. It checks whether you are giving the same label to the same object or feature throughout the dataset. High accuracy and precision lead to better-trained models by reducing noise and improving the model's ability to generalize from the training data.
<p align="center">
-<img width="100%" src="https://keylabs.ai/blog/content/images/size/w1600/2023/12/new26-3.jpg" alt="Example of Precision">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/example-of-precision.avif" alt="Example of Precision">
</p>
#### Identifying Outliers

@@ -8,7 +8,7 @@ keywords: Ultralytics, YOLOv8, NVIDIA Jetson, JetPack, AI deployment, embedded s
This comprehensive guide provides a detailed walkthrough for deploying Ultralytics YOLOv8 on [NVIDIA Jetson](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/) devices using DeepStream SDK and TensorRT. Here we use TensorRT to maximize the inference performance on the Jetson platform.
-<img width="1024" src="https://github.com/ultralytics/ultralytics/assets/20147381/67403d6c-e10c-439a-a731-f1478c0656c8" alt="DeepStream on NVIDIA Jetson">
+<img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/deepstream-nvidia-jetson.avif" alt="DeepStream on NVIDIA Jetson">
!!! Note
@@ -168,7 +168,7 @@ deepstream-app -c deepstream_app_config.txt
Generating the TensorRT engine file before inference starts can take a long time, so please be patient.
-<div align=center><img width=1000 src="https://github.com/ultralytics/ultralytics/assets/20147381/61bd7710-d009-4ca6-9536-2575f3eaec4a" alt="YOLOv8 with deepstream"></div>
+<div align=center><img width=1000 src="https://github.com/ultralytics/docs/releases/download/0/yolov8-with-deepstream.avif" alt="YOLOv8 with deepstream"></div>
!!! Tip
@@ -288,7 +288,7 @@ To set up multiple streams under a single deepstream application, you can do the
deepstream-app -c deepstream_app_config.txt
```
-<div align=center><img width=1000 src="https://github.com/ultralytics/ultralytics/assets/20147381/c2b327c8-75a4-4bc9-8e2d-cf023862a5d6" alt="Multistream setup"></div>
+<div align=center><img width=1000 src="https://github.com/ultralytics/docs/releases/download/0/multistream-setup.avif" alt="Multistream setup"></div>
## Benchmark Results

@@ -30,7 +30,7 @@ Let's walk through an example.
Consider a computer vision project where you want to [estimate the speed of vehicles](./speed-estimation.md) on a highway. The core issue is that current speed monitoring methods are inefficient and error-prone due to outdated radar systems and manual processes. The project aims to develop a real-time computer vision system that can replace legacy [speed estimation](https://www.ultralytics.com/blog/ultralytics-yolov8-for-speed-estimation-in-computer-vision-projects) systems.
<p align="center">
-<img width="100%" src="https://assets-global.website-files.com/6479eab6eb2ed5e597810e9e/664efc6e1c4bef6407824558_Abi%20Speed%20fig1.png" alt="Speed Estimation Using YOLOv8">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/speed-estimation-using-yolov8.avif" alt="Speed Estimation Using YOLOv8">
</p>
Primary users include traffic management authorities and law enforcement, while secondary stakeholders are highway planners and the public benefiting from safer roads. Key requirements involve evaluating budget, time, and personnel, as well as addressing technical needs like high-resolution cameras and real-time data processing. Additionally, regulatory constraints on privacy and data security must be considered.
@@ -53,7 +53,7 @@ Your problem statement helps you conceptualize which computer vision task can so
For example, if your problem is monitoring vehicle speeds on a highway, the relevant computer vision task is object tracking. [Object tracking](../modes/track.md) is suitable because it allows the system to continuously follow each vehicle in the video feed, which is crucial for accurately calculating their speeds.
<p align="center">
-<img width="100%" src="https://assets-global.website-files.com/6479eab6eb2ed5e597810e9e/664f03ba300cf6e61689862f_FIG%20444.gif" alt="Example of Object Tracking">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/example-of-object-tracking.avif" alt="Example of Object Tracking">
</p>
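A minimal tracking sketch with the Ultralytics Python API (the video path and model choice are placeholders):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# persist=True keeps track IDs consistent across frames of a stream
results = model.track(source="highway.mp4", persist=True, show=True)
```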
Other tasks, like [object detection](../tasks/detect.md), are not suitable as they don't provide continuous location or movement information. Once you've identified the appropriate computer vision task, it guides several critical aspects of your project, like model selection, dataset preparation, and model training approaches.
@@ -82,7 +82,7 @@ Next, let's look at a few common discussion points in the community regarding co
The most popular computer vision tasks include image classification, object detection, and image segmentation.
<p align="center">
-<img width="100%" src="https://assets-global.website-files.com/614c82ed388d53640613982e/64aeb16e742bde3dc050e048_image%20classification%20vs%20object%20detection%20vs%20image%20segmentation.webp" alt="Overview of Computer Vision Tasks">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/image-classification-vs-object-detection-vs-image-segmentation.avif" alt="Overview of Computer Vision Tasks">
</p>
For a detailed explanation of various tasks, please take a look at the Ultralytics Docs page on [YOLOv8 Tasks](../tasks/index.md).
@@ -92,7 +92,7 @@ For a detailed explanation of various tasks, please take a look at the Ultralyti
No, pre-trained models don't "remember" classes in the traditional sense. They learn patterns from massive datasets, and during custom training (fine-tuning), these patterns are adjusted for your specific task. The model's capacity is limited, and focusing on new information can overwrite some previous learnings.
<p align="center">
-<img width="100%" src="https://media.licdn.com/dms/image/D4D12AQHIJdbNXjBXEQ/article-cover_image-shrink_720_1280/0/1692158503859?e=2147483647&v=beta&t=pib5jFzINB9RzKIATGHMsE0jK1_4_m5LRqx7GkYiFqA" alt="Overview of Transfer Learning">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/overview-of-transfer-learning.avif" alt="Overview of Transfer Learning">
</p>
If you want to use the classes the model was pre-trained on, a practical approach is to use two models: one retains the original performance, and the other is fine-tuned for your specific task. This way, you can combine the outputs of both models. There are other options like freezing layers, using the pre-trained model as a feature extractor, and task-specific branching, but these are more complex solutions and require more expertise.
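A sketch of the two-model approach described above (the model paths are placeholders):

```python
from ultralytics import YOLO

general = YOLO("yolov8n.pt")  # retains the original pre-trained classes
custom = YOLO("best.pt")  # fine-tuned on your own classes

image = "sample.jpg"
general_results = general(image)  # predictions for the pre-trained classes
custom_results = custom(image)  # predictions for your task-specific classes
# Combine or post-process both result sets downstream as your application requires
```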

@@ -23,9 +23,9 @@ Measuring the gap between two objects is known as distance calculation within a
## Visuals
| Distance Calculation using Ultralytics YOLOv8 |
| :-----------------------------------------------------------------------------------------------------------------------------------------------: |
-| ![Ultralytics YOLOv8 Distance Calculation](https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/6b6b735d-3c49-4b84-a022-2bf6e3c72f8b) |
+| ![Ultralytics YOLOv8 Distance Calculation](https://github.com/ultralytics/docs/releases/download/0/ultralytics-yolov8-distance-calculation.avif) |
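At its core, the distance between two detected objects can be estimated from their bounding-box centroids. A simplified pixel-space sketch (real-world units would require camera calibration):

```python
import math


def centroid(box):
    """Center (x, y) of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)


box_a, box_b = (100, 120, 180, 260), (400, 150, 470, 280)
(ax, ay), (bx, by) = centroid(box_a), centroid(box_b)
pixel_distance = math.hypot(bx - ax, by - ay)  # Euclidean distance in pixels
print(f"Distance: {pixel_distance:.1f} px")
```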
## Advantages of Distance Calculation?

@@ -7,7 +7,7 @@ keywords: Ultralytics, Docker, Quickstart Guide, CPU support, GPU support, NVIDI
# Docker Quickstart Guide for Ultralytics
<p align="center">
-<img width="800" src="https://user-images.githubusercontent.com/26833433/270173601-fc7011bd-e67c-452f-a31a-aa047dcd2771.png" alt="Ultralytics Docker Package Visual">
+<img width="800" src="https://github.com/ultralytics/docs/releases/download/0/ultralytics-docker-package-visual.avif" alt="Ultralytics Docker Package Visual">
</p>
This guide serves as a comprehensive introduction to setting up a Docker environment for your Ultralytics projects. [Docker](https://docker.com/) is a platform for developing, shipping, and running applications in containers. It is particularly beneficial for ensuring that the software will always run the same, regardless of where it's deployed. For more details, visit the Ultralytics Docker repository on [Docker Hub](https://hub.docker.com/r/ultralytics/ultralytics).

@@ -29,10 +29,10 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
## Real World Applications
| Transportation | Retail |
| :--------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------------: |
-| ![Ultralytics YOLOv8 Transportation Heatmap](https://github.com/RizwanMunawar/ultralytics/assets/62513924/288d7053-622b-4452-b4e4-1f41aeb764aa) | ![Ultralytics YOLOv8 Retail Heatmap](https://github.com/RizwanMunawar/ultralytics/assets/62513924/edef75ad-50a7-4c0a-be4a-a66cdfc12802) |
+| ![Ultralytics YOLOv8 Transportation Heatmap](https://github.com/ultralytics/docs/releases/download/0/ultralytics-yolov8-transportation-heatmap.avif) | ![Ultralytics YOLOv8 Retail Heatmap](https://github.com/ultralytics/docs/releases/download/0/ultralytics-yolov8-retail-heatmap.avif) |
| Ultralytics YOLOv8 Transportation Heatmap | Ultralytics YOLOv8 Retail Heatmap |
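Conceptually, a heatmap is just an accumulator over detection positions. A generic sketch (independent of the built-in Ultralytics heatmap solution; the centroids are illustrative):

```python
import cv2
import numpy as np

heat = np.zeros((720, 1280), dtype=np.float32)

# Hypothetical detection centroids gathered over many frames
for cx, cy in [(320, 400), (325, 402), (900, 210)]:
    heat[cy, cx] += 1.0  # accumulate visits at each position

heat = cv2.GaussianBlur(heat, (0, 0), sigmaX=25)  # spread point hits into blobs
heat = cv2.normalize(heat, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
colored = cv2.applyColorMap(heat, cv2.COLORMAP_JET)  # overlay-ready heatmap
```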
!!! tip "Heatmap Configuration"

@@ -20,7 +20,7 @@ Hyperparameters are high-level, structural settings for the algorithm. They are
- **Architecture Specifics**: Such as channel counts, number of layers, types of activation functions, etc.
<p align="center">
-<img width="640" src="https://user-images.githubusercontent.com/26833433/263858934-4f109a2f-82d9-4d08-8bd6-6fd1ff520bcd.png" alt="Hyperparameter Tuning Visual">
+<img width="640" src="https://github.com/ultralytics/docs/releases/download/0/hyperparameter-tuning-visual.avif" alt="Hyperparameter Tuning Visual">
</p>
For a full list of augmentation hyperparameters used in YOLOv8, please refer to the [configurations page](../usage/cfg.md#augmentation-settings).
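Hyperparameter search can be started directly from the Python API with `model.tune` (a minimal sketch; the argument values are placeholders):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Evolve hyperparameters for 30 iterations, training 30 epochs per iteration
model.tune(data="coco8.yaml", epochs=30, iterations=30, optimizer="AdamW", plots=False, save=False, val=False)
```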
@@ -157,7 +157,7 @@ This is a plot displaying fitness (typically a performance metric like AP50) aga
- **Usage**: Performance visualization
<p align="center">
-<img width="640" src="https://user-images.githubusercontent.com/26833433/266847423-9d0aea13-d5c4-4771-b06e-0b817a498260.png" alt="Hyperparameter Tuning Fitness vs Iteration">
+<img width="640" src="https://github.com/ultralytics/docs/releases/download/0/best-fitness.avif" alt="Hyperparameter Tuning Fitness vs Iteration">
</p>
#### tune_results.csv
@@ -182,7 +182,7 @@ This file contains scatter plots generated from `tune_results.csv`, helping you
- **Usage**: Exploratory data analysis
<p align="center">
-<img width="1000" src="https://user-images.githubusercontent.com/26833433/266847488-ec382f3d-79bc-4087-a0e0-42fb8b62cad2.png" alt="Hyperparameter Tuning Scatter Plots">
+<img width="1000" src="https://github.com/ultralytics/docs/releases/download/0/tune-scatter-plots.avif" alt="Hyperparameter Tuning Scatter Plots">
</p>
#### weights/

@@ -29,10 +29,10 @@ There are two types of instance segmentation tracking available in the Ultralyti
## Samples
| Instance Segmentation | Instance Segmentation + Object Tracking |
| :----------------------------------------------------------------------------------------------------------------------------------: | :-----------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
-| ![Ultralytics Instance Segmentation](https://github.com/RizwanMunawar/ultralytics/assets/62513924/d4ad3499-1f33-4871-8fbc-1be0b2643aa2) | ![Ultralytics Instance Segmentation with Object Tracking](https://github.com/RizwanMunawar/ultralytics/assets/62513924/2e5c38cc-fd5c-4145-9682-fa94ae2010a0) |
+| ![Ultralytics Instance Segmentation](https://github.com/ultralytics/docs/releases/download/0/ultralytics-instance-segmentation.avif) | ![Ultralytics Instance Segmentation with Object Tracking](https://github.com/ultralytics/docs/releases/download/0/ultralytics-instance-segmentation-object-tracking.avif) |
| Ultralytics Instance Segmentation 😍 | Ultralytics Instance Segmentation with Object Tracking 🔥 |
!!! Example "Instance Segmentation and Tracking"

@@ -9,7 +9,7 @@ keywords: Ultralytics, segmentation, object isolation, Predict Mode, YOLOv8, mac
After performing the [Segment Task](../tasks/segment.md), it's sometimes desirable to extract the isolated objects from the inference results. This guide provides a generic recipe on how to accomplish this using the Ultralytics [Predict Mode](../modes/predict.md).
<p align="center">
-<img src="https://github.com/ultralytics/ultralytics/assets/62214284/1787d76b-ad5f-43f9-a39c-d45c9157f38a" alt="Example Isolated Object Segmentation">
+<img src="https://github.com/ultralytics/docs/releases/download/0/isolated-object-segmentation.avif" alt="Example Isolated Object Segmentation">
</p>
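In condensed form, the recipe boils down to drawing each instance's contour into a binary mask and applying it to the source image. A simplified sketch of the steps walked through below (the source path and output names are placeholders):

```python
import cv2
import numpy as np
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")
results = model.predict("source.jpg")

for r in results:
    img = np.copy(r.orig_img)
    for ci, c in enumerate(r):
        # Draw this instance's contour as a filled binary mask
        b_mask = np.zeros(img.shape[:2], np.uint8)
        contour = c.masks.xy.pop().astype(np.int32).reshape(-1, 1, 2)
        cv2.drawContours(b_mask, [contour], -1, (255, 255, 255), cv2.FILLED)
        # Keep only the masked pixels (black background elsewhere)
        isolated = cv2.bitwise_and(cv2.cvtColor(b_mask, cv2.COLOR_GRAY2BGR), img)
        cv2.imwrite(f"isolated_{ci}.png", isolated)
```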
## Recipe Walk Through
@@ -162,7 +162,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
There are no additional steps required if keeping the full-size image.
<figure markdown>
-![Example Full size Isolated Object Image Black Background](https://github.com/ultralytics/ultralytics/assets/62214284/845c00d0-52a6-4b1e-8010-4ba73e011b99){ width=240 }
+![Example Full size Isolated Object Image Black Background](https://github.com/ultralytics/docs/releases/download/0/full-size-isolated-object-black-background.avif){ width=240 }
<figcaption>Example full-size output</figcaption>
</figure>
@@ -170,7 +170,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
Additional steps are required to crop the image to include only the object region.
-![Example Crop Isolated Object Image Black Background](https://github.com/ultralytics/ultralytics/assets/62214284/103dbf90-c169-4f77-b791-76cdf09c6f22){ align="right" }
+![Example Crop Isolated Object Image Black Background](https://github.com/ultralytics/docs/releases/download/0/example-crop-isolated-object-image-black-background.avif){ align="right" }
```{ .py .annotate }
# (1) Bounding box coordinates
x1, y1, x2, y2 = c.boxes.xyxy.cpu().numpy().squeeze().astype(np.int32)
@@ -208,7 +208,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
There are no additional steps required if keeping the full-size image.
<figure markdown>
-![Example Full size Isolated Object Image No Background](https://github.com/ultralytics/ultralytics/assets/62214284/b1043ee0-369a-4019-941a-9447a9771042){ width=240 }
+![Example Full size Isolated Object Image No Background](https://github.com/ultralytics/docs/releases/download/0/example-full-size-isolated-object-image-no-background.avif){ width=240 }
<figcaption>Example full-size output + transparent background</figcaption>
</figure>
@@ -216,7 +216,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
Additional steps are required to crop the image to include only the object region.
-![Example Crop Isolated Object Image No Background](https://github.com/ultralytics/ultralytics/assets/62214284/5910244f-d1e1-44af-af7f-6dea4c688da8){ align="right" }
+![Example Crop Isolated Object Image No Background](https://github.com/ultralytics/docs/releases/download/0/example-crop-isolated-object-image-no-background.avif){ align="right" }
```{ .py .annotate }
# (1) Bounding box coordinates
x1, y1, x2, y2 = c.boxes.xyxy.cpu().numpy().squeeze().astype(np.int32)

@@ -11,7 +11,7 @@ keywords: Ultralytics, YOLO, K-Fold Cross Validation, object detection, sklearn,
This comprehensive guide illustrates the implementation of K-Fold Cross Validation for object detection datasets within the Ultralytics ecosystem. We'll leverage the YOLO detection format and key Python libraries such as sklearn, pandas, and PyYAML to guide you through the necessary setup, the process of generating feature vectors, and the execution of a K-Fold dataset split.
<p align="center">
-<img width="800" src="https://user-images.githubusercontent.com/26833433/258589390-8d815058-ece8-48b9-a94e-0e1ab53ea0f6.png" alt="K-Fold Cross Validation Overview">
+<img width="800" src="https://github.com/ultralytics/docs/releases/download/0/k-fold-cross-validation-overview.avif" alt="K-Fold Cross Validation Overview">
</p>
Whether your project involves the Fruit Detection dataset or a custom data source, this tutorial aims to help you comprehend and apply K-Fold Cross Validation to bolster the reliability and robustness of your machine learning models. While we're applying `k=5` folds for this tutorial, keep in mind that the optimal number of folds can vary depending on your dataset and the specifics of your project.
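The heart of the split is sklearn's `KFold` applied to the list of label files (a minimal sketch under this guide's assumptions; the dataset path is a placeholder):

```python
from pathlib import Path

from sklearn.model_selection import KFold

labels = sorted(Path("dataset/labels").rglob("*.txt"))  # one label file per image

kf = KFold(n_splits=5, shuffle=True, random_state=20)
for fold, (train_idx, val_idx) in enumerate(kf.split(labels)):
    train_files = [labels[i] for i in train_idx]
    val_files = [labels[i] for i in val_idx]
    print(f"Fold {fold}: {len(train_files)} train / {len(val_files)} val")
```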

@@ -49,7 +49,7 @@ Optimizing your computer vision model helps it run efficiently, especially when
Pruning reduces the size of the model by removing weights that contribute little to the final output. It makes the model smaller and faster without significantly affecting accuracy. Pruning involves identifying and eliminating unnecessary parameters, resulting in a lighter model that requires less computational power. It is particularly useful for deploying models on devices with limited resources.
<p align="center">
-<img width="100%" src="https://miro.medium.com/v2/resize:fit:1400/format:webp/1*rw2zAHw9Xlm7nSq1PCKbzQ.png" alt="Model Pruning Overview">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/model-pruning-overview.avif" alt="Model Pruning Overview">
</p>
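As an illustration, PyTorch ships utilities for magnitude-based pruning (a generic sketch, not specific to YOLOv8; the toy model is illustrative):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))

# Zero out the 30% of weights with the smallest L1 magnitude in each conv layer
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruning permanent
```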
### Model Quantization
@@ -65,7 +65,7 @@ Quantization converts the model's weights and activations from high precision (l
Knowledge distillation involves training a smaller, simpler model (the student) to mimic the outputs of a larger, more complex model (the teacher). The student model learns to approximate the teacher's predictions, resulting in a compact model that retains much of the teacher's accuracy. This technique is beneficial for creating efficient models suitable for deployment on edge devices with constrained resources.
<p align="center">
-<img width="100%" src="https://editor.analyticsvidhya.com/uploads/30818Knowledge%20Distillation%20Flow%20Chart%201.2.jpg" alt="Knowledge Distillation Overview">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/knowledge-distillation-overview.avif" alt="Knowledge Distillation Overview">
</p>
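A common way to implement this is a temperature-scaled KL-divergence loss between teacher and student logits (a generic sketch; the logits and temperature are placeholders):

```python
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Soft-target loss: the student mimics the teacher's softened distribution."""
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (T * T)


student_logits = torch.randn(8, 10)  # e.g. batch of 8 samples, 10 classes
teacher_logits = torch.randn(8, 10)
loss = distillation_loss(student_logits, teacher_logits)
```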
## Troubleshooting Deployment Issues

@@ -27,7 +27,7 @@ _Quick Tip:_ When running inferences, if you aren't seeing any predictions and y
Intersection over Union (IoU) is a metric in object detection that measures how well the predicted bounding box overlaps with the ground truth bounding box. IoU values range from 0 to 1, where 1 indicates a perfect match. IoU is essential because it measures how closely the predicted boundaries match the actual object boundaries.
<p align="center">
-<img width="100%" src="https://learnopencv.com/wp-content/uploads/2022/12/feature-image-iou-1.jpg" alt="Intersection over Union Overview">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/intersection-over-union-overview.avif" alt="Intersection over Union Overview">
</p>
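IoU is straightforward to compute for axis-aligned boxes. A minimal sketch using `(x1, y1, x2, y2)` coordinates:

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)  # intersection / union


print(iou((50, 50, 150, 150), (100, 100, 200, 200)))  # ~0.143
```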
### Mean Average Precision
@@ -42,7 +42,7 @@ Let's focus on two specific mAP metrics:
Other mAP metrics include mAP@0.75, which uses a stricter IoU threshold of 0.75, and mAP@small, medium, and large, which evaluate precision across objects of different sizes.
<p align="center">
-<img width="100%" src="https://a.storyblok.com/f/139616/1200x800/913f78e511/ways-to-improve-mean-average-precision.webp" alt="Mean Average Precision Overview">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/mean-average-precision-overview.avif" alt="Mean Average Precision Overview">
</p>
## Evaluating YOLOv8 Model Performance

@@ -40,7 +40,7 @@ You can use automated monitoring tools to make it easier to monitor models after
The three tools introduced above, Evidently AI, Prometheus, and Grafana, can work together seamlessly as a fully open-source ML monitoring solution that is ready for production. Evidently AI is used to collect and calculate metrics, Prometheus stores these metrics, and Grafana displays them and sets up alerts. While there are many other tools available, this setup is an exciting open-source option that provides robust capabilities for monitoring and maintaining your models.
<p align="center">
-<img width="100%" src="https://cdn.prod.website-files.com/660ef16a9e0687d9cc27474a/6625e0d5fe28fe414563ad0d_64498c4145adad5ecd2bfdcb_5_evidently_grafana_-min.png" alt="Overview of Open Source Model Monitoring Tools">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/evidently-prometheus-grafana-monitoring-tools.avif" alt="Overview of Open Source Model Monitoring Tools">
</p>
### Anomaly Detection and Alert Systems
@@ -62,7 +62,7 @@ When you are setting up your alert systems, keep these best practices in mind:
Data drift detection is a concept that helps identify when the statistical properties of the input data change over time, which can degrade model performance. Before you decide to retrain or adjust your models, this technique helps spot that there is an issue. Data drift deals with changes in the overall data landscape over time, while anomaly detection focuses on identifying rare or unexpected data points that may require immediate attention.
<p align="center">
-<img width="100%" src="https://cdn.prod.website-files.com/660ef16a9e0687d9cc27474a/662c3c84dc614ac9ad250314_65406ec83d5a6ca96619262f_data_drift10.png" alt="Data Drift Detection Overview">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/data-drift-detection-overview.avif" alt="Data Drift Detection Overview">
</p>
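Before surveying specific methods, here is what a simple drift check can look like in practice: a two-sample Kolmogorov-Smirnov test comparing a feature's distribution in production against the training reference (a generic sketch with synthetic data; the threshold is illustrative):

```python
import numpy as np
from scipy.stats import ks_2samp

reference = np.random.normal(0.0, 1.0, 1000)  # e.g. feature values at training time
production = np.random.normal(0.3, 1.0, 1000)  # same feature observed in production

stat, p_value = ks_2samp(reference, production)
if p_value < 0.05:  # illustrative significance threshold
    print(f"Possible data drift detected (KS={stat:.3f}, p={p_value:.4f})")
```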
Here are several methods to detect data drift:
@@ -82,7 +82,7 @@ Model maintenance is crucial to keep computer vision models accurate and relevan
Once a model is deployed, while monitoring, you may notice changes in data patterns or performance, indicating model drift. Regular updates and re-training become essential parts of model maintenance to ensure the model can handle new patterns and scenarios. There are a few techniques you can use based on how your data is changing.
<p align="center">
-<img width="100%" src="https://f8federal.com/wp-content/uploads/2021/06/Asset-2@5x.png" alt="Computer Vision Model Drift Overview">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/computer-vision-model-drift-overview.avif" alt="Computer Vision Model Drift Overview">
</p>
For example, if the data is changing gradually over time, incremental learning is a good approach. Incremental learning involves updating the model with new data without completely retraining it from scratch, saving computational resources and time. However, if the data has changed drastically, a periodic full re-training might be a better option to ensure the model does not overfit on the new data while losing track of older patterns.
@@ -94,7 +94,7 @@ Regardless of the method, validation and testing are a must after updates. It is
The frequency of retraining your computer vision model depends on data changes and model performance. Retrain your model whenever you observe a significant performance drop or detect data drift. Regular evaluations can help determine the right retraining schedule by testing the model against new data. Monitoring performance metrics and data patterns lets you decide if your model needs more frequent updates to maintain accuracy.
<p align="center">
-<img width="100%" src="https://cdn.prod.website-files.com/660ef16a9e0687d9cc27474a/6625e0e2ce5af6ba15764bf6_62e1b89973a9fd20eb9cde71_blog_retrain_or_not_-20.png" alt="When to Retrain Overview">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/when-to-retrain-overview.avif" alt="When to Retrain Overview">
</p>
## Documentation

@@ -88,7 +88,7 @@ Underfitting occurs when your model can't capture the underlying patterns in the
The key is to find a balance between overfitting and underfitting. Ideally, a model should perform well on both training and validation datasets. Regularly monitoring your model's performance through metrics and visual inspections, along with applying the right strategies, can help you achieve the best results.
<p align="center">
-<img width="100%" src="https://viso.ai/wp-content/uploads/2022/07/overfitting-underfitting-appropriate-fitting.jpg" alt="Overfitting and Underfitting Overview">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/overfitting-underfitting-appropriate-fitting.avif" alt="Overfitting and Underfitting Overview">
</p>
## Data Leakage in Computer Vision and How to Avoid It

@@ -19,7 +19,7 @@ A computer vision model is trained by adjusting its internal parameters to minim
During training, the model iteratively makes predictions, calculates errors, and updates its parameters through a process called backpropagation. In this process, the model adjusts its internal parameters (weights and biases) to reduce the errors. By repeating this cycle many times, the model gradually improves its accuracy. Over time, it learns to recognize complex patterns such as shapes, colors, and textures.
<p align="center">
-<img width="100%" src="https://editor.analyticsvidhya.com/uploads/18870backprop2.png" alt="What is Backpropagation?">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/backpropagation-diagram.avif" alt="What is Backpropagation?">
</p>
This learning process makes it possible for the computer vision model to perform various [tasks](../tasks/index.md), including [object detection](../tasks/detect.md), [instance segmentation](../tasks/segment.md), and [image classification](../tasks/classify.md). The ultimate goal is to create a model that can generalize its learning to new, unseen images so that it can accurately understand visual data in real-world applications. This learning process makes it possible for the computer vision model to perform various [tasks](../tasks/index.md), including [object detection](../tasks/detect.md), [instance segmentation](../tasks/segment.md), and [image classification](../tasks/classify.md). The ultimate goal is to create a model that can generalize its learning to new, unseen images so that it can accurately understand visual data in real-world applications.
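The predict-calculate-update cycle described above maps onto a few lines of PyTorch. This is a generic, framework-level sketch with a toy linear model, not Ultralytics-specific code:

```python
import torch

model = torch.nn.Linear(10, 2)  # toy model standing in for a vision network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = torch.nn.MSELoss()

x, target = torch.randn(4, 10), torch.randn(4, 2)

prediction = model(x)  # make predictions
loss = criterion(prediction, target)  # calculate the error
loss.backward()  # backpropagation: compute gradients
optimizer.step()  # update weights and biases to reduce the error
optimizer.zero_grad()  # reset gradients for the next iteration
```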
@@ -64,7 +64,7 @@ Caching can be controlled when training YOLOv8 using the `cache` parameter:
Mixed precision training uses both 16-bit (FP16) and 32-bit (FP32) floating-point types. It leverages the strengths of both by using FP16 for faster computation and FP32 to maintain precision where needed. Most of the neural network's operations are done in FP16 to benefit from faster computation and lower memory usage. However, a master copy of the model's weights is kept in FP32 to ensure accuracy during the weight update steps. This lets you handle larger models or larger batch sizes within the same hardware constraints.
<p align="center">
-<img width="100%" src="https://miro.medium.com/v2/resize:fit:1400/format:webp/1*htZ4PF2fZ0ttJ5HdsIaAbQ.png" alt="Mixed Precision Training Overview">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/mixed-precision-training-overview.avif" alt="Mixed Precision Training Overview">
</p>
To implement mixed precision training, you'll need to modify your training scripts and ensure your hardware (like GPUs) supports it. Many modern deep learning frameworks, such as TensorFlow, offer built-in support for mixed precision.
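For YOLOv8 specifically, mixed precision is controlled by the `amp` training argument (enabled by default in recent releases), so a minimal sketch looks like:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# amp=True runs most ops in FP16 while an FP32 master copy of the weights is kept
model.train(data="coco8.yaml", epochs=100, amp=True)
```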
@@ -99,7 +99,7 @@ Early stopping is a valuable technique for optimizing model training. By monitor
The process involves setting a patience parameter that determines how many epochs to wait for an improvement in validation metrics before stopping training. If the model's performance does not improve within these epochs, training is stopped to avoid wasting time and resources.
<p align="center">
-<img width="100%" src="https://miro.medium.com/v2/resize:fit:1400/format:webp/1*06sTlOC3AYeZAjzUDwbaMw@2x.jpeg" alt="Early Stopping Overview">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/early-stopping-overview.avif" alt="Early Stopping Overview">
</p>
For YOLOv8, you can enable early stopping by setting the patience parameter in your training configuration. For example, `patience=5` means training will stop if there's no improvement in validation metrics for 5 consecutive epochs. Using this method ensures the training process remains efficient and achieves optimal performance without excessive computation.
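A short sketch of the `patience` argument in practice, using the small `coco8.yaml` dataset as an illustrative example:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Stop training early if validation metrics show no improvement for 5 consecutive epochs
model.train(data="coco8.yaml", epochs=300, patience=5)
```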

@@ -19,7 +19,7 @@ This comprehensive guide provides a detailed walkthrough for deploying Ultralyti
<strong>Watch:</strong> How to Setup NVIDIA Jetson with Ultralytics YOLOv8
</p>
-<img width="1024" src="https://github.com/ultralytics/ultralytics/assets/20147381/c68fb2eb-371a-43e5-b7b8-2b869d90bc07" alt="NVIDIA Jetson Ecosystem">
+<img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/nvidia-jetson-ecosystem.avif" alt="NVIDIA Jetson Ecosystem">
!!! Note
@@ -287,7 +287,7 @@ YOLOv8 benchmarks were run by the Ultralytics team on 10 different model formats
Even though all model exports are working with NVIDIA Jetson, we have only included **PyTorch, TorchScript, TensorRT** in the comparison chart below because they make use of the GPU on the Jetson and are guaranteed to produce the best results. All the other exports only utilize the CPU, and their performance is not as good as the above three. You can find benchmarks for all exports in the section after this chart.
<div style="text-align: center;">
-<img width="800" src="https://github.com/ultralytics/ultralytics/assets/20147381/202950fa-c24a-43ec-90c8-4d7b6a6c406e" alt="NVIDIA Jetson Ecosystem">
+<img width="800" src="https://github.com/ultralytics/docs/releases/download/0/nvidia-jetson-ecosystem-1.avif" alt="NVIDIA Jetson Ecosystem">
</div>
### Detailed Comparison Table
@@ -431,7 +431,7 @@ When using NVIDIA Jetson, there are a couple of best practices to follow in orde
jtop
```
-<img width="1024" src="https://github.com/ultralytics/ultralytics/assets/20147381/f7017975-6eaa-4d02-8007-ab52314cebfd" alt="Jetson Stats">
+<img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/jetson-stats-application.avif" alt="Jetson Stats">
## Next Steps

@@ -41,10 +41,10 @@ Object counting with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
## Real World Applications
| Logistics | Aquaculture |
| :---: | :---: |
-| ![Conveyor Belt Packets Counting Using Ultralytics YOLOv8](https://github.com/RizwanMunawar/ultralytics/assets/62513924/70e2d106-510c-4c6c-a57a-d34a765aa757) | ![Fish Counting in Sea using Ultralytics YOLOv8](https://github.com/RizwanMunawar/ultralytics/assets/62513924/c60d047b-3837-435f-8d29-bb9fc95d2191) |
+| ![Conveyor Belt Packets Counting Using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/conveyor-belt-packets-counting.avif) | ![Fish Counting in Sea using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/fish-counting-in-sea-using-ultralytics-yolov8.avif) |
| Conveyor Belt Packets Counting Using Ultralytics YOLOv8 | Fish Counting in Sea using Ultralytics YOLOv8 |
!!! Example "Object Counting using YOLOv8 Example"

@@ -29,10 +29,10 @@ Object cropping with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
## Visuals
| Airport Luggage |
| :---: |
-| ![Conveyor Belt at Airport Suitcases Cropping using Ultralytics YOLOv8](https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/648f46be-f233-4307-a8e5-046eea38d2e4) |
+| ![Conveyor Belt at Airport Suitcases Cropping using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/suitcases-cropping-airport-conveyor-belt.avif) |
| Suitcases Cropping at airport conveyor belt using Ultralytics YOLOv8 |
!!! Example "Object Cropping using YOLOv8 Example"

@@ -6,7 +6,7 @@ keywords: Ultralytics YOLO, OpenVINO optimization, deep learning, model inferenc
# Optimizing OpenVINO Inference for Ultralytics YOLO Models: A Comprehensive Guide
-<img width="1024" src="https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/2b181f68-aa91-4514-ba09-497cc3c83b00" alt="OpenVINO Ecosystem">
+<img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/openvino-ecosystem.avif" alt="OpenVINO Ecosystem">
## Introduction

@@ -29,10 +29,10 @@ Parking management with [Ultralytics YOLOv8](https://github.com/ultralytics/ultr
## Real World Applications
| Parking Management System | Parking Management System |
| :---: | :---: |
-| ![Parking lots Analytics Using Ultralytics YOLOv8](https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/e3d4bc3e-cf4a-4da9-b42e-0da55cc74ad6) | ![Parking management top view using Ultralytics YOLOv8](https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/fe186719-1aca-43c9-b388-1ded91280eb5) |
+| ![Parking lots Analytics Using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/parking-management-aerial-view-ultralytics-yolov8.avif) | ![Parking management top view using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/parking-management-top-view-ultralytics-yolov8.avif) |
| Parking management Aerial View using Ultralytics YOLOv8 | Parking management Top View using Ultralytics YOLOv8 |
## Parking Management System Code Workflow
@@ -61,7 +61,7 @@ Parking management with [Ultralytics YOLOv8](https://github.com/ultralytics/ultr
- After defining the parking areas with polygons, click `save` to store a JSON file with the data in your working directory.
-![Ultralytics YOLOv8 Points Selection Demo](https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/72737b8a-0f0f-4efb-98ad-b917a0039535)
+![Ultralytics YOLOv8 Points Selection Demo](https://github.com/ultralytics/docs/releases/download/0/ultralytics-yolov8-points-selection-demo.avif)
### Python Code for Parking Management

@@ -73,7 +73,7 @@ Here are some other benefits of data augmentation:
Common augmentation techniques include flipping, rotation, scaling, and color adjustments. Several libraries, such as Albumentations, Imgaug, and TensorFlow's ImageDataGenerator, can generate these augmentations.
<p align="center">
-<img width="100%" src="https://i0.wp.com/ubiai.tools/wp-content/uploads/2023/11/UKwFg.jpg" alt="Overview of Data Augmentations">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/overview-of-data-augmentations.avif" alt="Overview of Data Augmentations">
</p>
With respect to YOLOv8, you can [augment your custom dataset](../modes/train.md) by modifying the dataset configuration file, a .yaml file. In this file, you can add an augmentation section with parameters that specify how you want to augment your data.
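The same augmentation hyperparameters can also be passed directly to `train()`; the values below are illustrative, not recommendations:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.train(
    data="coco8.yaml",
    epochs=100,
    hsv_h=0.015,  # hue jitter
    fliplr=0.5,  # probability of a horizontal flip
    degrees=10.0,  # rotation range in degrees
    scale=0.5,  # scaling range
)
```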
@@ -123,7 +123,7 @@ Common tools for visualizations include:
For a more advanced approach to EDA, you can use the Ultralytics Explorer tool. It offers robust capabilities for exploring computer vision datasets. By supporting semantic search, SQL queries, and vector similarity search, the tool makes it easy to analyze and understand your data. With Ultralytics Explorer, you can create embeddings for your dataset to find similar images, run SQL queries for detailed analysis, and perform semantic searches, all through a user-friendly graphical interface.
<p align="center">
-<img width="100%" src="https://github.com/AyushExel/assets/assets/15766192/1b5f3708-be3e-44c5-9ea3-adcd522dfc75" alt="Overview of Ultralytics Explorer">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/ultralytics-explorer-openai-integration.avif" alt="Overview of Ultralytics Explorer">
</p>
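A minimal sketch of the Explorer API, assuming the small `coco8.yaml` dataset and a `yolov8n.pt` model (method names may vary between releases):

```python
from ultralytics import Explorer

# Build an embeddings table for the dataset, then query it for similar images
exp = Explorer(data="coco8.yaml", model="yolov8n.pt")
exp.create_embeddings_table()
similar = exp.get_similar(img="path/to/image.jpg", limit=10)
print(similar)
```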
## Reach Out and Connect

@@ -28,10 +28,10 @@ Queue management using [Ultralytics YOLOv8](https://github.com/ultralytics/ultra
## Real World Applications
| Logistics | Retail |
| :---: | :---: |
-| ![Queue management at airport ticket counter using Ultralytics YOLOv8](https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/10487e76-bf60-4a9c-a0f3-5a75a05fa7a3) | ![Queue monitoring in crowd using Ultralytics YOLOv8](https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/dcc6d2ca-5576-434d-83c6-e57fe07bc693) |
+| ![Queue management at airport ticket counter using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/queue-management-airport-ticket-counter-ultralytics-yolov8.avif) | ![Queue monitoring in crowd using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/queue-monitoring-crowd-ultralytics-yolov8.avif) |
| Queue management at airport ticket counter using Ultralytics YOLOv8 | Queue monitoring in crowd using Ultralytics YOLOv8 |
!!! Example "Queue Management using YOLOv8 Example"

@@ -149,13 +149,13 @@ YOLOv8 benchmarks were run by the Ultralytics team on nine different model forma
=== "YOLOv8n"
<div style="text-align: center;">
-<img width="800" src="https://github.com/ultralytics/ultralytics/assets/20147381/43421a4e-0ac0-42ca-995b-5e71d9748af5" alt="NVIDIA Jetson Ecosystem">
+<img width="800" src="https://github.com/ultralytics/docs/releases/download/0/yolov8n-benchmark-comparison.avif" alt="NVIDIA Jetson Ecosystem">
</div>
=== "YOLOv8s"
<div style="text-align: center;">
-<img width="800" src="https://github.com/ultralytics/ultralytics/assets/20147381/e85e18a2-abfc-431d-8b23-812820ee390e" alt="NVIDIA Jetson Ecosystem">
+<img width="800" src="https://github.com/ultralytics/docs/releases/download/0/yolov8s-performance-comparison.avif" alt="NVIDIA Jetson Ecosystem">
</div>
### Detailed Comparison Table

@@ -29,10 +29,10 @@ keywords: object counting, regions, YOLOv8, computer vision, Ultralytics, effici
## Real World Applications
| Retail | Market Streets |
| :---: | :---: |
-| ![People Counting in Different Region using Ultralytics YOLOv8](https://github.com/RizwanMunawar/ultralytics/assets/62513924/5ab3bbd7-fd12-4849-928e-5f294d6c3fcf) | ![Crowd Counting in Different Region using Ultralytics YOLOv8](https://github.com/RizwanMunawar/ultralytics/assets/62513924/e7c1aea7-474d-4d78-8d48-b50854ffe1ca) |
+| ![People Counting in Different Region using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/people-counting-different-region-ultralytics-yolov8.avif) | ![Crowd Counting in Different Region using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/crowd-counting-different-region-ultralytics-yolov8.avif) |
| People Counting in Different Region using Ultralytics YOLOv8 | Crowd Counting in Different Region using Ultralytics YOLOv8 |
## Steps to Run

@@ -48,7 +48,7 @@ In ROS, communication between nodes is facilitated through [messages](https://wi
This guide has been tested using [this ROS environment](https://github.com/ambitious-octopus/rosbot_ros/tree/noetic), which is a fork of the [ROSbot ROS repository](https://github.com/husarion/rosbot_ros). This environment includes the Ultralytics YOLO package, a Docker container for easy setup, comprehensive ROS packages, and Gazebo worlds for rapid testing. It is designed to work with the [Husarion ROSbot 2 PRO](https://husarion.com/manuals/rosbot/). The code examples provided will work in any ROS Noetic/Melodic environment, including both simulation and real-world setups.
<p align="center">
-<img width="50%" src="https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/242b431d-6ea2-4dad-81d6-e31be69141af" alt="Husarion ROSbot 2 PRO">
+<img width="50%" src="https://github.com/ultralytics/docs/releases/download/0/husarion-rosbot-2-pro.avif" alt="Husarion ROSbot 2 PRO">
</p>
### Dependencies Installation
@@ -72,7 +72,7 @@ Apart from the ROS environment, you will need to install the following dependenc
The `sensor_msgs/Image` [message type](https://docs.ros.org/en/api/sensor_msgs/html/msg/Image.html) is commonly used in ROS for representing image data. It contains fields for encoding, height, width, and pixel data, making it suitable for transmitting images captured by cameras or other sensors. Image messages are widely used in robotic applications for tasks such as visual perception, object detection, and navigation.
<p align="center">
-<img width="100%" src="https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/652cb3e8-ecb0-45cf-9ce1-a514dc06c605" alt="Detection and Segmentation in ROS Gazebo">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/detection-segmentation-ros-gazebo.avif" alt="Detection and Segmentation in ROS Gazebo">
</p>
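As a hedged sketch of consuming `sensor_msgs/Image` with YOLO via the standard `cv_bridge` package (the topic name `/camera/color/image_raw` is an assumption for your setup):

```python
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

from ultralytics import YOLO

bridge = CvBridge()
model = YOLO("yolov8n.pt")


def callback(msg):
    # Convert the ROS Image message to a BGR numpy array and run detection
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    results = model(frame)
    rospy.loginfo("Detected %d objects", len(results[0].boxes))


rospy.init_node("ultralytics_listener")
rospy.Subscriber("/camera/color/image_raw", Image, callback)
rospy.spin()
```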
### Image Step-by-Step Usage
@@ -345,7 +345,7 @@ while True:
## Use Ultralytics with ROS `sensor_msgs/PointCloud2`
<p align="center">
-<img width="100%" src="https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/ef2e1ed9-a840-499a-b324-574bd26c3bc7" alt="Detection and Segmentation in ROS Gazebo">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/detection-segmentation-ros-gazebo-1.avif" alt="Detection and Segmentation in ROS Gazebo">
</p>
The `sensor_msgs/PointCloud2` [message type](https://docs.ros.org/en/api/sensor_msgs/html/msg/PointCloud2.html) is a data structure used in ROS to represent 3D point cloud data. This message type is integral to robotic applications, enabling tasks such as 3D mapping, object recognition, and localization.
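A small sketch of unpacking a `sensor_msgs/PointCloud2` into XYZ coordinates with `ros_numpy` (the topic name is an assumption; the segmentation examples later in this guide build on arrays like this):

```python
import ros_numpy
import rospy
from sensor_msgs.msg import PointCloud2


def callback(msg):
    # Convert the PointCloud2 message into an (N, 3) array of XYZ points
    xyz = ros_numpy.point_cloud2.pointcloud2_to_xyz_array(msg, remove_nans=True)
    rospy.loginfo("Received %d points", xyz.shape[0])


rospy.init_node("pointcloud_listener")
rospy.Subscriber("/camera/depth/points", PointCloud2, callback)
rospy.spin()
```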
@@ -510,7 +510,7 @@ for index, class_id in enumerate(classes):
```
<p align="center">
-<img width="100%" src="https://github.com/ultralytics/ultralytics/assets/3855193/3caafc4a-0edd-4e5f-8dd1-37e30be70123" alt="Point Cloud Segmentation with Ultralytics">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/point-cloud-segmentation-ultralytics.avif" alt="Point Cloud Segmentation with Ultralytics">
</p>
## FAQ

@@ -9,7 +9,7 @@ keywords: YOLOv8, SAHI, Sliced Inference, Object Detection, Ultralytics, High-re
Welcome to the Ultralytics documentation on how to use YOLOv8 with [SAHI](https://github.com/obss/sahi) (Slicing Aided Hyper Inference). This comprehensive guide aims to furnish you with all the essential knowledge you'll need to implement SAHI alongside YOLOv8. We'll deep-dive into what SAHI is, why sliced inference is critical for large-scale applications, and how to integrate these functionalities with YOLOv8 for enhanced object detection performance.
<p align="center">
-<img width="1024" src="https://raw.githubusercontent.com/obss/sahi/main/resources/sliced_inference.gif" alt="SAHI Sliced Inference Overview">
+<img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/sahi-sliced-inference-overview.avif" alt="SAHI Sliced Inference Overview">
</p>
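Before the detailed walkthrough, here is a compact sketch of sliced inference with the `sahi` package; the slice sizes, overlap ratios, and image path are illustrative:

```python
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov8",
    model_path="yolov8n.pt",
    confidence_threshold=0.3,
    device="cpu",
)

# Slice the image into 512x512 tiles with 20% overlap, detect on each tile,
# then merge the per-slice predictions back into full-image coordinates
result = get_sliced_prediction(
    "demo.jpg",
    detection_model,
    slice_height=512,
    slice_width=512,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
```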
## Introduction to SAHI
@@ -51,8 +51,8 @@ Sliced Inference refers to the practice of subdividing a large or high-resolutio
<th>YOLOv8 with SAHI</th>
</tr>
<tr>
-<td><img src="https://user-images.githubusercontent.com/26833433/266123241-260a9740-5998-4e9a-ad04-b39b7767e731.png" alt="YOLOv8 without SAHI" width="640"></td>
+<td><img src="https://github.com/ultralytics/docs/releases/download/0/yolov8-without-sahi.avif" alt="YOLOv8 without SAHI" width="640"></td>
-<td><img src="https://user-images.githubusercontent.com/26833433/266123245-55f696ad-ec74-4e71-9155-c211d693bb69.png" alt="YOLOv8 with SAHI" width="640"></td>
+<td><img src="https://github.com/ultralytics/docs/releases/download/0/yolov8-with-sahi.avif" alt="YOLOv8 with SAHI" width="640"></td>
</tr>
</table>

@@ -6,7 +6,7 @@ keywords: YOLOv8, Security Alarm System, real-time object detection, Ultralytics
# Security Alarm System Project Using Ultralytics YOLOv8
-<img src="https://github.com/RizwanMunawar/ultralytics/assets/62513924/f4e4a613-fb25-4bd0-9ec5-78352ddb62bd" alt="Security Alarm System">
+<img src="https://github.com/ultralytics/docs/releases/download/0/security-alarm-system-ultralytics-yolov8.avif" alt="Security Alarm System">
The Security Alarm System Project utilizing Ultralytics YOLOv8 integrates advanced computer vision capabilities to enhance security measures. YOLOv8, developed by Ultralytics, provides real-time object detection, allowing the system to identify and respond to potential security threats promptly. This project offers several advantages:
@@ -175,7 +175,7 @@ That's it! When you execute the code, you'll receive a single notification on yo
#### Email Received Sample
-<img width="256" src="https://github.com/RizwanMunawar/ultralytics/assets/62513924/db79ccc6-aabd-4566-a825-b34e679c90f9" alt="Email Received Sample">
+<img width="256" src="https://github.com/ultralytics/docs/releases/download/0/email-received-sample.avif" alt="Email Received Sample">
## FAQ

@@ -33,10 +33,10 @@ keywords: Ultralytics YOLOv8, speed estimation, object tracking, computer vision
## Real World Applications
| Transportation | Transportation |
| :---: | :---: |
-| ![Speed Estimation on Road using Ultralytics YOLOv8](https://github.com/RizwanMunawar/ultralytics/assets/62513924/c8a0fd4a-d394-436d-8de3-d5b754755fc7) | ![Speed Estimation on Bridge using Ultralytics YOLOv8](https://github.com/RizwanMunawar/ultralytics/assets/62513924/cee10e02-b268-4304-b73a-5b9cb42da669) |
+| ![Speed Estimation on Road using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/speed-estimation-on-road-using-ultralytics-yolov8.avif) | ![Speed Estimation on Bridge using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/speed-estimation-on-bridge-using-ultralytics-yolov8.avif) |
| Speed Estimation on Road using Ultralytics YOLOv8 | Speed Estimation on Bridge using Ultralytics YOLOv8 |
!!! Example "Speed Estimation using YOLOv8 Example"

@@ -40,7 +40,7 @@ Before discussing the details of each step involved in a computer vision project
- Finally, you'd deploy your model into the real world and update it based on new insights and feedback.
<p align="center">
-<img width="100%" src="https://assets-global.website-files.com/6108e07db6795265f203a636/626bf3577837448d9ed716ff_The%20five%20stages%20of%20ML%20development%20lifecycle%20(1).jpeg" alt="Computer Vision Project Steps Overview">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/five-stages-of-ml-development-lifecycle.avif" alt="Computer Vision Project Steps Overview">
</p>
Now that we know what to expect, let's dive right into the steps and get your project moving forward.
@@ -71,7 +71,7 @@ Depending on the objective, you might choose to select the model first or after
Choosing between training from scratch or using transfer learning affects how you prepare your data. Training from scratch requires a diverse dataset to build the model's understanding from the ground up. Transfer learning, on the other hand, allows you to use a pre-trained model and adapt it with a smaller, more specific dataset. Also, choosing a specific model to train will determine how you need to prepare your data, such as resizing images or adding annotations, according to the model's specific requirements.
<p align="center">
-<img width="100%" src="https://miro.medium.com/v2/resize:fit:1330/format:webp/1*zCnoXfPVcdXizTmhL68Rlw.jpeg" alt="Training From Scratch Vs. Using Transfer Learning">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/training-from-scratch-vs-transfer-learning.avif" alt="Training From Scratch Vs. Using Transfer Learning">
</p>
Note: When choosing a model, consider its [deployment](./model-deployment-options.md) to ensure compatibility and performance. For example, lightweight models are ideal for edge computing due to their efficiency on resource-constrained devices. To learn more about the key points related to defining your project, read [our guide](./defining-project-goals.md) on defining your project's goals and selecting the right model.
@@ -97,7 +97,7 @@ However, if you choose to collect images or take your own pictures, you'll need
- **Image Segmentation:** You'll label each pixel in the image according to the object it belongs to, creating detailed object boundaries.
<p align="center">
-<img width="100%" src="https://miro.medium.com/v2/resize:fit:1400/format:webp/0*VhpVAAJnvq5ZE_pv" alt="Different Types of Image Annotation">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/different-types-of-image-annotation.avif" alt="Different Types of Image Annotation">
</p>
[Data collection and annotation](./data-collection-and-annotation.md) can be a time-consuming manual effort. Annotation tools can help make this process easier. Here are some useful open annotation tools: [Label Studio](https://github.com/HumanSignal/label-studio), [CVAT](https://github.com/cvat-ai/cvat), and [Labelme](https://github.com/labelmeai/labelme).
@@ -115,7 +115,7 @@ Here's how to split your data:
After splitting your data, you can perform data augmentation by applying transformations like rotating, scaling, and flipping images to artificially increase the size of your dataset. Data augmentation makes your model more robust to variations and improves its performance on unseen images.
<p align="center">
-<img width="100%" src="https://www.labellerr.com/blog/content/images/size/w2000/2022/11/banner-data-augmentation--1-.webp" alt="Examples of Data Augmentations">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/examples-of-data-augmentations.avif" alt="Examples of Data Augmentations">
</p>
Libraries like OpenCV, Albumentations, and TensorFlow offer flexible augmentation functions that you can use. Additionally, some libraries, such as Ultralytics, have [built-in augmentation settings](../modes/train.md) directly within their model training functions, simplifying the process.
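As an illustration with Albumentations (the file name is a placeholder), a small pipeline applying the flip, rotation, and color transforms mentioned above:

```python
import albumentations as A
import cv2

transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.Rotate(limit=15, p=0.5),
        A.RandomBrightnessContrast(p=0.3),
    ]
)

image = cv2.imread("sample.jpg")
augmented = transform(image=image)["image"]  # randomly augmented copy of the input
```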
@@ -123,7 +123,7 @@ Libraries like OpenCV, Albumentations, and TensorFlow offer flexible augmentatio
To understand your data better, you can use tools like [Matplotlib](https://matplotlib.org/) or [Seaborn](https://seaborn.pydata.org/) to visualize the images and analyze their distribution and characteristics. Visualizing your data helps identify patterns, anomalies, and the effectiveness of your augmentation techniques. You can also use [Ultralytics Explorer](../datasets/explorer/index.md), a tool for exploring computer vision datasets with semantic search, SQL queries, and vector similarity search.
<p align="center">
-<img width="100%" src="https://github.com/ultralytics/ultralytics/assets/15766192/feb1fe05-58c5-4173-a9ff-e611e3bba3d0" alt="The Ultralytics Explorer Tool">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/explorer-dashboard-screenshot-1.avif" alt="The Ultralytics Explorer Tool">
</p>
By properly [understanding, splitting, and augmenting your data](./preprocessing_annotated_data.md), you can develop a well-trained, validated, and tested model that performs well in real-world applications.
@@ -177,7 +177,7 @@ Once your model is deployed, it's important to continuously monitor its performa
Monitoring tools can help you track key performance indicators (KPIs) and detect anomalies or drops in accuracy. By monitoring the model, you can be aware of model drift, where the model's performance declines over time due to changes in the input data. Periodically retrain the model with updated data to maintain accuracy and relevance.
<p align="center">
-<img width="100%" src="https://www.kdnuggets.com/wp-content/uploads//ai-infinite-training-maintaining-loop.jpg" alt="Model Monitoring">
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/model-monitoring-maintenance-loop.avif" alt="Model Monitoring">
</p>
In addition to monitoring and maintenance, documentation is also key. Thoroughly document the entire process, including model architecture, training procedures, hyperparameters, data preprocessing steps, and any changes made during deployment and maintenance. Good documentation ensures reproducibility and makes future updates or troubleshooting easier. By effectively monitoring, maintaining, and documenting your model, you can ensure it remains accurate, reliable, and easy to manage over its lifecycle.

@@ -21,10 +21,10 @@ Streamlit makes it simple to build and deploy interactive web applications. Comb
<strong>Watch:</strong> How to Use Streamlit with Ultralytics for Real-Time Computer Vision in Your Browser
</p>
| Aquaculture | Animal husbandry |
| :---: | :---: |
-| ![Fish Detection using Ultralytics YOLOv8](https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/ea6d7ece-cded-4db7-b810-1f8433df2c96) | ![Animals Detection using Ultralytics YOLOv8](https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/2e1f4781-60ab-4e72-b3e4-726c10cd223c) |
+| ![Fish Detection using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/fish-detection-ultralytics-yolov8.avif) | ![Animals Detection using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/animals-detection-yolov8.avif) |
| Fish Detection using Ultralytics YOLOv8 | Animals Detection using Ultralytics YOLOv8 |
## Advantages of Live Inference

@@ -7,7 +7,7 @@ keywords: YOLO, inference results, VSCode terminal, sixel, display images, Linux
# Viewing Inference Results in a Terminal
<p align="center">
-<img width="800" src="https://raw.githubusercontent.com/saitoha/libsixel/data/data/sixel.gif" alt="Sixel example of image in Terminal">
+<img width="800" src="https://github.com/ultralytics/docs/releases/download/0/sixel-example-terminal.avif" alt="Sixel example of image in Terminal">
</p>
Image from the [libsixel](https://saitoha.github.io/libsixel/) website.
@@ -32,7 +32,7 @@ The VSCode compatible protocols for viewing images using the integrated terminal
```
<p align="center">
-<img width="800" src="https://github.com/ultralytics/ultralytics/assets/62214284/d158ab1c-893c-4397-a5de-2f9f74f81175" alt="VSCode enable terminal images setting">
+<img width="800" src="https://github.com/ultralytics/docs/releases/download/0/vscode-enable-terminal-images-setting.avif" alt="VSCode enable terminal images setting">
</p>
2. Install the `python-sixel` library in your virtual environment. This is a [fork](https://github.com/lubosz/python-sixel?tab=readme-ov-file) of the `PySixel` library, which is no longer maintained.
@@ -93,7 +93,7 @@ The VSCode compatible protocols for viewing images using the integrated terminal
## Example Inference Results
<p align="center">
-<img width="800" src="https://github.com/ultralytics/ultralytics/assets/62214284/6743ab64-300d-4429-bdce-e246455f7b68" alt="View Image in Terminal">
+<img width="800" src="https://github.com/ultralytics/docs/releases/download/0/view-image-in-terminal.avif" alt="View Image in Terminal">
</p>
!!! danger

@@ -12,10 +12,10 @@ keywords: VisionEye, YOLOv8, Ultralytics, object mapping, object tracking, dista
## Samples
| VisionEye View | VisionEye View With Object Tracking | VisionEye View With Distance Calculation |
| :---: | :---: | :---: |
-| ![VisionEye View Object Mapping using Ultralytics YOLOv8](https://github.com/RizwanMunawar/ultralytics/assets/62513924/7d593acc-2e37-41b0-ad0e-92b4ffae6647) | ![VisionEye View Object Mapping with Object Tracking using Ultralytics YOLOv8](https://github.com/RizwanMunawar/ultralytics/assets/62513924/fcd85952-390f-451e-8fb0-b82e943af89c) | ![VisionEye View with Distance Calculation using Ultralytics YOLOv8](https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/18c4dafe-a22e-4fa9-a7d4-2bb293562a95) |
+| ![VisionEye View Object Mapping using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/visioneye-view-object-mapping-yolov8.avif) | ![VisionEye View Object Mapping with Object Tracking using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/visioneye-object-mapping-with-tracking.avif) | ![VisionEye View with Distance Calculation using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/visioneye-distance-calculation-yolov8.avif) |
| VisionEye View Object Mapping using Ultralytics YOLOv8 | VisionEye View Object Mapping with Object Tracking using Ultralytics YOLOv8 | VisionEye View with Distance Calculation using Ultralytics YOLOv8 |
!!! Example "VisionEye Object Mapping using YOLOv8"

@@ -29,10 +29,10 @@ Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://gi
## Real World Applications
| Workouts Monitoring | Workouts Monitoring |
| :---: | :---: |
-| ![PushUps Counting](https://github.com/RizwanMunawar/ultralytics/assets/62513924/cf016a41-589f-420f-8a8c-2cc8174a16de) | ![PullUps Counting](https://github.com/RizwanMunawar/ultralytics/assets/62513924/cb20f316-fac2-4330-8445-dcf5ffebe329) |
+| ![PushUps Counting](https://github.com/ultralytics/docs/releases/download/0/pushups-counting.avif) | ![PullUps Counting](https://github.com/ultralytics/docs/releases/download/0/pullups-counting.avif) |
| PushUps Counting | PullUps Counting |
!!! Example "Workouts Monitoring Example"
@@ -108,7 +108,7 @@ Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://gi
### KeyPoints Map
-![keyPoints Order Ultralytics YOLOv8 Pose](https://github.com/ultralytics/ultralytics/assets/62513924/f45d8315-b59f-47b7-b9c8-c61af1ce865b)
+![keyPoints Order Ultralytics YOLOv8 Pose](https://github.com/ultralytics/docs/releases/download/0/keypoints-order-ultralytics-yolov8-pose.avif)
### Arguments `AIGym`

@@ -7,7 +7,7 @@ keywords: YOLO, YOLOv8, troubleshooting, installation errors, model training, GP
# Troubleshooting Common YOLO Issues
<p align="center">
-<img width="800" src="https://user-images.githubusercontent.com/26833433/273067258-7c1b9aee-b4e8-43b5-befd-588d4f0bd361.png" alt="YOLO Common Issues Image">
+<img width="800" src="https://github.com/ultralytics/docs/releases/download/0/yolo-common-issues.avif" alt="YOLO Common Issues Image">
</p>
## Introduction

@@ -13,7 +13,7 @@ Running YOLO models in a multi-threaded environment requires careful considerati
Python threads are a form of parallelism that allow your program to run multiple operations at once. However, Python's Global Interpreter Lock (GIL) means that only one thread can execute Python bytecode at a time.
<p align="center">
-<img width="800" src="https://user-images.githubusercontent.com/26833433/281418476-7f478570-fd77-4a40-bf3d-74b4db4d668c.png" alt="Single vs Multi-Thread Examples">
+<img width="800" src="https://github.com/ultralytics/docs/releases/download/0/single-vs-multi-thread-examples.avif" alt="Single vs Multi-Thread Examples">
</p>
While this sounds like a limitation, threads can still provide concurrency, especially for I/O-bound operations or when using operations that release the GIL, like those performed by YOLO's underlying C libraries.
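A common thread-safe pattern, sketched below with placeholder image paths, is to instantiate a separate model inside each thread so no internal state is shared:

```python
from threading import Thread

from ultralytics import YOLO


def thread_safe_predict(image_path):
    # Each thread builds its own model instance to avoid shared mutable state
    local_model = YOLO("yolov8n.pt")
    results = local_model.predict(image_path)
    print(f"{image_path}: {len(results[0].boxes)} objects")


t1 = Thread(target=thread_safe_predict, args=("image1.jpg",))
t2 = Thread(target=thread_safe_predict, args=("image2.jpg",))
t1.start()
t2.start()
t1.join()
t2.join()
```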

@@ -9,7 +9,7 @@ keywords: Ultralytics, YOLO, open-source, contribution, pull request, code of co
Welcome! We're thrilled that you're considering contributing to our [Ultralytics](https://ultralytics.com) [open-source](https://github.com/ultralytics) projects. Your involvement not only helps enhance the quality of our repositories but also benefits the entire community. This guide provides clear guidelines and best practices to help you get started.
<a href="https://github.com/ultralytics/ultralytics/graphs/contributors">
-<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/image-contributors.png" alt="Ultralytics open-source contributors"></a>
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/ultralytics-open-source-contributors.avif" alt="Ultralytics open-source contributors"></a>
## Table of Contents

@@ -7,7 +7,7 @@ keywords: Ultralytics, Android app, real-time object detection, YOLO models, Ten
# Ultralytics Android App: Real-time Object Detection with YOLO Models
<a href="https://ultralytics.com/hub" target="_blank">
-<img width="100%" src="https://user-images.githubusercontent.com/26833433/281124469-6b3b0945-dbb1-44c8-80a9-ef6bc778b299.jpg" alt="Ultralytics HUB preview image"></a>
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/ultralytics-android-app-detection.avif" alt="Ultralytics HUB preview image"></a>
<br>
<div align="center">
<a href="https://github.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="3%" alt="Ultralytics GitHub"></a>

@@ -7,7 +7,7 @@ keywords: Ultralytics HUB, YOLO models, mobile app, iOS, Android, hardware accel
# Ultralytics HUB App
<a href="https://ultralytics.com/hub" target="_blank">
-<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png" alt="Ultralytics HUB preview image"></a>
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub.avif" alt="Ultralytics HUB preview image"></a>
<br>
<div align="center">
<a href="https://github.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="3%" alt="Ultralytics GitHub"></a>

@@ -7,7 +7,7 @@ keywords: Ultralytics, iOS App, YOLO models, real-time object detection, Apple N
# Ultralytics iOS App: Real-time Object Detection with YOLO Models
<a href="https://ultralytics.com/hub" target="_blank">
-<img width="100%" src="https://user-images.githubusercontent.com/26833433/281124469-6b3b0945-dbb1-44c8-80a9-ef6bc778b299.jpg" alt="Ultralytics HUB preview image"></a>
+<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/ultralytics-android-app-detection.avif" alt="Ultralytics HUB preview image"></a>
<br>
<div align="center">
<a href="https://github.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="3%" alt="Ultralytics GitHub"></a>

@@ -26,13 +26,13 @@ In order to train models using Ultralytics Cloud Training, you need to [upgrade]
Follow the [Train Model](./models.md#train-model) instructions from the [Models](./models.md) page until you reach the third step ([Train](./models.md#3-train)) of the **Train Model** dialog. Once you are on this step, simply select the training duration (Epochs or Timed), the training instance, the payment method, and click the **Start Training** button. That's it!
-![Ultralytics HUB screenshot of the Train Model dialog with arrows pointing to the Cloud Training options and the Start Training button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/cloud-training/hub_cloud_training_train_model_1.jpg)
+![Ultralytics HUB screenshot of the Train Model dialog with arrows pointing to the Cloud Training options and the Start Training button](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-train-model-dialog.avif)
??? note "Note"
When you are on this step, you have the option to close the **Train Model** dialog and start training your model from the Model page later.
-![Ultralytics HUB screenshot of the Model page with an arrow pointing to the Start Training card](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/cloud-training/hub_cloud_training_train_model_2.jpg)
+![Ultralytics HUB screenshot of the Model page with an arrow pointing to the Start Training card](https://github.com/ultralytics/docs/releases/download/0/hub-cloud-training-model-page-start-training.avif)
Most of the time, you will use Epochs training. The number of epochs can be adjusted at this step (if training hasn't started yet) and represents the number of times your dataset needs to go through the cycle of train, label, and test. The exact pricing based on the number of epochs is hard to determine, which is why we only allow the [Account Balance](./pro.md#account-balance) payment method.
@@ -40,7 +40,7 @@ Most of the time, you will use Epochs training. The number of epochs can be
 When using Epochs training, your [account balance](./pro.md#account-balance) needs to be at least US$5.00 to start training. If your balance is low, you can top up directly from this step.
-![Ultralytics HUB screenshot of the Train Model dialog with an arrow pointing to the Top-Up button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/cloud-training/hub_cloud_training_train_model_3.jpg)
+![Ultralytics HUB screenshot of the Train Model dialog with an arrow pointing to the Top-Up button](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-train-model-dialog-top-up.avif)
 !!! note "Note"
@@ -48,21 +48,21 @@ Most of the time, you will use Epochs training. The number of epochs can be
 Also, after every epoch, we check whether you have enough [account balance](./pro.md#account-balance) for the next epoch. If you don't, we stop the training session, allowing you to resume training your model from the last saved checkpoint.
-![Ultralytics HUB screenshot of the Model page with an arrow pointing to the Resume Training button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/cloud-training/hub_cloud_training_train_model_4.jpg)
+![Ultralytics HUB screenshot of the Model page with an arrow pointing to the Resume Training button](https://github.com/ultralytics/docs/releases/download/0/hub-cloud-training-resume-training-button.avif)
 Alternatively, you can use Timed training. This option allows you to set the training duration, in which case we can determine the exact pricing. You can pay upfront or use your [account balance](./pro.md#account-balance).
 If you have enough [account balance](./pro.md#account-balance), you can use the [Account Balance](./pro.md#account-balance) payment method.
-![Ultralytics HUB screenshot of the Train Model dialog with an arrow pointing to the Start Training button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/cloud-training/hub_cloud_training_train_model_5.jpg)
+![Ultralytics HUB screenshot of the Train Model dialog with an arrow pointing to the Start Training button](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-train-model-start-training.avif)
 If you don't have enough [account balance](./pro.md#account-balance), you won't be able to use the [Account Balance](./pro.md#account-balance) payment method. You can pay upfront or top up directly from this step.
-![Ultralytics HUB screenshot of the Train Model dialog with an arrow pointing to the Pay Now button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/cloud-training/hub_cloud_training_train_model_6.jpg)
+![Ultralytics HUB screenshot of the Train Model dialog with an arrow pointing to the Pay Now button](https://github.com/ultralytics/docs/releases/download/0/hub-cloud-training-train-model-pay-now-button.avif)
 Before the training session starts, the initialization process spins up a dedicated instance equipped with GPU resources, which can sometimes take a while depending on current demand and GPU availability.
-![Ultralytics HUB screenshot of the Model page during the initialization process](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/cloud-training/hub_cloud_training_train_model_7.jpg)
+![Ultralytics HUB screenshot of the Model page during the initialization process](https://github.com/ultralytics/docs/releases/download/0/model-page-initialization-process.avif)
 !!! note "Note"
@@ -72,13 +72,13 @@ After the training session starts, you can monitor each step of the progress.
 If needed, you can stop the training by clicking on the **Stop Training** button.
-![Ultralytics HUB screenshot of the Model page of a model that is currently training with an arrow pointing to the Stop Training button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/cloud-training/hub_cloud_training_train_model_8.jpg)
+![Ultralytics HUB screenshot of the Model page of a model that is currently training with an arrow pointing to the Stop Training button](https://github.com/ultralytics/docs/releases/download/0/model-page-training-stop-button.avif)
 !!! note "Note"
 You can resume training your model from the last checkpoint saved.
-![Ultralytics HUB screenshot of the Model page with an arrow pointing to the Resume Training button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/cloud-training/hub_cloud_training_train_model_4.jpg)
+![Ultralytics HUB screenshot of the Model page with an arrow pointing to the Resume Training button](https://github.com/ultralytics/docs/releases/download/0/hub-cloud-training-resume-training-button.avif)
 <p align="center">
 <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/H3qL8ImCSV8"
@@ -94,10 +94,10 @@ If needed, you can stop the training by clicking on the **Stop Training** button
 Unfortunately, at the moment, you can only train one model at a time using Ultralytics Cloud.
-![Ultralytics HUB screenshot of the Train Model dialog with the Ultralytics Cloud unavailable](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/cloud-training/hub_cloud_training_train_model_9.jpg)
+![Ultralytics HUB screenshot of the Train Model dialog with the Ultralytics Cloud unavailable](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-train-model-dialog-1.avif)
 ## Billing
 During or after training, you can check the cost of your model by clicking on the **Billing** tab. Furthermore, you can download the cost report by clicking on the **Download** button.
-![Ultralytics HUB screenshot of the Billing tab inside the Model page with an arrow pointing to the Billing tab and one to the Download button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/cloud-training/hub_cloud_training_billing_1.jpg)
+![Ultralytics HUB screenshot of the Billing tab inside the Model page with an arrow pointing to the Billing tab and one to the Download button](https://github.com/ultralytics/docs/releases/download/0/hub-cloud-training-billing-tab.avif)

@@ -35,7 +35,7 @@ zip -r coco8.zip coco8
 You can download our [COCO8](https://github.com/ultralytics/hub/blob/main/example_datasets/coco8.zip) example dataset and unzip it to see exactly how to structure your dataset.
 <p align="center">
-<img src="https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/dataset_structure.jpg" alt="COCO8 Dataset Structure" width="80%">
+<img src="https://github.com/ultralytics/docs/releases/download/0/coco8-dataset-structure.avif" alt="COCO8 Dataset Structure" width="80%">
 </p>
 The dataset YAML is the same standard YOLOv5 and YOLOv8 YAML format.
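Before uploading, you can sanity-check the archive locally with the `check_dataset` utility referenced in the next hunk. A minimal sketch, assuming the `ultralytics` package is installed and `path/to/dataset.zip` is a placeholder for your own archive:

```python
from ultralytics.hub import check_dataset

# Validate the dataset ZIP (images, labels and data YAML) before uploading to HUB
check_dataset("path/to/dataset.zip", task="detect")
```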
@@ -56,13 +56,13 @@ check_dataset("path/to/dataset.zip", task="detect")
 Once your dataset ZIP is ready, navigate to the [Datasets](https://hub.ultralytics.com/datasets) page by clicking on the **Datasets** button in the sidebar and click on the **Upload Dataset** button on the top right of the page.
-![Ultralytics HUB screenshot of the Datasets page with an arrow pointing to the Datasets button in the sidebar and one to the Upload Dataset button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_upload_dataset_2.jpg)
+![Ultralytics HUB screenshot of the Datasets page with an arrow pointing to the Datasets button in the sidebar and one to the Upload Dataset button](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-datasets-upload.avif)
 ??? tip "Tip"
 You can upload a dataset directly from the [Home](https://hub.ultralytics.com/home) page.
-![Ultralytics HUB screenshot of the Home page with an arrow pointing to the Upload Dataset card](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_upload_dataset_1.jpg)
+![Ultralytics HUB screenshot of the Home page with an arrow pointing to the Upload Dataset card](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-upload-dataset-card.avif)
 This action will trigger the **Upload Dataset** dialog.
@@ -72,43 +72,43 @@ You have the additional option to set a custom name and description for your [Ul
 When you're happy with your dataset configuration, click **Upload**.
-![Ultralytics HUB screenshot of the Upload Dataset dialog with arrows pointing to dataset task, dataset file and Upload button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_upload_dataset_3.jpg)
+![Ultralytics HUB screenshot of the Upload Dataset dialog with arrows pointing to dataset task, dataset file and Upload button](https://github.com/ultralytics/docs/releases/download/0/hub-upload-dataset-dialog.avif)
 After your dataset is uploaded and processed, you will be able to access it from the [Datasets](https://hub.ultralytics.com/datasets) page.
-![Ultralytics HUB screenshot of the Datasets page with an arrow pointing to one of the datasets](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_upload_dataset_4.jpg)
+![Ultralytics HUB screenshot of the Datasets page with an arrow pointing to one of the datasets](https://github.com/ultralytics/docs/releases/download/0/hub-datasets-page.avif)
 You can view the images in your dataset grouped by splits (Train, Validation, Test).
-![Ultralytics HUB screenshot of the Dataset page with an arrow pointing to the Images tab](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_upload_dataset_5.jpg)
+![Ultralytics HUB screenshot of the Dataset page with an arrow pointing to the Images tab](https://github.com/ultralytics/docs/releases/download/0/hub-dataset-page-images-tab.avif)
 ??? tip "Tip"
 Each image can be enlarged for better visualization.
-![Ultralytics HUB screenshot of the Images tab inside the Dataset page with an arrow pointing to the expand icon](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_upload_dataset_6.jpg)
+![Ultralytics HUB screenshot of the Images tab inside the Dataset page with an arrow pointing to the expand icon](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-images-tab-expand-icon.avif)
-![Ultralytics HUB screenshot of the Images tab inside the Dataset page with one of the images expanded](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_upload_dataset_7.jpg)
+![Ultralytics HUB screenshot of the Images tab inside the Dataset page with one of the images expanded](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-dataset-page-expanded-image.avif)
 Also, you can analyze your dataset by clicking on the **Overview** tab.
-![Ultralytics HUB screenshot of the Dataset page with an arrow pointing to the Overview tab](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_upload_dataset_8.jpg)
+![Ultralytics HUB screenshot of the Dataset page with an arrow pointing to the Overview tab](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-overview-tab.avif)
 Next, [train a model](./models.md#train-model) on your dataset.
-![Ultralytics HUB screenshot of the Dataset page with an arrow pointing to the Train Model button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_upload_dataset_9.jpg)
+![Ultralytics HUB screenshot of the Dataset page with an arrow pointing to the Train Model button](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-dataset-page-train-model-button.avif)
 ## Download Dataset
 Navigate to the Dataset page of the dataset you want to download, open the dataset actions dropdown and click on the **Download** option. This action will start downloading your dataset.
-![Ultralytics HUB screenshot of the Dataset page with an arrow pointing to the Download option](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_download_dataset_1.jpg)
+![Ultralytics HUB screenshot of the Dataset page with an arrow pointing to the Download option](https://github.com/ultralytics/docs/releases/download/0/hub-download-dataset-1.avif)
 ??? tip "Tip"
 You can download a dataset directly from the [Datasets](https://hub.ultralytics.com/datasets) page.
-![Ultralytics HUB screenshot of the Datasets page with an arrow pointing to the Download option of one of the datasets](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_download_dataset_2.jpg)
+![Ultralytics HUB screenshot of the Datasets page with an arrow pointing to the Download option of one of the datasets](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-datasets-download-option.avif)
 ## Share Dataset
@@ -124,17 +124,17 @@ Navigate to the Dataset page of the dataset you want to download, open the datas
 Navigate to the Dataset page of the dataset you want to share, open the dataset actions dropdown and click on the **Share** option. This action will trigger the **Share Dataset** dialog.
-![Ultralytics HUB screenshot of the Dataset page with an arrow pointing to the Share option](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_share_dataset_1.jpg)
+![Ultralytics HUB screenshot of the Dataset page with an arrow pointing to the Share option](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-share-dataset.avif)
 ??? tip "Tip"
 You can share a dataset directly from the [Datasets](https://hub.ultralytics.com/datasets) page.
-![Ultralytics HUB screenshot of the Datasets page with an arrow pointing to the Share option of one of the datasets](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_share_dataset_2.jpg)
+![Ultralytics HUB screenshot of the Datasets page with an arrow pointing to the Share option of one of the datasets](https://github.com/ultralytics/docs/releases/download/0/hub-share-dataset-2.avif)
 Set the general access to "Unlisted" and click **Save**.
-![Ultralytics HUB screenshot of the Share Dataset dialog with an arrow pointing to the dropdown and one to the Save button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_share_dataset_3.jpg)
+![Ultralytics HUB screenshot of the Share Dataset dialog with an arrow pointing to the dropdown and one to the Save button](https://github.com/ultralytics/docs/releases/download/0/hub-share-dataset-dialog.avif)
 Now, anyone who has the direct link to your dataset can view it.
@@ -142,38 +142,38 @@ Now, anyone who has the direct link to your dataset can view it.
 You can easily click on the dataset's link shown in the **Share Dataset** dialog to copy it.
-![Ultralytics HUB screenshot of the Share Dataset dialog with an arrow pointing to the dataset's link](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_share_dataset_4.jpg)
+![Ultralytics HUB screenshot of the Share Dataset dialog with an arrow pointing to the dataset's link](https://github.com/ultralytics/docs/releases/download/0/hub-share-dataset-link.avif)
 ## Edit Dataset
 Navigate to the Dataset page of the dataset you want to edit, open the dataset actions dropdown and click on the **Edit** option. This action will trigger the **Update Dataset** dialog.
-![Ultralytics HUB screenshot of the Dataset page with an arrow pointing to the Edit option](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_edit_dataset_1.jpg)
+![Ultralytics HUB screenshot of the Dataset page with an arrow pointing to the Edit option](https://github.com/ultralytics/docs/releases/download/0/hub-edit-dataset-1.avif)
 ??? tip "Tip"
 You can edit a dataset directly from the [Datasets](https://hub.ultralytics.com/datasets) page.
-![Ultralytics HUB screenshot of the Datasets page with an arrow pointing to the Edit option of one of the datasets](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_edit_dataset_2.jpg)
+![Ultralytics HUB screenshot of the Datasets page with an arrow pointing to the Edit option of one of the datasets](https://github.com/ultralytics/docs/releases/download/0/hub-edit-dataset-page.avif)
 Apply the desired modifications to your dataset and then confirm the changes by clicking **Save**.
-![Ultralytics HUB screenshot of the Update Dataset dialog with an arrow pointing to the Save button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_edit_dataset_3.jpg)
+![Ultralytics HUB screenshot of the Update Dataset dialog with an arrow pointing to the Save button](https://github.com/ultralytics/docs/releases/download/0/hub-edit-dataset-save-button.avif)
 ## Delete Dataset
 Navigate to the Dataset page of the dataset you want to delete, open the dataset actions dropdown and click on the **Delete** option. This action will delete the dataset.
-![Ultralytics HUB screenshot of the Dataset page with an arrow pointing to the Delete option](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_delete_dataset_1.jpg)
+![Ultralytics HUB screenshot of the Dataset page with an arrow pointing to the Delete option](https://github.com/ultralytics/docs/releases/download/0/hub-delete-dataset-option.avif)
 ??? tip "Tip"
 You can delete a dataset directly from the [Datasets](https://hub.ultralytics.com/datasets) page.
-![Ultralytics HUB screenshot of the Datasets page with an arrow pointing to the Delete option of one of the datasets](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_delete_dataset_2.jpg)
+![Ultralytics HUB screenshot of the Datasets page with an arrow pointing to the Delete option of one of the datasets](https://github.com/ultralytics/docs/releases/download/0/hub-delete-dataset-page.avif)
 !!! note "Note"
 If you change your mind, you can restore the dataset from the [Trash](https://hub.ultralytics.com/trash) page.
-![Ultralytics HUB screenshot of the Trash page with an arrow pointing to the Trash button in the sidebar and one to the Restore option of one of the datasets](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_delete_dataset_3.jpg)
+![Ultralytics HUB screenshot of the Trash page with an arrow pointing to the Trash button in the sidebar and one to the Restore option of one of the datasets](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-trash-restore.avif)

@@ -7,7 +7,7 @@ keywords: Ultralytics HUB, YOLO models, train YOLO, YOLOv5, YOLOv8, object detec
 # Ultralytics HUB
 <div align="center">
-<a href="https://ultralytics.com/hub" target="_blank"><img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png"></a>
+<a href="https://ultralytics.com/hub" target="_blank"><img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub.avif"></a>
 <a href="https://docs.ultralytics.com/zh/hub/">中文</a> |
 <a href="https://docs.ultralytics.com/ko/hub/">한국어</a> |
 <a href="https://docs.ultralytics.com/ja/hub/">日本語</a> |

@@ -8,7 +8,7 @@ keywords: Ultralytics, HUB, Inference API, Python, cURL, REST API, YOLO, image p
 After you [train a model](./models.md#train-model), you can use the [Shared Inference API](#shared-inference-api) for free. If you are a [Pro](./pro.md) user, you can access the [Dedicated Inference API](#dedicated-inference-api). The [Ultralytics HUB](https://ultralytics.com/hub) Inference API allows you to run inference through our REST API without the need to install and set up the Ultralytics YOLO environment locally.
-![Ultralytics HUB screenshot of the Deploy tab inside the Model page with an arrow pointing to the Dedicated Inference API card and one to the Shared Inference API card](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/inference-api/hub_inference_api_1.jpg)
+![Ultralytics HUB screenshot of the Deploy tab inside the Model page with an arrow pointing to the Dedicated Inference API card and one to the Shared Inference API card](https://github.com/ultralytics/docs/releases/download/0/hub-inference-api-card.avif)
 <p align="center">
 <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/OpWpBI35A5Y"
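For reference, a rough sketch of calling the Inference API from Python with the `requests` library; the `MODEL_ID`, `API_KEY` and image path below are placeholders, and the exact request options are covered in the guides that follow:

```python
import requests

# Placeholder endpoint and key -- substitute your own model ID and API key
url = "https://api.ultralytics.com/v1/predict/MODEL_ID"
headers = {"x-api-key": "API_KEY"}
data = {"size": 640, "confidence": 0.25}  # optional inference arguments

# Send a local image for inference and print the JSON predictions
with open("path/to/image.jpg", "rb") as f:
    response = requests.post(url, headers=headers, files={"image": f}, data=data)
print(response.json())
```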
@@ -35,7 +35,7 @@ In response to high demand and widespread interest, we are thrilled to unveil th
 To use the [Ultralytics HUB](https://ultralytics.com/hub) Dedicated Inference API, click on the **Start Endpoint** button. Next, use the unique endpoint URL as described in the guides below.
-![Ultralytics HUB screenshot of the Deploy tab inside the Model page with an arrow pointing to the Start Endpoint button in Dedicated Inference API card](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/inference-api/hub_dedicated_inference_api_1.jpg)
+![Ultralytics HUB screenshot of the Deploy tab inside the Model page with an arrow pointing to the Start Endpoint button in Dedicated Inference API card](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-dedicated-inference-api.avif)
 !!! tip "Tip"
@@ -43,7 +43,7 @@ To use the [Ultralytics HUB](https://ultralytics.com/hub) Dedicated Inference AP
 To shut down the dedicated endpoint, click on the **Stop Endpoint** button.
-![Ultralytics HUB screenshot of the Deploy tab inside the Model page with an arrow pointing to the Stop Endpoint button in Dedicated Inference API card](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/inference-api/hub_dedicated_inference_api_2.jpg)
+![Ultralytics HUB screenshot of the Deploy tab inside the Model page with an arrow pointing to the Stop Endpoint button in Dedicated Inference API card](https://github.com/ultralytics/docs/releases/download/0/deploy-tab-model-page-stop-endpoint.avif)
 ## Shared Inference API

@@ -18,7 +18,7 @@ After a dataset is imported in [Ultralytics HUB](https://ultralytics.com/hub), y
 You can easily filter the [Roboflow](https://roboflow.com/?ref=ultralytics) datasets on the [Ultralytics HUB](https://ultralytics.com/hub) [Datasets](https://hub.ultralytics.com/datasets) page.
-![Ultralytics HUB screenshot of the Datasets page with Roboflow provider filter](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/integrations/hub_roboflow_1.jpg)
+![Ultralytics HUB screenshot of the Datasets page with Roboflow provider filter](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-datasets-page-roboflow-filter.avif)
 [Ultralytics HUB](https://ultralytics.com/hub) supports two types of integrations with [Roboflow](https://roboflow.com/?ref=ultralytics): [Universe](#universe) and [Workspace](#workspace).
@@ -32,23 +32,23 @@ When you export a [Roboflow](https://roboflow.com/?ref=ultralytics) dataset, sel
 You can import your [Roboflow](https://roboflow.com/?ref=ultralytics) dataset by clicking on the **Import** button.
-![Ultralytics HUB screenshot of the Dataset Import dialog with an arrow pointing to the Import button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/integrations/hub_roboflow_universe_import_1.jpg)
+![Ultralytics HUB screenshot of the Dataset Import dialog with an arrow pointing to the Import button](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-dataset-import-dialog.avif)
 Next, [train a model](./models.md#train-model) on your dataset.
-![Ultralytics HUB screenshot of the Dataset page of a Roboflow Universe dataset with an arrow pointing to the Train Model button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/integrations/hub_roboflow_universe_import_2.jpg)
+![Ultralytics HUB screenshot of the Dataset page of a Roboflow Universe dataset with an arrow pointing to the Train Model button](https://github.com/ultralytics/docs/releases/download/0/hub-roboflow-universe-import-2.avif)
 ##### Remove
 Navigate to the Dataset page of the [Roboflow](https://roboflow.com/?ref=ultralytics) dataset you want to remove, open the dataset actions dropdown and click on the **Remove** option.
-![Ultralytics HUB screenshot of the Dataset page of a Roboflow Universe dataset with an arrow pointing to the Remove option](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/integrations/hub_roboflow_universe_remove_1.jpg)
+![Ultralytics HUB screenshot of the Dataset page of a Roboflow Universe dataset with an arrow pointing to the Remove option](https://github.com/ultralytics/docs/releases/download/0/hub-roboflow-universe-remove.avif)
 ??? tip "Tip"
 You can remove an imported [Roboflow](https://roboflow.com/?ref=ultralytics) dataset directly from the [Datasets](https://hub.ultralytics.com/datasets) page.
-![Ultralytics HUB screenshot of the Datasets page with an arrow pointing to the Remove option of one of the Roboflow Universe datasets](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/integrations/hub_roboflow_remove_1.jpg)
+![Ultralytics HUB screenshot of the Datasets page with an arrow pointing to the Remove option of one of the Roboflow Universe datasets](https://github.com/ultralytics/docs/releases/download/0/hub-roboflow-remove-option.avif)
 #### Workspace
@@ -64,33 +64,33 @@ Type your [Roboflow](https://roboflow.com/?ref=ultralytics) Workspace private AP
 You can click on the **Get my API key** button, which will redirect you to the settings of your [Roboflow](https://roboflow.com/?ref=ultralytics) Workspace, from where you can obtain your private API key.
-![Ultralytics HUB screenshot of the Integrations page with an arrow pointing to the Integrations button in the sidebar and one to the Add button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/integrations/hub_roboflow_workspace_import_1.jpg)
+![Ultralytics HUB screenshot of the Integrations page with an arrow pointing to the Integrations button in the sidebar and one to the Add button](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-integrations-page.avif)
 This will connect your [Ultralytics HUB](https://ultralytics.com/hub) account with your [Roboflow](https://roboflow.com/?ref=ultralytics) Workspace and make your [Roboflow](https://roboflow.com/?ref=ultralytics) datasets available in [Ultralytics HUB](https://ultralytics.com/hub).
-![Ultralytics HUB screenshot of the Integrations page with an arrow pointing to one of the connected workspaces](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/integrations/hub_roboflow_workspace_import_2.jpg)
+![Ultralytics HUB screenshot of the Integrations page with an arrow pointing to one of the connected workspaces](https://github.com/ultralytics/docs/releases/download/0/hub-roboflow-workspace-import-2.avif)
 Next, [train a model](./models.md#train-model) on your dataset.
-![Ultralytics HUB screenshot of the Dataset page of a Roboflow Workspace dataset with an arrow pointing to the Train Model button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/integrations/hub_roboflow_workspace_import_3.jpg)
+![Ultralytics HUB screenshot of the Dataset page of a Roboflow Workspace dataset with an arrow pointing to the Train Model button](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-dataset-train-model.avif)
 ##### Remove
 Navigate to the [Integrations](https://hub.ultralytics.com/settings?tab=integrations) page by clicking on the **Integrations** button in the sidebar and click on the **Unlink** button of the [Roboflow](https://roboflow.com/?ref=ultralytics) Workspace you want to remove.
-![Ultralytics HUB screenshot of the Integrations page with an arrow pointing to the Integrations button in the sidebar and one to the Unlink button of one of the connected workspaces](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/integrations/hub_roboflow_workspace_remove_1.jpg)
+![Ultralytics HUB screenshot of the Integrations page with an arrow pointing to the Integrations button in the sidebar and one to the Unlink button of one of the connected workspaces](https://github.com/ultralytics/docs/releases/download/0/hub-roboflow-workspace-remove-1.avif)
 ??? tip "Tip"
 You can remove a connected [Roboflow](https://roboflow.com/?ref=ultralytics) Workspace directly from the Dataset page of one of the datasets from your [Roboflow](https://roboflow.com/?ref=ultralytics) Workspace.
-![Ultralytics HUB screenshot of the Dataset page of a Roboflow Workspace dataset with an arrow pointing to the remove option](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/integrations/hub_roboflow_workspace_remove_2.jpg)
+![Ultralytics HUB screenshot of the Dataset page of a Roboflow Workspace dataset with an arrow pointing to the remove option](https://github.com/ultralytics/docs/releases/download/0/hub-roboflow-workspace-remove-2.avif)
 ??? tip "Tip"
 You can remove a connected [Roboflow](https://roboflow.com/?ref=ultralytics) Workspace directly from the [Datasets](https://hub.ultralytics.com/datasets) page.
-![Ultralytics HUB screenshot of the Datasets page with an arrow pointing to the Remove option of one of the Roboflow Workspace datasets](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/integrations/hub_roboflow_remove_1.jpg)
+![Ultralytics HUB screenshot of the Datasets page with an arrow pointing to the Remove option of one of the Roboflow Workspace datasets](https://github.com/ultralytics/docs/releases/download/0/hub-roboflow-remove-option.avif)
 ## Models
@@ -98,7 +98,7 @@ Navigate to the [Integrations](https://hub.ultralytics.com/settings?tab=integrat
 After you [train a model](./models.md#train-model), you can [export it](./models.md#deploy-model) to 13 different formats, including ONNX, OpenVINO, CoreML, TensorFlow, Paddle and many others.
-![Ultralytics HUB screenshot of the Deploy tab inside the Model page with an arrow pointing to the Export card and all formats exported](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_deploy_model_1.jpg)
+![Ultralytics HUB screenshot of the Deploy tab inside the Model page with an arrow pointing to the Export card and all formats exported](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-deploy-export-formats.avif)
 The available export formats are presented in the table below.

@@ -24,13 +24,13 @@ The process is user-friendly and efficient, involving a simple three-step creati
 Navigate to the [Models](https://hub.ultralytics.com/models) page by clicking on the **Models** button in the sidebar and click on the **Train Model** button on the top right of the page.
-![Ultralytics HUB screenshot of the Models page with an arrow pointing to the Models button in the sidebar and one to the Train Model button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_train_model_2.jpg)
+![Ultralytics HUB screenshot of the Models page with an arrow pointing to the Models button in the sidebar and one to the Train Model button](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-train-model-page.avif)
 ??? tip "Tip"
 You can train a model directly from the [Home](https://hub.ultralytics.com/home) page.
-![Ultralytics HUB screenshot of the Home page with an arrow pointing to the Train Model card](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_train_model_1.jpg)
+![Ultralytics HUB screenshot of the Home page with an arrow pointing to the Train Model card](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-train-model-card.avif)
 This action will trigger the **Train Model** dialog, which has three simple steps:
@@ -38,19 +38,19 @@ This action will trigger the **Train Model** dialog, which has three simple steps
 In this step, you have to select the dataset you want to train your model on. After you select a dataset, click **Continue**.
-![Ultralytics HUB screenshot of the Train Model dialog with an arrow pointing to a dataset and one to the Continue button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_train_model_3.jpg)
+![Ultralytics HUB screenshot of the Train Model dialog with an arrow pointing to a dataset and one to the Continue button](https://github.com/ultralytics/docs/releases/download/0/hub-train-model-dialog-dataset-step.avif)
 ??? tip "Tip"
 You can skip this step if you train a model directly from the Dataset page.
-![Ultralytics HUB screenshot of the Dataset page with an arrow pointing to the Train Model button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_upload_dataset_9.jpg)
+![Ultralytics HUB screenshot of the Dataset page with an arrow pointing to the Train Model button](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-dataset-page-train-model-button.avif)
 ### 2. Model
 In this step, you have to choose the project in which you want to create your model, your model's name, and its architecture.
-![Ultralytics HUB screenshot of the Train Model dialog with arrows pointing to the project dropdown, model name and Continue button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_train_model_4.jpg)
+![Ultralytics HUB screenshot of the Train Model dialog with arrows pointing to the project dropdown, model name and Continue button](https://github.com/ultralytics/docs/releases/download/0/hub-train-model-dialog.avif)
 ??? note "Note"
@@ -60,7 +60,7 @@ In this step, you have to choose the project in which you want to create your mo
 If you opened the **Train Model** dialog from the Project page, [Ultralytics HUB](https://ultralytics.com/hub) will pre-select the project you were in.
-![Ultralytics HUB screenshot of the Project page with an arrow pointing to the Train Model button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/projects/hub_create_project_5.jpg)
+![Ultralytics HUB screenshot of the Project page with an arrow pointing to the Train Model button](https://github.com/ultralytics/docs/releases/download/0/hub-train-model-button.avif)
 If you haven't created a project yet, you can set your project's name in this step and it will be created together with your model.
@@ -70,7 +70,7 @@ In this step, you have to choose the project in which you want to create your mo
 By default, your model will use a pre-trained model (trained on the [COCO](https://docs.ultralytics.com/datasets/detect/coco) dataset) to reduce training time. You can change this behavior and tweak your model's configuration by opening the **Advanced Model Configuration** accordion.
-![Ultralytics HUB screenshot of the Train Model dialog with an arrow pointing to the Advanced Model Configuration accordion](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_train_model_5.jpg)
+![Ultralytics HUB screenshot of the Train Model dialog with an arrow pointing to the Advanced Model Configuration accordion](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-train-model-dialog-2.avif)
 !!! note "Note"
@@ -89,7 +89,7 @@ By default, your model will use a pre-trained model (trained on the [COCO](https
 Alternatively, you can start training from one of your previously trained models by clicking on the **Custom** tab.
-![Ultralytics HUB screenshot of the Train Model dialog with an arrow pointing to the Custom tab](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_train_model_6.jpg)
+![Ultralytics HUB screenshot of the Train Model dialog with an arrow pointing to the Custom tab](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-train-model-dialog-3.avif)
 When you're happy with your model configuration, click **Continue**.
@@ -101,7 +101,7 @@ In this step, you will start training your model.
 When you are on this step, you have the option to close the **Train Model** dialog and start training your model from the Model page later.
-![Ultralytics HUB screenshot of the Model page with an arrow pointing to the Start Training card](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/cloud-training/hub_cloud_training_train_model_2.jpg)
+![Ultralytics HUB screenshot of the Model page with an arrow pointing to the Start Training card](https://github.com/ultralytics/docs/releases/download/0/hub-cloud-training-model-page-start-training.avif)
 [Ultralytics HUB](https://ultralytics.com/hub) offers three training options:
@@ -113,7 +113,7 @@ In this step, you will start training your model.
 You need to [upgrade](./pro.md#upgrade) to the [Pro Plan](./pro.md) in order to access [Ultralytics Cloud](./cloud-training.md).
-![Ultralytics HUB screenshot of the Train Model dialog](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_train_model_7.jpg)
+![Ultralytics HUB screenshot of the Train Model dialog](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-train-model-dialog-4.avif)
 To train models using our [Cloud Training](./cloud-training.md) solution, read the [Ultralytics Cloud Training](./cloud-training.md) documentation.
@@ -125,19 +125,19 @@ To start training your model using [Google Colab](https://colab.research.google.
 <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab">
 </a>
-![Ultralytics HUB screenshot of the Train Model dialog with arrows pointing to instructions](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_train_model_8.jpg)
+![Ultralytics HUB screenshot of the Train Model dialog with arrows pointing to instructions](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-train-model-dialog-instructions.avif)
 When the training starts, you can click **Done** and monitor the training progress on the Model page.
-![Ultralytics HUB screenshot of the Train Model dialog with an arrow pointing to the Done button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_train_model_9.jpg)
+![Ultralytics HUB screenshot of the Train Model dialog with an arrow pointing to the Done button](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-train-model-done-button.avif)
-![Ultralytics HUB screenshot of the Model page of a model that is currently training](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_train_model_10.jpg)
+![Ultralytics HUB screenshot of the Model page of a model that is currently training](https://github.com/ultralytics/docs/releases/download/0/hub-train-model-progress.avif)
 !!! note "Note"
 If the training stops and a checkpoint was saved, you can resume training your model from the Model page.
-![Ultralytics HUB screenshot of the Model page with an arrow pointing to the Resume Training card](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_train_model_11.jpg)
+![Ultralytics HUB screenshot of the Model page with an arrow pointing to the Resume Training card](https://github.com/ultralytics/docs/releases/download/0/hub-train-model-resume-training.avif)
 #### c. Bring your own agent
@@ -153,7 +153,7 @@ When the training starts, you can click **Done** and monitor the training progre
 To start training your model using your own agent, follow the instructions shown in the [Ultralytics HUB](https://ultralytics.com/hub) **Train Model** dialog.
-![Ultralytics HUB screenshot of the Train Model dialog with arrows pointing to instructions](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_train_model_12.jpg)
+![Ultralytics HUB screenshot of the Train Model dialog with arrows pointing to instructions](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-train-model-dialog-instructions-1.avif)
 Install the `ultralytics` package from [PyPI](https://pypi.org/project/ultralytics).
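The dialog shows the exact snippet with your model's ID and API key filled in; as a rough sketch of its shape, with `API_KEY` and `MODEL_ID` as placeholders:

```python
# pip install ultralytics
from ultralytics import YOLO, hub

# Authenticate this machine with your HUB account (placeholder key)
hub.login("API_KEY")

# Load the model created in HUB and train it with the configuration saved there
model = YOLO("https://hub.ultralytics.com/models/MODEL_ID")
results = model.train()
```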
@@ -165,15 +165,15 @@ Next, use the Python code provided to start training the model.
 When the training starts, you can click **Done** and monitor the training progress on the Model page.
-![Ultralytics HUB screenshot of the Train Model dialog with an arrow pointing to the Done button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_train_model_13.jpg)
+![Ultralytics HUB screenshot of the Train Model dialog with an arrow pointing to the Done button](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-train-model-done-button-1.avif)
-![Ultralytics HUB screenshot of the Model page of a model that is currently training](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_train_model_14.jpg)
+![Ultralytics HUB screenshot of the Model page of a model that is currently training](https://github.com/ultralytics/docs/releases/download/0/model-training-progress.avif)
 !!! note "Note"
 If the training stops and a checkpoint was saved, you can resume training your model from the Model page.
-![Ultralytics HUB screenshot of the Model page with an arrow pointing to the Resume Training card](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_train_model_15.jpg)
+![Ultralytics HUB screenshot of the Model page with an arrow pointing to the Resume Training card](https://github.com/ultralytics/docs/releases/download/0/hub-train-model-resume-training-1.avif)
 ## Analyze Model
@ -181,23 +181,23 @@ After you [train a model](#train-model), you can analyze the model metrics.
The **Train** tab presents the most important metrics carefully grouped based on the task. The **Train** tab presents the most important metrics carefully grouped based on the task.
![Ultralytics HUB screenshot of the Model page of a trained model](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_analyze_model_1.jpg) ![Ultralytics HUB screenshot of the Model page of a trained model](https://github.com/ultralytics/docs/releases/download/0/hub-analyze-model.avif)
To access all model metrics, click on the **Charts** tab. To access all model metrics, click on the **Charts** tab.
![Ultralytics HUB screenshot of the Preview tab inside the Model page with an arrow pointing to the Charts tab](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_analyze_model_2.jpg) ![Ultralytics HUB screenshot of the Preview tab inside the Model page with an arrow pointing to the Charts tab](https://github.com/ultralytics/docs/releases/download/0/hub-analyze-model-2.avif)
??? tip "Tip" ??? tip "Tip"
Each chart can be enlarged for better visualization. Each chart can be enlarged for better visualization.
![Ultralytics HUB screenshot of the Train tab inside the Model page with an arrow pointing to the expand icon of one of the charts](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_analyze_model_3.jpg) ![Ultralytics HUB screenshot of the Train tab inside the Model page with an arrow pointing to the expand icon of one of the charts](https://github.com/ultralytics/docs/releases/download/0/hub-analyze-model-train-tab-expand-icon.avif)
![Ultralytics HUB screenshot of the Train tab inside the Model page with one of the charts expanded](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_analyze_model_4.jpg) ![Ultralytics HUB screenshot of the Train tab inside the Model page with one of the charts expanded](https://github.com/ultralytics/docs/releases/download/0/hub-analyze-model-train-tab-expanded-chart.avif)
Furthermore, to properly analyze the data, you can utilize the zoom feature. Furthermore, to properly analyze the data, you can utilize the zoom feature.
![Ultralytics HUB screenshot of the Train tab inside the Model page with one of the charts expanded and zoomed](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_analyze_model_5.jpg) ![Ultralytics HUB screenshot of the Train tab inside the Model page with one of the charts expanded and zoomed](https://github.com/ultralytics/docs/releases/download/0/hub-analyze-model-zoomed-chart.avif)
## Preview Model

After you [train a model](#train-model), you can preview it by clicking on the **Preview** tab.
In the **Test** card, you can select a preview image from the dataset used during training or upload an image from your device.

![Ultralytics HUB screenshot of the Preview tab inside the Model page with an arrow pointing to Charts tab and one to the Test card](https://github.com/ultralytics/docs/releases/download/0/hub-preview-model-charts-test-card.avif)
!!! note "Note" !!! note "Note"
You can also use your camera to take a picture and run inference on it directly. You can also use your camera to take a picture and run inference on it directly.
![Ultralytics HUB screenshot of the Preview tab inside the Model page with an arrow pointing to Camera tab inside the Test card](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_preview_model_2.jpg) ![Ultralytics HUB screenshot of the Preview tab inside the Model page with an arrow pointing to Camera tab inside the Test card](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-preview-camera-tab.avif)
Furthermore, you can preview your model in real-time directly on your [iOS](https://apps.apple.com/xk/app/ultralytics/id1583935240) or [Android](https://play.google.com/store/apps/details?id=com.ultralytics.ultralytics_app) mobile device by [downloading](https://ultralytics.com/app_install) our [Ultralytics HUB App](app/index.md).

![Ultralytics HUB screenshot of the Deploy tab inside the Model page with arrow pointing to the Real-Time Preview card](https://github.com/ultralytics/docs/releases/download/0/deploy-tab-real-time-preview-card.avif)
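The same kind of preview can be reproduced programmatically; a minimal sketch, assuming downloaded weights and a local test image (both paths are placeholders):

```python
from ultralytics import YOLO

model = YOLO("path/to/best.pt")

# Run inference on a single image and visualize the result
results = model.predict("path/to/image.jpg", conf=0.25)
results[0].show()
```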
## Deploy Model

After you [train a model](#train-model), you can export it to 13 different formats, including ONNX, OpenVINO, CoreML, TensorFlow, Paddle and many others.

![Ultralytics HUB screenshot of the Deploy tab inside the Model page with an arrow pointing to the Export card and all formats exported](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-deploy-export-formats.avif)
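The same exports are also available locally through the `ultralytics` package; a minimal sketch, assuming downloaded weights (placeholder path):

```python
from ultralytics import YOLO

model = YOLO("path/to/best.pt")

# Export to ONNX; other format strings include "openvino", "coreml", "tflite", etc.
model.export(format="onnx")
```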
??? tip "Tip" ??? tip "Tip"
You can customize the export options of each format if you open the export actions dropdown and click on the **Advanced** option. You can customize the export options of each format if you open the export actions dropdown and click on the **Advanced** option.
![Ultralytics HUB screenshot of the Deploy tab inside the Model page with an arrow pointing to the Advanced option of one of the formats](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_deploy_model_2.jpg) ![Ultralytics HUB screenshot of the Deploy tab inside the Model page with an arrow pointing to the Advanced option of one of the formats](https://github.com/ultralytics/docs/releases/download/0/hub-deploy-model-advanced-option.avif)
!!! note "Note" !!! note "Note"
@ -235,7 +235,7 @@ After you [train a model](#train-model), you can export it to 13 different forma
You can also use our [Inference API](./inference-api.md) in production. You can also use our [Inference API](./inference-api.md) in production.
![Ultralytics HUB screenshot of the Deploy tab inside the Model page with an arrow pointing to the Ultralytics Inference API card](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/inference-api/hub_inference_api_1.jpg) ![Ultralytics HUB screenshot of the Deploy tab inside the Model page with an arrow pointing to the Ultralytics Inference API card](https://github.com/ultralytics/docs/releases/download/0/hub-inference-api-card.avif)
Read the [Ultralytics Inference API](./inference-api.md) documentation for more information. Read the [Ultralytics Inference API](./inference-api.md) documentation for more information.
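As a rough illustration of what a production call can look like, here is a sketch using `requests`; the endpoint shape, header name, and parameters are assumptions based on the Inference API docs, and `MODEL_ID` and `API_KEY` are placeholders you would replace with your own values:

```python
import requests

url = "https://api.ultralytics.com/v1/predict/MODEL_ID"  # placeholder model ID
headers = {"x-api-key": "API_KEY"}  # placeholder API key
data = {"size": 640, "confidence": 0.25, "iou": 0.45}

# Send a local image for inference and print the JSON predictions
with open("path/to/image.jpg", "rb") as f:
    response = requests.post(url, headers=headers, data=data, files={"image": f})

print(response.json())
```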
## Share Model
Navigate to the Model page of the model you want to share, open the model actions dropdown and click on the **Share** option. This action will trigger the **Share Model** dialog.

![Ultralytics HUB screenshot of the Model page with an arrow pointing to the Share option](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-share-model.avif)
??? tip "Tip" ??? tip "Tip"
You can also share a model directly from the [Models](https://hub.ultralytics.com/models) page or from the Project page of the project where your model is located. You can also share a model directly from the [Models](https://hub.ultralytics.com/models) page or from the Project page of the project where your model is located.
![Ultralytics HUB screenshot of the Models page with an arrow pointing to the Share option of one of the models](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_share_model_2.jpg) ![Ultralytics HUB screenshot of the Models page with an arrow pointing to the Share option of one of the models](https://github.com/ultralytics/docs/releases/download/0/hub-share-model-2.avif)
Set the general access to "Unlisted" and click **Save**.

![Ultralytics HUB screenshot of the Share Model dialog with an arrow pointing to the dropdown and one to the Save button](https://github.com/ultralytics/docs/releases/download/0/hub-share-model-dialog.avif)

Now, anyone who has the direct link to your model can view it.
You can easily click on the model's link shown in the **Share Model** dialog to copy it.

![Ultralytics HUB screenshot of the Share Model dialog with an arrow pointing to the model's link](https://github.com/ultralytics/docs/releases/download/0/hub-share-model-link.avif)
## Edit Model

Navigate to the Model page of the model you want to edit, open the model actions dropdown and click on the **Edit** option. This action will trigger the **Update Model** dialog.

![Ultralytics HUB screenshot of the Model page with an arrow pointing to the Edit option](https://github.com/ultralytics/docs/releases/download/0/hub-edit-model-1.avif)
??? tip "Tip" ??? tip "Tip"
You can also edit a model directly from the [Models](https://hub.ultralytics.com/models) page or from the Project page of the project where your model is located. You can also edit a model directly from the [Models](https://hub.ultralytics.com/models) page or from the Project page of the project where your model is located.
![Ultralytics HUB screenshot of the Models page with an arrow pointing to the Edit option of one of the models](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_edit_model_2.jpg) ![Ultralytics HUB screenshot of the Models page with an arrow pointing to the Edit option of one of the models](https://github.com/ultralytics/docs/releases/download/0/hub-edit-model-2.avif)
Apply the desired modifications to your model and then confirm the changes by clicking **Save**.

![Ultralytics HUB screenshot of the Update Model dialog with an arrow pointing to the Save button](https://github.com/ultralytics/docs/releases/download/0/hub-edit-model-save-button.avif)

## Delete Model

Navigate to the Model page of the model you want to delete, open the model actions dropdown and click on the **Delete** option. This action will delete the model.

![Ultralytics HUB screenshot of the Model page with an arrow pointing to the Delete option](https://github.com/ultralytics/docs/releases/download/0/hub-delete-model-1.avif)
??? tip "Tip" ??? tip "Tip"
You can also delete a model directly from the [Models](https://hub.ultralytics.com/models) page or from the Project page of the project where your model is located. You can also delete a model directly from the [Models](https://hub.ultralytics.com/models) page or from the Project page of the project where your model is located.
![Ultralytics HUB screenshot of the Models page with an arrow pointing to the Delete option of one of the models](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_delete_model_2.jpg) ![Ultralytics HUB screenshot of the Models page with an arrow pointing to the Delete option of one of the models](https://github.com/ultralytics/docs/releases/download/0/hub-delete-model-2.avif)
!!! note "Note" !!! note "Note"
If you change your mind, you can restore the model from the [Trash](https://hub.ultralytics.com/trash) page. If you change your mind, you can restore the model from the [Trash](https://hub.ultralytics.com/trash) page.
![Ultralytics HUB screenshot of the Trash page with an arrow pointing to the Restore option of one of the models](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/models/hub_delete_model_3.jpg) ![Ultralytics HUB screenshot of the Trash page with an arrow pointing to the Restore option of one of the models](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-trash-restore-option.avif)

The Pro Plan provides early access to upcoming features and includes enhanced benefits.

## Upgrade
You can upgrade to the Pro Plan from the [Billing & License](https://hub.ultralytics.com/settings?tab=billing) tab on the [Settings](https://hub.ultralytics.com/settings) page by clicking on the **Upgrade** button.

![Ultralytics HUB screenshot of the Settings page Billing & License tab with an arrow pointing to the Upgrade button](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-settings-upgrade-button.avif)

Next, select the Pro Plan.

![Ultralytics HUB screenshot of the Upgrade dialog with an arrow pointing to the Select Plan button](https://github.com/ultralytics/docs/releases/download/0/hub-pro-upgrade-select-plan.avif)
!!! tip "Tip" !!! tip "Tip"
You can save 20% if you choose the annual Pro Plan. You can save 20% if you choose the annual Pro Plan.
![Ultralytics HUB screenshot of the Upgrade dialog with an arrow pointing to the Save 20% toggle and one to the Select Plan button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/pro/hub_pro_upgrade_3.jpg) ![Ultralytics HUB screenshot of the Upgrade dialog with an arrow pointing to the Save 20% toggle and one to the Select Plan button](https://github.com/ultralytics/docs/releases/download/0/hub-pro-upgrade-save-20-toggle.avif)
Fill in your details during the checkout.

![Ultralytics HUB screenshot of the Checkout with an arrow pointing to the checkbox for saving the payment information for future purchases](https://github.com/ultralytics/docs/releases/download/0/hub-pro-upgrade-save-payment-info.avif)

!!! tip "Tip"
That's it!

![Ultralytics HUB screenshot of the Payment Successful dialog](https://github.com/ultralytics/docs/releases/download/0/payment-successful-dialog.avif)

## Account Balance
The account balance is used to pay for [Ultralytics Cloud Training](./cloud-training.md) resources.
In order to top up your account balance, simply click on the **Top-Up** button.

![Ultralytics HUB screenshot of the Settings page Billing & License tab with an arrow pointing to the Top-Up button](https://github.com/ultralytics/docs/releases/download/0/hub-pro-account-balance-top-up-button.avif)
Next, set the amount you want to top up.

![Ultralytics HUB screenshot of the Checkout with an arrow pointing to the Change amount button](https://github.com/ultralytics/docs/releases/download/0/hub-pro-account-balance-change-amount.avif)
That's it!

![Ultralytics HUB screenshot of the Payment Successful dialog](https://github.com/ultralytics/docs/releases/download/0/payment-successful-dialog-1.avif)

This creates a unified and organized workspace that facilitates easier model management.

## Create Project
Navigate to the [Projects](https://hub.ultralytics.com/projects) page by clicking on the **Projects** button in the sidebar and click on the **Create Project** button on the top right of the page.

![Ultralytics HUB screenshot of the Projects page with an arrow pointing to the Projects button in the sidebar and one to the Create Project button](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-create-project-page.avif)
??? tip "Tip" ??? tip "Tip"
You can create a project directly from the [Home](https://hub.ultralytics.com/home) page. You can create a project directly from the [Home](https://hub.ultralytics.com/home) page.
![Ultralytics HUB screenshot of the Home page with an arrow pointing to the Create Project card](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/projects/hub_create_project_1.jpg) ![Ultralytics HUB screenshot of the Home page with an arrow pointing to the Create Project card](https://github.com/ultralytics/docs/releases/download/0/hub-create-project-card.avif)
This action will trigger the **Create Project** dialog, opening up a suite of options for tailoring your project to your needs.
You have the additional option to enrich your project with a description and a unique image.
When you're happy with your project configuration, click **Create**.

![Ultralytics HUB screenshot of the Create Project dialog with an arrow pointing to the Create button](https://github.com/ultralytics/docs/releases/download/0/hub-create-project-dialog.avif)

After your project is created, you will be able to access it from the [Projects](https://hub.ultralytics.com/projects) page.

![Ultralytics HUB screenshot of the Projects page with an arrow pointing to one of the projects](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-projects-page.avif)

Next, [train a model](./models.md#train-model) inside your project.

![Ultralytics HUB screenshot of the Project page with an arrow pointing to the Train Model button](https://github.com/ultralytics/docs/releases/download/0/hub-train-model-button.avif)

## Share Project
Navigate to the Project page of the project you want to share, open the project actions dropdown and click on the **Share** option. This action will trigger the **Share Project** dialog.

![Ultralytics HUB screenshot of the Project page with an arrow pointing to the Share option](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-share-project-dialog.avif)
??? tip "Tip" ??? tip "Tip"
You can share a project directly from the [Projects](https://hub.ultralytics.com/projects) page. You can share a project directly from the [Projects](https://hub.ultralytics.com/projects) page.
![Ultralytics HUB screenshot of the Projects page with an arrow pointing to the Share option of one of the projects](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/projects/hub_share_project_2.jpg) ![Ultralytics HUB screenshot of the Projects page with an arrow pointing to the Share option of one of the projects](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-share-project-option.avif)
Set the general access to "Unlisted" and click **Save**.

![Ultralytics HUB screenshot of the Share Project dialog with an arrow pointing to the dropdown and one to the Save button](https://github.com/ultralytics/docs/releases/download/0/hub-share-project-dialog.avif)

!!! Warning "Warning"

Now, anyone who has the direct link to your project can view it.
You can easily click on the project's link shown in the **Share Project** dialog to copy it.

![Ultralytics HUB screenshot of the Share Project dialog with an arrow pointing to the project's link](https://github.com/ultralytics/docs/releases/download/0/hub-share-project-dialog-arrow.avif)

## Edit Project

Navigate to the Project page of the project you want to edit, open the project actions dropdown and click on the **Edit** option. This action will trigger the **Update Project** dialog.

![Ultralytics HUB screenshot of the Project page with an arrow pointing to the Edit option](https://github.com/ultralytics/docs/releases/download/0/hub-edit-project-1.avif)
??? tip "Tip" ??? tip "Tip"
You can edit a project directly from the [Projects](https://hub.ultralytics.com/projects) page. You can edit a project directly from the [Projects](https://hub.ultralytics.com/projects) page.
![Ultralytics HUB screenshot of the Projects page with an arrow pointing to the Edit option of one of the projects](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/projects/hub_edit_project_2.jpg) ![Ultralytics HUB screenshot of the Projects page with an arrow pointing to the Edit option of one of the projects](https://github.com/ultralytics/docs/releases/download/0/hub-edit-project-2.avif)
Apply the desired modifications to your project and then confirm the changes by clicking **Save**.

![Ultralytics HUB screenshot of the Update Project dialog with an arrow pointing to the Save button](https://github.com/ultralytics/docs/releases/download/0/hub-edit-project-save-button.avif)

## Delete Project

Navigate to the Project page of the project you want to delete, open the project actions dropdown and click on the **Delete** option. This action will delete the project.

![Ultralytics HUB screenshot of the Project page with an arrow pointing to the Delete option](https://github.com/ultralytics/docs/releases/download/0/hub-delete-project-option.avif)
??? tip "Tip" ??? tip "Tip"
You can delete a project directly from the [Projects](https://hub.ultralytics.com/projects) page. You can delete a project directly from the [Projects](https://hub.ultralytics.com/projects) page.
![Ultralytics HUB screenshot of the Projects page with an arrow pointing to the Delete option of one of the projects](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/projects/hub_delete_project_2.jpg) ![Ultralytics HUB screenshot of the Projects page with an arrow pointing to the Delete option of one of the projects](https://github.com/ultralytics/docs/releases/download/0/hub-delete-project-option-1.avif)
!!! Warning "Warning" !!! Warning "Warning"
@ -124,35 +124,35 @@ Navigate to the Project page of the project you want to delete, open the project
If you change your mind, you can restore the project from the [Trash](https://hub.ultralytics.com/trash) page. If you change your mind, you can restore the project from the [Trash](https://hub.ultralytics.com/trash) page.
![Ultralytics HUB screenshot of the Trash page with an arrow pointing to Trash button in the sidebar and one to the Restore option of one of the projects](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/projects/hub_delete_project_3.jpg) ![Ultralytics HUB screenshot of the Trash page with an arrow pointing to Trash button in the sidebar and one to the Restore option of one of the projects](https://github.com/ultralytics/docs/releases/download/0/hub-delete-project-restore-option.avif)
## Compare Models

Navigate to the Project page of the project where the models you want to compare are located. To use the model comparison feature, click on the **Charts** tab.

![Ultralytics HUB screenshot of the Project page with an arrow pointing to the Charts tab](https://github.com/ultralytics/docs/releases/download/0/hub-compare-models-1.avif)

This will display all the relevant charts. Each chart corresponds to a different metric and contains the performance of each model for that metric. The models are represented by different colors, and you can hover over each data point to get more information.

![Ultralytics HUB screenshot of the Charts tab inside the Project page](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-charts-tab.avif)
??? tip "Tip" ??? tip "Tip"
Each chart can be enlarged for better visualization. Each chart can be enlarged for better visualization.
![Ultralytics HUB screenshot of the Charts tab inside the Project page with an arrow pointing to the expand icon](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/projects/hub_compare_models_3.jpg) ![Ultralytics HUB screenshot of the Charts tab inside the Project page with an arrow pointing to the expand icon](https://github.com/ultralytics/docs/releases/download/0/hub-compare-models-expand-icon.avif)
![Ultralytics HUB screenshot of the Charts tab inside the Project page with one of the charts expanded](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/projects/hub_compare_models_4.jpg) ![Ultralytics HUB screenshot of the Charts tab inside the Project page with one of the charts expanded](https://github.com/ultralytics/docs/releases/download/0/hub-compare-models-expanded-chart.avif)
Furthermore, to properly analyze the data, you can utilize the zoom feature. Furthermore, to properly analyze the data, you can utilize the zoom feature.
![Ultralytics HUB screenshot of the Charts tab inside the Project page with one of the charts expanded and zoomed](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/projects/hub_compare_models_5.jpg) ![Ultralytics HUB screenshot of the Charts tab inside the Project page with one of the charts expanded and zoomed](https://github.com/ultralytics/docs/releases/download/0/hub-charts-tab-expanded-zoomed.avif)
??? tip "Tip" ??? tip "Tip"
You have the flexibility to customize your view by selectively hiding certain models. This feature allows you to concentrate on the models of interest. You have the flexibility to customize your view by selectively hiding certain models. This feature allows you to concentrate on the models of interest.
![Ultralytics HUB screenshot of the Charts tab inside the Project page with an arrow pointing to the hide/unhide icon of one of the model](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/projects/hub_compare_models_6.jpg) ![Ultralytics HUB screenshot of the Charts tab inside the Project page with an arrow pointing to the hide/unhide icon of one of the model](https://github.com/ultralytics/docs/releases/download/0/hub-compare-models-hide-icon.avif)
## Reorder Models

Navigate to the Project page of the project where the models you want to reorder are located. Click on the designated reorder icon of the model you want to move and drag it to the desired location.

![Ultralytics HUB screenshot of the Project page with an arrow pointing to the reorder icon](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-reorder-models.avif)
## Transfer Models

Navigate to the Project page of the project where the model you want to move is located, open the model actions dropdown and click on the **Transfer** option. This action will trigger the **Transfer Model** dialog.

![Ultralytics HUB screenshot of the Project page with an arrow pointing to the Transfer option of one of the models](https://github.com/ultralytics/docs/releases/download/0/hub-transfer-models-1.avif)
??? tip "Tip" ??? tip "Tip"
You can also transfer a model directly from the [Models](https://hub.ultralytics.com/models) page. You can also transfer a model directly from the [Models](https://hub.ultralytics.com/models) page.
![Ultralytics HUB screenshot of the Models page with an arrow pointing to the Transfer option of one of the models](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/projects/hub_transfer_models_2.jpg) ![Ultralytics HUB screenshot of the Models page with an arrow pointing to the Transfer option of one of the models](https://github.com/ultralytics/docs/releases/download/0/hub-transfer-models-2.avif)
Select the project you want to transfer the model to and click **Save**.

![Ultralytics HUB screenshot of the Transfer Model dialog with an arrow pointing to the dropdown and one to the Save button](https://github.com/ultralytics/docs/releases/download/0/hub-transfer-models-dialog.avif)

[Ultralytics HUB](https://ultralytics.com/hub) offers a variety of easy signup options. You can register and log in using your Google, Apple, or GitHub accounts, or simply with your email address.

![Ultralytics HUB screenshot of the Signup page](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-signup-page.avif)
During the signup, you will be asked to complete your profile.

![Ultralytics HUB screenshot of the Signup page profile form](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-signup-profile-form.avif)
??? tip "Tip" ??? tip "Tip"
You can update your profile from the [Account](https://hub.ultralytics.com/settings?tab=account) tab on the [Settings](https://hub.ultralytics.com/settings) page. You can update your profile from the [Account](https://hub.ultralytics.com/settings?tab=account) tab on the [Settings](https://hub.ultralytics.com/settings) page.
![Ultralytics HUB screenshot of the Settings page Account tab with an arrow pointing to the Profile card](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/quickstart/hub_get_started_3.jpg) ![Ultralytics HUB screenshot of the Settings page Account tab with an arrow pointing to the Profile card](https://github.com/ultralytics/docs/releases/download/0/hub-settings-account-profile.avif)
## Home

After signing in, you will be directed to the [Home](https://hub.ultralytics.com/home) page.
The sidebar conveniently offers links to important modules of the platform, such as [Datasets](https://hub.ultralytics.com/datasets), [Projects](https://hub.ultralytics.com/projects), and [Models](https://hub.ultralytics.com/models).

![Ultralytics HUB screenshot of the Home page](https://github.com/ultralytics/docs/releases/download/0/hub-home.avif)

### Recent

You can easily search globally or directly access your last updated [Datasets](https://hub.ultralytics.com/datasets), [Projects](https://hub.ultralytics.com/projects), or [Models](https://hub.ultralytics.com/models) using the Recent card on the [Home](https://hub.ultralytics.com/home) page.

![Ultralytics HUB screenshot of the Home page with an arrow pointing to the Recent card](https://github.com/ultralytics/docs/releases/download/0/hub-recent-card.avif)

### Upload Dataset

You can upload a dataset directly from the [Home](https://hub.ultralytics.com/home) page.

![Ultralytics HUB screenshot of the Home page with an arrow pointing to the Upload Dataset card](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-upload-dataset-card.avif)

Read more about [datasets](https://docs.ultralytics.com/hub/datasets).
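Before uploading, you can optionally sanity-check a dataset archive locally; a minimal sketch, assuming the `check_dataset` helper from the `ultralytics` package and a detection dataset ZIP (path and task value are placeholders):

```python
from ultralytics.hub import check_dataset

# Verifies the archive structure and data.yaml before uploading to HUB
check_dataset("path/to/dataset.zip", task="detect")
```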
### Create Project
You can create a project directly from the [Home](https://hub.ultralytics.com/home) page.

![Ultralytics HUB screenshot of the Home page with an arrow pointing to the Create Project card](https://github.com/ultralytics/docs/releases/download/0/hub-create-project-card.avif)

Read more about [projects](https://docs.ultralytics.com/hub/projects).
### Train Model
You can train a model directly from the [Home](https://hub.ultralytics.com/home) page.

![Ultralytics HUB screenshot of the Home page with an arrow pointing to the Train Model card](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-train-model-card.avif)

Read more about [models](https://docs.ultralytics.com/hub/models).
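Training can also be started from your own environment while still syncing with HUB; a minimal sketch, assuming a HUB API key and a model URL copied from HUB (both placeholders):

```python
from ultralytics import YOLO, hub

hub.login("API_KEY")  # placeholder HUB API key

# Load the HUB model by its URL and start training; progress streams to HUB
model = YOLO("https://hub.ultralytics.com/models/MODEL_ID")
results = model.train()
```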
## Feedback
We value your feedback! Feel free to leave a review at any time.

![Ultralytics HUB screenshot of the Home page with an arrow pointing to the Feedback button](https://github.com/ultralytics/docs/releases/download/0/hub-feedback-button.avif)

![Ultralytics HUB screenshot of the Feedback dialog](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-feedback-dialog.avif)

??? info "Info"
When reporting a bug, please include your Environment Details from the [Support](https://hub.ultralytics.com/support) page.

![Ultralytics HUB screenshot of the Support page with an arrow pointing to Support button in the sidebar and one to the Copy Environment Details button](https://github.com/ultralytics/docs/releases/download/0/hub-support-page.avif)

??? tip "Tip"

Here, you'll learn how to manage team members, share resources seamlessly, and collaborate effectively.

## Create Team
You need to [upgrade](./pro.md#upgrade) to the [Pro Plan](./pro.md) in order to create a team.

![Ultralytics HUB screenshot of the Settings page Teams tab with an arrow pointing to the Upgrade button](https://github.com/ultralytics/docs/releases/download/0/hub-create-team-settings-page.avif)

Navigate to the [Teams](https://hub.ultralytics.com/settings?tab=teams) page by clicking on the **Teams** tab in the [Settings](https://hub.ultralytics.com/settings) page and click on the **Create Team** button.

![Ultralytics HUB screenshot of the Teams page with an arrow pointing to the Create Team button](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-create-team-button.avif)

This action will trigger the **Create Team** dialog.
You have the additional option to enrich your team with a description and a unique image.
When you're happy with your team configuration, click **Create**.

![Ultralytics HUB screenshot of the Create Team dialog with an arrow pointing to the Create button](https://github.com/ultralytics/docs/releases/download/0/hub-create-team-dialog.avif)

After your team is created, you will be able to access it from the [Teams](https://hub.ultralytics.com/settings?tab=teams) page.

![Ultralytics HUB screenshot of the Teams page with an arrow pointing to one of the teams](https://github.com/ultralytics/docs/releases/download/0/hub-teams-page-arrow-pointing-to-team.avif)

## Edit Team
Navigate to the [Teams](https://hub.ultralytics.com/settings?tab=teams) page, open the team actions dropdown of the team you want to edit and click on the **Edit** option. This action will trigger the **Update Team** dialog.

![Ultralytics HUB screenshot of the Teams page with an arrow pointing to the Edit option of one of the teams](https://github.com/ultralytics/docs/releases/download/0/hub-edit-team-1.avif)
Apply the desired modifications to your team and then confirm the changes by clicking **Save**.

![Ultralytics HUB screenshot of the Update Team dialog with an arrow pointing to the Save button](https://github.com/ultralytics/docs/releases/download/0/hub-update-team-save-button.avif)

## Delete Team
Navigate to the [Teams](https://hub.ultralytics.com/settings?tab=teams) page, open the team actions dropdown of the team you want to delete and click on the **Delete** option.

![Ultralytics HUB screenshot of the Teams page with an arrow pointing to the Delete option of one of the teams](https://github.com/ultralytics/docs/releases/download/0/hub-delete-team-option.avif)
!!! Warning "Warning" !!! Warning "Warning"
@ -64,23 +64,23 @@ Navigate to the [Teams](https://hub.ultralytics.com/settings?tab=teams) page, op
Navigate to the Team page of the team to which you want to add a new member and click on the **Invite Member** button. This action will trigger the **Invite Member** dialog.

![Ultralytics HUB screenshot of the Team page with an arrow pointing to the Invite Member button](https://github.com/ultralytics/docs/releases/download/0/hub-invite-member-button.avif)
Type the email, select the role of the new member, and click **Invite**.

![Ultralytics HUB screenshot of the Invite Member dialog with an arrow pointing to the Invite button](https://github.com/ultralytics/docs/releases/download/0/hub-invite-member-dialog.avif)

![Ultralytics HUB screenshot of the Team page with a new member invited](https://github.com/ultralytics/docs/releases/download/0/hub-invite-member-3.avif)
??? tip "Tip" ??? tip "Tip"
You can cancel the invite before the new member accepts it. You can cancel the invite before the new member accepts it.
![Ultralytics HUB screenshot of the Team page with an arrow pointing to the Cancel Invite option of one of the members](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/teams/hub_invite_member_4.jpg) ![Ultralytics HUB screenshot of the Team page with an arrow pointing to the Cancel Invite option of one of the members](https://github.com/ultralytics/docs/releases/download/0/hub-invite-member-cancel-invite.avif)
The **Pending** status disappears after the new member accepts the invite.

![Ultralytics HUB screenshot of the Team page with two members](https://github.com/ultralytics/docs/releases/download/0/team-page-two-members.avif)

??? tip "Tip"
The **Admin** role allows inviting and removing members, as well as removing shared datasets or projects.

![Ultralytics HUB screenshot of the Team page with an arrow pointing to the Change Role option of one of the members](https://github.com/ultralytics/docs/releases/download/0/hub-invite-member-change-role.avif)
### Seats

When you remove a unique member from the last team they are a member of, the number of seats decreases.
You can see the number of seats on the [Teams](https://hub.ultralytics.com/settings?tab=teams) page.

![Ultralytics HUB screenshot of the Teams page with an arrow pointing to the number of seats](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-teams-number-of-seats.avif)
## Remove Member

Navigate to the Team page of the team from which you want to remove a member, open the member actions dropdown, and click on the **Remove** option.

![Ultralytics HUB screenshot of the Team page with an arrow pointing to the Remove option of one of the members](https://github.com/ultralytics/docs/releases/download/0/hub-remove-member.avif)

## Join Team
When you are invited to a team, you receive an in-app notification.
You can view your notifications by clicking on the **View** button on the **Notifications** card on the [Home](https://hub.ultralytics.com/home) page.

![Ultralytics HUB screenshot of the Home page with an arrow pointing to the View button on the Notifications card](https://github.com/ultralytics/docs/releases/download/0/hub-join-team-1.avif)

Alternatively, you can view your notifications by accessing the [Notifications](https://hub.ultralytics.com/notifications) page directly.

![Ultralytics HUB screenshot of the Notifications page with an arrow pointing to one of the notifications](https://github.com/ultralytics/docs/releases/download/0/notifications-page-arrow.avif)

You can decide whether to join the team on the Team page of the team to which you were invited.

If you want to join the team, click on the **Join Team** button.

![Ultralytics HUB screenshot of the Team page with an arrow pointing to the Join Team button](https://github.com/ultralytics/docs/releases/download/0/hub-join-team-button.avif)

If you don't want to join the team, click on the **Reject Invitation** button.

![Ultralytics HUB screenshot of the Team page with an arrow pointing to the Reject Invitation button](https://github.com/ultralytics/docs/releases/download/0/hub-join-team-reject-invitation.avif)
??? tip "Tip" ??? tip "Tip"
You can join the team directly from the [Teams](https://hub.ultralytics.com/settings?tab=teams) page. You can join the team directly from the [Teams](https://hub.ultralytics.com/settings?tab=teams) page.
![Ultralytics HUB screenshot of the Teams page with an arrow pointing to the Join Team button of one of the teams](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/teams/hub_join_team_5.jpg) ![Ultralytics HUB screenshot of the Teams page with an arrow pointing to the Join Team button of one of the teams](https://github.com/ultralytics/docs/releases/download/0/hub-join-team-button-1.avif)
## Leave Team

Navigate to the Team page of the team you want to leave and click on the **Leave Team** button.

![Ultralytics HUB screenshot of the Team page with an arrow pointing to the Leave Team button](https://github.com/ultralytics/docs/releases/download/0/hub-leave-team-1.avif)

## Share Dataset

Navigate to the Team page of the team you want to share your dataset with and click on the **Add Dataset** button.

![Ultralytics HUB screenshot of the Team page with an arrow pointing to the Add Dataset button](https://github.com/ultralytics/docs/releases/download/0/hub-share-dataset-button.avif)

Select the dataset you want to share with your team and click on the **Add** button.

![Ultralytics HUB screenshot of the Add Dataset to Team dialog with an arrow pointing to the Add button](https://github.com/ultralytics/docs/releases/download/0/hub-share-dataset-add-button.avif)

That's it! Your team now has access to your dataset.

![Ultralytics HUB screenshot of the Team page with a dataset shared](https://github.com/ultralytics/docs/releases/download/0/hub-share-dataset-team-page.avif)
??? tip "Tip" ??? tip "Tip"
As a team owner or team admin, you can remove a shared dataset. As a team owner or team admin, you can remove a shared dataset.
![Ultralytics HUB screenshot of the Team page with an arrow pointing to the Remove option of one of the datasets shared](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/teams/hub_share_dataset_4.jpg) ![Ultralytics HUB screenshot of the Team page with an arrow pointing to the Remove option of one of the datasets shared](https://github.com/ultralytics/docs/releases/download/0/hub-share-dataset-remove-option.avif)
## Share Project
Navigate to the Team page of the team you want to share your project with and click on the **Add Project** button.
![Ultralytics HUB screenshot of the Team page with an arrow pointing to the Add Project button](https://github.com/ultralytics/docs/releases/download/0/hub-share-project-button.avif)
Select the project you want to share with your team and click on the **Add** button.
![Ultralytics HUB screenshot of the Add Project to Team dialog with an arrow pointing to the Add button](https://github.com/ultralytics/docs/releases/download/0/hub-share-project-add-button.avif)
That's it! Your team now has access to your project.
![Ultralytics HUB screenshot of the Team page with a project shared](https://github.com/ultralytics/docs/releases/download/0/team-page-project-shared.avif)
??? tip "Tip" ??? tip "Tip"
As a team owner or team admin, you can remove a shared project. As a team owner or team admin, you can remove a shared project.
![Ultralytics HUB screenshot of the Team page with an arrow pointing to the Remove option of one of the projects shared](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/teams/hub_share_project_4.jpg) ![Ultralytics HUB screenshot of the Team page with an arrow pointing to the Remove option of one of the projects shared](https://github.com/ultralytics/docs/releases/download/0/hub-share-project-remove-option.avif)
!!! note "Note" !!! note "Note"
When you share a project with your team, all models inside the project are shared as well. When you share a project with your team, all models inside the project are shared as well.
![Ultralytics HUB screenshot of the Team page with a model shared](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/teams/hub_share_project_5.jpg) ![Ultralytics HUB screenshot of the Team page with a model shared](https://github.com/ultralytics/docs/releases/download/0/hub-share-project-team-model.avif)

@@ -5,7 +5,7 @@ keywords: Ultralytics, YOLOv8, object detection, image segmentation, deep learni
---
<div align="center">
<a href="https://www.ultralytics.com/events/yolovision" target="_blank"><img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/ultralytics-yolov8-banner.avif" alt="Ultralytics YOLO banner"></a>
<a href="https://docs.ultralytics.com/zh/">中文</a> |
<a href="https://docs.ultralytics.com/ko/">한국어</a> |
<a href="https://docs.ultralytics.com/ja/">日本語</a> |

@@ -13,7 +13,7 @@ This guide will take you through the process of deploying YOLOv8 PyTorch models
## Amazon SageMaker
<p align="center">
<img width="640" src="https://github.com/ultralytics/docs/releases/download/0/amazon-sagemaker-overview.avif" alt="Amazon SageMaker Overview">
</p>
[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a machine learning service from Amazon Web Services (AWS) that simplifies the process of building, training, and deploying machine learning models. It provides a broad range of tools for handling various aspects of machine learning workflows. This includes automated features for tuning models, options for training models at scale, and straightforward methods for deploying models into production. SageMaker supports popular machine learning frameworks, offering the flexibility needed for diverse projects. Its features also cover data labeling, workflow management, and performance analysis.
@@ -23,7 +23,7 @@ This guide will take you through the process of deploying YOLOv8 PyTorch models
Deploying YOLOv8 on Amazon SageMaker lets you use its managed environment for real-time inference and take advantage of features like autoscaling. Take a look at the AWS architecture below.
<p align="center">
<img width="640" src="https://github.com/ultralytics/docs/releases/download/0/aws-architecture.avif" alt="AWS Architecture">
</p>
### Step 1: Set Up Your AWS Environment
@@ -147,7 +147,7 @@ Now that your YOLOv8 model is deployed, it's important to test its performance a
- Run the Test Notebook: Follow the instructions within the notebook to test the deployed SageMaker endpoint. This includes sending an image to the endpoint and running inference. Then, you'll plot the output to visualize the model's performance and accuracy, as shown below; a minimal invocation sketch follows the image.
<p align="center">
<img width="640" src="https://github.com/ultralytics/docs/releases/download/0/testing-results-yolov8.avif" alt="Testing Results YOLOv8">
</p>
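For reference, here is a minimal sketch of sending an image to a deployed endpoint with `boto3`; the endpoint name is a placeholder, and the response format depends on the inference script bundled with your model:

```python
import boto3

# Placeholder name; use the endpoint created during deployment
ENDPOINT_NAME = "yolov8-pytorch-endpoint"

runtime = boto3.client("sagemaker-runtime")

# Read a local test image as the raw request body
with open("test-image.jpg", "rb") as f:
    payload = f.read()

response = runtime.invoke_endpoint(
    EndpointName=ENDPOINT_NAME,
    ContentType="image/jpeg",
    Body=payload,
)

# The body is a streaming object; its contents depend on your inference handler
print(response["Body"].read())
```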
- Clean-Up Resources: The test notebook will also guide you through cleaning up the endpoint and the hosted model, as sketched below. This is an important step for managing costs and resources effectively, especially if you do not plan to use the deployed model immediately.
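If you prefer to clean up programmatically instead of through the notebook, a minimal `boto3` sketch (all resource names are placeholders for the ones created during deployment):

```python
import boto3

sagemaker = boto3.client("sagemaker")

# Delete the endpoint, its configuration, and the hosted model (placeholder names)
sagemaker.delete_endpoint(EndpointName="yolov8-pytorch-endpoint")
sagemaker.delete_endpoint_config(EndpointConfigName="yolov8-pytorch-endpoint-config")
sagemaker.delete_model(ModelName="yolov8-pytorch-model")
```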

@@ -13,7 +13,7 @@ MLOps bridges the gap between creating and deploying machine learning models in
## ClearML
<p align="center">
<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/clearml-overview.avif" alt="ClearML Overview">
</p>
[ClearML](https://clear.ml/) is an open-source MLOps platform designed to automate, monitor, and orchestrate machine learning workflows. Its key features include automated logging of all training and inference data for full experiment reproducibility, an intuitive web UI for easy data visualization and analysis, advanced hyperparameter optimization algorithms, and robust model management for efficient deployment across various platforms.
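As a rough illustration of that automated logging, the sketch below assumes `clearml` is installed and configured (for example via `clearml-init`); the project and task names are placeholders. Ultralytics detects the initialized task and logs the training run to it automatically:

```python
from clearml import Task

from ultralytics import YOLO

# Placeholder project/task names; adjust to your workspace
task = Task.init(project_name="YOLOv8-Experiments", task_name="baseline-train")

model = YOLO("yolov8n.pt")
model.train(data="coco8.yaml", epochs=3, imgsz=640)  # metrics and artifacts are captured by ClearML
```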
@@ -175,7 +175,7 @@ This setup is applicable to cloud VMs, local GPUs, or laptops. ClearML Autoscale
ClearML's user-friendly interface allows easy cloning, editing, and enqueuing of tasks. Users can clone an existing experiment, adjust parameters or other details through the UI, and enqueue the task for execution; the same flow can also be scripted, as sketched after the image below. This streamlined process ensures that the ClearML Agent executing the task uses updated configurations, making it ideal for iterative experimentation and model fine-tuning.
<p align="center"><br>
<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/cloning-editing-enqueuing-clearml.avif" alt="Cloning, Editing, and Enqueuing with ClearML">
</p>
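A minimal SDK sketch of the same clone-edit-enqueue flow, assuming an existing task ID and a ClearML Agent listening on the `default` queue (both placeholders):

```python
from clearml import Task

# Placeholder ID of an existing experiment to clone
source = Task.get_task(task_id="abc123")

# Clone it, tweak a hyperparameter, and queue it for a ClearML Agent to run
cloned = Task.clone(source_task=source, name="baseline-train-lr-sweep")
cloned.set_parameter("Args/lr0", 0.001)  # parameter path is a placeholder; check your task's config
Task.enqueue(cloned, queue_name="default")
```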
## Summary

@@ -96,7 +96,7 @@ Let's dive into what you'll see on the Comet ML dashboard once your YOLOv8 model
The experiment panels section of the Comet ML dashboard organizes and presents the different runs and their metrics, such as segment mask loss, class loss, precision, and mean average precision.
<p align="center">
<img width="640" src="https://github.com/ultralytics/docs/releases/download/0/comet-ml-dashboard-overview.avif" alt="Comet ML Overview">
</p>
**Metrics**
@@ -104,7 +104,7 @@ The experiment panels section of the Comet ML dashboard organize and present the
In the metrics section, you can also examine the metrics in tabular format, displayed in a dedicated pane as illustrated here.
<p align="center">
<img width="640" src="https://github.com/ultralytics/docs/releases/download/0/comet-ml-metrics-tabular.avif" alt="Comet ML Overview">
</p>
**Interactive Confusion Matrix**
@@ -112,7 +112,7 @@ In the metrics section, you have the option to examine the metrics in a tabular
The confusion matrix, found in the Confusion Matrix tab, provides an interactive way to assess the model's classification accuracy. It details the correct and incorrect predictions, allowing you to understand the model's strengths and weaknesses.
<p align="center">
<img width="640" src="https://github.com/ultralytics/docs/releases/download/0/comet-ml-interactive-confusion-matrix.avif" alt="Comet ML Overview">
</p>
**System Metrics**
@@ -120,7 +120,7 @@ The confusion matrix, found in the Confusion Matrix tab, provides an interactive
Comet ML logs system metrics to help identify bottlenecks in the training process, including GPU utilization, GPU memory usage, CPU utilization, and RAM usage. These are essential for monitoring the efficiency of resource usage during model training.
<p align="center">
<img width="640" src="https://github.com/ultralytics/docs/releases/download/0/comet-ml-system-metrics.avif" alt="Comet ML Overview">
</p>
## Customizing Comet ML Logging
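As one hedged example of such customization, the Ultralytics Comet callback reads several environment variables; the names below are assumptions based on recent `ultralytics` releases, so verify them against your installed version:

```python
import os

# Assumed variable names read by the Ultralytics Comet callback; set them before training starts
os.environ["COMET_MODE"] = "offline"  # log locally instead of to comet.com
os.environ["COMET_MAX_IMAGE_PREDICTIONS"] = "100"  # cap the number of logged prediction images
os.environ["COMET_EVAL_BATCH_LOGGING_INTERVAL"] = "2"  # log every 2nd validation batch

from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")
model.train(data="coco8-seg.yaml", epochs=3)
```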

@@ -13,7 +13,7 @@ The CoreML export format allows you to optimize your [Ultralytics YOLOv8](https:
## CoreML
<p align="center">
<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/coreml-overview.avif" alt="CoreML Overview">
</p>
[CoreML](https://developer.apple.com/documentation/coreml) is Apple's foundational machine learning framework that builds upon Accelerate, BNNS, and Metal Performance Shaders. It provides a machine-learning model format that seamlessly integrates into iOS applications and supports tasks such as image analysis, natural language processing, audio-to-text conversion, and sound analysis.
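For orientation, the export itself (covered in detail later in this guide) is a single call in the Ultralytics Python API:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.export(format="coreml")  # produces an .mlpackage ready for Xcode/iOS integration
```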
@@ -27,7 +27,7 @@ Apple's CoreML framework offers robust features for on-device machine learning.
- **Comprehensive Model Support**: Converts and runs models from popular frameworks like TensorFlow, PyTorch, scikit-learn, XGBoost, and LibSVM.
<p align="center">
<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/coreml-supported-models.avif" alt="CoreML Supported Models">
</p>
- **On-device Machine Learning**: Ensures data privacy and swift processing by executing models directly on the user's device, eliminating the need for network connectivity.

@@ -13,7 +13,7 @@ Integrating DVCLive with [Ultralytics YOLOv8](https://ultralytics.com) transform
## DVCLive
<p align="center">
<img width="640" src="https://github.com/ultralytics/docs/releases/download/0/dvclive-overview.avif" alt="DVCLive Overview">
</p>
[DVCLive](https://dvc.org/doc/dvclive), developed by DVC, is an open-source tool for experiment tracking in machine learning. Integrating seamlessly with Git and DVC, it automates the logging of crucial experiment data like model parameters and training metrics. Designed for simplicity, DVCLive enables effortless comparison and analysis of multiple runs, enhancing the efficiency of machine learning projects with intuitive data visualization and analysis tools.
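As a rough sketch of that integration: with `dvclive` installed in a Git-initialized project, an Ultralytics training run is logged automatically, with no extra logging code required:

```python
# Assumes `pip install ultralytics dvclive` inside a Git repository
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# With dvclive present, the Ultralytics DVC callback records parameters
# and per-epoch metrics for later comparison with `dvc exp` commands
model.train(data="coco8.yaml", epochs=3, imgsz=640)
```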
@@ -138,7 +138,7 @@ dvc plots diff $(dvc exp list --names-only)
After executing this command, DVC generates plots comparing the metrics across different experiments, which are saved as HTML files. Below is an example image illustrating typical plots generated by this process. The image showcases various graphs, including those representing mAP, recall, precision, loss values, and more, providing a visual overview of key performance metrics:
<p align="center">
<img width="640" src="https://github.com/ultralytics/docs/releases/download/0/dvclive-comparative-plots.avif" alt="DVCLive Plots">
</p>
### Displaying DVC Plots
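A minimal sketch for viewing the generated report inline in a Jupyter notebook, assuming the plots were written to DVC's default `./dvc_plots/index.html` location:

```python
from IPython.display import HTML

# Render the DVC-generated comparison report inside the notebook
HTML(filename="./dvc_plots/index.html")
```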

@@ -15,7 +15,7 @@ The export to TFLite Edge TPU format feature allows you to optimize your [Ultral
Exporting models to TensorFlow Edge TPU makes machine learning tasks fast and efficient. This technology suits applications with limited power, computing resources, and connectivity. The Edge TPU, a hardware accelerator by Google, speeds up TensorFlow Lite models on edge devices. The image below shows an example of the process involved.
<p align="center">
<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/tflite-edge-tpu-compile-workflow.avif" alt="TFLite Edge TPU">
</p>
The Edge TPU works with quantized models. Quantization makes models smaller and faster without losing much accuracy, which suits the limited resources of edge computing: it reduces latency and enables quick local data processing without cloud dependency. Local processing also keeps user data private and secure since it is not sent to a remote server.
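For orientation, the export (detailed later in this guide) is a single call in the Ultralytics Python API; note that the Edge TPU compiler must be installed on the system:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.export(format="edgetpu")  # produces a quantized *_edgetpu.tflite model
```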

