Reformat Markdown code blocks (#12795)

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Glenn Jocher committed fceea033ad (parent 2af71d15a6) via GitHub
Files changed (lines changed in parentheses):

1. CONTRIBUTING.md (140)
2. docs/en/datasets/classify/caltech101.md (4)
3. docs/en/datasets/classify/caltech256.md (4)
4. docs/en/datasets/classify/cifar10.md (4)
5. docs/en/datasets/classify/cifar100.md (4)
6. docs/en/datasets/classify/fashion-mnist.md (4)
7. docs/en/datasets/classify/imagenet.md (4)
8. docs/en/datasets/classify/imagenet10.md (4)
9. docs/en/datasets/classify/imagenette.md (12)
10. docs/en/datasets/classify/imagewoof.md (10)
11. docs/en/datasets/classify/index.md (4)
12. docs/en/datasets/classify/mnist.md (4)
13. docs/en/datasets/detect/african-wildlife.md (6)
14. docs/en/datasets/detect/argoverse.md (4)
15. docs/en/datasets/detect/brain-tumor.md (6)
16. docs/en/datasets/detect/coco.md (4)
17. docs/en/datasets/detect/coco8.md (4)
18. docs/en/datasets/detect/globalwheat2020.md (4)
19. docs/en/datasets/detect/index.md (6)
20. docs/en/datasets/detect/lvis.md (4)
21. docs/en/datasets/detect/objects365.md (4)
22. docs/en/datasets/detect/open-images-v7.md (4)
23. docs/en/datasets/detect/roboflow-100.md (21)
24. docs/en/datasets/detect/sku-110k.md (4)
25. docs/en/datasets/detect/visdrone.md (4)
26. docs/en/datasets/detect/voc.md (4)
27. docs/en/datasets/detect/xview.md (4)
28. docs/en/datasets/explorer/api.md (48)
29. docs/en/datasets/index.md (5)
30. docs/en/datasets/obb/dota-v2.md (22)
31. docs/en/datasets/obb/dota8.md (4)
32. docs/en/datasets/obb/index.md (6)
33. docs/en/datasets/pose/coco.md (4)
34. docs/en/datasets/pose/coco8-pose.md (4)
35. docs/en/datasets/pose/index.md (6)
36. docs/en/datasets/pose/tiger-pose.md (4)
37. docs/en/datasets/segment/carparts-seg.md (4)
38. docs/en/datasets/segment/coco.md (4)
39. docs/en/datasets/segment/coco8-seg.md (4)
40. docs/en/datasets/segment/crack-seg.md (4)
41. docs/en/datasets/segment/index.md (8)
42. docs/en/datasets/segment/package-seg.md (4)
43. docs/en/datasets/track/index.md (2)
44. docs/en/guides/conda-quickstart.md (4)
45. docs/en/guides/coral-edge-tpu-on-raspberry-pi.md (6)
46. docs/en/guides/distance-calculation.md (5)
47. docs/en/guides/heatmaps.md (113)
48. docs/en/guides/hyperparameter-tuning.md (4)
49. docs/en/guides/instance-segmentation-and-tracking.md (23)
50. docs/en/guides/isolating-segmentation-objects.md (6)
51. docs/en/guides/kfold-cross-validation.md (81)
52. docs/en/guides/nvidia-jetson.md (12)
53. docs/en/guides/object-blurring.md (12)
54. docs/en/guides/object-counting.md (56)
55. docs/en/guides/object-cropping.md (15)
56. docs/en/guides/parking-management.md (22)
57. docs/en/guides/queue-management.md (78)
58. docs/en/guides/raspberry-pi.md (16)
59. docs/en/guides/sahi-tiled-inference.md (23)
60. docs/en/guides/security-alarm-system.md (39)
61. docs/en/guides/speed-estimation.md (14)
62. docs/en/guides/triton-inference-server.md (37)
63. docs/en/guides/view-results-in-terminal.md (3)
64. docs/en/guides/vision-eye.md (61)
65. docs/en/guides/workouts-monitoring.md (38)
66. docs/en/guides/yolo-common-issues.md (8)
67. docs/en/guides/yolo-thread-safe-inference.md (9)
68. docs/en/help/contributing.md (16)
69. docs/en/help/privacy.md (4)
70. docs/en/integrations/amazon-sagemaker.md (11)
71. docs/en/integrations/clearml.md (7)
72. docs/en/integrations/comet.md (14)
73. docs/en/integrations/coreml.md (8)
74. docs/en/integrations/dvc.md (2)
75. docs/en/integrations/edge-tpu.md (8)
76. docs/en/integrations/gradio.md (9)
77. docs/en/integrations/mlflow.md (2)
78. docs/en/integrations/ncnn.md (16)
79. docs/en/integrations/neural-magic.md (5)
80. docs/en/integrations/onnx.md (8)
81. docs/en/integrations/openvino.md (12)
82. docs/en/integrations/paddlepaddle.md (8)
83. docs/en/integrations/ray-tune.md (26)
84. docs/en/integrations/tensorrt.md (12)
85. docs/en/integrations/tf-graphdef.md (8)
86. docs/en/integrations/tf-savedmodel.md (8)
87. docs/en/integrations/tfjs.md (8)
88. docs/en/integrations/tflite.md (16)
89. docs/en/integrations/torchscript.md (8)
90. docs/en/integrations/weights-biases.md (2)
91. docs/en/models/fast-sam.md (18)
92. docs/en/models/index.md (6)
93. docs/en/models/mobile-sam.md (8)
94. docs/en/models/rtdetr.md (6)
95. docs/en/models/sam.md (34)
96. docs/en/models/yolo-nas.md (6)
97. docs/en/models/yolo-world.md (33)
98. docs/en/models/yolov3.md (6)
99. docs/en/models/yolov5.md (6)
100. docs/en/models/yolov6.md (6)

Some files were not shown because too many files have changed in this diff.

@@ -1,96 +1,132 @@
# Contributing to YOLOv8 🚀

We love your input! We want to make contributing to YOLOv8 as easy and transparent as possible, whether it's:

- Reporting a bug
- Discussing the current state of the code
- Submitting a fix
- Proposing a new feature
- Becoming a maintainer

YOLOv8 works so well due to our combined community effort, and for every small improvement you contribute you will be helping push the frontiers of what's possible in AI 😃!

## Submitting a Pull Request (PR) 🛠

Submitting a PR is easy! This example shows how to submit a PR for updating `requirements.txt` in 4 steps:

### 1. Select File to Update

Select `requirements.txt` to update by clicking on it in GitHub.

<p align="center"><img width="800" alt="PR_step1" src="https://user-images.githubusercontent.com/26833433/122260847-08be2600-ced4-11eb-828b-8287ace4136c.png"></p>

### 2. Click 'Edit this file'

Button is in top-right corner.

<p align="center"><img width="800" alt="PR_step2" src="https://user-images.githubusercontent.com/26833433/122260844-06f46280-ced4-11eb-9eec-b8a24be519ca.png"></p>

### 3. Make Changes

Change `matplotlib` version from `3.2.2` to `3.3`.

<p align="center"><img width="800" alt="PR_step3" src="https://user-images.githubusercontent.com/26833433/122260853-0a87e980-ced4-11eb-9fd2-3650fb6e0842.png"></p>

### 4. Preview Changes and Submit PR

Click on the **Preview changes** tab to verify your updates. At the bottom of the screen select 'Create a **new branch** for this commit', assign your branch a descriptive name such as `fix/matplotlib_version` and click the green **Propose changes** button. All done, your PR is now submitted to YOLOv8 for review and approval 😃!

<p align="center"><img width="800" alt="PR_step4" src="https://user-images.githubusercontent.com/26833433/122260856-0b208000-ced4-11eb-8e8e-77b6151cbcc3.png"></p>

### PR recommendations

To allow your work to be integrated as seamlessly as possible, we advise you to:

- ✅ Verify your PR is **up-to-date** with `ultralytics/ultralytics` `main` branch. If your PR is behind you can update your code by clicking the 'Update branch' button or by running `git pull` and `git merge main` locally.

<p align="center"><img width="751" alt="PR recommendation 1" src="https://user-images.githubusercontent.com/26833433/187295893-50ed9f44-b2c9-4138-a614-de69bd1753d7.png"></p>

- ✅ Verify all YOLOv8 Continuous Integration (CI) **checks are passing**.

<p align="center"><img width="751" alt="PR recommendation 2" src="https://user-images.githubusercontent.com/26833433/187296922-545c5498-f64a-4d8c-8300-5fa764360da6.png"></p>

- ✅ Reduce changes to the absolute **minimum** required for your bug fix or feature addition. _"It is not daily increase but daily decrease, hack away the unessential. The closer to the source, the less wastage there is."_ — Bruce Lee

### Docstrings

Not all functions or classes require docstrings but when they do, we follow [google-style docstrings format](https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings). Here is an example:

```python
"""
What the function does. Performs NMS on given detection predictions.

Args:
    arg1: The description of the 1st argument
    arg2: The description of the 2nd argument

Returns:
    What the function returns. Empty if nothing is returned.

Raises:
    Exception Class: When and why this exception can be raised by the function.
"""
```

## Submitting a Bug Report 🐛

If you spot a problem with YOLOv8 please submit a Bug Report!

For us to start investigating a possible problem we need to be able to reproduce it ourselves first. We've created a few short guidelines below to help users provide what we need in order to get started.

When asking a question, people will be better able to provide help if you provide **code** that they can easily understand and use to **reproduce** the problem. This is referred to by community members as creating a [minimum reproducible example](https://docs.ultralytics.com/help/minimum_reproducible_example/). Your code that reproduces the problem should be:

- ✅ **Minimal** – Use as little code as possible that still produces the same problem
- ✅ **Complete** – Provide **all** parts someone else needs to reproduce your problem in the question itself
- ✅ **Reproducible** – Test the code you're about to provide to make sure it reproduces the problem

In addition to the above requirements, for [Ultralytics](https://ultralytics.com/) to provide assistance your code should be:

- ✅ **Current** – Verify that your code is up-to-date with current GitHub [main](https://github.com/ultralytics/ultralytics/tree/main) branch, and if necessary `git pull` or `git clone` a new copy to ensure your problem has not already been resolved by previous commits.
- ✅ **Unmodified** – Your problem must be reproducible without any modifications to the codebase in this repository. [Ultralytics](https://ultralytics.com/) does not provide support for custom code ⚠.

If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛 **Bug Report** [template](https://github.com/ultralytics/ultralytics/issues/new/choose) and providing a [minimum reproducible example](https://docs.ultralytics.com/help/minimum_reproducible_example/) to help us better understand and diagnose your problem.

## License

By contributing, you agree that your contributions will be licensed under the [AGPL-3.0 license](https://choosealicense.com/licenses/agpl-3.0/).

---
comments: true
description: Learn how to contribute to Ultralytics YOLO projects – guidelines for pull requests, reporting bugs, code conduct and CLA signing.
keywords: Ultralytics, YOLO, open-source, contribute, pull request, bug report, coding guidelines, CLA, code of conduct, GitHub
---

# Contributing to Ultralytics Open-Source YOLO Repositories

First of all, thank you for your interest in contributing to Ultralytics open-source YOLO repositories! Your contributions will help improve the project and benefit the community. This document provides guidelines and best practices to get you started.

## Table of Contents

1. [Code of Conduct](#code-of-conduct)
2. [Contributing via Pull Requests](#contributing-via-pull-requests)
    - [CLA Signing](#cla-signing)
    - [Google-Style Docstrings](#google-style-docstrings)
    - [GitHub Actions CI Tests](#github-actions-ci-tests)
3. [Reporting Bugs](#reporting-bugs)
4. [License](#license)
5. [Conclusion](#conclusion)

## Code of Conduct

All contributors are expected to adhere to the [Code of Conduct](https://docs.ultralytics.com/help/code_of_conduct/) to ensure a welcoming and inclusive environment for everyone.

## Contributing via Pull Requests

We welcome contributions in the form of pull requests. To make the review process smoother, please follow these guidelines:

1. **[Fork the repository](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo)**: Fork the Ultralytics YOLO repository to your own GitHub account.
2. **[Create a branch](https://docs.github.com/en/desktop/making-changes-in-a-branch/managing-branches-in-github-desktop)**: Create a new branch in your forked repository with a descriptive name for your changes.
3. **Make your changes**: Make the changes you want to contribute. Ensure that your changes follow the coding style of the project and do not introduce new errors or warnings.
4. **[Test your changes](https://github.com/ultralytics/ultralytics/tree/main/tests)**: Test your changes locally to ensure that they work as expected and do not introduce new issues.
5. **[Commit your changes](https://docs.github.com/en/desktop/making-changes-in-a-branch/committing-and-reviewing-changes-to-your-project-in-github-desktop)**: Commit your changes with a descriptive commit message. Make sure to include any relevant issue numbers in your commit message.
6. **[Create a pull request](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request)**: Create a pull request from your forked repository to the main Ultralytics YOLO repository. In the pull request description, provide a clear explanation of your changes and how they improve the project.

### CLA Signing

Before we can accept your pull request, you need to sign a [Contributor License Agreement (CLA)](https://docs.ultralytics.com/help/CLA/). This is a legal document stating that you agree to the terms of contributing to the Ultralytics YOLO repositories. The CLA ensures that your contributions are properly licensed and that the project can continue to be distributed under the AGPL-3.0 license.

To sign the CLA, follow the instructions provided by the CLA bot after you submit your PR and add a comment in your PR saying:

```
I have read the CLA Document and I sign the CLA
```

### Google-Style Docstrings

When adding new functions or classes, please include a [Google-style docstring](https://google.github.io/styleguide/pyguide.html) to provide clear and concise documentation for other developers. This will help ensure that your contributions are easy to understand and maintain.

#### Google-style

This example shows a Google-style docstring. Note that both input and output `types` must always be enclosed by parentheses, i.e. `(bool)`.

```python
def example_function(arg1, arg2=4):
    """
    Example function that demonstrates Google-style docstrings.

    Args:
        arg1 (int): The first argument.
        arg2 (int): The second argument. Default value is 4.

    Returns:
        (bool): True if successful, False otherwise.

    Examples:
        >>> result = example_function(1, 2)  # returns False
    """
    if arg1 == arg2:
        return True
    return False
```

#### Google-style with type hints

This example shows both a Google-style docstring and argument and return type hints; neither requires the other, and either can be used on its own.

```python
def example_function(arg1: int, arg2: int = 4) -> bool:
    """
    Example function that demonstrates Google-style docstrings.

    Args:
        arg1: The first argument.
        arg2: The second argument. Default value is 4.

    Returns:
        True if successful, False otherwise.

    Examples:
        >>> result = example_function(1, 2)  # returns False
    """
    if arg1 == arg2:
        return True
    return False
```

#### Single-line

Smaller or simpler functions can utilize a single-line docstring. Note the docstring must use 3 double-quotes, and be a complete sentence starting with a capital letter and ending with a period.

```python
def example_small_function(arg1: int, arg2: int = 4) -> bool:
    """Example function that demonstrates a single-line docstring."""
    return arg1 == arg2
```

### GitHub Actions CI Tests

Before your pull request can be merged, all GitHub Actions [Continuous Integration](https://docs.ultralytics.com/help/CI/) (CI) tests must pass. These tests include linting, unit tests, and other checks to ensure that your changes meet the quality standards of the project. Make sure to review the output of the GitHub Actions and fix any issues that arise.

## Reporting Bugs

We appreciate bug reports as they play a crucial role in maintaining the project's quality. When reporting bugs it is important to provide a [Minimum Reproducible Example](https://docs.ultralytics.com/help/minimum_reproducible_example/): a clear, concise code example that replicates the issue. This helps in quick identification and resolution of the bug.

## License

Ultralytics embraces the [GNU Affero General Public License v3.0 (AGPL-3.0)](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) for its repositories, promoting openness, transparency, and collaborative enhancement in software development. This strong copyleft license ensures that all users and developers retain the freedom to use, modify, and share the software. It fosters community collaboration, ensuring that any improvements remain accessible to all.

Users and developers are encouraged to familiarize themselves with the terms of AGPL-3.0 to contribute effectively and ethically to the Ultralytics open-source community.

## Conclusion

Thank you for your interest in contributing to [Ultralytics open-source](https://github.com/ultralytics) YOLO projects. Your participation is crucial in shaping the future of our software and fostering a community of innovation and collaboration. Whether you're improving code, reporting bugs, or suggesting features, your contributions make a significant impact.

We're eager to see your ideas in action and appreciate your commitment to advancing object detection technology. Let's continue to grow and innovate together in this exciting open-source journey. Happy coding! 🚀🌟
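Tying back to the bug-reporting guidance above, here is a minimal sketch of what a minimum reproducible example might look like; the weights and image URL are illustrative, and the snippet assumes a current, unmodified `ultralytics` installation:

```python
from ultralytics import YOLO

# Minimal: official weights, a public image, and nothing unrelated to the bug
model = YOLO("yolov8n.pt")
results = model.predict("https://ultralytics.com/images/bus.jpg")

# Complete and reproducible: print what you observe so others can compare
print(results[0].boxes)
```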

@@ -36,10 +36,10 @@ To train a YOLO model on the Caltech-101 dataset for 100 epochs, you can use the
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-cls.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='caltech101', epochs=100, imgsz=416)
results = model.train(data="caltech101", epochs=100, imgsz=416)
```
=== "CLI"

@@ -36,10 +36,10 @@ To train a YOLO model on the Caltech-256 dataset for 100 epochs, you can use the
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-cls.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='caltech256', epochs=100, imgsz=416)
results = model.train(data="caltech256", epochs=100, imgsz=416)
```
=== "CLI"

@@ -39,10 +39,10 @@ To train a YOLO model on the CIFAR-10 dataset for 100 epochs with an image size
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-cls.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='cifar10', epochs=100, imgsz=32)
results = model.train(data="cifar10", epochs=100, imgsz=32)
```
=== "CLI"

@@ -39,10 +39,10 @@ To train a YOLO model on the CIFAR-100 dataset for 100 epochs with an image size
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-cls.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='cifar100', epochs=100, imgsz=32)
results = model.train(data="cifar100", epochs=100, imgsz=32)
```
=== "CLI"

@@ -53,10 +53,10 @@ To train a CNN model on the Fashion-MNIST dataset for 100 epochs with an image s
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-cls.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='fashion-mnist', epochs=100, imgsz=28)
results = model.train(data="fashion-mnist", epochs=100, imgsz=28)
```
=== "CLI"

@@ -49,10 +49,10 @@ To train a deep learning model on the ImageNet dataset for 100 epochs with an im
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-cls.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='imagenet', epochs=100, imgsz=224)
results = model.train(data="imagenet", epochs=100, imgsz=224)
```
=== "CLI"

@@ -35,10 +35,10 @@ To test a deep learning model on the ImageNet10 dataset with an image size of 22
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-cls.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='imagenet10', epochs=5, imgsz=224)
results = model.train(data="imagenet10", epochs=5, imgsz=224)
```
=== "CLI"

@@ -37,10 +37,10 @@ To train a model on the ImageNette dataset for 100 epochs with a standard image
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-cls.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='imagenette', epochs=100, imgsz=224)
results = model.train(data="imagenette", epochs=100, imgsz=224)
```
=== "CLI"
@@ -72,10 +72,10 @@ To use these datasets, simply replace 'imagenette' with 'imagenette160' or 'imag
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-cls.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
# Train the model with ImageNette160
results = model.train(data='imagenette160', epochs=100, imgsz=160)
results = model.train(data="imagenette160", epochs=100, imgsz=160)
```
=== "CLI"
@@ -93,10 +93,10 @@ To use these datasets, simply replace 'imagenette' with 'imagenette160' or 'imag
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-cls.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
# Train the model with ImageNette320
results = model.train(data='imagenette320', epochs=100, imgsz=320)
results = model.train(data="imagenette320", epochs=100, imgsz=320)
```
=== "CLI"

@@ -34,10 +34,10 @@ To train a CNN model on the ImageWoof dataset for 100 epochs with an image size
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-cls.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='imagewoof', epochs=100, imgsz=224)
results = model.train(data="imagewoof", epochs=100, imgsz=224)
```
=== "CLI"
@@ -63,13 +63,13 @@ To use these variants in your training, simply replace 'imagewoof' in the datase
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-cls.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
# For medium-sized dataset
model.train(data='imagewoof320', epochs=100, imgsz=224)
model.train(data="imagewoof320", epochs=100, imgsz=224)
# For small-sized dataset
model.train(data='imagewoof160', epochs=100, imgsz=224)
model.train(data="imagewoof160", epochs=100, imgsz=224)
```
It's important to note that using smaller images will likely yield lower performance in terms of classification accuracy. However, it's an excellent way to iterate quickly in the early stages of model development and prototyping.

@@ -86,10 +86,10 @@ This structured approach ensures that the model can effectively learn from well-
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-cls.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='path/to/dataset', epochs=100, imgsz=640)
results = model.train(data="path/to/dataset", epochs=100, imgsz=640)
```
=== "CLI"

@@ -42,10 +42,10 @@ To train a CNN model on the MNIST dataset for 100 epochs with an image size of 3
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-cls.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='mnist', epochs=100, imgsz=32)
results = model.train(data="mnist", epochs=100, imgsz=32)
```
=== "CLI"

@@ -42,10 +42,10 @@ To train a YOLOv8n model on the African wildlife dataset for 100 epochs with an
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='african-wildlife.yaml', epochs=100, imgsz=640)
results = model.train(data="african-wildlife.yaml", epochs=100, imgsz=640)
```
=== "CLI"
@@ -63,7 +63,7 @@ To train a YOLOv8n model on the African wildlife dataset for 100 epochs with an
from ultralytics import YOLO
# Load a model
model = YOLO('path/to/best.pt') # load a brain-tumor fine-tuned model
model = YOLO("path/to/best.pt") # load a brain-tumor fine-tuned model
# Inference using the model
results = model.predict("https://ultralytics.com/assets/african-wildlife-sample.jpg")

@@ -53,10 +53,10 @@ To train a YOLOv8n model on the Argoverse dataset for 100 epochs with an image s
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='Argoverse.yaml', epochs=100, imgsz=640)
results = model.train(data="Argoverse.yaml", epochs=100, imgsz=640)
```
=== "CLI"

@@ -52,10 +52,10 @@ To train a YOLOv8n model on the brain tumor dataset for 100 epochs with an image
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='brain-tumor.yaml', epochs=100, imgsz=640)
results = model.train(data="brain-tumor.yaml", epochs=100, imgsz=640)
```
=== "CLI"
@@ -73,7 +73,7 @@ To train a YOLOv8n model on the brain tumor dataset for 100 epochs with an image
from ultralytics import YOLO
# Load a model
model = YOLO('path/to/best.pt') # load a brain-tumor fine-tuned model
model = YOLO("path/to/best.pt") # load a brain-tumor fine-tuned model
# Inference using the model
results = model.predict("https://ultralytics.com/assets/brain-tumor-sample.jpg")

@@ -70,10 +70,10 @@ To train a YOLOv8n model on the COCO dataset for 100 epochs with an image size o
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='coco.yaml', epochs=100, imgsz=640)
results = model.train(data="coco.yaml", epochs=100, imgsz=640)
```
=== "CLI"

@@ -45,10 +45,10 @@ To train a YOLOv8n model on the COCO8 dataset for 100 epochs with an image size
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
```
=== "CLI"

@@ -48,10 +48,10 @@ To train a YOLOv8n model on the Global Wheat Head Dataset for 100 epochs with an
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='GlobalWheat2020.yaml', epochs=100, imgsz=640)
results = model.train(data="GlobalWheat2020.yaml", epochs=100, imgsz=640)
```
=== "CLI"

@@ -56,10 +56,10 @@ Here's how you can use these formats to train your model:
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
```
=== "CLI"
@@ -103,7 +103,7 @@ You can easily convert labels from the popular COCO dataset format to the YOLO f
```python
from ultralytics.data.converter import convert_coco
convert_coco(labels_dir='path/to/coco/annotations/')
convert_coco(labels_dir="path/to/coco/annotations/")
```
This conversion tool can be used to convert the COCO dataset or any dataset in the COCO format to the Ultralytics YOLO format.
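As a hedged sketch tying this together (annotation paths are hypothetical), the same converter call also accepts the `use_segments` and `use_keypoints` flags shown later in this diff for segmentation and pose labels:

```python
from ultralytics.data.converter import convert_coco

# Detection labels (hypothetical paths)
convert_coco(labels_dir="path/to/coco/annotations/")

# Segmentation and pose-estimation variants, per the segment and pose dataset pages
convert_coco(labels_dir="path/to/coco/annotations/", use_segments=True)
convert_coco(labels_dir="path/to/coco/annotations/", use_keypoints=True)
```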

@@ -66,10 +66,10 @@ To train a YOLOv8n model on the LVIS dataset for 100 epochs with an image size o
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='lvis.yaml', epochs=100, imgsz=640)
results = model.train(data="lvis.yaml", epochs=100, imgsz=640)
```
=== "CLI"

@@ -48,10 +48,10 @@ To train a YOLOv8n model on the Objects365 dataset for 100 epochs with an image
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='Objects365.yaml', epochs=100, imgsz=640)
results = model.train(data="Objects365.yaml", epochs=100, imgsz=640)
```
=== "CLI"

@@ -88,10 +88,10 @@ To train a YOLOv8n model on the Open Images V7 dataset for 100 epochs with an im
from ultralytics import YOLO
# Load a COCO-pretrained YOLOv8n model
model = YOLO('yolov8n.pt')
model = YOLO("yolov8n.pt")
# Train the model on the Open Images V7 dataset
results = model.train(data='open-images-v7.yaml', epochs=100, imgsz=640)
results = model.train(data="open-images-v7.yaml", epochs=100, imgsz=640)
```
=== "CLI"

@@ -46,39 +46,40 @@ Dataset benchmarking evaluates machine learning model performance on specific da
=== "Python"
```python
from pathlib import Path
import shutil
import os
import shutil
from pathlib import Path
from ultralytics.utils.benchmarks import RF100Benchmark
# Initialize RF100Benchmark and set API key
benchmark = RF100Benchmark()
benchmark.set_key(api_key="YOUR_ROBOFLOW_API_KEY")
# Parse dataset and define file paths
names, cfg_yamls = benchmark.parse_dataset()
val_log_file = Path("ultralytics-benchmarks") / "validation.txt"
eval_log_file = Path("ultralytics-benchmarks") / "evaluation.txt"
# Run benchmarks on each dataset in RF100
for ind, path in enumerate(cfg_yamls):
path = Path(path)
if path.exists():
# Fix YAML file and run training
benchmark.fix_yaml(str(path))
os.system(f'yolo detect train data={path} model=yolov8s.pt epochs=1 batch=16')
os.system(f"yolo detect train data={path} model=yolov8s.pt epochs=1 batch=16")
# Run validation and evaluate
os.system(f'yolo detect val data={path} model=runs/detect/train/weights/best.pt > {val_log_file} 2>&1')
os.system(f"yolo detect val data={path} model=runs/detect/train/weights/best.pt > {val_log_file} 2>&1")
benchmark.evaluate(str(path), str(val_log_file), str(eval_log_file), ind)
# Remove the 'runs' directory
runs_dir = Path.cwd() / "runs"
shutil.rmtree(runs_dir)
else:
print("YAML file path does not exist")
continue
print("RF100 Benchmarking completed!")
```

@@ -50,10 +50,10 @@ To train a YOLOv8n model on the SKU-110K dataset for 100 epochs with an image si
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='SKU-110K.yaml', epochs=100, imgsz=640)
results = model.train(data="SKU-110K.yaml", epochs=100, imgsz=640)
```
=== "CLI"

@@ -46,10 +46,10 @@ To train a YOLOv8n model on the VisDrone dataset for 100 epochs with an image si
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='VisDrone.yaml', epochs=100, imgsz=640)
results = model.train(data="VisDrone.yaml", epochs=100, imgsz=640)
```
=== "CLI"

@@ -49,10 +49,10 @@ To train a YOLOv8n model on the VOC dataset for 100 epochs with an image size of
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='VOC.yaml', epochs=100, imgsz=640)
results = model.train(data="VOC.yaml", epochs=100, imgsz=640)
```
=== "CLI"

@@ -52,10 +52,10 @@ To train a model on the xView dataset for 100 epochs with an image size of 640,
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='xView.yaml', epochs=100, imgsz=640)
results = model.train(data="xView.yaml", epochs=100, imgsz=640)
```
=== "CLI"

@@ -36,13 +36,13 @@ pip install ultralytics[explorer]
from ultralytics import Explorer
# Create an Explorer object
explorer = Explorer(data='coco128.yaml', model='yolov8n.pt')
explorer = Explorer(data="coco128.yaml", model="yolov8n.pt")
# Create embeddings for your dataset
explorer.create_embeddings_table()
# Search for similar images to a given image/images
dataframe = explorer.get_similar(img='path/to/image.jpg')
dataframe = explorer.get_similar(img="path/to/image.jpg")
# Or search for similar images to a given index/indices
dataframe = explorer.get_similar(idx=0)
@@ -75,18 +75,17 @@ You get a pandas dataframe with the `limit` number of most similar data points t
from ultralytics import Explorer
# create an Explorer object
exp = Explorer(data='coco128.yaml', model='yolov8n.pt')
exp = Explorer(data="coco128.yaml", model="yolov8n.pt")
exp.create_embeddings_table()
similar = exp.get_similar(img='https://ultralytics.com/images/bus.jpg', limit=10)
similar = exp.get_similar(img="https://ultralytics.com/images/bus.jpg", limit=10)
print(similar.head())
# Search using multiple indices
similar = exp.get_similar(
img=['https://ultralytics.com/images/bus.jpg',
'https://ultralytics.com/images/bus.jpg'],
limit=10
)
img=["https://ultralytics.com/images/bus.jpg", "https://ultralytics.com/images/bus.jpg"],
limit=10,
)
print(similar.head())
```
@@ -96,14 +95,14 @@ You get a pandas dataframe with the `limit` number of most similar data points t
from ultralytics import Explorer
# create an Explorer object
exp = Explorer(data='coco128.yaml', model='yolov8n.pt')
exp = Explorer(data="coco128.yaml", model="yolov8n.pt")
exp.create_embeddings_table()
similar = exp.get_similar(idx=1, limit=10)
print(similar.head())
# Search using multiple indices
similar = exp.get_similar(idx=[1,10], limit=10)
similar = exp.get_similar(idx=[1, 10], limit=10)
print(similar.head())
```
@@ -119,10 +118,10 @@ You can also plot the similar images using the `plot_similar` method. This metho
from ultralytics import Explorer
# create an Explorer object
exp = Explorer(data='coco128.yaml', model='yolov8n.pt')
exp = Explorer(data="coco128.yaml", model="yolov8n.pt")
exp.create_embeddings_table()
plt = exp.plot_similar(img='https://ultralytics.com/images/bus.jpg', limit=10)
plt = exp.plot_similar(img="https://ultralytics.com/images/bus.jpg", limit=10)
plt.show()
```
@@ -132,7 +131,7 @@ You can also plot the similar images using the `plot_similar` method. This metho
from ultralytics import Explorer
# create an Explorer object
exp = Explorer(data='coco128.yaml', model='yolov8n.pt')
exp = Explorer(data="coco128.yaml", model="yolov8n.pt")
exp.create_embeddings_table()
plt = exp.plot_similar(idx=1, limit=10)
@@ -150,9 +149,8 @@ Note: This works using LLMs under the hood so the results are probabilistic and
from ultralytics import Explorer
from ultralytics.data.explorer import plot_query_result
# create an Explorer object
exp = Explorer(data='coco128.yaml', model='yolov8n.pt')
exp = Explorer(data="coco128.yaml", model="yolov8n.pt")
exp.create_embeddings_table()
df = exp.ask_ai("show me 100 images with exactly one person and 2 dogs. There can be other objects too")
@@ -173,7 +171,7 @@ You can run SQL queries on your dataset using the `sql_query` method. This metho
from ultralytics import Explorer
# create an Explorer object
exp = Explorer(data='coco128.yaml', model='yolov8n.pt')
exp = Explorer(data="coco128.yaml", model="yolov8n.pt")
exp.create_embeddings_table()
df = exp.sql_query("WHERE labels LIKE '%person%' AND labels LIKE '%dog%'")
@@ -190,7 +188,7 @@ You can also plot the results of a SQL query using the `plot_sql_query` method.
from ultralytics import Explorer
# create an Explorer object
exp = Explorer(data='coco128.yaml', model='yolov8n.pt')
exp = Explorer(data="coco128.yaml", model="yolov8n.pt")
exp.create_embeddings_table()
# plot the SQL Query
@@ -293,7 +291,7 @@ You can use similarity index to build custom conditions to filter out the datase
import numpy as np
sim_count = np.array(sim_idx["count"])
sim_idx['im_file'][sim_count > 30]
sim_idx["im_file"][sim_count > 30]
```
### Visualize Embedding Space
@@ -301,10 +299,10 @@ sim_idx['im_file'][sim_count > 30]
You can also visualize the embedding space using the plotting tool of your choice. For example here is a simple example using matplotlib:
```python
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
from sklearn.decomposition import PCA
# Reduce dimensions using PCA to 3 components for visualization in 3D
pca = PCA(n_components=3)
@@ -312,14 +310,14 @@ reduced_data = pca.fit_transform(embeddings)
# Create a 3D scatter plot using Matplotlib Axes3D
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111, projection='3d')
ax = fig.add_subplot(111, projection="3d")
# Scatter plot
ax.scatter(reduced_data[:, 0], reduced_data[:, 1], reduced_data[:, 2], alpha=0.5)
ax.set_title('3D Scatter Plot of Reduced 256-Dimensional Data (PCA)')
ax.set_xlabel('Component 1')
ax.set_ylabel('Component 2')
ax.set_zlabel('Component 3')
ax.set_title("3D Scatter Plot of Reduced 256-Dimensional Data (PCA)")
ax.set_xlabel("Component 1")
ax.set_ylabel("Component 2")
ax.set_zlabel("Component 3")
plt.show()
```

@@ -135,14 +135,15 @@ Contributing a new dataset involves several steps to ensure that it aligns well
```python
from pathlib import Path
from ultralytics.data.utils import compress_one_image
from ultralytics.utils.downloads import zip_directory
# Define dataset directory
path = Path('path/to/dataset')
path = Path("path/to/dataset")
# Optimize images in dataset (optional)
for f in path.rglob('*.jpg'):
for f in path.rglob("*.jpg"):
compress_one_image(f)
# Zip dataset into 'path/to/dataset.zip'
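# A hedged sketch of the step that presumably follows this clipped hunk: the
# zip_directory helper imported above archives the dataset folder in place.
zip_directory(path)  # expected to write 'path/to/dataset.zip'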

@@ -75,21 +75,21 @@ To train DOTA dataset, we split original DOTA images with high-resolution into i
=== "Python"
```python
from ultralytics.data.split_dota import split_trainval, split_test
from ultralytics.data.split_dota import split_test, split_trainval
# split train and val set, with labels.
split_trainval(
data_root='path/to/DOTAv1.0/',
save_dir='path/to/DOTAv1.0-split/',
rates=[0.5, 1.0, 1.5], # multiscale
gap=500
data_root="path/to/DOTAv1.0/",
save_dir="path/to/DOTAv1.0-split/",
rates=[0.5, 1.0, 1.5], # multiscale
gap=500,
)
# split test set, without labels.
split_test(
data_root='path/to/DOTAv1.0/',
save_dir='path/to/DOTAv1.0-split/',
rates=[0.5, 1.0, 1.5], # multiscale
gap=500
data_root="path/to/DOTAv1.0/",
save_dir="path/to/DOTAv1.0-split/",
rates=[0.5, 1.0, 1.5], # multiscale
gap=500,
)
```
@@ -109,10 +109,10 @@ To train a model on the DOTA v1 dataset, you can utilize the following code snip
from ultralytics import YOLO
# Create a new YOLOv8n-OBB model from scratch
model = YOLO('yolov8n-obb.yaml')
model = YOLO("yolov8n-obb.yaml")
# Train the model on the DOTAv2 dataset
results = model.train(data='DOTAv1.yaml', epochs=100, imgsz=640)
results = model.train(data="DOTAv1.yaml", epochs=100, imgsz=640)
```
=== "CLI"

@@ -34,10 +34,10 @@ To train a YOLOv8n-obb model on the DOTA8 dataset for 100 epochs with an image s
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-obb.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n-obb.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='dota8.yaml', epochs=100, imgsz=640)
results = model.train(data="dota8.yaml", epochs=100, imgsz=640)
```
=== "CLI"

@@ -40,10 +40,10 @@ To train a model using these OBB formats:
from ultralytics import YOLO
# Create a new YOLOv8n-OBB model from scratch
model = YOLO('yolov8n-obb.yaml')
model = YOLO("yolov8n-obb.yaml")
# Train the model on the DOTAv2 dataset
results = model.train(data='DOTAv1.yaml', epochs=100, imgsz=640)
results = model.train(data="DOTAv1.yaml", epochs=100, imgsz=640)
```
=== "CLI"
@@ -78,7 +78,7 @@ Transitioning labels from the DOTA dataset format to the YOLO OBB format can be
```python
from ultralytics.data.converter import convert_dota_to_yolo_obb
convert_dota_to_yolo_obb('path/to/DOTA')
convert_dota_to_yolo_obb("path/to/DOTA")
```
This conversion mechanism is instrumental for datasets in the DOTA format, ensuring alignment with the Ultralytics YOLO OBB format.
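As a brief, hedged sketch combining this converter with the training snippet shown earlier on this page (the DOTA path is hypothetical):

```python
from ultralytics import YOLO
from ultralytics.data.converter import convert_dota_to_yolo_obb

# Convert DOTA annotations to YOLO OBB labels (hypothetical path)
convert_dota_to_yolo_obb("path/to/DOTA")

# Then train an OBB model on the converted data, as in the example above
model = YOLO("yolov8n-obb.yaml")
results = model.train(data="DOTAv1.yaml", epochs=100, imgsz=640)
```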

@@ -61,10 +61,10 @@ To train a YOLOv8n-pose model on the COCO-Pose dataset for 100 epochs with an im
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-pose.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n-pose.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='coco-pose.yaml', epochs=100, imgsz=640)
results = model.train(data="coco-pose.yaml", epochs=100, imgsz=640)
```
=== "CLI"

@@ -34,10 +34,10 @@ To train a YOLOv8n-pose model on the COCO8-Pose dataset for 100 epochs with an i
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-pose.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n-pose.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='coco8-pose.yaml', epochs=100, imgsz=640)
results = model.train(data="coco8-pose.yaml", epochs=100, imgsz=640)
```
=== "CLI"

@@ -72,10 +72,10 @@ The `train` and `val` fields specify the paths to the directories containing the
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-pose.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n-pose.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='coco8-pose.yaml', epochs=100, imgsz=640)
results = model.train(data="coco8-pose.yaml", epochs=100, imgsz=640)
```
=== "CLI"
@@ -132,7 +132,7 @@ Ultralytics provides a convenient conversion tool to convert labels from the pop
```python
from ultralytics.data.converter import convert_coco
convert_coco(labels_dir='path/to/coco/annotations/', use_keypoints=True)
convert_coco(labels_dir="path/to/coco/annotations/", use_keypoints=True)
```
This conversion tool can be used to convert the COCO dataset or any dataset in the COCO format to the Ultralytics YOLO format. The `use_keypoints` parameter specifies whether to include keypoints (for pose estimation) in the converted labels.

@@ -47,10 +47,10 @@ To train a YOLOv8n-pose model on the Tiger-Pose dataset for 100 epochs with an i
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-pose.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n-pose.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='tiger-pose.yaml', epochs=100, imgsz=640)
results = model.train(data="tiger-pose.yaml", epochs=100, imgsz=640)
```
=== "CLI"

@@ -55,10 +55,10 @@ To train Ultralytics YOLOv8n model on the Carparts Segmentation dataset for 100
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-seg.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='carparts-seg.yaml', epochs=100, imgsz=640)
results = model.train(data="carparts-seg.yaml", epochs=100, imgsz=640)
```
=== "CLI"

@@ -59,10 +59,10 @@ To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 epochs with an imag
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-seg.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='coco-seg.yaml', epochs=100, imgsz=640)
results = model.train(data="coco-seg.yaml", epochs=100, imgsz=640)
```
=== "CLI"

@@ -34,10 +34,10 @@ To train a YOLOv8n-seg model on the COCO8-Seg dataset for 100 epochs with an ima
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-seg.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='coco8-seg.yaml', epochs=100, imgsz=640)
results = model.train(data="coco8-seg.yaml", epochs=100, imgsz=640)
```
=== "CLI"

@@ -44,10 +44,10 @@ To train Ultralytics YOLOv8n model on the Crack Segmentation dataset for 100 epo
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-seg.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='crack-seg.yaml', epochs=100, imgsz=640)
results = model.train(data="crack-seg.yaml", epochs=100, imgsz=640)
```
=== "CLI"

@@ -74,10 +74,10 @@ The `train` and `val` fields specify the paths to the directories containing the
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-seg.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='coco8-seg.yaml', epochs=100, imgsz=640)
results = model.train(data="coco8-seg.yaml", epochs=100, imgsz=640)
```
=== "CLI"
@@ -117,7 +117,7 @@ You can easily convert labels from the popular COCO dataset format to the YOLO f
```python
from ultralytics.data.converter import convert_coco
convert_coco(labels_dir='path/to/coco/annotations/', use_segments=True)
convert_coco(labels_dir="path/to/coco/annotations/", use_segments=True)
```
This conversion tool can be used to convert the COCO dataset or any dataset in the COCO format to the Ultralytics YOLO format.
@@ -139,7 +139,7 @@ To auto-annotate your dataset using the Ultralytics framework, you can use the `
```python
from ultralytics.data.annotator import auto_annotate
auto_annotate(data="path/to/images", det_model="yolov8x.pt", sam_model='sam_b.pt')
auto_annotate(data="path/to/images", det_model="yolov8x.pt", sam_model="sam_b.pt")
```

@@ -44,10 +44,10 @@ To train Ultralytics YOLOv8n model on the Package Segmentation dataset for 100 e
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-seg.pt') # load a pretrained model (recommended for training)
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data='package-seg.yaml', epochs=100, imgsz=640)
results = model.train(data="package-seg.yaml", epochs=100, imgsz=640)
```
=== "CLI"

@@ -19,7 +19,7 @@ Multi-Object Detector doesn't need standalone training and directly supports pre
```python
from ultralytics import YOLO
model = YOLO('yolov8n.pt')
model = YOLO("yolov8n.pt")
results = model.track(source="https://youtu.be/LNwODJXcvt4", conf=0.3, iou=0.5, show=True)
```
=== "CLI"

@@ -70,8 +70,8 @@ With Ultralytics installed, you can now start using its robust features for obje
```python
from ultralytics import YOLO
model = YOLO('yolov8n.pt') # initialize model
results = model('path/to/image.jpg') # perform inference
model = YOLO("yolov8n.pt") # initialize model
results = model("path/to/image.jpg") # perform inference
results[0].show() # display results for the first image
```

@@ -82,10 +82,10 @@ To use the Edge TPU, you need to convert your model into a compatible format. It
from ultralytics import YOLO
# Load a model
model = YOLO('path/to/model.pt') # Load an official model or custom model
model = YOLO("path/to/model.pt") # Load an official model or custom model
# Export the model
model.export(format='edgetpu')
model.export(format="edgetpu")
```
=== "CLI"
@@ -108,7 +108,7 @@ After exporting your model, you can run inference with it using the following co
from ultralytics import YOLO
# Load a model
model = YOLO('path/to/edgetpu_model.tflite') # Load an official model or custom model
model = YOLO("path/to/edgetpu_model.tflite") # Load an official model or custom model
# Run Prediction
model.predict("path/to/source.png")

@@ -42,8 +42,8 @@ Measuring the gap between two objects is known as distance calculation within a
=== "Video Stream"
```python
from ultralytics import YOLO, solutions
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
names = model.model.names
@@ -53,7 +53,7 @@ Measuring the gap between two objects is known as distance calculation within a
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
# Video writer
video_writer = cv2.VideoWriter("distance_calculation.avi", cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
video_writer = cv2.VideoWriter("distance_calculation.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Init distance-calculation obj
dist_obj = solutions.DistanceCalculation(names=names, view_img=True)
@@ -71,7 +71,6 @@ Measuring the gap between two objects is known as distance calculation within a
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
???+ tip "Note"

@@ -44,8 +44,8 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
=== "Heatmap"
```python
from ultralytics import YOLO, solutions
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
@@ -53,13 +53,15 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
# Video writer
video_writer = cv2.VideoWriter("heatmap_output.avi", cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
video_writer = cv2.VideoWriter("heatmap_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Init heatmap
heatmap_obj = solutions.Heatmap(colormap=cv2.COLORMAP_PARULA,
view_img=True,
shape="circle",
classes_names=model.names)
heatmap_obj = solutions.Heatmap(
colormap=cv2.COLORMAP_PARULA,
view_img=True,
shape="circle",
classes_names=model.names,
)
while cap.isOpened():
success, im0 = cap.read()
@@ -74,14 +76,13 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
=== "Line Counting"
```python
from ultralytics import YOLO, solutions
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
@@ -89,16 +90,18 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
# Video writer
video_writer = cv2.VideoWriter("heatmap_output.avi", cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
video_writer = cv2.VideoWriter("heatmap_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
line_points = [(20, 400), (1080, 404)] # line for object counting
# Init heatmap
heatmap_obj = solutions.Heatmap(colormap=cv2.COLORMAP_PARULA,
view_img=True,
shape="circle",
count_reg_pts=line_points,
classes_names=model.names)
heatmap_obj = solutions.Heatmap(
colormap=cv2.COLORMAP_PARULA,
view_img=True,
shape="circle",
count_reg_pts=line_points,
classes_names=model.names,
)
while cap.isOpened():
success, im0 = cap.read()
@@ -117,30 +120,29 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
=== "Polygon Counting"
```python
from ultralytics import YOLO, solutions
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
# Video writer
video_writer = cv2.VideoWriter("heatmap_output.avi",
cv2.VideoWriter_fourcc(*'mp4v'),
fps,
(w, h))
video_writer = cv2.VideoWriter("heatmap_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Define polygon points
region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360), (20, 400)]
# Init heatmap
heatmap_obj = solutions.Heatmap(colormap=cv2.COLORMAP_PARULA,
view_img=True,
shape="circle",
count_reg_pts=region_points,
classes_names=model.names)
heatmap_obj = solutions.Heatmap(
colormap=cv2.COLORMAP_PARULA,
view_img=True,
shape="circle",
count_reg_pts=region_points,
classes_names=model.names,
)
while cap.isOpened():
success, im0 = cap.read()
if not success:
@@ -150,7 +152,7 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
tracks = model.track(im0, persist=True, show=False)
im0 = heatmap_obj.generate_heatmap(im0, tracks)
video_writer.write(im0)
cap.release()
video_writer.release()
cv2.destroyAllWindows()
@@ -159,8 +161,8 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
=== "Region Counting"
```python
from ultralytics import YOLO, solutions
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
@@ -168,24 +170,26 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
# Video writer
video_writer = cv2.VideoWriter("heatmap_output.avi", cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
video_writer = cv2.VideoWriter("heatmap_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Define region points
region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360)]
# Init heatmap
heatmap_obj = solutions.Heatmap(colormap=cv2.COLORMAP_PARULA,
view_img=True,
shape="circle",
count_reg_pts=region_points,
classes_names=model.names)
heatmap_obj = solutions.Heatmap(
colormap=cv2.COLORMAP_PARULA,
view_img=True,
shape="circle",
count_reg_pts=region_points,
classes_names=model.names,
)
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
break
tracks = model.track(im0, persist=True, show=False)
im0 = heatmap_obj.generate_heatmap(im0, tracks)
video_writer.write(im0)
@@ -198,19 +202,21 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
=== "Im0"
```python
from ultralytics import YOLO, solutions
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8s.pt") # YOLOv8 custom/pretrained model
model = YOLO("yolov8s.pt") # YOLOv8 custom/pretrained model
im0 = cv2.imread("path/to/image.png") # path to image file
h, w = im0.shape[:2] # image height and width
# Heatmap Init
heatmap_obj = solutions.Heatmap(colormap=cv2.COLORMAP_PARULA,
view_img=True,
shape="circle",
classes_names=model.names)
heatmap_obj = solutions.Heatmap(
colormap=cv2.COLORMAP_PARULA,
view_img=True,
shape="circle",
classes_names=model.names,
)
results = model.track(im0, persist=True)
im0 = heatmap_obj.generate_heatmap(im0, tracks=results)
@@ -220,8 +226,8 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
=== "Specific Classes"
```python
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
# Video writer
video_writer = cv2.VideoWriter("heatmap_output.avi", cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
video_writer = cv2.VideoWriter("heatmap_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
classes_for_heatmap = [0, 2] # classes for heatmap
# Init heatmap
heatmap_obj = solutions.Heatmap(
colormap=cv2.COLORMAP_PARULA,
view_img=True,
shape="circle",
classes_names=model.names,
)
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
break
tracks = model.track(im0, persist=True, show=False, classes=classes_for_heatmap)
im0 = heatmap_obj.generate_heatmap(im0, tracks)
    video_writer.write(im0)

cap.release()
video_writer.release()
cv2.destroyAllWindows()
```

Here's how to use the `model.tune()` method to utilize the `Tuner` class for hyperparameter tuning:

```python
from ultralytics import YOLO
# Initialize the YOLO model
model = YOLO("yolov8n.pt")
# Tune hyperparameters on COCO8 for 30 epochs
model.tune(data="coco8.yaml", epochs=30, iterations=300, optimizer="AdamW", plots=False, save=False, val=False)
```
## Results

There are two types of instance segmentation tracking available in the Ultralytics package:

```python
import cv2

from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors

model = YOLO("yolov8n-seg.pt")  # segmentation model
names = model.model.names
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
out = cv2.VideoWriter("instance-segmentation.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))
while True:
ret, im0 = cap.read()
    if not ret:
        print("Video frame is empty or video processing has been successfully completed.")
        break

    results = model.predict(im0)
    annotator = Annotator(im0, line_width=2)

    if results[0].masks is not None:
clss = results[0].boxes.cls.cpu().tolist()
masks = results[0].masks.xy
for mask, cls in zip(masks, clss):
annotator.seg_bbox(mask=mask, mask_color=colors(int(cls), True), det_label=names[int(cls)])
out.write(im0)
cv2.imshow("instance-segmentation", im0)
if cv2.waitKey(1) & 0xFF == ord("q"):
break
out.release()
cap.release()
cv2.destroyAllWindows()
```
=== "Instance Segmentation with Object Tracking"
```python
from collections import defaultdict
import cv2
from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors
track_history = defaultdict(lambda: [])
model = YOLO("yolov8n-seg.pt") # segmentation model
model = YOLO("yolov8n-seg.pt") # segmentation model
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
out = cv2.VideoWriter("instance-segmentation-object-tracking.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))
while True:
ret, im0 = cap.read()
    if not ret:
        print("Video frame is empty or video processing has been successfully completed.")
        break

    annotator = Annotator(im0, line_width=2)
    results = model.track(im0, persist=True)

    if results[0].boxes.id is not None and results[0].masks is not None:
        masks = results[0].masks.xy
track_ids = results[0].boxes.id.int().cpu().tolist()
for mask, track_id in zip(masks, track_ids):
annotator.seg_bbox(mask=mask, mask_color=colors(track_id, True), track_label=str(track_id))
out.write(im0)
cv2.imshow("instance-segmentation-object-tracking", im0)
if cv2.waitKey(1) & 0xFF == ord("q"):
break
out.release()
cap.release()
cv2.destroyAllWindows()
```

After performing the [Segment Task](../tasks/segment.md), it's sometimes desirable to extract the isolated objects from the inference results.

```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt")
# Run inference
results = model.predict()
```

```py
# Isolate object with binary mask
isolated = cv2.bitwise_and(mask3ch, img)
```
??? question "How does this work?"
```py
# Isolate object with transparent background (when saved as PNG)
isolated = np.dstack([img, b_mask])
```
??? question "How does this work?"
```py
# Save isolated object to file
_ = cv2.imwrite(f"{img_name}_{label}-{ci}.png", iso_crop)
```
- In this example, `img_name` is the base name of the source image file, `label` is the detected class name, and `ci` is the index of the object detection (in case of multiple instances with the same class name).

Without further ado, let's dive in!

1. First, import the required packages:
```python
import datetime
import shutil
from collections import Counter
from pathlib import Path
import numpy as np
import pandas as pd
import yaml
from sklearn.model_selection import KFold
from ultralytics import YOLO
```
2. Proceed to retrieve all label files for your dataset.
```python
dataset_path = Path("./Fruit-detection") # replace with 'path/to/dataset' for your custom data
labels = sorted(dataset_path.rglob("*labels/*.txt")) # all data in 'labels'
```
3. Now, read the contents of the dataset YAML file and extract the indices of the class labels.
```python
yaml_file = "path/to/data.yaml" # your data YAML with data directories and names dictionary
with open(yaml_file, "r", encoding="utf8") as y:
classes = yaml.safe_load(y)["names"]
cls_idx = sorted(classes.keys())
```
4. Initialize an empty `pandas` DataFrame.
```python
indx = [l.stem for l in labels] # uses base filename as ID (no extension)
labels_df = pd.DataFrame([], columns=cls_idx, index=indx)
```
5. Count the instances of each class label present in the label files:

```python
for label in labels:
lbl_counter = Counter()
with open(label, "r") as lf:
lines = lf.readlines()
for l in lines:
# the class index of a YOLO label is the integer in the first position of each line
lbl_counter[int(l.split(" ")[0])] += 1
labels_df.loc[label.stem] = lbl_counter
labels_df = labels_df.fillna(0.0) # replace `nan` values with `0.0`
```
6. The following is a sample view of the populated DataFrame:
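    A hypothetical view for a three-class dataset (your filenames and class indices will differ) might print like this:

    ```python
    # Illustrative only: inspect the first rows of the populated DataFrame
    print(labels_df.head())
    #                0    1    2
    # apple_001    2.0  0.0  1.0
    # banana_007   0.0  3.0  0.0
    # orange_512   1.0  1.0  4.0
    ```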
The rows index the label files, each corresponding to an image in your dataset, and the columns correspond to your class-label indices.
```python
ksplit = 5
kf = KFold(n_splits=ksplit, shuffle=True, random_state=20) # setting random_state for repeatable results
kfolds = list(kf.split(labels_df))
```
2. The dataset has now been split into `k` folds, each having a list of `train` and `val` indices. We will construct a DataFrame to display these results more clearly.
```python
folds = [f"split_{n}" for n in range(1, ksplit + 1)]
folds_df = pd.DataFrame(index=indx, columns=folds)
for idx, (train, val) in enumerate(kfolds, start=1):
folds_df[f"split_{idx}"].loc[labels_df.iloc[train].index] = "train"
folds_df[f"split_{idx}"].loc[labels_df.iloc[val].index] = "val"
```
3. Now we will calculate the distribution of class labels for each fold as a ratio of the classes present in `val` to those present in `train`.
```python
fold_lbl_distrb = pd.DataFrame(index=folds, columns=cls_idx)

for n, (train_indices, val_indices) in enumerate(kfolds, start=1):
    train_totals = labels_df.iloc[train_indices].sum()
val_totals = labels_df.iloc[val_indices].sum()
# To avoid division by zero, we add a small value (1E-7) to the denominator
ratio = val_totals / (train_totals + 1e-7)
fold_lbl_distrb.loc[f"split_{n}"] = ratio
```
The ideal scenario is for all class ratios to be reasonably similar for each split and across classes. This, however, will be subject to the specifics of your dataset.
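As an optional, illustrative sanity check, you can summarize the spread of these ratios across folds:

```python
# Per-class standard deviation of the val/train ratios across the k splits;
# smaller values suggest a class is distributed more evenly over the folds
print(fold_lbl_distrb.astype(float).std(axis=0))
```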
4. Next, we create the directories and dataset YAML files for each split.
```python
supported_extensions = [".jpg", ".jpeg", ".png"]
# Initialize an empty list to store image file paths
images = []
# Loop through supported extensions and gather image files
for ext in supported_extensions:
images.extend(sorted((dataset_path / "images").rglob(f"*{ext}")))
# Create the necessary directories and dataset YAML files (unchanged)
save_path = Path(dataset_path / f"{datetime.date.today().isoformat()}_{ksplit}-Fold_Cross-val")
save_path.mkdir(parents=True, exist_ok=True)
ds_yamls = []
for split in folds:
# Create directories
split_dir = save_path / split
split_dir.mkdir(parents=True, exist_ok=True)
(split_dir / "train" / "images").mkdir(parents=True, exist_ok=True)
(split_dir / "train" / "labels").mkdir(parents=True, exist_ok=True)
(split_dir / "val" / "images").mkdir(parents=True, exist_ok=True)
(split_dir / "val" / "labels").mkdir(parents=True, exist_ok=True)
# Create dataset YAML files
dataset_yaml = split_dir / f"{split}_dataset.yaml"
ds_yamls.append(dataset_yaml)
with open(dataset_yaml, "w") as ds_y:
yaml.safe_dump(
{
"path": split_dir.as_posix(),
"train": "train",
"val": "val",
"names": classes,
},
ds_y,
)
```
5. Lastly, copy images and labels into the respective directory ('train' or 'val') for each split.
for image, label in zip(images, labels):
for split, k_split in folds_df.loc[image.stem].items():
# Destination directory
img_to_path = save_path / split / k_split / "images"
lbl_to_path = save_path / split / k_split / "labels"
# Copy image and label files to new directory (SamefileError if file already exists)
shutil.copy(image, img_to_path / image.name)
        shutil.copy(label, lbl_to_path / label.name)

With the folds prepared, everything is in place to train YOLO on each split.
1. First, load the YOLO model.
```python
weights_path = "path/to/weights.pt"
model = YOLO(weights_path, task="detect")
```
2. Next, iterate over the dataset YAML files to run training. The results will be saved to a directory specified by the `project` and `name` arguments. By default, this directory is 'exp/runs#' where # is an integer index.
```python
results = {}

# Define your additional arguments here
batch = 16
project = "kfold_demo"
epochs = 100
for k in range(ksplit):
dataset_yaml = ds_yamls[k]
model.train(data=dataset_yaml, epochs=epochs, batch=batch, project=project) # include any train arguments
results[k] = model.metrics # save output metrics for further analysis
```
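After the loop finishes, you might aggregate performance across folds; the metric key below is an assumption based on Ultralytics detection metrics and may differ by task or version:

```python
# Sketch: average mAP50-95 across the k folds (metric key assumed)
maps = [results[k].results_dict["metrics/mAP50-95(B)"] for k in range(ksplit)]
print(f"Mean mAP50-95 over {ksplit} folds: {sum(maps) / len(maps):.4f}")
```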

The YOLOv8n model in PyTorch format is converted to TensorRT to run inference with the exported model.

```python
from ultralytics import YOLO
# Load a YOLOv8n PyTorch model
model = YOLO("yolov8n.pt")
# Export the model
model.export(format="engine") # creates 'yolov8n.engine'
# Load the exported TensorRT model
trt_model = YOLO("yolov8n.engine")
# Run inference
results = trt_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"
To reproduce the above Ultralytics benchmarks on all export [formats](../modes/export.md), run:

```python
from ultralytics import YOLO
# Load a YOLOv8n PyTorch model
model = YOLO("yolov8n.pt")
# Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
results = model.benchmarks(data="coco8.yaml", imgsz=640)
```
=== "CLI"

=== "Object Blurring"
```python
import cv2
from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors
model = YOLO("yolov8n.pt")
names = model.names
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
blur_ratio = 50
# Video writer
video_writer = cv2.VideoWriter("object_blurring_output.avi",
cv2.VideoWriter_fourcc(*'mp4v'),
fps, (w, h))
video_writer = cv2.VideoWriter("object_blurring_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
while cap.isOpened():
success, im0 = cap.read()
    if not success:
        break

    results = model.predict(im0, show=False)
    boxes = results[0].boxes.xyxy.cpu().tolist()
    clss = results[0].boxes.cls.cpu().tolist()
    annotator = Annotator(im0, line_width=2)

    if boxes is not None:
for box, cls in zip(boxes, clss):
annotator.box_label(box, color=colors(int(cls), True), label=names[int(cls)])
obj = im0[int(box[1]) : int(box[3]), int(box[0]) : int(box[2])]
blur_obj = cv2.blur(obj, (blur_ratio, blur_ratio))
im0[int(box[1]) : int(box[3]), int(box[0]) : int(box[2])] = blur_obj
cv2.imshow("ultralytics", im0)
video_writer.write(im0)
if cv2.waitKey(1) & 0xFF == ord("q"):
break
cap.release()
video_writer.release()
cv2.destroyAllWindows()

```python
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
# Define region points
region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360)]
# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Init Object Counter
counter = solutions.ObjectCounter(
view_img=True,
    reg_pts=region_points,
    classes_names=model.names,
draw_tracks=True,
line_thickness=2,
)
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
break
tracks = model.track(im0, persist=True, show=False)
im0 = counter.start_counting(im0, tracks)
video_writer.write(im0)
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
```python
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
# Define region points as a polygon with 5 points
region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360), (20, 400)]
# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Init Object Counter
counter = solutions.ObjectCounter(
view_img=True,
    reg_pts=region_points,
    classes_names=model.names,
draw_tracks=True,
line_thickness=2,
)
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
break
tracks = model.track(im0, persist=True, show=False)
im0 = counter.start_counting(im0, tracks)
video_writer.write(im0)
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
```python
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
# Define line points
line_points = [(20, 400), (1080, 400)]
# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Init Object Counter
counter = solutions.ObjectCounter(
view_img=True,
    reg_pts=line_points,
    classes_names=model.names,
draw_tracks=True,
line_thickness=2,
)
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
break
tracks = model.track(im0, persist=True, show=False)
im0 = counter.start_counting(im0, tracks)
video_writer.write(im0)
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
```python
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
line_points = [(20, 400), (1080, 400)] # line or region points
classes_to_count = [0, 2] # person and car classes for count
# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Init Object Counter
counter = solutions.ObjectCounter(
view_img=True,
    reg_pts=line_points,
    classes_names=model.names,
draw_tracks=True,
line_thickness=2,
)
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
break
tracks = model.track(im0, persist=True, show=False, classes=classes_to_count)
im0 = counter.start_counting(im0, tracks)
video_writer.write(im0)
cap.release()
video_writer.release()
cv2.destroyAllWindows()

=== "Object Cropping"
```python
import os
import cv2
from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors
model = YOLO("yolov8n.pt")
names = model.names
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

crop_dir_name = "ultralytics_crop"
if not os.path.exists(crop_dir_name):
os.mkdir(crop_dir_name)
# Video writer
video_writer = cv2.VideoWriter("object_cropping_output.avi",
cv2.VideoWriter_fourcc(*'mp4v'),
fps, (w, h))
video_writer = cv2.VideoWriter("object_cropping_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
idx = 0
while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break

    results = model.predict(im0, show=False)
    boxes = results[0].boxes.xyxy.cpu().tolist()
    clss = results[0].boxes.cls.cpu().tolist()
    annotator = Annotator(im0, line_width=2)

    if boxes is not None:
        for box, cls in zip(boxes, clss):
idx += 1
annotator.box_label(box, color=colors(int(cls), True), label=names[int(cls)])
crop_obj = im0[int(box[1]) : int(box[3]), int(box[0]) : int(box[2])]
cv2.imwrite(os.path.join(crop_dir_name, str(idx)+".png"), crop_obj)
cv2.imwrite(os.path.join(crop_dir_name, str(idx) + ".png"), crop_obj)
cv2.imshow("ultralytics", im0)
video_writer.write(im0)
if cv2.waitKey(1) & 0xFF == ord("q"):
break
cap.release()
video_writer.release()
cv2.destroyAllWindows()

# Path to the JSON file created with the point-selection app above
polygon_json_path = "bounding_boxes.json"
# Video capture
cap = cv2.VideoCapture("Path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
# Video writer
video_writer = cv2.VideoWriter("parking management.avi", cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
video_writer = cv2.VideoWriter("parking management.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Initialize parking management object
management = solutions.ParkingManagement(model_path="yolov8n.pt")
while cap.isOpened():
ret, im0 = cap.read()
if not ret:
break
json_data = management.parking_regions_extraction(polygon_json_path)
results = management.model.track(im0, persist=True, show=False)
if results[0].boxes.id is not None:
boxes = results[0].boxes.xyxy.cpu().tolist()
clss = results[0].boxes.cls.cpu().tolist()
management.process_data(json_data, im0, boxes, clss)
management.display_frames(im0)
video_writer.write(im0)
cap.release()
video_writer.release()
cv2.destroyAllWindows()

```python
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
video_writer = cv2.VideoWriter("queue_management.avi", cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
video_writer = cv2.VideoWriter("queue_management.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
queue_region = [(20, 400), (1080, 404), (1080, 360), (20, 360)]
queue = solutions.QueueManager(
classes_names=model.names,
reg_pts=queue_region,
line_thickness=3,
fontsize=1.0,
region_color=(255, 144, 31),
)
while cap.isOpened():
success, im0 = cap.read()
if success:
tracks = model.track(im0, show=False, persist=True, verbose=False)
out = queue.process_queue(im0, tracks)
video_writer.write(im0)
if cv2.waitKey(1) & 0xFF == ord("q"):
break
continue
print("Video frame is empty or video processing has been successfully completed.")
break
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
```python
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
video_writer = cv2.VideoWriter("queue_management.avi", cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
video_writer = cv2.VideoWriter("queue_management.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
queue_region = [(20, 400), (1080, 404), (1080, 360), (20, 360)]
queue = solutions.QueueManager(
classes_names=model.names,
reg_pts=queue_region,
line_thickness=3,
fontsize=1.0,
region_color=(255, 144, 31),
)
while cap.isOpened():
success, im0 = cap.read()
if success:
tracks = model.track(im0, show=False, persist=True, verbose=False, classes=0) # Only person class
out = queue.process_queue(im0, tracks)
video_writer.write(im0)
if cv2.waitKey(1) & 0xFF == ord("q"):
break
continue
print("Video frame is empty or video processing has been successfully completed.")
break
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```

The YOLOv8n model in PyTorch format is converted to NCNN to run inference with the exported model.

```python
from ultralytics import YOLO
# Load a YOLOv8n PyTorch model
model = YOLO("yolov8n.pt")
# Export the model to NCNN format
model.export(format="ncnn") # creates 'yolov8n_ncnn_model'
# Load the exported NCNN model
ncnn_model = YOLO("yolov8n_ncnn_model")
# Run inference
results = ncnn_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"
To reproduce the above Ultralytics benchmarks on all [export formats](../modes/export.md), run:

```python
from ultralytics import YOLO
# Load a YOLOv8n PyTorch model
model = YOLO("yolov8n.pt")
# Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
results = model.benchmarks(data="coco8.yaml", imgsz=640)
```
=== "CLI"
With the TCP stream initiated, you can perform YOLOv8 inference.

```python
from ultralytics import YOLO
# Load a YOLOv8n PyTorch model
model = YOLO("yolov8n.pt")
# Run inference
results = model("tcp://127.0.0.1:8888")
```
=== "CLI"

Here's how to import the necessary modules and download a YOLOv8 model and some test images:
```python
from pathlib import Path
from IPython.display import Image
from sahi import AutoDetectionModel
from sahi.predict import get_prediction, get_sliced_prediction, predict
from sahi.utils.cv import read_image
from sahi.utils.file import download_from_url
from sahi.utils.yolov8 import download_yolov8s_model
# Download YOLOv8 model
yolov8_model_path = "models/yolov8s.pt"
download_yolov8s_model(yolov8_model_path)
# Download test images
download_from_url(
"https://raw.githubusercontent.com/obss/sahi/main/demo/demo_data/small-vehicles1.jpeg",
"demo_data/small-vehicles1.jpeg",
)
download_from_url(
"https://raw.githubusercontent.com/obss/sahi/main/demo/demo_data/terrain2.png",
"demo_data/terrain2.png",
)
```
## Standard Inference with YOLOv8
You can instantiate a YOLOv8 model for object detection like this:
```python
detection_model = AutoDetectionModel.from_pretrained(
model_type="yolov8",
model_path=yolov8_model_path,
confidence_threshold=0.3,
device="cpu", # or 'cuda:0'
)
```

For sliced inference, run `get_sliced_prediction` over overlapping image tiles; the first two arguments below (the demo image path and the model instance) are assumed from the files prepared earlier:

```python
result = get_sliced_prediction(
    "demo_data/small-vehicles1.jpeg",
    detection_model,
slice_height=256,
slice_width=256,
overlap_height_ratio=0.2,
overlap_width_ratio=0.2,
)
```
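To inspect the sliced predictions, one option is SAHI's visual export; the export directory below is illustrative, and SAHI writes `prediction_visual.png` by default:

```python
# Save an annotated visualization of the sliced prediction
result.export_visuals(export_dir="demo_data/")

# Display the result in a notebook using the IPython Image imported earlier
Image("demo_data/prediction_visual.png")
```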

#### Import Libraries
```python
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from time import time
import cv2
import numpy as np
import torch
from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors
```
#### Set up the parameters of the message
```python
password = ""  # your email password or app password
from_email = ""  # must match the email used to generate the password
to_email = ""  # receiver email
```
#### Server creation and authentication
```python
server = smtplib.SMTP("smtp.gmail.com", 587)
server.starttls()
server.login(from_email, password)
```
#### Email Send Function

```python
def send_email(to_email, from_email, object_detected=1):
"""Sends an email notification indicating the number of objects detected; defaults to 1 object."""
message = MIMEMultipart()
message["From"] = from_email
message["To"] = to_email
message["Subject"] = "Security Alert"
# Add in the message body
    message_body = f"ALERT - {object_detected} objects have been detected!!"
message.attach(MIMEText(message_body, "plain"))
server.sendmail(from_email, to_email, message.as_string())
```
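As a quick, hypothetical check that the credentials above work, you can trigger a test alert directly:

```python
# Illustrative test call: report two detected objects to the receiver
send_email(to_email, from_email, object_detected=2)
```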
self.end_time = 0
# device information
self.device = "cuda" if torch.cuda.is_available() else "cpu"
def predict(self, im0):
"""Run prediction using a YOLO model for the input image `im0`."""
        results = self.model(im0)
        return results

    def display_fps(self, im0):
"""Displays the FPS on an image `im0` by calculating and overlaying as white text on a black rectangle."""
self.end_time = time()
fps = 1 / np.round(self.end_time - self.start_time, 2)
text = f"FPS: {int(fps)}"
text_size = cv2.getTextSize(text, cv2.FONT_HERSHEY_SIMPLEX, 1.0, 2)[0]
gap = 10
cv2.rectangle(
im0,
(20 - gap, 70 - text_size[1] - gap),
(20 + text_size[0] + gap, 70 + gap),
(255, 255, 255),
-1,
)
cv2.putText(im0, text, (20, 70), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 0), 2)
def plot_bboxes(self, results, im0):
self.email_sent = False
self.display_fps(im0)
cv2.imshow("YOLOv8 Detection", im0)
frame_count += 1
if cv2.waitKey(5) & 0xFF == 27:
break

Speed estimation is the process of calculating the rate of movement of an object within a given context.
=== "Speed Estimation"
```python
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
names = model.model.names
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
# Video writer
video_writer = cv2.VideoWriter("speed_estimation.avi", cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
video_writer = cv2.VideoWriter("speed_estimation.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
line_pts = [(0, 360), (1280, 360)]
# Init speed-estimation obj
speed_obj = solutions.SpeedEstimator(
reg_pts=line_pts,
names=names,
view_img=True,
)
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
        break

    tracks = model.track(im0, persist=True, show=False)
    im0 = speed_obj.estimate_speed(im0, tracks)
    video_writer.write(im0)
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
???+ warning "Speed is Estimate"

Before deploying the model on Triton, it must be exported to the ONNX format:

```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load an official model
# Export the model
onnx_file = model.export(format="onnx", dynamic=True)
```
## Setting Up Triton Model Repository
The Triton Model Repository is a storage location where Triton can access and load models.

1. Create the necessary directory structure:

```python
from pathlib import Path
# Define paths
triton_repo_path = Path("tmp") / "triton_repo"
triton_model_path = triton_repo_path / "yolo"
# Create directories
(triton_model_path / "1").mkdir(parents=True, exist_ok=True)
```
2. Move the exported ONNX model to the Triton repository:
```python
from pathlib import Path
# Move ONNX model to Triton Model path
Path(onnx_file).rename(triton_model_path / "1" / "model.onnx")
# Create config file
(triton_model_path / "config.pbtxt").touch()
```
## Running Triton Inference Server
```python
import subprocess  # used below to manage the Docker container
import time
from tritonclient.http import InferenceServerClient
# Define image https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tritonserver
tag = "nvcr.io/nvidia/tritonserver:23.09-py3" # 6.4 GB
# Pull the image
subprocess.call(f"docker pull {tag}", shell=True)
# Run the Triton server and capture the container ID
container_id = (
subprocess.check_output(
f"docker run -d --rm -v {triton_repo_path}:/models -p 8000:8000 {tag} tritonserver --model-repository=/models",
shell=True,
)
.decode("utf-8")
.strip()
)
# Wait for the Triton server to start
triton_client = InferenceServerClient(url="localhost:8000", verbose=False, ssl=False)
# Wait until model is ready
for _ in range(10):
    try:
        if triton_client.is_model_ready("yolo"):  # model name assumed from the repository layout above
            break
    except Exception:
        pass
    time.sleep(1)
```

Then run inference using the Triton Server model:

```python
from ultralytics import YOLO
# Load the Triton Server model
model = YOLO("http://localhost:8000/yolo", task="detect")
# Run inference on the server
results = model("path/to/image.jpg")
```
Cleanup the container:
```python
# Kill and remove the container at the end of the test
subprocess.call(f"docker kill {container_id}", shell=True)
```
---

import io
import cv2 as cv
from sixel import SixelWriter
from ultralytics import YOLO
```
1. Load a model and execute inference, then plot the results and store in a variable. See more about inference arguments and working with results on the [predict mode](../modes/predict.md) page.
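    A minimal sketch of this step, assuming a pretrained model and a sample image URL, might look like:

    ```python
    from ultralytics import YOLO

    # Load a model and run inference (weights and image URL are illustrative)
    model = YOLO("yolov8n.pt")
    results = model.predict("https://ultralytics.com/images/bus.jpg")

    # Render the detections; plot() returns a BGR numpy array used in later steps
    plot = results[0].plot()
    ```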

```python
import cv2
from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors
model = YOLO("yolov8n.pt")
names = model.model.names
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
out = cv2.VideoWriter("visioneye-pinpoint.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))
center_point = (-10, h)
out.write(im0)
cv2.imshow("visioneye-pinpoint", im0)
if cv2.waitKey(1) & 0xFF == ord("q"):
break
out.release()
```python
import cv2
from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors
model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
out = cv2.VideoWriter("visioneye-pinpoint.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))
center_point = (-10, h)
out.write(im0)
cv2.imshow("visioneye-pinpoint", im0)
if cv2.waitKey(1) & 0xFF == ord("q"):
break
out.release()
=== "VisionEye with Distance Calculation"
```python
import math
import cv2
from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors
model = YOLO("yolov8s.pt")
cap = cv2.VideoCapture("Path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
out = cv2.VideoWriter("visioneye-distance-calculation.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))
center_point = (0, h)
pixel_per_meter = 10
txt_color, txt_background, bbox_clr = ((0, 0, 0), (255, 255, 255), (255, 0, 255))
while True:
ret, im0 = cap.read()
if not ret:
print("Video frame is empty or video processing has been successfully completed.")
break
annotator = Annotator(im0, line_width=2)
results = model.track(im0, persist=True)
boxes = results[0].boxes.xyxy.cpu()
if results[0].boxes.id is not None:
track_ids = results[0].boxes.id.int().cpu().tolist()
for box, track_id in zip(boxes, track_ids):
annotator.box_label(box, label=str(track_id), color=bbox_clr)
annotator.visioneye(box, center_point)
x1, y1 = int((box[0] + box[2]) // 2), int((box[1] + box[3]) // 2) # Bounding box centroid
distance = (math.sqrt((x1 - center_point[0]) ** 2 + (y1 - center_point[1]) ** 2)) / pixel_per_meter
text_size, _ = cv2.getTextSize(f"Distance: {distance:.2f} m", cv2.FONT_HERSHEY_SIMPLEX, 1.2, 3)
cv2.rectangle(im0, (x1, y1 - text_size[1] - 10), (x1 + text_size[0] + 10, y1), txt_background, -1)
cv2.putText(im0, f"Distance: {distance:.2f} m", (x1, y1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 1.2, txt_color, 3)
out.write(im0)
cv2.imshow("visioneye-distance-calculation", im0)
if cv2.waitKey(1) & 0xFF == ord("q"):
break
out.release()
cap.release()
cv2.destroyAllWindows()

=== "Workouts Monitoring"
```python
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n-pose.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
gym_object = solutions.AIGym(
line_thickness=2,
view_img=True,
pose_type="pushup",
kpts_to_check=[6, 8, 10],
)
frame_count = 0
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
break
print("Video frame is empty or video processing has been successfully completed.")
break
frame_count += 1
results = model.track(im0, verbose=False) # Tracking recommended
# results = model.predict(im0) # Prediction also supported
im0 = gym_object.start_counting(im0, results, frame_count)
cv2.destroyAllWindows()
```
=== "Workouts Monitoring with Save Output"
```python
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n-pose.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
video_writer = cv2.VideoWriter("workouts.avi", cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
video_writer = cv2.VideoWriter("workouts.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
gym_object = solutions.AIGym(
line_thickness=2,
view_img=True,
pose_type="pushup",
kpts_to_check=[6, 8, 10],
)
frame_count = 0
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
break
print("Video frame is empty or video processing has been successfully completed.")
break
frame_count += 1
results = model.track(im0, verbose=False) # Tracking recommended
# results = model.predict(im0) # Prediction also supported
im0 = gym_object.start_counting(im0, results, frame_count)
    video_writer.write(im0)

cv2.destroyAllWindows()
video_writer.release()
```

This section will address common issues faced while training and their respective solutions.
- Make sure you pass the path to your `.yaml` file as the `data` argument when calling `model.train()`, as shown below:
```python
model.train(data="/path/to/your/data.yaml", batch=4)
```
#### Accelerating Training with Multiple GPUs
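A minimal sketch, assuming two CUDA devices are available, passes a device list to `model.train()`:

```python
# Train across GPUs 0 and 1; adjust the list to match your hardware
model.train(data="/path/to/your/data.yaml", batch=32, device=[0, 1])
```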
```python
# Adjust the batch size and other settings as needed to optimize training speed
model.train(data="/path/to/your/data.yaml", batch=32, multi_scale=True)
```
#### Continuous Monitoring Parameters
from ultralytics import YOLO
# Load a pre-trained YOLOv8 model
model = YOLO("yolov8n.pt")
# Specify the source image
source = "https://ultralytics.com/images/bus.jpg"
# Make predictions
results = model.predict(source, save=True, imgsz=320, conf=0.5)

When using threads in Python, it's important to recognize patterns that can lead to race conditions.
```python
# Unsafe: Sharing a single model instance across threads
from threading import Thread
from ultralytics import YOLO
# Instantiate the model outside the thread
shared_model = YOLO("yolov8n.pt")
Similarly, here is an unsafe pattern with multiple YOLO model instances:
```python
# Unsafe: Sharing multiple model instances across threads can still lead to issues
from threading import Thread
from ultralytics import YOLO
# Instantiate multiple models outside the thread
shared_model_1 = YOLO("yolov8n_1.pt")
shared_model_2 = YOLO("yolov8n_2.pt")
Here's how to instantiate a YOLO model inside each thread for safe parallel inference:
```python
# Safe: Instantiating a single model inside each thread
from threading import Thread
from ultralytics import YOLO
def thread_safe_predict(image_path):
"""Predict on an image using a new YOLO model instance in a thread-safe manner; takes image path as input."""

When adding new functions or classes, please include a [Google-style docstring](https://google.github.io/styleguide/pyguide.html).
=== "Google-style"
This example shows a Google-style docstring. Note that both input and output `types` must always be enclosed by parentheses, i.e. `(bool)`.
```python
def example_function(arg1, arg2=4):
"""
Example function that demonstrates Google-style docstrings.
Args:
arg1 (int): The first argument.
arg2 (int): The second argument. Default value is 4.
Returns:
(bool): True if successful, False otherwise.
Examples:
>>> result = example_function(1, 2) # returns False
"""
=== "Google-style with type hints"
This example shows both a Google-style docstring and argument and return type hints, though both are not required, one can be used without the other.
```python
def example_function(arg1: int, arg2: int = 4) -> bool:
"""
Example function that demonstrates Google-style docstrings.
Args:
arg1: The first argument.
arg2: The second argument. Default value is 4.
Returns:
True if successful, False otherwise.
Examples:
>>> result = example_function(1, 2) # returns False
"""

print(settings)
# Return analytics and crash reporting setting
value = settings["sync"]
```
=== "CLI"
Ultralytics allows users to easily modify their settings. Changes can be performed as follows:
from ultralytics import settings
# Disable analytics and crash reporting
settings.update({"sync": False})
# Reset settings to default values
settings.reset()

After creating the AWS CloudFormation Stack, the next step is to deploy YOLOv8.
```python
import json
def output_fn(prediction_output, content_type):
"""Formats model outputs as JSON string according to content_type, extracting attributes like boxes, masks, keypoints."""
print("Executing output_fn from inference.py ...")
infer = {}
for result in prediction_output:
if result.boxes is not None:
infer["boxes"] = result.boxes.numpy().data.tolist()
if result.masks is not None:
infer["masks"] = result.masks.numpy().data.tolist()
if result.keypoints is not None:
infer["keypoints"] = result.keypoints.numpy().data.tolist()
if result.obb is not None:
infer["obb"] = result.obb.numpy().data.tolist()
if result.probs is not None:
infer["probs"] = result.probs.numpy().data.tolist()
return json.dumps(infer)
```
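As an illustrative local smoke test (the weights and image URL below are assumptions, not part of the deployment), you can pass real prediction results through `output_fn`:

```python
from ultralytics import YOLO

# Generate predictions locally and format them the way the endpoint would
results = YOLO("yolov8n.pt")("https://ultralytics.com/images/bus.jpg")
print(output_fn(results, "application/json")[:200])
```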

from ultralytics import YOLO
# Step 1: Creating a ClearML Task
task = Task.init(project_name="my_project", task_name="my_yolov8_task")
# Step 2: Selecting the YOLOv8 Model
model_variant = "yolov8n"
task.set_parameter("model_variant", model_variant)
# Step 3: Loading the YOLOv8 Model
model = YOLO(f"{model_variant}.pt")
# Step 4: Setting Up Training Arguments
args = dict(data="coco8.yaml", epochs=16)

# train the model
results = model.train(
data="coco8.yaml",
project="comet-example-yolov8-coco128",
batch=32,
save_period=1,
save_json=True,
epochs=3
data="coco8.yaml",
project="comet-example-yolov8-coco128",
batch=32,
save_period=1,
save_json=True,
epochs=3,
)
```
Comet ML allows you to specify how often batches of image predictions are logged:
```python
import os
os.environ['COMET_EVAL_BATCH_LOGGING_INTERVAL'] = "4"
os.environ["COMET_EVAL_BATCH_LOGGING_INTERVAL"] = "4"
```
### Disabling Confusion Matrix Logging

from ultralytics import YOLO
# Load the YOLOv8 model
model = YOLO("yolov8n.pt")
# Export the model to CoreML format
model.export(format="coreml") # creates 'yolov8n.mlpackage'
# Load the exported CoreML model
coreml_model = YOLO("yolov8n.mlpackage")
# Run inference
results = coreml_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"

If you are using a Jupyter Notebook and you want to display the generated DVC plots, run:

```python
from IPython.display import HTML
# Display the DVC plots as HTML
HTML(filename="./dvc_plots/index.html")
```
This code will render the HTML file containing the DVC plots directly in your Jupyter Notebook, providing an easy and convenient way to analyze the visualized experiment data.

from ultralytics import YOLO
# Load the YOLOv8 model
model = YOLO("yolov8n.pt")
# Export the model to TFLite Edge TPU format
model.export(format="edgetpu")  # creates 'yolov8n_full_integer_quant_edgetpu.tflite'
# Load the exported TFLite Edge TPU model
edgetpu_model = YOLO("yolov8n_full_integer_quant_edgetpu.tflite")
# Run inference
results = edgetpu_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"

This section provides the Python code used to create the Gradio interface with the Ultralytics YOLOv8 model. It supports classification, detection, segmentation, and keypoint tasks.
```python
import PIL.Image as Image
import gradio as gr
from ultralytics import ASSETS, YOLO
model = YOLO("yolov8n.pt")
inputs=[
gr.Image(type="pil", label="Upload Image"),
gr.Slider(minimum=0, maximum=1, value=0.25, label="Confidence threshold"),
gr.Slider(minimum=0, maximum=1, value=0.45, label="IoU threshold")
gr.Slider(minimum=0, maximum=1, value=0.45, label="IoU threshold"),
],
outputs=gr.Image(type="pil", label="Result"),
title="Ultralytics Gradio",
examples=[
[ASSETS / "bus.jpg", 0.25, 0.45],
[ASSETS / "zidane.jpg", 0.25, 0.45],
],
)
if __name__ == "__main__":
iface.launch()
```

Make sure that MLflow logging is enabled in Ultralytics settings:
from ultralytics import settings
# Update a setting
settings.update({"mlflow": True})
# Reset settings to default values
settings.reset()

```python
from ultralytics import YOLO
# Load the YOLOv8 model
model = YOLO("yolov8n.pt")
# Export the model to NCNN format
model.export(format="ncnn") # creates '/yolov8n_ncnn_model'
# Load the exported NCNN model
ncnn_model = YOLO("./yolov8n_ncnn_model")
# Run inference
results = ncnn_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"

With your YOLOv8 model in ONNX format, you can deploy and run inferences using DeepSparse.
model_path = "path/to/yolov8n.onnx"
# Set up the DeepSparse Pipeline
yolo_pipeline = Pipeline.create(task="yolov8", model_path=model_path)
# Run the model on your images
images = ["path/to/image.jpg"]

from ultralytics import YOLO
# Load the YOLOv8 model
model = YOLO("yolov8n.pt")
# Export the model to ONNX format
model.export(format="onnx") # creates 'yolov8n.onnx'
# Load the exported ONNX model
onnx_model = YOLO("yolov8n.onnx")
# Run inference
results = onnx_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"

Export a YOLOv8n model to OpenVINO format and run inference with the exported model.

```python
from ultralytics import YOLO
# Load a YOLOv8n PyTorch model
model = YOLO("yolov8n.pt")
# Export the model
model.export(format="openvino") # creates 'yolov8n_openvino_model/'
# Load the exported OpenVINO model
ov_model = YOLO("yolov8n_openvino_model/")
# Run inference
results = ov_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"
To reproduce the Ultralytics benchmarks above on all export [formats](../modes/export.md), run:

```python
from ultralytics import YOLO
# Load a YOLOv8n PyTorch model
model = YOLO("yolov8n.pt")
# Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
results = model.benchmarks(data="coco8.yaml")
```
=== "CLI"

from ultralytics import YOLO
# Load the YOLOv8 model
model = YOLO("yolov8n.pt")
# Export the model to PaddlePaddle format
model.export(format="paddle") # creates '/yolov8n_paddle_model'
# Load the exported PaddlePaddle model
paddle_model = YOLO("./yolov8n_paddle_model")
# Run inference
results = paddle_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"

```python
from ultralytics import YOLO
# Load a YOLOv8n model
model = YOLO("yolov8n.pt")
# Start tuning hyperparameters for YOLOv8n training on the COCO8 dataset
result_grid = model.tune(data="coco8.yaml", use_ray=True)
```
## `tune()` Method Parameters
In this example, we demonstrate how to use a custom search space for hyperparameter tuning:

```python
model = YOLO("yolov8n.pt")
# Run Ray Tune on the model
result_grid = model.tune(data="coco8.yaml",
space={"lr0": tune.uniform(1e-5, 1e-1)},
epochs=50,
use_ray=True)
result_grid = model.tune(
data="coco8.yaml",
space={"lr0": tune.uniform(1e-5, 1e-1)},
epochs=50,
use_ray=True,
)
```
In the code snippet above, we create a YOLO model with the "yolov8n.pt" pretrained weights. Then, we call the `tune()` method, specifying the dataset configuration with "coco8.yaml". We provide a custom search space for the initial learning rate `lr0`, using a dictionary with the key "lr0" and the value `tune.uniform(1e-5, 1e-1)`. Finally, we pass additional training arguments, such as the number of epochs, directly to the `tune()` method as `epochs=50`.
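The same pattern extends to multiple hyperparameters: additional entries are simply added to the `space` dictionary. A minimal sketch is below; the `lrf` key (final learning-rate fraction) and its search range are illustrative assumptions, not values from this diff.

```python
from ray import tune

from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Sketch: tune the initial learning rate and the final-LR fraction together
result_grid = model.tune(
    data="coco8.yaml",
    space={
        "lr0": tune.uniform(1e-5, 1e-1),  # initial learning rate
        "lrf": tune.uniform(0.01, 1.0),  # assumed range for the final-LR fraction
    },
    epochs=50,
    use_ray=True,
)
```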
@@ -164,10 +166,14 @@ You can plot the history of reported metrics for each trial to see how the metri
import matplotlib.pyplot as plt
for i, result in enumerate(result_grid):
-    plt.plot(result.metrics_dataframe["training_iteration"], result.metrics_dataframe["mean_accuracy"], label=f"Trial {i}")
-plt.xlabel('Training Iterations')
-plt.ylabel('Mean Accuracy')
+    plt.plot(
+        result.metrics_dataframe["training_iteration"],
+        result.metrics_dataframe["mean_accuracy"],
+        label=f"Trial {i}",
+    )
+plt.xlabel("Training Iterations")
+plt.ylabel("Mean Accuracy")
plt.legend()
plt.show()
```

@@ -85,16 +85,16 @@ Before diving into the usage instructions, be sure to check out the range of [YO
from ultralytics import YOLO
# Load the YOLOv8 model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")
# Export the model to TensorRT format
-model.export(format='engine') # creates 'yolov8n.engine'
+model.export(format="engine") # creates 'yolov8n.engine'
# Load the exported TensorRT model
-tensorrt_model = YOLO('yolov8n.engine')
+tensorrt_model = YOLO("yolov8n.engine")
# Run inference
-results = tensorrt_model('https://ultralytics.com/images/bus.jpg')
+results = tensorrt_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"
@@ -434,7 +434,7 @@ Expand sections below for information on how these models were exported and test
result = model.predict(
    [img] * 8, # batch=8 of the same image
    verbose=False,
-    device="cuda"
+    device="cuda",
)
```
@@ -451,7 +451,7 @@ Expand sections below for information on how these models were exported and test
    batch=1,
    imgsz=640,
    verbose=False,
-    device="cuda"
+    device="cuda",
)
```

@@ -81,16 +81,16 @@ Before diving into the usage instructions, it's important to note that while all
from ultralytics import YOLO
# Load the YOLOv8 model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")
# Export the model to TF GraphDef format
-model.export(format='pb') # creates 'yolov8n.pb'
+model.export(format="pb") # creates 'yolov8n.pb'
# Load the exported TF GraphDef model
-tf_graphdef_model = YOLO('yolov8n.pb')
+tf_graphdef_model = YOLO("yolov8n.pb")
# Run inference
-results = tf_graphdef_model('https://ultralytics.com/images/bus.jpg')
+results = tf_graphdef_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"

@@ -75,16 +75,16 @@ Before diving into the usage instructions, it's important to note that while all
from ultralytics import YOLO
# Load the YOLOv8 model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")
# Export the model to TF SavedModel format
-model.export(format='saved_model') # creates '/yolov8n_saved_model'
+model.export(format="saved_model") # creates '/yolov8n_saved_model'
# Load the exported TF SavedModel model
-tf_savedmodel_model = YOLO('./yolov8n_saved_model')
+tf_savedmodel_model = YOLO("./yolov8n_saved_model")
# Run inference
-results = tf_savedmodel_model('https://ultralytics.com/images/bus.jpg')
+results = tf_savedmodel_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"

@@ -73,16 +73,16 @@ Before diving into the usage instructions, it's important to note that while all
from ultralytics import YOLO
# Load the YOLOv8 model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")
# Export the model to TF.js format
-model.export(format='tfjs') # creates '/yolov8n_web_model'
+model.export(format="tfjs") # creates '/yolov8n_web_model'
# Load the exported TF.js model
-tfjs_model = YOLO('./yolov8n_web_model')
+tfjs_model = YOLO("./yolov8n_web_model")
# Run inference
-results = tfjs_model('https://ultralytics.com/images/bus.jpg')
+results = tfjs_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"

@@ -77,18 +77,18 @@ Before diving into the usage instructions, it's important to note that while all
```python
from ultralytics import YOLO
# Load the YOLOv8 model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")
# Export the model to TFLite format
-model.export(format='tflite') # creates 'yolov8n_float32.tflite'
+model.export(format="tflite") # creates 'yolov8n_float32.tflite'
# Load the exported TFLite model
-tflite_model = YOLO('yolov8n_float32.tflite')
+tflite_model = YOLO("yolov8n_float32.tflite")
# Run inference
-results = tflite_model('https://ultralytics.com/images/bus.jpg')
+results = tflite_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"

@@ -83,16 +83,16 @@ Before diving into the usage instructions, it's important to note that while all
from ultralytics import YOLO
# Load the YOLOv8 model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")
# Export the model to TorchScript format
-model.export(format='torchscript') # creates 'yolov8n.torchscript'
+model.export(format="torchscript") # creates 'yolov8n.torchscript'
# Load the exported TorchScript model
-torchscript_model = YOLO('yolov8n.torchscript')
+torchscript_model = YOLO("yolov8n.torchscript")
# Run inference
-results = torchscript_model('https://ultralytics.com/images/bus.jpg')
+results = torchscript_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"

@@ -63,9 +63,9 @@ Before diving into the usage instructions for YOLOv8 model training with Weights
=== "Python"
    ```python
+import wandb
from ultralytics import YOLO
from wandb.integration.ultralytics import add_wandb_callback
-import wandb
# Step 1: Initialize a Weights & Biases run
wandb.init(project="ultralytics", job_type="training")

@@ -56,16 +56,16 @@ To perform object detection on an image, use the `predict` method as shown below
from ultralytics.models.fastsam import FastSAMPrompt
# Define an inference source
-source = 'path/to/bus.jpg'
+source = "path/to/bus.jpg"
# Create a FastSAM model
-model = FastSAM('FastSAM-s.pt') # or FastSAM-x.pt
+model = FastSAM("FastSAM-s.pt") # or FastSAM-x.pt
# Run inference on an image
-everything_results = model(source, device='cpu', retina_masks=True, imgsz=1024, conf=0.4, iou=0.9)
+everything_results = model(source, device="cpu", retina_masks=True, imgsz=1024, conf=0.4, iou=0.9)
# Prepare a Prompt Process object
-prompt_process = FastSAMPrompt(source, everything_results, device='cpu')
+prompt_process = FastSAMPrompt(source, everything_results, device="cpu")
# Everything prompt
ann = prompt_process.everything_prompt()
@@ -74,13 +74,13 @@ To perform object detection on an image, use the `predict` method as shown below
ann = prompt_process.box_prompt(bbox=[200, 200, 300, 300])
# Text prompt
-ann = prompt_process.text_prompt(text='a photo of a dog')
+ann = prompt_process.text_prompt(text="a photo of a dog")
# Point prompt
# points default: [[0, 0]]; format: [[x1, y1], [x2, y2]]
# point_label default: [0]; 0 = background, 1 = foreground
ann = prompt_process.point_prompt(points=[[200, 200]], pointlabel=[1])
-prompt_process.plot(annotations=ann, output='./')
+prompt_process.plot(annotations=ann, output="./")
```
=== "CLI"
@@ -104,10 +104,10 @@ Validation of the model on a dataset can be done as follows:
from ultralytics import FastSAM
# Create a FastSAM model
-model = FastSAM('FastSAM-s.pt') # or FastSAM-x.pt
+model = FastSAM("FastSAM-s.pt") # or FastSAM-x.pt
# Validate the model
-results = model.val(data='coco8-seg.yaml')
+results = model.val(data="coco8-seg.yaml")
```
=== "CLI"
@@ -131,7 +131,7 @@ To perform object tracking on an image, use the `track` method as shown below:
from ultralytics import FastSAM
# Create a FastSAM model
-model = FastSAM('FastSAM-s.pt') # or FastSAM-x.pt
+model = FastSAM("FastSAM-s.pt") # or FastSAM-x.pt
# Track with a FastSAM model on a video
results = model.track(source="path/to/video.mp4", imgsz=640)

@@ -53,16 +53,16 @@ Note the below example is for YOLOv8 [Detect](../tasks/detect.md) models for obj
from ultralytics import YOLO
# Load a COCO-pretrained YOLOv8n model
-model = YOLO('yolov8n.pt')
+model = YOLO("yolov8n.pt")
# Display model information (optional)
model.info()
# Train the model on the COCO8 example dataset for 100 epochs
-results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
+results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
# Run inference with the YOLOv8n model on the 'bus.jpg' image
-results = model('path/to/bus.jpg')
+results = model("path/to/bus.jpg")
```
=== "CLI"

@@ -77,10 +77,10 @@ You can download the model [here](https://github.com/ChaoningZhang/MobileSAM/blo
from ultralytics import SAM
# Load the model
-model = SAM('mobile_sam.pt')
+model = SAM("mobile_sam.pt")
# Predict a segment based on a point prompt
-model.predict('ultralytics/assets/zidane.jpg', points=[900, 370], labels=[1])
+model.predict("ultralytics/assets/zidane.jpg", points=[900, 370], labels=[1])
```
### Box Prompt
@@ -93,10 +93,10 @@ You can download the model [here](https://github.com/ChaoningZhang/MobileSAM/blo
from ultralytics import SAM
# Load the model
-model = SAM('mobile_sam.pt')
+model = SAM("mobile_sam.pt")
# Predict a segment based on a box prompt
-model.predict('ultralytics/assets/zidane.jpg', bboxes=[439, 437, 524, 709])
+model.predict("ultralytics/assets/zidane.jpg", bboxes=[439, 437, 524, 709])
```
We have implemented `MobileSAM` and `SAM` using the same API. For more usage information, please see the [SAM page](sam.md).
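Because the two share one interface, swapping backbones is a one-line change. A minimal sketch follows (the loop itself is illustrative, not part of this diff; both checkpoints appear elsewhere on this page):

```python
from ultralytics import SAM

# Same call, different weights: MobileSAM and SAM-b are interchangeable here
for weights in ("mobile_sam.pt", "sam_b.pt"):
    model = SAM(weights)
    model.predict("ultralytics/assets/zidane.jpg", bboxes=[439, 437, 524, 709])
```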

@@ -48,16 +48,16 @@ This example provides simple RT-DETR training and inference examples. For full d
from ultralytics import RTDETR
# Load a COCO-pretrained RT-DETR-l model
-model = RTDETR('rtdetr-l.pt')
+model = RTDETR("rtdetr-l.pt")
# Display model information (optional)
model.info()
# Train the model on the COCO8 example dataset for 100 epochs
-results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
+results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
# Run inference with the RT-DETR-l model on the 'bus.jpg' image
-results = model('path/to/bus.jpg')
+results = model("path/to/bus.jpg")
```
=== "CLI"

@@ -50,16 +50,16 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
from ultralytics import SAM
# Load a model
-model = SAM('sam_b.pt')
+model = SAM("sam_b.pt")
# Display model information (optional)
model.info()
# Run inference with bboxes prompt
-model('ultralytics/assets/zidane.jpg', bboxes=[439, 437, 524, 709])
+model("ultralytics/assets/zidane.jpg", bboxes=[439, 437, 524, 709])
# Run inference with points prompt
-model('ultralytics/assets/zidane.jpg', points=[900, 370], labels=[1])
+model("ultralytics/assets/zidane.jpg", points=[900, 370], labels=[1])
```
!!! Example "Segment everything"
@@ -72,13 +72,13 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
from ultralytics import SAM
# Load a model
-model = SAM('sam_b.pt')
+model = SAM("sam_b.pt")
# Display model information (optional)
model.info()
# Run inference
-model('path/to/image.jpg')
+model("path/to/image.jpg")
```
=== "CLI"
@@ -100,7 +100,7 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
from ultralytics.models.sam import Predictor as SAMPredictor
# Create SAMPredictor
-overrides = dict(conf=0.25, task='segment', mode='predict', imgsz=1024, model="mobile_sam.pt")
+overrides = dict(conf=0.25, task="segment", mode="predict", imgsz=1024, model="mobile_sam.pt")
predictor = SAMPredictor(overrides=overrides)
# Set image
@@ -121,7 +121,7 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
from ultralytics.models.sam import Predictor as SAMPredictor
# Create SAMPredictor
-overrides = dict(conf=0.25, task='segment', mode='predict', imgsz=1024, model="mobile_sam.pt")
+overrides = dict(conf=0.25, task="segment", mode="predict", imgsz=1024, model="mobile_sam.pt")
predictor = SAMPredictor(overrides=overrides)
# Segment with additional args
@@ -150,27 +150,27 @@ Tests run on a 2023 Apple M2 MacBook with 16GB of RAM. To reproduce this test:
=== "Python"
```python
-from ultralytics import FastSAM, SAM, YOLO
+from ultralytics import SAM, YOLO, FastSAM
# Profile SAM-b
-model = SAM('sam_b.pt')
+model = SAM("sam_b.pt")
model.info()
-model('ultralytics/assets')
+model("ultralytics/assets")
# Profile MobileSAM
-model = SAM('mobile_sam.pt')
+model = SAM("mobile_sam.pt")
model.info()
-model('ultralytics/assets')
+model("ultralytics/assets")
# Profile FastSAM-s
-model = FastSAM('FastSAM-s.pt')
+model = FastSAM("FastSAM-s.pt")
model.info()
-model('ultralytics/assets')
+model("ultralytics/assets")
# Profile YOLOv8n-seg
-model = YOLO('yolov8n-seg.pt')
+model = YOLO("yolov8n-seg.pt")
model.info()
-model('ultralytics/assets')
+model("ultralytics/assets")
```
## Auto-Annotation: A Quick Path to Segmentation Datasets
@@ -188,7 +188,7 @@ To auto-annotate your dataset with the Ultralytics framework, use the `auto_anno
```python
from ultralytics.data.annotator import auto_annotate
-auto_annotate(data="path/to/images", det_model="yolov8x.pt", sam_model='sam_b.pt')
+auto_annotate(data="path/to/images", det_model="yolov8x.pt", sam_model="sam_b.pt")
```
| Argument | Type | Description | Default |

@@ -55,16 +55,16 @@ In this example we validate YOLO-NAS-s on the COCO8 dataset.
from ultralytics import NAS
# Load a COCO-pretrained YOLO-NAS-s model
-model = NAS('yolo_nas_s.pt')
+model = NAS("yolo_nas_s.pt")
# Display model information (optional)
model.info()
# Validate the model on the COCO8 example dataset
-results = model.val(data='coco8.yaml')
+results = model.val(data="coco8.yaml")
# Run inference with the YOLO-NAS-s model on the 'bus.jpg' image
-results = model('path/to/bus.jpg')
+results = model("path/to/bus.jpg")
```
=== "CLI"

@@ -92,13 +92,13 @@ Object detection is straightforward with the `train` method, as illustrated belo
from ultralytics import YOLOWorld
# Load a pretrained YOLOv8s-worldv2 model
-model = YOLOWorld('yolov8s-worldv2.pt')
+model = YOLOWorld("yolov8s-worldv2.pt")
# Train the model on the COCO8 example dataset for 100 epochs
-results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
+results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
# Run inference with the YOLOv8n model on the 'bus.jpg' image
-results = model('path/to/bus.jpg')
+results = model("path/to/bus.jpg")
```
=== "CLI"
@@ -120,10 +120,10 @@ Object detection is straightforward with the `predict` method, as illustrated be
from ultralytics import YOLOWorld
# Initialize a YOLO-World model
-model = YOLOWorld('yolov8s-world.pt') # or select yolov8m/l-world.pt for different sizes
+model = YOLOWorld("yolov8s-world.pt") # or select yolov8m/l-world.pt for different sizes
# Execute inference with the YOLOv8s-world model on the specified image
-results = model.predict('path/to/image.jpg')
+results = model.predict("path/to/image.jpg")
# Show results
results[0].show()
@@ -150,10 +150,10 @@ Model validation on a dataset is streamlined as follows:
from ultralytics import YOLO
# Create a YOLO-World model
-model = YOLO('yolov8s-world.pt') # or select yolov8m/l-world.pt for different sizes
+model = YOLO("yolov8s-world.pt") # or select yolov8m/l-world.pt for different sizes
# Conduct model validation on the COCO8 example dataset
-metrics = model.val(data='coco8.yaml')
+metrics = model.val(data="coco8.yaml")
```
=== "CLI"
@@ -175,7 +175,7 @@ Object tracking with YOLO-World model on a video/images is streamlined as follow
from ultralytics import YOLO
# Create a YOLO-World model
-model = YOLO('yolov8s-world.pt') # or select yolov8m/l-world.pt for different sizes
+model = YOLO("yolov8s-world.pt") # or select yolov8m/l-world.pt for different sizes
# Track with a YOLO-World model on a video
results = model.track(source="path/to/video.mp4")
@@ -208,13 +208,13 @@ For instance, if your application only requires detecting 'person' and 'bus' obj
from ultralytics import YOLO
# Initialize a YOLO-World model
-model = YOLO('yolov8s-world.pt') # or choose yolov8m/l-world.pt
+model = YOLO("yolov8s-world.pt") # or choose yolov8m/l-world.pt
# Define custom classes
model.set_classes(["person", "bus"])
# Execute prediction for specified categories on an image
-results = model.predict('path/to/image.jpg')
+results = model.predict("path/to/image.jpg")
# Show results
results[0].show()
@@ -232,8 +232,8 @@ You can also save a model after setting custom classes. By doing this you create
from ultralytics import YOLO
# Initialize a YOLO-World model
-model = YOLO('yolov8s-world.pt') # or select yolov8m/l-world.pt
+model = YOLO("yolov8s-world.pt") # or select yolov8m/l-world.pt
# Define custom classes
model.set_classes(["person", "bus"])
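The hunk cuts off before the save step itself. As a sketch of that elided call (the filename matches the custom model loaded in the next snippet; `model.save()` is assumed to be the standard Ultralytics persistence method):

```python
# Save the model with the custom vocabulary embedded (sketch of the elided step)
model.save("custom_yolov8s.pt")
```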
@@ -247,10 +247,10 @@ You can also save a model after setting custom classes. By doing this you create
from ultralytics import YOLO
# Load your custom model
-model = YOLO('custom_yolov8s.pt')
+model = YOLO("custom_yolov8s.pt")
# Run inference to detect your custom classes
-results = model.predict('path/to/image.jpg')
+results = model.predict("path/to/image.jpg")
# Show results
results[0].show()
@@ -294,8 +294,8 @@ This approach provides a powerful means of customizing state-of-the-art object d
=== "Python"
    ```python
-from ultralytics.models.yolo.world.train_world import WorldTrainerFromScratch
from ultralytics import YOLOWorld
+from ultralytics.models.yolo.world.train_world import WorldTrainerFromScratch
data = dict(
train=dict(
@@ -315,7 +315,6 @@ This approach provides a powerful means of customizing state-of-the-art object d
)
model = YOLOWorld("yolov8s-worldv2.yaml")
model.train(data=data, batch=128, epochs=100, trainer=WorldTrainerFromScratch)
```
## Citations and Acknowledgements

@@ -54,16 +54,16 @@ This example provides simple YOLOv3 training and inference examples. For full do
from ultralytics import YOLO
# Load a COCO-pretrained YOLOv3n model
-model = YOLO('yolov3n.pt')
+model = YOLO("yolov3n.pt")
# Display model information (optional)
model.info()
# Train the model on the COCO8 example dataset for 100 epochs
-results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
+results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
# Run inference with the YOLOv3n model on the 'bus.jpg' image
-results = model('path/to/bus.jpg')
+results = model("path/to/bus.jpg")
```
=== "CLI"

@@ -66,16 +66,16 @@ This example provides simple YOLOv5 training and inference examples. For full do
from ultralytics import YOLO
# Load a COCO-pretrained YOLOv5n model
-model = YOLO('yolov5n.pt')
+model = YOLO("yolov5n.pt")
# Display model information (optional)
model.info()
# Train the model on the COCO8 example dataset for 100 epochs
-results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
+results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
# Run inference with the YOLOv5n model on the 'bus.jpg' image
-results = model('path/to/bus.jpg')
+results = model("path/to/bus.jpg")
```
=== "CLI"

@@ -46,16 +46,16 @@ This example provides simple YOLOv6 training and inference examples. For full do
from ultralytics import YOLO
# Build a YOLOv6n model from scratch
-model = YOLO('yolov6n.yaml')
+model = YOLO("yolov6n.yaml")
# Display model information (optional)
model.info()
# Train the model on the COCO8 example dataset for 100 epochs
-results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
+results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
# Run inference with the YOLOv6n model on the 'bus.jpg' image
-results = model('path/to/bus.jpg')
+results = model("path/to/bus.jpg")
```
=== "CLI"
