Merge branch 'main' into autobackend-fix

Ultralytics Assistant committed 2 months ago via GitHub
commit 5939757a86
  1. .github/ISSUE_TEMPLATE/bug-report.yml (4 changes)
  2. .github/ISSUE_TEMPLATE/config.yml (2 changes)
  3. .github/ISSUE_TEMPLATE/feature-request.yml (12 changes)
  4. .github/workflows/ci.yaml (43 changes)
  5. .github/workflows/cla.yml (2 changes)
  6. .github/workflows/docker.yaml (4 changes)
  7. .github/workflows/format.yml (4 changes)
  8. .github/workflows/merge-main-into-prs.yml (1 change)
  9. CITATION.cff (8 changes)
  10. README.md (128 changes)
  11. README.zh-CN.md (200 changes)
  12. docker/Dockerfile (19 changes)
  13. docker/Dockerfile-arm64 (10 changes)
  14. docker/Dockerfile-conda (10 changes)
  15. docker/Dockerfile-cpu (18 changes)
  16. docker/Dockerfile-jetson-jetpack4 (14 changes)
  17. docker/Dockerfile-jetson-jetpack5 (13 changes)
  18. docker/Dockerfile-jetson-jetpack6 (14 changes)
  19. docker/Dockerfile-python (22 changes)
  20. docker/Dockerfile-runner (2 changes)
  21. docs/en/datasets/classify/caltech101.md (8 changes)
  22. docs/en/datasets/classify/caltech256.md (12 changes)
  23. docs/en/datasets/classify/cifar10.md (10 changes)
  24. docs/en/datasets/classify/cifar100.md (8 changes)
  25. docs/en/datasets/classify/fashion-mnist.md (12 changes)
  26. docs/en/datasets/classify/imagenet.md (20 changes)
  27. docs/en/datasets/classify/imagenet10.md (8 changes)
  28. docs/en/datasets/classify/imagenette.md (20 changes)
  29. docs/en/datasets/classify/imagewoof.md (12 changes)
  30. docs/en/datasets/classify/index.md (12 changes)
  31. docs/en/datasets/classify/mnist.md (8 changes)
  32. docs/en/datasets/detect/african-wildlife.md (18 changes)
  33. docs/en/datasets/detect/argoverse.md (12 changes)
  34. docs/en/datasets/detect/brain-tumor.md (18 changes)
  35. docs/en/datasets/detect/coco.md (32 changes)
  36. docs/en/datasets/detect/coco8.md (24 changes)
  37. docs/en/datasets/detect/globalwheat2020.md (14 changes)
  38. docs/en/datasets/detect/index.md (16 changes)
  39. docs/en/datasets/detect/lvis.md (16 changes)
  40. docs/en/datasets/detect/objects365.md (16 changes)
  41. docs/en/datasets/detect/open-images-v7.md (28 changes)
  42. docs/en/datasets/detect/roboflow-100.md (4 changes)
  43. docs/en/datasets/detect/signature.md (16 changes)
  44. docs/en/datasets/detect/sku-110k.md (14 changes)
  45. docs/en/datasets/detect/visdrone.md (14 changes)
  46. docs/en/datasets/detect/voc.md (14 changes)
  47. docs/en/datasets/detect/xview.md (8 changes)
  48. docs/en/datasets/explorer/api.md (22 changes)
  49. docs/en/datasets/explorer/explorer.ipynb (2 changes)
  50. docs/en/datasets/index.md (1 change)
  51. docs/en/datasets/obb/dota-v2.md (16 changes)
  52. docs/en/datasets/obb/dota8.md (26 changes)
  53. docs/en/datasets/obb/index.md (26 changes)
  54. docs/en/datasets/pose/coco.md (29 changes)
  55. docs/en/datasets/pose/coco8-pose.md (28 changes)
  56. docs/en/datasets/pose/hand-keypoints.md (175 changes)
  57. docs/en/datasets/pose/index.md (17 changes)
  58. docs/en/datasets/pose/tiger-pose.md (28 changes)
  59. docs/en/datasets/segment/carparts-seg.md (16 changes)
  60. docs/en/datasets/segment/coco.md (34 changes)
  61. docs/en/datasets/segment/coco8-seg.md (22 changes)
  62. docs/en/datasets/segment/crack-seg.md (14 changes)
  63. docs/en/datasets/segment/index.md (10 changes)
  64. docs/en/datasets/segment/package-seg.md (18 changes)
  65. docs/en/datasets/track/index.md (10 changes)
  66. docs/en/guides/analytics.md (59 changes)
  67. docs/en/guides/azureml-quickstart.md (74 changes)
  68. docs/en/guides/conda-quickstart.md (2 changes)
  69. docs/en/guides/coral-edge-tpu-on-raspberry-pi.md (18 changes)
  70. docs/en/guides/data-collection-and-annotation.md (10 changes)
  71. docs/en/guides/deepstream-nvidia-jetson.md (42 changes)
  72. docs/en/guides/defining-project-goals.md (16 changes)
  73. docs/en/guides/distance-calculation.md (38 changes)
  74. docs/en/guides/docker-quickstart.md (4 changes)
  75. docs/en/guides/heatmaps.md (50 changes)
  76. docs/en/guides/hyperparameter-tuning.md (16 changes)
  77. docs/en/guides/index.md (12 changes)
  78. docs/en/guides/instance-segmentation-and-tracking.md (38 changes)
  79. docs/en/guides/isolating-segmentation-objects.md (30 changes)
  80. docs/en/guides/model-deployment-options.md (58 changes)
  81. docs/en/guides/model-deployment-practices.md (26 changes)
  82. docs/en/guides/model-evaluation-insights.md (50 changes)
  83. docs/en/guides/model-monitoring-and-maintenance.md (4 changes)
  84. docs/en/guides/model-testing.md (24 changes)
  85. docs/en/guides/model-training-tips.md (44 changes)
  86. docs/en/guides/object-blurring.md (42 changes)
  87. docs/en/guides/object-counting.md (105 changes)
  88. docs/en/guides/object-cropping.md (40 changes)
  89. docs/en/guides/parking-management.md (48 changes)
  90. docs/en/guides/preprocessing_annotated_data.md (28 changes)
  91. docs/en/guides/queue-management.md (46 changes)
  92. docs/en/guides/sahi-tiled-inference.md (68 changes)
  93. docs/en/guides/security-alarm-system.md (38 changes)
  94. docs/en/guides/speed-estimation.md (56 changes)
  95. docs/en/guides/steps-of-a-cv-project.md (12 changes)
  96. docs/en/guides/streamlit-live-inference.md (48 changes)
  97. docs/en/guides/triton-inference-server.md (52 changes)
  98. docs/en/guides/view-results-in-terminal.md (6 changes)
  99. docs/en/guides/vision-eye.md (48 changes)
  100. docs/en/guides/workouts-monitoring.md (40 changes)
Some files were not shown because too many files have changed in this diff.

@@ -56,7 +56,7 @@ body:
placeholder: |
Paste output of `yolo checks` or `ultralytics.checks()` command, i.e.:
```
- Ultralytics YOLOv8.0.181 🚀 Python-3.11.2 torch-2.0.1 CPU (Apple M2)
+ Ultralytics 8.3.2 🚀 Python-3.11.2 torch-2.4.1 CPU (Apple M3)
Setup complete ✅ (8 CPUs, 16.0 GB RAM, 266.5/460.4 GB disk)
OS macOS-13.5.2
@@ -64,7 +64,7 @@
Python 3.11.2
Install git
RAM 16.00 GB
- CPU Apple M2
+ CPU Apple M3
CUDA None
```
validations:
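
The template above asks reporters to paste the output of `yolo checks`. As a minimal sketch of the equivalent Python call it names (assuming only that the `ultralytics` package is installed):

```python
import ultralytics

# Print environment info (OS, Python, install type, RAM, CPU, CUDA) for bug reports
ultralytics.checks()
```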

@@ -4,7 +4,7 @@ blank_issues_enabled: true
contact_links:
- name: 📄 Docs
url: https://docs.ultralytics.com/
- about: Full Ultralytics YOLOv8 Documentation
+ about: Full Ultralytics YOLO Documentation
- name: 💬 Forum
url: https://community.ultralytics.com/
about: Ask on Ultralytics Community Forum

@@ -1,14 +1,14 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
name: 🚀 Feature Request
- description: Suggest a YOLOv8 idea
+ description: Suggest an Ultralytics YOLO idea
# title: " "
labels: [enhancement]
body:
- type: markdown
attributes:
value: |
- Thank you for submitting a YOLOv8 🚀 Feature Request!
+ Thank you for submitting an Ultralytics 🚀 Feature Request!
- type: checkboxes
attributes:
@@ -17,7 +17,7 @@ body:
Please search the Ultralytics [Docs](https://docs.ultralytics.com) and [issues](https://github.com/ultralytics/ultralytics/issues) to see if a similar feature request already exists.
options:
- label: >
- I have searched the YOLOv8 [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar feature requests.
+ I have searched the Ultralytics [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar feature requests.
required: true
- type: textarea
- type: textarea
@@ -25,7 +25,7 @@
label: Description
description: A short description of your feature.
placeholder: |
- What new feature would you like to see in YOLOv8?
+ What new feature would you like to see in YOLO?
validations:
required: true
@@ -46,7 +46,7 @@
attributes:
label: Are you willing to submit a PR?
description: >
- (Optional) We encourage you to submit a [Pull Request](https://github.com/ultralytics/ultralytics/pulls) (PR) to help improve YOLOv8 for everyone, especially if you have a good understanding of how to implement a fix or feature.
- See the YOLOv8 [Contributing Guide](https://docs.ultralytics.com/help/contributing) to get started.
+ (Optional) We encourage you to submit a [Pull Request](https://github.com/ultralytics/ultralytics/pulls) (PR) to help improve YOLO for everyone, especially if you have a good understanding of how to implement a fix or feature.
+ See the Ultralytics [Contributing Guide](https://docs.ultralytics.com/help/contributing) to get started.
options:
- label: Yes I'd like to help by submitting a PR!

@@ -9,7 +9,7 @@ on:
pull_request:
branches: [main]
schedule:
- - cron: "0 0 * * *" # runs at 00:00 UTC every day
+ - cron: "0 8 * * *" # runs at 08:00 UTC every day
workflow_dispatch:
inputs:
hub:
@@ -98,9 +98,9 @@ jobs:
strategy:
fail-fast: false
matrix:
- os: [ubuntu-latest, macos-14]
+ os: [ubuntu-latest, windows-latest, macos-14]
python-version: ["3.11"]
- model: [yolov8n]
+ model: [yolo11n]
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
@@ -116,24 +116,27 @@ jobs:
run: |
yolo checks
pip list
+ - name: Benchmark DetectionModel
+ shell: bash
+ run: coverage run -a --source=ultralytics -m ultralytics.cfg.__init__ benchmark model='path with spaces/${{ matrix.model }}.pt' imgsz=160 verbose=0.309
- name: Benchmark ClassificationModel
shell: bash
- run: coverage run -a --source=ultralytics -m ultralytics.cfg.__init__ benchmark model='path with spaces/${{ matrix.model }}-cls.pt' imgsz=160 verbose=0.166
+ run: coverage run -a --source=ultralytics -m ultralytics.cfg.__init__ benchmark model='path with spaces/${{ matrix.model }}-cls.pt' imgsz=160 verbose=0.249
- name: Benchmark YOLOWorld DetectionModel
shell: bash
- run: coverage run -a --source=ultralytics -m ultralytics.cfg.__init__ benchmark model='path with spaces/yolov8s-worldv2.pt' imgsz=160 verbose=0.318
+ run: coverage run -a --source=ultralytics -m ultralytics.cfg.__init__ benchmark model='path with spaces/yolov8s-worldv2.pt' imgsz=160 verbose=0.337
- name: Benchmark SegmentationModel
shell: bash
- run: coverage run -a --source=ultralytics -m ultralytics.cfg.__init__ benchmark model='path with spaces/${{ matrix.model }}-seg.pt' imgsz=160 verbose=0.279
+ run: coverage run -a --source=ultralytics -m ultralytics.cfg.__init__ benchmark model='path with spaces/${{ matrix.model }}-seg.pt' imgsz=160 verbose=0.195
- name: Benchmark PoseModel
shell: bash
- run: coverage run -a --source=ultralytics -m ultralytics.cfg.__init__ benchmark model='path with spaces/${{ matrix.model }}-pose.pt' imgsz=160 verbose=0.183
+ run: coverage run -a --source=ultralytics -m ultralytics.cfg.__init__ benchmark model='path with spaces/${{ matrix.model }}-pose.pt' imgsz=160 verbose=0.197
- name: Benchmark OBBModel
shell: bash
- run: coverage run -a --source=ultralytics -m ultralytics.cfg.__init__ benchmark model='path with spaces/${{ matrix.model }}-obb.pt' imgsz=160 verbose=0.472
+ run: coverage run -a --source=ultralytics -m ultralytics.cfg.__init__ benchmark model='path with spaces/${{ matrix.model }}-obb.pt' imgsz=160 verbose=0.597
- name: Benchmark YOLOv10Model
shell: bash
- run: coverage run -a --source=ultralytics -m ultralytics.cfg.__init__ benchmark model='path with spaces/yolov10n.pt' imgsz=160 verbose=0.178
+ run: coverage run -a --source=ultralytics -m ultralytics.cfg.__init__ benchmark model='path with spaces/yolov10n.pt' imgsz=160 verbose=0.205
- name: Merge Coverage Reports
run: |
coverage xml -o coverage-benchmarks.xml
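
These CI steps drive the benchmark through `ultralytics.cfg.__init__`, with the `verbose` float apparently serving as a minimum-metric floor the models must clear. A minimal sketch of the same benchmark via the Python API (assuming the `ultralytics.utils.benchmarks.benchmark` helper and a CPU-only machine):

```python
from ultralytics.utils.benchmarks import benchmark

# Benchmark yolo11n.pt across export formats at imgsz=160, mirroring the CI steps above;
# returns a per-format table of export status, size, mAP and inference time
results = benchmark(model="yolo11n.pt", imgsz=160, device="cpu")
print(results)
```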
@@ -251,17 +254,17 @@ jobs:
- name: Pytest tests
run: pytest --slow tests/
- name: Benchmark ClassificationModel
- run: python -m ultralytics.cfg.__init__ benchmark model='yolov8n-cls.pt' imgsz=160 verbose=0.166
+ run: python -m ultralytics.cfg.__init__ benchmark model='yolo11n-cls.pt' imgsz=160 verbose=0.249
- name: Benchmark YOLOWorld DetectionModel
- run: python -m ultralytics.cfg.__init__ benchmark model='yolov8s-worldv2.pt' imgsz=160 verbose=0.318
+ run: python -m ultralytics.cfg.__init__ benchmark model='yolov8s-worldv2.pt' imgsz=160 verbose=0.337
- name: Benchmark SegmentationModel
- run: python -m ultralytics.cfg.__init__ benchmark model='yolov8n-seg.pt' imgsz=160 verbose=0.267
+ run: python -m ultralytics.cfg.__init__ benchmark model='yolo11n-seg.pt' imgsz=160 verbose=0.195
- name: Benchmark PoseModel
- run: python -m ultralytics.cfg.__init__ benchmark model='yolov8n-pose.pt' imgsz=160 verbose=0.179
+ run: python -m ultralytics.cfg.__init__ benchmark model='yolo11n-pose.pt' imgsz=160 verbose=0.197
- name: Benchmark OBBModel
- run: python -m ultralytics.cfg.__init__ benchmark model='yolov8n-obb.pt' imgsz=160 verbose=0.472
+ run: python -m ultralytics.cfg.__init__ benchmark model='yolo11n-obb.pt' imgsz=160 verbose=0.597
- name: Benchmark YOLOv10Model
- run: python -m ultralytics.cfg.__init__ benchmark model='yolov10n.pt' imgsz=160 verbose=0.178
+ run: python -m ultralytics.cfg.__init__ benchmark model='yolov10n.pt' imgsz=160 verbose=0.205
- name: Benchmark Summary
run: |
cat benchmarks.log
@@ -317,16 +320,16 @@ jobs:
conda list
- name: Test CLI
run: |
- yolo predict model=yolov8n.pt imgsz=320
- yolo train model=yolov8n.pt data=coco8.yaml epochs=1 imgsz=32
- yolo val model=yolov8n.pt data=coco8.yaml imgsz=32
- yolo export model=yolov8n.pt format=torchscript imgsz=160
+ yolo predict model=yolo11n.pt imgsz=320
+ yolo train model=yolo11n.pt data=coco8.yaml epochs=1 imgsz=32
+ yolo val model=yolo11n.pt data=coco8.yaml imgsz=32
+ yolo export model=yolo11n.pt format=torchscript imgsz=160
- name: Test Python
# Note this step must use the updated default bash environment, not a python environment
run: |
python -c "
from ultralytics import YOLO
- model = YOLO('yolov8n.pt')
+ model = YOLO('yolo11n.pt')
results = model.train(data='coco8.yaml', epochs=3, imgsz=160)
results = model.val(imgsz=160)
results = model.predict(imgsz=160)

@@ -26,7 +26,7 @@ jobs:
steps:
- name: CLA Assistant
if: (github.event.comment.body == 'recheck' || github.event.comment.body == 'I have read the CLA Document and I sign the CLA') || github.event_name == 'pull_request_target'
- uses: contributor-assistant/github-action@v2.5.2
+ uses: contributor-assistant/github-action@v2.6.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Must be repository secret PAT

@@ -140,7 +140,7 @@ jobs:
with:
timeout_minutes: 120
retry_wait_seconds: 60
- max_attempts: 2 # retry once
+ max_attempts: 3 # retry twice
command: |
docker build \
--platform ${{ matrix.platforms }} \
@@ -156,7 +156,7 @@
- name: Run Benchmarks
# WARNING: Dockerfile (GPU) error on TF.js export 'module 'numpy' has no attribute 'object'.
if: (github.event_name == 'push' || github.event.inputs[matrix.dockerfile] == 'true') && matrix.platforms == 'linux/amd64' && matrix.dockerfile != 'Dockerfile' && matrix.dockerfile != 'Dockerfile-conda' # arm64 images not supported on GitHub CI runners
- run: docker run ultralytics/ultralytics:${{ matrix.tags }} yolo benchmark model=yolov8n.pt imgsz=160 verbose=0.318
+ run: docker run ultralytics/ultralytics:${{ matrix.tags }} yolo benchmark model=yolo11n.pt imgsz=160 verbose=0.309
- name: Push Docker Image with Ultralytics version tag
if: (github.event_name == 'push' || (github.event.inputs[matrix.dockerfile] == 'true' && github.event.inputs.push == 'true')) && steps.check_tag.outputs.exists == 'false' && matrix.dockerfile != 'Dockerfile-conda'

@@ -48,7 +48,7 @@ jobs:
## Environments
- YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):
+ YOLO may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):
- **Notebooks** with free GPU: <a href="https://console.paperspace.com/github/ultralytics/ultralytics"><img src="https://assets.paperspace.io/img/gradient-badge.svg" alt="Run on Gradient"/></a> <a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/ultralytics/yolov8"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/google_cloud_quickstart_tutorial/)
@@ -59,4 +59,4 @@ jobs:
<a href="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml?query=event%3Aschedule"><img src="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml/badge.svg" alt="Ultralytics CI"></a>
- If this badge is green, all [Ultralytics CI](https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml?query=event%3Aschedule) tests are currently passing. CI tests verify correct operation of all YOLOv8 [Modes](https://docs.ultralytics.com/modes/) and [Tasks](https://docs.ultralytics.com/tasks/) on macOS, Windows, and Ubuntu every 24 hours and on every commit.
+ If this badge is green, all [Ultralytics CI](https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml?query=event%3Aschedule) tests are currently passing. CI tests verify correct operation of all YOLO [Modes](https://docs.ultralytics.com/modes/) and [Tasks](https://docs.ultralytics.com/tasks/) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@@ -85,4 +85,3 @@ jobs:
print(f"Branches updated: {updated_branches}")
print(f"Branches already up-to-date: {up_to_date_branches}")
print(f"Total errors: {errors}")

@@ -11,14 +11,14 @@ authors:
family-names: Jocher
affiliation: Ultralytics
orcid: 'https://orcid.org/0000-0001-5950-6979'
- - given-names: Ayush
- family-names: Chaurasia
- affiliation: Ultralytics
- orcid: 'https://orcid.org/0000-0002-7603-6750'
- family-names: Qiu
given-names: Jing
affiliation: Ultralytics
orcid: 'https://orcid.org/0000-0003-3783-7069'
+ - given-names: Ayush
+ family-names: Chaurasia
+ affiliation: Ultralytics
+ orcid: 'https://orcid.org/0000-0002-7603-6750'
repository-code: 'https://github.com/ultralytics/ultralytics'
url: 'https://ultralytics.com'
license: AGPL-3.0

@@ -8,7 +8,7 @@
<div>
<a href="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml"><img src="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml/badge.svg" alt="Ultralytics CI"></a>
- <a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="Ultralytics YOLOv8 Citation"></a>
+ <a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="Ultralytics YOLO Citation"></a>
<a href="https://hub.docker.com/r/ultralytics/ultralytics"><img src="https://img.shields.io/docker/pulls/ultralytics/ultralytics?logo=docker" alt="Ultralytics Docker Pulls"></a>
<a href="https://ultralytics.com/discord"><img alt="Ultralytics Discord" src="https://img.shields.io/discord/1089800235347353640?logo=discord&logoColor=white&label=Discord&color=blue"></a>
<a href="https://community.ultralytics.com"><img alt="Ultralytics Forums" src="https://img.shields.io/discourse/users?server=https%3A%2F%2Fcommunity.ultralytics.com&logo=discourse&label=Forums&color=blue"></a>
@@ -20,13 +20,13 @@
</div>
<br>
- [Ultralytics](https://www.ultralytics.com/) [YOLOv8](https://github.com/ultralytics/ultralytics) is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection and tracking, instance segmentation, image classification and pose estimation tasks.
+ [Ultralytics](https://www.ultralytics.com/) [YOLO11](https://github.com/ultralytics/ultralytics) is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. YOLO11 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection and tracking, instance segmentation, image classification and pose estimation tasks.
- We hope that the resources here will help you get the most out of YOLOv8. Please browse the YOLOv8 <a href="https://docs.ultralytics.com/">Docs</a> for details, raise an issue on <a href="https://github.com/ultralytics/ultralytics/issues/new/choose">GitHub</a> for support, questions, or discussions, become a member of the Ultralytics <a href="https://ultralytics.com/discord">Discord</a>, <a href="https://reddit.com/r/ultralytics">Reddit</a> and <a href="https://community.ultralytics.com">Forums</a>!
+ We hope that the resources here will help you get the most out of YOLO. Please browse the Ultralytics <a href="https://docs.ultralytics.com/">Docs</a> for details, raise an issue on <a href="https://github.com/ultralytics/ultralytics/issues/new/choose">GitHub</a> for support, questions, or discussions, become a member of the Ultralytics <a href="https://ultralytics.com/discord">Discord</a>, <a href="https://reddit.com/r/ultralytics">Reddit</a> and <a href="https://community.ultralytics.com">Forums</a>!
To request an Enterprise License please complete the form at [Ultralytics Licensing](https://www.ultralytics.com/license).
- <img width="100%" src="https://raw.githubusercontent.com/ultralytics/assets/main/yolov8/yolo-comparison-plots.png" alt="YOLOv8 performance plots"></a>
+ <img width="100%" src="https://github.com/user-attachments/assets/a311a4ed-bbf2-43b5-8012-5f183a28a845" alt="YOLO11 performance plots"></a>
<div align="center">
<a href="https://github.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="2%" alt="Ultralytics GitHub"></a>
@@ -47,7 +47,7 @@ To request an Enterprise License please complete the form at [Ultralytics Licens
## <div align="center">Documentation</div>
- See below for a quickstart installation and usage example, and see the [YOLOv8 Docs](https://docs.ultralytics.com/) for full documentation on training, validation, prediction and deployment.
+ See below for a quickstart install and usage examples, and see our [Docs](https://docs.ultralytics.com/) for full documentation on training, validation, prediction and deployment.
<details open>
<summary>Install</summary>
@@ -71,23 +71,23 @@ For alternative installation methods including [Conda](https://anaconda.org/cond
### CLI
- YOLOv8 may be used directly in the Command Line Interface (CLI) with a `yolo` command:
+ YOLO may be used directly in the Command Line Interface (CLI) with a `yolo` command:
```bash
- yolo predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'
+ yolo predict model=yolo11n.pt source='https://ultralytics.com/images/bus.jpg'
```
- `yolo` can be used for a variety of tasks and modes and accepts additional arguments, i.e. `imgsz=640`. See the YOLOv8 [CLI Docs](https://docs.ultralytics.com/usage/cli/) for examples.
+ `yolo` can be used for a variety of tasks and modes and accepts additional arguments, i.e. `imgsz=640`. See the YOLO [CLI Docs](https://docs.ultralytics.com/usage/cli/) for examples.
### Python
- YOLOv8 may also be used directly in a Python environment, and accepts the same [arguments](https://docs.ultralytics.com/usage/cfg/) as in the CLI example above:
+ YOLO may also be used directly in a Python environment, and accepts the same [arguments](https://docs.ultralytics.com/usage/cfg/) as in the CLI example above:
```python
from ultralytics import YOLO
# Load a model
- model = YOLO("yolov8n.pt")
+ model = YOLO("yolo11n.pt")
# Train the model
train_results = model.train(
@@ -108,26 +108,13 @@ results[0].show()
path = model.export(format="onnx")  # return path to exported model
```
- See YOLOv8 [Python Docs](https://docs.ultralytics.com/usage/python/) for more examples.
+ See YOLO [Python Docs](https://docs.ultralytics.com/usage/python/) for more examples.
</details>
- ### Notebooks
- Ultralytics provides interactive notebooks for YOLOv8, covering training, validation, tracking, and more. Each notebook is paired with a [YouTube](https://www.youtube.com/ultralytics?sub_confirmation=1) tutorial, making it easy to learn and implement advanced YOLOv8 features.
- | Docs | Notebook | YouTube |
- | ---------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
- | <a href="https://docs.ultralytics.com/modes/">YOLOv8 Train, Val, Predict and Export Modes</a> | <a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> | <a href="https://youtu.be/j8uQc0qB91s"><center><img width=30% src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-social-youtube-rect.png" alt="Ultralytics Youtube Video"></center></a> |
- | <a href="https://docs.ultralytics.com/hub/quickstart/">Ultralytics HUB QuickStart</a> | <a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/hub.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> | <a href="https://youtu.be/lveF9iCMIzc"><center><img width=30% src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-social-youtube-rect.png" alt="Ultralytics Youtube Video"></center></a> |
- | <a href="https://docs.ultralytics.com/modes/track/">YOLOv8 Multi-Object Tracking in Videos</a> | <a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/object_tracking.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> | <a href="https://youtu.be/hHyHmOtmEgs"><center><img width=30% src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-social-youtube-rect.png" alt="Ultralytics Youtube Video"></center></a> |
- | <a href="https://docs.ultralytics.com/guides/object-counting/">YOLOv8 Object Counting in Videos</a> | <a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/object_counting.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> | <a href="https://youtu.be/Ag2e-5_NpS0"><center><img width=30% src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-social-youtube-rect.png" alt="Ultralytics Youtube Video"></center></a> |
- | <a href="https://docs.ultralytics.com/guides/heatmaps/">YOLOv8 Heatmaps in Videos</a> | <a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/heatmaps.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> | <a href="https://youtu.be/4ezde5-nZZw"><center><img width=30% src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-social-youtube-rect.png" alt="Ultralytics Youtube Video"></center></a> |
- | <a href="https://docs.ultralytics.com/datasets/explorer/">Ultralytics Datasets Explorer with SQL and OpenAI Integration 🚀 New</a> | <a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/docs/en/datasets/explorer/explorer.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> | <a href="https://youtu.be/3VryynorQeo"><center><img width=30% src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-social-youtube-rect.png" alt="Ultralytics Youtube Video"></center></a> |
## <div align="center">Models</div>
- YOLOv8 [Detect](https://docs.ultralytics.com/tasks/detect/), [Segment](https://docs.ultralytics.com/tasks/segment/) and [Pose](https://docs.ultralytics.com/tasks/pose/) models pretrained on the [COCO](https://docs.ultralytics.com/datasets/detect/coco/) dataset are available here, as well as YOLOv8 [Classify](https://docs.ultralytics.com/tasks/classify/) models pretrained on the [ImageNet](https://docs.ultralytics.com/datasets/classify/imagenet/) dataset. [Track](https://docs.ultralytics.com/modes/track/) mode is available for all Detect, Segment and Pose models.
+ YOLO11 [Detect](https://docs.ultralytics.com/tasks/detect/), [Segment](https://docs.ultralytics.com/tasks/segment/) and [Pose](https://docs.ultralytics.com/tasks/pose/) models pretrained on the [COCO](https://docs.ultralytics.com/datasets/detect/coco/) dataset are available here, as well as YOLO11 [Classify](https://docs.ultralytics.com/tasks/classify/) models pretrained on the [ImageNet](https://docs.ultralytics.com/datasets/classify/imagenet/) dataset. [Track](https://docs.ultralytics.com/modes/track/) mode is available for all Detect, Segment and Pose models.
<img width="1024" src="https://raw.githubusercontent.com/ultralytics/assets/main/im/banner-tasks.png" alt="Ultralytics YOLO supported tasks">
@@ -137,13 +124,13 @@ All [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cf
See [Detection Docs](https://docs.ultralytics.com/tasks/detect/) for usage examples with these models trained on [COCO](https://docs.ultralytics.com/datasets/detect/coco/), which include 80 pre-trained classes.
- | Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
+ | Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ------------------------------------------------------------------------------------ | --------------------- | -------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
- | [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n.pt) | 640 | 37.3 | 80.4 | 0.99 | 3.2 | 8.7 |
- | [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s.pt) | 640 | 44.9 | 128.4 | 1.20 | 11.2 | 28.6 |
- | [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m.pt) | 640 | 50.2 | 234.7 | 1.83 | 25.9 | 78.9 |
- | [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l.pt) | 640 | 52.9 | 375.2 | 2.39 | 43.7 | 165.2 |
- | [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x.pt) | 640 | 53.9 | 479.1 | 3.53 | 68.2 | 257.8 |
+ | [YOLO11n](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt) | 640 | 39.5 | 56.1 ± 0.8 | 1.5 ± 0.0 | 2.6 | 6.5 |
+ | [YOLO11s](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s.pt) | 640 | 47.0 | 90.0 ± 1.2 | 2.5 ± 0.0 | 9.4 | 21.5 |
+ | [YOLO11m](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m.pt) | 640 | 51.5 | 183.2 ± 2.0 | 4.7 ± 0.1 | 20.1 | 68.0 |
+ | [YOLO11l](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l.pt) | 640 | 53.4 | 238.6 ± 1.4 | 6.2 ± 0.1 | 25.3 | 86.9 |
+ | [YOLO11x](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x.pt) | 640 | 54.7 | 462.8 ± 6.7 | 11.3 ± 0.2 | 56.9 | 194.9 |
- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val detect data=coco.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val detect data=coco.yaml batch=1 device=0|cpu`
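
The reproduce notes above use the CLI `val` mode; the same metrics are available programmatically. A minimal sketch (using the small `coco8.yaml` stand-in rather than the full COCO download assumed by the table):

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
metrics = model.val(data="coco8.yaml", imgsz=640)  # data="coco.yaml" reproduces the table
print(metrics.box.map)  # mAP50-95
print(metrics.box.map50)  # mAP50
```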
@@ -154,31 +141,47 @@ See [Detection Docs](https://docs.ultralytics.com/tasks/detect/) for usage examp
See [Segmentation Docs](https://docs.ultralytics.com/tasks/segment/) for usage examples with these models trained on [COCO-Seg](https://docs.ultralytics.com/datasets/segment/coco/), which include 80 pre-trained classes.
- | Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
+ | Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| -------------------------------------------------------------------------------------------- | --------------------- | -------------------- | --------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
- | [YOLOv8n-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-seg.pt) | 640 | 36.7 | 30.5 | 96.1 | 1.21 | 3.4 | 12.6 |
- | [YOLOv8s-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-seg.pt) | 640 | 44.6 | 36.8 | 155.7 | 1.47 | 11.8 | 42.6 |
- | [YOLOv8m-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-seg.pt) | 640 | 49.9 | 40.8 | 317.0 | 2.18 | 27.3 | 110.2 |
- | [YOLOv8l-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l-seg.pt) | 640 | 52.3 | 42.6 | 572.4 | 2.79 | 46.0 | 220.5 |
- | [YOLOv8x-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-seg.pt) | 640 | 53.4 | 43.4 | 712.1 | 4.02 | 71.8 | 344.1 |
+ | [YOLO11n-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-seg.pt) | 640 | 38.9 | 32.0 | 65.9 ± 1.1 | 1.8 ± 0.0 | 2.9 | 10.4 |
+ | [YOLO11s-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-seg.pt) | 640 | 46.6 | 37.8 | 117.6 ± 4.9 | 2.9 ± 0.0 | 10.1 | 35.5 |
+ | [YOLO11m-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m-seg.pt) | 640 | 51.5 | 41.5 | 281.6 ± 1.2 | 6.3 ± 0.1 | 22.4 | 123.3 |
+ | [YOLO11l-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l-seg.pt) | 640 | 53.4 | 42.9 | 344.2 ± 3.2 | 7.8 ± 0.2 | 27.6 | 142.2 |
+ | [YOLO11x-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-seg.pt) | 640 | 54.7 | 43.8 | 664.5 ± 3.2 | 15.8 ± 0.7 | 62.1 | 319.0 |
- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val segment data=coco-seg.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val segment data=coco-seg.yaml batch=1 device=0|cpu`
</details>
+ <details><summary>Classification (ImageNet)</summary>
+ See [Classification Docs](https://docs.ultralytics.com/tasks/classify/) for usage examples with these models trained on [ImageNet](https://docs.ultralytics.com/datasets/classify/imagenet/), which include 1000 pretrained classes.
+ | Model | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) at 640 |
+ | -------------------------------------------------------------------------------------------- | --------------------- | ---------------- | ---------------- | ------------------------------ | ----------------------------------- | ------------------ | ------------------------ |
+ | [YOLO11n-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-cls.pt) | 224 | 70.0 | 89.4 | 5.0 ± 0.3 | 1.1 ± 0.0 | 1.6 | 3.3 |
+ | [YOLO11s-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-cls.pt) | 224 | 75.4 | 92.7 | 7.9 ± 0.2 | 1.3 ± 0.0 | 5.5 | 12.1 |
+ | [YOLO11m-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m-cls.pt) | 224 | 77.3 | 93.9 | 17.2 ± 0.4 | 2.0 ± 0.0 | 10.4 | 39.3 |
+ | [YOLO11l-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l-cls.pt) | 224 | 78.3 | 94.3 | 23.2 ± 0.3 | 2.8 ± 0.0 | 12.9 | 49.4 |
+ | [YOLO11x-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-cls.pt) | 224 | 79.5 | 94.9 | 41.4 ± 0.9 | 3.8 ± 0.0 | 28.4 | 110.4 |
+ - **acc** values are model accuracies on the [ImageNet](https://www.image-net.org/) dataset validation set. <br>Reproduce by `yolo val classify data=path/to/ImageNet device=0`
+ - **Speed** averaged over ImageNet val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val classify data=path/to/ImageNet batch=1 device=0|cpu`
+ </details>
<details><summary>Pose (COCO)</summary>
See [Pose Docs](https://docs.ultralytics.com/tasks/pose/) for usage examples with these models trained on [COCO-Pose](https://docs.ultralytics.com/datasets/pose/coco/), which include 1 pre-trained class, person.
- | Model | size<br><sup>(pixels) | mAP<sup>pose<br>50-95 | mAP<sup>pose<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
- | ---------------------------------------------------------------------------------------------------- | --------------------- | --------------------- | ------------------ | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
- | [YOLOv8n-pose](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-pose.pt) | 640 | 50.4 | 80.1 | 131.8 | 1.18 | 3.3 | 9.2 |
- | [YOLOv8s-pose](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-pose.pt) | 640 | 60.0 | 86.2 | 233.2 | 1.42 | 11.6 | 30.2 |
- | [YOLOv8m-pose](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-pose.pt) | 640 | 65.0 | 88.8 | 456.3 | 2.00 | 26.4 | 81.0 |
- | [YOLOv8l-pose](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l-pose.pt) | 640 | 67.6 | 90.0 | 784.5 | 2.59 | 44.4 | 168.6 |
- | [YOLOv8x-pose](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-pose.pt) | 640 | 69.2 | 90.2 | 1607.1 | 3.73 | 69.4 | 263.2 |
- | [YOLOv8x-pose-p6](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-pose-p6.pt) | 1280 | 71.6 | 91.2 | 4088.7 | 10.04 | 99.1 | 1066.4 |
+ | Model | size<br><sup>(pixels) | mAP<sup>pose<br>50-95 | mAP<sup>pose<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
+ | ---------------------------------------------------------------------------------------------- | --------------------- | --------------------- | ------------------ | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
+ | [YOLO11n-pose](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-pose.pt) | 640 | 50.0 | 81.0 | 52.4 ± 0.5 | 1.7 ± 0.0 | 2.9 | 7.6 |
+ | [YOLO11s-pose](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-pose.pt) | 640 | 58.9 | 86.3 | 90.5 ± 0.6 | 2.6 ± 0.0 | 9.9 | 23.2 |
+ | [YOLO11m-pose](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m-pose.pt) | 640 | 64.9 | 89.4 | 187.3 ± 0.8 | 4.9 ± 0.1 | 20.9 | 71.7 |
+ | [YOLO11l-pose](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l-pose.pt) | 640 | 66.1 | 89.9 | 247.7 ± 1.1 | 6.4 ± 0.1 | 26.2 | 90.7 |
+ | [YOLO11x-pose](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-pose.pt) | 640 | 69.5 | 91.1 | 488.0 ± 13.9 | 12.1 ± 0.2 | 58.8 | 203.3 |
- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO Keypoints val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val pose data=coco-pose.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val pose data=coco-pose.yaml batch=1 device=0|cpu`
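
For the pose models in this table, predictions expose per-person keypoints on the result objects. A minimal sketch (image URL reused from the CLI example above):

```python
from ultralytics import YOLO

model = YOLO("yolo11n-pose.pt")
results = model("https://ultralytics.com/images/bus.jpg")

# COCO-Pose keypoints: a (num_persons, 17, 2) tensor of xy coordinates per image
for r in results:
    print(r.keypoints.xy.shape)
```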
@@ -189,36 +192,19 @@ See [Pose Docs](https://docs.ultralytics.com/tasks/pose/) for usage examples wit
See [OBB Docs](https://docs.ultralytics.com/tasks/obb/) for usage examples with these models trained on [DOTAv1](https://docs.ultralytics.com/datasets/obb/dota-v2/#dota-v10/), which include 15 pre-trained classes.
- | Model | size<br><sup>(pixels) | mAP<sup>test<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
+ | Model | size<br><sup>(pixels) | mAP<sup>test<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| -------------------------------------------------------------------------------------------- | --------------------- | ------------------ | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
- | [YOLOv8n-obb](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-obb.pt) | 1024 | 78.0 | 204.77 | 3.57 | 3.1 | 23.3 |
- | [YOLOv8s-obb](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-obb.pt) | 1024 | 79.5 | 424.88 | 4.07 | 11.4 | 76.3 |
- | [YOLOv8m-obb](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-obb.pt) | 1024 | 80.5 | 763.48 | 7.61 | 26.4 | 208.6 |
- | [YOLOv8l-obb](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l-obb.pt) | 1024 | 80.7 | 1278.42 | 11.83 | 44.5 | 433.8 |
- | [YOLOv8x-obb](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-obb.pt) | 1024 | 81.36 | 1759.10 | 13.23 | 69.5 | 676.7 |
+ | [YOLO11n-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-obb.pt) | 1024 | 78.4 | 117.6 ± 0.8 | 4.4 ± 0.0 | 2.7 | 17.2 |
+ | [YOLO11s-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-obb.pt) | 1024 | 79.5 | 219.4 ± 4.0 | 5.1 ± 0.0 | 9.7 | 57.5 |
+ | [YOLO11m-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m-obb.pt) | 1024 | 80.9 | 562.8 ± 2.9 | 10.1 ± 0.4 | 20.9 | 183.5 |
+ | [YOLO11l-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l-obb.pt) | 1024 | 81.0 | 712.5 ± 5.0 | 13.5 ± 0.6 | 26.2 | 232.0 |
+ | [YOLO11x-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-obb.pt) | 1024 | 81.3 | 1408.6 ± 7.7 | 28.6 ± 1.0 | 58.8 | 520.2 |
- **mAP<sup>test</sup>** values are for single-model multiscale on [DOTAv1](https://captain-whu.github.io/DOTA/index.html) dataset. <br>Reproduce by `yolo val obb data=DOTAv1.yaml device=0 split=test` and submit merged results to [DOTA evaluation](https://captain-whu.github.io/DOTA/evaluation.html).
- **Speed** averaged over DOTAv1 val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val obb data=DOTAv1.yaml batch=1 device=0|cpu`
</details>
- <details><summary>Classification (ImageNet)</summary>
- See [Classification Docs](https://docs.ultralytics.com/tasks/classify/) for usage examples with these models trained on [ImageNet](https://docs.ultralytics.com/datasets/classify/imagenet/), which include 1000 pretrained classes.
- | Model | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) at 640 |
- | -------------------------------------------------------------------------------------------- | --------------------- | ---------------- | ---------------- | ------------------------------ | ----------------------------------- | ------------------ | ------------------------ |
- | [YOLOv8n-cls](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-cls.pt) | 224 | 69.0 | 88.3 | 12.9 | 0.31 | 2.7 | 4.3 |
- | [YOLOv8s-cls](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-cls.pt) | 224 | 73.8 | 91.7 | 23.4 | 0.35 | 6.4 | 13.5 |
- | [YOLOv8m-cls](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-cls.pt) | 224 | 76.8 | 93.5 | 85.4 | 0.62 | 17.0 | 42.7 |
- | [YOLOv8l-cls](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l-cls.pt) | 224 | 78.3 | 94.2 | 163.0 | 0.87 | 37.5 | 99.7 |
- | [YOLOv8x-cls](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-cls.pt) | 224 | 79.0 | 94.6 | 232.0 | 1.01 | 57.4 | 154.8 |
- - **acc** values are model accuracies on the [ImageNet](https://www.image-net.org/) dataset validation set. <br>Reproduce by `yolo val classify data=path/to/ImageNet device=0`
- - **Speed** averaged over ImageNet val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val classify data=path/to/ImageNet batch=1 device=0|cpu`
- </details>
## <div align="center">Integrations</div>
Our key integrations with leading AI platforms extend the functionality of Ultralytics' offerings, enhancing tasks like dataset labeling, training, visualization, and model management. Discover how Ultralytics, in collaboration with [Roboflow](https://roboflow.com/?ref=ultralytics), ClearML, [Comet](https://bit.ly/yolov8-readme-comet), Neural Magic and [OpenVINO](https://docs.ultralytics.com/integrations/openvino/), can optimize your AI workflow.
@@ -245,18 +231,18 @@ Our key integrations with leading AI platforms extend the functionality of Ultra
| Roboflow | ClearML ⭐ NEW | Comet ⭐ NEW | Neural Magic ⭐ NEW |
| :--------------------------------------------------------------------------------------------------------------------------: | :-------------------------------------------------------------------------------------------------------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------: |
- | Label and export your custom datasets directly to YOLOv8 for training with [Roboflow](https://roboflow.com/?ref=ultralytics) | Automatically track, visualize and even remotely train YOLOv8 using [ClearML](https://clear.ml/) (open-source!) | Free forever, [Comet](https://bit.ly/yolov8-readme-comet) lets you save YOLOv8 models, resume training, and interactively visualize and debug predictions | Run YOLOv8 inference up to 6x faster with [Neural Magic DeepSparse](https://bit.ly/yolov5-neuralmagic) |
+ | Label and export your custom datasets directly to YOLO11 for training with [Roboflow](https://roboflow.com/?ref=ultralytics) | Automatically track, visualize and even remotely train YOLO11 using [ClearML](https://clear.ml/) (open-source!) | Free forever, [Comet](https://bit.ly/yolov5-readme-comet) lets you save YOLO11 models, resume training, and interactively visualize and debug predictions | Run YOLO11 inference up to 6x faster with [Neural Magic DeepSparse](https://bit.ly/yolov5-neuralmagic) |
## <div align="center">Ultralytics HUB</div>
- Experience seamless AI with [Ultralytics HUB](https://www.ultralytics.com/hub) ⭐, the all-in-one solution for data visualization, YOLOv5 and YOLOv8 🚀 model training and deployment, without any coding. Transform images into actionable insights and bring your AI visions to life with ease using our cutting-edge platform and user-friendly [Ultralytics App](https://www.ultralytics.com/app-install). Start your journey for **Free** now!
+ Experience seamless AI with [Ultralytics HUB](https://www.ultralytics.com/hub) ⭐, the all-in-one solution for data visualization, YOLO11 🚀 model training and deployment, without any coding. Transform images into actionable insights and bring your AI visions to life with ease using our cutting-edge platform and user-friendly [Ultralytics App](https://www.ultralytics.com/app-install). Start your journey for **Free** now!
<a href="https://ultralytics.com/hub" target="_blank">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png" alt="Ultralytics HUB preview image"></a>
## <div align="center">Contribute</div>
- We love your input! YOLOv5 and YOLOv8 would not be possible without help from our community. Please see our [Contributing Guide](https://docs.ultralytics.com/help/contributing/) to get started, and fill out our [Survey](https://www.ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey) to send us feedback on your experience. Thank you 🙏 to all our contributors!
+ We love your input! Ultralytics YOLO would not be possible without help from our community. Please see our [Contributing Guide](https://docs.ultralytics.com/help/contributing/) to get started, and fill out our [Survey](https://www.ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey) to send us feedback on your experience. Thank you 🙏 to all our contributors!
<!-- SVG image from https://opencollective.com/ultralytics/contributors.svg?width=990 -->

@@ -8,25 +8,25 @@
<div>
<a href="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml"><img src="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml/badge.svg" alt="Ultralytics CI"></a>
- <a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="YOLOv8 Citation"></a>
- <a href="https://hub.docker.com/r/ultralytics/ultralytics"><img src="https://img.shields.io/docker/pulls/ultralytics/ultralytics?logo=docker" alt="Docker Pulls"></a>
- <a href="https://ultralytics.com/discord"><img alt="Discord" src="https://img.shields.io/discord/1089800235347353640?logo=discord&logoColor=white&label=Discord&color=blue"></a>
+ <a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="Ultralytics YOLO Citation"></a>
+ <a href="https://hub.docker.com/r/ultralytics/ultralytics"><img src="https://img.shields.io/docker/pulls/ultralytics/ultralytics?logo=docker" alt="Ultralytics Docker Pulls"></a>
+ <a href="https://ultralytics.com/discord"><img alt="Ultralytics Discord" src="https://img.shields.io/discord/1089800235347353640?logo=discord&logoColor=white&label=Discord&color=blue"></a>
<a href="https://community.ultralytics.com"><img alt="Ultralytics Forums" src="https://img.shields.io/discourse/users?server=https%3A%2F%2Fcommunity.ultralytics.com&logo=discourse&label=Forums&color=blue"></a>
<a href="https://reddit.com/r/ultralytics"><img alt="Ultralytics Reddit" src="https://img.shields.io/reddit/subreddit-subscribers/ultralytics?style=flat&logo=reddit&logoColor=white&label=Reddit&color=blue"></a>
<br>
- <a href="https://console.paperspace.com/github/ultralytics/ultralytics"><img src="https://assets.paperspace.io/img/gradient-badge.svg" alt="Run on Gradient"></a>
- <a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
- <a href="https://www.kaggle.com/ultralytics/yolov8"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
+ <a href="https://console.paperspace.com/github/ultralytics/ultralytics"><img src="https://assets.paperspace.io/img/gradient-badge.svg" alt="Run Ultralytics on Gradient"></a>
+ <a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open Ultralytics In Colab"></a>
+ <a href="https://www.kaggle.com/ultralytics/yolov8"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open Ultralytics In Kaggle"></a>
</div>
<br>
- [Ultralytics](https://www.ultralytics.com/) [YOLOv8](https://github.com/ultralytics/ultralytics) is a cutting-edge, state-of-the-art (SOTA) model that builds on the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection and tracking, instance segmentation, image classification, and pose estimation tasks.
+ [Ultralytics](https://www.ultralytics.com/) [YOLO11](https://github.com/ultralytics/ultralytics) is a cutting-edge, state-of-the-art (SOTA) model that builds on the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. YOLO11 is designed to be fast, accurate, and easy to use, making it an ideal choice for a broad range of object detection and tracking, instance segmentation, image classification, and pose estimation tasks.
- We hope the resources here help you get the most out of YOLOv8. Please browse the YOLOv8 <a href="https://docs.ultralytics.com/">Docs</a> for details, raise an issue on <a href="https://github.com/ultralytics/ultralytics/issues/new/choose">GitHub</a> for support, questions, or discussions, and become a member of the Ultralytics <a href="https://ultralytics.com/discord">Discord</a>, <a href="https://reddit.com/r/ultralytics">Reddit</a>, and <a href="https://community.ultralytics.com">Forums</a>!
+ We hope the resources here help you get the most out of YOLO. Please browse the Ultralytics <a href="https://docs.ultralytics.com/">Docs</a> for details, raise questions or discussions on <a href="https://github.com/ultralytics/ultralytics/issues/new/choose">GitHub</a>, and become a member of the Ultralytics <a href="https://ultralytics.com/discord">Discord</a>, <a href="https://reddit.com/r/ultralytics">Reddit</a>, and <a href="https://community.ultralytics.com">Forums</a>!
- To request an Enterprise License, please fill out the form at [Ultralytics Licensing](https://www.ultralytics.com/license).
+ To apply for an Enterprise License, please complete the form at [Ultralytics Licensing](https://www.ultralytics.com/license).
- <img width="100%" src="https://raw.githubusercontent.com/ultralytics/assets/main/yolov8/yolo-comparison-plots.png" alt="YOLOv8 performance plots"></a>
+ <img width="100%" src="https://github.com/user-attachments/assets/a311a4ed-bbf2-43b5-8012-5f183a28a845" alt="YOLO11 performance plots"></a>
<div align="center">
<a href="https://github.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="2%" alt="Ultralytics GitHub"></a>
@@ -45,16 +45,14 @@
</div>
</div>
- The following is the Chinese translation of the provided content:
## <div align="center">Documentation</div>
- See the quickstart installation and usage examples below, and see the [YOLOv8 Docs](https://docs.ultralytics.com/) for full documentation on training, validation, prediction, and deployment.
+ See the quickstart installation and usage examples below, and see our [Docs](https://docs.ultralytics.com/) for full documentation on training, validation, prediction, and deployment.
<details open>
<summary>Install</summary>
- Use pip to install the `ultralytics` package in a [**Python>=3.8**](https://www.python.org/) environment that also includes [**PyTorch>=1.8**](https://pytorch.org/get-started/locally/). This also installs all required [dependencies](https://github.com/ultralytics/ultralytics/blob/main/pyproject.toml).
+ In a [**Python>=3.8**](https://www.python.org/) environment with [**PyTorch>=1.8**](https://pytorch.org/get-started/locally/), pip-install the ultralytics package together with all its [dependencies](https://github.com/ultralytics/ultralytics/blob/main/pyproject.toml).
[![PyPI - Version](https://img.shields.io/pypi/v/ultralytics?logo=pypi&logoColor=white)](https://pypi.org/project/ultralytics/) [![Downloads](https://static.pepy.tech/badge/ultralytics)](https://pepy.tech/project/ultralytics) [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/ultralytics?logo=python&logoColor=gold)](https://pypi.org/project/ultralytics/)
@ -62,168 +60,154 @@
pip install ultralytics
```
如需使用包括[Conda](https://anaconda.org/conda-forge/ultralytics),[Docker](https://hub.docker.com/r/ultralytics/ultralytics)和Git在内的其他安装方法,请参考[快速入门指南](https://docs.ultralytics.com/quickstart/)。
有关其他安装方法,包括 [Conda](https://anaconda.org/conda-forge/ultralytics)、[Docker](https://hub.docker.com/r/ultralytics/ultralytics) 和 Git,请参阅 [快速开始指南](https://docs.ultralytics.com/quickstart/)。
[![Conda Version](https://img.shields.io/conda/vn/conda-forge/ultralytics?logo=condaforge)](https://anaconda.org/conda-forge/ultralytics) [![Docker Image Version](https://img.shields.io/docker/v/ultralytics/ultralytics?sort=semver&logo=docker)](https://hub.docker.com/r/ultralytics/ultralytics)
</details>
<details open>
<summary>Usage</summary>
<summary>使用</summary>
### CLI
YOLOv8 可以在命令行界面(CLI)中直接使用,只需输入 `yolo` 命令:
YOLO 可以直接在命令行接口(CLI)中使用 `yolo` 命令:
```bash
yolo predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'
yolo predict model=yolo11n.pt source='https://ultralytics.com/images/bus.jpg'
```
`yolo`用于各种任务和模式,并接受其他参数,例如 `imgsz=640`。查看 YOLOv8 [CLI 文档](https://docs.ultralytics.com/usage/cli/)以获取示例。
`yolo`以用于各种任务和模式,并接受额外参数,例如 `imgsz=640`。请参阅 YOLO [CLI 文档](https://docs.ultralytics.com/usage/cli/) 以获取示例。
### Python
YOLOv8 也可以在 Python 环境中直接使用,并接受与上述 CLI 示例中相同的[参数](https://docs.ultralytics.com/usage/cfg/):
YOLO 也可以直接在 Python 环境中使用,并接受与上述 CLI 示例中相同的[参数](https://docs.ultralytics.com/usage/cfg/):
```python
from ultralytics import YOLO
# 加载模型
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
# 训练模型
train_results = model.train(
data="coco8.yaml", # 数据配置文件的路径
epochs=100, # 训练的轮数
imgsz=640, # 训练图像大小
device="cpu", # 运行设备,例如 device=0 或 device=0,1,2,3 或 device=cpu
data="coco8.yaml", # 数据集 YAML 路径
epochs=100, # 训练轮次
imgsz=640, # 训练图像尺寸
device="cpu", # 运行设备,例如 device=0 或 device=0,1,2,3 或 device=cpu
)
# 在验证集上评估模型性能
# 评估模型在验证集上的性能
metrics = model.val()
# 对图像进行目标检测
# 在图像上执行对象检测
results = model("path/to/image.jpg")
results[0].show()
# 将模型导出为 ONNX 格式
path = model.export(format="onnx") # 返回导出模型路径
path = model.export(format="onnx") # 返回导出模型路径
```
查看 YOLOv8 [Python 文档](https://docs.ultralytics.com/usage/python/)以获取更多示例。
请参阅 YOLO [Python 文档](https://docs.ultralytics.com/usage/python/) 以获取更多示例。
</details>
### 笔记本
Ultralytics 提供了 YOLOv8 的交互式笔记本,涵盖训练、验证、跟踪等内容。每个笔记本都配有 [YouTube](https://www.youtube.com/ultralytics?sub_confirmation=1) 教程,使学习和实现高级 YOLOv8 功能变得简单。
| Docs | Notebook | YouTube |
| ---- | -------- | :-----: |
| <a href="https://docs.ultralytics.com/modes/">YOLOv8 Train, Val, Predict and Export Modes</a> | <a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> | <a href="https://youtu.be/j8uQc0qB91s"><center><img width=30% src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-social-youtube-rect.png" alt="Ultralytics Youtube Video"></center></a> |
| <a href="https://docs.ultralytics.com/hub/quickstart/">Ultralytics HUB Quickstart</a> | <a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/hub.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> | <a href="https://youtu.be/lveF9iCMIzc"><center><img width=30% src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-social-youtube-rect.png" alt="Ultralytics Youtube Video"></center></a> |
| <a href="https://docs.ultralytics.com/modes/track/">YOLOv8 Multi-Object Tracking in Videos</a> | <a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/object_tracking.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> | <a href="https://youtu.be/hHyHmOtmEgs"><center><img width=30% src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-social-youtube-rect.png" alt="Ultralytics Youtube Video"></center></a> |
| <a href="https://docs.ultralytics.com/guides/object-counting/">YOLOv8 Object Counting in Videos</a> | <a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/object_counting.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> | <a href="https://youtu.be/Ag2e-5_NpS0"><center><img width=30% src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-social-youtube-rect.png" alt="Ultralytics Youtube Video"></center></a> |
| <a href="https://docs.ultralytics.com/guides/heatmaps/">YOLOv8 Heatmaps in Videos</a> | <a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/heatmaps.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> | <a href="https://youtu.be/4ezde5-nZZw"><center><img width=30% src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-social-youtube-rect.png" alt="Ultralytics Youtube Video"></center></a> |
| <a href="https://docs.ultralytics.com/datasets/explorer/">Ultralytics Datasets Explorer with SQL and OpenAI Integration 🚀 New</a> | <a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/docs/en/datasets/explorer/explorer.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> | <a href="https://youtu.be/3VryynorQeo"><center><img width=30% src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-social-youtube-rect.png" alt="Ultralytics Youtube Video"></center></a> |
## <div align="center">模型</div>
YOLO11 [Detect](https://docs.ultralytics.com/tasks/detect/), [Segment](https://docs.ultralytics.com/tasks/segment/), and [Pose](https://docs.ultralytics.com/tasks/pose/) models pretrained on the [COCO](https://docs.ultralytics.com/datasets/detect/coco/) dataset are available here, as are YOLO11 [Classify](https://docs.ultralytics.com/tasks/classify/) models pretrained on the [ImageNet](https://docs.ultralytics.com/datasets/classify/imagenet/) dataset. [Track](https://docs.ultralytics.com/modes/track/) mode is supported for all Detect, Segment, and Pose models.
<img width="1024" src="https://raw.githubusercontent.com/ultralytics/assets/main/im/banner-tasks.png" alt="Ultralytics YOLO supported tasks">
All [models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models) download automatically from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.
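As a minimal sketch of the auto-download and Track mode behavior described above (the video path is a placeholder):
```python
from ultralytics import YOLO

# Weights are fetched automatically from the latest release on first use
model = YOLO("yolo11n.pt")

# Track objects across the frames of a video, maintaining IDs between frames
results = model.track("path/to/video.mp4")
```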
<details open><summary>Detection (COCO)</summary>
See the [Detection Docs](https://docs.ultralytics.com/tasks/detect/) for usage examples with these models trained on [COCO](https://docs.ultralytics.com/datasets/detect/coco/), which include 80 pretrained classes.
| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ------------------------------------------------------------------------------------ | ------------------- | -------------------- | ----------------------------- | ---------------------------------- | ---------------- | ----------------- |
| [YOLO11n](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt) | 640 | 39.5 | 56.1 ± 0.8 | 1.5 ± 0.0 | 2.6 | 6.5 |
| [YOLO11s](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s.pt) | 640 | 47.0 | 90.0 ± 1.2 | 2.5 ± 0.0 | 9.4 | 21.5 |
| [YOLO11m](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m.pt) | 640 | 51.5 | 183.2 ± 2.0 | 4.7 ± 0.1 | 20.1 | 68.0 |
| [YOLO11l](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l.pt) | 640 | 53.4 | 238.6 ± 1.4 | 6.2 ± 0.1 | 25.3 | 86.9 |
| [YOLO11x](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x.pt) | 640 | 54.7 | 462.8 ± 6.7 | 11.3 ± 0.2 | 56.9 | 194.9 |
- **mAP<sup>val</sup>** values are for single-model single-scale on the [COCO val2017](https://cocodataset.org/) dataset. <br>Reproduce with `yolo val detect data=coco.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce with `yolo val detect data=coco.yaml batch=1 device=0|cpu`
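A rough Python equivalent of the CLI reproduction command above (assumes the COCO dataset is available locally; `device=0` targets the first GPU):
```python
from ultralytics import YOLO

# Validate a pretrained detection model on COCO val2017
model = YOLO("yolo11n.pt")
metrics = model.val(data="coco.yaml", device=0)
print(metrics.box.map)  # mAP 50-95
```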
</details>
<details><summary>Segmentation (COCO)</summary>
See the [Segmentation Docs](https://docs.ultralytics.com/tasks/segment/) for usage examples with these models trained on [COCO-Seg](https://docs.ultralytics.com/datasets/segment/coco/), which include 80 pretrained classes.
| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| -------------------------------------------------------------------------------------------- | ------------------- | -------------------- | --------------------- | ----------------------------- | ---------------------------------- | ---------------- | ----------------- |
| [YOLO11n-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-seg.pt) | 640 | 38.9 | 32.0 | 65.9 ± 1.1 | 1.8 ± 0.0 | 2.9 | 10.4 |
| [YOLO11s-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-seg.pt) | 640 | 46.6 | 37.8 | 117.6 ± 4.9 | 2.9 ± 0.0 | 10.1 | 35.5 |
| [YOLO11m-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m-seg.pt) | 640 | 51.5 | 41.5 | 281.6 ± 1.2 | 6.3 ± 0.1 | 22.4 | 123.3 |
| [YOLO11l-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l-seg.pt) | 640 | 53.4 | 42.9 | 344.2 ± 3.2 | 7.8 ± 0.2 | 27.6 | 142.2 |
| [YOLO11x-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-seg.pt) | 640 | 54.7 | 43.8 | 664.5 ± 3.2 | 15.8 ± 0.7 | 62.1 | 319.0 |
- **mAP<sup>val</sup>** values are for single-model single-scale on the [COCO val2017](https://cocodataset.org/) dataset. <br>Reproduce with `yolo val segment data=coco-seg.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce with `yolo val segment data=coco-seg.yaml batch=1 device=0|cpu`
</details>
<details><summary>Classification (ImageNet)</summary>
See the [Classification Docs](https://docs.ultralytics.com/tasks/classify/) for usage examples with these models trained on [ImageNet](https://docs.ultralytics.com/datasets/classify/imagenet/), which include 1000 pretrained classes.
| Model | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) at 640 |
| -------------------------------------------------------------------------------------------- | ------------------- | ---------------- | ---------------- | ----------------------------- | ---------------------------------- | ---------------- | ------------------------ |
| [YOLO11n-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-cls.pt) | 224 | 70.0 | 89.4 | 5.0 ± 0.3 | 1.1 ± 0.0 | 1.6 | 3.3 |
| [YOLO11s-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-cls.pt) | 224 | 75.4 | 92.7 | 7.9 ± 0.2 | 1.3 ± 0.0 | 5.5 | 12.1 |
| [YOLO11m-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m-cls.pt) | 224 | 77.3 | 93.9 | 17.2 ± 0.4 | 2.0 ± 0.0 | 10.4 | 39.3 |
| [YOLO11l-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l-cls.pt) | 224 | 78.3 | 94.3 | 23.2 ± 0.3 | 2.8 ± 0.0 | 12.9 | 49.4 |
| [YOLO11x-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-cls.pt) | 224 | 79.5 | 94.9 | 41.4 ± 0.9 | 3.8 ± 0.0 | 28.4 | 110.4 |
- **acc** values are model accuracies on the [ImageNet](https://www.image-net.org/) dataset validation set. <br>Reproduce with `yolo val classify data=path/to/ImageNet device=0`
- **Speed** averaged over ImageNet val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce with `yolo val classify data=path/to/ImageNet batch=1 device=0|cpu`
</details>
<details><summary>Pose (COCO)</summary>
See the [Pose Docs](https://docs.ultralytics.com/tasks/pose/) for usage examples with these models trained on [COCO-Pose](https://docs.ultralytics.com/datasets/pose/coco/), which include 1 pretrained class, person.
| Model | size<br><sup>(pixels) | mAP<sup>pose<br>50-95 | mAP<sup>pose<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| -------------------------------------------------------------------------------------------- | ------------------- | --------------------- | ------------------ | ----------------------------- | ---------------------------------- | ---------------- | ----------------- |
| [YOLO11n-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-obb.pt) | 1024 | 78.4 | 117.6 ± 0.8 | 4.4 ± 0.0 | 2.7 | 17.2 |
| [YOLO11s-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-obb.pt) | 1024 | 79.5 | 219.4 ± 4.0 | 5.1 ± 0.0 | 9.7 | 57.5 |
| [YOLO11m-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m-obb.pt) | 1024 | 80.9 | 562.8 ± 2.9 | 10.1 ± 0.4 | 20.9 | 183.5 |
| [YOLO11l-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l-obb.pt) | 1024 | 81.0 | 712.5 ± 5.0 | 13.5 ± 0.6 | 26.2 | 232.0 |
| [YOLO11x-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-obb.pt) | 1024 | 81.3 | 1408.6 ± 7.7 | 28.6 ± 1.0 | 58.8 | 520.2 |
- **mAP<sup>val</sup>** values are for single-model single-scale on the [COCO Keypoints val2017](https://cocodataset.org/) dataset. <br>Reproduce with `yolo val pose data=coco-pose.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce with `yolo val pose data=coco-pose.yaml batch=1 device=0|cpu`
</details>
<details><summary>OBB (DOTAv1)</summary>
See the [OBB Docs](https://docs.ultralytics.com/tasks/obb/) for usage examples with these models trained on [DOTAv1](https://docs.ultralytics.com/datasets/obb/dota-v2/#dota-v10/), which include 15 pretrained classes.
| Model | size<br><sup>(pixels) | mAP<sup>test<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| -------------------------------------------------------------------------------------------- | ------------------- | ------------------ | ----------------------------- | ---------------------------------- | ---------------- | ----------------- |
| [YOLO11n-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-obb.pt) | 1024 | 78.4 | 117.56 ± 0.80 | 4.43 ± 0.01 | 2.7 | 17.2 |
| [YOLO11s-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-obb.pt) | 1024 | 79.5 | 219.41 ± 4.00 | 5.13 ± 0.02 | 9.7 | 57.5 |
| [YOLO11m-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m-obb.pt) | 1024 | 80.9 | 562.81 ± 2.87 | 10.07 ± 0.38 | 20.9 | 183.5 |
| [YOLO11l-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l-obb.pt) | 1024 | 81.0 | 712.49 ± 4.98 | 13.46 ± 0.55 | 26.2 | 232.0 |
| [YOLO11x-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-obb.pt) | 1024 | 81.3 | 1408.63 ± 7.67 | 28.59 ± 0.96 | 58.8 | 520.2 |
- **mAP<sup>test</sup>** values are for single-model multiscale on the [DOTAv1](https://captain-whu.github.io/DOTA/index.html) dataset. <br>Reproduce with `yolo val obb data=DOTAv1.yaml device=0 split=test` and submit merged results to the [DOTA evaluation](https://captain-whu.github.io/DOTA/evaluation.html) server.
- **Speed** averaged over DOTAv1 val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce with `yolo val obb data=DOTAv1.yaml batch=1 device=0|cpu`
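An illustrative Python counterpart of the reproduction command (merging predictions and submitting them to the DOTA evaluation server remain manual steps):
```python
from ultralytics import YOLO

# Evaluate an OBB model on the DOTAv1 test split; official mAP requires
# submitting the merged results to the DOTA evaluation server
model = YOLO("yolo11n-obb.pt")
model.val(data="DOTAv1.yaml", split="test", device=0)
```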
</details>
## <div align="center">集成</div>
Our key integrations with leading AI platforms extend the functionality of Ultralytics offerings, enhancing tasks such as dataset labeling, training, visualization, and model management. Discover how Ultralytics, in collaboration with [Roboflow](https://roboflow.com/?ref=ultralytics), ClearML, [Comet](https://bit.ly/yolov8-readme-comet), Neural Magic, and [OpenVINO](https://docs.ultralytics.com/integrations/openvino/), can optimize your AI workflow.
<br>
<a href="https://ultralytics.com/hub" target="_blank">
@@ -245,36 +229,36 @@ Ultralytics 提供了 YOLOv8 的交互式笔记本,涵盖训练、验证、跟
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-neuralmagic.png" width="10%" alt="NeuralMagic logo"></a>
</div>
| Roboflow | ClearML ⭐ NEW | Comet ⭐ NEW | Neural Magic ⭐ NEW |
| :--------------------------------------------------------------------------------------------------------------------------: | :-------------------------------------------------------------------------------------------------------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------: |
| Label and export your custom datasets directly to YOLO11 for training with [Roboflow](https://roboflow.com/?ref=ultralytics) | Automatically track, visualize and even remotely train YOLO11 using [ClearML](https://clear.ml/) (open-source!) | Free forever, [Comet](https://bit.ly/yolov5-readme-comet) lets you save YOLO11 models, resume training, and interactively visualize and debug predictions | Run YOLO11 inference up to 6x faster with [Neural Magic DeepSparse](https://bit.ly/yolov5-neuralmagic) |
## <div align="center">Ultralytics HUB</div>
Experience seamless AI with [Ultralytics HUB](https://www.ultralytics.com/hub) ⭐, an all-in-one solution for data visualization and YOLO11 🚀 model training and deployment, without any coding. Transform images into actionable insights and bring your AI visions to life with ease using our cutting-edge platform and user-friendly [Ultralytics App](https://www.ultralytics.com/app-install). Start your journey for free now!
<a href="https://ultralytics.com/hub" target="_blank">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png" alt="Ultralytics HUB preview image"></a>
## <div align="center">贡献</div>
We welcome your input! Ultralytics YOLO would not be possible without help from our community. Please see our [Contributing Guide](https://docs.ultralytics.com/help/contributing/) to get started, and fill out our [Survey](https://www.ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey) to send us feedback on your experience. Thank you 🙏 to all our contributors!
<!-- SVG image from https://opencollective.com/ultralytics/contributors.svg?width=990 -->
<a href="https://github.com/ultralytics/ultralytics/graphs/contributors">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/image-contributors.png" alt="Ultralytics open-source contributors"></a>
## <div align="center">许可</div>
## <div align="center">许可</div>
Ultralytics offers two licensing options to accommodate diverse use cases:
- **AGPL-3.0 License**: This [OSI-approved](https://opensource.org/license) open-source license is ideal for students and enthusiasts, promoting open collaboration and knowledge sharing. See the [LICENSE](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) file for details.
- **Enterprise License**: Designed for commercial use, this license permits seamless integration of Ultralytics software and AI models into commercial goods and services, bypassing the open-source requirements of AGPL-3.0. If your scenario involves embedding our solutions into a commercial offering, contact us through [Ultralytics Licensing](https://www.ultralytics.com/license).
## <div align="center">联系方式</div>
## <div align="center">联系</div>
For Ultralytics bug reports and feature requests, please visit [GitHub Issues](https://github.com/ultralytics/ultralytics/issues). Become a member of the Ultralytics [Discord](https://discord.com/invite/ultralytics), [Reddit](https://www.reddit.com/r/ultralytics/), or [Forums](https://community.ultralytics.com/) to ask questions, share projects, join learning discussions, or get help with all things Ultralytics!
<br>
<div align="center">

@@ -1,6 +1,6 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Builds ultralytics/ultralytics:latest image on DockerHub https://hub.docker.com/r/ultralytics/ultralytics
# Image is CUDA-optimized for YOLO11 single/multi-GPU training and inference
# Start FROM PyTorch image https://hub.docker.com/r/pytorch/pytorch or nvcr.io/nvidia/pytorch:23.03-py3
FROM pytorch/pytorch:2.3.1-cuda12.1-cudnn8-runtime
@@ -21,8 +21,10 @@ ADD https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.ttf \
# Install linux packages
# g++ required to build 'tflite_support' and 'lap' packages, libusb-1.0-0 required for 'tflite_support' package
# libsm6 required by libqxcb to create QT-based windows for visualization; set 'QT_DEBUG_PLUGINS=1' to test in docker
RUN apt-get update && \
apt-get install -y --no-install-recommends \
gcc git zip unzip wget curl htop libgl1 libglib2.0-0 libpython3-dev gnupg g++ libusb-1.0-0 libsm6 \
&& rm -rf /var/lib/apt/lists/*
# Security updates
# https://security.snyk.io/vuln/SNYK-UBUNTU1804-OPENSSL-3314796
@@ -34,7 +36,7 @@ WORKDIR /ultralytics
# Copy contents and configure git
COPY . .
RUN sed -i '/^\[http "https:\/\/github\.com\/"\]/,+1d' .git/config
ADD https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt .
# Install pip packages
RUN python3 -m pip install --upgrade pip wheel
@@ -43,14 +45,15 @@ RUN pip install -e ".[export]" "tensorrt-cu12==10.1.0" "albumentations>=1.4.6" c
# Run exports to AutoInstall packages
# Edge TPU export fails the first time so is run twice here
RUN yolo export model=tmp/yolo11n.pt format=edgetpu imgsz=32 || yolo export model=tmp/yolo11n.pt format=edgetpu imgsz=32
RUN yolo export model=tmp/yolo11n.pt format=ncnn imgsz=32
# Requires <= Python 3.10, bug with paddlepaddle==2.5.0 https://github.com/PaddlePaddle/X2Paddle/issues/991
RUN pip install "paddlepaddle>=2.6.0" x2paddle
# Fix error: `np.bool` was a deprecated alias for the builtin `bool` segmentation error in Tests
RUN pip install numpy==1.23.5
# Remove extra build files
RUN rm -rf tmp /root/.config/Ultralytics/persistent_cache.json
# Usage Examples -------------------------------------------------------------------------------------------------------
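# Illustrative invocation of the published image (assumes Docker with the
# NVIDIA Container Toolkit on the host; adjust flags to your setup):
#   sudo docker run -it --ipc=host --gpus all ultralytics/ultralytics:latest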

@@ -20,8 +20,10 @@ ADD https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.ttf \
# Install linux packages
# g++ required to build 'tflite_support' and 'lap' packages, libusb-1.0-0 required for 'tflite_support' package
# pkg-config and libhdf5-dev (not included) are needed to build 'h5py==3.11.0' aarch64 wheel required by 'tensorflow'
RUN apt-get update && \
apt-get install -y --no-install-recommends \
python3-pip git zip unzip wget curl htop gcc libgl1 libglib2.0-0 libpython3-dev gnupg g++ libusb-1.0-0 \
&& rm -rf /var/lib/apt/lists/*
# Create working directory
WORKDIR /ultralytics
@@ -29,7 +31,7 @@ WORKDIR /ultralytics
# Copy contents and configure git
COPY . .
RUN sed -i '/^\[http "https:\/\/github\.com\/"\]/,+1d' .git/config
ADD https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt .
# Install pip packages
RUN python3 -m pip install --upgrade pip wheel
@@ -38,6 +40,8 @@ RUN pip install -e ".[export]"
# Creates a symbolic link to make 'python' point to 'python3'
RUN ln -sf /usr/bin/python3 /usr/bin/python
# Remove extra build files
RUN rm -rf /root/.config/Ultralytics/persistent_cache.json
# Usage Examples -------------------------------------------------------------------------------------------------------

@@ -17,11 +17,13 @@ ADD https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.ttf \
/root/.config/Ultralytics/
# Install linux packages
RUN apt-get update && \
apt-get install -y --no-install-recommends \
libgl1 \
&& rm -rf /var/lib/apt/lists/*
# Copy contents
ADD https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt .
# Install conda packages
# mkl required to fix 'OSError: libmkl_intel_lp64.so.2: cannot open shared object file: No such file or directory'
@@ -30,6 +32,8 @@ RUN conda config --set solver libmamba && \
conda install -c conda-forge ultralytics mkl
# conda install -c pytorch -c nvidia -c conda-forge pytorch torchvision pytorch-cuda=12.1 ultralytics mkl
# Remove extra build files
RUN rm -rf /root/.config/Ultralytics/persistent_cache.json
# Usage Examples -------------------------------------------------------------------------------------------------------

@@ -1,6 +1,6 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Builds ultralytics/ultralytics:latest-cpu image on DockerHub https://hub.docker.com/r/ultralytics/ultralytics
# Image is CPU-optimized for ONNX, OpenVINO and PyTorch YOLO11 deployments
# Start FROM Ubuntu image https://hub.docker.com/_/ubuntu
FROM ubuntu:23.10
@@ -18,8 +18,10 @@ ADD https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.ttf \
# Install linux packages
# g++ required to build 'tflite_support' and 'lap' packages, libusb-1.0-0 required for 'tflite_support' package
RUN apt-get update && \
apt-get install -y --no-install-recommends \
python3-pip git zip unzip wget curl htop libgl1 libglib2.0-0 libpython3-dev gnupg g++ libusb-1.0-0 \
&& rm -rf /var/lib/apt/lists/*
# Create working directory
WORKDIR /ultralytics
@@ -27,23 +29,23 @@ WORKDIR /ultralytics
# Copy contents and configure git
COPY . .
RUN sed -i '/^\[http "https:\/\/github\.com\/"\]/,+1d' .git/config
ADD https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt .
# Install pip packages
RUN python3 -m pip install --upgrade pip wheel
RUN pip install -e ".[export]" --extra-index-url https://download.pytorch.org/whl/cpu
# Run exports to AutoInstall packages
RUN yolo export model=tmp/yolo11n.pt format=edgetpu imgsz=32
RUN yolo export model=tmp/yolo11n.pt format=ncnn imgsz=32
# Requires Python<=3.10, bug with paddlepaddle==2.5.0 https://github.com/PaddlePaddle/X2Paddle/issues/991
# RUN pip install "paddlepaddle>=2.6.0" x2paddle
# Creates a symbolic link to make 'python' point to 'python3'
RUN ln -sf /usr/bin/python3 /usr/bin/python
# Remove extra build files
RUN rm -rf tmp /root/.config/Ultralytics/persistent_cache.json
# Usage Examples -------------------------------------------------------------------------------------------------------

@@ -1,6 +1,6 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Builds ultralytics/ultralytics:jetson-jetpack4 image on DockerHub https://hub.docker.com/r/ultralytics/ultralytics
# Supports JetPack4.x for YOLO11 on Jetson Nano, TX2, Xavier NX, AGX Xavier
# Start FROM https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-cuda
FROM nvcr.io/nvidia/l4t-cuda:10.2.460-runtime
@@ -20,8 +20,10 @@ RUN wget -q -O - https://repo.download.nvidia.com/jetson/jetson-ota-public.asc |
echo "deb https://repo.download.nvidia.com/jetson/t194 r32.7 main" >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
# Install dependencies
RUN apt-get update && \
apt-get install -y --no-install-recommends \
git python3.8 python3.8-dev python3-pip python3-libnvinfer libopenmpi-dev libopenblas-base libomp-dev gcc \
&& rm -rf /var/lib/apt/lists/*
# Create symbolic links for python3.8 and pip3
RUN ln -sf /usr/bin/python3.8 /usr/bin/python3
@@ -33,7 +35,7 @@ WORKDIR /ultralytics
# Copy contents and configure git
COPY . .
RUN sed -i '/^\[http "https:\/\/github\.com\/"\]/,+1d' .git/config
ADD https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt .
# Download onnxruntime-gpu 1.8.0 and tensorrt 8.2.0.6
# Other versions can be seen in https://elinux.org/Jetson_Zoo and https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048
@@ -48,7 +50,9 @@ RUN pip install \
https://github.com/ultralytics/assets/releases/download/v0.0.0/torch-1.11.0a0+gitbc2c6ed-cp38-cp38-linux_aarch64.whl \
https://github.com/ultralytics/assets/releases/download/v0.0.0/torchvision-0.12.0a0+9b5a3fe-cp38-cp38-linux_aarch64.whl
RUN pip install -e ".[export]"
# Remove extra build files
RUN rm -rf *.whl /root/.config/Ultralytics/persistent_cache.json
# Usage Examples -------------------------------------------------------------------------------------------------------

@@ -1,6 +1,6 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Builds ultralytics/ultralytics:jetson-jetson-jetpack5 image on DockerHub https://hub.docker.com/r/ultralytics/ultralytics
# Supports JetPack5.x for YOLO11 on Jetson Xavier NX, AGX Xavier, AGX Orin, Orin Nano and Orin NX
# Start FROM https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-pytorch
FROM nvcr.io/nvidia/l4t-pytorch:r35.2.1-pth2.0-py3
@@ -20,8 +20,10 @@ ADD https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.ttf \
# g++ required to build 'tflite_support' and 'lap' packages
# libusb-1.0-0 required for 'tflite_support' package when exporting to TFLite
# pkg-config and libhdf5-dev (not included) are needed to build 'h5py==3.11.0' aarch64 wheel required by 'tensorflow'
RUN apt-get update && \
apt-get install -y --no-install-recommends \
gcc git zip unzip wget curl htop libgl1 libglib2.0-0 libpython3-dev gnupg g++ libusb-1.0-0 \
&& rm -rf /var/lib/apt/lists/*
# Create working directory
WORKDIR /ultralytics
@@ -29,7 +31,7 @@ WORKDIR /ultralytics
# Copy contents and configure git
COPY . .
RUN sed -i '/^\[http "https:\/\/github\.com\/"\]/,+1d' .git/config
ADD https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt .
# Remove opencv-python from Ultralytics dependencies as it conflicts with opencv-python installed in base image
RUN sed -i '/opencv-python/d' pyproject.toml
@@ -41,8 +43,9 @@ ADD https://nvidia.box.com/shared/static/mvdcltm9ewdy2d5nurkiqorofz1s53ww.whl on
RUN python3 -m pip install --upgrade pip wheel
RUN pip install onnxruntime_gpu-1.15.1-cp38-cp38-linux_aarch64.whl
RUN pip install -e ".[export]"
# Remove extra build files
RUN rm -rf *.whl /root/.config/Ultralytics/persistent_cache.json
# Usage Examples -------------------------------------------------------------------------------------------------------

@@ -1,6 +1,6 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Builds ultralytics/ultralytics:jetson-jetpack6 image on DockerHub https://hub.docker.com/r/ultralytics/ultralytics
# Supports JetPack6.x for YOLO11 on Jetson AGX Orin, Orin NX and Orin Nano Series
# Start FROM https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-jetpack
FROM nvcr.io/nvidia/l4t-jetpack:r36.3.0
@@ -17,8 +17,10 @@ ADD https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.ttf \
/root/.config/Ultralytics/
# Install dependencies
RUN apt-get update && \
apt-get install -y --no-install-recommends \
git python3-pip libopenmpi-dev libopenblas-base libomp-dev \
&& rm -rf /var/lib/apt/lists/*
# Create working directory
WORKDIR /ultralytics
@@ -26,7 +28,7 @@ WORKDIR /ultralytics
# Copy contents and configure git
COPY . .
RUN sed -i '/^\[http "https:\/\/github\.com\/"\]/,+1d' .git/config
ADD https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt .
# Download onnxruntime-gpu 1.18.0 from https://elinux.org/Jetson_Zoo and https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048
ADD https://nvidia.box.com/shared/static/48dtuob7meiw6ebgfsfqakc9vse62sg4.whl onnxruntime_gpu-1.18.0-cp310-cp310-linux_aarch64.whl
@@ -38,7 +40,9 @@ RUN pip install \
https://github.com/ultralytics/assets/releases/download/v0.0.0/torch-2.3.0-cp310-cp310-linux_aarch64.whl \
https://github.com/ultralytics/assets/releases/download/v0.0.0/torchvision-0.18.0a0+6043bc2-cp310-cp310-linux_aarch64.whl
RUN pip install -e ".[export]"
# Remove extra build files
RUN rm -rf *.whl /root/.config/Ultralytics/persistent_cache.json
# Usage Examples -------------------------------------------------------------------------------------------------------

@@ -1,9 +1,9 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Builds ultralytics/ultralytics:latest-cpu image on DockerHub https://hub.docker.com/r/ultralytics/ultralytics
# Image is CPU-optimized for ONNX, OpenVINO and PyTorch YOLO11 deployments
# Use official Python base image for reproducibility (3.11.10 for export and 3.12.6 for inference)
FROM python:3.11.10-slim-bookworm
# Set environment variables
ENV PYTHONUNBUFFERED=1 \
@@ -18,8 +18,10 @@ ADD https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.ttf \
# Install linux packages
# g++ required to build 'tflite_support' and 'lap' packages, libusb-1.0-0 required for 'tflite_support' package
RUN apt-get update && \
apt-get install -y --no-install-recommends \
python3-pip git zip unzip wget curl htop libgl1 libglib2.0-0 libpython3-dev gnupg g++ libusb-1.0-0 \
&& rm -rf /var/lib/apt/lists/*
# Create working directory
WORKDIR /ultralytics
@@ -27,20 +29,20 @@ WORKDIR /ultralytics
# Copy contents and configure git
COPY . .
RUN sed -i '/^\[http "https:\/\/github\.com\/"\]/,+1d' .git/config
ADD https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt .
# Install pip packages
RUN python3 -m pip install --upgrade pip wheel
RUN pip install -e ".[export]" --extra-index-url https://download.pytorch.org/whl/cpu
# Run exports to AutoInstall packages
RUN yolo export model=tmp/yolo11n.pt format=edgetpu imgsz=32
RUN yolo export model=tmp/yolo11n.pt format=ncnn imgsz=32
# Requires Python<=3.10, bug with paddlepaddle==2.5.0 https://github.com/PaddlePaddle/X2Paddle/issues/991
RUN pip install "paddlepaddle>=2.6.0" x2paddle
# Remove extra build files
RUN rm -rf tmp /root/.config/Ultralytics/persistent_cache.json
# Usage Examples -------------------------------------------------------------------------------------------------------

@@ -1,6 +1,6 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Builds GitHub actions CI runner image for deployment to DockerHub https://hub.docker.com/r/ultralytics/ultralytics
# Image is CUDA-optimized for YOLO11 single/multi-GPU training and inference tests
# Start FROM Ultralytics GPU image
FROM ultralytics/ultralytics:latest

@@ -36,7 +36,7 @@ To train a YOLO model on the Caltech-101 dataset for 100 epochs, you can use the
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="caltech101", epochs=100, imgsz=416)
@@ -46,7 +46,7 @@ To train a YOLO model on the Caltech-101 dataset for 100 epochs, you can use the
```bash
# Start training from a pretrained *.pt model
yolo classify train data=caltech101 model=yolo11n-cls.pt epochs=100 imgsz=416
```
## Sample Images and Annotations
@@ -98,7 +98,7 @@ To train an Ultralytics YOLO model on the Caltech-101 dataset, you can use the p
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="caltech101", epochs=100, imgsz=416)
@@ -108,7 +108,7 @@ To train an Ultralytics YOLO model on the Caltech-101 dataset, you can use the p
```bash
# Start training from a pretrained *.pt model
yolo classify train data=caltech101 model=yolo11n-cls.pt epochs=100 imgsz=416
```
For more detailed arguments and options, refer to the model [Training](../../modes/train.md) page.

@@ -16,7 +16,7 @@ The [Caltech-256](https://data.caltech.edu/records/nyy15-4j048) dataset is an ex
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> How to Train <a href="https://www.ultralytics.com/glossary/image-classification">Image Classification</a> Model using Caltech-256 Dataset with Ultralytics HUB
</p>
## Key Features
@@ -47,7 +47,7 @@ To train a YOLO model on the Caltech-256 dataset for 100 epochs, you can use the
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="caltech256", epochs=100, imgsz=416)
@@ -57,7 +57,7 @@ To train a YOLO model on the Caltech-256 dataset for 100 epochs, you can use the
```bash
# Start training from a pretrained *.pt model
yolo classify train data=caltech256 model=yolo11n-cls.pt epochs=100 imgsz=416
```
## Sample Images and Annotations
@@ -106,7 +106,7 @@ To train a YOLO model on the Caltech-256 dataset for 100 [epochs](https://www.ul
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load a pretrained model
model = YOLO("yolo11n-cls.pt") # load a pretrained model
# Train the model
results = model.train(data="caltech256", epochs=100, imgsz=416)
@@ -116,7 +116,7 @@ To train a YOLO model on the Caltech-256 dataset for 100 [epochs](https://www.ul
```bash
# Start training from a pretrained *.pt model
yolo classify train data=caltech256 model=yolo11n-cls.pt epochs=100 imgsz=416
```
### What are the most common use cases for the Caltech-256 dataset?
@@ -141,6 +141,6 @@ Ultralytics YOLO models offer several advantages for training on the Caltech-256
- **High Accuracy**: YOLO models are known for their state-of-the-art performance in object detection tasks.
- **Speed**: They provide real-time inference capabilities, making them suitable for applications requiring quick predictions.
- **Ease of Use**: With Ultralytics HUB, users can train, validate, and deploy models without extensive coding.
- **Pretrained Models**: Starting from pretrained models, like `yolo11n-cls.pt`, can significantly reduce training time and improve model [accuracy](https://www.ultralytics.com/glossary/accuracy).
For more details, explore our [comprehensive training guide](../../modes/train.md).

@@ -16,7 +16,7 @@ The [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html) (Canadian Institute
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> How to Train an <a href="https://www.ultralytics.com/glossary/image-classification">Image Classification</a> Model with CIFAR-10 Dataset using Ultralytics YOLO11
</p>
## Key Features
@@ -50,7 +50,7 @@ To train a YOLO model on the CIFAR-10 dataset for 100 epochs with an image size
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="cifar10", epochs=100, imgsz=32)
@@ -60,7 +60,7 @@ To train a YOLO model on the CIFAR-10 dataset for 100 epochs with an image size
```bash
# Start training from a pretrained *.pt model
yolo classify train data=cifar10 model=yolo11n-cls.pt epochs=100 imgsz=32
```
## Sample Images and Annotations
@@ -104,7 +104,7 @@ To train a YOLO model on the CIFAR-10 dataset using Ultralytics, you can follow
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="cifar10", epochs=100, imgsz=32)
@@ -114,7 +114,7 @@ To train a YOLO model on the CIFAR-10 dataset using Ultralytics, you can follow
```bash
# Start training from a pretrained *.pt model
yolo classify train data=cifar10 model=yolo11n-cls.pt epochs=100 imgsz=32
```
For more details, refer to the model [Training](../../modes/train.md) page.

@@ -39,7 +39,7 @@ To train a YOLO model on the CIFAR-100 dataset for 100 [epochs](https://www.ultr
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="cifar100", epochs=100, imgsz=32)
@@ -49,7 +49,7 @@ To train a YOLO model on the CIFAR-100 dataset for 100 [epochs](https://www.ultr
```bash
# Start training from a pretrained *.pt model
yolo classify train data=cifar100 model=yolo11n-cls.pt epochs=100 imgsz=32
```
## Sample Images and Annotations
@@ -97,7 +97,7 @@ You can train a YOLO model on the CIFAR-100 dataset using either Python or CLI c
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="cifar100", epochs=100, imgsz=32)
@@ -107,7 +107,7 @@ You can train a YOLO model on the CIFAR-100 dataset using either Python or CLI c
```bash
# Start training from a pretrained *.pt model
yolo classify train data=cifar100 model=yolo11n-cls.pt epochs=100 imgsz=32
```
For a comprehensive list of available arguments, please refer to the model [Training](../../modes/train.md) page.

@@ -16,7 +16,7 @@ The [Fashion-MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset is
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> How to do <a href="https://www.ultralytics.com/glossary/image-classification">Image Classification</a> on Fashion MNIST Dataset using Ultralytics YOLO11
</p>
## Key Features
@@ -64,7 +64,7 @@ To train a CNN model on the Fashion-MNIST dataset for 100 [epochs](https://www.u
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="fashion-mnist", epochs=100, imgsz=28)
@@ -74,7 +74,7 @@ To train a CNN model on the Fashion-MNIST dataset for 100 [epochs](https://www.u
```bash
# Start training from a pretrained *.pt model
yolo classify train data=fashion-mnist model=yolo11n-cls.pt epochs=100 imgsz=28
```
## Sample Images and Annotations
@@ -107,7 +107,7 @@ To train an Ultralytics YOLO model on the Fashion-MNIST dataset, you can use bot
from ultralytics import YOLO
# Load a pretrained model
model = YOLO("yolov8n-cls.pt")
model = YOLO("yolo11n-cls.pt")
# Train the model on Fashion-MNIST
results = model.train(data="fashion-mnist", epochs=100, imgsz=28)
@@ -117,7 +117,7 @@ To train an Ultralytics YOLO model on the Fashion-MNIST dataset, you can use bot
=== "CLI"
```bash
yolo classify train data=fashion-mnist model=yolo11n-cls.pt epochs=100 imgsz=28
```
For more detailed training parameters, refer to the [Training page](../../modes/train.md).
@@ -128,7 +128,7 @@ The [Fashion-MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset is
### Can I use Ultralytics YOLO for image classification tasks like Fashion-MNIST?
Yes, Ultralytics YOLO models can be used for image classification tasks, including those involving the Fashion-MNIST dataset. YOLO11, for example, supports various vision tasks such as detection, segmentation, and classification. To get started with image classification tasks, refer to the [Classification page](https://docs.ultralytics.com/tasks/classify/).
### What are the key features and structure of the Fashion-MNIST dataset?

@@ -10,13 +10,7 @@ keywords: ImageNet, deep learning, visual recognition, computer vision, pretrain
## ImageNet Pretrained Models
{% include "macros/yolo-cls-perf.md" %}
## Key Features
@@ -49,7 +43,7 @@ To train a deep learning model on the ImageNet dataset for 100 [epochs](https://
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="imagenet", epochs=100, imgsz=224)
@@ -59,7 +53,7 @@ To train a deep learning model on the ImageNet dataset for 100 [epochs](https://
```bash
# Start training from a pretrained *.pt model
yolo classify train data=imagenet model=yolo11n-cls.pt epochs=100 imgsz=224
```
## Sample Images and Annotations
@@ -110,7 +104,7 @@ To use a pretrained Ultralytics YOLO model for image classification on the Image
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="imagenet", epochs=100, imgsz=224)
@@ -120,14 +114,14 @@ To use a pretrained Ultralytics YOLO model for image classification on the Image
```bash
# Start training from a pretrained *.pt model
yolo classify train data=imagenet model=yolo11n-cls.pt epochs=100 imgsz=224
```
For more in-depth training instructions, refer to our [Training page](../../modes/train.md).
### Why should I use the Ultralytics YOLOv8 pretrained models for my ImageNet dataset projects?
### Why should I use the Ultralytics YOLO11 pretrained models for my ImageNet dataset projects?
Ultralytics YOLOv8 pretrained models offer state-of-the-art performance in terms of speed and [accuracy](https://www.ultralytics.com/glossary/accuracy) for various computer vision tasks. For example, the YOLOv8n-cls model, with a top-1 accuracy of 69.0% and a top-5 accuracy of 88.3%, is optimized for real-time applications. Pretrained models reduce the computational resources required for training from scratch and accelerate development cycles. Learn more about the performance metrics of YOLOv8 models in the [ImageNet Pretrained Models section](#imagenet-pretrained-models).
Ultralytics YOLO11 pretrained models offer state-of-the-art performance in terms of speed and [accuracy](https://www.ultralytics.com/glossary/accuracy) for various computer vision tasks. For example, the compact YOLO11n-cls model is optimized for real-time applications while still delivering strong top-1 and top-5 accuracy. Pretrained models reduce the computational resources required for training from scratch and accelerate development cycles. Learn more about the performance metrics of YOLO11 models in the [ImageNet Pretrained Models section](#imagenet-pretrained-models).
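To sanity-check these metrics on your own hardware, you can run the validation mode directly; a minimal sketch, assuming the ImageNet validation split has already been downloaded and prepared locally:

```python
from ultralytics import YOLO

# Load a pretrained classification checkpoint
model = YOLO("yolo11n-cls.pt")

# Validate on ImageNet (large download; assumes the dataset is already prepared)
metrics = model.val(data="imagenet", imgsz=224)
print(metrics.top1, metrics.top5)  # top-1 and top-5 accuracy
```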
### How is the ImageNet dataset structured, and why is it important?

@ -35,7 +35,7 @@ To test a deep learning model on the ImageNet10 dataset with an image size of 22
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="imagenet10", epochs=5, imgsz=224)
@ -45,7 +45,7 @@ To test a deep learning model on the ImageNet10 dataset with an image size of 22
```bash
# Start training from a pretrained *.pt model
yolo classify train data=imagenet10 model=yolov8n-cls.pt epochs=5 imgsz=224
yolo classify train data=imagenet10 model=yolo11n-cls.pt epochs=5 imgsz=224
```
## Sample Images and Annotations
@ -94,7 +94,7 @@ To test your deep learning model on the ImageNet10 dataset with an image size of
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="imagenet10", epochs=5, imgsz=224)
@ -104,7 +104,7 @@ To test your deep learning model on the ImageNet10 dataset with an image size of
```bash
# Start training from a pretrained *.pt model
yolo classify train data=imagenet10 model=yolov8n-cls.pt epochs=5 imgsz=224
yolo classify train data=imagenet10 model=yolo11n-cls.pt epochs=5 imgsz=224
```
Refer to the [Training](../../modes/train.md) page for a comprehensive list of available arguments.

@ -37,7 +37,7 @@ To train a model on the ImageNette dataset for 100 epochs with a standard image
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="imagenette", epochs=100, imgsz=224)
@ -47,7 +47,7 @@ To train a model on the ImageNette dataset for 100 epochs with a standard image
```bash
# Start training from a pretrained *.pt model
yolo classify train data=imagenette model=yolov8n-cls.pt epochs=100 imgsz=224
yolo classify train data=imagenette model=yolo11n-cls.pt epochs=100 imgsz=224
```
## Sample Images and Annotations
@ -72,7 +72,7 @@ To use these datasets, simply replace 'imagenette' with 'imagenette160' or 'imag
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-cls.pt") # load a pretrained model (recommended for training)
# Train the model with ImageNette160
results = model.train(data="imagenette160", epochs=100, imgsz=160)
@ -82,7 +82,7 @@ To use these datasets, simply replace 'imagenette' with 'imagenette160' or 'imag
```bash
# Start training from a pretrained *.pt model with ImageNette160
yolo classify train data=imagenette160 model=yolov8n-cls.pt epochs=100 imgsz=160
yolo classify train data=imagenette160 model=yolo11n-cls.pt epochs=100 imgsz=160
```
!!! example "Train Example with ImageNette320"
@ -93,7 +93,7 @@ To use these datasets, simply replace 'imagenette' with 'imagenette160' or 'imag
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-cls.pt") # load a pretrained model (recommended for training)
# Train the model with ImageNette320
results = model.train(data="imagenette320", epochs=100, imgsz=320)
@ -103,7 +103,7 @@ To use these datasets, simply replace 'imagenette' with 'imagenette160' or 'imag
```bash
# Start training from a pretrained *.pt model with ImageNette320
yolo classify train data=imagenette320 model=yolov8n-cls.pt epochs=100 imgsz=320
yolo classify train data=imagenette320 model=yolo11n-cls.pt epochs=100 imgsz=320
```
These smaller versions of the dataset allow for rapid iterations during the development process while still providing valuable and realistic image classification tasks.
@ -130,7 +130,7 @@ To train a YOLO model on the ImageNette dataset for 100 [epochs](https://www.ult
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="imagenette", epochs=100, imgsz=224)
@ -140,7 +140,7 @@ To train a YOLO model on the ImageNette dataset for 100 [epochs](https://www.ult
```bash
# Start training from a pretrained *.pt model
yolo classify train data=imagenette model=yolov8n-cls.pt epochs=100 imgsz=224
yolo classify train data=imagenette model=yolo11n-cls.pt epochs=100 imgsz=224
```
For more details, see the [Training](../../modes/train.md) documentation page.
@ -167,7 +167,7 @@ Yes, the ImageNette dataset is also available in two resized versions: ImageNett
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt")
model = YOLO("yolo11n-cls.pt")
# Train the model with ImageNette160
results = model.train(data="imagenette160", epochs=100, imgsz=160)
@ -177,7 +177,7 @@ Yes, the ImageNette dataset is also available in two resized versions: ImageNett
```bash
# Start training from a pretrained *.pt model with ImageNette160
yolo detect train data=imagenette160 model=yolov8n-cls.pt epochs=100 imgsz=160
yolo classify train data=imagenette160 model=yolo11n-cls.pt epochs=100 imgsz=160
```
For more information, refer to [Training with ImageNette160 and ImageNette320](#imagenette160-and-imagenette320).

@ -34,7 +34,7 @@ To train a CNN model on the ImageWoof dataset for 100 [epochs](https://www.ultra
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="imagewoof", epochs=100, imgsz=224)
@ -44,7 +44,7 @@ To train a CNN model on the ImageWoof dataset for 100 [epochs](https://www.ultra
```bash
# Start training from a pretrained *.pt model
yolo classify train data=imagewoof model=yolov8n-cls.pt epochs=100 imgsz=224
yolo classify train data=imagewoof model=yolo11n-cls.pt epochs=100 imgsz=224
```
## Dataset Variants
@ -67,7 +67,7 @@ To use these variants in your training, simply replace 'imagewoof' in the datase
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-cls.pt") # load a pretrained model (recommended for training)
# For medium-sized dataset
model.train(data="imagewoof320", epochs=100, imgsz=224)
@ -80,7 +80,7 @@ To use these variants in your training, simply replace 'imagewoof' in the datase
```bash
# Load a pretrained model and train on the medium-sized dataset
yolo classify train model=yolov8n-cls.pt data=imagewoof320 epochs=100 imgsz=224
yolo classify train model=yolo11n-cls.pt data=imagewoof320 epochs=100 imgsz=224
```
Note that using smaller images will likely yield lower classification accuracy. However, they are an excellent way to iterate quickly in the early stages of model development and prototyping.
@ -116,7 +116,7 @@ To train a [Convolutional Neural Network](https://www.ultralytics.com/glossary/c
```python
from ultralytics import YOLO
model = YOLO("yolov8n-cls.pt") # Load a pretrained model
model = YOLO("yolo11n-cls.pt") # Load a pretrained model
results = model.train(data="imagewoof", epochs=100, imgsz=224)
```
@ -124,7 +124,7 @@ To train a [Convolutional Neural Network](https://www.ultralytics.com/glossary/c
=== "CLI"
```bash
yolo classify train data=imagewoof model=yolov8n-cls.pt epochs=100 imgsz=224
yolo classify train data=imagewoof model=yolo11n-cls.pt epochs=100 imgsz=224
```
For more details on available training arguments, refer to the [Training](../../modes/train.md) page.

@ -86,7 +86,7 @@ This structured approach ensures that the model can effectively learn from well-
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="path/to/dataset", epochs=100, imgsz=640)
@ -96,7 +96,7 @@ This structured approach ensures that the model can effectively learn from well-
```bash
# Start training from a pretrained *.pt model
yolo detect train data=path/to/data model=yolov8n-cls.pt epochs=100 imgsz=640
yolo classify train data=path/to/data model=yolo11n-cls.pt epochs=100 imgsz=640
```
## Supported Datasets
@ -170,7 +170,7 @@ To use your own dataset with Ultralytics YOLO, ensure it follows the specified d
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="path/to/your/dataset", epochs=100, imgsz=640)
@ -182,7 +182,7 @@ More details can be found in the [Adding your own dataset](#adding-your-own-data
Ultralytics YOLO offers several benefits for image classification, including:
- **Pretrained Models**: Load pretrained models like `yolov8n-cls.pt` to jump-start your training process.
- **Pretrained Models**: Load pretrained models like `yolo11n-cls.pt` to jump-start your training process.
- **Ease of Use**: Simple API and CLI commands for training and evaluation.
- **High Performance**: State-of-the-art [accuracy](https://www.ultralytics.com/glossary/accuracy) and speed, ideal for real-time applications.
- **Support for Multiple Datasets**: Seamless integration with various popular datasets like CIFAR-10, ImageNet, and more.
@ -202,7 +202,7 @@ Training a model using Ultralytics YOLO can be done easily in both Python and CL
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load a pretrained model
model = YOLO("yolo11n-cls.pt") # load a pretrained model
# Train the model
results = model.train(data="path/to/dataset", epochs=100, imgsz=640)
@ -213,7 +213,7 @@ Training a model using Ultralytics YOLO can be done easily in both Python and CL
```bash
# Start training from a pretrained *.pt model
yolo detect train data=path/to/data model=yolov8n-cls.pt epochs=100 imgsz=640
yolo classify train data=path/to/data model=yolo11n-cls.pt epochs=100 imgsz=640
```
These examples demonstrate the straightforward process of training a YOLO model using either approach. For more information, visit the [Usage](#usage) section.
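Once training finishes, the same API can load your trained weights for inference; a minimal sketch, assuming the default classification save location and a hypothetical image path:

```python
from ultralytics import YOLO

# Load trained weights (default save location; adjust to your run)
model = YOLO("runs/classify/train/weights/best.pt")

# Classify a single image (hypothetical path)
results = model("path/to/image.jpg")
print(results[0].probs.top1)  # index of the highest-confidence class
```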

@ -42,7 +42,7 @@ To train a CNN model on the MNIST dataset for 100 [epochs](https://www.ultralyti
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="mnist", epochs=100, imgsz=32)
@ -52,7 +52,7 @@ To train a CNN model on the MNIST dataset for 100 [epochs](https://www.ultralyti
```bash
# Start training from a pretrained *.pt model
yolo classify train data=mnist model=yolov8n-cls.pt epochs=100 imgsz=28
yolo classify train data=mnist model=yolo11n-cls.pt epochs=100 imgsz=32
```
## Sample Images and Annotations
@ -103,7 +103,7 @@ To train a model on the MNIST dataset using Ultralytics YOLO, you can follow the
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="mnist", epochs=100, imgsz=32)
@ -113,7 +113,7 @@ To train a model on the MNIST dataset using Ultralytics YOLO, you can follow the
```bash
# Start training from a pretrained *.pt model
yolo classify train data=mnist model=yolov8n-cls.pt epochs=100 imgsz=28
yolo classify train data=mnist model=yolo11n-cls.pt epochs=100 imgsz=32
```
For a detailed list of available training arguments, refer to the [Training](../../modes/train.md) page.

@ -1,7 +1,7 @@
---
comments: true
description: Explore our African Wildlife Dataset featuring images of buffalo, elephant, rhino, and zebra for training computer vision models. Ideal for research and conservation.
keywords: African Wildlife Dataset, South African animals, object detection, computer vision, YOLOv8, wildlife research, conservation, dataset
keywords: African Wildlife Dataset, South African animals, object detection, computer vision, YOLO11, wildlife research, conservation, dataset
---
# African Wildlife Dataset
@ -16,7 +16,7 @@ This dataset showcases four common animal classes typically found in South Afric
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> African Wildlife Animals Detection using Ultralytics YOLOv8
<strong>Watch:</strong> African Wildlife Animals Detection using Ultralytics YOLO11
</p>
## Dataset Structure
@ -43,7 +43,7 @@ A YAML (Yet Another Markup Language) file defines the dataset configuration, inc
## Usage
To train a YOLOv8n model on the African wildlife dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, use the provided code samples. For a comprehensive list of available parameters, refer to the model's [Training](../../modes/train.md) page.
To train a YOLO11n model on the African wildlife dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, use the provided code samples. For a comprehensive list of available parameters, refer to the model's [Training](../../modes/train.md) page.
!!! example "Train Example"
@ -53,7 +53,7 @@ To train a YOLOv8n model on the African wildlife dataset for 100 [epochs](https:
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="african-wildlife.yaml", epochs=100, imgsz=640)
@ -63,7 +63,7 @@ To train a YOLOv8n model on the African wildlife dataset for 100 [epochs](https:
```bash
# Start training from a pretrained *.pt model
yolo detect train data=african-wildlife.yaml model=yolov8n.pt epochs=100 imgsz=640
yolo detect train data=african-wildlife.yaml model=yolo11n.pt epochs=100 imgsz=640
```
!!! example "Inference Example"
@ -107,9 +107,9 @@ The dataset has been released available under the [AGPL-3.0 License](https://git
The African Wildlife Dataset includes images of four common animal species found in South African nature reserves: buffalo, elephant, rhino, and zebra. It is a valuable resource for training computer vision algorithms in object detection and animal identification. The dataset supports various tasks like object tracking, research, and conservation efforts. For more information on its structure and applications, refer to the [Dataset Structure](#dataset-structure) section and [Applications](#applications) of the dataset.
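Since the dataset's classes transfer well to video footage, a model fine-tuned on it can also be used for object tracking; a minimal sketch, assuming hypothetical trained weights and a hypothetical video file:

```python
from ultralytics import YOLO

# Load fine-tuned weights (hypothetical path)
model = YOLO("path/to/best.pt")

# Track animals across video frames with the default tracker
results = model.track(source="path/to/wildlife_video.mp4", save=True)
```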
### How do I train a YOLOv8 model using the African Wildlife Dataset?
### How do I train a YOLO11 model using the African Wildlife Dataset?
You can train a YOLOv8 model on the African Wildlife Dataset by using the `african-wildlife.yaml` configuration file. Below is an example of how to train the YOLOv8n model for 100 epochs with an image size of 640:
You can train a YOLO11 model on the African Wildlife Dataset by using the `african-wildlife.yaml` configuration file. Below is an example of how to train the YOLO11n model for 100 epochs with an image size of 640:
!!! example
@ -119,7 +119,7 @@ You can train a YOLOv8 model on the African Wildlife Dataset by using the `afric
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="african-wildlife.yaml", epochs=100, imgsz=640)
@ -129,7 +129,7 @@ You can train a YOLOv8 model on the African Wildlife Dataset by using the `afric
```bash
# Start training from a pretrained *.pt model
yolo detect train data=african-wildlife.yaml model=yolov8n.pt epochs=100 imgsz=640
yolo detect train data=african-wildlife.yaml model=yolo11n.pt epochs=100 imgsz=640
```
For additional training parameters and options, refer to the [Training](../../modes/train.md) documentation.

@ -43,7 +43,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
## Usage
To train a YOLOv8n model on the Argoverse dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
To train a YOLO11n model on the Argoverse dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@ -53,7 +53,7 @@ To train a YOLOv8n model on the Argoverse dataset for 100 [epochs](https://www.u
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="Argoverse.yaml", epochs=100, imgsz=640)
@ -63,7 +63,7 @@ To train a YOLOv8n model on the Argoverse dataset for 100 [epochs](https://www.u
```bash
# Start training from a pretrained *.pt model
yolo detect train data=Argoverse.yaml model=yolov8n.pt epochs=100 imgsz=640
yolo detect train data=Argoverse.yaml model=yolo11n.pt epochs=100 imgsz=640
```
## Sample Data and Annotations
@ -104,7 +104,7 @@ The [Argoverse](https://www.argoverse.org/) dataset, developed by Argo AI, suppo
### How can I train an Ultralytics YOLO model using the Argoverse dataset?
To train a YOLOv8 model with the Argoverse dataset, use the provided YAML configuration file and the following code:
To train a YOLO11 model with the Argoverse dataset, use the provided YAML configuration file and the following code:
!!! example "Train Example"
@ -114,7 +114,7 @@ To train a YOLOv8 model with the Argoverse dataset, use the provided YAML config
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="Argoverse.yaml", epochs=100, imgsz=640)
@ -125,7 +125,7 @@ To train a YOLOv8 model with the Argoverse dataset, use the provided YAML config
```bash
# Start training from a pretrained *.pt model
yolo detect train data=Argoverse.yaml model=yolov8n.pt epochs=100 imgsz=640
yolo detect train data=Argoverse.yaml model=yolo11n.pt epochs=100 imgsz=640
```
For a detailed explanation of the arguments, refer to the model [Training](../../modes/train.md) page.

@ -42,7 +42,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
## Usage
To train a YOLOv8n model on the brain tumor dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, utilize the provided code snippets. For a detailed list of available arguments, consult the model's [Training](../../modes/train.md) page.
To train a YOLO11n model on the brain tumor dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, utilize the provided code snippets. For a detailed list of available arguments, consult the model's [Training](../../modes/train.md) page.
!!! example "Train Example"
@ -52,7 +52,7 @@ To train a YOLOv8n model on the brain tumor dataset for 100 [epochs](https://www
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="brain-tumor.yaml", epochs=100, imgsz=640)
@ -62,7 +62,7 @@ To train a YOLOv8n model on the brain tumor dataset for 100 [epochs](https://www
```bash
# Start training from a pretrained *.pt model
yolo detect train data=brain-tumor.yaml model=yolov8n.pt epochs=100 imgsz=640
yolo detect train data=brain-tumor.yaml model=yolo11n.pt epochs=100 imgsz=640
```
!!! example "Inference Example"
@ -106,9 +106,9 @@ The dataset has been released available under the [AGPL-3.0 License](https://git
The brain tumor dataset is divided into two subsets: the **training set** consists of 893 images with corresponding annotations, while the **testing set** comprises 223 images with paired annotations. This structured division aids in developing robust and accurate computer vision models for detecting brain tumors. For more information on the dataset structure, visit the [Dataset Structure](#dataset-structure) section.
### How can I train a YOLOv8 model on the brain tumor dataset using Ultralytics?
### How can I train a YOLO11 model on the brain tumor dataset using Ultralytics?
You can train a YOLOv8 model on the brain tumor dataset for 100 epochs with an image size of 640px using both Python and CLI methods. Below are the examples for both:
You can train a YOLO11 model on the brain tumor dataset for 100 epochs with an image size of 640px using both Python and CLI methods. Below are the examples for both:
!!! example "Train Example"
@ -118,7 +118,7 @@ You can train a YOLOv8 model on the brain tumor dataset for 100 epochs with an i
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="brain-tumor.yaml", epochs=100, imgsz=640)
@ -129,7 +129,7 @@ You can train a YOLOv8 model on the brain tumor dataset for 100 epochs with an i
```bash
# Start training from a pretrained *.pt model
yolo detect train data=brain-tumor.yaml model=yolov8n.pt epochs=100 imgsz=640
yolo detect train data=brain-tumor.yaml model=yolo11n.pt epochs=100 imgsz=640
```
For a detailed list of available arguments, refer to the [Training](../../modes/train.md) page.
@ -138,9 +138,9 @@ For a detailed list of available arguments, refer to the [Training](../../modes/
Using the brain tumor dataset in AI projects enables early diagnosis and treatment planning for brain tumors. It helps automate brain tumor identification through computer vision, facilitating accurate and timely medical interventions and supporting personalized treatment strategies. This application holds significant potential for improving patient outcomes and medical efficiency.
### How do I perform inference using a fine-tuned YOLOv8 model on the brain tumor dataset?
### How do I perform inference using a fine-tuned YOLO11 model on the brain tumor dataset?
Inference using a fine-tuned YOLOv8 model can be performed with either Python or CLI approaches. Here are the examples:
Inference using a fine-tuned YOLO11 model can be performed with either Python or CLI approaches. Here are the examples:
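For instance, a minimal Python sketch, assuming the default detection save path and a hypothetical test image:

```python
from ultralytics import YOLO

# Load fine-tuned weights (default detection save path; adjust to your run)
model = YOLO("runs/detect/train/weights/best.pt")

# Run inference on a scan (hypothetical path) and save the annotated output
results = model.predict("path/to/mri_scan.jpg", save=True)
print(results[0].boxes)  # predicted tumor bounding boxes
```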
!!! example "Inference Example"

@ -21,13 +21,7 @@ The [COCO](https://cocodataset.org/#home) (Common Objects in Context) dataset is
## COCO Pretrained Models
| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ------------------------------------------------------------------------------------ | --------------------- | -------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n.pt) | 640 | 37.3 | 80.4 | 0.99 | 3.2 | 8.7 |
| [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s.pt) | 640 | 44.9 | 128.4 | 1.20 | 11.2 | 28.6 |
| [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m.pt) | 640 | 50.2 | 234.7 | 1.83 | 25.9 | 78.9 |
| [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l.pt) | 640 | 52.9 | 375.2 | 2.39 | 43.7 | 165.2 |
| [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x.pt) | 640 | 53.9 | 479.1 | 3.53 | 68.2 | 257.8 |
{% include "macros/yolo-det-perf.md" %}
## Key Features
@ -60,7 +54,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
## Usage
To train a YOLOv8n model on the COCO dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
To train a YOLO11n model on the COCO dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@ -70,7 +64,7 @@ To train a YOLOv8n model on the COCO dataset for 100 [epochs](https://www.ultral
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco.yaml", epochs=100, imgsz=640)
@ -80,7 +74,7 @@ To train a YOLOv8n model on the COCO dataset for 100 [epochs](https://www.ultral
```bash
# Start training from a pretrained *.pt model
yolo detect train data=coco.yaml model=yolov8n.pt epochs=100 imgsz=640
yolo detect train data=coco.yaml model=yolo11n.pt epochs=100 imgsz=640
```
## Sample Images and Annotations
@ -122,7 +116,7 @@ The [COCO dataset](https://cocodataset.org/#home) (Common Objects in Context) is
### How can I train a YOLO model using the COCO dataset?
To train a YOLOv8 model using the COCO dataset, you can use the following code snippets:
To train a YOLO11 model using the COCO dataset, you can use the following code snippets:
!!! example "Train Example"
@ -132,7 +126,7 @@ To train a YOLOv8 model using the COCO dataset, you can use the following code s
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco.yaml", epochs=100, imgsz=640)
@ -142,7 +136,7 @@ To train a YOLOv8 model using the COCO dataset, you can use the following code s
```bash
# Start training from a pretrained *.pt model
yolo detect train data=coco.yaml model=yolov8n.pt epochs=100 imgsz=640
yolo detect train data=coco.yaml model=yolo11n.pt epochs=100 imgsz=640
```
Refer to the [Training page](../../modes/train.md) for more details on available arguments.
@ -156,13 +150,15 @@ The COCO dataset includes:
- Standardized evaluation metrics for object detection (mean Average Precision, mAP) and segmentation (mean Average Recall, mAR).
- **Mosaicing** technique in training batches to enhance model generalization across various object sizes and contexts.
### Where can I find pretrained YOLOv8 models trained on the COCO dataset?
### Where can I find pretrained YOLO11 models trained on the COCO dataset?
Pretrained YOLOv8 models on the COCO dataset can be downloaded from the links provided in the documentation. Examples include:
Pretrained YOLO11 models on the COCO dataset can be downloaded from the links provided in the documentation. Examples include:
- [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n.pt)
- [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s.pt)
- [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m.pt)
- [YOLO11n](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt)
- [YOLO11s](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s.pt)
- [YOLO11m](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m.pt)
- [YOLO11l](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l.pt)
- [YOLO11x](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x.pt)
These models vary in size, mAP, and inference speed, providing options for different performance and resource requirements.
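As a quick way to compare checkpoints yourself, you can load any of them and run validation; a minimal sketch using the small COCO8 subset for speed (official mAP figures are measured on the full COCO val2017 split):

```python
from ultralytics import YOLO

# Load a COCO-pretrained checkpoint
model = YOLO("yolo11n.pt")

# Quick sanity check on COCO8; use data="coco.yaml" for the full benchmark
metrics = model.val(data="coco8.yaml", imgsz=640)
print(metrics.box.map)  # mAP50-95
```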

@ -1,7 +1,7 @@
---
comments: true
description: Explore the Ultralytics COCO8 dataset, a versatile and manageable set of 8 images perfect for testing object detection models and training pipelines.
keywords: COCO8, Ultralytics, dataset, object detection, YOLOv8, training, validation, machine learning, computer vision
keywords: COCO8, Ultralytics, dataset, object detection, YOLO11, training, validation, machine learning, computer vision
---
# COCO8 Dataset
@ -21,7 +21,7 @@ keywords: COCO8, Ultralytics, dataset, object detection, YOLOv8, training, valid
<strong>Watch:</strong> Ultralytics COCO Dataset Overview
</p>
This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.com/) and [YOLOv8](https://github.com/ultralytics/ultralytics).
This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.com/) and [YOLO11](https://github.com/ultralytics/ultralytics).
## Dataset YAML
@ -35,7 +35,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
## Usage
To train a YOLOv8n model on the COCO8 dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
To train a YOLO11n model on the COCO8 dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@ -45,7 +45,7 @@ To train a YOLOv8n model on the COCO8 dataset for 100 [epochs](https://www.ultra
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
@ -55,7 +55,7 @@ To train a YOLOv8n model on the COCO8 dataset for 100 [epochs](https://www.ultra
```bash
# Start training from a pretrained *.pt model
yolo detect train data=coco8.yaml model=yolov8n.pt epochs=100 imgsz=640
yolo detect train data=coco8.yaml model=yolo11n.pt epochs=100 imgsz=640
```
## Sample Images and Annotations
@ -95,9 +95,9 @@ We would like to acknowledge the COCO Consortium for creating and maintaining th
The Ultralytics COCO8 dataset is a compact yet versatile object detection dataset consisting of the first 8 images from the COCO train 2017 set, with 4 images for training and 4 for validation. It is designed for testing and debugging object detection models and experimentation with new detection approaches. Despite its small size, COCO8 offers enough diversity to act as a sanity check for your training pipelines before deploying larger datasets. For more details, view the [COCO8 dataset](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8.yaml).
### How do I train a YOLOv8 model using the COCO8 dataset?
### How do I train a YOLO11 model using the COCO8 dataset?
To train a YOLOv8 model using the COCO8 dataset, you can employ either Python or CLI commands. Here's how you can start:
To train a YOLO11 model using the COCO8 dataset, you can employ either Python or CLI commands. Here's how you can start:
!!! example "Train Example"
@ -107,7 +107,7 @@ To train a YOLOv8 model using the COCO8 dataset, you can employ either Python or
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
@ -117,19 +117,19 @@ To train a YOLOv8 model using the COCO8 dataset, you can employ either Python or
```bash
# Start training from a pretrained *.pt model
yolo detect train data=coco8.yaml model=yolov8n.pt epochs=100 imgsz=640
yolo detect train data=coco8.yaml model=yolo11n.pt epochs=100 imgsz=640
```
For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
### Why should I use Ultralytics HUB for managing my COCO8 training?
Ultralytics HUB is an all-in-one web tool designed to simplify the training and deployment of YOLO models, including the Ultralytics YOLOv8 models on the COCO8 dataset. It offers cloud training, real-time tracking, and seamless dataset management. HUB allows you to start training with a single click and avoids the complexities of manual setups. Discover more about [Ultralytics HUB](https://hub.ultralytics.com/) and its benefits.
Ultralytics HUB is an all-in-one web tool designed to simplify the training and deployment of YOLO models, including the Ultralytics YOLO11 models on the COCO8 dataset. It offers cloud training, real-time tracking, and seamless dataset management. HUB allows you to start training with a single click and avoids the complexities of manual setups. Discover more about [Ultralytics HUB](https://hub.ultralytics.com/) and its benefits.
### What are the benefits of using mosaic augmentation in training with the COCO8 dataset?
Mosaic augmentation, demonstrated in the COCO8 dataset, combines multiple images into a single image during training. This technique increases the variety of objects and scenes in each training batch, improving the model's ability to generalize across different object sizes, aspect ratios, and contexts. This results in a more robust object detection model. For more details, refer to the [training guide](#usage).
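Mosaic behavior is controlled through standard training arguments; a minimal sketch, with values shown only as an illustration:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# mosaic=1.0 applies mosaic augmentation to every training batch;
# close_mosaic=10 turns it off for the final 10 epochs so the model
# finishes training on unaltered images
results = model.train(data="coco8.yaml", epochs=100, imgsz=640, mosaic=1.0, close_mosaic=10)
```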
### How can I validate my YOLOv8 model trained on the COCO8 dataset?
### How can I validate my YOLO11 model trained on the COCO8 dataset?
Validation of your YOLOv8 model trained on the COCO8 dataset can be performed using the model's validation commands. You can invoke the validation mode via CLI or Python script to evaluate the model's performance using precise metrics. For detailed instructions, visit the [Validation](../../modes/val.md) page.
Validation of your YOLO11 model trained on the COCO8 dataset can be performed using the model's validation commands. You can invoke the validation mode via CLI or Python script to evaluate the model's performance using precise metrics. For detailed instructions, visit the [Validation](../../modes/val.md) page.
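A minimal validation sketch, assuming a hypothetical path to your trained weights:

```python
from ultralytics import YOLO

# Load the weights produced by your training run (hypothetical path)
model = YOLO("path/to/best.pt")

# Evaluate on the COCO8 validation split
metrics = model.val(data="coco8.yaml")
print(metrics.box.map50)  # mAP at IoU 0.50
print(metrics.box.map)  # mAP averaged over IoU 0.50-0.95
```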

@ -38,7 +38,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
## Usage
To train a YOLOv8n model on the Global Wheat Head Dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
To train a YOLO11n model on the Global Wheat Head Dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@ -48,7 +48,7 @@ To train a YOLOv8n model on the Global Wheat Head Dataset for 100 [epochs](https
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="GlobalWheat2020.yaml", epochs=100, imgsz=640)
@ -58,7 +58,7 @@ To train a YOLOv8n model on the Global Wheat Head Dataset for 100 [epochs](https
```bash
# Start training from a pretrained *.pt model
yolo detect train data=GlobalWheat2020.yaml model=yolov8n.pt epochs=100 imgsz=640
yolo detect train data=GlobalWheat2020.yaml model=yolo11n.pt epochs=100 imgsz=640
```
## Sample Data and Annotations
@ -96,9 +96,9 @@ We would like to acknowledge the researchers and institutions that contributed t
The Global Wheat Head Dataset is primarily used for developing and training deep learning models aimed at wheat head detection. This is crucial for applications in wheat phenotyping and crop management, allowing for more accurate estimations of wheat head density, size, and overall crop yield potential. Accurate detection methods help assess crop health and maturity, which is essential for efficient crop management.
### How do I train a YOLOv8n model on the Global Wheat Head Dataset?
### How do I train a YOLO11n model on the Global Wheat Head Dataset?
To train a YOLOv8n model on the Global Wheat Head Dataset, you can use the following code snippets. Make sure you have the `GlobalWheat2020.yaml` configuration file specifying dataset paths and classes:
To train a YOLO11n model on the Global Wheat Head Dataset, you can use the following code snippets. Make sure you have the `GlobalWheat2020.yaml` configuration file specifying dataset paths and classes:
!!! example "Train Example"
@ -108,7 +108,7 @@ To train a YOLOv8n model on the Global Wheat Head Dataset, you can use the follo
from ultralytics import YOLO
# Load a pre-trained model (recommended for training)
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
# Train the model
results = model.train(data="GlobalWheat2020.yaml", epochs=100, imgsz=640)
@ -118,7 +118,7 @@ To train a YOLOv8n model on the Global Wheat Head Dataset, you can use the follo
```bash
# Start training from a pretrained *.pt model
yolo detect train data=GlobalWheat2020.yaml model=yolov8n.pt epochs=100 imgsz=640
yolo detect train data=GlobalWheat2020.yaml model=yolo11n.pt epochs=100 imgsz=640
```
For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.

@ -56,7 +56,7 @@ Here's how you can use these formats to train your model:
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
@ -66,7 +66,7 @@ Here's how you can use these formats to train your model:
```bash
# Start training from a pretrained *.pt model
yolo detect train data=coco8.yaml model=yolov8n.pt epochs=100 imgsz=640
yolo detect train data=coco8.yaml model=yolo11n.pt epochs=100 imgsz=640
```
## Supported Datasets
@ -158,11 +158,11 @@ Ultralytics YOLO supports a wide range of datasets, including:
- [Objects365](objects365.md)
- [OpenImagesV7](open-images-v7.md)
Each dataset page provides detailed information on the structure and usage tailored for efficient YOLOv8 training. Explore the full list in the [Supported Datasets](#supported-datasets) section.
Each dataset page provides detailed information on the structure and usage tailored for efficient YOLO11 training. Explore the full list in the [Supported Datasets](#supported-datasets) section.
### How do I start training a YOLOv8 model using my dataset?
### How do I start training a YOLO11 model using my dataset?
To start training a YOLOv8 model, ensure your dataset is formatted correctly and the paths are defined in a YAML file. Use the following script to begin training:
To start training a YOLO11 model, ensure your dataset is formatted correctly and the paths are defined in a YAML file. Use the following script to begin training:
!!! example
@ -171,18 +171,18 @@ To start training a YOLOv8 model, ensure your dataset is formatted correctly and
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt") # Load a pretrained model
model = YOLO("yolo11n.pt") # Load a pretrained model
results = model.train(data="path/to/your_dataset.yaml", epochs=100, imgsz=640)
```
=== "CLI"
```bash
yolo detect train data=path/to/your_dataset.yaml model=yolov8n.pt epochs=100 imgsz=640
yolo detect train data=path/to/your_dataset.yaml model=yolo11n.pt epochs=100 imgsz=640
```
Refer to the [Usage](#usage) section for more details on utilizing different modes, including CLI commands.
### Where can I find practical examples of using Ultralytics YOLO for object detection?
Ultralytics provides numerous examples and practical guides for using YOLOv8 in diverse applications. For a comprehensive overview, visit the [Ultralytics Blog](https://www.ultralytics.com/blog) where you can find case studies, detailed tutorials, and community stories showcasing object detection, segmentation, and more with YOLOv8. For specific examples, check the [Usage](../../modes/predict.md) section in the documentation.
Ultralytics provides numerous examples and practical guides for using YOLO11 in diverse applications. For a comprehensive overview, visit the [Ultralytics Blog](https://www.ultralytics.com/blog) where you can find case studies, detailed tutorials, and community stories showcasing object detection, segmentation, and more with YOLO11. For specific examples, check the [Usage](../../modes/predict.md) section in the documentation.

@ -56,7 +56,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
## Usage
To train a YOLOv8n model on the LVIS dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
To train a YOLO11n model on the LVIS dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@ -66,7 +66,7 @@ To train a YOLOv8n model on the LVIS dataset for 100 [epochs](https://www.ultral
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="lvis.yaml", epochs=100, imgsz=640)
@ -76,7 +76,7 @@ To train a YOLOv8n model on the LVIS dataset for 100 [epochs](https://www.ultral
```bash
# Start training from a pretrained *.pt model
yolo detect train data=lvis.yaml model=yolov8n.pt epochs=100 imgsz=640
yolo detect train data=lvis.yaml model=yolo11n.pt epochs=100 imgsz=640
```
## Sample Images and Annotations
@ -114,9 +114,9 @@ We would like to acknowledge the LVIS Consortium for creating and maintaining th
The [LVIS dataset](https://www.lvisdataset.org/) is a large-scale dataset with fine-grained vocabulary-level annotations developed by Facebook AI Research (FAIR). It is primarily used for object detection and instance segmentation, featuring 1203 object categories and over 2 million instance annotations. Researchers and practitioners use it to train and benchmark models like Ultralytics YOLO for advanced computer vision tasks. The dataset's extensive size and diversity make it an essential resource for pushing the boundaries of model performance in detection and segmentation.
### How can I train a YOLOv8n model using the LVIS dataset?
### How can I train a YOLO11n model using the LVIS dataset?
To train a YOLOv8n model on the LVIS dataset for 100 epochs with an image size of 640, follow the example below. This process utilizes Ultralytics' framework, which offers comprehensive training features.
To train a YOLO11n model on the LVIS dataset for 100 epochs with an image size of 640, follow the example below. This process utilizes Ultralytics' framework, which offers comprehensive training features.
!!! example "Train Example"
@ -126,7 +126,7 @@ To train a YOLOv8n model on the LVIS dataset for 100 epochs with an image size o
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="lvis.yaml", epochs=100, imgsz=640)
@ -137,7 +137,7 @@ To train a YOLOv8n model on the LVIS dataset for 100 epochs with an image size o
```bash
# Start training from a pretrained *.pt model
yolo detect train data=lvis.yaml model=yolov8n.pt epochs=100 imgsz=640
yolo detect train data=lvis.yaml model=yolo11n.pt epochs=100 imgsz=640
```
For detailed training configurations, refer to the [Training](../../modes/train.md) documentation.
@ -148,7 +148,7 @@ The images in the LVIS dataset are the same as those in the [COCO dataset](./coc
### Why should I use Ultralytics YOLO for training on the LVIS dataset?
Ultralytics YOLO models, including the latest YOLOv8, are optimized for real-time object detection with state-of-the-art [accuracy](https://www.ultralytics.com/glossary/accuracy) and speed. They support a wide range of annotations, such as the fine-grained ones provided by the LVIS dataset, making them ideal for advanced computer vision applications. Moreover, Ultralytics offers seamless integration with various [training](../../modes/train.md), [validation](../../modes/val.md), and [prediction](../../modes/predict.md) modes, ensuring efficient model development and deployment.
Ultralytics YOLO models, including the latest YOLO11, are optimized for real-time object detection with state-of-the-art [accuracy](https://www.ultralytics.com/glossary/accuracy) and speed. They support a wide range of annotations, such as the fine-grained ones provided by the LVIS dataset, making them ideal for advanced computer vision applications. Moreover, Ultralytics offers seamless integration with various [training](../../modes/train.md), [validation](../../modes/val.md), and [prediction](../../modes/predict.md) modes, ensuring efficient model development and deployment.
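To make that mode integration concrete, here is a compact sketch; note that LVIS is very large, so the training call as written is a long-running job:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")  # COCO-pretrained starting point

model.train(data="lvis.yaml", epochs=100, imgsz=640)  # train mode
metrics = model.val()  # val mode, reusing the dataset from training
preds = model("https://ultralytics.com/images/bus.jpg")  # predict mode
```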
### Can I see some sample annotations from the LVIS dataset?

@ -1,7 +1,7 @@
---
comments: true
description: Explore the Objects365 Dataset with 2M images and 30M bounding boxes across 365 categories. Enhance your object detection models with diverse, high-quality data.
keywords: Objects365 dataset, object detection, machine learning, deep learning, computer vision, annotated images, bounding boxes, YOLOv8, high-resolution images, dataset configuration
keywords: Objects365 dataset, object detection, machine learning, deep learning, computer vision, annotated images, bounding boxes, YOLO11, high-resolution images, dataset configuration
---
# Objects365 Dataset
@ -38,7 +38,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
## Usage
To train a YOLOv8n model on the Objects365 dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
To train a YOLO11n model on the Objects365 dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@ -48,7 +48,7 @@ To train a YOLOv8n model on the Objects365 dataset for 100 [epochs](https://www.
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="Objects365.yaml", epochs=100, imgsz=640)
@ -58,7 +58,7 @@ To train a YOLOv8n model on the Objects365 dataset for 100 [epochs](https://www.
```bash
# Start training from a pretrained *.pt model
yolo detect train data=Objects365.yaml model=yolov8n.pt epochs=100 imgsz=640
yolo detect train data=Objects365.yaml model=yolo11n.pt epochs=100 imgsz=640
```
## Sample Data and Annotations
@ -97,9 +97,9 @@ We would like to acknowledge the team of researchers who created and maintain th
The [Objects365 dataset](https://www.objects365.org/) is designed for object detection tasks in [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) and computer vision. It provides a large-scale, high-quality dataset with 2 million annotated images and 30 million bounding boxes across 365 categories. Leveraging such a diverse dataset helps improve the performance and generalization of object detection models, making it invaluable for research and development in the field.
### How can I train a YOLOv8 model on the Objects365 dataset?
### How can I train a YOLO11 model on the Objects365 dataset?
To train a YOLOv8n model using the Objects365 dataset for 100 epochs with an image size of 640, follow these instructions:
To train a YOLO11n model using the Objects365 dataset for 100 epochs with an image size of 640, follow these instructions:
!!! example "Train Example"
@ -109,7 +109,7 @@ To train a YOLOv8n model using the Objects365 dataset for 100 epochs with an ima
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="Objects365.yaml", epochs=100, imgsz=640)
@ -119,7 +119,7 @@ To train a YOLOv8n model using the Objects365 dataset for 100 epochs with an ima
```bash
# Start training from a pretrained *.pt model
yolo detect train data=Objects365.yaml model=yolov8n.pt epochs=100 imgsz=640
yolo detect train data=Objects365.yaml model=yolo11n.pt epochs=100 imgsz=640
```
Refer to the [Training](../../modes/train.md) page for a comprehensive list of available arguments.

@ -1,7 +1,7 @@
---
comments: true
description: Explore the comprehensive Open Images V7 dataset by Google. Learn about its annotations, applications, and use YOLOv8 pretrained models for computer vision tasks.
keywords: Open Images V7, Google dataset, computer vision, YOLOv8 models, object detection, image segmentation, visual relationships, AI research, Ultralytics
description: Explore the comprehensive Open Images V7 dataset by Google. Learn about its annotations, applications, and use YOLO11 pretrained models for computer vision tasks.
keywords: Open Images V7, Google dataset, computer vision, YOLO11 models, object detection, image segmentation, visual relationships, AI research, Ultralytics
---
# Open Images V7 Dataset
@ -16,7 +16,7 @@ keywords: Open Images V7, Google dataset, computer vision, YOLOv8 models, object
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> [Object Detection](https://www.ultralytics.com/glossary/object-detection) using OpenImagesV7 Pretrained Model
<strong>Watch:</strong> <a href="https://www.ultralytics.com/glossary/object-detection">Object Detection</a> using OpenImagesV7 Pretrained Model
</p>
## Open Images V7 Pretrained Models
@ -69,7 +69,7 @@ Typically, datasets come with a YAML (Yet Another Markup Language) file that del
## Usage
To train a YOLOv8n model on the Open Images V7 dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
To train a YOLO11n model on the Open Images V7 dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! warning
@ -87,8 +87,8 @@ To train a YOLOv8n model on the Open Images V7 dataset for 100 [epochs](https://
```python
from ultralytics import YOLO
# Load a COCO-pretrained YOLOv8n model
model = YOLO("yolov8n.pt")
# Load a COCO-pretrained YOLO11n model
model = YOLO("yolo11n.pt")
# Train the model on the Open Images V7 dataset
results = model.train(data="open-images-v7.yaml", epochs=100, imgsz=640)
@ -97,8 +97,8 @@ To train a YOLOv8n model on the Open Images V7 dataset for 100 [epochs](https://
=== "CLI"
```bash
# Train a COCO-pretrained YOLOv8n model on the Open Images V7 dataset
yolo detect train data=open-images-v7.yaml model=yolov8n.pt epochs=100 imgsz=640
# Train a COCO-pretrained YOLO11n model on the Open Images V7 dataset
yolo detect train data=open-images-v7.yaml model=yolo11n.pt epochs=100 imgsz=640
```
## Sample Data and Annotations
@ -136,9 +136,9 @@ A heartfelt acknowledgment goes out to the Google AI team for creating and maint
Open Images V7 is an extensive and versatile dataset created by Google, designed to advance research in computer vision. It includes image-level labels, object bounding boxes, object segmentation masks, visual relationships, and localized narratives, making it ideal for various computer vision tasks such as object detection, segmentation, and relationship detection.
### How do I train a YOLOv8 model on the Open Images V7 dataset?
### How do I train a YOLO11 model on the Open Images V7 dataset?
To train a YOLOv8 model on the Open Images V7 dataset, you can use both Python and CLI commands. Here's an example of training the YOLOv8n model for 100 epochs with an image size of 640:
To train a YOLO11 model on the Open Images V7 dataset, you can use both Python and CLI commands. Here's an example of training the YOLO11n model for 100 epochs with an image size of 640:
!!! example "Train Example"
@ -147,8 +147,8 @@ To train a YOLOv8 model on the Open Images V7 dataset, you can use both Python a
```python
from ultralytics import YOLO
# Load a COCO-pretrained YOLOv8n model
model = YOLO("yolov8n.pt")
# Load a COCO-pretrained YOLO11n model
model = YOLO("yolo11n.pt")
# Train the model on the Open Images V7 dataset
results = model.train(data="open-images-v7.yaml", epochs=100, imgsz=640)
@ -158,8 +158,8 @@ To train a YOLOv8 model on the Open Images V7 dataset, you can use both Python a
=== "CLI"
```bash
# Train a COCO-pretrained YOLOv8n model on the Open Images V7 dataset
yolo detect train data=open-images-v7.yaml model=yolov8n.pt epochs=100 imgsz=640
# Train a COCO-pretrained YOLO11n model on the Open Images V7 dataset
yolo detect train data=open-images-v7.yaml model=yolo11n.pt epochs=100 imgsz=640
```
For more details on arguments and settings, refer to the [Training](../../modes/train.md) page.

@ -67,7 +67,7 @@ Dataset benchmarking evaluates machine learning model performance on specific da
if path.exists():
# Fix YAML file and run training
benchmark.fix_yaml(str(path))
os.system(f"yolo detect train data={path} model=yolov8s.pt epochs=1 batch=16")
os.system(f"yolo detect train data={path} model=yolo11s.pt epochs=1 batch=16")
# Run validation and evaluate
os.system(f"yolo detect val data={path} model=runs/detect/train/weights/best.pt > {val_log_file} 2>&1")
@ -165,7 +165,7 @@ To use the Roboflow 100 dataset for benchmarking, you can implement the RF100Ben
if path.exists():
# Fix YAML file and run training
benchmark.fix_yaml(str(path))
os.system(f"yolo detect train data={path} model=yolov8s.pt epochs=1 batch=16")
os.system(f"yolo detect train data={path} model=yolo11n.pt epochs=1 batch=16")
# Run validation and evaluate
os.system(f"yolo detect val data={path} model=runs/detect/train/weights/best.pt > {val_log_file} 2>&1")

@ -1,7 +1,7 @@
---
comments: true
description: Discover the Signature Detection Dataset for training models to identify and verify human signatures in various documents. Perfect for document verification and fraud prevention.
keywords: Signature Detection Dataset, document verification, fraud detection, computer vision, YOLO11, Ultralytics, annotated signatures, training dataset
---
# Signature Detection Dataset
@ -31,7 +31,7 @@ A YAML (Yet Another Markup Language) file defines the dataset configuration, inc
## Usage
To train a YOLO11n model on the signature detection dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, use the provided code samples. For a comprehensive list of available parameters, refer to the model's [Training](../../modes/train.md) page.
!!! example "Train Example"
@ -41,7 +41,7 @@ To train a YOLOv8n model on the signature detection dataset for 100 [epochs](htt
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="signature.yaml", epochs=100, imgsz=640)
@ -51,7 +51,7 @@ To train a YOLOv8n model on the signature detection dataset for 100 [epochs](htt
```bash
# Start training from a pretrained *.pt model
yolo detect train data=signature.yaml model=yolo11n.pt epochs=100 imgsz=640
```
!!! example "Inference Example"
@ -95,9 +95,9 @@ The dataset has been released available under the [AGPL-3.0 License](https://git
The Signature Detection Dataset is a collection of annotated images aimed at detecting human signatures within various document types. It can be applied in computer vision tasks such as [object detection](https://www.ultralytics.com/glossary/object-detection) and tracking, primarily for document verification, fraud detection, and archival research. This dataset helps train models to recognize signatures in different contexts, making it valuable for both research and practical applications.
### How do I train a YOLO11n model on the Signature Detection Dataset?
To train a YOLO11n model on the Signature Detection Dataset, follow these steps:
1. Download the `signature.yaml` dataset configuration file from [signature.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/signature.yaml).
2. Use the following Python script or CLI command to start training:
@ -110,7 +110,7 @@ To train a YOLOv8n model on the Signature Detection Dataset, follow these steps:
from ultralytics import YOLO
# Load a pretrained model
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
# Train the model
results = model.train(data="signature.yaml", epochs=100, imgsz=640)
@ -119,7 +119,7 @@ To train a YOLOv8n model on the Signature Detection Dataset, follow these steps:
=== "CLI"
```bash
yolo detect train data=signature.yaml model=yolo11n.pt epochs=100 imgsz=640
```
For more details, refer to the [Training](../../modes/train.md) page.

@ -51,7 +51,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
## Usage
To train a YOLO11n model on the SKU-110K dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@ -61,7 +61,7 @@ To train a YOLOv8n model on the SKU-110K dataset for 100 [epochs](https://www.ul
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="SKU-110K.yaml", epochs=100, imgsz=640)
@ -71,7 +71,7 @@ To train a YOLOv8n model on the SKU-110K dataset for 100 [epochs](https://www.ul
```bash
# Start training from a pretrained *.pt model
yolo detect train data=SKU-110K.yaml model=yolo11n.pt epochs=100 imgsz=640
```
## Sample Data and Annotations
@ -109,9 +109,9 @@ We would like to acknowledge Eran Goldman et al. for creating and maintaining th
The SKU-110k dataset consists of densely packed retail shelf images designed to aid research in object detection tasks. Developed by Eran Goldman et al., it includes over 110,000 unique SKU categories. Its importance lies in its ability to challenge state-of-the-art object detectors with diverse object appearances and close proximity, making it an invaluable resource for researchers and practitioners in computer vision. Learn more about the dataset's structure and applications in our [SKU-110k Dataset](#sku-110k-dataset) section.
### How do I train a YOLO11 model using the SKU-110k dataset?
Training a YOLO11 model on the SKU-110k dataset is straightforward. Here's an example to train a YOLO11n model for 100 epochs with an image size of 640:
!!! example "Train Example"
@ -121,7 +121,7 @@ Training a YOLOv8 model on the SKU-110k dataset is straightforward. Here's an ex
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="SKU-110K.yaml", epochs=100, imgsz=640)
@ -132,7 +132,7 @@ Training a YOLOv8 model on the SKU-110k dataset is straightforward. Here's an ex
```bash
# Start training from a pretrained *.pt model
yolo detect train data=SKU-110K.yaml model=yolo11n.pt epochs=100 imgsz=640
```
For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.

@ -47,7 +47,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
## Usage
To train a YOLO11n model on the VisDrone dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@ -57,7 +57,7 @@ To train a YOLOv8n model on the VisDrone dataset for 100 [epochs](https://www.ul
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="VisDrone.yaml", epochs=100, imgsz=640)
@ -67,7 +67,7 @@ To train a YOLOv8n model on the VisDrone dataset for 100 [epochs](https://www.ul
```bash
# Start training from a pretrained *.pt model
yolo detect train data=VisDrone.yaml model=yolo11n.pt epochs=100 imgsz=640
```
## Sample Data and Annotations
@ -113,9 +113,9 @@ The [VisDrone Dataset](https://github.com/VisDrone/VisDrone-Dataset) is a large-
- **Diversity**: Collected across 14 cities, in urban and rural settings, under different weather and lighting conditions.
- **Tasks**: Split into five main tasks—object detection in images and videos, single-object and multi-object tracking, and crowd counting.
### How can I use the VisDrone Dataset to train a YOLO11 model with Ultralytics?
To train a YOLO11 model on the VisDrone dataset for 100 epochs with an image size of 640, you can follow these steps:
!!! example "Train Example"
@ -125,7 +125,7 @@ To train a YOLOv8 model on the VisDrone dataset for 100 epochs with an image siz
from ultralytics import YOLO
# Load a pretrained model
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
# Train the model
results = model.train(data="VisDrone.yaml", epochs=100, imgsz=640)
@ -135,7 +135,7 @@ To train a YOLOv8 model on the VisDrone dataset for 100 epochs with an image siz
```bash
# Start training from a pretrained *.pt model
yolo detect train data=VisDrone.yaml model=yolo11n.pt epochs=100 imgsz=640
```
For additional configuration options, please refer to the model [Training](../../modes/train.md) page.

@ -39,7 +39,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
## Usage
To train a YOLO11n model on the VOC dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@ -49,7 +49,7 @@ To train a YOLOv8n model on the VOC dataset for 100 [epochs](https://www.ultraly
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="VOC.yaml", epochs=100, imgsz=640)
@ -59,7 +59,7 @@ To train a YOLOv8n model on the VOC dataset for 100 [epochs](https://www.ultraly
```bash
# Start training from a pretrained *.pt model
yolo detect train data=VOC.yaml model=yolo11n.pt epochs=100 imgsz=640
```
## Sample Images and Annotations
@ -99,9 +99,9 @@ We would like to acknowledge the PASCAL VOC Consortium for creating and maintain
The [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/) (Visual Object Classes) dataset is a renowned benchmark for [object detection](https://www.ultralytics.com/glossary/object-detection), segmentation, and classification in computer vision. It includes comprehensive annotations like bounding boxes, class labels, and segmentation masks across 20 different object categories. Researchers use it widely to evaluate the performance of models like Faster R-CNN, YOLO, and Mask R-CNN due to its standardized evaluation metrics such as mean Average Precision (mAP).
### How do I train a YOLO11 model using the VOC dataset?
To train a YOLO11 model with the VOC dataset, you need the dataset configuration in a YAML file. Here's an example to start training a YOLO11n model for 100 epochs with an image size of 640:
!!! example "Train Example"
@ -111,7 +111,7 @@ To train a YOLOv8 model with the VOC dataset, you need the dataset configuration
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="VOC.yaml", epochs=100, imgsz=640)
@ -121,7 +121,7 @@ To train a YOLOv8 model with the VOC dataset, you need the dataset configuration
```bash
# Start training from a pretrained *.pt model
yolo detect train data=VOC.yaml model=yolo11n.pt epochs=100 imgsz=640
```
### What are the primary challenges included in the VOC dataset?

@ -52,7 +52,7 @@ To train a model on the xView dataset for 100 [epochs](https://www.ultralytics.c
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="xView.yaml", epochs=100, imgsz=640)
@ -62,7 +62,7 @@ To train a model on the xView dataset for 100 [epochs](https://www.ultralytics.c
```bash
# Start training from a pretrained *.pt model
yolo detect train data=xView.yaml model=yolo11n.pt epochs=100 imgsz=640
```
## Sample Data and Annotations
@ -114,7 +114,7 @@ To train a model on the xView dataset using Ultralytics YOLO, follow these steps
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="xView.yaml", epochs=100, imgsz=640)
@ -125,7 +125,7 @@ To train a model on the xView dataset using Ultralytics YOLO, follow these steps
```bash
# Start training from a pretrained *.pt model
yolo detect train data=xView.yaml model=yolo11n.pt epochs=100 imgsz=640
```
For detailed arguments and settings, refer to the model [Training](../../modes/train.md) page.

@ -36,7 +36,7 @@ pip install ultralytics[explorer]
from ultralytics import Explorer
# Create an Explorer object
explorer = Explorer(data="coco128.yaml", model="yolov8n.pt")
explorer = Explorer(data="coco128.yaml", model="yolo11n.pt")
# Create embeddings for your dataset
explorer.create_embeddings_table()
@ -75,7 +75,7 @@ You get a pandas dataframe with the `limit` number of most similar data points t
from ultralytics import Explorer
# create an Explorer object
exp = Explorer(data="coco128.yaml", model="yolov8n.pt")
exp = Explorer(data="coco128.yaml", model="yolo11n.pt")
exp.create_embeddings_table()
similar = exp.get_similar(img="https://ultralytics.com/images/bus.jpg", limit=10)
@ -95,7 +95,7 @@ You get a pandas dataframe with the `limit` number of most similar data points t
from ultralytics import Explorer
# create an Explorer object
exp = Explorer(data="coco128.yaml", model="yolov8n.pt")
exp = Explorer(data="coco128.yaml", model="yolo11n.pt")
exp.create_embeddings_table()
similar = exp.get_similar(idx=1, limit=10)
@ -118,7 +118,7 @@ You can also plot the similar images using the `plot_similar` method. This metho
from ultralytics import Explorer
# create an Explorer object
exp = Explorer(data="coco128.yaml", model="yolov8n.pt")
exp = Explorer(data="coco128.yaml", model="yolo11n.pt")
exp.create_embeddings_table()
plt = exp.plot_similar(img="https://ultralytics.com/images/bus.jpg", limit=10)
@ -131,7 +131,7 @@ You can also plot the similar images using the `plot_similar` method. This metho
from ultralytics import Explorer
# create an Explorer object
exp = Explorer(data="coco128.yaml", model="yolov8n.pt")
exp = Explorer(data="coco128.yaml", model="yolo11n.pt")
exp.create_embeddings_table()
plt = exp.plot_similar(idx=1, limit=10)
@ -150,7 +150,7 @@ Note: This works using LLMs under the hood so the results are probabilistic and
from ultralytics.data.explorer import plot_query_result
# create an Explorer object
exp = Explorer(data="coco128.yaml", model="yolov8n.pt")
exp = Explorer(data="coco128.yaml", model="yolo11n.pt")
exp.create_embeddings_table()
df = exp.ask_ai("show me 100 images with exactly one person and 2 dogs. There can be other objects too")
@ -171,7 +171,7 @@ You can run SQL queries on your dataset using the `sql_query` method. This metho
from ultralytics import Explorer
# create an Explorer object
exp = Explorer(data="coco128.yaml", model="yolov8n.pt")
exp = Explorer(data="coco128.yaml", model="yolo11n.pt")
exp.create_embeddings_table()
df = exp.sql_query("WHERE labels LIKE '%person%' AND labels LIKE '%dog%'")
@ -188,7 +188,7 @@ You can also plot the results of a SQL query using the `plot_sql_query` method.
from ultralytics import Explorer
# create an Explorer object
exp = Explorer(data="coco128.yaml", model="yolov8n.pt")
exp = Explorer(data="coco128.yaml", model="yolo11n.pt")
exp.create_embeddings_table()
# plot the SQL Query
@ -235,7 +235,7 @@ Here are some examples of what you can do with the table:
```python
from ultralytics import Explorer
exp = Explorer(model="yolov8n.pt")
exp = Explorer(model="yolo11n.pt")
exp.create_embeddings_table()
table = exp.table
@ -359,7 +359,7 @@ You can use the Ultralytics Explorer API to perform similarity searches by creat
from ultralytics import Explorer
# Create an Explorer object
explorer = Explorer(data="coco128.yaml", model="yolov8n.pt")
explorer = Explorer(data="coco128.yaml", model="yolo11n.pt")
explorer.create_embeddings_table()
# Search for similar images to a given image
@ -381,7 +381,7 @@ The Ask AI feature allows users to filter datasets using natural language querie
from ultralytics import Explorer
# Create an Explorer object
explorer = Explorer(data="coco128.yaml", model="yolov8n.pt")
explorer = Explorer(data="coco128.yaml", model="yolo11n.pt")
explorer.create_embeddings_table()
# Query with natural language
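# Hedged completion (the query call itself is elided in this hunk); based on
# the ask_ai usage shown earlier, it presumably resembles:
results = explorer.ask_ai("show me 100 images with exactly one person")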

@ -88,7 +88,7 @@
},
"outputs": [],
"source": [
"exp = Explorer(\"VOC.yaml\", model=\"yolov8n.pt\")\n",
"exp = Explorer(\"VOC.yaml\", model=\"yolo11n.pt\")\n",
"exp.create_embeddings_table()"
]
},

@ -69,6 +69,7 @@ Pose estimation is a technique used to determine the pose of the object relative
- [COCO](pose/coco.md): A large-scale dataset with human pose annotations designed for pose estimation tasks.
- [COCO8-pose](pose/coco8-pose.md): A smaller dataset for pose estimation tasks, containing a subset of 8 COCO images with human pose annotations.
- [Tiger-pose](pose/tiger-pose.md): A compact dataset consisting of 263 images focused on tigers, annotated with 12 keypoints per tiger for pose estimation tasks.
- [Hand-Keypoints](pose/hand-keypoints.md): A concise dataset featuring over 26,000 images centered on human hands, annotated with 21 keypoints per hand, designed for pose estimation tasks.
## [Classification](classify/index.md)

@ -108,8 +108,8 @@ To train a model on the DOTA v1 dataset, you can utilize the following code snip
```python
from ultralytics import YOLO
# Create a new YOLO11n-OBB model from scratch
model = YOLO("yolo11n-obb.yaml")
# Train the model on the DOTAv1 dataset
results = model.train(data="DOTAv1.yaml", epochs=100, imgsz=1024)
@ -118,8 +118,8 @@ To train a model on the DOTA v1 dataset, you can utilize the following code snip
=== "CLI"
```bash
# Train a new YOLO11n-OBB model on the DOTAv1 dataset
yolo obb train data=DOTAv1.yaml model=yolo11n-obb.pt epochs=100 imgsz=1024
```
## Sample Data and Annotations
@ -176,8 +176,8 @@ To train a model on the DOTA dataset, you can use the following example with Ult
```python
from ultralytics import YOLO
# Create a new YOLO11n-OBB model from scratch
model = YOLO("yolo11n-obb.yaml")
# Train the model on the DOTAv1 dataset
results = model.train(data="DOTAv1.yaml", epochs=100, imgsz=1024)
@ -186,8 +186,8 @@ To train a model on the DOTA dataset, you can use the following example with Ult
=== "CLI"
```bash
# Train a new YOLO11n-OBB model on the DOTAv1 dataset
yolo obb train data=DOTAv1.yaml model=yolo11n-obb.pt epochs=100 imgsz=1024
```
For more details on how to split and preprocess the DOTA images, refer to the [split DOTA images section](#split-dota-images).

@ -1,7 +1,7 @@
---
comments: true
description: Explore the DOTA8 dataset - a small, versatile oriented object detection dataset ideal for testing and debugging object detection models using Ultralytics YOLO11.
keywords: DOTA8 dataset, Ultralytics, YOLO11, object detection, debugging, training models, oriented object detection, dataset YAML
---
# DOTA8 Dataset
@ -10,7 +10,7 @@ keywords: DOTA8 dataset, Ultralytics, YOLOv8, object detection, debugging, train
[Ultralytics](https://www.ultralytics.com/) DOTA8 is a small, but versatile oriented [object detection](https://www.ultralytics.com/glossary/object-detection) dataset composed of the first 8 images of the split DOTAv1 set, 4 for training and 4 for validation. This dataset is ideal for testing and debugging object detection models, or for experimenting with new detection approaches. With 8 images, it is small enough to be easily manageable, yet diverse enough to test training pipelines for errors and act as a sanity check before training larger datasets.
This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.com/) and [YOLO11](https://github.com/ultralytics/ultralytics).
## Dataset YAML
@ -24,7 +24,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
## Usage
To train a YOLO11n-obb model on the DOTA8 dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@ -34,7 +34,7 @@ To train a YOLOv8n-obb model on the DOTA8 dataset for 100 [epochs](https://www.u
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-obb.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-obb.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="dota8.yaml", epochs=100, imgsz=640)
@ -44,7 +44,7 @@ To train a YOLOv8n-obb model on the DOTA8 dataset for 100 [epochs](https://www.u
```bash
# Start training from a pretrained *.pt model
yolo obb train data=dota8.yaml model=yolo11n-obb.pt epochs=100 imgsz=640
```
## Sample Images and Annotations
@ -84,11 +84,11 @@ A special note of gratitude to the team behind the DOTA datasets for their comme
### What is the DOTA8 dataset and how can it be used?
The DOTA8 dataset is a small, versatile oriented object detection dataset made up of the first 8 images from the DOTAv1 split set, with 4 images designated for training and 4 for validation. It's ideal for testing and debugging object detection models like Ultralytics YOLO11. Due to its manageable size and diversity, it helps in identifying pipeline errors and running sanity checks before deploying larger datasets. Learn more about object detection with [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics).
### How do I train a YOLO11 model using the DOTA8 dataset?
To train a YOLO11n-obb model on the DOTA8 dataset for 100 epochs with an image size of 640, you can use the following code snippets. For comprehensive argument options, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@ -98,7 +98,7 @@ To train a YOLOv8n-obb model on the DOTA8 dataset for 100 epochs with an image s
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-obb.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-obb.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="dota8.yaml", epochs=100, imgsz=640)
@ -108,7 +108,7 @@ To train a YOLOv8n-obb model on the DOTA8 dataset for 100 epochs with an image s
```bash
# Start training from a pretrained *.pt model
yolo obb train data=dota8.yaml model=yolo11n-obb.pt epochs=100 imgsz=640
```
### What are the key features of the DOTA dataset and where can I access the YAML file?
@ -119,6 +119,6 @@ The DOTA dataset is known for its large-scale benchmark and the challenges it pr
Mosaicing combines multiple images into one during training, increasing the variety of objects and contexts within each batch. This improves a model's ability to generalize to different object sizes, aspect ratios, and scenes. This technique can be visually demonstrated through a training batch composed of mosaiced DOTA8 dataset images, helping in robust model development. Explore more about mosaicing and training techniques on our [Training](../../modes/train.md) page.
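As a short, hedged illustration of this point (the argument values below are assumptions chosen for demonstration, not settings taken from this page), mosaic augmentation is controlled through the `mosaic` and `close_mosaic` training arguments:

```python
from ultralytics import YOLO

# Load a pretrained OBB model and train on DOTA8 with mosaic augmentation
model = YOLO("yolo11n-obb.pt")

# mosaic=1.0 applies mosaic to every training batch; close_mosaic=10 turns it
# off for the final 10 epochs so the model finishes on un-mosaiced images
results = model.train(data="dota8.yaml", epochs=100, imgsz=640, mosaic=1.0, close_mosaic=10)
```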
### Why should I use Ultralytics YOLO11 for object detection tasks?
Ultralytics YOLO11 provides state-of-the-art real-time object detection capabilities, including features like oriented bounding boxes (OBB), [instance segmentation](https://www.ultralytics.com/glossary/instance-segmentation), and a highly versatile training pipeline. It's suitable for various applications and offers pretrained models for efficient fine-tuning. Explore further about the advantages and usage in the [Ultralytics YOLO11 documentation](https://github.com/ultralytics/ultralytics).

@ -39,8 +39,8 @@ To train a model using these OBB formats:
```python
from ultralytics import YOLO
# Create a new YOLO11n-OBB model from scratch
model = YOLO("yolo11n-obb.yaml")
# Train the model on the DOTAv1 dataset
results = model.train(data="DOTAv1.yaml", epochs=100, imgsz=1024)
@ -49,8 +49,8 @@ To train a model using these OBB formats:
=== "CLI"
```bash
# Train a new YOLO11n-OBB model on the DOTAv1 dataset
yolo obb train data=DOTAv1.yaml model=yolo11n-obb.pt epochs=100 imgsz=1024
```
## Supported Datasets
@ -92,7 +92,7 @@ It's imperative to validate the compatibility of the dataset with your model and
Oriented Bounding Boxes (OBB) are a type of bounding box annotation where the box can be rotated to align more closely with the object being detected, rather than just being axis-aligned. This is particularly useful in aerial or satellite imagery where objects might not be aligned with the image axes. In Ultralytics YOLO models, OBBs are represented by their four corner points in the YOLO OBB format. This allows for more accurate object detection since the bounding boxes can rotate to fit the objects better.
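As a minimal sketch of that format (the sample label line below is illustrative), a YOLO OBB annotation is a class index followed by four normalized corner coordinates:

```python
# One YOLO OBB label line: class_index x1 y1 x2 y2 x3 y3 x4 y4 (normalized)
line = "0 0.780811 0.743961 0.782371 0.74686 0.777691 0.752174 0.776131 0.749758"

values = line.split()
class_index = int(values[0])
corners = [(float(x), float(y)) for x, y in zip(values[1::2], values[2::2])]

print(class_index)  # 0
print(corners)  # four (x, y) corner tuples of the rotated box
```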
### How do I convert my existing DOTA dataset labels to YOLO OBB format for use with Ultralytics YOLO11?
You can convert DOTA dataset labels to YOLO OBB format using the `convert_dota_to_yolo_obb` function from Ultralytics. This conversion ensures compatibility with the Ultralytics YOLO models, enabling you to leverage the OBB capabilities for enhanced object detection. Here's a quick example:
@ -104,9 +104,9 @@ convert_dota_to_yolo_obb("path/to/DOTA")
This script will reformat your DOTA annotations into a YOLO-compatible format.
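The surrounding hunk only shows the final call; a self-contained sketch, assuming the converter is exposed at `ultralytics.data.converter` as in recent releases:

```python
from ultralytics.data.converter import convert_dota_to_yolo_obb

# Rewrite DOTA-style annotations under the given dataset root as YOLO OBB
# label files; "path/to/DOTA" is a placeholder for your local dataset path
convert_dota_to_yolo_obb("path/to/DOTA")
```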
### How do I train a YOLO11 model with oriented bounding boxes (OBB) on my dataset?
Training a YOLO11 model with OBBs involves ensuring your dataset is in the YOLO OBB format and then using the Ultralytics API to train the model. Here's an example in both Python and CLI:
!!! example
@ -115,8 +115,8 @@ Training a YOLOv8 model with OBBs involves ensuring your dataset is in the YOLO
```python
from ultralytics import YOLO
# Create a new YOLO11n-OBB model from scratch
model = YOLO("yolo11n-obb.yaml")
# Train the model on the custom dataset
results = model.train(data="your_dataset.yaml", epochs=100, imgsz=640)
@ -125,8 +125,8 @@ Training a YOLOv8 model with OBBs involves ensuring your dataset is in the YOLO
=== "CLI"
```bash
# Train a new YOLO11n-OBB model on the custom dataset
yolo obb train data=your_dataset.yaml model=yolo11n-obb.yaml epochs=100 imgsz=640
```
This ensures your model leverages the detailed OBB annotations for improved detection [accuracy](https://www.ultralytics.com/glossary/accuracy).
@ -142,6 +142,6 @@ Currently, Ultralytics supports the following datasets for OBB training:
These datasets are tailored for scenarios where OBBs offer a significant advantage, such as aerial and satellite image analysis.
### Can I use my own dataset with oriented bounding boxes for YOLO11 training, and if so, how?
Yes, you can use your own dataset with oriented bounding boxes for YOLO11 training. Ensure your dataset annotations are converted to the YOLO OBB format, which involves defining bounding boxes by their four corner points. You can then create a YAML configuration file specifying the dataset paths, classes, and other necessary details. For more information on creating and configuring your datasets, refer to the [Supported Datasets](#supported-datasets) section.
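A minimal sketch of such a configuration file (the paths and class names here are hypothetical placeholders):

```yaml
# my-obb-dataset.yaml - hypothetical custom OBB dataset configuration
path: ../datasets/my-obb-dataset # dataset root directory
train: images/train # training images (relative to path)
val: images/val # validation images (relative to path)

# Classes
names:
  0: plane
  1: ship
```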

@ -12,14 +12,7 @@ The [COCO-Pose](https://cocodataset.org/#keypoints-2017) dataset is a specialize
## COCO-Pose Pretrained Models
{% include "macros/yolo-pose-perf.md" %}
## Key Features
@ -51,7 +44,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
## Usage
To train a YOLO11n-pose model on the COCO-Pose dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@ -61,7 +54,7 @@ To train a YOLOv8n-pose model on the COCO-Pose dataset for 100 [epochs](https://
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-pose.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-pose.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco-pose.yaml", epochs=100, imgsz=640)
@ -71,7 +64,7 @@ To train a YOLOv8n-pose model on the COCO-Pose dataset for 100 [epochs](https://
```bash
# Start training from a pretrained *.pt model
yolo pose train data=coco-pose.yaml model=yolo11n-pose.pt epochs=100 imgsz=640
```
## Sample Images and Annotations
@ -109,11 +102,11 @@ We would like to acknowledge the COCO Consortium for creating and maintaining th
### What is the COCO-Pose dataset and how is it used with Ultralytics YOLO for pose estimation?
The [COCO-Pose](https://cocodataset.org/#keypoints-2017) dataset is a specialized version of the COCO (Common Objects in Context) dataset designed for pose estimation tasks. It builds upon the COCO Keypoints 2017 images and annotations, allowing for the training of models like Ultralytics YOLO for detailed pose estimation. For instance, you can use the COCO-Pose dataset to train a YOLO11n-pose model by loading a pretrained model and training it with a YAML configuration. For training examples, refer to the [Training](../../modes/train.md) documentation.
### How can I train a YOLO11 model on the COCO-Pose dataset?
Training a YOLO11 model on the COCO-Pose dataset can be accomplished using either Python or CLI commands. For example, to train a YOLO11n-pose model for 100 epochs with an image size of 640, you can follow the steps below:
!!! example "Train Example"
@ -123,7 +116,7 @@ Training a YOLOv8 model on the COCO-Pose dataset can be accomplished using eithe
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-pose.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-pose.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco-pose.yaml", epochs=100, imgsz=640)
@ -133,14 +126,14 @@ Training a YOLOv8 model on the COCO-Pose dataset can be accomplished using eithe
```bash
# Start training from a pretrained *.pt model
yolo pose train data=coco-pose.yaml model=yolo11n-pose.pt epochs=100 imgsz=640
```
For more details on the training process and available arguments, check the [training page](../../modes/train.md).
### What are the different metrics provided by the COCO-Pose dataset for evaluating model performance?
The COCO-Pose dataset provides several standardized evaluation metrics for pose estimation tasks, similar to the original COCO dataset. Key metrics include the Object Keypoint Similarity (OKS), which evaluates the [accuracy](https://www.ultralytics.com/glossary/accuracy) of predicted keypoints against ground truth annotations. These metrics allow for thorough performance comparisons between different models. For instance, the COCO-Pose pretrained models such as YOLO11n-pose, YOLO11s-pose, and others have specific performance metrics listed in the documentation, like mAP<sup>pose</sup>50-95 and mAP<sup>pose</sup>50.
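As a hedged sketch of how these metrics are read back in practice (the `metrics.pose` attribute names assume the current Ultralytics validation API):

```python
from ultralytics import YOLO

# Validate a pretrained pose model on COCO-Pose and inspect keypoint mAP
model = YOLO("yolo11n-pose.pt")
metrics = model.val(data="coco-pose.yaml")

print(metrics.pose.map)  # mAP(pose) averaged over thresholds 50-95
print(metrics.pose.map50)  # mAP(pose) at the 50 threshold
```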
### How is the dataset structured and split for the COCO-Pose dataset?
@ -154,6 +147,6 @@ These subsets help organize the training, validation, and testing phases effecti
### What are the key features and applications of the COCO-Pose dataset?
The COCO-Pose dataset extends the COCO Keypoints 2017 annotations to include 17 keypoints for human figures, enabling detailed pose estimation. Standardized evaluation metrics (e.g., OKS) facilitate comparisons across different models. Applications of the COCO-Pose dataset span various domains, such as sports analytics, healthcare, and human-computer interaction, wherever detailed pose estimation of human figures is required. For practical use, leveraging pretrained models like those provided in the documentation (e.g., YOLO11n-pose) can significantly streamline the process ([Key Features](#key-features)).
If you use the COCO-Pose dataset in your research or development work, please cite the paper with the following [BibTeX entry](#citations-and-acknowledgments).

@ -1,7 +1,7 @@
---
comments: true
description: Explore the compact, versatile COCO8-Pose dataset for testing and debugging object detection models. Ideal for quick experiments with YOLO11.
keywords: COCO8-Pose, Ultralytics, pose detection dataset, object detection, YOLO11, machine learning, computer vision, training data
---
# COCO8-Pose Dataset
@ -10,7 +10,7 @@ keywords: COCO8-Pose, Ultralytics, pose detection dataset, object detection, YOL
[Ultralytics](https://www.ultralytics.com/) COCO8-Pose is a small, but versatile pose detection dataset composed of the first 8 images of the COCO train 2017 set, 4 for training and 4 for validation. This dataset is ideal for testing and debugging [object detection](https://www.ultralytics.com/glossary/object-detection) models, or for experimenting with new detection approaches. With 8 images, it is small enough to be easily manageable, yet diverse enough to test training pipelines for errors and act as a sanity check before training larger datasets.
This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.com/) and [YOLO11](https://github.com/ultralytics/ultralytics).
## Dataset YAML
@ -24,7 +24,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
## Usage
To train a YOLO11n-pose model on the COCO8-Pose dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@ -34,7 +34,7 @@ To train a YOLOv8n-pose model on the COCO8-Pose dataset for 100 [epochs](https:/
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-pose.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-pose.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco8-pose.yaml", epochs=100, imgsz=640)
@ -44,7 +44,7 @@ To train a YOLOv8n-pose model on the COCO8-Pose dataset for 100 [epochs](https:/
```bash
# Start training from a pretrained *.pt model
yolo pose train data=coco8-pose.yaml model=yolo11n-pose.pt epochs=100 imgsz=640
```
## Sample Images and Annotations
@ -80,13 +80,13 @@ We would like to acknowledge the COCO Consortium for creating and maintaining th
## FAQ
### What is the COCO8-Pose dataset, and how is it used with Ultralytics YOLO11?
The COCO8-Pose dataset is a small, versatile pose detection dataset that includes the first 8 images from the COCO train 2017 set, with 4 images for training and 4 for validation. It's designed for testing and debugging object detection models and experimenting with new detection approaches. This dataset is ideal for quick experiments with [Ultralytics YOLO11](https://docs.ultralytics.com/models/yolo11/). For more details on dataset configuration, check out the dataset YAML file [here](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8-pose.yaml).
### How do I train a YOLO11 model using the COCO8-Pose dataset in Ultralytics?
To train a YOLO11n-pose model on the COCO8-Pose dataset for 100 epochs with an image size of 640, follow these examples:
!!! example "Train Example"
@ -96,7 +96,7 @@ To train a YOLOv8n-pose model on the COCO8-Pose dataset for 100 epochs with an i
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-pose.pt")
model = YOLO("yolo11n-pose.pt")
# Train the model
results = model.train(data="coco8-pose.yaml", epochs=100, imgsz=640)
@ -105,7 +105,7 @@ To train a YOLOv8n-pose model on the COCO8-Pose dataset for 100 epochs with an i
=== "CLI"
```bash
yolo pose train data=coco8-pose.yaml model=yolo11n-pose.pt epochs=100 imgsz=640
```
For a comprehensive list of training arguments, refer to the model [Training](../../modes/train.md) page.
@ -120,12 +120,12 @@ The COCO8-Pose dataset offers several benefits:
For more about its features and usage, see the [Dataset Introduction](#introduction) section.
### How does mosaicing benefit the YOLO11 training process using the COCO8-Pose dataset?
Mosaicing, demonstrated in the sample images of the COCO8-Pose dataset, combines multiple images into one, increasing the variety of objects and scenes within each training batch. This technique helps improve the model's ability to generalize across various object sizes, aspect ratios, and contexts, ultimately enhancing model performance. See the [Sample Images and Annotations](#sample-images-and-annotations) section for example images.
### Where can I find the COCO8-Pose dataset YAML file and how do I use it?
The COCO8-Pose dataset YAML file can be found [here](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8-pose.yaml). This file defines the dataset configuration, including paths, classes, and other relevant information. Use this file with the YOLO11 training scripts as mentioned in the [Train Example](#how-do-i-train-a-yolo11-model-using-the-coco8-pose-dataset-in-ultralytics) section.
For more FAQs and detailed documentation, visit the [Ultralytics Documentation](https://docs.ultralytics.com/).

@ -0,0 +1,175 @@
---
comments: true
description: Explore the hand keypoints estimation dataset for advanced pose estimation. Learn about datasets, pretrained models, metrics, and applications for training with YOLO.
keywords: Hand KeyPoints, pose estimation, dataset, keypoints, MediaPipe, YOLO, deep learning, computer vision
---
# Hand Keypoints Dataset
## Introduction
The hand-keypoints dataset contains 26,768 images of hands annotated with keypoints, making it suitable for training models like Ultralytics YOLO for pose estimation tasks. The annotations were generated using the Google MediaPipe library, ensuring high accuracy and consistency, and the dataset is compatible with [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics) formats.
## Hand Landmarks
![Hand Landmarks](https://github.com/ultralytics/docs/releases/download/0/hand_landmarks.jpg)
## KeyPoints
The dataset includes keypoints for hand detection. The keypoints are annotated as follows:
1. Wrist
2. Thumb (4 points)
3. Index finger (4 points)
4. Middle finger (4 points)
5. Ring finger (4 points)
6. Little finger (4 points)
Each hand has a total of 21 keypoints.
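For reference, a small sketch of an index-to-name mapping for the 21 keypoints listed above; the joint names follow the MediaPipe hand-landmark convention and are an assumption here rather than labels shipped with the dataset:

```python
# Hypothetical index-to-name map: wrist first, then 4 joints per finger
KEYPOINT_NAMES = ["wrist", "thumb_cmc", "thumb_mcp", "thumb_ip", "thumb_tip"]
for finger in ("index", "middle", "ring", "little"):
    KEYPOINT_NAMES += [f"{finger}_{joint}" for joint in ("mcp", "pip", "dip", "tip")]

assert len(KEYPOINT_NAMES) == 21
print(KEYPOINT_NAMES[8])  # "index_tip"
```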
## Key Features
- **Large Dataset**: 26,768 images with hand keypoint annotations.
- **YOLO11 Compatibility**: Ready for use with YOLO11 models.
- **21 Keypoints**: Detailed hand pose representation.
## Dataset Structure
The hand keypoint dataset is split into two subsets:
1. **Train**: This subset contains 18,776 images from the hand keypoints dataset, annotated for training pose estimation models.
2. **Val**: This subset contains 7,992 images that can be used for validation purposes during model training.
## Applications
Hand keypoints can be used for gesture recognition, AR/VR controls, robotic manipulation, and hand movement analysis in healthcare. They can also be applied in animation for motion capture and in biometric authentication systems for security.
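To illustrate how such applications consume the keypoints, a minimal inference sketch; the checkpoint path and input image are hypothetical, assuming a model already fine-tuned on `hand-keypoints.yaml`:

```python
from ultralytics import YOLO

# Load a checkpoint fine-tuned on hand-keypoints.yaml (hypothetical path)
model = YOLO("runs/pose/train/weights/best.pt")

# Run inference on a hypothetical image and read the 21 keypoints per hand
results = model("hand.jpg")
for result in results:
    print(result.keypoints.xy)  # per-hand keypoint (x, y) pixel coordinates
```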
## Dataset YAML
A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the Hand Keypoints dataset, the `hand-keypoints.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/hand-keypoints.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/hand-keypoints.yaml).
!!! example "ultralytics/cfg/datasets/hand-keypoints.yaml"
```yaml
--8<-- "ultralytics/cfg/datasets/hand-keypoints.yaml"
```
## Usage
To train a YOLO11n-pose model on the Hand Keypoints dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolo11n-pose.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="hand-keypoints.yaml", epochs=100, imgsz=640)
```
=== "CLI"
```bash
# Start training from a pretrained *.pt model
yolo pose train data=hand-keypoints.yaml model=yolo11n-pose.pt epochs=100 imgsz=640
```
## Sample Images and Annotations
The Hand Keypoints dataset contains a diverse set of images with human hands annotated with keypoints. Here are some examples of images from the dataset, along with their corresponding annotations:
![Dataset sample image](https://github.com/ultralytics/docs/releases/download/0/human-hand-pose.jpg)
- **Mosaiced Image**: This image demonstrates a training batch composed of mosaiced dataset images. Mosaicing is a technique used during training that combines multiple images into a single image to increase the variety of objects and scenes within each training batch. This helps improve the model's ability to generalize to different object sizes, aspect ratios, and contexts.
The example showcases the variety and complexity of the images in the Hand Keypoints dataset and the benefits of using mosaicing during the training process.
## Citations and Acknowledgments
If you use the hand-keypoints dataset in your research or development work, please acknowledge the following sources:
!!! quote ""
=== "Credits"
We would like to thank the following sources for providing the images used in this dataset:
- [11k Hands](https://sites.google.com/view/11khands)
- [2000 Hand Gestures](https://www.kaggle.com/datasets/ritikagiridhar/2000-hand-gestures)
- [Gesture Recognition](https://www.kaggle.com/datasets/imsparsh/gesture-recognition)
The images were collected and used under the respective licenses provided by each platform and are distributed under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/).
We would also like to acknowledge the creator of this dataset, [Rion Dsilva](https://www.linkedin.com/in/rion-dsilva-043464229/), for his great contribution to Vision AI research.
## FAQ
### How do I train a YOLO11 model on the Hand Keypoints dataset?
To train a YOLO11 model on the Hand Keypoints dataset, you can use either Python or the command line interface (CLI). Here's an example for training a YOLO11n-pose model for 100 epochs with an image size of 640:
!!! example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolo11n-pose.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="hand-keypoints.yaml", epochs=100, imgsz=640)
```
=== "CLI"
```bash
# Start training from a pretrained *.pt model
yolo pose train data=hand-keypoints.yaml model=yolo11n-pose.pt epochs=100 imgsz=640
```
For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
### What are the key features of the Hand Keypoints dataset?
The Hand Keypoints dataset is designed for advanced pose estimation tasks and includes several key features:
- **Large Dataset**: Contains 26,768 images with hand keypoint annotations.
- **YOLO11 Compatibility**: Ready for use with YOLO11 models.
- **21 Keypoints**: Detailed hand pose representation, including wrist and finger joints.
For more details, you can explore the [Hand Keypoints Dataset](#introduction) section.
### What applications can benefit from using the Hand Keypoints dataset?
The Hand Keypoints dataset can be applied in various fields, including:
- **Gesture Recognition**: Enhancing human-computer interaction.
- **AR/VR Controls**: Improving user experience in augmented and virtual reality.
- **Robotic Manipulation**: Enabling precise control of robotic hands.
- **Healthcare**: Analyzing hand movements for medical diagnostics.
- **Animation**: Capturing motion for realistic animations.
- **Biometric Authentication**: Enhancing security systems.
For more information, refer to the [Applications](#applications) section.
### How is the Hand Keypoints dataset structured?
The Hand Keypoints dataset is divided into two subsets:
1. **Train**: Contains 18,776 images for training pose estimation models.
2. **Val**: Contains 7,992 images for validation purposes during model training.
This structure ensures a comprehensive training and validation process. For more details, see the [Dataset Structure](#dataset-structure) section.
### How do I use the dataset YAML file for training?
The dataset configuration is defined in a YAML file, which includes paths, classes, and other relevant information. The `hand-keypoints.yaml` file can be found at [hand-keypoints.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/hand-keypoints.yaml).
To use this YAML file for training, specify it in your training script or CLI command as shown in the training example above. For more details, refer to the [Dataset YAML](#dataset-yaml) section.
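As a minimal sketch, the `data` argument accepts either the bundled dataset name or a path to your own copy of the YAML file; the local path below is hypothetical:
```python
from ultralytics import YOLO

model = YOLO("yolo11n-pose.pt")

# The bundled name resolves to the packaged hand-keypoints.yaml
model.train(data="hand-keypoints.yaml", epochs=100, imgsz=640)

# Alternatively, point to a local copy of the file (hypothetical path)
# model.train(data="path/to/hand-keypoints.yaml", epochs=100, imgsz=640)
```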

@ -72,7 +72,7 @@ The `train` and `val` fields specify the paths to the directories containing the
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-pose.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-pose.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco8-pose.yaml", epochs=100, imgsz=640)
@ -82,7 +82,7 @@ The `train` and `val` fields specify the paths to the directories containing the
```bash
# Start training from a pretrained *.pt model
yolo pose train data=coco8-pose.yaml model=yolov8n-pose.pt epochs=100 imgsz=640
yolo pose train data=coco8-pose.yaml model=yolo11n-pose.pt epochs=100 imgsz=640
```
## Supported Datasets
@ -118,6 +118,15 @@ This section outlines the datasets that are compatible with Ultralytics YOLO for
- **Usage**: Great for animal pose or any other pose that is not human-based.
- [Read more about Tiger-Pose](tiger-pose.md)
### Hand Keypoints
- **Description**: The Hand Keypoints dataset comprises 26,768 images, with 18,776 allocated for training and 7,992 for validation.
- **Label Format**: Same as the Ultralytics YOLO format described above, with 21 keypoints per hand and a visibility dimension for each keypoint.
- **Number of Classes**: 1 (Hand).
- **Keypoints**: 21 keypoints.
- **Usage**: Great for human hand pose estimation.
- [Read more about Hand Keypoints](hand-keypoints.md)
### Adding your own dataset
If you have your own dataset and would like to use it for training pose estimation models with Ultralytics YOLO format, ensure that it follows the format specified above under "Ultralytics YOLO format". Convert your annotations to the required format and specify the paths, number of classes, and class names in the YAML configuration file.
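As an illustrative sketch, a minimal pose-dataset YAML might look like the one written below; the paths, class name, and keypoint count are hypothetical, and `kpt_shape` is `[num_keypoints, dims]` with dims of 2 (x, y) or 3 (x, y, visibility):
```python
from pathlib import Path

from ultralytics import YOLO

# Write a minimal, illustrative dataset YAML (all values are hypothetical)
custom_yaml = """
path: datasets/my-pose  # dataset root
train: images/train  # training images, relative to path
val: images/val  # validation images, relative to path
kpt_shape: [21, 3]  # 21 keypoints, each with (x, y, visibility)
names:
  0: hand
"""
Path("my-pose.yaml").write_text(custom_yaml)

# Train on the custom dataset
model = YOLO("yolo11n-pose.pt")
model.train(data="my-pose.yaml", epochs=100, imgsz=640)
```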
@ -162,7 +171,7 @@ To use the COCO-Pose dataset with Ultralytics YOLO:
```python
from ultralytics import YOLO
model = YOLO("yolov8n-pose.pt") # load pretrained model
model = YOLO("yolo11n-pose.pt") # load pretrained model
results = model.train(data="coco-pose.yaml", epochs=100, imgsz=640)
```
@ -179,7 +188,7 @@ To add your dataset:
```python
from ultralytics import YOLO
model = YOLO("yolov8n-pose.pt")
model = YOLO("yolo11n-pose.pt")
results = model.train(data="your-dataset.yaml", epochs=100, imgsz=640)
```

@ -1,7 +1,7 @@
---
comments: true
description: Explore Ultralytics Tiger-Pose dataset with 263 diverse images. Ideal for testing, training, and refining pose estimation algorithms.
keywords: Ultralytics, Tiger-Pose, dataset, pose estimation, YOLOv8, training data, machine learning, neural networks
keywords: Ultralytics, Tiger-Pose, dataset, pose estimation, YOLO11, training data, machine learning, neural networks
---
# Tiger-Pose Dataset
@ -12,7 +12,7 @@ keywords: Ultralytics, Tiger-Pose, dataset, pose estimation, YOLOv8, training da
Despite its manageable size of 210 images, the Tiger-Pose dataset offers diversity, making it suitable for assessing training pipelines, identifying potential errors, and serving as a valuable preliminary step before working with larger datasets for pose estimation.
This dataset is intended for use with [Ultralytics HUB](https://hub.ultralytics.com/) and [YOLOv8](https://github.com/ultralytics/ultralytics).
This dataset is intended for use with [Ultralytics HUB](https://hub.ultralytics.com/) and [YOLO11](https://github.com/ultralytics/ultralytics).
<p align="center">
<br>
@ -22,7 +22,7 @@ This dataset is intended for use with [Ultralytics HUB](https://hub.ultralytics.
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Train YOLOv8 Pose Model on Tiger-Pose Dataset Using Ultralytics HUB
<strong>Watch:</strong> Train YOLO11 Pose Model on Tiger-Pose Dataset Using Ultralytics HUB
</p>
## Dataset YAML
@ -37,7 +37,7 @@ A YAML (Yet Another Markup Language) file serves as the means to specify the con
## Usage
To train a YOLOv8n-pose model on the Tiger-Pose dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
To train a YOLO11n-pose model on the Tiger-Pose dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@ -47,7 +47,7 @@ To train a YOLOv8n-pose model on the Tiger-Pose dataset for 100 [epochs](https:/
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-pose.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-pose.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="tiger-pose.yaml", epochs=100, imgsz=640)
@ -57,7 +57,7 @@ To train a YOLOv8n-pose model on the Tiger-Pose dataset for 100 [epochs](https:/
```bash
# Start training from a pretrained *.pt model
yolo task=pose mode=train data=tiger-pose.yaml model=yolov8n-pose.pt epochs=100 imgsz=640
yolo task=pose mode=train data=tiger-pose.yaml model=yolo11n-pose.pt epochs=100 imgsz=640
```
## Sample Images and Annotations
@ -101,11 +101,11 @@ The dataset has been released available under the [AGPL-3.0 License](https://git
### What is the Ultralytics Tiger-Pose dataset used for?
The Ultralytics Tiger-Pose dataset is designed for pose estimation tasks, consisting of 263 images sourced from a [YouTube video](https://www.youtube.com/watch?v=MIBAT6BGE6U&pp=ygUbVGlnZXIgd2Fsa2luZyByZWZlcmVuY2UubXA0). The dataset is divided into 210 training images and 53 validation images. It is particularly useful for testing, training, and refining pose estimation algorithms using [Ultralytics HUB](https://hub.ultralytics.com/) and [YOLOv8](https://github.com/ultralytics/ultralytics).
The Ultralytics Tiger-Pose dataset is designed for pose estimation tasks, consisting of 263 images sourced from a [YouTube video](https://www.youtube.com/watch?v=MIBAT6BGE6U&pp=ygUbVGlnZXIgd2Fsa2luZyByZWZlcmVuY2UubXA0). The dataset is divided into 210 training images and 53 validation images. It is particularly useful for testing, training, and refining pose estimation algorithms using [Ultralytics HUB](https://hub.ultralytics.com/) and [YOLO11](https://github.com/ultralytics/ultralytics).
### How do I train a YOLOv8 model on the Tiger-Pose dataset?
### How do I train a YOLO11 model on the Tiger-Pose dataset?
To train a YOLOv8n-pose model on the Tiger-Pose dataset for 100 epochs with an image size of 640, use the following code snippets. For more details, visit the [Training](../../modes/train.md) page:
To train a YOLO11n-pose model on the Tiger-Pose dataset for 100 epochs with an image size of 640, use the following code snippets. For more details, visit the [Training](../../modes/train.md) page:
!!! example "Train Example"
@ -115,7 +115,7 @@ To train a YOLOv8n-pose model on the Tiger-Pose dataset for 100 epochs with an i
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-pose.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-pose.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="tiger-pose.yaml", epochs=100, imgsz=640)
@ -126,16 +126,16 @@ To train a YOLOv8n-pose model on the Tiger-Pose dataset for 100 epochs with an i
```bash
# Start training from a pretrained *.pt model
yolo task=pose mode=train data=tiger-pose.yaml model=yolov8n-pose.pt epochs=100 imgsz=640
yolo task=pose mode=train data=tiger-pose.yaml model=yolo11n-pose.pt epochs=100 imgsz=640
```
### What configurations does the `tiger-pose.yaml` file include?
The `tiger-pose.yaml` file is used to specify the configuration details of the Tiger-Pose dataset. It includes crucial data such as file paths and class definitions. To see the exact configuration, you can check out the [Ultralytics Tiger-Pose Dataset Configuration File](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/tiger-pose.yaml).
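If you want to check the configuration locally, here is a minimal sketch; the local file path is hypothetical, and the printed keys follow the Ultralytics pose-dataset convention:
```python
import yaml

# Minimal sketch: inspect a local copy of the dataset YAML (path is hypothetical)
with open("tiger-pose.yaml") as f:
    cfg = yaml.safe_load(f)

print(cfg["train"], cfg["val"])  # image directory paths
print(cfg["kpt_shape"])  # [num_keypoints, dims]
print(cfg["names"])  # class index -> name mapping
```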
### How can I run inference using a YOLOv8 model trained on the Tiger-Pose dataset?
### How can I run inference using a YOLO11 model trained on the Tiger-Pose dataset?
To perform inference using a YOLOv8 model trained on the Tiger-Pose dataset, you can use the following code snippets. For a detailed guide, visit the [Prediction](../../modes/predict.md) page:
To perform inference using a YOLO11 model trained on the Tiger-Pose dataset, you can use the following code snippets. For a detailed guide, visit the [Prediction](../../modes/predict.md) page:
!!! example "Inference Example"
@ -161,4 +161,4 @@ To perform inference using a YOLOv8 model trained on the Tiger-Pose dataset, you
### What are the benefits of using the Tiger-Pose dataset for pose estimation?
The Tiger-Pose dataset, despite its manageable size of 210 images for training, provides a diverse collection of images that are ideal for testing pose estimation pipelines. The dataset helps identify potential errors and acts as a preliminary step before working with larger datasets. Additionally, the dataset supports the training and refinement of pose estimation algorithms using advanced tools like [Ultralytics HUB](https://hub.ultralytics.com/) and [YOLOv8](https://github.com/ultralytics/ultralytics), enhancing model performance and [accuracy](https://www.ultralytics.com/glossary/accuracy).
The Tiger-Pose dataset, despite its manageable size of 210 images for training, provides a diverse collection of images that are ideal for testing pose estimation pipelines. The dataset helps identify potential errors and acts as a preliminary step before working with larger datasets. Additionally, the dataset supports the training and refinement of pose estimation algorithms using advanced tools like [Ultralytics HUB](https://hub.ultralytics.com/) and [YOLO11](https://github.com/ultralytics/ultralytics), enhancing model performance and [accuracy](https://www.ultralytics.com/glossary/accuracy).

@ -18,7 +18,7 @@ Whether you're working on automotive research, developing AI solutions for vehic
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Carparts [Instance Segmentation](https://www.ultralytics.com/glossary/instance-segmentation) Using Ultralytics HUB
<strong>Watch:</strong> Carparts <a href="https://www.ultralytics.com/glossary/instance-segmentation">Instance Segmentation</a> Using Ultralytics HUB
</p>
## Dataset Structure
@ -45,7 +45,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
## Usage
To train Ultralytics YOLOv8n model on the Carparts Segmentation dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
To train Ultralytics YOLO11n model on the Carparts Segmentation dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@ -55,7 +55,7 @@ To train Ultralytics YOLOv8n model on the Carparts Segmentation dataset for 100
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="carparts-seg.yaml", epochs=100, imgsz=640)
@ -65,7 +65,7 @@ To train Ultralytics YOLOv8n model on the Carparts Segmentation dataset for 100
```bash
# Start training from a pretrained *.pt model
yolo segment train data=carparts-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
yolo segment train data=carparts-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
```
## Sample Data and Annotations
@ -108,9 +108,9 @@ We extend our thanks to the Roboflow team for their dedication in developing and
The [Roboflow Carparts Segmentation Dataset](https://universe.roboflow.com/gianmarco-russo-vt9xr/car-seg-un1pm?ref=ultralytics) is a curated collection of images and videos specifically designed for car part segmentation tasks in computer vision. This dataset includes a diverse range of visuals captured from multiple perspectives, making it an invaluable resource for training and testing segmentation models for automotive applications.
### How can I use the Carparts Segmentation Dataset with Ultralytics YOLOv8?
### How can I use the Carparts Segmentation Dataset with Ultralytics YOLO11?
To train a YOLOv8 model on the Carparts Segmentation dataset, you can follow these steps:
To train a YOLO11 model on the Carparts Segmentation dataset, you can follow these steps:
!!! example "Train Example"
@ -120,7 +120,7 @@ To train a YOLOv8 model on the Carparts Segmentation dataset, you can follow the
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="carparts-seg.yaml", epochs=100, imgsz=640)
@ -130,7 +130,7 @@ To train a YOLOv8 model on the Carparts Segmentation dataset, you can follow the
```bash
# Start training from a pretrained *.pt model
yolo segment train data=carparts-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
yolo segment train data=carparts-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
```
For more details, refer to the [Training](../../modes/train.md) documentation.

@ -1,7 +1,7 @@
---
comments: true
description: Explore the COCO-Seg dataset, an extension of COCO, with detailed segmentation annotations. Learn how to train YOLO models with COCO-Seg.
keywords: COCO-Seg, dataset, YOLO models, instance segmentation, object detection, COCO dataset, YOLOv8, computer vision, Ultralytics, machine learning
keywords: COCO-Seg, dataset, YOLO models, instance segmentation, object detection, COCO dataset, YOLO11, computer vision, Ultralytics, machine learning
---
# COCO-Seg Dataset
@ -10,13 +10,7 @@ The [COCO-Seg](https://cocodataset.org/#home) dataset, an extension of the COCO
## COCO-Seg Pretrained Models
| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| -------------------------------------------------------------------------------------------- | --------------------- | -------------------- | --------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-seg.pt) | 640 | 36.7 | 30.5 | 96.1 | 1.21 | 3.4 | 12.6 |
| [YOLOv8s-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-seg.pt) | 640 | 44.6 | 36.8 | 155.7 | 1.47 | 11.8 | 42.6 |
| [YOLOv8m-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-seg.pt) | 640 | 49.9 | 40.8 | 317.0 | 2.18 | 27.3 | 110.2 |
| [YOLOv8l-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l-seg.pt) | 640 | 52.3 | 42.6 | 572.4 | 2.79 | 46.0 | 220.5 |
| [YOLOv8x-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-seg.pt) | 640 | 53.4 | 43.4 | 712.1 | 4.02 | 71.8 | 344.1 |
{% include "macros/yolo-seg-perf.md" %}
## Key Features
@ -49,7 +43,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
## Usage
To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
To train a YOLO11n-seg model on the COCO-Seg dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@ -59,7 +53,7 @@ To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 [epochs](https://ww
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco-seg.yaml", epochs=100, imgsz=640)
@ -69,7 +63,7 @@ To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 [epochs](https://ww
```bash
# Start training from a pretrained *.pt model
yolo segment train data=coco-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
yolo segment train data=coco-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
```
## Sample Images and Annotations
@ -109,9 +103,9 @@ We extend our thanks to the COCO Consortium for creating and maintaining this in
The [COCO-Seg](https://cocodataset.org/#home) dataset is an extension of the original COCO (Common Objects in Context) dataset, specifically designed for instance segmentation tasks. While it uses the same images as the COCO dataset, COCO-Seg includes more detailed segmentation annotations, making it a powerful resource for researchers and developers focusing on object instance segmentation.
### How can I train a YOLOv8 model using the COCO-Seg dataset?
### How can I train a YOLO11 model using the COCO-Seg dataset?
To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a detailed list of available arguments, refer to the model [Training](../../modes/train.md) page.
To train a YOLO11n-seg model on the COCO-Seg dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a detailed list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@ -121,7 +115,7 @@ To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 epochs with an imag
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco-seg.yaml", epochs=100, imgsz=640)
@ -131,7 +125,7 @@ To train a YOLOv8n-seg model on the COCO-Seg dataset for 100 epochs with an imag
```bash
# Start training from a pretrained *.pt model
yolo segment train data=coco-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
yolo segment train data=coco-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
```
### What are the key features of the COCO-Seg dataset?
@ -145,15 +139,9 @@ The COCO-Seg dataset includes several key features:
### What pretrained models are available for COCO-Seg, and what are their performance metrics?
The COCO-Seg dataset supports multiple pretrained YOLOv8 segmentation models with varying performance metrics. Here's a summary of the available models and their key metrics:
The COCO-Seg dataset supports multiple pretrained YOLO11 segmentation models with varying performance metrics. Here's a summary of the available models and their key metrics:
| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| -------------------------------------------------------------------------------------------- | --------------------- | -------------------- | --------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-seg.pt) | 640 | 36.7 | 30.5 | 96.1 | 1.21 | 3.4 | 12.6 |
| [YOLOv8s-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-seg.pt) | 640 | 44.6 | 36.8 | 155.7 | 1.47 | 11.8 | 42.6 |
| [YOLOv8m-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-seg.pt) | 640 | 49.9 | 40.8 | 317.0 | 2.18 | 27.3 | 110.2 |
| [YOLOv8l-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l-seg.pt) | 640 | 52.3 | 42.6 | 572.4 | 2.79 | 46.0 | 220.5 |
| [YOLOv8x-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-seg.pt) | 640 | 53.4 | 43.4 | 712.1 | 4.02 | 71.8 | 344.1 |
{% include "macros/yolo-seg-perf.md" %}
### How is the COCO-Seg dataset structured and what subsets does it contain?

@ -1,7 +1,7 @@
---
comments: true
description: Discover the versatile and manageable COCO8-Seg dataset by Ultralytics, ideal for testing and debugging segmentation models or new detection approaches.
keywords: COCO8-Seg, Ultralytics, segmentation dataset, YOLOv8, COCO 2017, model training, computer vision, dataset configuration
keywords: COCO8-Seg, Ultralytics, segmentation dataset, YOLO11, COCO 2017, model training, computer vision, dataset configuration
---
# COCO8-Seg Dataset
@ -10,7 +10,7 @@ keywords: COCO8-Seg, Ultralytics, segmentation dataset, YOLOv8, COCO 2017, model
[Ultralytics](https://www.ultralytics.com/) COCO8-Seg is a small, but versatile [instance segmentation](https://www.ultralytics.com/glossary/instance-segmentation) dataset composed of the first 8 images of the COCO train 2017 set, 4 for training and 4 for validation. This dataset is ideal for testing and debugging segmentation models, or for experimenting with new detection approaches. With 8 images, it is small enough to be easily manageable, yet diverse enough to test training pipelines for errors and act as a sanity check before training larger datasets.
This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.com/) and [YOLOv8](https://github.com/ultralytics/ultralytics).
This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.com/) and [YOLO11](https://github.com/ultralytics/ultralytics).
## Dataset YAML
@ -24,7 +24,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
## Usage
To train a YOLOv8n-seg model on the COCO8-Seg dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
To train a YOLO11n-seg model on the COCO8-Seg dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@ -34,7 +34,7 @@ To train a YOLOv8n-seg model on the COCO8-Seg dataset for 100 [epochs](https://w
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco8-seg.yaml", epochs=100, imgsz=640)
@ -44,7 +44,7 @@ To train a YOLOv8n-seg model on the COCO8-Seg dataset for 100 [epochs](https://w
```bash
# Start training from a pretrained *.pt model
yolo segment train data=coco8-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
yolo segment train data=coco8-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
```
## Sample Images and Annotations
@ -80,13 +80,13 @@ We would like to acknowledge the COCO Consortium for creating and maintaining th
## FAQ
### What is the COCO8-Seg dataset, and how is it used in Ultralytics YOLOv8?
### What is the COCO8-Seg dataset, and how is it used in Ultralytics YOLO11?
The **COCO8-Seg dataset** is a compact instance segmentation dataset by Ultralytics, consisting of the first 8 images from the COCO train 2017 set—4 images for training and 4 for validation. This dataset is tailored for testing and debugging segmentation models or experimenting with new detection methods. It is particularly useful with Ultralytics [YOLOv8](https://github.com/ultralytics/ultralytics) and [HUB](https://hub.ultralytics.com/) for rapid iteration and pipeline error-checking before scaling to larger datasets. For detailed usage, refer to the model [Training](../../modes/train.md) page.
The **COCO8-Seg dataset** is a compact instance segmentation dataset by Ultralytics, consisting of the first 8 images from the COCO train 2017 set—4 images for training and 4 for validation. This dataset is tailored for testing and debugging segmentation models or experimenting with new detection methods. It is particularly useful with Ultralytics [YOLO11](https://github.com/ultralytics/ultralytics) and [HUB](https://hub.ultralytics.com/) for rapid iteration and pipeline error-checking before scaling to larger datasets. For detailed usage, refer to the model [Training](../../modes/train.md) page.
### How can I train a YOLOv8n-seg model using the COCO8-Seg dataset?
### How can I train a YOLO11n-seg model using the COCO8-Seg dataset?
To train a **YOLOv8n-seg** model on the COCO8-Seg dataset for 100 epochs with an image size of 640, you can use Python or CLI commands. Here's a quick example:
To train a **YOLO11n-seg** model on the COCO8-Seg dataset for 100 epochs with an image size of 640, you can use Python or CLI commands. Here's a quick example:
!!! example "Train Example"
@ -96,7 +96,7 @@ To train a **YOLOv8n-seg** model on the COCO8-Seg dataset for 100 epochs with an
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # Load a pretrained model (recommended for training)
model = YOLO("yolo11n-seg.pt") # Load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco8-seg.yaml", epochs=100, imgsz=640)
@ -106,7 +106,7 @@ To train a **YOLOv8n-seg** model on the COCO8-Seg dataset for 100 epochs with an
```bash
# Start training from a pretrained *.pt model
yolo segment train data=coco8-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
yolo segment train data=coco8-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
```
For a thorough explanation of available arguments and configuration options, you can check the [Training](../../modes/train.md) documentation.

@ -34,7 +34,7 @@ A YAML (Yet Another Markup Language) file is employed to outline the configurati
## Usage
To train Ultralytics YOLOv8n model on the Crack Segmentation dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
To train Ultralytics YOLO11n model on the Crack Segmentation dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@ -44,7 +44,7 @@ To train Ultralytics YOLOv8n model on the Crack Segmentation dataset for 100 [ep
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="crack-seg.yaml", epochs=100, imgsz=640)
@ -54,7 +54,7 @@ To train Ultralytics YOLOv8n model on the Crack Segmentation dataset for 100 [ep
```bash
# Start training from a pretrained *.pt model
yolo segment train data=crack-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
yolo segment train data=crack-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
```
## Sample Data and Annotations
@ -98,9 +98,9 @@ We would like to acknowledge the Roboflow team for creating and maintaining the
The [Roboflow Crack Segmentation Dataset](https://universe.roboflow.com/university-bswxt/crack-bphdr?ref=ultralytics) is a comprehensive collection of 4029 static images designed specifically for transportation and public safety studies. It is ideal for tasks such as self-driving car model development and infrastructure maintenance. The dataset includes training, testing, and validation sets, aiding in accurate crack detection and segmentation.
### How do I train a model using the Crack Segmentation Dataset with Ultralytics YOLOv8?
### How do I train a model using the Crack Segmentation Dataset with Ultralytics YOLO11?
To train an Ultralytics YOLOv8 model on the Crack Segmentation dataset, use the following code snippets. Detailed instructions and further parameters can be found on the model [Training](../../modes/train.md) page.
To train an Ultralytics YOLO11 model on the Crack Segmentation dataset, use the following code snippets. Detailed instructions and further parameters can be found on the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@ -110,7 +110,7 @@ To train an Ultralytics YOLOv8 model on the Crack Segmentation dataset, use the
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="crack-seg.yaml", epochs=100, imgsz=640)
@ -120,7 +120,7 @@ To train an Ultralytics YOLOv8 model on the Crack Segmentation dataset, use the
```bash
# Start training from a pretrained *.pt model
yolo segment train data=crack-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
yolo segment train data=crack-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
```
### Why should I use the Crack Segmentation Dataset for my self-driving car project?

@ -74,7 +74,7 @@ The `train` and `val` fields specify the paths to the directories containing the
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco8-seg.yaml", epochs=100, imgsz=640)
@ -84,7 +84,7 @@ The `train` and `val` fields specify the paths to the directories containing the
```bash
# Start training from a pretrained *.pt model
yolo segment train data=coco8-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
yolo segment train data=coco8-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
```
## Supported Datasets
@ -137,13 +137,13 @@ To auto-annotate your dataset using the Ultralytics framework, you can use the `
```python
from ultralytics.data.annotator import auto_annotate
auto_annotate(data="path/to/images", det_model="yolov8x.pt", sam_model="sam_b.pt")
auto_annotate(data="path/to/images", det_model="yolo11x.pt", sam_model="sam_b.pt")
```
| Argument | Type | Description | Default |
| ------------ | ----------------------- | ----------------------------------------------------------------------------------------------------------- | -------------- |
| `data` | `str` | Path to a folder containing images to be annotated. | `None` |
| `det_model` | `str, optional` | Pre-trained YOLO detection model. Defaults to `'yolov8x.pt'`. | `'yolov8x.pt'` |
| `det_model` | `str, optional` | Pre-trained YOLO detection model. Defaults to `'yolo11x.pt'`. | `'yolo11x.pt'` |
| `sam_model` | `str, optional` | Pre-trained SAM segmentation model. Defaults to `'sam_b.pt'`. | `'sam_b.pt'` |
| `device` | `str, optional` | Device to run the models on. Defaults to an empty string (CPU or GPU, if available). | `''` |
| `output_dir` | `str or None, optional` | Directory to save the annotated results. Defaults to a `'labels'` folder in the same directory as `'data'`. | `None` |
@ -195,7 +195,7 @@ Auto-annotation in Ultralytics YOLO allows you to generate segmentation annotati
```python
from ultralytics.data.annotator import auto_annotate
auto_annotate(data="path/to/images", det_model="yolov8x.pt", sam_model="sam_b.pt")
auto_annotate(data="path/to/images", det_model="yolo11x.pt", sam_model="sam_b.pt")
```
This function automates the annotation process, making it faster and more efficient. For more details, explore the [Auto-Annotation](#auto-annotation) section.

@ -34,7 +34,7 @@ A YAML (Yet Another Markup Language) file is used to define the dataset configur
## Usage
To train Ultralytics YOLOv8n model on the Package Segmentation dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
To train Ultralytics YOLO11n model on the Package Segmentation dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
!!! example "Train Example"
@ -44,7 +44,7 @@ To train Ultralytics YOLOv8n model on the Package Segmentation dataset for 100 [
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
model = YOLO("yolo11n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="package-seg.yaml", epochs=100, imgsz=640)
@ -54,7 +54,7 @@ To train Ultralytics YOLOv8n model on the Package Segmentation dataset for 100 [
```bash
# Start training from a pretrained *.pt model
yolo segment train data=package-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
yolo segment train data=package-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
```
## Sample Data and Annotations
@ -97,9 +97,9 @@ We express our gratitude to the Roboflow team for their efforts in creating and
The [Roboflow Package Segmentation Dataset](https://universe.roboflow.com/factorypackage/factory_package?ref=ultralytics) is a curated collection of images tailored for tasks involving package segmentation. It includes diverse images of packages in various contexts, making it invaluable for training and evaluating segmentation models. This dataset is particularly useful for applications in logistics, warehouse automation, and any project requiring precise package analysis. It helps optimize logistics and enhance vision models for accurate package identification and sorting.
### How do I train an Ultralytics YOLOv8 model on the Package Segmentation Dataset?
### How do I train an Ultralytics YOLO11 model on the Package Segmentation Dataset?
You can train an Ultralytics YOLOv8n model using both Python and CLI methods. Use the snippets below:
You can train an Ultralytics YOLO11n model using either the Python API or the CLI. Use the snippets below:
!!! example "Train Example"
@ -109,7 +109,7 @@ You can train an Ultralytics YOLOv8n model using both Python and CLI methods. Us
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load a pretrained model
model = YOLO("yolo11n-seg.pt") # load a pretrained model
# Train the model
results = model.train(data="package-seg.yaml", epochs=100, imgsz=640)
@ -119,7 +119,7 @@ You can train an Ultralytics YOLOv8n model using both Python and CLI methods. Us
```bash
# Start training from a pretrained *.pt model
yolo segment train data=package-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
yolo segment train data=package-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
```
Refer to the model [Training](../../modes/train.md) page for more details.
@ -134,9 +134,9 @@ The dataset is structured into three main components:
This structure ensures a balanced dataset for thorough model training, validation, and testing, enhancing the performance of segmentation algorithms.
### Why should I use Ultralytics YOLOv8 with the Package Segmentation Dataset?
### Why should I use Ultralytics YOLO11 with the Package Segmentation Dataset?
Ultralytics YOLOv8 provides state-of-the-art [accuracy](https://www.ultralytics.com/glossary/accuracy) and speed for real-time object detection and segmentation tasks. Using it with the Package Segmentation Dataset allows you to leverage YOLOv8's capabilities for precise package segmentation. This combination is especially beneficial for industries like logistics and warehouse automation, where accurate package identification is critical. For more information, check out our [page on YOLOv8 segmentation](https://docs.ultralytics.com/models/yolov8/).
Ultralytics YOLO11 provides state-of-the-art [accuracy](https://www.ultralytics.com/glossary/accuracy) and speed for real-time object detection and segmentation tasks. Using it with the Package Segmentation Dataset allows you to leverage YOLO11's capabilities for precise package segmentation. This combination is especially beneficial for industries like logistics and warehouse automation, where accurate package identification is critical. For more information, check out our [page on YOLO11 segmentation](https://docs.ultralytics.com/models/yolo11/).
### How can I access and use the package-seg.yaml file for the Package Segmentation Dataset?
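The `package-seg.yaml` file ships with the Ultralytics package (see [package-seg.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/package-seg.yaml)), so you can reference it by name in training commands. A minimal sketch:
```python
from ultralytics import YOLO

# The bundled name resolves to the packaged package-seg.yaml configuration
model = YOLO("yolo11n-seg.pt")
model.train(data="package-seg.yaml", epochs=100, imgsz=640)
```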

@ -19,14 +19,14 @@ Multi-Object Detector doesn't need standalone training and directly supports pre
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
results = model.track(source="https://youtu.be/LNwODJXcvt4", conf=0.3, iou=0.5, show=True)
```
=== "CLI"
```bash
yolo track model=yolov8n.pt source="https://youtu.be/LNwODJXcvt4" conf=0.3, iou=0.5 show
yolo track model=yolo11n.pt source="https://youtu.be/LNwODJXcvt4" conf=0.3 iou=0.5 show
```
## FAQ
@ -42,17 +42,17 @@ To use Multi-Object Tracking with Ultralytics YOLO, you can start by using the P
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt") # Load the YOLOv8 model
model = YOLO("yolo11n.pt") # Load the YOLO11 model
results = model.track(source="https://youtu.be/LNwODJXcvt4", conf=0.3, iou=0.5, show=True)
```
=== "CLI"
```bash
yolo track model=yolov8n.pt source="https://youtu.be/LNwODJXcvt4" conf=0.3 iou=0.5 show
yolo track model=yolo11n.pt source="https://youtu.be/LNwODJXcvt4" conf=0.3 iou=0.5 show
```
These commands load the YOLOv8 model and use it for tracking objects in the given video source with specific confidence (`conf`) and [Intersection over Union](https://www.ultralytics.com/glossary/intersection-over-union-iou) (`iou`) thresholds. For more details, refer to the [track mode documentation](../../modes/track.md).
These commands load the YOLO11 model and use it for tracking objects in the given video source with specific confidence (`conf`) and [Intersection over Union](https://www.ultralytics.com/glossary/intersection-over-union-iou) (`iou`) thresholds. For more details, refer to the [track mode documentation](../../modes/track.md).
### What are the upcoming features for training trackers in Ultralytics?

@ -1,15 +1,26 @@
---
comments: true
description: Learn to create line graphs, bar plots, and pie charts using Python with guided instructions and code snippets. Maximize your data visualization skills!
keywords: Ultralytics, YOLOv8, data visualization, line graphs, bar plots, pie charts, Python, analytics, tutorial, guide
keywords: Ultralytics, YOLO11, data visualization, line graphs, bar plots, pie charts, Python, analytics, tutorial, guide
---
# Analytics using Ultralytics YOLOv8
# Analytics using Ultralytics YOLO11
## Introduction
This guide provides a comprehensive overview of three fundamental types of [data visualizations](https://www.ultralytics.com/glossary/data-visualization): line graphs, bar plots, and pie charts. Each section includes step-by-step instructions and code snippets on how to create these visualizations using Python.
<p align="center">
<br>
<iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/tVuLIMt4DMY"
title="YouTube video player" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> How to generate Analytical Graphs using Ultralytics | Line Graphs, Bar Plots, Area and Pie Charts
</p>
### Visual Samples
| Line Graph | Bar Plot | Pie Chart |
@ -31,7 +42,7 @@ This guide provides a comprehensive overview of three fundamental types of [data
from ultralytics import YOLO, solutions
model = YOLO("yolov8s.pt")
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("Path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
@ -80,7 +91,7 @@ This guide provides a comprehensive overview of three fundamental types of [data
from ultralytics import YOLO, solutions
model = YOLO("yolov8s.pt")
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("Path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
@ -141,7 +152,7 @@ This guide provides a comprehensive overview of three fundamental types of [data
from ultralytics import YOLO, solutions
model = YOLO("yolov8s.pt")
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("Path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
@ -191,7 +202,7 @@ This guide provides a comprehensive overview of three fundamental types of [data
from ultralytics import YOLO, solutions
model = YOLO("yolov8s.pt")
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("Path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
@ -241,7 +252,7 @@ This guide provides a comprehensive overview of three fundamental types of [data
from ultralytics import YOLO, solutions
model = YOLO("yolov8s.pt")
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
@ -319,11 +330,11 @@ Understanding when and how to use different types of visualizations is crucial f
## FAQ
### How do I create a line graph using Ultralytics YOLOv8 Analytics?
### How do I create a line graph using Ultralytics YOLO11 Analytics?
To create a line graph using Ultralytics YOLOv8 Analytics, follow these steps:
To create a line graph using Ultralytics YOLO11 Analytics, follow these steps:
1. Load a YOLOv8 model and open your video file.
1. Load a YOLO11 model and open your video file.
2. Initialize the `Analytics` class with the type set to "line".
3. Iterate through video frames, updating the line graph with relevant data, such as object counts per frame.
4. Save the output video displaying the line graph (a standalone sketch of this flow appears below).
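If you prefer a dependency-light alternative to the `Analytics` class, here is a minimal sketch that plots per-frame detection counts with matplotlib; the video path is hypothetical:
```python
import cv2
import matplotlib.pyplot as plt

from ultralytics import YOLO

# Minimal sketch: count detected objects per frame and plot a line graph
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")  # hypothetical path

counts = []
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)
    counts.append(len(results[0].boxes))  # objects detected in this frame

cap.release()
plt.plot(range(len(counts)), counts)
plt.xlabel("Frame")
plt.ylabel("Object count")
plt.savefig("line_plot.png")
```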
@ -335,7 +346,7 @@ import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8s.pt")
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("Path/to/video/file.mp4")
out = cv2.VideoWriter("line_plot.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))
@ -355,11 +366,11 @@ out.release()
cv2.destroyAllWindows()
```
For further details on configuring the `Analytics` class, visit the [Analytics using Ultralytics YOLOv8 📊](#analytics-using-ultralytics-yolov8) section.
For further details on configuring the `Analytics` class, visit the [Analytics using Ultralytics YOLO11 📊](#analytics-using-ultralytics-yolo11) section.
### What are the benefits of using Ultralytics YOLOv8 for creating bar plots?
### What are the benefits of using Ultralytics YOLO11 for creating bar plots?
Using Ultralytics YOLOv8 for creating bar plots offers several benefits:
Using Ultralytics YOLO11 for creating bar plots offers several benefits:
1. **Real-time Data Visualization**: Seamlessly integrate [object detection](https://www.ultralytics.com/glossary/object-detection) results into bar plots for dynamic updates.
2. **Ease of Use**: Simple API and functions make it straightforward to implement and visualize data.
@ -373,7 +384,7 @@ import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8s.pt")
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("Path/to/video/file.mp4")
out = cv2.VideoWriter("bar_plot.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))
@ -398,9 +409,9 @@ cv2.destroyAllWindows()
To learn more, visit the [Bar Plot](#visual-samples) section in the guide.
### Why should I use Ultralytics YOLOv8 for creating pie charts in my data visualization projects?
### Why should I use Ultralytics YOLO11 for creating pie charts in my data visualization projects?
Ultralytics YOLOv8 is an excellent choice for creating pie charts because:
Ultralytics YOLO11 is an excellent choice for creating pie charts because:
1. **Integration with Object Detection**: Directly integrate object detection results into pie charts for immediate insights.
2. **User-Friendly API**: Simple to set up and use with minimal code.
@ -414,7 +425,7 @@ import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8s.pt")
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("Path/to/video/file.mp4")
out = cv2.VideoWriter("pie_chart.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))
@ -439,9 +450,9 @@ cv2.destroyAllWindows()
For more information, refer to the [Pie Chart](#visual-samples) section in the guide.
### Can Ultralytics YOLOv8 be used to track objects and dynamically update visualizations?
### Can Ultralytics YOLO11 be used to track objects and dynamically update visualizations?
Yes, Ultralytics YOLOv8 can be used to track objects and dynamically update visualizations. It supports tracking multiple objects in real-time and can update various visualizations like line graphs, bar plots, and pie charts based on the tracked objects' data.
Yes, Ultralytics YOLO11 can be used to track objects and dynamically update visualizations. It supports tracking multiple objects in real-time and can update various visualizations like line graphs, bar plots, and pie charts based on the tracked objects' data.
Example for tracking and updating a line graph:
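As a minimal sketch of the idea, shown here with `model.track` and matplotlib rather than the `Analytics` class; the video path is hypothetical:
```python
import cv2
import matplotlib.pyplot as plt

from ultralytics import YOLO

# Minimal sketch: track objects with persistent IDs and plot the cumulative
# number of unique track IDs seen per frame
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")  # hypothetical path

seen_ids, unique_per_frame = set(), []
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model.track(frame, persist=True, verbose=False)
    ids = results[0].boxes.id  # None when nothing is tracked in this frame
    if ids is not None:
        seen_ids.update(int(i) for i in ids)
    unique_per_frame.append(len(seen_ids))

cap.release()
plt.plot(unique_per_frame)
plt.xlabel("Frame")
plt.ylabel("Unique tracked objects")
plt.savefig("line_plot.png")
```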
@ -450,7 +461,7 @@ import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8s.pt")
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("Path/to/video/file.mp4")
out = cv2.VideoWriter("line_plot.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))
@ -472,11 +483,11 @@ cv2.destroyAllWindows()
To learn about the complete functionality, see the [Tracking](../modes/track.md) section.
### What makes Ultralytics YOLOv8 different from other object detection solutions like [OpenCV](https://www.ultralytics.com/glossary/opencv) and [TensorFlow](https://www.ultralytics.com/glossary/tensorflow)?
### What makes Ultralytics YOLO11 different from other object detection solutions like [OpenCV](https://www.ultralytics.com/glossary/opencv) and [TensorFlow](https://www.ultralytics.com/glossary/tensorflow)?
Ultralytics YOLOv8 stands out from other object detection solutions like OpenCV and TensorFlow for multiple reasons:
Ultralytics YOLO11 stands out from other object detection solutions like OpenCV and TensorFlow for multiple reasons:
1. **State-of-the-art [Accuracy](https://www.ultralytics.com/glossary/accuracy)**: YOLOv8 provides superior accuracy in object detection, segmentation, and classification tasks.
1. **State-of-the-art [Accuracy](https://www.ultralytics.com/glossary/accuracy)**: YOLO11 provides superior accuracy in object detection, segmentation, and classification tasks.
2. **Ease of Use**: User-friendly API allows for quick implementation and integration without extensive coding.
3. **Real-time Performance**: Optimized for high-speed inference, suitable for real-time applications.
4. **Diverse Applications**: Supports various tasks including multi-object tracking, custom model training, and exporting to different formats like ONNX, TensorRT, and CoreML.

@ -1,10 +1,10 @@
---
comments: true
description: Learn how to run YOLOv8 on AzureML. Quickstart instructions for terminal and notebooks to harness Azure's cloud computing for efficient model training.
keywords: YOLOv8, AzureML, machine learning, cloud computing, quickstart, terminal, notebooks, model training, Python SDK, AI, Ultralytics
description: Learn how to run YOLO11 on AzureML. Quickstart instructions for terminal and notebooks to harness Azure's cloud computing for efficient model training.
keywords: YOLO11, AzureML, machine learning, cloud computing, quickstart, terminal, notebooks, model training, Python SDK, AI, Ultralytics
---
# YOLOv8 🚀 on AzureML
# YOLO11 🚀 on AzureML
## What is Azure?
@ -22,7 +22,7 @@ For users of YOLO (You Only Look Once), AzureML provides a robust, scalable, and
- Utilize built-in tools for data preprocessing, feature selection, and model training.
- Collaborate more efficiently with capabilities for MLOps (Machine Learning Operations), including but not limited to monitoring, auditing, and versioning of models and data.
In the subsequent sections, you will find a quickstart guide detailing how to run YOLOv8 object detection models using AzureML, either from a compute terminal or a notebook.
In the subsequent sections, you will find a quickstart guide detailing how to run YOLO11 object detection models using AzureML, either from a compute terminal or a notebook.
## Prerequisites
@ -49,8 +49,8 @@ Start your compute and open a Terminal:
Create your conda virtualenv and install pip in it:
```bash
conda create --name yolov8env -y
conda activate yolov8env
conda create --name yolo11env -y
conda activate yolo11env
conda install pip -y
```
@ -63,18 +63,18 @@ pip install ultralytics
pip install onnx>=1.12.0
```
### Perform YOLOv8 tasks
### Perform YOLO11 tasks
Predict:
```bash
yolo predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'
yolo predict model=yolo11n.pt source='https://ultralytics.com/images/bus.jpg'
```
Train a detection model for 10 [epochs](https://www.ultralytics.com/glossary/epoch) with an initial learning rate of 0.01:
```bash
yolo train data=coco8.yaml model=yolov8n.pt epochs=10 lr0=0.01
yolo train data=coco8.yaml model=yolo11n.pt epochs=10 lr0=0.01
```
You can find more [instructions to use the Ultralytics CLI here](../quickstart.md#use-ultralytics-with-cli).
@ -92,11 +92,11 @@ Open the compute Terminal.
From your compute terminal, you need to create a new ipykernel that will be used by your notebook to manage your dependencies:
```bash
conda create --name yolov8env -y
conda activate yolov8env
conda create --name yolo11env -y
conda activate yolo11env
conda install pip -y
conda install ipykernel -y
python -m ipykernel install --user --name yolov8env --display-name "yolov8env"
python -m ipykernel install --user --name yolo11env --display-name "yolo11env"
```
Close your terminal and create a new notebook. From your Notebook, you can select the new kernel.
@ -105,21 +105,21 @@ Then you can open a Notebook cell and install the required dependencies:
```bash
%%bash
source activate yolov8env
source activate yolo11env
cd ultralytics
pip install -r requirements.txt
pip install ultralytics
pip install onnx>=1.12.0
```
Note that we need to use the `source activate yolov8env` for all the %%bash cells, to make sure that the %%bash cell uses environment we want.
Note that we need to use `source activate yolo11env` in all the %%bash cells to make sure that each %%bash cell uses the environment we want.
Run some predictions using the [Ultralytics CLI](../quickstart.md#use-ultralytics-with-cli):
```bash
%%bash
source activate yolov8env
yolo predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'
source activate yolo11env
yolo predict model=yolo11n.pt source='https://ultralytics.com/images/bus.jpg'
```
Or with the [Ultralytics Python interface](../quickstart.md#use-ultralytics-with-python), for example to train the model:
@ -128,7 +128,7 @@ Or with the [Ultralytics Python interface](../quickstart.md#use-ultralytics-with
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load an official YOLOv8n model
model = YOLO("yolo11n.pt") # load an official YOLO11n model
# Use the model
model.train(data="coco8.yaml", epochs=3) # train the model
@ -137,47 +137,47 @@ results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
path = model.export(format="onnx") # export the model to ONNX format
```
You can use either the Ultralytics CLI or Python interface for running YOLOv8 tasks, as described in the terminal section above.
You can use either the Ultralytics CLI or Python interface for running YOLO11 tasks, as described in the terminal section above.
By following these steps, you should be able to get YOLOv8 running quickly on AzureML for quick trials. For more advanced uses, you may refer to the full AzureML documentation linked at the beginning of this guide.
By following these steps, you should be able to get YOLO11 up and running on AzureML for quick trials. For more advanced uses, you may refer to the full AzureML documentation linked at the beginning of this guide.
## Explore More with AzureML
This guide serves as an introduction to get you up and running with YOLOv8 on AzureML. However, it only scratches the surface of what AzureML can offer. To delve deeper and unlock the full potential of AzureML for your machine learning projects, consider exploring the following resources:
This guide serves as an introduction to get you up and running with YOLO11 on AzureML. However, it only scratches the surface of what AzureML can offer. To delve deeper and unlock the full potential of AzureML for your machine learning projects, consider exploring the following resources:
- [Create a Data Asset](https://learn.microsoft.com/azure/machine-learning/how-to-create-data-assets): Learn how to set up and manage your data assets effectively within the AzureML environment.
- [Initiate an AzureML Job](https://learn.microsoft.com/azure/machine-learning/how-to-train-model): Get a comprehensive understanding of how to kickstart your machine learning training jobs on AzureML.
- [Register a Model](https://learn.microsoft.com/azure/machine-learning/how-to-manage-models): Familiarize yourself with model management practices including registration, versioning, and deployment.
- [Train YOLOv8 with AzureML Python SDK](https://medium.com/@ouphi/how-to-train-the-yolov8-model-with-azure-machine-learning-python-sdk-8268696be8ba): Explore a step-by-step guide on using the AzureML Python SDK to train your YOLOv8 models.
- [Train YOLOv8 with AzureML CLI](https://medium.com/@ouphi/how-to-train-the-yolov8-model-with-azureml-and-the-az-cli-73d3c870ba8e): Discover how to utilize the command-line interface for streamlined training and management of YOLOv8 models on AzureML.
- [Train YOLO11 with AzureML Python SDK](https://medium.com/@ouphi/how-to-train-the-yolov8-model-with-azure-machine-learning-python-sdk-8268696be8ba): Explore a step-by-step guide on using the AzureML Python SDK to train your YOLO11 models.
- [Train YOLO11 with AzureML CLI](https://medium.com/@ouphi/how-to-train-the-yolov8-model-with-azureml-and-the-az-cli-73d3c870ba8e): Discover how to utilize the command-line interface for streamlined training and management of YOLO11 models on AzureML.
## FAQ
### How do I run YOLOv8 on AzureML for model training?
### How do I run YOLO11 on AzureML for model training?
Running YOLOv8 on AzureML for model training involves several steps:
Running YOLO11 on AzureML for model training involves several steps:
1. **Create a Compute Instance**: From your AzureML workspace, navigate to Compute > Compute instances > New, and select the required instance.
2. **Setup Environment**: Start your compute instance, open a terminal, and create a conda environment:
```bash
conda create --name yolov8env -y
conda activate yolov8env
conda create --name yolo11env -y
conda activate yolo11env
conda install pip -y
pip install ultralytics "onnx>=1.12.0" # quote the version specifier so the shell does not treat '>' as redirection
```
3. **Run YOLO11 Tasks**: Use the Ultralytics CLI to train your model:
```bash
yolo train data=coco8.yaml model=yolo11n.pt epochs=10 lr0=0.01
```
For more details, you can refer to the [instructions to use the Ultralytics CLI](../quickstart.md#use-ultralytics-with-cli).
### What are the benefits of using AzureML for YOLO11 training?
AzureML provides a robust and efficient ecosystem for training YOLO11 models:
- **Scalability**: Easily scale your compute resources as your data and model complexity grow.
- **MLOps Integration**: Utilize features like versioning, monitoring, and auditing to streamline ML operations.
@ -185,9 +185,9 @@ AzureML provides a robust and efficient ecosystem for training YOLOv8 models:
These advantages make AzureML an ideal platform for projects ranging from quick prototypes to large-scale deployments. For more tips, check out [AzureML Jobs](https://learn.microsoft.com/azure/machine-learning/how-to-train-model).
### How do I troubleshoot common issues when running YOLO11 on AzureML?
Troubleshooting common issues with YOLO11 on AzureML can involve the following steps:
- **Dependency Issues**: Ensure all required packages are installed. Refer to the `requirements.txt` file for dependencies.
- **Environment Setup**: Verify that your conda environment is correctly activated before running commands.
@ -202,7 +202,7 @@ Yes, AzureML allows you to use both the Ultralytics CLI and the Python interface
- **CLI**: Ideal for quick tasks and running standard scripts directly from the terminal.
```bash
yolo predict model=yolo11n.pt source='https://ultralytics.com/images/bus.jpg'
```
- **Python Interface**: Useful for more complex tasks requiring custom coding and integration within notebooks.
@ -210,18 +210,18 @@ Yes, AzureML allows you to use both the Ultralytics CLI and the Python interface
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
model.train(data="coco8.yaml", epochs=3)
```
Refer to the quickstart guides for more detailed instructions [here](../quickstart.md#use-ultralytics-with-cli) and [here](../quickstart.md#use-ultralytics-with-python).
### What is the advantage of using Ultralytics YOLO11 over other [object detection](https://www.ultralytics.com/glossary/object-detection) models?
Ultralytics YOLO11 offers several unique advantages over competing object detection models:
- **Speed**: Faster inference and training times compared to models like Faster R-CNN and SSD.
- **[Accuracy](https://www.ultralytics.com/glossary/accuracy)**: High accuracy in detection tasks with features like anchor-free design and enhanced augmentation strategies.
- **Ease of Use**: Intuitive API and CLI for quick setup, making it accessible both to beginners and experts.
To explore more about YOLO11's features, visit the [Ultralytics YOLO](https://www.ultralytics.com/yolo) page for detailed insights.

@ -73,7 +73,7 @@ With Ultralytics installed, you can now start using its robust features for [obj
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt") # initialize model
model = YOLO("yolo11n.pt") # initialize model
results = model("path/to/image.jpg") # perform inference
results[0].show() # display results for the first image
```

@ -1,10 +1,10 @@
---
comments: true
description: Learn how to boost your Raspberry Pi's ML performance using Coral Edge TPU with Ultralytics YOLO11. Follow our detailed setup and installation guide.
keywords: Coral Edge TPU, Raspberry Pi, YOLO11, Ultralytics, TensorFlow Lite, ML inference, machine learning, AI, installation guide, setup tutorial
---
# Coral Edge TPU on a Raspberry Pi with Ultralytics YOLO11 🚀
<p align="center">
<img width="800" src="https://github.com/ultralytics/docs/releases/download/0/edge-tpu-usb-accelerator-and-pi.avif" alt="Raspberry Pi single board computer with USB Edge TPU accelerator">
@ -152,9 +152,9 @@ Find comprehensive information on the [Predict](../modes/predict.md) page for fu
## FAQ
### What is a Coral Edge TPU and how does it enhance Raspberry Pi's performance with Ultralytics YOLO11?
The Coral Edge TPU is a compact device designed to add an Edge TPU coprocessor to your system. This coprocessor enables low-power, high-performance [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) inference, particularly optimized for TensorFlow Lite models. When using a Raspberry Pi, the Edge TPU accelerates ML model inference, significantly boosting performance, especially for Ultralytics YOLO11 models. You can read more about the Coral Edge TPU on their [home page](https://coral.ai/products/accelerator).
### How do I install the Coral Edge TPU runtime on a Raspberry Pi?
@ -166,9 +166,9 @@ sudo dpkg -i path/to/package.deb
Make sure to uninstall any previous Coral Edge TPU runtime versions by following the steps outlined in the [Installation Walkthrough](#installation-walkthrough) section.
### Can I export my Ultralytics YOLO11 model to be compatible with Coral Edge TPU?
Yes, you can export your Ultralytics YOLO11 model to be compatible with the Coral Edge TPU. It is recommended to perform the export on Google Colab, an x86_64 Linux machine, or using the [Ultralytics Docker container](docker-quickstart.md). You can also use Ultralytics HUB for exporting. Here is how you can export your model using Python and CLI:
!!! note "Exporting the model"
@ -208,9 +208,9 @@ pip install -U tflite-runtime
For a specific wheel, such as the TensorFlow 2.15.0 `tflite-runtime`, you can download it from [this link](https://github.com/feranick/TFlite-builds/releases) and install it using `pip`. Detailed instructions are available in the [Running the Model](#running-the-model) section.
### How do I run inference with an exported YOLO11 model on a Raspberry Pi using the Coral Edge TPU?
After exporting your YOLO11 model to an Edge TPU-compatible format, you can run inference using the following code snippets:
!!! note "Running the model"

@ -88,7 +88,7 @@ Let's say you are ready to annotate now. There are several open-source tools ava
- **[Label Studio](https://github.com/HumanSignal/label-studio)**: A flexible tool that supports a wide range of annotation tasks and includes features for managing projects and quality control.
- **[CVAT](https://github.com/cvat-ai/cvat)**: A powerful tool that supports various annotation formats and customizable workflows, making it suitable for complex projects.
- **[Labelme](https://github.com/wkentaro/labelme)**: A simple and easy-to-use tool that allows for quick annotation of images with polygons, making it ideal for straightforward tasks.
<p align="center">
<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/labelme-instance-segmentation-annotation.avif" alt="LabelMe Overview">
@ -136,12 +136,12 @@ Bouncing your ideas and queries off other [computer vision](https://www.ultralyt
### Where to Find Help and Support
- **GitHub Issues:** Visit the YOLO11 GitHub repository and use the [Issues tab](https://github.com/ultralytics/ultralytics/issues) to raise questions, report bugs, and suggest features. The community and maintainers are there to help with any issues you face.
- **Ultralytics Discord Server:** Join the [Ultralytics Discord server](https://discord.com/invite/ultralytics) to connect with other users and developers, get support, share knowledge, and brainstorm ideas.
### Official Documentation
- **Ultralytics YOLO11 Documentation:** Refer to the [official YOLO11 documentation](./index.md) for thorough guides and valuable insights on numerous computer vision tasks and projects.
## Conclusion
@ -159,7 +159,7 @@ Ensuring high consistency and accuracy in data annotation involves establishing
### How many images do I need for training Ultralytics YOLO models?
For effective [transfer learning](https://www.ultralytics.com/glossary/transfer-learning) and object detection with Ultralytics YOLO models, start with a minimum of a few hundred annotated objects per class. If training for just one class, begin with at least 100 annotated images and train for approximately 100 [epochs](https://www.ultralytics.com/glossary/epoch). More complex tasks might require thousands of images per class to achieve high reliability and performance. Quality annotations are crucial, so ensure your data collection and annotation processes are rigorous and aligned with your project's specific goals. Explore detailed training strategies in the [YOLO11 training guide](../modes/train.md).
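As a rough starting point for the single-class case above, a minimal sketch (the dataset YAML path is a placeholder):

```python
from ultralytics import YOLO

# Fine-tune a pre-trained model on a small single-class dataset
model = YOLO("yolo11n.pt")
model.train(data="path/to/dataset.yaml", epochs=100)  # ~100 epochs as suggested above
```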
### What are some popular tools for data annotation?
@ -167,7 +167,7 @@ Several popular open-source tools can streamline the data annotation process:
- **[Label Studio](https://github.com/HumanSignal/label-studio)**: A flexible tool supporting various annotation tasks, project management, and quality control features.
- **[CVAT](https://www.cvat.ai/)**: Offers multiple annotation formats and customizable workflows, making it suitable for complex projects.
- **[Labelme](https://github.com/wkentaro/labelme)**: Ideal for quick and straightforward image annotation with polygons.
These tools can help enhance the efficiency and accuracy of your annotation workflows. For extensive feature lists and guides, refer to our [data annotation tools documentation](../datasets/index.md).

@ -1,10 +1,10 @@
---
comments: true
description: Learn how to deploy Ultralytics YOLO11 on NVIDIA Jetson devices using TensorRT and DeepStream SDK. Explore performance benchmarks and maximize AI capabilities.
keywords: Ultralytics, YOLO11, NVIDIA Jetson, JetPack, AI deployment, embedded systems, deep learning, TensorRT, DeepStream SDK, computer vision
---
# Ultralytics YOLO11 on NVIDIA Jetson using DeepStream SDK and TensorRT
<p align="center">
<br>
@ -14,10 +14,10 @@ keywords: Ultralytics, YOLOv8, NVIDIA Jetson, JetPack, AI deployment, embedded s
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> How to Run Multiple Streams with DeepStream SDK on Jetson Nano using Ultralytics YOLO11
</p>
This comprehensive guide provides a detailed walkthrough for deploying Ultralytics YOLO11 on [NVIDIA Jetson](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/) devices using DeepStream SDK and TensorRT. Here we use TensorRT to maximize the inference performance on the Jetson platform.
<img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/deepstream-nvidia-jetson.avif" alt="DeepStream on NVIDIA Jetson">
@ -33,7 +33,7 @@ This comprehensive guide provides a detailed walkthrough for deploying Ultralyti
Before you start to follow this guide:
- Visit our documentation, [Quick Start Guide: NVIDIA Jetson with Ultralytics YOLO11](nvidia-jetson.md) to set up your NVIDIA Jetson device with Ultralytics YOLO11
- Install [DeepStream SDK](https://developer.nvidia.com/deepstream-getting-started) according to the JetPack version
- For JetPack 4.6.4, install [DeepStream 6.0.1](https://docs.nvidia.com/metropolis/deepstream/6.0.1/dev-guide/text/DS_Quickstart.html)
@ -43,7 +43,7 @@ Before you start to follow this guide:
In this guide, we use the Debian package method to install the DeepStream SDK on the Jetson device. You can also visit the [DeepStream SDK on Jetson (Archived)](https://developer.nvidia.com/embedded/deepstream-on-jetson-downloads-archived) to access legacy versions of DeepStream.
## DeepStream Configuration for YOLO11
Here we use the [marcoslucianops/DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo) GitHub repository, which includes NVIDIA DeepStream SDK support for YOLO models. We appreciate marcoslucianops for his contributions!
@ -61,7 +61,7 @@ Here we are using [marcoslucianops/DeepStream-Yolo](https://github.com/marcosluc
cd DeepStream-Yolo
```
3. Download the Ultralytics YOLO11 detection model (.pt) of your choice from [YOLO11 releases](https://github.com/ultralytics/assets/releases). Here we use [yolov8s.pt](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s.pt).
```bash
wget https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s.pt
@ -69,7 +69,7 @@ Here we are using [marcoslucianops/DeepStream-Yolo](https://github.com/marcosluc
!!! note
You can also use a [custom trained YOLO11 model](https://docs.ultralytics.com/modes/train/).
4. Convert model to ONNX
@ -179,7 +179,7 @@ deepstream-app -c deepstream_app_config.txt
Generating the TensorRT engine file before inference starts can take a long time, so please be patient.
<div align=center><img width=1000 src="https://github.com/ultralytics/docs/releases/download/0/yolov8-with-deepstream.avif" alt="YOLO11 with deepstream"></div>
!!! tip
@ -317,21 +317,21 @@ This guide was initially created by our friends at Seeed Studio, Lakshantha and
## FAQ
### How do I set up Ultralytics YOLO11 on an NVIDIA Jetson device?
To set up Ultralytics YOLO11 on an [NVIDIA Jetson](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/) device, you first need to install the [DeepStream SDK](https://developer.nvidia.com/deepstream-getting-started) compatible with your JetPack version. Follow the step-by-step guide in our [Quick Start Guide](nvidia-jetson.md) to configure your NVIDIA Jetson for YOLO11 deployment.
### What is the benefit of using TensorRT with YOLO11 on NVIDIA Jetson?
Using TensorRT with YOLO11 optimizes the model for inference, significantly reducing latency and improving throughput on NVIDIA Jetson devices. TensorRT provides high-performance, low-latency [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) inference through layer fusion, precision calibration, and kernel auto-tuning. This leads to faster and more efficient execution, particularly useful for real-time applications like video analytics and autonomous machines.
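As a minimal sketch of this optimization path via the Ultralytics export API (`format="engine"` is the documented TensorRT target; `half=True` requests the FP16 precision discussed in the benchmarks below):

```python
from ultralytics import YOLO

# Export a PyTorch model to a TensorRT engine with FP16 precision
model = YOLO("yolo11n.pt")
model.export(format="engine", half=True)

# Reload the optimized engine and run inference
trt_model = YOLO("yolo11n.engine")
results = trt_model.predict("https://ultralytics.com/images/bus.jpg")
```

Note that the DeepStream flow in this guide instead builds its TensorRT engine from the ONNX model; the sketch above shows the direct Ultralytics export route.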
### Can I run Ultralytics YOLO11 with DeepStream SDK across different NVIDIA Jetson hardware?
Yes, the guide for deploying Ultralytics YOLO11 with the DeepStream SDK and TensorRT is compatible across the entire NVIDIA Jetson lineup. This includes devices like the Jetson Orin NX 16GB with [JetPack 5.1.3](https://developer.nvidia.com/embedded/jetpack-sdk-513) and the Jetson Nano 4GB with [JetPack 4.6.4](https://developer.nvidia.com/jetpack-sdk-464). Refer to the section [DeepStream Configuration for YOLO11](#deepstream-configuration-for-yolo11) for detailed steps.
### How can I convert a YOLO11 model to ONNX for DeepStream?
To convert a YOLO11 model to ONNX format for deployment with DeepStream, use the `utils/export_yoloV8.py` script from the [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo) repository.
Here's an example command:
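```bash
python3 utils/export_yoloV8.py -w yolov8s.pt --opset 12 --simplify
```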
@ -341,12 +341,12 @@ python3 utils/export_yoloV8.py -w yolov8s.pt --opset 12 --simplify
For more details on model conversion, check out our [model export section](../modes/export.md).
### What are the performance benchmarks for YOLO on NVIDIA Jetson Orin NX?
The performance of YOLO models on NVIDIA Jetson Orin NX 16GB varies based on TensorRT precision levels. For example, YOLOv8s models achieve:
- **FP32 Precision**: 15.63 ms/im, 64 FPS
- **FP16 Precision**: 7.94 ms/im, 126 FPS
- **INT8 Precision**: 5.53 ms/im, 181 FPS
These benchmarks underscore the efficiency and capability of using TensorRT-optimized YOLO11 models on NVIDIA Jetson hardware. For further details, see our [Benchmark Results](#benchmark-results) section.

@ -1,7 +1,7 @@
---
comments: true
description: Learn how to define clear goals and objectives for your computer vision project with our practical guide. Includes tips on problem statements, measurable objectives, and key decisions.
keywords: computer vision, project planning, problem statement, measurable objectives, dataset preparation, model selection, YOLO11, Ultralytics
---
# A Practical Guide for Defining Your [Computer Vision](https://www.ultralytics.com/glossary/computer-vision-cv) Project
@ -30,7 +30,7 @@ Let's walk through an example.
Consider a computer vision project where you want to [estimate the speed of vehicles](./speed-estimation.md) on a highway. The core issue is that current speed monitoring methods are inefficient and error-prone due to outdated radar systems and manual processes. The project aims to develop a real-time computer vision system that can replace legacy [speed estimation](https://www.ultralytics.com/blog/ultralytics-yolov8-for-speed-estimation-in-computer-vision-projects) systems.
<p align="center">
<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/speed-estimation-using-yolov8.avif" alt="Speed Estimation Using YOLOv8">
<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/speed-estimation-using-yolov8.avif" alt="Speed Estimation Using YOLO11">
</p>
Primary users include traffic management authorities and law enforcement, while secondary stakeholders are highway planners and the public benefiting from safer roads. Key requirements involve evaluating budget, time, and personnel, as well as addressing technical needs like high-resolution cameras and real-time data processing. Additionally, regulatory constraints on privacy and [data security](https://www.ultralytics.com/glossary/data-security) must be considered.
@ -85,7 +85,7 @@ The most popular computer vision tasks include [image classification](https://ww
<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/image-classification-vs-object-detection-vs-image-segmentation.avif" alt="Overview of Computer Vision Tasks">
</p>
For a detailed explanation of various tasks, please take a look at the Ultralytics Docs page on [YOLO11 Tasks](../tasks/index.md).
### Can a Pre-trained Model Remember Classes It Knew Before Custom Training?
@ -114,12 +114,12 @@ Connecting with other computer vision enthusiasts can be incredibly helpful for
### Community Support Channels
- **GitHub Issues:** Head over to the YOLO11 GitHub repository. You can use the [Issues tab](https://github.com/ultralytics/ultralytics/issues) to raise questions, report bugs, and suggest features. The community and maintainers can assist with specific problems you encounter.
- **Ultralytics Discord Server:** Become part of the [Ultralytics Discord server](https://discord.com/invite/ultralytics). Connect with fellow users and developers, seek support, exchange knowledge, and discuss ideas.
### Comprehensive Guides and Documentation
- **Ultralytics YOLO11 Documentation:** Explore the [official YOLO11 documentation](./index.md) for in-depth guides and valuable tips on various computer vision tasks and projects.
## Conclusion
@ -138,11 +138,11 @@ To define a clear problem statement for your Ultralytics computer vision project
Providing a well-defined problem statement ensures that the project remains focused and aligned with your objectives. For a detailed guide, refer to our [practical guide](#defining-a-clear-problem-statement).
### Why should I use Ultralytics YOLO11 for speed estimation in my computer vision project?
Ultralytics YOLO11 is ideal for speed estimation because of its real-time object tracking capabilities, high accuracy, and robust performance in detecting and monitoring vehicle speeds. It overcomes inefficiencies and inaccuracies of traditional radar systems by leveraging cutting-edge computer vision technology. Check out our blog on [speed estimation using YOLO11](https://www.ultralytics.com/blog/ultralytics-yolov8-for-speed-estimation-in-computer-vision-projects) for more insights and practical examples.
### How do I set effective measurable objectives for my computer vision project with Ultralytics YOLO11?
Set effective and measurable objectives using the SMART criteria:

@ -1,14 +1,14 @@
---
comments: true
description: Learn how to calculate distances between objects using Ultralytics YOLO11 for accurate spatial positioning and scene understanding.
keywords: Ultralytics, YOLO11, distance calculation, computer vision, object tracking, spatial positioning
---
# Distance Calculation using Ultralytics YOLO11
## What is Distance Calculation?
Distance calculation measures the gap between two objects within a specified space. In the case of [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics), the [bounding box](https://www.ultralytics.com/glossary/bounding-box) centroid is used to calculate the distance between the bounding boxes highlighted by the user.
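Conceptually, the distance is simply the Euclidean distance between the two box centroids, as in this illustrative helper (not part of the Ultralytics API):

```python
import math


def centroid(box):
    """Return the (x, y) center of an xyxy bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)


def centroid_distance(box1, box2):
    """Euclidean pixel distance between two bounding box centroids."""
    (cx1, cy1), (cx2, cy2) = centroid(box1), centroid(box2)
    return math.hypot(cx2 - cx1, cy2 - cy1)
```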
<p align="center">
<br>
@ -18,14 +18,14 @@ Measuring the gap between two objects is known as distance calculation within a
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Distance Calculation using Ultralytics YOLO11
</p>
## Visuals
| Distance Calculation using Ultralytics YOLO11 |
| :---------------------------------------------------------------------------------------------------------------------------: |
| ![Ultralytics YOLO11 Distance Calculation](https://github.com/ultralytics/docs/releases/download/0/distance-calculation.avif) |
## Advantages of Distance Calculation
@ -36,7 +36,7 @@ Measuring the gap between two objects is known as distance calculation within a
- Left-click any two bounding boxes to calculate the distance between them
!!! example "Distance Calculation using YOLOv8 Example"
!!! example "Distance Calculation using YOLO11 Example"
=== "Video Stream"
@ -45,7 +45,7 @@ Measuring the gap between two objects is known as distance calculation within a
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
names = model.model.names
cap = cv2.VideoCapture("path/to/video/file.mp4")
@ -98,29 +98,29 @@ Measuring the gap between two objects is known as distance calculation within a
## FAQ
### How do I calculate distances between objects using Ultralytics YOLO11?
To calculate distances between objects using [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics), you need to identify the bounding box centroids of the detected objects. This process involves initializing the `DistanceCalculation` class from Ultralytics' `solutions` module and using the model's tracking outputs to calculate the distances. You can refer to the implementation in the [distance calculation example](#distance-calculation-using-ultralytics-yolo11).
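A minimal sketch of that flow (the `start_process` method name and the video path are assumptions for illustration):

```python
import cv2

from ultralytics import YOLO, solutions

model = YOLO("yolo11n.pt")
names = model.model.names

cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"

# Initialize the solution with the key arguments described below
dist_obj = solutions.DistanceCalculation(names=names, view_img=True)

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break
    # Track objects, then compute distances between user-selected boxes
    tracks = model.track(im0, persist=True, show=False)
    im0 = dist_obj.start_process(im0, tracks)

cap.release()
cv2.destroyAllWindows()
```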
### What are the advantages of using distance calculation with Ultralytics YOLO11?
Using distance calculation with Ultralytics YOLO11 offers several advantages:
- **Localization Precision:** Provides accurate spatial positioning for objects.
- **Size Estimation:** Helps estimate physical sizes, contributing to better contextual understanding.
- **Scene Understanding:** Enhances 3D scene comprehension, aiding improved decision-making in applications like autonomous driving and surveillance.
### Can I perform distance calculation in real-time video streams with Ultralytics YOLO11?
Yes, you can perform distance calculation in real-time video streams with Ultralytics YOLO11. The process involves capturing video frames using [OpenCV](https://www.ultralytics.com/glossary/opencv), running YOLO11 [object detection](https://www.ultralytics.com/glossary/object-detection), and using the `DistanceCalculation` class to calculate distances between objects in successive frames. For a detailed implementation, see the [video stream example](#distance-calculation-using-ultralytics-yolo11).
### How do I delete points drawn during distance calculation using Ultralytics YOLO11?
To delete points drawn during distance calculation with Ultralytics YOLO11, you can use a right mouse click. This action will clear all the points you have drawn. For more details, refer to the note section under the [distance calculation example](#distance-calculation-using-ultralytics-yolo11).
### What are the key arguments for initializing the DistanceCalculation class in Ultralytics YOLO11?
The key arguments for initializing the `DistanceCalculation` class in Ultralytics YOLO11 include:
- `names`: Dictionary mapping class indices to class names.
- `view_img`: Flag to indicate if the video stream should be displayed.

@ -197,10 +197,10 @@ Setup and configuration of an X11 or Wayland display server is outside the scope
### Using Docker with a GUI
Now you can display graphical applications inside your Docker container. For example, you can run the following [CLI command](../usage/cli.md) to visualize the [predictions](../modes/predict.md) from a [YOLO11 model](../models/yolo11.md):
```bash
yolo predict model=yolo11n.pt show=True
```
??? info "Testing"

@ -1,14 +1,14 @@
---
comments: true
description: Transform complex data into insightful heatmaps using Ultralytics YOLO11. Discover patterns, trends, and anomalies with vibrant visualizations.
keywords: Ultralytics, YOLO11, heatmaps, data visualization, data analysis, complex data, patterns, trends, anomalies
---
# Advanced [Data Visualization](https://www.ultralytics.com/glossary/data-visualization): Heatmaps using Ultralytics YOLO11 🚀
## Introduction to Heatmaps
A heatmap generated with [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics/) transforms complex data into a vibrant, color-coded matrix. This visual tool employs a spectrum of colors to represent varying data values, where warmer hues indicate higher intensities and cooler tones signify lower values. Heatmaps excel in visualizing intricate data patterns, correlations, and anomalies, offering an accessible and engaging approach to data interpretation across diverse domains.
<p align="center">
<br>
@ -18,7 +18,7 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Heatmaps using Ultralytics YOLO11
</p>
## Why Choose Heatmaps for Data Analysis?
@ -31,15 +31,15 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
| Transportation | Retail |
| :--------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------------: |
| ![Ultralytics YOLO11 Transportation Heatmap](https://github.com/ultralytics/docs/releases/download/0/ultralytics-yolov8-transportation-heatmap.avif) | ![Ultralytics YOLO11 Retail Heatmap](https://github.com/ultralytics/docs/releases/download/0/ultralytics-yolov8-retail-heatmap.avif) |
| Ultralytics YOLO11 Transportation Heatmap | Ultralytics YOLO11 Retail Heatmap |
!!! tip "Heatmap Configuration"
- `heatmap_alpha`: Ensure this value is within the range (0.0 - 1.0).
- `decay_factor`: Used to fade out the heatmap once an object is no longer in the frame; its value should also be in the range (0.0 - 1.0) (see the sketch below).
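A minimal sketch combining these two arguments with the initialization pattern used in the examples below (values are illustrative):

```python
import cv2

from ultralytics import YOLO, solutions

model = YOLO("yolo11n.pt")

# Both values below must lie within (0.0 - 1.0), per the tip above
heatmap_obj = solutions.Heatmap(
    colormap=cv2.COLORMAP_PARULA,
    names=model.names,
    heatmap_alpha=0.5,  # overlay transparency
    decay_factor=0.99,  # fade-out rate once an object leaves the frame
)
```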
!!! example "Heatmaps using Ultralytics YOLOv8 Example"
!!! example "Heatmaps using Ultralytics YOLO11 Example"
=== "Heatmap"
@ -48,7 +48,7 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
@ -86,7 +86,7 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
@ -127,7 +127,7 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
@ -169,7 +169,7 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
@ -211,7 +211,7 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
from ultralytics import YOLO, solutions
model = YOLO("yolov8s.pt") # YOLOv8 custom/pretrained model
model = YOLO("yolo11n.pt") # YOLO11 custom/pretrained model
im0 = cv2.imread("path/to/image.png") # path to image file
h, w = im0.shape[:2] # image height and width
@ -236,7 +236,7 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
@ -326,20 +326,20 @@ These colormaps are commonly used for visualizing data with different color repr
## FAQ
### How does Ultralytics YOLO11 generate heatmaps and what are their benefits?
Ultralytics YOLO11 generates heatmaps by transforming complex data into a color-coded matrix where different hues represent data intensities. Heatmaps make it easier to visualize patterns, correlations, and anomalies in the data. Warmer hues indicate higher values, while cooler tones represent lower values. The primary benefits include intuitive visualization of data distribution, efficient pattern detection, and enhanced spatial analysis for decision-making. For more details and configuration options, refer to the [Heatmap Configuration](#arguments-heatmap) section.
### Can I use Ultralytics YOLO11 to perform object tracking and generate a heatmap simultaneously?
Yes, Ultralytics YOLO11 supports object tracking and heatmap generation concurrently. This can be achieved through its `Heatmap` solution integrated with object tracking models. To do so, you need to initialize the heatmap object and use YOLO11's tracking capabilities. Here's a simple example:
```python
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
heatmap_obj = solutions.Heatmap(colormap=cv2.COLORMAP_PARULA, view_img=True, shape="circle", names=model.names)
@ -359,11 +359,11 @@ cv2.destroyAllWindows()
For further guidance, check the [Tracking Mode](../modes/track.md) page.
### What makes Ultralytics YOLO11 heatmaps different from other data visualization tools like those from [OpenCV](https://www.ultralytics.com/glossary/opencv) or Matplotlib?
Ultralytics YOLO11 heatmaps are specifically designed for integration with its [object detection](https://www.ultralytics.com/glossary/object-detection) and tracking models, providing an end-to-end solution for real-time data analysis. Unlike generic visualization tools like OpenCV or Matplotlib, YOLO11 heatmaps are optimized for performance and automated processing, supporting features like persistent tracking, decay factor adjustment, and real-time video overlay. For more information on YOLO11's unique features, visit the [Ultralytics YOLO11 Introduction](https://www.ultralytics.com/blog/introducing-ultralytics-yolov8).
### How can I visualize only specific object classes in heatmaps using Ultralytics YOLO11?
You can visualize specific object classes by specifying the desired classes in the `track()` method of the YOLO model. For instance, if you only want to visualize cars and persons (assuming their class indices are 0 and 2), you can set the `classes` parameter accordingly.
@ -372,7 +372,7 @@ import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
heatmap_obj = solutions.Heatmap(colormap=cv2.COLORMAP_PARULA, view_img=True, shape="circle", names=model.names)
@ -391,6 +391,6 @@ cap.release()
cv2.destroyAllWindows()
```
### Why should businesses choose Ultralytics YOLO11 for heatmap generation in data analysis?
Ultralytics YOLO11 offers seamless integration of advanced object detection and real-time heatmap generation, making it an ideal choice for businesses looking to visualize data more effectively. The key advantages include intuitive data distribution visualization, efficient pattern detection, and enhanced spatial analysis for better decision-making. Additionally, YOLO11's cutting-edge features such as persistent tracking, customizable colormaps, and support for various export formats make it superior to other tools like [TensorFlow](https://www.ultralytics.com/glossary/tensorflow) and OpenCV for comprehensive data analysis. Learn more about business applications at [Ultralytics Plans](https://www.ultralytics.com/plans).

@ -23,7 +23,7 @@ Hyperparameters are high-level, structural settings for the algorithm. They are
<img width="640" src="https://github.com/ultralytics/docs/releases/download/0/hyperparameter-tuning-visual.avif" alt="Hyperparameter Tuning Visual">
</p>
For a full list of augmentation hyperparameters used in YOLO11, please refer to the [configurations page](../usage/cfg.md#augmentation-settings).
### Genetic Evolution and Mutation
@ -67,7 +67,7 @@ The process is repeated until either the set number of iterations is reached or
## Usage Example
Here's how to use the `model.tune()` method to run the `Tuner` class for hyperparameter tuning of YOLO11n on COCO8 for 30 epochs with an AdamW optimizer, skipping plotting, checkpointing, and validation except on the final epoch for faster tuning.
!!! example
@ -77,7 +77,7 @@ Here's how to use the `model.tune()` method to utilize the `Tuner` class for hyp
from ultralytics import YOLO
# Initialize the YOLO model
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
# Tune hyperparameters on COCO8 for 30 epochs
model.tune(data="coco8.yaml", epochs=30, iterations=300, optimizer="AdamW", plots=False, save=False, val=False)
@ -202,7 +202,7 @@ The hyperparameter tuning process in Ultralytics YOLO is simplified yet powerful
1. [Hyperparameter Optimization in Wikipedia](https://en.wikipedia.org/wiki/Hyperparameter_optimization)
2. [YOLOv5 Hyperparameter Evolution Guide](../yolov5/tutorials/hyperparameter_evolution.md)
3. [Efficient Hyperparameter Tuning with Ray Tune and YOLO11](../integrations/ray-tune.md)
For deeper insights, you can explore the `Tuner` class source code and accompanying documentation. Should you have any questions, feature requests, or need further assistance, feel free to reach out to us on [GitHub](https://github.com/ultralytics/ultralytics/issues/new/choose) or [Discord](https://discord.com/invite/ultralytics).
@ -220,7 +220,7 @@ To optimize the learning rate for Ultralytics YOLO, start by setting an initial
from ultralytics import YOLO
# Initialize the YOLO model
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
# Tune hyperparameters on COCO8 for 30 epochs
model.tune(data="coco8.yaml", epochs=30, iterations=300, optimizer="AdamW", plots=False, save=False, val=False)
@ -228,9 +228,9 @@ To optimize the learning rate for Ultralytics YOLO, start by setting an initial
For more details, check the [Ultralytics YOLO configuration page](../usage/cfg.md#augmentation-settings).
### What are the benefits of using genetic algorithms for hyperparameter tuning in YOLO11?
Genetic algorithms in Ultralytics YOLO11 provide a robust method for exploring the hyperparameter space, leading to highly optimized model performance. Key benefits include:
- **Efficient Search**: Genetic algorithms like mutation can quickly explore a large set of hyperparameters.
- **Avoiding Local Minima**: By introducing randomness, they help in avoiding local minima, ensuring better global optimization.
@ -240,7 +240,7 @@ To see how genetic algorithms can optimize hyperparameters, check out the [hyper
### How long does the hyperparameter tuning process take for Ultralytics YOLO?
The time required for hyperparameter tuning with Ultralytics YOLO largely depends on several factors such as the size of the dataset, the complexity of the model architecture, the number of iterations, and the computational resources available. For instance, tuning YOLO11n on a dataset like COCO8 for 30 epochs might take several hours to days, depending on the hardware.
To effectively manage tuning time, define a clear tuning budget beforehand ([internal section link](#preparing-for-hyperparameter-tuning)). This helps in balancing resource allocation and optimization goals.

@ -18,7 +18,7 @@ Whether you're a beginner or an expert in [deep learning](https://www.ultralytic
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Ultralytics YOLO11 Guides Overview
</p>
## Guides
@ -30,14 +30,14 @@ Here's a compilation of in-depth guides to help you master different aspects of
- [Model Deployment Options](model-deployment-options.md): Overview of YOLO [model deployment](https://www.ultralytics.com/glossary/model-deployment) formats like ONNX, OpenVINO, and TensorRT, with pros and cons for each to inform your deployment strategy.
- [K-Fold Cross Validation](kfold-cross-validation.md) 🚀 NEW: Learn how to improve model generalization using K-Fold cross-validation technique.
- [Hyperparameter Tuning](hyperparameter-tuning.md) 🚀 NEW: Discover how to optimize your YOLO models by fine-tuning hyperparameters using the Tuner class and genetic evolution algorithms.
- [SAHI Tiled Inference](sahi-tiled-inference.md) 🚀 NEW: Comprehensive guide on leveraging SAHI's sliced inference capabilities with YOLO11 for object detection in high-resolution images.
- [AzureML Quickstart](azureml-quickstart.md) 🚀 NEW: Get up and running with Ultralytics YOLO models on Microsoft's Azure [Machine Learning](https://www.ultralytics.com/glossary/machine-learning-ml) platform. Learn how to train, deploy, and scale your object detection projects in the cloud.
- [Conda Quickstart](conda-quickstart.md) 🚀 NEW: Step-by-step guide to setting up a [Conda](https://anaconda.org/conda-forge/ultralytics) environment for Ultralytics. Learn how to install and start using the Ultralytics package efficiently with Conda.
- [Docker Quickstart](docker-quickstart.md) 🚀 NEW: Complete guide to setting up and using Ultralytics YOLO models with [Docker](https://hub.docker.com/r/ultralytics/ultralytics). Learn how to install Docker, manage GPU support, and run YOLO models in isolated containers for consistent development and deployment.
- [Raspberry Pi](raspberry-pi.md) 🚀 NEW: Quickstart tutorial to run YOLO models on the latest Raspberry Pi hardware.
- [NVIDIA Jetson](nvidia-jetson.md) 🚀 NEW: Quickstart guide for deploying YOLO models on NVIDIA Jetson devices.
- [DeepStream on NVIDIA Jetson](deepstream-nvidia-jetson.md) 🚀 NEW: Quickstart guide for deploying YOLO models on NVIDIA Jetson devices using DeepStream and TensorRT.
- [Triton Inference Server Integration](triton-inference-server.md) 🚀 NEW: Dive into the integration of Ultralytics YOLO11 with NVIDIA's Triton Inference Server for scalable and efficient deep learning inference deployments.
- [YOLO Thread-Safe Inference](yolo-thread-safe-inference.md) 🚀 NEW: Guidelines for performing inference with YOLO models in a thread-safe manner. Learn the importance of thread safety and best practices to prevent race conditions and ensure consistent predictions.
- [Isolating Segmentation Objects](isolating-segmentation-objects.md) 🚀 NEW: Step-by-step recipe and explanation on how to extract and/or isolate objects from images using Ultralytics Segmentation.
- [Edge TPU on Raspberry Pi](coral-edge-tpu-on-raspberry-pi.md): [Google Edge TPU](https://coral.ai/products/accelerator) accelerates YOLO inference on [Raspberry Pi](https://www.raspberrypi.com/).
@ -46,7 +46,7 @@ Here's a compilation of in-depth guides to help you master different aspects of
- [Steps of a Computer Vision Project](steps-of-a-cv-project.md) 🚀 NEW: Learn about the key steps involved in a computer vision project, including defining goals, selecting models, preparing data, and evaluating results.
- [Defining A Computer Vision Project's Goals](defining-project-goals.md) 🚀 NEW: Walk through how to effectively define clear and measurable goals for your computer vision project. Learn the importance of a well-defined problem statement and how it creates a roadmap for your project.
- [Data Collection and Annotation](data-collection-and-annotation.md) 🚀 NEW: Explore the tools, techniques, and best practices for collecting and annotating data to create high-quality inputs for your computer vision models.
- [Preprocessing Annotated Data](preprocessing_annotated_data.md) 🚀 NEW: Learn about preprocessing and augmenting image data in computer vision projects using YOLO11, including normalization, dataset augmentation, splitting, and exploratory data analysis (EDA).
- [Tips for Model Training](model-training-tips.md) 🚀 NEW: Explore tips on optimizing [batch sizes](https://www.ultralytics.com/glossary/batch-size), using [mixed precision](https://www.ultralytics.com/glossary/mixed-precision), applying pre-trained weights, and more to make training your computer vision model a breeze.
- [Insights on Model Evaluation and Fine-Tuning](model-evaluation-insights.md) 🚀 NEW: Gain insights into the strategies and best practices for evaluating and fine-tuning your computer vision models. Learn about the iterative process of refining models to achieve optimal results.
- [A Guide on Model Testing](model-testing.md) 🚀 NEW: A thorough guide on testing your computer vision models in realistic settings. Learn how to verify accuracy, reliability, and performance in line with project goals.
@ -75,14 +75,14 @@ Training a custom object detection model with Ultralytics YOLO is straightforwar
```python
from ultralytics import YOLO
model = YOLO("yolov8s.pt") # Load a pre-trained YOLO model
model = YOLO("yolo11n.pt") # Load a pre-trained YOLO model
model.train(data="path/to/dataset.yaml", epochs=50) # Train on custom dataset
```
=== "CLI"
```bash
yolo task=detect mode=train model=yolo11n.pt data=path/to/dataset.yaml epochs=50
```
For detailed dataset formatting and additional options, refer to our [Tips for Model Training](model-training-tips.md) guide.

@ -1,14 +1,14 @@
---
comments: true
description: Master instance segmentation and tracking with Ultralytics YOLO11. Learn techniques for precise object identification and tracking.
keywords: instance segmentation, tracking, YOLO11, Ultralytics, object detection, machine learning, computer vision, python
---
# Instance Segmentation and Tracking using Ultralytics YOLO11 🚀
## What is [Instance Segmentation](https://www.ultralytics.com/glossary/instance-segmentation)?
[Ultralytics YOLO11](https://github.com/ultralytics/ultralytics/) instance segmentation involves identifying and outlining individual objects in an image, providing a detailed understanding of spatial distribution. Unlike [semantic segmentation](https://www.ultralytics.com/glossary/semantic-segmentation), it uniquely labels and precisely delineates each object, crucial for tasks like [object detection](https://www.ultralytics.com/glossary/object-detection) and medical imaging.
There are two types of instance segmentation tracking available in the Ultralytics package:
@ -24,7 +24,7 @@ There are two types of instance segmentation tracking available in the Ultralyti
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Instance Segmentation with Object Tracking using Ultralytics YOLO11
</p>
## Samples
@ -44,7 +44,7 @@ There are two types of instance segmentation tracking available in the Ultralyti
from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors
model = YOLO("yolov8n-seg.pt") # segmentation model
model = YOLO("yolo11n-seg.pt") # segmentation model
names = model.model.names
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
@ -91,7 +91,7 @@ There are two types of instance segmentation tracking available in the Ultralyti
track_history = defaultdict(lambda: [])
model = YOLO("yolov8n-seg.pt") # segmentation model
model = YOLO("yolo11n-seg.pt") # segmentation model
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
@ -142,9 +142,9 @@ For any inquiries, feel free to post your questions in the [Ultralytics Issue Se
## FAQ
### How do I perform instance segmentation using Ultralytics YOLO11?
To perform instance segmentation using Ultralytics YOLO11, initialize the YOLO model with a segmentation version of YOLO11 and process video frames through it. Here's a simplified code example:
!!! example
@ -156,7 +156,7 @@ To perform instance segmentation using Ultralytics YOLOv8, initialize the YOLO m
from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors
model = YOLO("yolov8n-seg.pt") # segmentation model
model = YOLO("yolo11n-seg.pt") # segmentation model
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
@ -186,17 +186,17 @@ To perform instance segmentation using Ultralytics YOLOv8, initialize the YOLO m
cv2.destroyAllWindows()
```
Learn more about instance segmentation in the [Ultralytics YOLO11 guide](#what-is-instance-segmentation).
### What is the difference between instance segmentation and object tracking in Ultralytics YOLO11?
Instance segmentation identifies and outlines individual objects within an image, giving each object a unique label and mask. Object tracking extends this by assigning consistent labels to objects across video frames, facilitating continuous tracking of the same objects over time. Learn more about the distinctions in the [Ultralytics YOLO11 documentation](#samples).
### Why should I use Ultralytics YOLO11 for instance segmentation and tracking over other models like Mask R-CNN or Faster R-CNN?
Ultralytics YOLO11 offers real-time performance, superior [accuracy](https://www.ultralytics.com/glossary/accuracy), and ease of use compared to other models like Mask R-CNN or Faster R-CNN. YOLO11 provides seamless integration with Ultralytics HUB, allowing users to manage models, datasets, and training pipelines efficiently. Discover more about the benefits of YOLO11 in the [Ultralytics blog](https://www.ultralytics.com/blog/introducing-ultralytics-yolov8).
### How can I implement object tracking using Ultralytics YOLO11?
To implement object tracking, use the `model.track` method and ensure that each object's ID is consistently assigned across frames. Below is a simple example:
@ -214,7 +214,7 @@ To implement object tracking, use the `model.track` method and ensure that each
track_history = defaultdict(lambda: [])
model = YOLO("yolov8n-seg.pt") # segmentation model
model = YOLO("yolo11n-seg.pt") # segmentation model
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
@ -247,6 +247,6 @@ To implement object tracking, use the `model.track` method and ensure that each
Find more in the [Instance Segmentation and Tracking section](#samples).
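If the abbreviated context above is hard to follow, here is a compact, self-contained sketch that uses the built-in `results[0].plot()` helper to draw masks, boxes, and track IDs instead of manual `Annotator` calls; the video path is a placeholder:

```python
import cv2

from ultralytics import YOLO

model = YOLO("yolo11n-seg.pt")  # segmentation model
cap = cv2.VideoCapture("path/to/video/file.mp4")

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    results = model.track(frame, persist=True)  # persist=True keeps IDs consistent across frames
    annotated = results[0].plot()  # draws masks, boxes, and track IDs
    cv2.imshow("instance-segmentation-tracking", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```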
### Are there any datasets provided by Ultralytics suitable for training YOLO11 models for instance segmentation and tracking?
Yes, Ultralytics offers several datasets suitable for training YOLO11 models, including segmentation and tracking datasets. Dataset examples, structures, and instructions for use can be found in the [Ultralytics Datasets documentation](https://docs.ultralytics.com/datasets/).

@ -1,7 +1,7 @@
---
comments: true
description: Learn to extract isolated objects from inference results using Ultralytics Predict Mode. Step-by-step guide for segmentation object isolation.
keywords: Ultralytics, segmentation, object isolation, Predict Mode, YOLO11, machine learning, object detection, binary mask, image processing
---
# Isolating Segmentation Objects
@ -24,7 +24,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt")
model = YOLO("yolo11n-seg.pt")
# Run inference
results = model.predict()
@ -141,7 +141,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
=== "Black Background Pixels"
```python
# Create 3-channel mask
mask3ch = cv2.cvtColor(b_mask, cv2.COLOR_GRAY2BGR)
@ -192,7 +192,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
=== "Transparent Background Pixels"
```python
# Isolate object with transparent background (when saved as PNG)
isolated = np.dstack([img, b_mask])
```
@ -248,7 +248,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
??? example "Example Final Step"
```python
# Save isolated object to file
_ = cv2.imwrite(f"{img_name}_{label}-{ci}.png", iso_crop)
```
@ -267,7 +267,7 @@ import numpy as np
from ultralytics import YOLO
m = YOLO("yolov8n-seg.pt") # (4)!
m = YOLO("yolo11n-seg.pt") # (4)!
res = m.predict() # (3)!
# Iterate detection results (5)
@ -310,16 +310,16 @@ for r in res:
## FAQ
### How do I isolate objects using Ultralytics YOLO11 for segmentation tasks?
To isolate objects using Ultralytics YOLO11, follow these steps:
1. **Load the model and run inference:**
```python
from ultralytics import YOLO
model = YOLO("yolov8n-seg.pt")
model = YOLO("yolo11n-seg.pt")
results = model.predict(source="path/to/your/image.jpg")
```
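Putting the remaining steps together, a rough end-to-end sketch of the recipe looks like this (variable names such as `b_mask` mirror the snippets above; the output filename scheme is illustrative):

```python
import cv2
import numpy as np

from ultralytics import YOLO

model = YOLO("yolo11n-seg.pt")
results = model.predict(source="path/to/your/image.jpg")

for r in results:
    img = np.copy(r.orig_img)
    for ci, c in enumerate(r):  # iterate each detected instance
        label = c.names[int(c.boxes.cls.tolist().pop())]
        # Rasterize this instance's contour into a binary mask
        b_mask = np.zeros(img.shape[:2], np.uint8)
        contour = c.masks.xy.pop().astype(np.int32).reshape(-1, 1, 2)
        cv2.drawContours(b_mask, [contour], -1, (255, 255, 255), cv2.FILLED)
        # Keep only the masked pixels (black background elsewhere)
        mask3ch = cv2.cvtColor(b_mask, cv2.COLOR_GRAY2BGR)
        isolated = cv2.bitwise_and(mask3ch, img)
        cv2.imwrite(f"isolated_{label}_{ci}.png", isolated)
```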
@ -345,7 +345,7 @@ Refer to the guide on [Predict Mode](../modes/predict.md) and the [Segment Task]
### What options are available for saving the isolated objects after segmentation?
Ultralytics YOLO11 offers two main options for saving isolated objects:
1. **With a Black Background:**
@ -361,7 +361,7 @@ Ultralytics YOLOv8 offers two main options for saving isolated objects:
For further details, visit the [Predict Mode](../modes/predict.md) section.
### How can I crop isolated objects to their bounding boxes using Ultralytics YOLO11?
To crop isolated objects to their bounding boxes:
@ -378,9 +378,9 @@ To crop isolated objects to their bounding boxes:
Learn more about bounding box results in the [Predict Mode](../modes/predict.md#boxes) documentation.
### Why should I use Ultralytics YOLO11 for object isolation in segmentation tasks?
Ultralytics YOLO11 provides:
- **High-speed** real-time object detection and segmentation.
- **Accurate bounding box and mask generation** for precise object isolation.
@ -388,9 +388,9 @@ Ultralytics YOLOv8 provides:
Explore the benefits of using YOLO in the [Segment Task documentation](../tasks/segment.md).
### Can I save isolated objects including the background using Ultralytics YOLO11?
Yes, this is a built-in feature in Ultralytics YOLO11. Use the `save_crop` argument in the `predict()` method. For example:
```python
results = model.predict(source="path/to/your/image.jpg", save_crop=True)

@ -1,26 +1,26 @@
---
comments: true
description: Learn about YOLO11's diverse deployment options to maximize your model's performance. Explore PyTorch, TensorRT, OpenVINO, TF Lite, and more!
keywords: YOLO11, deployment options, export formats, PyTorch, TensorRT, OpenVINO, TF Lite, machine learning, model deployment
---
# Understanding YOLO11's Deployment Options
## Introduction
You've come a long way on your journey with YOLO11. You've diligently collected data, meticulously annotated it, and put in the hours to train and rigorously evaluate your custom YOLO11 model. Now, it's time to put your model to work for your specific application, use case, or project. But there's a critical decision that stands before you: how to export and deploy your model effectively.
This guide walks you through YOLO11's deployment options and the essential factors to consider to choose the right option for your project.
## How to Select the Right Deployment Option for Your YOLO11 Model
When it's time to deploy your YOLO11 model, selecting a suitable export format is very important. As outlined in the [Ultralytics YOLO11 Modes documentation](../modes/export.md#usage-examples), the `model.export()` function allows for converting your trained model into a variety of formats tailored to diverse environments and performance requirements.
The ideal format depends on your model's intended operational context, balancing speed, hardware constraints, and ease of integration. In the following section, we'll take a closer look at each export option, understanding when to choose each one.
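As a quick orientation, the export call itself is a one-liner; the sketch below assumes a pre-trained nano model, and the format strings follow the export-formats table linked above:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")  # a trained or pre-trained model
model.export(format="onnx")  # other options include "torchscript", "openvino", "engine", "tflite"
```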
### YOLO11's Deployment Options
Let's walk through the different YOLO11 deployment options. For a detailed walkthrough of the export process, visit the [Ultralytics documentation page on exporting](../modes/export.md).
#### PyTorch
@ -258,9 +258,9 @@ NCNN is a high-performance neural network inference framework optimized for the
- **Hardware Acceleration**: Tailored for ARM CPUs and GPUs, with specific optimizations for these architectures.
## Comparative Analysis of YOLO11 Deployment Options
The following table provides a snapshot of the various deployment options available for YOLO11 models, helping you to assess which may best fit your project needs based on several critical criteria. For an in-depth look at each deployment option's format, please see the [Ultralytics documentation page on export formats](../modes/export.md#export-formats).
| Deployment Option | Performance Benchmarks | Compatibility and Integration | Community Support and Ecosystem | Case Studies | Maintenance and Updates | Security Considerations | Hardware Acceleration |
| ----------------- | ----------------------------------------------- | ---------------------------------------------- | --------------------------------------------- | ------------------------------------------ | ------------------------------------------- | ------------------------------------------------- | ---------------------------------- |
@ -282,33 +282,33 @@ This comparative analysis gives you a high-level overview. For deployment, it's
## Community and Support
When you're getting started with YOLO11, having a helpful community and support can make a significant impact. Here's how to connect with others who share your interests and get the assistance you need.
### Engage with the Broader Community
- **GitHub Discussions:** The YOLO11 repository on GitHub has a "Discussions" section where you can ask questions, report issues, and suggest improvements.
- **Ultralytics Discord Server:** Ultralytics has a [Discord server](https://discord.com/invite/ultralytics) where you can interact with other users and developers.
### Official Documentation and Resources
- **Ultralytics YOLO11 Docs:** The [official documentation](../index.md) provides a comprehensive overview of YOLO11, along with guides on installation, usage, and troubleshooting.
These resources will help you tackle challenges and stay updated on the latest trends and best practices in the YOLO11 community.
## Conclusion
In this guide, we've explored the different deployment options for YOLO11. We've also discussed the important factors to consider when making your choice. These options allow you to customize your model for various environments and performance requirements, making it suitable for real-world applications.
Don't forget that the YOLO11 and Ultralytics community is a valuable source of help. Connect with other developers and experts to learn unique tips and solutions you might not find in regular documentation. Keep seeking knowledge, exploring new ideas, and sharing your experiences.
Happy deploying!
## FAQ
### What are the deployment options available for YOLO11 on different hardware platforms?
Ultralytics YOLO11 supports various deployment formats, each designed for specific environments and hardware platforms. Key formats include:
- **PyTorch** for research and prototyping, with excellent Python integration.
- **TorchScript** for production environments where Python is unavailable.
@ -318,18 +318,18 @@ Ultralytics YOLOv8 supports various deployment formats, each designed for specif
Each format has unique advantages. For a detailed walkthrough, see our [export process documentation](../modes/export.md#usage-examples).
### How do I improve the inference speed of my YOLO11 model on an Intel CPU?
To enhance inference speed on Intel CPUs, you can deploy your YOLO11 model using Intel's OpenVINO toolkit. OpenVINO offers significant performance boosts by optimizing models to leverage Intel hardware efficiently.
1. Convert your YOLO11 model to the OpenVINO format using the `model.export()` function, as sketched below.
2. Follow the detailed setup guide in the [Intel OpenVINO Export documentation](../integrations/openvino.md).
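A minimal sketch of step 1 follows; the exported directory name shown for reloading is an assumption based on the default naming convention:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.export(format="openvino")  # writes an OpenVINO model directory

# Reload the exported model for optimized CPU inference (directory name assumed from the export above)
ov_model = YOLO("yolo11n_openvino_model/")
results = ov_model.predict("path/to/image.jpg")
```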
For more insights, check out our [blog post](https://www.ultralytics.com/blog/achieve-faster-inference-speeds-ultralytics-yolov8-openvino).
### Can I deploy YOLO11 models on mobile devices?
Yes, YOLO11 models can be deployed on mobile devices using [TensorFlow](https://www.ultralytics.com/glossary/tensorflow) Lite (TF Lite) for both Android and iOS platforms. TF Lite is designed for mobile and embedded devices, providing efficient on-device inference.
!!! example
@ -349,9 +349,9 @@ Yes, YOLOv8 models can be deployed on mobile devices using [TensorFlow](https://
For more details on deploying models to mobile, refer to our [TF Lite integration guide](../integrations/tflite.md).
### What factors should I consider when choosing a deployment format for my YOLO11 model?
When choosing a deployment format for YOLO11, consider the following factors:
- **Performance**: Some formats like TensorRT provide exceptional speeds on NVIDIA GPUs, while OpenVINO is optimized for Intel hardware.
- **Compatibility**: ONNX offers broad compatibility across different platforms.
@ -360,11 +360,11 @@ When choosing a deployment format for YOLOv8, consider the following factors:
For a comparative analysis, refer to our [export formats documentation](../modes/export.md#export-formats).
### How can I deploy YOLO11 models in a web application?
To deploy YOLO11 models in a web application, you can use TensorFlow.js (TF.js), which allows for running [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) models directly in the browser. This approach eliminates the need for backend infrastructure and provides real-time performance.
1. Export the YOLO11 model to the TF.js format.
2. Integrate the exported model into your web application.
For step-by-step instructions, refer to our guide on [TensorFlow.js integration](../integrations/tfjs.md).

@ -27,7 +27,7 @@ It's also important to follow best practices when deploying a model because depl
Oftentimes, once a model is [trained](./model-training-tips.md), [evaluated](./model-evaluation-insights.md), and [tested](./model-testing.md), it needs to be converted into specific formats to be deployed effectively in various environments, such as cloud, edge, or local devices.
With respect to YOLO11, you can [export your model](../modes/export.md) to different formats. For example, when you need to transfer your model between different frameworks, ONNX is an excellent tool, and [exporting YOLO11 to ONNX](../integrations/onnx.md) is easy. You can check out more options for integrating your model into different environments smoothly and effectively [here](../integrations/index.md).
### Choosing a Deployment Environment
@ -94,7 +94,7 @@ Experiencing a drop in your model's accuracy after deployment can be frustrating
- **Review Model Export and Conversion:** Re-export the model and make sure that the conversion process maintains the integrity of the model weights and architecture.
- **Test with a Controlled Dataset:** Deploy the model in a test environment with a dataset you control and compare the results with the training phase. You can identify if the issue is with the deployment environment or the data.
When deploying YOLO11, several factors can affect model accuracy. Converting models to formats like [TensorRT](../integrations/tensorrt.md) involves optimizations such as weight quantization and layer fusion, which can cause minor precision losses. Using FP16 (half-precision) instead of FP32 (full-precision) can speed up inference but may introduce numerical precision errors. Also, hardware constraints, like those on the [Jetson Nano](./nvidia-jetson.md), with lower CUDA core counts and reduced memory bandwidth, can impact performance.
### Inferences Are Taking Longer Than You Expected
@ -106,7 +106,7 @@ When deploying [machine learning](https://www.ultralytics.com/glossary/machine-l
- **Profile the Inference Pipeline:** Identifying bottlenecks in the inference pipeline can help pinpoint the source of delays. Use profiling tools to analyze each step of the inference process, identifying and addressing any stages that cause significant delays, such as inefficient layers or data transfer issues.
- **Use Appropriate Precision:** Using higher precision than necessary can slow down inference times. Experiment with using lower precision, such as FP16 (half-precision), instead of FP32 (full-precision). While FP16 can reduce inference time, also keep in mind that it can impact model accuracy.
If you are facing this issue while deploying YOLO11, consider that YOLO11 offers [various model sizes](../models/yolo11.md), such as YOLO11n (nano) for devices with lower memory capacity and YOLO11x (extra-large) for more powerful GPUs. Choosing the right model variant for your hardware can help balance memory usage and processing time.
Also keep in mind that the size of the input images directly impacts memory usage and processing time. Lower resolutions reduce memory usage and speed up inference, while higher resolutions improve accuracy but require more memory and processing power.
@ -132,12 +132,12 @@ Being part of a community of computer vision enthusiasts can help you solve prob
### Community Resources
- **GitHub Issues:** Explore the [YOLO11 GitHub repository](https://github.com/ultralytics/ultralytics/issues) and use the Issues tab to ask questions, report bugs, and suggest new features. The community and maintainers are very active and ready to help.
- **Ultralytics Discord Server:** Join the [Ultralytics Discord server](https://discord.com/invite/ultralytics) to chat with other users and developers, get support, and share your experiences.
### Official Documentation
- **Ultralytics YOLO11 Documentation:** Visit the [official YOLO11 documentation](./index.md) for detailed guides and helpful tips on various computer vision projects.
Using these resources will help you solve challenges and stay up-to-date with the latest trends and practices in the computer vision community.
@ -149,22 +149,22 @@ After deploying your model, the next step would be monitoring, maintaining, and
## FAQ
### What are the best practices for deploying a machine learning model using Ultralytics YOLO11?
Deploying a machine learning model, particularly with Ultralytics YOLO11, involves several best practices to ensure efficiency and reliability. First, choose the deployment environment that suits your needs—cloud, edge, or local. Optimize your model through techniques like [pruning, quantization, and knowledge distillation](#model-optimization-techniques) for efficient deployment in resource-constrained environments. Lastly, ensure data consistency and preprocessing steps align with the training phase to maintain performance. You can also refer to [model deployment options](./model-deployment-options.md) for more detailed guidelines.
### How can I troubleshoot common deployment issues with Ultralytics YOLO11 models?
Troubleshooting deployment issues can be broken down into a few key steps. If your model's accuracy drops after deployment, check for data consistency, validate preprocessing steps, and ensure the hardware/software environment matches what you used during training. For slow inference times, perform warm-up runs, optimize your inference engine, use asynchronous processing, and profile your inference pipeline. Refer to [troubleshooting deployment issues](#troubleshooting-deployment-issues) for a detailed guide on these best practices.
### How does Ultralytics YOLO11 optimization enhance model performance on edge devices?
Optimizing Ultralytics YOLO11 models for edge devices involves using techniques like pruning to reduce the model size, quantization to convert weights to lower precision, and knowledge distillation to train smaller models that mimic larger ones. These techniques ensure the model runs efficiently on devices with limited computational power. Tools like [TensorFlow Lite](../integrations/tflite.md) and [NVIDIA Jetson](./nvidia-jetson.md) are particularly useful for these optimizations. Learn more about these techniques in our section on [model optimization](#model-optimization-techniques).
### What are the security considerations for deploying machine learning models with Ultralytics YOLO11?
Security is paramount when deploying machine learning models. Ensure secure data transmission using encryption protocols like TLS. Implement robust access controls, including strong authentication and role-based access control (RBAC). Model obfuscation techniques, such as encrypting model parameters and serving models in a secure environment like a trusted execution environment (TEE), offer additional protection. For detailed practices, refer to [security considerations](#security-considerations-in-model-deployment).
### How do I choose the right deployment environment for my Ultralytics YOLO11 model?
Selecting the optimal deployment environment for your Ultralytics YOLO11 model depends on your application's specific needs. Cloud deployment offers scalability and ease of access, making it ideal for applications with high data volumes. Edge deployment is best for low-latency applications requiring real-time responses, using tools like [TensorFlow Lite](../integrations/tflite.md). Local deployment suits scenarios needing stringent data privacy and control. For a comprehensive overview of each environment, check out our section on [choosing a deployment environment](#choosing-a-deployment-environment).

@ -1,6 +1,6 @@
---
comments: true
description: Explore the most effective ways to assess and refine YOLO11 models for better performance. Learn about evaluation metrics, fine-tuning processes, and how to customize your model for specific needs.
keywords: Model Evaluation, Machine Learning Model Evaluation, Fine Tuning Machine Learning, Fine Tune Model, Evaluating Models, Model Fine Tuning, How to Fine Tune a Model
---
@ -45,23 +45,23 @@ Other mAP metrics include mAP@0.75, which uses a stricter IoU threshold of 0.75,
<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/mean-average-precision-overview.avif" alt="Mean Average Precision Overview">
</p>
## Evaluating YOLO11 Model Performance
With respect to YOLO11, you can use the [validation mode](../modes/val.md) to evaluate the model. Also, be sure to take a look at our guide that goes in-depth into [YOLO11 performance metrics](./yolo-performance-metrics.md) and how they can be interpreted.
### Common Community Questions
When evaluating your YOLO11 model, you might run into a few hiccups. Based on common community questions, here are some tips to help you get the most out of your YOLO11 model:
#### Handling Variable Image Sizes
Evaluating your YOLO11 model with images of different sizes can help you understand its performance on diverse datasets. Using the `rect=true` validation parameter, YOLO11 adjusts the network's stride for each batch based on the image sizes, allowing the model to handle rectangular images without forcing them to a single size.
The `imgsz` validation parameter sets the maximum dimension for image resizing, which is 640 by default. You can adjust this based on your dataset's maximum dimensions and the GPU memory available. Even with `imgsz` set, `rect=true` lets the model manage varying image sizes effectively by dynamically adjusting the stride.
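For example, a validation call combining both parameters might look like the sketch below (the dataset name is a placeholder):

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
metrics = model.val(data="coco8.yaml", imgsz=640, rect=True)  # stride adapts per batch
print(metrics.box.map)  # mAP50-95
```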
#### Accessing YOLO11 Metrics
If you want to get a deeper understanding of your YOLO11 model's performance, you can easily access specific evaluation metrics with a few lines of Python code. The code snippet below will let you load your model, run an evaluation, and print out various metrics that show how well your model is doing.
!!! example "Usage"
@ -71,7 +71,7 @@ If you want to get a deeper understanding of your YOLOv8 model's performance, yo
from ultralytics import YOLO
# Load the model
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
# Run the evaluation
results = model.val(data="coco8.yaml")
@ -101,7 +101,7 @@ If you want to get a deeper understanding of your YOLOv8 model's performance, yo
print("Recall curve:", results.box.r_curve)
```
The results object also includes speed metrics like preprocess time, inference time, loss, and postprocess time. By analyzing these metrics, you can fine-tune and optimize your YOLO11 model for better performance, making it more effective for your specific use case.
## How Does Fine-Tuning Work?
@ -115,11 +115,11 @@ Fine-tuning a model means paying close attention to several vital parameters and
Usually, during the initial training [epochs](https://www.ultralytics.com/glossary/epoch), the learning rate starts low and gradually increases to stabilize the training process. However, since your model has already learned some features from the previous dataset, starting with a higher learning rate right away can be more beneficial.
When evaluating your YOLO11 model, you can set the `warmup_epochs` validation parameter to `warmup_epochs=0` to prevent the learning rate from starting too high. By following this process, the training will continue from the provided weights, adjusting to the nuances of your new data.
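A minimal fine-tuning sketch with warmup disabled (the dataset path and epoch count are placeholders):

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")  # continue from pre-trained weights
model.train(data="path/to/dataset.yaml", epochs=50, warmup_epochs=0)  # skip the low-LR warmup phase
```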
### Image Tiling for Small Objects
Image tiling can improve detection accuracy for small objects. By dividing larger images into smaller segments, such as splitting 1280x1280 images into multiple 640x640 segments, you maintain the original resolution, and the model can learn from high-resolution fragments. When using YOLO11, make sure to adjust your labels for these new segments correctly.
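One simple way to produce tiles with OpenCV is sketched below; remapping the labels to tile coordinates is not shown and must be handled separately:

```python
import cv2

img = cv2.imread("path/to/large_image.jpg")  # e.g. a 1280x1280 image
tile = 640
for y in range(0, img.shape[0], tile):
    for x in range(0, img.shape[1], tile):
        cv2.imwrite(f"tile_{y}_{x}.jpg", img[y : y + tile, x : x + tile])
```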
## Engage with the Community
@ -127,12 +127,12 @@ Sharing your ideas and questions with other [computer vision](https://www.ultral
### Finding Help and Support
- **GitHub Issues:** Explore the YOLO11 GitHub repository and use the [Issues tab](https://github.com/ultralytics/ultralytics/issues) to ask questions, report bugs, and suggest features. The community and maintainers are available to assist with any issues you encounter.
- **Ultralytics Discord Server:** Join the [Ultralytics Discord server](https://discord.com/invite/ultralytics) to connect with other users and developers, get support, share knowledge, and brainstorm ideas.
### Official Documentation
- **Ultralytics YOLO11 Documentation:** Check out the [official YOLO11 documentation](./index.md) for comprehensive guides and valuable insights on various computer vision tasks and projects.
## Final Thoughts
@ -140,30 +140,30 @@ Evaluating and fine-tuning your computer vision model are important steps for su
## FAQ
### What are the key metrics for evaluating YOLO11 model performance?
To evaluate YOLO11 model performance, important metrics include Confidence Score, Intersection over Union (IoU), and Mean Average Precision (mAP). Confidence Score measures the model's certainty for each detected object class. IoU evaluates how well the predicted bounding box overlaps with the ground truth. Mean Average Precision (mAP) aggregates precision scores across classes, with mAP@.5 and mAP@.5:.95 being two common types for varying IoU thresholds. Learn more about these metrics in our [YOLO11 performance metrics guide](./yolo-performance-metrics.md).
### How can I fine-tune a pre-trained YOLO11 model for my specific dataset?
Fine-tuning a pre-trained YOLO11 model involves adjusting its parameters to improve performance on a specific task or dataset. Start by evaluating your model using metrics, then set a higher initial learning rate by adjusting the `warmup_epochs` parameter to 0 for immediate stability. Use parameters like `rect=true` for handling varied image sizes effectively. For more detailed guidance, refer to our section on [fine-tuning YOLO11 models](#how-does-fine-tuning-work).
### How can I handle variable image sizes when evaluating my YOLO11 model?
To handle variable image sizes during evaluation, use the `rect=true` parameter in YOLO11, which adjusts the network's stride for each batch based on image sizes. The `imgsz` parameter sets the maximum dimension for image resizing, defaulting to 640. Adjust `imgsz` to suit your dataset and GPU memory. For more details, visit our [section on handling variable image sizes](#handling-variable-image-sizes).
### What practical steps can I take to improve mean average precision for my YOLO11 model?
Improving mean average precision (mAP) for a YOLO11 model involves several steps:
1. **Tuning Hyperparameters**: Experiment with different learning rates, [batch sizes](https://www.ultralytics.com/glossary/batch-size), and image augmentations.
2. **[Data Augmentation](https://www.ultralytics.com/glossary/data-augmentation)**: Use techniques like Mosaic and MixUp to create diverse training samples.
3. **Image Tiling**: Split larger images into smaller tiles to improve detection accuracy for small objects.
Refer to our detailed guide on [model fine-tuning](#tips-for-fine-tuning-your-model) for specific strategies.
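As a starting point, these levers map onto standard training arguments, as in the sketch below (the values are illustrative, not tuned recommendations):

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.train(data="path/to/dataset.yaml", epochs=100, lr0=0.01, batch=16, mosaic=1.0, mixup=0.1)
```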
### How do I access YOLO11 model evaluation metrics in Python?
You can access YOLO11 model evaluation metrics using Python with the following steps:
!!! example "Usage"
@ -173,7 +173,7 @@ You can access YOLOv8 model evaluation metrics using Python with the following s
from ultralytics import YOLO
# Load the model
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
# Run the evaluation
results = model.val(data="coco8.yaml")
@ -185,4 +185,4 @@ You can access YOLOv8 model evaluation metrics using Python with the following s
print("Mean recall:", results.box.mr)
```
Analyzing these metrics helps fine-tune and optimize your YOLO11 model. For a deeper dive, check out our guide on [YOLO11 metrics](../modes/val.md).

@ -123,12 +123,12 @@ Joining a community of computer vision enthusiasts can help you solve problems a
### Community Resources
- **GitHub Issues:** Check out the [YOLO11 GitHub repository](https://github.com/ultralytics/ultralytics/issues) and use the Issues tab to ask questions, report bugs, and suggest new features. The community and maintainers are highly active and supportive.
- **Ultralytics Discord Server:** Join the [Ultralytics Discord server](https://discord.com/invite/ultralytics) to chat with other users and developers, get support, and share your experiences.
### Official Documentation
- **Ultralytics YOLO11 Documentation:** Visit the [official YOLO11 documentation](./index.md) for detailed guides and helpful tips on various computer vision projects.
Using these resources will help you solve challenges and stay up-to-date with the latest trends and practices in the computer vision community.

@ -44,22 +44,22 @@ Next, the testing results can be analyzed:
- **Error Analysis:** Perform a thorough error analysis to understand the types of errors (e.g., false positives vs. false negatives) and their potential causes.
- **Bias and Fairness:** Check for any biases in the model's predictions. Ensure that the model performs equally well across different subsets of the data, especially if it includes sensitive attributes like race, gender, or age.
## Testing Your YOLO11 Model
To test your YOLO11 model, you can use the validation mode. It's a straightforward way to understand the model's strengths and areas that need improvement. Also, you'll need to format your test dataset correctly for YOLO11. For more details on how to use the validation mode, check out the [Model Validation](../modes/val.md) docs page.
## Using YOLO11 to Predict on Multiple Test Images
If you want to test your trained YOLO11 model on multiple images stored in a folder, you can easily do so in one go. Instead of using the validation mode, which is typically used to evaluate model performance on a validation set and provide detailed metrics, you might just want to see predictions on all images in your test set. For this, you can use the [prediction mode](../modes/predict.md).
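For instance, a single call can sweep an entire folder (the paths are placeholders):

```python
from ultralytics import YOLO

model = YOLO("path/to/best.pt")  # your trained weights
results = model.predict(source="path/to/test_images/", save=True)  # annotated outputs are saved automatically
```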
### Difference Between Validation and Prediction Modes
- **[Validation Mode](../modes/val.md):** Used to evaluate the model's performance by comparing predictions against known labels (ground truth). It provides detailed metrics such as accuracy, precision, recall, and F1 score.
- **[Prediction Mode](../modes/predict.md):** Used to run the model on new, unseen data to generate predictions. It does not provide detailed performance metrics but allows you to see how the model performs on real-world images.
## Running YOLO11 Predictions Without Custom Training
If you are interested in testing the basic YOLO11 model to understand whether it can be used for your application without custom training, you can use the prediction mode. While the model is pre-trained on datasets like COCO, running predictions on your own dataset can give you a quick sense of how well it might perform in your specific context.
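For example, a quick sanity check with the off-the-shelf weights (the image path is a placeholder):

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")  # COCO-pretrained weights, no custom training
results = model.predict(source="path/to/image.jpg", save=True)
```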
## Overfitting and [Underfitting](https://www.ultralytics.com/glossary/underfitting) in [Machine Learning](https://www.ultralytics.com/glossary/machine-learning-ml)
Becoming part of a community of computer vision enthusiasts can aid in solving problems.
### Community Resources
- **GitHub Issues:** Explore the [YOLO11 GitHub repository](https://github.com/ultralytics/ultralytics/issues) and use the Issues tab to ask questions, report bugs, and suggest new features. The community and maintainers are very active and ready to help.
- **Ultralytics Discord Server:** Join the [Ultralytics Discord server](https://discord.com/invite/ultralytics) to chat with other users and developers, get support, and share your experiences.
### Official Documentation
- **Ultralytics YOLO11 Documentation:** Check out the [official YOLO11 documentation](./index.md) for detailed guides and helpful tips on various computer vision projects.
These resources will help you navigate challenges and remain updated on the latest trends and practices within the computer vision community.
Building trustworthy computer vision models relies on rigorous model testing.

## FAQ

### What is the difference between model evaluation and model testing?
Model evaluation and model testing are distinct steps in a computer vision project. Model evaluation involves using a labeled dataset to compute metrics such as [accuracy](https://www.ultralytics.com/glossary/accuracy), precision, recall, and [F1 score](https://www.ultralytics.com/glossary/f1-score), providing insights into the model's performance with a controlled dataset. Model testing, on the other hand, assesses the model's performance in real-world scenarios by applying it to new, unseen data, ensuring the model's learned behavior aligns with expectations outside the evaluation environment. For a detailed guide, refer to the [steps in a computer vision project](./steps-of-a-cv-project.md).
### How can I test my Ultralytics YOLO11 model on multiple images?
To test your Ultralytics YOLO11 model on multiple images, you can use the [prediction mode](../modes/predict.md). This mode allows you to run the model on new, unseen data to generate predictions without providing detailed metrics. This is ideal for real-world performance testing on larger image sets stored in a folder. For evaluating performance metrics, use the [validation mode](../modes/val.md) instead.
### What should I do if my computer vision model shows signs of overfitting or underfitting?
Post-testing, if the model performance meets the project goals, proceed with deployment.
Gain insights from the [Model Testing Vs. Model Evaluation](#model-testing-vs-model-evaluation) section to refine and enhance model effectiveness in real-world applications.
### How do I run YOLO11 predictions without custom training?
You can run predictions using the pre-trained YOLO11 model on your dataset to see if it suits your application needs. Utilize the [prediction mode](../modes/predict.md) to get a quick sense of performance results without diving into custom training.

When training models on large datasets, efficiently utilizing your GPU is key. Batch size is an important factor. It is the number of data samples that a machine learning model processes in a single training iteration.
Using the maximum batch size supported by your GPU, you can fully take advantage of its capabilities and reduce the time model training takes. However, you want to avoid running out of GPU memory. If you encounter memory errors, reduce the batch size incrementally until the model trains smoothly.
With respect to YOLO11, you can set the `batch_size` parameter in the [training configuration](../modes/train.md) to match your GPU capacity. Also, setting `batch=-1` in your training script will automatically determine the [batch size](https://www.ultralytics.com/glossary/batch-size) that can be efficiently processed based on your device's capabilities. By fine-tuning the batch size, you can make the most of your GPU resources and improve the overall training process.
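As a sketch (using the small `coco8.yaml` sample dataset that ships with the package):

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.train(data="coco8.yaml", epochs=100, batch=-1)  # auto-select the largest batch size that fits on your GPU
```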
### Subset Training
Subset training is a smart strategy that involves training your model on a smaller set of data that represents the larger dataset. It can save time and resources, especially during initial model development and testing. If you are running short on time or experimenting with different model configurations, subset training is a good option.
When it comes to YOLO11, you can easily implement subset training by using the `fraction` parameter. This parameter lets you specify what fraction of your dataset to use for training. For example, setting `fraction=0.1` will train your model on 10% of the data. You can use this technique for quick iterations and tuning your model before committing to training a model using a full dataset. Subset training helps you make rapid progress and identify potential issues early on.
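For example, a minimal sketch of subset training:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.train(data="coco8.yaml", epochs=50, fraction=0.1)  # train on 10% of the dataset
```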
### Multi-scale Training
Multiscale training is a technique that improves your model's ability to generalize by training it on images of varying sizes. Your model can learn to detect objects at different scales and distances and become more robust.
For example, when you train YOLO11, you can enable multiscale training by setting the `scale` parameter. This parameter adjusts the size of training images by a specified factor, simulating objects at different distances. For example, setting `scale=0.5` will reduce the image size by half, while `scale=2.0` will double it. Configuring this parameter allows your model to experience a variety of image scales and improve its detection capabilities across different object sizes and scenarios.
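A minimal sketch of the `scale` setting (the value is illustrative):

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.train(data="coco8.yaml", epochs=100, scale=0.5)  # random image-scale augmentation
```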
### Caching
Caching is an important technique to improve the efficiency of training machine learning models. By storing preprocessed images in memory, caching reduces the time the GPU spends waiting for data to be loaded from the disk. The model can continuously receive data without delays caused by disk I/O operations.
Caching can be controlled when training YOLO11 using the `cache` parameter:
- `cache=True`: Stores dataset images in RAM, providing the fastest access speed but at the cost of increased memory usage.
- `cache='disk'`: Stores the images on disk, slower than RAM but faster than loading fresh data each time.
- `cache=False`: Disables caching, relying entirely on disk I/O; this is the slowest option.
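A short sketch of the `cache` option:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.train(data="coco8.yaml", epochs=100, cache=True)  # or cache="disk" to trade speed for RAM
```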
### Mixed Precision Training

Mixed precision training uses both 16-bit (FP16) and 32-bit (FP32) floating-point types to balance computational speed and precision.
To implement mixed precision training, you'll need to modify your training scripts and ensure your hardware (like GPUs) supports it. Many modern [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) frameworks, such as [TensorFlow](https://www.ultralytics.com/glossary/tensorflow), offer built-in support for mixed precision.
Mixed precision training is straightforward when working with YOLO11. You can use the `amp` flag in your training configuration. Setting `amp=True` enables Automatic Mixed Precision (AMP) training. Mixed precision training is a simple yet effective way to optimize your model training process.
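For instance:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.train(data="coco8.yaml", epochs=100, amp=True)  # enable Automatic Mixed Precision
```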
### Pre-trained Weights
Using pretrained weights is a smart way to speed up your model's training process. Pretrained weights come from models already trained on large datasets, giving your model a head start. [Transfer learning](https://www.ultralytics.com/glossary/transfer-learning) adapts pretrained models to new, related tasks. Fine-tuning a pre-trained model involves starting with these weights and then continuing training on your specific dataset. This method of training results in faster training times and often better performance because the model starts with a solid understanding of basic features.
The `pretrained` parameter makes transfer learning easy with YOLO11. Setting `pretrained=True` will use default pre-trained weights, or you can specify a path to a custom pre-trained model. Using pre-trained weights and transfer learning effectively boosts your model's capabilities and reduces training costs.
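A sketch of both variants:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.yaml")  # build a fresh model from its config
model.train(data="coco8.yaml", epochs=100, pretrained=True)  # or pretrained="path/to/custom_weights.pt"
```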
### Other Techniques to Consider When Handling a Large Dataset
There are a couple of other techniques to consider when handling a large dataset:
- **[Learning Rate](https://www.ultralytics.com/glossary/learning-rate) Schedulers**: Implementing learning rate schedulers dynamically adjusts the learning rate during training. A well-tuned learning rate can prevent the model from overshooting minima and improve stability. When training YOLO11, the `lrf` parameter helps manage learning rate scheduling by setting the final learning rate as a fraction of the initial rate (see the sketch after this list).
- **Distributed Training**: For handling large datasets, distributed training can be a game-changer. You can reduce the training time by spreading the training workload across multiple GPUs or machines.
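As a sketch of the `lrf` setting mentioned above (values are illustrative):

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.train(data="coco8.yaml", epochs=100, lr0=0.01, lrf=0.01)  # final learning rate = lr0 * lrf
```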
## The Number of Epochs To Train For
When training a model, an epoch refers to one complete pass through the entire training dataset.
A common question that comes up is how to determine the number of epochs to train the model for. A good starting point is 300 epochs. If the model overfits early, you can reduce the number of epochs. If [overfitting](https://www.ultralytics.com/glossary/overfitting) does not occur after 300 epochs, you can extend the training to 600, 1200, or more epochs.
However, the ideal number of epochs can vary based on your dataset's size and project goals. Larger datasets might require more epochs for the model to learn effectively, while smaller datasets might need fewer epochs to avoid overfitting. With respect to YOLO11, you can set the `epochs` parameter in your training script.
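For example:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.train(data="coco8.yaml", epochs=300)  # a common starting point; adjust based on overfitting behavior
```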
## Early Stopping
The process involves setting a patience parameter that determines how many [epochs](https://www.ultralytics.com/glossary/epoch) to wait for an improvement in validation metrics before stopping training.
<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/early-stopping-overview.avif" alt="Early Stopping Overview">
</p>
For YOLO11, you can enable early stopping by setting the patience parameter in your training configuration. For example, `patience=5` means training will stop if there's no improvement in validation metrics for 5 consecutive epochs. Using this method ensures the training process remains efficient and achieves optimal performance without excessive computation.
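A minimal sketch:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.train(data="coco8.yaml", epochs=600, patience=5)  # stop early after 5 epochs without improvement
```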
## Choosing Between Cloud and Local Training
Different optimizers have various strengths and weaknesses. Let's take a glimpse at a few:

- **Adam (Adaptive Moment Estimation)**:
- Combines the benefits of both SGD with momentum and RMSProp.
- Adjusts the learning rate for each parameter based on estimates of the first and second moments of the gradients.
- Well-suited for noisy data and sparse gradients.
- Efficient and generally requires less tuning, making it a recommended optimizer for YOLO11.
- **RMSProp (Root Mean Square Propagation)**:
- Adjusts the learning rate for each parameter by dividing the gradient by a running average of the magnitudes of recent gradients.
- Helps in handling the vanishing gradient problem and is effective for [recurrent neural networks](https://www.ultralytics.com/glossary/recurrent-neural-network-rnn).
For YOLO11, the `optimizer` parameter lets you choose from various optimizers, including SGD, Adam, AdamW, NAdam, RAdam, and RMSProp, or you can set it to `auto` for automatic selection based on model configuration.
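For example:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.train(data="coco8.yaml", epochs=100, optimizer="auto")  # or "SGD", "Adam", "AdamW", "NAdam", "RAdam", "RMSProp"
```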
## Connecting with the Community
Being part of a community of computer vision enthusiasts can help you solve problems.
### Community Resources
- **GitHub Issues:** Visit the [YOLO11 GitHub repository](https://github.com/ultralytics/ultralytics/issues) and use the Issues tab to ask questions, report bugs, and suggest new features. The community and maintainers are very active and ready to help.
- **Ultralytics Discord Server:** Join the [Ultralytics Discord server](https://discord.com/invite/ultralytics) to chat with other users and developers, get support, and share your experiences.
### Official Documentation
- **Ultralytics YOLO11 Documentation:** Check out the [official YOLO11 documentation](./index.md) for detailed guides and helpful tips on various computer vision projects.
Using these resources will help you solve challenges and stay up-to-date with the latest trends and practices in the computer vision community.
Training computer vision models involves following good practices and optimizing your training strategies.

## FAQ
### How can I improve GPU utilization when training a large dataset with Ultralytics YOLO?
To improve GPU utilization, set the `batch_size` parameter in your training configuration to the maximum size supported by your GPU. This ensures that you make full use of the GPU's capabilities, reducing training time. If you encounter memory errors, incrementally reduce the batch size until training runs smoothly. For YOLO11, setting `batch=-1` in your training script will automatically determine the optimal batch size for efficient processing. For further information, refer to the [training configuration](../modes/train.md).
### What is mixed precision training, and how do I enable it in YOLO11?
Mixed precision training utilizes both 16-bit (FP16) and 32-bit (FP32) floating-point types to balance computational speed and precision. This approach speeds up training and reduces memory usage without sacrificing model [accuracy](https://www.ultralytics.com/glossary/accuracy). To enable mixed precision training in YOLO11, set the `amp` parameter to `True` in your training configuration. This activates Automatic Mixed Precision (AMP) training. For more details on this optimization technique, see the [training configuration](../modes/train.md).
### How does multiscale training enhance YOLO11 model performance?
Multiscale training enhances model performance by training on images of varying sizes, allowing the model to better generalize across different scales and distances. In YOLO11, you can enable multiscale training by setting the `scale` parameter in the training configuration. For example, `scale=0.5` reduces the image size by half, while `scale=2.0` doubles it. This technique simulates objects at different distances, making the model more robust across various scenarios. For settings and more details, check out the [training configuration](../modes/train.md).
### How can I use pre-trained weights to speed up training in YOLO11?
Using pre-trained weights can significantly reduce training times and improve model performance by starting from a model that already understands basic features. In YOLO11, you can set the `pretrained` parameter to `True` or specify a path to custom pre-trained weights in your training configuration. This approach, known as transfer learning, leverages knowledge from large datasets to adapt to your specific task. Learn more about pre-trained weights and their advantages [here](../modes/train.md).
### What is the recommended number of epochs for training a model, and how do I set this in YOLO11?
The number of epochs refers to the complete passes through the training dataset during model training. A typical starting point is 300 epochs. If your model overfits early, you can reduce the number. Alternatively, if overfitting isn't observed, you might extend training to 600, 1200, or more epochs. To set this in YOLO11, use the `epochs` parameter in your training script. For additional advice on determining the ideal number of epochs, refer to this section on [number of epochs](#the-number-of-epochs-to-train-for).

---
comments: true
description: Learn how to use Ultralytics YOLO11 for real-time object blurring to enhance privacy and focus in your images and videos.
keywords: YOLO11, object blurring, real-time processing, privacy protection, image manipulation, video editing, Ultralytics
---
# Object Blurring using Ultralytics YOLO11 🚀
## What is Object Blurring?
Object blurring with [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics/) involves applying a blurring effect to specific detected objects in an image or video. This can be achieved using the YOLO11 model capabilities to identify and manipulate objects within a given scene.
<p align="center">
<br>
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Object Blurring using Ultralytics YOLO11
</p>
## Advantages of Object Blurring
- **Privacy Protection**: Object blurring is an effective tool for safeguarding privacy by concealing sensitive or personally identifiable information in images or videos.
- **Selective Focus**: YOLO11 allows for selective blurring, enabling users to target specific objects, ensuring a balance between privacy and retaining relevant visual information.
- **Real-time Processing**: YOLO11's efficiency enables object blurring in real-time, making it suitable for applications requiring on-the-fly privacy enhancements in dynamic environments.
!!! example "Object Blurring using YOLO11 Example"
=== "Object Blurring"
```python
import cv2

from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors

model = YOLO("yolo11n.pt")
names = model.names

cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"

blur_ratio = 50

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Video frame is empty or video processing has been successfully completed.")
        break

    results = model.predict(im0, show=False)
    boxes = results[0].boxes.xyxy.cpu().tolist()
    clss = results[0].boxes.cls.cpu().tolist()
    annotator = Annotator(im0, line_width=2, example=names)

    if boxes is not None:
        for box, cls in zip(boxes, clss):
            annotator.box_label(box, color=colors(int(cls), True), label=names[int(cls)])

            # Crop the detected object and blur it in place
            obj = im0[int(box[1]) : int(box[3]), int(box[0]) : int(box[2])]
            blur_obj = cv2.blur(obj, (blur_ratio, blur_ratio))
            im0[int(box[1]) : int(box[3]), int(box[0]) : int(box[2])] = blur_obj

    cv2.imshow("ultralytics", im0)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```
## FAQ
### What is object blurring with Ultralytics YOLO11?
Object blurring with [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics/) involves automatically detecting and applying a blurring effect to specific objects in images or videos. This technique enhances privacy by concealing sensitive information while retaining relevant visual data. YOLO11's real-time processing capabilities make it suitable for applications requiring immediate privacy protection and selective focus adjustments.
### How can I implement real-time object blurring using YOLO11?
To implement real-time object blurring with YOLO11, follow the provided Python example. This involves using YOLO11 for [object detection](https://www.ultralytics.com/glossary/object-detection) and OpenCV for applying the blur effect. Here's a simplified version:
```python
import cv2
from ultralytics import YOLO
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break

    results = model.predict(im0, show=False)
    boxes = results[0].boxes.xyxy.cpu().tolist()

    for box in boxes:
        obj = im0[int(box[1]) : int(box[3]), int(box[0]) : int(box[2])]
        im0[int(box[1]) : int(box[3]), int(box[0]) : int(box[2])] = cv2.blur(obj, (50, 50))

    cv2.imshow("YOLO11 Blurring", im0)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```
### What are the benefits of using Ultralytics YOLO11 for object blurring?
Ultralytics YOLO11 offers several advantages for object blurring:
- **Privacy Protection**: Effectively obscure sensitive or identifiable information.
- **Selective Focus**: Target specific objects for blurring, maintaining essential visual content.
For more detailed applications, check the [advantages of object blurring section](#advantages-of-object-blurring).
### Can I use Ultralytics YOLO11 to blur faces in a video for privacy reasons?
Yes, Ultralytics YOLO11 can be configured to detect and blur faces in videos to protect privacy. By training or using a pre-trained model to specifically recognize faces, the detection results can be processed with [OpenCV](https://www.ultralytics.com/glossary/opencv) to apply a blur effect. Refer to our guide on [object detection with YOLO11](https://docs.ultralytics.com/models/yolov8/) and modify the code to target face detection.
### How does YOLO11 compare to other object detection models like Faster R-CNN for object blurring?
Ultralytics YOLO11 typically outperforms models like Faster R-CNN in terms of speed, making it more suitable for real-time applications. While both models offer accurate detection, YOLO11's architecture is optimized for rapid inference, which is critical for tasks like real-time object blurring. Learn more about the technical differences and performance metrics in our [YOLO11 documentation](https://docs.ultralytics.com/models/yolov8/).

---
comments: true
description: Learn to accurately identify and count objects in real-time using Ultralytics YOLO11 for applications like crowd analysis and surveillance.
keywords: object counting, YOLO11, Ultralytics, real-time object detection, AI, deep learning, object tracking, crowd analysis, surveillance, resource optimization
---
# Object Counting using Ultralytics YOLO11
## What is Object Counting?
Object counting with [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics/) involves accurate identification and counting of specific objects in videos and camera streams. YOLO11 excels in real-time applications, providing efficient and precise object counting for various scenarios like crowd analysis and surveillance, thanks to its state-of-the-art algorithms and [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) capabilities.
<table>
<tr>
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Object Counting using Ultralytics YOLO11
</td>
<td align="center">
<iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/Fj9TStNBVoY"
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Class-wise Object Counting using Ultralytics YOLO11
</td>
</tr>
</table>
| Logistics | Aquaculture |
| :-----------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------: |
| ![Conveyor Belt Packets Counting Using Ultralytics YOLO11](https://github.com/ultralytics/docs/releases/download/0/conveyor-belt-packets-counting.avif) | ![Fish Counting in Sea using Ultralytics YOLO11](https://github.com/ultralytics/docs/releases/download/0/fish-counting-in-sea-using-ultralytics-yolov8.avif) |
| Conveyor Belt Packets Counting Using Ultralytics YOLO11 | Fish Counting in Sea using Ultralytics YOLO11 |
!!! example "Object Counting using YOLO11 Example"
=== "Count in Region"
```python
import cv2

from ultralytics import YOLO, solutions

model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

# Define region points
region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360)]

# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

# Init Object Counter
counter = solutions.ObjectCounter(
    view_img=True,
    reg_pts=region_points,
    names=model.names,
    line_thickness=2,
)

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Video frame is empty or video processing has been successfully completed.")
        break
    tracks = model.track(im0, persist=True, show=False)
    im0 = counter.start_counting(im0, tracks)
    video_writer.write(im0)

cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
=== "OBB Object Counting"
```python
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolo11n-obb.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
# Define region points
region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360)]
# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Init Object Counter
counter = solutions.ObjectCounter(
view_img=True,
reg_pts=region_points,
names=model.names,
line_thickness=2,
)
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
break
tracks = model.track(im0, persist=True, show=False)
im0 = counter.start_counting(im0, tracks)
video_writer.write(im0)
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
=== "Count in Polygon"
```python
import cv2

from ultralytics import YOLO, solutions

model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

# Define polygon points
region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360), (20, 400)]

# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

# Init Object Counter
counter = solutions.ObjectCounter(
    view_img=True,
    reg_pts=region_points,
    names=model.names,
    line_thickness=2,
)

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Video frame is empty or video processing has been successfully completed.")
        break
    tracks = model.track(im0, persist=True, show=False)
    im0 = counter.start_counting(im0, tracks)
    video_writer.write(im0)

cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
=== "Count in Line"

```python
import cv2

from ultralytics import YOLO, solutions

model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

# Define line points
line_points = [(20, 400), (1080, 400)]

# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

# Init Object Counter
counter = solutions.ObjectCounter(
    view_img=True,
    reg_pts=line_points,
    names=model.names,
    line_thickness=2,
)

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Video frame is empty or video processing has been successfully completed.")
        break
    tracks = model.track(im0, persist=True, show=False)
    im0 = counter.start_counting(im0, tracks)
    video_writer.write(im0)

cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
=== "Specific Classes"

```python
import cv2

from ultralytics import YOLO, solutions

model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

line_points = [(20, 400), (1080, 400)]  # line or region points
classes_to_count = [0, 2]  # person and car classes for counting

# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

# Init Object Counter
counter = solutions.ObjectCounter(
    view_img=True,
    reg_pts=line_points,
    names=model.names,
    line_thickness=2,
)

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Video frame is empty or video processing has been successfully completed.")
        break
    tracks = model.track(im0, persist=True, show=False, classes=classes_to_count)
    im0 = counter.start_counting(im0, tracks)
    video_writer.write(im0)

cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
## FAQ
### How do I count objects in a video using Ultralytics YOLO11?
To count objects in a video using Ultralytics YOLO11, you can follow these steps:
1. Import the necessary libraries (`cv2`, `ultralytics`).
2. Load a pretrained YOLO11 model.
3. Define the counting region (e.g., a polygon, line, etc.).
4. Set up the video capture and initialize the object counter.
5. Process each frame to track objects and count them within the defined region.
```python
import cv2

from ultralytics import YOLO, solutions


def count_objects_in_region(video_path, output_video_path, model_path):
    """Count objects in a specific region within a video."""
    model = YOLO(model_path)
    cap = cv2.VideoCapture(video_path)
    assert cap.isOpened(), "Error reading video file"
    w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

    region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360)]
    video_writer = cv2.VideoWriter(output_video_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    counter = solutions.ObjectCounter(view_img=True, reg_pts=region_points, names=model.names, line_thickness=2)

    while cap.isOpened():
        success, im0 = cap.read()
        if not success:
            print("Video frame is empty or video processing has been successfully completed.")
            break
        tracks = model.track(im0, persist=True, show=False)
        im0 = counter.start_counting(im0, tracks)
        video_writer.write(im0)

    cap.release()
    video_writer.release()
    cv2.destroyAllWindows()


count_objects_in_region("path/to/video.mp4", "output_video.avi", "yolo11n.pt")
```
Explore more configurations and options in the [Object Counting](#object-counting-using-ultralytics-yolo11) section.
### What are the advantages of using Ultralytics YOLO11 for object counting?
Using Ultralytics YOLO11 for object counting offers several advantages:
1. **Resource Optimization:** It facilitates efficient resource management by providing accurate counts, helping optimize resource allocation in industries like inventory management.
2. **Enhanced Security:** It enhances security and surveillance by accurately tracking and counting entities, aiding in proactive threat detection.
For real-world applications and code examples, visit the [Advantages of Object Counting](#advantages-of-object-counting) section.
### How can I count specific classes of objects using Ultralytics YOLO11?
To count specific classes of objects using Ultralytics YOLO11, you need to specify the classes you are interested in during the tracking phase. Below is a Python example:
```python
import cv2

from ultralytics import YOLO, solutions


def count_specific_classes(video_path, output_video_path, model_path, classes_to_count):
    """Count specific classes of objects in a video."""
    model = YOLO(model_path)
    cap = cv2.VideoCapture(video_path)
    assert cap.isOpened(), "Error reading video file"
    w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

    line_points = [(20, 400), (1080, 400)]
    video_writer = cv2.VideoWriter(output_video_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    counter = solutions.ObjectCounter(view_img=True, reg_pts=line_points, names=model.names, line_thickness=2)

    while cap.isOpened():
        success, im0 = cap.read()
        if not success:
            print("Video frame is empty or video processing has been successfully completed.")
            break
        tracks = model.track(im0, persist=True, show=False, classes=classes_to_count)
        im0 = counter.start_counting(im0, tracks)
        video_writer.write(im0)

    cap.release()
    video_writer.release()
    cv2.destroyAllWindows()


count_specific_classes("path/to/video.mp4", "output_specific_classes.avi", "yolo11n.pt", [0, 2])
```
In this example, `classes_to_count=[0, 2]`, which means it counts objects of class `0` and `2` (e.g., person and car).
### Why should I use YOLO11 over other [object detection](https://www.ultralytics.com/glossary/object-detection) models for real-time applications?
Ultralytics YOLO11 provides several advantages over other object detection models like Faster R-CNN, SSD, and previous YOLO versions:
1. **Speed and Efficiency:** YOLO11 offers real-time processing capabilities, making it ideal for applications requiring high-speed inference, such as surveillance and autonomous driving.
2. **[Accuracy](https://www.ultralytics.com/glossary/accuracy):** It provides state-of-the-art accuracy for object detection and tracking tasks, reducing the number of false positives and improving overall system reliability.
3. **Ease of Integration:** YOLO11 offers seamless integration with various platforms and devices, including mobile and edge devices, which is crucial for modern AI applications.
4. **Flexibility:** Supports various tasks like object detection, segmentation, and tracking with configurable models to meet specific use-case requirements.
Check out Ultralytics [YOLO11 Documentation](https://docs.ultralytics.com/models/yolo11/) for a deeper dive into its features and performance comparisons.
### Can I use YOLO11 for advanced applications like crowd analysis and traffic management?
Yes, Ultralytics YOLO11 is perfectly suited for advanced applications like crowd analysis and traffic management due to its real-time detection capabilities, scalability, and integration flexibility. Its advanced features allow for high-accuracy object tracking, counting, and classification in dynamic environments. Example use cases include:
- **Crowd Analysis:** Monitor and manage large gatherings, ensuring safety and optimizing crowd flow.
- **Traffic Management:** Track and count vehicles, analyze traffic patterns, and manage congestion in real-time.
For more information and implementation details, refer to the guide on [Real World Applications](#real-world-applications) of object counting with YOLO11.

---
comments: true
description: Learn how to crop and extract objects using Ultralytics YOLO11 for focused analysis, reduced data volume, and enhanced precision.
keywords: Ultralytics, YOLO11, object cropping, object detection, image processing, video analysis, AI, machine learning
---
# Object Cropping using Ultralytics YOLO11
## What is Object Cropping?
Object cropping with [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics/) involves isolating and extracting specific detected objects from an image or video. The YOLO11 model capabilities are utilized to accurately identify and delineate objects, enabling precise cropping for further analysis or manipulation.
<p align="center">
<br>
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Object Cropping using Ultralytics YOLO
</p>
## Advantages of Object Cropping
- **Focused Analysis**: YOLO11 facilitates targeted object cropping, allowing for in-depth examination or processing of individual items within a scene.
- **Reduced Data Volume**: By extracting only relevant objects, object cropping helps in minimizing data size, making it efficient for storage, transmission, or subsequent computational tasks.
- **Enhanced Precision**: YOLO11's [object detection](https://www.ultralytics.com/glossary/object-detection) [accuracy](https://www.ultralytics.com/glossary/accuracy) ensures that the cropped objects maintain their spatial relationships, preserving the integrity of the visual information for detailed analysis.
## Visuals
| Airport Luggage |
| :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| ![Conveyor Belt at Airport Suitcases Cropping using Ultralytics YOLO11](https://github.com/ultralytics/docs/releases/download/0/suitcases-cropping-airport-conveyor-belt.avif) |
| Suitcases Cropping at airport conveyor belt using Ultralytics YOLO11 |
!!! example "Object Cropping using YOLO11 Example"
=== "Object Cropping"
```python
import os

import cv2

from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors

model = YOLO("yolo11n.pt")
names = model.names

cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"

crop_dir_name = "ultralytics_crop"
if not os.path.exists(crop_dir_name):
    os.mkdir(crop_dir_name)

idx = 0
while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Video frame is empty or video processing has been successfully completed.")
        break

    results = model.predict(im0, show=False)
    boxes = results[0].boxes.xyxy.cpu().tolist()
    clss = results[0].boxes.cls.cpu().tolist()
    annotator = Annotator(im0, line_width=2, example=names)

    if boxes is not None:
        for box, cls in zip(boxes, clss):
            idx += 1
            annotator.box_label(box, color=colors(int(cls), True), label=names[int(cls)])

            # Crop the detected object and save it to disk
            crop_obj = im0[int(box[1]) : int(box[3]), int(box[0]) : int(box[2])]
            cv2.imwrite(os.path.join(crop_dir_name, str(idx) + ".png"), crop_obj)

    cv2.imshow("ultralytics", im0)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```
## FAQ
### What is object cropping in Ultralytics YOLO11 and how does it work?
Object cropping using [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics) involves isolating and extracting specific objects from an image or video based on YOLO11's detection capabilities. This process allows for focused analysis, reduced data volume, and enhanced [precision](https://www.ultralytics.com/glossary/precision) by leveraging YOLO11 to identify objects with high accuracy and crop them accordingly. For an in-depth tutorial, refer to the [object cropping example](#object-cropping-using-ultralytics-yolo11).
### Why should I use Ultralytics YOLO11 for object cropping over other solutions?
Ultralytics YOLO11 stands out due to its precision, speed, and ease of use. It allows detailed and accurate object detection and cropping, essential for [focused analysis](#advantages-of-object-cropping) and applications needing high data integrity. Moreover, YOLO11 integrates seamlessly with tools like OpenVINO and TensorRT for deployments requiring real-time capabilities and optimization on diverse hardware. Explore the benefits in the [guide on model export](../modes/export.md).
### How can I reduce the data volume of my dataset using object cropping?
By using Ultralytics YOLO11 to crop only relevant objects from your images or videos, you can significantly reduce the data size, making it more efficient for storage and processing. This process involves training the model to detect specific objects and then using the results to crop and save these portions only. For more information on exploiting Ultralytics YOLO11's capabilities, visit our [quickstart guide](../quickstart.md).
### Can I use Ultralytics YOLO11 for real-time video analysis and object cropping?
Yes, Ultralytics YOLO11 can process real-time video feeds to detect and crop objects dynamically. The model's high-speed inference capabilities make it ideal for real-time applications such as surveillance, sports analysis, and automated inspection systems. Check out the [tracking and prediction modes](../modes/predict.md) to understand how to implement real-time processing.
### What are the hardware requirements for efficiently running YOLO11 for object cropping?
Ultralytics YOLO11 is optimized for both CPU and GPU environments, but to achieve optimal performance, especially for real-time or high-volume inference, a dedicated GPU (e.g., NVIDIA Tesla, RTX series) is recommended. For deployment on lightweight devices, consider using CoreML for iOS or TFLite for Android. More details on supported devices and formats can be found in our [model deployment options](../guides/model-deployment-options.md).

---
comments: true
description: Optimize parking spaces and enhance safety with Ultralytics YOLO11. Explore real-time vehicle detection and smart parking solutions.
keywords: parking management, YOLO11, Ultralytics, vehicle detection, real-time tracking, parking lot optimization, smart parking
---
# Parking Management using Ultralytics YOLO11 🚀
## What is a Parking Management System?
Parking management with [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics/) ensures efficient and safe parking by organizing spaces and monitoring availability. YOLO11 can improve parking lot management through real-time vehicle detection and insights into parking occupancy.
<p align="center">
<br>
@ -18,21 +18,21 @@ Parking management with [Ultralytics YOLOv8](https://github.com/ultralytics/ultr
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> How to Implement Parking Management Using Ultralytics YOLOv8 🚀
<strong>Watch:</strong> How to Implement Parking Management Using Ultralytics YOLO 🚀
</p>
## Advantages of Parking Management System
- **Efficiency**: Parking lot management optimizes the use of parking spaces and reduces congestion.
- **Safety and Security**: Parking management using YOLOv8 improves the safety of both people and vehicles through surveillance and security measures.
- **Reduced Emissions**: Parking management using YOLOv8 manages traffic flow to minimize idle time and emissions in parking lots.
- **Safety and Security**: Parking management using YOLO11 improves the safety of both people and vehicles through surveillance and security measures.
- **Reduced Emissions**: Parking management using YOLO11 manages traffic flow to minimize idle time and emissions in parking lots.
## Real World Applications
| Parking Management System | Parking Management System |
| :----------------------------------------------------------------------------------------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| ![Parking lots Analytics Using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/parking-management-aerial-view-ultralytics-yolov8.avif) | ![Parking management top view using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/parking-management-top-view-ultralytics-yolov8.avif) |
| Parking management Aerial View using Ultralytics YOLOv8 | Parking management Top View using Ultralytics YOLOv8 |
| ![Parking lots Analytics Using Ultralytics YOLO11](https://github.com/ultralytics/docs/releases/download/0/parking-management-aerial-view-ultralytics-yolov8.avif) | ![Parking management top view using Ultralytics YOLO11](https://github.com/ultralytics/docs/releases/download/0/parking-management-top-view-ultralytics-yolov8.avif) |
| Parking management Aerial View using Ultralytics YOLO11 | Parking management Top View using Ultralytics YOLO11 |
## Parking Management System Code Workflow
@ -49,7 +49,7 @@ Parking management with [Ultralytics YOLOv8](https://github.com/ultralytics/ultr
Maximum supported image size: 1920 × 1080
!!! example "Parking slots Annotator Ultralytics YOLOv8"
!!! example "Parking slots Annotator Ultralytics YOLO11"
=== "Parking Annotator"
@ -61,11 +61,11 @@ Parking management with [Ultralytics YOLOv8](https://github.com/ultralytics/ultr
- After defining the parking areas with polygons, click `save` to store a JSON file with the data in your working directory.
![Ultralytics YOLOv8 Points Selection Demo](https://github.com/ultralytics/docs/releases/download/0/ultralytics-yolov8-points-selection-demo.avif)
![Ultralytics YOLO11 Points Selection Demo](https://github.com/ultralytics/docs/releases/download/0/ultralytics-yolov8-points-selection-demo.avif)
### Python Code for Parking Management
!!! example "Parking management using YOLOv8 Example"
!!! example "Parking management using YOLO11 Example"
=== "Parking Management"
@ -84,7 +84,7 @@ Parking management with [Ultralytics YOLOv8](https://github.com/ultralytics/ultr
# Initialize parking management object
parking_manager = solutions.ParkingManagement(
model="yolov8n.pt", # path to model file
model="yolo11n.pt", # path to model file
json_file="bounding_boxes.json", # path to parking annotations file
)
@ -104,7 +104,7 @@ Parking management with [Ultralytics YOLOv8](https://github.com/ultralytics/ultr
| Name | Type | Default | Description |
| ------------------------ | ------- | ------------- | -------------------------------------------------------------- |
| `model` | `str` | `None` | Path to the YOLOv8 model. |
| `model` | `str` | `None` | Path to the YOLO11 model. |
| `json_file`              | `str`   | `None`        | Path to the JSON file containing all parking coordinates data. |
| `occupied_region_color` | `tuple` | `(0, 0, 255)` | RGB color for occupied regions. |
| `available_region_color` | `tuple` | `(0, 255, 0)` | RGB color for available regions. |
@ -115,33 +115,33 @@ Parking management with [Ultralytics YOLOv8](https://github.com/ultralytics/ultr
## FAQ
### How does Ultralytics YOLOv8 enhance parking management systems?
### How does Ultralytics YOLO11 enhance parking management systems?
Ultralytics YOLOv8 greatly enhances parking management systems by providing **real-time vehicle detection** and monitoring. This results in optimized usage of parking spaces, reduced congestion, and improved safety through continuous surveillance. The [Parking Management System](https://github.com/ultralytics/ultralytics) enables efficient traffic flow, minimizing idle times and emissions in parking lots, thereby contributing to environmental sustainability. For further details, refer to the [parking management code workflow](#python-code-for-parking-management).
Ultralytics YOLO11 greatly enhances parking management systems by providing **real-time vehicle detection** and monitoring. This results in optimized usage of parking spaces, reduced congestion, and improved safety through continuous surveillance. The [Parking Management System](https://github.com/ultralytics/ultralytics) enables efficient traffic flow, minimizing idle times and emissions in parking lots, thereby contributing to environmental sustainability. For further details, refer to the [parking management code workflow](#python-code-for-parking-management).
### What are the benefits of using Ultralytics YOLOv8 for smart parking?
### What are the benefits of using Ultralytics YOLO11 for smart parking?
Using Ultralytics YOLOv8 for smart parking yields numerous benefits:
Using Ultralytics YOLO11 for smart parking yields numerous benefits:
- **Efficiency**: Optimizes the use of parking spaces and decreases congestion.
- **Safety and Security**: Enhances surveillance and ensures the safety of vehicles and pedestrians.
- **Environmental Impact**: Helps in reducing emissions by minimizing vehicle idle times. More details on the advantages can be seen [here](#advantages-of-parking-management-system).
### How can I define parking spaces using Ultralytics YOLOv8?
### How can I define parking spaces using Ultralytics YOLO11?
Defining parking spaces is straightforward with Ultralytics YOLOv8:
Defining parking spaces is straightforward with Ultralytics YOLO11:
1. Capture a frame from a video or camera stream (a minimal capture sketch follows this list).
2. Use the provided code to launch a GUI for selecting an image and drawing polygons to define parking spaces.
3. Save the labeled data in JSON format for further processing. For comprehensive instructions, check the [selection of points](#selection-of-points) section.
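For step 1, here is a minimal frame-capture sketch with OpenCV (the video path and output filename are placeholders):

```python
import cv2

# Grab one frame from the parking-lot footage to annotate
cap = cv2.VideoCapture("parking_lot.mp4")
success, frame = cap.read()
cap.release()
if success:
    cv2.imwrite("first_frame.png", frame)  # open this image in the annotator GUI
```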
### Can I customize the YOLOv8 model for specific parking management needs?
### Can I customize the YOLO11 model for specific parking management needs?
Yes, Ultralytics YOLOv8 allows customization for specific parking management needs. You can adjust parameters such as the **occupied and available region colors**, margins for text display, and much more. Utilizing the `ParkingManagement` class's [optional arguments](#optional-arguments-parkingmanagement), you can tailor the model to suit your particular requirements, ensuring maximum efficiency and effectiveness.
Yes, Ultralytics YOLO11 allows customization for specific parking management needs. You can adjust parameters such as the **occupied and available region colors**, margins for text display, and much more. Utilizing the `ParkingManagement` class's [optional arguments](#optional-arguments-parkingmanagement), you can tailor the model to suit your particular requirements, ensuring maximum efficiency and effectiveness.
### What are some real-world applications of Ultralytics YOLOv8 in parking lot management?
### What are some real-world applications of Ultralytics YOLO11 in parking lot management?
Ultralytics YOLOv8 is utilized in various real-world applications for parking lot management, including:
Ultralytics YOLO11 is utilized in various real-world applications for parking lot management, including:
- **Parking Space Detection**: Accurately identifying available and occupied spaces.
- **Surveillance**: Enhancing security through real-time monitoring.

@ -1,7 +1,7 @@
---
comments: true
description: Learn essential data preprocessing techniques for annotated computer vision data, including resizing, normalizing, augmenting, and splitting datasets for optimal model training.
keywords: data preprocessing, computer vision, image resizing, normalization, data augmentation, training dataset, validation dataset, test dataset, YOLOv8
keywords: data preprocessing, computer vision, image resizing, normalization, data augmentation, training dataset, validation dataset, test dataset, YOLO11
---
# Data Preprocessing Techniques for Annotated [Computer Vision](https://www.ultralytics.com/glossary/computer-vision-cv) Data
@ -36,7 +36,7 @@ To make resizing a simpler task, you can use the following tools:
- **[OpenCV](https://www.ultralytics.com/glossary/opencv)**: A popular computer vision library with extensive functions for image processing.
- **PIL (Pillow)**: A Python Imaging Library for opening, manipulating, and saving image files.
With respect to YOLOv8, the `imgsz` parameter during [model training](../modes/train.md) allows for flexible input sizes. When set to a specific size, such as 640, the model will resize input images so their largest dimension is 640 pixels while maintaining the original aspect ratio.
With respect to YOLO11, the `imgsz` parameter during [model training](../modes/train.md) allows for flexible input sizes. When set to a specific size, such as 640, the model will resize input images so their largest dimension is 640 pixels while maintaining the original aspect ratio.
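As a quick illustration (a minimal sketch; the nano model and `coco8.yaml` dataset are placeholder choices):

```python
from ultralytics import YOLO

# Longest image side is resized to 640 px at train time via `imgsz`
model = YOLO("yolo11n.pt")
model.train(data="coco8.yaml", imgsz=640, epochs=3)
```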
By evaluating your model's and dataset's specific needs, you can determine whether resizing is a necessary preprocessing step or if your model can efficiently handle images of varying sizes.
@ -47,7 +47,7 @@ Another preprocessing technique is normalization. Normalization scales the pixel
- **Min-Max Scaling**: Scales pixel values to a range of 0 to 1.
- **Z-Score Normalization**: Scales pixel values based on their mean and standard deviation.
With respect to YOLOv8, normalization is seamlessly handled as part of its preprocessing pipeline during model training. YOLOv8 automatically performs several preprocessing steps, including conversion to RGB, scaling pixel values to the range [0, 1], and normalization using predefined mean and standard deviation values.
With respect to YOLO11, normalization is seamlessly handled as part of its preprocessing pipeline during model training. YOLO11 automatically performs several preprocessing steps, including conversion to RGB, scaling pixel values to the range [0, 1], and normalization using predefined mean and standard deviation values.
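For intuition, here is what the two schemes look like on a raw image array (a NumPy sketch with dummy data):

```python
import numpy as np

# Dummy 8-bit image standing in for a real photo
img = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8).astype(np.float32)

img_minmax = img / 255.0  # min-max scaling to [0, 1]
img_zscore = (img - img.mean()) / (img.std() + 1e-8)  # z-score: zero mean, unit variance
```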
### Splitting the Dataset
@ -76,9 +76,9 @@ Common augmentation techniques include flipping, rotation, scaling, and color ad
<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/overview-of-data-augmentations.avif" alt="Overview of Data Augmentations">
</p>
With respect to YOLOv8, you can [augment your custom dataset](../modes/train.md) by modifying the dataset configuration file, a .yaml file. In this file, you can add an augmentation section with parameters that specify how you want to augment your data.
With respect to YOLO11, you can [augment your custom dataset](../modes/train.md) by modifying the dataset configuration file, a .yaml file. In this file, you can add an augmentation section with parameters that specify how you want to augment your data.
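Augmentation strengths can also be passed directly as training arguments (a sketch; the parameter names follow Ultralytics train settings, and the values are examples only):

```python
from ultralytics import YOLO

# Horizontal flips half the time, up to ±10° rotation, mild hue jitter
model = YOLO("yolo11n.pt")
model.train(data="coco8.yaml", epochs=3, fliplr=0.5, degrees=10.0, hsv_h=0.015)
```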
The [Ultralytics YOLOv8 repository](https://github.com/ultralytics/ultralytics/tree/main) supports a wide range of data augmentations. You can apply various transformations such as:
The [Ultralytics YOLO11 repository](https://github.com/ultralytics/ultralytics/tree/main) supports a wide range of data augmentations. You can apply various transformations such as:
- Random Crops
- Flipping: Images can be flipped horizontally or vertically.
@ -89,12 +89,12 @@ Also, you can adjust the intensity of these augmentation techniques through spec
## A Case Study of Preprocessing
Consider a project aimed at developing a model to detect and classify different types of vehicles in traffic images using YOLOv8. We've collected traffic images and annotated them with bounding boxes and labels.
Consider a project aimed at developing a model to detect and classify different types of vehicles in traffic images using YOLO11. We've collected traffic images and annotated them with bounding boxes and labels.
Here's what each step of preprocessing would look like for this project:
- Resizing Images: Since YOLOv8 handles flexible input sizes and performs resizing automatically, manual resizing is not required. The model will adjust the image size according to the specified `imgsz` parameter during training.
- Normalizing Pixel Values: YOLOv8 automatically normalizes pixel values to a range of 0 to 1 during preprocessing, so no manual normalization is required.
- Resizing Images: Since YOLO11 handles flexible input sizes and performs resizing automatically, manual resizing is not required. The model will adjust the image size according to the specified `imgsz` parameter during training.
- Normalizing Pixel Values: YOLO11 automatically normalizes pixel values to a range of 0 to 1 during preprocessing, so no manual normalization is required.
- Splitting the Dataset: Divide the dataset into training (70%), validation (20%), and test (10%) sets using tools like scikit-learn (see the split sketch after this list).
- [Data Augmentation](https://www.ultralytics.com/glossary/data-augmentation): Modify the dataset configuration file (.yaml) to include data augmentation techniques such as random crops, horizontal flips, and brightness adjustments.
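A split along those ratios with scikit-learn might look like this (a sketch; the path list is a stand-in for your annotated images):

```python
from sklearn.model_selection import train_test_split

image_paths = [f"images/img_{i}.jpg" for i in range(1000)]  # placeholder dataset

# 70% train, then split the remaining 30% into 20% val / 10% test
train_paths, rest = train_test_split(image_paths, test_size=0.30, random_state=42)
val_paths, test_paths = train_test_split(rest, test_size=1 / 3, random_state=42)

print(len(train_paths), len(val_paths), len(test_paths))  # 700 200 100
```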
@ -132,12 +132,12 @@ Having discussions about your project with other computer vision enthusiasts can
### Channels to Connect with the Community
- **GitHub Issues:** Visit the YOLOv8 GitHub repository and use the [Issues tab](https://github.com/ultralytics/ultralytics/issues) to raise questions, report bugs, and suggest features. The community and maintainers are there to help with any issues you face.
- **GitHub Issues:** Visit the YOLO11 GitHub repository and use the [Issues tab](https://github.com/ultralytics/ultralytics/issues) to raise questions, report bugs, and suggest features. The community and maintainers are there to help with any issues you face.
- **Ultralytics Discord Server:** Join the [Ultralytics Discord server](https://discord.com/invite/ultralytics) to connect with other users and developers, get support, share knowledge, and brainstorm ideas.
### Official Documentation
- **Ultralytics YOLOv8 Documentation:** Refer to the [official YOLOv8 documentation](./index.md) for thorough guides and valuable insights on numerous computer vision tasks and projects.
- **Ultralytics YOLO11 Documentation:** Refer to the [official YOLO11 documentation](./index.md) for thorough guides and valuable insights on numerous computer vision tasks and projects.
## Your Dataset Is Ready!
@ -151,7 +151,7 @@ Data preprocessing is essential in computer vision projects because it ensures t
### How can I use Ultralytics YOLO for data augmentation?
For data augmentation with Ultralytics YOLOv8, you need to modify the dataset configuration file (.yaml). In this file, you can specify various augmentation techniques such as random crops, horizontal flips, and brightness adjustments. This can be effectively done using the training configurations [explained here](../modes/train.md). Data augmentation helps create a more robust dataset, reduce [overfitting](https://www.ultralytics.com/glossary/overfitting), and improve model generalization.
For data augmentation with Ultralytics YOLO11, you need to modify the dataset configuration file (.yaml). In this file, you can specify various augmentation techniques such as random crops, horizontal flips, and brightness adjustments. This can be effectively done using the training configurations [explained here](../modes/train.md). Data augmentation helps create a more robust dataset, reduce [overfitting](https://www.ultralytics.com/glossary/overfitting), and improve model generalization.
### What are the best data normalization techniques for computer vision data?
@ -160,12 +160,12 @@ Normalization scales pixel values to a standard range for faster convergence and
- **Min-Max Scaling**: Scales pixel values to a range of 0 to 1.
- **Z-Score Normalization**: Scales pixel values based on their mean and standard deviation.
For YOLOv8, normalization is handled automatically, including conversion to RGB and pixel value scaling. Learn more about it in the [model training section](../modes/train.md).
For YOLO11, normalization is handled automatically, including conversion to RGB and pixel value scaling. Learn more about it in the [model training section](../modes/train.md).
### How should I split my annotated dataset for training?
To split your dataset, a common practice is to divide it into 70% for training, 20% for validation, and 10% for testing. It is important to maintain the data distribution of classes across these splits and avoid data leakage by performing augmentation only on the training set. Use tools like scikit-learn or [TensorFlow](https://www.ultralytics.com/glossary/tensorflow) for efficient dataset splitting. See the detailed guide on [dataset preparation](../guides/data-collection-and-annotation.md).
### Can I handle varying image sizes in YOLOv8 without manual resizing?
### Can I handle varying image sizes in YOLO11 without manual resizing?
Yes, Ultralytics YOLOv8 can handle varying image sizes through the `imgsz` parameter during model training. This parameter ensures that images are resized so their largest dimension matches the specified size (e.g., 640 pixels), while maintaining the aspect ratio. For more flexible input handling and automatic adjustments, check the [model training section](../modes/train.md).
Yes, Ultralytics YOLO11 can handle varying image sizes through the `imgsz` parameter during model training. This parameter ensures that images are resized so their largest dimension matches the specified size (e.g., 640 pixels), while maintaining the aspect ratio. For more flexible input handling and automatic adjustments, check the [model training section](../modes/train.md).

@ -1,14 +1,14 @@
---
comments: true
description: Learn how to manage and optimize queues using Ultralytics YOLOv8 to reduce wait times and increase efficiency in various real-world applications.
keywords: queue management, YOLOv8, Ultralytics, reduce wait times, efficiency, customer satisfaction, retail, airports, healthcare, banks
description: Learn how to manage and optimize queues using Ultralytics YOLO11 to reduce wait times and increase efficiency in various real-world applications.
keywords: queue management, YOLO11, Ultralytics, reduce wait times, efficiency, customer satisfaction, retail, airports, healthcare, banks
---
# Queue Management using Ultralytics YOLOv8 🚀
# Queue Management using Ultralytics YOLO11 🚀
## What is Queue Management?
Queue management using [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics/) involves organizing and controlling lines of people or vehicles to reduce wait times and enhance efficiency. It's about optimizing queues to improve customer satisfaction and system performance in various settings like retail, banks, airports, and healthcare facilities.
Queue management using [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics/) involves organizing and controlling lines of people or vehicles to reduce wait times and enhance efficiency. It's about optimizing queues to improve customer satisfaction and system performance in various settings like retail, banks, airports, and healthcare facilities.
<p align="center">
<br>
@ -18,7 +18,7 @@ Queue management using [Ultralytics YOLOv8](https://github.com/ultralytics/ultra
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> How to Implement Queue Management with Ultralytics YOLOv8 | Airport and Metro Station
<strong>Watch:</strong> How to Implement Queue Management with Ultralytics YOLO11 | Airport and Metro Station
</p>
## Advantages of Queue Management
@ -30,10 +30,10 @@ Queue management using [Ultralytics YOLOv8](https://github.com/ultralytics/ultra
| Logistics | Retail |
| :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :-----------------------------------------------------------------------------------------------------------------------------------------------------------: |
| ![Queue management at airport ticket counter using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/queue-management-airport-ticket-counter-ultralytics-yolov8.avif) | ![Queue monitoring in crowd using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/queue-monitoring-crowd-ultralytics-yolov8.avif) |
| Queue management at airport ticket counter Using Ultralytics YOLOv8 | Queue monitoring in crowd Ultralytics YOLOv8 |
| ![Queue management at airport ticket counter using Ultralytics YOLO11](https://github.com/ultralytics/docs/releases/download/0/queue-management-airport-ticket-counter-ultralytics-yolov8.avif) | ![Queue monitoring in crowd using Ultralytics YOLO11](https://github.com/ultralytics/docs/releases/download/0/queue-monitoring-crowd-ultralytics-yolov8.avif) |
| Queue management at airport ticket counter Using Ultralytics YOLO11 | Queue monitoring in crowd Ultralytics YOLO11 |
!!! example "Queue Management using YOLOv8 Example"
!!! example "Queue Management using YOLO11 Example"
=== "Queue Manager"
@ -42,7 +42,7 @@ Queue management using [Ultralytics YOLOv8](https://github.com/ultralytics/ultra
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
@ -84,7 +84,7 @@ Queue management using [Ultralytics YOLOv8](https://github.com/ultralytics/ultra
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
@ -135,11 +135,11 @@ Queue management using [Ultralytics YOLOv8](https://github.com/ultralytics/ultra
## FAQ
### How can I use Ultralytics YOLOv8 for real-time queue management?
### How can I use Ultralytics YOLO11 for real-time queue management?
To use Ultralytics YOLOv8 for real-time queue management, you can follow these steps:
To use Ultralytics YOLO11 for real-time queue management, you can follow these steps:
1. Load the YOLOv8 model with `YOLO("yolov8n.pt")`.
1. Load the YOLO11 model with `YOLO("yolo11n.pt")`.
2. Capture the video feed using `cv2.VideoCapture`.
3. Define the region of interest (ROI) for queue management.
4. Process frames to detect objects and manage queues.
@ -151,7 +151,7 @@ import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video.mp4")
queue_region = [(20, 400), (1080, 404), (1080, 360), (20, 360)]
@ -176,9 +176,9 @@ cv2.destroyAllWindows()
Leveraging Ultralytics [HUB](https://docs.ultralytics.com/hub/) can streamline this process by providing a user-friendly platform for deploying and managing your queue management solution.
### What are the key advantages of using Ultralytics YOLOv8 for queue management?
### What are the key advantages of using Ultralytics YOLO11 for queue management?
Using Ultralytics YOLOv8 for queue management offers several benefits:
Using Ultralytics YOLO11 for queue management offers several benefits:
- **Reduced Wait Times:** Efficiently organizes queues, cutting customer wait times and boosting satisfaction.
- **Enhancing Efficiency:** Analyzes queue data to optimize staff deployment and operations, thereby reducing costs.
@ -187,20 +187,20 @@ Using Ultralytics YOLOv8 for queue management offers several benefits:
For more details, explore our [Queue Management](https://docs.ultralytics.com/reference/solutions/queue_management/) solutions.
### Why should I choose Ultralytics YOLOv8 over competitors like [TensorFlow](https://www.ultralytics.com/glossary/tensorflow) or Detectron2 for queue management?
### Why should I choose Ultralytics YOLO11 over competitors like [TensorFlow](https://www.ultralytics.com/glossary/tensorflow) or Detectron2 for queue management?
Ultralytics YOLOv8 has several advantages over TensorFlow and Detectron2 for queue management:
Ultralytics YOLO11 has several advantages over TensorFlow and Detectron2 for queue management:
- **Real-time Performance:** YOLOv8 is known for its real-time detection capabilities, offering faster processing speeds.
- **Real-time Performance:** YOLO11 is known for its real-time detection capabilities, offering faster processing speeds.
- **Ease of Use:** Ultralytics provides a user-friendly experience, from training to deployment, via [Ultralytics HUB](https://docs.ultralytics.com/hub/).
- **Pretrained Models:** Access to a range of pretrained models, minimizing the time needed for setup.
- **Community Support:** Extensive documentation and active community support make problem-solving easier.
Learn how to get started with [Ultralytics YOLO](https://docs.ultralytics.com/quickstart/).
### Can Ultralytics YOLOv8 handle multiple types of queues, such as in airports and retail?
### Can Ultralytics YOLO11 handle multiple types of queues, such as in airports and retail?
Yes, Ultralytics YOLOv8 can manage various types of queues, including those in airports and retail environments. By configuring the QueueManager with specific regions and settings, YOLOv8 can adapt to different queue layouts and densities.
Yes, Ultralytics YOLO11 can manage various types of queues, including those in airports and retail environments. By configuring the QueueManager with specific regions and settings, YOLO11 can adapt to different queue layouts and densities.
Example for airports:
@ -215,9 +215,9 @@ queue_airport = solutions.QueueManager(
For more information on diverse applications, check out our [Real World Applications](#real-world-applications) section.
### What are some real-world applications of Ultralytics YOLOv8 in queue management?
### What are some real-world applications of Ultralytics YOLO11 in queue management?
Ultralytics YOLOv8 is used in various real-world applications for queue management:
Ultralytics YOLO11 is used in various real-world applications for queue management:
- **Retail:** Monitors checkout lines to reduce wait times and improve customer satisfaction.
- **Airports:** Manages queues at ticket counters and security checkpoints for a smoother passenger experience.

@ -1,12 +1,12 @@
---
comments: true
description: Learn how to implement YOLOv8 with SAHI for sliced inference. Optimize memory usage and enhance detection accuracy for large-scale applications.
keywords: YOLOv8, SAHI, Sliced Inference, Object Detection, Ultralytics, High-resolution Images, Computational Efficiency, Integration Guide
description: Learn how to implement YOLO11 with SAHI for sliced inference. Optimize memory usage and enhance detection accuracy for large-scale applications.
keywords: YOLO11, SAHI, Sliced Inference, Object Detection, Ultralytics, High-resolution Images, Computational Efficiency, Integration Guide
---
# Ultralytics Docs: Using YOLOv8 with SAHI for Sliced Inference
# Ultralytics Docs: Using YOLO11 with SAHI for Sliced Inference
Welcome to the Ultralytics documentation on how to use YOLOv8 with [SAHI](https://github.com/obss/sahi) (Slicing Aided Hyper Inference). This comprehensive guide aims to furnish you with all the essential knowledge you'll need to implement SAHI alongside YOLOv8. We'll deep-dive into what SAHI is, why sliced inference is critical for large-scale applications, and how to integrate these functionalities with YOLOv8 for enhanced [object detection](https://www.ultralytics.com/glossary/object-detection) performance.
Welcome to the Ultralytics documentation on how to use YOLO11 with [SAHI](https://github.com/obss/sahi) (Slicing Aided Hyper Inference). This comprehensive guide aims to furnish you with all the essential knowledge you'll need to implement SAHI alongside YOLO11. We'll deep-dive into what SAHI is, why sliced inference is critical for large-scale applications, and how to integrate these functionalities with YOLO11 for enhanced [object detection](https://www.ultralytics.com/glossary/object-detection) performance.
<p align="center">
<img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/sahi-sliced-inference-overview.avif" alt="SAHI Sliced Inference Overview">
@ -24,7 +24,7 @@ SAHI (Slicing Aided Hyper Inference) is an innovative library designed to optimi
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Inference with SAHI (Slicing Aided Hyper Inference) using Ultralytics YOLOv8
<strong>Watch:</strong> Inference with SAHI (Slicing Aided Hyper Inference) using Ultralytics YOLO11
</p>
### Key Features of SAHI
@ -47,12 +47,12 @@ Sliced Inference refers to the practice of subdividing a large or high-resolutio
<table border="0">
<tr>
<th>YOLOv8 without SAHI</th>
<th>YOLOv8 with SAHI</th>
<th>YOLO11 without SAHI</th>
<th>YOLO11 with SAHI</th>
</tr>
<tr>
<td><img src="https://github.com/ultralytics/docs/releases/download/0/yolov8-without-sahi.avif" alt="YOLOv8 without SAHI" width="640"></td>
<td><img src="https://github.com/ultralytics/docs/releases/download/0/yolov8-with-sahi.avif" alt="YOLOv8 with SAHI" width="640"></td>
<td><img src="https://github.com/ultralytics/docs/releases/download/0/yolov8-without-sahi.avif" alt="YOLO11 without SAHI" width="640"></td>
<td><img src="https://github.com/ultralytics/docs/releases/download/0/yolov8-with-sahi.avif" alt="YOLO11 with SAHI" width="640"></td>
</tr>
</table>
@ -68,15 +68,15 @@ pip install -U ultralytics sahi
### Import Modules and Download Resources
Here's how to import the necessary modules and download a YOLOv8 model and some test images:
Here's how to import the necessary modules and download a YOLO11 model and some test images:
```python
from sahi.utils.file import download_from_url
from sahi.utils.yolov8 import download_yolov8s_model
# Download YOLOv8 model
yolov8_model_path = "models/yolov8s.pt"
download_yolov8s_model(yolov8_model_path)
# Download detection model weights (note: SAHI's helper fetches YOLOv8s; see below)
model_path = "models/yolov8s.pt"
download_yolov8s_model(model_path)
# Download test images
download_from_url(
@ -89,11 +89,11 @@ download_from_url(
)
```
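Note: SAHI's bundled `download_yolov8s_model` helper downloads YOLOv8s weights. To run a YOLO11 checkpoint instead, download `yolo11s.pt` yourself and point `model_path` at it.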
## Standard Inference with YOLOv8
## Standard Inference with YOLO11
### Instantiate the Model
You can instantiate a YOLOv8 model for object detection like this:
You can instantiate a YOLO11 model for object detection like this:
```python
from sahi import AutoDetectionModel
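# A typical instantiation (a sketch; the arguments follow SAHI's AutoDetectionModel API,
# and model_path refers to the weights downloaded above)
detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov8",  # SAHI's identifier for the Ultralytics YOLO backend
    model_path=model_path,
    confidence_threshold=0.3,
    device="cpu",  # or 'cuda:0'
)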
@ -129,7 +129,7 @@ result.export_visuals(export_dir="demo_data/")
Image("demo_data/prediction_visual.png")
```
## Sliced Inference with YOLOv8
## Sliced Inference with YOLO11
Perform sliced inference by specifying the slice dimensions and overlap ratios:
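A representative call looks like this (a sketch built on SAHI's `get_sliced_prediction`; the image path and slice settings are illustrative):

```python
from sahi.predict import get_sliced_prediction

# Tile the image into 256x256 slices with 20% overlap, run the detector on each
# slice, then merge the per-slice detections back into full-image coordinates
result = get_sliced_prediction(
    "demo_data/small-vehicles1.jpeg",
    detection_model,
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
```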
@ -170,7 +170,7 @@ from sahi.predict import predict
predict(
model_type="yolov8",
model_path="path/to/yolov8n.pt",
model_path="path/to/yolo11n.pt",
model_device="cpu", # or 'cuda:0'
model_confidence_threshold=0.4,
source="path/to/dir",
@ -181,7 +181,7 @@ predict(
)
```
That's it! Now you're equipped to use YOLOv8 with SAHI for both standard and sliced inference.
That's it! Now you're equipped to use YOLO11 with SAHI for both standard and sliced inference.
## Citations and Acknowledgments
@ -206,23 +206,23 @@ We extend our thanks to the SAHI research group for creating and maintaining thi
## FAQ
### How can I integrate YOLOv8 with SAHI for sliced inference in object detection?
### How can I integrate YOLO11 with SAHI for sliced inference in object detection?
Integrating Ultralytics YOLOv8 with SAHI (Slicing Aided Hyper Inference) for sliced inference optimizes your object detection tasks on high-resolution images by partitioning them into manageable slices. This approach improves memory usage and ensures high detection accuracy. To get started, you need to install the ultralytics and sahi libraries:
Integrating Ultralytics YOLO11 with SAHI (Slicing Aided Hyper Inference) for sliced inference optimizes your object detection tasks on high-resolution images by partitioning them into manageable slices. This approach improves memory usage and ensures high detection accuracy. To get started, you need to install the ultralytics and sahi libraries:
```bash
pip install -U ultralytics sahi
```
Then, download a YOLOv8 model and test images:
Then, download a YOLO11 model and test images:
```python
from sahi.utils.file import download_from_url
from sahi.utils.yolov8 import download_yolov8s_model
# Download YOLOv8 model
yolov8_model_path = "models/yolov8s.pt"
download_yolov8s_model(yolov8_model_path)
# Download detection model weights (note: SAHI's helper fetches YOLOv8s)
model_path = "models/yolov8s.pt"
download_yolov8s_model(model_path)
# Download test images
download_from_url(
@ -231,11 +231,11 @@ download_from_url(
)
```
For more detailed instructions, refer to our [Sliced Inference guide](#sliced-inference-with-yolov8).
For more detailed instructions, refer to our [Sliced Inference guide](#sliced-inference-with-yolo11).
### Why should I use SAHI with YOLOv8 for object detection on large images?
### Why should I use SAHI with YOLO11 for object detection on large images?
Using SAHI with Ultralytics YOLOv8 for object detection on large images offers several benefits:
Using SAHI with Ultralytics YOLO11 for object detection on large images offers several benefits:
- **Reduced Computational Burden**: Smaller slices are faster to process and consume less memory, making it feasible to run high-quality detections on hardware with limited resources.
- **Maintained Detection Accuracy**: SAHI uses intelligent algorithms to merge overlapping boxes, preserving the detection quality.
@ -243,9 +243,9 @@ Using SAHI with Ultralytics YOLOv8 for object detection on large images offers s
Learn more about the [benefits of sliced inference](#benefits-of-sliced-inference) in our documentation.
### Can I visualize prediction results when using YOLOv8 with SAHI?
### Can I visualize prediction results when using YOLO11 with SAHI?
Yes, you can visualize prediction results when using YOLOv8 with SAHI. Here's how you can export and visualize the results:
Yes, you can visualize prediction results when using YOLO11 with SAHI. Here's how you can export and visualize the results:
```python
from IPython.display import Image
@ -256,9 +256,9 @@ Image("demo_data/prediction_visual.png")
This command will save the visualized predictions to the specified directory, and you can then load the image to view it in your notebook or application. For a detailed guide, check out the [Standard Inference section](#visualize-results).
### What features does SAHI offer for improving YOLOv8 object detection?
### What features does SAHI offer for improving YOLO11 object detection?
SAHI (Slicing Aided Hyper Inference) offers several features that complement Ultralytics YOLOv8 for object detection:
SAHI (Slicing Aided Hyper Inference) offers several features that complement Ultralytics YOLO11 for object detection:
- **Seamless Integration**: SAHI easily integrates with YOLO models, requiring minimal code adjustments.
- **Resource Efficiency**: It partitions large images into smaller slices, which optimizes memory usage and speed.
@ -266,9 +266,9 @@ SAHI (Slicing Aided Hyper Inference) offers several features that complement Ult
For a deeper understanding, read about SAHI's [key features](#key-features-of-sahi).
### How do I handle large-scale inference projects using YOLOv8 and SAHI?
### How do I handle large-scale inference projects using YOLO11 and SAHI?
To handle large-scale inference projects using YOLOv8 and SAHI, follow these best practices:
To handle large-scale inference projects using YOLO11 and SAHI, follow these best practices:
1. **Install Required Libraries**: Ensure that you have the latest versions of ultralytics and sahi.
2. **Configure Sliced Inference**: Determine the optimal slice dimensions and overlap ratios for your specific project.
@ -281,7 +281,7 @@ from sahi.predict import predict
predict(
model_type="yolov8",
model_path="path/to/yolov8n.pt",
model_path="path/to/yolo11n.pt",
model_device="cpu", # or 'cuda:0'
model_confidence_threshold=0.4,
source="path/to/dir",

@ -1,17 +1,17 @@
---
comments: true
description: Enhance your security with real-time object detection using Ultralytics YOLOv8. Reduce false positives and integrate seamlessly with existing systems.
keywords: YOLOv8, Security Alarm System, real-time object detection, Ultralytics, computer vision, integration, false positives
description: Enhance your security with real-time object detection using Ultralytics YOLO11. Reduce false positives and integrate seamlessly with existing systems.
keywords: YOLO11, Security Alarm System, real-time object detection, Ultralytics, computer vision, integration, false positives
---
# Security Alarm System Project Using Ultralytics YOLOv8
# Security Alarm System Project Using Ultralytics YOLO11
<img src="https://github.com/ultralytics/docs/releases/download/0/security-alarm-system-ultralytics-yolov8.avif" alt="Security Alarm System">
The Security Alarm System Project utilizing Ultralytics YOLOv8 integrates advanced [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) capabilities to enhance security measures. YOLOv8, developed by Ultralytics, provides real-time object detection, allowing the system to identify and respond to potential security threats promptly. This project offers several advantages:
The Security Alarm System Project utilizing Ultralytics YOLO11 integrates advanced [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) capabilities to enhance security measures. YOLO11, developed by Ultralytics, provides real-time object detection, allowing the system to identify and respond to potential security threats promptly. This project offers several advantages:
- **Real-time Detection:** YOLOv8's efficiency enables the Security Alarm System to detect and respond to security incidents in real-time, minimizing response time.
- **[Accuracy](https://www.ultralytics.com/glossary/accuracy):** YOLOv8 is known for its accuracy in object detection, reducing false positives and enhancing the reliability of the security alarm system.
- **Real-time Detection:** YOLO11's efficiency enables the Security Alarm System to detect and respond to security incidents in real-time, minimizing response time.
- **[Accuracy](https://www.ultralytics.com/glossary/accuracy):** YOLO11 is known for its accuracy in object detection, reducing false positives and enhancing the reliability of the security alarm system.
- **Integration Capabilities:** The project can be seamlessly integrated with existing security infrastructure, providing an upgraded layer of intelligent surveillance.
<p align="center">
@ -22,7 +22,7 @@ The Security Alarm System Project utilizing Ultralytics YOLOv8 integrates advanc
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Security Alarm System Project with Ultralytics YOLOv8 [Object Detection](https://www.ultralytics.com/glossary/object-detection)
<strong>Watch:</strong> Security Alarm System Project with Ultralytics YOLO11 <a href="https://www.ultralytics.com/glossary/object-detection">Object Detection</a>
</p>
### Code
@ -90,7 +90,7 @@ class ObjectDetection:
self.email_sent = False
# model information
self.model = YOLO("yolov8n.pt")
self.model = YOLO("yolo11n.pt")
# visual information
self.annotator = None
@ -155,7 +155,7 @@ class ObjectDetection:
self.email_sent = False
self.display_fps(im0)
cv2.imshow("YOLOv8 Detection", im0)
cv2.imshow("YOLO11 Detection", im0)
frame_count += 1
if cv2.waitKey(5) & 0xFF == 27:
break
@ -179,22 +179,22 @@ That's it! When you execute the code, you'll receive a single notification on yo
## FAQ
### How does Ultralytics YOLOv8 improve the accuracy of a security alarm system?
### How does Ultralytics YOLO11 improve the accuracy of a security alarm system?
Ultralytics YOLOv8 enhances security alarm systems by delivering high-accuracy, real-time object detection. Its advanced algorithms significantly reduce false positives, ensuring that the system only responds to genuine threats. This increased reliability can be seamlessly integrated with existing security infrastructure, upgrading the overall surveillance quality.
Ultralytics YOLO11 enhances security alarm systems by delivering high-accuracy, real-time object detection. Its advanced algorithms significantly reduce false positives, ensuring that the system only responds to genuine threats. This increased reliability can be seamlessly integrated with existing security infrastructure, upgrading the overall surveillance quality.
### Can I integrate Ultralytics YOLOv8 with my existing security infrastructure?
### Can I integrate Ultralytics YOLO11 with my existing security infrastructure?
Yes, Ultralytics YOLOv8 can be seamlessly integrated with your existing security infrastructure. The system supports various modes and provides flexibility for customization, allowing you to enhance your existing setup with advanced object detection capabilities. For detailed instructions on integrating YOLOv8 in your projects, visit the [integration section](https://docs.ultralytics.com/integrations/).
Yes, Ultralytics YOLO11 can be seamlessly integrated with your existing security infrastructure. The system supports various modes and provides flexibility for customization, allowing you to enhance your existing setup with advanced object detection capabilities. For detailed instructions on integrating YOLO11 in your projects, visit the [integration section](https://docs.ultralytics.com/integrations/).
### What are the storage requirements for running Ultralytics YOLOv8?
### What are the storage requirements for running Ultralytics YOLO11?
Running Ultralytics YOLOv8 on a standard setup typically requires around 5GB of free disk space. This includes space for storing the YOLOv8 model and any additional dependencies. For cloud-based solutions, Ultralytics HUB offers efficient project management and dataset handling, which can optimize storage needs. Learn more about the [Pro Plan](../hub/pro.md) for enhanced features including extended storage.
Running Ultralytics YOLO11 on a standard setup typically requires around 5GB of free disk space. This includes space for storing the YOLO11 model and any additional dependencies. For cloud-based solutions, Ultralytics HUB offers efficient project management and dataset handling, which can optimize storage needs. Learn more about the [Pro Plan](../hub/pro.md) for enhanced features including extended storage.
### What makes Ultralytics YOLOv8 different from other object detection models like Faster R-CNN or SSD?
### What makes Ultralytics YOLO11 different from other object detection models like Faster R-CNN or SSD?
Ultralytics YOLOv8 provides an edge over models like Faster R-CNN or SSD with its real-time detection capabilities and higher accuracy. Its unique architecture allows it to process images much faster without compromising on [precision](https://www.ultralytics.com/glossary/precision), making it ideal for time-sensitive applications like security alarm systems. For a comprehensive comparison of object detection models, you can explore our [guide](https://docs.ultralytics.com/models/).
Ultralytics YOLO11 provides an edge over models like Faster R-CNN or SSD with its real-time detection capabilities and higher accuracy. Its unique architecture allows it to process images much faster without compromising on [precision](https://www.ultralytics.com/glossary/precision), making it ideal for time-sensitive applications like security alarm systems. For a comprehensive comparison of object detection models, you can explore our [guide](https://docs.ultralytics.com/models/).
### How can I reduce the frequency of false positives in my security system using Ultralytics YOLOv8?
### How can I reduce the frequency of false positives in my security system using Ultralytics YOLO11?
To reduce false positives, ensure your Ultralytics YOLOv8 model is adequately trained with a diverse and well-annotated dataset. Fine-tuning hyperparameters and regularly updating the model with new data can significantly improve detection accuracy. Detailed [hyperparameter tuning](https://www.ultralytics.com/glossary/hyperparameter-tuning) techniques can be found in our [hyperparameter tuning guide](../guides/hyperparameter-tuning.md).
To reduce false positives, ensure your Ultralytics YOLO11 model is adequately trained with a diverse and well-annotated dataset. Fine-tuning hyperparameters and regularly updating the model with new data can significantly improve detection accuracy. Detailed [hyperparameter tuning](https://www.ultralytics.com/glossary/hyperparameter-tuning) techniques can be found in our [hyperparameter tuning guide](../guides/hyperparameter-tuning.md).

@ -1,14 +1,14 @@
---
comments: true
description: Learn how to estimate object speed using Ultralytics YOLOv8 for applications in traffic control, autonomous navigation, and surveillance.
keywords: Ultralytics YOLOv8, speed estimation, object tracking, computer vision, traffic control, autonomous navigation, surveillance, security
description: Learn how to estimate object speed using Ultralytics YOLO11 for applications in traffic control, autonomous navigation, and surveillance.
keywords: Ultralytics YOLO11, speed estimation, object tracking, computer vision, traffic control, autonomous navigation, surveillance, security
---
# Speed Estimation using Ultralytics YOLOv8 🚀
# Speed Estimation using Ultralytics YOLO11 🚀
## What is Speed Estimation?
[Speed estimation](https://www.ultralytics.com/blog/ultralytics-yolov8-for-speed-estimation-in-computer-vision-projects) is the process of calculating the rate of movement of an object within a given context, often employed in [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) applications. Using [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics/) you can now calculate the speed of objects using [object tracking](../modes/track.md) alongside distance and time data, crucial for tasks like traffic monitoring and surveillance. The accuracy of speed estimation directly influences the efficiency and reliability of various applications, making it a key component in the advancement of intelligent systems and real-time decision-making processes.
[Speed estimation](https://www.ultralytics.com/blog/ultralytics-yolov8-for-speed-estimation-in-computer-vision-projects) is the process of calculating the rate of movement of an object within a given context, often employed in [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) applications. Using [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics/) you can now calculate the speed of objects using [object tracking](../modes/track.md) alongside distance and time data, crucial for tasks like traffic monitoring and surveillance. The accuracy of speed estimation directly influences the efficiency and reliability of various applications, making it a key component in the advancement of intelligent systems and real-time decision-making processes.
<p align="center">
<br>
@ -18,12 +18,12 @@ keywords: Ultralytics YOLOv8, speed estimation, object tracking, computer vision
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Speed Estimation using Ultralytics YOLOv8
<strong>Watch:</strong> Speed Estimation using Ultralytics YOLO11
</p>
!!! tip "Check Out Our Blog"
For deeper insights into speed estimation, check out our blog post: [Ultralytics YOLOv8 for Speed Estimation in Computer Vision Projects](https://www.ultralytics.com/blog/ultralytics-yolov8-for-speed-estimation-in-computer-vision-projects)
For deeper insights into speed estimation, check out our blog post: [Ultralytics YOLO11 for Speed Estimation in Computer Vision Projects](https://www.ultralytics.com/blog/ultralytics-yolov8-for-speed-estimation-in-computer-vision-projects)
## Advantages of Speed Estimation
@ -35,10 +35,10 @@ keywords: Ultralytics YOLOv8, speed estimation, object tracking, computer vision
| Transportation | Transportation |
| :------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| ![Speed Estimation on Road using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/speed-estimation-on-road-using-ultralytics-yolov8.avif) | ![Speed Estimation on Bridge using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/speed-estimation-on-bridge-using-ultralytics-yolov8.avif) |
| Speed Estimation on Road using Ultralytics YOLOv8 | Speed Estimation on Bridge using Ultralytics YOLOv8 |
| ![Speed Estimation on Road using Ultralytics YOLO11](https://github.com/ultralytics/docs/releases/download/0/speed-estimation-on-road-using-ultralytics-yolov8.avif) | ![Speed Estimation on Bridge using Ultralytics YOLO11](https://github.com/ultralytics/docs/releases/download/0/speed-estimation-on-bridge-using-ultralytics-yolov8.avif) |
| Speed Estimation on Road using Ultralytics YOLO11 | Speed Estimation on Bridge using Ultralytics YOLO11 |
!!! example "Speed Estimation using YOLOv8 Example"
!!! example "Speed Estimation using YOLO11 Example"
=== "Speed Estimation"
@ -47,7 +47,7 @@ keywords: Ultralytics YOLOv8, speed estimation, object tracking, computer vision
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
names = model.model.names
cap = cv2.VideoCapture("path/to/video/file.mp4")
@ -102,9 +102,9 @@ keywords: Ultralytics YOLOv8, speed estimation, object tracking, computer vision
## FAQ
### How do I estimate object speed using Ultralytics YOLOv8?
### How do I estimate object speed using Ultralytics YOLO11?
Estimating object speed with Ultralytics YOLOv8 involves combining [object detection](https://www.ultralytics.com/glossary/object-detection) and tracking techniques. First, you need to detect objects in each frame using the YOLOv8 model. Then, track these objects across frames to calculate their movement over time. Finally, use the distance traveled by the object between frames and the frame rate to estimate its speed.
Estimating object speed with Ultralytics YOLO11 involves combining [object detection](https://www.ultralytics.com/glossary/object-detection) and tracking techniques. First, you need to detect objects in each frame using the YOLO11 model. Then, track these objects across frames to calculate their movement over time. Finally, use the distance traveled by the object between frames and the frame rate to estimate its speed.
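The underlying arithmetic is simple (a toy calculation with assumed values for scale, displacement, and frame rate):

```python
# All values below are assumed for illustration
meters_per_pixel = 0.05  # calibration: real-world meters per image pixel
displacement_px = 8.0  # tracked object moved 8 px between consecutive frames
fps = 30.0  # video frame rate

speed_mps = displacement_px * meters_per_pixel * fps  # 12.0 m/s
speed_kmh = speed_mps * 3.6  # 43.2 km/h
```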
**Example**:
@ -113,7 +113,7 @@ import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
names = model.model.names
cap = cv2.VideoCapture("path/to/video/file.mp4")
@ -142,43 +142,43 @@ cv2.destroyAllWindows()
For more details, refer to our [official blog post](https://www.ultralytics.com/blog/ultralytics-yolov8-for-speed-estimation-in-computer-vision-projects).
### What are the benefits of using Ultralytics YOLOv8 for speed estimation in traffic management?
### What are the benefits of using Ultralytics YOLO11 for speed estimation in traffic management?
Using Ultralytics YOLOv8 for speed estimation offers significant advantages in traffic management:
Using Ultralytics YOLO11 for speed estimation offers significant advantages in traffic management:
- **Enhanced Safety**: Accurately estimate vehicle speeds to detect over-speeding and improve road safety.
- **Real-Time Monitoring**: Benefit from YOLOv8's real-time object detection capability to monitor traffic flow and congestion effectively.
- **Real-Time Monitoring**: Benefit from YOLO11's real-time object detection capability to monitor traffic flow and congestion effectively.
- **Scalability**: Deploy the model on various hardware setups, from edge devices to servers, ensuring flexible and scalable solutions for large-scale implementations.
For more applications, see [advantages of speed estimation](#advantages-of-speed-estimation).
### Can YOLOv8 be integrated with other AI frameworks like [TensorFlow](https://www.ultralytics.com/glossary/tensorflow) or [PyTorch](https://www.ultralytics.com/glossary/pytorch)?
### Can YOLO11 be integrated with other AI frameworks like [TensorFlow](https://www.ultralytics.com/glossary/tensorflow) or [PyTorch](https://www.ultralytics.com/glossary/pytorch)?
Yes, YOLOv8 can be integrated with other AI frameworks like TensorFlow and PyTorch. Ultralytics provides support for exporting YOLOv8 models to various formats like ONNX, TensorRT, and CoreML, ensuring smooth interoperability with other ML frameworks.
Yes, YOLO11 can be integrated with other AI frameworks like TensorFlow and PyTorch. Ultralytics provides support for exporting YOLO11 models to various formats like ONNX, TensorRT, and CoreML, ensuring smooth interoperability with other ML frameworks.
To export a YOLOv8 model to ONNX format:
To export a YOLO11 model to ONNX format:
```bash
yolo export model=yolov8n.pt format=onnx
yolo export model=yolo11n.pt format=onnx
```
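The same export can be done from Python (a one-line sketch using the Ultralytics API):

```python
from ultralytics import YOLO

# Export the PyTorch checkpoint to ONNX; the .onnx file is written next to the weights
YOLO("yolo11n.pt").export(format="onnx")
```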
Learn more about exporting models in our [guide on export](../modes/export.md).
### How accurate is the speed estimation using Ultralytics YOLOv8?
### How accurate is the speed estimation using Ultralytics YOLO11?
The [accuracy](https://www.ultralytics.com/glossary/accuracy) of speed estimation using Ultralytics YOLOv8 depends on several factors, including the quality of the object tracking, the resolution and frame rate of the video, and environmental variables. While the speed estimator provides reliable estimates, it may not be 100% accurate due to variances in frame processing speed and object occlusion.
The [accuracy](https://www.ultralytics.com/glossary/accuracy) of speed estimation using Ultralytics YOLO11 depends on several factors, including the quality of the object tracking, the resolution and frame rate of the video, and environmental variables. While the speed estimator provides reliable estimates, it may not be 100% accurate due to variances in frame processing speed and object occlusion.
**Note**: Always consider margin of error and validate the estimates with ground truth data when possible.
For further accuracy improvement tips, check the [Arguments `SpeedEstimator` section](#arguments-speedestimator).
### Why choose Ultralytics YOLOv8 over other object detection models like TensorFlow Object Detection API?
### Why choose Ultralytics YOLO11 over other object detection models like TensorFlow Object Detection API?
Ultralytics YOLOv8 offers several advantages over other object detection models, such as the TensorFlow Object Detection API:
Ultralytics YOLO11 offers several advantages over other object detection models, such as the TensorFlow Object Detection API:
- **Real-Time Performance**: YOLOv8 is optimized for real-time detection, providing high speed and accuracy.
- **Ease of Use**: Designed with a user-friendly interface, YOLOv8 simplifies model training and deployment.
- **Real-Time Performance**: YOLO11 is optimized for real-time detection, providing high speed and accuracy.
- **Ease of Use**: Designed with a user-friendly interface, YOLO11 simplifies model training and deployment.
- **Versatility**: Supports multiple tasks, including object detection, segmentation, and pose estimation.
- **Community and Support**: YOLOv8 is backed by an active community and extensive documentation, ensuring developers have the resources they need.
- **Community and Support**: YOLO11 is backed by an active community and extensive documentation, ensuring developers have the resources they need.
For more information on the benefits of YOLO11, explore our detailed [model page](../models/yolov8.md).
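As a concrete taste of the ease-of-use point, a minimal fine-tune-and-predict script (dataset and image paths are placeholders) is only a few lines:

```python
from ultralytics import YOLO

# Fine-tune a pretrained model and run inference in a handful of lines
model = YOLO("yolo11n.pt")
model.train(data="coco8.yaml", epochs=3)  # small sample dataset for demonstration
results = model("path/to/image.jpg")
```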

@ -100,7 +100,7 @@ However, if you choose to collect images or take your own pictures, you'll need
<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/different-types-of-image-annotation.avif" alt="Different Types of Image Annotation">
</p>
[Data collection and annotation](./data-collection-and-annotation.md) can be a time-consuming manual effort. Annotation tools can help make this process easier. Here are some useful open annotation tools: [Label Studio](https://github.com/HumanSignal/label-studio), [CVAT](https://github.com/cvat-ai/cvat), and [Labelme](https://github.com/wkentaro/labelme).
## Step 3: [Data Augmentation](https://www.ultralytics.com/glossary/data-augmentation) and Splitting Your Dataset
@ -166,7 +166,7 @@ Once your model has been thoroughly tested, it's time to deploy it. Deployment i
- **Setting Up the Environment:** Configure the necessary infrastructure for your chosen deployment option, whether it's cloud-based (AWS, Google Cloud, Azure) or edge-based (local devices, IoT).
- **[Exporting the Model](../modes/export.md):** Export your model to the appropriate format (e.g., ONNX, TensorRT, CoreML for YOLO11) to ensure compatibility with your deployment platform.
- **Deploying the Model:** Deploy the model by setting up APIs or endpoints and integrating it with your application (a minimal endpoint sketch follows this list).
- **Ensuring Scalability:** Implement load balancers, auto-scaling groups, and monitoring tools to manage resources and handle increasing data and user requests.
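As an illustration of the API/endpoint step above, here is a minimal sketch of an inference service. FastAPI is an arbitrary choice, and the route name and response shape are assumptions rather than anything this guide prescribes:

```python
# Hypothetical minimal inference endpoint; framework choice and route are assumptions
from fastapi import FastAPI, UploadFile

from ultralytics import YOLO

app = FastAPI()
model = YOLO("yolo11n.pt")


@app.post("/predict")
async def predict(file: UploadFile):
    # Save the upload to disk and run inference on it
    path = f"/tmp/{file.filename}"
    with open(path, "wb") as f:
        f.write(await file.read())
    results = model(path)
    # Return one row per detection: class name and confidence
    return [{"name": model.names[int(b.cls)], "confidence": float(b.conf)} for b in results[0].boxes]
```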
@ -188,12 +188,12 @@ Connecting with a community of computer vision enthusiasts can help you tackle a
### Community Resources
- **GitHub Issues:** Check out the [YOLO11 GitHub repository](https://github.com/ultralytics/ultralytics/issues) and use the Issues tab to ask questions, report bugs, and suggest new features. The active community and maintainers are there to help with specific issues.
- **Ultralytics Discord Server:** Join the [Ultralytics Discord server](https://discord.com/invite/ultralytics) to interact with other users and developers, get support, and share insights.
### Official Documentation
- **Ultralytics YOLO11 Documentation:** Explore the [official YOLO11 documentation](./index.md) for detailed guides with helpful tips on different computer vision tasks and projects.
Using these resources will help you overcome challenges and stay updated with the latest trends and best practices in the computer vision community.
@ -215,7 +215,7 @@ Data annotation is vital for teaching your model to recognize patterns. The type
- **Object Detection**: Bounding boxes drawn around objects.
- **Image Segmentation**: Each pixel labeled according to the object it belongs to.
Tools like [Label Studio](https://github.com/HumanSignal/label-studio), [CVAT](https://github.com/cvat-ai/cvat), and [Labelme](https://github.com/wkentaro/labelme) can assist in this process. For more details, refer to our [data collection and annotation guide](./data-collection-and-annotation.md).
### What steps should I follow to augment and split my dataset effectively?
@ -229,7 +229,7 @@ After splitting, apply data augmentation techniques like rotation, scaling, and
### How can I export my trained computer vision model for deployment?
Exporting your model ensures compatibility with different deployment platforms. Ultralytics provides multiple formats, including ONNX, TensorRT, and CoreML. To export your YOLO11 model, follow this guide:
- Use the `export` function with the desired format parameter, as in the sketch below.
- Ensure the exported model fits the specifications of your deployment environment (e.g., edge devices, cloud).
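A minimal sketch of that export call, assuming a pretrained or custom checkpoint:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")  # or the path to your own trained weights
model.export(format="onnx")  # other options include "engine" (TensorRT) and "coreml"
```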

@ -1,14 +1,14 @@
---
comments: true
description: Learn how to set up a real-time object detection application using Streamlit and Ultralytics YOLO11. Follow this step-by-step guide to implement webcam-based object detection.
keywords: Streamlit, YOLO11, Real-time Object Detection, Streamlit Application, YOLO11 Streamlit Tutorial, Webcam Object Detection
---
# Live Inference with Streamlit Application using Ultralytics YOLO11
## Introduction
Streamlit makes it simple to build and deploy interactive web applications. Combining this with Ultralytics YOLO11 allows for real-time [object detection](https://www.ultralytics.com/glossary/object-detection) and analysis directly in your browser. YOLO11's high accuracy and speed ensure seamless performance for live video streams, making it ideal for applications in security, retail, and beyond.
<p align="center">
<br>
@ -18,19 +18,19 @@ Streamlit makes it simple to build and deploy interactive web applications. Comb
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> How to Use Streamlit with Ultralytics for Real-Time <a href="https://www.ultralytics.com/glossary/computer-vision-cv">Computer Vision</a> in Your Browser
</p>
| Aquaculture | Animal husbandry |
| :----------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------------: |
| ![Fish Detection using Ultralytics YOLO11](https://github.com/ultralytics/docs/releases/download/0/fish-detection-ultralytics-yolov8.avif) | ![Animals Detection using Ultralytics YOLO11](https://github.com/ultralytics/docs/releases/download/0/animals-detection-yolov8.avif) |
| Fish Detection using Ultralytics YOLO11 | Animals Detection using Ultralytics YOLO11 |
## Advantages of Live Inference
- **Seamless Real-Time Object Detection**: Streamlit combined with YOLO11 enables real-time object detection directly from your webcam feed. This allows for immediate analysis and insights, making it ideal for applications requiring instant feedback.
- **User-Friendly Deployment**: Streamlit's interactive interface makes it easy to deploy and use the application without extensive technical knowledge. Users can start live inference with a simple click, enhancing accessibility and usability.
- **Efficient Resource Utilization**: YOLO11's optimized algorithms ensure high-speed processing with minimal computational resources. This efficiency allows for smooth and reliable webcam inference even on standard hardware, making advanced computer vision accessible to a wider audience.
## Streamlit Application Code
@ -56,7 +56,7 @@ Streamlit makes it simple to build and deploy interactive web applications. Comb
yolo streamlit-predict
```
This will launch the Streamlit application in your default web browser. You will see the main title, subtitle, and the sidebar with configuration options. Select your desired YOLO11 model, set the confidence and NMS thresholds, and click the "Start" button to begin the real-time object detection.
You can optionally supply a specific model in Python:
@ -75,7 +75,7 @@ You can optionally supply a specific model in Python:
## Conclusion
By following this guide, you have successfully created a real-time object detection application using Streamlit and Ultralytics YOLO11. This application allows you to experience the power of YOLO11 in detecting objects through your webcam, with a user-friendly interface and the ability to stop the video stream at any time.
For further enhancements, you can explore adding more features such as recording the video stream, saving the annotated frames, or integrating with other computer vision libraries.
@ -90,13 +90,13 @@ Engage with the community to learn more, troubleshoot issues, and share your pro
### Official Documentation
- **Ultralytics YOLO11 Documentation:** Refer to the [official YOLO11 documentation](https://docs.ultralytics.com/) for comprehensive guides and insights on various computer vision tasks and projects.
## FAQ
### How can I set up a real-time object detection application using Streamlit and Ultralytics YOLO11?
Setting up a real-time object detection application with Streamlit and Ultralytics YOLO11 is straightforward. First, ensure you have the Ultralytics Python package installed using:
```bash
pip install ultralytics
@ -124,29 +124,29 @@ Then, you can create a basic Streamlit application to run live inference:
For more details on the practical setup, refer to the [Streamlit Application Code section](#streamlit-application-code) of the documentation.
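The basic application body is elided in this diff; as an illustrative sketch only (the layout, widget choices, and camera index are assumptions), such a script might look like:

```python
# Illustrative only; save as app.py and run with: streamlit run app.py
import cv2
import streamlit as st

from ultralytics import YOLO

model = YOLO("yolo11n.pt")
st.title("YOLO11 Live Inference")
frame_slot = st.empty()  # placeholder overwritten with each new frame

if st.button("Start"):
    cap = cv2.VideoCapture(0)  # default webcam; index is an assumption
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = model(frame)
        frame_slot.image(results[0].plot(), channels="BGR")  # annotated frame
    cap.release()
```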
### What are the main advantages of using Ultralytics YOLO11 with Streamlit for real-time object detection?
Using Ultralytics YOLO11 with Streamlit for real-time object detection offers several advantages:
- **Seamless Real-Time Detection**: Achieve high-[accuracy](https://www.ultralytics.com/glossary/accuracy), real-time object detection directly from webcam feeds.
- **User-Friendly Interface**: Streamlit's intuitive interface allows easy use and deployment without extensive technical knowledge.
- **Resource Efficiency**: YOLO11's optimized algorithms ensure high-speed processing with minimal computational resources.
Discover more about these advantages [here](#advantages-of-live-inference).
### How do I deploy a Streamlit object detection application in my web browser?
After coding your Streamlit application integrating Ultralytics YOLO11, you can deploy it by running:
```bash
streamlit run <file-name.py>
```
This command will launch the application in your default web browser, enabling you to select YOLO11 models, set confidence and NMS thresholds, and start real-time object detection with a simple click. For a detailed guide, refer to the [Streamlit Application Code](#streamlit-application-code) section.
### What are some use cases for real-time object detection using Streamlit and Ultralytics YOLO11?
Real-time object detection using Streamlit and Ultralytics YOLO11 can be applied in various sectors:
- **Security**: Real-time monitoring for unauthorized access.
- **Retail**: Customer counting, shelf management, and more.
@ -154,12 +154,12 @@ Real-time object detection using Streamlit and Ultralytics YOLOv8 can be applied
For more in-depth use cases and examples, explore [Ultralytics Solutions](https://docs.ultralytics.com/solutions/).
### How does Ultralytics YOLO11 compare to other object detection models like YOLOv5 and RCNNs?
Ultralytics YOLO11 provides several enhancements over prior models like YOLOv5 and RCNNs:
- **Higher Speed and Accuracy**: Improved performance for real-time applications.
- **Ease of Use**: Simplified interfaces and deployment.
- **Resource Efficiency**: Optimized for better speed with minimal computational requirements.
For a comprehensive comparison, check [Ultralytics YOLO11 Documentation](https://docs.ultralytics.com/models/yolov8/) and related blog posts discussing model performance.

@ -1,12 +1,12 @@
---
comments: true
description: Learn how to integrate Ultralytics YOLO11 with NVIDIA Triton Inference Server for scalable, high-performance AI model deployment.
keywords: Triton Inference Server, YOLO11, Ultralytics, NVIDIA, deep learning, AI model deployment, ONNX, scalable inference
---
# Triton Inference Server with Ultralytics YOLO11
The [Triton Inference Server](https://developer.nvidia.com/triton-inference-server) (formerly known as TensorRT Inference Server) is an open-source software solution developed by NVIDIA. It provides a cloud inference solution optimized for NVIDIA GPUs. Triton simplifies the deployment of AI models at scale in production. Integrating Ultralytics YOLO11 with Triton Inference Server allows you to deploy scalable, high-performance [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) inference workloads. This guide provides steps to set up and test the integration.
<p align="center">
<br>
@ -38,7 +38,7 @@ Ensure you have the following prerequisites before proceeding:
pip install tritonclient[all]
```
## Exporting YOLO11 to ONNX Format
Before deploying the model on Triton, it must be exported to the ONNX format. ONNX (Open Neural Network Exchange) is a format that allows models to be transferred between different deep learning frameworks. Use the `export` function from the `YOLO` class:
@ -46,7 +46,7 @@ Before deploying the model on Triton, it must be exported to the ONNX format. ON
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load an official model
model = YOLO("yolo11n.pt") # load an official model
# Export the model
onnx_file = model.export(format="onnx", dynamic=True)
@ -141,21 +141,21 @@ subprocess.call(f"docker kill {container_id}", shell=True)
---
By following the above steps, you can deploy and run Ultralytics YOLO11 models efficiently on Triton Inference Server, providing a scalable and high-performance solution for deep learning inference tasks. If you face any issues or have further queries, refer to the [official Triton documentation](https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html) or reach out to the Ultralytics community for support.
## FAQ
### How do I set up Ultralytics YOLO11 with NVIDIA Triton Inference Server?
Setting up [Ultralytics YOLO11](https://docs.ultralytics.com/models/yolov8/) with [NVIDIA Triton Inference Server](https://developer.nvidia.com/triton-inference-server) involves a few key steps:
1. **Export YOLO11 to ONNX format**:
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load an official model
model = YOLO("yolo11n.pt") # load an official model
# Export the model to ONNX format
onnx_file = model.export(format="onnx", dynamic=True)
@ -209,21 +209,21 @@ Setting up [Ultralytics YOLOv8](https://docs.ultralytics.com/models/yolov8/) wit
time.sleep(1)
```
This setup can help you efficiently deploy YOLO11 models at scale on Triton Inference Server for high-performance AI model inference.
### What benefits does using Ultralytics YOLO11 with NVIDIA Triton Inference Server offer?
Integrating [Ultralytics YOLO11](../models/yolov8.md) with [NVIDIA Triton Inference Server](https://developer.nvidia.com/triton-inference-server) provides several advantages:
- **Scalable AI Inference**: Triton allows serving multiple models from a single server instance, supporting dynamic model loading and unloading, making it highly scalable for diverse AI workloads.
- **High Performance**: Optimized for NVIDIA GPUs, Triton Inference Server ensures high-speed inference operations, perfect for real-time applications such as [object detection](https://www.ultralytics.com/glossary/object-detection).
- **Ensemble and Model Versioning**: Triton's ensemble mode enables combining multiple models to improve results, and its model versioning supports A/B testing and rolling updates.
For detailed instructions on setting up and running YOLO11 with Triton, you can refer to the [setup guide](#setting-up-triton-model-repository).
### Why should I export my YOLO11 model to ONNX format before using Triton Inference Server?
Using ONNX (Open Neural Network Exchange) format for your [Ultralytics YOLO11](../models/yolov8.md) model before deploying it on [NVIDIA Triton Inference Server](https://developer.nvidia.com/triton-inference-server) offers several key benefits:
- **Interoperability**: ONNX format supports transfer between different deep learning frameworks (such as PyTorch, TensorFlow), ensuring broader compatibility.
- **Optimization**: Many deployment environments, including Triton, optimize for ONNX, enabling faster inference and better performance.
@ -234,15 +234,15 @@ To export your model, use:
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
onnx_file = model.export(format="onnx", dynamic=True)
```
You can follow the steps in the [exporting guide](../modes/export.md) to complete the process.
### Can I run inference using the Ultralytics YOLO11 model on Triton Inference Server?
Yes, you can run inference using the [Ultralytics YOLO11](../models/yolov8.md) model on [NVIDIA Triton Inference Server](https://developer.nvidia.com/triton-inference-server). Once your model is set up in the Triton Model Repository and the server is running, you can load and run inference on your model as follows:
```python
from ultralytics import YOLO

# Load the Triton Server model
model = YOLO("http://localhost:8000/yolo", task="detect")

# Run inference on the server
results = model("path/to/image.jpg")
```
For an in-depth guide on setting up and running Triton Server with YOLO11, refer to the [running triton inference server](#running-triton-inference-server) section.
### How does Ultralytics YOLO11 compare to [TensorFlow](https://www.ultralytics.com/glossary/tensorflow) and PyTorch models for deployment?
[Ultralytics YOLO11](https://docs.ultralytics.com/models/yolov8/) offers several unique advantages compared to TensorFlow and PyTorch models for deployment:
- **Real-time Performance**: Optimized for real-time object detection tasks, YOLO11 provides state-of-the-art [accuracy](https://www.ultralytics.com/glossary/accuracy) and speed, making it ideal for applications requiring live video analytics.
- **Ease of Use**: YOLO11 integrates seamlessly with Triton Inference Server and supports diverse export formats (ONNX, TensorRT, CoreML), making it flexible for various deployment scenarios.
- **Advanced Features**: YOLO11 includes features like dynamic model loading, model versioning, and ensemble inference, which are crucial for scalable and reliable AI deployments.
For more details, compare the deployment options in the [model deployment guide](../modes/export.md).

@ -47,7 +47,7 @@ The VSCode compatible protocols for viewing images using the integrated terminal
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
# Run inference on an image
results = model.predict(source="ultralytics/assets/bus.jpg")
@ -111,7 +111,7 @@ from sixel import SixelWriter
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
# Run inference on an image
results = model.predict(source="ultralytics/assets/bus.jpg")
@ -164,7 +164,7 @@ To view YOLO inference results in a VSCode terminal on macOS or Linux, follow th
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
results = model.predict(source="path_to_image")
plot = results[0].plot()
```
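The remaining step, drawing the plotted array in the terminal, is elided by this diff. A sketch, assuming the `sixel` package's `SixelWriter` accepts a file-like object (as this page's earlier import suggests):

```python
from io import BytesIO

import cv2
from sixel import SixelWriter

from ultralytics import YOLO

model = YOLO("yolo11n.pt")
results = model.predict(source="path_to_image")
plot = results[0].plot()  # BGR numpy array of the annotated image

# Encode the annotated frame as an in-memory PNG
im_bytes = cv2.imencode(".png", plot)[1].tobytes()

# Draw the sixel-encoded image directly in a compatible terminal
SixelWriter().draw(BytesIO(im_bytes))
```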

@ -1,23 +1,23 @@
---
comments: true
description: Discover VisionEye's object mapping and tracking powered by Ultralytics YOLO11. Simulate human eye precision, track objects, and calculate distances effortlessly.
keywords: VisionEye, YOLO11, Ultralytics, object mapping, object tracking, distance calculation, computer vision, AI, machine learning, Python, tutorial
---
# VisionEye View Object Mapping using Ultralytics YOLO11 🚀
## What is VisionEye Object Mapping?
[Ultralytics YOLO11](https://github.com/ultralytics/ultralytics/) VisionEye offers the capability for computers to identify and pinpoint objects, simulating the observational [precision](https://www.ultralytics.com/glossary/precision) of the human eye. This functionality enables computers to discern and focus on specific objects, much like the way the human eye observes details from a particular viewpoint.
## Samples
| VisionEye View | VisionEye View With Object Tracking | VisionEye View With Distance Calculation |
| :----------------------------------------------------------------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| ![VisionEye View Object Mapping using Ultralytics YOLO11](https://github.com/ultralytics/docs/releases/download/0/visioneye-view-object-mapping-yolov8.avif) | ![VisionEye View Object Mapping with Object Tracking using Ultralytics YOLO11](https://github.com/ultralytics/docs/releases/download/0/visioneye-object-mapping-with-tracking.avif) | ![VisionEye View with Distance Calculation using Ultralytics YOLO11](https://github.com/ultralytics/docs/releases/download/0/visioneye-distance-calculation-yolov8.avif) |
| VisionEye View Object Mapping using Ultralytics YOLO11 | VisionEye View Object Mapping with Object Tracking using Ultralytics YOLO11 | VisionEye View with Distance Calculation using Ultralytics YOLO11 |
!!! example "VisionEye Object Mapping using YOLOv8"
!!! example "VisionEye Object Mapping using YOLO11"
=== "VisionEye Object Mapping"
@ -27,7 +27,7 @@ keywords: VisionEye, YOLOv8, Ultralytics, object mapping, object tracking, dista
from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
names = model.model.names
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
@ -71,7 +71,7 @@ keywords: VisionEye, YOLOv8, Ultralytics, object mapping, object tracking, dista
from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
@ -118,7 +118,7 @@ keywords: VisionEye, YOLOv8, Ultralytics, object mapping, object tracking, dista
from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator
model = YOLO("yolov8s.pt")
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("Path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
@ -180,16 +180,16 @@ For any inquiries, feel free to post your questions in the [Ultralytics Issue Se
## FAQ
### How do I start using VisionEye Object Mapping with Ultralytics YOLO11?
To start using VisionEye Object Mapping with Ultralytics YOLO11, first install the Ultralytics YOLO package via pip. Then, use the sample code provided in the documentation to set up [object detection](https://www.ultralytics.com/glossary/object-detection) with VisionEye. Here's a simple example to get you started:
```python
import cv2
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
while True:
@ -210,12 +210,12 @@ cap.release()
cv2.destroyAllWindows()
```
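The loop body above is elided by this diff. As a hedged sketch of the overall pattern only (the `Annotator.visioneye` call and the anchor point are assumptions based on this page's samples):

```python
import cv2

from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors

model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
center_point = (-10, 500)  # illustrative "eye" anchor in pixel coordinates

while True:
    ret, im0 = cap.read()
    if not ret:
        break

    results = model.predict(im0)
    annotator = Annotator(im0, line_width=2)

    for box, cls in zip(results[0].boxes.xyxy, results[0].boxes.cls):
        annotator.box_label(box, label=model.names[int(cls)], color=colors(int(cls)))
        annotator.visioneye(box, center_point)  # assumed era-specific Annotator helper

    cv2.imshow("VisionEye", im0)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```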
### What are the key features of VisionEye's object tracking capability using Ultralytics YOLO11?
VisionEye's object tracking with Ultralytics YOLO11 allows users to follow the movement of objects within a video frame. Key features include:
1. **Real-Time Object Tracking**: Keeps up with objects as they move.
2. **Object Identification**: Utilizes YOLO11's powerful detection algorithms.
3. **Distance Calculation**: Calculates distances between objects and specified points.
4. **Annotation and Visualization**: Provides visual markers for tracked objects.
@ -226,7 +226,7 @@ import cv2
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
while True:
@ -249,9 +249,9 @@ cv2.destroyAllWindows()
For a comprehensive guide, visit the [VisionEye Object Mapping with Object Tracking](#samples).
### How can I calculate distances with VisionEye's YOLO11 model?
Distance calculation with VisionEye and Ultralytics YOLO11 involves determining the distance of detected objects from a specified point in the frame. It enhances spatial analysis capabilities, useful in applications such as autonomous driving and surveillance.
Here's a simplified example:
@ -262,7 +262,7 @@ import cv2
from ultralytics import YOLO
model = YOLO("yolov8s.pt")
model = YOLO("yolo11n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
center_point = (0, 480) # Example center point
pixel_per_meter = 10
@ -290,19 +290,19 @@ cv2.destroyAllWindows()
For detailed instructions, refer to the [VisionEye with Distance Calculation](#samples).
### Why should I use Ultralytics YOLO11 for object mapping and tracking?
Ultralytics YOLO11 is renowned for its speed, [accuracy](https://www.ultralytics.com/glossary/accuracy), and ease of integration, making it a top choice for object mapping and tracking. Key advantages include:
1. **State-of-the-art Performance**: Delivers high accuracy in real-time object detection.
2. **Flexibility**: Supports various tasks such as detection, tracking, and distance calculation.
3. **Community and Support**: Extensive documentation and active GitHub community for troubleshooting and enhancements.
4. **Ease of Use**: Intuitive API simplifies complex tasks, allowing for rapid deployment and iteration.
For more information on applications and benefits, check out the [Ultralytics YOLO11 documentation](https://docs.ultralytics.com/models/yolov8/).
### How can I integrate VisionEye with other [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) tools like Comet or ClearML?
Ultralytics YOLO11 can integrate seamlessly with various machine learning tools like Comet and ClearML, enhancing experiment tracking, collaboration, and reproducibility. Follow the detailed guides on [how to use YOLOv5 with Comet](https://www.ultralytics.com/blog/how-to-use-yolov5-with-comet) and [integrate YOLO11 with ClearML](https://docs.ultralytics.com/integrations/clearml/) to get started.
For further exploration and integration examples, check our [Ultralytics Integrations Guide](https://docs.ultralytics.com/integrations/).

@ -1,12 +1,12 @@
---
comments: true
description: Optimize your fitness routine with real-time workouts monitoring using Ultralytics YOLO11. Track and improve your exercise form and performance.
keywords: workouts monitoring, Ultralytics YOLO11, pose estimation, fitness tracking, exercise assessment, real-time feedback, exercise form, performance metrics
---
# Workouts Monitoring using Ultralytics YOLO11
Monitoring workouts through pose estimation with [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics/) enhances exercise assessment by accurately tracking key body landmarks and joints in real-time. This technology provides instant feedback on exercise form, tracks workout routines, and measures performance metrics, optimizing training sessions for users and trainers alike.
<p align="center">
<br>
@ -16,7 +16,7 @@ Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://gi
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Workouts Monitoring using Ultralytics YOLO11 | Pushups, Pullups, Ab Workouts
</p>
## Advantages of Workouts Monitoring
@ -43,7 +43,7 @@ Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://gi
from ultralytics import YOLO, solutions
model = YOLO("yolov8n-pose.pt")
model = YOLO("yolo11n-pose.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
@ -74,7 +74,7 @@ Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://gi
from ultralytics import YOLO, solutions
model = YOLO("yolov8n-pose.pt")
model = YOLO("yolo11n-pose.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
@ -108,7 +108,7 @@ Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://gi
### KeyPoints Map
![keyPoints Order Ultralytics YOLO11 Pose](https://github.com/ultralytics/docs/releases/download/0/keypoints-order-ultralytics-yolov8-pose.avif)
### Arguments `AIGym`
@ -131,16 +131,16 @@ Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://gi
## FAQ
### How do I monitor my workouts using Ultralytics YOLO11?
To monitor your workouts using Ultralytics YOLO11, you can utilize the pose estimation capabilities to track and analyze key body landmarks and joints in real-time. This allows you to receive instant feedback on your exercise form, count repetitions, and measure performance metrics. You can start by using the provided example code for pushups, pullups, or ab workouts as shown:
```python
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n-pose.pt")
model = YOLO("yolo11n-pose.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
@ -165,9 +165,9 @@ cv2.destroyAllWindows()
For further customization and settings, you can refer to the [AIGym](#arguments-aigym) section in the documentation.
### What are the benefits of using Ultralytics YOLO11 for workout monitoring?
Using Ultralytics YOLO11 for workout monitoring provides several key benefits:
- **Optimized Performance:** By tailoring workouts based on monitoring data, you can achieve better results.
- **Goal Achievement:** Easily track and adjust fitness goals for measurable progress.
@ -177,13 +177,13 @@ Using Ultralytics YOLOv8 for workout monitoring provides several key benefits:
You can watch a [YouTube video demonstration](https://www.youtube.com/watch?v=LGGxqLZtvuw) to see these benefits in action.
### How accurate is Ultralytics YOLO11 in detecting and tracking exercises?
Ultralytics YOLO11 is highly accurate in detecting and tracking exercises due to its state-of-the-art pose estimation capabilities. It can accurately track key body landmarks and joints, providing real-time feedback on exercise form and performance metrics. The model's pretrained weights and robust architecture ensure high [precision](https://www.ultralytics.com/glossary/precision) and reliability. For real-world examples, check out the [real-world applications](#real-world-applications) section in the documentation, which showcases pushups and pullups counting.
### Can I use Ultralytics YOLO11 for custom workout routines?
Yes, Ultralytics YOLO11 can be adapted for custom workout routines. The `AIGym` class supports different pose types such as "pushup", "pullup", and "abworkout". You can specify keypoints and angles to detect specific exercises. Here is an example setup:
```python
from ultralytics import solutions
@ -198,7 +198,7 @@ gym_object = solutions.AIGym(
For more details on setting arguments, refer to the [Arguments `AIGym`](#arguments-aigym) section. This flexibility allows you to monitor various exercises and customize routines based on your needs.
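Because the diff elides the body of the snippet above, here is a hedged reconstruction (the argument names follow the `AIGym` docs of this era, and the keypoint indices are an illustrative choice):

```python
from ultralytics import solutions

# Keypoints 6, 8 and 10 are the right shoulder, elbow and wrist in the
# COCO keypoint order shown in the KeyPoints Map above (illustrative choice).
gym_object = solutions.AIGym(
    line_thickness=2,
    view_img=True,
    pose_type="pullup",  # "pushup", "pullup" or "abworkout"
    kpts_to_check=[6, 8, 10],
)
```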
### How can I save the workout monitoring output using Ultralytics YOLO11?
To save the workout monitoring output, you can modify the code to include a video writer that saves the processed frames. Here's an example:
@ -207,7 +207,7 @@ import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n-pose.pt")
model = YOLO("yolo11n-pose.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
@ -234,4 +234,4 @@ cv2.destroyAllWindows()
video_writer.release()
```
This setup writes the monitored video to an output file. For more details, refer to the [Workouts Monitoring with Save Output](#workouts-monitoring-using-ultralytics-yolo11) section.
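The writer construction itself is elided above; it follows the standard OpenCV pattern (the output file name and codec here are assumptions):

```python
import cv2

cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

# Standard OpenCV writer; file name and codec are illustrative
video_writer = cv2.VideoWriter("workouts_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
```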
