Merge branch 'main' into stale-actions

Branch: stale-actions
Ultralytics Assistant committed 5 days ago (via GitHub)
commit c380c80d68
Changed files (lines changed in parentheses; additional files not shown because the diff is too large):

1. .github/ISSUE_TEMPLATE/bug-report.yml (4)
2. .github/workflows/ci.yaml (47)
3. .github/workflows/codeql.yaml (42)
4. .github/workflows/docker.yaml (18)
5. .github/workflows/docs.yml (6)
6. .github/workflows/format.yml (2)
7. .github/workflows/links.yml (20)
8. .github/workflows/publish.yml (17)
9. .gitignore (1)
10. README.md (12)
11. README.zh-CN.md (12)
12. docker/Dockerfile (1)
13. docker/Dockerfile-cpu (12)
14. docker/Dockerfile-jupyter (2)
15. docker/Dockerfile-runner (1)
16. docs/README.md (2)
17. docs/build_docs.py (2)
18. docs/en/datasets/index.md (1)
19. docs/en/datasets/pose/dog-pose.md (141)
20. docs/en/datasets/pose/hand-keypoints.md (11)
21. docs/en/datasets/pose/index.md (9)
22. docs/en/datasets/segment/coco.md (8)
23. docs/en/guides/analytics.md (131)
24. docs/en/guides/distance-calculation.md (1)
25. docs/en/guides/heatmaps.md (166)
26. docs/en/guides/object-counting.md (169)
27. docs/en/guides/queue-management.md (56)
28. docs/en/guides/region-counting.md (99)
29. docs/en/guides/speed-estimation.md (16)
30. docs/en/guides/streamlit-live-inference.md (12)
31. docs/en/guides/workouts-monitoring.md (36)
32. docs/en/help/CI.md (10)
33. docs/en/help/privacy.md (1)
34. docs/en/help/security.md (2)
35. docs/en/hub/models.md (6)
36. docs/en/index.md (2)
37. docs/en/integrations/albumentations.md (39)
38. docs/en/integrations/index.md (2)
39. docs/en/integrations/kaggle.md (1)
40. docs/en/integrations/ray-tune.md (2)
41. docs/en/integrations/sony-imx500.md (325)
42. docs/en/integrations/tensorrt.md (6)
43. docs/en/macros/export-args.md (4)
44. docs/en/macros/export-table.md (1)
45. docs/en/macros/predict-args.md (2)
46. docs/en/macros/train-args.md (2)
47. docs/en/macros/validation-args.md (2)
48. docs/en/models/sam-2.md (28)
49. docs/en/models/yolo-nas.md (1)
50. docs/en/models/yolo11.md (2)
51. docs/en/models/yolov5.md (2)
52. docs/en/models/yolov7.md (1)
53. docs/en/models/yolov8.md (2)
54. docs/en/modes/benchmark.md (35)
55. docs/en/quickstart.md (2)
56. docs/en/reference/models/sam/predict.md (4)
57. docs/en/reference/solutions/region_counter.md (16)
58. docs/en/reference/utils/torch_utils.md (4)
59. docs/en/tasks/segment.md (4)
60. docs/en/usage/cfg.md (1)
61. docs/en/usage/simple-utilities.md (11)
62. docs/mkdocs_github_authors.yaml (6)
63. docs/overrides/javascript/benchmark.js (199)
64. docs/overrides/javascript/extra.js (199)
65. docs/overrides/javascript/giscus.js (11)
66. docs/overrides/stylesheets/style.css (11)
67. examples/YOLOv8-SAHI-Inference-Video/yolov8_sahi.py (2)
68. examples/heatmaps.ipynb (2)
69. examples/hub.ipynb (2)
70. examples/object_counting.ipynb (2)
71. examples/object_tracking.ipynb (2)
72. examples/tutorial.ipynb (2)
73. mkdocs.yml (10)
74. tests/test_exports.py (9)
75. tests/test_solutions.py (2)
76. ultralytics/__init__.py (2)
77. ultralytics/cfg/__init__.py (16)
78. ultralytics/cfg/datasets/dog-pose.yaml (23)
79. ultralytics/cfg/default.yaml (3)
80. ultralytics/cfg/solutions/default.yaml (2)
81. ultralytics/data/augment.py (8)
82. ultralytics/data/converter.py (5)
83. ultralytics/data/loaders.py (2)
84. ultralytics/engine/exporter.py (196)
85. ultralytics/engine/model.py (21)
86. ultralytics/engine/predictor.py (6)
87. ultralytics/engine/results.py (6)
88. ultralytics/engine/trainer.py (19)
89. ultralytics/models/fastsam/predict.py (3)
90. ultralytics/models/rtdetr/train.py (3)
91. ultralytics/models/sam/__init__.py (4)
92. ultralytics/models/sam/model.py (2)
93. ultralytics/models/sam/modules/sam.py (55)
94. ultralytics/models/sam/predict.py (845)
95. ultralytics/models/yolo/detect/train.py (7)
96. ultralytics/nn/autobackend.py (25)
97. ultralytics/nn/modules/block.py (7)
98. ultralytics/nn/modules/conv.py (2)
99. ultralytics/nn/modules/head.py (12)
100. ultralytics/nn/tasks.py (18)

.github/ISSUE_TEMPLATE/bug-report.yml
@@ -52,9 +52,9 @@ body:
   - type: textarea
     attributes:
       label: Environment
-      description: Many issues are often related to dependency versions and hardware. Please provide the output of `yolo checks` or `ultralytics.checks()` command to help us diagnose the problem.
+      description: Many issues are often related to dependency versions and hardware. Please provide the output of `yolo checks` (CLI) or `ultralytics.utils.checks.collect_system_info()` (Python) command to help us diagnose the problem.
       placeholder: |
-        Paste output of `yolo checks` or `ultralytics.checks()` command, i.e.:
+        Paste output of `yolo checks` (CLI) or `ultralytics.utils.checks.collect_system_info()` (Python) command, i.e.:
         ```
         Ultralytics 8.3.2 🚀 Python-3.11.2 torch-2.4.1 CPU (Apple M3)
         Setup complete ✅ (8 CPUs, 16.0 GB RAM, 266.5/460.4 GB disk)
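For quick reference, a minimal sketch of producing the same environment report from Python (assumes an installed `ultralytics` package, and that `collect_system_info()` logs its report to the console rather than returning a formatted string):

```python
from ultralytics.utils.checks import collect_system_info

# Logs OS, Python, RAM, CPU/CUDA details and key package versions, mirroring `yolo checks`
collect_system_info()
```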

.github/workflows/ci.yaml
@@ -52,16 +52,15 @@ jobs:
       - uses: actions/setup-python@v5
         with:
           python-version: ${{ matrix.python-version }}
-          cache: "pip" # caching pip dependencies
+      - uses: astral-sh/setup-uv@v4
       - name: Install requirements
         shell: bash # for Windows compatibility
         run: |
-          python -m pip install --upgrade pip wheel
-          pip install . --extra-index-url https://download.pytorch.org/whl/cpu
+          uv pip install --system . --extra-index-url https://download.pytorch.org/whl/cpu
       - name: Check environment
         run: |
           yolo checks
-          pip list
+          uv pip list
       - name: Test HUB training
         shell: python
         env:
@@ -111,6 +110,7 @@ jobs:
       - name: Install requirements
         shell: bash # for Windows compatibility
         run: |
+          # Warnings: uv causes numpy errors during benchmarking
           python -m pip install --upgrade pip wheel
           pip install -e ".[export]" "coverage[toml]" --extra-index-url https://download.pytorch.org/whl/cpu
       - name: Check environment
@@ -143,7 +143,7 @@ jobs:
           coverage xml -o coverage-benchmarks.xml
       - name: Upload Coverage Reports to CodeCov
         if: github.repository == 'ultralytics/ultralytics'
-        uses: codecov/codecov-action@v4
+        uses: codecov/codecov-action@v5
         with:
           flags: Benchmarks
         env:
@@ -172,12 +172,11 @@ jobs:
       - uses: actions/setup-python@v5
         with:
           python-version: ${{ matrix.python-version }}
-          cache: "pip" # caching pip dependencies
+      - uses: astral-sh/setup-uv@v4
       - name: Install requirements
         shell: bash # for Windows compatibility
         run: |
          # CoreML must be installed before export due to protobuf error from AutoInstall
-          python -m pip install --upgrade pip wheel
           slow=""
           torch=""
           if [ "${{ matrix.torch }}" == "1.8.0" ]; then
@@ -186,11 +185,11 @@ jobs:
           if [[ "${{ github.event_name }}" =~ ^(schedule|workflow_dispatch)$ ]]; then
             slow="pycocotools mlflow"
           fi
-          pip install -e ".[export]" $torch $slow pytest-cov --extra-index-url https://download.pytorch.org/whl/cpu
+          uv pip install --system -e ".[export]" $torch $slow pytest-cov --extra-index-url https://download.pytorch.org/whl/cpu
       - name: Check environment
         run: |
           yolo checks
-          pip list
+          uv pip list
       - name: Pytest tests
         shell: bash # for Windows compatibility
         run: |
@@ -201,7 +200,7 @@ jobs:
           pytest $slow --cov=ultralytics/ --cov-report xml tests/
       - name: Upload Coverage Reports to CodeCov
         if: github.repository == 'ultralytics/ultralytics' # && matrix.os == 'ubuntu-latest' && matrix.python-version == '3.11'
-        uses: codecov/codecov-action@v4
+        uses: codecov/codecov-action@v5
         with:
           flags: Tests
         env:
@@ -213,12 +212,13 @@ jobs:
     runs-on: gpu-latest
     steps:
       - uses: actions/checkout@v4
+      - uses: astral-sh/setup-uv@v4
       - name: Install requirements
-        run: pip install . pytest-cov
+        run: uv pip install --system . pytest-cov
       - name: Check environment
         run: |
           yolo checks
-          pip list
+          uv pip list
       - name: Pytest tests
         run: |
           slow=""
@@ -227,7 +227,7 @@ jobs:
           fi
           pytest $slow --cov=ultralytics/ --cov-report xml tests/test_cuda.py
       - name: Upload Coverage Reports to CodeCov
-        uses: codecov/codecov-action@v4
+        uses: codecov/codecov-action@v5
         with:
           flags: GPU
         env:
@@ -294,13 +294,8 @@ jobs:
           channels: conda-forge,defaults
           channel-priority: true
           activate-environment: anaconda-client-env
-      - name: Cleanup toolcache
-        run: |
-          echo "Free space before deletion:"
-          df -h /
-          rm -rf /opt/hostedtoolcache
-          echo "Free space after deletion:"
-          df -h /
+      - name: Cleanup disk space
+        uses: ultralytics/actions/cleanup-disk@main
       - name: Install Linux packages
         run: |
           # Fix cv2 ImportError: 'libEGL.so.1: cannot open shared object file: No such file or directory'
@@ -348,14 +343,14 @@ jobs:
   Summary:
     runs-on: ubuntu-latest
-    needs: [HUB, Benchmarks, Tests, GPU, RaspberryPi, Conda] # Add job names that you want to check for failure
-    if: always() # This ensures the job runs even if previous jobs fail
+    needs: [HUB, Benchmarks, Tests, GPU, RaspberryPi, Conda]
+    if: always()
     steps:
       - name: Check for failure and notify
         if: (needs.HUB.result == 'failure' || needs.Benchmarks.result == 'failure' || needs.Tests.result == 'failure' || needs.GPU.result == 'failure' || needs.RaspberryPi.result == 'failure' || needs.Conda.result == 'failure' ) && github.repository == 'ultralytics/ultralytics' && (github.event_name == 'schedule' || github.event_name == 'push') && github.run_attempt == '1'
-        uses: slackapi/slack-github-action@v1.27.0
+        uses: slackapi/slack-github-action@v2.0.0
         with:
+          webhook-type: incoming-webhook
+          webhook: ${{ secrets.SLACK_WEBHOOK_URL_YOLO }}
           payload: |
-            {"text": "<!channel> GitHub Actions error for ${{ github.workflow }} ❌\n\n\n*Repository:* https://github.com/${{ github.repository }}\n*Action:* https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}\n*Author:* ${{ github.actor }}\n*Event:* ${{ github.event_name }}\n"}
+            text: "<!channel> GitHub Actions error for ${{ github.workflow }} ❌\n\n\n*Repository:* https://github.com/${{ github.repository }}\n*Action:* https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}\n*Author:* ${{ github.actor }}\n*Event:* ${{ github.event_name }}\n"
-        env:
-          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_YOLO }}

.github/workflows/codeql.yaml (deleted)
@@ -1,42 +0,0 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license

name: "CodeQL"

on:
  schedule:
    - cron: "0 0 1 * *"
  workflow_dispatch:

jobs:
  analyze:
    name: Analyze
    runs-on: ${{ 'ubuntu-latest' }}
    permissions:
      actions: read
      contents: read
      security-events: write
    strategy:
      fail-fast: false
      matrix:
        language: ["python"]
        # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python', 'ruby' ]
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      # Initializes the CodeQL tools for scanning.
      - name: Initialize CodeQL
        uses: github/codeql-action/init@v3
        with:
          languages: ${{ matrix.language }}
          # If you wish to specify custom queries, you can do so here or in a config file.
          # By default, queries listed here will override any specified in a config file.
          # Prefix the list here with "+" to use these queries and those in the config file.
          # queries: security-extended,security-and-quality

      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@v3
        with:
          category: "/language:${{matrix.language}}"

.github/workflows/docker.yaml
@@ -134,12 +134,12 @@ jobs:
       - name: Build Image
         if: github.event_name == 'push' || github.event.inputs[matrix.dockerfile] == 'true'
-        uses: nick-invision/retry@v3
+        uses: ultralytics/actions/retry@main
         with:
           timeout_minutes: 120
-          retry_wait_seconds: 60
-          max_attempts: 3 # retry twice
-          command: |
+          retry_delay_seconds: 60
+          retries: 2
+          run: |
             docker build \
               --platform ${{ matrix.platforms }} \
               -f docker/${{ matrix.dockerfile }} \
@@ -172,7 +172,7 @@ jobs:
           fi
           if [[ "${{ matrix.tags }}" == "latest-python" ]]; then
             t=ultralytics/ultralytics:latest-jupyter
-            v=ultralytics/ultralytics:${{ steps.get_version.outputs.version_tag }}-jupyter
+            v=ultralytics/ultralytics:${{ steps.get_version.outputs.version }}-jupyter
             docker build -f docker/Dockerfile-jupyter -t $t -t $v .
             docker push $t
             if [[ "${{ steps.check_tag.outputs.new_release }}" == "true" ]]; then
@@ -202,9 +202,9 @@ jobs:
     steps:
       - name: Check for failure and notify
         if: needs.docker.result == 'failure' && github.repository == 'ultralytics/ultralytics' && github.event_name == 'push' && github.run_attempt == '1'
-        uses: slackapi/slack-github-action@v1.27.0
+        uses: slackapi/slack-github-action@v2.0.0
         with:
+          webhook-type: incoming-webhook
+          webhook: ${{ secrets.SLACK_WEBHOOK_URL_YOLO }}
           payload: |
-            {"text": "<!channel> GitHub Actions error for ${{ github.workflow }} ❌\n\n\n*Repository:* https://github.com/${{ github.repository }}\n*Action:* https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}\n*Author:* ${{ github.actor }}\n*Event:* ${{ github.event_name }}\n"}
+            text: "<!channel> GitHub Actions error for ${{ github.workflow }} ❌\n\n\n*Repository:* https://github.com/${{ github.repository }}\n*Action:* https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}\n*Author:* ${{ github.actor }}\n*Event:* ${{ github.event_name }}\n"
-        env:
-          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_YOLO }}

.github/workflows/docs.yml
@@ -29,7 +29,7 @@ on:
 jobs:
   Docs:
     if: github.repository == 'ultralytics/ultralytics'
-    runs-on: macos-14
+    runs-on: ubuntu-latest
     steps:
       - name: Git config
         run: |
@@ -46,9 +46,9 @@ jobs:
         uses: actions/setup-python@v5
         with:
           python-version: "3.x"
-          cache: "pip" # caching pip dependencies
+      - uses: astral-sh/setup-uv@v4
       - name: Install Dependencies
-        run: pip install ruff black tqdm minify-html mkdocs-material "mkdocstrings[python]" mkdocs-jupyter mkdocs-redirects mkdocs-ultralytics-plugin mkdocs-macros-plugin
+        run: uv pip install --system ruff black tqdm mkdocs-material "mkdocstrings[python]" mkdocs-jupyter mkdocs-redirects mkdocs-ultralytics-plugin mkdocs-macros-plugin
       - name: Ruff fixes
         continue-on-error: true
         run: ruff check --fix --unsafe-fixes --select D --ignore=D100,D104,D203,D205,D212,D213,D401,D406,D407,D413 .

.github/workflows/format.yml
@@ -15,7 +15,7 @@ on:
 jobs:
   format:
-    runs-on: macos-14
+    runs-on: ubuntu-latest
     steps:
       - name: Run Ultralytics Formatting
         uses: ultralytics/actions@main

.github/workflows/links.yml
@@ -29,12 +29,12 @@ jobs:
           sudo mv lychee /usr/local/bin
       - name: Test Markdown and HTML links with retry
-        uses: nick-invision/retry@v3
+        uses: ultralytics/actions/retry@main
         with:
-          timeout_minutes: 5
-          retry_wait_seconds: 60
-          max_attempts: 3
-          command: |
+          timeout_minutes: 60
+          retry_delay_seconds: 900
+          retries: 2
+          run: |
             lychee \
               --scheme https \
               --timeout 60 \
@@ -59,12 +59,12 @@ jobs:
       - name: Test Markdown, HTML, YAML, Python and Notebook links with retry
         if: github.event_name == 'workflow_dispatch'
-        uses: nick-invision/retry@v3
+        uses: ultralytics/actions/retry@main
         with:
-          timeout_minutes: 5
-          retry_wait_seconds: 60
-          max_attempts: 3
-          command: |
+          timeout_minutes: 60
+          retry_delay_seconds: 900
+          retries: 2
+          run: |
             lychee \
               --scheme https \
               --timeout 60 \

.github/workflows/publish.yml
@@ -90,19 +90,20 @@ jobs:
           fi
           echo "PR_NUMBER=$PR_NUMBER" >> $GITHUB_ENV
           echo "PR_TITLE=$PR_TITLE" >> $GITHUB_ENV
       - name: Notify on Slack (Success)
         if: success() && github.event_name == 'push' && steps.check_pypi.outputs.increment == 'True'
-        uses: slackapi/slack-github-action@v1.27.0
+        uses: slackapi/slack-github-action@v2.0.0
         with:
+          webhook-type: incoming-webhook
+          webhook: ${{ secrets.SLACK_WEBHOOK_URL_YOLO }}
           payload: |
-            {"text": "<!channel> GitHub Actions success for ${{ github.workflow }} ✅\n\n\n*Repository:* https://github.com/${{ github.repository }}\n*Action:* https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}\n*Author:* ${{ github.actor }}\n*Event:* NEW `${{ github.repository }} ${{ steps.check_pypi.outputs.current_tag }}` pip package published 😃\n*Job Status:* ${{ job.status }}\n*Pull Request:* <https://github.com/${{ github.repository }}/pull/${{ env.PR_NUMBER }}> ${{ env.PR_TITLE }}\n"}
+            text: "<!channel> GitHub Actions success for ${{ github.workflow }} ✅\n\n\n*Repository:* https://github.com/${{ github.repository }}\n*Action:* https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}\n*Author:* ${{ github.actor }}\n*Event:* NEW `${{ github.repository }} ${{ steps.check_pypi.outputs.current_tag }}` pip package published 😃\n*Job Status:* ${{ job.status }}\n*Pull Request:* <https://github.com/${{ github.repository }}/pull/${{ env.PR_NUMBER }}> ${{ env.PR_TITLE }}\n"
-        env:
-          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_YOLO }}
       - name: Notify on Slack (Failure)
         if: failure()
-        uses: slackapi/slack-github-action@v1.27.0
+        uses: slackapi/slack-github-action@v2.0.0
         with:
+          webhook-type: incoming-webhook
+          webhook: ${{ secrets.SLACK_WEBHOOK_URL_YOLO }}
           payload: |
-            {"text": "<!channel> GitHub Actions error for ${{ github.workflow }} ❌\n\n\n*Repository:* https://github.com/${{ github.repository }}\n*Action:* https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}\n*Author:* ${{ github.actor }}\n*Event:* ${{ github.event_name }}\n*Job Status:* ${{ job.status }}\n*Pull Request:* <https://github.com/${{ github.repository }}/pull/${{ env.PR_NUMBER }}> ${{ env.PR_TITLE }}\n"}
+            text: "<!channel> GitHub Actions error for ${{ github.workflow }} ❌\n\n\n*Repository:* https://github.com/${{ github.repository }}\n*Action:* https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}\n*Author:* ${{ github.actor }}\n*Event:* ${{ github.event_name }}\n*Job Status:* ${{ job.status }}\n*Pull Request:* <https://github.com/${{ github.repository }}/pull/${{ env.PR_NUMBER }}> ${{ env.PR_TITLE }}\n"
-        env:
-          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_YOLO }}

.gitignore (1 change)
@@ -163,6 +163,7 @@ weights/
 *_openvino_model/
 *_paddle_model/
 *_ncnn_model/
+*_imx_model/
 pnnx*

 # Autogenerated files for tests
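The new ignore pattern matches the output directory of the IMX500 export added in this merge. A hedged sketch of producing such a directory (assumes the `imx` export format name and the usual `<model>_<format>_model/` output convention):

```python
from ultralytics import YOLO

# Export a detection model for Sony IMX500 deployment; the exporter writes a 'yolo11n_imx_model/' directory,
# which is why the '*_imx_model/' pattern is now git-ignored
model = YOLO("yolo11n.pt")
model.export(format="imx")
```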

README.md
@@ -8,7 +8,7 @@
 <div>
     <a href="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml"><img src="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml/badge.svg" alt="Ultralytics CI"></a>
-    <a href="https://pepy.tech/project/ultralytics"><img src="https://static.pepy.tech/badge/ultralytics" alt="Ultralytics Downloads"></a>
+    <a href="https://pepy.tech/projects/ultralytics"><img src="https://static.pepy.tech/badge/ultralytics" alt="Ultralytics Downloads"></a>
     <a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="Ultralytics YOLO Citation"></a>
     <a href="https://discord.com/invite/ultralytics"><img alt="Ultralytics Discord" src="https://img.shields.io/discord/1089800235347353640?logo=discord&logoColor=white&label=Discord&color=blue"></a>
     <a href="https://community.ultralytics.com/"><img alt="Ultralytics Forums" src="https://img.shields.io/discourse/users?server=https%3A%2F%2Fcommunity.ultralytics.com&logo=discourse&label=Forums&color=blue"></a>
@@ -27,7 +27,9 @@ We hope that the resources here will help you get the most out of YOLO. Please b
 To request an Enterprise License please complete the form at [Ultralytics Licensing](https://www.ultralytics.com/license).

-<img width="100%" src="https://raw.githubusercontent.com/ultralytics/assets/refs/heads/main/yolo/performance-comparison.png" alt="YOLO11 performance plots"></a>
+<a href="https://docs.ultralytics.com/models/yolo11/" target="_blank">
+  <img width="100%" src="https://raw.githubusercontent.com/ultralytics/assets/refs/heads/main/yolo/performance-comparison.png" alt="YOLO11 performance plots">
+</a>

 <div align="center">
   <a href="https://github.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="2%" alt="Ultralytics GitHub"></a>
@@ -55,7 +57,7 @@ See below for a quickstart install and usage examples, and see our [Docs](https:
 Pip install the ultralytics package including all [requirements](https://github.com/ultralytics/ultralytics/blob/main/pyproject.toml) in a [**Python>=3.8**](https://www.python.org/) environment with [**PyTorch>=1.8**](https://pytorch.org/get-started/locally/).

-[![PyPI - Version](https://img.shields.io/pypi/v/ultralytics?logo=pypi&logoColor=white)](https://pypi.org/project/ultralytics/) [![Ultralytics Downloads](https://static.pepy.tech/badge/ultralytics)](https://pepy.tech/project/ultralytics) [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/ultralytics?logo=python&logoColor=gold)](https://pypi.org/project/ultralytics/)
+[![PyPI - Version](https://img.shields.io/pypi/v/ultralytics?logo=pypi&logoColor=white)](https://pypi.org/project/ultralytics/) [![Ultralytics Downloads](https://static.pepy.tech/badge/ultralytics)](https://www.pepy.tech/projects/ultralytics) [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/ultralytics?logo=python&logoColor=gold)](https://pypi.org/project/ultralytics/)

 ```bash
 pip install ultralytics
@@ -150,8 +152,8 @@ See [Segmentation Docs](https://docs.ultralytics.com/tasks/segment/) for usage e
 | [YOLO11l-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l-seg.pt) | 640 | 53.4 | 42.9 | 344.2 ± 3.2 | 7.8 ± 0.2 | 27.6 | 142.2 |
 | [YOLO11x-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-seg.pt) | 640 | 54.7 | 43.8 | 664.5 ± 3.2 | 15.8 ± 0.7 | 62.1 | 319.0 |

-- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val segment data=coco-seg.yaml device=0`
-- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val segment data=coco-seg.yaml batch=1 device=0|cpu`
+- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val segment data=coco.yaml device=0`
+- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val segment data=coco.yaml batch=1 device=0|cpu`

 </details>

README.zh-CN.md
@@ -8,7 +8,7 @@
 <div>
     <a href="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml"><img src="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml/badge.svg" alt="Ultralytics CI"></a>
-    <a href="https://pepy.tech/project/ultralytics"><img src="https://static.pepy.tech/badge/ultralytics" alt="Ultralytics Downloads"></a>
+    <a href="https://pepy.tech/projects/ultralytics"><img src="https://static.pepy.tech/badge/ultralytics" alt="Ultralytics Downloads"></a>
     <a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="Ultralytics YOLO Citation"></a>
     <a href="https://discord.com/invite/ultralytics"><img alt="Ultralytics Discord" src="https://img.shields.io/discord/1089800235347353640?logo=discord&logoColor=white&label=Discord&color=blue"></a>
     <a href="https://community.ultralytics.com/"><img alt="Ultralytics Forums" src="https://img.shields.io/discourse/users?server=https%3A%2F%2Fcommunity.ultralytics.com&logo=discourse&label=Forums&color=blue"></a>
@@ -27,7 +27,9 @@
 想申请企业许可证,请完成 [Ultralytics Licensing](https://www.ultralytics.com/license) 上的表单。

-<img width="100%" src="https://raw.githubusercontent.com/ultralytics/assets/refs/heads/main/yolo/performance-comparison.png" alt="YOLO11 performance plots"></a>
+<a href="https://docs.ultralytics.com/models/yolo11/" target="_blank">
+  <img width="100%" src="https://raw.githubusercontent.com/ultralytics/assets/refs/heads/main/yolo/performance-comparison.png" alt="YOLO11 performance plots">
+</a>

 <div align="center">
   <a href="https://github.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="2%" alt="Ultralytics GitHub"></a>
@@ -55,7 +57,7 @@
 在 [**Python>=3.8**](https://www.python.org/) 环境中使用 [**PyTorch>=1.8**](https://pytorch.org/get-started/locally/) 通过 pip 安装包含所有[依赖项](https://github.com/ultralytics/ultralytics/blob/main/pyproject.toml) 的 ultralytics 包。

-[![PyPI - Version](https://img.shields.io/pypi/v/ultralytics?logo=pypi&logoColor=white)](https://pypi.org/project/ultralytics/) [![Ultralytics Downloads](https://static.pepy.tech/badge/ultralytics)](https://pepy.tech/project/ultralytics) [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/ultralytics?logo=python&logoColor=gold)](https://pypi.org/project/ultralytics/)
+[![PyPI - Version](https://img.shields.io/pypi/v/ultralytics?logo=pypi&logoColor=white)](https://pypi.org/project/ultralytics/) [![Ultralytics Downloads](https://static.pepy.tech/badge/ultralytics)](https://www.pepy.tech/projects/ultralytics) [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/ultralytics?logo=python&logoColor=gold)](https://pypi.org/project/ultralytics/)

 ```bash
 pip install ultralytics
@@ -150,8 +152,8 @@ YOLO11 [检测](https://docs.ultralytics.com/tasks/detect/)、[分割](https://d
 | [YOLO11l-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l-seg.pt) | 640 | 53.4 | 42.9 | 344.2 ± 3.2 | 7.8 ± 0.2 | 27.6 | 142.2 |
 | [YOLO11x-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-seg.pt) | 640 | 54.7 | 43.8 | 664.5 ± 3.2 | 15.8 ± 0.7 | 62.1 | 319.0 |

-- **mAP<sup>val</sup>** 值针对单模型单尺度在 [COCO val2017](https://cocodataset.org/) 数据集上进行。 <br>复制命令 `yolo val segment data=coco-seg.yaml device=0`
-- **速度**在使用 [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) 实例的 COCO 验证图像上平均。 <br>复制命令 `yolo val segment data=coco-seg.yaml batch=1 device=0|cpu`
+- **mAP<sup>val</sup>** 值针对单模型单尺度在 [COCO val2017](https://cocodataset.org/) 数据集上进行。 <br>复制命令 `yolo val segment data=coco.yaml device=0`
+- **速度**在使用 [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) 实例的 COCO 验证图像上平均。 <br>复制命令 `yolo val segment data=coco.yaml batch=1 device=0|cpu`

 </details>

docker/Dockerfile
@@ -56,7 +56,6 @@ RUN pip install numpy==1.23.5
 # Remove extra build files
 RUN rm -rf tmp /root/.config/Ultralytics/persistent_cache.json
-

 # Usage Examples -------------------------------------------------------------------------------------------------------

 # Build and Push

docker/Dockerfile-cpu
@@ -2,8 +2,8 @@
 # Builds ultralytics/ultralytics:latest-cpu image on DockerHub https://hub.docker.com/r/ultralytics/ultralytics
 # Image is CPU-optimized for ONNX, OpenVINO and PyTorch YOLO11 deployments

-# Start FROM Ubuntu image https://hub.docker.com/_/ubuntu
-FROM ubuntu:23.10
+# Use official Python base image for reproducibility (3.11.10 for export and 3.12.6 for inference)
+FROM python:3.11.10-slim-bookworm

 # Set environment variables
 ENV PYTHONUNBUFFERED=1 \
@@ -39,14 +39,14 @@ RUN pip install -e ".[export]" --extra-index-url https://download.pytorch.org/wh
 RUN yolo export model=tmp/yolo11n.pt format=edgetpu imgsz=32
 RUN yolo export model=tmp/yolo11n.pt format=ncnn imgsz=32
 # Requires Python<=3.10, bug with paddlepaddle==2.5.0 https://github.com/PaddlePaddle/X2Paddle/issues/991
-# RUN pip install "paddlepaddle>=2.6.0" x2paddle
-
-# Creates a symbolic link to make 'python' point to 'python3'
-RUN ln -sf /usr/bin/python3 /usr/bin/python
+RUN pip install "paddlepaddle>=2.6.0" x2paddle

 # Remove extra build files
 RUN rm -rf tmp /root/.config/Ultralytics/persistent_cache.json

+# Set default command to bash
+CMD ["/bin/bash"]
+
 # Usage Examples -------------------------------------------------------------------------------------------------------

 # Build and Push
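Since the Paddle install is re-enabled above, a small sketch of the export it supports (assumes the `paddle` export format; `imgsz=32` keeps the test export lightweight, matching the other export checks in the Dockerfile):

```python
from ultralytics import YOLO

# PaddlePaddle export, producing a 'yolo11n_paddle_model/' directory via x2paddle
model = YOLO("yolo11n.pt")
model.export(format="paddle", imgsz=32)
```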

docker/Dockerfile-jupyter
@@ -17,7 +17,7 @@ RUN mkdir /data/weights && /usr/local/bin/yolo settings weights_dir="/data/weigh
 RUN mkdir /data/runs && /usr/local/bin/yolo settings runs_dir="/data/runs"

 # Start JupyterLab with tutorial notebook
-ENTRYPOINT ["/usr/local/bin/jupyter", "lab", "--allow-root", "/ultralytics/examples/tutorial.ipynb"]
+ENTRYPOINT ["/usr/local/bin/jupyter", "lab", "--allow-root", "--ip=*", "/ultralytics/examples/tutorial.ipynb"]

 # Usage Examples -------------------------------------------------------------------------------------------------------

docker/Dockerfile-runner
@@ -35,7 +35,6 @@ ENTRYPOINT sh -c './config.sh --url https://github.com/ultralytics/ultralytics \
            --replace && \
            ./run.sh'
-

 # Usage Examples -------------------------------------------------------------------------------------------------------

 # Build and Push

docs/README.md
@@ -15,7 +15,7 @@
 ## 🛠 Installation

 [![PyPI - Version](https://img.shields.io/pypi/v/ultralytics?logo=pypi&logoColor=white)](https://pypi.org/project/ultralytics/)
-[![Downloads](https://static.pepy.tech/badge/ultralytics)](https://pepy.tech/project/ultralytics)
+[![Downloads](https://static.pepy.tech/badge/ultralytics)](https://www.pepy.tech/projects/ultralytics)
 [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/ultralytics?logo=python&logoColor=gold)](https://pypi.org/project/ultralytics/)

 To install the ultralytics package in developer mode, ensure you have Git and Python 3 installed on your system. Then, follow these steps:

docs/build_docs.py
@@ -252,7 +252,7 @@ def minify_html_files():
             content = f.read()

         original_size = len(content)
-        minified_content = minify(content)
+        minified_content = minify(content, keep_closing_tags=True, minify_css=True, minify_js=True)
         minified_size = len(minified_content)
         total_original_size += original_size
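For context, a short sketch of the `minify-html` call tuned above (assumes the `minify_html` package that `build_docs.py` imports `minify` from; the keyword arguments are the ones added in this diff):

```python
import minify_html

html = "<html>\n  <body>\n    <p>Hello   world</p>\n  </body>\n</html>"
minified = minify_html.minify(html, keep_closing_tags=True, minify_css=True, minify_js=True)
print(len(html), "->", len(minified))  # output is smaller while closing tags are preserved
```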

docs/en/datasets/index.md
@@ -74,6 +74,7 @@ Pose estimation is a technique used to determine the pose of the object relative
 - [COCO8-pose](pose/coco8-pose.md): A smaller dataset for pose estimation tasks, containing a subset of 8 COCO images with human pose annotations.
 - [Tiger-pose](pose/tiger-pose.md): A compact dataset consisting of 263 images focused on tigers, annotated with 12 keypoints per tiger for pose estimation tasks.
 - [Hand-Keypoints](pose/hand-keypoints.md): A concise dataset featuring over 26,000 images centered on human hands, annotated with 21 keypoints per hand, designed for pose estimation tasks.
+- [Dog-pose](pose/dog-pose.md): A comprehensive dataset featuring approximately 6,000 images focused on dogs, annotated with 24 keypoints per dog, tailored for pose estimation tasks.

 ## [Classification](classify/index.md)

docs/en/datasets/pose/dog-pose.md (new file)
@@ -0,0 +1,141 @@
---
comments: true
description: Discover the Dog-Pose dataset for pose detection. Featuring 6,773 training and 1,703 test images, it's a robust dataset for training YOLO11 models.
keywords: Dog-Pose, Ultralytics, pose detection dataset, YOLO11, machine learning, computer vision, training data
---

# Dog-Pose Dataset

## Introduction

The [Ultralytics](https://www.ultralytics.com/) Dog-pose dataset is a high-quality and extensive dataset specifically curated for dog keypoint estimation. With 6,773 training images and 1,703 test images, this dataset provides a solid foundation for training robust pose estimation models. Each annotated image includes 24 keypoints with 3 dimensions per keypoint (x, y, visibility), making it a valuable resource for advanced research and development in computer vision.

<img src="https://github.com/ultralytics/docs/releases/download/0/ultralytics-dogs.avif" alt="Ultralytics Dog-pose display image" width="800">

This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.com/) and [YOLO11](https://github.com/ultralytics/ultralytics).

## Dataset YAML

A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It includes paths, keypoint details, and other relevant information. In the case of the Dog-pose dataset, the `dog-pose.yaml` file is available at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/dog-pose.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/dog-pose.yaml).

!!! example "ultralytics/cfg/datasets/dog-pose.yaml"

    ```yaml
    --8<-- "ultralytics/cfg/datasets/dog-pose.yaml"
    ```

## Usage

To train a YOLO11n-pose model on the Dog-pose dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch) with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.

!!! example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-pose.pt")  # load a pretrained model (recommended for training)

        # Train the model
        results = model.train(data="dog-pose.yaml", epochs=100, imgsz=640)
        ```

    === "CLI"

        ```bash
        # Start training from a pretrained *.pt model
        yolo pose train data=dog-pose.yaml model=yolo11n-pose.pt epochs=100 imgsz=640
        ```

## Sample Images and Annotations

Here are some examples of images from the Dog-pose dataset, along with their corresponding annotations:

<img src="https://github.com/ultralytics/docs/releases/download/0/mosaiced-training-batch-2-dog-pose.avif" alt="Dataset sample image" width="800">

- **Mosaiced Image**: This image demonstrates a training batch composed of mosaiced dataset images. Mosaicing is a technique used during training that combines multiple images into a single image to increase the variety of objects and scenes within each training batch. This helps improve the model's ability to generalize to different object sizes, aspect ratios, and contexts.

The example showcases the variety and complexity of the images in the Dog-pose dataset and the benefits of using mosaicing during the training process.

## Citations and Acknowledgments

If you use the Dog-pose dataset in your research or development work, please cite the following paper:

!!! quote ""

    === "BibTeX"

        ```bibtex
        @inproceedings{khosla2011fgvc,
          title={Novel dataset for Fine-Grained Image Categorization},
          author={Aditya Khosla and Nityananda Jayadevaprakash and Bangpeng Yao and Li Fei-Fei},
          booktitle={First Workshop on Fine-Grained Visual Categorization (FGVC), IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
          year={2011}
        }

        @inproceedings{deng2009imagenet,
          title={ImageNet: A Large-Scale Hierarchical Image Database},
          author={Jia Deng and Wei Dong and Richard Socher and Li-Jia Li and Kai Li and Li Fei-Fei},
          booktitle={IEEE Computer Vision and Pattern Recognition (CVPR)},
          year={2009}
        }
        ```

We would like to acknowledge the Stanford team for creating and maintaining this valuable resource for the [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) community. For more information about the Dog-pose dataset and its creators, visit the [Stanford Dogs Dataset website](http://vision.stanford.edu/aditya86/ImageNetDogs/).

## FAQ

### What is the Dog-pose dataset, and how is it used with Ultralytics YOLO11?

The Dog-pose dataset features approximately 6,000 images annotated with 24 keypoints for dog pose estimation. Ideal for training and validating models with [Ultralytics YOLO11](https://docs.ultralytics.com/models/yolo11/), it supports applications like animal behavior analysis and veterinary studies.

### How do I train a YOLO11 model using the Dog-pose dataset in Ultralytics?

To train a YOLO11n-pose model on the Dog-pose dataset for 100 epochs with an image size of 640, follow these examples:

!!! example "Train Example"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo11n-pose.pt")

        # Train the model
        results = model.train(data="dog-pose.yaml", epochs=100, imgsz=640)
        ```

    === "CLI"

        ```bash
        yolo pose train data=dog-pose.yaml model=yolo11n-pose.pt epochs=100 imgsz=640
        ```

For a comprehensive list of training arguments, refer to the model [Training](../../modes/train.md) page.

### What are the benefits of using the Dog-pose dataset?

The Dog-pose dataset offers several benefits:

- **Large and Diverse Dataset**: With approximately 6,000 images, it provides a substantial amount of data covering a wide range of dog poses, breeds, and contexts, enabling robust model training and evaluation.
- **Pose-specific Annotations**: Offers detailed annotations for pose estimation, ensuring high-quality data for training pose detection models.
- **Real-World Scenarios**: Includes images from varied environments, enhancing the model's ability to generalize to real-world applications.
- **Model Performance Improvement**: The diversity and scale of the dataset help improve model accuracy and robustness, particularly for tasks involving fine-grained pose estimation.

For more about its features and usage, see the [Dataset Introduction](#introduction) section.

### How does mosaicing benefit the YOLO11 training process using the Dog-pose dataset?

Mosaicing, as illustrated in the sample images from the Dog-pose dataset, merges multiple images into a single composite, enriching the diversity of objects and scenes in each training batch. This approach enhances the model's capacity to generalize across different object sizes, aspect ratios, and contexts, leading to improved performance. For example images, refer to the [Sample Images and Annotations](#sample-images-and-annotations) section.

### Where can I find the Dog-pose dataset YAML file and how do I use it?

The Dog-pose dataset YAML file can be found [here](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/dog-pose.yaml). This file defines the dataset configuration, including paths, classes, and other relevant information. Use this file with the YOLO11 training scripts as mentioned in the [Train Example](#how-do-i-train-a-yolo11-model-using-the-dog-pose-dataset-in-ultralytics) section.

For more FAQs and detailed documentation, visit the [Ultralytics Documentation](https://docs.ultralytics.com/).
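A brief follow-on prediction sketch for a model trained as above (the weights path is the default trainer output and the image path is hypothetical):

```python
from ultralytics import YOLO

# Load the best weights saved by the training run and run inference on a dog image
model = YOLO("runs/pose/train/weights/best.pt")
results = model("path/to/dog.jpg")
print(results[0].keypoints.xy.shape)  # (num_dogs, 24, 2) keypoint coordinates
```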

docs/en/datasets/pose/hand-keypoints.md
@@ -10,6 +10,17 @@ keywords: Hand KeyPoints, pose estimation, dataset, keypoints, MediaPipe, YOLO,
 The hand-keypoints dataset contains 26,768 images of hands annotated with keypoints, making it suitable for training models like Ultralytics YOLO for pose estimation tasks. The annotations were generated using the Google MediaPipe library, ensuring high [accuracy](https://www.ultralytics.com/glossary/accuracy) and consistency, and the dataset is compatible [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics) formats.

+<p align="center">
+  <br>
+  <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/fd6u1TW_AGY"
+    title="YouTube video player" frameborder="0"
+    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
+    allowfullscreen>
+  </iframe>
+  <br>
+  <strong>Watch:</strong> Hand Keypoints Estimation with Ultralytics YOLO11 | Human Hand Pose Estimation Tutorial
+</p>
+
 ## Hand Landmarks

 ![Hand Landmarks](https://github.com/ultralytics/docs/releases/download/0/hand_landmarks.jpg)
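A short training sketch for this dataset (assumes the bundled `hand-keypoints.yaml` config, which downloads the dataset automatically on first use):

```python
from ultralytics import YOLO

# Fine-tune a pretrained pose model on the hand-keypoints dataset (21 keypoints per hand)
model = YOLO("yolo11n-pose.pt")
results = model.train(data="hand-keypoints.yaml", epochs=100, imgsz=640)
```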

docs/en/datasets/pose/index.md
@@ -127,6 +127,15 @@ This section outlines the datasets that are compatible with Ultralytics YOLO for
 - **Usage**: Great for human hand pose estimation.
 - [Read more about Hand Keypoints](hand-keypoints.md)

+### Dog-Pose
+
+- **Description**: The Dog Pose dataset contains approximately 6,000 images, providing a diverse and extensive resource for training and validation of dog pose estimation models.
+- **Label Format**: Follows the Ultralytics YOLO format, with annotations for multiple keypoints specific to dog anatomy.
+- **Number of Classes**: 1 (Dog).
+- **Keypoints**: Includes 24 keypoints tailored to dog poses, such as limbs, joints, and head positions.
+- **Usage**: Ideal for training models to estimate dog poses in various scenarios, from research to real-world applications.
+- [Read more about Dog-Pose](dog-pose.md)
+
 ### Adding your own dataset

 If you have your own dataset and would like to use it for training pose estimation models with Ultralytics YOLO format, ensure that it follows the format specified above under "Ultralytics YOLO format". Convert your annotations to the required format and specify the paths, number of classes, and class names in the YAML configuration file.
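To confirm the keypoint layout described above, a small sketch that reads the bundled dataset config (assumes a local install of the `ultralytics` package so `ROOT` resolves to the package directory):

```python
import yaml

from ultralytics.utils import ROOT

# dog-pose.yaml ships with the package; kpt_shape should read [24, 3] (x, y, visibility per keypoint)
cfg = yaml.safe_load((ROOT / "cfg/datasets/dog-pose.yaml").read_text())
print(cfg["names"], cfg["kpt_shape"])
```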

docs/en/datasets/segment/coco.md
@@ -56,14 +56,14 @@ To train a YOLO11n-seg model on the COCO-Seg dataset for 100 [epochs](https://ww
         model = YOLO("yolo11n-seg.pt")  # load a pretrained model (recommended for training)

         # Train the model
-        results = model.train(data="coco-seg.yaml", epochs=100, imgsz=640)
+        results = model.train(data="coco.yaml", epochs=100, imgsz=640)
         ```

     === "CLI"

         ```bash
         # Start training from a pretrained *.pt model
-        yolo segment train data=coco-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
+        yolo segment train data=coco.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
         ```

 ## Sample Images and Annotations
@@ -118,14 +118,14 @@ To train a YOLO11n-seg model on the COCO-Seg dataset for 100 epochs with an imag
         model = YOLO("yolo11n-seg.pt")  # load a pretrained model (recommended for training)

         # Train the model
-        results = model.train(data="coco-seg.yaml", epochs=100, imgsz=640)
+        results = model.train(data="coco.yaml", epochs=100, imgsz=640)
         ```

     === "CLI"

         ```bash
         # Start training from a pretrained *.pt model
-        yolo segment train data=coco-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
+        yolo segment train data=coco.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
         ```

 ### What are the key features of the COCO-Seg dataset?
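A short validation sketch matching the updated `coco.yaml` reference (assumes a pretrained segmentation checkpoint; the COCO val set downloads on first use):

```python
from ultralytics import YOLO

# Validate a pretrained segmentation model on COCO to reproduce the published box/mask mAP
model = YOLO("yolo11n-seg.pt")
metrics = model.val(data="coco.yaml")
print(metrics.box.map, metrics.seg.map)  # mAP50-95 for boxes and masks
```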

docs/en/guides/analytics.md
@@ -45,126 +45,15 @@ This guide provides a comprehensive overview of three fundamental types of [data
-        ```
-
-    === "Python"
-
-        ```python
-        import cv2
-        from ultralytics import solutions
-
-        cap = cv2.VideoCapture("Path/to/video/file.mp4")
-        assert cap.isOpened(), "Error reading video file"
-        w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
-
-        out = cv2.VideoWriter(
-            "ultralytics_analytics.avi",
-            cv2.VideoWriter_fourcc(*"MJPG"),
-            fps,
-            (1920, 1080),  # This is fixed
-        )
-
-        analytics = solutions.Analytics(
-            analytics_type="line",
-            show=True,
-        )
-
-        frame_count = 0
-        while cap.isOpened():
-            success, im0 = cap.read()
-            if success:
-                frame_count += 1
-                im0 = analytics.process_data(im0, frame_count)  # update analytics graph every frame
-                out.write(im0)  # write the video file
-            else:
-                break
-
-        cap.release()
-        out.release()
-        cv2.destroyAllWindows()
-        ```
-
-    === "Pie Chart"
-
-        ```python
-        import cv2
-        from ultralytics import solutions
-
-        cap = cv2.VideoCapture("Path/to/video/file.mp4")
-        assert cap.isOpened(), "Error reading video file"
-        w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
-
-        out = cv2.VideoWriter(
-            "ultralytics_analytics.avi",
-            cv2.VideoWriter_fourcc(*"MJPG"),
-            fps,
-            (1920, 1080),  # This is fixed
-        )
-
-        analytics = solutions.Analytics(
-            analytics_type="pie",
-            show=True,
-        )
-
-        frame_count = 0
-        while cap.isOpened():
-            success, im0 = cap.read()
-            if success:
-                frame_count += 1
-                im0 = analytics.process_data(im0, frame_count)  # update analytics graph every frame
-                out.write(im0)  # write the video file
-            else:
-                break
-
-        cap.release()
-        out.release()
-        cv2.destroyAllWindows()
-        ```
-
-    === "Bar Plot"
-
-        ```python
-        import cv2
-        from ultralytics import solutions
-
-        cap = cv2.VideoCapture("Path/to/video/file.mp4")
-        assert cap.isOpened(), "Error reading video file"
-        w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
-
-        out = cv2.VideoWriter(
-            "ultralytics_analytics.avi",
-            cv2.VideoWriter_fourcc(*"MJPG"),
-            fps,
-            (1920, 1080),  # This is fixed
-        )
-
-        analytics = solutions.Analytics(
-            analytics_type="bar",
-            show=True,
-        )
-
-        frame_count = 0
-        while cap.isOpened():
-            success, im0 = cap.read()
-            if success:
-                frame_count += 1
-                im0 = analytics.process_data(im0, frame_count)  # update analytics graph every frame
-                out.write(im0)  # write the video file
-            else:
-                break
-
-        cap.release()
-        out.release()
-        cv2.destroyAllWindows()
-        ```
-
-    === "Area chart"
-
-        ```python
-        import cv2
+        # generate the pie chart
+        yolo solutions analytics analytics_type="pie" show=True
+
+        # generate the bar plots
+        yolo solutions analytics analytics_type="bar" show=True
+
+        # generate the area plots
+        yolo solutions analytics analytics_type="area" show=True
+        ```
+
+    === "Python"
+
+        ```python
+        import cv2
@@ -173,9 +62,9 @@ This guide provides a comprehensive overview of three fundamental types of [data
         cap = cv2.VideoCapture("Path/to/video/file.mp4")
         assert cap.isOpened(), "Error reading video file"
         w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

+        # Video writer
         out = cv2.VideoWriter(
             "ultralytics_analytics.avi",
             cv2.VideoWriter_fourcc(*"MJPG"),
             fps,
@@ -183,11 +72,15 @@ This guide provides a comprehensive overview of three fundamental types of [data
             (1920, 1080),  # This is fixed
         )

+        # Init analytics
         analytics = solutions.Analytics(
-            analytics_type="area",
-            show=True,
+            show=True,  # Display the output
+            analytics_type="line",  # Pass the analytics type, could be "pie", "bar" or "area".
+            model="yolo11n.pt",  # Path to the YOLO11 model file
+            # classes=[0, 2],  # If you want to count specific classes i.e person and car with COCO pretrained model.
         )

+        # Process video
         frame_count = 0
         while cap.isOpened():
             success, im0 = cap.read()

docs/en/guides/distance-calculation.md
@@ -55,6 +55,7 @@ Measuring the gap between two objects is known as distance calculation within a
         # Init distance-calculation obj
         distance = solutions.DistanceCalculation(model="yolo11n.pt", show=True)

+        # Process video
         while cap.isOpened():
             success, im0 = cap.read()
             if not success:

@ -47,119 +47,12 @@ A heatmap generated with [Ultralytics YOLO11](https://github.com/ultralytics/ult
# Pass a custom colormap
yolo solutions heatmap colormap=cv2.COLORMAP_INFERNO
```
=== "Python"
```python
import cv2
from ultralytics import solutions
cap = cv2.VideoCapture("Path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
# Video writer
video_writer = cv2.VideoWriter("heatmap_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Init heatmap
heatmap = solutions.Heatmap(
show=True,
model="yolo11n.pt",
colormap=cv2.COLORMAP_PARULA,
)
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
break
im0 = heatmap.generate_heatmap(im0)
video_writer.write(im0)
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
=== "Line Counting"
```python
import cv2
from ultralytics import solutions
cap = cv2.VideoCapture("Path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
# Video writer
video_writer = cv2.VideoWriter("heatmap_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# line for object counting
line_points = [(20, 400), (1080, 404)]
# Init heatmap
heatmap = solutions.Heatmap(
show=True,
model="yolo11n.pt",
colormap=cv2.COLORMAP_PARULA,
region=line_points,
)
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
break
im0 = heatmap.generate_heatmap(im0)
video_writer.write(im0)
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```

=== "Polygon Counting"
```python
import cv2
from ultralytics import solutions
cap = cv2.VideoCapture("Path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
# Video writer
video_writer = cv2.VideoWriter("heatmap_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Define polygon points
region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360), (20, 400)]
# Init heatmap
heatmap = solutions.Heatmap(
show=True,
model="yolo11n.pt",
colormap=cv2.COLORMAP_PARULA,
region=region_points,
)
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
break
im0 = heatmap.generate_heatmap(im0)
video_writer.write(im0)
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
=== "Region Counting"
```python ```python
import cv2 import cv2
@ -173,51 +66,24 @@ A heatmap generated with [Ultralytics YOLO11](https://github.com/ultralytics/ult
# Video writer # Video writer
video_writer = cv2.VideoWriter("heatmap_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h)) video_writer = cv2.VideoWriter("heatmap_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Define region points # In case you want to apply object counting + heatmaps, you can pass region points.
region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360)] # region_points = [(20, 400), (1080, 400)] # Define line points
# region_points = [(20, 400), (1080, 400), (1080, 360), (20, 360)] # Define region points
# Init heatmap # region_points = [(20, 400), (1080, 400), (1080, 360), (20, 360), (20, 400)] # Define polygon points
heatmap = solutions.Heatmap(
show=True,
model="yolo11n.pt",
colormap=cv2.COLORMAP_PARULA,
region=region_points,
)
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
break
im0 = heatmap.generate_heatmap(im0)
video_writer.write(im0)
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
=== "Specific Classes"
```python
import cv2
from ultralytics import solutions
cap = cv2.VideoCapture("Path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
# Video writer
video_writer = cv2.VideoWriter("heatmap_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Init heatmap # Init heatmap
heatmap = solutions.Heatmap( heatmap = solutions.Heatmap(
show=True, show=True, # Display the output
model="yolo11n.pt", model="yolo11n.pt", # Path to the YOLO11 model file
classes=[0, 2], colormap=cv2.COLORMAP_PARULA, # Colormap of heatmap
# region=region_points, # If you want to do object counting with heatmaps, you can pass region_points
# classes=[0, 2], # If you want to generate heatmap for specific classes i.e person and car.
# show_in=True, # Display in counts
# show_out=True, # Display out counts
# line_width=2, # Adjust the line width for bounding boxes and text display
) )
# Process video
while cap.isOpened(): while cap.isOpened():
success, im0 = cap.read() success, im0 = cap.read()
if not success: if not success:

@ -19,7 +19,7 @@ Object counting with [Ultralytics YOLO11](https://github.com/ultralytics/ultraly
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Object Counting using Ultralytics YOLOv8
</td>
<td align="center">
<iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/Fj9TStNBVoY"
@ -58,7 +58,7 @@ Object counting with [Ultralytics YOLO11](https://github.com/ultralytics/ultraly
yolo solutions count source="path/to/video/file.mp4"

# Pass region coordinates
yolo solutions count region=[(20, 400), (1080, 400), (1080, 360), (20, 360)]
```

=== "Python"
@ -73,165 +73,22 @@ Object counting with [Ultralytics YOLO11](https://github.com/ultralytics/ultraly
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

# Define region points
# region_points = [(20, 400), (1080, 400)]  # For line counting
region_points = [(20, 400), (1080, 400), (1080, 360), (20, 360)]  # For rectangle region counting
# region_points = [(20, 400), (1080, 400), (1080, 360), (20, 360), (20, 400)]  # For polygon region counting

# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

# Init Object Counter
counter = solutions.ObjectCounter(
    show=True,  # Display the output
    region=region_points,  # Pass region points
    model="yolo11n.pt",  # model="yolo11n-obb.pt" for object counting using YOLO11 OBB model.
    # classes=[0, 2],  # If you want to count specific classes i.e person and car with COCO pretrained model.
    # show_in=True,  # Display in counts
    # show_out=True,  # Display out counts
    # line_width=2,  # Adjust the line width for bounding boxes and text display
)

# Process video
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
break
im0 = counter.count(im0)
video_writer.write(im0)
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
=== "OBB Object Counting"
```python
import cv2
from ultralytics import solutions
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
# line or region points
line_points = [(20, 400), (1080, 400)]
# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Init Object Counter
counter = solutions.ObjectCounter(
show=True,
region=line_points,
model="yolo11n-obb.pt",
)
# Process video
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
break
im0 = counter.count(im0)
video_writer.write(im0)
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
=== "Count in Polygon"
```python
import cv2
from ultralytics import solutions
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
# Define region points
region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360), (20, 400)]
# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Init Object Counter
counter = solutions.ObjectCounter(
show=True,
region=region_points,
model="yolo11n.pt",
)
# Process video
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
break
im0 = counter.count(im0)
video_writer.write(im0)
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
=== "Count in Line"
```python
import cv2
from ultralytics import solutions
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
# Define region points
line_points = [(20, 400), (1080, 400)]
# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Init Object Counter
counter = solutions.ObjectCounter(
show=True,
region=line_points,
model="yolo11n.pt",
)
# Process video
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
break
im0 = counter.count(im0)
video_writer.write(im0)
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
=== "Specific Classes"
```python
import cv2
from ultralytics import solutions
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Init Object Counter
counter = solutions.ObjectCounter(
show=True,
model="yolo11n.pt",
classes=[0, 1],
)

# Process video
@ -291,7 +148,7 @@ def count_objects_in_region(video_path, output_video_path, model_path):
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
video_writer = cv2.VideoWriter(output_video_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

region_points = [(20, 400), (1080, 400), (1080, 360), (20, 360)]
counter = solutions.ObjectCounter(show=True, region=region_points, model=model_path)

while cap.isOpened():

@ -45,7 +45,7 @@ Queue management using [Ultralytics YOLO11](https://github.com/ultralytics/ultra
yolo solutions queue source="path/to/video/file.mp4"

# Pass queue coordinates
yolo solutions queue region=[(20, 400), (1080, 400), (1080, 360), (20, 360)]
```

=== "Python"
@ -60,53 +60,23 @@ Queue management using [Ultralytics YOLO11](https://github.com/ultralytics/ultra
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

# Video writer
video_writer = cv2.VideoWriter("queue_management.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

queue_region = [(20, 400), (1080, 400), (1080, 360), (20, 360)]  # Define queue region points
# queue_region = [(20, 400), (1080, 400), (1080, 360), (20, 360), (20, 400)]  # Define queue polygon points

# Init Queue Manager
queue = solutions.QueueManager(
    show=True,  # Display the output
    model="yolo11n.pt",  # Path to the YOLO11 model file
    region=queue_region,  # Pass queue region points
    # classes=[0, 2],  # If you want to count specific classes i.e person and car with COCO pretrained model.
    # line_width=2,  # Adjust the line width for bounding boxes and text display
)

# Process video
while cap.isOpened():
    success, im0 = cap.read()
@ -156,7 +126,7 @@ import cv2
from ultralytics import solutions

cap = cv2.VideoCapture("path/to/video.mp4")

queue_region = [(20, 400), (1080, 400), (1080, 360), (20, 360)]

queue = solutions.QueueManager(
    model="yolo11n.pt",

@ -34,56 +34,65 @@ keywords: object counting, regions, YOLOv8, computer vision, Ultralytics, effici
| ![People Counting in Different Region using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/people-counting-different-region-ultralytics-yolov8.avif) | ![Crowd Counting in Different Region using Ultralytics YOLOv8](https://github.com/ultralytics/docs/releases/download/0/crowd-counting-different-region-ultralytics-yolov8.avif) |
| People Counting in Different Region using Ultralytics YOLOv8 | Crowd Counting in Different Region using Ultralytics YOLOv8 |
!!! example "Region Counting Example"

=== "Python"

```python
import cv2

from ultralytics import solutions

cap = cv2.VideoCapture("Path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

# Define region points
# region_points = [(20, 400), (1080, 400), (1080, 360), (20, 360)]  # Pass region as list

# pass region as dictionary
region_points = {
    "region-01": [(50, 50), (250, 50), (250, 250), (50, 250)],
    "region-02": [(640, 640), (780, 640), (780, 720), (640, 720)],
}

# Video writer
video_writer = cv2.VideoWriter("region_counting.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

# Init RegionCounter
region = solutions.RegionCounter(
    show=True,
    region=region_points,
    model="yolo11n.pt",
)

# Process video
while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Video frame is empty or video processing has been successfully completed.")
        break
    im0 = region.count(im0)
    video_writer.write(im0)

cap.release()
video_writer.release()
cv2.destroyAllWindows()
```

!!! tip "Ultralytics Example Code"

    The Ultralytics region counting module is available in our [examples section](https://github.com/ultralytics/ultralytics/blob/main/examples/YOLOv8-Region-Counter/yolov8_region_counter.py). You can explore this example for code customization and modify it to suit your specific use case.

### Argument `RegionCounter`

Here's a table with the `RegionCounter` arguments:

| Name         | Type   | Default                    | Description                                           |
| ------------ | ------ | -------------------------- | ----------------------------------------------------- |
| `model`      | `str`  | `None`                     | Path to Ultralytics YOLO Model File                   |
| `region`     | `list` | `[(20, 400), (1260, 400)]` | List of points defining the counting region.          |
| `line_width` | `int`  | `2`                        | Line thickness for bounding boxes.                    |
| `show`       | `bool` | `False`                    | Flag to control whether to display the video stream.  |
## FAQ
@ -107,7 +116,7 @@ Follow these steps to run object counting in Ultralytics YOLOv8:
python yolov8_region_counter.py --source "path/to/video.mp4" --save-img
```

For more options, visit the [Run Region Counting](https://github.com/ultralytics/ultralytics/blob/main/examples/YOLOv8-Region-Counter/readme.md) section.
### Why should I use Ultralytics YOLOv8 for object counting in regions?
@ -121,7 +130,7 @@ Explore deeper benefits in the [Advantages](#advantages-of-object-counting-in-re
### Can the defined regions be adjusted during video playback?

Yes, with Ultralytics YOLOv8, regions can be interactively moved during video playback. Simply click and drag with the left mouse button to reposition the region. This feature enhances flexibility for dynamic environments. Learn more in the tip section for [movable regions](https://github.com/ultralytics/ultralytics/blob/33cdaa5782efb2bc2b5ede945771ba647882830d/examples/YOLOv8-Region-Counter/yolov8_region_counter.py#L39).

### What are some real-world applications of object counting in regions?

@ -50,7 +50,7 @@ keywords: Ultralytics YOLO11, speed estimation, object tracking, computer vision
yolo solutions speed source="path/to/video/file.mp4"

# Pass region coordinates
yolo solutions speed region=[(20, 400), (1080, 400), (1080, 360), (20, 360)]
```

=== "Python"
@ -61,16 +61,24 @@ keywords: Ultralytics YOLO11, speed estimation, object tracking, computer vision
from ultralytics import solutions

cap = cv2.VideoCapture("Path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

# Video writer
video_writer = cv2.VideoWriter("speed_management.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

# Define speed region points
speed_region = [(20, 400), (1080, 400), (1080, 360), (20, 360)]

speed = solutions.SpeedEstimator(
    show=True,  # Display the output
    model="yolo11n-pose.pt",  # Path to the YOLO11 model file.
    region=speed_region,  # Pass region points
    # classes=[0, 2],  # If you want to estimate speed of specific classes.
    # line_width=2,  # Adjust the line width for bounding boxes and text display
)

# Process video
while cap.isOpened():
    success, im0 = cap.read()

@ -40,6 +40,12 @@ Streamlit makes it simple to build and deploy interactive web applications. Comb
!!! example "Streamlit Application" !!! example "Streamlit Application"
=== "CLI"
```bash
yolo streamlit-predict
```
=== "Python" === "Python"
```python ```python
@ -50,12 +56,6 @@ Streamlit makes it simple to build and deploy interactive web applications. Comb
### Make sure to run the file using command `streamlit run <file-name.py>`
```
=== "CLI"
```bash
yolo streamlit-predict
```
This will launch the Streamlit application in your default web browser. You will see the main title, subtitle, and the sidebar with configuration options. Select your desired YOLO11 model, set the confidence and NMS thresholds, and click the "Start" button to begin the real-time object detection.

You can optionally supply a specific model in Python:
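For example, a minimal sketch of what that can look like, assuming the `solutions.inference` helper shown in the Python tab accepts a `model` path (the weights path below is a placeholder):

```python
from ultralytics import solutions

# Placeholder weights path; swap in your own trained model file.
solutions.inference(model="path/to/model.pt")

# Save this as e.g. app.py and launch it with: streamlit run app.py
```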

@ -60,40 +60,18 @@ Monitoring workouts through pose estimation with [Ultralytics YOLO11](https://gi
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
# Video writer
video_writer = cv2.VideoWriter("workouts.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

# Init AIGym
gym = solutions.AIGym(
    show=True,  # Display the frame
    kpts=[6, 8, 10],  # keypoints index of person for monitoring specific exercise, by default it's for pushup
    model="yolo11n-pose.pt",  # Path to the YOLO11 pose estimation model file
    # line_width=2,  # Adjust the line width for bounding boxes and text display
)

# Process video
while cap.isOpened():
    success, im0 = cap.read()
    if not success:

@ -23,11 +23,11 @@ Here's a brief description of our CI actions:
Below is the table showing the status of these CI tests for our main repositories:
| Repository | CI | Docker Deployment | Broken Links | CodeQL | PyPI and Docs Publishing |
| ---------- | -- | ----------------- | ------------ | ------ | ------------------------ |
| [yolov3](https://github.com/ultralytics/yolov3) | [![YOLOv3 CI](https://github.com/ultralytics/yolov3/actions/workflows/ci-testing.yml/badge.svg)](https://github.com/ultralytics/yolov3/actions/workflows/ci-testing.yml) | [![Publish Docker Images](https://github.com/ultralytics/yolov3/actions/workflows/docker.yml/badge.svg)](https://github.com/ultralytics/yolov3/actions/workflows/docker.yml) | [![Check Broken links](https://github.com/ultralytics/yolov3/actions/workflows/links.yml/badge.svg)](https://github.com/ultralytics/yolov3/actions/workflows/links.yml) | [![CodeQL](https://github.com/ultralytics/yolov3/actions/workflows/github-code-scanning/codeql/badge.svg)](https://github.com/ultralytics/yolov3/actions/workflows/github-code-scanning/codeql) | |
| [yolov5](https://github.com/ultralytics/yolov5) | [![YOLOv5 CI](https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml/badge.svg)](https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml) | [![Publish Docker Images](https://github.com/ultralytics/yolov5/actions/workflows/docker.yml/badge.svg)](https://github.com/ultralytics/yolov5/actions/workflows/docker.yml) | [![Check Broken links](https://github.com/ultralytics/yolov5/actions/workflows/links.yml/badge.svg)](https://github.com/ultralytics/yolov5/actions/workflows/links.yml) | [![CodeQL](https://github.com/ultralytics/yolov5/actions/workflows/github-code-scanning/codeql/badge.svg)](https://github.com/ultralytics/yolov5/actions/workflows/github-code-scanning/codeql) | |
| [ultralytics](https://github.com/ultralytics/ultralytics) | [![ultralytics CI](https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml/badge.svg)](https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml) | [![Publish Docker Images](https://github.com/ultralytics/ultralytics/actions/workflows/docker.yaml/badge.svg)](https://github.com/ultralytics/ultralytics/actions/workflows/docker.yaml) | [![Check Broken links](https://github.com/ultralytics/ultralytics/actions/workflows/links.yml/badge.svg)](https://github.com/ultralytics/ultralytics/actions/workflows/links.yml) | [![CodeQL](https://github.com/ultralytics/ultralytics/actions/workflows/github-code-scanning/codeql/badge.svg)](https://github.com/ultralytics/ultralytics/actions/workflows/github-code-scanning/codeql) | [![Publish to PyPI and Deploy Docs](https://github.com/ultralytics/ultralytics/actions/workflows/publish.yml/badge.svg)](https://github.com/ultralytics/ultralytics/actions/workflows/publish.yml) |
| [hub-sdk](https://github.com/ultralytics/hub-sdk) | [![HUB-SDK CI](https://github.com/ultralytics/hub-sdk/actions/workflows/ci.yml/badge.svg)](https://github.com/ultralytics/hub-sdk/actions/workflows/ci.yml) | | [![Check Broken links](https://github.com/ultralytics/hub-sdk/actions/workflows/links.yml/badge.svg)](https://github.com/ultralytics/hub-sdk/actions/workflows/links.yml) | [![CodeQL](https://github.com/ultralytics/hub-sdk/actions/workflows/github-code-scanning/codeql/badge.svg)](https://github.com/ultralytics/hub-sdk/actions/workflows/github-code-scanning/codeql) | [![Publish to PyPI](https://github.com/ultralytics/hub-sdk/actions/workflows/publish.yml/badge.svg)](https://github.com/ultralytics/hub-sdk/actions/workflows/publish.yml) |
| [hub](https://github.com/ultralytics/hub) | [![HUB CI](https://github.com/ultralytics/hub/actions/workflows/ci.yaml/badge.svg)](https://github.com/ultralytics/hub/actions/workflows/ci.yaml) | | [![Check Broken links](https://github.com/ultralytics/hub/actions/workflows/links.yml/badge.svg)](https://github.com/ultralytics/hub/actions/workflows/links.yml) | | |
| [mkdocs](https://github.com/ultralytics/mkdocs) | [![Ultralytics Actions](https://github.com/ultralytics/mkdocs/actions/workflows/format.yml/badge.svg)](https://github.com/ultralytics/mkdocs/actions/workflows/format.yml) | | | [![CodeQL](https://github.com/ultralytics/mkdocs/actions/workflows/github-code-scanning/codeql/badge.svg)](https://github.com/ultralytics/mkdocs/actions/workflows/github-code-scanning/codeql) | [![Publish to PyPI](https://github.com/ultralytics/mkdocs/actions/workflows/publish.yml/badge.svg)](https://github.com/ultralytics/mkdocs/actions/workflows/publish.yml) |
| [thop](https://github.com/ultralytics/thop) | [![Ultralytics Actions](https://github.com/ultralytics/thop/actions/workflows/format.yml/badge.svg)](https://github.com/ultralytics/thop/actions/workflows/format.yml) | | | [![CodeQL](https://github.com/ultralytics/thop/actions/workflows/github-code-scanning/codeql/badge.svg)](https://github.com/ultralytics/thop/actions/workflows/github-code-scanning/codeql) | [![Publish to PyPI](https://github.com/ultralytics/thop/actions/workflows/publish.yml/badge.svg)](https://github.com/ultralytics/thop/actions/workflows/publish.yml) |

@ -153,6 +153,7 @@ Ultralytics collects three primary types of data using Google Analytics:
- **Usage Metrics**: These include how often and in what ways the YOLO Python package is used, preferred features, and typical command-line arguments.
- **System Information**: General non-identifiable information about the computing environments where the package is run.
- **Performance Data**: Metrics related to the performance of models during training, validation, and inference.

This data helps us enhance user experience and optimize software performance. Learn more in the [Anonymized Google Analytics](#anonymized-google-analytics) section.

### How can I disable data collection in the Ultralytics YOLO package?

@ -17,7 +17,7 @@ We utilize [Snyk](https://snyk.io/advisor/python/ultralytics) to conduct compreh
Our security strategy includes GitHub's [CodeQL](https://docs.github.com/en/code-security/code-scanning/introduction-to-code-scanning/about-code-scanning-with-codeql) scanning. CodeQL delves deep into our codebase, identifying complex vulnerabilities like SQL injection and XSS by analyzing the code's semantic structure. This advanced level of analysis ensures early detection and resolution of potential security risks.

[![CodeQL](https://github.com/ultralytics/ultralytics/actions/workflows/github-code-scanning/codeql/badge.svg)](https://github.com/ultralytics/ultralytics/actions/workflows/github-code-scanning/codeql)

## GitHub Dependabot Alerts

@ -1,7 +1,7 @@
---
comments: true
description: Explore Ultralytics HUB for easy training, analysis, preview, deployment and sharing of custom vision AI models using YOLO11. Start training today!
keywords: Ultralytics HUB, YOLO11, custom AI models, model training, model deployment, model analysis, vision AI
---

# Ultralytics HUB Models
@ -66,7 +66,7 @@ In this step, you have to choose the project in which you want to create your mo
!!! info

You can read more about the available [YOLO models](https://docs.ultralytics.com/models/) and architectures in our documentation.

By default, your model will use a pre-trained model (trained on the [COCO](https://docs.ultralytics.com/datasets/detect/coco/) dataset) to reduce training time. You can change this behavior and tweak your model's configuration by opening the **Advanced Model Configuration** accordion.

@ -20,7 +20,7 @@ keywords: Ultralytics, YOLO, YOLO11, object detection, image segmentation, deep
<br>
<br>
<a href="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml"><img src="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml/badge.svg" alt="Ultralytics CI"></a>
<a href="https://pepy.tech/projects/ultralytics"><img src="https://static.pepy.tech/badge/ultralytics" alt="Ultralytics Downloads"></a>
<a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="Ultralytics YOLO Citation"></a>
<a href="https://discord.com/invite/ultralytics"><img alt="Ultralytics Discord" src="https://img.shields.io/discord/1089800235347353640?logo=discord&logoColor=white&label=Discord&color=blue"></a>
<a href="https://community.ultralytics.com/"><img alt="Ultralytics Forums" src="https://img.shields.io/discourse/users?server=https%3A%2F%2Fcommunity.ultralytics.com&logo=discourse&label=Forums&color=blue"></a>

@ -158,3 +158,42 @@ If you are interested in learning more about Albumentations, check out the follo
In this guide, we explored the key aspects of Albumentations, a great Python library for image augmentation. We discussed its wide range of transformations, optimized performance, and how you can use it in your next YOLO11 project.

Also, if you'd like to know more about other Ultralytics YOLO11 integrations, visit our [integration guide page](../integrations/index.md). You'll find valuable resources and insights there.
## FAQ
### How can I integrate Albumentations with YOLO11 for improved data augmentation?
Albumentations integrates seamlessly with YOLO11 and applies automatically during training if you have the package installed. Here's how to get started:
```python
# Install required packages
# !pip install albumentations ultralytics
from ultralytics import YOLO
# Load and train model with automatic augmentations
model = YOLO("yolo11n.pt")
model.train(data="coco8.yaml", epochs=100)
```
The integration includes optimized augmentations like blur, median blur, grayscale conversion, and CLAHE with carefully tuned probabilities to enhance model performance.
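To see what transforms of this kind look like on their own, here is a small standalone sketch using the Albumentations API directly; the probabilities are illustrative placeholders rather than the exact values applied by Ultralytics during training:

```python
import albumentations as A
import numpy as np

# Illustrative pipeline with the transform types mentioned above; probabilities are placeholders.
transform = A.Compose(
    [
        A.Blur(p=0.01),
        A.MedianBlur(p=0.01),
        A.ToGray(p=0.01),
        A.CLAHE(p=0.01),
    ]
)

image = np.random.randint(0, 255, (640, 640, 3), dtype=np.uint8)  # stand-in for a real frame
augmented = transform(image=image)["image"]
print(augmented.shape)  # (640, 640, 3)
```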
### What are the key benefits of using Albumentations over other augmentation libraries?
Albumentations stands out for several reasons:
1. Performance: Built on OpenCV and NumPy with SIMD optimization for superior speed
2. Flexibility: Supports 70+ transformations across pixel-level, spatial-level, and mixing-level augmentations
3. Compatibility: Works seamlessly with popular frameworks like [PyTorch](../integrations/torchscript.md) and [TensorFlow](../integrations/tensorboard.md)
4. Reliability: Extensive test suite prevents silent data corruption
5. Ease of use: Single unified API for all augmentation types (see the sketch below)
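As a concrete illustration of that unified API, below is a hedged sketch of a detection-style pipeline that transforms an image and its YOLO-format bounding boxes together; the image, boxes, and labels are made-up placeholders:

```python
import albumentations as A
import numpy as np

# Spatial and color transforms applied to the image and its boxes together; values are illustrative.
transform = A.Compose(
    [A.HorizontalFlip(p=0.5), A.RandomBrightnessContrast(p=0.2)],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

image = np.random.randint(0, 255, (640, 640, 3), dtype=np.uint8)  # stand-in image
bboxes = [(0.5, 0.5, 0.2, 0.3)]  # placeholder YOLO-format box (cx, cy, w, h)
out = transform(image=image, bboxes=bboxes, class_labels=[0])
print(out["bboxes"], out["class_labels"])
```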
### What types of computer vision tasks can benefit from Albumentations augmentation?
Albumentations enhances various [computer vision tasks](../tasks/index.md) including:
- [Object Detection](../tasks/detect.md): Improves model robustness to lighting, scale, and orientation variations
- [Instance Segmentation](../tasks/segment.md): Enhances mask prediction accuracy through diverse transformations
- [Classification](../tasks/classify.md): Increases model generalization with color and geometric augmentations
- [Pose Estimation](../tasks/pose.md): Helps models adapt to different viewpoints and lighting conditions
The library's diverse augmentation options make it valuable for any vision task requiring robust model performance.

@ -61,6 +61,8 @@ Welcome to the Ultralytics Integrations page! This page provides an overview of
- [Albumentations](albumentations.md): Enhance your Ultralytics models with powerful image augmentations to improve model robustness and generalization.

- [SONY IMX500](sony-imx500.md): Optimize and deploy [Ultralytics YOLOv8](https://docs.ultralytics.com/models/yolov8/) models on Raspberry Pi AI Cameras with the IMX500 sensor for fast, low-power performance.

## Deployment Integrations

- [CoreML](coreml.md): CoreML, developed by [Apple](https://www.apple.com/), is a framework designed for efficiently integrating machine learning models into applications across iOS, macOS, watchOS, and tvOS, using Apple's hardware for effective and secure [model deployment](https://www.ultralytics.com/glossary/model-deployment).

@ -127,6 +127,7 @@ Kaggle offers unique features that make it an excellent choice:
- **Free Access to TPUs**: Speed up training with powerful TPUs without extra costs.
- **Comprehensive History**: Track changes over time with a detailed history of notebook commits.
- **Resource Availability**: Significant resources are provided for each notebook session, including 12 hours of execution time for CPU and GPU sessions.

For a comparison with Google Colab, refer to our [Google Colab guide](./google-colab.md).

### How can I revert to a previous version of my Kaggle notebook?

@ -106,6 +106,8 @@ In this example, we demonstrate how to use a custom search space for hyperparame
!!! example "Usage" !!! example "Usage"
```python ```python
from ray import tune
from ultralytics import YOLO from ultralytics import YOLO
# Define a YOLO model # Define a YOLO model

@ -0,0 +1,325 @@
---
comments: true
description: Learn to export Ultralytics YOLOv8 models to Sony's IMX500 format to optimize your models for efficient deployment.
keywords: Sony, IMX500, IMX 500, Atrios, MCT, model export, quantization, pruning, deep learning optimization, Raspberry Pi AI Camera, edge AI, PyTorch, IMX
---
# Sony IMX500 Export for Ultralytics YOLOv8
This guide covers exporting and deploying Ultralytics YOLOv8 models to Raspberry Pi AI Cameras that feature the Sony IMX500 sensor.
Deploying computer vision models on devices with limited computational power, such as [Raspberry Pi AI Camera](https://www.raspberrypi.com/products/ai-camera/), can be tricky. Using a model format optimized for faster performance makes a huge difference.
The IMX500 model format is designed to use minimal power while delivering fast performance for neural networks. It allows you to optimize your [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models for high-speed and low-power inferencing. In this guide, we'll walk you through exporting and deploying your models to the IMX500 format while making it easier for your models to perform well on the [Raspberry Pi AI Camera](https://www.raspberrypi.com/products/ai-camera/).
<p align="center">
<img width="100%" src="https://github.com/ultralytics/assets/releases/download/v8.3.0/ai-camera.avif" alt="Raspberry Pi AI Camera">
</p>
## Why Should You Export to IMX500
Sony's [IMX500 Intelligent Vision Sensor](https://developer.aitrios.sony-semicon.com/en/raspberrypi-ai-camera) is a game-changing piece of hardware in edge AI processing. It's the world's first intelligent vision sensor with on-chip AI capabilities. This sensor helps overcome many challenges in edge AI, including data processing bottlenecks, privacy concerns, and performance limitations.
While other sensors merely pass along images and frames, the IMX500 tells a whole story. It processes data directly on the sensor, allowing devices to generate insights in real-time.
## Sony's IMX500 Export for YOLOv8 Models
The IMX500 is designed to transform how devices handle data directly on the sensor, without needing to send it off to the cloud for processing.
The IMX500 works with quantized models. Quantization makes models smaller and faster without losing much [accuracy](https://www.ultralytics.com/glossary/accuracy). It is ideal for the limited resources of edge computing, allowing applications to respond quickly by reducing latency and enabling fast local data processing, without cloud dependency. Local processing also keeps user data private and secure since it's not sent to a remote server.
**IMX500 Key Features:**
- **Metadata Output:** Instead of transmitting images only, the IMX500 can output both image and metadata (inference result), and can output metadata only for minimizing data size, reducing bandwidth, and lowering costs.
- **Addresses Privacy Concerns:** By processing data on the device, the IMX500 addresses privacy concerns, ideal for human-centric applications like person counting and occupancy tracking.
- **Real-time Processing:** Fast, on-sensor processing supports real-time decisions, perfect for edge AI applications such as autonomous systems.
**Before You Begin:** For best results, ensure your YOLOv8 model is well-prepared for export by following our [Model Training Guide](https://docs.ultralytics.com/modes/train/), [Data Preparation Guide](https://docs.ultralytics.com/datasets/), and [Hyperparameter Tuning Guide](https://docs.ultralytics.com/guides/hyperparameter-tuning/).
## Usage Examples
Export an Ultralytics YOLOv8 model to IMX500 format and run inference with the exported model.
!!! note
Here we perform inference just to make sure the model works as expected. However, for deployment and inference on the Raspberry Pi AI Camera, please jump to [Using IMX500 Export in Deployment](#using-imx500-export-in-deployment) section.
!!! example
=== "Python"
```python
from ultralytics import YOLO
# Load a YOLOv8n PyTorch model
model = YOLO("yolov8n.pt")
# Export the model
model.export(format="imx") # exports with PTQ quantization by default
# Load the exported model
imx_model = YOLO("yolov8n_imx_model")
# Run inference
results = imx_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"
```bash
# Export a YOLOv8n PyTorch model to imx format with Post-Training Quantization (PTQ)
yolo export model=yolov8n.pt format=imx
# Run inference with the exported model
yolo predict model=yolov8n_imx_model source='https://ultralytics.com/images/bus.jpg'
```
The export process will create an ONNX model for quantization validation, along with a directory named `<model-name>_imx_model`. This directory will include the `packerOut.zip` file, which is essential for packaging the model for deployment on the IMX500 hardware. Additionally, the `<model-name>_imx_model` folder will contain a text file (`labels.txt`) listing all the labels associated with the model.
```bash
yolov8n_imx_model
├── dnnParams.xml
├── labels.txt
├── packerOut.zip
├── yolov8n_imx.onnx
├── yolov8n_imx500_model_MemoryReport.json
└── yolov8n_imx500_model.pbtxt
```
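As a quick sanity check before moving to the device, you can confirm the key files listed above are present; this is just an illustrative snippet using the directory name from the export example:

```python
from pathlib import Path

export_dir = Path("yolov8n_imx_model")  # directory created by the export step above
for required in ("packerOut.zip", "labels.txt"):
    print(required, "found" if (export_dir / required).exists() else "missing")
```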
## Arguments
When exporting a model to IMX500 format, you can specify various arguments:
| Key | Value | Description |
| -------- | ------ | -------------------------------------------------------- |
| `format` | `imx` | Format to export to (imx) |
| `int8` | `True` | Enable INT8 quantization for the model (default: `True`) |
| `imgsz` | `640` | Image size for the model input (default: `640`) |
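For example, these arguments can be passed together from Python, as in the sketch below (the values simply restate the defaults in the table):

```python
from ultralytics import YOLO

# Load a YOLOv8n PyTorch model and export with explicit IMX500 arguments (defaults shown).
model = YOLO("yolov8n.pt")
model.export(format="imx", int8=True, imgsz=640)
```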
## Using IMX500 Export in Deployment
After exporting the Ultralytics YOLOv8n model to IMX500 format, it can be deployed to the Raspberry Pi AI Camera for inference.
### Hardware Prerequisites
Make sure you have the below hardware:
1. Raspberry Pi 5 or Raspberry Pi 4 Model B
2. Raspberry Pi AI Camera
Connect the Raspberry Pi AI Camera to the 15-pin MIPI CSI connector on the Raspberry Pi, then power on the Raspberry Pi.
### Software Prerequisites
!!! note
This guide has been tested with Raspberry Pi OS Bookworm running on a Raspberry Pi 5.
Step 1: Open a terminal window and execute the following commands to update the Raspberry Pi software to the latest version.
```bash
sudo apt update && sudo apt full-upgrade
```
Step 2: Install the IMX500 firmware, which is required to operate the IMX500 sensor, along with the packager tool.
```bash
sudo apt install imx500-all imx500-tools
```
Step 3: Install the prerequisites to run the `picamera2` application. We will use this application later in the deployment process.
```bash
sudo apt install python3-opencv python3-munkres
```
Step 4: Reboot the Raspberry Pi for the changes to take effect.
```bash
sudo reboot
```
### Package Model and Deploy to AI Camera
After obtaining `packerOut.zip` from the IMX500 conversion process, you can pass this file into the packager tool to obtain an RPK file. This file can then be deployed directly to the AI Camera using `picamera2`.
Step 1: Package the model into an RPK file.
```bash
imx500-package -i <path to packerOut.zip> -o <output folder>
```
The above will generate a `network.rpk` file inside the specified output folder.
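For example, using the default export folder from earlier (the paths below are illustrative; adjust them to your setup):
```bash
# Package the exported model into an RPK file (illustrative paths)
imx500-package -i yolov8n_imx_model/packerOut.zip -o yolov8n_imx500_output
```
The resulting `yolov8n_imx500_output/network.rpk` is the file you pass to the `picamera2` demo in Step 3 below.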
Step 2: Clone the `picamera2` repository, install it, and navigate to the imx500 examples.
```bash
git clone -b next https://github.com/raspberrypi/picamera2
cd picamera2
pip install -e . --break-system-packages
cd examples/imx500
```
Step 3: Run YOLOv8 object detection using the `labels.txt` file generated during the IMX500 export.
```bash
python imx500_object_detection_demo.py --model <path to network.rpk> --fps 25 --bbox-normalization --ignore-dash-labels --bbox-order xy --labels <path to labels.txt>
```
You will then see live inference output as follows:
<p align="center">
<img width="100%" src="https://github.com/ultralytics/assets/releases/download/v8.3.0/imx500-inference-rpi.avif" alt="Inference on Raspberry Pi AI Camera">
</p>
## Benchmarks
The YOLOv8 benchmarks below were run by the Ultralytics team on the Raspberry Pi AI Camera with the `imx` model format, measuring speed and accuracy.
| Model | Format | Status | Size (MB) | mAP50-95(B) | Inference time (ms/im) |
| ------- | ------ | ------ | --------- | ----------- | ---------------------- |
| YOLOv8n | imx | ✅ | 2.9 | 0.522 | 66.66 |
!!! note
Validation for the above benchmark was done using the COCO8 dataset.
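To put the quantized result in context, you can compute a floating-point baseline for the same model on the same dataset with the standard validation API; a minimal sketch:
```python
from ultralytics import YOLO

# Float (non-quantized) YOLOv8n baseline on COCO8 for comparison with the table above
model = YOLO("yolov8n.pt")
metrics = model.val(data="coco8.yaml", imgsz=640)
print(f"mAP50-95: {metrics.box.map:.3f}")
```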
## What's Under the Hood?
<p align="center">
<img width="640" src="https://github.com/ultralytics/assets/releases/download/v8.3.0/imx500-deploy.avif" alt="IMX500 deployment">
</p>
### Sony Model Compression Toolkit (MCT)
[Sony's Model Compression Toolkit (MCT)](https://github.com/sony/model_optimization) is a powerful tool for optimizing deep learning models through quantization and pruning. It supports various quantization methods and provides advanced algorithms to reduce model size and computational complexity without significantly sacrificing accuracy. MCT is particularly useful for deploying models on resource-constrained devices, ensuring efficient inference and reduced latency.
### Supported Features of MCT
Sony's MCT offers a range of features designed to optimize neural network models:
1. **Graph Optimizations**: Transforms models into more efficient versions by folding layers like batch normalization into preceding layers.
2. **Quantization Parameter Search**: Minimizes quantization noise using metrics like Mean-Square-Error, No-Clipping, and Mean-Average-Error.
3. **Advanced Quantization Algorithms**:
- **Shift Negative Correction**: Addresses performance issues from symmetric activation quantization.
- **Outliers Filtering**: Uses z-score to detect and remove outliers.
- **Clustering**: Utilizes non-uniform quantization grids for better distribution matching.
- **Mixed-Precision Search**: Assigns different quantization bit-widths per layer based on sensitivity.
4. **Visualization**: Use TensorBoard to observe model performance insights, quantization phases, and bit-width configurations.
#### Quantization
MCT supports several quantization methods to reduce model size and improve inference speed; a short standalone PTQ sketch follows the lists below:
1. **Post-Training Quantization (PTQ)**:
- Available via Keras and PyTorch APIs.
- Complexity: Low
- Computational Cost: Low (CPU minutes)
2. **Gradient-based Post-Training Quantization (GPTQ)**:
- Available via Keras and PyTorch APIs.
- Complexity: Medium
- Computational Cost: Moderate (2-3 GPU hours)
3. **Quantization-Aware Training (QAT)**:
- Complexity: High
- Computational Cost: High (12-36 GPU hours)
MCT also supports various quantization schemes for weights and activations:
1. Power-of-Two (hardware-friendly)
2. Symmetric
3. Uniform
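Since the Ultralytics `imx` export applies MCT's PTQ path under the hood, the sketch below shows roughly what a standalone PTQ call looks like. The `mct.ptq.pytorch_post_training_quantization` entry point and its calling convention are taken from MCT's published examples and should be treated as assumptions; check the MCT documentation for your installed version.
```python
import model_compression_toolkit as mct  # Sony MCT; API path is an assumption, verify for your version
import torch
import torch.nn as nn

# Tiny stand-in network; in practice this would be your float PyTorch model
float_model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 8, 3, padding=1))


def representative_data_gen():
    # Yield a handful of calibration batches (random tensors here; use real preprocessed images in practice)
    for _ in range(10):
        yield [torch.rand(1, 3, 640, 640)]


quantized_model, quantization_info = mct.ptq.pytorch_post_training_quantization(
    float_model,
    representative_data_gen,
)
```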
#### Structured Pruning
MCT introduces structured, hardware-aware model pruning designed for specific hardware architectures. The technique leverages the target platform's Single Instruction, Multiple Data (SIMD) capabilities by pruning SIMD groups, reducing model size and complexity while optimizing channel utilization so that the weights memory footprint is used efficiently on the target SIMD architecture. Available via Keras and PyTorch APIs.
### IMX500 Converter Tool (Compiler)
The IMX500 Converter Tool is integral to the IMX500 toolset, allowing the compilation of models for deployment on Sony's IMX500 sensor (for instance, Raspberry Pi AI Cameras). It takes quantized Ultralytics YOLOv8 models and compiles them so that they are compatible with, and perform efficiently on, the target hardware. Following model quantization, the export procedure generates binary files that encapsulate essential data and device-specific configurations, streamlining deployment on the Raspberry Pi AI Camera.
## Real-World Use Cases
Export to IMX500 format has wide applicability across industries. Here are some examples:
- **Edge AI and IoT**: Enable object detection on drones or security cameras, where real-time processing on low-power devices is essential.
- **Wearable Devices**: Deploy models optimized for small-scale AI processing on health-monitoring wearables.
- **Smart Cities**: Use IMX500-exported YOLOv8 models for traffic monitoring and safety analysis with faster processing and minimal latency.
- **Retail Analytics**: Enhance in-store monitoring by deploying optimized models in point-of-sale systems or smart shelves.
## Conclusion
Exporting Ultralytics YOLOv8 models to Sony's IMX500 format allows you to deploy your models for efficient inference on IMX500-based cameras. By leveraging advanced quantization techniques, you can reduce model size and improve inference speed without significantly compromising accuracy.
For more information and detailed guidelines, refer to Sony's [IMX500 website](https://developer.aitrios.sony-semicon.com/en/raspberrypi-ai-camera).
## FAQ
### How do I export a YOLOv8 model to IMX500 format for Raspberry Pi AI Camera?
To export a YOLOv8 model to IMX500 format, use either the Python API or CLI command:
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
model.export(format="imx") # Exports with PTQ quantization by default
```
The export process will create a directory containing the necessary files for deployment, including `packerOut.zip`, which can be used with the IMX500 packager tool on the Raspberry Pi.
### What are the key benefits of using the IMX500 format for edge AI deployment?
The IMX500 format offers several important advantages for edge deployment:
- On-chip AI processing reduces latency and power consumption
- Outputs both image and metadata (inference result) instead of images only
- Enhanced privacy by processing data locally without cloud dependency
- Real-time processing capabilities ideal for time-sensitive applications
- Optimized quantization for efficient model deployment on resource-constrained devices
### What hardware and software prerequisites are needed for IMX500 deployment?
For deploying IMX500 models, you'll need:
Hardware:
- Raspberry Pi 5 or Raspberry Pi 4 Model B
- Raspberry Pi AI Camera with IMX500 sensor
Software:
- Raspberry Pi OS Bookworm
- IMX500 firmware and tools (`sudo apt install imx500-all imx500-tools`)
- Python packages for `picamera2` (`sudo apt install python3-opencv python3-munkres`)
### What performance can I expect from YOLOv8 models on the IMX500?
Based on Ultralytics benchmarks on Raspberry Pi AI Camera:
- YOLOv8n achieves an inference time of 66.66 ms per image
- mAP50-95 of 0.522 on the COCO8 dataset
- Model size of only 2.9 MB after quantization
This demonstrates that the IMX500 format provides efficient real-time inference while maintaining good accuracy for edge AI applications.
### How do I package and deploy my exported model to the Raspberry Pi AI Camera?
After exporting to IMX500 format:
1. Use the packager tool to create an RPK file:
```bash
imx500-package -i <path to packerOut.zip> -o <output folder>
```
2. Clone and install picamera2:
```bash
git clone -b next https://github.com/raspberrypi/picamera2
cd picamera2 && pip install -e . --break-system-packages
```
3. Run inference using the generated RPK file:
```bash
python imx500_object_detection_demo.py --model <path to network.rpk> --fps 25 --bbox-normalization --labels <path to labels.txt>
```

@ -127,11 +127,11 @@ The arguments provided when using [export](../modes/export.md) for an Ultralytic
- Adjust the `workspace` value according to your calibration needs and resource availability. While a larger `workspace` may increase calibration time, it allows TensorRT to explore a wider range of optimization tactics, potentially enhancing model performance and [accuracy](https://www.ultralytics.com/glossary/accuracy). Conversely, a smaller `workspace` can reduce calibration time but may limit the optimization strategies, affecting the quality of the quantized model. - Adjust the `workspace` value according to your calibration needs and resource availability. While a larger `workspace` may increase calibration time, it allows TensorRT to explore a wider range of optimization tactics, potentially enhancing model performance and [accuracy](https://www.ultralytics.com/glossary/accuracy). Conversely, a smaller `workspace` can reduce calibration time but may limit the optimization strategies, affecting the quality of the quantized model.
- Default is `workspace=4` (GiB), this value may need to be increased if calibration crashes (exits without warning). - Default is `workspace=None`, which will allow for TensorRT to automatically allocate memory, when configuring manually, this value may need to be increased if calibration crashes (exits without warning).
- TensorRT will report `UNSUPPORTED_STATE` during export if the value for `workspace` is larger than the memory available to the device, which means the value for `workspace` should be lowered. - TensorRT will report `UNSUPPORTED_STATE` during export if the value for `workspace` is larger than the memory available to the device, which means the value for `workspace` should be lowered or set to `None`.
- If `workspace` is set to max value and calibration fails/crashes, consider reducing the values for `imgsz` and `batch` to reduce memory requirements. - If `workspace` is set to max value and calibration fails/crashes, consider using `None` for auto-allocation or by reducing the values for `imgsz` and `batch` to reduce memory requirements.
- <u><b>Remember</b> calibration for INT8 is specific to each device</u>, borrowing a "high-end" GPU for calibration, might result in poor performance when inference is run on another device. - <u><b>Remember</b> calibration for INT8 is specific to each device</u>, borrowing a "high-end" GPU for calibration, might result in poor performance when inference is run on another device.

@ -1,5 +1,5 @@
| Argument | Type | Default | Description | | Argument | Type | Default | Description |
| ----------- | ---------------- | --------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | ----------- | ----------------- | --------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `format` | `str` | `'torchscript'` | Target format for the exported model, such as `'onnx'`, `'torchscript'`, `'tensorflow'`, or others, defining compatibility with various deployment environments. | | `format` | `str` | `'torchscript'` | Target format for the exported model, such as `'onnx'`, `'torchscript'`, `'tensorflow'`, or others, defining compatibility with various deployment environments. |
| `imgsz` | `int` or `tuple` | `640` | Desired image size for the model input. Can be an integer for square images or a tuple `(height, width)` for specific dimensions. | | `imgsz` | `int` or `tuple` | `640` | Desired image size for the model input. Can be an integer for square images or a tuple `(height, width)` for specific dimensions. |
| `keras` | `bool` | `False` | Enables export to Keras format for [TensorFlow](https://www.ultralytics.com/glossary/tensorflow) SavedModel, providing compatibility with TensorFlow serving and APIs. | | `keras` | `bool` | `False` | Enables export to Keras format for [TensorFlow](https://www.ultralytics.com/glossary/tensorflow) SavedModel, providing compatibility with TensorFlow serving and APIs. |
@ -9,7 +9,7 @@
| `dynamic` | `bool` | `False` | Allows dynamic input sizes for ONNX, TensorRT and OpenVINO exports, enhancing flexibility in handling varying image dimensions. | | `dynamic` | `bool` | `False` | Allows dynamic input sizes for ONNX, TensorRT and OpenVINO exports, enhancing flexibility in handling varying image dimensions. |
| `simplify` | `bool` | `True` | Simplifies the model graph for ONNX exports with `onnxslim`, potentially improving performance and compatibility. | | `simplify` | `bool` | `True` | Simplifies the model graph for ONNX exports with `onnxslim`, potentially improving performance and compatibility. |
| `opset` | `int` | `None` | Specifies the ONNX opset version for compatibility with different ONNX parsers and runtimes. If not set, uses the latest supported version. | | `opset` | `int` | `None` | Specifies the ONNX opset version for compatibility with different ONNX parsers and runtimes. If not set, uses the latest supported version. |
| `workspace` | `float` | `4.0` | Sets the maximum workspace size in GiB for TensorRT optimizations, balancing memory usage and performance. | | `workspace` | `float` or `None` | `None` | Sets the maximum workspace size in GiB for TensorRT optimizations, balancing memory usage and performance; use `None` for auto-allocation by TensorRT up to device maximum. |
| `nms` | `bool` | `False` | Adds Non-Maximum Suppression (NMS) to the CoreML export, essential for accurate and efficient detection post-processing. | | `nms` | `bool` | `False` | Adds Non-Maximum Suppression (NMS) to the CoreML export, essential for accurate and efficient detection post-processing. |
| `batch` | `int` | `1` | Specifies export model batch inference size or the max number of images the exported model will process concurrently in `predict` mode. | | `batch` | `int` | `1` | Specifies export model batch inference size or the max number of images the exported model will process concurrently in `predict` mode. |
| `device` | `str` | `None` | Specifies the device for exporting: GPU (`device=0`), CPU (`device=cpu`), MPS for Apple silicon (`device=mps`) or DLA for NVIDIA Jetson (`device=dla:0` or `device=dla:1`). | | `device` | `str` | `None` | Specifies the device for exporting: GPU (`device=0`), CPU (`device=cpu`), MPS for Apple silicon (`device=mps`) or DLA for NVIDIA Jetson (`device=dla:0` or `device=dla:1`). |

@ -14,3 +14,4 @@
| [PaddlePaddle](../integrations/paddlepaddle.md) | `paddle` | `{{ model_name or "yolo11n" }}_paddle_model/` | ✅ | `imgsz`, `batch` | | [PaddlePaddle](../integrations/paddlepaddle.md) | `paddle` | `{{ model_name or "yolo11n" }}_paddle_model/` | ✅ | `imgsz`, `batch` |
| [MNN](../integrations/mnn.md) | `mnn` | `{{ model_name or "yolo11n" }}.mnn` | ✅ | `imgsz`, `batch`, `int8`, `half` | | [MNN](../integrations/mnn.md) | `mnn` | `{{ model_name or "yolo11n" }}.mnn` | ✅ | `imgsz`, `batch`, `int8`, `half` |
| [NCNN](../integrations/ncnn.md) | `ncnn` | `{{ model_name or "yolo11n" }}_ncnn_model/` | ✅ | `imgsz`, `half`, `batch` | | [NCNN](../integrations/ncnn.md) | `ncnn` | `{{ model_name or "yolo11n" }}_ncnn_model/` | ✅ | `imgsz`, `half`, `batch` |
| [IMX500](../integrations/sony-imx500.md) | `imx` | `{{ model_name or "yolov8n" }}_imx_model/` | ✅ | `imgsz`, `int8` |

@ -13,7 +13,7 @@
| `augment` | `bool` | `False` | Enables test-time augmentation (TTA) for predictions, potentially improving detection robustness at the cost of inference speed. | | `augment` | `bool` | `False` | Enables test-time augmentation (TTA) for predictions, potentially improving detection robustness at the cost of inference speed. |
| `agnostic_nms` | `bool` | `False` | Enables class-agnostic Non-Maximum Suppression (NMS), which merges overlapping boxes of different classes. Useful in multi-class detection scenarios where class overlap is common. | | `agnostic_nms` | `bool` | `False` | Enables class-agnostic Non-Maximum Suppression (NMS), which merges overlapping boxes of different classes. Useful in multi-class detection scenarios where class overlap is common. |
| `classes` | `list[int]` | `None` | Filters predictions to a set of class IDs. Only detections belonging to the specified classes will be returned. Useful for focusing on relevant objects in multi-class detection tasks. | | `classes` | `list[int]` | `None` | Filters predictions to a set of class IDs. Only detections belonging to the specified classes will be returned. Useful for focusing on relevant objects in multi-class detection tasks. |
| `retina_masks` | `bool` | `False` | Uses high-resolution segmentation masks if available in the model. This can enhance mask quality for segmentation tasks, providing finer detail. | | `retina_masks` | `bool` | `False` | Returns high-resolution segmentation masks. The returned masks (`masks.data`) will match the original image size if enabled. If disabled, they have the image size used during inference. |
| `embed` | `list[int]` | `None` | Specifies the layers from which to extract feature vectors or [embeddings](https://www.ultralytics.com/glossary/embeddings). Useful for downstream tasks like clustering or similarity search. | | `embed` | `list[int]` | `None` | Specifies the layers from which to extract feature vectors or [embeddings](https://www.ultralytics.com/glossary/embeddings). Useful for downstream tasks like clustering or similarity search. |
| `project` | `str` | `None` | Name of the project directory where prediction outputs are saved if `save` is enabled. | | `project` | `str` | `None` | Name of the project directory where prediction outputs are saved if `save` is enabled. |
| `name` | `str` | `None` | Name of the prediction run. Used for creating a subdirectory within the project folder, where prediction outputs are stored if `save` is enabled. | | `name` | `str` | `None` | Name of the prediction run. Used for creating a subdirectory within the project folder, where prediction outputs are stored if `save` is enabled. |

@ -17,7 +17,6 @@
| `exist_ok` | `False` | If True, allows overwriting of an existing project/name directory. Useful for iterative experimentation without needing to manually clear previous outputs. | | `exist_ok` | `False` | If True, allows overwriting of an existing project/name directory. Useful for iterative experimentation without needing to manually clear previous outputs. |
| `pretrained` | `True` | Determines whether to start training from a pretrained model. Can be a boolean value or a string path to a specific model from which to load weights. Enhances training efficiency and model performance. | | `pretrained` | `True` | Determines whether to start training from a pretrained model. Can be a boolean value or a string path to a specific model from which to load weights. Enhances training efficiency and model performance. |
| `optimizer` | `'auto'` | Choice of optimizer for training. Options include `SGD`, `Adam`, `AdamW`, `NAdam`, `RAdam`, `RMSProp` etc., or `auto` for automatic selection based on model configuration. Affects convergence speed and stability. | | `optimizer` | `'auto'` | Choice of optimizer for training. Options include `SGD`, `Adam`, `AdamW`, `NAdam`, `RAdam`, `RMSProp` etc., or `auto` for automatic selection based on model configuration. Affects convergence speed and stability. |
| `verbose` | `False` | Enables verbose output during training, providing detailed logs and progress updates. Useful for debugging and closely monitoring the training process. |
| `seed` | `0` | Sets the random seed for training, ensuring reproducibility of results across runs with the same configurations. | | `seed` | `0` | Sets the random seed for training, ensuring reproducibility of results across runs with the same configurations. |
| `deterministic` | `True` | Forces deterministic algorithm use, ensuring reproducibility but may affect performance and speed due to the restriction on non-deterministic algorithms. | | `deterministic` | `True` | Forces deterministic algorithm use, ensuring reproducibility but may affect performance and speed due to the restriction on non-deterministic algorithms. |
| `single_cls` | `False` | Treats all classes in multi-class datasets as a single class during training. Useful for binary classification tasks or when focusing on object presence rather than classification. | | `single_cls` | `False` | Treats all classes in multi-class datasets as a single class during training. Useful for binary classification tasks or when focusing on object presence rather than classification. |
@ -41,7 +40,6 @@
| `dfl` | `1.5` | Weight of the distribution focal loss, used in certain YOLO versions for fine-grained classification. | | `dfl` | `1.5` | Weight of the distribution focal loss, used in certain YOLO versions for fine-grained classification. |
| `pose` | `12.0` | Weight of the pose loss in models trained for pose estimation, influencing the emphasis on accurately predicting pose keypoints. | | `pose` | `12.0` | Weight of the pose loss in models trained for pose estimation, influencing the emphasis on accurately predicting pose keypoints. |
| `kobj` | `2.0` | Weight of the keypoint objectness loss in pose estimation models, balancing detection confidence with pose accuracy. | | `kobj` | `2.0` | Weight of the keypoint objectness loss in pose estimation models, balancing detection confidence with pose accuracy. |
| `label_smoothing` | `0.0` | Applies label smoothing, softening hard labels to a mix of the target label and a uniform distribution over labels, can improve generalization. |
| `nbs` | `64` | Nominal batch size for normalization of loss. | | `nbs` | `64` | Nominal batch size for normalization of loss. |
| `overlap_mask` | `True` | Determines whether object masks should be merged into a single mask for training, or kept separate for each object. In case of overlap, the smaller mask is overlayed on top of the larger mask during merge. | | `overlap_mask` | `True` | Determines whether object masks should be merged into a single mask for training, or kept separate for each object. In case of overlap, the smaller mask is overlayed on top of the larger mask during merge. |
| `mask_ratio` | `4` | Downsample ratio for segmentation masks, affecting the resolution of masks used during training. | | `mask_ratio` | `4` | Downsample ratio for segmentation masks, affecting the resolution of masks used during training. |

@ -12,7 +12,7 @@
| `device` | `str` | `None` | Specifies the device for validation (`cpu`, `cuda:0`, etc.). Allows flexibility in utilizing CPU or GPU resources. | | `device` | `str` | `None` | Specifies the device for validation (`cpu`, `cuda:0`, etc.). Allows flexibility in utilizing CPU or GPU resources. |
| `dnn` | `bool` | `False` | If `True`, uses the [OpenCV](https://www.ultralytics.com/glossary/opencv) DNN module for ONNX model inference, offering an alternative to [PyTorch](https://www.ultralytics.com/glossary/pytorch) inference methods. | | `dnn` | `bool` | `False` | If `True`, uses the [OpenCV](https://www.ultralytics.com/glossary/opencv) DNN module for ONNX model inference, offering an alternative to [PyTorch](https://www.ultralytics.com/glossary/pytorch) inference methods. |
| `plots` | `bool` | `False` | When set to `True`, generates and saves plots of predictions versus ground truth for visual evaluation of the model's performance. | | `plots` | `bool` | `False` | When set to `True`, generates and saves plots of predictions versus ground truth for visual evaluation of the model's performance. |
| `rect` | `bool` | `False` | If `True`, uses rectangular inference for batching, reducing padding and potentially increasing speed and efficiency. | | `rect` | `bool` | `True` | If `True`, uses rectangular inference for batching, reducing padding and potentially increasing speed and efficiency. |
| `split` | `str` | `val` | Determines the dataset split to use for validation (`val`, `test`, or `train`). Allows flexibility in choosing the data segment for performance evaluation. | | `split` | `str` | `val` | Determines the dataset split to use for validation (`val`, `test`, or `train`). Allows flexibility in choosing the data segment for performance evaluation. |
| `project` | `str` | `None` | Name of the project directory where validation outputs are saved. | | `project` | `str` | `None` | Name of the project directory where validation outputs are saved. |
| `name` | `str` | `None` | Name of the validation run. Used for creating a subdirectory within the project folder, where valdiation logs and outputs are stored. | | `name` | `str` | `None` | Name of the validation run. Used for creating a subdirectory within the project folder, where valdiation logs and outputs are stored. |

@ -194,6 +194,34 @@ SAM 2 can be utilized across a broad spectrum of tasks, including real-time vide
yolo predict model=sam2.1_b.pt source=path/to/video.mp4 yolo predict model=sam2.1_b.pt source=path/to/video.mp4
``` ```
#### Segment Video and Track objects
!!! example "Segment Video"
Segment the entire video content with specific prompts and track objects.
=== "Python"
```python
from ultralytics.models.sam import SAM2VideoPredictor
# Create SAM2VideoPredictor
overrides = dict(conf=0.25, task="segment", mode="predict", imgsz=1024, model="sam2_b.pt")
predictor = SAM2VideoPredictor(overrides=overrides)
# Run inference with single point
results = predictor(source="test.mp4", points=[920, 470], labels=1)
# Run inference with multiple points
results = predictor(source="test.mp4", points=[[920, 470], [909, 138]], labels=[1, 1])
# Run inference with multiple points prompt per object
results = predictor(source="test.mp4", points=[[[920, 470], [909, 138]]], labels=[[1, 1]])
# Run inference with negative points prompt
results = predictor(source="test.mp4", points=[[[920, 470], [909, 138]]], labels=[[1, 0]])
```
- This example demonstrates how SAM 2 can be used to segment the entire content of an image or video if no prompts (bboxes/points/masks) are provided. - This example demonstrates how SAM 2 can be used to segment the entire content of an image or video if no prompts (bboxes/points/masks) are provided.
## SAM 2 comparison vs YOLOv8 ## SAM 2 comparison vs YOLOv8

@ -149,6 +149,7 @@ YOLO-NAS introduces several key features that make it a superior choice for obje
- **Quantization-Friendly Basic Block:** Enhanced architecture that improves model performance with minimal [precision](https://www.ultralytics.com/glossary/precision) drop post quantization. - **Quantization-Friendly Basic Block:** Enhanced architecture that improves model performance with minimal [precision](https://www.ultralytics.com/glossary/precision) drop post quantization.
- **Sophisticated Training and Quantization:** Employs advanced training schemes and post-training quantization techniques. - **Sophisticated Training and Quantization:** Employs advanced training schemes and post-training quantization techniques.
- **AutoNAC Optimization and Pre-training:** Utilizes AutoNAC optimization and is pre-trained on prominent datasets like COCO, Objects365, and Roboflow 100. - **AutoNAC Optimization and Pre-training:** Utilizes AutoNAC optimization and is pre-trained on prominent datasets like COCO, Objects365, and Roboflow 100.
These features contribute to its high accuracy, efficient performance, and suitability for deployment in production environments. Learn more in the [Key Features](#key-features) section. These features contribute to its high accuracy, efficient performance, and suitability for deployment in production environments. Learn more in the [Key Features](#key-features) section.
### Which tasks and modes are supported by YOLO-NAS models? ### Which tasks and modes are supported by YOLO-NAS models?

@ -130,7 +130,7 @@ Note that the example below is for YOLO11 [Detect](../tasks/detect.md) models fo
!!! tip "Ultralytics YOLO11 Publication" !!! tip "Ultralytics YOLO11 Publication"
Ultralytics has not published a formal research paper for YOLO11 due to the rapidly evolving nature of the models. We focus on advancing the technology and making it easier to use, rather than producing static documentation. For the most up-to-date information on YOLO architecture, features, and usage, please refer to our [GitHub repository](https://github.com/ultralytics/ultralytics) and [documentation](https://docs.ultralytics.com). Ultralytics has not published a formal research paper for YOLO11 due to the rapidly evolving nature of the models. We focus on advancing the technology and making it easier to use, rather than producing static documentation. For the most up-to-date information on YOLO architecture, features, and usage, please refer to our [GitHub repository](https://github.com/ultralytics/ultralytics) and [documentation](https://docs.ultralytics.com/).
If you use YOLO11 or any other software from this repository in your work, please cite it using the following format: If you use YOLO11 or any other software from this repository in your work, please cite it using the following format:

@ -94,7 +94,7 @@ This example provides simple YOLOv5 training and inference examples. For full do
!!! tip "Ultralytics YOLOv5 Publication" !!! tip "Ultralytics YOLOv5 Publication"
Ultralytics has not published a formal research paper for YOLOv5 due to the rapidly evolving nature of the models. We focus on advancing the technology and making it easier to use, rather than producing static documentation. For the most up-to-date information on YOLO architecture, features, and usage, please refer to our [GitHub repository](https://github.com/ultralytics/ultralytics) and [documentation](https://docs.ultralytics.com). Ultralytics has not published a formal research paper for YOLOv5 due to the rapidly evolving nature of the models. We focus on advancing the technology and making it easier to use, rather than producing static documentation. For the most up-to-date information on YOLO architecture, features, and usage, please refer to our [GitHub repository](https://github.com/ultralytics/ultralytics) and [documentation](https://docs.ultralytics.com/).
If you use YOLOv5 or YOLOv5u in your research, please cite the Ultralytics YOLOv5 repository as follows: If you use YOLOv5 or YOLOv5u in your research, please cite the Ultralytics YOLOv5 repository as follows:

@ -151,4 +151,5 @@ YOLOv7 offers several key features that revolutionize real-time object detection
- **Dynamic Label Assignment**: Uses a coarse-to-fine lead guided method to assign dynamic targets for outputs across different branches, improving accuracy. - **Dynamic Label Assignment**: Uses a coarse-to-fine lead guided method to assign dynamic targets for outputs across different branches, improving accuracy.
- **Extended and Compound Scaling**: Efficiently utilizes parameters and computation to scale the model for various real-time applications. - **Extended and Compound Scaling**: Efficiently utilizes parameters and computation to scale the model for various real-time applications.
- **Efficiency**: Reduces parameter count by 40% and computation by 50% compared to other state-of-the-art models while achieving faster inference speeds. - **Efficiency**: Reduces parameter count by 40% and computation by 50% compared to other state-of-the-art models while achieving faster inference speeds.
For further details on these features, see the [YOLOv7 Overview](#overview) section. For further details on these features, see the [YOLOv7 Overview](#overview) section.

@ -167,7 +167,7 @@ Note the below example is for YOLOv8 [Detect](../tasks/detect.md) models for obj
!!! tip "Ultralytics YOLOv8 Publication" !!! tip "Ultralytics YOLOv8 Publication"
Ultralytics has not published a formal research paper for YOLOv8 due to the rapidly evolving nature of the models. We focus on advancing the technology and making it easier to use, rather than producing static documentation. For the most up-to-date information on YOLO architecture, features, and usage, please refer to our [GitHub repository](https://github.com/ultralytics/ultralytics) and [documentation](https://docs.ultralytics.com). Ultralytics has not published a formal research paper for YOLOv8 due to the rapidly evolving nature of the models. We focus on advancing the technology and making it easier to use, rather than producing static documentation. For the most up-to-date information on YOLO architecture, features, and usage, please refer to our [GitHub repository](https://github.com/ultralytics/ultralytics) and [documentation](https://docs.ultralytics.com/).
If you use the YOLOv8 model or any other software from this repository in your work, please cite it using the following format: If you use the YOLOv8 model or any other software from this repository in your work, please cite it using the following format:

@ -4,30 +4,41 @@ description: Learn how to evaluate your YOLO11 model's performance in real-world
keywords: model benchmarking, YOLO11, Ultralytics, performance evaluation, export formats, ONNX, TensorRT, OpenVINO, CoreML, TensorFlow, optimization, mAP50-95, inference time keywords: model benchmarking, YOLO11, Ultralytics, performance evaluation, export formats, ONNX, TensorRT, OpenVINO, CoreML, TensorFlow, optimization, mAP50-95, inference time
--- ---
<script>
const script = document.createElement('script');
script.src = "https://cdn.jsdelivr.net/npm/chart.js@3.9.1/dist/chart.min.js";
document.head.appendChild(script);
const anotherScript = document.createElement('script');
anotherScript.src = "../../javascript/benchmark.js";
document.head.appendChild(anotherScript);
</script>
# Model Benchmarking with Ultralytics YOLO # Model Benchmarking with Ultralytics YOLO
<img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/ultralytics-yolov8-ecosystem-integrations.avif" alt="Ultralytics YOLO ecosystem and integrations"> <img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/ultralytics-yolov8-ecosystem-integrations.avif" alt="Ultralytics YOLO ecosystem and integrations">
## Benchmark Visualization ## Benchmark Visualization
<script src="https://cdn.jsdelivr.net/npm/chart.js@3.9.1/dist/chart.min.js"></script>
!!! tip "Refresh Browser" !!! tip "Refresh Browser"
You may need to refresh the page to view the graphs correctly due to potential cookie issues. You may need to refresh the page to view the graphs correctly due to potential cookie issues.
<div style="display: flex; align-items: flex-start;"> <div style="display: flex; align-items: flex-start;">
<div style="margin-right: 20px;"> <div style="margin-right: 20px;">
<label><input type="checkbox" name="algorithm" value="YOLO11" checked><span>Ultralytics YOLO11</span></label><br> <label><input type="checkbox" name="algorithm" value="YOLO11" checked><span>YOLO11</span></label><br>
<label><input type="checkbox" name="algorithm" value="YOLOv6" checked><span>YOLOv6</span></label><br>
<label><input type="checkbox" name="algorithm" value="YOLOv7" checked><span>YOLOv7</span></label><br>
<label><input type="checkbox" name="algorithm" value="YOLOv10" checked><span>YOLOv10</span></label><br> <label><input type="checkbox" name="algorithm" value="YOLOv10" checked><span>YOLOv10</span></label><br>
<label><input type="checkbox" name="algorithm" value="YOLOv9" checked><span>YOLOv9</span></label><br> <label><input type="checkbox" name="algorithm" value="YOLOv9" checked><span>YOLOv9</span></label><br>
<label><input type="checkbox" name="algorithm" value="YOLOv8" checked><span>Ultralytics YOLOv8</span></label><br> <label><input type="checkbox" name="algorithm" value="YOLOv8" checked><span>YOLOv8</span></label><br>
<label><input type="checkbox" name="algorithm" value="PPYOLOE" checked><span>PPYOLOE</span></label><br> <label><input type="checkbox" name="algorithm" value="YOLOv7" checked><span>YOLOv7</span></label><br>
<label><input type="checkbox" name="algorithm" value="YOLOv5" checked><span>Ultralytics YOLOv5</span></label> <label><input type="checkbox" name="algorithm" value="YOLOv6-3.0" checked><span>YOLOv6-3.0</span></label><br>
<label><input type="checkbox" name="algorithm" value="YOLOv5" checked><span>YOLOv5</span></label><br>
<label><input type="checkbox" name="algorithm" value="PP-YOLOE+" checked><span>PP-YOLOE+</span></label><br>
<label><input type="checkbox" name="algorithm" value="DAMO-YOLO" checked><span>DAMO-YOLO</span></label><br>
<label><input type="checkbox" name="algorithm" value="YOLOX" checked><span>YOLOX</span></label><br>
<label><input type="checkbox" name="algorithm" value="RTDETRv2" checked><span>RTDETRv2</span></label>
</div> </div>
<div style="flex-grow: 1;"><canvas id="chart"></canvas></div> <!-- Canva for plotting benchmarks --> <div style="flex-grow: 1;"><canvas id="chart"></canvas></div>
</div> </div>
## Introduction ## Introduction
@ -102,7 +113,7 @@ Arguments such as `model`, `data`, `imgsz`, `half`, `device`, and `verbose` prov
| `imgsz` | `640` | The input image size for the model. Can be a single integer for square images or a tuple `(width, height)` for non-square, e.g., `(640, 480)`. | | `imgsz` | `640` | The input image size for the model. Can be a single integer for square images or a tuple `(width, height)` for non-square, e.g., `(640, 480)`. |
| `half` | `False` | Enables FP16 (half-precision) inference, reducing memory usage and possibly increasing speed on compatible hardware. Use `half=True` to enable. | | `half` | `False` | Enables FP16 (half-precision) inference, reducing memory usage and possibly increasing speed on compatible hardware. Use `half=True` to enable. |
| `int8` | `False` | Activates INT8 quantization for further optimized performance on supported devices, especially useful for edge devices. Set `int8=True` to use. | | `int8` | `False` | Activates INT8 quantization for further optimized performance on supported devices, especially useful for edge devices. Set `int8=True` to use. |
| `device` | `None` | Defines the computation device(s) for benchmarking, such as `"cpu"`, `"cuda:0"`, or a list of devices like `"cuda:0,1"` for multi-GPU setups. | | `device` | `None` | Defines the computation device(s) for benchmarking, such as `"cpu"` or `"cuda:0"`. |
| `verbose` | `False` | Controls the level of detail in logging output. A boolean value; set `verbose=True` for detailed logs or a float for thresholding errors. | | `verbose` | `False` | Controls the level of detail in logging output. A boolean value; set `verbose=True` for detailed logs or a float for thresholding errors. |
## Export Formats ## Export Formats
@ -145,6 +156,7 @@ Exporting YOLO11 models to different formats such as ONNX, TensorRT, and OpenVIN
- **ONNX:** Provides up to 3x CPU speedup. - **ONNX:** Provides up to 3x CPU speedup.
- **TensorRT:** Offers up to 5x GPU speedup. - **TensorRT:** Offers up to 5x GPU speedup.
- **OpenVINO:** Specifically optimized for Intel hardware. - **OpenVINO:** Specifically optimized for Intel hardware.
These formats enhance both the speed and accuracy of your models, making them more efficient for various real-world applications. Visit the [Export](../modes/export.md) page for complete details. These formats enhance both the speed and accuracy of your models, making them more efficient for various real-world applications. Visit the [Export](../modes/export.md) page for complete details.
### Why is benchmarking crucial in evaluating YOLO11 models? ### Why is benchmarking crucial in evaluating YOLO11 models?
@ -155,6 +167,7 @@ Benchmarking your YOLO11 models is essential for several reasons:
- **Resource Allocation:** Gauge the performance across different hardware options. - **Resource Allocation:** Gauge the performance across different hardware options.
- **Optimization:** Determine which export format offers the best performance for specific use cases. - **Optimization:** Determine which export format offers the best performance for specific use cases.
- **Cost Efficiency:** Optimize hardware usage based on benchmark results. - **Cost Efficiency:** Optimize hardware usage based on benchmark results.
Key metrics such as mAP50-95, Top-5 accuracy, and inference time help in making these evaluations. Refer to the [Key Metrics](#key-metrics-in-benchmark-mode) section for more information. Key metrics such as mAP50-95, Top-5 accuracy, and inference time help in making these evaluations. Refer to the [Key Metrics](#key-metrics-in-benchmark-mode) section for more information.
### Which export formats are supported by YOLO11, and what are their advantages? ### Which export formats are supported by YOLO11, and what are their advantages?
@ -165,6 +178,7 @@ YOLO11 supports a variety of export formats, each tailored for specific hardware
- **TensorRT:** Ideal for GPU efficiency. - **TensorRT:** Ideal for GPU efficiency.
- **OpenVINO:** Optimized for Intel hardware. - **OpenVINO:** Optimized for Intel hardware.
- **CoreML & [TensorFlow](https://www.ultralytics.com/glossary/tensorflow):** Useful for iOS and general ML applications. - **CoreML & [TensorFlow](https://www.ultralytics.com/glossary/tensorflow):** Useful for iOS and general ML applications.
For a complete list of supported formats and their respective advantages, check out the [Supported Export Formats](#supported-export-formats) section. For a complete list of supported formats and their respective advantages, check out the [Supported Export Formats](#supported-export-formats) section.
### What arguments can I use to fine-tune my YOLO11 benchmarks? ### What arguments can I use to fine-tune my YOLO11 benchmarks?
@ -178,4 +192,5 @@ When running benchmarks, several arguments can be customized to suit specific ne
- **int8:** Activate INT8 quantization for edge devices. - **int8:** Activate INT8 quantization for edge devices.
- **device:** Specify the computation device (e.g., "cpu", "cuda:0"). - **device:** Specify the computation device (e.g., "cpu", "cuda:0").
- **verbose:** Control the level of logging detail. - **verbose:** Control the level of logging detail.
For a full list of arguments, refer to the [Arguments](#arguments) section. For a full list of arguments, refer to the [Arguments](#arguments) section.

@ -28,7 +28,7 @@ Ultralytics provides various installation methods including pip, conda, and Dock
Install the `ultralytics` package using pip, or update an existing installation by running `pip install -U ultralytics`. Visit the Python Package Index (PyPI) for more details on the `ultralytics` package: [https://pypi.org/project/ultralytics/](https://pypi.org/project/ultralytics/). Install the `ultralytics` package using pip, or update an existing installation by running `pip install -U ultralytics`. Visit the Python Package Index (PyPI) for more details on the `ultralytics` package: [https://pypi.org/project/ultralytics/](https://pypi.org/project/ultralytics/).
[![PyPI - Version](https://img.shields.io/pypi/v/ultralytics?logo=pypi&logoColor=white)](https://pypi.org/project/ultralytics/) [![PyPI - Version](https://img.shields.io/pypi/v/ultralytics?logo=pypi&logoColor=white)](https://pypi.org/project/ultralytics/)
[![Downloads](https://static.pepy.tech/badge/ultralytics)](https://pepy.tech/project/ultralytics) [![Downloads](https://static.pepy.tech/badge/ultralytics)](https://www.pepy.tech/projects/ultralytics)
```bash ```bash
# Install the ultralytics package from PyPI # Install the ultralytics package from PyPI

@ -17,4 +17,8 @@ keywords: Ultralytics, SAM, Segment Anything Model, SAM 2, Segment Anything Mode
## ::: ultralytics.models.sam.predict.SAM2Predictor ## ::: ultralytics.models.sam.predict.SAM2Predictor
<br><br><hr><br>
## ::: ultralytics.models.sam.predict.SAM2VideoPredictor
<br><br> <br><br>

@ -0,0 +1,16 @@
---
description: Explore the Ultralytics Object Counter for real-time video streams. Learn about initializing parameters, tracking objects, and more.
keywords: Ultralytics, Object Counter, Real-time Tracking, Video Stream, Python, Object Detection
---
# Reference for `ultralytics/solutions/region_counter.py`
!!! note
This file is available at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/solutions/region_counter.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/solutions/region_counter.py). If you spot a problem please help fix it by [contributing](https://docs.ultralytics.com/help/contributing/) a [Pull Request](https://github.com/ultralytics/ultralytics/edit/main/ultralytics/solutions/region_counter.py) 🛠. Thank you 🙏!
<br>
## ::: ultralytics.solutions.region_counter.RegionCounter
<br><br>

@ -19,6 +19,10 @@ keywords: Ultralytics, torch utils, model optimization, device selection, infere
<br><br><hr><br> <br><br><hr><br>
## ::: ultralytics.utils.torch_utils.FXModel
<br><br><hr><br>
## ::: ultralytics.utils.torch_utils.torch_distributed_zero_first ## ::: ultralytics.utils.torch_utils.torch_distributed_zero_first
<br><br><hr><br> <br><br><hr><br>

@ -36,8 +36,8 @@ YOLO11 pretrained Segment models are shown here. Detect, Segment and Pose models
{% include "macros/yolo-seg-perf.md" %} {% include "macros/yolo-seg-perf.md" %}
- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val segment data=coco-seg.yaml device=0` - **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val segment data=coco.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val segment data=coco-seg.yaml batch=1 device=0|cpu` - **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val segment data=coco.yaml batch=1 device=0|cpu`
## Train ## Train

@ -186,6 +186,7 @@ Default inference settings include:
- **IoU Threshold (`iou=0.7`)**: For Non-Maximum Suppression (NMS). - **IoU Threshold (`iou=0.7`)**: For Non-Maximum Suppression (NMS).
- **Image Size (`imgsz=640`)**: Resizes input images prior to inference. - **Image Size (`imgsz=640`)**: Resizes input images prior to inference.
- **Device (`device=None`)**: Selects CPU or GPU for inference. - **Device (`device=None`)**: Selects CPU or GPU for inference.
For a comprehensive overview, visit the [Predict Settings](#predict-settings) section and the [Predict Guide](../modes/predict.md). For a comprehensive overview, visit the [Predict Settings](#predict-settings) section and the [Predict Guide](../modes/predict.md).
### Why should I use mixed precision training with YOLO models? ### Why should I use mixed precision training with YOLO models?

@ -458,6 +458,17 @@ image_with_obb = ann.result()
#### Bounding Boxes Circle Annotation [Circle Label](https://docs.ultralytics.com/reference/utils/plotting/#ultralytics.utils.plotting.Annotator.circle_label) #### Bounding Boxes Circle Annotation [Circle Label](https://docs.ultralytics.com/reference/utils/plotting/#ultralytics.utils.plotting.Annotator.circle_label)
<p align="center">
<br>
<iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/c-S5M36XWmg"
title="YouTube video player" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> In-Depth Guide to Text & Circle Annotations with Python Live Demos | Ultralytics Annotations 🚀
</p>
```python ```python
import cv2 import cv2

@ -10,6 +10,9 @@
130829914+IvorZhu331@users.noreply.github.com: 130829914+IvorZhu331@users.noreply.github.com:
avatar: https://avatars.githubusercontent.com/u/130829914?v=4 avatar: https://avatars.githubusercontent.com/u/130829914?v=4
username: IvorZhu331 username: IvorZhu331
131249114+ServiAmirPM@users.noreply.github.com:
avatar: https://avatars.githubusercontent.com/u/131249114?v=4
username: ServiAmirPM
131261051+MatthewNoyce@users.noreply.github.com: 131261051+MatthewNoyce@users.noreply.github.com:
avatar: https://avatars.githubusercontent.com/u/131261051?v=4 avatar: https://avatars.githubusercontent.com/u/131261051?v=4
username: MatthewNoyce username: MatthewNoyce
@ -109,6 +112,9 @@ chr043416@gmail.com:
davis.justin@mssm.org: davis.justin@mssm.org:
avatar: https://avatars.githubusercontent.com/u/23462437?v=4 avatar: https://avatars.githubusercontent.com/u/23462437?v=4
username: justincdavis username: justincdavis
francesco.mttl@gmail.com:
avatar: https://avatars.githubusercontent.com/u/3855193?v=4
username: ambitious-octopus
glenn.jocher@ultralytics.com: glenn.jocher@ultralytics.com:
avatar: https://avatars.githubusercontent.com/u/26833433?v=4 avatar: https://avatars.githubusercontent.com/u/26833433?v=4
username: glenn-jocher username: glenn-jocher

@ -0,0 +1,199 @@
// YOLO models chart ---------------------------------------------------------------------------------------------------
const data = {
YOLO11: {
n: { speed: 1.55, mAP: 39.5 },
s: { speed: 2.63, mAP: 47.0 },
m: { speed: 5.27, mAP: 51.4 },
l: { speed: 6.84, mAP: 53.2 },
x: { speed: 12.49, mAP: 54.7 },
},
YOLOv10: {
n: { speed: 1.56, mAP: 39.5 },
s: { speed: 2.66, mAP: 46.7 },
m: { speed: 5.48, mAP: 51.3 },
b: { speed: 6.54, mAP: 52.7 },
l: { speed: 8.33, mAP: 53.3 },
x: { speed: 12.2, mAP: 54.4 },
},
YOLOv9: {
t: { speed: 2.3, mAP: 37.8 },
s: { speed: 3.54, mAP: 46.5 },
m: { speed: 6.43, mAP: 51.5 },
c: { speed: 7.16, mAP: 52.8 },
e: { speed: 16.77, mAP: 55.1 },
},
YOLOv8: {
n: { speed: 1.47, mAP: 37.3 },
s: { speed: 2.66, mAP: 44.9 },
m: { speed: 5.86, mAP: 50.2 },
l: { speed: 9.06, mAP: 52.9 },
x: { speed: 14.37, mAP: 53.9 },
},
YOLOv7: { l: { speed: 6.84, mAP: 51.4 }, x: { speed: 11.57, mAP: 53.1 } },
"YOLOv6-3.0": {
n: { speed: 1.17, mAP: 37.5 },
s: { speed: 2.66, mAP: 45.0 },
m: { speed: 5.28, mAP: 50.0 },
l: { speed: 8.95, mAP: 52.8 },
},
YOLOv5: {
s: { speed: 1.92, mAP: 37.4 },
m: { speed: 4.03, mAP: 45.4 },
l: { speed: 6.61, mAP: 49.0 },
x: { speed: 11.89, mAP: 50.7 },
},
"PP-YOLOE+": {
t: { speed: 2.84, mAP: 39.9 },
s: { speed: 2.62, mAP: 43.7 },
m: { speed: 5.56, mAP: 49.8 },
l: { speed: 8.36, mAP: 52.9 },
x: { speed: 14.3, mAP: 54.7 },
},
"DAMO-YOLO": {
t: { speed: 2.32, mAP: 42.0 },
s: { speed: 3.45, mAP: 46.0 },
m: { speed: 5.09, mAP: 49.2 },
l: { speed: 7.18, mAP: 50.8 },
},
YOLOX: {
s: { speed: 2.56, mAP: 40.5 },
m: { speed: 5.43, mAP: 46.9 },
l: { speed: 9.04, mAP: 49.7 },
x: { speed: 16.1, mAP: 51.1 },
},
RTDETRv2: {
s: { speed: 5.03, mAP: 48.1 },
m: { speed: 7.51, mAP: 51.9 },
l: { speed: 9.76, mAP: 53.4 },
x: { speed: 15.03, mAP: 54.3 },
},
};
let chart = null; // chart variable will hold the reference to the current chart instance.
// Function to lighten a hex color by a specified amount.
function lightenHexColor(color, amount = 0.5) {
const r = parseInt(color.slice(1, 3), 16);
const g = parseInt(color.slice(3, 5), 16);
const b = parseInt(color.slice(5, 7), 16);
const newR = Math.min(255, Math.round(r + (255 - r) * amount));
const newG = Math.min(255, Math.round(g + (255 - g) * amount));
const newB = Math.min(255, Math.round(b + (255 - b) * amount));
return `#${newR.toString(16).padStart(2, "0")}${newG.toString(16).padStart(2, "0")}${newB.toString(16).padStart(2, "0")}`;
}
// Function to update the benchmarks chart.
function updateChart() {
if (chart) {
chart.destroy();
} // If a chart instance already exists, destroy it.
// Define a specific color map for models.
const colorMap = {
YOLO11: "#0b23a9",
YOLOv10: "#ff7f0e",
YOLOv9: "#2ca02c",
YOLOv8: "#d62728",
YOLOv7: "#9467bd",
"YOLOv6-3.0": "#8c564b",
YOLOv5: "#e377c2",
"PP-YOLOE+": "#7f7f7f",
"DAMO-YOLO": "#bcbd22",
YOLOX: "#17becf",
RTDETRv2: "#eccd22",
};
// Get the selected algorithms from the checkboxes.
const selectedAlgorithms = [
...document.querySelectorAll('input[name="algorithm"]:checked'),
].map((e) => e.value);
// Create the datasets for the selected algorithms.
const datasets = selectedAlgorithms.map((algorithm, i) => {
const baseColor =
colorMap[algorithm] || `hsl(${Math.random() * 360}, 70%, 50%)`;
const lineColor = i === 0 ? baseColor : lightenHexColor(baseColor, 0.6); // Lighten non-primary lines.
return {
label: algorithm, // Label for the data points in the legend.
data: Object.entries(data[algorithm]).map(([version, point]) => ({
x: point.speed, // Speed data points on the x-axis.
y: point.mAP, // mAP data points on the y-axis.
version: version.toUpperCase(), // Store the version as additional data.
})),
fill: false, // Don't fill the chart.
borderColor: lineColor, // Use the lightened color for the line.
tension: 0.3, // Smooth the line.
pointRadius: i === 0 ? 7 : 4, // Highlight primary dataset points.
pointHoverRadius: i === 0 ? 9 : 6, // Highlight hover for primary dataset.
pointBackgroundColor: lineColor, // Fill points with the line color.
pointBorderColor: "#ffffff", // Add a border around points for contrast.
borderWidth: i === 0 ? 3 : 1.5, // Slightly increase line size for the primary dataset.
};
});
if (datasets.length === 0) {
return;
} // If there are no selected algorithms, return without creating a new chart.
// Create a new chart instance.
chart = new Chart(document.getElementById("chart").getContext("2d"), {
type: "line", // Set the chart type to line.
data: { datasets },
options: {
plugins: {
legend: {
display: true,
position: "top",
labels: { color: "#808080" },
}, // Configure the legend.
tooltip: {
callbacks: {
label: (tooltipItem) => {
const { dataset, dataIndex } = tooltipItem;
const point = dataset.data[dataIndex];
return `${dataset.label}${point.version.toLowerCase()}: Speed = ${point.x}, mAP = ${point.y}`; // Custom tooltip label.
},
},
mode: "nearest",
intersect: false,
}, // Configure the tooltip.
},
interaction: { mode: "nearest", axis: "x", intersect: false }, // Configure the interaction mode.
scales: {
x: {
type: "linear",
position: "bottom",
title: {
display: true,
text: "Latency T4 TensorRT10 FP16 (ms/img)",
color: "#808080",
}, // X-axis title.
grid: { color: "#e0e0e0" }, // Grid line color.
ticks: { color: "#808080" }, // Tick label color.
},
y: {
title: { display: true, text: "mAP", color: "#808080" }, // Y-axis title.
grid: { color: "#e0e0e0" }, // Grid line color.
ticks: { color: "#808080" }, // Tick label color.
},
},
},
});
}
document$.subscribe(function () {
function initializeApp() {
if (typeof Chart !== "undefined") {
document
.querySelectorAll('input[name="algorithm"]')
.forEach((checkbox) =>
checkbox.addEventListener("change", updateChart),
);
updateChart();
} else {
setTimeout(initializeApp, 100); // Retry every 100ms
}
}
initializeApp(); // Initial chart rendering
});

@ -1,4 +1,4 @@
// Apply theme based on user preference // Apply theme colors based on dark/light mode
const applyTheme = (isDark) => { const applyTheme = (isDark) => {
document.body.setAttribute( document.body.setAttribute(
"data-md-color-scheme", "data-md-color-scheme",
@ -10,80 +10,74 @@ const applyTheme = (isDark) => {
); );
}; };
// Check and apply auto theme // Check and apply appropriate theme based on system/user preference
const checkAutoTheme = () => { const checkTheme = () => {
const supportedLangCodes = [ const palette = JSON.parse(localStorage.getItem(".__palette") || "{}");
"en",
"zh",
"ko",
"ja",
"ru",
"de",
"fr",
"es",
"pt",
"it",
"tr",
"vi",
"ar",
];
const langCode = window.location.pathname.split("/")[1];
const localStorageKey = `${supportedLangCodes.includes(langCode) ? `/${langCode}` : ""}/.__palette`;
const palette = JSON.parse(localStorage.getItem(localStorageKey) || "{}");
if (palette.index === 0) { if (palette.index === 0) {
// Auto mode is selected
applyTheme(window.matchMedia("(prefers-color-scheme: dark)").matches); applyTheme(window.matchMedia("(prefers-color-scheme: dark)").matches);
} }
}; };
// Event listeners for theme changes // Watch for system theme changes
const mediaQueryList = window.matchMedia("(prefers-color-scheme: dark)"); window
mediaQueryList.addListener(checkAutoTheme); .matchMedia("(prefers-color-scheme: dark)")
.addEventListener("change", checkTheme);
// Initial theme check
checkAutoTheme();
// Auto theme input listener // Initialize theme handling on page load
document.addEventListener("DOMContentLoaded", () => { document.addEventListener("DOMContentLoaded", () => {
const autoThemeInput = document.getElementById("__palette_1"); // Watch for theme toggle changes
autoThemeInput?.addEventListener("click", () => { document
if (autoThemeInput.checked) setTimeout(checkAutoTheme); .getElementById("__palette_1")
}); ?.addEventListener(
}); "change",
(e) => e.target.checked && setTimeout(checkTheme),
// Iframe navigation
window.onhashchange = () => {
window.parent.postMessage(
{
type: "navigation",
hash:
window.location.pathname +
window.location.search +
window.location.hash,
},
"*",
); );
}; // Initial theme check
checkTheme();
});
// Add Inkeep button // Inkeep --------------------------------------------------------------------------------------------------------------
document.addEventListener("DOMContentLoaded", () => { document.addEventListener("DOMContentLoaded", () => {
const enableSearchBar = true;
const inkeepScript = document.createElement("script"); const inkeepScript = document.createElement("script");
inkeepScript.src = "https://unpkg.com/@inkeep/uikit-js@0.3.11/dist/embed.js"; inkeepScript.src = "https://unpkg.com/@inkeep/uikit-js@0.3.18/dist/embed.js";
inkeepScript.type = "module"; inkeepScript.type = "module";
inkeepScript.defer = true; inkeepScript.defer = true;
document.head.appendChild(inkeepScript); document.head.appendChild(inkeepScript);
// Configure and initialize the widget if (enableSearchBar) {
const addInkeepWidget = () => { const containerDiv = document.createElement("div");
containerDiv.style.transform = "scale(0.7)";
containerDiv.style.transformOrigin = "left center";
const inkeepDiv = document.createElement("div");
inkeepDiv.id = "inkeepSearchBar";
containerDiv.appendChild(inkeepDiv);
const headerElement = document.querySelector(".md-header__inner");
const searchContainer = headerElement.querySelector(".md-header__source");
if (headerElement && searchContainer) {
headerElement.insertBefore(containerDiv, searchContainer);
}
}
// configure and initialize the widget
const addInkeepWidget = (componentType, targetElementId) => {
const inkeepWidget = Inkeep().embed({ const inkeepWidget = Inkeep().embed({
componentType: "ChatButton", componentType,
...(componentType !== "ChatButton"
? { targetElement: targetElementId }
: {}),
colorModeSync: { colorModeSync: {
observedElement: document.documentElement, observedElement: document.documentElement,
isDarkModeCallback: (el) => { isDarkModeCallback: (el) => {
const currentTheme = el.getAttribute("data-color-mode"); const currentTheme = el.getAttribute("data-color-mode");
return currentTheme === "dark"; return currentTheme === "dark";
}, },
colorModeAttribute: "data-color-mode", colorModeAttribute: "data-color-mode-scheme",
}, },
properties: { properties: {
chatButtonType: "PILL", chatButtonType: "PILL",
@ -99,13 +93,12 @@ document.addEventListener("DOMContentLoaded", () => {
theme: { theme: {
stylesheetUrls: ["/stylesheets/style.css"], stylesheetUrls: ["/stylesheets/style.css"],
}, },
// ...optional settings
}, },
modalSettings: { modalSettings: {
// optional settings // optional settings
}, },
searchSettings: { searchSettings: {
// optional settings placeholder: "Search",
}, },
aiChatSettings: { aiChatSettings: {
chatSubjectName: "Ultralytics", chatSubjectName: "Ultralytics",
@ -144,97 +137,9 @@ document.addEventListener("DOMContentLoaded", () => {
}); });
}; };
inkeepScript.addEventListener("load", () => { inkeepScript.addEventListener("load", () => {
addInkeepWidget(); // initialize the widget const widgetContainer = document.getElementById("inkeepSearchBar");
});
});
// This object contains the benchmark data for various object detection models. addInkeepWidget("ChatButton");
const data = { widgetContainer && addInkeepWidget("SearchBar", "#inkeepSearchBar");
'YOLOv5': {s: {speed: 1.92, mAP: 37.4}, m: {speed: 4.03, mAP: 45.4}, l: {speed: 6.61, mAP: 49.0}, x: {speed: 11.89, mAP: 50.7}}, });
'YOLOv6': {n: {speed: 1.17, mAP: 37.5}, s: {speed: 2.66, mAP: 45.0}, m: {speed: 5.28, mAP: 50.0}, l: {speed: 8.95, mAP: 52.8}},
'YOLOv7': {l: {speed: 6.84, mAP: 51.4}, x: {speed: 11.57, mAP: 53.1}},
'YOLOv8': {n: {speed: 1.47, mAP: 37.3}, s: {speed: 2.66, mAP: 44.9}, m: {speed: 5.86, mAP: 50.2}, l: {speed: 9.06, mAP: 52.9}, x: {speed: 14.37, mAP: 53.9}},
'YOLOv9': {t: {speed: 2.30, mAP: 37.8}, s: {speed: 3.54, mAP: 46.5}, m: {speed: 6.43, mAP: 51.5}, c: {speed: 7.16, mAP: 52.8}, e: {speed: 16.77, mAP: 55.1}},
'YOLOv10': {n: {speed: 1.56, mAP: 39.5}, s: {speed: 2.66, mAP: 46.7}, m: {speed: 5.48, mAP: 51.3}, b: {speed: 6.54, mAP: 52.7}, l: {speed: 8.33, mAP: 53.3}, x: {speed: 12.2, mAP: 54.4}},
'PPYOLOE': {t: {speed: 2.84, mAP: 39.9}, s: {speed: 2.62, mAP: 43.7}, m: {speed: 5.56, mAP: 49.8}, l: {speed: 8.36, mAP: 52.9}, x: {speed: 14.3, mAP: 54.7}},
'YOLO11': {n: {speed: 1.55, mAP: 39.5}, s: {speed: 2.63, mAP: 47.0}, m: {speed: 5.27, mAP: 51.4}, l: {speed: 6.84, mAP: 53.2}, x: {speed: 12.49, mAP: 54.7}}
};
let chart = null; // chart variable will hold the reference to the current chart instance.
// This function is responsible for updating the benchmarks chart.
function updateChart() {
// If a chart instance already exists, destroy it.
if (chart) chart.destroy();
// Get the selected algorithms from the checkboxes.
const selectedAlgorithms = [...document.querySelectorAll('input[name="algorithm"]:checked')].map(e => e.value);
// Create the datasets for the selected algorithms.
const datasets = selectedAlgorithms.map((algorithm, index) => ({
label: algorithm, // Label for the data points in the legend.
data: Object.entries(data[algorithm]).map(([version, point]) => ({
x: point.speed, // Speed data points on the x-axis.
y: point.mAP, // mAP data points on the y-axis.
version: version.toUpperCase() // Store the version as additional data.
})),
fill: false, // Don't fill the chart.
borderColor: `hsl(${index * 90}, 70%, 50%)`, // Assign a unique color to each dataset.
tension: 0.3, // Smooth the line.
pointRadius: 5, // Increase the dot size.
pointHoverRadius: 10, // Increase the dot size on hover.
borderWidth: 2 // Set the line thickness.
}));
// If there are no selected algorithms, return without creating a new chart.
if (datasets.length === 0) return;
// Create a new chart instance.
chart = new Chart(document.getElementById('chart').getContext('2d'), {
type: 'line', // Set the chart type to line.
data: { datasets },
options: {
plugins: {
legend: { display: true, position: 'top', labels: {color: '#808080'} }, // Configure the legend.
tooltip: {
callbacks: {
label: (tooltipItem) => {
const { dataset, dataIndex } = tooltipItem;
const point = dataset.data[dataIndex];
return `${dataset.label}${point.version.toLowerCase()}: Speed = ${point.x}, mAP = ${point.y}`; // Custom tooltip label.
}
},
mode: 'nearest',
intersect: false
} // Configure the tooltip.
},
interaction: { mode: 'nearest', axis: 'x', intersect: false }, // Configure the interaction mode.
scales: {
x: {
type: 'linear', position: 'bottom',
title: { display: true, text: 'Latency T4 TensorRT10 FP16 (ms/img)', color: '#808080'}, // X-axis title.
grid: { color: '#e0e0e0' }, // Grid line color.
ticks: { color: '#808080' } // Tick label color.
},
y: {
title: { display: true, text: 'mAP', color: '#808080'}, // Y-axis title.
grid: { color: '#e0e0e0' }, // Grid line color.
ticks: { color: '#808080' } // Tick label color.
}
}
}
}); });
}
// Poll for Chart.js to load, then initialize checkboxes and chart
function initializeApp() {
if (typeof Chart !== 'undefined') {
document.querySelectorAll('input[name="algorithm"]').forEach(checkbox =>
checkbox.addEventListener('change', updateChart)
);
updateChart();
} else {
setTimeout(initializeApp, 100); // Retry every 100ms
}
}
document.addEventListener("DOMContentLoaded", initializeApp); // Initial chart rendering on page load

@@ -1,7 +1,9 @@
 // Giscus functionality
 function loadGiscus() {
   const giscusContainer = document.getElementById("giscus-container");
-  if (!giscusContainer || giscusContainer.querySelector("script")) return;
+  if (!giscusContainer || giscusContainer.querySelector("script")) {
+    return;
+  }
   const script = document.createElement("script");
   script.src = "https://giscus.app/client.js";
@@ -55,14 +57,17 @@ function setupGiscusLoader() {
   const giscusContainer = document.getElementById("giscus-container");
   if (giscusContainer) {
-    const observer = new IntersectionObserver((entries) => {
+    const observer = new IntersectionObserver(
+      (entries) => {
         entries.forEach((entry) => {
           if (entry.isIntersecting) {
             loadGiscus();
             observer.unobserve(entry.target);
           }
         });
-    }, { threshold: 0.1 }); // Trigger when 10% of the element is visible
+      },
+      { threshold: 0.1 },
+    ); // Trigger when 10% of the element is visible
     observer.observe(giscusContainer);
   }

@@ -265,8 +265,15 @@ div.highlight {
 }
 /* MkDocs Ultralytics Plugin ---------------------------------------------------------------------------------------- */
-/* Inkeep button font color ----------------------------------------------------------------------------------------- */
+/* Inkeep ----------------------------------------------------------------------------------------------------------- */
 .ikp-floating-button {
   color: #111f68;
 }
-/* Inkeep button ---------------------------------------------------------------------------------------------------- */
+#inkeepSearchBar {
+  transition: all 0.2s ease-in-out;
+}
+#inkeepSearchBar:hover {
+  transform: scale(1.1);
+  filter: brightness(1.2);
+}
+/* Inkeep ----------------------------------------------------------------------------------------------------------- */

@@ -64,7 +64,7 @@ class SAHIInference:
                 break
             annotator = Annotator(frame)  # Initialize annotator for plotting detection and tracking results
             results = get_sliced_prediction(
-                frame,
+                frame[..., ::-1],
                 self.detection_model,
                 slice_height=512,
                 slice_width=512,

@ -38,7 +38,7 @@
"\n", "\n",
"Pip install `ultralytics` and [dependencies](https://github.com/ultralytics/ultralytics/blob/main/pyproject.toml) and check software and hardware.\n", "Pip install `ultralytics` and [dependencies](https://github.com/ultralytics/ultralytics/blob/main/pyproject.toml) and check software and hardware.\n",
"\n", "\n",
"[![PyPI - Version](https://img.shields.io/pypi/v/ultralytics?logo=pypi&logoColor=white)](https://pypi.org/project/ultralytics/) [![Downloads](https://static.pepy.tech/badge/ultralytics)](https://pepy.tech/project/ultralytics) [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/ultralytics?logo=python&logoColor=gold)](https://pypi.org/project/ultralytics/)" "[![PyPI - Version](https://img.shields.io/pypi/v/ultralytics?logo=pypi&logoColor=white)](https://pypi.org/project/ultralytics/) [![Downloads](https://static.pepy.tech/badge/ultralytics)](https://www.pepy.tech/projects/ultralytics) [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/ultralytics?logo=python&logoColor=gold)](https://pypi.org/project/ultralytics/)"
] ]
}, },
{ {

@ -36,7 +36,7 @@
"\n", "\n",
"Pip install `ultralytics` and [dependencies](https://github.com/ultralytics/ultralytics/blob/main/pyproject.toml) and check software and hardware.\n", "Pip install `ultralytics` and [dependencies](https://github.com/ultralytics/ultralytics/blob/main/pyproject.toml) and check software and hardware.\n",
"\n", "\n",
"[![PyPI - Version](https://img.shields.io/pypi/v/ultralytics?logo=pypi&logoColor=white)](https://pypi.org/project/ultralytics/) [![Downloads](https://static.pepy.tech/badge/ultralytics)](https://pepy.tech/project/ultralytics) [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/ultralytics?logo=python&logoColor=gold)](https://pypi.org/project/ultralytics/)" "[![PyPI - Version](https://img.shields.io/pypi/v/ultralytics?logo=pypi&logoColor=white)](https://pypi.org/project/ultralytics/) [![Downloads](https://static.pepy.tech/badge/ultralytics)](https://www.pepy.tech/projects/ultralytics) [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/ultralytics?logo=python&logoColor=gold)](https://pypi.org/project/ultralytics/)"
] ]
}, },
{ {

@ -38,7 +38,7 @@
"\n", "\n",
"Pip install `ultralytics` and [dependencies](https://github.com/ultralytics/ultralytics/blob/main/pyproject.toml) and check software and hardware.\n", "Pip install `ultralytics` and [dependencies](https://github.com/ultralytics/ultralytics/blob/main/pyproject.toml) and check software and hardware.\n",
"\n", "\n",
"[![PyPI - Version](https://img.shields.io/pypi/v/ultralytics?logo=pypi&logoColor=white)](https://pypi.org/project/ultralytics/) [![Downloads](https://static.pepy.tech/badge/ultralytics)](https://pepy.tech/project/ultralytics) [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/ultralytics?logo=python&logoColor=gold)](https://pypi.org/project/ultralytics/)" "[![PyPI - Version](https://img.shields.io/pypi/v/ultralytics?logo=pypi&logoColor=white)](https://pypi.org/project/ultralytics/) [![Downloads](https://static.pepy.tech/badge/ultralytics)](https://www.pepy.tech/projects/ultralytics) [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/ultralytics?logo=python&logoColor=gold)](https://pypi.org/project/ultralytics/)"
] ]
}, },
{ {

@ -38,7 +38,7 @@
"\n", "\n",
"Pip install `ultralytics` and [dependencies](https://github.com/ultralytics/ultralytics/blob/main/pyproject.toml) and check software and hardware.\n", "Pip install `ultralytics` and [dependencies](https://github.com/ultralytics/ultralytics/blob/main/pyproject.toml) and check software and hardware.\n",
"\n", "\n",
"[![PyPI - Version](https://img.shields.io/pypi/v/ultralytics?logo=pypi&logoColor=white)](https://pypi.org/project/ultralytics/) [![Downloads](https://static.pepy.tech/badge/ultralytics)](https://pepy.tech/project/ultralytics) [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/ultralytics?logo=python&logoColor=gold)](https://pypi.org/project/ultralytics/)" "[![PyPI - Version](https://img.shields.io/pypi/v/ultralytics?logo=pypi&logoColor=white)](https://pypi.org/project/ultralytics/) [![Downloads](https://static.pepy.tech/badge/ultralytics)](https://www.pepy.tech/projects/ultralytics) [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/ultralytics?logo=python&logoColor=gold)](https://pypi.org/project/ultralytics/)"
] ]
}, },
{ {

@ -55,7 +55,7 @@
"\n", "\n",
"Pip install `ultralytics` and [dependencies](https://github.com/ultralytics/ultralytics/blob/main/pyproject.toml) and check software and hardware.\n", "Pip install `ultralytics` and [dependencies](https://github.com/ultralytics/ultralytics/blob/main/pyproject.toml) and check software and hardware.\n",
"\n", "\n",
"[![PyPI - Version](https://img.shields.io/pypi/v/ultralytics?logo=pypi&logoColor=white)](https://pypi.org/project/ultralytics/) [![Downloads](https://static.pepy.tech/badge/ultralytics)](https://pepy.tech/project/ultralytics) [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/ultralytics?logo=python&logoColor=gold)](https://pypi.org/project/ultralytics/)" "[![PyPI - Version](https://img.shields.io/pypi/v/ultralytics?logo=pypi&logoColor=white)](https://pypi.org/project/ultralytics/) [![Downloads](https://static.pepy.tech/badge/ultralytics)](https://www.pepy.tech/projects/ultralytics) [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/ultralytics?logo=python&logoColor=gold)](https://pypi.org/project/ultralytics/)"
] ]
}, },
{ {

@ -291,6 +291,7 @@ nav:
- COCO8-pose: datasets/pose/coco8-pose.md - COCO8-pose: datasets/pose/coco8-pose.md
- Tiger-pose: datasets/pose/tiger-pose.md - Tiger-pose: datasets/pose/tiger-pose.md
- Hand-keypoints: datasets/pose/hand-keypoints.md - Hand-keypoints: datasets/pose/hand-keypoints.md
- Dog-pose: datasets/pose/dog-pose.md
- Classification: - Classification:
- datasets/classify/index.md - datasets/classify/index.md
- Caltech 101: datasets/classify/caltech101.md - Caltech 101: datasets/classify/caltech101.md
@ -412,12 +413,14 @@ nav:
- TF.js: integrations/tfjs.md - TF.js: integrations/tfjs.md
- TFLite: integrations/tflite.md - TFLite: integrations/tflite.md
- TFLite Edge TPU: integrations/edge-tpu.md - TFLite Edge TPU: integrations/edge-tpu.md
- Sony IMX500: integrations/sony-imx500.md
- TensorBoard: integrations/tensorboard.md - TensorBoard: integrations/tensorboard.md
- TensorRT: integrations/tensorrt.md - TensorRT: integrations/tensorrt.md
- TorchScript: integrations/torchscript.md - TorchScript: integrations/torchscript.md
- VS Code: integrations/vscode.md - VS Code: integrations/vscode.md
- Weights & Biases: integrations/weights-biases.md - Weights & Biases: integrations/weights-biases.md
- Albumentations: integrations/albumentations.md - Albumentations: integrations/albumentations.md
- SONY IMX500: integrations/sony-imx500.md
- HUB: - HUB:
- hub/index.md - hub/index.md
- Web: - Web:
@ -559,7 +562,6 @@ nav:
- utils: reference/nn/modules/utils.md - utils: reference/nn/modules/utils.md
- tasks: reference/nn/tasks.md - tasks: reference/nn/tasks.md
- solutions: - solutions:
- solutions: reference/solutions/solutions.md
- ai_gym: reference/solutions/ai_gym.md - ai_gym: reference/solutions/ai_gym.md
- analytics: reference/solutions/analytics.md - analytics: reference/solutions/analytics.md
- distance_calculation: reference/solutions/distance_calculation.md - distance_calculation: reference/solutions/distance_calculation.md
@ -567,8 +569,10 @@ nav:
- object_counter: reference/solutions/object_counter.md - object_counter: reference/solutions/object_counter.md
- parking_management: reference/solutions/parking_management.md - parking_management: reference/solutions/parking_management.md
- queue_management: reference/solutions/queue_management.md - queue_management: reference/solutions/queue_management.md
- solutions: reference/solutions/solutions.md
- speed_estimation: reference/solutions/speed_estimation.md - speed_estimation: reference/solutions/speed_estimation.md
- streamlit_inference: reference/solutions/streamlit_inference.md - streamlit_inference: reference/solutions/streamlit_inference.md
- region_counter: reference/solutions/region_counter.md
- trackers: - trackers:
- basetrack: reference/trackers/basetrack.md - basetrack: reference/trackers/basetrack.md
- bot_sort: reference/trackers/bot_sort.md - bot_sort: reference/trackers/bot_sort.md
@ -624,8 +628,8 @@ nav:
# Plugins including 301 redirects navigation --------------------------------------------------------------------------- # Plugins including 301 redirects navigation ---------------------------------------------------------------------------
plugins: plugins:
- macros - macros
- search: # - search:
lang: en # lang: en
- mkdocstrings: - mkdocstrings:
enabled: true enabled: true
default_handler: python default_handler: python

@@ -205,3 +205,12 @@ def test_export_ncnn():
     """Test YOLO exports to NCNN format."""
     file = YOLO(MODEL).export(format="ncnn", imgsz=32)
     YOLO(file)(SOURCE, imgsz=32)  # exported model inference
+
+
+@pytest.mark.skipif(True, reason="Test disabled as keras and tensorflow version conflicts with tflite export.")
+@pytest.mark.skipif(not LINUX or MACOS, reason="Skipping test on Windows and macOS")
+def test_export_imx():
+    """Test YOLOv8n exports to IMX format."""
+    model = YOLO("yolov8n.pt")
+    file = model.export(format="imx", imgsz=32)
+    YOLO(file)(SOURCE, imgsz=32)

@@ -16,7 +16,7 @@ def test_major_solutions():
     safe_download(url=MAJOR_SOLUTIONS_DEMO)
     cap = cv2.VideoCapture("solutions_ci_demo.mp4")
     assert cap.isOpened(), "Error reading video file"
-    region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360)]
+    region_points = [(20, 400), (1080, 400), (1080, 360), (20, 360)]
     counter = solutions.ObjectCounter(region=region_points, model="yolo11n.pt", show=False)  # Test object counter
     heatmap = solutions.Heatmap(colormap=cv2.COLORMAP_PARULA, model="yolo11n.pt", show=False)  # Test heatmaps
     speed = solutions.SpeedEstimator(region=region_points, model="yolo11n.pt", show=False)  # Test queue manager

@@ -1,6 +1,6 @@
 # Ultralytics YOLO 🚀, AGPL-3.0 license

-__version__ = "8.3.28"
+__version__ = "8.3.38"

 import os

@@ -83,13 +83,13 @@ SOLUTIONS_HELP_MSG = f"""
    See all ARGS at https://docs.ultralytics.com/usage/cfg or with 'yolo cfg'

    1. Call object counting solution
-        yolo solutions count source="path/to/video/file.mp4" region=[(20, 400), (1080, 404), (1080, 360), (20, 360)]
+        yolo solutions count source="path/to/video/file.mp4" region=[(20, 400), (1080, 400), (1080, 360), (20, 360)]

    2. Call heatmaps solution
        yolo solutions heatmap colormap=cv2.COLORMAP_PARAULA model=yolo11n.pt

    3. Call queue management solution
-        yolo solutions queue region=[(20, 400), (1080, 404), (1080, 360), (20, 360)] model=yolo11n.pt
+        yolo solutions queue region=[(20, 400), (1080, 400), (1080, 360), (20, 360)] model=yolo11n.pt

    4. Call workouts monitoring solution for push-ups
        yolo solutions workout model=yolo11n-pose.pt kpts=[6, 8, 10]
@@ -160,7 +160,6 @@ CFG_FRACTION_KEYS = {  # fractional float arguments with 0.0<=values<=1.0
    "weight_decay",
    "warmup_momentum",
    "warmup_bias_lr",
-    "label_smoothing",
    "hsv_h",
    "hsv_s",
    "hsv_v",
@@ -436,6 +435,9 @@ def _handle_deprecation(custom):
        if key == "line_thickness":
            deprecation_warn(key, "line_width")
            custom["line_width"] = custom.pop("line_thickness")
+        if key == "label_smoothing":
+            deprecation_warn(key)
+            custom.pop("label_smoothing")

    return custom
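For reference, a minimal sketch of what this means for callers, assuming the handler above is applied to user overrides before argument validation (all other argument names are standard training args):

from ultralytics import YOLO

# Passing the removed argument no longer errors out; it is warned about and dropped.
model = YOLO("yolo11n.pt")
model.train(data="coco8.yaml", epochs=1, imgsz=640, label_smoothing=0.1)  # ignored with a deprecation warning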
@@ -671,6 +673,9 @@ def handle_yolo_solutions(args: List[str]) -> None:
        )
        s_n = "count"  # Default solution if none provided

+    if args and args[0] == "help":  # Add check for return if user call `yolo solutions help`
+        return
+
    cls, method = SOLUTION_MAP[s_n]  # solution class name, method name and default source

    from ultralytics import solutions  # import ultralytics solutions
@@ -735,9 +740,8 @@ def parse_key_value_pair(pair: str = "key=value"):
        pair (str): A string containing a key-value pair in the format "key=value".

    Returns:
-        (tuple): A tuple containing two elements:
-            - key (str): The parsed key.
-            - value (str): The parsed value.
+        key (str): The parsed key.
+        value (str): The parsed value.

    Raises:
        AssertionError: If the value is missing or empty.
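A quick sketch of the documented contract, assuming the function is imported from ultralytics.cfg where it is defined:

from ultralytics.cfg import parse_key_value_pair

key, value = parse_key_value_pair("imgsz=640")  # splits on the first '=' into a key and its value
parse_key_value_pair("imgsz=")  # raises AssertionError because the value is missing or empty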

@ -0,0 +1,23 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Dogs dataset http://vision.stanford.edu/aditya86/ImageNetDogs/ by Stanford
# Documentation: https://docs.ultralytics.com/datasets/pose/dog-pose/
# Example usage: yolo train data=dog-pose.yaml
# parent
# ├── ultralytics
# └── datasets
#     └── dog-pose ← downloads here (337 MB)
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/dog-pose # dataset root dir
train: train # train images (relative to 'path') 6773 images
val: val # val images (relative to 'path') 1703 images
# Keypoints
kpt_shape: [24, 3] # number of keypoints, number of dims (2 for x,y or 3 for x,y,visible)
# Classes
names:
0: dog
# Download script/URL (optional)
download: https://github.com/ultralytics/assets/releases/download/v0.0.0/dog-pose.zip
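In Python, training on this dataset follows the example usage noted in the header; a minimal sketch assuming a pose checkpoint such as yolo11n-pose.pt:

from ultralytics import YOLO

# The dataset is downloaded automatically on first use via the URL above.
model = YOLO("yolo11n-pose.pt")
results = model.train(data="dog-pose.yaml", epochs=100, imgsz=640)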

@@ -83,7 +83,7 @@ int8: False # (bool) CoreML/TF INT8 quantization
 dynamic: False # (bool) ONNX/TF/TensorRT: dynamic axes
 simplify: True # (bool) ONNX: simplify model using `onnxslim`
 opset: # (int, optional) ONNX: opset version
-workspace: 4 # (int) TensorRT: workspace size (GB)
+workspace: None # (float, optional) TensorRT: workspace size (GiB), `None` will let TensorRT auto-allocate memory
 nms: False # (bool) CoreML: add NMS

 # Hyperparameters ------------------------------------------------------------------------------------------------------
@@ -99,7 +99,6 @@ cls: 0.5 # (float) cls loss gain (scale with pixels)
 dfl: 1.5 # (float) dfl loss gain
 pose: 12.0 # (float) pose loss gain
 kobj: 1.0 # (float) keypoint obj loss gain
-label_smoothing: 0.0 # (float) label smoothing (fraction)
 nbs: 64 # (int) nominal batch size
 hsv_h: 0.015 # (float) image HSV-Hue augmentation (fraction)
 hsv_s: 0.7 # (float) image HSV-Saturation augmentation (fraction)
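How the new workspace default plays out at export time; a hedged sketch of the Python call, where None lets TensorRT size its own workspace pool and a number caps it in GiB:

from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.export(format="engine", workspace=None)  # TensorRT auto-allocates builder memory
model.export(format="engine", workspace=4)  # or cap the workspace pool at roughly 4 GiB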

@@ -2,7 +2,7 @@
 # Configuration for Ultralytics Solutions

 # Object counting settings
-region: # Object counting, queue or speed estimation region points. Default region points are [(20, 400), (1080, 404), (1080, 360), (20, 360)]
+region: # Object counting, queue or speed estimation region points. Default region points are [(20, 400), (1080, 400), (1080, 360), (20, 360)]
 show_in: True # Flag to display objects moving *into* the defined region
 show_out: True # Flag to display objects moving *out of* the defined region
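The same default region expressed through the Python Solutions API; a sketch mirroring the test above, with the per-frame call hedged because its exact name differs between releases:

import cv2

from ultralytics import solutions

region_points = [(20, 400), (1080, 400), (1080, 360), (20, 360)]  # updated default region
counter = solutions.ObjectCounter(region=region_points, model="yolo11n.pt", show=False)

cap = cv2.VideoCapture("path/to/video.mp4")  # placeholder source
while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    counter.count(frame)  # assumed per-frame entry point; some releases expose this as counter(frame)
cap.release()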

@ -1591,7 +1591,7 @@ class LetterBox:
labels["ratio_pad"] = (labels["ratio_pad"], (left, top)) # for evaluation labels["ratio_pad"] = (labels["ratio_pad"], (left, top)) # for evaluation
if len(labels): if len(labels):
labels = self._update_labels(labels, ratio, dw, dh) labels = self._update_labels(labels, ratio, left, top)
labels["img"] = img labels["img"] = img
labels["resized_shape"] = new_shape labels["resized_shape"] = new_shape
return labels return labels
@ -2111,7 +2111,6 @@ class Format:
h (int): Height of the image. h (int): Height of the image.
Returns: Returns:
(tuple): Tuple containing:
masks (numpy.ndarray): Bitmap masks with shape (N, H, W) or (1, H, W) if mask_overlap is True. masks (numpy.ndarray): Bitmap masks with shape (N, H, W) or (1, H, W) if mask_overlap is True.
instances (Instances): Updated instances object with sorted segments if mask_overlap is True. instances (Instances): Updated instances object with sorted segments if mask_overlap is True.
cls (numpy.ndarray): Updated class labels, sorted if mask_overlap is True. cls (numpy.ndarray): Updated class labels, sorted if mask_overlap is True.
@ -2280,7 +2279,7 @@ def v8_transforms(dataset, imgsz, hyp, stretch=False):
Args: Args:
dataset (Dataset): The dataset object containing image data and annotations. dataset (Dataset): The dataset object containing image data and annotations.
imgsz (int): The target image size for resizing. imgsz (int): The target image size for resizing.
hyp (Dict): A dictionary of hyperparameters controlling various aspects of the transformations. hyp (Namespace): A dictionary of hyperparameters controlling various aspects of the transformations.
stretch (bool): If True, applies stretching to the image. If False, uses LetterBox resizing. stretch (bool): If True, applies stretching to the image. If False, uses LetterBox resizing.
Returns: Returns:
@ -2288,8 +2287,9 @@ def v8_transforms(dataset, imgsz, hyp, stretch=False):
Examples: Examples:
>>> from ultralytics.data.dataset import YOLODataset >>> from ultralytics.data.dataset import YOLODataset
>>> from ultralytics.utils import IterableSimpleNamespace
>>> dataset = YOLODataset(img_path="path/to/images", imgsz=640) >>> dataset = YOLODataset(img_path="path/to/images", imgsz=640)
>>> hyp = {"mosaic": 1.0, "copy_paste": 0.5, "degrees": 10.0, "translate": 0.2, "scale": 0.9} >>> hyp = IterableSimpleNamespace(mosaic=1.0, copy_paste=0.5, degrees=10.0, translate=0.2, scale=0.9)
>>> transforms = v8_transforms(dataset, imgsz=640, hyp=hyp) >>> transforms = v8_transforms(dataset, imgsz=640, hyp=hyp)
>>> augmented_data = transforms(dataset[0]) >>> augmented_data = transforms(dataset[0])
""" """

@@ -577,7 +577,7 @@ def merge_multi_segment(segments):
     return s

-def yolo_bbox2segment(im_dir, save_dir=None, sam_model="sam_b.pt"):
+def yolo_bbox2segment(im_dir, save_dir=None, sam_model="sam_b.pt", device=None):
     """
     Converts existing object detection dataset (bounding boxes) to segmentation dataset or oriented bounding box (OBB)
     in YOLO format. Generates segmentation data using SAM auto-annotator as needed.
@@ -587,6 +587,7 @@ def yolo_bbox2segment(im_dir, save_dir=None, sam_model="sam_b.pt"):
         save_dir (str | Path): Path to save the generated labels, labels will be saved
             into `labels-segment` in the same directory level of `im_dir` if save_dir is None. Default: None.
         sam_model (str): Segmentation model to use for intermediate segmentation data; optional.
+        device (int | str): The specific device to run SAM models. Default: None.

     Notes:
         The input directory structure assumed for dataset:
@@ -621,7 +622,7 @@ def yolo_bbox2segment(im_dir, save_dir=None, sam_model="sam_b.pt"):
         boxes[:, [0, 2]] *= w
         boxes[:, [1, 3]] *= h
         im = cv2.imread(label["im_file"])
-        sam_results = sam_model(im, bboxes=xywh2xyxy(boxes), verbose=False, save=False)
+        sam_results = sam_model(im, bboxes=xywh2xyxy(boxes), verbose=False, save=False, device=device)
         label["segments"] = sam_results[0].masks.xyn
     save_dir = Path(save_dir) if save_dir else Path(im_dir).parent / "labels-segment"
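Example of the new device pass-through; a minimal sketch with placeholder paths, assuming a detection-format dataset and CUDA device 0:

from ultralytics.data.converter import yolo_bbox2segment

# Auto-annotate box labels into segment labels, running the SAM model on GPU 0.
yolo_bbox2segment(im_dir="path/to/dataset/images", save_dir=None, sam_model="sam_b.pt", device=0)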

@@ -354,7 +354,7 @@ class LoadImagesAndVideos:
         self.nf = ni + nv  # number of files
         self.ni = ni  # number of images
         self.video_flag = [False] * ni + [True] * nv
-        self.mode = "image"
+        self.mode = "video" if ni == 0 else "image"  # default to video if no images
         self.vid_stride = vid_stride  # video frame-rate stride
         self.bs = batch
         if any(videos):

@ -18,6 +18,7 @@ TensorFlow.js | `tfjs` | yolo11n_web_model/
PaddlePaddle | `paddle` | yolo11n_paddle_model/ PaddlePaddle | `paddle` | yolo11n_paddle_model/
MNN | `mnn` | yolo11n.mnn MNN | `mnn` | yolo11n.mnn
NCNN | `ncnn` | yolo11n_ncnn_model/ NCNN | `ncnn` | yolo11n_ncnn_model/
IMX | `imx` | yolo11n_imx_model/
Requirements: Requirements:
$ pip install "ultralytics[export]" $ pip install "ultralytics[export]"
@ -44,6 +45,7 @@ Inference:
yolo11n_paddle_model # PaddlePaddle yolo11n_paddle_model # PaddlePaddle
yolo11n.mnn # MNN yolo11n.mnn # MNN
yolo11n_ncnn_model # NCNN yolo11n_ncnn_model # NCNN
yolo11n_imx_model # IMX
TensorFlow.js: TensorFlow.js:
$ cd .. && git clone https://github.com/zldrobit/tfjs-yolov5-example.git && cd tfjs-yolov5-example $ cd .. && git clone https://github.com/zldrobit/tfjs-yolov5-example.git && cd tfjs-yolov5-example
@ -77,7 +79,6 @@ from ultralytics.utils import (
ARM64, ARM64,
DEFAULT_CFG, DEFAULT_CFG,
IS_JETSON, IS_JETSON,
IS_RASPBERRYPI,
LINUX, LINUX,
LOGGER, LOGGER,
MACOS, MACOS,
@ -94,7 +95,7 @@ from ultralytics.utils.checks import check_imgsz, check_is_path_safe, check_requ
from ultralytics.utils.downloads import attempt_download_asset, get_github_assets, safe_download from ultralytics.utils.downloads import attempt_download_asset, get_github_assets, safe_download
from ultralytics.utils.files import file_size, spaces_in_path from ultralytics.utils.files import file_size, spaces_in_path
from ultralytics.utils.ops import Profile from ultralytics.utils.ops import Profile
from ultralytics.utils.torch_utils import TORCH_1_13, get_latest_opset, select_device, smart_inference_mode from ultralytics.utils.torch_utils import TORCH_1_13, get_latest_opset, select_device
def export_formats(): def export_formats():
@ -114,6 +115,7 @@ def export_formats():
["PaddlePaddle", "paddle", "_paddle_model", True, True], ["PaddlePaddle", "paddle", "_paddle_model", True, True],
["MNN", "mnn", ".mnn", True, True], ["MNN", "mnn", ".mnn", True, True],
["NCNN", "ncnn", "_ncnn_model", True, True], ["NCNN", "ncnn", "_ncnn_model", True, True],
["IMX", "imx", "_imx_model", True, True],
] ]
return dict(zip(["Format", "Argument", "Suffix", "CPU", "GPU"], zip(*x))) return dict(zip(["Format", "Argument", "Suffix", "CPU", "GPU"], zip(*x)))
@ -171,7 +173,6 @@ class Exporter:
self.callbacks = _callbacks or callbacks.get_default_callbacks() self.callbacks = _callbacks or callbacks.get_default_callbacks()
callbacks.add_integration_callbacks(self) callbacks.add_integration_callbacks(self)
@smart_inference_mode()
def __call__(self, model=None) -> str: def __call__(self, model=None) -> str:
"""Returns list of exported files/dirs after running callbacks.""" """Returns list of exported files/dirs after running callbacks."""
self.run_callbacks("on_export_start") self.run_callbacks("on_export_start")
@ -194,9 +195,22 @@ class Exporter:
flags = [x == fmt for x in fmts] flags = [x == fmt for x in fmts]
if sum(flags) != 1: if sum(flags) != 1:
raise ValueError(f"Invalid export format='{fmt}'. Valid formats are {fmts}") raise ValueError(f"Invalid export format='{fmt}'. Valid formats are {fmts}")
jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle, mnn, ncnn = ( (
flags # export booleans jit,
) onnx,
xml,
engine,
coreml,
saved_model,
pb,
tflite,
edgetpu,
tfjs,
paddle,
mnn,
ncnn,
imx,
) = flags # export booleans
is_tf_format = any((saved_model, pb, tflite, edgetpu, tfjs)) is_tf_format = any((saved_model, pb, tflite, edgetpu, tfjs))
# Device # Device
@ -206,10 +220,14 @@ class Exporter:
self.args.device = "0" self.args.device = "0"
if fmt == "engine" and "dla" in str(self.args.device): # convert int/list to str first if fmt == "engine" and "dla" in str(self.args.device): # convert int/list to str first
dla = self.args.device.split(":")[-1] dla = self.args.device.split(":")[-1]
self.args.device = "0" # update device to "0"
assert dla in {"0", "1"}, f"Expected self.args.device='dla:0' or 'dla:1, but got {self.args.device}." assert dla in {"0", "1"}, f"Expected self.args.device='dla:0' or 'dla:1, but got {self.args.device}."
self.device = select_device("cpu" if self.args.device is None else self.args.device) self.device = select_device("cpu" if self.args.device is None else self.args.device)
# Checks # Checks
if imx and not self.args.int8:
LOGGER.warning("WARNING ⚠ IMX only supports int8 export, setting int8=True.")
self.args.int8 = True
if not hasattr(model, "names"): if not hasattr(model, "names"):
model.names = default_class_names() model.names = default_class_names()
model.names = check_class_names(model.names) model.names = check_class_names(model.names)
@ -247,8 +265,7 @@ class Exporter:
"WARNING ⚠ INT8 export requires a missing 'data' arg for calibration. " "WARNING ⚠ INT8 export requires a missing 'data' arg for calibration. "
f"Using default 'data={self.args.data}'." f"Using default 'data={self.args.data}'."
) )
if mnn and (IS_RASPBERRYPI or IS_JETSON):
raise SystemError("MNN export not supported on Raspberry Pi and NVIDIA Jetson")
# Input # Input
im = torch.zeros(self.args.batch, 3, *self.imgsz).to(self.device) im = torch.zeros(self.args.batch, 3, *self.imgsz).to(self.device)
file = Path( file = Path(
@ -264,6 +281,11 @@ class Exporter:
model.eval() model.eval()
model.float() model.float()
model = model.fuse() model = model.fuse()
if imx:
from ultralytics.utils.torch_utils import FXModel
model = FXModel(model)
for m in model.modules(): for m in model.modules():
if isinstance(m, (Detect, RTDETRDecoder)): # includes all Detect subclasses like Segment, Pose, OBB if isinstance(m, (Detect, RTDETRDecoder)): # includes all Detect subclasses like Segment, Pose, OBB
m.dynamic = self.args.dynamic m.dynamic = self.args.dynamic
@ -273,6 +295,15 @@ class Exporter:
elif isinstance(m, C2f) and not is_tf_format: elif isinstance(m, C2f) and not is_tf_format:
# EdgeTPU does not support FlexSplitV while split provides cleaner ONNX graph # EdgeTPU does not support FlexSplitV while split provides cleaner ONNX graph
m.forward = m.forward_split m.forward = m.forward_split
if isinstance(m, Detect) and imx:
from ultralytics.utils.tal import make_anchors
m.anchors, m.strides = (
x.transpose(0, 1)
for x in make_anchors(
torch.cat([s / m.stride.unsqueeze(-1) for s in self.imgsz], dim=1), m.stride, 0.5
)
)
y = None y = None
for _ in range(2): for _ in range(2):
@ -347,6 +378,8 @@ class Exporter:
f[11], _ = self.export_mnn() f[11], _ = self.export_mnn()
if ncnn: # NCNN if ncnn: # NCNN
f[12], _ = self.export_ncnn() f[12], _ = self.export_ncnn()
if imx:
f[13], _ = self.export_imx()
# Finish # Finish
f = [str(x) for x in f if x] # filter out '' and None f = [str(x) for x in f if x] # filter out '' and None
@ -469,8 +502,7 @@ class Exporter:
@try_export @try_export
def export_openvino(self, prefix=colorstr("OpenVINO:")): def export_openvino(self, prefix=colorstr("OpenVINO:")):
"""YOLO OpenVINO export.""" """YOLO OpenVINO export."""
# WARNING: numpy>=2.0.0 issue with OpenVINO on macOS https://github.com/ultralytics/ultralytics/pull/17221 check_requirements("openvino>=2024.5.0")
check_requirements(f'openvino{"<=2024.0.0" if ARM64 else ">=2024.0.0"}') # fix OpenVINO issue on ARM64
import openvino as ov import openvino as ov
LOGGER.info(f"\n{prefix} starting export with openvino {ov.__version__}...") LOGGER.info(f"\n{prefix} starting export with openvino {ov.__version__}...")
@ -498,7 +530,7 @@ class Exporter:
if self.args.int8: if self.args.int8:
fq = str(self.file).replace(self.file.suffix, f"_int8_openvino_model{os.sep}") fq = str(self.file).replace(self.file.suffix, f"_int8_openvino_model{os.sep}")
fq_ov = str(Path(fq) / self.file.with_suffix(".xml").name) fq_ov = str(Path(fq) / self.file.with_suffix(".xml").name)
check_requirements("nncf>=2.8.0") check_requirements("nncf>=2.14.0")
import nncf import nncf
def transform_fn(data_item) -> np.ndarray: def transform_fn(data_item) -> np.ndarray:
@ -568,8 +600,7 @@ class Exporter:
f = str(self.file.with_suffix(".mnn")) # MNN model file f = str(self.file.with_suffix(".mnn")) # MNN model file
args = ["", "-f", "ONNX", "--modelFile", f_onnx, "--MNNModel", f, "--bizCode", json.dumps(self.metadata)] args = ["", "-f", "ONNX", "--modelFile", f_onnx, "--MNNModel", f, "--bizCode", json.dumps(self.metadata)]
if self.args.int8: if self.args.int8:
args.append("--weightQuantBits") args.extend(("--weightQuantBits", "8"))
args.append("8")
if self.args.half: if self.args.half:
args.append("--fp16") args.append("--fp16")
mnnconvert.convert(args) mnnconvert.convert(args)
@ -751,10 +782,10 @@ class Exporter:
# Engine builder # Engine builder
builder = trt.Builder(logger) builder = trt.Builder(logger)
config = builder.create_builder_config() config = builder.create_builder_config()
workspace = int(self.args.workspace * (1 << 30)) workspace = int(self.args.workspace * (1 << 30)) if self.args.workspace is not None else 0
if is_trt10: if is_trt10 and workspace > 0:
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, workspace) config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, workspace)
else: # TensorRT versions 7, 8 elif workspace > 0 and not is_trt10: # TensorRT versions 7, 8
config.max_workspace_size = workspace config.max_workspace_size = workspace
flag = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH) flag = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
network = builder.create_network(flag) network = builder.create_network(flag)
@ -793,7 +824,7 @@ class Exporter:
LOGGER.warning(f"{prefix} WARNING ⚠ 'dynamic=True' model requires max batch size, i.e. 'batch=16'") LOGGER.warning(f"{prefix} WARNING ⚠ 'dynamic=True' model requires max batch size, i.e. 'batch=16'")
profile = builder.create_optimization_profile() profile = builder.create_optimization_profile()
min_shape = (1, shape[1], 32, 32) # minimum input shape min_shape = (1, shape[1], 32, 32) # minimum input shape
max_shape = (*shape[:2], *(int(max(1, self.args.workspace) * d) for d in shape[2:])) # max input shape max_shape = (*shape[:2], *(int(max(1, workspace) * d) for d in shape[2:])) # max input shape
for inp in inputs: for inp in inputs:
profile.set_shape(inp.name, min=min_shape, opt=shape, max=max_shape) profile.set_shape(inp.name, min=min_shape, opt=shape, max=max_shape)
config.add_optimization_profile(profile) config.add_optimization_profile(profile)
@ -1069,6 +1100,137 @@ class Exporter:
yaml_save(Path(f) / "metadata.yaml", self.metadata) # add metadata.yaml yaml_save(Path(f) / "metadata.yaml", self.metadata) # add metadata.yaml
return f, None return f, None
@try_export
def export_imx(self, prefix=colorstr("IMX:")):
"""YOLO IMX export."""
gptq = False
assert LINUX, "export only supported on Linux. See https://developer.aitrios.sony-semicon.com/en/raspberrypi-ai-camera/documentation/imx500-converter"
if getattr(self.model, "end2end", False):
raise ValueError("IMX export is not supported for end2end models.")
if "C2f" not in self.model.__str__():
raise ValueError("IMX export is only supported for YOLOv8 detection models")
check_requirements(("model-compression-toolkit==2.1.1", "sony-custom-layers==0.2.0", "tensorflow==2.12.0"))
check_requirements("imx500-converter[pt]==3.14.3") # Separate requirements for imx500-converter
import model_compression_toolkit as mct
import onnx
from sony_custom_layers.pytorch.object_detection.nms import multiclass_nms
try:
out = subprocess.run(
["java", "--version"], check=True, capture_output=True
) # Java 17 is required for imx500-converter
if "openjdk 17" not in str(out.stdout):
raise FileNotFoundError
except FileNotFoundError:
subprocess.run(["sudo", "apt", "install", "-y", "openjdk-17-jdk", "openjdk-17-jre"], check=True)
def representative_dataset_gen(dataloader=self.get_int8_calibration_dataloader(prefix)):
for batch in dataloader:
img = batch["img"]
img = img / 255.0
yield [img]
tpc = mct.get_target_platform_capabilities(
fw_name="pytorch", target_platform_name="imx500", target_platform_version="v1"
)
config = mct.core.CoreConfig(
mixed_precision_config=mct.core.MixedPrecisionQuantizationConfig(num_of_images=10),
quantization_config=mct.core.QuantizationConfig(concat_threshold_update=True),
)
resource_utilization = mct.core.ResourceUtilization(weights_memory=3146176 * 0.76)
quant_model = (
mct.gptq.pytorch_gradient_post_training_quantization( # Perform Gradient-Based Post Training Quantization
model=self.model,
representative_data_gen=representative_dataset_gen,
target_resource_utilization=resource_utilization,
gptq_config=mct.gptq.get_pytorch_gptq_config(n_epochs=1000, use_hessian_based_weights=False),
core_config=config,
target_platform_capabilities=tpc,
)[0]
if gptq
else mct.ptq.pytorch_post_training_quantization( # Perform post training quantization
in_module=self.model,
representative_data_gen=representative_dataset_gen,
target_resource_utilization=resource_utilization,
core_config=config,
target_platform_capabilities=tpc,
)[0]
)
class NMSWrapper(torch.nn.Module):
def __init__(
self,
model: torch.nn.Module,
score_threshold: float = 0.001,
iou_threshold: float = 0.7,
max_detections: int = 300,
):
"""
Wrapping PyTorch Module with multiclass_nms layer from sony_custom_layers.
Args:
model (nn.Module): Model instance.
score_threshold (float): Score threshold for non-maximum suppression.
iou_threshold (float): Intersection over union threshold for non-maximum suppression.
max_detections (float): The number of detections to return.
"""
super().__init__()
self.model = model
self.score_threshold = score_threshold
self.iou_threshold = iou_threshold
self.max_detections = max_detections
def forward(self, images):
# model inference
outputs = self.model(images)
boxes = outputs[0]
scores = outputs[1]
nms = multiclass_nms(
boxes=boxes,
scores=scores,
score_threshold=self.score_threshold,
iou_threshold=self.iou_threshold,
max_detections=self.max_detections,
)
return nms
quant_model = NMSWrapper(
model=quant_model,
score_threshold=self.args.conf or 0.001,
iou_threshold=self.args.iou,
max_detections=self.args.max_det,
).to(self.device)
f = Path(str(self.file).replace(self.file.suffix, "_imx_model"))
f.mkdir(exist_ok=True)
onnx_model = f / Path(str(self.file).replace(self.file.suffix, "_imx.onnx")) # js dir
mct.exporter.pytorch_export_model(
model=quant_model, save_model_path=onnx_model, repr_dataset=representative_dataset_gen
)
model_onnx = onnx.load(onnx_model) # load onnx model
for k, v in self.metadata.items():
meta = model_onnx.metadata_props.add()
meta.key, meta.value = k, str(v)
onnx.save(model_onnx, onnx_model)
subprocess.run(
["imxconv-pt", "-i", str(onnx_model), "-o", str(f), "--no-input-persistency", "--overwrite-output"],
check=True,
)
# Needed for imx models.
with open(f / "labels.txt", "w") as file:
file.writelines([f"{name}\n" for _, name in self.model.names.items()])
return f, None
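End-to-end usage of the new export path; a hedged sketch based on the format table above (Linux only, YOLOv8 detection models, INT8 is forced on and calibrated from the `data` dataset):

from ultralytics import YOLO

# Export to the Sony IMX500 format; writes a yolov8n_imx_model/ directory.
model = YOLO("yolov8n.pt")
model.export(format="imx", data="coco8.yaml")

# The exported directory can then be loaded back for inference like other formats.
imx_model = YOLO("yolov8n_imx_model")
results = imx_model("https://ultralytics.com/images/bus.jpg")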
def _add_tflite_metadata(self, file): def _add_tflite_metadata(self, file):
"""Add metadata to *.tflite models per https://www.tensorflow.org/lite/models/convert/metadata.""" """Add metadata to *.tflite models per https://www.tensorflow.org/lite/models/convert/metadata."""
import flatbuffers import flatbuffers

@@ -2,7 +2,7 @@
 import inspect
 from pathlib import Path
-from typing import List, Union
+from typing import Dict, List, Union

 import numpy as np
 import torch
@@ -881,7 +881,7 @@ class Model(nn.Module):
         return self

     @property
-    def names(self) -> list:
+    def names(self) -> Dict[int, str]:
         """
         Retrieves the class names associated with the loaded model.
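The tightened return type in practice; a small sketch assuming a standard detection checkpoint:

from ultralytics import YOLO

model = YOLO("yolo11n.pt")
names = model.names  # Dict[int, str] mapping class indices to class names
print(names[0])  # "person" for COCO-pretrained weights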
@@ -1126,3 +1126,20 @@ class Model(nn.Module):
            description of the expected behavior and structure.
        """
        raise NotImplementedError("Please provide task map for your model!")
def eval(self):
"""
Sets the model to evaluation mode.
This method changes the model's mode to evaluation, which affects layers like dropout and batch normalization
that behave differently during training and evaluation.
Returns:
(Model): The model instance with evaluation mode set.
Examples:
>>> model = YOLO("yolo11n.pt")
>>> model.eval()
"""
self.model.eval()
return self

@@ -153,7 +153,11 @@ class BasePredictor:
            (list): A list of transformed images.
        """
        same_shapes = len({x.shape for x in im}) == 1
-        letterbox = LetterBox(self.imgsz, auto=same_shapes and self.model.pt, stride=self.model.stride)
+        letterbox = LetterBox(
+            self.imgsz,
+            auto=same_shapes and (self.model.pt or getattr(self.model, "dynamic", False)),
+            stride=self.model.stride,
+        )
        return [letterbox(image=x) for x in im]

    def postprocess(self, preds, img, orig_imgs):

@@ -535,9 +535,9 @@ class Results(SimpleClass):
        # Plot Detect results
        if pred_boxes is not None and show_boxes:
            for i, d in enumerate(reversed(pred_boxes)):
-                c, conf, id = int(d.cls), float(d.conf) if conf else None, None if d.id is None else int(d.id.item())
+                c, d_conf, id = int(d.cls), float(d.conf) if conf else None, None if d.id is None else int(d.id.item())
                name = ("" if id is None else f"id:{id} ") + names[c]
-                label = (f"{name} {conf:.2f}" if conf else name) if labels else None
+                label = (f"{name} {d_conf:.2f}" if conf else name) if labels else None
                box = d.xyxyxyxy.reshape(-1, 4, 2).squeeze() if is_obb else d.xyxy.squeeze()
                annotator.box_label(
                    box,
@@ -750,7 +750,7 @@ class Results(SimpleClass):
                save_one_box(
                    d.xyxy,
                    self.orig_img.copy(),
-                    file=Path(save_dir) / self.names[int(d.cls)] / f"{Path(file_name)}.jpg",
+                    file=Path(save_dir) / self.names[int(d.cls)] / Path(file_name).with_suffix(".jpg"),
                    BGR=True,
                )

@@ -279,12 +279,7 @@ class BaseTrainer:
        # Batch size
        if self.batch_size < 1 and RANK == -1:  # single-GPU only, estimate best batch size
-            self.args.batch = self.batch_size = check_train_batch_size(
-                model=self.model,
-                imgsz=self.args.imgsz,
-                amp=self.amp,
-                batch=self.batch_size,
-            )
+            self.args.batch = self.batch_size = self.auto_batch()

        # Dataloaders
        batch_size = self.batch_size // max(world_size, 1)
@@ -478,6 +473,16 @@
        self._clear_memory()
        self.run_callbacks("teardown")
def auto_batch(self, max_num_obj=0):
"""Get batch size by calculating memory occupation of model."""
return check_train_batch_size(
model=self.model,
imgsz=self.args.imgsz,
amp=self.amp,
batch=self.batch_size,
max_num_obj=max_num_obj,
) # returns batch size
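The auto-batch path is still triggered by requesting a batch size below 1 on a single GPU, which now routes through auto_batch(); a minimal sketch:

from ultralytics import YOLO

# batch=-1 asks the trainer to estimate the largest batch size that fits in memory.
model = YOLO("yolo11n.pt")
model.train(data="coco8.yaml", epochs=1, imgsz=640, batch=-1)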
    def _get_memory(self):
        """Get accelerator memory utilization in GB."""
        if self.device.type == "mps":
@@ -792,7 +797,7 @@ class BaseTrainer:
                g[0].append(param)

        optimizers = {"Adam", "Adamax", "AdamW", "NAdam", "RAdam", "RMSProp", "SGD", "auto"}
-        name = {x.lower(): x for x in optimizers}.get(name.lower(), None)
+        name = {x.lower(): x for x in optimizers}.get(name.lower())
        if name in {"Adam", "Adamax", "AdamW", "NAdam", "RAdam"}:
            optimizer = getattr(optim, name, optim.Adam)(g[2], lr=lr, betas=(momentum, 0.999), weight_decay=0.0)
        elif name == "RMSProp":

@@ -64,6 +64,9 @@ class FastSAMPredictor(SegmentationPredictor):
        if not isinstance(results, list):
            results = [results]
        for result in results:
+            if len(result) == 0:
+                prompt_results.append(result)
+                continue
            masks = result.masks.data
            if masks.shape[1:] != result.orig_shape:
                masks = scale_masks(masks[None], result.orig_shape)[0]

@@ -68,8 +68,11 @@ class RTDETRTrainer(DetectionTrainer):
            hyp=self.args,
            rect=False,
            cache=self.args.cache or None,
+            single_cls=self.args.single_cls or False,
            prefix=colorstr(f"{mode}: "),
+            classes=self.args.classes,
            data=self.data,
+            fraction=self.args.fraction if mode == "train" else 1.0,
        )

    def get_validator(self):

@@ -1,6 +1,6 @@
 # Ultralytics YOLO 🚀, AGPL-3.0 license

 from .model import SAM
-from .predict import Predictor, SAM2Predictor
+from .predict import Predictor, SAM2Predictor, SAM2VideoPredictor

-__all__ = "SAM", "Predictor", "SAM2Predictor"  # tuple or list
+__all__ = "SAM", "Predictor", "SAM2Predictor", "SAM2VideoPredictor"  # tuple or list

@ -148,7 +148,7 @@ class SAM(Model):
verbose (bool): If True, prints the information to the console. verbose (bool): If True, prints the information to the console.
Returns: Returns:
(Tuple): A tuple containing the model's information (string representations of the model). (tuple): A tuple containing the model's information (string representations of the model).
Examples: Examples:
>>> sam = SAM("sam_b.pt") >>> sam = SAM("sam_b.pt")

@ -36,8 +36,6 @@ class SAMModel(nn.Module):
image_encoder (ImageEncoderViT): Backbone for encoding images into embeddings. image_encoder (ImageEncoderViT): Backbone for encoding images into embeddings.
prompt_encoder (PromptEncoder): Encoder for various types of input prompts. prompt_encoder (PromptEncoder): Encoder for various types of input prompts.
mask_decoder (MaskDecoder): Predicts object masks from image and prompt embeddings. mask_decoder (MaskDecoder): Predicts object masks from image and prompt embeddings.
pixel_mean (torch.Tensor): Mean pixel values for image normalization, shape (3, 1, 1).
pixel_std (torch.Tensor): Standard deviation values for image normalization, shape (3, 1, 1).
Methods: Methods:
__init__: Initializes the SAMModel with encoders, decoder, and normalization parameters. __init__: Initializes the SAMModel with encoders, decoder, and normalization parameters.
@ -349,8 +347,7 @@ class SAM2Model(torch.nn.Module):
self.sam_prompt_embed_dim = self.hidden_dim self.sam_prompt_embed_dim = self.hidden_dim
self.sam_image_embedding_size = self.image_size // self.backbone_stride self.sam_image_embedding_size = self.image_size // self.backbone_stride
# build PromptEncoder and MaskDecoder from SAM # Build PromptEncoder and MaskDecoder from SAM (hyperparameters like `mask_in_chans=16` are from SAM code)
# (their hyperparameters like `mask_in_chans=16` are from SAM code)
self.sam_prompt_encoder = PromptEncoder( self.sam_prompt_encoder = PromptEncoder(
embed_dim=self.sam_prompt_embed_dim, embed_dim=self.sam_prompt_embed_dim,
image_embedding_size=( image_embedding_size=(
@ -425,8 +422,8 @@ class SAM2Model(torch.nn.Module):
low_res_multimasks: Tensor of shape (B, M, H*4, W*4) with SAM output mask logits. low_res_multimasks: Tensor of shape (B, M, H*4, W*4) with SAM output mask logits.
high_res_multimasks: Tensor of shape (B, M, H*16, W*16) with upsampled mask logits. high_res_multimasks: Tensor of shape (B, M, H*16, W*16) with upsampled mask logits.
ious: Tensor of shape (B, M) with estimated IoU for each output mask. ious: Tensor of shape (B, M) with estimated IoU for each output mask.
low_res_masks: Tensor of shape (B, 1, H*4, W*4) with best low-resolution mask. low_res_masks: Tensor of shape (B, 1, H*4, W*4) with the best low-resolution mask.
high_res_masks: Tensor of shape (B, 1, H*16, W*16) with best high-resolution mask. high_res_masks: Tensor of shape (B, 1, H*16, W*16) with the best high-resolution mask.
obj_ptr: Tensor of shape (B, C) with object pointer vector for the output mask. obj_ptr: Tensor of shape (B, C) with object pointer vector for the output mask.
object_score_logits: Tensor of shape (B,) with object score logits. object_score_logits: Tensor of shape (B,) with object score logits.
@ -488,12 +485,7 @@ class SAM2Model(torch.nn.Module):
boxes=None, boxes=None,
masks=sam_mask_prompt, masks=sam_mask_prompt,
) )
( low_res_multimasks, ious, sam_output_tokens, object_score_logits = self.sam_mask_decoder(
low_res_multimasks,
ious,
sam_output_tokens,
object_score_logits,
) = self.sam_mask_decoder(
image_embeddings=backbone_features, image_embeddings=backbone_features,
image_pe=self.sam_prompt_encoder.get_dense_pe(), image_pe=self.sam_prompt_encoder.get_dense_pe(),
sparse_prompt_embeddings=sparse_embeddings, sparse_prompt_embeddings=sparse_embeddings,
@ -505,13 +497,8 @@ class SAM2Model(torch.nn.Module):
if self.pred_obj_scores: if self.pred_obj_scores:
is_obj_appearing = object_score_logits > 0 is_obj_appearing = object_score_logits > 0
# Mask used for spatial memories is always a *hard* choice between obj and no obj, # Spatial memory mask is a *hard* choice between obj and no obj, consistent with actual mask prediction
# consistent with the actual mask prediction low_res_multimasks = torch.where(is_obj_appearing[:, None, None], low_res_multimasks, NO_OBJ_SCORE)
low_res_multimasks = torch.where(
is_obj_appearing[:, None, None],
low_res_multimasks,
NO_OBJ_SCORE,
)
# convert masks from possibly bfloat16 (or float16) to float32 # convert masks from possibly bfloat16 (or float16) to float32
# (older PyTorch versions before 2.1 don't support `interpolate` on bf16) # (older PyTorch versions before 2.1 don't support `interpolate` on bf16)
@ -617,7 +604,6 @@ class SAM2Model(torch.nn.Module):
def _prepare_backbone_features(self, backbone_out): def _prepare_backbone_features(self, backbone_out):
"""Prepares and flattens visual features from the image backbone output for further processing.""" """Prepares and flattens visual features from the image backbone output for further processing."""
backbone_out = backbone_out.copy()
assert len(backbone_out["backbone_fpn"]) == len(backbone_out["vision_pos_enc"]) assert len(backbone_out["backbone_fpn"]) == len(backbone_out["vision_pos_enc"])
assert len(backbone_out["backbone_fpn"]) >= self.num_feature_levels assert len(backbone_out["backbone_fpn"]) >= self.num_feature_levels
@ -826,11 +812,7 @@ class SAM2Model(torch.nn.Module):
mask_for_mem = mask_for_mem * self.sigmoid_scale_for_mem_enc mask_for_mem = mask_for_mem * self.sigmoid_scale_for_mem_enc
if self.sigmoid_bias_for_mem_enc != 0.0: if self.sigmoid_bias_for_mem_enc != 0.0:
mask_for_mem = mask_for_mem + self.sigmoid_bias_for_mem_enc mask_for_mem = mask_for_mem + self.sigmoid_bias_for_mem_enc
maskmem_out = self.memory_encoder( maskmem_out = self.memory_encoder(pix_feat, mask_for_mem, skip_mask_sigmoid=True) # sigmoid already applied
pix_feat,
mask_for_mem,
skip_mask_sigmoid=True, # sigmoid already applied
)
maskmem_features = maskmem_out["vision_features"] maskmem_features = maskmem_out["vision_features"]
maskmem_pos_enc = maskmem_out["vision_pos_enc"] maskmem_pos_enc = maskmem_out["vision_pos_enc"]
# add a no-object embedding to the spatial memory to indicate that the frame # add a no-object embedding to the spatial memory to indicate that the frame
@ -965,16 +947,7 @@ class SAM2Model(torch.nn.Module):
track_in_reverse, track_in_reverse,
prev_sam_mask_logits, prev_sam_mask_logits,
) )
_, _, _, low_res_masks, high_res_masks, obj_ptr, object_score_logits = sam_outputs
(
_,
_,
_,
low_res_masks,
high_res_masks,
obj_ptr,
object_score_logits,
) = sam_outputs
current_out["pred_masks"] = low_res_masks current_out["pred_masks"] = low_res_masks
current_out["pred_masks_high_res"] = high_res_masks current_out["pred_masks_high_res"] = high_res_masks
@ -984,8 +957,7 @@ class SAM2Model(torch.nn.Module):
# it's mainly used in the demo to encode spatial memories w/ consolidated masks) # it's mainly used in the demo to encode spatial memories w/ consolidated masks)
current_out["object_score_logits"] = object_score_logits current_out["object_score_logits"] = object_score_logits
# Finally run the memory encoder on the predicted mask to encode # Run memory encoder on the predicted mask to encode it into a new memory feature (for use in future frames)
# it into a new memory feature (that can be used in future frames)
self._encode_memory_in_output( self._encode_memory_in_output(
current_vision_feats, current_vision_feats,
feat_sizes, feat_sizes,
@ -1007,8 +979,9 @@ class SAM2Model(torch.nn.Module):
and (self.multimask_min_pt_num <= num_pts <= self.multimask_max_pt_num) and (self.multimask_min_pt_num <= num_pts <= self.multimask_max_pt_num)
) )
def _apply_non_overlapping_constraints(self, pred_masks): @staticmethod
"""Applies non-overlapping constraints to masks, keeping highest scoring object per location.""" def _apply_non_overlapping_constraints(pred_masks):
"""Applies non-overlapping constraints to masks, keeping the highest scoring object per location."""
batch_size = pred_masks.size(0) batch_size = pred_masks.size(0)
if batch_size == 1: if batch_size == 1:
return pred_masks return pred_masks
@ -1024,6 +997,10 @@ class SAM2Model(torch.nn.Module):
pred_masks = torch.where(keep, pred_masks, torch.clamp(pred_masks, max=-10.0)) pred_masks = torch.where(keep, pred_masks, torch.clamp(pred_masks, max=-10.0))
return pred_masks return pred_masks
def set_binarize(self, binarize=False):
"""Set binarize for VideoPredictor."""
self.binarize_mask_from_pts_for_mem_enc = binarize
def set_imgsz(self, imgsz): def set_imgsz(self, imgsz):
""" """
Set image size to make model compatible with different image sizes. Set image size to make model compatible with different image sizes.

@ -8,6 +8,8 @@ using SAM. It forms an integral part of the Ultralytics framework and is designe
segmentation tasks. segmentation tasks.
""" """
from collections import OrderedDict
import numpy as np import numpy as np
import torch import torch
import torch.nn.functional as F import torch.nn.functional as F
@ -16,7 +18,7 @@ from ultralytics.data.augment import LetterBox
from ultralytics.engine.predictor import BasePredictor from ultralytics.engine.predictor import BasePredictor
from ultralytics.engine.results import Results from ultralytics.engine.results import Results
from ultralytics.utils import DEFAULT_CFG, ops from ultralytics.utils import DEFAULT_CFG, ops
from ultralytics.utils.torch_utils import select_device from ultralytics.utils.torch_utils import select_device, smart_inference_mode
from .amg import ( from .amg import (
batch_iterator, batch_iterator,
@ -95,7 +97,7 @@ class Predictor(BasePredictor):
""" """
if overrides is None: if overrides is None:
overrides = {} overrides = {}
overrides.update(dict(task="segment", mode="predict")) overrides.update(dict(task="segment", mode="predict", batch=1))
super().__init__(cfg, overrides, _callbacks) super().__init__(cfg, overrides, _callbacks)
self.args.retina_masks = True self.args.retina_masks = True
self.im = None self.im = None
@ -114,7 +116,7 @@ class Predictor(BasePredictor):
im (torch.Tensor | List[np.ndarray]): Input image(s) in BCHW tensor format or list of HWC numpy arrays. im (torch.Tensor | List[np.ndarray]): Input image(s) in BCHW tensor format or list of HWC numpy arrays.
Returns: Returns:
(torch.Tensor): The preprocessed image tensor, normalized and converted to the appropriate dtype. im (torch.Tensor): The preprocessed image tensor, normalized and converted to the appropriate dtype.
Examples: Examples:
>>> predictor = Predictor() >>> predictor = Predictor()
@ -181,10 +183,9 @@ class Predictor(BasePredictor):
**kwargs (Any): Additional keyword arguments. **kwargs (Any): Additional keyword arguments.
Returns: Returns:
(tuple): Contains the following three elements: (np.ndarray): The output masks in shape (C, H, W), where C is the number of generated masks.
- np.ndarray: The output masks in shape (C, H, W), where C is the number of generated masks. (np.ndarray): An array of length C containing quality scores predicted by the model for each mask.
- np.ndarray: An array of length C containing quality scores predicted by the model for each mask. (np.ndarray): Low-resolution logits of shape (C, H, W) for subsequent inference, where H=W=256.
- np.ndarray: Low-resolution logits of shape (C, H, W) for subsequent inference, where H=W=256.
Examples: Examples:
>>> predictor = Predictor() >>> predictor = Predictor()
@ -222,10 +223,8 @@ class Predictor(BasePredictor):
AssertionError: If the number of points don't match the number of labels, in case labels were passed. AssertionError: If the number of points don't match the number of labels, in case labels were passed.
Returns: Returns:
(tuple): Tuple containing: (np.ndarray): Output masks with shape (C, H, W), where C is the number of generated masks.
- np.ndarray: Output masks with shape (C, H, W), where C is the number of generated masks. (np.ndarray): Quality scores predicted by the model for each mask, with length C.
- np.ndarray: Quality scores predicted by the model for each mask, with length C.
- np.ndarray: Low-resolution logits with shape (C, H, W) for subsequent inference, where H=W=256.
Examples: Examples:
>>> predictor = Predictor() >>> predictor = Predictor()
@ -329,10 +328,9 @@ class Predictor(BasePredictor):
crop_nms_thresh (float): IoU cutoff for NMS to remove duplicate masks between crops. crop_nms_thresh (float): IoU cutoff for NMS to remove duplicate masks between crops.
Returns: Returns:
(Tuple[torch.Tensor, torch.Tensor, torch.Tensor]): A tuple containing: pred_masks (torch.Tensor): Segmented masks with shape (N, H, W).
- pred_masks (torch.Tensor): Segmented masks with shape (N, H, W). pred_scores (torch.Tensor): Confidence scores for each mask with shape (N,).
- pred_scores (torch.Tensor): Confidence scores for each mask with shape (N,). pred_bboxes (torch.Tensor): Bounding boxes for each mask with shape (N, 4).
- pred_bboxes (torch.Tensor): Bounding boxes for each mask with shape (N, 4).
Examples: Examples:
>>> predictor = Predictor() >>> predictor = Predictor()
@ -408,7 +406,7 @@ class Predictor(BasePredictor):
return pred_masks, pred_scores, pred_bboxes return pred_masks, pred_scores, pred_bboxes
def setup_model(self, model, verbose=True): def setup_model(self, model=None, verbose=True):
""" """
Initializes the Segment Anything Model (SAM) for inference. Initializes the Segment Anything Model (SAM) for inference.
@ -416,7 +414,7 @@ class Predictor(BasePredictor):
parameters for image normalization and other Ultralytics compatibility settings. parameters for image normalization and other Ultralytics compatibility settings.
Args: Args:
model (torch.nn.Module): A pre-trained SAM model. If None, a model will be built based on configuration. model (torch.nn.Module | None): A pretrained SAM model. If None, a new model is built based on config.
verbose (bool): If True, prints selected device information. verbose (bool): If True, prints selected device information.
Examples: Examples:
@ -459,7 +457,7 @@ class Predictor(BasePredictor):
orig_imgs (List[np.ndarray] | torch.Tensor): The original, unprocessed images. orig_imgs (List[np.ndarray] | torch.Tensor): The original, unprocessed images.
Returns: Returns:
(List[Results]): List of Results objects containing detection masks, bounding boxes, and other results (List[Results]): List of Results objects containing detection masks, bounding boxes, and other
metadata for each processed image. metadata for each processed image.
Examples: Examples:
@ -586,9 +584,8 @@ class Predictor(BasePredictor):
nms_thresh (float): IoU threshold for the NMS algorithm to remove duplicate boxes. nms_thresh (float): IoU threshold for the NMS algorithm to remove duplicate boxes.
Returns: Returns:
(tuple): new_masks (torch.Tensor): Processed masks with small regions removed, shape (N, H, W).
- new_masks (torch.Tensor): Processed masks with small regions removed, shape (N, H, W). keep (List[int]): Indices of remaining masks after NMS, for filtering corresponding boxes.
- keep (List[int]): Indices of remaining masks after NMS, for filtering corresponding boxes.
Examples: Examples:
>>> masks = torch.rand(5, 640, 640) > 0.5 # 5 random binary masks >>> masks = torch.rand(5, 640, 640) > 0.5 # 5 random binary masks
@ -690,10 +687,8 @@ class SAM2Predictor(Predictor):
img_idx (int): Index of the image in the batch to process. img_idx (int): Index of the image in the batch to process.
Returns: Returns:
(tuple): Tuple containing: (np.ndarray): Output masks with shape (C, H, W), where C is the number of generated masks.
- np.ndarray: Output masks with shape (C, H, W), where C is the number of generated masks. (np.ndarray): Quality scores for each mask, with length C.
- np.ndarray: Quality scores for each mask, with length C.
- np.ndarray: Low-resolution logits with shape (C, 256, 256) for subsequent inference.
Examples: Examples:
>>> predictor = SAM2Predictor(cfg) >>> predictor = SAM2Predictor(cfg)
@ -712,7 +707,7 @@ class SAM2Predictor(Predictor):
""" """
features = self.get_im_features(im) if self.features is None else self.features features = self.get_im_features(im) if self.features is None else self.features
bboxes, points, labels, masks = self._prepare_prompts(im.shape[2:], bboxes, points, labels, masks) points, labels, masks = self._prepare_prompts(im.shape[2:], bboxes, points, labels, masks)
points = (points, labels) if points is not None else None points = (points, labels) if points is not None else None
sparse_embeddings, dense_embeddings = self.model.sam_prompt_encoder( sparse_embeddings, dense_embeddings = self.model.sam_prompt_encoder(
@ -751,7 +746,7 @@ class SAM2Predictor(Predictor):
AssertionError: If the number of points don't match the number of labels, in case labels were passed. AssertionError: If the number of points don't match the number of labels, in case labels were passed.
Returns: Returns:
(tuple): A tuple containing transformed bounding boxes, points, labels, and masks. (tuple): A tuple containing transformed points, labels, and masks.
""" """
bboxes, points, labels, masks = super()._prepare_prompts(dst_shape, bboxes, points, labels, masks) bboxes, points, labels, masks = super()._prepare_prompts(dst_shape, bboxes, points, labels, masks)
if bboxes is not None: if bboxes is not None:
@ -764,7 +759,7 @@ class SAM2Predictor(Predictor):
labels = torch.cat([bbox_labels, labels], dim=1) labels = torch.cat([bbox_labels, labels], dim=1)
else: else:
points, labels = bboxes, bbox_labels points, labels = bboxes, bbox_labels
return bboxes, points, labels, masks return points, labels, masks
def set_image(self, image): def set_image(self, image):
""" """
@ -815,3 +810,797 @@ class SAM2Predictor(Predictor):
for feat, feat_size in zip(vision_feats[::-1], self._bb_feat_sizes[::-1]) for feat, feat_size in zip(vision_feats[::-1], self._bb_feat_sizes[::-1])
][::-1] ][::-1]
return {"image_embed": feats[-1], "high_res_feats": feats[:-1]} return {"image_embed": feats[-1], "high_res_feats": feats[:-1]}
class SAM2VideoPredictor(SAM2Predictor):
"""
SAM2VideoPredictor to handle user interactions with videos and manage inference states.
This class extends the functionality of SAM2Predictor to support video processing and maintains
the state of inference operations. It includes configurations for managing non-overlapping masks,
clearing memory for non-conditional inputs, and setting up callbacks for prediction events.
Attributes:
inference_state (Dict): A dictionary to store the current state of inference operations.
non_overlap_masks (bool): A flag indicating whether masks should be non-overlapping.
clear_non_cond_mem_around_input (bool): A flag to control clearing non-conditional memory around inputs.
clear_non_cond_mem_for_multi_obj (bool): A flag to control clearing non-conditional memory for multi-object scenarios.
callbacks (Dict): A dictionary of callbacks for various prediction lifecycle events.
Args:
cfg (Dict, Optional): Configuration settings for the predictor. Defaults to DEFAULT_CFG.
overrides (Dict, Optional): Additional configuration overrides. Defaults to None.
_callbacks (List, Optional): Custom callbacks to be added. Defaults to None.
Note:
The `fill_hole_area` attribute is defined but not used in the current implementation.
"""
# fill_hole_area = 8 # not used
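# Illustrative usage sketch (not part of this changeset): driving the video predictor with a
# single point prompt on the first frame and letting it propagate the mask through the video.
# The weights name "sam2_b.pt" and the prompt values are assumptions for the example.
from ultralytics.models.sam import SAM2VideoPredictor

overrides = dict(conf=0.25, task="segment", mode="predict", imgsz=1024, model="sam2_b.pt")
predictor = SAM2VideoPredictor(overrides=overrides)
results = predictor(source="video.mp4", points=[920, 470], labels=[1])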
def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None):
"""
Initialize the predictor with configuration and optional overrides.
This constructor initializes the SAM2VideoPredictor with a given configuration, applies any
specified overrides, and sets up the inference state along with certain flags
that control the behavior of the predictor.
Args:
cfg (Dict): Configuration dictionary containing default settings.
overrides (Dict | None): Dictionary of values to override default configuration.
_callbacks (Dict | None): Dictionary of callback functions to customize behavior.
Examples:
>>> predictor = SAM2VideoPredictor(cfg=DEFAULT_CFG)
>>> predictor = SAM2VideoPredictor(overrides={"imgsz": 640})
>>> predictor = SAM2VideoPredictor(_callbacks={"on_predict_start": custom_callback})
"""
super().__init__(cfg, overrides, _callbacks)
self.inference_state = {}
self.non_overlap_masks = True
self.clear_non_cond_mem_around_input = False
self.clear_non_cond_mem_for_multi_obj = False
self.callbacks["on_predict_start"].append(self.init_state)
def get_model(self):
"""
Retrieves and configures the model with binarization enabled.
Note:
This method overrides the base class implementation to set the binarize flag to True.
"""
model = super().get_model()
model.set_binarize(True)
return model
def inference(self, im, bboxes=None, points=None, labels=None, masks=None):
"""
Perform image segmentation inference based on the given input cues, using the currently loaded image. This
method leverages SAM's (Segment Anything Model) architecture consisting of image encoder, prompt encoder, and
mask decoder for real-time and promptable segmentation tasks.
Args:
im (torch.Tensor): The preprocessed input image in tensor format, with shape (N, C, H, W).
bboxes (np.ndarray | List, optional): Bounding boxes with shape (N, 4), in XYXY format.
points (np.ndarray | List, optional): Points indicating object locations with shape (N, 2), in pixels.
labels (np.ndarray | List, optional): Labels for point prompts, shape (N, ). 1 = foreground, 0 = background.
masks (np.ndarray, optional): Low-resolution masks from previous predictions shape (N,H,W). For SAM H=W=256.
Returns:
(np.ndarray): The output masks in shape CxHxW, where C is the number of generated masks.
(np.ndarray): An array of length C containing quality scores predicted by the model for each mask.
"""
# Override prompts if any stored in self.prompts
bboxes = self.prompts.pop("bboxes", bboxes)
points = self.prompts.pop("points", points)
masks = self.prompts.pop("masks", masks)
frame = self.dataset.frame
self.inference_state["im"] = im
output_dict = self.inference_state["output_dict"]
if len(output_dict["cond_frame_outputs"]) == 0: # initialize prompts
points, labels, masks = self._prepare_prompts(im.shape[2:], bboxes, points, labels, masks)
if points is not None:
for i in range(len(points)):
self.add_new_prompts(obj_id=i, points=points[[i]], labels=labels[[i]], frame_idx=frame)
elif masks is not None:
for i in range(len(masks)):
self.add_new_prompts(obj_id=i, masks=masks[[i]], frame_idx=frame)
self.propagate_in_video_preflight()
consolidated_frame_inds = self.inference_state["consolidated_frame_inds"]
batch_size = len(self.inference_state["obj_idx_to_id"])
if len(output_dict["cond_frame_outputs"]) == 0:
raise RuntimeError("No points are provided; please add points first")
if frame in consolidated_frame_inds["cond_frame_outputs"]:
storage_key = "cond_frame_outputs"
current_out = output_dict[storage_key][frame]
if self.clear_non_cond_mem_around_input and (self.clear_non_cond_mem_for_multi_obj or batch_size <= 1):
# clear non-conditioning memory of the surrounding frames
self._clear_non_cond_mem_around_input(frame)
elif frame in consolidated_frame_inds["non_cond_frame_outputs"]:
storage_key = "non_cond_frame_outputs"
current_out = output_dict[storage_key][frame]
else:
storage_key = "non_cond_frame_outputs"
current_out = self._run_single_frame_inference(
output_dict=output_dict,
frame_idx=frame,
batch_size=batch_size,
is_init_cond_frame=False,
point_inputs=None,
mask_inputs=None,
reverse=False,
run_mem_encoder=True,
)
output_dict[storage_key][frame] = current_out
# Create slices of per-object outputs for subsequent interaction with each
# individual object after tracking.
self._add_output_per_object(frame, current_out, storage_key)
self.inference_state["frames_already_tracked"].append(frame)
pred_masks = current_out["pred_masks"].flatten(0, 1)
pred_masks = pred_masks[(pred_masks > self.model.mask_threshold).sum((1, 2)) > 0] # filter blank masks
return pred_masks, torch.ones(len(pred_masks), dtype=pred_masks.dtype, device=pred_masks.device)
def postprocess(self, preds, img, orig_imgs):
"""
Post-processes the predictions to apply non-overlapping constraints if required.
This method extends the post-processing functionality by applying non-overlapping constraints
to the predicted masks if the `non_overlap_masks` flag is set to True. This ensures that
the masks do not overlap, which can be useful for certain applications.
Args:
preds (Tuple[torch.Tensor]): The predictions from the model.
img (torch.Tensor): The processed image tensor.
orig_imgs (List[np.ndarray]): The original images before processing.
Returns:
results (list): The post-processed predictions.
Note:
If `non_overlap_masks` is True, the method applies constraints to ensure non-overlapping masks.
"""
results = super().postprocess(preds, img, orig_imgs)
if self.non_overlap_masks:
for result in results:
if result.masks is None or len(result.masks) == 0:
continue
result.masks.data = self.model._apply_non_overlapping_constraints(result.masks.data.unsqueeze(0))[0]
return results
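# Minimal sketch (assumed shapes) of the non-overlapping constraint applied above to mask
# logits of shape (N, 1, H, W): at each pixel only the highest-scoring object keeps its
# logits; every other object is clamped to a strongly negative value so it binarizes to background.
import torch

def non_overlapping_sketch(pred_masks: torch.Tensor) -> torch.Tensor:
    max_obj_inds = torch.argmax(pred_masks, dim=0, keepdim=True)  # winning object per pixel
    batch_obj_inds = torch.arange(pred_masks.size(0), device=pred_masks.device)[:, None, None, None]
    keep = max_obj_inds == batch_obj_inds
    return torch.where(keep, pred_masks, torch.clamp(pred_masks, max=-10.0))

masks = non_overlapping_sketch(torch.randn(3, 1, 8, 8))  # e.g. three objects on an 8x8 grid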
@smart_inference_mode()
def add_new_prompts(
self,
obj_id,
points=None,
labels=None,
masks=None,
frame_idx=0,
):
"""
Adds new points or masks to a specific frame for a given object ID.
This method updates the inference state with new prompts (points or masks) for a specified
object and frame index. It ensures that the prompts are either points or masks, but not both,
and updates the internal state accordingly. It also handles the generation of new segmentations
based on the provided prompts and the existing state.
Args:
obj_id (int): The ID of the object to which the prompts are associated.
points (torch.Tensor, Optional): The coordinates of the points of interest. Defaults to None.
labels (torch.Tensor, Optional): The labels corresponding to the points. Defaults to None.
masks (torch.Tensor, optional): Binary masks for the object. Defaults to None.
frame_idx (int, optional): The index of the frame to which the prompts are applied. Defaults to 0.
Returns:
(tuple): A tuple containing the flattened predicted masks and a tensor of ones indicating the number of objects.
Raises:
AssertionError: If both `masks` and `points` are provided, or neither is provided.
Note:
- Only one type of prompt (either points or masks) can be added per call.
- If the frame is being tracked for the first time, it is treated as an initial conditioning frame.
- The method handles the consolidation of outputs and resizing of masks to the original video resolution.
"""
assert (masks is None) ^ (points is None), "'masks' and 'points' prompts are not compatible with each other."
obj_idx = self._obj_id_to_idx(obj_id)
point_inputs = None
pop_key = "point_inputs_per_obj"
if points is not None:
point_inputs = {"point_coords": points, "point_labels": labels}
self.inference_state["point_inputs_per_obj"][obj_idx][frame_idx] = point_inputs
pop_key = "mask_inputs_per_obj"
self.inference_state["mask_inputs_per_obj"][obj_idx][frame_idx] = masks
self.inference_state[pop_key][obj_idx].pop(frame_idx, None)
# If this frame hasn't been tracked before, we treat it as an initial conditioning
# frame, meaning that the input points are used to generate segments on this frame without
# using any memory from other frames, like in SAM. Otherwise (if it has been tracked),
# the input points will be used to correct the already tracked masks.
is_init_cond_frame = frame_idx not in self.inference_state["frames_already_tracked"]
obj_output_dict = self.inference_state["output_dict_per_obj"][obj_idx]
obj_temp_output_dict = self.inference_state["temp_output_dict_per_obj"][obj_idx]
# Add a frame to conditioning output if it's an initial conditioning frame or
# if the model sees all frames receiving clicks/mask as conditioning frames.
is_cond = is_init_cond_frame or self.model.add_all_frames_to_correct_as_cond
storage_key = "cond_frame_outputs" if is_cond else "non_cond_frame_outputs"
# Get any previously predicted mask logits on this object and feed it along with
# the new clicks into the SAM mask decoder.
prev_sam_mask_logits = None
# look up the temporary output dict first, which contains the most recent output
# (if not found, then look up the conditioning and non-conditioning frame outputs)
if point_inputs is not None:
prev_out = (
obj_temp_output_dict[storage_key].get(frame_idx)
or obj_output_dict["cond_frame_outputs"].get(frame_idx)
or obj_output_dict["non_cond_frame_outputs"].get(frame_idx)
)
if prev_out is not None and prev_out.get("pred_masks") is not None:
prev_sam_mask_logits = prev_out["pred_masks"].to(device=self.device, non_blocking=True)
# Clamp the scale of prev_sam_mask_logits to avoid rare numerical issues.
prev_sam_mask_logits.clamp_(-32.0, 32.0)
current_out = self._run_single_frame_inference(
output_dict=obj_output_dict, # run on the slice of a single object
frame_idx=frame_idx,
batch_size=1, # run on the slice of a single object
is_init_cond_frame=is_init_cond_frame,
point_inputs=point_inputs,
mask_inputs=masks,
reverse=False,
# Skip the memory encoder when adding clicks or mask. We execute the memory encoder
# at the beginning of `propagate_in_video` (after the user finalizes their clicks). This
# allows us to enforce non-overlapping constraints on all objects before encoding
# them into memory.
run_mem_encoder=False,
prev_sam_mask_logits=prev_sam_mask_logits,
)
# Add the output to the output dict (to be used as future memory)
obj_temp_output_dict[storage_key][frame_idx] = current_out
# Resize the output mask to the original video resolution
consolidated_out = self._consolidate_temp_output_across_obj(
frame_idx,
is_cond=is_cond,
run_mem_encoder=False,
)
pred_masks = consolidated_out["pred_masks"].flatten(0, 1)
return pred_masks.flatten(0, 1), torch.ones(1, dtype=pred_masks.dtype, device=pred_masks.device)
@smart_inference_mode()
def propagate_in_video_preflight(self):
"""
Prepare inference_state and consolidate temporary outputs before tracking.
This method marks the start of tracking, disallowing the addition of new objects until the session is reset.
It consolidates temporary outputs from `temp_output_dict_per_obj` and merges them into `output_dict`.
Additionally, it clears non-conditioning memory around input frames and ensures that the state is consistent
with the provided inputs.
"""
# Tracking has started and we don't allow adding new objects until session is reset.
self.inference_state["tracking_has_started"] = True
batch_size = len(self.inference_state["obj_idx_to_id"])
# Consolidate per-object temporary outputs in "temp_output_dict_per_obj" and
# add them into "output_dict".
temp_output_dict_per_obj = self.inference_state["temp_output_dict_per_obj"]
output_dict = self.inference_state["output_dict"]
# "consolidated_frame_inds" contains indices of those frames where consolidated
# temporary outputs have been added (either in this call or any previous calls
# to `propagate_in_video_preflight`).
consolidated_frame_inds = self.inference_state["consolidated_frame_inds"]
for is_cond in {False, True}:
# Separately consolidate conditioning and non-conditioning temp outputs
storage_key = "cond_frame_outputs" if is_cond else "non_cond_frame_outputs"
# Find all the frames that contain temporary outputs for any objects
# (these should be the frames that have just received clicks for mask inputs
# via `add_new_points` or `add_new_mask`)
temp_frame_inds = set()
for obj_temp_output_dict in temp_output_dict_per_obj.values():
temp_frame_inds.update(obj_temp_output_dict[storage_key].keys())
consolidated_frame_inds[storage_key].update(temp_frame_inds)
# consolidate the temporary output across all objects on this frame
for frame_idx in temp_frame_inds:
consolidated_out = self._consolidate_temp_output_across_obj(
frame_idx, is_cond=is_cond, run_mem_encoder=True
)
# merge them into "output_dict" and also create per-object slices
output_dict[storage_key][frame_idx] = consolidated_out
self._add_output_per_object(frame_idx, consolidated_out, storage_key)
if self.clear_non_cond_mem_around_input and (self.clear_non_cond_mem_for_multi_obj or batch_size <= 1):
# clear non-conditioning memory of the surrounding frames
self._clear_non_cond_mem_around_input(frame_idx)
# clear temporary outputs in `temp_output_dict_per_obj`
for obj_temp_output_dict in temp_output_dict_per_obj.values():
obj_temp_output_dict[storage_key].clear()
# edge case: if an output is added to "cond_frame_outputs", we remove any prior
# output on the same frame in "non_cond_frame_outputs"
for frame_idx in output_dict["cond_frame_outputs"]:
output_dict["non_cond_frame_outputs"].pop(frame_idx, None)
for obj_output_dict in self.inference_state["output_dict_per_obj"].values():
for frame_idx in obj_output_dict["cond_frame_outputs"]:
obj_output_dict["non_cond_frame_outputs"].pop(frame_idx, None)
for frame_idx in consolidated_frame_inds["cond_frame_outputs"]:
assert frame_idx in output_dict["cond_frame_outputs"]
consolidated_frame_inds["non_cond_frame_outputs"].discard(frame_idx)
# Make sure that the frame indices in "consolidated_frame_inds" are exactly those frames
# with either points or mask inputs (which should be true under a correct workflow).
all_consolidated_frame_inds = (
consolidated_frame_inds["cond_frame_outputs"] | consolidated_frame_inds["non_cond_frame_outputs"]
)
input_frames_inds = set()
for point_inputs_per_frame in self.inference_state["point_inputs_per_obj"].values():
input_frames_inds.update(point_inputs_per_frame.keys())
for mask_inputs_per_frame in self.inference_state["mask_inputs_per_obj"].values():
input_frames_inds.update(mask_inputs_per_frame.keys())
assert all_consolidated_frame_inds == input_frames_inds
@staticmethod
def init_state(predictor):
"""
Initialize an inference state for the predictor.
This function sets up the initial state required for performing inference on video data.
It includes initializing various dictionaries and ordered dictionaries that will store
inputs, outputs, and other metadata relevant to the tracking process.
Args:
predictor (SAM2VideoPredictor): The predictor object for which to initialize the state.
"""
if len(predictor.inference_state) > 0: # means initialized
return
assert predictor.dataset is not None
assert predictor.dataset.mode == "video"
inference_state = {}
inference_state["num_frames"] = predictor.dataset.frames
# inputs on each frame
inference_state["point_inputs_per_obj"] = {}
inference_state["mask_inputs_per_obj"] = {}
# values that don't change across frames (so we only need to hold one copy of them)
inference_state["constants"] = {}
# mapping between client-side object id and model-side object index
inference_state["obj_id_to_idx"] = OrderedDict()
inference_state["obj_idx_to_id"] = OrderedDict()
inference_state["obj_ids"] = []
# A storage to hold the model's tracking results and states on each frame
inference_state["output_dict"] = {
"cond_frame_outputs": {}, # dict containing {frame_idx: <out>}
"non_cond_frame_outputs": {}, # dict containing {frame_idx: <out>}
}
# Slice (view) of each object tracking results, sharing the same memory with "output_dict"
inference_state["output_dict_per_obj"] = {}
# A temporary storage to hold new outputs when the user interacts with a frame
# to add clicks or mask (it's merged into "output_dict" before propagation starts)
inference_state["temp_output_dict_per_obj"] = {}
# Frames that already hold consolidated outputs from click or mask inputs
# (we directly use their consolidated outputs during tracking)
inference_state["consolidated_frame_inds"] = {
"cond_frame_outputs": set(), # set containing frame indices
"non_cond_frame_outputs": set(), # set containing frame indices
}
# metadata for each tracking frame (e.g. which direction it's tracked)
inference_state["tracking_has_started"] = False
inference_state["frames_already_tracked"] = []
predictor.inference_state = inference_state
def get_im_features(self, im, batch=1):
"""
Extracts and processes image features using SAM2's image encoder for subsequent segmentation tasks.
Args:
im (torch.Tensor): The input image tensor.
batch (int, optional): The batch size for expanding features if there are multiple prompts. Defaults to 1.
Returns:
vis_feats (torch.Tensor): The visual features extracted from the image.
vis_pos_embed (torch.Tensor): The positional embeddings for the visual features.
feat_sizes (List[Tuple[int]]): A list containing the sizes of the extracted features.
Note:
- If `batch` is greater than 1, the features are expanded to fit the batch size.
- The method leverages the model's `_prepare_backbone_features` method to prepare the backbone features.
"""
backbone_out = self.model.forward_image(im)
if batch > 1: # expand features if there's more than one prompt
for i, feat in enumerate(backbone_out["backbone_fpn"]):
backbone_out["backbone_fpn"][i] = feat.expand(batch, -1, -1, -1)
for i, pos in enumerate(backbone_out["vision_pos_enc"]):
pos = pos.expand(batch, -1, -1, -1)
backbone_out["vision_pos_enc"][i] = pos
_, vis_feats, vis_pos_embed, feat_sizes = self.model._prepare_backbone_features(backbone_out)
return vis_feats, vis_pos_embed, feat_sizes
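# Tiny sketch of the batch expansion used above (shapes are illustrative assumptions):
# `expand` broadcasts the single-image features to `batch` prompts as a view, without
# copying the underlying memory.
import torch

feat = torch.randn(1, 256, 64, 64)         # backbone features for one image
batch = 3                                  # e.g. three point prompts
expanded = feat.expand(batch, -1, -1, -1)  # shape (3, 256, 64, 64), no data copy
assert expanded.shape == (3, 256, 64, 64)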
def _obj_id_to_idx(self, obj_id):
"""
Map client-side object id to model-side object index.
Args:
obj_id (int): The unique identifier of the object provided by the client side.
Returns:
obj_idx (int): The index of the object on the model side.
Raises:
RuntimeError: If an attempt is made to add a new object after tracking has started.
Note:
- The method updates or retrieves mappings between object IDs and indices stored in
`inference_state`.
- It ensures that new objects can only be added before tracking commences.
- It maintains two-way mappings between IDs and indices (`obj_id_to_idx` and `obj_idx_to_id`).
- Additional data structures are initialized for the new object to store inputs and outputs.
"""
obj_idx = self.inference_state["obj_id_to_idx"].get(obj_id, None)
if obj_idx is not None:
return obj_idx
# This is a new object id that hasn't been seen before. We only allow adding
# new objects *before* the tracking starts.
allow_new_object = not self.inference_state["tracking_has_started"]
if allow_new_object:
# get the next object slot
obj_idx = len(self.inference_state["obj_id_to_idx"])
self.inference_state["obj_id_to_idx"][obj_id] = obj_idx
self.inference_state["obj_idx_to_id"][obj_idx] = obj_id
self.inference_state["obj_ids"] = list(self.inference_state["obj_id_to_idx"])
# set up input and output structures for this object
self.inference_state["point_inputs_per_obj"][obj_idx] = {}
self.inference_state["mask_inputs_per_obj"][obj_idx] = {}
self.inference_state["output_dict_per_obj"][obj_idx] = {
"cond_frame_outputs": {}, # dict containing {frame_idx: <out>}
"non_cond_frame_outputs": {}, # dict containing {frame_idx: <out>}
}
self.inference_state["temp_output_dict_per_obj"][obj_idx] = {
"cond_frame_outputs": {}, # dict containing {frame_idx: <out>}
"non_cond_frame_outputs": {}, # dict containing {frame_idx: <out>}
}
return obj_idx
else:
raise RuntimeError(
f"Cannot add new object id {obj_id} after tracking starts. "
f"All existing object ids: {self.inference_state['obj_ids']}. "
f"Please call 'reset_state' to restart from scratch."
)
def _run_single_frame_inference(
self,
output_dict,
frame_idx,
batch_size,
is_init_cond_frame,
point_inputs,
mask_inputs,
reverse,
run_mem_encoder,
prev_sam_mask_logits=None,
):
"""
Run tracking on a single frame based on current inputs and previous memory.
Args:
output_dict (Dict): The dictionary containing the output states of the tracking process.
frame_idx (int): The index of the current frame.
batch_size (int): The batch size for processing the frame.
is_init_cond_frame (bool): Indicates if the current frame is an initial conditioning frame.
point_inputs (Dict, Optional): Input points and their labels. Defaults to None.
mask_inputs (torch.Tensor, Optional): Input binary masks. Defaults to None.
reverse (bool): Indicates if the tracking should be performed in reverse order.
run_mem_encoder (bool): Indicates if the memory encoder should be executed.
prev_sam_mask_logits (torch.Tensor, Optional): Previous mask logits for the current object. Defaults to None.
Returns:
current_out (dict): A dictionary containing the output of the tracking step, including updated features and predictions.
Raises:
AssertionError: If both `point_inputs` and `mask_inputs` are provided, or neither is provided.
Note:
- The method assumes that `point_inputs` and `mask_inputs` are mutually exclusive.
- The method retrieves image features using the `get_im_features` method.
- The `maskmem_pos_enc` is assumed to be constant across frames, hence only one copy is stored.
- The `fill_holes_in_mask_scores` function is commented out and currently unsupported due to CUDA extension requirements.
"""
# Retrieve correct image features
current_vision_feats, current_vision_pos_embeds, feat_sizes = self.get_im_features(
self.inference_state["im"], batch_size
)
# point and mask should not appear as input simultaneously on the same frame
assert point_inputs is None or mask_inputs is None
current_out = self.model.track_step(
frame_idx=frame_idx,
is_init_cond_frame=is_init_cond_frame,
current_vision_feats=current_vision_feats,
current_vision_pos_embeds=current_vision_pos_embeds,
feat_sizes=feat_sizes,
point_inputs=point_inputs,
mask_inputs=mask_inputs,
output_dict=output_dict,
num_frames=self.inference_state["num_frames"],
track_in_reverse=reverse,
run_mem_encoder=run_mem_encoder,
prev_sam_mask_logits=prev_sam_mask_logits,
)
maskmem_features = current_out["maskmem_features"]
if maskmem_features is not None:
current_out["maskmem_features"] = maskmem_features.to(
dtype=torch.float16, device=self.device, non_blocking=True
)
# NOTE: `fill_holes_in_mask_scores` is not supported since it requires CUDA extensions
# potentially fill holes in the predicted masks
# if self.fill_hole_area > 0:
# pred_masks = current_out["pred_masks"].to(self.device, non_blocking=True)
# pred_masks = fill_holes_in_mask_scores(pred_masks, self.fill_hole_area)
# "maskmem_pos_enc" is the same across frames, so we only need to store one copy of it
current_out["maskmem_pos_enc"] = self._get_maskmem_pos_enc(current_out["maskmem_pos_enc"])
return current_out
def _get_maskmem_pos_enc(self, out_maskmem_pos_enc):
"""
Caches and manages the positional encoding for mask memory across frames and objects.
This method optimizes storage by caching the positional encoding (`maskmem_pos_enc`) for
mask memory, which is constant across frames and objects, thus reducing the amount of
redundant information stored during an inference session. It checks if the positional
encoding has already been cached; if not, it caches a slice of the provided encoding.
If the batch size is greater than one, it expands the cached positional encoding to match
the current batch size.
Args:
out_maskmem_pos_enc (List[torch.Tensor] or None): The positional encoding for mask memory.
Should be a list of tensors or None.
Returns:
out_maskmem_pos_enc (List[torch.Tensor]): The positional encoding for mask memory, either cached or expanded.
Note:
- The method assumes that `out_maskmem_pos_enc` is a list of tensors or None.
- Only a single object's slice is cached since the encoding is the same across objects.
- The method checks if the positional encoding has already been cached in the session's constants.
- If the batch size is greater than one, the cached encoding is expanded to fit the batch size.
"""
model_constants = self.inference_state["constants"]
# "out_maskmem_pos_enc" should be either a list of tensors or None
if out_maskmem_pos_enc is not None:
if "maskmem_pos_enc" not in model_constants:
assert isinstance(out_maskmem_pos_enc, list)
# only take the slice for one object, since it's the same across objects
maskmem_pos_enc = [x[0:1].clone() for x in out_maskmem_pos_enc]
model_constants["maskmem_pos_enc"] = maskmem_pos_enc
else:
maskmem_pos_enc = model_constants["maskmem_pos_enc"]
# expand the cached maskmem_pos_enc to the actual batch size
batch_size = out_maskmem_pos_enc[0].size(0)
if batch_size > 1:
out_maskmem_pos_enc = [x.expand(batch_size, -1, -1, -1) for x in maskmem_pos_enc]
return out_maskmem_pos_enc
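# Minimal sketch (assumed tensor sizes) of the caching pattern above: the first call stores a
# single-object slice of the positional encoding, later calls expand that cached copy to the
# current batch size instead of storing a fresh copy per frame.
import torch

constants = {}

def cached_pos_enc(out_pos_enc):
    if "maskmem_pos_enc" not in constants:
        constants["maskmem_pos_enc"] = [x[0:1].clone() for x in out_pos_enc]  # cache one object's slice
    cached = constants["maskmem_pos_enc"]
    batch = out_pos_enc[0].size(0)
    return [x.expand(batch, -1, -1, -1) for x in cached] if batch > 1 else out_pos_enc

enc = cached_pos_enc([torch.randn(2, 64, 32, 32)])  # expanded back to batch size 2 from the cache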
def _consolidate_temp_output_across_obj(
self,
frame_idx,
is_cond=False,
run_mem_encoder=False,
):
"""
Consolidates per-object temporary outputs into a single output for all objects.
This method combines the temporary outputs for each object on a given frame into a unified
output. It fills in any missing objects either from the main output dictionary or leaves
placeholders if they do not exist in the main output. Optionally, it can re-run the memory
encoder after applying non-overlapping constraints to the object scores.
Args:
frame_idx (int): The index of the frame for which to consolidate outputs.
is_cond (bool, Optional): Indicates if the frame is considered a conditioning frame.
Defaults to False.
run_mem_encoder (bool, Optional): Specifies whether to run the memory encoder after
consolidating the outputs. Defaults to False.
Returns:
consolidated_out (dict): A consolidated output dictionary containing the combined results for all objects.
Note:
- The method initializes the consolidated output with placeholder values for missing objects.
- It searches for outputs in both the temporary and main output dictionaries.
- If `run_mem_encoder` is True, it applies non-overlapping constraints and re-runs the memory encoder.
- The `maskmem_features` and `maskmem_pos_enc` are only populated when `run_mem_encoder` is True.
"""
batch_size = len(self.inference_state["obj_idx_to_id"])
storage_key = "cond_frame_outputs" if is_cond else "non_cond_frame_outputs"
# Initialize `consolidated_out`. Its "maskmem_features" and "maskmem_pos_enc"
# will be added when rerunning the memory encoder after applying non-overlapping
# constraints to object scores. Its "pred_masks" are prefilled with a large
# negative value (NO_OBJ_SCORE) to represent missing objects.
consolidated_out = {
"maskmem_features": None,
"maskmem_pos_enc": None,
"pred_masks": torch.full(
size=(batch_size, 1, self.imgsz[0] // 4, self.imgsz[1] // 4),
fill_value=-1024.0,
dtype=torch.float32,
device=self.device,
),
"obj_ptr": torch.full(
size=(batch_size, self.model.hidden_dim),
fill_value=-1024.0,
dtype=torch.float32,
device=self.device,
),
"object_score_logits": torch.full(
size=(batch_size, 1),
# default to 10.0 for object_score_logits, i.e. assuming the object is
# present as sigmoid(10)=1, same as in `predict_masks` of `MaskDecoder`
fill_value=10.0,
dtype=torch.float32,
device=self.device,
),
}
for obj_idx in range(batch_size):
obj_temp_output_dict = self.inference_state["temp_output_dict_per_obj"][obj_idx]
obj_output_dict = self.inference_state["output_dict_per_obj"][obj_idx]
out = (
obj_temp_output_dict[storage_key].get(frame_idx)
# If the object doesn't appear in "temp_output_dict_per_obj" on this frame,
# we fall back and look up its previous output in "output_dict_per_obj".
# We look up both "cond_frame_outputs" and "non_cond_frame_outputs" in
# "output_dict_per_obj" to find a previous output for this object.
or obj_output_dict["cond_frame_outputs"].get(frame_idx)
or obj_output_dict["non_cond_frame_outputs"].get(frame_idx)
)
# If the object doesn't appear in "output_dict_per_obj" either, we skip it
# and leave its mask scores to the default scores (i.e. the NO_OBJ_SCORE
# placeholder above) and set its object pointer to be a dummy pointer.
if out is None:
# Fill in dummy object pointers for those objects without any inputs or
# tracking outcomes on this frame (only do it under `run_mem_encoder=True`,
# i.e. when we need to build the memory for tracking).
if run_mem_encoder:
# fill object pointer with a dummy pointer (based on an empty mask)
consolidated_out["obj_ptr"][obj_idx : obj_idx + 1] = self._get_empty_mask_ptr(frame_idx)
continue
# Add the temporary object output mask to consolidated output mask
consolidated_out["pred_masks"][obj_idx : obj_idx + 1] = out["pred_masks"]
consolidated_out["obj_ptr"][obj_idx : obj_idx + 1] = out["obj_ptr"]
# Optionally, apply non-overlapping constraints on the consolidated scores and rerun the memory encoder
if run_mem_encoder:
high_res_masks = F.interpolate(
consolidated_out["pred_masks"],
size=self.imgsz,
mode="bilinear",
align_corners=False,
)
if self.model.non_overlap_masks_for_mem_enc:
high_res_masks = self.model._apply_non_overlapping_constraints(high_res_masks)
consolidated_out["maskmem_features"], consolidated_out["maskmem_pos_enc"] = self._run_memory_encoder(
batch_size=batch_size,
high_res_masks=high_res_masks,
is_mask_from_pts=True, # these frames are what the user interacted with
object_score_logits=consolidated_out["object_score_logits"],
)
return consolidated_out
def _get_empty_mask_ptr(self, frame_idx):
"""
Get a dummy object pointer based on an empty mask on the current frame.
Args:
frame_idx (int): The index of the current frame for which to generate the dummy object pointer.
Returns:
(torch.Tensor): A tensor representing the dummy object pointer generated from the empty mask.
"""
# Retrieve correct image features
current_vision_feats, current_vision_pos_embeds, feat_sizes = self.get_im_features(self.inference_state["im"])
# Feed the empty mask and image feature above to get a dummy object pointer
current_out = self.model.track_step(
frame_idx=frame_idx,
is_init_cond_frame=True,
current_vision_feats=current_vision_feats,
current_vision_pos_embeds=current_vision_pos_embeds,
feat_sizes=feat_sizes,
point_inputs=None,
# A dummy (empty) mask with a single object
mask_inputs=torch.zeros((1, 1, *self.imgsz), dtype=torch.float32, device=self.device),
output_dict={},
num_frames=self.inference_state["num_frames"],
track_in_reverse=False,
run_mem_encoder=False,
prev_sam_mask_logits=None,
)
return current_out["obj_ptr"]
def _run_memory_encoder(self, batch_size, high_res_masks, object_score_logits, is_mask_from_pts):
"""
Run the memory encoder on masks.
This is usually called after applying non-overlapping constraints to object scores. Since their scores changed, their
memory also needs to be computed again with the memory encoder.
Args:
batch_size (int): The batch size for processing the frame.
high_res_masks (torch.Tensor): High-resolution masks for which to compute the memory.
object_score_logits (torch.Tensor): Logits representing the object scores.
is_mask_from_pts (bool): Indicates if the mask is derived from point interactions.
Returns:
(tuple[torch.Tensor, torch.Tensor]): A tuple containing the encoded mask features and positional encoding.
"""
# Retrieve correct image features
current_vision_feats, _, feat_sizes = self.get_im_features(self.inference_state["im"], batch_size)
maskmem_features, maskmem_pos_enc = self.model._encode_new_memory(
current_vision_feats=current_vision_feats,
feat_sizes=feat_sizes,
pred_masks_high_res=high_res_masks,
is_mask_from_pts=is_mask_from_pts,
object_score_logits=object_score_logits,
)
# "maskmem_pos_enc" is the same across frames, so we only need to store one copy of it
maskmem_pos_enc = self._get_maskmem_pos_enc(maskmem_pos_enc)
return maskmem_features.to(dtype=torch.float16, device=self.device, non_blocking=True), maskmem_pos_enc
def _add_output_per_object(self, frame_idx, current_out, storage_key):
"""
Split a multi-object output into per-object output slices and add them into `output_dict_per_obj`.
The resulting slices share the same tensor storage.
Args:
frame_idx (int): The index of the current frame.
current_out (Dict): The current output dictionary containing multi-object outputs.
storage_key (str): The key used to store the output in the per-object output dictionary.
"""
maskmem_features = current_out["maskmem_features"]
assert maskmem_features is None or isinstance(maskmem_features, torch.Tensor)
maskmem_pos_enc = current_out["maskmem_pos_enc"]
assert maskmem_pos_enc is None or isinstance(maskmem_pos_enc, list)
for obj_idx, obj_output_dict in self.inference_state["output_dict_per_obj"].items():
obj_slice = slice(obj_idx, obj_idx + 1)
obj_out = {
"maskmem_features": None,
"maskmem_pos_enc": None,
"pred_masks": current_out["pred_masks"][obj_slice],
"obj_ptr": current_out["obj_ptr"][obj_slice],
}
if maskmem_features is not None:
obj_out["maskmem_features"] = maskmem_features[obj_slice]
if maskmem_pos_enc is not None:
obj_out["maskmem_pos_enc"] = [x[obj_slice] for x in maskmem_pos_enc]
obj_output_dict[storage_key][frame_idx] = obj_out
def _clear_non_cond_mem_around_input(self, frame_idx):
"""
Remove the non-conditioning memory around the input frame.
When users provide correction clicks, the surrounding frames' non-conditioning memories can still contain outdated
object appearance information and could confuse the model. This method clears those non-conditioning memories
surrounding the interacted frame to avoid giving the model both old and new information about the object.
Args:
frame_idx (int): The index of the current frame where user interaction occurred.
"""
r = self.model.memory_temporal_stride_for_eval
frame_idx_begin = frame_idx - r * self.model.num_maskmem
frame_idx_end = frame_idx + r * self.model.num_maskmem
for t in range(frame_idx_begin, frame_idx_end + 1):
self.inference_state["output_dict"]["non_cond_frame_outputs"].pop(t, None)
for obj_output_dict in self.inference_state["output_dict_per_obj"].values():
obj_output_dict["non_cond_frame_outputs"].pop(t, None)

@ -141,3 +141,10 @@ class DetectionTrainer(BaseTrainer):
boxes = np.concatenate([lb["bboxes"] for lb in self.train_loader.dataset.labels], 0) boxes = np.concatenate([lb["bboxes"] for lb in self.train_loader.dataset.labels], 0)
cls = np.concatenate([lb["cls"] for lb in self.train_loader.dataset.labels], 0) cls = np.concatenate([lb["cls"] for lb in self.train_loader.dataset.labels], 0)
plot_labels(boxes, cls.squeeze(), names=self.data["names"], save_dir=self.save_dir, on_plot=self.on_plot) plot_labels(boxes, cls.squeeze(), names=self.data["names"], save_dir=self.save_dir, on_plot=self.on_plot)
def auto_batch(self):
"""Get batch size by calculating memory occupation of model."""
train_dataset = self.build_dataset(self.trainset, mode="train", batch=16)
# 4 for mosaic augmentation
max_num_obj = max(len(l["cls"]) for l in train_dataset.labels) * 4
return super().auto_batch(max_num_obj)
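# Minimal sketch of the max_num_obj estimate above, assuming each label dict carries a "cls"
# array with one row per object; the factor of 4 accounts for mosaic stacking four images
# into a single training sample.
import numpy as np

labels = [{"cls": np.zeros((3, 1))}, {"cls": np.zeros((7, 1))}]  # hypothetical dataset labels
max_num_obj = max(len(lb["cls"]) for lb in labels) * 4           # -> 28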

@ -123,6 +123,7 @@ class AutoBackend(nn.Module):
paddle, paddle,
mnn, mnn,
ncnn, ncnn,
imx,
triton, triton,
) = self._model_type(w) ) = self._model_type(w)
fp16 &= pt or jit or onnx or xml or engine or nn_module or triton # FP16 fp16 &= pt or jit or onnx or xml or engine or nn_module or triton # FP16
@ -182,8 +183,8 @@ class AutoBackend(nn.Module):
check_requirements("opencv-python>=4.5.4") check_requirements("opencv-python>=4.5.4")
net = cv2.dnn.readNetFromONNX(w) net = cv2.dnn.readNetFromONNX(w)
# ONNX Runtime # ONNX Runtime and IMX
elif onnx: elif onnx or imx:
LOGGER.info(f"Loading {w} for ONNX Runtime inference...") LOGGER.info(f"Loading {w} for ONNX Runtime inference...")
check_requirements(("onnx", "onnxruntime-gpu" if cuda else "onnxruntime")) check_requirements(("onnx", "onnxruntime-gpu" if cuda else "onnxruntime"))
if IS_RASPBERRYPI or IS_JETSON: if IS_RASPBERRYPI or IS_JETSON:
@ -199,7 +200,22 @@ class AutoBackend(nn.Module):
device = torch.device("cpu") device = torch.device("cpu")
cuda = False cuda = False
LOGGER.info(f"Preferring ONNX Runtime {providers[0]}") LOGGER.info(f"Preferring ONNX Runtime {providers[0]}")
if onnx:
session = onnxruntime.InferenceSession(w, providers=providers) session = onnxruntime.InferenceSession(w, providers=providers)
else:
check_requirements(
["model-compression-toolkit==2.1.1", "sony-custom-layers[torch]==0.2.0", "onnxruntime-extensions"]
)
w = next(Path(w).glob("*.onnx"))
LOGGER.info(f"Loading {w} for ONNX IMX inference...")
import mct_quantizers as mctq
from sony_custom_layers.pytorch.object_detection import nms_ort # noqa
session = onnxruntime.InferenceSession(
w, mctq.get_ort_session_options(), providers=["CPUExecutionProvider"]
)
task = "detect"
output_names = [x.name for x in session.get_outputs()] output_names = [x.name for x in session.get_outputs()]
metadata = session.get_modelmeta().custom_metadata_map metadata = session.get_modelmeta().custom_metadata_map
dynamic = isinstance(session.get_outputs()[0].shape[0], str) dynamic = isinstance(session.get_outputs()[0].shape[0], str)
@ -520,7 +536,7 @@ class AutoBackend(nn.Module):
y = self.net.forward() y = self.net.forward()
# ONNX Runtime # ONNX Runtime
elif self.onnx: elif self.onnx or self.imx:
if self.dynamic: if self.dynamic:
im = im.cpu().numpy() # torch to numpy im = im.cpu().numpy() # torch to numpy
y = self.session.run(self.output_names, {self.session.get_inputs()[0].name: im}) y = self.session.run(self.output_names, {self.session.get_inputs()[0].name: im})
@ -537,6 +553,9 @@ class AutoBackend(nn.Module):
) )
self.session.run_with_iobinding(self.io) self.session.run_with_iobinding(self.io)
y = self.bindings y = self.bindings
if self.imx:
# boxes, conf, cls
y = np.concatenate([y[0], y[1][:, :, None], y[2][:, :, None]], axis=-1)
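# Shape sketch (assumed sizes) for the IMX concatenation above: boxes (B, N, 4), confidences
# (B, N) and class ids (B, N) gain a trailing axis and are merged into one (B, N, 6) tensor.
import numpy as np

B, N = 1, 300
boxes, conf, cls = np.zeros((B, N, 4)), np.zeros((B, N)), np.zeros((B, N))
merged = np.concatenate([boxes, conf[:, :, None], cls[:, :, None]], axis=-1)
assert merged.shape == (B, N, 6)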
# OpenVINO # OpenVINO
elif self.xml: elif self.xml:

@ -240,7 +240,8 @@ class C2f(nn.Module):
def forward_split(self, x): def forward_split(self, x):
"""Forward pass using split() instead of chunk().""" """Forward pass using split() instead of chunk()."""
y = list(self.cv1(x).split((self.c, self.c), 1)) y = self.cv1(x).split((self.c, self.c), 1)
y = [y[0], y[1]]
y.extend(m(y[-1]) for m in self.m) y.extend(m(y[-1]) for m in self.m)
return self.cv2(torch.cat(y, 1)) return self.cv2(torch.cat(y, 1))
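# Tiny sketch of why the split() result is re-wrapped above: Tensor.split returns a tuple,
# which has no extend(), so the two chunks are copied into a list before the bottleneck
# outputs are appended (channel sizes are assumptions for the example).
import torch

t = torch.randn(1, 64, 8, 8)
y = t.split((32, 32), 1)  # tuple of two (1, 32, 8, 8) tensors
y = [y[0], y[1]]          # list, so that y.extend(...) works
y.extend([y[-1]])         # stand-in for appending the m(y[-1]) bottleneck outputs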
@ -279,8 +280,8 @@ class RepC3(nn.Module):
"""Initialize CSP Bottleneck with a single convolution using input channels, output channels, and number.""" """Initialize CSP Bottleneck with a single convolution using input channels, output channels, and number."""
super().__init__() super().__init__()
c_ = int(c2 * e) # hidden channels c_ = int(c2 * e) # hidden channels
self.cv1 = Conv(c1, c2, 1, 1) self.cv1 = Conv(c1, c_, 1, 1)
self.cv2 = Conv(c1, c2, 1, 1) self.cv2 = Conv(c1, c_, 1, 1)
self.m = nn.Sequential(*[RepConv(c_, c_) for _ in range(n)]) self.m = nn.Sequential(*[RepConv(c_, c_) for _ in range(n)])
self.cv3 = Conv(c_, c2, 1, 1) if c_ != c2 else nn.Identity() self.cv3 = Conv(c_, c2, 1, 1) if c_ != c2 else nn.Identity()

@ -50,7 +50,7 @@ class Conv(nn.Module):
return self.act(self.bn(self.conv(x))) return self.act(self.bn(self.conv(x)))
def forward_fuse(self, x): def forward_fuse(self, x):
"""Perform transposed convolution of 2D data.""" """Apply convolution and activation without batch normalization."""
return self.act(self.conv(x)) return self.act(self.conv(x))

@@ -23,6 +23,7 @@ class Detect(nn.Module):
     dynamic = False  # force grid reconstruction
     export = False  # export mode
+    format = None  # export format
     end2end = False  # end2end
     max_det = 300  # max_det
     shape = None
@@ -101,7 +102,7 @@ class Detect(nn.Module):
         # Inference path
         shape = x[0].shape  # BCHW
         x_cat = torch.cat([xi.view(shape[0], self.no, -1) for xi in x], 2)
-        if self.dynamic or self.shape != shape:
+        if self.format != "imx" and (self.dynamic or self.shape != shape):
             self.anchors, self.strides = (x.transpose(0, 1) for x in make_anchors(x, self.stride, 0.5))
             self.shape = shape
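The new format != "imx" guard skips regenerating the anchor grid during IMX export, where anchors and strides are baked in ahead of time. For reference, a standalone sketch of what such an anchor grid conceptually contains per feature map (one centre point and one stride per cell); this is an illustration, not the library's make_anchors():

import torch

def grid_anchors(h, w, stride, offset=0.5):
    """One (x, y) cell centre and its stride for an h x w feature map."""
    sy, sx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    points = torch.stack((sx, sy), -1).reshape(-1, 2).float() + offset
    return points, torch.full((h * w, 1), float(stride))

pts, strides = grid_anchors(4, 4, stride=8)
print(pts.shape, strides.shape)  # torch.Size([16, 2]) torch.Size([16, 1])
print(pts[:3])                   # tensor([[0.5000, 0.5000], [1.5000, 0.5000], [2.5000, 0.5000]])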
@@ -119,6 +120,11 @@ class Detect(nn.Module):
             grid_size = torch.tensor([grid_w, grid_h, grid_w, grid_h], device=box.device).reshape(1, 4, 1)
             norm = self.strides / (self.stride[0] * grid_size)
             dbox = self.decode_bboxes(self.dfl(box) * norm, self.anchors.unsqueeze(0) * norm[:, :2])
+        elif self.export and self.format == "imx":
+            dbox = self.decode_bboxes(
+                self.dfl(box) * self.strides, self.anchors.unsqueeze(0) * self.strides, xywh=False
+            )
+            return dbox.transpose(1, 2), cls.sigmoid().permute(0, 2, 1)
         else:
             dbox = self.decode_bboxes(self.dfl(box), self.anchors.unsqueeze(0)) * self.strides
@@ -137,9 +143,9 @@ class Detect(nn.Module):
             a[-1].bias.data[:] = 1.0  # box
             b[-1].bias.data[: m.nc] = math.log(5 / m.nc / (640 / s) ** 2)  # cls (.01 objects, 80 classes, 640 img)

-    def decode_bboxes(self, bboxes, anchors):
+    def decode_bboxes(self, bboxes, anchors, xywh=True):
         """Decode bounding boxes."""
-        return dist2bbox(bboxes, anchors, xywh=not self.end2end, dim=1)
+        return dist2bbox(bboxes, anchors, xywh=xywh and (not self.end2end), dim=1)

     @staticmethod
     def postprocess(preds: torch.Tensor, max_det: int, nc: int = 80):

@@ -960,10 +960,8 @@ def parse_model(d, ch, verbose=True):  # model_dict, input_channels(3)
         m = getattr(torch.nn, m[3:]) if "nn." in m else globals()[m]  # get module
         for j, a in enumerate(args):
             if isinstance(a, str):
-                try:
+                with contextlib.suppress(ValueError):
                     args[j] = locals()[a] if a in locals() else ast.literal_eval(a)
-                except ValueError:
-                    pass
         n = n_ = max(round(n * depth), 1) if n > 1 else n  # depth gain
         if m in {
             Classify,
@@ -1141,24 +1139,16 @@ def guess_model_task(model):
     # Guess from model cfg
     if isinstance(model, dict):
-        try:
+        with contextlib.suppress(Exception):
             return cfg2task(model)
-        except Exception:
-            pass

     # Guess from PyTorch model
     if isinstance(model, nn.Module):  # PyTorch model
         for x in "model.args", "model.model.args", "model.model.model.args":
-            try:
+            with contextlib.suppress(Exception):
                 return eval(x)["task"]
-            except Exception:
-                pass
         for x in "model.yaml", "model.model.yaml", "model.model.model.yaml":
-            try:
+            with contextlib.suppress(Exception):
                 return cfg2task(eval(x))
-            except Exception:
-                pass

     for m in model.modules():
         if isinstance(m, Segment):
             return "segment"
