Merge branch 'main' into rm-smoothing

pull/16014/head · commit 15477f88b5 · authored by Burhan, committed by GitHub (GPG Key ID: B5690EEEBB952194)
25 files changed (lines changed in parentheses):

- .github/workflows/docs.yml (2)
- README.md (6)
- README.zh-CN.md (6)
- docs/build_docs.py (2)
- docs/en/datasets/pose/hand-keypoints.md (11)
- docs/en/datasets/segment/coco.md (8)
- docs/en/hub/models.md (2)
- docs/en/index.md (2)
- docs/en/integrations/ray-tune.md (2)
- docs/en/macros/export-table.md (2)
- docs/en/modes/benchmark.md (16)
- docs/en/tasks/segment.md (4)
- docs/en/usage/simple-utilities.md (11)
- docs/overrides/javascript/benchmark.js (199)
- docs/overrides/javascript/extra.js (230)
- tests/test_solutions.py (2)
- ultralytics/__init__.py (2)
- ultralytics/cfg/solutions/default.yaml (2)
- ultralytics/data/augment.py (2)
- ultralytics/engine/exporter.py (5)
- ultralytics/engine/predictor.py (6)
- ultralytics/nn/tasks.py (18)
- ultralytics/solutions/object_counter.py (46)
- ultralytics/solutions/solutions.py (2)
- ultralytics/utils/torch_utils.py (2)

.github/workflows/docs.yml
@@ -48,7 +48,7 @@ jobs:
           python-version: "3.x"
       - uses: astral-sh/setup-uv@v3
       - name: Install Dependencies
-        run: uv pip install --system ruff black tqdm minify-html mkdocs-material "mkdocstrings[python]" mkdocs-jupyter mkdocs-redirects mkdocs-ultralytics-plugin mkdocs-macros-plugin
+        run: uv pip install --system ruff black tqdm mkdocs-material "mkdocstrings[python]" mkdocs-jupyter mkdocs-redirects mkdocs-ultralytics-plugin mkdocs-macros-plugin
       - name: Ruff fixes
         continue-on-error: true
         run: ruff check --fix --unsafe-fixes --select D --ignore=D100,D104,D203,D205,D212,D213,D401,D406,D407,D413 .

README.md
@@ -8,7 +8,7 @@
 <div>
 <a href="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml"><img src="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml/badge.svg" alt="Ultralytics CI"></a>
-<a href="https://www.pepy.tech/projects/ultralytics"><img src="https://static.pepy.tech/badge/ultralytics" alt="Ultralytics Downloads"></a>
+<a href="https://pepy.tech/projects/ultralytics"><img src="https://static.pepy.tech/badge/ultralytics" alt="Ultralytics Downloads"></a>
 <a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="Ultralytics YOLO Citation"></a>
 <a href="https://discord.com/invite/ultralytics"><img alt="Ultralytics Discord" src="https://img.shields.io/discord/1089800235347353640?logo=discord&logoColor=white&label=Discord&color=blue"></a>
 <a href="https://community.ultralytics.com/"><img alt="Ultralytics Forums" src="https://img.shields.io/discourse/users?server=https%3A%2F%2Fcommunity.ultralytics.com&logo=discourse&label=Forums&color=blue"></a>
@@ -150,8 +150,8 @@ See [Segmentation Docs](https://docs.ultralytics.com/tasks/segment/) for usage e
 | [YOLO11l-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l-seg.pt) | 640 | 53.4 | 42.9 | 344.2 ± 3.2 | 7.8 ± 0.2 | 27.6 | 142.2 |
 | [YOLO11x-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-seg.pt) | 640 | 54.7 | 43.8 | 664.5 ± 3.2 | 15.8 ± 0.7 | 62.1 | 319.0 |
-- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val segment data=coco-seg.yaml device=0`
-- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val segment data=coco-seg.yaml batch=1 device=0|cpu`
+- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val segment data=coco.yaml device=0`
+- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val segment data=coco.yaml batch=1 device=0|cpu`
 </details>
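
For reviewers who prefer Python over the CLI reproduce commands above, an equivalent sketch using the standard `ultralytics` validation API (the device index is illustrative):

```python
from ultralytics import YOLO

# Python equivalent of `yolo val segment data=coco.yaml device=0`
model = YOLO("yolo11n-seg.pt")
metrics = model.val(data="coco.yaml", device=0)
print(metrics.box.map, metrics.seg.map)  # box and mask mAP50-95
```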

README.zh-CN.md
@@ -8,7 +8,7 @@
 <div>
 <a href="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml"><img src="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml/badge.svg" alt="Ultralytics CI"></a>
-<a href="https://www.pepy.tech/projects/ultralytics"><img src="https://static.pepy.tech/badge/ultralytics" alt="Ultralytics Downloads"></a>
+<a href="https://pepy.tech/projects/ultralytics"><img src="https://static.pepy.tech/badge/ultralytics" alt="Ultralytics Downloads"></a>
 <a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="Ultralytics YOLO Citation"></a>
 <a href="https://discord.com/invite/ultralytics"><img alt="Ultralytics Discord" src="https://img.shields.io/discord/1089800235347353640?logo=discord&logoColor=white&label=Discord&color=blue"></a>
 <a href="https://community.ultralytics.com/"><img alt="Ultralytics Forums" src="https://img.shields.io/discourse/users?server=https%3A%2F%2Fcommunity.ultralytics.com&logo=discourse&label=Forums&color=blue"></a>
@@ -150,8 +150,8 @@ YOLO11 [Detect](https://docs.ultralytics.com/tasks/detect/), [Segment](https://d
 | [YOLO11l-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l-seg.pt) | 640 | 53.4 | 42.9 | 344.2 ± 3.2 | 7.8 ± 0.2 | 27.6 | 142.2 |
 | [YOLO11x-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-seg.pt) | 640 | 54.7 | 43.8 | 664.5 ± 3.2 | 15.8 ± 0.7 | 62.1 | 319.0 |
-- **mAP<sup>val</sup>** values are for single-model single-scale on the [COCO val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val segment data=coco-seg.yaml device=0`
-- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val segment data=coco-seg.yaml batch=1 device=0|cpu`
+- **mAP<sup>val</sup>** values are for single-model single-scale on the [COCO val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val segment data=coco.yaml device=0`
+- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val segment data=coco.yaml batch=1 device=0|cpu`
 </details>

docs/build_docs.py
@@ -252,7 +252,7 @@ def minify_html_files():
             content = f.read()
             original_size = len(content)
-            minified_content = minify(content)
+            minified_content = minify(content, keep_closing_tags=True, minify_css=True, minify_js=True)
             minified_size = len(minified_content)
             total_original_size += original_size
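
For context on the new keyword arguments: they are standard `minify_html.minify` options, shown here in a standalone sketch (the sample HTML is made up):

```python
import minify_html

html = "<html><head><style>body { color: red }</style></head><body><p>Hello</p></body></html>"
minified = minify_html.minify(html, keep_closing_tags=True, minify_css=True, minify_js=True)
print(f"{len(html)} -> {len(minified)} bytes")  # CSS and JS are now minified; closing tags are kept
```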

docs/en/datasets/pose/hand-keypoints.md
@@ -10,6 +10,17 @@ keywords: Hand KeyPoints, pose estimation, dataset, keypoints, MediaPipe, YOLO,
 The hand-keypoints dataset contains 26,768 images of hands annotated with keypoints, making it suitable for training models like Ultralytics YOLO for pose estimation tasks. The annotations were generated using the Google MediaPipe library, ensuring high [accuracy](https://www.ultralytics.com/glossary/accuracy) and consistency, and the dataset is compatible with [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics) formats.

+<p align="center">
+  <br>
+  <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/fd6u1TW_AGY"
+    title="YouTube video player" frameborder="0"
+    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
+    allowfullscreen>
+  </iframe>
+  <br>
+  <strong>Watch:</strong> Hand Keypoints Estimation with Ultralytics YOLO11 | Human Hand Pose Estimation Tutorial
+</p>

 ## Hand Landmarks

 ![Hand Landmarks](https://github.com/ultralytics/docs/releases/download/0/hand_landmarks.jpg)
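
Since the page documents a trainable dataset, the matching training call (standard Ultralytics pose-training API; `hand-keypoints.yaml` ships with the package):

```python
from ultralytics import YOLO

# Fine-tune a pose model on the hand-keypoints dataset
model = YOLO("yolo11n-pose.pt")
results = model.train(data="hand-keypoints.yaml", epochs=100, imgsz=640)
```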

docs/en/datasets/segment/coco.md
@@ -56,14 +56,14 @@ To train a YOLO11n-seg model on the COCO-Seg dataset for 100 [epochs](https://ww
 model = YOLO("yolo11n-seg.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data="coco-seg.yaml", epochs=100, imgsz=640)
+results = model.train(data="coco.yaml", epochs=100, imgsz=640)
 ```

 === "CLI"

 ```bash
 # Start training from a pretrained *.pt model
-yolo segment train data=coco-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
+yolo segment train data=coco.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
 ```

 ## Sample Images and Annotations
@@ -118,14 +118,14 @@ To train a YOLO11n-seg model on the COCO-Seg dataset for 100 epochs with an imag
 model = YOLO("yolo11n-seg.pt")  # load a pretrained model (recommended for training)

 # Train the model
-results = model.train(data="coco-seg.yaml", epochs=100, imgsz=640)
+results = model.train(data="coco.yaml", epochs=100, imgsz=640)
 ```

 === "CLI"

 ```bash
 # Start training from a pretrained *.pt model
-yolo segment train data=coco-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
+yolo segment train data=coco.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
 ```

 ### What are the key features of the COCO-Seg dataset?

docs/en/hub/models.md
@@ -66,7 +66,7 @@ In this step, you have to choose the project in which you want to create your mo
 !!! info

-    You can read more about the available [YOLO models](https://docs.ultralytics.com/models) and architectures in our documentation.
+    You can read more about the available [YOLO models](https://docs.ultralytics.com/models/) and architectures in our documentation.

 By default, your model will use a pre-trained model (trained on the [COCO](https://docs.ultralytics.com/datasets/detect/coco/) dataset) to reduce training time. You can change this behavior and tweak your model's configuration by opening the **Advanced Model Configuration** accordion.

docs/en/index.md
@@ -20,7 +20,7 @@ keywords: Ultralytics, YOLO, YOLO11, object detection, image segmentation, deep
 <br>
 <br>
 <a href="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml"><img src="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml/badge.svg" alt="Ultralytics CI"></a>
-<a href="https://www.pepy.tech/projects/ultralytics"><img src="https://static.pepy.tech/badge/ultralytics" alt="Ultralytics Downloads"></a>
+<a href="https://pepy.tech/projects/ultralytics"><img src="https://static.pepy.tech/badge/ultralytics" alt="Ultralytics Downloads"></a>
 <a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="Ultralytics YOLO Citation"></a>
 <a href="https://discord.com/invite/ultralytics"><img alt="Ultralytics Discord" src="https://img.shields.io/discord/1089800235347353640?logo=discord&logoColor=white&label=Discord&color=blue"></a>
 <a href="https://community.ultralytics.com/"><img alt="Ultralytics Forums" src="https://img.shields.io/discourse/users?server=https%3A%2F%2Fcommunity.ultralytics.com&logo=discourse&label=Forums&color=blue"></a>

docs/en/integrations/ray-tune.md
@@ -106,6 +106,8 @@ In this example, we demonstrate how to use a custom search space for hyperparame
 !!! example "Usage"

 ```python
+from ray import tune
+
 from ultralytics import YOLO

 # Define a YOLO model
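
The added import is what the completed example needs to define a Ray Tune search space. A minimal sketch of how the pieces fit together (documented `model.tune(use_ray=True)` entry point; `coco8.yaml` is a small illustrative dataset):

```python
from ray import tune

from ultralytics import YOLO

# Define a YOLO model and tune hyperparameters with Ray Tune
model = YOLO("yolo11n.pt")
result_grid = model.tune(
    data="coco8.yaml",
    space={"lr0": tune.uniform(1e-5, 1e-1)},  # custom search space for the initial learning rate
    epochs=10,
    use_ray=True,
)
```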

docs/en/macros/export-table.md
@@ -14,4 +14,4 @@
 | [PaddlePaddle](../integrations/paddlepaddle.md) | `paddle` | `{{ model_name or "yolo11n" }}_paddle_model/` | ✅ | `imgsz`, `batch` |
 | [MNN](../integrations/mnn.md) | `mnn` | `{{ model_name or "yolo11n" }}.mnn` | ✅ | `imgsz`, `batch`, `int8`, `half` |
 | [NCNN](../integrations/ncnn.md) | `ncnn` | `{{ model_name or "yolo11n" }}_ncnn_model/` | ✅ | `imgsz`, `half`, `batch` |
-| [IMX500](../integrations/sony-imx500.md) | `imx` | `{{ model_name or "yolo11n" }}_imx_model/` | ✅ | `imgsz`, `int8` |
+| [IMX500](../integrations/sony-imx500.md) | `imx` | `{{ model_name or "yolov8n" }}_imx_model/` | ✅ | `imgsz`, `int8` |
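
The default model name for the IMX500 row changes because that export path currently targets YOLOv8n. A minimal export sketch (standard `model.export` API; the table above lists the supported arguments):

```python
from ultralytics import YOLO

# Export to Sony IMX500 format; output lands in a yolov8n_imx_model/ directory
model = YOLO("yolov8n.pt")
model.export(format="imx", int8=True, imgsz=640)
```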

docs/en/modes/benchmark.md
@@ -4,14 +4,22 @@ description: Learn how to evaluate your YOLO11 model's performance in real-world
 keywords: model benchmarking, YOLO11, Ultralytics, performance evaluation, export formats, ONNX, TensorRT, OpenVINO, CoreML, TensorFlow, optimization, mAP50-95, inference time
 ---

+<script>
+const script = document.createElement('script');
+script.src = "https://cdn.jsdelivr.net/npm/chart.js@3.9.1/dist/chart.min.js";
+document.head.appendChild(script);
+
+const anotherScript = document.createElement('script');
+anotherScript.src = "../../javascript/benchmark.js";
+document.head.appendChild(anotherScript);
+</script>

 # Model Benchmarking with Ultralytics YOLO

 <img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/ultralytics-yolov8-ecosystem-integrations.avif" alt="Ultralytics YOLO ecosystem and integrations">

 ## Benchmark Visualization

-<script src="https://cdn.jsdelivr.net/npm/chart.js@3.9.1/dist/chart.min.js"></script>

 !!! tip "Refresh Browser"

     You may need to refresh the page to view the graphs correctly due to potential cookie issues.
@@ -19,7 +27,7 @@ keywords: model benchmarking, YOLO11, Ultralytics, performance evaluation, expor
 <div style="display: flex; align-items: flex-start;">
 <div style="margin-right: 20px;">
 <label><input type="checkbox" name="algorithm" value="YOLO11" checked><span>YOLO11</span></label><br>
-<label><input type="checkbox" name="algorithm" value="YOLOv10" checked><span>YOLOv10</span></label><br>
+<label><input type="checkbox" name="algorithm" value="YOLOv10" checked><span>YOLOv10</span></label><br>
 <label><input type="checkbox" name="algorithm" value="YOLOv9" checked><span>YOLOv9</span></label><br>
 <label><input type="checkbox" name="algorithm" value="YOLOv8" checked><span>YOLOv8</span></label><br>
 <label><input type="checkbox" name="algorithm" value="YOLOv7" checked><span>YOLOv7</span></label><br>
@@ -30,7 +38,7 @@ keywords: model benchmarking, YOLO11, Ultralytics, performance evaluation, expor
 <label><input type="checkbox" name="algorithm" value="YOLOX" checked><span>YOLOX</span></label><br>
 <label><input type="checkbox" name="algorithm" value="RTDETRv2" checked><span>RTDETRv2</span></label>
 </div>
-<div style="flex-grow: 1;"><canvas id="chart"></canvas></div> <!-- Canvas for plotting benchmarks -->
+<div style="flex-grow: 1;"><canvas id="chart"></canvas></div>
 </div>

 ## Introduction
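
For reviewers wanting to sanity-check numbers like those plotted above, benchmarks can be run with the package's own helper (public `ultralytics.utils.benchmarks` API; the dataset and device here are illustrative, not the T4 TensorRT setup behind the chart):

```python
from ultralytics.utils.benchmarks import benchmark

# Benchmark one model across export formats; prints size, mAP50-95 and inference time per format
benchmark(model="yolo11n.pt", data="coco8.yaml", imgsz=640, half=False, device="cpu")
```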

docs/en/tasks/segment.md
@@ -36,8 +36,8 @@ YOLO11 pretrained Segment models are shown here. Detect, Segment and Pose models
 {% include "macros/yolo-seg-perf.md" %}

-- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val segment data=coco-seg.yaml device=0`
-- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val segment data=coco-seg.yaml batch=1 device=0|cpu`
+- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val segment data=coco.yaml device=0`
+- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val segment data=coco.yaml batch=1 device=0|cpu`

 ## Train

docs/en/usage/simple-utilities.md
@@ -458,6 +458,17 @@ image_with_obb = ann.result()

 #### Bounding Boxes Circle Annotation [Circle Label](https://docs.ultralytics.com/reference/utils/plotting/#ultralytics.utils.plotting.Annotator.circle_label)

+<p align="center">
+  <br>
+  <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/c-S5M36XWmg"
+    title="YouTube video player" frameborder="0"
+    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
+    allowfullscreen>
+  </iframe>
+  <br>
+  <strong>Watch:</strong> In-Depth Guide to Text & Circle Annotations with Python Live Demos | Ultralytics Annotations 🚀
+</p>

 ```python
 import cv2
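
For context, `circle_label` draws a filled circular badge behind the class text. A self-contained usage sketch (`image.jpg` is a placeholder for any local test image):

```python
import cv2

from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors

model = YOLO("yolo11n.pt")
im = cv2.imread("image.jpg")  # placeholder test image

ann = Annotator(im)
res = model(im)[0]
for box, cls in zip(res.boxes.xyxy, res.boxes.cls):
    ann.circle_label(box, label=model.names[int(cls)], color=colors(int(cls), True))
cv2.imwrite("annotated.jpg", ann.result())
```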

docs/overrides/javascript/benchmark.js (new file)
@@ -0,0 +1,199 @@
// YOLO models chart ---------------------------------------------------------------------------------------------------
const data = {
YOLO11: {
n: { speed: 1.55, mAP: 39.5 },
s: { speed: 2.63, mAP: 47.0 },
m: { speed: 5.27, mAP: 51.4 },
l: { speed: 6.84, mAP: 53.2 },
x: { speed: 12.49, mAP: 54.7 },
},
YOLOv10: {
n: { speed: 1.56, mAP: 39.5 },
s: { speed: 2.66, mAP: 46.7 },
m: { speed: 5.48, mAP: 51.3 },
b: { speed: 6.54, mAP: 52.7 },
l: { speed: 8.33, mAP: 53.3 },
x: { speed: 12.2, mAP: 54.4 },
},
YOLOv9: {
t: { speed: 2.3, mAP: 37.8 },
s: { speed: 3.54, mAP: 46.5 },
m: { speed: 6.43, mAP: 51.5 },
c: { speed: 7.16, mAP: 52.8 },
e: { speed: 16.77, mAP: 55.1 },
},
YOLOv8: {
n: { speed: 1.47, mAP: 37.3 },
s: { speed: 2.66, mAP: 44.9 },
m: { speed: 5.86, mAP: 50.2 },
l: { speed: 9.06, mAP: 52.9 },
x: { speed: 14.37, mAP: 53.9 },
},
YOLOv7: { l: { speed: 6.84, mAP: 51.4 }, x: { speed: 11.57, mAP: 53.1 } },
"YOLOv6-3.0": {
n: { speed: 1.17, mAP: 37.5 },
s: { speed: 2.66, mAP: 45.0 },
m: { speed: 5.28, mAP: 50.0 },
l: { speed: 8.95, mAP: 52.8 },
},
YOLOv5: {
s: { speed: 1.92, mAP: 37.4 },
m: { speed: 4.03, mAP: 45.4 },
l: { speed: 6.61, mAP: 49.0 },
x: { speed: 11.89, mAP: 50.7 },
},
"PP-YOLOE+": {
t: { speed: 2.84, mAP: 39.9 },
s: { speed: 2.62, mAP: 43.7 },
m: { speed: 5.56, mAP: 49.8 },
l: { speed: 8.36, mAP: 52.9 },
x: { speed: 14.3, mAP: 54.7 },
},
"DAMO-YOLO": {
t: { speed: 2.32, mAP: 42.0 },
s: { speed: 3.45, mAP: 46.0 },
m: { speed: 5.09, mAP: 49.2 },
l: { speed: 7.18, mAP: 50.8 },
},
YOLOX: {
s: { speed: 2.56, mAP: 40.5 },
m: { speed: 5.43, mAP: 46.9 },
l: { speed: 9.04, mAP: 49.7 },
x: { speed: 16.1, mAP: 51.1 },
},
RTDETRv2: {
s: { speed: 5.03, mAP: 48.1 },
m: { speed: 7.51, mAP: 51.9 },
l: { speed: 9.76, mAP: 53.4 },
x: { speed: 15.03, mAP: 54.3 },
},
};
let chart = null; // chart variable will hold the reference to the current chart instance.
// Function to lighten a hex color by a specified amount.
function lightenHexColor(color, amount = 0.5) {
const r = parseInt(color.slice(1, 3), 16);
const g = parseInt(color.slice(3, 5), 16);
const b = parseInt(color.slice(5, 7), 16);
const newR = Math.min(255, Math.round(r + (255 - r) * amount));
const newG = Math.min(255, Math.round(g + (255 - g) * amount));
const newB = Math.min(255, Math.round(b + (255 - b) * amount));
return `#${newR.toString(16).padStart(2, "0")}${newG.toString(16).padStart(2, "0")}${newB.toString(16).padStart(2, "0")}`;
}
// Function to update the benchmarks chart.
function updateChart() {
if (chart) {
chart.destroy();
} // If a chart instance already exists, destroy it.
// Define a specific color map for models.
const colorMap = {
YOLO11: "#0b23a9",
YOLOv10: "#ff7f0e",
YOLOv9: "#2ca02c",
YOLOv8: "#d62728",
YOLOv7: "#9467bd",
"YOLOv6-3.0": "#8c564b",
YOLOv5: "#e377c2",
"PP-YOLOE+": "#7f7f7f",
"DAMO-YOLO": "#bcbd22",
YOLOX: "#17becf",
RTDETRv2: "#eccd22",
};
// Get the selected algorithms from the checkboxes.
const selectedAlgorithms = [
...document.querySelectorAll('input[name="algorithm"]:checked'),
].map((e) => e.value);
// Create the datasets for the selected algorithms.
const datasets = selectedAlgorithms.map((algorithm, i) => {
const baseColor =
colorMap[algorithm] || `hsl(${Math.random() * 360}, 70%, 50%)`;
const lineColor = i === 0 ? baseColor : lightenHexColor(baseColor, 0.6); // Lighten non-primary lines.
return {
label: algorithm, // Label for the data points in the legend.
data: Object.entries(data[algorithm]).map(([version, point]) => ({
x: point.speed, // Speed data points on the x-axis.
y: point.mAP, // mAP data points on the y-axis.
version: version.toUpperCase(), // Store the version as additional data.
})),
fill: false, // Don't fill the chart.
borderColor: lineColor, // Use the lightened color for the line.
tension: 0.3, // Smooth the line.
pointRadius: i === 0 ? 7 : 4, // Highlight primary dataset points.
pointHoverRadius: i === 0 ? 9 : 6, // Highlight hover for primary dataset.
pointBackgroundColor: lineColor, // Fill points with the line color.
pointBorderColor: "#ffffff", // Add a border around points for contrast.
borderWidth: i === 0 ? 3 : 1.5, // Slightly increase line size for the primary dataset.
};
});
if (datasets.length === 0) {
return;
} // If there are no selected algorithms, return without creating a new chart.
// Create a new chart instance.
chart = new Chart(document.getElementById("chart").getContext("2d"), {
type: "line", // Set the chart type to line.
data: { datasets },
options: {
plugins: {
legend: {
display: true,
position: "top",
labels: { color: "#808080" },
}, // Configure the legend.
tooltip: {
callbacks: {
label: (tooltipItem) => {
const { dataset, dataIndex } = tooltipItem;
const point = dataset.data[dataIndex];
return `${dataset.label}${point.version.toLowerCase()}: Speed = ${point.x}, mAP = ${point.y}`; // Custom tooltip label.
},
},
mode: "nearest",
intersect: false,
}, // Configure the tooltip.
},
interaction: { mode: "nearest", axis: "x", intersect: false }, // Configure the interaction mode.
scales: {
x: {
type: "linear",
position: "bottom",
title: {
display: true,
text: "Latency T4 TensorRT10 FP16 (ms/img)",
color: "#808080",
}, // X-axis title.
grid: { color: "#e0e0e0" }, // Grid line color.
ticks: { color: "#808080" }, // Tick label color.
},
y: {
title: { display: true, text: "mAP", color: "#808080" }, // Y-axis title.
grid: { color: "#e0e0e0" }, // Grid line color.
ticks: { color: "#808080" }, // Tick label color.
},
},
},
});
}
document$.subscribe(function () {
function initializeApp() {
if (typeof Chart !== "undefined") {
document
.querySelectorAll('input[name="algorithm"]')
.forEach((checkbox) =>
checkbox.addEventListener("change", updateChart),
);
updateChart();
} else {
setTimeout(initializeApp, 100); // Retry every 100ms
}
}
initializeApp(); // Initial chart rendering
});
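
The white-blend math in `lightenHexColor` above, restated as a small Python sketch for readers less fluent in JavaScript (not part of the commit):

```python
def lighten_hex_color(color: str, amount: float = 0.5) -> str:
    """Blend a #RRGGBB color toward white by `amount` in [0, 1], mirroring lightenHexColor."""
    r, g, b = (int(color[i : i + 2], 16) for i in (1, 3, 5))
    r, g, b = (min(255, round(c + (255 - c) * amount)) for c in (r, g, b))
    return f"#{r:02x}{g:02x}{b:02x}"


print(lighten_hex_color("#0b23a9", 0.6))  # lightened variant of the YOLO11 line color
```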

docs/overrides/javascript/extra.js
@@ -1,4 +1,4 @@
-// Light/Dark Mode -----------------------------------------------------------------------------------------------------
+// Apply theme colors based on dark/light mode
 const applyTheme = (isDark) => {
   document.body.setAttribute(
     "data-md-color-scheme",
@@ -10,28 +10,31 @@ const applyTheme = (isDark) => {
   );
 };

-// Check and apply auto theme
-const checkAutoTheme = () => {
+// Check and apply appropriate theme based on system/user preference
+const checkTheme = () => {
   const palette = JSON.parse(localStorage.getItem(".__palette") || "{}");
   if (palette.index === 0) {
     // Auto mode is selected
     applyTheme(window.matchMedia("(prefers-color-scheme: dark)").matches);
   }
 };

-// Event listeners for theme changes
-const mediaQueryList = window.matchMedia("(prefers-color-scheme: dark)");
-mediaQueryList.addListener(checkAutoTheme);
-// Initial theme check
-checkAutoTheme();
+// Watch for system theme changes
+window
+  .matchMedia("(prefers-color-scheme: dark)")
+  .addEventListener("change", checkTheme);

-// Auto theme input listener
+// Initialize theme handling on page load
 document.addEventListener("DOMContentLoaded", () => {
-  const autoThemeInput = document.getElementById("__palette_1");
-  autoThemeInput?.addEventListener("click", () => {
-    if (autoThemeInput.checked) setTimeout(checkAutoTheme);
-  });
+  // Watch for theme toggle changes
+  document
+    .getElementById("__palette_1")
+    ?.addEventListener(
+      "change",
+      (e) => e.target.checked && setTimeout(checkTheme),
+    );
+  // Initial theme check
+  checkTheme();
 });

 // Inkeep --------------------------------------------------------------------------------------------------------------
@@ -140,200 +143,3 @@ document.addEventListener("DOMContentLoaded", () => {
widgetContainer && addInkeepWidget("SearchBar", "#inkeepSearchBar");
});
});
// YOLO models chart ---------------------------------------------------------------------------------------------------
const data = {
YOLO11: {
n: { speed: 1.55, mAP: 39.5 },
s: { speed: 2.63, mAP: 47.0 },
m: { speed: 5.27, mAP: 51.4 },
l: { speed: 6.84, mAP: 53.2 },
x: { speed: 12.49, mAP: 54.7 },
},
YOLOv10: {
n: { speed: 1.56, mAP: 39.5 },
s: { speed: 2.66, mAP: 46.7 },
m: { speed: 5.48, mAP: 51.3 },
b: { speed: 6.54, mAP: 52.7 },
l: { speed: 8.33, mAP: 53.3 },
x: { speed: 12.2, mAP: 54.4 },
},
YOLOv9: {
t: { speed: 2.3, mAP: 37.8 },
s: { speed: 3.54, mAP: 46.5 },
m: { speed: 6.43, mAP: 51.5 },
c: { speed: 7.16, mAP: 52.8 },
e: { speed: 16.77, mAP: 55.1 },
},
YOLOv8: {
n: { speed: 1.47, mAP: 37.3 },
s: { speed: 2.66, mAP: 44.9 },
m: { speed: 5.86, mAP: 50.2 },
l: { speed: 9.06, mAP: 52.9 },
x: { speed: 14.37, mAP: 53.9 },
},
YOLOv7: { l: { speed: 6.84, mAP: 51.4 }, x: { speed: 11.57, mAP: 53.1 } },
"YOLOv6-3.0": {
n: { speed: 1.17, mAP: 37.5 },
s: { speed: 2.66, mAP: 45.0 },
m: { speed: 5.28, mAP: 50.0 },
l: { speed: 8.95, mAP: 52.8 },
},
YOLOv5: {
s: { speed: 1.92, mAP: 37.4 },
m: { speed: 4.03, mAP: 45.4 },
l: { speed: 6.61, mAP: 49.0 },
x: { speed: 11.89, mAP: 50.7 },
},
"PP-YOLOE+": {
t: { speed: 2.84, mAP: 39.9 },
s: { speed: 2.62, mAP: 43.7 },
m: { speed: 5.56, mAP: 49.8 },
l: { speed: 8.36, mAP: 52.9 },
x: { speed: 14.3, mAP: 54.7 },
},
"DAMO-YOLO": {
t: { speed: 2.32, mAP: 42.0 },
s: { speed: 3.45, mAP: 46.0 },
m: { speed: 5.09, mAP: 49.2 },
l: { speed: 7.18, mAP: 50.8 },
},
YOLOX: {
s: { speed: 2.56, mAP: 40.5 },
m: { speed: 5.43, mAP: 46.9 },
l: { speed: 9.04, mAP: 49.7 },
x: { speed: 16.1, mAP: 51.1 },
},
RTDETRv2: {
s: { speed: 5.03, mAP: 48.1 },
m: { speed: 7.51, mAP: 51.9 },
l: { speed: 9.76, mAP: 53.4 },
x: { speed: 15.03, mAP: 54.3 },
},
};
let chart = null; // chart variable will hold the reference to the current chart instance.
// Function to lighten a hex color by a specified amount.
function lightenHexColor(color, amount = 0.5) {
const r = parseInt(color.slice(1, 3), 16);
const g = parseInt(color.slice(3, 5), 16);
const b = parseInt(color.slice(5, 7), 16);
const newR = Math.min(255, Math.round(r + (255 - r) * amount));
const newG = Math.min(255, Math.round(g + (255 - g) * amount));
const newB = Math.min(255, Math.round(b + (255 - b) * amount));
return `#${newR.toString(16).padStart(2, "0")}${newG.toString(16).padStart(2, "0")}${newB.toString(16).padStart(2, "0")}`;
}
// Function to update the benchmarks chart.
function updateChart() {
if (chart) {
chart.destroy();
} // If a chart instance already exists, destroy it.
// Define a specific color map for models.
const colorMap = {
YOLO11: "#0b23a9",
YOLOv10: "#ff7f0e",
YOLOv9: "#2ca02c",
YOLOv8: "#d62728",
YOLOv7: "#9467bd",
"YOLOv6-3.0": "#8c564b",
YOLOv5: "#e377c2",
"PP-YOLOE+": "#7f7f7f",
"DAMO-YOLO": "#bcbd22",
YOLOX: "#17becf",
RTDETRv2: "#eccd22",
};
// Get the selected algorithms from the checkboxes.
const selectedAlgorithms = [
...document.querySelectorAll('input[name="algorithm"]:checked'),
].map((e) => e.value);
// Create the datasets for the selected algorithms.
const datasets = selectedAlgorithms.map((algorithm, i) => {
const baseColor =
colorMap[algorithm] || `hsl(${Math.random() * 360}, 70%, 50%)`;
const lineColor = i === 0 ? baseColor : lightenHexColor(baseColor, 0.6); // Lighten non-primary lines.
return {
label: algorithm, // Label for the data points in the legend.
data: Object.entries(data[algorithm]).map(([version, point]) => ({
x: point.speed, // Speed data points on the x-axis.
y: point.mAP, // mAP data points on the y-axis.
version: version.toUpperCase(), // Store the version as additional data.
})),
fill: false, // Don't fill the chart.
borderColor: lineColor, // Use the lightened color for the line.
tension: 0.3, // Smooth the line.
pointRadius: i === 0 ? 7 : 4, // Highlight primary dataset points.
pointHoverRadius: i === 0 ? 9 : 6, // Highlight hover for primary dataset.
pointBackgroundColor: lineColor, // Fill points with the line color.
pointBorderColor: "#ffffff", // Add a border around points for contrast.
borderWidth: i === 0 ? 3 : 1.5, // Slightly increase line size for the primary dataset.
};
});
if (datasets.length === 0) {
return;
} // If there are no selected algorithms, return without creating a new chart.
// Create a new chart instance.
chart = new Chart(document.getElementById("chart").getContext("2d"), {
type: "line", // Set the chart type to line.
data: { datasets },
options: {
plugins: {
legend: {
display: true,
position: "top",
labels: { color: "#808080" },
}, // Configure the legend.
tooltip: {
callbacks: {
label: (tooltipItem) => {
const { dataset, dataIndex } = tooltipItem;
const point = dataset.data[dataIndex];
return `${dataset.label}${point.version.toLowerCase()}: Speed = ${point.x}, mAP = ${point.y}`; // Custom tooltip label.
},
},
mode: "nearest",
intersect: false,
}, // Configure the tooltip.
},
interaction: { mode: "nearest", axis: "x", intersect: false }, // Configure the interaction mode.
scales: {
x: {
type: "linear",
position: "bottom",
title: {
display: true,
text: "Latency T4 TensorRT10 FP16 (ms/img)",
color: "#808080",
}, // X-axis title.
grid: { color: "#e0e0e0" }, // Grid line color.
ticks: { color: "#808080" }, // Tick label color.
},
y: {
title: { display: true, text: "mAP", color: "#808080" }, // Y-axis title.
grid: { color: "#e0e0e0" }, // Grid line color.
ticks: { color: "#808080" }, // Tick label color.
},
},
},
});
}
// Poll for Chart.js to load, then initialize checkboxes and chart
function initializeApp() {
if (typeof Chart !== "undefined") {
document
.querySelectorAll('input[name="algorithm"]')
.forEach((checkbox) => checkbox.addEventListener("change", updateChart));
updateChart();
} else {
setTimeout(initializeApp, 100); // Retry every 100ms
}
}
document.addEventListener("DOMContentLoaded", initializeApp); // Initial chart rendering on page load

tests/test_solutions.py
@@ -16,7 +16,7 @@ def test_major_solutions():
     safe_download(url=MAJOR_SOLUTIONS_DEMO)
     cap = cv2.VideoCapture("solutions_ci_demo.mp4")
     assert cap.isOpened(), "Error reading video file"
-    region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360)]
+    region_points = [(20, 400), (1080, 400), (1080, 360), (20, 360)]
     counter = solutions.ObjectCounter(region=region_points, model="yolo11n.pt", show=False)  # Test object counter
     heatmap = solutions.Heatmap(colormap=cv2.COLORMAP_PARULA, model="yolo11n.pt", show=False)  # Test heatmaps
     speed = solutions.SpeedEstimator(region=region_points, model="yolo11n.pt", show=False)  # Test speed estimator

ultralytics/__init__.py
@@ -1,6 +1,6 @@
 # Ultralytics YOLO 🚀, AGPL-3.0 license

-__version__ = "8.3.34"
+__version__ = "8.3.36"

 import os

ultralytics/cfg/solutions/default.yaml
@@ -2,7 +2,7 @@
 # Configuration for Ultralytics Solutions

 # Object counting settings
-region: # Object counting, queue or speed estimation region points. Default region points are [(20, 400), (1080, 404), (1080, 360), (20, 360)]
+region: # Object counting, queue or speed estimation region points. Default region points are [(20, 400), (1080, 400), (1080, 360), (20, 360)]
 show_in: True # Flag to display objects moving *into* the defined region
 show_out: True # Flag to display objects moving *out of* the defined region
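
The corrected default makes the region an axis-aligned rectangle (the stray `404` gave it a slanted top edge). A minimal counting loop that relies on this default (the video path is illustrative):

```python
import cv2

from ultralytics import solutions

counter = solutions.ObjectCounter(model="yolo11n.pt", show=False)  # uses the default region above

cap = cv2.VideoCapture("traffic.mp4")  # illustrative video path
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = counter.count(frame)  # updates in/out counts and draws the region
cap.release()
```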

ultralytics/data/augment.py
@@ -1591,7 +1591,7 @@ class LetterBox:
             labels["ratio_pad"] = (labels["ratio_pad"], (left, top))  # for evaluation

         if len(labels):
-            labels = self._update_labels(labels, ratio, dw, dh)
+            labels = self._update_labels(labels, ratio, left, top)
             labels["img"] = img
             labels["resized_shape"] = new_shape
             return labels
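
Why `left`/`top` rather than `dw`/`dh`: the integer `left`/`top` values are the padding offsets actually applied to the image (and are zero when `center=False`, where all padding goes bottom-right), while the float `dw`/`dh` values can differ after rounding. For reference, the callee looks roughly like this (a sketch from the surrounding class, not part of the diff):

```python
@staticmethod
def _update_labels(labels, ratio, padw, padh):
    """Scale instances to the resized image, then shift them by the applied padding offsets."""
    labels["instances"].convert_bbox(format="xyxy")
    labels["instances"].denormalize(*labels["img"].shape[:2][::-1])
    labels["instances"].scale(*ratio)
    labels["instances"].add_padding(padw, padh)
    return labels
```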

ultralytics/engine/exporter.py
@@ -501,8 +501,7 @@ class Exporter:
     @try_export
     def export_openvino(self, prefix=colorstr("OpenVINO:")):
         """YOLO OpenVINO export."""
-        # WARNING: numpy>=2.0.0 issue with OpenVINO on macOS https://github.com/ultralytics/ultralytics/pull/17221
-        check_requirements(f'openvino{"<=2024.0.0" if ARM64 else ">=2024.0.0"}')  # fix OpenVINO issue on ARM64
+        check_requirements("openvino>=2024.5.0")
         import openvino as ov

         LOGGER.info(f"\n{prefix} starting export with openvino {ov.__version__}...")
@@ -530,7 +529,7 @@ class Exporter:
         if self.args.int8:
             fq = str(self.file).replace(self.file.suffix, f"_int8_openvino_model{os.sep}")
             fq_ov = str(Path(fq) / self.file.with_suffix(".xml").name)
-            check_requirements("nncf>=2.8.0")
+            check_requirements("nncf>=2.14.0")
             import nncf

             def transform_fn(data_item) -> np.ndarray:
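
With the version floors raised, the INT8 path is exercised the usual way (standard export API; the calibration dataset here is illustrative):

```python
from ultralytics import YOLO

# INT8 OpenVINO export; NNCF (>=2.14.0) runs post-training quantization on the calibration data
model = YOLO("yolo11n.pt")
path = model.export(format="openvino", int8=True, data="coco8.yaml")
```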

ultralytics/engine/predictor.py
@@ -153,7 +153,11 @@ class BasePredictor:
             (list): A list of transformed images.
         """
         same_shapes = len({x.shape for x in im}) == 1
-        letterbox = LetterBox(self.imgsz, auto=same_shapes and self.model.pt, stride=self.model.stride)
+        letterbox = LetterBox(
+            self.imgsz,
+            auto=same_shapes and (self.model.pt or getattr(self.model, "dynamic", False)),
+            stride=self.model.stride,
+        )
         return [letterbox(image=x) for x in im]

     def postprocess(self, preds, img, orig_imgs):
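
The effect of the widened `auto` condition in isolation: with `auto=True`, `LetterBox` pads only to the nearest stride multiple instead of a full square, which dynamic-shape exports can now also benefit from. A quick check (shapes assume a 480x640 input and stride 32):

```python
import numpy as np

from ultralytics.data.augment import LetterBox

im = np.zeros((480, 640, 3), dtype=np.uint8)
print(LetterBox((640, 640), auto=False, stride=32)(image=im).shape)  # (640, 640, 3): padded square
print(LetterBox((640, 640), auto=True, stride=32)(image=im).shape)  # (480, 640, 3): minimal stride-aligned pad
```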

ultralytics/nn/tasks.py
@@ -960,10 +960,8 @@ def parse_model(d, ch, verbose=True):  # model_dict, input_channels(3)
         m = getattr(torch.nn, m[3:]) if "nn." in m else globals()[m]  # get module
         for j, a in enumerate(args):
             if isinstance(a, str):
-                try:
+                with contextlib.suppress(ValueError):
                     args[j] = locals()[a] if a in locals() else ast.literal_eval(a)
-                except ValueError:
-                    pass
         n = n_ = max(round(n * depth), 1) if n > 1 else n  # depth gain
         if m in {
             Classify,
@@ -1141,24 +1139,16 @@ def guess_model_task(model):
     # Guess from model cfg
     if isinstance(model, dict):
-        try:
+        with contextlib.suppress(Exception):
             return cfg2task(model)
-        except Exception:
-            pass

     # Guess from PyTorch model
     if isinstance(model, nn.Module):  # PyTorch model
         for x in "model.args", "model.model.args", "model.model.model.args":
-            try:
+            with contextlib.suppress(Exception):
                 return eval(x)["task"]
-            except Exception:
-                pass
         for x in "model.yaml", "model.model.yaml", "model.model.model.yaml":
-            try:
+            with contextlib.suppress(Exception):
                 return cfg2task(eval(x))
-            except Exception:
-                pass
         for m in model.modules():
             if isinstance(m, Segment):
                 return "segment"

ultralytics/solutions/object_counter.py
@@ -80,37 +80,33 @@ class ObjectCounter(BaseSolution):
                     else:  # Moving left
                         self.out_count += 1
                         self.classwise_counts[self.names[cls]]["OUT"] += 1
-                else:
-                    # Horizontal region: Compare y-coordinates to determine direction
-                    if current_centroid[1] > prev_position[1]:  # Moving downward
-                        self.in_count += 1
-                        self.classwise_counts[self.names[cls]]["IN"] += 1
-                    else:  # Moving upward
-                        self.out_count += 1
-                        self.classwise_counts[self.names[cls]]["OUT"] += 1
+                # Horizontal region: Compare y-coordinates to determine direction
+                elif current_centroid[1] > prev_position[1]:  # Moving downward
+                    self.in_count += 1
+                    self.classwise_counts[self.names[cls]]["IN"] += 1
+                else:  # Moving upward
+                    self.out_count += 1
+                    self.classwise_counts[self.names[cls]]["OUT"] += 1
                 self.counted_ids.append(track_id)

         elif len(self.region) > 2:  # Polygonal region
             polygon = self.Polygon(self.region)
             if polygon.contains(self.Point(current_centroid)):
-                # Determine motion direction for vertical or horizontal polygons
-                region_width = max([p[0] for p in self.region]) - min([p[0] for p in self.region])
-                region_height = max([p[1] for p in self.region]) - min([p[1] for p in self.region])
-                if region_width < region_height:  # Vertical-oriented polygon
-                    if current_centroid[0] > prev_position[0]:  # Moving right
-                        self.in_count += 1
-                        self.classwise_counts[self.names[cls]]["IN"] += 1
-                    else:  # Moving left
-                        self.out_count += 1
-                        self.classwise_counts[self.names[cls]]["OUT"] += 1
-                else:  # Horizontal-oriented polygon
-                    if current_centroid[1] > prev_position[1]:  # Moving downward
-                        self.in_count += 1
-                        self.classwise_counts[self.names[cls]]["IN"] += 1
-                    else:  # Moving upward
-                        self.out_count += 1
-                        self.classwise_counts[self.names[cls]]["OUT"] += 1
+                region_width = max(p[0] for p in self.region) - min(p[0] for p in self.region)
+                region_height = max(p[1] for p in self.region) - min(p[1] for p in self.region)
+                if (
+                    region_width < region_height
+                    and current_centroid[0] > prev_position[0]
+                    or region_width >= region_height
+                    and current_centroid[1] > prev_position[1]
+                ):  # Moving right (vertical region) or downward (horizontal region)
+                    self.in_count += 1
+                    self.classwise_counts[self.names[cls]]["IN"] += 1
+                else:  # Moving left or upward
+                    self.out_count += 1
+                    self.classwise_counts[self.names[cls]]["OUT"] += 1
                 self.counted_ids.append(track_id)

     def store_classwise_counts(self, cls):
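
The merged boolean preserves the old behavior: in tall (vertical) regions "IN" means moving right, in wide (horizontal) regions it means moving down. A standalone check of the combined predicate (all values made up):

```python
def is_in(region_width, region_height, prev, cur):
    """Mirror of the merged direction predicate in count_objects (a sketch, not the real method)."""
    return (
        region_width < region_height
        and cur[0] > prev[0]  # tall region: IN = moving right
        or region_width >= region_height
        and cur[1] > prev[1]  # wide region: IN = moving down
    )


print(is_in(1060, 40, (100, 390), (100, 395)))  # True: wide region, centroid moved down
print(is_in(40, 1060, (100, 390), (90, 390)))  # False: tall region, centroid moved left
```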

ultralytics/solutions/solutions.py
@@ -135,7 +135,7 @@ class BaseSolution:
     def initialize_region(self):
         """Initialize the counting region and line segment based on configuration settings."""
         if self.region is None:
-            self.region = [(20, 400), (1080, 404), (1080, 360), (20, 360)]
+            self.region = [(20, 400), (1080, 400), (1080, 360), (20, 360)]
         self.r_s = (
             self.Polygon(self.region) if len(self.region) >= 3 else self.LineString(self.region)
         )  # region or line

ultralytics/utils/torch_utils.py
@@ -675,7 +675,7 @@ def profile(input, ops, n=10, device=None, max_num_obj=0):
                         torch.randn(
                             x.shape[0],
                             max_num_obj,
-                            int(sum([(x.shape[-1] / s) * (x.shape[-2] / s) for s in m.stride.tolist()])),
+                            int(sum((x.shape[-1] / s) * (x.shape[-2] / s) for s in m.stride.tolist())),
                             device=device,
                             dtype=torch.float32,
                         )
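
The bracket removal swaps a list comprehension for a generator expression inside `sum()`: same result, no intermediate list. For a 640x640 input with strides (8, 16, 32) this is the familiar anchor count:

```python
h = w = 640
strides = [8.0, 16.0, 32.0]
num_anchors = int(sum((w / s) * (h / s) for s in strides))  # generator: no temporary list
print(num_anchors)  # 8400 = 80*80 + 40*40 + 20*20
```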
