From 29361e46d99c5b27f812a94fdb9b7b7adc5a5c9b Mon Sep 17 00:00:00 2001
From: Mohammed Yasin <32206511+Y-T-G@users.noreply.github.com>
Date: Mon, 25 Nov 2024 00:01:11 +0800
Subject: [PATCH 1/3] Fix labels padding for Letterbox with `center=False`
 (#17728)

---
 ultralytics/data/augment.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ultralytics/data/augment.py b/ultralytics/data/augment.py
index d092e3c370..bd821de28d 100644
--- a/ultralytics/data/augment.py
+++ b/ultralytics/data/augment.py
@@ -1591,7 +1591,7 @@ class LetterBox:
             labels["ratio_pad"] = (labels["ratio_pad"], (left, top))  # for evaluation

         if len(labels):
-            labels = self._update_labels(labels, ratio, dw, dh)
+            labels = self._update_labels(labels, ratio, left, top)
             labels["img"] = img
             labels["resized_shape"] = new_shape
             return labels

From 10b0bbd84795fc9aef16c37983bc3c0c51dbd501 Mon Sep 17 00:00:00 2001
From: Muhammad Rizwan Munawar
Date: Sun, 24 Nov 2024 21:01:54 +0500
Subject: [PATCH 2/3] Add https://youtu.be/c-S5M36XWmg &
 https://youtu.be/fd6u1TW_AGY to docs (#17722)

Co-authored-by: Glenn Jocher
---
 docs/en/datasets/pose/hand-keypoints.md | 11 +++++++++++
 docs/en/usage/simple-utilities.md | 11 +++++++++++
 2 files changed, 22 insertions(+)

diff --git a/docs/en/datasets/pose/hand-keypoints.md b/docs/en/datasets/pose/hand-keypoints.md
index dd3c19b1a4..559cdcec65 100644
--- a/docs/en/datasets/pose/hand-keypoints.md
+++ b/docs/en/datasets/pose/hand-keypoints.md
@@ -10,6 +10,17 @@ keywords: Hand KeyPoints, pose estimation, dataset, keypoints, MediaPipe, YOLO,

 The hand-keypoints dataset contains 26,768 images of hands annotated with keypoints, making it suitable for training models like Ultralytics YOLO for pose estimation tasks. The annotations were generated using the Google MediaPipe library, ensuring high [accuracy](https://www.ultralytics.com/glossary/accuracy) and consistency, and the dataset is compatible with [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics) formats.

+<p align="center">
+  <br>
+  <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/c-S5M36XWmg"
+    title="YouTube video player" frameborder="0"
+    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
+    allowfullscreen>
+  </iframe>
+  <br>
+  <strong>Watch:</strong> Hand Keypoints Estimation with Ultralytics YOLO11 | Human Hand Pose Estimation Tutorial
+</p>
+
 ## Hand Landmarks

 ![Hand Landmarks](https://github.com/ultralytics/docs/releases/download/0/hand_landmarks.jpg)

diff --git a/docs/en/usage/simple-utilities.md b/docs/en/usage/simple-utilities.md
index 45d3dc66c3..2026e5a216 100644
--- a/docs/en/usage/simple-utilities.md
+++ b/docs/en/usage/simple-utilities.md
@@ -458,6 +458,17 @@ image_with_obb = ann.result()

 #### Bounding Boxes Circle Annotation [Circle Label](https://docs.ultralytics.com/reference/utils/plotting/#ultralytics.utils.plotting.Annotator.circle_label)

+<p align="center">
+  <br>
+  <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/fd6u1TW_AGY"
+    title="YouTube video player" frameborder="0"
+    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
+    allowfullscreen>
+  </iframe>
+  <br>
+  <strong>Watch:</strong> In-Depth Guide to Text & Circle Annotations with Python Live Demos | Ultralytics Annotations 🚀
+</p>
+
 ```python
 import cv2

From cd6ef6105a3d43c8be70cd2d80aca4622d371a11 Mon Sep 17 00:00:00 2001
From: Mohammed Yasin <32206511+Y-T-G@users.noreply.github.com>
Date: Mon, 25 Nov 2024 00:05:51 +0800
Subject: [PATCH 3/3] Update `coco-seg.yaml` to `coco.yaml` (#17739)

Co-authored-by: Glenn Jocher
---
 README.md | 4 ++--
 README.zh-CN.md | 4 ++--
 docs/en/datasets/segment/coco.md | 8 ++++----
 docs/en/tasks/segment.md | 4 ++--
 4 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/README.md b/README.md
index cb29008492..0242ca3fb8 100644
--- a/README.md
+++ b/README.md
@@ -150,8 +150,8 @@ See [Segmentation Docs](https://docs.ultralytics.com/tasks/segment/) for usage e
 | [YOLO11l-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l-seg.pt) | 640 | 53.4 | 42.9 | 344.2 ± 3.2 | 7.8 ± 0.2 | 27.6 | 142.2 |
 | [YOLO11x-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-seg.pt) | 640 | 54.7 | 43.8 | 664.5 ± 3.2 | 15.8 ± 0.7 | 62.1 | 319.0 |

-- **mAPval** values are for single-model single-scale on [COCO val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val segment data=coco-seg.yaml device=0`
-- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val segment data=coco-seg.yaml batch=1 device=0|cpu`
+- **mAPval** values are for single-model single-scale on [COCO val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val segment data=coco.yaml device=0`
+- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val segment data=coco.yaml batch=1 device=0|cpu`

diff --git a/README.zh-CN.md b/README.zh-CN.md
index aec15a2e1d..47cbaaaa99 100644
--- a/README.zh-CN.md
+++ b/README.zh-CN.md
@@ -150,8 +150,8 @@ YOLO11 [检测](https://docs.ultralytics.com/tasks/detect/)、[分割](https://d
 | [YOLO11l-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l-seg.pt) | 640 | 53.4 | 42.9 | 344.2 ± 3.2 | 7.8 ± 0.2 | 27.6 | 142.2 |
 | [YOLO11x-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-seg.pt) | 640 | 54.7 | 43.8 | 664.5 ± 3.2 | 15.8 ± 0.7 | 62.1 | 319.0 |

-- **mAPval** 值针对单模型单尺度在 [COCO val2017](https://cocodataset.org/) 数据集上进行。 <br>复制命令 `yolo val segment data=coco-seg.yaml device=0`
-- **速度**在使用 [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) 实例的 COCO 验证图像上平均。 <br>复制命令 `yolo val segment data=coco-seg.yaml batch=1 device=0|cpu`
+- **mAPval** 值针对单模型单尺度在 [COCO val2017](https://cocodataset.org/) 数据集上进行。 <br>复制命令 `yolo val segment data=coco.yaml device=0`
+- **速度**在使用 [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) 实例的 COCO 验证图像上平均。 <br>复制命令 `yolo val segment data=coco.yaml batch=1 device=0|cpu`

diff --git a/docs/en/datasets/segment/coco.md b/docs/en/datasets/segment/coco.md
index 5ff52f46a2..2dd8a0f53a 100644
--- a/docs/en/datasets/segment/coco.md
+++ b/docs/en/datasets/segment/coco.md
@@ -56,14 +56,14 @@ To train a YOLO11n-seg model on the COCO-Seg dataset for 100 [epochs](https://ww
         model = YOLO("yolo11n-seg.pt")  # load a pretrained model (recommended for training)

         # Train the model
-        results = model.train(data="coco-seg.yaml", epochs=100, imgsz=640)
+        results = model.train(data="coco.yaml", epochs=100, imgsz=640)
         ```

     === "CLI"

         ```bash
         # Start training from a pretrained *.pt model
-        yolo segment train data=coco-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
+        yolo segment train data=coco.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
         ```

 ## Sample Images and Annotations
@@ -118,14 +118,14 @@ To train a YOLO11n-seg model on the COCO-Seg dataset for 100 epochs with an imag
         model = YOLO("yolo11n-seg.pt")  # load a pretrained model (recommended for training)

         # Train the model
-        results = model.train(data="coco-seg.yaml", epochs=100, imgsz=640)
+        results = model.train(data="coco.yaml", epochs=100, imgsz=640)
         ```

     === "CLI"

         ```bash
         # Start training from a pretrained *.pt model
-        yolo segment train data=coco-seg.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
+        yolo segment train data=coco.yaml model=yolo11n-seg.pt epochs=100 imgsz=640
         ```

 ### What are the key features of the COCO-Seg dataset?

diff --git a/docs/en/tasks/segment.md b/docs/en/tasks/segment.md
index c422c6fd62..33c19d9d3c 100644
--- a/docs/en/tasks/segment.md
+++ b/docs/en/tasks/segment.md
@@ -36,8 +36,8 @@ YOLO11 pretrained Segment models are shown here. Detect, Segment and Pose models
 {% include "macros/yolo-seg-perf.md" %}

-- **mAPval** values are for single-model single-scale on [COCO val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val segment data=coco-seg.yaml device=0`
-- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val segment data=coco-seg.yaml batch=1 device=0|cpu`
+- **mAPval** values are for single-model single-scale on [COCO val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val segment data=coco.yaml device=0`
+- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val segment data=coco.yaml batch=1 device=0|cpu`

 ## Train
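
Background on PATCH 1/3: `LetterBox` computes the total padding `(dw, dh)` and the per-side offsets `(left, top)`; with `center=False` all padding is applied to the right and bottom, so labels must be shifted by `(left, top) == (0, 0)` rather than by the raw `(dw, dh)`. A minimal sketch of that arithmetic, using a hypothetical `letterbox_offsets` helper rather than the actual Ultralytics implementation:

```python
# Hedged sketch of the LetterBox padding math; `letterbox_offsets` is a
# hypothetical helper, not the Ultralytics API. It shows why labels must be
# shifted by (left, top) instead of (dw, dh) when center=False.
def letterbox_offsets(shape, new_shape=(640, 640), center=True):
    """Return (ratio, dw, dh, left, top) for fitting an (h, w) `shape` into `new_shape`."""
    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])  # scale ratio
    new_unpad = (round(shape[1] * r), round(shape[0] * r))  # (w, h) after resize
    dw = new_shape[1] - new_unpad[0]  # total horizontal padding
    dh = new_shape[0] - new_unpad[1]  # total vertical padding
    if center:
        dw /= 2  # split padding evenly between left/right and top/bottom
        dh /= 2
    left = round(dw - 0.1) if center else 0  # with center=False the image sits at
    top = round(dh - 0.1) if center else 0   # the top-left; padding is all right/bottom
    return r, dw, dh, left, top
```

With `center=False` the image is never moved, so shifting labels by the full padding `(dw, dh)` would misplace them by the whole pad amount; passing `(left, top)` to `_update_labels` is the one-line change in `augment.py`.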