`ultralytics 8.0.65` YOLOv8 Pose models (#1347)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Mert Can Demir <validatedev@gmail.com>
Co-authored-by: Laughing <61612323+Laughing-q@users.noreply.github.com>
Co-authored-by: Fabian Greavu <fabiangreavu@gmail.com>
Co-authored-by: Yonghye Kwon <developer.0hye@gmail.com>
Co-authored-by: Eric Pedley <ericpedley@gmail.com>
Co-authored-by: JustasBart <40023722+JustasBart@users.noreply.github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Aarni Koskela <akx@iki.fi>
Co-authored-by: Sergio Sanchez <sergio.ssm.97@gmail.com>
Co-authored-by: Bogdan Gheorghe <112427971+bogdan-galileo@users.noreply.github.com>
Co-authored-by: Jaap van de Loosdrecht <jaap@vdlmv.nl>
Co-authored-by: Noobtoss <96134731+Noobtoss@users.noreply.github.com>
Co-authored-by: nerdyespresso <106761627+nerdyespresso@users.noreply.github.com>
Co-authored-by: Farid Inawan <frdteknikelektro@gmail.com>
Co-authored-by: Laughing-q <1185102784@qq.com>
Co-authored-by: Alexander Duda <Alexander.Duda@me.com>
Co-authored-by: Mehran Ghandehari <mehran.maps@gmail.com>
Co-authored-by: Snyk bot <snyk-bot@snyk.io>
Co-authored-by: majid nasiri <majnasai@gmail.com>
parent 9af3e69b1a
commit 1cb92d7f42
57 changed files with 1578 additions and 489 deletions
@@ -1,149 +0,0 @@
Key Point Estimation is a task that involves identifying the location of specific points in an image, usually referred
to as keypoints. The keypoints can represent various parts of the object such as joints, landmarks, or other distinctive
features. The locations of the keypoints are usually represented as a set of 2D `[x, y]` or 3D `[x, y, visible]`
coordinates.

<img width="1024" src="https://user-images.githubusercontent.com/26833433/212094133-6bb8c21c-3d47-41df-a512-81c5931054ae.png">

The output of a keypoint detector is a set of points that represent the keypoints on the object in the image, usually
along with the confidence scores for each point. Keypoint estimation is a good choice when you need to identify specific
parts of an object in a scene, and their location in relation to each other.

!!! tip "Tip"

    YOLOv8 _keypoints_ models use the `-kpts` suffix, i.e. `yolov8n-kpts.pt`. These models are trained on the COCO dataset and are suitable for a variety of keypoint estimation tasks.

[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/v8){ .md-button .md-button--primary}

## Train TODO

Train an OpenPose model on a custom dataset of keypoints using the OpenPose framework. For more information on how to
train an OpenPose model on a custom dataset, see the OpenPose Training page.

!!! example ""

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO('yolov8n.yaml')  # build a new model from YAML
        model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)
        model = YOLO('yolov8n.yaml').load('yolov8n.pt')  # build from YAML and transfer weights

        # Train the model
        model.train(data='coco128.yaml', epochs=100, imgsz=640)
        ```
    === "CLI"

        ```bash
        # Build a new model from YAML and start training from scratch
        yolo detect train data=coco128.yaml model=yolov8n.yaml epochs=100 imgsz=640

        # Start training from a pretrained *.pt model
        yolo detect train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640

        # Build a new model from YAML, transfer pretrained weights to it and start training
        yolo detect train data=coco128.yaml model=yolov8n.yaml pretrained=yolov8n.pt epochs=100 imgsz=640
        ```

## Val TODO

Validate trained YOLOv8n model accuracy on the COCO128 dataset. No argument need to passed as the `model` retains it's
training `data` and arguments as model attributes.

!!! example ""

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO('yolov8n.pt')  # load an official model
        model = YOLO('path/to/best.pt')  # load a custom model

        # Validate the model
        metrics = model.val()  # no arguments needed, dataset and settings remembered
        metrics.box.map    # map50-95
        metrics.box.map50  # map50
        metrics.box.map75  # map75
        metrics.box.maps   # a list contains map50-95 of each category
        ```
    === "CLI"

        ```bash
        yolo detect val model=yolov8n.pt  # val official model
        yolo detect val model=path/to/best.pt  # val custom model
        ```

## Predict TODO

Use a trained YOLOv8n model to run predictions on images.

!!! example ""

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO('yolov8n.pt')  # load an official model
        model = YOLO('path/to/best.pt')  # load a custom model

        # Predict with the model
        results = model('https://ultralytics.com/images/bus.jpg')  # predict on an image
        ```
    === "CLI"

        ```bash
        yolo detect predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'  # predict with official model
        yolo detect predict model=path/to/best.pt source='https://ultralytics.com/images/bus.jpg'  # predict with custom model
        ```

Read more details of `predict` in our [Predict](https://docs.ultralytics.com/modes/predict/) page.

## Export TODO

Export a YOLOv8n model to a different format like ONNX, CoreML, etc.

!!! example ""

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO('yolov8n.pt')  # load an official model
        model = YOLO('path/to/best.pt')  # load a custom trained

        # Export the model
        model.export(format='onnx')
        ```
    === "CLI"

        ```bash
        yolo export model=yolov8n.pt format=onnx  # export official model
        yolo export model=path/to/best.pt format=onnx  # export custom trained model
        ```

Available YOLOv8-pose export formats are in the table below. You can predict or validate directly on exported models,
i.e. `yolo predict model=yolov8n-pose.onnx`.

| Format | `format` Argument | Model | Metadata |
|---|---|---|---|
| [PyTorch](https://pytorch.org/) | - | `yolov8n.pt` | ✅ |
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n.torchscript` | ✅ |
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n.onnx` | ✅ |
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n_openvino_model/` | ✅ |
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n.engine` | ✅ |
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n.mlmodel` | ✅ |
| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n_saved_model/` | ✅ |
| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n.pb` | ❌ |
| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n.tflite` | ✅ |
| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n_edgetpu.tflite` | ✅ |
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` | ✅ |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | ✅ |
@@ -0,0 +1,175 @@
Pose estimation is a task that involves identifying the location of specific points in an image, usually referred
to as keypoints. The keypoints can represent various parts of the object such as joints, landmarks, or other distinctive
features. The locations of the keypoints are usually represented as a set of 2D `[x, y]` or 3D `[x, y, visible]`
coordinates.

<img width="1024" src="https://user-images.githubusercontent.com/26833433/212094133-6bb8c21c-3d47-41df-a512-81c5931054ae.png">

The output of a pose estimation model is a set of points that represent the keypoints on an object in the image, usually
along with the confidence scores for each point. Pose estimation is a good choice when you need to identify specific
parts of an object in a scene, and their location in relation to each other.

!!! tip "Tip"

    YOLOv8 _pose_ models use the `-pose` suffix, i.e. `yolov8n-pose.pt`. These models are trained on the [COCO keypoints](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco-pose.yaml) dataset and are suitable for a variety of pose estimation tasks.

## [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/v8)

YOLOv8 pretrained Pose models are shown here. Detect, Segment and Pose models are pretrained on
the [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco.yaml) dataset, while Classify
models are pretrained on
the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/ImageNet.yaml) dataset.

[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the latest
Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.

| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>pose<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
|---|---|---|---|---|---|---|---|
| [YOLOv8n-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-pose.pt) | 640 | - | 49.7 | - | - | 3.3 | 9.2 |
| [YOLOv8s-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-pose.pt) | 640 | - | 59.2 | - | - | 11.6 | 30.2 |
| [YOLOv8m-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-pose.pt) | 640 | - | 63.6 | - | - | 26.4 | 81.0 |
| [YOLOv8l-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-pose.pt) | 640 | - | 67.0 | - | - | 44.4 | 168.6 |
| [YOLOv8x-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose.pt) | 640 | - | 68.9 | - | - | 69.4 | 263.2 |
| [YOLOv8x-pose-p6](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose-p6.pt) | 1280 | - | 71.5 | - | - | 99.1 | 1066.4 |

- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO Keypoints val2017](http://cocodataset.org)
  dataset.
  <br>Reproduce by `yolo val pose data=coco-pose.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/)
  instance.
  <br>Reproduce by `yolo val pose data=coco8-pose.yaml batch=1 device=0|cpu`
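
The same accuracy check can be run from Python. A minimal sketch, assuming the pretrained `yolov8n-pose.pt` weights and the `coco-pose.yaml` dataset download automatically on first use:

```python
from ultralytics import YOLO

# Validate a pretrained pose model on COCO Keypoints val2017
model = YOLO('yolov8n-pose.pt')
metrics = model.val(data='coco-pose.yaml')  # box and pose metrics, as in the table above
```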

## Train

Train a YOLOv8-pose model on the COCO128-pose dataset.

!!! example ""

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO('yolov8n-pose.yaml')  # build a new model from YAML
        model = YOLO('yolov8n-pose.pt')  # load a pretrained model (recommended for training)
        model = YOLO('yolov8n-pose.yaml').load('yolov8n-pose.pt')  # build from YAML and transfer weights

        # Train the model
        model.train(data='coco128-pose.yaml', epochs=100, imgsz=640)
        ```
    === "CLI"

        ```bash
        # Build a new model from YAML and start training from scratch
        yolo pose train data=coco128-pose.yaml model=yolov8n-pose.yaml epochs=100 imgsz=640

        # Start training from a pretrained *.pt model
        yolo pose train data=coco128-pose.yaml model=yolov8n-pose.pt epochs=100 imgsz=640

        # Build a new model from YAML, transfer pretrained weights to it and start training
        yolo pose train data=coco128-pose.yaml model=yolov8n-pose.yaml pretrained=yolov8n-pose.pt epochs=100 imgsz=640
        ```

## Val

Validate trained YOLOv8n-pose model accuracy on the COCO128-pose dataset. No arguments are needed as the `model`
retains its training `data` and arguments as model attributes.

!!! example ""

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO('yolov8n-pose.pt')  # load an official model
        model = YOLO('path/to/best.pt')  # load a custom model

        # Validate the model
        metrics = model.val()  # no arguments needed, dataset and settings remembered
        metrics.box.map    # map50-95
        metrics.box.map50  # map50
        metrics.box.map75  # map75
        metrics.box.maps   # a list containing map50-95 for each category
        ```
    === "CLI"

        ```bash
        yolo pose val model=yolov8n-pose.pt  # val official model
        yolo pose val model=path/to/best.pt  # val custom model
        ```

## Predict

Use a trained YOLOv8n-pose model to run predictions on images.

!!! example ""

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO('yolov8n-pose.pt')  # load an official model
        model = YOLO('path/to/best.pt')  # load a custom model

        # Predict with the model
        results = model('https://ultralytics.com/images/bus.jpg')  # predict on an image
        ```
    === "CLI"

        ```bash
        yolo pose predict model=yolov8n-pose.pt source='https://ultralytics.com/images/bus.jpg'  # predict with official model
        yolo pose predict model=path/to/best.pt source='https://ultralytics.com/images/bus.jpg'  # predict with custom model
        ```

See full `predict` mode details in the [Predict](https://docs.ultralytics.com/modes/predict/) page.
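
The returned `Results` objects carry the predicted keypoints alongside the boxes. A minimal sketch of reading them back, under the assumption that `keypoints` is a tensor of shape `(num_instances, num_keypoints, 3)` holding `x, y, confidence` per point:

```python
from ultralytics import YOLO

model = YOLO('yolov8n-pose.pt')
results = model('https://ultralytics.com/images/bus.jpg')
for r in results:
    print(r.boxes.xyxy.shape)   # (num_instances, 4) bounding boxes
    print(r.keypoints.shape)    # (num_instances, num_keypoints, 3), assumed layout
```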

## Export

Export a YOLOv8n-pose model to a different format like ONNX, CoreML, etc.

!!! example ""

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO('yolov8n-pose.pt')  # load an official model
        model = YOLO('path/to/best.pt')  # load a custom trained model

        # Export the model
        model.export(format='onnx')
        ```
    === "CLI"

        ```bash
        yolo export model=yolov8n-pose.pt format=onnx  # export official model
        yolo export model=path/to/best.pt format=onnx  # export custom trained model
        ```

Available YOLOv8-pose export formats are in the table below. You can predict or validate directly on exported models,
i.e. `yolo predict model=yolov8n-pose.onnx`. Usage examples are shown for your model after export completes.

| Format | `format` Argument | Model | Metadata |
|---|---|---|---|
| [PyTorch](https://pytorch.org/) | - | `yolov8n-pose.pt` | ✅ |
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n-pose.torchscript` | ✅ |
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n-pose.onnx` | ✅ |
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n-pose_openvino_model/` | ✅ |
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n-pose.engine` | ✅ |
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n-pose.mlmodel` | ✅ |
| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n-pose_saved_model/` | ✅ |
| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n-pose.pb` | ❌ |
| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n-pose.tflite` | ✅ |
| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-pose_edgetpu.tflite` | ✅ |
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-pose_web_model/` | ✅ |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-pose_paddle_model/` | ✅ |

See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page.
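
As noted above, exported pose models can be used directly for prediction. A minimal sketch, assuming `yolov8n-pose.onnx` was produced by the export example:

```python
from ultralytics import YOLO

# Run the exported ONNX pose model directly on an image
onnx_model = YOLO('yolov8n-pose.onnx')
results = onnx_model('https://ultralytics.com/images/bus.jpg')
```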
@@ -0,0 +1,38 @@
# Ultralytics YOLO 🚀, GPL-3.0 license
# COCO 2017 dataset http://cocodataset.org by Microsoft
# Example usage: yolo train data=coco-pose.yaml
# parent
# ├── ultralytics
# └── datasets
#     └── coco-pose  ← downloads here (20.1 GB)


# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/coco-pose  # dataset root dir
train: train2017.txt  # train images (relative to 'path') 118287 images
val: val2017.txt  # val images (relative to 'path') 5000 images
test: test-dev2017.txt  # 20288 of 40670 images, submit to https://competitions.codalab.org/competitions/20794

# Keypoints
kpt_shape: [17, 3]  # number of keypoints, number of dims (2 for x,y or 3 for x,y,visible)
flip_idx: [0, 2, 1, 4, 3, 6, 5, 8, 7, 10, 9, 12, 11, 14, 13, 16, 15]

# Classes
names:
  0: person

# Download script/URL (optional)
download: |
  from ultralytics.yolo.utils.downloads import download
  from pathlib import Path

  # Download labels
  dir = Path(yaml['path'])  # dataset root dir
  url = 'https://github.com/ultralytics/yolov5/releases/download/v1.0/'
  urls = [url + 'coco2017labels-pose.zip']  # labels
  download(urls, dir=dir.parent)
  # Download data
  urls = ['http://images.cocodataset.org/zips/train2017.zip',  # 19G, 118k images
          'http://images.cocodataset.org/zips/val2017.zip',  # 1G, 5k images
          'http://images.cocodataset.org/zips/test2017.zip']  # 7G, 41k images (optional)
  download(urls, dir=dir / 'images', threads=3)
@@ -0,0 +1,25 @@
# Ultralytics YOLO 🚀, GPL-3.0 license
# COCO8-pose dataset (first 8 images from COCO train2017) by Ultralytics
# Example usage: yolo train data=coco8-pose.yaml
# parent
# ├── ultralytics
# └── datasets
#     └── coco8-pose  ← downloads here (1 MB)


# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/coco8-pose  # dataset root dir
train: images/train  # train images (relative to 'path') 4 images
val: images/val  # val images (relative to 'path') 4 images
test:  # test images (optional)

# Keypoints
kpt_shape: [17, 3]  # number of keypoints, number of dims (2 for x,y or 3 for x,y,visible)
flip_idx: [0, 2, 1, 4, 3, 6, 5, 8, 7, 10, 9, 12, 11, 14, 13, 16, 15]

# Classes
names:
  0: person

# Download script/URL (optional)
download: https://ultralytics.com/assets/coco8-pose.zip
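
For a custom keypoint dataset based on this template, `kpt_shape` and `flip_idx` should stay consistent: `flip_idx` lists, for each keypoint index, the index it swaps with under horizontal-flip augmentation, so it needs one entry per keypoint. A minimal sanity-check sketch (assumes PyYAML and a local copy of the file):

```python
import yaml

# Load the dataset config and check keypoint-related fields agree with each other
cfg = yaml.safe_load(open('coco8-pose.yaml'))
nkpt, ndim = cfg['kpt_shape']
assert ndim in (2, 3), 'keypoints must be x,y or x,y,visible'
assert len(cfg['flip_idx']) == nkpt, 'flip_idx needs one entry per keypoint'
```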
@@ -0,0 +1,57 @@
# Ultralytics YOLO 🚀, GPL-3.0 license
# YOLOv8-pose-p6 keypoints/pose estimation model with P3-P6 outputs. For Usage examples see https://docs.ultralytics.com/tasks/pose

# Parameters
nc: 1  # number of classes
kpt_shape: [17, 3]  # number of keypoints, number of dims (2 for x,y or 3 for x,y,visible)
scales: # model compound scaling constants, i.e. 'model=yolov8n-pose-p6.yaml' will call yolov8-pose-p6.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]
  s: [0.33, 0.50, 1024]
  m: [0.67, 0.75, 768]
  l: [1.00, 1.00, 512]
  x: [1.00, 1.25, 512]

# YOLOv8.0x6 backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]  # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]  # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]  # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]  # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [768, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [768, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 9-P6/64
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]  # 11

# YOLOv8.0x6 head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 8], 1, Concat, [1]]  # cat backbone P5
  - [-1, 3, C2, [768, False]]  # 14

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]  # cat backbone P4
  - [-1, 3, C2, [512, False]]  # 17

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]  # cat backbone P3
  - [-1, 3, C2, [256, False]]  # 20 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 17], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2, [512, False]]  # 23 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 14], 1, Concat, [1]]  # cat head P5
  - [-1, 3, C2, [768, False]]  # 26 (P5/32-large)

  - [-1, 1, Conv, [768, 3, 2]]
  - [[-1, 11], 1, Concat, [1]]  # cat head P6
  - [-1, 3, C2, [1024, False]]  # 29 (P6/64-xlarge)

  - [[20, 23, 26, 29], 1, Pose, [nc, kpt_shape]]  # Pose(P3, P4, P5, P6)
@@ -0,0 +1,47 @@
# Ultralytics YOLO 🚀, GPL-3.0 license
# YOLOv8-pose keypoints/pose estimation model. For Usage examples see https://docs.ultralytics.com/tasks/pose

# Parameters
nc: 1  # number of classes
kpt_shape: [17, 3]  # number of keypoints, number of dims (2 for x,y or 3 for x,y,visible)
scales: # model compound scaling constants, i.e. 'model=yolov8n-pose.yaml' will call yolov8-pose.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]
  s: [0.33, 0.50, 1024]
  m: [0.67, 0.75, 768]
  l: [1.00, 1.00, 512]
  x: [1.00, 1.25, 512]

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]  # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]  # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]  # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]  # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]  # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]  # cat backbone P4
  - [-1, 3, C2f, [512]]  # 12

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]  # cat backbone P3
  - [-1, 3, C2f, [256]]  # 15 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]  # 18 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]  # cat head P5
  - [-1, 3, C2f, [1024]]  # 21 (P5/32-large)

  - [[15, 18, 21], 1, Pose, [nc, kpt_shape]]  # Pose(P3, P4, P5)
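
The scaled variants of this config can be instantiated through the `YOLO` API for a quick check; a minimal sketch (assumes the `ultralytics` package from this release, and that `model.info()` reports layer and parameter counts):

```python
from ultralytics import YOLO

# Build the nano-scale pose model from the YAML above, without pretrained weights
model = YOLO('yolov8n-pose.yaml')
model.info()  # assumed to print layers, parameters and GFLOPs for the 'n' scale
```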
@@ -1,5 +1,5 @@
 # Ultralytics YOLO 🚀, GPL-3.0 license

-from ultralytics.yolo.v8 import classify, detect, segment
+from ultralytics.yolo.v8 import classify, detect, pose, segment

-__all__ = 'classify', 'segment', 'detect'
+__all__ = 'classify', 'segment', 'detect', 'pose'

@@ -0,0 +1,7 @@
# Ultralytics YOLO 🚀, GPL-3.0 license

from .predict import PosePredictor, predict
from .train import PoseTrainer, train
from .val import PoseValidator, val

__all__ = 'PoseTrainer', 'train', 'PoseValidator', 'val', 'PosePredictor', 'predict'
@@ -0,0 +1,103 @@
# Ultralytics YOLO 🚀, GPL-3.0 license

from ultralytics.yolo.engine.results import Results
from ultralytics.yolo.utils import DEFAULT_CFG, ROOT, ops
from ultralytics.yolo.utils.plotting import colors, save_one_box
from ultralytics.yolo.v8.detect.predict import DetectionPredictor


class PosePredictor(DetectionPredictor):

    def postprocess(self, preds, img, orig_img):
        preds = ops.non_max_suppression(preds,
                                        self.args.conf,
                                        self.args.iou,
                                        agnostic=self.args.agnostic_nms,
                                        max_det=self.args.max_det,
                                        classes=self.args.classes,
                                        nc=len(self.model.names))

        results = []
        for i, pred in enumerate(preds):
            orig_img = orig_img[i] if isinstance(orig_img, list) else orig_img
            shape = orig_img.shape
            pred[:, :4] = ops.scale_boxes(img.shape[2:], pred[:, :4], shape).round()
            pred_kpts = pred[:, 6:].view(len(pred), *self.model.kpt_shape) if len(pred) else pred[:, 6:]
            pred_kpts = ops.scale_coords(img.shape[2:], pred_kpts, shape)
            path, _, _, _, _ = self.batch
            img_path = path[i] if isinstance(path, list) else path
            results.append(
                Results(orig_img=orig_img,
                        path=img_path,
                        names=self.model.names,
                        boxes=pred[:, :6],
                        keypoints=pred_kpts))
        return results

    def write_results(self, idx, results, batch):
        p, im, im0 = batch
        log_string = ''
        if len(im.shape) == 3:
            im = im[None]  # expand for batch dim
        self.seen += 1
        imc = im0.copy() if self.args.save_crop else im0
        if self.source_type.webcam or self.source_type.from_img:  # batch_size >= 1
            log_string += f'{idx}: '
            frame = self.dataset.count
        else:
            frame = getattr(self.dataset, 'frame', 0)
        self.data_path = p
        self.txt_path = str(self.save_dir / 'labels' / p.stem) + ('' if self.dataset.mode == 'image' else f'_{frame}')
        log_string += '%gx%g ' % im.shape[2:]  # print string
        self.annotator = self.get_annotator(im0)

        det = results[idx].boxes  # TODO: make boxes inherit from tensors
        if len(det) == 0:
            return f'{log_string}(no detections), '
        for c in det.cls.unique():
            n = (det.cls == c).sum()  # detections per class
            log_string += f"{n} {self.model.names[int(c)]}{'s' * (n > 1)}, "

        kpts = reversed(results[idx].keypoints)
        for k in kpts:
            self.annotator.kpts(k, shape=results[idx].orig_shape)

        # write
        for j, d in enumerate(reversed(det)):
            c, conf, id = int(d.cls), float(d.conf), None if d.id is None else int(d.id.item())
            if self.args.save_txt:  # Write to file
                kpt = (kpts[j][:, :2] / d.orig_shape[[1, 0]]).reshape(-1).tolist()
                box = d.xywhn.view(-1).tolist()
                line = (c, *box, *kpt) + (conf, ) * self.args.save_conf + (() if id is None else (id, ))
                with open(f'{self.txt_path}.txt', 'a') as f:
                    f.write(('%g ' * len(line)).rstrip() % line + '\n')
            if self.args.save or self.args.show:  # Add bbox to image
                name = ('' if id is None else f'id:{id} ') + self.model.names[c]
                label = (f'{name} {conf:.2f}' if self.args.show_conf else name) if self.args.show_labels else None
                if self.args.boxes:
                    self.annotator.box_label(d.xyxy.squeeze(), label, color=colors(c, True))
            if self.args.save_crop:
                save_one_box(d.xyxy,
                             imc,
                             file=self.save_dir / 'crops' / self.model.model.names[c] / f'{self.data_path.stem}.jpg',
                             BGR=True)

        return log_string


def predict(cfg=DEFAULT_CFG, use_python=False):
    model = cfg.model or 'yolov8n-pose.pt'
    source = cfg.source if cfg.source is not None else ROOT / 'assets' if (ROOT / 'assets').exists() \
        else 'https://ultralytics.com/images/bus.jpg'

    args = dict(model=model, source=source)
    if use_python:
        from ultralytics import YOLO
        YOLO(model)(**args)
    else:
        predictor = PosePredictor(overrides=args)
        predictor.predict_cli()


if __name__ == '__main__':
    predict()
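
When `save_txt` is enabled, `write_results` above writes one line per instance: the class id, the normalized `xywh` box, the normalized keypoint `x, y` pairs, then optionally confidence and track id. A small parsing sketch under the assumption of 17 keypoints and no confidence or id columns:

```python
import numpy as np

# Hypothetical label line as written by write_results with save_txt (17 kpts, no conf, no id)
line = '0 0.5 0.5 0.2 0.6 ' + ' '.join(['0.1'] * 34)
vals = np.array(line.split(), dtype=float)
cls_id, box_xywhn, kpts_xyn = int(vals[0]), vals[1:5], vals[5:].reshape(17, 2)
```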
@@ -0,0 +1,170 @@
# Ultralytics YOLO 🚀, GPL-3.0 license

from copy import copy

import torch
import torch.nn as nn

from ultralytics.nn.tasks import PoseModel
from ultralytics.yolo import v8
from ultralytics.yolo.utils import DEFAULT_CFG
from ultralytics.yolo.utils.loss import KeypointLoss
from ultralytics.yolo.utils.metrics import OKS_SIGMA
from ultralytics.yolo.utils.ops import xyxy2xywh
from ultralytics.yolo.utils.plotting import plot_images, plot_results
from ultralytics.yolo.utils.tal import make_anchors
from ultralytics.yolo.utils.torch_utils import de_parallel
from ultralytics.yolo.v8.detect.train import Loss


# BaseTrainer python usage
class PoseTrainer(v8.detect.DetectionTrainer):

    def __init__(self, cfg=DEFAULT_CFG, overrides=None):
        if overrides is None:
            overrides = {}
        overrides['task'] = 'pose'
        super().__init__(cfg, overrides)

    def get_model(self, cfg=None, weights=None, verbose=True):
        model = PoseModel(cfg, ch=3, nc=self.data['nc'], data_kpt_shape=self.data['kpt_shape'], verbose=verbose)
        if weights:
            model.load(weights)

        return model

    def set_model_attributes(self):
        super().set_model_attributes()
        self.model.kpt_shape = self.data['kpt_shape']

    def get_validator(self):
        self.loss_names = 'box_loss', 'pose_loss', 'kobj_loss', 'cls_loss', 'dfl_loss'
        return v8.pose.PoseValidator(self.test_loader, save_dir=self.save_dir, args=copy(self.args))

    def criterion(self, preds, batch):
        if not hasattr(self, 'compute_loss'):
            self.compute_loss = PoseLoss(de_parallel(self.model))
        return self.compute_loss(preds, batch)

    def plot_training_samples(self, batch, ni):
        images = batch['img']
        kpts = batch['keypoints']
        cls = batch['cls'].squeeze(-1)
        bboxes = batch['bboxes']
        paths = batch['im_file']
        batch_idx = batch['batch_idx']
        plot_images(images,
                    batch_idx,
                    cls,
                    bboxes,
                    kpts=kpts,
                    paths=paths,
                    fname=self.save_dir / f'train_batch{ni}.jpg')

    def plot_metrics(self):
        plot_results(file=self.csv, pose=True)  # save results.png


# Criterion class for computing training losses
class PoseLoss(Loss):

    def __init__(self, model):  # model must be de-paralleled
        super().__init__(model)
        self.kpt_shape = model.model[-1].kpt_shape
        self.bce_pose = nn.BCEWithLogitsLoss()
        is_pose = self.kpt_shape == [17, 3]
        nkpt = self.kpt_shape[0]  # number of keypoints
        sigmas = torch.from_numpy(OKS_SIGMA).to(self.device) if is_pose else torch.ones(nkpt, device=self.device) / nkpt
        self.keypoint_loss = KeypointLoss(sigmas=sigmas)

    def __call__(self, preds, batch):
        loss = torch.zeros(5, device=self.device)  # box, cls, dfl, kpt_location, kpt_visibility
        feats, pred_kpts = preds if isinstance(preds[0], list) else preds[1]
        pred_distri, pred_scores = torch.cat([xi.view(feats[0].shape[0], self.no, -1) for xi in feats], 2).split(
            (self.reg_max * 4, self.nc), 1)

        # b, grids, ..
        pred_scores = pred_scores.permute(0, 2, 1).contiguous()
        pred_distri = pred_distri.permute(0, 2, 1).contiguous()
        pred_kpts = pred_kpts.permute(0, 2, 1).contiguous()

        dtype = pred_scores.dtype
        imgsz = torch.tensor(feats[0].shape[2:], device=self.device, dtype=dtype) * self.stride[0]  # image size (h,w)
        anchor_points, stride_tensor = make_anchors(feats, self.stride, 0.5)

        # targets
        batch_size = pred_scores.shape[0]
        batch_idx = batch['batch_idx'].view(-1, 1)
        targets = torch.cat((batch_idx, batch['cls'].view(-1, 1), batch['bboxes']), 1)
        targets = self.preprocess(targets.to(self.device), batch_size, scale_tensor=imgsz[[1, 0, 1, 0]])
        gt_labels, gt_bboxes = targets.split((1, 4), 2)  # cls, xyxy
        mask_gt = gt_bboxes.sum(2, keepdim=True).gt_(0)

        # pboxes
        pred_bboxes = self.bbox_decode(anchor_points, pred_distri)  # xyxy, (b, h*w, 4)
        pred_kpts = self.kpts_decode(anchor_points, pred_kpts.view(batch_size, -1, *self.kpt_shape))  # (b, h*w, 17, 3)

        _, target_bboxes, target_scores, fg_mask, target_gt_idx = self.assigner(
            pred_scores.detach().sigmoid(), (pred_bboxes.detach() * stride_tensor).type(gt_bboxes.dtype),
            anchor_points * stride_tensor, gt_labels, gt_bboxes, mask_gt)

        target_scores_sum = max(target_scores.sum(), 1)

        # cls loss
        # loss[1] = self.varifocal_loss(pred_scores, target_scores, target_labels) / target_scores_sum  # VFL way
        loss[3] = self.bce(pred_scores, target_scores.to(dtype)).sum() / target_scores_sum  # BCE

        # bbox loss
        if fg_mask.sum():
            target_bboxes /= stride_tensor
            loss[0], loss[4] = self.bbox_loss(pred_distri, pred_bboxes, anchor_points, target_bboxes, target_scores,
                                              target_scores_sum, fg_mask)
            keypoints = batch['keypoints'].to(self.device).float().clone()
            keypoints[..., 0] *= imgsz[1]
            keypoints[..., 1] *= imgsz[0]
            for i in range(batch_size):
                if fg_mask[i].sum():
                    idx = target_gt_idx[i][fg_mask[i]]
                    gt_kpt = keypoints[batch_idx.view(-1) == i][idx]  # (n, 51)
                    gt_kpt[..., 0] /= stride_tensor[fg_mask[i]]
                    gt_kpt[..., 1] /= stride_tensor[fg_mask[i]]
                    area = xyxy2xywh(target_bboxes[i][fg_mask[i]])[:, 2:].prod(1, keepdim=True)
                    pred_kpt = pred_kpts[i][fg_mask[i]]
                    kpt_mask = gt_kpt[..., 2] != 0
                    loss[1] += self.keypoint_loss(pred_kpt, gt_kpt, kpt_mask, area)  # pose loss
                    # kpt_score loss
                    if pred_kpt.shape[-1] == 3:
                        loss[2] += self.bce_pose(pred_kpt[..., 2], kpt_mask.float())  # keypoint obj loss

        loss[0] *= self.hyp.box  # box gain
        loss[1] *= self.hyp.pose / batch_size  # pose gain
        loss[2] *= self.hyp.kobj / batch_size  # kobj gain
        loss[3] *= self.hyp.cls  # cls gain
        loss[4] *= self.hyp.dfl  # dfl gain

        return loss.sum() * batch_size, loss.detach()  # loss(box, cls, dfl)

    def kpts_decode(self, anchor_points, pred_kpts):
        y = pred_kpts.clone()
        y[..., :2] *= 2.0
        y[..., 0] += anchor_points[:, [0]] - 0.5
        y[..., 1] += anchor_points[:, [1]] - 0.5
        return y


def train(cfg=DEFAULT_CFG, use_python=False):
    model = cfg.model or 'yolov8n-pose.yaml'
    data = cfg.data or 'coco8-pose.yaml'
    device = cfg.device if cfg.device is not None else ''

    args = dict(model=model, data=data, device=device)
    if use_python:
        from ultralytics import YOLO
        YOLO(model).train(**args)
    else:
        trainer = PoseTrainer(overrides=args)
        trainer.train()


if __name__ == '__main__':
    train()
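
Mirroring the `train()` helper above, `PoseTrainer` can also be driven directly with overrides; a minimal sketch (the epoch count and image size are illustrative):

```python
from ultralytics.yolo.v8.pose import PoseTrainer

# Train a nano pose model for a few epochs on the bundled COCO8-pose sample dataset
args = dict(model='yolov8n-pose.yaml', data='coco8-pose.yaml', epochs=3, imgsz=640)
trainer = PoseTrainer(overrides=args)
trainer.train()
```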
@@ -0,0 +1,213 @@
# Ultralytics YOLO 🚀, GPL-3.0 license

from pathlib import Path

import numpy as np
import torch

from ultralytics.yolo.utils import DEFAULT_CFG, LOGGER, ops
from ultralytics.yolo.utils.checks import check_requirements
from ultralytics.yolo.utils.metrics import OKS_SIGMA, PoseMetrics, box_iou, kpt_iou
from ultralytics.yolo.utils.plotting import output_to_target, plot_images
from ultralytics.yolo.v8.detect import DetectionValidator


class PoseValidator(DetectionValidator):

    def __init__(self, dataloader=None, save_dir=None, pbar=None, args=None):
        super().__init__(dataloader, save_dir, pbar, args)
        self.args.task = 'pose'
        self.metrics = PoseMetrics(save_dir=self.save_dir)

    def preprocess(self, batch):
        batch = super().preprocess(batch)
        batch['keypoints'] = batch['keypoints'].to(self.device).float()
        return batch

    def get_desc(self):
        return ('%22s' + '%11s' * 10) % ('Class', 'Images', 'Instances', 'Box(P', 'R', 'mAP50', 'mAP50-95)', 'Pose(P',
                                         'R', 'mAP50', 'mAP50-95)')

    def postprocess(self, preds):
        preds = ops.non_max_suppression(preds,
                                        self.args.conf,
                                        self.args.iou,
                                        labels=self.lb,
                                        multi_label=True,
                                        agnostic=self.args.single_cls,
                                        max_det=self.args.max_det,
                                        nc=self.nc)
        return preds

    def init_metrics(self, model):
        super().init_metrics(model)
        self.kpt_shape = self.data['kpt_shape']
        is_pose = self.kpt_shape == [17, 3]
        nkpt = self.kpt_shape[0]
        self.sigma = OKS_SIGMA if is_pose else np.ones(nkpt) / nkpt

    def update_metrics(self, preds, batch):
        # Metrics
        for si, pred in enumerate(preds):
            idx = batch['batch_idx'] == si
            cls = batch['cls'][idx]
            bbox = batch['bboxes'][idx]
            kpts = batch['keypoints'][idx]
            nl, npr = cls.shape[0], pred.shape[0]  # number of labels, predictions
            nk = kpts.shape[1]  # number of keypoints
            shape = batch['ori_shape'][si]
            correct_kpts = torch.zeros(npr, self.niou, dtype=torch.bool, device=self.device)  # init
            correct_bboxes = torch.zeros(npr, self.niou, dtype=torch.bool, device=self.device)  # init
            self.seen += 1

            if npr == 0:
                if nl:
                    self.stats.append((correct_bboxes, correct_kpts, *torch.zeros(
                        (2, 0), device=self.device), cls.squeeze(-1)))
                    if self.args.plots:
                        self.confusion_matrix.process_batch(detections=None, labels=cls.squeeze(-1))
                continue

            # Predictions
            if self.args.single_cls:
                pred[:, 5] = 0
            predn = pred.clone()
            ops.scale_boxes(batch['img'][si].shape[1:], predn[:, :4], shape,
                            ratio_pad=batch['ratio_pad'][si])  # native-space pred
            pred_kpts = predn[:, 6:].view(npr, nk, -1)
            ops.scale_coords(batch['img'][si].shape[1:], pred_kpts, shape, ratio_pad=batch['ratio_pad'][si])

            # Evaluate
            if nl:
                height, width = batch['img'].shape[2:]
                tbox = ops.xywh2xyxy(bbox) * torch.tensor(
                    (width, height, width, height), device=self.device)  # target boxes
                ops.scale_boxes(batch['img'][si].shape[1:], tbox, shape,
                                ratio_pad=batch['ratio_pad'][si])  # native-space labels
                tkpts = kpts.clone()
                tkpts[..., 0] *= width
                tkpts[..., 1] *= height
                tkpts = ops.scale_coords(batch['img'][si].shape[1:], tkpts, shape, ratio_pad=batch['ratio_pad'][si])
                labelsn = torch.cat((cls, tbox), 1)  # native-space labels
                correct_bboxes = self._process_batch(predn[:, :6], labelsn)
                correct_kpts = self._process_batch(predn[:, :6], labelsn, pred_kpts, tkpts)
                if self.args.plots:
                    self.confusion_matrix.process_batch(predn, labelsn)

            # Append correct_masks, correct_boxes, pconf, pcls, tcls
            self.stats.append((correct_bboxes, correct_kpts, pred[:, 4], pred[:, 5], cls.squeeze(-1)))

            # Save
            if self.args.save_json:
                self.pred_to_json(predn, batch['im_file'][si])
            # if self.args.save_txt:
            #     save_one_txt(predn, save_conf, shape, file=save_dir / 'labels' / f'{path.stem}.txt')

    def _process_batch(self, detections, labels, pred_kpts=None, gt_kpts=None):
        """
        Return correct prediction matrix
        Arguments:
            detections (array[N, 6]), x1, y1, x2, y2, conf, class
            labels (array[M, 5]), class, x1, y1, x2, y2
            pred_kpts (array[N, 51]), 51 = 17 * 3
            gt_kpts (array[N, 51])
        Returns:
            correct (array[N, 10]), for 10 IoU levels
        """
        if pred_kpts is not None and gt_kpts is not None:
            # `0.53` is from https://github.com/jin-s13/xtcocoapi/blob/master/xtcocotools/cocoeval.py#L384
            area = ops.xyxy2xywh(labels[:, 1:])[:, 2:].prod(1) * 0.53
            iou = kpt_iou(gt_kpts, pred_kpts, sigma=self.sigma, area=area)
        else:  # boxes
            iou = box_iou(labels[:, 1:], detections[:, :4])

        correct = np.zeros((detections.shape[0], self.iouv.shape[0])).astype(bool)
        correct_class = labels[:, 0:1] == detections[:, 5]
        for i in range(len(self.iouv)):
            x = torch.where((iou >= self.iouv[i]) & correct_class)  # IoU > threshold and classes match
            if x[0].shape[0]:
                matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]),
                                    1).cpu().numpy()  # [label, detect, iou]
                if x[0].shape[0] > 1:
                    matches = matches[matches[:, 2].argsort()[::-1]]
                    matches = matches[np.unique(matches[:, 1], return_index=True)[1]]
                    # matches = matches[matches[:, 2].argsort()[::-1]]
                    matches = matches[np.unique(matches[:, 0], return_index=True)[1]]
                correct[matches[:, 1].astype(int), i] = True
        return torch.tensor(correct, dtype=torch.bool, device=detections.device)

    def plot_val_samples(self, batch, ni):
        plot_images(batch['img'],
                    batch['batch_idx'],
                    batch['cls'].squeeze(-1),
                    batch['bboxes'],
                    kpts=batch['keypoints'],
                    paths=batch['im_file'],
                    fname=self.save_dir / f'val_batch{ni}_labels.jpg',
                    names=self.names)

    def plot_predictions(self, batch, preds, ni):
        pred_kpts = torch.cat([p[:, 6:].view(-1, *self.kpt_shape)[:15] for p in preds], 0)
        plot_images(batch['img'],
                    *output_to_target(preds, max_det=15),
                    kpts=pred_kpts,
                    paths=batch['im_file'],
                    fname=self.save_dir / f'val_batch{ni}_pred.jpg',
                    names=self.names)  # pred

    def pred_to_json(self, predn, filename):
        stem = Path(filename).stem
        image_id = int(stem) if stem.isnumeric() else stem
        box = ops.xyxy2xywh(predn[:, :4])  # xywh
        box[:, :2] -= box[:, 2:] / 2  # xy center to top-left corner
        for p, b in zip(predn.tolist(), box.tolist()):
            self.jdict.append({
                'image_id': image_id,
                'category_id': self.class_map[int(p[5])],
                'bbox': [round(x, 3) for x in b],
                'keypoints': p[6:],
                'score': round(p[4], 5)})

    def eval_json(self, stats):
        if self.args.save_json and self.is_coco and len(self.jdict):
            anno_json = self.data['path'] / 'annotations/person_keypoints_val2017.json'  # annotations
            pred_json = self.save_dir / 'predictions.json'  # predictions
            LOGGER.info(f'\nEvaluating pycocotools mAP using {pred_json} and {anno_json}...')
            try:  # https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoEvalDemo.ipynb
                check_requirements('pycocotools>=2.0.6')
                from pycocotools.coco import COCO  # noqa
                from pycocotools.cocoeval import COCOeval  # noqa

                for x in anno_json, pred_json:
                    assert x.is_file(), f'{x} file not found'
                anno = COCO(str(anno_json))  # init annotations api
                pred = anno.loadRes(str(pred_json))  # init predictions api (must pass string, not Path)
                for i, eval in enumerate([COCOeval(anno, pred, 'bbox'), COCOeval(anno, pred, 'keypoints')]):
                    if self.is_coco:
                        eval.params.imgIds = [int(Path(x).stem) for x in self.dataloader.dataset.im_files]  # im to eval
                    eval.evaluate()
                    eval.accumulate()
                    eval.summarize()
                    idx = i * 4 + 2
                    stats[self.metrics.keys[idx + 1]], stats[
                        self.metrics.keys[idx]] = eval.stats[:2]  # update mAP50-95 and mAP50
            except Exception as e:
                LOGGER.warning(f'pycocotools unable to run: {e}')
        return stats


def val(cfg=DEFAULT_CFG, use_python=False):
    model = cfg.model or 'yolov8n-pose.pt'
    data = cfg.data or 'coco128-pose.yaml'

    args = dict(model=model, data=data)
    if use_python:
        from ultralytics import YOLO
        YOLO(model).val(**args)
    else:
        validator = PoseValidator(args=args)
        validator(model=args['model'])


if __name__ == '__main__':
    val()
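
`_process_batch` above scores keypoint predictions with `kpt_iou`, an object-keypoint-similarity (OKS) measure. A standalone numpy sketch of the underlying COCO-style OKS formula, for illustration only (not the library implementation; visibility masking is omitted):

```python
import numpy as np

def oks(gt_kpts, pred_kpts, sigma, area, eps=1e-7):
    """OKS between one ground-truth and one predicted pose.
    gt_kpts, pred_kpts: (K, 2) pixel coordinates; sigma: (K,) per-keypoint constants; area: scaled box area."""
    d2 = ((gt_kpts - pred_kpts) ** 2).sum(-1)        # squared distance per keypoint
    e = d2 / ((2 * sigma) ** 2 * 2 * (area + eps))   # exp(-d^2 / (2 * area * (2*sigma)^2))
    return float(np.exp(-e).mean())
```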