`ultralytics 8.0.193` add Raspberry Pi guide to Docs (#5230)

Co-authored-by: Kayzwer <68285002+Kayzwer@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: DaanKwF <108017202+DaanKwF@users.noreply.github.com>
Branch: main
Glenn Jocher committed 2 years ago (via GitHub)
parent 9b1f35cbdc · commit 3e3980b2bc
14 changed files:

 1. docs/datasets/pose/index.md (12)
 2. docs/datasets/pose/tiger-pose.md (12)
 3. docs/guides/index.md (7)
 4. docs/guides/raspberry-pi.md (240)
 5. docs/modes/predict.md (14)
 6. mkdocs.yml (1)
 7. setup.py (2)
 8. ultralytics/__init__.py (2)
 9. ultralytics/data/build.py (2)
10. ultralytics/data/loaders.py (2)
11. ultralytics/data/utils.py (19)
12. ultralytics/engine/predictor.py (2)
13. ultralytics/engine/results.py (5)
14. ultralytics/utils/checks.py (2)

@@ -60,8 +60,7 @@ The `train` and `val` fields specify the paths to the directories containing the
`names` is a dictionary of class names. The order of the names should match the order of the object class indices in the YOLO dataset files.
(Optional) if the points are symmetric then need flip_idx, like left-right side of human or face.
For example if we assume five keypoints of facial landmark: [left eye, right eye, nose, left mouth, right mouth], and the original index is [0, 1, 2, 3, 4], then flip_idx is [1, 0, 2, 4, 3] (just exchange the left-right index, i.e 0-1 and 3-4, and do not modify others like nose in this example).
(Optional) If the keypoints are left-right symmetric (e.g. the sides of a human body or face), a `flip_idx` entry is required so that keypoints stay correctly labeled when images are flipped horizontally during augmentation. For example, if we assume five facial-landmark keypoints [left eye, right eye, nose, left mouth, right mouth] with original indices [0, 1, 2, 3, 4], then `flip_idx` is [1, 0, 2, 4, 3]: the left-right pairs (0-1 and 3-4) are exchanged and the others, such as the nose, are left unchanged.
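To make the `flip_idx` remapping concrete, here is a small illustrative sketch (the coordinates are made up and the snippet is not part of the dataset format itself) showing how the indices are applied after a horizontal flip:

```python
import numpy as np

# Hypothetical five facial-landmark keypoints, ordered
# [left eye, right eye, nose, left mouth, right mouth], as in the example above.
flip_idx = [1, 0, 2, 4, 3]  # swap the left-right pairs (0<->1, 3<->4), keep the nose (2)

kpts = np.array([
    [0.30, 0.40],  # left eye
    [0.70, 0.40],  # right eye
    [0.50, 0.55],  # nose
    [0.35, 0.75],  # left mouth corner
    [0.65, 0.75],  # right mouth corner
])

# Horizontal flip: mirror the normalized x coordinate, then reorder with flip_idx
# so that the "left eye" slot still holds the point on the subject's left side.
flipped = kpts.copy()
flipped[:, 0] = 1.0 - flipped[:, 0]
flipped = flipped[flip_idx]
print(flipped)
```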
## Usage
@@ -109,6 +108,15 @@ This section outlines the datasets that are compatible with Ultralytics YOLO for
- **Additional Notes**: COCO8-Pose is ideal for sanity checks and CI checks.
- [Read more about COCO8-Pose](./coco8-pose.md)
### Tiger-Pose
- **Description**: This [Ultralytics](https://ultralytics.com) animal pose dataset comprises 263 images sourced from a [YouTube Video](https://www.youtube.com/watch?v=MIBAT6BGE6U&pp=ygUbVGlnZXIgd2Fsa2luZyByZWZlcmVuY2UubXA0), with 210 images allocated for training and 53 for validation.
- **Label Format**: Same Ultralytics YOLO format as described above, with 12 keypoints for animal pose and no visibility dimension.
- **Number of Classes**: 1 (Tiger).
- **Keypoints**: 12 keypoints.
- **Usage**: Great for animal pose or any other pose that is not human-based.
- [Read more about Tiger-Pose](./tiger-pose.md)
### Adding your own dataset
If you have your own dataset and would like to use it for training pose estimation models with Ultralytics YOLO format, ensure that it follows the format specified above under "Ultralytics YOLO format". Convert your annotations to the required format and specify the paths, number of classes, and class names in the YAML configuration file.
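As a rough sketch of what such a configuration could contain (the dataset root, split paths and class name below are hypothetical placeholders; compare against an existing file such as `coco8-pose.yaml` for the authoritative field set), the YAML can also be generated programmatically:

```python
import yaml

# Hypothetical pose-dataset configuration; adjust paths, keypoint shape and names to your data.
cfg = {
    'path': 'datasets/my-pose',    # dataset root directory (placeholder)
    'train': 'images/train',       # training images, relative to 'path'
    'val': 'images/val',           # validation images, relative to 'path'
    'kpt_shape': [5, 3],           # 5 keypoints, each stored as (x, y, visibility)
    'flip_idx': [1, 0, 2, 4, 3],   # only needed when keypoints are left-right symmetric
    'names': {0: 'face'},          # class index -> class name
}

with open('my-pose.yaml', 'w') as f:
    yaml.safe_dump(cfg, f, sort_keys=False)
```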

@@ -7,7 +7,8 @@ keywords: Ultralytics, YOLOv8, pose detection, COCO8-Pose dataset, dataset, mode
# Tiger-Pose Dataset
## Introduction
[Ultralytics](https://ultralytics.com) introduces the Tiger-Pose dataset, a versatile collection designed for pose estimation tasks. This dataset comprises 250 images sourced from a [YouTube Video](https://www.youtube.com/watch?v=MIBAT6BGE6U&pp=ygUbVGlnZXIgd2Fsa2luZyByZWZlcmVuY2UubXA0), with 210 images allocated for training and 53 for validation. It serves as an excellent resource for testing and troubleshooting pose estimation algorithm.
[Ultralytics](https://ultralytics.com) introduces the Tiger-Pose dataset, a versatile collection designed for pose estimation tasks. This dataset comprises 263 images sourced from a [YouTube Video](https://www.youtube.com/watch?v=MIBAT6BGE6U&pp=ygUbVGlnZXIgd2Fsa2luZyByZWZlcmVuY2UubXA0), with 210 images allocated for training and 53 for validation. It serves as an excellent resource for testing and troubleshooting pose estimation algorithms.
Despite its manageable size of 210 images, the Tiger-Pose dataset offers diversity, making it suitable for assessing training pipelines, identifying potential errors, and serving as a valuable preliminary step before working with larger datasets for pose estimation.
@@ -18,7 +19,6 @@ and [YOLOv8](https://github.com/ultralytics/ultralytics).
A YAML (YAML Ain't Markup Language) file serves as the means to specify the configuration details of a dataset. It encompasses crucial data such as file paths, class definitions, and other pertinent information. Specifically, for the `tiger-pose.yaml` file, you can check [Ultralytics Tiger-Pose Dataset Configuration File](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/tiger-pose.yaml).
!!! example "ultralytics/cfg/datasets/tiger-pose.yaml"
```yaml
@@ -54,7 +54,7 @@ To train a YOLOv8n-pose model on the Tiger-Pose dataset for 100 epochs with an i
Here are some examples of images from the Tiger-Pose dataset, along with their corresponding annotations:
<img src="https://user-images.githubusercontent.com/62513924/272491921-c963d2bf-505f-4a15-abd7-259de302cffa.jpg" alt="Dataset sample image" width="800">
<img src="https://user-images.githubusercontent.com/62513924/272491921-c963d2bf-505f-4a15-abd7-259de302cffa.jpg" alt="Dataset sample image" width="100%">
- **Mosaiced Image**: This image demonstrates a training batch composed of mosaiced dataset images. Mosaicing is a technique used during training that combines multiple images into a single image to increase the variety of objects and scenes within each training batch. This helps improve the model's ability to generalize to different object sizes, aspect ratios, and contexts.
@@ -72,17 +72,17 @@ The example showcases the variety and complexity of the images in the Tiger-Pose
# Load a model
model = YOLO('path/to/best.pt') # load a tiger-pose trained model
# Run Inference
# Run inference
results = model.predict(source="https://www.youtube.com/watch?v=MIBAT6BGE6U&pp=ygUYdGlnZXIgd2Fsa2luZyByZWZlcmVuY2Ug", show=True)
```
=== "CLI"
```bash
# Start training from a pretrained *.pt model
# Run inference using a tiger-pose trained model
yolo task=pose mode=predict source="https://www.youtube.com/watch?v=MIBAT6BGE6U&pp=ygUYdGlnZXIgd2Fsa2luZyByZWZlcmVuY2Ug" show=True model="path/to/best.pt"
```
## Citations and Acknowledgments
The dataset has been released under the [AGPL-3.0 License](https://github.com/ultralytics/ultralytics/blob/main/LICENSE).

@@ -16,10 +16,11 @@ Here's a compilation of in-depth guides to help you master different aspects of
* [K-Fold Cross Validation](kfold-cross-validation.md) 🚀 NEW: Learn how to improve model generalization using K-Fold cross-validation technique.
* [Hyperparameter Tuning](hyperparameter-tuning.md) 🚀 NEW: Discover how to optimize your YOLO models by fine-tuning hyperparameters using the Tuner class and genetic evolution algorithms.
* [Using YOLOv8 with SAHI for Sliced Inference](sahi-tiled-inference.md) 🚀 NEW: Comprehensive guide on leveraging SAHI's sliced inference capabilities with YOLOv8 for object detection in high-resolution images.
* [SAHI Tiled Inference](sahi-tiled-inference.md) 🚀 NEW: Comprehensive guide on leveraging SAHI's sliced inference capabilities with YOLOv8 for object detection in high-resolution images.
* [AzureML Quickstart](azureml-quickstart.md) 🚀 NEW: Get up and running with Ultralytics YOLO models on Microsoft's Azure Machine Learning platform. Learn how to train, deploy, and scale your object detection projects in the cloud.
* [Conda Quickstart](conda-quickstart.md) 🚀 NEW: Step-by-step guide to setting up a Conda environment for Ultralytics. Learn how to install and start using the Ultralytics package efficiently with Conda.
* [Docker Quickstart](docker-quickstart.md) 🚀 NEW: Complete guide to setting up and using Ultralytics YOLO models with Docker. Learn how to install Docker, manage GPU support, and run YOLO models in isolated containers for consistent development and deployment.
* [Conda Quickstart](conda-quickstart.md) 🚀 NEW: Step-by-step guide to setting up a [Conda](https://anaconda.org/conda-forge/ultralytics) environment for Ultralytics. Learn how to install and start using the Ultralytics package efficiently with Conda.
* [Docker Quickstart](docker-quickstart.md) 🚀 NEW: Complete guide to setting up and using Ultralytics YOLO models with [Docker](https://hub.docker.com/r/ultralytics/ultralytics). Learn how to install Docker, manage GPU support, and run YOLO models in isolated containers for consistent development and deployment.
* [Raspberry Pi](raspberry-pi.md) 🚀 NEW: Quickstart tutorial to run YOLO models on the latest [Raspberry Pi](https://www.raspberrypi.com/) hardware.
## Contribute to Our Guides

@@ -0,0 +1,240 @@
---
comments: true
description: Quick start guide to setting up YOLO on a Raspberry Pi with a Pi Camera using the libcamera stack. Detailed comparison between Raspberry Pi 3, 4 and 5 models.
keywords: Ultralytics, YOLO, Raspberry Pi, Pi Camera, libcamera, quick start guide, Raspberry Pi 4 vs Raspberry Pi 5, YOLO on Raspberry Pi, hardware setup, machine learning, AI
---
# Quick Start Guide: Raspberry Pi and Pi Camera with YOLOv5 and YOLOv8
This comprehensive guide aims to expedite your journey with YOLO object detection models on a [Raspberry Pi](https://www.raspberrypi.com/) using a [Pi Camera](https://www.raspberrypi.com/products/camera-module-v2/). Whether you're a student, hobbyist, or a professional, this guide is designed to get you up and running in less than 30 minutes. The instructions here are rigorously tested to minimize setup issues, allowing you to focus on utilizing YOLO for your specific projects.
<p align="center">
<br>
<iframe width="720" height="405" src="https://www.youtube.com/embed/yul4gq_LrOI"
title="Introducing Raspberry Pi 5" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Raspberry Pi 5 updates and improvements.
</p>
## Prerequisites
- Raspberry Pi 3 or 4
- Pi Camera
- 64-bit Raspberry Pi Operating System
Connect the Pi Camera to your Raspberry Pi via a CSI cable and install the 64-bit Raspberry Pi Operating System. Verify your camera with the following command:
```bash
libcamera-hello
```
You should see a video feed from your camera.
## Choose Your YOLO Version: YOLOv5 or YOLOv8
This guide offers you the flexibility to start with either [YOLOv5](https://github.com/ultralytics/yolov5) or [YOLOv8](https://github.com/ultralytics/ultralytics). Both versions have their unique advantages and use-cases. The choice is yours, but remember, the guide's aim is not just quick setup but also a robust foundation for your future work in object detection.
## Hardware Specifics: Raspberry Pi 3, 4 and 5
Raspberry Pi 3, 4 and 5 have distinct hardware specifications, and the YOLO installation and configuration process can vary slightly depending on which model you're using.
### Raspberry Pi 3
- **CPU**: 1.2GHz Quad-Core ARM Cortex-A53
- **RAM**: 1GB LPDDR2
- **USB Ports**: 4 x USB 2.0
- **Network**: Ethernet & Wi-Fi 802.11n
- **Performance**: Generally slower, may require lighter YOLO models for real-time processing
- **Power Requirement**: 2.5A power supply
- **Official Documentation**: [Raspberry Pi 3 Documentation](https://www.raspberrypi.org/documentation/hardware/raspberrypi/bcm2837/README.md)
### Raspberry Pi 4
- **CPU**: 1.5GHz Quad-core 64-bit ARM Cortex-A72 CPU
- **RAM**: Options of 2GB, 4GB or 8GB LPDDR4
- **USB Ports**: 2 x USB 2.0, 2 x USB 3.0
- **Network**: Gigabit Ethernet & Wi-Fi 802.11ac
- **Performance**: Faster, capable of running more complex YOLO models in real-time
- **Power Requirement**: 3.0A USB-C power supply
- **Official Documentation**: [Raspberry Pi 4 Documentation](https://www.raspberrypi.org/documentation/hardware/raspberrypi/bcm2711/README.md)
### Raspberry Pi 5
- **CPU**: 2.4GHz Quad-core 64-bit Arm Cortex-A76 CPU
- **GPU**: VideoCore VII, supporting OpenGL ES 3.1, Vulkan 1.2
- **Display Output**: Dual 4Kp60 HDMI
- **Decoder**: 4Kp60 HEVC
- **Network**: Gigabit Ethernet with PoE+ support, Dual-band 802.11ac Wi-Fi®, Bluetooth 5.0 / BLE
- **USB Ports**: 2 x USB 3.0, 2 x USB 2.0
- **Other Features**: High-speed microSD card interface with SDR104 mode, 2 × 4-lane MIPI camera/display transceivers, PCIe 2.0 x1 interface, standard 40-pin GPIO header, real-time clock, power button
- **Power Requirement**: Specifics not yet available, expected to require a higher amperage supply
- **Official Documentation**: [Raspberry Pi 5 Documentation](https://www.raspberrypi.com/news/introducing-raspberry-pi-5/)
Please make sure to follow the instructions specific to your Raspberry Pi model to ensure a smooth setup process.
## Quick Start with YOLOv5
This section outlines how to set up YOLOv5 on a Raspberry Pi 3 or 4 with a Pi Camera. These steps are designed to be compatible with the libcamera camera stack introduced in Raspberry Pi OS Bullseye.
### Install Necessary Packages
1. Update the Raspberry Pi:
```bash
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get autoremove -y
```
2. Clone the YOLOv5 repository:
```bash
cd ~
git clone https://github.com/ultralytics/yolov5.git
```
3. Install the required dependencies:
```bash
cd ~/yolov5
pip3 install -r requirements.txt
```
4. For Raspberry Pi 3, install compatible versions of PyTorch and Torchvision (skip for Raspberry Pi 4):
```bash
pip3 uninstall torch torchvision
pip3 install torch==1.11.0 torchvision==0.12.0
```
### Modify `detect.py`
To enable TCP streams via SSH or the CLI, minor modifications are needed in `detect.py`.
1. Open `detect.py`:
```bash
sudo nano ~/yolov5/detect.py
```
2. Find and modify the `is_url` line to accept TCP streams:
```python
is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://', 'tcp://'))
```
3. Comment out the `view_img` line:
```python
# view_img = check_imshow(warn=True)
```
4. Save and exit:
```bash
CTRL + O -> ENTER -> CTRL + X
```
### Initiate TCP Stream with Libcamera
1. Start the TCP stream:
```bash
libcamera-vid -n -t 0 --width 1280 --height 960 --framerate 1 --inline --listen -o tcp://127.0.0.1:8888
```
Keep this terminal session running for the next steps.
### Perform YOLOv5 Inference
1. Run the YOLOv5 detection:
```bash
cd ~/yolov5
python3 detect.py --source=tcp://127.0.0.1:8888
```
## Quick Start with YOLOv8
Follow this section if you are interested in setting up YOLOv8 instead. The steps are quite similar but are tailored for YOLOv8's specific needs.
### Install Necessary Packages
1. Update the Raspberry Pi:
```bash
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get autoremove -y
```
2. Install YOLOv8:
```bash
pip3 install ultralytics
```
3. Reboot:
```bash
sudo reboot
```
### Modify `build.py`
Just like YOLOv5, YOLOv8 also needs minor modifications to accept TCP streams.
1. Open `build.py` located in the Ultralytics package folder:
```bash
sudo nano /home/pi/.local/lib/pythonX.X/site-packages/ultralytics/data/build.py
```
2. Find and modify the `is_url` line to accept TCP streams:
```python
is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://', 'tcp://'))
```
3. Save and exit:
```bash
CTRL + O -> ENTER -> CTRL + X
```
### Initiate TCP Stream with Libcamera
1. Start the TCP stream:
```bash
libcamera-vid -n -t 0 --width 1280 --height 960 --framerate 1 --inline --listen -o tcp://127.0.0.1:8888
```
### Perform YOLOv8 Inference
To perform inference with YOLOv8, you can use the following Python code snippet:
```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')
results = model('tcp://127.0.0.1:8888', stream=True)  # stream=True yields results frame by frame

while True:
    for result in results:
        boxes = result.boxes  # bounding boxes for the current frame
        probs = result.probs  # class probabilities (classification models only)
```
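As a small follow-up sketch (assuming a detection model such as `yolov8n.pt`, for which `probs` stays `None`), the per-frame results can be inspected like this:

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')

# stream=True returns a generator that yields one Results object per frame.
for result in model('tcp://127.0.0.1:8888', stream=True):
    for box in result.boxes:
        cls_id = int(box.cls[0])                          # predicted class index
        print(result.names[cls_id], float(box.conf[0]))   # class name and confidence
```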
## Next Steps
Congratulations on successfully setting up YOLO on your Raspberry Pi! For further learning and support, visit [Ultralytics](https://ultralytics.com/) and [KashmirWorldFoundation](https://www.kashmirworldfoundation.org/).
## Acknowledgements and Citations
This guide was initially created by Daan Eeltink for Kashmir World Foundation, an organization dedicated to the use of YOLO for the conservation of endangered species. We acknowledge their pioneering work and educational focus in the realm of object detection technologies.
For more information about Kashmir World Foundation's activities, you can visit their [website](https://www.kashmirworldfoundation.org/).

@@ -25,10 +25,10 @@ In the world of machine learning and computer vision, the process of making sens
## Real-world Applications
| Manufacturing | Sports | Safety |
|:-----------------------------------:|:-----------------------:|:-----------:|
| ![Vehicle Spare Parts Detection](https://github.com/RizwanMunawar/ultralytics/assets/62513924/a0f802a8-0776-44cf-8f17-93974a4a28a1) | ![Football Player Detection](https://github.com/RizwanMunawar/ultralytics/assets/62513924/7d320e1f-fc57-4d7f-a691-78ee579c3442)| ![People Fall Detection](https://github.com/RizwanMunawar/ultralytics/assets/62513924/86437c4a-3227-4eee-90ef-9efb697bdb43) |
| Vehicle Spare Parts Detection | Football Player Detection | People Fall Detection |
| Manufacturing | Sports | Safety |
|:-----------------------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------------------------:|
| ![Vehicle Spare Parts Detection](https://github.com/RizwanMunawar/ultralytics/assets/62513924/a0f802a8-0776-44cf-8f17-93974a4a28a1) | ![Football Player Detection](https://github.com/RizwanMunawar/ultralytics/assets/62513924/7d320e1f-fc57-4d7f-a691-78ee579c3442) | ![People Fall Detection](https://github.com/RizwanMunawar/ultralytics/assets/62513924/86437c4a-3227-4eee-90ef-9efb697bdb43) |
| Vehicle Spare Parts Detection | Football Player Detection | People Fall Detection |
## Why Use Ultralytics YOLO for Inference?
@@ -110,7 +110,7 @@ YOLOv8 can process different types of input sources for inference, as shown in t
| directory ✅ | `'path/'` | `str` or `Path` | Path to a directory containing images or videos. |
| glob ✅ | `'path/*.jpg'` | `str` | Glob pattern to match multiple files. Use the `*` character as a wildcard. |
| YouTube ✅ | `'https://youtu.be/LNwODJXcvt4'` | `str` | URL to a YouTube video. |
| stream ✅ | `'rtsp://example.com/media.mp4'` | `str` | URL for streaming protocols such as RTSP, RTMP, or an IP address. |
| stream ✅ | `'rtsp://example.com/media.mp4'` | `str` | URL for streaming protocols such as RTSP, RTMP, TCP, or an IP address. |
| multi-stream ✅ | `'list.streams'` | `str` or `Path` | `*.streams` text file with one stream URL per row, i.e. 8 streams will run at batch-size 8. |
Below are code examples for using each source type:
@@ -306,7 +306,7 @@ Below are code examples for using each source type:
```
=== "Streams"
Run inference on remote streaming sources using RTSP, RTMP, and IP address protocols. If multiple streams are provided in a `*.streams` text file then batched inference will run, i.e. 8 streams will run at batch-size 8, otherwise single streams will run at batch-size 1.
Run inference on remote streaming sources using RTSP, RTMP, TCP, and IP address protocols. If multiple streams are provided in a `*.streams` text file then batched inference will run, i.e. 8 streams will run at batch-size 8, otherwise single streams will run at batch-size 1.
```python
from ultralytics import YOLO
@@ -314,7 +314,7 @@ Below are code examples for using each source type:
model = YOLO('yolov8n.pt')
# Single stream with batch-size 1 inference
source = 'rtsp://example.com/media.mp4' # RTSP, RTMP or IP streaming address
source = 'rtsp://example.com/media.mp4' # RTSP, RTMP, TCP or IP streaming address
# Multiple streams with batched inference (i.e. batch-size 8 for 8 streams)
source = 'path/to/list.streams' # *.streams text file with one streaming address per row

@@ -221,6 +221,7 @@ nav:
- AzureML Quickstart: guides/azureml-quickstart.md
- Conda Quickstart: guides/conda-quickstart.md
- Docker Quickstart: guides/docker-quickstart.md
- Raspberry Pi: guides/raspberry-pi.md
- Integrations:
- integrations/index.md
- OpenVINO: integrations/openvino.md

@@ -68,7 +68,7 @@ setup(
'mkdocs-material',
'mkdocstrings[python]',
'mkdocs-redirects', # for 301 redirects
'mkdocs-ultralytics-plugin>=0.0.27', # for meta descriptions and images, dates and authors
'mkdocs-ultralytics-plugin>=0.0.29', # for meta descriptions and images, dates and authors
],
'export': [
'coremltools>=7.0',

@@ -1,6 +1,6 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
__version__ = '8.0.192'
__version__ = '8.0.193'
from ultralytics.models import RTDETR, SAM, YOLO
from ultralytics.models.fastsam import FastSAM

@@ -115,7 +115,7 @@ def check_source(source):
if isinstance(source, (str, int, Path)): # int for local usb camera
source = str(source)
is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS)
is_url = source.lower().startswith(('https://', 'http://', 'rtsp://', 'rtmp://'))
is_url = source.lower().startswith(('https://', 'http://', 'rtsp://', 'rtmp://', 'tcp://'))
webcam = source.isnumeric() or source.endswith('.streams') or (is_url and not is_file)
screenshot = source.lower() == 'screen'
if is_url and is_file:
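For context, a standalone sketch (hypothetical TCP address and a simplified file-format list; not the actual function body) of what the widened prefix check means for a Pi camera stream:

```python
from pathlib import Path

source = 'tcp://127.0.0.1:8888'  # e.g. the stream produced by libcamera-vid in the new guide

is_file = Path(source).suffix[1:] in ('jpg', 'png', 'mp4', 'avi')  # simplified stand-in for IMG/VID_FORMATS
is_url = source.lower().startswith(('https://', 'http://', 'rtsp://', 'rtmp://', 'tcp://'))
webcam = source.isnumeric() or source.endswith('.streams') or (is_url and not is_file)

print(is_url, webcam)  # True True -> the tcp:// source is now treated as a live stream
```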

@@ -29,7 +29,7 @@ class SourceTypes:
class LoadStreams:
"""YOLOv8 streamloader, i.e. `yolo predict source='rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP streams`."""
"""Stream Loader, i.e. `yolo predict source='rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP, TCP streams`."""
def __init__(self, sources='file.streams', imgsz=640, vid_stride=1, buffer=False):
"""Initialize instance variables and check for consistent input stream shapes."""

@@ -115,17 +115,17 @@ def verify_image_label(args):
if nl:
if keypoint:
assert lb.shape[1] == (5 + nkpt * ndim), f'labels require {(5 + nkpt * ndim)} columns each'
assert (lb[:, 5::ndim] <= 1).all(), 'non-normalized or out of bounds coordinate labels'
assert (lb[:, 6::ndim] <= 1).all(), 'non-normalized or out of bounds coordinate labels'
points = lb[:, 5:].reshape(-1, ndim)[:, :2]
else:
assert lb.shape[1] == 5, f'labels require 5 columns, {lb.shape[1]} columns detected'
assert (lb[:, 1:] <= 1).all(), \
f'non-normalized or out of bounds coordinates {lb[:, 1:][lb[:, 1:] > 1]}'
assert (lb >= 0).all(), f'negative label values {lb[lb < 0]}'
points = lb[:, 1:]
assert points.max() <= 1, f'non-normalized or out of bounds coordinates {points[points > 1]}'
assert lb.min() >= 0, f'negative label values {lb[lb < 0]}'
# All labels
max_cls = int(lb[:, 0].max()) # max label count
max_cls = lb[:, 0].max() # max label count
assert max_cls <= num_cls, \
f'Label class {max_cls} exceeds dataset class count {num_cls}. ' \
f'Label class {int(max_cls)} exceeds dataset class count {num_cls}. ' \
f'Possible class labels are 0-{num_cls - 1}'
_, i = np.unique(lb, axis=0, return_index=True)
if len(i) < nl: # duplicate row check
@@ -135,11 +135,10 @@ def verify_image_label(args):
msg = f'{prefix}WARNING ⚠ {im_file}: {nl - len(i)} duplicate labels removed'
else:
ne = 1 # label empty
lb = np.zeros((0, (5 + nkpt * ndim)), dtype=np.float32) if keypoint else np.zeros(
(0, 5), dtype=np.float32)
lb = np.zeros((0, (5 + nkpt * ndim) if keypoint else 5), dtype=np.float32)
else:
nm = 1 # label missing
lb = np.zeros((0, (5 + nkpt * ndim)), dtype=np.float32) if keypoint else np.zeros((0, 5), dtype=np.float32)
lb = np.zeros((0, (5 + nkpt * ndim) if keypoint else 5), dtype=np.float32)
if keypoint:
keypoints = lb[:, 5:].reshape(-1, nkpt, ndim)
if ndim == 2:
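To see what the new `points` slicing checks, here is a self-contained sketch (a single made-up label row with 2 keypoints in an (x, y, visibility) layout):

```python
import numpy as np

nkpt, ndim = 2, 3  # two keypoints, each stored as (x, y, visibility)

# Fabricated label row: class, x, y, w, h, then the flattened keypoints.
lb = np.array([[0, 0.50, 0.50, 0.20, 0.20,     # box (normalized)
                0.10, 0.20, 2.0,               # keypoint 1, visibility flag 2
                0.30, 0.40, 1.0]],             # keypoint 2, visibility flag 1
              dtype=np.float32)

points = lb[:, 5:].reshape(-1, ndim)[:, :2]  # keep x, y only; visibility flags are excluded
print(points)                                # [[0.1 0.2] [0.3 0.4]]
assert points.max() <= 1 and lb.min() >= 0   # the range checks applied above
```

Excluding the visibility column matters because COCO-style visibility flags can legitimately take the value 2, which would otherwise fail a blanket `<= 1` check.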

@@ -12,7 +12,7 @@ Usage - sources:
list.streams # list of streams
'path/*.jpg' # glob
'https://youtu.be/LNwODJXcvt4' # YouTube
'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP, TCP stream
Usage - formats:
$ yolo mode=predict model=yolov8n.pt # PyTorch

@@ -15,6 +15,7 @@ import torch
from ultralytics.data.augment import LetterBox
from ultralytics.utils import LOGGER, SimpleClass, ops
from ultralytics.utils.plotting import Annotator, colors, save_one_box
from ultralytics.utils.torch_utils import smart_inference_mode
class BaseTensor(SimpleClass):
@@ -485,10 +486,14 @@ class Keypoints(BaseTensor):
to(device, dtype): Returns a copy of the keypoints tensor with the specified device and dtype.
"""
@smart_inference_mode() # avoid keypoints < conf in-place error
def __init__(self, keypoints, orig_shape) -> None:
"""Initializes the Keypoints object with detection keypoints and original image size."""
if keypoints.ndim == 2:
keypoints = keypoints[None, :]
if keypoints.shape[2] == 3: # x, y, conf
mask = keypoints[..., 2] < 0.5 # points with conf < 0.5 (not visible)
keypoints[..., :2][mask] = 0
super().__init__(keypoints, orig_shape)
self.has_visible = self.data.shape[-1] == 3
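A small sketch of the resulting behavior (made-up pixel coordinates; it assumes the `Keypoints` class imported from `ultralytics.engine.results` as shown above):

```python
import torch
from ultralytics.engine.results import Keypoints

# One instance with two keypoints in (x, y, conf) layout; the second is low-confidence.
kpts = torch.tensor([[[120.0, 80.0, 0.9],
                      [200.0, 150.0, 0.3]]])

k = Keypoints(kpts, orig_shape=(480, 640))
print(k.data[0, 0, :2])  # tensor([120., 80.])  -> kept, conf >= 0.5
print(k.data[0, 1, :2])  # tensor([0., 0.])     -> x, y zeroed because conf < 0.5
print(k.has_visible)     # True, since the last dimension carries a confidence value
```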

@@ -431,7 +431,7 @@ def check_file(file, suffix='', download=True, hard=True):
file = check_yolov5u_filename(file) # yolov5n -> yolov5nu
if not file or ('://' not in file and Path(file).exists()): # exists ('://' check required in Windows Python<3.10)
return file
elif download and file.lower().startswith(('https://', 'http://', 'rtsp://', 'rtmp://')): # download
elif download and file.lower().startswith(('https://', 'http://', 'rtsp://', 'rtmp://', 'tcp://')): # download
url = file # warning: Pathlib turns :// -> :/
file = url2file(file) # '%2F' to '/', split https://url.com/file.txt?auth
if Path(file).exists():
