Add real-world projects in Ultralytics + `guides` in Docs (#6695)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
parent 9618025416 · commit 8c4094e7d9
13 changed files with 869 additions and 23 deletions
@@ -0,0 +1,62 @@
---
comments: true
description: Object Counting Using Ultralytics YOLOv8
keywords: Ultralytics, YOLOv8, Object Detection, Object Counting, Object Tracking, Notebook, IPython Kernel, CLI, Python SDK
---

# Object Counting using Ultralytics YOLOv8 🚀

## What is Object Counting?

Object counting with [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics/) involves the accurate identification and counting of specific objects in videos and camera streams. YOLOv8 excels in real-time applications, providing efficient and precise object counting for scenarios such as crowd analysis and surveillance, thanks to its state-of-the-art algorithms and deep learning capabilities.
## Advantages of Object Counting?

- **Resource Optimization:** Object counting facilitates efficient resource management by providing accurate counts, optimizing resource allocation in applications like inventory management.
- **Enhanced Security:** Object counting enhances security and surveillance by accurately tracking and counting entities, aiding in proactive threat detection.
- **Informed Decision-Making:** Object counting offers valuable insights for decision-making, optimizing processes in retail, traffic management, and various other domains.
## Real World Applications

|                                                                         Logistics                                                                         |                                                                      Aquaculture                                                                      |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------:|
|  |  |
|                                                  Conveyor Belt Packets Counting Using Ultralytics YOLOv8                                                  |                                                     Fish Counting in Sea using Ultralytics YOLOv8                                                     |
## Example

```python
import cv2

from ultralytics import YOLO
from ultralytics.solutions import object_counter

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")

counter = object_counter.ObjectCounter()  # Init Object Counter
region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360)]
counter.set_args(view_img=True, reg_pts=region_points,
                 classes_names=model.model.names, draw_tracks=True)

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    tracks = model.track(frame, persist=True, show=False)
    counter.start_counting(frame, tracks)

cap.release()
cv2.destroyAllWindows()
```
???+ tip "Region is Moveable"

    You can move the region anywhere in the frame by clicking and dragging its edges.
### Optional Arguments `set_args`

| Name            | Type    | Default                                            | Description                                 |
|-----------------|---------|----------------------------------------------------|---------------------------------------------|
| view_img        | `bool`  | `False`                                            | Display the frame with counts               |
| line_thickness  | `int`   | `2`                                                | Line thickness for bounding boxes and count text |
| reg_pts         | `list`  | `[(20, 400), (1080, 404), (1080, 360), (20, 360)]` | Points defining the counting region         |
| classes_names   | `dict`  | `model.model.names`                                | Dictionary of class names                   |
| region_color    | `tuple` | `(0, 255, 0)`                                      | Color of the counting region                |
| track_thickness | `int`   | `2`                                                | Tracking line thickness                     |
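
For reference, below is a minimal sketch of how these optional arguments might be passed together. The values are placeholders and the snippet assumes the same `ObjectCounter` workflow as the example above.

```python
from ultralytics import YOLO
from ultralytics.solutions import object_counter

# Placeholder values: adjust the region, color and thicknesses for your own video
model = YOLO("yolov8n.pt")
counter = object_counter.ObjectCounter()
counter.set_args(view_img=True,
                 reg_pts=[(20, 400), (1080, 404), (1080, 360), (20, 360)],
                 classes_names=model.model.names,
                 region_color=(0, 255, 0),
                 line_thickness=2,
                 track_thickness=2,
                 draw_tracks=True)
```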
@@ -0,0 +1,86 @@
---
comments: true
description: Object Counting in Different Regions using Ultralytics YOLOv8
keywords: Ultralytics, YOLOv8, Object Detection, Object Counting, Object Tracking, Notebook, IPython Kernel, CLI, Python SDK
---

# Object Counting in Different Regions using Ultralytics YOLOv8 🚀

## What is Object Counting in Regions?

Object counting in regions with [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics/) involves precisely determining the number of objects within specified areas using advanced computer vision. This approach is valuable for optimizing processes, enhancing security, and improving efficiency in various applications.
<p align="center">
  <br>
  <iframe width="720" height="405" src="https://www.youtube.com/embed/okItf1iHlV8"
          title="YouTube video player" frameborder="0"
          allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
          allowfullscreen>
  </iframe>
  <br>
  <strong>Watch:</strong> Ultralytics YOLOv8 Object Counting in Multiple & Moveable Regions
</p>
## Advantages of Object Counting in Regions?

- **Precision and Accuracy:** Object counting in regions with advanced computer vision ensures precise and accurate counts, minimizing errors often associated with manual counting.
- **Efficiency Improvement:** Automated object counting enhances operational efficiency, providing real-time results and streamlining processes across different applications.
- **Versatility and Application:** The versatility of object counting in regions makes it applicable across various domains, from manufacturing and surveillance to traffic monitoring, contributing to its widespread utility and effectiveness.
## Real World Applications

|                                                                            Retail                                                                            |                                                                        Market Streets                                                                        |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------:|
|  |  |
|                                                    People Counting in Different Regions using Ultralytics YOLOv8                                                    |                                                    Crowd Counting in Different Regions using Ultralytics YOLOv8                                                    |
## Steps to Run

### Step 1: Install Required Libraries

Begin by cloning the Ultralytics repository, installing its dependencies, and navigating to the example directory using the commands below.

```bash
# Clone the Ultralytics repo
git clone https://github.com/ultralytics/ultralytics

# Navigate to the local directory
cd ultralytics/examples/YOLOv8-Region-Counter
```
### Step 2: Run Region Counting Using Ultralytics YOLOv8

Execute the following basic commands for inference.

???+ tip "Region is Moveable"

    During video playback, you can interactively move the region within the video by clicking and dragging with the left mouse button.
```bash
# Save results
python yolov8_region_counter.py --source "path/to/video.mp4" --save-img

# Run model on CPU
python yolov8_region_counter.py --source "path/to/video.mp4" --device cpu

# Change model file
python yolov8_region_counter.py --source "path/to/video.mp4" --weights "path/to/model.pt"

# Detect specific classes (e.g., first and third classes)
python yolov8_region_counter.py --source "path/to/video.mp4" --classes 0 2

# View results without saving
python yolov8_region_counter.py --source "path/to/video.mp4" --view-img
```
### Optional Arguments

| Name                 | Type   | Default      | Description                                   |
|----------------------|--------|--------------|-----------------------------------------------|
| `--source`           | `str`  | `None`       | Path to the video file; use `0` for webcam    |
| `--line_thickness`   | `int`  | `2`          | Bounding box line thickness                   |
| `--save-img`         | `bool` | `False`      | Save the predicted video/image                |
| `--weights`          | `str`  | `yolov8n.pt` | Path to the weights file                      |
| `--classes`          | `list` | `None`       | Detect specific classes, e.g. `--classes 0 2` |
| `--region-thickness` | `int`  | `2`          | Region box thickness                          |
| `--track-thickness`  | `int`  | `2`          | Tracking line thickness                       |
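
The arguments above can also be combined in a single call. The following is an illustrative example only, with a placeholder video path.

```bash
# View the stream, save the output, run on CPU, and restrict detection to class 0
python yolov8_region_counter.py --source "path/to/video.mp4" --view-img --save-img --device cpu --classes 0
```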
@@ -0,0 +1,166 @@
---
comments: true
description: Security Alarm System Project Using Ultralytics YOLOv8. Learn how to implement a security alarm system using Ultralytics YOLOv8.
keywords: Object Detection, Security Alarm, Object Tracking, YOLOv8, Computer Vision Projects
---

# Security Alarm System Project Using Ultralytics YOLOv8

<img src="https://github.com/RizwanMunawar/ultralytics/assets/62513924/f4e4a613-fb25-4bd0-9ec5-78352ddb62bd" alt="Security Alarm System">
The Security Alarm System Project utilizing Ultralytics YOLOv8 integrates advanced computer vision capabilities to enhance security measures. YOLOv8, developed by Ultralytics, provides real-time object detection, allowing the system to identify and respond to potential security threats promptly. This project offers several advantages:

- **Real-time Detection:** YOLOv8's efficiency enables the Security Alarm System to detect and respond to security incidents in real time, minimizing response time.
- **Accuracy:** YOLOv8 is known for its accuracy in object detection, reducing false positives and enhancing the reliability of the security alarm system.
- **Integration Capabilities:** The project can be seamlessly integrated with existing security infrastructure, providing an upgraded layer of intelligent surveillance.
<p align="center">
  <br>
  <iframe width="720" height="405" src="https://www.youtube.com/embed/_1CmwUzoxY4"
          title="YouTube video player" frameborder="0"
          allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
          allowfullscreen>
  </iframe>
  <br>
  <strong>Watch:</strong> Security Alarm System Project with Ultralytics YOLOv8 Object Detection
</p>
### Code

#### Import Libraries

```python
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from time import time

import cv2
import numpy as np
import torch

from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors
```
#### Set up the parameters of the message

???+ tip "Note"

    App Password Generation is necessary

- Navigate to [App Password Generator](https://myaccount.google.com/apppasswords), designate an app name such as "security project," and obtain a 16-digit password. Copy this password and paste it into the designated password field as instructed.

```python
password = ""  # the app password generated above
from_email = ""  # must match the email used to generate the password
to_email = ""  # receiver email
```
#### Server creation and authentication

```python
server = smtplib.SMTP('smtp.gmail.com', 587)
server.starttls()
server.login(from_email, password)
```
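
If STARTTLS on port 587 is blocked on your network, an implicit-TLS connection on port 465 is a common alternative. The snippet below is a sketch and not part of the original project code:

```python
import smtplib

# Alternative: connect over implicit TLS on port 465 instead of STARTTLS on port 587
server = smtplib.SMTP_SSL('smtp.gmail.com', 465)
server.login(from_email, password)
```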
#### Email Send Function

```python
def send_email(to_email, from_email, object_detected=1):
    """Compose and send a security alert email reporting the number of detected objects."""
    message = MIMEMultipart()
    message['From'] = from_email
    message['To'] = to_email
    message['Subject'] = "Security Alert"

    # Add in the message body
    message_body = f'ALERT - {object_detected} objects have been detected!!'
    message.attach(MIMEText(message_body, 'plain'))

    server.sendmail(from_email, to_email, message.as_string())
```
#### Object Detection and Alert Sender

```python
class ObjectDetection:
    def __init__(self, capture_index):
        # default parameters
        self.capture_index = capture_index
        self.email_sent = False

        # model information
        self.model = YOLO("yolov8n.pt")

        # visual information
        self.annotator = None
        self.start_time = 0
        self.end_time = 0

        # device information
        self.device = 'cuda' if torch.cuda.is_available() else 'cpu'

    def predict(self, im0):
        results = self.model(im0)
        return results

    def display_fps(self, im0):
        self.end_time = time()
        fps = 1 / np.round(self.end_time - self.start_time, 2)
        text = f'FPS: {int(fps)}'
        text_size = cv2.getTextSize(text, cv2.FONT_HERSHEY_SIMPLEX, 1.0, 2)[0]
        gap = 10
        cv2.rectangle(im0, (20 - gap, 70 - text_size[1] - gap), (20 + text_size[0] + gap, 70 + gap), (255, 255, 255), -1)
        cv2.putText(im0, text, (20, 70), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 0), 2)

    def plot_bboxes(self, results, im0):
        class_ids = []
        self.annotator = Annotator(im0, 3, results[0].names)
        boxes = results[0].boxes.xyxy.cpu()
        clss = results[0].boxes.cls.cpu().tolist()
        names = results[0].names
        for box, cls in zip(boxes, clss):
            class_ids.append(cls)
            self.annotator.box_label(box, label=names[int(cls)], color=colors(int(cls), True))
        return im0, class_ids

    def __call__(self):
        cap = cv2.VideoCapture(self.capture_index)
        assert cap.isOpened()
        cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
        cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
        frame_count = 0
        while True:
            self.start_time = time()
            ret, im0 = cap.read()
            assert ret
            results = self.predict(im0)
            im0, class_ids = self.plot_bboxes(results, im0)

            if len(class_ids) > 0:  # Only send email if not sent before
                if not self.email_sent:
                    send_email(to_email, from_email, len(class_ids))
                    self.email_sent = True
            else:
                self.email_sent = False

            self.display_fps(im0)
            cv2.imshow('YOLOv8 Detection', im0)
            frame_count += 1
            if cv2.waitKey(5) & 0xFF == 27:
                break
        cap.release()
        cv2.destroyAllWindows()
        server.quit()
```
#### Call the Object Detection class and Run the Inference

```python
detector = ObjectDetection(capture_index=0)
detector()
```
That's it! When you execute the code, you'll receive a single email notification if any object is detected. The notification is sent immediately, not repeatedly. Feel free to customize the code to suit your project requirements.
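
As an optional variation (an assumption, not shown in the original project): because `cv2.VideoCapture` accepts a file path as well as a device index, the same class can be pointed at a recorded video instead of the webcam.

```python
# Run the detector on a recorded video instead of webcam index 0;
# cv2.VideoCapture accepts a file path as well as a device index.
detector = ObjectDetection(capture_index="path/to/video.mp4")
detector()
```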
#### Email Received Sample

<img width="256" src="https://github.com/RizwanMunawar/ultralytics/assets/62513924/db79ccc6-aabd-4566-a825-b34e679c90f9" alt="Email Received Sample">
@@ -0,0 +1,65 @@
---
comments: true
description: Workouts Monitoring Using Ultralytics YOLOv8
keywords: Ultralytics, YOLOv8, Object Detection, Pose Estimation, PushUps, PullUps, Ab workouts, Notebook, IPython Kernel, CLI, Python SDK
---

# Workouts Monitoring using Ultralytics YOLOv8 🚀

Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics/) enhances exercise assessment by accurately tracking key body landmarks and joints in real time. This technology provides instant feedback on exercise form, tracks workout routines, and measures performance metrics, optimizing training sessions for users and trainers alike.
## Advantages of Workouts Monitoring?

- **Optimized Performance:** Tailoring workouts based on monitoring data for better results.
- **Goal Achievement:** Track and adjust fitness goals for measurable progress.
- **Personalization:** Customized workout plans based on individual data for effectiveness.
- **Health Awareness:** Early detection of patterns indicating health issues or overtraining.
- **Informed Decisions:** Data-driven decisions for adjusting routines and setting realistic goals.
## Real World Applications

|                                                   Workouts Monitoring                                                   |                                                   Workouts Monitoring                                                   |
|:------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------:|
|  |  |
|                                                     PushUps Counting                                                     |                                                     PullUps Counting                                                     |
## Example

```python
import cv2

from ultralytics import YOLO
from ultralytics.solutions import ai_gym

model = YOLO("yolov8n-pose.pt")
cap = cv2.VideoCapture("path/to/video.mp4")

gym_object = ai_gym.AIGym()  # init AI GYM module
gym_object.set_args(line_thickness=2, view_img=True, pose_type="pushup", kpts_to_check=[6, 8, 10])

frame_count = 0
while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    frame_count += 1
    results = model.predict(frame, verbose=False)
    gym_object.start_counting(frame, results, frame_count)

cap.release()
cv2.destroyAllWindows()
```
???+ tip "Support"

    "pushup", "pullup" and "abworkout" are supported
### KeyPoints Map

![keyPoints Order Ultralytics YOLOv8 Pose](https://github.com/ultralytics/ultralytics/assets/62513924/f45d8315-b59f-47b7-b9c8-c61af1ce865b)
### Arguments `set_args`

| Name            | Type   | Default  | Description                                                                         |
|-----------------|--------|----------|-------------------------------------------------------------------------------------|
| kpts_to_check   | `list` | `None`   | List of three keypoint indices for the workout to count, following the keypoints map |
| view_img        | `bool` | `False`  | Display the frame with counts                                                         |
| line_thickness  | `int`  | `2`      | Thickness of the count text and keypoint lines                                        |
| pose_type       | `str`  | `pushup` | Pose to monitor; `"pullup"` and `"abworkout"` are also supported                      |
| pose_up_angle   | `int`  | `145`    | Angle threshold for the "up" pose position                                            |
| pose_down_angle | `int`  | `90`     | Angle threshold for the "down" pose position                                          |
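
As an illustrative sketch, the arguments above might be combined to monitor pullups. The keypoint indices `[6, 8, 10]` (one arm's shoulder, elbow and wrist from the keypoints map) are reused here as an assumption and should be adjusted for your use case.

```python
from ultralytics import YOLO
from ultralytics.solutions import ai_gym

# Assumed example: reuse the shoulder-elbow-wrist keypoints (see the map above) to count pullups
model = YOLO("yolov8n-pose.pt")
gym_object = ai_gym.AIGym()
gym_object.set_args(line_thickness=2,
                    view_img=True,
                    pose_type="pullup",
                    pose_up_angle=145.0,
                    pose_down_angle=90.0,
                    kpts_to_check=[6, 8, 10])
```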
@@ -0,0 +1,130 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license

import cv2

from ultralytics.utils.plotting import Annotator


class AIGym:
    """A class to manage the gym steps of people in a real-time video stream based on their poses."""

    def __init__(self):
        """Initializes the AIGym with default values for Visual and Image parameters."""

        # Image and line thickness
        self.im0 = None
        self.tf = None

        # Keypoints and count information
        self.keypoints = None
        self.poseup_angle = None
        self.posedown_angle = None
        self.threshold = 0.001

        # Store stage, count and angle information
        self.angle = None
        self.count = None
        self.stage = None
        self.pose_type = 'pushup'
        self.kpts_to_check = None

        # Visual Information
        self.view_img = False
        self.annotator = None

    def set_args(self,
                 kpts_to_check,
                 line_thickness=2,
                 view_img=False,
                 pose_up_angle=145.0,
                 pose_down_angle=90.0,
                 pose_type='pullup'):
        """
        Configures the AIGym line_thickness, save image and view image parameters.

        Args:
            kpts_to_check (list): 3 keypoints for counting
            line_thickness (int): Line thickness for bounding boxes.
            view_img (bool): display the im0
            pose_up_angle (float): Angle to set pose position up
            pose_down_angle (float): Angle to set pose position down
            pose_type: "pushup", "pullup" or "abworkout"
        """
        self.kpts_to_check = kpts_to_check
        self.tf = line_thickness
        self.view_img = view_img
        self.poseup_angle = pose_up_angle
        self.posedown_angle = pose_down_angle
        self.pose_type = pose_type

    def start_counting(self, im0, results, frame_count):
        """
        Function used to count the gym steps.

        Args:
            im0 (ndarray): Current frame from the video stream.
            results: Pose estimation data
            frame_count: store current frame count
        """
        self.im0 = im0
        if frame_count == 1:
            self.count = [0] * len(results[0])
            self.angle = [0] * len(results[0])
            self.stage = ['-' for _ in results[0]]
        self.keypoints = results[0].keypoints.data
        self.annotator = Annotator(im0, line_width=2)

        for ind, k in enumerate(reversed(self.keypoints)):
            if self.pose_type == 'pushup' or self.pose_type == 'pullup':
                self.angle[ind] = self.annotator.estimate_pose_angle(k[int(self.kpts_to_check[0])].cpu(),
                                                                     k[int(self.kpts_to_check[1])].cpu(),
                                                                     k[int(self.kpts_to_check[2])].cpu())
                self.im0 = self.annotator.draw_specific_points(k, self.kpts_to_check, shape=(640, 640), radius=10)

            if self.pose_type == 'abworkout':
                self.angle[ind] = self.annotator.estimate_pose_angle(k[int(self.kpts_to_check[0])].cpu(),
                                                                     k[int(self.kpts_to_check[1])].cpu(),
                                                                     k[int(self.kpts_to_check[2])].cpu())
                self.im0 = self.annotator.draw_specific_points(k, self.kpts_to_check, shape=(640, 640), radius=10)
                if self.angle[ind] > self.poseup_angle:
                    self.stage[ind] = 'down'
                if self.angle[ind] < self.posedown_angle and self.stage[ind] == 'down':
                    self.stage[ind] = 'up'
                    self.count[ind] += 1
                self.annotator.plot_angle_and_count_and_stage(angle_text=self.angle[ind],
                                                              count_text=self.count[ind],
                                                              stage_text=self.stage[ind],
                                                              center_kpt=k[int(self.kpts_to_check[1])],
                                                              line_thickness=self.tf)

            if self.pose_type == 'pushup':
                if self.angle[ind] > self.poseup_angle:
                    self.stage[ind] = 'up'
                if self.angle[ind] < self.posedown_angle and self.stage[ind] == 'up':
                    self.stage[ind] = 'down'
                    self.count[ind] += 1
                self.annotator.plot_angle_and_count_and_stage(angle_text=self.angle[ind],
                                                              count_text=self.count[ind],
                                                              stage_text=self.stage[ind],
                                                              center_kpt=k[int(self.kpts_to_check[1])],
                                                              line_thickness=self.tf)
            if self.pose_type == 'pullup':
                if self.angle[ind] > self.poseup_angle:
                    self.stage[ind] = 'down'
                if self.angle[ind] < self.posedown_angle and self.stage[ind] == 'down':
                    self.stage[ind] = 'up'
                    self.count[ind] += 1
                self.annotator.plot_angle_and_count_and_stage(angle_text=self.angle[ind],
                                                              count_text=self.count[ind],
                                                              stage_text=self.stage[ind],
                                                              center_kpt=k[int(self.kpts_to_check[1])],
                                                              line_thickness=self.tf)

            self.annotator.kpts(k, shape=(640, 640), radius=1, kpt_line=True)

        if self.view_img:
            cv2.imshow('Ultralytics YOLOv8 AI GYM', self.im0)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                return


if __name__ == '__main__':
    AIGym()
@@ -0,0 +1,165 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license

from collections import defaultdict

import cv2

from ultralytics.utils.checks import check_requirements
from ultralytics.utils.plotting import Annotator, colors

check_requirements('shapely>=2.0.0')

from shapely.geometry import Polygon
from shapely.geometry.point import Point


class ObjectCounter:
    """A class to manage the counting of objects in a real-time video stream based on their tracks."""

    def __init__(self):
        """Initializes the Counter with default values for various tracking and counting parameters."""

        # Mouse events
        self.is_drawing = False
        self.selected_point = None

        # Region Information
        self.reg_pts = None
        self.counting_region = None
        self.region_color = (255, 255, 255)

        # Image and annotation Information
        self.im0 = None
        self.tf = None
        self.view_img = False

        self.names = None  # Classes names
        self.annotator = None  # Annotator

        # Object counting Information
        self.in_counts = 0
        self.out_counts = 0
        self.counting_list = []

        # Tracks info
        self.track_history = defaultdict(list)
        self.track_thickness = 2
        self.draw_tracks = False

    def set_args(self,
                 classes_names,
                 reg_pts,
                 region_color=None,
                 line_thickness=2,
                 track_thickness=2,
                 view_img=False,
                 draw_tracks=False):
        """
        Configures the Counter's image, bounding box line thickness, and counting region points.

        Args:
            line_thickness (int): Line thickness for bounding boxes.
            view_img (bool): Flag to control whether to display the video stream.
            reg_pts (list): Initial list of points defining the counting region.
            classes_names (dict): Classes names
            region_color (tuple): color for region line
            track_thickness (int): Track thickness
            draw_tracks (Bool): draw tracks
        """
        self.tf = line_thickness
        self.view_img = view_img
        self.track_thickness = track_thickness
        self.draw_tracks = draw_tracks
        self.reg_pts = reg_pts
        self.counting_region = Polygon(self.reg_pts)
        self.names = classes_names
        self.region_color = region_color if region_color else self.region_color

    def mouse_event_for_region(self, event, x, y, flags, params):
        """
        This function is designed to move region with mouse events in a real-time video stream.

        Args:
            event (int): The type of mouse event (e.g., cv2.EVENT_MOUSEMOVE, cv2.EVENT_LBUTTONDOWN, etc.).
            x (int): The x-coordinate of the mouse pointer.
            y (int): The y-coordinate of the mouse pointer.
            flags (int): Any flags associated with the event (e.g., cv2.EVENT_FLAG_CTRLKEY,
                cv2.EVENT_FLAG_SHIFTKEY, etc.).
            params (dict): Additional parameters you may want to pass to the function.
        """
        # global is_drawing, selected_point
        if event == cv2.EVENT_LBUTTONDOWN:
            for i, point in enumerate(self.reg_pts):
                if isinstance(point, (tuple, list)) and len(point) >= 2:
                    if abs(x - point[0]) < 10 and abs(y - point[1]) < 10:
                        self.selected_point = i
                        self.is_drawing = True
                        break

        elif event == cv2.EVENT_MOUSEMOVE:
            if self.is_drawing and self.selected_point is not None:
                self.reg_pts[self.selected_point] = (x, y)
                self.counting_region = Polygon(self.reg_pts)

        elif event == cv2.EVENT_LBUTTONUP:
            self.is_drawing = False
            self.selected_point = None

    def extract_and_process_tracks(self, tracks):
        boxes = tracks[0].boxes.xyxy.cpu()
        clss = tracks[0].boxes.cls.cpu().tolist()
        track_ids = tracks[0].boxes.id.int().cpu().tolist()

        self.annotator = Annotator(self.im0, self.tf, self.names)
        self.annotator.draw_region(reg_pts=self.reg_pts, color=(0, 255, 0))

        for box, track_id, cls in zip(boxes, track_ids, clss):
            self.annotator.box_label(box, label=self.names[cls], color=colors(int(cls), True))  # Draw bounding box

            # Draw Tracks
            track_line = self.track_history[track_id]
            track_line.append((float((box[0] + box[2]) / 2), float((box[1] + box[3]) / 2)))
            track_line.pop(0) if len(track_line) > 30 else None

            if self.draw_tracks:
                self.annotator.draw_centroid_and_tracks(track_line,
                                                        color=(0, 255, 0),
                                                        track_thickness=self.track_thickness)

            # Count objects
            if self.counting_region.contains(Point(track_line[-1])):
                if track_id not in self.counting_list:
                    self.counting_list.append(track_id)
                    if box[0] < self.counting_region.centroid.x:
                        self.out_counts += 1
                    else:
                        self.in_counts += 1

        if self.view_img:
            incount_label = 'InCount : ' + f'{self.in_counts}'
            outcount_label = 'OutCount : ' + f'{self.out_counts}'
            self.annotator.count_labels(in_count=incount_label, out_count=outcount_label)
            cv2.namedWindow('Ultralytics YOLOv8 Object Counter')
            cv2.setMouseCallback('Ultralytics YOLOv8 Object Counter', self.mouse_event_for_region,
                                 {'region_points': self.reg_pts})
            cv2.imshow('Ultralytics YOLOv8 Object Counter', self.im0)
            # Break Window
            if cv2.waitKey(1) & 0xFF == ord('q'):
                return

    def start_counting(self, im0, tracks):
        """
        Main function to start the object counting process.

        Args:
            im0 (ndarray): Current frame from the video stream.
            tracks (list): List of tracks obtained from the object tracking process.
        """
        self.im0 = im0  # store image
        if tracks[0].boxes.id is None:
            return
        self.extract_and_process_tracks(tracks)


if __name__ == '__main__':
    ObjectCounter()