---
comments: true
description: Learn to accurately identify and count objects in real-time using Ultralytics YOLOv8 for applications like crowd analysis and surveillance.
keywords: object counting, YOLOv8, Ultralytics, real-time object detection, AI, deep learning, object tracking, crowd analysis, surveillance, resource optimization
---
# Object Counting using Ultralytics YOLOv8
## What is Object Counting?
Object counting with Ultralytics YOLOv8 involves accurate identification and counting of specific objects in videos and camera streams. YOLOv8 excels in real-time applications, providing efficient and precise object counting for various scenarios like crowd analysis and surveillance, thanks to its state-of-the-art algorithms and deep learning capabilities.
Watch: Object Counting using Ultralytics YOLOv8

Watch: Class-wise Object Counting using Ultralytics YOLOv8
## Advantages of Object Counting?
- Resource Optimization: Object counting facilitates efficient resource management by providing accurate counts and optimizing resource allocation in applications like inventory management.
- Enhanced Security: Object counting enhances security and surveillance by accurately tracking and counting entities, aiding in proactive threat detection.
- Informed Decision-Making: Object counting offers valuable insights for decision-making, optimizing processes in retail, traffic management, and various other domains.
## Real World Applications

| Logistics | Aquaculture |
|:---------:|:-----------:|
| Conveyor Belt Packets Counting Using Ultralytics YOLOv8 | Fish Counting in Sea using Ultralytics YOLOv8 |
!!! Example "Object Counting using YOLOv8 Example"

    === "Count in Region"

        ```python
        import cv2

        from ultralytics import YOLO, solutions

        model = YOLO("yolov8n.pt")
        cap = cv2.VideoCapture("path/to/video/file.mp4")
        assert cap.isOpened(), "Error reading video file"
        w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

        # Define region points
        region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360)]

        # Video writer
        video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

        # Init Object Counter
        counter = solutions.ObjectCounter(
            view_img=True,
            reg_pts=region_points,
            names=model.names,
            draw_tracks=True,
            line_thickness=2,
        )

        while cap.isOpened():
            success, im0 = cap.read()
            if not success:
                print("Video frame is empty or video processing has been successfully completed.")
                break
            tracks = model.track(im0, persist=True, show=False)

            im0 = counter.start_counting(im0, tracks)
            video_writer.write(im0)

        cap.release()
        video_writer.release()
        cv2.destroyAllWindows()
        ```
=== "Count in Polygon"
```python
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
# Define region points as a polygon with 5 points
region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360), (20, 400)]
# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Init Object Counter
counter = solutions.ObjectCounter(
view_img=True,
reg_pts=region_points,
names=model.names,
draw_tracks=True,
line_thickness=2,
)
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
break
tracks = model.track(im0, persist=True, show=False)
im0 = counter.start_counting(im0, tracks)
video_writer.write(im0)
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
=== "Count in Line"
```python
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
# Define line points
line_points = [(20, 400), (1080, 400)]
# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Init Object Counter
counter = solutions.ObjectCounter(
view_img=True,
reg_pts=line_points,
names=model.names,
draw_tracks=True,
line_thickness=2,
)
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
break
tracks = model.track(im0, persist=True, show=False)
im0 = counter.start_counting(im0, tracks)
video_writer.write(im0)
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
=== "Specific Classes"
```python
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
line_points = [(20, 400), (1080, 400)] # line or region points
classes_to_count = [0, 2] # person and car classes for count
# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Init Object Counter
counter = solutions.ObjectCounter(
view_img=True,
reg_pts=line_points,
names=model.names,
draw_tracks=True,
line_thickness=2,
)
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
break
tracks = model.track(im0, persist=True, show=False, classes=classes_to_count)
im0 = counter.start_counting(im0, tracks)
video_writer.write(im0)
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
???+ tip "Region is Movable"

    You can move the region anywhere in the frame by clicking on its edges.
### Arguments `ObjectCounter`

Here's a table with the `ObjectCounter` arguments:

| Name | Type | Default | Description |
|------|------|---------|-------------|
| `names` | `dict` | `None` | Dictionary of class names. |
| `reg_pts` | `list` | `[(20, 400), (1260, 400)]` | List of points defining the counting region. |
| `count_reg_color` | `tuple` | `(255, 0, 255)` | RGB color of the counting region. |
| `count_txt_color` | `tuple` | `(0, 0, 0)` | RGB color of the count text. |
| `count_bg_color` | `tuple` | `(255, 255, 255)` | RGB color of the count text background. |
| `line_thickness` | `int` | `2` | Line thickness for bounding boxes. |
| `track_thickness` | `int` | `2` | Thickness of the track lines. |
| `view_img` | `bool` | `False` | Flag to control whether to display the video stream. |
| `view_in_counts` | `bool` | `True` | Flag to control whether to display the in counts on the video stream. |
| `view_out_counts` | `bool` | `True` | Flag to control whether to display the out counts on the video stream. |
| `draw_tracks` | `bool` | `False` | Flag to control whether to draw the object tracks. |
| `track_color` | `tuple` | `None` | RGB color of the tracks. |
| `region_thickness` | `int` | `5` | Thickness of the object counting region. |
| `line_dist_thresh` | `int` | `15` | Euclidean distance threshold for the line counter. |
| `cls_txtdisplay_gap` | `int` | `50` | Display gap between each class count. |

### Arguments `model.track`

| Name | Type | Default | Description |
|------|------|---------|-------------|
| `source` | `im0` | `None` | Source directory for images or videos. |
| `persist` | `bool` | `False` | Persist tracks between frames. |
| `tracker` | `str` | `botsort.yaml` | Tracking method: 'bytetrack' or 'botsort'. |
| `conf` | `float` | `0.3` | Confidence threshold. |
| `iou` | `float` | `0.5` | IoU threshold. |
| `classes` | `list` | `None` | Filter results by class, i.e. `classes=0`, or `classes=[0,2,3]`. |
| `verbose` | `bool` | `True` | Display the object tracking results. |

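For reference, a tracking call that exercises these arguments might look like the sketch below; the thresholds and class IDs are illustrative choices, not prescribed values.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Track objects in a video, keeping IDs across frames.
# The conf/iou thresholds and the class filter are illustrative only.
results = model.track(
    source="path/to/video/file.mp4",
    persist=True,  # persist tracks between frames
    tracker="botsort.yaml",  # or "bytetrack.yaml"
    conf=0.3,  # confidence threshold
    iou=0.5,  # IoU threshold
    classes=[0, 2],  # keep only persons (0) and cars (2)
    verbose=True,  # print tracking results
)
```
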
## FAQ

### How do I count objects in a video using Ultralytics YOLOv8?
To count objects in a video using Ultralytics YOLOv8, you can follow these steps:

1. Import the necessary libraries (`cv2`, `ultralytics`).
2. Load a pretrained YOLOv8 model.
3. Define the counting region (e.g., a polygon, line, etc.).
4. Set up the video capture and initialize the object counter.
5. Process each frame to track objects and count them within the defined region.

Here's a simple example for counting in a region:
```python
import cv2

from ultralytics import YOLO, solutions


def count_objects_in_region(video_path, output_video_path, model_path):
    """Count objects in a specific region within a video."""
    model = YOLO(model_path)
    cap = cv2.VideoCapture(video_path)
    assert cap.isOpened(), "Error reading video file"
    w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
    region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360)]
    video_writer = cv2.VideoWriter(output_video_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    counter = solutions.ObjectCounter(
        view_img=True, reg_pts=region_points, names=model.names, draw_tracks=True, line_thickness=2
    )

    while cap.isOpened():
        success, im0 = cap.read()
        if not success:
            print("Video frame is empty or video processing has been successfully completed.")
            break
        tracks = model.track(im0, persist=True, show=False)
        im0 = counter.start_counting(im0, tracks)
        video_writer.write(im0)

    cap.release()
    video_writer.release()
    cv2.destroyAllWindows()


count_objects_in_region("path/to/video.mp4", "output_video.avi", "yolov8n.pt")
```
Explore more configurations and options in the Object Counting section.
### What are the advantages of using Ultralytics YOLOv8 for object counting?
Using Ultralytics YOLOv8 for object counting offers several advantages:
- Resource Optimization: It facilitates efficient resource management by providing accurate counts, helping optimize resource allocation in industries like inventory management.
- Enhanced Security: It enhances security and surveillance by accurately tracking and counting entities, aiding in proactive threat detection.
- Informed Decision-Making: It offers valuable insights for decision-making, optimizing processes in domains like retail, traffic management, and more.
For real-world applications and code examples, visit the Advantages of Object Counting section.
### How can I count specific classes of objects using Ultralytics YOLOv8?
To count specific classes of objects using Ultralytics YOLOv8, you need to specify the classes you are interested in during the tracking phase. Below is a Python example:
```python
import cv2

from ultralytics import YOLO, solutions


def count_specific_classes(video_path, output_video_path, model_path, classes_to_count):
    """Count specific classes of objects in a video."""
    model = YOLO(model_path)
    cap = cv2.VideoCapture(video_path)
    assert cap.isOpened(), "Error reading video file"
    w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
    line_points = [(20, 400), (1080, 400)]
    video_writer = cv2.VideoWriter(output_video_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    counter = solutions.ObjectCounter(
        view_img=True, reg_pts=line_points, names=model.names, draw_tracks=True, line_thickness=2
    )

    while cap.isOpened():
        success, im0 = cap.read()
        if not success:
            print("Video frame is empty or video processing has been successfully completed.")
            break
        tracks = model.track(im0, persist=True, show=False, classes=classes_to_count)
        im0 = counter.start_counting(im0, tracks)
        video_writer.write(im0)

    cap.release()
    video_writer.release()
    cv2.destroyAllWindows()


count_specific_classes("path/to/video.mp4", "output_specific_classes.avi", "yolov8n.pt", [0, 2])
```
In this example, `classes_to_count=[0, 2]`, which means it counts objects of classes `0` and `2` (e.g., person and car).
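If you're not sure which indices correspond to which labels, you can inspect the model's `names` dictionary before picking the IDs. The snippet below is a small sketch of that lookup; the chosen labels are just an example.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# model.names maps class indices to labels, e.g. {0: "person", 1: "bicycle", 2: "car", ...}
wanted = {"person", "car"}  # labels we want to count (example choice)
classes_to_count = [idx for idx, label in model.names.items() if label in wanted]
print(classes_to_count)  # -> [0, 2] for COCO-pretrained YOLOv8 models
```
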
### Why should I use YOLOv8 over other object detection models for real-time applications?
Ultralytics YOLOv8 provides several advantages over other object detection models like Faster R-CNN, SSD, and previous YOLO versions:
- Speed and Efficiency: YOLOv8 offers real-time processing capabilities, making it ideal for applications requiring high-speed inference, such as surveillance and autonomous driving.
- Accuracy: It provides state-of-the-art accuracy for object detection and tracking tasks, reducing the number of false positives and improving overall system reliability.
- Ease of Integration: YOLOv8 offers seamless integration with various platforms and devices, including mobile and edge devices, which is crucial for modern AI applications.
- Flexibility: It supports various tasks like object detection, segmentation, and tracking with configurable models to meet specific use-case requirements.
Check out Ultralytics YOLOv8 Documentation for a deeper dive into its features and performance comparisons.
### Can I use YOLOv8 for advanced applications like crowd analysis and traffic management?
Yes, Ultralytics YOLOv8 is perfectly suited for advanced applications like crowd analysis and traffic management due to its real-time detection capabilities, scalability, and integration flexibility. Its advanced features allow for high-accuracy object tracking, counting, and classification in dynamic environments. Example use cases include:
- Crowd Analysis: Monitor and manage large gatherings, ensuring safety and optimizing crowd flow.
- Traffic Management: Track and count vehicles, analyze traffic patterns, and manage congestion in real-time.
For more information and implementation details, refer to the guide on Real World Applications of object counting with YOLOv8.