diff --git a/docs/en/datasets/detect/visdrone.md b/docs/en/datasets/detect/visdrone.md
index 5f485af9d..00d84b10e 100644
--- a/docs/en/datasets/detect/visdrone.md
+++ b/docs/en/datasets/detect/visdrone.md
@@ -8,6 +8,17 @@ keywords: VisDrone, drone dataset, computer vision, object detection, object tra
 
 The [VisDrone Dataset](https://github.com/VisDrone/VisDrone-Dataset) is a large-scale benchmark created by the AISKYEYE team at the Lab of Machine Learning and Data Mining, Tianjin University, China. It contains carefully annotated ground truth data for various computer vision tasks related to drone-based image and video analysis.
 
+<p align="center">
+  <br>
+  <!-- YouTube video embed -->
+  <br>
+  <strong>Watch:</strong> How to Train Ultralytics YOLO Models on the VisDrone Dataset for Drone Image Analysis
+</p>
+
 VisDrone is composed of 288 video clips with 261,908 frames and 10,209 static images, captured by various drone-mounted cameras. The dataset covers a wide range of aspects, including location (14 different cities across China), environment (urban and rural), objects (pedestrians, vehicles, bicycles, etc.), and density (sparse and crowded scenes). The dataset was collected using various drone platforms under different scenarios and weather and lighting conditions. These frames are manually annotated with over 2.6 million bounding boxes of targets such as pedestrians, cars, bicycles, and tricycles. Attributes like scene visibility, object class, and occlusion are also provided for better data utilization.
 
 ## Dataset Structure
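For context on the workflow the embedded video covers, the snippet below is a minimal training sketch using the Ultralytics Python API. It assumes the `ultralytics` package is installed; the `yolo11n.pt` checkpoint, epoch count, and image size are illustrative choices, not values taken from this page.

```python
from ultralytics import YOLO

# Load a pretrained Ultralytics YOLO model (checkpoint name is an illustrative example)
model = YOLO("yolo11n.pt")

# Train on VisDrone; "VisDrone.yaml" is the dataset config bundled with the ultralytics
# package, which downloads and converts the dataset on first use
results = model.train(data="VisDrone.yaml", epochs=100, imgsz=640)
```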