(B) at 640 |
+| -------------------------------------------------------------------------------------------- | ------------------- | ---------------- | ---------------- | ----------------------------- | ---------------------------------- | ---------------- | ------------------------ |
+| [YOLO11n-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-cls.pt) | 224 | 70.0 | 89.4 | 5.03 ± 0.32 ms | 1.10 ± 0.01 ms | 1.6 | 3.3 |
+| [YOLO11s-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-cls.pt) | 224 | 75.4 | 92.7 | 7.89 ± 0.18 ms | 1.34 ± 0.01 ms | 5.5 | 12.1 |
+| [YOLO11m-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11m-cls.pt) | 224 | 77.3 | 93.9 | 17.17 ± 0.40 ms | 1.95 ± 0.00 ms | 10.4 | 39.3 |
+| [YOLO11l-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11l-cls.pt) | 224 | 78.3 | 94.3 | 23.17 ± 0.29 ms | 2.76 ± 0.00 ms | 12.9 | 49.4 |
+| [YOLO11x-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-cls.pt) | 224 | 79.5 | 94.9 | 41.41 ± 0.94 ms | 3.82 ± 0.00 ms | 28.4 | 110.4 |
- **acc** values are model accuracies on the [ImageNet](https://www.image-net.org/) dataset validation set.
  Reproduce with `yolo val classify data=path/to/ImageNet device=0`
- **Speed** averaged over ImageNet validation images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
  Reproduce with `yolo val classify data=path/to/ImageNet batch=1 device=0|cpu`
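For reference, the same validation can also be run from Python. A minimal sketch assuming the standard `ultralytics` package API; the ImageNet path is a placeholder for a local copy of the dataset:

```python
from ultralytics import YOLO

# Load a pretrained YOLO11 classification checkpoint (see the table above)
model = YOLO("yolo11n-cls.pt")

# Validate on ImageNet; the data path is a placeholder for a local copy
metrics = model.val(data="path/to/ImageNet", imgsz=224, device=0)
print(metrics.top1, metrics.top5)  # top-1 / top-5 accuracy
```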
diff --git a/docs/en/models/yolo11.md b/docs/en/models/yolo11.md
index 6827fdff2c..d96abd9f50 100644
--- a/docs/en/models/yolo11.md
+++ b/docs/en/models/yolo11.md
@@ -8,26 +8,26 @@ keywords: YOLO11, state-of-the-art object detection, YOLO series, Ultralytics, c
## Overview
-YOLO11 is the latest iteration in the Ultralytics YOLO series of real-time object detectors, redefining what's possible with cutting-edge accuracy, speed, and efficiency. Building upon the impressive advancements of previous YOLO versions, YOLO11 introduces significant improvements in architecture and training methods, making it a versatile choice for a wide range of computer vision tasks.
+YOLO11 is the latest iteration in the [Ultralytics](https://www.ultralytics.com) YOLO series of real-time object detectors, redefining what's possible with cutting-edge [accuracy](https://www.ultralytics.com/glossary/accuracy), speed, and efficiency. Building upon the impressive advancements of previous YOLO versions, YOLO11 introduces significant improvements in architecture and training methods, making it a versatile choice for a wide range of [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) tasks.
![Ultralytics YOLO11 Comparison Plots](https://github.com/user-attachments/assets/a311a4ed-bbf2-43b5-8012-5f183a28a845)
-
- Watch: Ultralytics YOLO11 Announcement at #YV24
+ Watch: Ultralytics YOLO11 Announcement at [YOLO Vision 2024](https://www.ultralytics.com/events/yolovision)
## Key Features
- **Enhanced Feature Extraction:** YOLO11 employs an improved backbone and neck architecture, which enhances [feature extraction](https://www.ultralytics.com/glossary/feature-extraction) capabilities for more precise object detection and complex task performance.
- **Optimized for Efficiency and Speed:** YOLO11 introduces refined architectural designs and optimized training pipelines, delivering faster processing speeds and maintaining an optimal balance between accuracy and performance.
-- **Greater Accuracy with Fewer Parameters:** With advancements in model design, YOLO11m achieves a higher mean Average Precision (mAP) on the COCO dataset while using 22% fewer parameters than YOLOv8m, making it computationally efficient without compromising accuracy.
+- **Greater Accuracy with Fewer Parameters:** With advancements in model design, YOLO11m achieves a higher [mean Average Precision](https://www.ultralytics.com/glossary/mean-average-precision-map) (mAP) on the COCO dataset while using 22% fewer parameters than YOLOv8m, making it computationally efficient without compromising accuracy.
- **Adaptability Across Environments:** YOLO11 can be seamlessly deployed across various environments, including edge devices, cloud platforms, and systems supporting NVIDIA GPUs, ensuring maximum flexibility.
- **Broad Range of Supported Tasks:** Whether it's object detection, instance segmentation, image classification, pose estimation, or oriented object detection (OBB), YOLO11 is designed to cater to a diverse set of computer vision challenges.
@@ -53,7 +53,7 @@ This table provides an overview of the YOLO11 model variants, showcasing their a
See [Detection Docs](../tasks/detect.md) for usage examples with these models trained on [COCO](../datasets/detect/coco.md), which include 80 pre-trained classes.
- | Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>Tesla T4 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
+ | Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ------------------------------------------------------------------------------------ | --------------------- | -------------------- | ------------------------------ | --------------------------------------- | ------------------ | ----------------- |
| [YOLO11n](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt) | 640 | 39.5 | 56.12 ± 0.82 ms | 1.55 ± 0.01 ms | 2.6 | 6.5 |
| [YOLO11s](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s.pt) | 640 | 47.0 | 90.01 ± 1.17 ms | 2.46 ± 0.00 ms | 9.4 | 21.5 |
@@ -65,7 +65,7 @@ This table provides an overview of the YOLO11 model variants, showcasing their a
See [Segmentation Docs](../tasks/segment.md) for usage examples with these models trained on [COCO](../datasets/segment/coco.md), which include 80 pre-trained classes.
- | Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>Tesla T4 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
+ | Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| -------------------------------------------------------------------------------------------- | --------------------- | -------------------- | --------------------- | ------------------------------ | --------------------------------------- | ------------------ | ----------------- |
| [YOLO11n-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-seg.pt) | 640 | 38.9 | 32.0 | 65.90 ± 1.14 ms | 1.84 ± 0.00 ms | 2.9 | 10.4 |
| [YOLO11s-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-seg.pt) | 640 | 46.6 | 37.8 | 117.56 ± 4.89 ms | 2.94 ± 0.01 ms | 10.1 | 35.5 |
@@ -77,7 +77,7 @@ This table provides an overview of the YOLO11 model variants, showcasing their a
See [Classification Docs](../tasks/classify.md) for usage examples with these models trained on [ImageNet](../datasets/classify/imagenet.md), which include 1000 pre-trained classes.
- | Model | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>Tesla T4 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) at 640 |
+ | Model | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) at 640 |
| -------------------------------------------------------------------------------------------- | --------------------- | ---------------- | ---------------- | ------------------------------ | --------------------------------------- | ------------------ | ------------------------ |
| [YOLO11n-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-cls.pt) | 224 | 70.0 | 89.4 | 5.03 ± 0.32 ms | 1.10 ± 0.01 ms | 1.6 | 3.3 |
| [YOLO11s-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-cls.pt) | 224 | 75.4 | 92.7 | 7.89 ± 0.18 ms | 1.34 ± 0.01 ms | 5.5 | 12.1 |
@@ -89,7 +89,7 @@ This table provides an overview of the YOLO11 model variants, showcasing their a
See [Pose Estimation Docs](../tasks/pose.md) for usage examples with these models trained on [COCO](../datasets/pose/coco.md), which include 1 pre-trained class, 'person'.
- | Model | size<br><sup>(pixels) | mAP<sup>pose<br>50-95 | mAP<sup>pose<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>Tesla T4 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
+ | Model | size<br><sup>(pixels) | mAP<sup>pose<br>50-95 | mAP<sup>pose<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ---------------------------------------------------------------------------------------------- | --------------------- | --------------------- | ------------------ | ------------------------------ | --------------------------------------- | ------------------ | ----------------- |
| [YOLO11n-pose](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-pose.pt) | 640 | 50.0 | 81.0 | 52.40 ± 0.51 ms | 1.72 ± 0.01 ms | 2.9 | 7.6 |
| [YOLO11s-pose](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-pose.pt) | 640 | 58.9 | 86.3 | 90.54 ± 0.59 ms | 2.57 ± 0.00 ms | 9.9 | 23.2 |
@@ -101,7 +101,7 @@ This table provides an overview of the YOLO11 model variants, showcasing their a
See [Oriented Detection Docs](../tasks/obb.md) for usage examples with these models trained on [DOTAv1](../datasets/obb/dota-v2.md#dota-v10), which include 15 pre-trained classes.
- | Model | size<br><sup>(pixels) | mAP<sup>test<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>Tesla T4 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
+ | Model | size<br><sup>(pixels) | mAP<sup>test<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| -------------------------------------------------------------------------------------------- | --------------------- | ------------------ | ------------------------------ | --------------------------------------- | ------------------ | ----------------- |
| [YOLO11n-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-obb.pt) | 1024 | 78.4 | 117.56 ± 0.80 ms | 4.43 ± 0.01 ms | 2.7 | 17.2 |
| [YOLO11s-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11s-obb.pt) | 1024 | 79.5 | 219.41 ± 4.00 ms | 5.13 ± 0.02 ms | 9.7 | 57.5 |
@@ -113,7 +113,7 @@ This table provides an overview of the YOLO11 model variants, showcasing their a
This section provides simple YOLO11 training and inference examples. For full documentation on these and other [modes](../modes/index.md), see the [Predict](../modes/predict.md), [Train](../modes/train.md), [Val](../modes/val.md), and [Export](../modes/export.md) docs pages.
-Note that the example below is for YOLO11 [Detect](../tasks/detect.md) models for object detection. For additional supported tasks, see the [Segment](../tasks/segment.md), [Classify](../tasks/classify.md), [OBB](../tasks/obb.md), and [Pose](../tasks/pose.md) docs.
+Note that the example below is for YOLO11 [Detect](../tasks/detect.md) models for [object detection](https://www.ultralytics.com/glossary/object-detection). For additional supported tasks, see the [Segment](../tasks/segment.md), [Classify](../tasks/classify.md), [OBB](../tasks/obb.md), and [Pose](../tasks/pose.md) docs.
!!! example
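The tabs under this example admonition cover roughly the following Python workflow. A minimal sketch assuming the standard `ultralytics` package API; the dataset YAML, training settings, and image URL are illustrative:

```python
from ultralytics import YOLO

# Load a pretrained YOLO11 detection model
model = YOLO("yolo11n.pt")

# Train on the COCO8 example dataset (settings here are illustrative)
model.train(data="coco8.yaml", epochs=100, imgsz=640)

# Run inference on an image and display the result
results = model("https://ultralytics.com/images/bus.jpg")
results[0].show()
```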
@@ -176,9 +176,9 @@ Ultralytics YOLO11 introduces several significant advancements over its predeces
- **Enhanced Feature Extraction:** YOLO11 employs an improved backbone and neck architecture, enhancing [feature extraction](https://www.ultralytics.com/glossary/feature-extraction) capabilities for more precise object detection.
- **Optimized Efficiency and Speed:** Refined architectural designs and optimized training pipelines deliver faster processing speeds while maintaining a balance between accuracy and performance.
-- **Greater Accuracy with Fewer Parameters:** YOLO11m achieves higher mean Average Precision (mAP) on the COCO dataset with 22% fewer parameters than YOLOv8m, making it computationally efficient without compromising accuracy.
+- **Greater Accuracy with Fewer Parameters:** YOLO11m achieves higher mean Average [Precision](https://www.ultralytics.com/glossary/precision) (mAP) on the COCO dataset with 22% fewer parameters than YOLOv8m, making it computationally efficient without compromising accuracy.
- **Adaptability Across Environments:** YOLO11 can be deployed across various environments, including edge devices, cloud platforms, and systems supporting NVIDIA GPUs.
-- **Broad Range of Supported Tasks:** YOLO11 supports diverse computer vision tasks such as object detection, instance segmentation, image classification, pose estimation, and oriented object detection (OBB).
+- **Broad Range of Supported Tasks:** YOLO11 supports diverse computer vision tasks such as object detection, [instance segmentation](https://www.ultralytics.com/glossary/instance-segmentation), image classification, pose estimation, and oriented object detection (OBB).
### How do I train a YOLO11 model for object detection?
@@ -213,7 +213,7 @@ YOLO11 models are versatile and support a wide range of computer vision tasks, i
- **Object Detection:** Identifying and locating objects within an image.
- **Instance Segmentation:** Detecting objects and delineating their boundaries.
-- **Image Classification:** Categorizing images into predefined classes.
+- **[Image Classification](https://www.ultralytics.com/glossary/image-classification):** Categorizing images into predefined classes.
- **Pose Estimation:** Detecting and tracking keypoints on human bodies.
- **Oriented Object Detection (OBB):** Detecting objects with rotation for higher precision.
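Each task above maps to a dedicated set of pretrained weights (the `-seg`, `-cls`, `-pose`, and `-obb` checkpoints listed in the tables earlier on this page). A minimal sketch of switching tasks, assuming the standard `ultralytics` API; the image URL is illustrative:

```python
from ultralytics import YOLO

# Task-specific YOLO11 checkpoints; the task is inferred from the weights
detect = YOLO("yolo11n.pt")  # object detection
segment = YOLO("yolo11n-seg.pt")  # instance segmentation
classify = YOLO("yolo11n-cls.pt")  # image classification
pose = YOLO("yolo11n-pose.pt")  # pose estimation
obb = YOLO("yolo11n-obb.pt")  # oriented object detection (OBB)

# The same predict call works for every task
results = pose("https://ultralytics.com/images/bus.jpg")
results[0].show()
```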
diff --git a/docs/en/models/yolov6.md b/docs/en/models/yolov6.md
index b670d58d31..c41b40c838 100644
--- a/docs/en/models/yolov6.md
+++ b/docs/en/models/yolov6.md
@@ -24,7 +24,7 @@ keywords: Meituan YOLOv6, object detection, real-time applications, BiC module,
YOLOv6 provides various pre-trained models with different scales:
-- YOLOv6-N: 37.5% AP on COCO val2017 at 1187 FPS with NVIDIA Tesla T4 GPU.
+- YOLOv6-N: 37.5% AP on COCO val2017 at 1187 FPS with NVIDIA T4 GPU.
- YOLOv6-S: 45.0% AP at 484 FPS.
- YOLOv6-M: 50.0% AP at 226 FPS.
- YOLOv6-L: 52.8% AP at 116 FPS.
@@ -151,7 +151,7 @@ YOLOv6 offers multiple versions, each optimized for different performance requir
- YOLOv6-L: 52.8% AP at 116 FPS
- YOLOv6-L6: State-of-the-art accuracy in real-time scenarios
-These models are evaluated on the COCO dataset using an NVIDIA Tesla T4 GPU. For more on performance metrics, see the [Performance Metrics](#performance-metrics) section.
+These models are evaluated on the COCO dataset using an NVIDIA T4 GPU. For more on performance metrics, see the [Performance Metrics](#performance-metrics) section.
### How does the Anchor-Aided Training (AAT) strategy benefit YOLOv6?
diff --git a/docs/en/yolov5/environments/google_cloud_quickstart_tutorial.md b/docs/en/yolov5/environments/google_cloud_quickstart_tutorial.md
index 65ad9c3d62..69e2fa80a8 100644
--- a/docs/en/yolov5/environments/google_cloud_quickstart_tutorial.md
+++ b/docs/en/yolov5/environments/google_cloud_quickstart_tutorial.md
@@ -18,7 +18,7 @@ Let's begin by creating a virtual machine that's tuned for deep learning:
1. Head over to the [GCP marketplace](https://console.cloud.google.com/marketplace/details/click-to-deploy-images/deeplearning) and select the **Deep Learning VM**.
2. Opt for an **n1-standard-8** instance; it offers a balance of 8 vCPUs and 30 GB of memory, ideally suited for our needs.
-3. Next, select a GPU. This depends on your workload; even a basic one like the Tesla T4 will markedly accelerate your model training.
+3. Next, select a GPU. This depends on your workload; even a basic one like the T4 will markedly accelerate your model training.
4. Tick the box for 'Install NVIDIA GPU driver automatically on first startup?' for hassle-free setup.
5. Allocate a 300 GB SSD Persistent Disk to ensure you don't bottleneck on I/O operations.
6. Hit 'Deploy' and let GCP do its magic in provisioning your custom Deep Learning VM.
diff --git a/examples/heatmaps.ipynb b/examples/heatmaps.ipynb
index 1f590b8cb0..c8064cc7f4 100644
--- a/examples/heatmaps.ipynb
+++ b/examples/heatmaps.ipynb
@@ -56,7 +56,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "Ultralytics YOLOv8.2.17 🚀 Python-3.10.12 torch-2.2.1+cu121 CUDA:0 (Tesla T4, 15102MiB)\n",
+ "Ultralytics YOLOv8.2.17 🚀 Python-3.10.12 torch-2.2.1+cu121 CUDA:0 (T4, 15102MiB)\n",
"Setup complete ✅ (2 CPUs, 12.7 GB RAM, 29.8/78.2 GB disk)\n"
]
}
diff --git a/examples/hub.ipynb b/examples/hub.ipynb
index ee6f3cae60..e86b795e5e 100644
--- a/examples/hub.ipynb
+++ b/examples/hub.ipynb
@@ -54,7 +54,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "Ultralytics YOLOv8.2.3 🚀 Python-3.10.12 torch-2.2.1+cu121 CUDA:0 (Tesla T4, 15102MiB)\n",
+ "Ultralytics YOLOv8.2.3 🚀 Python-3.10.12 torch-2.2.1+cu121 CUDA:0 (T4, 15102MiB)\n",
"Setup complete ✅ (2 CPUs, 12.7 GB RAM, 28.8/78.2 GB disk)\n"
]
}
diff --git a/examples/object_counting.ipynb b/examples/object_counting.ipynb
index 8c3d0ba6e8..988bc3269d 100644
--- a/examples/object_counting.ipynb
+++ b/examples/object_counting.ipynb
@@ -56,7 +56,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "Ultralytics YOLOv8.2.18 🚀 Python-3.10.12 torch-2.2.1+cu121 CUDA:0 (Tesla T4, 15102MiB)\n",
+ "Ultralytics YOLOv8.2.18 🚀 Python-3.10.12 torch-2.2.1+cu121 CUDA:0 (T4, 15102MiB)\n",
"Setup complete ✅ (2 CPUs, 12.7 GB RAM, 29.8/78.2 GB disk)\n"
]
}
diff --git a/examples/object_tracking.ipynb b/examples/object_tracking.ipynb
index 17c27c0ec0..af43cc5174 100644
--- a/examples/object_tracking.ipynb
+++ b/examples/object_tracking.ipynb
@@ -56,7 +56,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "Ultralytics YOLOv8.2.17 🚀 Python-3.10.12 torch-2.2.1+cu121 CUDA:0 (Tesla T4, 15102MiB)\n",
+ "Ultralytics YOLOv8.2.17 🚀 Python-3.10.12 torch-2.2.1+cu121 CUDA:0 (T4, 15102MiB)\n",
"Setup complete ✅ (2 CPUs, 12.7 GB RAM, 29.8/78.2 GB disk)\n"
]
}
diff --git a/examples/tutorial.ipynb b/examples/tutorial.ipynb
index 996db09256..1ecb2d9809 100644
--- a/examples/tutorial.ipynb
+++ b/examples/tutorial.ipynb
@@ -78,7 +78,7 @@
"output_type": "stream",
"name": "stdout",
"text": [
- "Ultralytics YOLOv8.2.3 🚀 Python-3.10.12 torch-2.2.1+cu121 CUDA:0 (Tesla T4, 15102MiB)\n",
+ "Ultralytics YOLOv8.2.3 🚀 Python-3.10.12 torch-2.2.1+cu121 CUDA:0 (T4, 15102MiB)\n",
"Setup complete ✅ (2 CPUs, 12.7 GB RAM, 28.8/78.2 GB disk)\n"
]
}
@@ -116,7 +116,7 @@
"text": [
"Downloading https://github.com/ultralytics/assets/releases/download/v8.3.0/yolov8n.pt to 'yolov8n.pt'...\n",
"100% 6.23M/6.23M [00:00<00:00, 83.2MB/s]\n",
- "Ultralytics YOLOv8.2.3 🚀 Python-3.10.12 torch-2.2.1+cu121 CUDA:0 (Tesla T4, 15102MiB)\n",
+ "Ultralytics YOLOv8.2.3 🚀 Python-3.10.12 torch-2.2.1+cu121 CUDA:0 (T4, 15102MiB)\n",
"YOLOv8n summary (fused): 168 layers, 3151904 parameters, 0 gradients, 8.7 GFLOPs\n",
"\n",
"Downloading https://ultralytics.com/images/zidane.jpg to 'zidane.jpg'...\n",
@@ -182,7 +182,7 @@
"output_type": "stream",
"name": "stdout",
"text": [
- "Ultralytics YOLOv8.2.3 🚀 Python-3.10.12 torch-2.2.1+cu121 CUDA:0 (Tesla T4, 15102MiB)\n",
+ "Ultralytics YOLOv8.2.3 🚀 Python-3.10.12 torch-2.2.1+cu121 CUDA:0 (T4, 15102MiB)\n",
"YOLOv8n summary (fused): 168 layers, 3151904 parameters, 0 gradients, 8.7 GFLOPs\n",
"\n",
"Dataset 'coco8.yaml' images not found ⚠️, missing path '/content/datasets/coco8/images/val'\n",
@@ -261,7 +261,7 @@
"output_type": "stream",
"name": "stdout",
"text": [
- "Ultralytics YOLOv8.2.3 🚀 Python-3.10.12 torch-2.2.1+cu121 CUDA:0 (Tesla T4, 15102MiB)\n",
+ "Ultralytics YOLOv8.2.3 🚀 Python-3.10.12 torch-2.2.1+cu121 CUDA:0 (T4, 15102MiB)\n",
"\u001b[34m\u001b[1mengine/trainer: \u001b[0mtask=detect, mode=train, model=yolov8n.pt, data=coco8.yaml, epochs=3, time=None, patience=100, batch=16, imgsz=640, save=True, save_period=-1, cache=False, device=None, workers=8, project=None, name=train, exist_ok=False, pretrained=True, optimizer=auto, verbose=True, seed=0, deterministic=True, single_cls=False, rect=False, cos_lr=False, close_mosaic=10, resume=False, amp=True, fraction=1.0, profile=False, freeze=None, multi_scale=False, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, vid_stride=1, stream_buffer=False, visualize=False, augment=False, agnostic_nms=False, classes=None, retina_masks=False, embed=None, show=False, save_frames=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, show_boxes=True, line_width=None, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=None, workspace=4, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, label_smoothing=0.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, bgr=0.0, mosaic=1.0, mixup=0.0, copy_paste=0.0, auto_augment=randaugment, erasing=0.4, crop_fraction=1.0, cfg=None, tracker=botsort.yaml, save_dir=runs/detect/train\n",
"\n",
" from n params module arguments \n",
@@ -330,7 +330,7 @@
"Optimizer stripped from runs/detect/train/weights/best.pt, 6.5MB\n",
"\n",
"Validating runs/detect/train/weights/best.pt...\n",
- "Ultralytics YOLOv8.2.3 🚀 Python-3.10.12 torch-2.2.1+cu121 CUDA:0 (Tesla T4, 15102MiB)\n",
+ "Ultralytics YOLOv8.2.3 🚀 Python-3.10.12 torch-2.2.1+cu121 CUDA:0 (T4, 15102MiB)\n",
"Model summary (fused): 168 layers, 3151904 parameters, 0 gradients, 8.7 GFLOPs\n",
" Class Images Instances Box(P R mAP50 mAP50-95): 100% 1/1 [00:00<00:00, 18.23it/s]\n",
" all 4 17 0.617 0.884 0.888 0.622\n",