openimage-docs-fix
Francesco Mattioli 6 months ago
parent 05506d6b67
commit 284e462f82
1. docs/en/datasets/detect/open-images-v7.md (25)
2. docs/en/models/yolov5.md (2)
3. docs/en/models/yolov6.md (2)
4. docs/en/tasks/classify.md (22)
5. docs/en/tasks/obb.md (22)

@@ -31,29 +31,6 @@ keywords: Open Images V7, Google dataset, computer vision, YOLOv8 models, object
![Open Images V7 classes visual](https://github.com/ultralytics/docs/releases/download/0/open-images-v7-classes-visual.avif)
-## Usage
-
-!!! Example "Predict Example"
-
-    === "Python"
-
-        ```python
-        from ultralytics import YOLO
-
-        # Load a Open Images V7 pretrained YOLOv8n model
-        model = YOLO("yolov8n-oiv7.pt")
-
-        # Predict
-        results = model.predict()
-        ```
-
-    === "CLI"
-
-        ```bash
-        # Predict using a Open Images V7 pretrained YOLOv8n model
-        yolo predict model=yolov8n-oiv7.pt
-        ```
## Key Features
- Encompasses ~9M images annotated in various ways to suit multiple computer vision tasks.
@@ -90,7 +67,7 @@ Typically, datasets come with a YAML (YAML Ain't Markup Language) file that del
--8<-- "ultralytics/cfg/datasets/open-images-v7.yaml"
```
-## Training Usage
+## Usage
To train a YOLOv8n model on the Open Images V7 dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
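The training snippets referenced here fall outside this hunk. As a hedged sketch (assuming the `ultralytics` package and its standard `data`/`epochs`/`imgsz` training arguments), the invocation described above would look like:

```python
# Sketch of the training run described above: YOLOv8n on Open Images V7,
# 100 epochs, image size 640. The Python API lines are commented out so the
# snippet runs even without the `ultralytics` package installed.
train_args = {"data": "open-images-v7.yaml", "epochs": 100, "imgsz": 640}

# from ultralytics import YOLO
# model = YOLO("yolov8n.pt")           # start from a COCO-pretrained checkpoint
# results = model.train(**train_args)

# Equivalent CLI invocation:
cli = "yolo detect train " + " ".join(f"{k}={v}" for k, v in train_args.items())
print(cli)  # yolo detect train data=open-images-v7.yaml epochs=100 imgsz=640
```

The dataset YAML name mirrors the `--8<--` include shown above; all other names follow the standard Ultralytics argument conventions.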

@@ -34,7 +34,7 @@ This table provides a detailed overview of the YOLOv5u model variants, highlight
!!! Performance
-=== "Detection"
+=== "Detection (COCO)"
See [Detection Docs](../tasks/detect.md) for usage examples with these models trained on [COCO](../datasets/detect/coco.md), which include 80 pre-trained classes.

@@ -38,7 +38,7 @@ This table provides a detailed overview of the YOLOv6 model variants, highlighti
!!! Performance
-=== "Detection"
+=== "Detection (COCO)"
YOLOv6 provides various pre-trained models at different scales: yolov6n, yolov6s, yolov6m, and yolov6l. These models are evaluated on the COCO dataset using an NVIDIA Tesla T4 GPU. The following table summarizes the performance metrics of YOLOv6 models:

@@ -36,17 +36,17 @@ YOLOv8 pretrained Classify models are shown here. Detect, Segment and Pose model
| Pre-trained Weights | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) at 640 |
| ------------------- | --------------------- | ---------------- | ---------------- | ------------------------------ | ----------------------------------- | ------------------ | ------------------------ |
-| [yolo8n-cls.pt] | 224 | 69.0 | 88.3 | 12.9 | 0.31 | 2.7 | 4.3 |
-| [yolo8s-cls.pt] | 224 | 73.8 | 91.7 | 23.4 | 0.35 | 6.4 | 13.5 |
-| [yolo8m-cls.pt] | 224 | 76.8 | 93.5 | 85.4 | 0.62 | 17.0 | 42.7 |
-| [yolo8l-cls.pt] | 224 | 76.8 | 93.5 | 163.0 | 0.87 | 37.5 | 99.7 |
-| [yolo8x-cls.pt] | 224 | 79.0 | 94.6 | 232.0 | 1.01 | 57.4 | 154.8 |
-[yolo8n-cls.pt]: https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-cls.pt
-[yolo8s-cls.pt]: https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-cls.pt
-[yolo8m-cls.pt]: https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-cls.pt
-[yolo8l-cls.pt]: https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l-cls.pt
-[yolo8x-cls.pt]: https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-cls.pt
+| [yolov8n-cls.pt] | 224 | 69.0 | 88.3 | 12.9 | 0.31 | 2.7 | 4.3 |
+| [yolov8s-cls.pt] | 224 | 73.8 | 91.7 | 23.4 | 0.35 | 6.4 | 13.5 |
+| [yolov8m-cls.pt] | 224 | 76.8 | 93.5 | 85.4 | 0.62 | 17.0 | 42.7 |
+| [yolov8l-cls.pt] | 224 | 76.8 | 93.5 | 163.0 | 0.87 | 37.5 | 99.7 |
+| [yolov8x-cls.pt] | 224 | 79.0 | 94.6 | 232.0 | 1.01 | 57.4 | 154.8 |
+[yolov8n-cls.pt]: https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-cls.pt
+[yolov8s-cls.pt]: https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-cls.pt
+[yolov8m-cls.pt]: https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-cls.pt
+[yolov8l-cls.pt]: https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l-cls.pt
+[yolov8x-cls.pt]: https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-cls.pt
- **acc** values are model accuracies on the [ImageNet](https://www.image-net.org/) dataset validation set. <br>Reproduce by `yolo val classify data=path/to/ImageNet device=0`
- **Speed** averaged over ImageNet val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val classify data=path/to/ImageNet batch=1 device=0|cpu`
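For readers unfamiliar with the table's **acc** columns, top-1 and top-5 accuracy can be illustrated with a small self-contained sketch (the function name and toy data here are ours, not from the docs):

```python
def topk_accuracy(scores, labels, k=1):
    """Fraction of samples whose true class is among the k highest scores."""
    hits = 0
    for row, label in zip(scores, labels):
        # indices of the k largest scores for this sample
        topk = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += label in topk
    return hits / len(labels)


# Toy example: 3 classes, 2 images
scores = [
    [0.1, 0.7, 0.2],  # model's best guess: class 1
    [0.5, 0.2, 0.3],  # model's best guess: class 0
]
labels = [1, 2]  # true classes

print(topk_accuracy(scores, labels, k=1))  # 0.5 -- second image missed
print(topk_accuracy(scores, labels, k=2))  # 1.0 -- class 2 is in its top-2
```

Top-5 is always at least as high as top-1, which matches the column ordering in the table.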

@@ -56,17 +56,17 @@ YOLOv8 pretrained OBB models are shown here, which are pretrained on the [DOTAv1
| Pre-trained Weights | size<br><sup>(pixels) | mAP<sup>test<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ------------------- | --------------------- | ------------------ | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
-| [yolo8n-obb.pt] | 1024 | 78.0 | 204.77 | 3.57 | 3.1 | 23.3 |
-| [yolo8s-obb.pt] | 1024 | 79.5 | 424.88 | 4.07 | 11.4 | 76.3 |
-| [yolo8m-obb.pt] | 1024 | 80.5 | 763.48 | 7.61 | 26.4 | 208.6 |
-| [yolo8l-obb.pt] | 1024 | 80.7 | 1278.42 | 11.83 | 44.5 | 433.8 |
-| [yolo8x-obb.pt] | 1024 | 81.36 | 1759.10 | 13.23 | 69.5 | 676.7 |
-[yolo8n-obb.pt]: https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-obb.pt
-[yolo8s-obb.pt]: https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-obb.pt
-[yolo8m-obb.pt]: https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-obb.pt
-[yolo8l-obb.pt]: https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l-obb.pt
-[yolo8x-obb.pt]: https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-obb.pt
+| [yolov8n-obb.pt] | 1024 | 78.0 | 204.77 | 3.57 | 3.1 | 23.3 |
+| [yolov8s-obb.pt] | 1024 | 79.5 | 424.88 | 4.07 | 11.4 | 76.3 |
+| [yolov8m-obb.pt] | 1024 | 80.5 | 763.48 | 7.61 | 26.4 | 208.6 |
+| [yolov8l-obb.pt] | 1024 | 80.7 | 1278.42 | 11.83 | 44.5 | 433.8 |
+| [yolov8x-obb.pt] | 1024 | 81.36 | 1759.10 | 13.23 | 69.5 | 676.7 |
+[yolov8n-obb.pt]: https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-obb.pt
+[yolov8s-obb.pt]: https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-obb.pt
+[yolov8m-obb.pt]: https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-obb.pt
+[yolov8l-obb.pt]: https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l-obb.pt
+[yolov8x-obb.pt]: https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-obb.pt
- **mAP<sup>test</sup>** values are for single-model multiscale on [DOTAv1 test](https://captain-whu.github.io/DOTA/index.html) dataset. <br>Reproduce by `yolo val obb data=DOTAv1.yaml device=0 split=test` and submit merged results to [DOTA evaluation](https://captain-whu.github.io/DOTA/evaluation.html).
- **Speed** averaged over DOTAv1 val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val obb data=DOTAv1.yaml batch=1 device=0|cpu`
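As background for the **mAP** columns, average precision for a single class is the area under the precision-recall curve of confidence-ranked detections. A minimal sketch with toy data (ours, not from the docs; real COCO/DOTA evaluation additionally does IoU matching and precision-envelope interpolation, omitted here):

```python
def average_precision(detections, num_gt):
    """AP from (confidence, is_true_positive) pairs for one class."""
    detections = sorted(detections, key=lambda d: -d[0])  # rank by confidence
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for _conf, is_tp in detections:
        tp += is_tp
        fp += not is_tp
        precision = tp / (tp + fp)
        recall = tp / num_gt
        ap += precision * (recall - prev_recall)  # rectangle under P-R curve
        prev_recall = recall
    return ap


# 3 detections evaluated against 2 ground-truth boxes
dets = [(0.9, True), (0.8, False), (0.7, True)]
print(average_precision(dets, num_gt=2))  # 0.8333...
```

mAP is then the mean of these per-class AP values; for the OBB table this is reported on the DOTAv1 test split at IoU 0.5.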
