Add `integrations/gradio` Docs page (#7935)

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: WangQvQ <1579093407@qq.com>
Co-authored-by: Martin Pl <martin-plank@gmx.de>
Co-authored-by: Mactarvish <Mactarvish@users.noreply.github.com>
Glenn Jocher 10 months ago committed by GitHub
parent 2881cda483
commit ba484929e3
Files changed:

1. docs/en/datasets/obb/dota-v2.md (2 changed lines)
2. docs/en/integrations/gradio.md (104 changed lines)
3. docs/en/integrations/index.md (2 changed lines)
4. mkdocs.yml (1 changed line)
5. ultralytics/utils/metrics.py (4 changed lines)

@@ -119,7 +119,7 @@ To train a model on the DOTA v1 dataset, you can utilize the following code snip
 ```bash
 # Train a new YOLOv8n-OBB model on the DOTAv2 dataset
-yolo detect train data=DOTAv1.yaml model=yolov8n.pt epochs=100 imgsz=640
+yolo obb train data=DOTAv1.yaml model=yolov8n-obb.pt epochs=100 imgsz=640
 ```

 ## Sample Data and Annotations
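For readers following along, a rough Python-API equivalent of the corrected CLI command above (a sketch; it assumes `DOTAv1.yaml` and the `yolov8n-obb.pt` weights resolve exactly as in the CLI example):

```python
from ultralytics import YOLO

# Load an OBB-pretrained checkpoint and train it on the DOTA dataset, mirroring:
# yolo obb train data=DOTAv1.yaml model=yolov8n-obb.pt epochs=100 imgsz=640
model = YOLO("yolov8n-obb.pt")
model.train(data="DOTAv1.yaml", epochs=100, imgsz=640)
```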

@@ -0,0 +1,104 @@
---
comments: true
description: Learn to use Gradio and Ultralytics YOLOv8 for interactive object detection. Upload images and adjust detection parameters in real-time.
keywords: Gradio, Ultralytics YOLOv8, object detection, interactive AI, Python
---
# Interactive Object Detection: Gradio & Ultralytics YOLOv8 🚀
## Introduction to Interactive Object Detection
This Gradio interface provides an easy and interactive way to perform object detection using the [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics/) model. Users can upload images and adjust parameters like confidence threshold and intersection-over-union (IoU) threshold to get real-time detection results.
## Why Use Gradio for Object Detection?
* **User-Friendly Interface:** Gradio offers a straightforward platform for users to upload images and visualize detection results without any coding requirement.
* **Real-Time Adjustments:** Parameters such as confidence and IoU thresholds can be adjusted on the fly, allowing for immediate feedback and optimization of detection results.
* **Broad Accessibility:** The Gradio web interface can be accessed by anyone, making it an excellent tool for demonstrations, educational purposes, and quick experiments.
<img width="800" alt="Gradio example screenshot" src="https://github.com/WangQvQ/ultralytics/assets/58406737/5d906f10-fd62-4bcc-8856-ef3233102c1d">
## How to Install Gradio
```bash
pip install gradio
```
## How to Use the Interface
1. **Upload Image:** Click on 'Upload Image' to choose an image file for object detection.
2. **Adjust Parameters:**
* **Confidence Threshold:** Slider to set the minimum confidence level for detecting objects.
* **IoU Threshold:** Slider to set the IoU threshold for distinguishing different objects.
3. **View Results:** The processed image with detected objects and their labels will be displayed.
## Example Use Cases
* **Sample Image 1:** Bus detection with default thresholds.
* **Sample Image 2:** Detection on a sports image with default thresholds.
## Usage Example
This section provides the Python code used to create the Gradio interface with the Ultralytics YOLOv8 model. The interface supports classification, detection, segmentation, and keypoint (pose) tasks.
```python
import PIL.Image as Image
import gradio as gr

from ultralytics import ASSETS, YOLO

# Load the pretrained YOLOv8n detection model
model = YOLO("yolov8n.pt")


def predict_image(img, conf_threshold, iou_threshold):
    """Run YOLOv8 inference on an uploaded image with the chosen thresholds."""
    results = model.predict(
        source=img,
        conf=conf_threshold,
        iou=iou_threshold,
        show_labels=True,
        show_conf=True,
        imgsz=640,
    )

    # Render the predictions onto the image and convert BGR -> RGB for PIL
    for r in results:
        im_array = r.plot()
        im = Image.fromarray(im_array[..., ::-1])

    return im


iface = gr.Interface(
    fn=predict_image,
    inputs=[
        gr.Image(type="pil", label="Upload Image"),
        gr.Slider(minimum=0, maximum=1, value=0.25, label="Confidence threshold"),
        gr.Slider(minimum=0, maximum=1, value=0.45, label="IoU threshold"),
    ],
    outputs=gr.Image(type="pil", label="Result"),
    title="Ultralytics Gradio",
    description="Upload images for inference. The Ultralytics YOLOv8n model is used by default.",
    examples=[
        [ASSETS / "bus.jpg", 0.25, 0.45],
        [ASSETS / "zidane.jpg", 0.25, 0.45],
    ],
)

if __name__ == "__main__":
    iface.launch()
```
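The example above loads the detection weights, but since `r.plot()` renders whatever the loaded model predicts, the same interface code can serve other task types by swapping the checkpoint. A minimal sketch, assuming the corresponding official YOLOv8n weights are available:

```python
# Swap the checkpoint to change the task; the rest of the interface is unchanged
# because r.plot() draws task-specific results (boxes, masks, keypoints, or class labels).
model = YOLO("yolov8n-seg.pt")  # instance segmentation
# model = YOLO("yolov8n-pose.pt")  # keypoint / pose estimation
# model = YOLO("yolov8n-cls.pt")  # image classification
```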
## Parameters Explanation
| Parameter Name | Type | Description |
|------------------|---------|----------------------------------------------------------|
| `img` | `Image` | The image on which object detection will be performed. |
| `conf_threshold` | `float` | Confidence threshold for detecting objects. |
| `iou_threshold` | `float` | Intersection-over-union threshold for object separation. |
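To sanity-check how these parameters behave without launching the web UI, the prediction function can be called directly. A small sketch that reuses the bundled `bus.jpg` sample from the `examples` list above (the output filename is just an example):

```python
# Run the prediction function once outside Gradio to inspect a threshold setting
test_img = Image.open(ASSETS / "bus.jpg")
annotated = predict_image(test_img, conf_threshold=0.25, iou_threshold=0.45)
annotated.save("bus_result.jpg")  # annotated PIL image returned by predict_image
```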
### Gradio Interface Components
| Component | Description |
|--------------|------------------------------------------|
| Image Input | To upload the image for detection. |
| Sliders | To adjust confidence and IoU thresholds. |
| Image Output | To display the detection results. |
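By default `iface.launch()` serves the app on localhost only. Gradio's standard launch options can expose it more widely, for example:

```python
# Common launch variations using standard Gradio options
iface.launch(share=True)  # additionally create a temporary public gradio.live link
# iface.launch(server_name="0.0.0.0", server_port=7860)  # bind to all interfaces on a fixed port
```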

@@ -40,6 +40,8 @@ Welcome to the Ultralytics Integrations page! This page provides an overview of
 - [Neural Magic](neural-magic.md): Leverage Quantization Aware Training (QAT) and pruning techniques to optimize Ultralytics models for superior performance and leaner size.
+- [Gradio](../integrations/gradio.md) 🚀 NEW: Deploy Ultralytics models with Gradio for real-time, interactive object detection demos.
 - [OpenVINO](openvino.md): Intel's toolkit for optimizing and deploying computer vision models efficiently across various Intel CPU and GPU platforms.
 - [ONNX](onnx.md): An open-source format created by [Microsoft](https://www.microsoft.com) for facilitating the transfer of AI models between various frameworks, enhancing the versatility and deployment flexibility of Ultralytics models.

@@ -345,6 +345,7 @@ nav:
 - DVC: integrations/dvc.md
 - Weights & Biases: integrations/weights-biases.md
 - Neural Magic: integrations/neural-magic.md
+- Gradio: integrations/gradio.md
 - TensorBoard: integrations/tensorboard.md
 - Amazon SageMaker: integrations/amazon-sagemaker.md
 - HUB:

@@ -701,7 +701,7 @@ class Metric(SimpleClass):
         Returns the mean Average Precision (mAP) at an IoU threshold of 0.5.

         Returns:
-            (float): The mAP50 at an IoU threshold of 0.5.
+            (float): The mAP at an IoU threshold of 0.5.
         """
         return self.all_ap[:, 0].mean() if len(self.all_ap) else 0.0

@@ -711,7 +711,7 @@ class Metric(SimpleClass):
         Returns the mean Average Precision (mAP) at an IoU threshold of 0.75.

         Returns:
-            (float): The mAP50 at an IoU threshold of 0.75.
+            (float): The mAP at an IoU threshold of 0.75.
         """
         return self.all_ap[:, 5].mean() if len(self.all_ap) else 0.0
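For context, these docstrings describe values users typically read after validation. A minimal sketch of how they surface through the Python API (assuming a detection model and the standard `coco128.yaml` dataset config):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
metrics = model.val(data="coco128.yaml")  # run validation on the dataset
print(metrics.box.map50, metrics.box.map75)  # mAP at IoU 0.5 and 0.75, matching the docstrings above
```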
