Fix mkdocs.yml raw image URLs (#14213)

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: Burhan <62214284+Burhan-Q@users.noreply.github.com>
pull/14138/head^2
Glenn Jocher 5 months ago committed by GitHub
parent d5db9c916f
commit 5d479c73c2
69 changed files (lines changed shown in parentheses):

1. docs/en/guides/analytics.md (169)
2. docs/en/guides/azureml-quickstart.md (75)
3. docs/en/guides/conda-quickstart.md (63)
4. docs/en/guides/coral-edge-tpu-on-raspberry-pi.md (84)
5. docs/en/guides/deepstream-nvidia-jetson.md (36)
6. docs/en/guides/distance-calculation.md (37)
7. docs/en/guides/docker-quickstart.md (57)
8. docs/en/guides/heatmaps.md (71)
9. docs/en/guides/hyperparameter-tuning.md (54)
10. docs/en/guides/index.md (41)
11. docs/en/guides/instance-segmentation-and-tracking.md (111)
12. docs/en/guides/isolating-segmentation-objects.md (92)
13. docs/en/guides/kfold-cross-validation.md (30)
14. docs/en/guides/model-deployment-options.md (65)
15. docs/en/guides/model-deployment-practices.md (34)
16. docs/en/guides/model-evaluation-insights.md (53)
17. docs/en/guides/model-testing.md (60)
18. docs/en/guides/model-training-tips.md (6)
19. docs/en/guides/nvidia-jetson.md (28)
20. docs/en/guides/object-blurring.md (54)
21. docs/en/guides/object-counting.md (124)
22. docs/en/guides/object-cropping.md (24)
23. docs/en/guides/optimizing-openvino-latency-vs-throughput-modes.md (60)
24. docs/en/guides/parking-management.md (34)
25. docs/en/guides/queue-management.md (97)
26. docs/en/guides/raspberry-pi.md (121)
27. docs/en/guides/region-counting.md (47)
28. docs/en/guides/ros-quickstart.md (112)
29. docs/en/guides/sahi-tiled-inference.md (90)
30. docs/en/guides/security-alarm-system.md (22)
31. docs/en/guides/speed-estimation.md (83)
32. docs/en/guides/triton-inference-server.md (123)
33. docs/en/guides/view-results-in-terminal.md (102)
34. docs/en/guides/vision-eye.md (129)
35. docs/en/guides/workouts-monitoring.md (109)
36. docs/en/guides/yolo-common-issues.md (32)
37. docs/en/guides/yolo-performance-metrics.md (36)
38. docs/en/guides/yolo-thread-safe-inference.md (75)
39. docs/en/help/FAQ.md (182)
40. docs/en/integrations/amazon-sagemaker.md (87)
41. docs/en/integrations/clearml.md (59)
42. docs/en/integrations/comet.md (103)
43. docs/en/integrations/coreml.md (94)
44. docs/en/integrations/dvc.md (107)
45. docs/en/integrations/edge-tpu.md (69)
46. docs/en/integrations/google-colab.md (41)
47. docs/en/integrations/gradio.md (81)
48. docs/en/integrations/index.md (24)
49. docs/en/integrations/mlflow.md (88)
50. docs/en/integrations/ncnn.md (66)
51. docs/en/integrations/neural-magic.md (49)
52. docs/en/integrations/onnx.md (79)
53. docs/en/integrations/openvino.md (104)
54. docs/en/integrations/paddlepaddle.md (80)
55. docs/en/integrations/paperspace.md (29)
56. docs/en/integrations/ray-tune.md (99)
57. docs/en/integrations/roboflow.md (26)
58. docs/en/integrations/tensorboard.md (183)
59. docs/en/integrations/tensorrt.md (91)
60. docs/en/integrations/tf-graphdef.md (78)
61. docs/en/integrations/tf-savedmodel.md (77)
62. docs/en/integrations/tfjs.md (80)
63. docs/en/integrations/tflite.md (71)
64. docs/en/integrations/torchscript.md (78)
65. docs/en/integrations/weights-biases.md (123)
66. docs/en/solutions/index.md (22)
67. mkdocs.yml (4)
68. pyproject.toml (2)
69. ultralytics/nn/tasks.py (2)

@@ -4,7 +4,7 @@ description: Learn to create line graphs, bar plots, and pie charts using Python
keywords: Ultralytics, YOLOv8, data visualization, line graphs, bar plots, pie charts, Python, analytics, tutorial, guide
---
- # Analytics using Ultralytics YOLOv8 📊
+ # Analytics using Ultralytics YOLOv8
## Introduction
@@ -324,3 +324,170 @@ Here's a table with the `Analytics` arguments:
## Conclusion
Understanding when and how to use different types of visualizations is crucial for effective data analysis. Line graphs, bar plots, and pie charts are fundamental tools that can help you convey your data's story more clearly and effectively.
## FAQ
### How do I create a line graph using Ultralytics YOLOv8 Analytics?
To create a line graph using Ultralytics YOLOv8 Analytics, follow these steps:
1. Load a YOLOv8 model and open your video file.
2. Initialize the `Analytics` class with the type set to `"line"`.
3. Iterate through video frames, updating the line graph with relevant data, such as object counts per frame.
4. Save the output video displaying the line graph.
Example:
```python
import cv2

from ultralytics import YOLO, solutions

model = YOLO("yolov8s.pt")
cap = cv2.VideoCapture("Path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
out = cv2.VideoWriter("line_plot.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))
analytics = solutions.Analytics(type="line", writer=out, im0_shape=(w, h), view_img=True)

frame_count = 0
while cap.isOpened():
    success, frame = cap.read()
    if success:
        frame_count += 1
        results = model.track(frame, persist=True)
        total_counts = len(results[0].boxes)  # number of detections in this frame
        analytics.update_line(frame_count, total_counts)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        break

cap.release()
out.release()
cv2.destroyAllWindows()
```
For further details on configuring the `Analytics` class, visit the [Analytics using Ultralytics YOLOv8](#analytics-using-ultralytics-yolov8) section.
### What are the benefits of using Ultralytics YOLOv8 for creating bar plots?
Using Ultralytics YOLOv8 for creating bar plots offers several benefits:
1. **Real-time Data Visualization**: Seamlessly integrate object detection results into bar plots for dynamic updates.
2. **Ease of Use**: Simple API and functions make it straightforward to implement and visualize data.
3. **Customization**: Customize titles, labels, colors, and more to fit your specific requirements.
4. **Efficiency**: Efficiently handle large amounts of data and update plots in real-time during video processing.
Use the following example to generate a bar plot:
```python
from collections import Counter

import cv2

from ultralytics import YOLO, solutions

model = YOLO("yolov8s.pt")
cap = cv2.VideoCapture("Path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
out = cv2.VideoWriter("bar_plot.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))
analytics = solutions.Analytics(type="bar", writer=out, im0_shape=(w, h), view_img=True)

while cap.isOpened():
    success, frame = cap.read()
    if success:
        results = model.track(frame, persist=True)
        # Count detections per class name for the current frame
        clswise_count = dict(Counter(model.names[int(cls)] for cls in results[0].boxes.cls.tolist()))
        analytics.update_bar(clswise_count)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        break

cap.release()
out.release()
cv2.destroyAllWindows()
```
To learn more, visit the [Bar Plot](#visual-samples) section in the guide.
### Why should I use Ultralytics YOLOv8 for creating pie charts in my data visualization projects?
Ultralytics YOLOv8 is an excellent choice for creating pie charts because:
1. **Integration with Object Detection**: Directly integrate object detection results into pie charts for immediate insights.
2. **User-Friendly API**: Simple to set up and use with minimal code.
3. **Customizable**: Various customization options for colors, labels, and more.
4. **Real-time Updates**: Handle and visualize data in real-time, which is ideal for video analytics projects.
Here's a quick example:
```python
from collections import Counter

import cv2

from ultralytics import YOLO, solutions

model = YOLO("yolov8s.pt")
cap = cv2.VideoCapture("Path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
out = cv2.VideoWriter("pie_chart.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))
analytics = solutions.Analytics(type="pie", writer=out, im0_shape=(w, h), view_img=True)

while cap.isOpened():
    success, frame = cap.read()
    if success:
        results = model.track(frame, persist=True)
        # Count detections per class name for the current frame
        clswise_count = dict(Counter(model.names[int(cls)] for cls in results[0].boxes.cls.tolist()))
        analytics.update_pie(clswise_count)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        break

cap.release()
out.release()
cv2.destroyAllWindows()
```
For more information, refer to the [Pie Chart](#visual-samples) section in the guide.
### Can Ultralytics YOLOv8 be used to track objects and dynamically update visualizations?
Yes, Ultralytics YOLOv8 can be used to track objects and dynamically update visualizations. It supports tracking multiple objects in real-time and can update various visualizations like line graphs, bar plots, and pie charts based on the tracked objects' data.
Example for tracking and updating a line graph:
```python
import cv2

from ultralytics import YOLO, solutions

model = YOLO("yolov8s.pt")
cap = cv2.VideoCapture("Path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
out = cv2.VideoWriter("line_plot.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))
analytics = solutions.Analytics(type="line", writer=out, im0_shape=(w, h), view_img=True)

frame_count = 0
while cap.isOpened():
    success, frame = cap.read()
    if success:
        frame_count += 1
        results = model.track(frame, persist=True)
        total_counts = len(results[0].boxes)  # number of detections in this frame
        analytics.update_line(frame_count, total_counts)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        break

cap.release()
out.release()
cv2.destroyAllWindows()
```
To learn about the complete functionality, see the [Tracking](../modes/track.md) section.
### What makes Ultralytics YOLOv8 different from other object detection solutions like OpenCV and TensorFlow?
Ultralytics YOLOv8 stands out from other object detection solutions like OpenCV and TensorFlow for multiple reasons:
1. **State-of-the-art Accuracy**: YOLOv8 provides superior accuracy in object detection, segmentation, and classification tasks.
2. **Ease of Use**: User-friendly API allows for quick implementation and integration without extensive coding.
3. **Real-time Performance**: Optimized for high-speed inference, suitable for real-time applications.
4. **Diverse Applications**: Supports various tasks including multi-object tracking, custom model training, and exporting to different formats like ONNX, TensorRT, and CoreML.
5. **Comprehensive Documentation**: Extensive [documentation](https://docs.ultralytics.com/) and [blog resources](https://www.ultralytics.com/blog) to guide users through every step.
For more detailed comparisons and use cases, explore our [Ultralytics Blog](https://www.ultralytics.com/blog/ai-use-cases-transforming-your-future).

@@ -150,3 +150,78 @@ This guide serves as an introduction to get you up and running with YOLOv8 on Az
- [Register a Model](https://learn.microsoft.com/azure/machine-learning/how-to-manage-models): Familiarize yourself with model management practices including registration, versioning, and deployment.
- [Train YOLOv8 with AzureML Python SDK](https://medium.com/@ouphi/how-to-train-the-yolov8-model-with-azure-machine-learning-python-sdk-8268696be8ba): Explore a step-by-step guide on using the AzureML Python SDK to train your YOLOv8 models.
- [Train YOLOv8 with AzureML CLI](https://medium.com/@ouphi/how-to-train-the-yolov8-model-with-azureml-and-the-az-cli-73d3c870ba8e): Discover how to utilize the command-line interface for streamlined training and management of YOLOv8 models on AzureML.
## FAQ
### How do I run YOLOv8 on AzureML for model training?
Running YOLOv8 on AzureML for model training involves several steps:
1. **Create a Compute Instance**: From your AzureML workspace, navigate to Compute > Compute instances > New, and select the required instance.
2. **Setup Environment**: Start your compute instance, open a terminal, and create a conda environment:
```bash
conda create --name yolov8env -y
conda activate yolov8env
conda install pip -y
pip install ultralytics "onnx>=1.12.0"
```
3. **Run YOLOv8 Tasks**: Use the Ultralytics CLI to train your model:
```bash
yolo train data=coco8.yaml model=yolov8n.pt epochs=10 lr0=0.01
```
For more details, you can refer to the [instructions to use the Ultralytics CLI](../quickstart.md#use-ultralytics-with-cli).
### What are the benefits of using AzureML for YOLOv8 training?
AzureML provides a robust and efficient ecosystem for training YOLOv8 models:
- **Scalability**: Easily scale your compute resources as your data and model complexity grow.
- **MLOps Integration**: Utilize features like versioning, monitoring, and auditing to streamline ML operations.
- **Collaboration**: Share and manage resources within teams, enhancing collaborative workflows.
These advantages make AzureML an ideal platform for projects ranging from quick prototypes to large-scale deployments. For more tips, check out [AzureML Jobs](https://learn.microsoft.com/azure/machine-learning/how-to-train-model).
### How do I troubleshoot common issues when running YOLOv8 on AzureML?
Troubleshooting common issues with YOLOv8 on AzureML can involve the following steps:
- **Dependency Issues**: Ensure all required packages are installed. Refer to the `requirements.txt` file for dependencies.
- **Environment Setup**: Verify that your conda environment is correctly activated before running commands.
- **Resource Allocation**: Make sure your compute instances have sufficient resources to handle the training workload.
For additional guidance, review our [YOLO Common Issues](https://docs.ultralytics.com/guides/yolo-common-issues/) documentation.
### Can I use both the Ultralytics CLI and Python interface on AzureML?
Yes, AzureML allows you to use both the Ultralytics CLI and the Python interface seamlessly:
- **CLI**: Ideal for quick tasks and running standard scripts directly from the terminal.
```bash
yolo predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'
```
- **Python Interface**: Useful for more complex tasks requiring custom coding and integration within notebooks.
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
model.train(data="coco8.yaml", epochs=3)
```
Refer to the quickstart guides for more detailed instructions [here](../quickstart.md#use-ultralytics-with-cli) and [here](../quickstart.md#use-ultralytics-with-python).
### What is the advantage of using Ultralytics YOLOv8 over other object detection models?
Ultralytics YOLOv8 offers several unique advantages over competing object detection models:
- **Speed**: Faster inference and training times compared to models like Faster R-CNN and SSD.
- **Accuracy**: High accuracy in detection tasks with features like anchor-free design and enhanced augmentation strategies.
- **Ease of Use**: Intuitive API and CLI for quick setup, making it accessible to both beginners and experts.
To explore more about YOLOv8's features, visit the [Ultralytics YOLO](https://www.ultralytics.com/yolo) page for detailed insights.

@@ -127,3 +127,66 @@ And that's it! Your Conda installation will now use `libmamba` as the solver, wh
---
Congratulations! You have successfully set up a Conda environment, installed the Ultralytics package, and are now ready to explore its rich functionalities. Feel free to dive deeper into the [Ultralytics documentation](../index.md) for more advanced tutorials and examples.
## FAQ
### What is the process for setting up a Conda environment for Ultralytics projects?
Setting up a Conda environment for Ultralytics projects is straightforward and ensures smooth package management. First, create a new Conda environment using the following command:
```bash
conda create --name ultralytics-env python=3.8 -y
```
Then, activate the new environment with:
```bash
conda activate ultralytics-env
```
Finally, install Ultralytics from the conda-forge channel:
```bash
conda install -c conda-forge ultralytics
```
### Why should I use Conda over pip for managing dependencies in Ultralytics projects?
Conda is a robust package and environment management system that offers several advantages over pip. It manages dependencies efficiently and ensures that all necessary libraries are compatible. Conda's isolated environments prevent conflicts between packages, which is crucial in data science and machine learning projects. Additionally, Conda supports binary package distribution, speeding up the installation process.
### Can I use Ultralytics YOLO in a CUDA-enabled environment for faster performance?
Yes, you can enhance performance by utilizing a CUDA-enabled environment. Ensure that you install `ultralytics`, `pytorch`, and `pytorch-cuda` together to avoid conflicts:
```bash
conda install -c pytorch -c nvidia -c conda-forge pytorch torchvision pytorch-cuda=11.8 ultralytics
```
This setup enables GPU acceleration, crucial for intensive tasks like deep learning model training and inference. For more information, visit the [Ultralytics installation guide](../quickstart.md).
### What are the benefits of using Ultralytics Docker images with a Conda environment?
Using Ultralytics Docker images ensures a consistent and reproducible environment, eliminating "it works on my machine" issues. These images include a pre-configured Conda environment, simplifying the setup process. You can pull and run the latest Ultralytics Docker image with the following commands:
```bash
sudo docker pull ultralytics/ultralytics:latest-conda
sudo docker run -it --ipc=host --gpus all ultralytics/ultralytics:latest-conda
```
This approach is ideal for deploying applications in production or running complex workflows without manual configuration. Learn more about [Ultralytics Conda Docker Image](../quickstart.md).
### How can I speed up Conda package installation in my Ultralytics environment?
You can speed up the package installation process by using `libmamba`, a fast dependency solver for Conda. First, install the `conda-libmamba-solver` package:
```bash
conda install conda-libmamba-solver
```
Then configure Conda to use `libmamba` as the solver:
```bash
conda config --set solver libmamba
```
This setup provides faster and more efficient package management. For more tips on optimizing your environment, read about [libmamba installation](../quickstart.md).

@@ -138,3 +138,87 @@ Find comprehensive information on the [Predict](../modes/predict.md) page for fu
```
If you want a `tflite-runtime` wheel for `tensorflow` 2.15.0 download it from [here](https://github.com/feranick/TFlite-builds/releases) and install it using `pip` or your package manager of choice.
## FAQ
### What is a Coral Edge TPU and how does it enhance Raspberry Pi's performance with Ultralytics YOLOv8?
The Coral Edge TPU is a compact device designed to add an Edge TPU coprocessor to your system. This coprocessor enables low-power, high-performance machine learning inference, particularly optimized for TensorFlow Lite models. When using a Raspberry Pi, the Edge TPU accelerates ML model inference, significantly boosting performance, especially for Ultralytics YOLOv8 models. You can read more about the Coral Edge TPU on their [home page](https://coral.ai/products/accelerator).
### How do I install the Coral Edge TPU runtime on a Raspberry Pi?
To install the Coral Edge TPU runtime on your Raspberry Pi, download the appropriate `.deb` package for your Raspberry Pi OS version from [this link](https://github.com/feranick/libedgetpu/releases). Once downloaded, use the following command to install it:
```bash
sudo dpkg -i path/to/package.deb
```
Make sure to uninstall any previous Coral Edge TPU runtime versions by following the steps outlined in the [Installation Walkthrough](#installation-walkthrough) section.
### Can I export my Ultralytics YOLOv8 model to be compatible with Coral Edge TPU?
Yes, you can export your Ultralytics YOLOv8 model to be compatible with the Coral Edge TPU. It is recommended to perform the export on Google Colab, an x86_64 Linux machine, or using the [Ultralytics Docker container](docker-quickstart.md). You can also use Ultralytics HUB for exporting. Here is how you can export your model using Python and CLI:
!!! Example "Exporting the model"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load an official model or custom model
        model = YOLO("path/to/model.pt")

        # Export the model
        model.export(format="edgetpu")
        ```

    === "CLI"

        ```bash
        yolo export model=path/to/model.pt format=edgetpu  # Export an official model or custom model
        ```
For more information, refer to the [Export Mode](../modes/export.md) documentation.
### What should I do if TensorFlow is already installed on my Raspberry Pi but I want to use tflite-runtime instead?
If you have TensorFlow installed on your Raspberry Pi and need to switch to `tflite-runtime`, you'll need to uninstall TensorFlow first using:
```bash
pip uninstall tensorflow tensorflow-aarch64
```
Then, install or update `tflite-runtime` with the following command:
```bash
pip install -U tflite-runtime
```
For a specific wheel, such as the `tflite-runtime` build for TensorFlow 2.15.0, you can download it from [this link](https://github.com/feranick/TFlite-builds/releases) and install it using `pip`. Detailed instructions are available in the [Running the Model](#running-the-model) section.
### How do I run inference with an exported YOLOv8 model on a Raspberry Pi using the Coral Edge TPU?
After exporting your YOLOv8 model to an Edge TPU-compatible format, you can run inference using the following code snippets:
!!! Example "Running the model"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load an official model or custom model
        model = YOLO("path/to/edgetpu_model.tflite")

        # Run Prediction
        model.predict("path/to/source.png")
        ```

    === "CLI"

        ```bash
        yolo predict model=path/to/edgetpu_model.tflite source=path/to/source.png  # Predict with an official model or custom model
        ```
Comprehensive details on full prediction mode features can be found on the [Predict Page](../modes/predict.md).

@@ -303,3 +303,39 @@ The following table summarizes how YOLOv8s models perform at different TensorRT
### Acknowledgements
This guide was initially created by our friends at Seeed Studio, Lakshantha and Elaine.
## FAQ
### How do I set up Ultralytics YOLOv8 on an NVIDIA Jetson device?
To set up Ultralytics YOLOv8 on an [NVIDIA Jetson](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/) device, you first need to install the [DeepStream SDK](https://developer.nvidia.com/deepstream-getting-started) compatible with your JetPack version. Follow the step-by-step guide in our [Quick Start Guide](nvidia-jetson.md) to configure your NVIDIA Jetson for YOLOv8 deployment.
### What is the benefit of using TensorRT with YOLOv8 on NVIDIA Jetson?
Using TensorRT with YOLOv8 optimizes the model for inference, significantly reducing latency and improving throughput on NVIDIA Jetson devices. TensorRT provides high-performance, low-latency deep learning inference through layer fusion, precision calibration, and kernel auto-tuning. This leads to faster and more efficient execution, particularly useful for real-time applications like video analytics and autonomous machines.
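As a minimal sketch (assuming an NVIDIA GPU with TensorRT available; model and image source are illustrative), exporting a model to a TensorRT engine and running inference with it uses the standard Ultralytics export API:
```python
from ultralytics import YOLO

model = YOLO("yolov8s.pt")
model.export(format="engine")  # creates 'yolov8s.engine'

trt_model = YOLO("yolov8s.engine")  # load the TensorRT-optimized model
results = trt_model("https://ultralytics.com/images/bus.jpg")
```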
### Can I run Ultralytics YOLOv8 with DeepStream SDK across different NVIDIA Jetson hardware?
Yes, the guide for deploying Ultralytics YOLOv8 with the DeepStream SDK and TensorRT is compatible across the entire NVIDIA Jetson lineup. This includes devices like the Jetson Orin NX 16GB with [JetPack 5.1.3](https://developer.nvidia.com/embedded/jetpack-sdk-513) and the Jetson Nano 4GB with [JetPack 4.6.4](https://developer.nvidia.com/jetpack-sdk-464). Refer to the section [DeepStream Configuration for YOLOv8](#deepstream-configuration-for-yolov8) for detailed steps.
### How can I convert a YOLOv8 model to ONNX for DeepStream?
To convert a YOLOv8 model to ONNX format for deployment with DeepStream, use the `utils/export_yoloV8.py` script from the [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo) repository.
Here's an example command:
```bash
python3 utils/export_yoloV8.py -w yolov8s.pt --opset 12 --simplify
```
For more details on model conversion, check out our [model export section](../modes/export.md).
### What are the performance benchmarks for YOLOv8 on NVIDIA Jetson Orin NX?
The performance of YOLOv8 models on NVIDIA Jetson Orin NX 16GB varies based on TensorRT precision levels. For example, YOLOv8s models achieve:
- **FP32 Precision**: 15.63 ms/im, 64 FPS
- **FP16 Precision**: 7.94 ms/im, 126 FPS
- **INT8 Precision**: 5.53 ms/im, 181 FPS
These benchmarks underscore the efficiency and capability of using TensorRT-optimized YOLOv8 models on NVIDIA Jetson hardware. For further details, see our [Benchmark Results](#benchmark-results) section.

@@ -4,7 +4,7 @@ description: Learn how to calculate distances between objects using Ultralytics
keywords: Ultralytics, YOLOv8, distance calculation, computer vision, object tracking, spatial positioning
---
- # Distance Calculation using Ultralytics YOLOv8 🚀
+ # Distance Calculation using Ultralytics YOLOv8
## What is Distance Calculation?
@@ -101,3 +101,38 @@ Measuring the gap between two objects is known as distance calculation within a
| `iou` | `float` | `0.5` | IOU Threshold |
| `classes` | `list` | `None` | filter results by class, i.e. classes=0, or classes=[0,2,3] |
| `verbose` | `bool` | `True` | Display the object tracking results |
## FAQ
### How do I calculate distances between objects using Ultralytics YOLOv8?
To calculate distances between objects using [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics), you need to identify the bounding box centroids of the detected objects. This process involves initializing the `DistanceCalculation` class from Ultralytics' `solutions` module and using the model's tracking outputs to calculate the distances. You can refer to the implementation in the [distance calculation example](#distance-calculation-using-ultralytics-yolov8).
### What are the advantages of using distance calculation with Ultralytics YOLOv8?
Using distance calculation with Ultralytics YOLOv8 offers several advantages:
- **Localization Precision:** Provides accurate spatial positioning for objects.
- **Size Estimation:** Helps estimate physical sizes, contributing to better contextual understanding.
- **Scene Understanding:** Enhances 3D scene comprehension, aiding improved decision-making in applications like autonomous driving and surveillance.
### Can I perform distance calculation in real-time video streams with Ultralytics YOLOv8?
Yes, you can perform distance calculation in real-time video streams with Ultralytics YOLOv8. The process involves capturing video frames using OpenCV, running YOLOv8 object detection, and using the `DistanceCalculation` class to calculate distances between objects in successive frames. For a detailed implementation, see the [video stream example](#distance-calculation-using-ultralytics-yolov8).
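As a condensed sketch of that loop (assuming the `solutions.DistanceCalculation` interface with a `start_process(im0, tracks)` method, as used in this guide's example):
```python
import cv2

from ultralytics import YOLO, solutions

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")

dist_obj = solutions.DistanceCalculation(names=model.names, view_img=True)

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break
    tracks = model.track(im0, persist=True, show=False)  # track objects across frames
    im0 = dist_obj.start_process(im0, tracks)  # measure and annotate distances

cap.release()
cv2.destroyAllWindows()
```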
### How do I delete points drawn during distance calculation using Ultralytics YOLOv8?
To delete points drawn during distance calculation with Ultralytics YOLOv8, you can use a right mouse click. This action will clear all the points you have drawn. For more details, refer to the note section under the [distance calculation example](#distance-calculation-using-ultralytics-yolov8).
### What are the key arguments for initializing the DistanceCalculation class in Ultralytics YOLOv8?
The key arguments for initializing the `DistanceCalculation` class in Ultralytics YOLOv8 include:
- `names`: Dictionary mapping class indices to class names.
- `pixels_per_meter`: Conversion factor from pixels to meters.
- `view_img`: Flag to indicate if the video stream should be displayed.
- `line_thickness`: Thickness of the lines drawn on the image.
- `line_color`: Color of the lines drawn on the image (BGR format).
- `centroid_color`: Color of the centroids (BGR format).
For an exhaustive list and default values, see the [arguments of DistanceCalculation](#arguments-distancecalculation).

@@ -224,3 +224,60 @@ yolo predict model=yolov8n.pt show=True
---
Congratulations! You're now set up to use Ultralytics with Docker and ready to take advantage of its powerful capabilities. For alternate installation methods, feel free to explore the [Ultralytics quickstart documentation](../quickstart.md).
## FAQ
### How do I set up Ultralytics with Docker?
To set up Ultralytics with Docker, first ensure that Docker is installed on your system. If you have an NVIDIA GPU, install the NVIDIA Docker runtime to enable GPU support. Then, pull the latest Ultralytics Docker image from Docker Hub using the following command:
```bash
sudo docker pull ultralytics/ultralytics:latest
```
For detailed steps, refer to our [Docker Quickstart Guide](../quickstart.md).
### What are the benefits of using Ultralytics Docker images for machine learning projects?
Using Ultralytics Docker images ensures a consistent environment across different machines, replicating the same software and dependencies. This is particularly useful for collaborating across teams, running models on various hardware, and maintaining reproducibility. For GPU-based training, Ultralytics provides optimized Docker images such as `Dockerfile` for general GPU usage and `Dockerfile-jetson` for NVIDIA Jetson devices. Explore [Ultralytics Docker Hub](https://hub.docker.com/r/ultralytics/ultralytics) for more details.
### How can I run Ultralytics YOLO in a Docker container with GPU support?
First, ensure that the NVIDIA Docker runtime is installed and configured. Then, use the following command to run Ultralytics YOLO with GPU support:
```bash
sudo docker run -it --ipc=host --gpus all ultralytics/ultralytics:latest
```
This command sets up a Docker container with GPU access. For additional details, see the [Docker Quickstart Guide](../quickstart.md).
### How do I visualize YOLO prediction results in a Docker container with a display server?
To visualize YOLO prediction results with a GUI in a Docker container, you need to allow Docker to access your display server. For systems running X11, the command is:
```bash
xhost +local:docker && docker run -e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v ~/.Xauthority:/root/.Xauthority \
-it --ipc=host ultralytics/ultralytics:latest
```
For systems running Wayland, use:
```bash
xhost +local:docker && docker run -e DISPLAY=$DISPLAY \
-v $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY:/tmp/$WAYLAND_DISPLAY \
--net=host -it --ipc=host ultralytics/ultralytics:latest
```
More information can be found in the [Run graphical user interface (GUI) applications in a Docker Container](#run-graphical-user-interface-gui-applications-in-a-docker-container) section.
### Can I mount local directories into the Ultralytics Docker container?
Yes, you can mount local directories into the Ultralytics Docker container using the `-v` flag:
```bash
sudo docker run -it --ipc=host --gpus all -v /path/on/host:/path/in/container ultralytics/ultralytics:latest
```
Replace `/path/on/host` with the directory on your local machine and `/path/in/container` with the desired path inside the container. This setup allows you to work with your local files within the container. For more information, refer to the relevant section on [mounting local directories](../usage/python.md).

@@ -330,3 +330,74 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
| `cv::COLORMAP_DEEPGREEN` | Deep Green color map |
These colormaps are commonly used for visualizing data with different color representations.
## FAQ
### How does Ultralytics YOLOv8 generate heatmaps and what are their benefits?
Ultralytics YOLOv8 generates heatmaps by transforming complex data into a color-coded matrix where different hues represent data intensities. Heatmaps make it easier to visualize patterns, correlations, and anomalies in the data. Warmer hues indicate higher values, while cooler tones represent lower values. The primary benefits include intuitive visualization of data distribution, efficient pattern detection, and enhanced spatial analysis for decision-making. For more details and configuration options, refer to the [Heatmap Configuration](#arguments-heatmap) section.
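As an illustrative sketch of the underlying idea (the accumulator values here are synthetic, not produced by YOLOv8), OpenCV can turn such an intensity matrix into a color-coded image:
```python
import cv2
import numpy as np

# Hypothetical accumulator: higher values where objects appeared more often
acc = np.zeros((240, 320), dtype=np.float32)
acc[100:140, 150:210] += 5  # a frequently visited region

norm = cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
colored = cv2.applyColorMap(norm, cv2.COLORMAP_JET)  # warm colors = high intensity
```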
### Can I use Ultralytics YOLOv8 to perform object tracking and generate a heatmap simultaneously?
Yes, Ultralytics YOLOv8 supports object tracking and heatmap generation concurrently. This can be achieved through its `Heatmap` solution integrated with object tracking models. To do so, you need to initialize the heatmap object and use YOLOv8's tracking capabilities. Here's a simple example:
```python
import cv2

from ultralytics import YOLO, solutions

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")

heatmap_obj = solutions.Heatmap(colormap=cv2.COLORMAP_PARULA, view_img=True, shape="circle", classes_names=model.names)

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break
    tracks = model.track(im0, persist=True, show=False)
    im0 = heatmap_obj.generate_heatmap(im0, tracks)
    cv2.imshow("Heatmap", im0)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```
For further guidance, check the [Tracking Mode](../modes/track.md) page.
### What makes Ultralytics YOLOv8 heatmaps different from other data visualization tools like those from OpenCV or Matplotlib?
Ultralytics YOLOv8 heatmaps are specifically designed for integration with its object detection and tracking models, providing an end-to-end solution for real-time data analysis. Unlike generic visualization tools like OpenCV or Matplotlib, YOLOv8 heatmaps are optimized for performance and automated processing, supporting features like persistent tracking, decay factor adjustment, and real-time video overlay. For more information on YOLOv8's unique features, visit the [Ultralytics YOLOv8 Introduction](https://www.ultralytics.com/blog/introducing-ultralytics-yolov8).
### How can I visualize only specific object classes in heatmaps using Ultralytics YOLOv8?
You can visualize specific object classes by specifying the desired classes in the `track()` method of the YOLO model. For instance, if you only want to visualize persons and cars (COCO class indices 0 and 2), set the `classes` parameter accordingly.
```python
import cv2

from ultralytics import YOLO, solutions

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")

heatmap_obj = solutions.Heatmap(colormap=cv2.COLORMAP_PARULA, view_img=True, shape="circle", classes_names=model.names)
classes_for_heatmap = [0, 2]  # classes to visualize

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break
    tracks = model.track(im0, persist=True, show=False, classes=classes_for_heatmap)
    im0 = heatmap_obj.generate_heatmap(im0, tracks)
    cv2.imshow("Heatmap", im0)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```
### Why should businesses choose Ultralytics YOLOv8 for heatmap generation in data analysis?
Ultralytics YOLOv8 offers seamless integration of advanced object detection and real-time heatmap generation, making it an ideal choice for businesses looking to visualize data more effectively. The key advantages include intuitive data distribution visualization, efficient pattern detection, and enhanced spatial analysis for better decision-making. Additionally, YOLOv8's cutting-edge features such as persistent tracking, customizable colormaps, and support for various export formats make it superior to other tools like TensorFlow and OpenCV for comprehensive data analysis. Learn more about business applications at [Ultralytics Plans](https://www.ultralytics.com/plans).

@@ -205,3 +205,57 @@ The hyperparameter tuning process in Ultralytics YOLO is simplified yet powerful
3. [Efficient Hyperparameter Tuning with Ray Tune and YOLOv8](../integrations/ray-tune.md)
For deeper insights, you can explore the `Tuner` class source code and accompanying documentation. Should you have any questions, feature requests, or need further assistance, feel free to reach out to us on [GitHub](https://github.com/ultralytics/ultralytics/issues/new/choose) or [Discord](https://ultralytics.com/discord).
## FAQ
### How do I optimize the learning rate for Ultralytics YOLO during hyperparameter tuning?
To optimize the learning rate for Ultralytics YOLO, start by setting an initial learning rate using the `lr0` parameter. Common values range from `0.001` to `0.01`. During the hyperparameter tuning process, this value will be mutated to find the optimal setting. You can utilize the `model.tune()` method to automate this process. For example:
!!! Example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Initialize the YOLO model
        model = YOLO("yolov8n.pt")

        # Tune hyperparameters on COCO8 for 30 epochs
        model.tune(data="coco8.yaml", epochs=30, iterations=300, optimizer="AdamW", plots=False, save=False, val=False)
        ```
For more details, check the [Ultralytics YOLO configuration page](../usage/cfg.md#augmentation-settings).
### What are the benefits of using genetic algorithms for hyperparameter tuning in YOLOv8?
Genetic algorithms in Ultralytics YOLOv8 provide a robust method for exploring the hyperparameter space, leading to highly optimized model performance. Key benefits include:
- **Efficient Search**: Genetic operators such as mutation can quickly explore a large set of hyperparameters.
- **Avoiding Local Minima**: By introducing randomness, they help in avoiding local minima, ensuring better global optimization.
- **Performance Metrics**: They adapt based on performance metrics such as AP50 and F1-score.
To see how genetic algorithms can optimize hyperparameters, check out the [hyperparameter evolution guide](../yolov5/tutorials/hyperparameter_evolution.md).
### How long does the hyperparameter tuning process take for Ultralytics YOLO?
The time required for hyperparameter tuning with Ultralytics YOLO largely depends on several factors such as the size of the dataset, the complexity of the model architecture, the number of iterations, and the computational resources available. For instance, tuning YOLOv8n on a dataset like COCO8 for 30 epochs might take several hours to days, depending on the hardware.
To effectively manage tuning time, define a clear tuning budget beforehand (see [Preparing for Hyperparameter Tuning](#preparing-for-hyperparameter-tuning)). This helps in balancing resource allocation and optimization goals.
### What metrics should I use to evaluate model performance during hyperparameter tuning in YOLO?
When evaluating model performance during hyperparameter tuning in YOLO, you can use several key metrics:
- **AP50**: The average precision at IoU threshold of 0.50.
- **F1-Score**: The harmonic mean of precision and recall.
- **Precision and Recall**: Individual metrics indicating the model's accuracy in identifying true positives versus false positives and false negatives.
These metrics help you understand different aspects of your model's performance. Refer to the [Ultralytics YOLO performance metrics](../guides/yolo-performance-metrics.md) guide for a comprehensive overview.
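For instance, with illustrative precision and recall values, the F1-score works out as the harmonic mean:
```python
precision, recall = 0.84, 0.79  # illustrative values, not benchmark results
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.814
```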
### Can I use Ultralytics HUB for hyperparameter tuning of YOLO models?
Yes, you can use Ultralytics HUB for hyperparameter tuning of YOLO models. The HUB offers a no-code platform to easily upload datasets, train models, and perform hyperparameter tuning efficiently. It provides real-time tracking and visualization of tuning progress and results.
Explore more about using Ultralytics HUB for hyperparameter tuning in the [Ultralytics HUB Cloud Training](../hub/cloud-training.md) documentation.

@@ -60,3 +60,44 @@ We welcome contributions from the community! If you've mastered a particular asp
To get started, please read our [Contributing Guide](../help/contributing.md) for guidelines on how to open up a Pull Request (PR) 🛠. We look forward to your contributions!
Let's work together to make the Ultralytics YOLO ecosystem more robust and versatile 🙏!
## FAQ
### How do I train a custom object detection model using Ultralytics YOLO?
Training a custom object detection model with Ultralytics YOLO is straightforward. Start by preparing your dataset in the correct format and installing the Ultralytics package. Use the following code to initiate training:
!!! Example

    === "Python"

        ```python
        from ultralytics import YOLO

        model = YOLO("yolov8s.pt")  # Load a pre-trained YOLO model
        model.train(data="path/to/dataset.yaml", epochs=50)  # Train on custom dataset
        ```

    === "CLI"

        ```bash
        yolo task=detect mode=train model=yolov8s.pt data=path/to/dataset.yaml epochs=50
        ```
For detailed dataset formatting and additional options, refer to our [Tips for Model Training](model-training-tips.md) guide.
### What performance metrics should I use to evaluate my YOLO model?
Evaluating your YOLO model performance is crucial to understanding its efficacy. Key metrics include Mean Average Precision (mAP), Intersection over Union (IoU), and F1 score. These metrics help assess the accuracy and precision of object detection tasks. You can learn more about these metrics and how to improve your model in our [YOLO Performance Metrics](yolo-performance-metrics.md) guide.
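As a minimal sketch, these metrics can also be read programmatically after validation (the dataset name here is illustrative):
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
metrics = model.val(data="coco8.yaml")  # run validation

print(metrics.box.map50)  # mAP at IoU 0.50
print(metrics.box.map)  # mAP averaged over IoU 0.50:0.95
```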
### Why should I use Ultralytics HUB for my computer vision projects?
Ultralytics HUB is a no-code platform that simplifies managing, training, and deploying YOLO models. It supports seamless integration, real-time tracking, and cloud training, making it ideal for both beginners and professionals. Discover more about its features and how it can streamline your workflow with our [Ultralytics HUB](https://docs.ultralytics.com/hub/) quickstart guide.
### What are the common issues faced during YOLO model training, and how can I resolve them?
Common issues during YOLO model training include data formatting errors, model architecture mismatches, and insufficient training data. To address these, ensure your dataset is correctly formatted, check for compatible model versions, and augment your training data. For a comprehensive list of solutions, refer to our [YOLO Common Issues](yolo-common-issues.md) guide.
### How can I deploy my YOLO model for real-time object detection on edge devices?
Deploying YOLO models on edge devices like NVIDIA Jetson and Raspberry Pi requires converting the model to a compatible format such as TensorRT or TFLite. Follow our step-by-step guides for [NVIDIA Jetson](nvidia-jetson.md) and [Raspberry Pi](raspberry-pi.md) deployments to get started with real-time object detection on edge hardware. These guides will walk you through installation, configuration, and performance optimization.

@@ -135,3 +135,114 @@ There are two types of instance segmentation tracking available in the Ultralyti
## Note
For any inquiries, feel free to post your questions in the [Ultralytics Issue Section](https://github.com/ultralytics/ultralytics/issues/new/choose) or the discussion section mentioned below.
## FAQ
### How do I perform instance segmentation using Ultralytics YOLOv8?
To perform instance segmentation using Ultralytics YOLOv8, initialize the YOLO model with a segmentation version of YOLOv8 and process video frames through it. Here's a simplified code example:
!!! Example

    === "Python"

        ```python
        import cv2

        from ultralytics import YOLO
        from ultralytics.utils.plotting import Annotator, colors

        model = YOLO("yolov8n-seg.pt")  # segmentation model
        cap = cv2.VideoCapture("path/to/video/file.mp4")
        w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
        out = cv2.VideoWriter("instance-segmentation.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))

        while True:
            ret, im0 = cap.read()
            if not ret:
                break

            results = model.predict(im0)
            annotator = Annotator(im0, line_width=2)

            if results[0].masks is not None:
                clss = results[0].boxes.cls.cpu().tolist()
                masks = results[0].masks.xy
                for mask, cls in zip(masks, clss):
                    annotator.seg_bbox(mask=mask, mask_color=colors(int(cls), True), det_label=model.model.names[int(cls)])

            out.write(im0)
            cv2.imshow("instance-segmentation", im0)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break

        out.release()
        cap.release()
        cv2.destroyAllWindows()
        ```
Learn more about instance segmentation in the [Ultralytics YOLOv8 guide](#what-is-instance-segmentation).
### What is the difference between instance segmentation and object tracking in Ultralytics YOLOv8?
Instance segmentation identifies and outlines individual objects within an image, giving each object a unique label and mask. Object tracking extends this by assigning consistent labels to objects across video frames, facilitating continuous tracking of the same objects over time. Learn more about the distinctions in the [Ultralytics YOLOv8 documentation](#samples).
### Why should I use Ultralytics YOLOv8 for instance segmentation and tracking over other models like Mask R-CNN or Faster R-CNN?
Ultralytics YOLOv8 offers real-time performance, superior accuracy, and ease of use compared to other models like Mask R-CNN or Faster R-CNN. YOLOv8 provides a seamless integration with Ultralytics HUB, allowing users to manage models, datasets, and training pipelines efficiently. Discover more about the benefits of YOLOv8 in the [Ultralytics blog](https://www.ultralytics.com/blog/introducing-ultralytics-yolov8).
### How can I implement object tracking using Ultralytics YOLOv8?
To implement object tracking, use the `model.track` method and ensure that each object's ID is consistently assigned across frames. Below is a simple example:
!!! Example

    === "Python"

        ```python
        from collections import defaultdict

        import cv2

        from ultralytics import YOLO
        from ultralytics.utils.plotting import Annotator, colors

        track_history = defaultdict(lambda: [])

        model = YOLO("yolov8n-seg.pt")  # segmentation model
        cap = cv2.VideoCapture("path/to/video/file.mp4")
        w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
        out = cv2.VideoWriter("instance-segmentation-object-tracking.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))

        while True:
            ret, im0 = cap.read()
            if not ret:
                break

            annotator = Annotator(im0, line_width=2)
            results = model.track(im0, persist=True)

            if results[0].boxes.id is not None and results[0].masks is not None:
                masks = results[0].masks.xy
                track_ids = results[0].boxes.id.int().cpu().tolist()
                for mask, track_id in zip(masks, track_ids):
                    annotator.seg_bbox(mask=mask, mask_color=colors(track_id, True), track_label=str(track_id))

            out.write(im0)
            cv2.imshow("instance-segmentation-object-tracking", im0)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break

        out.release()
        cap.release()
        cv2.destroyAllWindows()
        ```
Find more in the [Instance Segmentation and Tracking section](#samples).
### Are there any datasets provided by Ultralytics suitable for training YOLOv8 models for instance segmentation and tracking?
Yes, Ultralytics offers several datasets suitable for training YOLOv8 models, including segmentation and tracking datasets. Dataset examples, structures, and instructions for use can be found in the [Ultralytics Datasets documentation](https://docs.ultralytics.com/datasets/).

@@ -14,7 +14,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
## Recipe Walk Through
- 1. See the [Ultralytics Quickstart Installation section](../quickstart.md/#install-ultralytics) for a quick walkthrough on installing the required libraries.
+ 1. See the [Ultralytics Quickstart Installation section](../quickstart.md) for a quick walkthrough on installing the required libraries.
***
@@ -307,3 +307,93 @@ for r in res:
4. See [Segment Task](../tasks/segment.md#models) for more information.
5. Learn more about [Working with Results](../modes/predict.md#working-with-results)
6. Learn more about [Segmentation Mask Results](../modes/predict.md#masks)
## FAQ
### How do I isolate objects using Ultralytics YOLOv8 for segmentation tasks?
To isolate objects using Ultralytics YOLOv8, follow these steps:
1. **Load the model and run inference:**

    ```python
    from ultralytics import YOLO

    model = YOLO("yolov8n-seg.pt")
    results = model.predict(source="path/to/your/image.jpg")
    ```

2. **Generate a binary mask and draw contours:**

    ```python
    import cv2
    import numpy as np

    img = np.copy(results[0].orig_img)
    b_mask = np.zeros(img.shape[:2], np.uint8)
    contour = results[0].masks.xy[0].astype(np.int32).reshape(-1, 1, 2)
    cv2.drawContours(b_mask, [contour], -1, (255, 255, 255), cv2.FILLED)
    ```

3. **Isolate the object using the binary mask:**

    ```python
    mask3ch = cv2.cvtColor(b_mask, cv2.COLOR_GRAY2BGR)
    isolated = cv2.bitwise_and(mask3ch, img)
    ```
Refer to the guide on [Predict Mode](../modes/predict.md) and the [Segment Task](../tasks/segment.md) for more information.
### What options are available for saving the isolated objects after segmentation?
Ultralytics YOLOv8 offers two main options for saving isolated objects:
1. **With a Black Background:**

    ```python
    mask3ch = cv2.cvtColor(b_mask, cv2.COLOR_GRAY2BGR)
    isolated = cv2.bitwise_and(mask3ch, img)
    ```

2. **With a Transparent Background:**

    ```python
    isolated = np.dstack([img, b_mask])
    ```
For further details, visit the [Predict Mode](../modes/predict.md) section.
### How can I crop isolated objects to their bounding boxes using Ultralytics YOLOv8?
To crop isolated objects to their bounding boxes:
1. **Retrieve bounding box coordinates:**

    ```python
    x1, y1, x2, y2 = results[0].boxes.xyxy[0].cpu().numpy().astype(np.int32)
    ```

2. **Crop the isolated image:**

    ```python
    iso_crop = isolated[y1:y2, x1:x2]
    ```
Learn more about bounding box results in the [Predict Mode](../modes/predict.md#boxes) documentation.
### Why should I use Ultralytics YOLOv8 for object isolation in segmentation tasks?
Ultralytics YOLOv8 provides:
- **High-speed** real-time object detection and segmentation.
- **Accurate bounding box and mask generation** for precise object isolation.
- **Comprehensive documentation** and easy-to-use API for efficient development.
Explore the benefits of using YOLO in the [Segment Task documentation](../tasks/segment.md).
### Can I save isolated objects including the background using Ultralytics YOLOv8?
Yes, this is a built-in feature in Ultralytics YOLOv8. Use the `save_crop` argument in the `predict()` method. For example:
```python
results = model.predict(source="path/to/your/image.jpg", save_crop=True)
```
Read more about the `save_crop` argument in the [Predict Mode Inference Arguments](../modes/predict.md#inference-arguments) section.

@@ -280,3 +280,33 @@ Finally, we implemented the actual model training using each split in a loop, sa
This technique of K-Fold cross-validation is a robust way of making the most out of your available data, and it helps to ensure that your model performance is reliable and consistent across different data subsets. This results in a more generalizable and reliable model that is less likely to overfit to specific data patterns.
Remember that although we used YOLO in this guide, these steps are mostly transferable to other machine learning models. Understanding these steps allows you to apply cross-validation effectively in your own machine learning projects. Happy coding!
## FAQ
### What is K-Fold Cross Validation and why is it useful in object detection?
K-Fold Cross Validation is a technique where the dataset is divided into 'k' subsets (folds) to evaluate model performance more reliably. Each fold serves as both training and validation data. In the context of object detection, using K-Fold Cross Validation helps to ensure your Ultralytics YOLO model's performance is robust and generalizable across different data splits, enhancing its reliability. For detailed instructions on setting up K-Fold Cross Validation with Ultralytics YOLO, refer to [K-Fold Cross Validation with Ultralytics](#introduction).
### How do I implement K-Fold Cross Validation using Ultralytics YOLO?
To implement K-Fold Cross Validation with Ultralytics YOLO, you need to follow these steps:
1. Verify annotations are in the [YOLO detection format](../datasets/detect/index.md).
2. Use Python libraries like `sklearn`, `pandas`, and `pyyaml`.
3. Create feature vectors from your dataset.
4. Split your dataset using `KFold` from `sklearn.model_selection`.
5. Train the YOLO model on each split.
For a comprehensive guide, see the [K-Fold Dataset Split](#k-fold-dataset-split) section in our documentation.
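As a minimal sketch of the splitting step (step 4 above; file names and epoch count are hypothetical):
```python
from sklearn.model_selection import KFold

image_files = [f"im_{i}.jpg" for i in range(20)]  # hypothetical dataset entries

kf = KFold(n_splits=5, shuffle=True, random_state=20)
for fold, (train_idx, val_idx) in enumerate(kf.split(image_files)):
    train_set = [image_files[i] for i in train_idx]
    val_set = [image_files[i] for i in val_idx]
    # Write a per-fold dataset YAML from these lists, then train, e.g.:
    # YOLO("yolov8n.pt").train(data=f"fold_{fold}.yaml", epochs=100)
```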
### Why should I use Ultralytics YOLO for object detection?
Ultralytics YOLO offers state-of-the-art, real-time object detection with high accuracy and efficiency. It's versatile, supporting multiple computer vision tasks such as detection, segmentation, and classification. Additionally, it integrates seamlessly with tools like Ultralytics HUB for no-code model training and deployment. For more details, explore the benefits and features on our [Ultralytics YOLO page](https://www.ultralytics.com/yolo).
### How can I ensure my annotations are in the correct format for Ultralytics YOLO?
Your annotations should follow the YOLO detection format. Each annotation file must list the object class, alongside its bounding box coordinates in the image. The YOLO format ensures streamlined and standardized data processing for training object detection models. For more information on proper annotation formatting, visit the [YOLO detection format guide](../datasets/detect/index.md).
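For example, a single line of a YOLO-format label file stores `class x_center y_center width height`, with all coordinates normalized to the image dimensions (the values here are illustrative):
```
0 0.481719 0.634028 0.690625 0.713278
```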
### Can I use K-Fold Cross Validation with custom datasets other than Fruit Detection?
Yes, you can use K-Fold Cross Validation with any custom dataset as long as the annotations are in the YOLO detection format. Replace the dataset paths and class labels with those specific to your custom dataset. This flexibility ensures that any object detection project can benefit from robust model evaluation using K-Fold Cross Validation. For a practical example, review our [Generating Feature Vectors](#generating-feature-vectors-for-object-detection-dataset) section.

@@ -303,3 +303,68 @@ In this guide, we've explored the different deployment options for YOLOv8. We've
Don't forget that the YOLOv8 and Ultralytics community is a valuable source of help. Connect with other developers and experts to learn unique tips and solutions you might not find in regular documentation. Keep seeking knowledge, exploring new ideas, and sharing your experiences.
Happy deploying!
## FAQ
### What are the deployment options available for YOLOv8 on different hardware platforms?
Ultralytics YOLOv8 supports various deployment formats, each designed for specific environments and hardware platforms. Key formats include:
- **PyTorch** for research and prototyping, with excellent Python integration.
- **TorchScript** for production environments where Python is unavailable.
- **ONNX** for cross-platform compatibility and hardware acceleration.
- **OpenVINO** for optimized performance on Intel hardware.
- **TensorRT** for high-speed inference on NVIDIA GPUs.
Each format has unique advantages. For a detailed walkthrough, see our [export process documentation](../modes/export.md#usage-examples).
### How do I improve the inference speed of my YOLOv8 model on an Intel CPU?
To enhance inference speed on Intel CPUs, you can deploy your YOLOv8 model using Intel's OpenVINO toolkit. OpenVINO offers significant performance boosts by optimizing models to leverage Intel hardware efficiently.
1. Convert your YOLOv8 model to the OpenVINO format using the `model.export()` function.
2. Follow the detailed setup guide in the [Intel OpenVINO Export documentation](../integrations/openvino.md).
For more insights, check out our [blog post](https://www.ultralytics.com/blog/achieve-faster-inference-speeds-ultralytics-yolov8-openvino).
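A minimal sketch of step 1, using the standard export API (model and image source are illustrative):
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.export(format="openvino")  # creates 'yolov8n_openvino_model/'

ov_model = YOLO("yolov8n_openvino_model/")  # load the exported OpenVINO model
results = ov_model("https://ultralytics.com/images/bus.jpg")
```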
### Can I deploy YOLOv8 models on mobile devices?
Yes, YOLOv8 models can be deployed on mobile devices using TensorFlow Lite (TF Lite) for both Android and iOS platforms. TF Lite is designed for mobile and embedded devices, providing efficient on-device inference.
!!! Example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model and export it to TFLite format
        model = YOLO("yolov8n.pt")
        model.export(format="tflite")
        ```

    === "CLI"

        ```bash
        # CLI command for TFLite export
        yolo export model=yolov8n.pt format=tflite
        ```
For more details on deploying models to mobile, refer to our [TF Lite integration guide](../integrations/tflite.md).
### What factors should I consider when choosing a deployment format for my YOLOv8 model?
When choosing a deployment format for YOLOv8, consider the following factors:
- **Performance**: Some formats like TensorRT provide exceptional speeds on NVIDIA GPUs, while OpenVINO is optimized for Intel hardware.
- **Compatibility**: ONNX offers broad compatibility across different platforms.
- **Ease of Integration**: Formats like CoreML or TF Lite are tailored for specific ecosystems like iOS and Android, respectively.
- **Community Support**: Formats like PyTorch and TensorFlow have extensive community resources and support.
For a comparative analysis, refer to our [export formats documentation](../modes/export.md#export-formats).
### How can I deploy YOLOv8 models in a web application?
To deploy YOLOv8 models in a web application, you can use TensorFlow.js (TF.js), which allows for running machine learning models directly in the browser. This approach eliminates the need for backend infrastructure and provides real-time performance.
1. Export the YOLOv8 model to the TF.js format.
2. Integrate the exported model into your web application.
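As an illustration, the export step might look like the sketch below (the model weights are an example):
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Export the model to TF.js format; this creates a 'yolov8n_web_model' directory
model.export(format="tfjs")
```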
For step-by-step instructions, refer to our guide on [TensorFlow.js integration](../integrations/tfjs.md).

@@ -26,19 +26,19 @@ Choosing where to deploy your computer vision model depends on multiple factors.
Cloud deployment is great for applications that need to scale up quickly and handle large amounts of data. Platforms like AWS, [Google Cloud](../yolov5/environments/google_cloud_quickstart_tutorial.md), and Azure make it easy to manage your models from training to deployment. They offer services like [AWS SageMaker](../integrations/amazon-sagemaker.md), Google AI Platform, and [Azure Machine Learning](./azureml-quickstart.md) to help you throughout the process.
However, using the cloud can be expensive, especially with high data usage, and you might face latency issues if your users are far from the data centers. To manage costs and performance, it's important to optimize resource use and ensure compliance with data privacy rules.
#### Edge Deployment
Edge deployment works well for applications needing real-time responses and low latency, particularly in places with limited or no internet access. Deploying models on edge devices like smartphones or IoT gadgets ensures fast processing and keeps data local, which enhances privacy. Deploying on edge also saves bandwidth since less data is sent to the cloud.
However, edge devices often have limited processing power, so you'll need to optimize your models. Tools like [TensorFlow Lite](../integrations/tflite.md) and [NVIDIA Jetson](./nvidia-jetson.md) can help. Despite the benefits, maintaining and updating many devices can be challenging.
#### Local Deployment
Local Deployment is best when data privacy is critical or when there's unreliable or no internet access. Running models on local servers or desktops gives you full control and keeps your data secure. It can also reduce latency if the server is near the user.
However, scaling locally can be tough, and maintenance can be time-consuming. Using tools like [Docker](./docker-quickstart.md) for containerization and Kubernetes for management can help make local deployments more efficient. Regular updates and maintenance are necessary to keep everything running smoothly.
## Model Optimization Techniques
@@ -46,7 +46,7 @@ Optimizing your computer vision model helps it run efficiently, especially when
### Model Pruning
Pruning reduces the size of the model by removing weights that contribute little to the final output. It makes the model smaller and faster without significantly affecting accuracy. Pruning involves identifying and eliminating unnecessary parameters, resulting in a lighter model that requires less computational power. It is particularly useful for deploying models on devices with limited resources.
<p align="center">
<img width="100%" src="https://miro.medium.com/v2/resize:fit:1400/format:webp/1*rw2zAHw9Xlm7nSq1PCKbzQ.png" alt="Model Pruning Overview">
@@ -113,7 +113,7 @@ It's essential to control who can access your model and its data to prevent unau
### Model Obfuscation
Protecting your model from being reverse-engineered or misused can be done through model obfuscation. It involves encrypting model parameters, such as weights and biases in neural networks, to make it difficult for unauthorized individuals to understand or alter the model. You can also obfuscate the model's architecture by renaming layers and parameters or adding dummy layers, making it harder for attackers to reverse-engineer it. Serving the model in a secure environment, such as a secure enclave or a trusted execution environment (TEE), can provide an extra layer of protection during inference.
## Share Ideas With Your Peers
@@ -135,3 +135,25 @@ Using these resources will help you solve challenges and stay up-to-date with th
We walked through some best practices to follow when deploying computer vision models. By securing data, controlling access, and obfuscating model details, you can protect sensitive information while keeping your models running smoothly. We also discussed how to address common issues like reduced accuracy and slow inferences using strategies such as warm-up runs, optimizing engines, asynchronous processing, profiling pipelines, and choosing the right precision.
After deploying your model, the next step would be monitoring, maintaining, and documenting your application. Regular monitoring helps catch and fix issues quickly, maintenance keeps your models up-to-date and functional, and good documentation tracks all changes and updates. These steps will help you achieve the [goals of your computer vision project](./defining-project-goals.md).
## FAQ
### What are the best practices for deploying a machine learning model using Ultralytics YOLOv8?
Deploying a machine learning model, particularly with Ultralytics YOLOv8, involves several best practices to ensure efficiency and reliability. First, choose the deployment environment that suits your needs—cloud, edge, or local. Optimize your model through techniques like [pruning, quantization, and knowledge distillation](#model-optimization-techniques) for efficient deployment in resource-constrained environments. Lastly, ensure data consistency and preprocessing steps align with the training phase to maintain performance. You can also refer to [model deployment options](./model-deployment-options.md) for more detailed guidelines.
### How can I troubleshoot common deployment issues with Ultralytics YOLOv8 models?
Troubleshooting deployment issues can be broken down into a few key steps. If your model's accuracy drops after deployment, check for data consistency, validate preprocessing steps, and ensure the hardware/software environment matches what you used during training. For slow inference times, perform warm-up runs, optimize your inference engine, use asynchronous processing, and profile your inference pipeline. Refer to [troubleshooting deployment issues](#troubleshooting-deployment-issues) for a detailed guide on these best practices.
### How does Ultralytics YOLOv8 optimization enhance model performance on edge devices?
Optimizing Ultralytics YOLOv8 models for edge devices involves using techniques like pruning to reduce the model size, quantization to convert weights to lower precision, and knowledge distillation to train smaller models that mimic larger ones. These techniques ensure the model runs efficiently on devices with limited computational power. Tools like [TensorFlow Lite](../integrations/tflite.md) and [NVIDIA Jetson](./nvidia-jetson.md) are particularly useful for these optimizations. Learn more about these techniques in our section on [model optimization](#model-optimization-techniques).
### What are the security considerations for deploying machine learning models with Ultralytics YOLOv8?
Security is paramount when deploying machine learning models. Ensure secure data transmission using encryption protocols like TLS. Implement robust access controls, including strong authentication and role-based access control (RBAC). Model obfuscation techniques, such as encrypting model parameters and serving models in a secure environment like a trusted execution environment (TEE), offer additional protection. For detailed practices, refer to [security considerations](#security-considerations-in-model-deployment).
### How do I choose the right deployment environment for my Ultralytics YOLOv8 model?
Selecting the optimal deployment environment for your Ultralytics YOLOv8 model depends on your application's specific needs. Cloud deployment offers scalability and ease of access, making it ideal for applications with high data volumes. Edge deployment is best for low-latency applications requiring real-time responses, using tools like [TensorFlow Lite](../integrations/tflite.md). Local deployment suits scenarios needing stringent data privacy and control. For a comprehensive overview of each environment, check out our section on [choosing a deployment environment](#choosing-a-deployment-environment).

@@ -39,7 +39,7 @@ Let's focus on two specific mAP metrics:
- *mAP@.5:* Measures the average precision at a single IoU (Intersection over Union) threshold of 0.5. This metric checks if the model can correctly find objects with a looser accuracy requirement. It focuses on whether the object is roughly in the right place, not needing perfect placement. It helps see if the model is generally good at spotting objects.
- *mAP@.5:.95:* Averages the mAP values calculated at multiple IoU thresholds, from 0.5 to 0.95 in 0.05 increments. This metric is more detailed and strict. It gives a fuller picture of how accurately the model can find objects at different levels of strictness and is especially useful for applications that need precise object detection.
Other mAP metrics include mAP@0.75, which uses a stricter IoU threshold of 0.75, and mAP@small, medium, and large, which evaluate precision across objects of different sizes.
<p align="center">
<img width="100%" src="https://a.storyblok.com/f/139616/1200x800/913f78e511/ways-to-improve-mean-average-precision.webp" alt="Mean Average Precision Overview">
@@ -103,7 +103,7 @@ If you want to get a deeper understanding of your YOLOv8 model's performance, yo
The results object also includes speed metrics like preprocess time, inference time, loss, and postprocess time. By analyzing these metrics, you can fine-tune and optimize your YOLOv8 model for better performance, making it more effective for your specific use case.
## How Does Fine-Tuning Work?
Fine-tuning involves taking a pre-trained model and adjusting its parameters to improve performance on a specific task or dataset. The process, also known as model retraining, allows the model to better understand and predict outcomes for the specific data it will encounter in real-world applications. You can retrain your model based on your model evaluation to achieve optimal results.
@@ -137,3 +137,52 @@ Sharing your ideas and questions with other computer vision enthusiasts can insp
## Final Thoughts
Evaluating and fine-tuning your computer vision model are important steps for successful model deployment. These steps help make sure that your model is accurate, efficient, and suited to your overall application. The key to training the best model possible is continuous experimentation and learning. Don't hesitate to tweak parameters, try new techniques, and explore different datasets. Keep experimenting and pushing the boundaries of what's possible!
## FAQ
### What are the key metrics for evaluating YOLOv8 model performance?
To evaluate YOLOv8 model performance, important metrics include Confidence Score, Intersection over Union (IoU), and Mean Average Precision (mAP). Confidence Score measures the model's certainty for each detected object class. IoU evaluates how well the predicted bounding box overlaps with the ground truth. Mean Average Precision (mAP) aggregates precision scores across classes, with mAP@.5 and mAP@.5:.95 being two common types for varying IoU thresholds. Learn more about these metrics in our [YOLOv8 performance metrics guide](./yolo-performance-metrics.md).
### How can I fine-tune a pre-trained YOLOv8 model for my specific dataset?
Fine-tuning a pre-trained YOLOv8 model involves adjusting its parameters to improve performance on a specific task or dataset. Start by evaluating your model using metrics, then adjust the training settings: for example, setting the `warmup_epochs` parameter to 0 skips the gradual learning-rate warmup so training begins at the full initial learning rate right away. Use parameters like `rect=True` to handle varied image sizes effectively. For more detailed guidance, refer to our section on [fine-tuning YOLOv8 models](#how-does-fine-tuning-work).
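A rough sketch of such a run (the dataset path and hyperparameter values are placeholders, not tuned recommendations):
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # start from pretrained weights

# Retrain on your own data; warmup_epochs=0 skips the learning-rate warmup
model.train(data="path/to/dataset.yaml", epochs=50, warmup_epochs=0, rect=True)
```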
### How can I handle variable image sizes when evaluating my YOLOv8 model?
To handle variable image sizes during evaluation, use the `rect=True` parameter in YOLOv8, which adjusts the network's stride for each batch based on image sizes. The `imgsz` parameter sets the maximum dimension for image resizing, defaulting to 640. Adjust `imgsz` to suit your dataset and GPU memory. For more details, visit our [section on handling variable image sizes](#handling-variable-image-sizes).
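For example, a validation run combining these parameters might look like this (the dataset and image size are placeholders):
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Validate with rectangular batching and a maximum image dimension of 640
metrics = model.val(data="coco8.yaml", imgsz=640, rect=True)
```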
### What practical steps can I take to improve mean average precision for my YOLOv8 model?
Improving mean average precision (mAP) for a YOLOv8 model involves several steps:
1. **Tuning Hyperparameters**: Experiment with different learning rates, batch sizes, and image augmentations.
2. **Data Augmentation**: Use techniques like Mosaic and MixUp to create diverse training samples.
3. **Image Tiling**: Split larger images into smaller tiles to improve detection accuracy for small objects.
Refer to our detailed guide on [model fine-tuning](#tips-for-fine-tuning-your-model) for specific strategies.
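As a sketch, the first two points might translate into training arguments like these (the values are illustrative, not tuned recommendations):
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Experiment with learning rate, batch size, and Mosaic/MixUp augmentation strength
model.train(data="coco8.yaml", epochs=100, lr0=0.01, batch=16, mosaic=1.0, mixup=0.1)
```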
### How do I access YOLOv8 model evaluation metrics in Python?
You can access YOLOv8 model evaluation metrics using Python with the following steps:
!!! Example "Usage"
=== "Python"
```python
from ultralytics import YOLO
# Load the model
model = YOLO("yolov8n.pt")
# Run the evaluation
results = model.val(data="coco8.yaml")
# Print specific metrics
print("Class indices with average precision:", results.ap_class_index)
print("Average precision for all classes:", results.box.all_ap)
print("Mean average precision at IoU=0.50:", results.box.map50)
print("Mean recall:", results.box.mr)
```
Analyzing these metrics helps fine-tune and optimize your YOLOv8 model. For a deeper dive, check out our guide on [YOLOv8 metrics](../modes/val.md).

@@ -10,7 +10,7 @@ keywords: Overfitting and Underfitting in Machine Learning, Model Testing, Data
After [training](./model-training-tips.md) and [evaluating](./model-evaluation-insights.md) your model, it's time to test it. Model testing involves assessing how well it performs in real-world scenarios. Testing considers factors like accuracy, reliability, fairness, and how easy it is to understand the model's decisions. The goal is to make sure the model performs as intended, delivers the expected results, and fits into the [overall objective of your application](./defining-project-goals.md) or project.
Model testing is quite similar to model evaluation, but they are two distinct [steps in a computer vision project](./steps-of-a-cv-project.md). Model evaluation involves metrics and plots to assess the model's accuracy. On the other hand, model testing checks if the model's learned behavior is the same as expectations. In this guide, we'll explore strategies for testing your computer vision models.
## Model Testing Vs. Model Evaluation
@@ -140,3 +140,61 @@ These resources will help you navigate challenges and remain updated on the late
## In Summary
Building trustworthy computer vision models relies on rigorous model testing. By testing the model with previously unseen data, we can analyze it and spot weaknesses like overfitting and data leakage. Addressing these issues before deployment helps the model perform well in real-world applications. It's important to remember that model testing is just as crucial as model evaluation in guaranteeing the model's long-term success and effectiveness.
## FAQ
### What are the key differences between model evaluation and model testing in computer vision?
Model evaluation and model testing are distinct steps in a computer vision project. Model evaluation involves using a labeled dataset to compute metrics such as accuracy, precision, recall, and F1 score, providing insights into the model's performance with a controlled dataset. Model testing, on the other hand, assesses the model's performance in real-world scenarios by applying it to new, unseen data, ensuring the model's learned behavior aligns with expectations outside the evaluation environment. For a detailed guide, refer to the [steps in a computer vision project](./steps-of-a-cv-project.md).
### How can I test my Ultralytics YOLOv8 model on multiple images?
To test your Ultralytics YOLOv8 model on multiple images, you can use the [prediction mode](../modes/predict.md). This mode allows you to run the model on new, unseen data to generate predictions without providing detailed metrics. This is ideal for real-world performance testing on larger image sets stored in a folder. For evaluating performance metrics, use the [validation mode](../modes/val.md) instead.
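A minimal sketch of such a run (the folder path is a placeholder):
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Predict on every image in a folder and save the annotated results
results = model.predict(source="path/to/test_images", save=True)
```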
### What should I do if my computer vision model shows signs of overfitting or underfitting?
To address **overfitting**:
- Use regularization techniques like dropout.
- Increase the size of the training dataset.
- Simplify the model architecture.
To address **underfitting**:
- Use a more complex model.
- Provide more relevant features.
- Increase training iterations or epochs.
Review misclassified images, perform thorough error analysis, and regularly track performance metrics to maintain a balance. For more information on these concepts, explore our section on [Overfitting and Underfitting](#overfitting-and-underfitting-in-machine-learning).
### How can I detect and avoid data leakage in computer vision?
To detect data leakage:
- Verify that the testing performance is not unusually high.
- Check feature importance for unexpected insights.
- Intuitively review model decisions.
- Ensure correct data division before processing.
To avoid data leakage:
- Use diverse datasets with various environments.
- Carefully review data for hidden biases.
- Ensure no overlapping information between training and testing sets.
For detailed strategies on preventing data leakage, refer to our section on [Data Leakage in Computer Vision](#data-leakage-in-computer-vision-and-how-to-avoid-it).
### What steps should I take after testing my computer vision model?
Post-testing, if the model performance meets the project goals, proceed with deployment. If the results are unsatisfactory, consider:
- Error analysis.
- Gathering more diverse and high-quality data.
- Hyperparameter tuning.
- Retraining the model.
Gain insights from the [Model Testing Vs. Model Evaluation](#model-testing-vs-model-evaluation) section to refine and enhance model effectiveness in real-world applications.
### How do I run YOLOv8 predictions without custom training?
You can run predictions using the pre-trained YOLOv8 model on your dataset to see if it suits your application needs. Utilize the [prediction mode](../modes/predict.md) to get a quick sense of performance results without diving into custom training.
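For instance, a quick check with the pretrained COCO model could look like this:
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained model, no custom training

results = model("https://ultralytics.com/images/bus.jpg")
results[0].show()  # display the annotated prediction
```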

@@ -35,7 +35,7 @@ There are a few different aspects to think about when you are planning on using
When training models on large datasets, efficiently utilizing your GPU is key. Batch size is an important factor. It is the number of data samples that a machine learning model processes in a single training iteration.
Using the maximum batch size supported by your GPU, you can fully take advantage of its capabilities and reduce the time model training takes. However, you want to avoid running out of GPU memory. If you encounter memory errors, reduce the batch size incrementally until the model trains smoothly.
With respect to YOLOv8, you can set the `batch` parameter in the [training configuration](../modes/train.md) to match your GPU's capacity. Also, setting `batch=-1` in your training script will automatically determine the batch size that can be efficiently processed based on your device's capabilities. By fine-tuning the batch size, you can make the most of your GPU resources and improve the overall training process.
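A brief sketch of automatic batch sizing (the dataset and epoch count are placeholders):
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# batch=-1 automatically selects the largest batch size that fits in memory
model.train(data="coco8.yaml", epochs=100, batch=-1)
```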
### Subset Training
@@ -73,7 +73,7 @@ Mixed precision training is straightforward when working with YOLOv8. You can us
### Pre-trained Weights
Using pretrained weights is a smart way to speed up your model's training process. Pretrained weights come from models already trained on large datasets, giving your model a head start. Transfer learning adapts pretrained models to new, related tasks. Fine-tuning a pretrained model involves starting with these weights and then continuing training on your specific dataset. This approach results in faster training times and often better performance because the model starts with a solid understanding of basic features.
The `pretrained` parameter makes transfer learning easy with YOLOv8. Setting `pretrained=True` will use default pretrained weights, or you can specify a path to a custom pretrained model. Using pretrained weights and transfer learning effectively boosts your model's capabilities and reduces training costs.
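As a short sketch (the custom weights path is hypothetical):
```python
from ultralytics import YOLO

model = YOLO("yolov8n.yaml")  # build a model from a configuration file

# Use default pretrained weights, or point to custom ones, e.g. pretrained="path/to/custom.pt"
model.train(data="coco8.yaml", epochs=50, pretrained=True)
```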
@@ -96,7 +96,7 @@ However, the ideal number of epochs can vary based on your dataset's size and pr
Early stopping is a valuable technique for optimizing model training. By monitoring validation performance, you can halt training once the model stops improving. You can save computational resources and prevent overfitting.
The process involves setting a patience parameter that determines how many epochs to wait for an improvement in validation metrics before stopping training. If the model's performance does not improve within these epochs, training is stopped to avoid wasting time and resources.
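In practice this might look like the following sketch (the values are illustrative):
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Stop early if validation metrics show no improvement for 50 consecutive epochs
model.train(data="coco8.yaml", epochs=300, patience=50)
```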
<p align="center">
<img width="100%" src="https://miro.medium.com/v2/resize:fit:1400/format:webp/1*06sTlOC3AYeZAjzUDwbaMw@2x.jpeg" alt="Early Stopping Overview">

@@ -367,3 +367,31 @@ When using NVIDIA Jetson, there are a couple of best practices to follow in orde
## Next Steps
Congratulations on successfully setting up YOLOv8 on your NVIDIA Jetson! For further learning and support, explore more guides at [Ultralytics YOLOv8 Docs](../index.md)!
## FAQ
### How do I deploy Ultralytics YOLOv8 on NVIDIA Jetson devices?
Deploying Ultralytics YOLOv8 on NVIDIA Jetson devices is a straightforward process. First, flash your Jetson device with the NVIDIA JetPack SDK. Then, either use a pre-built Docker image for quick setup or manually install the required packages. Detailed steps for each approach can be found in sections [Start with Docker](#start-with-docker) and [Start without Docker](#start-without-docker).
### What performance benchmarks can I expect from YOLOv8 models on NVIDIA Jetson devices?
YOLOv8 models have been benchmarked on various NVIDIA Jetson devices showing significant performance improvements. For example, the TensorRT format delivers the best inference performance. The table in the [Detailed Comparison Table](#detailed-comparison-table) section provides a comprehensive view of performance metrics like mAP50-95 and inference time across different model formats.
### Why should I use TensorRT for deploying YOLOv8 on NVIDIA Jetson?
TensorRT is highly recommended for deploying YOLOv8 models on NVIDIA Jetson due to its optimal performance. It accelerates inference by leveraging the Jetson's GPU capabilities, ensuring maximum efficiency and speed. Learn more about how to convert to TensorRT and run inference in the [Use TensorRT on NVIDIA Jetson](#use-tensorrt-on-nvidia-jetson) section.
### How can I install PyTorch and Torchvision on NVIDIA Jetson?
To install PyTorch and Torchvision on NVIDIA Jetson, first uninstall any existing versions that may have been installed via pip. Then, manually install the compatible PyTorch and Torchvision versions for the Jetson's ARM64 architecture. Detailed instructions for this process are provided in the [Install PyTorch and Torchvision](#install-pytorch-and-torchvision) section.
### What are the best practices for maximizing performance on NVIDIA Jetson when using YOLOv8?
To maximize performance on NVIDIA Jetson with YOLOv8, follow these best practices:
1. Enable MAX Power Mode to utilize all CPU and GPU cores.
2. Enable Jetson Clocks to run all cores at their maximum frequency.
3. Install the Jetson Stats application for monitoring system metrics.
For commands and additional details, refer to the [Best Practices when using NVIDIA Jetson](#best-practices-when-using-nvidia-jetson) section.
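For reference, the commands typically used for these steps are sketched below; exact power-mode IDs and package names can vary between Jetson models and JetPack versions:
```bash
# Enable MAX power mode (mode 0 is MAXN on many Jetson devices)
sudo nvpmodel -m 0

# Lock all cores to their maximum clock frequency
sudo jetson_clocks

# Install and launch the Jetson Stats monitoring application
sudo pip3 install jetson-stats
jtop
```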

@@ -99,3 +99,57 @@ Object blurring with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
| `classes` | `list[int]` | `None` | filter results by class, i.e. classes=0, or classes=[0,2,3] |
| `retina_masks` | `bool` | `False` | use high-resolution segmentation masks |
| `embed` | `list[int]` | `None` | return feature vectors/embeddings from given layers |
## FAQ
### What is object blurring with Ultralytics YOLOv8?
Object blurring with [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics/) involves automatically detecting and applying a blurring effect to specific objects in images or videos. This technique enhances privacy by concealing sensitive information while retaining relevant visual data. YOLOv8's real-time processing capabilities make it suitable for applications requiring immediate privacy protection and selective focus adjustments.
### How can I implement real-time object blurring using YOLOv8?
To implement real-time object blurring with YOLOv8, follow the provided Python example. This involves using YOLOv8 for object detection and OpenCV for applying the blur effect. Here's a simplified version:
```python
import cv2

from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break

    # Detect objects, then blur each bounding-box region in place
    results = model.predict(im0, show=False)
    for box in results[0].boxes.xyxy.cpu().tolist():
        obj = im0[int(box[1]) : int(box[3]), int(box[0]) : int(box[2])]
        im0[int(box[1]) : int(box[3]), int(box[0]) : int(box[2])] = cv2.blur(obj, (50, 50))

    cv2.imshow("YOLOv8 Blurring", im0)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```
### What are the benefits of using Ultralytics YOLOv8 for object blurring?
Ultralytics YOLOv8 offers several advantages for object blurring:
- **Privacy Protection**: Effectively obscure sensitive or identifiable information.
- **Selective Focus**: Target specific objects for blurring, maintaining essential visual content.
- **Real-time Processing**: Execute object blurring efficiently in dynamic environments, suitable for instant privacy enhancements.
For more detailed applications, check the [advantages of object blurring section](#advantages-of-object-blurring).
### Can I use Ultralytics YOLOv8 to blur faces in a video for privacy reasons?
Yes, Ultralytics YOLOv8 can be configured to detect and blur faces in videos to protect privacy. By training or using a pre-trained model to specifically recognize faces, the detection results can be processed with OpenCV to apply a blur effect. Refer to our guide on [object detection with YOLOv8](https://docs.ultralytics.com/models/yolov8) and modify the code to target face detection.
### How does YOLOv8 compare to other object detection models like Faster R-CNN for object blurring?
Ultralytics YOLOv8 typically outperforms models like Faster R-CNN in terms of speed, making it more suitable for real-time applications. While both models offer accurate detection, YOLOv8's architecture is optimized for rapid inference, which is critical for tasks like real-time object blurring. Learn more about the technical differences and performance metrics in our [YOLOv8 documentation](https://docs.ultralytics.com/models/yolov8).

@@ -4,7 +4,7 @@ description: Learn to accurately identify and count objects in real-time using U
keywords: object counting, YOLOv8, Ultralytics, real-time object detection, AI, deep learning, object tracking, crowd analysis, surveillance, resource optimization
---
# Object Counting using Ultralytics YOLOv8
## What is Object Counting?
@@ -253,3 +253,125 @@ Here's a table with the `ObjectCounter` arguments:
| `iou` | `float` | `0.5` | IOU Threshold |
| `classes` | `list` | `None` | filter results by class, i.e. classes=0, or classes=[0,2,3] |
| `verbose` | `bool` | `True` | Display the object tracking results |
## FAQ
### How do I count objects in a video using Ultralytics YOLOv8?
To count objects in a video using Ultralytics YOLOv8, you can follow these steps:
1. Import the necessary libraries (`cv2`, `ultralytics`).
2. Load a pretrained YOLOv8 model.
3. Define the counting region (e.g., a polygon, line, etc.).
4. Set up the video capture and initialize the object counter.
5. Process each frame to track objects and count them within the defined region.
Here's a simple example for counting in a region:
```python
import cv2

from ultralytics import YOLO, solutions


def count_objects_in_region(video_path, output_video_path, model_path):
    """Count objects in a specific region within a video."""
    model = YOLO(model_path)
    cap = cv2.VideoCapture(video_path)
    assert cap.isOpened(), "Error reading video file"
    w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

    region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360)]
    video_writer = cv2.VideoWriter(output_video_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    counter = solutions.ObjectCounter(
        view_img=True, reg_pts=region_points, classes_names=model.names, draw_tracks=True, line_thickness=2
    )

    while cap.isOpened():
        success, im0 = cap.read()
        if not success:
            print("Video frame is empty or video processing has been successfully completed.")
            break
        tracks = model.track(im0, persist=True, show=False)
        im0 = counter.start_counting(im0, tracks)
        video_writer.write(im0)

    cap.release()
    video_writer.release()
    cv2.destroyAllWindows()


count_objects_in_region("path/to/video.mp4", "output_video.avi", "yolov8n.pt")
```
Explore more configurations and options in the [Object Counting](#object-counting-using-ultralytics-yolov8) section.
### What are the advantages of using Ultralytics YOLOv8 for object counting?
Using Ultralytics YOLOv8 for object counting offers several advantages:
1. **Resource Optimization:** It facilitates efficient resource management by providing accurate counts, helping optimize resource allocation in industries like inventory management.
2. **Enhanced Security:** It enhances security and surveillance by accurately tracking and counting entities, aiding in proactive threat detection.
3. **Informed Decision-Making:** It offers valuable insights for decision-making, optimizing processes in domains like retail, traffic management, and more.
For real-world applications and code examples, visit the [Advantages of Object Counting](#advantages-of-object-counting) section.
### How can I count specific classes of objects using Ultralytics YOLOv8?
To count specific classes of objects using Ultralytics YOLOv8, you need to specify the classes you are interested in during the tracking phase. Below is a Python example:
```python
import cv2

from ultralytics import YOLO, solutions


def count_specific_classes(video_path, output_video_path, model_path, classes_to_count):
    """Count specific classes of objects in a video."""
    model = YOLO(model_path)
    cap = cv2.VideoCapture(video_path)
    assert cap.isOpened(), "Error reading video file"
    w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

    line_points = [(20, 400), (1080, 400)]
    video_writer = cv2.VideoWriter(output_video_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    counter = solutions.ObjectCounter(
        view_img=True, reg_pts=line_points, classes_names=model.names, draw_tracks=True, line_thickness=2
    )

    while cap.isOpened():
        success, im0 = cap.read()
        if not success:
            print("Video frame is empty or video processing has been successfully completed.")
            break
        tracks = model.track(im0, persist=True, show=False, classes=classes_to_count)
        im0 = counter.start_counting(im0, tracks)
        video_writer.write(im0)

    cap.release()
    video_writer.release()
    cv2.destroyAllWindows()


count_specific_classes("path/to/video.mp4", "output_specific_classes.avi", "yolov8n.pt", [0, 2])
```
In this example, `classes_to_count=[0, 2]`, which means it counts objects of class `0` and `2` (e.g., person and car).
### Why should I use YOLOv8 over other object detection models for real-time applications?
Ultralytics YOLOv8 provides several advantages over other object detection models like Faster R-CNN, SSD, and previous YOLO versions:
1. **Speed and Efficiency:** YOLOv8 offers real-time processing capabilities, making it ideal for applications requiring high-speed inference, such as surveillance and autonomous driving.
2. **Accuracy:** It provides state-of-the-art accuracy for object detection and tracking tasks, reducing the number of false positives and improving overall system reliability.
3. **Ease of Integration:** YOLOv8 offers seamless integration with various platforms and devices, including mobile and edge devices, which is crucial for modern AI applications.
4. **Flexibility:** Supports various tasks like object detection, segmentation, and tracking with configurable models to meet specific use-case requirements.
Check out Ultralytics [YOLOv8 Documentation](https://docs.ultralytics.com/models/yolov8) for a deeper dive into its features and performance comparisons.
### Can I use YOLOv8 for advanced applications like crowd analysis and traffic management?
Yes, Ultralytics YOLOv8 is perfectly suited for advanced applications like crowd analysis and traffic management due to its real-time detection capabilities, scalability, and integration flexibility. Its advanced features allow for high-accuracy object tracking, counting, and classification in dynamic environments. Example use cases include:
- **Crowd Analysis:** Monitor and manage large gatherings, ensuring safety and optimizing crowd flow.
- **Traffic Management:** Track and count vehicles, analyze traffic patterns, and manage congestion in real-time.
For more information and implementation details, refer to the guide on [Real World Applications](#real-world-applications) of object counting with YOLOv8.

@@ -4,7 +4,7 @@ description: Learn how to crop and extract objects using Ultralytics YOLOv8 for
keywords: Ultralytics, YOLOv8, object cropping, object detection, image processing, video analysis, AI, machine learning
---
# Object Cropping using Ultralytics YOLOv8
## What is Object Cropping?
@@ -111,3 +111,25 @@ Object cropping with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
| `classes` | `list[int]` | `None` | Filters predictions to a set of class IDs. Only detections belonging to the specified classes will be returned. Useful for focusing on relevant objects in multi-class detection tasks. |
| `retina_masks` | `bool` | `False` | Uses high-resolution segmentation masks if available in the model. This can enhance mask quality for segmentation tasks, providing finer detail. |
| `embed` | `list[int]` | `None` | Specifies the layers from which to extract feature vectors or embeddings. Useful for downstream tasks like clustering or similarity search. |
## FAQ
### What is object cropping in Ultralytics YOLOv8 and how does it work?
Object cropping using [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) involves isolating and extracting specific objects from an image or video based on YOLOv8's detection capabilities. This process allows for focused analysis, reduced data volume, and enhanced precision by leveraging YOLOv8 to identify objects with high accuracy and crop them accordingly. For an in-depth tutorial, refer to the [object cropping example](#object-cropping-using-ultralytics-yolov8).
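As a minimal sketch of the underlying idea, detected boxes can be cropped with plain array slicing (file paths are placeholders):
```python
import cv2

from ultralytics import YOLO

model = YOLO("yolov8n.pt")
im0 = cv2.imread("path/to/image.jpg")

# Crop and save each detected object using its bounding box
results = model.predict(im0)
for i, box in enumerate(results[0].boxes.xyxy.cpu().tolist()):
    x1, y1, x2, y2 = map(int, box)
    cv2.imwrite(f"crop_{i}.jpg", im0[y1:y2, x1:x2])
```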
### Why should I use Ultralytics YOLOv8 for object cropping over other solutions?
Ultralytics YOLOv8 stands out due to its precision, speed, and ease of use. It allows detailed and accurate object detection and cropping, essential for [focused analysis](#advantages-of-object-cropping) and applications needing high data integrity. Moreover, YOLOv8 integrates seamlessly with tools like OpenVINO and TensorRT for deployments requiring real-time capabilities and optimization on diverse hardware. Explore the benefits in the [guide on model export](../modes/export.md).
### How can I reduce the data volume of my dataset using object cropping?
By using Ultralytics YOLOv8 to crop only relevant objects from your images or videos, you can significantly reduce the data size, making it more efficient for storage and processing. This process involves training the model to detect specific objects and then using the results to crop and save these portions only. For more information on exploiting Ultralytics YOLOv8's capabilities, visit our [quickstart guide](../quickstart.md).
### Can I use Ultralytics YOLOv8 for real-time video analysis and object cropping?
Yes, Ultralytics YOLOv8 can process real-time video feeds to detect and crop objects dynamically. The model's high-speed inference capabilities make it ideal for real-time applications such as surveillance, sports analysis, and automated inspection systems. Check out the [tracking and prediction modes](../modes/predict.md) to understand how to implement real-time processing.
### What are the hardware requirements for efficiently running YOLOv8 for object cropping?
Ultralytics YOLOv8 is optimized for both CPU and GPU environments, but to achieve optimal performance, especially for real-time or high-volume inference, a dedicated GPU (e.g., NVIDIA Tesla, RTX series) is recommended. For deployment on lightweight devices, consider using CoreML for iOS or TFLite for Android. More details on supported devices and formats can be found in our [model deployment options](../guides/model-deployment-options.md).

@@ -66,3 +66,63 @@ For more detailed technical information and the latest updates, refer to the [Op
---
Ensuring your models achieve optimal performance is not just about tweaking configurations; it's about understanding your application's needs and making informed decisions. Whether you're optimizing for real-time responses or maximizing throughput for large-scale processing, the combination of Ultralytics YOLO models and OpenVINO offers a powerful toolkit for developers to deploy high-performance AI solutions.
## FAQ
### How do I optimize Ultralytics YOLO models for low latency using OpenVINO?
Optimizing Ultralytics YOLO models for low latency involves several key strategies:
1. **Single Inference per Device:** Limit inferences to one at a time per device to minimize delays.
2. **Leveraging Sub-Devices:** Utilize devices like multi-socket CPUs or multi-tile GPUs which can handle multiple requests with minimal latency increase.
3. **OpenVINO Performance Hints:** Use OpenVINO's `ov::hint::PerformanceMode::LATENCY` during model compilation for simplified, device-agnostic tuning.
For more practical tips on optimizing latency, check out the [Latency Optimization section](#optimizing-for-latency) of our guide.
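A configuration sketch along the same lines as the throughput example in the next answer (the model path is an assumption):
```python
import openvino as ov
import openvino.properties.hint as hints

core = ov.Core()
model = core.read_model("yolov8n_openvino_model/yolov8n.xml")  # example path

# Compile with the high-level LATENCY performance hint
config = {hints.performance_mode: hints.PerformanceMode.LATENCY}
compiled_model = core.compile_model(model, "CPU", config)
```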
### Why should I use OpenVINO for optimizing Ultralytics YOLO throughput?
OpenVINO enhances Ultralytics YOLO model throughput by maximizing device resource utilization without sacrificing performance. Key benefits include:
- **Performance Hints:** Simple, high-level performance tuning across devices.
- **Explicit Batching and Streams:** Fine-tuning for advanced performance.
- **Multi-Device Execution:** Automated inference load balancing, easing application-level management.
Example configuration:
```python
import openvino as ov
import openvino.properties.hint as hints

core = ov.Core()
model = core.read_model("yolov8n_openvino_model/yolov8n.xml")  # example model path

config = {hints.performance_mode: hints.PerformanceMode.THROUGHPUT}
compiled_model = core.compile_model(model, "GPU", config)
```
Learn more about throughput optimization in the [Throughput Optimization section](#optimizing-for-throughput) of our detailed guide.
### What is the best practice for reducing first-inference latency in OpenVINO?
To reduce first-inference latency, consider these practices:
1. **Model Caching:** Use model caching to decrease load and compile times.
2. **Model Mapping vs. Reading:** Use mapping (`ov::enable_mmap(true)`) by default but switch to reading (`ov::enable_mmap(false)`) if the model is on a removable or network drive.
3. **AUTO Device Selection:** Utilize AUTO mode to start with CPU inference and transition to an accelerator seamlessly.
For detailed strategies on managing first-inference latency, refer to the [Managing First-Inference Latency section](#managing-first-inference-latency).
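As an illustrative snippet for the caching step (the cache directory name is arbitrary):
```python
import openvino as ov
import openvino.properties as props

core = ov.Core()

# Cache compiled models on disk to speed up subsequent loads
core.set_property({props.cache_dir: "./ov_cache"})
```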
### How do I balance optimizing for latency and throughput with Ultralytics YOLO and OpenVINO?
Balancing latency and throughput optimization requires understanding your application needs:
- **Latency Optimization:** Ideal for real-time applications requiring immediate responses (e.g., consumer-grade apps).
- **Throughput Optimization:** Best for scenarios with many concurrent inferences, maximizing resource use (e.g., large-scale deployments).
Using OpenVINO's high-level performance hints and multi-device modes can help strike the right balance. Choose the appropriate [OpenVINO Performance hints](https://docs.ultralytics.com/integrations/openvino#openvino-performance-hints) based on your specific requirements.
### Can I use Ultralytics YOLO models with other AI frameworks besides OpenVINO?
Yes, Ultralytics YOLO models are highly versatile and can be integrated with various AI frameworks. Options include:
- **TensorRT:** For NVIDIA GPU optimization, follow the [TensorRT integration guide](https://docs.ultralytics.com/integrations/tensorrt).
- **CoreML:** For Apple devices, refer to our [CoreML export instructions](https://docs.ultralytics.com/integrations/coreml).
- **TensorFlow.js:** For web and Node.js apps, see the [TF.js conversion guide](https://docs.ultralytics.com/integrations/tfjs).
Explore more integrations on the [Ultralytics Integrations page](https://docs.ultralytics.com/integrations).

@@ -120,3 +120,37 @@ Parking management with [Ultralytics YOLOv8](https://github.com/ultralytics/ultr
| `iou` | `float` | `0.5` | IOU Threshold |
| `classes` | `list` | `None` | filter results by class, i.e. classes=0, or classes=[0,2,3] |
| `verbose` | `bool` | `True` | Display the object tracking results |
## FAQ
### How does Ultralytics YOLOv8 enhance parking management systems?
Ultralytics YOLOv8 greatly enhances parking management systems by providing **real-time vehicle detection** and monitoring. This results in optimized usage of parking spaces, reduced congestion, and improved safety through continuous surveillance. The [Parking Management System](https://github.com/ultralytics/ultralytics) enables efficient traffic flow, minimizing idle times and emissions in parking lots, thereby contributing to environmental sustainability. For further details, refer to the [parking management code workflow](#python-code-for-parking-management).
### What are the benefits of using Ultralytics YOLOv8 for smart parking?
Using Ultralytics YOLOv8 for smart parking yields numerous benefits:
- **Efficiency**: Optimizes the use of parking spaces and decreases congestion.
- **Safety and Security**: Enhances surveillance and ensures the safety of vehicles and pedestrians.
- **Environmental Impact**: Helps in reducing emissions by minimizing vehicle idle times. More details on the advantages can be seen [here](#advantages-of-parking-management-system).
### How can I define parking spaces using Ultralytics YOLOv8?
Defining parking spaces is straightforward with Ultralytics YOLOv8:
1. Capture a frame from a video or camera stream.
2. Use the provided code to launch a GUI for selecting an image and drawing polygons to define parking spaces.
3. Save the labeled data in JSON format for further processing. For comprehensive instructions, check the [selection of points](#selection-of-points) section.
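Recent `ultralytics` releases expose the GUI step as a small helper; a sketch, assuming the `ParkingPtsSelection` utility from the parking management guide is available in your installed version:
```python
from ultralytics import solutions

# Opens a GUI to load an image and draw polygons over parking regions,
# then saves the selected points as JSON for the parking management workflow
solutions.ParkingPtsSelection()
```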
### Can I customize the YOLOv8 model for specific parking management needs?
Yes, Ultralytics YOLOv8 allows customization for specific parking management needs. You can adjust parameters such as the **occupied and available region colors**, margins for text display, and much more. Utilizing the `ParkingManagement` class's [optional arguments](#optional-arguments-parkingmanagement), you can tailor the model to suit your particular requirements, ensuring maximum efficiency and effectiveness.
### What are some real-world applications of Ultralytics YOLOv8 in parking lot management?
Ultralytics YOLOv8 is utilized in various real-world applications for parking lot management, including:
- **Parking Space Detection**: Accurately identifying available and occupied spaces.
- **Surveillance**: Enhancing security through real-time monitoring.
- **Traffic Flow Management**: Reducing idle times and congestion with efficient traffic handling. Images showcasing these applications can be found in [real-world applications](#real-world-applications).

@@ -140,3 +140,100 @@ Queue management using [Ultralytics YOLOv8](https://github.com/ultralytics/ultra
| `iou` | `float` | `0.5` | IOU Threshold |
| `classes` | `list` | `None` | filter results by class, i.e. classes=0, or classes=[0,2,3] |
| `verbose` | `bool` | `True` | Display the object tracking results |
## FAQ
### How can I use Ultralytics YOLOv8 for real-time queue management?
To use Ultralytics YOLOv8 for real-time queue management, you can follow these steps:
1. Load the YOLOv8 model with `YOLO("yolov8n.pt")`.
2. Capture the video feed using `cv2.VideoCapture`.
3. Define the region of interest (ROI) for queue management.
4. Process frames to detect objects and manage queues.
Here's a minimal example:
```python
import cv2

from ultralytics import YOLO, solutions

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video.mp4")

queue_region = [(20, 400), (1080, 404), (1080, 360), (20, 360)]

queue = solutions.QueueManager(
    classes_names=model.names,
    reg_pts=queue_region,
    line_thickness=3,
    fontsize=1.0,
    region_color=(255, 144, 31),
)

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break
    tracks = model.track(im0, show=False, persist=True, verbose=False)
    im0 = queue.process_queue(im0, tracks)  # annotate the frame and update queue counts
    cv2.imshow("Queue Management", im0)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```
Leveraging Ultralytics [HUB](https://docs.ultralytics.com/hub/) can streamline this process by providing a user-friendly platform for deploying and managing your queue management solution.
### What are the key advantages of using Ultralytics YOLOv8 for queue management?
Using Ultralytics YOLOv8 for queue management offers several benefits:
- **Reduced Waiting Times:** Efficiently organizes queues, reducing customer wait times and boosting satisfaction.
- **Enhancing Efficiency:** Analyzes queue data to optimize staff deployment and operations, thereby reducing costs.
- **Real-time Alerts:** Provides real-time notifications for long queues, enabling quick intervention.
- **Scalability:** Easily scalable across different environments like retail, airports, and healthcare.
For more details, explore our [Queue Management](https://docs.ultralytics.com/reference/solutions/queue_management/) solutions.
### Why should I choose Ultralytics YOLOv8 over competitors like TensorFlow or Detectron2 for queue management?
Ultralytics YOLOv8 has several advantages over TensorFlow and Detectron2 for queue management:
- **Real-time Performance:** YOLOv8 is known for its real-time detection capabilities, offering faster processing speeds.
- **Ease of Use:** Ultralytics provides a user-friendly experience, from training to deployment, via [Ultralytics HUB](https://docs.ultralytics.com/hub/).
- **Pretrained Models:** Access to a range of pretrained models, minimizing the time needed for setup.
- **Community Support:** Extensive documentation and active community support make problem-solving easier.
Learn how to get started with [Ultralytics YOLO](https://docs.ultralytics.com/quickstart/).
### Can Ultralytics YOLOv8 handle multiple types of queues, such as in airports and retail?
Yes, Ultralytics YOLOv8 can manage various types of queues, including those in airports and retail environments. By configuring the QueueManager with specific regions and settings, YOLOv8 can adapt to different queue layouts and densities.
Example for airports:
```python
# Assumes `model` and the imports from the previous example are already defined
queue_region_airport = [(50, 600), (1200, 600), (1200, 550), (50, 550)]
queue_airport = solutions.QueueManager(
    classes_names=model.names,
    reg_pts=queue_region_airport,
    line_thickness=3,
    fontsize=1.0,
    region_color=(0, 255, 0),
)
```
For more information on diverse applications, check out our [Real World Applications](#real-world-applications) section.
### What are some real-world applications of Ultralytics YOLOv8 in queue management?
Ultralytics YOLOv8 is used in various real-world applications for queue management:
- **Retail:** Monitors checkout lines to reduce wait times and improve customer satisfaction.
- **Airports:** Manages queues at ticket counters and security checkpoints for a smoother passenger experience.
- **Healthcare:** Optimizes patient flow in clinics and hospitals.
- **Banks:** Enhances customer service by managing queues efficiently.
Check our [blog on real-world queue management](https://www.ultralytics.com/blog/revolutionizing-queue-management-with-ultralytics-yolov8-and-openvino) to learn more.

@@ -378,3 +378,124 @@ Congratulations on successfully setting up YOLO on your Raspberry Pi! For furthe
This guide was initially created by Daan Eeltink for Kashmir World Foundation, an organization dedicated to the use of YOLO for the conservation of endangered species. We acknowledge their pioneering work and educational focus in the realm of object detection technologies.
For more information about Kashmir World Foundation's activities, you can visit their [website](https://www.kashmirworldfoundation.org/).
## FAQ
### How do I set up Ultralytics YOLOv8 on a Raspberry Pi without using Docker?
To set up Ultralytics YOLOv8 on a Raspberry Pi without Docker, follow these steps:
1. Update the package list and install `pip`:
```bash
sudo apt update
sudo apt install python3-pip -y
pip install -U pip
```
2. Install the Ultralytics package with optional dependencies:
```bash
pip install ultralytics[export]
```
3. Reboot the device to apply changes:
```bash
sudo reboot
```
For detailed instructions, refer to the [Start without Docker](#start-without-docker) section.
### Why should I use Ultralytics YOLOv8's NCNN format on Raspberry Pi for AI tasks?
Ultralytics YOLOv8's NCNN format is highly optimized for mobile and embedded platforms, making it ideal for running AI tasks on Raspberry Pi devices. NCNN maximizes inference performance by leveraging ARM architecture, providing faster and more efficient processing compared to other formats. For more details on supported export options, visit the [Ultralytics documentation page on deployment options](../modes/export.md).
### How can I convert a YOLOv8 model to NCNN format for use on Raspberry Pi?
You can convert a PyTorch YOLOv8 model to NCNN format using either Python or CLI commands:
!!! Example
=== "Python"
```python
from ultralytics import YOLO
# Load a YOLOv8n PyTorch model
model = YOLO("yolov8n.pt")
# Export the model to NCNN format
model.export(format="ncnn") # creates 'yolov8n_ncnn_model'
# Load the exported NCNN model
ncnn_model = YOLO("yolov8n_ncnn_model")
# Run inference
results = ncnn_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"
```bash
# Export a YOLOv8n PyTorch model to NCNN format
yolo export model=yolov8n.pt format=ncnn # creates 'yolov8n_ncnn_model'
# Run inference with the exported model
yolo predict model='yolov8n_ncnn_model' source='https://ultralytics.com/images/bus.jpg'
```
For more details, see the [Use NCNN on Raspberry Pi](#use-ncnn-on-raspberry-pi) section.
### What are the hardware differences between Raspberry Pi 4 and Raspberry Pi 5 relevant to running YOLOv8?
Key differences include:
- **CPU**: Raspberry Pi 4 uses Broadcom BCM2711, Cortex-A72 64-bit SoC, while Raspberry Pi 5 uses Broadcom BCM2712, Cortex-A76 64-bit SoC.
- **Max CPU Frequency**: Raspberry Pi 4 has a max frequency of 1.8GHz, whereas Raspberry Pi 5 reaches 2.4GHz.
- **Memory**: Raspberry Pi 4 offers up to 8GB of LPDDR4-3200 SDRAM, while Raspberry Pi 5 features LPDDR4X-4267 SDRAM, available in 4GB and 8GB variants.
These enhancements contribute to better performance benchmarks for YOLOv8 models on Raspberry Pi 5 compared to Raspberry Pi 4. Refer to the [Raspberry Pi Series Comparison](#raspberry-pi-series-comparison) table for more details.
### How can I set up a Raspberry Pi Camera Module to work with Ultralytics YOLOv8?
There are two methods to set up a Raspberry Pi Camera for YOLOv8 inference:
1. **Using `picamera2`**:
```python
import cv2
from picamera2 import Picamera2
from ultralytics import YOLO
picam2 = Picamera2()
picam2.preview_configuration.main.size = (1280, 720)
picam2.preview_configuration.main.format = "RGB888"
picam2.preview_configuration.align()
picam2.configure("preview")
picam2.start()
model = YOLO("yolov8n.pt")
while True:
    frame = picam2.capture_array()
    results = model(frame)
    annotated_frame = results[0].plot()
    cv2.imshow("Camera", annotated_frame)
    if cv2.waitKey(1) == ord("q"):
        break
cv2.destroyAllWindows()
```
2. **Using a TCP Stream**:
```bash
rpicam-vid -n -t 0 --inline --listen -o tcp://127.0.0.1:8888
```
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
results = model("tcp://127.0.0.1:8888")
```
For detailed setup instructions, visit the [Inference with Camera](#inference-with-camera) section.

@@ -84,3 +84,50 @@ python yolov8_region_counter.py --source "path/to/video.mp4" --view-img
| `--classes` | `list` | `None` | Detect specific classes, e.g. `--classes 0 2` |
| `--region-thickness` | `int` | `2` | Region Box thickness |
| `--track-thickness` | `int` | `2` | Tracking line thickness |
## FAQ
### What is object counting in specified regions using Ultralytics YOLOv8?
Object counting in specified regions with [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) involves detecting and tallying the number of objects within defined areas using advanced computer vision. This precise method enhances efficiency and accuracy across various applications like manufacturing, surveillance, and traffic monitoring.
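Conceptually, the count reduces to testing whether each detection's center falls inside a region polygon. Here is a minimal sketch of that test using `shapely` (the example script applies a similar containment check; the region vertices and box centers below are illustrative):

```python
from shapely.geometry import Point, Polygon

region = Polygon([(100, 100), (500, 100), (500, 400), (100, 400)])  # illustrative region
detections = [(320, 250), (50, 60)]  # hypothetical box centers (x, y)

# Count how many detection centers fall inside the region
count = sum(region.contains(Point(x, y)) for x, y in detections)
print(count)  # 1
```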
### How do I run the object counting script with Ultralytics YOLOv8?
Follow these steps to run object counting in Ultralytics YOLOv8:
1. Clone the Ultralytics repository and navigate to the directory:
```bash
git clone https://github.com/ultralytics/ultralytics
cd ultralytics/examples/YOLOv8-Region-Counter
```
2. Execute the region counting script:
```bash
python yolov8_region_counter.py --source "path/to/video.mp4" --save-img
```
For more options, visit the [Run Region Counting](#steps-to-run) section.
### Why should I use Ultralytics YOLOv8 for object counting in regions?
Using Ultralytics YOLOv8 for object counting in regions offers several advantages:
- **Precision and Accuracy:** Minimizes errors often seen in manual counting.
- **Efficiency Improvement:** Provides real-time results and streamlines processes.
- **Versatility and Application:** Applies to various domains, enhancing its utility.
Explore deeper benefits in the [Advantages](#advantages-of-object-counting-in-regions) section.
### Can the defined regions be adjusted during video playback?
Yes, with Ultralytics YOLOv8, regions can be interactively moved during video playback. Simply click and drag with the left mouse button to reposition the region. This feature enhances flexibility for dynamic environments. Learn more in the tip section for [movable regions](#step-2-run-region-counting-using-ultralytics-yolov8).
### What are some real-world applications of object counting in regions?
Object counting with Ultralytics YOLOv8 can be applied to numerous real-world scenarios:
- **Retail:** Counting people for foot traffic analysis.
- **Market Streets:** Crowd density management.
Explore more examples in the [Real World Applications](#real-world-applications) section.

@@ -512,3 +512,115 @@ for index, class_id in enumerate(classes):
<p align="center">
<img width="100%" src="https://github.com/ultralytics/ultralytics/assets/3855193/3caafc4a-0edd-4e5f-8dd1-37e30be70123" alt="Point Cloud Segmentation with Ultralytics ">
</p>
## FAQ
### What is the Robot Operating System (ROS)?
The [Robot Operating System (ROS)](https://www.ros.org/) is an open-source framework commonly used in robotics to help developers create robust robot applications. It provides a collection of [libraries and tools](https://www.ros.org/blog/ecosystem/) for building and interfacing with robotic systems, enabling easier development of complex applications. ROS supports communication between nodes using messages over [topics](https://wiki.ros.org/ROS/Tutorials/UnderstandingTopics) or [services](https://wiki.ros.org/ROS/Tutorials/UnderstandingServicesParams).
### How do I integrate Ultralytics YOLO with ROS for real-time object detection?
Integrating Ultralytics YOLO with ROS involves setting up a ROS environment and using YOLO for processing sensor data. Begin by installing the required dependencies like `ros_numpy` and Ultralytics YOLO:
```bash
pip install ros_numpy ultralytics
```
Next, create a ROS node and subscribe to an [image topic](../tasks/detect.md) to process the incoming data. Here is a minimal example:
```python
import ros_numpy
import rospy
from sensor_msgs.msg import Image
from ultralytics import YOLO
detection_model = YOLO("yolov8m.pt")
rospy.init_node("ultralytics")
det_image_pub = rospy.Publisher("/ultralytics/detection/image", Image, queue_size=5)
def callback(data):
    array = ros_numpy.numpify(data)
    det_result = detection_model(array)
    det_annotated = det_result[0].plot(show=False)
    det_image_pub.publish(ros_numpy.msgify(Image, det_annotated, encoding="rgb8"))
rospy.Subscriber("/camera/color/image_raw", Image, callback)
rospy.spin()
```
### What are ROS topics and how are they used in Ultralytics YOLO?
ROS topics facilitate communication between nodes in a ROS network by using a publish-subscribe model. A topic is a named channel that nodes use to send and receive messages asynchronously. In the context of Ultralytics YOLO, you can make a node subscribe to an image topic, process the images using YOLO for tasks like detection or segmentation, and publish outcomes to new topics.
For example, subscribe to a camera topic and process the incoming image for detection:
```python
rospy.Subscriber("/camera/color/image_raw", Image, callback)
```
### Why use depth images with Ultralytics YOLO in ROS?
Depth images in ROS, represented by `sensor_msgs/Image`, provide the distance of objects from the camera, crucial for tasks like obstacle avoidance, 3D mapping, and localization. By [using depth information](https://en.wikipedia.org/wiki/Depth_map) along with RGB images, robots can better understand their 3D environment.
With YOLO, you can extract segmentation masks from RGB images and apply these masks to depth images to obtain precise 3D object information, improving the robot's ability to navigate and interact with its surroundings.
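As a minimal sketch of this masking idea, assuming an RGB frame and a depth map that are aligned pixel-for-pixel and that the predicted masks share the frame's resolution (the placeholder arrays below stand in for real camera data):

```python
import numpy as np

from ultralytics import YOLO

model = YOLO("yolov8m-seg.pt")
rgb = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder RGB frame
depth = np.ones((480, 640), dtype=np.float32)  # placeholder aligned depth map in meters

results = model(rgb)
if results[0].masks is not None:
    for mask in results[0].masks.data.cpu().numpy():
        obj_depth = depth[mask.astype(bool)]  # depth values covered by this object's mask
        print(f"Median object distance: {np.nanmedian(obj_depth):.2f} m")
```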
### How can I visualize 3D point clouds with YOLO in ROS?
To visualize 3D point clouds in ROS with YOLO:
1. Convert `sensor_msgs/PointCloud2` messages to numpy arrays.
2. Use YOLO to segment RGB images.
3. Apply the segmentation mask to the point cloud.
Here's an example using Open3D for visualization:
```python
import sys

import numpy as np
import open3d as o3d
import ros_numpy
import rospy
from sensor_msgs.msg import PointCloud2

from ultralytics import YOLO

rospy.init_node("ultralytics")
segmentation_model = YOLO("yolov8m-seg.pt")


def pointcloud2_to_array(pointcloud2):
    pc_array = ros_numpy.point_cloud2.pointcloud2_to_array(pointcloud2)
    split = ros_numpy.point_cloud2.split_rgb_field(pc_array)
    rgb = np.stack([split["b"], split["g"], split["r"]], axis=2)
    xyz = ros_numpy.point_cloud2.get_xyz_points(pc_array, remove_nans=False)
    xyz = np.array(xyz).reshape((pointcloud2.height, pointcloud2.width, 3))
    return xyz, rgb


ros_cloud = rospy.wait_for_message("/camera/depth/points", PointCloud2)
xyz, rgb = pointcloud2_to_array(ros_cloud)
result = segmentation_model(rgb)
if not len(result[0].boxes.cls):
    print("No objects detected")
    sys.exit()
classes = result[0].boxes.cls.cpu().numpy().astype(int)
for index, class_id in enumerate(classes):
    mask = result[0].masks.data.cpu().numpy()[index, :, :].astype(int)
    mask_expanded = np.stack([mask, mask, mask], axis=2)
    obj_rgb = rgb * mask_expanded
    obj_xyz = xyz * mask_expanded
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(obj_xyz.reshape((-1, 3)))
    pcd.colors = o3d.utility.Vector3dVector(obj_rgb.reshape((-1, 3)) / 255)
    o3d.visualization.draw_geometries([pcd])
```
This approach provides a 3D visualization of segmented objects, useful for tasks like navigation and manipulation.

@@ -203,3 +203,93 @@ If you use SAHI in your research or development work, please cite the original S
```
We extend our thanks to the SAHI research group for creating and maintaining this invaluable resource for the computer vision community. For more information about SAHI and its creators, visit the [SAHI GitHub repository](https://github.com/obss/sahi).
## FAQ
### How can I integrate YOLOv8 with SAHI for sliced inference in object detection?
Integrating Ultralytics YOLOv8 with SAHI (Slicing Aided Hyper Inference) for sliced inference optimizes your object detection tasks on high-resolution images by partitioning them into manageable slices. This approach improves memory usage and ensures high detection accuracy. To get started, you need to install the `ultralytics` and `sahi` libraries:
```bash
pip install -U ultralytics sahi
```
Then, download a YOLOv8 model and test images:
```python
from sahi.utils.file import download_from_url
from sahi.utils.yolov8 import download_yolov8s_model
# Download YOLOv8 model
yolov8_model_path = "models/yolov8s.pt"
download_yolov8s_model(yolov8_model_path)
# Download test images
download_from_url(
    "https://raw.githubusercontent.com/obss/sahi/main/demo/demo_data/small-vehicles1.jpeg",
    "demo_data/small-vehicles1.jpeg",
)
```
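With the model weights and test image in place, sliced inference itself runs through SAHI's `AutoDetectionModel` and `get_sliced_prediction`. Here is a minimal sketch reusing `yolov8_model_path` from the snippet above (the slice sizes and confidence threshold are illustrative):

```python
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov8",
    model_path=yolov8_model_path,
    confidence_threshold=0.3,
    device="cpu",  # or 'cuda:0'
)
result = get_sliced_prediction(
    "demo_data/small-vehicles1.jpeg",
    detection_model,
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
```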
For more detailed instructions, refer to our [Sliced Inference guide](#sliced-inference-with-yolov8).
### Why should I use SAHI with YOLOv8 for object detection on large images?
Using SAHI with Ultralytics YOLOv8 for object detection on large images offers several benefits:
- **Reduced Computational Burden**: Smaller slices are faster to process and consume less memory, making it feasible to run high-quality detections on hardware with limited resources.
- **Maintained Detection Accuracy**: SAHI uses intelligent algorithms to merge overlapping boxes, preserving the detection quality.
- **Enhanced Scalability**: By scaling object detection tasks across different image sizes and resolutions, SAHI becomes ideal for various applications, such as satellite imagery analysis and medical diagnostics.
Learn more about the [benefits of sliced inference](#benefits-of-sliced-inference) in our documentation.
### Can I visualize prediction results when using YOLOv8 with SAHI?
Yes, you can visualize prediction results when using YOLOv8 with SAHI. Here's how you can export and visualize the results:
```python
from IPython.display import Image

# 'result' is the prediction object returned by SAHI, e.g. from get_sliced_prediction()
result.export_visuals(export_dir="demo_data/")
Image("demo_data/prediction_visual.png")
```
This command saves the visualized predictions to the specified directory, and you can then load the image to view it in your notebook or application. For a detailed guide, check out the [Standard Inference section](#visualize-results).
### What features does SAHI offer for improving YOLOv8 object detection?
SAHI (Slicing Aided Hyper Inference) offers several features that complement Ultralytics YOLOv8 for object detection:
- **Seamless Integration**: SAHI easily integrates with YOLO models, requiring minimal code adjustments.
- **Resource Efficiency**: It partitions large images into smaller slices, which optimizes memory usage and speed.
- **High Accuracy**: By effectively merging overlapping detection boxes during the stitching process, SAHI maintains high detection accuracy.
For a deeper understanding, read about SAHI's [key features](#key-features-of-sahi).
### How do I handle large-scale inference projects using YOLOv8 and SAHI?
To handle large-scale inference projects using YOLOv8 and SAHI, follow these best practices:
1. **Install Required Libraries**: Ensure that you have the latest versions of `ultralytics` and `sahi`.
2. **Configure Sliced Inference**: Determine the optimal slice dimensions and overlap ratios for your specific project.
3. **Run Batch Predictions**: Use SAHI's capabilities to perform batch predictions on a directory of images, which improves efficiency.
Example for batch prediction:
```python
from sahi.predict import predict
predict(
    model_type="yolov8",
    model_path="path/to/yolov8n.pt",
    model_device="cpu",  # or 'cuda:0'
    model_confidence_threshold=0.4,
    source="path/to/dir",
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
```
For more detailed steps, visit our section on [Batch Prediction](#batch-prediction).

@@ -176,3 +176,25 @@ That's it! When you execute the code, you'll receive a single notification on yo
#### Email Received Sample
<img width="256" src="https://github.com/RizwanMunawar/ultralytics/assets/62513924/db79ccc6-aabd-4566-a825-b34e679c90f9" alt="Email Received Sample">
## FAQ
### How does Ultralytics YOLOv8 improve the accuracy of a security alarm system?
Ultralytics YOLOv8 enhances security alarm systems by delivering high-accuracy, real-time object detection. Its advanced algorithms significantly reduce false positives, ensuring that the system only responds to genuine threats. This increased reliability can be seamlessly integrated with existing security infrastructure, upgrading the overall surveillance quality.
### Can I integrate Ultralytics YOLOv8 with my existing security infrastructure?
Yes, Ultralytics YOLOv8 can be seamlessly integrated with your existing security infrastructure. The system supports various modes and provides flexibility for customization, allowing you to enhance your existing setup with advanced object detection capabilities. For detailed instructions on integrating YOLOv8 in your projects, visit the [integration section](https://docs.ultralytics.com/integrations/).
### What are the storage requirements for running Ultralytics YOLOv8?
Running Ultralytics YOLOv8 on a standard setup typically requires around 5GB of free disk space. This includes space for storing the YOLOv8 model and any additional dependencies. For cloud-based solutions, Ultralytics HUB offers efficient project management and dataset handling, which can optimize storage needs. Learn more about the [Pro Plan](../hub/pro.md) for enhanced features including extended storage.
### What makes Ultralytics YOLOv8 different from other object detection models like Faster R-CNN or SSD?
Ultralytics YOLOv8 provides an edge over models like Faster R-CNN or SSD with its real-time detection capabilities and higher accuracy. Its unique architecture allows it to process images much faster without compromising on precision, making it ideal for time-sensitive applications like security alarm systems. For a comprehensive comparison of object detection models, you can explore our [guide](https://docs.ultralytics.com/models).
### How can I reduce the frequency of false positives in my security system using Ultralytics YOLOv8?
To reduce false positives, ensure your Ultralytics YOLOv8 model is adequately trained with a diverse and well-annotated dataset. Fine-tuning hyperparameters and regularly updating the model with new data can significantly improve detection accuracy. Detailed hyperparameter tuning techniques can be found in our [hyperparameter tuning guide](../guides/hyperparameter-tuning.md).
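As a sketch, automated tuning is available through the Ultralytics `model.tune()` API (the dataset path and search budgets below are placeholders to adapt to your setup):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# Search hyperparameters over 30 short training runs; tune the budgets to your hardware
model.tune(data="path/to/your/data.yaml", epochs=10, iterations=30, optimizer="AdamW", plots=False, val=False)
```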

@@ -108,3 +108,86 @@ keywords: Ultralytics YOLOv8, speed estimation, object tracking, computer vision
| `iou` | `float` | `0.5` | IoU threshold |
| `classes` | `list` | `None` | Filter results by class, e.g. `classes=0` or `classes=[0, 2, 3]` |
| `verbose` | `bool` | `True` | Display the object tracking results |
## FAQ
### How do I estimate object speed using Ultralytics YOLOv8?
Estimating object speed with Ultralytics YOLOv8 involves combining object detection and tracking techniques. First, you need to detect objects in each frame using the YOLOv8 model. Then, track these objects across frames to calculate their movement over time. Finally, use the distance traveled by the object between frames and the frame rate to estimate its speed.
**Example**:
```python
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n.pt")
names = model.model.names
cap = cv2.VideoCapture("path/to/video/file.mp4")
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
video_writer = cv2.VideoWriter("speed_estimation.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Initialize SpeedEstimator
speed_obj = solutions.SpeedEstimator(
    reg_pts=[(0, 360), (1280, 360)],
    names=names,
    view_img=True,
)
while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break
    tracks = model.track(im0, persist=True, show=False)
    im0 = speed_obj.estimate_speed(im0, tracks)
    video_writer.write(im0)
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
For more details, refer to our [official blog post](https://www.ultralytics.com/blog/ultralytics-yolov8-for-speed-estimation-in-computer-vision-projects).
### What are the benefits of using Ultralytics YOLOv8 for speed estimation in traffic management?
Using Ultralytics YOLOv8 for speed estimation offers significant advantages in traffic management:
- **Enhanced Safety**: Accurately estimate vehicle speeds to detect over-speeding and improve road safety.
- **Real-Time Monitoring**: Benefit from YOLOv8's real-time object detection capability to monitor traffic flow and congestion effectively.
- **Scalability**: Deploy the model on various hardware setups, from edge devices to servers, ensuring flexible and scalable solutions for large-scale implementations.
For more applications, see [advantages of speed estimation](#advantages-of-speed-estimation).
### Can YOLOv8 be integrated with other AI frameworks like TensorFlow or PyTorch?
Yes, YOLOv8 can be integrated with other AI frameworks like TensorFlow and PyTorch. Ultralytics provides support for exporting YOLOv8 models to various formats like ONNX, TensorRT, and CoreML, ensuring smooth interoperability with other ML frameworks.
To export a YOLOv8 model to ONNX format:
```bash
yolo export model=yolov8n.pt format=onnx
```
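For reference, the equivalent Python export uses the same `model.export()` call shown elsewhere in this guide:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.export(format="onnx")  # creates 'yolov8n.onnx'
```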
Learn more about exporting models in our [guide on export](../modes/export.md).
### How accurate is the speed estimation using Ultralytics YOLOv8?
The accuracy of speed estimation using Ultralytics YOLOv8 depends on several factors, including the quality of the object tracking, the resolution and frame rate of the video, and environmental variables. While the speed estimator provides reliable estimates, it may not be 100% accurate due to variances in frame processing speed and object occlusion.
**Note**: Always account for a margin of error and validate the estimates against ground truth data when possible.
For further accuracy improvement tips, check the [Arguments `SpeedEstimator` section](#arguments-speedestimator).
### Why choose Ultralytics YOLOv8 over other object detection models like TensorFlow Object Detection API?
Ultralytics YOLOv8 offers several advantages over other object detection models, such as the TensorFlow Object Detection API:
- **Real-Time Performance**: YOLOv8 is optimized for real-time detection, providing high speed and accuracy.
- **Ease of Use**: Designed with a user-friendly interface, YOLOv8 simplifies model training and deployment.
- **Versatility**: Supports multiple tasks, including object detection, segmentation, and pose estimation.
- **Community and Support**: YOLOv8 is backed by an active community and extensive documentation, ensuring developers have the resources they need.
For more information on the benefits of YOLOv8, explore our detailed [model page](../models/yolov8.md).

@@ -142,3 +142,126 @@ subprocess.call(f"docker kill {container_id}", shell=True)
---
By following the above steps, you can deploy and run Ultralytics YOLOv8 models efficiently on Triton Inference Server, providing a scalable and high-performance solution for deep learning inference tasks. If you face any issues or have further queries, refer to the [official Triton documentation](https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html) or reach out to the Ultralytics community for support.
## FAQ
### How do I set up Ultralytics YOLOv8 with NVIDIA Triton Inference Server?
Setting up [Ultralytics YOLOv8](https://docs.ultralytics.com/models/yolov8) with [NVIDIA Triton Inference Server](https://developer.nvidia.com/nvidia-triton-inference-server) involves a few key steps:
1. **Export YOLOv8 to ONNX format**:
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load an official model
# Export the model to ONNX format
onnx_file = model.export(format="onnx", dynamic=True)
```
2. **Set up Triton Model Repository**:
```python
from pathlib import Path
# Define paths
model_name = "yolo"
triton_repo_path = Path("tmp") / "triton_repo"
triton_model_path = triton_repo_path / model_name
# Create directories
(triton_model_path / "1").mkdir(parents=True, exist_ok=True)
Path(onnx_file).rename(triton_model_path / "1" / "model.onnx")
(triton_model_path / "config.pbtxt").touch()
```
3. **Run the Triton Server**:
```python
import contextlib
import subprocess
import time
from tritonclient.http import InferenceServerClient
# Define image https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tritonserver
tag = "nvcr.io/nvidia/tritonserver:23.09-py3"
subprocess.call(f"docker pull {tag}", shell=True)
container_id = (
    subprocess.check_output(
        f"docker run -d --rm -v {triton_repo_path}:/models -p 8000:8000 {tag} tritonserver --model-repository=/models",
        shell=True,
    )
    .decode("utf-8")
    .strip()
)
triton_client = InferenceServerClient(url="localhost:8000", verbose=False, ssl=False)
for _ in range(10):
    with contextlib.suppress(Exception):
        assert triton_client.is_model_ready(model_name)
        break
    time.sleep(1)
```
This setup can help you efficiently deploy YOLOv8 models at scale on Triton Inference Server for high-performance AI model inference.
### What benefits does using Ultralytics YOLOv8 with NVIDIA Triton Inference Server offer?
Integrating [Ultralytics YOLOv8](../models/yolov8.md) with [NVIDIA Triton Inference Server](https://developer.nvidia.com/nvidia-triton-inference-server) provides several advantages:
- **Scalable AI Inference**: Triton allows serving multiple models from a single server instance, supporting dynamic model loading and unloading, making it highly scalable for diverse AI workloads.
- **High Performance**: Optimized for NVIDIA GPUs, Triton Inference Server ensures high-speed inference operations, perfect for real-time applications such as object detection.
- **Ensemble and Model Versioning**: Triton's ensemble mode enables combining multiple models to improve results, and its model versioning supports A/B testing and rolling updates.
For detailed instructions on setting up and running YOLOv8 with Triton, you can refer to the [setup guide](#setting-up-triton-model-repository).
### Why should I export my YOLOv8 model to ONNX format before using Triton Inference Server?
Using ONNX (Open Neural Network Exchange) format for your [Ultralytics YOLOv8](../models/yolov8.md) model before deploying it on [NVIDIA Triton Inference Server](https://developer.nvidia.com/nvidia-triton-inference-server) offers several key benefits:
- **Interoperability**: ONNX format supports transfer between different deep learning frameworks (such as PyTorch, TensorFlow), ensuring broader compatibility.
- **Optimization**: Many deployment environments, including Triton, optimize for ONNX, enabling faster inference and better performance.
- **Ease of Deployment**: ONNX is widely supported across frameworks and platforms, simplifying the deployment process in various operating systems and hardware configurations.
To export your model, use:
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
onnx_file = model.export(format="onnx", dynamic=True)
```
You can follow the steps in the [exporting guide](../modes/export.md) to complete the process.
### Can I run inference using the Ultralytics YOLOv8 model on Triton Inference Server?
Yes, you can run inference using the [Ultralytics YOLOv8](../models/yolov8.md) model on [NVIDIA Triton Inference Server](https://developer.nvidia.com/nvidia-triton-inference-server). Once your model is set up in the Triton Model Repository and the server is running, you can load and run inference on your model as follows:
```python
from ultralytics import YOLO
# Load the Triton Server model
model = YOLO("http://localhost:8000/yolo", task="detect")
# Run inference on the server
results = model("path/to/image.jpg")
```
For an in-depth guide on setting up and running Triton Server with YOLOv8, refer to the [Running Triton Inference Server](#running-triton-inference-server) section.
### How does Ultralytics YOLOv8 compare to TensorFlow and PyTorch models for deployment?
[Ultralytics YOLOv8](https://docs.ultralytics.com/models/yolov8) offers several unique advantages compared to TensorFlow and PyTorch models for deployment:
- **Real-time Performance**: Optimized for real-time object detection tasks, YOLOv8 provides state-of-the-art accuracy and speed, making it ideal for applications requiring live video analytics.
- **Ease of Use**: YOLOv8 integrates seamlessly with Triton Inference Server and supports diverse export formats (ONNX, TensorRT, CoreML), making it flexible for various deployment scenarios.
- **Advanced Features**: YOLOv8 includes features like dynamic model loading, model versioning, and ensemble inference, which are crucial for scalable and reliable AI deployments.
For more details, compare the deployment options in the [model deployment guide](../modes/export.md).

@@ -139,3 +139,105 @@ w.draw(mem_file)
!!! tip
You may need to use `clear` to "erase" the view of the image in the terminal.
## FAQ
### How can I view YOLO inference results in a VSCode terminal on macOS or Linux?
To view YOLO inference results in a VSCode terminal on macOS or Linux, follow these steps:
1. Enable the necessary VSCode settings:
```yaml
"terminal.integrated.enableImages": true
"terminal.integrated.gpuAcceleration": "auto"
```
2. Install the sixel library:
```bash
pip install sixel
```
3. Load your YOLO model and run inference:
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
results = model.predict(source="path_to_image")
plot = results[0].plot()
```
4. Convert the inference result image to bytes and display it in the terminal:
```python
import io
import cv2
from sixel import SixelWriter
im_bytes = cv2.imencode(".png", plot)[1].tobytes()
mem_file = io.BytesIO(im_bytes)
SixelWriter().draw(mem_file)
```
For further details, visit the [predict mode](../modes/predict.md) page.
### Why does the sixel protocol only work on Linux and macOS?
The sixel protocol is currently only supported on Linux and macOS because these platforms have native terminal capabilities compatible with sixel graphics. Windows support for terminal graphics using sixel is still under development. For updates on Windows compatibility, check the [VSCode Issue status](https://github.com/microsoft/vscode/issues/198622) and [documentation](https://code.visualstudio.com/docs).
### What if I encounter issues with displaying images in the VSCode terminal?
If you encounter issues displaying images in the VSCode terminal using sixel:
1. Ensure the necessary settings in VSCode are enabled:
```yaml
"terminal.integrated.enableImages": true
"terminal.integrated.gpuAcceleration": "auto"
```
2. Verify the sixel library installation:
```bash
pip install sixel
```
3. Check your image data conversion and plotting code for errors. For example:
```python
import io
import cv2
from sixel import SixelWriter
im_bytes = cv2.imencode(".png", plot)[1].tobytes()
mem_file = io.BytesIO(im_bytes)
SixelWriter().draw(mem_file)
```
If problems persist, consult the [VSCode repository](https://github.com/microsoft/vscode), and visit the [plot method parameters](../modes/predict.md#plot-method-parameters) section for additional guidance.
### Can YOLO display video inference results in the terminal using sixel?
Displaying video inference results or animated GIF frames using sixel in the terminal is currently untested and may not be supported. We recommend starting with static images and verifying compatibility. Attempt video results at your own risk, keeping in mind performance constraints. For more information on plotting inference results, visit the [predict mode](../modes/predict.md) page.
### How can I troubleshoot issues with the `python-sixel` library?
To troubleshoot issues with the `python-sixel` library:
1. Ensure the library is correctly installed in your virtual environment:
```bash
pip install sixel
```
2. Verify that you have the necessary Python and system dependencies.
3. Refer to the [python-sixel GitHub repository](https://github.com/lubosz/python-sixel) for additional documentation and community support.
4. Double-check your code for potential errors, specifically the usage of `SixelWriter` and image data conversion steps.
For further assistance on working with YOLO models and sixel integration, see the [export](../modes/export.md) and [predict mode](../modes/predict.md) documentation pages.

@@ -177,3 +177,132 @@ keywords: VisionEye, YOLOv8, Ultralytics, object mapping, object tracking, dista
## Note
For any inquiries, feel free to post your questions in the [Ultralytics Issue Section](https://github.com/ultralytics/ultralytics/issues/new/choose) or the discussion section mentioned below.
## FAQ
### How do I start using VisionEye Object Mapping with Ultralytics YOLOv8?
To start using VisionEye Object Mapping with Ultralytics YOLOv8, first install the Ultralytics YOLO package via pip. Then, use the sample code provided in the documentation to set up object detection with VisionEye. Here's a simple example to get you started:
```python
import cv2
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
while True:
    ret, frame = cap.read()
    if not ret:
        break
    results = model.predict(frame)
    annotated_frame = results[0].plot()  # draw detections; replace with custom VisionEye logic as needed
    cv2.imshow("visioneye", annotated_frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```
### What are the key features of VisionEye's object tracking capability using Ultralytics YOLOv8?
VisionEye's object tracking with Ultralytics YOLOv8 allows users to follow the movement of objects within a video frame. Key features include:
1. **Real-Time Object Tracking**: Keeps up with objects as they move.
2. **Object Identification**: Utilizes YOLOv8's powerful detection algorithms.
3. **Distance Calculation**: Calculates distances between objects and specified points.
4. **Annotation and Visualization**: Provides visual markers for tracked objects.
Here's a brief code snippet demonstrating tracking with VisionEye:
```python
import cv2
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
while True:
    ret, frame = cap.read()
    if not ret:
        break
    results = model.track(frame, persist=True)
    annotated_frame = results[0].plot()  # draw tracked boxes and IDs; extend with VisionEye annotations
    cv2.imshow("visioneye-tracking", annotated_frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```
For a comprehensive guide, visit the [VisionEye Object Mapping with Object Tracking](#samples).
### How can I calculate distances with VisionEye's YOLOv8 model?
Distance calculation with VisionEye and Ultralytics YOLOv8 involves determining the distance of detected objects from a specified point in the frame. It enhances spatial analysis capabilities, useful in applications such as autonomous driving and surveillance.
Here's a simplified example:
```python
import math
import cv2
from ultralytics import YOLO
model = YOLO("yolov8s.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
center_point = (0, 480) # Example center point
pixel_per_meter = 10
while True:
    ret, frame = cap.read()
    if not ret:
        break
    results = model.track(frame, persist=True)
    for result in results:
        boxes = result.boxes.xyxy.cpu().numpy()
        # Distance from each box's top-left corner to the center point, converted to meters
        distances = [
            math.sqrt((box[0] - center_point[0]) ** 2 + (box[1] - center_point[1]) ** 2) / pixel_per_meter
            for box in boxes
        ]
    cv2.imshow("visioneye-distance", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```
For detailed instructions, refer to the [VisionEye with Distance Calculation](#samples).
### Why should I use Ultralytics YOLOv8 for object mapping and tracking?
Ultralytics YOLOv8 is renowned for its speed, accuracy, and ease of integration, making it a top choice for object mapping and tracking. Key advantages include:
1. **State-of-the-art Performance**: Delivers high accuracy in real-time object detection.
2. **Flexibility**: Supports various tasks such as detection, tracking, and distance calculation.
3. **Community and Support**: Extensive documentation and active GitHub community for troubleshooting and enhancements.
4. **Ease of Use**: Intuitive API simplifies complex tasks, allowing for rapid deployment and iteration.
For more information on applications and benefits, check out the [Ultralytics YOLOv8 documentation](https://docs.ultralytics.com/models/yolov8/).
### How can I integrate VisionEye with other machine learning tools like Comet or ClearML?
Ultralytics YOLOv8 can integrate seamlessly with various machine learning tools like Comet and ClearML, enhancing experiment tracking, collaboration, and reproducibility. Follow the detailed guides on [how to use YOLOv5 with Comet](https://www.ultralytics.com/blog/how-to-use-yolov5-with-comet) and [integrate YOLOv8 with ClearML](https://docs.ultralytics.com/integrations/clearml/) to get started.
For further exploration and integration examples, check our [Ultralytics Integrations Guide](https://docs.ultralytics.com/integrations/).

@@ -4,7 +4,7 @@ description: Optimize your fitness routine with real-time workouts monitoring us
keywords: workouts monitoring, Ultralytics YOLOv8, pose estimation, fitness tracking, exercise assessment, real-time feedback, exercise form, performance metrics
---
# Workouts Monitoring using Ultralytics YOLOv8 🚀
# Workouts Monitoring using Ultralytics YOLOv8
Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics/) enhances exercise assessment by accurately tracking key body landmarks and joints in real-time. This technology provides instant feedback on exercise form, tracks workout routines, and measures performance metrics, optimizing training sessions for users and trainers alike.
@@ -152,3 +152,110 @@ Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://gi
| `iou` | `float` | `0.5` | IoU threshold |
| `classes` | `list` | `None` | Filter results by class, e.g. `classes=0` or `classes=[0, 2, 3]` |
| `verbose` | `bool` | `True` | Display the object tracking results |
## FAQ
### How do I monitor my workouts using Ultralytics YOLOv8?
To monitor your workouts using Ultralytics YOLOv8, you can utilize the pose estimation capabilities to track and analyze key body landmarks and joints in real-time. This allows you to receive instant feedback on your exercise form, count repetitions, and measure performance metrics. You can start by using the provided example code for pushups, pullups, or ab workouts as shown:
```python
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n-pose.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
gym_object = solutions.AIGym(
    line_thickness=2,
    view_img=True,
    pose_type="pushup",
    kpts_to_check=[6, 8, 10],
)
while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Video frame is empty or video processing has been successfully completed.")
        break
    results = model.track(im0, verbose=False)
    im0 = gym_object.start_counting(im0, results)
cap.release()
cv2.destroyAllWindows()
```
For further customization and settings, you can refer to the [AIGym](#arguments-aigym) section in the documentation.
### What are the benefits of using Ultralytics YOLOv8 for workout monitoring?
Using Ultralytics YOLOv8 for workout monitoring provides several key benefits:
- **Optimized Performance:** By tailoring workouts based on monitoring data, you can achieve better results.
- **Goal Achievement:** Easily track and adjust fitness goals for measurable progress.
- **Personalization:** Get customized workout plans based on your individual data for optimal effectiveness.
- **Health Awareness:** Early detection of patterns that indicate potential health issues or over-training.
- **Informed Decisions:** Make data-driven decisions to adjust routines and set realistic goals.
You can watch a [YouTube video demonstration](https://www.youtube.com/embed/LGGxqLZtvuw) to see these benefits in action.
### How accurate is Ultralytics YOLOv8 in detecting and tracking exercises?
Ultralytics YOLOv8 is highly accurate in detecting and tracking exercises due to its state-of-the-art pose estimation capabilities. It can accurately track key body landmarks and joints, providing real-time feedback on exercise form and performance metrics. The model's pretrained weights and robust architecture ensure high precision and reliability. For real-world examples, check out the [real-world applications](#real-world-applications) section in the documentation, which showcases pushups and pullups counting.
### Can I use Ultralytics YOLOv8 for custom workout routines?
Yes, Ultralytics YOLOv8 can be adapted for custom workout routines. The `AIGym` class supports different pose types such as "pushup", "pullup", and "abworkout", and you can specify keypoints and angles to detect other exercises. Here is an example setup:
```python
from ultralytics import solutions

gym_object = solutions.AIGym(
    line_thickness=2,
    view_img=True,
    pose_type="squat",
    kpts_to_check=[6, 8, 10],
)
```
For more details on setting arguments, refer to the [Arguments `AIGym`](#arguments-aigym) section. This flexibility allows you to monitor various exercises and customize routines based on your needs.
### How can I save the workout monitoring output using Ultralytics YOLOv8?
To save the workout monitoring output, you can modify the code to include a video writer that saves the processed frames. Here's an example:
```python
import cv2
from ultralytics import YOLO, solutions
model = YOLO("yolov8n-pose.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
video_writer = cv2.VideoWriter("workouts.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
gym_object = solutions.AIGym(
    line_thickness=2,
    view_img=True,
    pose_type="pushup",
    kpts_to_check=[6, 8, 10],
)
while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Video frame is empty or video processing has been successfully completed.")
        break
    results = model.track(im0, verbose=False)
    im0 = gym_object.start_counting(im0, results)
    video_writer.write(im0)
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
This setup writes the monitored video to an output file. For more details, refer to the [Workouts Monitoring with Save Output](#workouts-monitoring-using-ultralytics-yolov8) section.

@ -285,3 +285,35 @@ Troubleshooting is an integral part of any development process, and being equipp
Remember, the Ultralytics community is a valuable resource. Engaging with fellow developers and experts can provide additional insights and solutions that might not be covered in standard documentation. Always keep learning, experimenting, and sharing your experiences to contribute to the collective knowledge of the community.
Happy troubleshooting!
## FAQ
### How do I resolve installation errors with YOLOv8?
Installation errors can often be due to compatibility issues or missing dependencies. Ensure you use Python 3.8 or later and have PyTorch 1.8 or later installed. It's beneficial to use virtual environments to avoid conflicts. For a step-by-step installation guide, follow our [official installation guide](../quickstart.md). If you encounter import errors, try a fresh installation or update the library to the latest version.
### Why is my YOLOv8 model training slow on a single GPU?
Training on a single GPU might be slow due to large batch sizes or insufficient memory. To speed up training, use multiple GPUs. Ensure your system has multiple GPUs available and pass them with the `device` argument, e.g. `device=[0, 1, 2, 3]`. Increase the batch size accordingly to fully utilize the GPUs without exceeding memory limits. Example command:
```python
model.train(data="/path/to/your/data.yaml", batch=32, device=[0, 1, 2, 3])
```
### How can I ensure my YOLOv8 model is training on the GPU?
If the `device` value shows `null` in the training logs, it generally means the training process is set to automatically use an available GPU. To explicitly assign a specific GPU, set the `device` value in your training command or `.yaml` configuration file. For instance:
```yaml
device: 0
```
This sets the training process to the first GPU. Use the `nvidia-smi` command to confirm your CUDA setup.
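You can also confirm from Python that PyTorch can see the GPU, using standard `torch` calls:

```python
import torch

print(torch.cuda.is_available())  # True if a CUDA device is visible to PyTorch
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the first GPU
```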
### How can I monitor and track my YOLOv8 model training progress?
Tracking and visualizing training progress can be efficiently managed through tools like [TensorBoard](https://www.tensorflow.org/tensorboard), [Comet](https://bit.ly/yolov8-readme-comet), and [Ultralytics HUB](https://hub.ultralytics.com). These tools allow you to log and visualize metrics such as loss, precision, recall, and mAP. Implementing [early stopping](#continuous-monitoring-parameters) based on these metrics can also help achieve better training outcomes.
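For example, TensorBoard logging can be toggled through the Ultralytics settings. Here is a minimal sketch, assuming the `tensorboard` settings key and the small `coco8.yaml` sample dataset:

```python
from ultralytics import YOLO, settings

settings.update({"tensorboard": True})  # enable TensorBoard logging for training runs

model = YOLO("yolov8n.pt")
model.train(data="coco8.yaml", epochs=3)  # metrics are written under the runs/ directory
```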
### What should I do if YOLOv8 is not recognizing my dataset format?
Ensure your dataset and labels conform to the expected format. Verify that annotations are accurate and of high quality. If you face any issues, refer to the [Data Collection and Annotation](https://docs.ultralytics.com/guides/data-collection-and-annotation/) guide for best practices. For more dataset-specific guidance, check the [Datasets](https://docs.ultralytics.com/datasets/) section in the documentation.

@@ -174,3 +174,39 @@ In this guide, we've taken a close look at the essential performance metrics for
Remember, the YOLOv8 and Ultralytics community is an invaluable asset. Engaging with fellow developers and experts can open doors to insights and solutions not found in standard documentation. As you journey through object detection, keep the spirit of learning alive, experiment with new strategies, and share your findings. By doing so, you contribute to the community's collective wisdom and ensure its growth.
Happy object detecting!
## FAQ
### What is the significance of Mean Average Precision (mAP) in evaluating YOLOv8 model performance?
Mean Average Precision (mAP) is crucial for evaluating YOLOv8 models as it provides a single metric encapsulating precision and recall across multiple classes. mAP@0.50 measures precision at an IoU threshold of 0.50, focusing on the model's ability to detect objects correctly. mAP@0.50:0.95 averages precision across a range of IoU thresholds, offering a comprehensive assessment of detection performance. High mAP scores indicate that the model effectively balances precision and recall, essential for applications like autonomous driving and surveillance.
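To read these values for your own model, `model.val()` returns a metrics object that exposes both scores. A short sketch, assuming the small `coco8.yaml` sample dataset:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
metrics = model.val(data="coco8.yaml")
print(metrics.box.map50)  # mAP@0.50
print(metrics.box.map)  # mAP@0.50:0.95
```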
### How do I interpret the Intersection over Union (IoU) value for YOLOv8 object detection?
Intersection over Union (IoU) measures the overlap between the predicted and ground truth bounding boxes. IoU values range from 0 to 1, where higher values indicate better localization accuracy. An IoU of 1.0 means perfect alignment. Typically, an IoU threshold of 0.50 is used to define true positives in metrics like mAP. Lower IoU values suggest that the model struggles with precise object localization, which can be improved by refining bounding box regression or increasing annotation accuracy.
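For a concrete sense of the computation, here is a small worked example in plain Python, with boxes given as `(x1, y1, x2, y2)`:

```python
def iou(box_a, box_b):
    """Compute Intersection over Union for two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)


print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```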
### Why is the F1 Score important for evaluating YOLOv8 models in object detection?
The F1 Score is important for evaluating YOLOv8 models because it provides a harmonic mean of precision and recall, balancing both false positives and false negatives. It is particularly valuable when dealing with imbalanced datasets or applications where either precision or recall alone is insufficient. A high F1 Score indicates that the model effectively detects objects while minimizing both missed detections and false alarms, making it suitable for critical applications like security systems and medical imaging.
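The underlying formula is the harmonic mean of precision and recall; a quick worked example:

```python
precision, recall = 0.90, 0.80  # illustrative values
f1 = 2 * precision * recall / (precision + recall)
print(f"{f1:.3f}")  # 0.847
```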
### What are the key advantages of using Ultralytics YOLOv8 for real-time object detection?
Ultralytics YOLOv8 offers multiple advantages for real-time object detection:
- **Speed and Efficiency**: Optimized for high-speed inference, suitable for applications requiring low latency.
- **High Accuracy**: Advanced algorithms ensure high mAP and IoU scores, balancing precision and recall.
- **Flexibility**: Supports various tasks including object detection, segmentation, and classification.
- **Ease of Use**: User-friendly interfaces, extensive documentation, and seamless integration with platforms like Ultralytics HUB ([HUB Quickstart](../hub/quickstart.md)).
This makes YOLOv8 ideal for diverse applications from autonomous vehicles to smart city solutions.
### How can validation metrics from YOLOv8 help improve model performance?
Validation metrics from YOLOv8 like precision, recall, mAP, and IoU help diagnose and improve model performance by providing insights into different aspects of detection:
- **Precision**: Helps identify and minimize false positives.
- **Recall**: Ensures all relevant objects are detected.
- **mAP**: Offers an overall performance snapshot, guiding general improvements.
- **IoU**: Helps fine-tune object localization accuracy.
By analyzing these metrics, specific weaknesses can be targeted, such as adjusting confidence thresholds to improve precision or gathering more diverse data to enhance recall. For detailed explanations of these metrics and how to interpret them, check [Object Detection Metrics](#object-detection-metrics).

@@ -111,3 +111,78 @@ In this example, each thread creates its own `YOLO` instance. This prevents any
When using YOLO models with Python's `threading`, always instantiate your models within the thread that will use them to ensure thread safety. This practice avoids race conditions and makes sure that your inference tasks run reliably.
For more advanced scenarios and to further optimize your multi-threaded inference performance, consider using process-based parallelism with `multiprocessing` or leveraging a task queue with dedicated worker processes.
## FAQ
### How can I avoid race conditions when using YOLO models in a multi-threaded Python environment?
To prevent race conditions when using Ultralytics YOLO models in a multi-threaded Python environment, instantiate a separate YOLO model within each thread. This ensures that each thread has its own isolated model instance, avoiding concurrent modification of the model state.
Example:
```python
from threading import Thread
from ultralytics import YOLO
def thread_safe_predict(image_path):
    """Predict on an image in a thread-safe manner."""
    local_model = YOLO("yolov8n.pt")
    results = local_model.predict(image_path)
    # Process results


Thread(target=thread_safe_predict, args=("image1.jpg",)).start()
Thread(target=thread_safe_predict, args=("image2.jpg",)).start()
```
For more information on ensuring thread safety, visit the [Thread-Safe Inference with YOLO Models](#thread-safe-inference).
### What are the best practices for running multi-threaded YOLO model inference in Python?
To run multi-threaded YOLO model inference safely in Python, follow these best practices:
1. Instantiate YOLO models within each thread rather than sharing a single model instance across threads.
2. Use Python's `multiprocessing` module for parallel processing to avoid issues related to Global Interpreter Lock (GIL).
3. Remember that YOLO's underlying C libraries release the GIL during heavy computation, so threaded inference can still achieve some concurrency.
Example for thread-safe model instantiation:
```python
from threading import Thread
from ultralytics import YOLO
def thread_safe_predict(image_path):
    """Runs inference in a thread-safe manner with a new YOLO model instance."""
    model = YOLO("yolov8n.pt")
    results = model.predict(image_path)
    # Process results


# Initiate multiple threads
Thread(target=thread_safe_predict, args=("image1.jpg",)).start()
Thread(target=thread_safe_predict, args=("image2.jpg",)).start()
```
For additional context, refer to the section on [Thread-Safe Inference](#thread-safe-inference).
### Why should each thread have its own YOLO model instance?
Each thread should have its own YOLO model instance to prevent race conditions. When a single model instance is shared among multiple threads, concurrent accesses can lead to unpredictable behavior and modifications of the model's internal state. By using separate instances, you ensure thread isolation, making your multi-threaded tasks reliable and safe.
For detailed guidance, check the [Non-Thread-Safe Example: Single Model Instance](#non-thread-safe-example-single-model-instance) and [Thread-Safe Example](#thread-safe-example) sections.
### How does Python's Global Interpreter Lock (GIL) affect YOLO model inference?
Python's Global Interpreter Lock (GIL) allows only one thread to execute Python bytecode at a time, which can limit the performance of CPU-bound multi-threading tasks. However, for I/O-bound operations or processes that use libraries releasing the GIL, like YOLO's C libraries, you can still achieve concurrency. For enhanced performance, consider using process-based parallelism with Python's `multiprocessing` module.
For more about threading in Python, see the [Understanding Python Threading](#understanding-python-threading) section.
### Is it safer to use process-based parallelism instead of threading for YOLO model inference?
Yes, using Python's `multiprocessing` module is safer and often more efficient for running YOLO model inference in parallel. Process-based parallelism creates separate memory spaces, avoiding the Global Interpreter Lock (GIL) and reducing the risk of concurrency issues. Each process will operate independently with its own YOLO model instance.
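A minimal sketch of this pattern with `multiprocessing.Pool`, where each worker process loads its own model (the image paths are placeholders, and each worker returns a picklable summary rather than the raw results):

```python
from multiprocessing import Pool

from ultralytics import YOLO


def count_detections(image_path):
    """Run inference in a worker process with its own YOLO instance."""
    model = YOLO("yolov8n.pt")  # each process loads an independent model
    results = model.predict(image_path)
    return len(results[0].boxes)  # return a picklable summary, not the raw results


if __name__ == "__main__":
    with Pool(processes=2) as pool:
        print(pool.map(count_detections, ["image1.jpg", "image2.jpg"]))
```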
For further details on process-based parallelism with YOLO models, refer to the page on [Thread-Safe Inference](#thread-safe-inference).

@@ -6,45 +6,45 @@ keywords: Ultralytics, YOLO, FAQ, object detection, hardware requirements, fine-
# Ultralytics YOLO Frequently Asked Questions (FAQ)
This FAQ section addresses some common questions and issues users might encounter while working with [Ultralytics](https://ultralytics.com) YOLO repositories.
This FAQ section addresses common questions and issues users might encounter while working with [Ultralytics](https://ultralytics.com) YOLO repositories.
## FAQ
### 1. What is Ultralytics and what does it offer?
### What is Ultralytics and what does it offer?
Ultralytics is a computer vision AI company that develops and maintains state-of-the-art object detection and image segmentation models, primarily focusing on the YOLO (You Only Look Once) family of models. Ultralytics offers:
Ultralytics is a computer vision AI company specializing in state-of-the-art object detection and image segmentation models, with a focus on the YOLO (You Only Look Once) family. Their offerings include:
- [Open-source implementations of YOLOv5 and YOLOv8](https://docs.ultralytics.com/models/yolov5/)
- [Pre-trained models for various computer vision tasks](https://docs.ultralytics.com/models/)
- [A Python package for easy integration of YOLO models into projects](https://docs.ultralytics.com/usage/python/)
- [Tools for training, testing, and deploying models](https://docs.ultralytics.com/modes/)
- [Extensive documentation and community support](https://docs.ultralytics.com/)
- Open-source implementations of [YOLOv5](https://docs.ultralytics.com/models/yolov5/) and [YOLOv8](https://docs.ultralytics.com/models/yolov8/)
- A wide range of [pre-trained models](https://docs.ultralytics.com/models/) for various computer vision tasks
- A comprehensive [Python package](https://docs.ultralytics.com/usage/python/) for seamless integration of YOLO models into projects
- Versatile [tools](https://docs.ultralytics.com/modes/) for training, testing, and deploying models
- [Extensive documentation](https://docs.ultralytics.com/) and a supportive community
### 2. How do I install the Ultralytics package?
### How do I install the Ultralytics package?
To install the Ultralytics package, you can use pip, the Python package manager. Open a terminal or command prompt and run:
Installing the Ultralytics package is straightforward using pip:
```bash
pip install ultralytics
```
For the latest development version, you can install directly from the GitHub repository:
For the latest development version, install directly from the GitHub repository:
```bash
pip install git+https://github.com/ultralytics/ultralytics.git
```
For more details, refer to the [quickstart guide](https://docs.ultralytics.com/quickstart/).
Detailed installation instructions can be found in the [quickstart guide](https://docs.ultralytics.com/quickstart/).
### 3. What are the system requirements for running Ultralytics models?
### What are the system requirements for running Ultralytics models?
Minimum requirements:
- Python 3.7 or later
- PyTorch 1.7 or later
- Python 3.7+
- PyTorch 1.7+
- CUDA-compatible GPU (for GPU acceleration)
Recommended:
Recommended setup:
- Python 3.8+
- PyTorch 1.10+
@@ -52,9 +52,9 @@ Recommended:
- 8GB+ RAM
- 50GB+ free disk space (for dataset storage and model training)
For more information, visit [YOLO Common Issues](https://docs.ultralytics.com/guides/yolo-common-issues/).
For troubleshooting common issues, visit the [YOLO Common Issues](https://docs.ultralytics.com/guides/yolo-common-issues/) page.
### 4. How can I train a custom YOLOv8 model on my own dataset?
### How can I train a custom YOLOv8 model on my own dataset?
To train a custom YOLOv8 model:
@@ -73,19 +73,19 @@ model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
results = model.train(data="path/to/your/data.yaml", epochs=100, imgsz=640)
```
For detailed instructions, refer to the [training guide](https://docs.ultralytics.com/modes/train/).
For a more in-depth guide, including data preparation and advanced training options, refer to the comprehensive [training guide](https://docs.ultralytics.com/modes/train/).
### 5. What pretrained models are available in Ultralytics?
### What pretrained models are available in Ultralytics?
Ultralytics offers a range of pretrained YOLOv8 models for various tasks:
Ultralytics offers a diverse range of pretrained YOLOv8 models for various tasks:
- Object Detection: YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, YOLOv8x
- Instance Segmentation: YOLOv8n-seg, YOLOv8s-seg, YOLOv8m-seg, YOLOv8l-seg, YOLOv8x-seg
- Classification: YOLOv8n-cls, YOLOv8s-cls, YOLOv8m-cls, YOLOv8l-cls, YOLOv8x-cls
These models vary in size and complexity, offering different trade-offs between speed and accuracy. Learn more about [pretrained models](https://docs.ultralytics.com/models/yolov8/).
These models vary in size and complexity, offering different trade-offs between speed and accuracy. Explore the full range of [pretrained models](https://docs.ultralytics.com/models/yolov8/) to find the best fit for your project.
### 6. How do I perform inference using a trained Ultralytics model?
### How do I perform inference using a trained Ultralytics model?
To perform inference with a trained model:
@@ -105,34 +105,34 @@ for r in results:
print(r.probs) # print class probabilities
```
For more details, visit the [prediction guide](https://docs.ultralytics.com/modes/predict/).
For advanced inference options, including batch processing and video inference, check out the detailed [prediction guide](https://docs.ultralytics.com/modes/predict/).
### 7. Can Ultralytics models be deployed on edge devices or in production environments?
### Can Ultralytics models be deployed on edge devices or in production environments?
Yes, Ultralytics models can be deployed on various platforms:
Absolutely! Ultralytics models are designed for versatile deployment across various platforms:
- Edge devices: Use TensorRT, ONNX, or OpenVINO for optimized inference on devices like NVIDIA Jetson or Intel Neural Compute Stick.
- Mobile: Convert models to TFLite or Core ML for deployment on Android or iOS devices.
- Cloud: Deploy models using frameworks like TensorFlow Serving or PyTorch Serve.
- Web: Use ONNX.js or TensorFlow.js for in-browser inference.
- Edge devices: Optimize inference on devices like NVIDIA Jetson or Intel Neural Compute Stick using TensorRT, ONNX, or OpenVINO.
- Mobile: Deploy on Android or iOS devices by converting models to TFLite or Core ML.
- Cloud: Leverage frameworks like TensorFlow Serving or PyTorch Serve for scalable cloud deployments.
- Web: Implement in-browser inference using ONNX.js or TensorFlow.js.
Ultralytics provides export functions to convert models to various formats for deployment. Explore the wide range of [deployment options](https://docs.ultralytics.com/guides/model-deployment-options/) to find the best solution for your use case.
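Each of these targets maps to an Ultralytics export format. A brief sketch, assuming the corresponding toolchain for each format is installed locally:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

model.export(format="onnx")  # ONNX, for ONNX Runtime or in-browser ONNX.js workflows
model.export(format="tflite")  # TensorFlow Lite, for Android deployment
model.export(format="coreml")  # Core ML, for iOS/macOS deployment
```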
### What's the difference between YOLOv5 and YOLOv8?
Key distinctions include:
- Architecture: YOLOv8 features an improved backbone and head design for enhanced performance.
- Performance: YOLOv8 generally offers superior accuracy and speed compared to YOLOv5.
- Tasks: YOLOv8 natively supports object detection, instance segmentation, and classification in a unified framework.
- Codebase: YOLOv8 is implemented with a more modular and extensible architecture, facilitating easier customization and extension.
- Training: YOLOv8 incorporates advanced training techniques like multi-dataset training and hyperparameter evolution for improved results.
For an in-depth comparison of features and performance metrics, visit the [YOLOv5 vs YOLOv8](https://www.ultralytics.com/yolo) comparison page.
### How can I contribute to the Ultralytics open-source project?
Contributing to Ultralytics is a great way to improve the project and expand your skills. Here's how you can get involved:
1. Fork the Ultralytics repository on GitHub.
2. Create a new branch for your feature or bug fix.
3. Make your changes and commit them with clear, descriptive messages.
4. Submit a pull request with a clear description of your changes.
5. Participate in the code review process.
You can also contribute by reporting bugs, suggesting features, or improving documentation. For detailed guidelines and best practices, refer to the [contributing guide](https://docs.ultralytics.com/help/contributing/).
### How do I install the Ultralytics package in Python?
Installing the Ultralytics package in Python is simple. Use pip by running the following command in your terminal or command prompt:
```bash
pip install ultralytics
```
For the cutting-edge development version, install directly from the GitHub repository:
```bash
pip install git+https://github.com/ultralytics/ultralytics.git
```
For environment-specific installation instructions and troubleshooting tips, consult the comprehensive [quickstart guide](https://docs.ultralytics.com/quickstart/).
### What are the main features of Ultralytics YOLO?
Ultralytics YOLO boasts a rich set of features for advanced object detection and image segmentation:
- Real-Time Detection: Efficiently detect and classify objects in real-time scenarios.
- Pre-Trained Models: Access a variety of [pretrained models](https://docs.ultralytics.com/models/yolov8/) that balance speed and accuracy for different use cases.
- Custom Training: Easily fine-tune models on custom datasets with the flexible [training pipeline](https://docs.ultralytics.com/modes/train/).
- Wide [Deployment Options](https://docs.ultralytics.com/guides/model-deployment-options/): Export models to various formats like TensorRT, ONNX, and CoreML for deployment across different platforms.
- Extensive Documentation: Benefit from comprehensive [documentation](https://docs.ultralytics.com/) and a supportive community to guide you through your computer vision journey.
Explore the [YOLO models page](https://docs.ultralytics.com/models/yolov8/) for an in-depth look at the capabilities and architectures of different YOLO versions.
### How can I improve the performance of my YOLO model?
Enhancing your YOLO model's performance can be achieved through several techniques:
1. Hyperparameter Tuning: Experiment with different hyperparameters using the [Hyperparameter Tuning Guide](https://docs.ultralytics.com/guides/hyperparameter-tuning/) to optimize model performance.
2. Data Augmentation: Implement techniques like flip, scale, rotate, and color adjustments to enhance your training dataset and improve model generalization.
3. Transfer Learning: Leverage pre-trained models and fine-tune them on your specific dataset using the [Train YOLOv8](https://docs.ultralytics.com/modes/train/) guide.
4. Export to Efficient Formats: Convert your model to optimized formats like TensorRT or ONNX for faster inference using the [Export guide](../modes/export.md).
5. Benchmarking: Utilize the [Benchmark Mode](https://docs.ultralytics.com/modes/benchmark/) to measure and improve inference speed and accuracy systematically.
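As an illustration of points 2 and 3 above, here is a hedged training sketch that starts from pretrained weights and adjusts a few augmentation hyperparameters; the values are illustrative rather than tuned:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # transfer learning: start from pretrained weights
results = model.train(
    data="path/to/your/data.yaml",  # your dataset config
    epochs=100,
    imgsz=640,
    fliplr=0.5,  # horizontal flip probability
    degrees=10.0,  # random rotation range (degrees)
    scale=0.5,  # random scale gain
)
```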
### Can I deploy Ultralytics YOLO models on mobile and edge devices?
Yes, Ultralytics YOLO models are designed for versatile deployment, including mobile and edge devices:
- Mobile: Convert models to TFLite or CoreML for seamless integration into Android or iOS apps. Refer to the [TFLite Integration Guide](https://docs.ultralytics.com/integrations/tflite/) and [CoreML Integration Guide](https://docs.ultralytics.com/integrations/coreml/) for platform-specific instructions.
- Edge Devices: Optimize inference on devices like NVIDIA Jetson or other edge hardware using TensorRT or ONNX. The [Edge TPU Integration Guide](https://docs.ultralytics.com/integrations/edge-tpu/) provides detailed steps for edge deployment.
For a comprehensive overview of deployment strategies across various platforms, consult the [deployment options guide](https://docs.ultralytics.com/guides/model-deployment-options/).
### How can I perform inference using a trained Ultralytics YOLO model?
Performing inference with a trained Ultralytics YOLO model is straightforward:
1. Load the Model:

    ```python
    from ultralytics import YOLO

    model = YOLO("path/to/your/model.pt")
    ```

2. Run Inference:

    ```python
    results = model("path/to/image.jpg")

    for r in results:
        print(r.boxes)  # print bounding box predictions
        print(r.masks)  # print mask predictions
        print(r.probs)  # print class probabilities
    ```
For advanced inference techniques, including batch processing, video inference, and custom preprocessing, refer to the detailed [prediction guide](https://docs.ultralytics.com/modes/predict/).
### Where can I find examples and tutorials for using Ultralytics?
Ultralytics provides a wealth of resources to help you get started and master their tools:
- 📚 [Official documentation](https://docs.ultralytics.com/): Comprehensive guides, API references, and best practices.
- 💻 [GitHub repository](https://github.com/ultralytics/ultralytics): Source code, example scripts, and community contributions.
- ✍ [Ultralytics blog](https://www.ultralytics.com/blog): In-depth articles, use cases, and technical insights.
- 💬 [Community forums](https://community.ultralytics.com/): Connect with other users, ask questions, and share your experiences.
- 🎥 [YouTube channel](https://youtube.com/ultralytics?sub_confirmation=1): Video tutorials, demos, and webinars on various Ultralytics topics.
These resources provide code examples, real-world use cases, and step-by-step guides for various tasks using Ultralytics models.
If you need further assistance, don't hesitate to consult the Ultralytics documentation or reach out to the community through [GitHub Issues](https://github.com/ultralytics/ultralytics/issues) or the official [discussion forum](https://github.com/orgs/ultralytics/discussions).

After creating the AWS CloudFormation Stack, the next step is to deploy YOLOv8.

```python
import json


def output_fn(prediction_output):
    """Formats model outputs as JSON string, extracting attributes like boxes, masks, keypoints."""
    print("Executing output_fn from inference.py ...")
    infer = {}
    for result in prediction_output:
        if result.boxes is not None:
            infer["boxes"] = result.boxes.numpy().data.tolist()
    return json.dumps(infer)
```
This guide took you step by step through deploying YOLOv8 on Amazon SageMaker Endpoints.
For more technical details, refer to [this article](https://aws.amazon.com/blogs/machine-learning/hosting-yolov8-pytorch-model-on-amazon-sagemaker-endpoints/) on the AWS Machine Learning Blog. You can also check out the official [Amazon SageMaker Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints.html) for more insights into various features and functionalities.
Are you interested in learning more about different YOLOv8 integrations? Visit the [Ultralytics integrations guide page](../integrations/index.md) to discover additional tools and capabilities that can enhance your machine-learning projects.
## FAQ
### How do I deploy the Ultralytics YOLOv8 model on Amazon SageMaker Endpoints?
To deploy the Ultralytics YOLOv8 model on Amazon SageMaker Endpoints, follow these steps:
1. **Set Up Your AWS Environment**: Ensure you have an AWS Account, IAM roles with the necessary permissions, and the AWS CLI configured. Install AWS CDK if you haven't already (refer to the [AWS CDK instructions](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_install)).

2. **Clone the YOLOv8 SageMaker Repository**:

    ```bash
    git clone https://github.com/aws-samples/host-yolov8-on-sagemaker-endpoint.git
    cd host-yolov8-on-sagemaker-endpoint/yolov8-pytorch-cdk
    ```

3. **Set Up the CDK Environment**: Create a Python virtual environment, activate it, install the dependencies, and upgrade the AWS CDK library.

    ```bash
    python3 -m venv .venv
    source .venv/bin/activate
    pip3 install -r requirements.txt
    pip install --upgrade aws-cdk-lib
    ```

4. **Deploy Using AWS CDK**: Synthesize the CloudFormation stack, bootstrap the environment, and deploy.

    ```bash
    cdk synth
    cdk bootstrap
    cdk deploy
    ```
For further details, review the [documentation section](#step-5-deploy-the-yolov8-model).
### What are the prerequisites for deploying YOLOv8 on Amazon SageMaker?
To deploy YOLOv8 on Amazon SageMaker, ensure you have the following prerequisites:
1. **AWS Account**: Active AWS account ([sign up here](https://aws.amazon.com/)).
2. **IAM Roles**: Configured IAM roles with permissions for SageMaker, CloudFormation, and Amazon S3.
3. **AWS CLI**: Installed and configured AWS Command Line Interface ([AWS CLI installation guide](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)).
4. **AWS CDK**: Installed AWS Cloud Development Kit ([CDK setup guide](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_install)).
5. **Service Quotas**: Sufficient quotas for `ml.m5.4xlarge` instances for both endpoint and notebook usage ([request a quota increase](https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html#quota-console-increase)).
For detailed setup, refer to [this section](#step-1-setup-your-aws-environment).
### Why should I use Ultralytics YOLOv8 on Amazon SageMaker?
Using Ultralytics YOLOv8 on Amazon SageMaker offers several advantages:
1. **Scalability and Management**: SageMaker provides a managed environment with features like autoscaling, which helps in real-time inference needs.
2. **Integration with AWS Services**: Seamlessly integrate with other AWS services, such as S3 for data storage, CloudFormation for infrastructure as code, and CloudWatch for monitoring.
3. **Ease of Deployment**: Simplified setup using AWS CDK scripts and streamlined deployment processes.
4. **Performance**: Leverage Amazon SageMaker's high-performance infrastructure to run large-scale inference tasks efficiently.
Explore more about the advantages of using SageMaker in the [introduction section](#amazon-sagemaker).
### Can I customize the inference logic for YOLOv8 on Amazon SageMaker?
Yes, you can customize the inference logic for YOLOv8 on Amazon SageMaker:
1. **Modify `inference.py`**: Locate and customize the `output_fn` function in the `inference.py` file to tailor output formats.

    ```python
    import json


    def output_fn(prediction_output):
        """Formats model outputs as JSON string, extracting attributes like boxes, masks, keypoints."""
        infer = {}
        for result in prediction_output:
            if result.boxes is not None:
                infer["boxes"] = result.boxes.numpy().data.tolist()
            # Add more processing logic if necessary
        return json.dumps(infer)
    ```
2. **Deploy Updated Model**: Ensure you redeploy the model using Jupyter notebooks provided (`1_DeployEndpoint.ipynb`) to include these changes.
Refer to the [detailed steps](#step-5-deploy-the-yolov8-model) for deploying the modified model.
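If your application also needs masks or keypoints in the JSON response, a hedged extension of `output_fn` might look like the sketch below; the attribute names follow the Ultralytics `Results` API, and the serialization mirrors the boxes handling above:

```python
import json


def output_fn(prediction_output):
    """Extended sketch: also serialize masks and keypoints when present."""
    infer = {}
    for result in prediction_output:
        if result.boxes is not None:
            infer["boxes"] = result.boxes.numpy().data.tolist()
        if result.masks is not None:
            infer["masks"] = result.masks.numpy().data.tolist()
        if result.keypoints is not None:
            infer["keypoints"] = result.keypoints.numpy().data.tolist()
    return json.dumps(infer)
```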
### How can I test the deployed YOLOv8 model on Amazon SageMaker?
To test the deployed YOLOv8 model on Amazon SageMaker:
1. **Open the Test Notebook**: Locate the `2_TestEndpoint.ipynb` notebook in the SageMaker Jupyter environment.
2. **Run the Notebook**: Follow the notebook's instructions to send an image to the endpoint, perform inference, and display results.
3. **Visualize Results**: Use built-in plotting functionalities to visualize performance metrics, such as bounding boxes around detected objects.
For comprehensive testing instructions, visit the [testing section](#step-6-testing-your-deployment).
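Beyond the notebook, you can also invoke the deployed endpoint programmatically. A minimal sketch using boto3, where the endpoint name is hypothetical and should match the one created by your CDK deployment:

```python
import boto3

client = boto3.client("sagemaker-runtime")

with open("bus.jpg", "rb") as f:  # any local test image
    payload = f.read()

response = client.invoke_endpoint(
    EndpointName="yolov8-pytorch-endpoint",  # hypothetical: use the name printed by your deployment
    ContentType="image/jpeg",
    Body=payload,
)
print(response["Body"].read().decode())  # JSON produced by output_fn
```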

This guide has led you through the process of integrating ClearML with Ultralytics' YOLOv8.
For further details on usage, visit [ClearML's official documentation](https://clear.ml/docs/latest/docs/integrations/yolov8/).
Additionally, explore more integrations and capabilities of Ultralytics by visiting the [Ultralytics integration guide page](../integrations/index.md), which is a treasure trove of resources and insights.
## FAQ
### What is the process for integrating Ultralytics YOLOv8 with ClearML?
Integrating Ultralytics YOLOv8 with ClearML involves a series of steps to streamline your MLOps workflow. First, install the necessary packages:
```bash
pip install ultralytics clearml
```
Next, initialize the ClearML SDK in your environment using:
```bash
clearml-init
```
You then configure ClearML with your credentials from the [ClearML Settings page](https://app.clear.ml/settings/workspace-configuration). Detailed instructions on the entire setup process, including model selection and training configurations, can be found in our [YOLOv8 Model Training guide](../modes/train.md).
### Why should I use ClearML with Ultralytics YOLOv8 for my machine learning projects?
Using ClearML with Ultralytics YOLOv8 enhances your machine learning projects by automating experiment tracking, streamlining workflows, and enabling robust model management. ClearML offers real-time metrics tracking, resource utilization monitoring, and a user-friendly interface for comparing experiments. These features help optimize your model's performance and make the development process more efficient. Learn more about the benefits and procedures in our [MLOps Integration guide](../modes/train.md).
### How do I troubleshoot common issues during YOLOv8 and ClearML integration?
If you encounter issues during the integration of YOLOv8 with ClearML, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips. Typical problems might involve package installation errors, credential setup, or configuration issues. This guide provides step-by-step troubleshooting instructions to resolve these common issues efficiently.
### How do I set up the ClearML task for YOLOv8 model training?
Setting up a ClearML task for YOLOv8 training involves initializing a task, selecting the model variant, loading the model, setting up training arguments, and finally, starting the model training. Here's a simplified example:
```python
from clearml import Task
from ultralytics import YOLO
# Step 1: Creating a ClearML Task
task = Task.init(project_name="my_project", task_name="my_yolov8_task")
# Step 2: Selecting the YOLOv8 Model
model_variant = "yolov8n"
task.set_parameter("model_variant", model_variant)
# Step 3: Loading the YOLOv8 Model
model = YOLO(f"{model_variant}.pt")
# Step 4: Setting Up Training Arguments
args = dict(data="coco8.yaml", epochs=16)
task.connect(args)
# Step 5: Initiating Model Training
results = model.train(**args)
```
Refer to our [Usage guide](#usage) for a detailed breakdown of these steps.
### Where can I view the results of my YOLOv8 training in ClearML?
After running your YOLOv8 training script with ClearML, you can view the results on the ClearML results page. The output will include a URL link to the ClearML dashboard, where you can track metrics, compare experiments, and monitor resource usage. For more details on how to view and interpret the results, check our section on [Viewing the ClearML Results Page](#viewing-the-clearml-results-page).

Explore Comet ML's official documentation for more insights into the integration with Ultralytics YOLOv8.
Furthermore, if you're looking to dive deeper into the practical applications of YOLOv8, specifically for image segmentation tasks, this detailed guide on [fine-tuning YOLOv8 with Comet ML](https://www.comet.com/site/blog/fine-tuning-yolov8-for-image-segmentation-with-comet/) offers valuable insights and step-by-step instructions to enhance your model's performance.
Additionally, to explore other exciting integrations with Ultralytics, check out the [integration guide page](../integrations/index.md), which offers a wealth of resources and information.
## FAQ
### How do I integrate Comet ML with Ultralytics YOLOv8 for training?
To integrate Comet ML with Ultralytics YOLOv8, follow these steps:
1. **Install the required packages**:

    ```bash
    pip install ultralytics comet_ml torch torchvision
    ```

2. **Set up your Comet API Key**:

    ```bash
    export COMET_API_KEY=<Your API Key>
    ```

3. **Initialize your Comet project in your Python code**:

    ```python
    import comet_ml

    comet_ml.init(project_name="comet-example-yolov8-coco128")
    ```

4. **Train your YOLOv8 model and log metrics**:

    ```python
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")
    results = model.train(
        data="coco8.yaml", project="comet-example-yolov8-coco128", batch=32, save_period=1, save_json=True, epochs=3
    )
    ```
For more detailed instructions, refer to the [Comet ML configuration section](#configuring-comet-ml).
### What are the benefits of using Comet ML with YOLOv8?
By integrating Ultralytics YOLOv8 with Comet ML, you can:
- **Monitor real-time insights**: Get instant feedback on your training results, allowing for quick adjustments.
- **Log extensive metrics**: Automatically capture essential metrics such as mAP, loss, hyperparameters, and model checkpoints.
- **Track experiments offline**: Log your training runs locally when internet access is unavailable.
- **Compare different training runs**: Use the interactive Comet ML dashboard to analyze and compare multiple experiments.
By leveraging these features, you can optimize your machine learning workflows for better performance and reproducibility. For more information, visit the [Comet ML integration guide](../integrations/index.md).
### How do I customize the logging behavior of Comet ML during YOLOv8 training?
Comet ML allows for extensive customization of its logging behavior using environment variables:
- **Change the number of image predictions logged**:

    ```python
    import os

    os.environ["COMET_MAX_IMAGE_PREDICTIONS"] = "200"
    ```

- **Adjust batch logging interval**:

    ```python
    import os

    os.environ["COMET_EVAL_BATCH_LOGGING_INTERVAL"] = "4"
    ```

- **Disable confusion matrix logging**:

    ```python
    import os

    os.environ["COMET_EVAL_LOG_CONFUSION_MATRIX"] = "false"
    ```
For more customization options, refer to the [Customizing Comet ML Logging](#customizing-comet-ml-logging) section.
### How do I view detailed metrics and visualizations of my YOLOv8 training on Comet ML?
Once your YOLOv8 model starts training, you can access a wide range of metrics and visualizations on the Comet ML dashboard. Key features include:
- **Experiment Panels**: View different runs and their metrics, including segment mask loss, class loss, and mean average precision.
- **Metrics**: Examine metrics in tabular format for detailed analysis.
- **Interactive Confusion Matrix**: Assess classification accuracy with an interactive confusion matrix.
- **System Metrics**: Monitor GPU and CPU utilization, memory usage, and other system metrics.
For a detailed overview of these features, visit the [Understanding Your Model's Performance with Comet ML Visualizations](#understanding-your-models-performance-with-comet-ml-visualizations) section.
### Can I use Comet ML for offline logging when training YOLOv8 models?
Yes, you can enable offline logging in Comet ML by setting the `COMET_MODE` environment variable to "offline":
```python
import os
os.environ["COMET_MODE"] = "offline"
```
This feature allows you to log your experiment data locally, which can later be uploaded to Comet ML when internet connectivity is available. This is particularly useful when working in environments with limited internet access. For more details, refer to the [Offline Logging](#offline-logging) section.

CoreML offers various deployment options for machine learning models, including:
- **Downloaded Models**: These models are fetched from a server as needed. This approach is suitable for larger models or those needing regular updates. It helps keep the app bundle size smaller.
- **Cloud-Based Deployment**: CoreML models are hosted on servers and accessed by the iOS app through API requests. This scalable and flexible option enables easy model updates without app revisions. It's ideal for complex models or large-scale apps requiring regular updates. However, it does require an internet connection and may pose latency and security issues.
## Exporting YOLOv8 Models to CoreML
In this guide, we went over how to export Ultralytics YOLOv8 models to CoreML format.
For further details on usage, visit the [CoreML official documentation](https://developer.apple.com/documentation/coreml).
Also, if you'd like to know more about other Ultralytics YOLOv8 integrations, visit our [integration guide page](../integrations/index.md). You'll find plenty of valuable resources and insights there.
## FAQ
### How do I export YOLOv8 models to CoreML format?
To export your [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models to CoreML format, you'll first need to ensure you have the `ultralytics` package installed. You can install it using:
!!! Example "Installation"
=== "CLI"
```bash
pip install ultralytics
```
Next, you can export the model using the following Python or CLI commands:
!!! Example "Usage"
=== "Python"
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
model.export(format="coreml")
```
=== "CLI"
```bash
yolo export model=yolov8n.pt format=coreml
```
For further details, refer to the [Exporting YOLOv8 Models to CoreML](../modes/export.md) section of our documentation.
### What are the benefits of using CoreML for deploying YOLOv8 models?
CoreML provides numerous advantages for deploying [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models on Apple devices:
- **On-device Processing**: Enables local model inference on devices, ensuring data privacy and minimizing latency.
- **Performance Optimization**: Leverages the full potential of the device's CPU, GPU, and Neural Engine, optimizing both speed and efficiency.
- **Ease of Integration**: Offers a seamless integration experience with Apple's ecosystems, including iOS, macOS, watchOS, and tvOS.
- **Versatility**: Supports a wide range of machine learning tasks such as image analysis, audio processing, and natural language processing using the CoreML framework.
For more details on integrating your CoreML model into an iOS app, check out the guide on [Integrating a Core ML Model into Your App](https://developer.apple.com/documentation/coreml/integrating_a_core_ml_model_into_your_app).
### What are the deployment options for YOLOv8 models exported to CoreML?
Once you export your YOLOv8 model to CoreML format, you have multiple deployment options:
1. **On-Device Deployment**: Directly integrate CoreML models into your app for enhanced privacy and offline functionality. This can be done as:
- **Embedded Models**: Included in the app bundle, accessible immediately.
- **Downloaded Models**: Fetched from a server as needed, keeping the app bundle size smaller.
2. **Cloud-Based Deployment**: Host CoreML models on servers and access them via API requests. This approach supports easier updates and can handle more complex models.
For detailed guidance on deploying CoreML models, refer to [CoreML Deployment Options](#coreml-deployment-options).
### How does CoreML ensure optimized performance for YOLOv8 models?
CoreML ensures optimized performance for [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models by utilizing various optimization techniques:
- **Hardware Acceleration**: Uses the device's CPU, GPU, and Neural Engine for efficient computation.
- **Model Compression**: Provides tools for compressing models to reduce their footprint without compromising accuracy.
- **Adaptive Inference**: Adjusts inference based on the device's capabilities to maintain a balance between speed and performance.
For more information on performance optimization, visit the [CoreML official documentation](https://developer.apple.com/documentation/coreml).
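For example, compression can be applied at export time. A hedged sketch, assuming the CoreML exporter accepts the `int8` and `half` quantization flags listed in the Ultralytics export documentation:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# INT8 quantization shrinks the .mlpackage with minimal accuracy loss
model.export(format="coreml", int8=True)

# Alternatively, FP16 roughly halves weight storage relative to FP32
# model.export(format="coreml", half=True)
```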
### Can I run inference directly with the exported CoreML model?
Yes, you can run inference directly using the exported CoreML model. Below are the commands for Python and CLI:
!!! Example "Running Inference"
=== "Python"
```python
from ultralytics import YOLO
coreml_model = YOLO("yolov8n.mlpackage")
results = coreml_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"
```bash
yolo predict model=yolov8n.mlpackage source='https://ultralytics.com/images/bus.jpg'
```
For additional information, refer to the [Usage section](#usage) of the CoreML export guide.

This guide has led you through the process of integrating DVCLive with Ultralytics' YOLOv8.
For further details on usage, visit [DVCLive's official documentation](https://dvc.org/doc/dvclive/ml-frameworks/yolo).
Additionally, explore more integrations and capabilities of Ultralytics by visiting the [Ultralytics integration guide page](../integrations/index.md), which is a collection of great resources and insights.
## FAQ
### How do I integrate DVCLive with Ultralytics YOLOv8 for experiment tracking?
Integrating DVCLive with Ultralytics YOLOv8 is straightforward. Start by installing the necessary packages:
!!! Example "Installation"
=== "CLI"
```bash
pip install ultralytics dvclive
```
Next, initialize a Git repository and configure DVCLive in your project:
!!! Example "Initial Environment Setup"
=== "CLI"
```bash
git init -q
git config --local user.email "you@example.com"
git config --local user.name "Your Name"
dvc init -q
git commit -m "DVC init"
```
Follow our [YOLOv8 Installation guide](../quickstart.md) for detailed setup instructions.
### Why should I use DVCLive for tracking YOLOv8 experiments?
Using DVCLive with YOLOv8 provides several advantages, such as:
- **Automated Logging**: DVCLive automatically records key experiment details like model parameters and metrics.
- **Easy Comparison**: Facilitates comparison of results across different runs.
- **Visualization Tools**: Leverages DVCLive's robust data visualization capabilities for in-depth analysis.
For further details, refer to our guide on [YOLOv8 Model Training](../modes/train.md) and [YOLO Performance Metrics](../guides/yolo-performance-metrics.md) to maximize your experiment tracking efficiency.
### How can DVCLive improve my results analysis for YOLOv8 training sessions?
After completing your YOLOv8 training sessions, DVCLive helps in visualizing and analyzing the results effectively. Example code for loading and displaying experiment data:
```python
import dvc.api
import pandas as pd
# Define columns of interest
columns = ["Experiment", "epochs", "imgsz", "model", "metrics.mAP50-95(B)"]
# Retrieve experiment data
df = pd.DataFrame(dvc.api.exp_show(), columns=columns)
# Clean data
df.dropna(inplace=True)
df.reset_index(drop=True, inplace=True)
# Display DataFrame
print(df)
```
To visualize results interactively, use Plotly's parallel coordinates plot:
```python
from plotly.express import parallel_coordinates
fig = parallel_coordinates(df, columns, color="metrics.mAP50-95(B)")
fig.show()
```
Refer to our guide on [YOLOv8 Training with DVCLive](#yolov8-training-with-dvclive) for more examples and best practices.
### What are the steps to configure my environment for DVCLive and YOLOv8 integration?
To configure your environment for a smooth integration of DVCLive and YOLOv8, follow these steps:
1. **Install Required Packages**: Use `pip install ultralytics dvclive`.
2. **Initialize Git Repository**: Run `git init -q`.
3. **Setup DVCLive**: Execute `dvc init -q`.
4. **Commit to Git**: Use `git commit -m "DVC init"`.
These steps ensure proper version control and setup for experiment tracking. For in-depth configuration details, visit our [Configuration guide](../quickstart.md).
### How do I visualize YOLOv8 experiment results using DVCLive?
DVCLive offers powerful tools to visualize the results of YOLOv8 experiments. Here's how you can generate comparative plots:
!!! Example "Generate Comparative Plots"
=== "CLI"
```bash
dvc plots diff $(dvc exp list --names-only)
```
To display these plots in a Jupyter Notebook, use:
```python
from IPython.display import HTML
# Display plots as HTML
HTML(filename="./dvc_plots/index.html")
```
These visualizations help identify trends and optimize model performance. Check our detailed guides on [YOLOv8 Experiment Analysis](#analyzing-results) for comprehensive steps and examples.

Exporting models to TensorFlow Edge TPU makes machine learning tasks fast and efficient.
<img width="100%" src="https://coral.ai/static/docs/images/edgetpu/compile-workflow.png" alt="TFLite Edge TPU">
</p>
The Edge TPU works with quantized models. Quantization makes models smaller and faster without losing much accuracy. It is ideal for the limited resources of edge computing, allowing applications to respond quickly by reducing latency and allowing for quick data processing locally, without cloud dependency. Local processing also keeps user data private and secure since it's not sent to a remote server.
## Key Features of TFLite Edge TPU
In this guide, we've learned how to export Ultralytics YOLOv8 models to TFLite Edge TPU format.
For further details on usage, visit the [Edge TPU official website](https://cloud.google.com/edge-tpu).
Also, for more information on other Ultralytics YOLOv8 integrations, please visit our [integration guide page](index.md). There, you'll discover valuable resources and insights.
## FAQ
### How do I export a YOLOv8 model to TFLite Edge TPU format?
To export a YOLOv8 model to TFLite Edge TPU format, you can follow these steps:
!!! Example "Usage"
=== "Python"
```python
from ultralytics import YOLO
# Load the YOLOv8 model
model = YOLO("yolov8n.pt")
# Export the model to TFLite Edge TPU format
model.export(format="edgetpu") # creates 'yolov8n_full_integer_quant_edgetpu.tflite'
# Load the exported TFLite Edge TPU model
edgetpu_model = YOLO("yolov8n_full_integer_quant_edgetpu.tflite")
# Run inference
results = edgetpu_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"
```bash
# Export a YOLOv8n PyTorch model to TFLite Edge TPU format
yolo export model=yolov8n.pt format=edgetpu # creates 'yolov8n_full_integer_quant_edgetpu.tflite'
# Run inference with the exported model
yolo predict model=yolov8n_full_integer_quant_edgetpu.tflite source='https://ultralytics.com/images/bus.jpg'
```
For complete details on exporting models to other formats, refer to our [export guide](../modes/export.md).
### What are the benefits of exporting YOLOv8 models to TFLite Edge TPU?
Exporting YOLOv8 models to TFLite Edge TPU offers several benefits:
- **Optimized Performance**: Achieve high-speed neural network performance with minimal power consumption.
- **Reduced Latency**: Quick local data processing without the need for cloud dependency.
- **Enhanced Privacy**: Local processing keeps user data private and secure.
This makes it ideal for applications in edge computing, where devices have limited power and computational resources. Learn more about [why you should export](#why-should-you-export-to-tflite-edge-tpu).
### Can I deploy TFLite Edge TPU models on mobile and embedded devices?
Yes, TensorFlow Lite Edge TPU models can be deployed directly on mobile and embedded devices. This deployment approach allows models to execute directly on the hardware, offering faster and more efficient inferencing. For integration examples, check our [guide on deploying Coral Edge TPU on Raspberry Pi](../guides/coral-edge-tpu-on-raspberry-pi.md).
### What are some common use cases for TFLite Edge TPU models?
Common use cases for TFLite Edge TPU models include:
- **Smart Cameras**: Enhancing real-time image and video analysis.
- **IoT Devices**: Enabling smart home and industrial automation.
- **Healthcare**: Accelerating medical imaging and diagnostics.
- **Retail**: Improving inventory management and customer behavior analysis.
These applications benefit from the high performance and low power consumption of TFLite Edge TPU models. Discover more about [usage scenarios](#deployment-options-with-tflite-edge-tpu).
### How can I troubleshoot issues while exporting or deploying TFLite Edge TPU models?
If you encounter issues while exporting or deploying TFLite Edge TPU models, refer to our [Common Issues guide](../guides/yolo-common-issues.md) for troubleshooting tips. This guide covers common problems and solutions to help you ensure smooth operation. For additional support, visit our [Help Center](https://docs.ultralytics.com/help/).

We've discussed how you can easily experiment with Ultralytics YOLOv8 models on Google Colab.
For more details, visit [Google Colab's FAQ page](https://research.google.com/colaboratory/intl/en-GB/faq.html).
Interested in more YOLOv8 integrations? Visit the [Ultralytics integration guide page](index.md) to explore additional tools and capabilities that can improve your machine-learning projects.
## FAQ
### How do I start training Ultralytics YOLOv8 models on Google Colab?
To start training Ultralytics YOLOv8 models on Google Colab, sign in to your Google account, then access the [Google Colab YOLOv8 Notebook](https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb). This notebook guides you through the setup and training process. After launching the notebook, run the cells step-by-step to train your model. For a full guide, refer to the [YOLOv8 Model Training guide](../modes/train.md).
### What are the advantages of using Google Colab for training YOLOv8 models?
Google Colab offers several advantages for training YOLOv8 models:
- **Zero Setup:** No initial environment setup is required; just log in and start coding.
- **Free GPU Access:** Use powerful GPUs or TPUs without the need for expensive hardware.
- **Integration with Google Drive:** Easily store and access datasets and models.
- **Collaboration:** Share notebooks with others and collaborate in real-time.
For more information on why you should use Google Colab, explore the [training guide](../modes/train.md) and visit the [Google Colab page](https://colab.google/notebooks/).
### How can I handle Google Colab session timeouts during YOLOv8 training?
Google Colab sessions timeout due to inactivity, especially for free users. To handle this:
1. **Stay Active:** Regularly interact with your Colab notebook.
2. **Save Progress:** Continuously save your work to Google Drive or GitHub.
3. **Colab Pro:** Consider upgrading to Google Colab Pro for longer session durations.
For more tips on managing your Colab session, visit the [Google Colab FAQ page](https://research.google.com/colaboratory/intl/en-GB/faq.html).
### Can I use custom datasets for training YOLOv8 models in Google Colab?
Yes, you can use custom datasets to train YOLOv8 models in Google Colab. Upload your dataset to Google Drive and load it directly into your Colab notebook. You can follow Nicolai's YouTube guide, [How to Train YOLOv8 Models on Your Custom Dataset](https://www.youtube.com/embed/LNwODJXcvt4?si=lB9UAc4hatSSEr2a), or refer to the [Custom Dataset Training guide](https://www.ultralytics.com/blog/training-custom-datasets-with-ultralytics-yolov8-in-google-colab) for detailed steps.
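As a minimal sketch (the Drive paths and YAML name are hypothetical), you can mount Google Drive inside Colab and point training directly at a dataset stored there:

```python
from google.colab import drive

from ultralytics import YOLO

drive.mount("/content/drive")  # authorize access to your Drive

model = YOLO("yolov8n.pt")
model.train(data="/content/drive/MyDrive/datasets/custom.yaml", epochs=50, imgsz=640)
```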
### What should I do if my Google Colab training session is interrupted?
If your Google Colab training session is interrupted:
1. **Save Regularly:** Avoid losing unsaved progress by regularly saving your work to Google Drive or GitHub.
2. **Resume Training:** Restart your session and re-run the cells from where the interruption occurred.
3. **Use Checkpoints:** Incorporate checkpointing in your training script to save progress periodically.
These practices help ensure your progress is secure. Learn more about session management on [Google Colab's FAQ page](https://research.google.com/colaboratory/intl/en-GB/faq.html).
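For the checkpointing and resuming steps above, a minimal sketch, assuming your interrupted run saved its weights under the default `runs/detect/train/weights/` directory:

```python
from ultralytics import YOLO

# Load the last checkpoint written by the interrupted run
model = YOLO("runs/detect/train/weights/last.pt")

# Resume training from that exact state (epoch, optimizer, LR schedule)
results = model.train(resume=True)
```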

| Component | Description |
| ------------ | ----------------------------------------------- |
| Image Input | To upload the image for detection. |
| Sliders | To adjust confidence and IoU thresholds. |
| Image Output | To display the detection results. |
## FAQ
### How do I use Gradio with Ultralytics YOLOv8 for object detection?
To use Gradio with Ultralytics YOLOv8 for object detection, you can follow these steps:
1. **Install Gradio:** Use the command `pip install gradio`.
2. **Create Interface:** Write a Python script to initialize the Gradio interface. You can refer to the provided code example in the [documentation](#usage-example) for details.
3. **Upload and Adjust:** Upload your image and adjust the confidence and IoU thresholds on the Gradio interface to get real-time object detection results.
Here's a minimal code snippet for reference:
```python
import gradio as gr
from ultralytics import YOLO

model = YOLO("yolov8n.pt")


def predict_image(img, conf_threshold, iou_threshold):
    results = model.predict(
        source=img,
        conf=conf_threshold,
        iou=iou_threshold,
        show_labels=True,
        show_conf=True,
    )
    return results[0].plot() if results else None


iface = gr.Interface(
    fn=predict_image,
    inputs=[
        gr.Image(type="pil", label="Upload Image"),
        gr.Slider(minimum=0, maximum=1, value=0.25, label="Confidence threshold"),
        gr.Slider(minimum=0, maximum=1, value=0.45, label="IoU threshold"),
    ],
    outputs=gr.Image(type="pil", label="Result"),
    title="Ultralytics Gradio YOLOv8",
    description="Upload images for YOLOv8 object detection.",
)

iface.launch()
```
### What are the benefits of using Gradio for Ultralytics YOLOv8 object detection?
Using Gradio for Ultralytics YOLOv8 object detection offers several benefits:
- **User-Friendly Interface:** Gradio provides an intuitive interface for users to upload images and visualize detection results without any coding effort.
- **Real-Time Adjustments:** You can dynamically adjust detection parameters such as confidence and IoU thresholds and see the effects immediately.
- **Accessibility:** The web interface is accessible to anyone, making it useful for quick experiments, educational purposes, and demonstrations.
For more details, you can read this [blog post](https://www.ultralytics.com/blog/ai-and-radiology-a-new-era-of-precision-and-efficiency).
### Can I use Gradio and Ultralytics YOLOv8 together for educational purposes?
Yes, Gradio and Ultralytics YOLOv8 can be utilized together for educational purposes effectively. Gradio's intuitive web interface makes it easy for students and educators to interact with state-of-the-art deep learning models like Ultralytics YOLOv8 without needing advanced programming skills. This setup is ideal for demonstrating key concepts in object detection and computer vision, as Gradio provides immediate visual feedback which helps in understanding the impact of different parameters on the detection performance.
### How do I adjust the confidence and IoU thresholds in the Gradio interface for YOLOv8?
In the Gradio interface for YOLOv8, you can adjust the confidence and IoU thresholds using the sliders provided. These thresholds help control the prediction accuracy and object separation:
- **Confidence Threshold:** Determines the minimum confidence level for detecting objects. Slide to increase or decrease the confidence required.
- **IoU Threshold:** Sets the intersection-over-union threshold for distinguishing between overlapping objects. Adjust this value to refine object separation.
For more information on these parameters, visit the [parameters explanation section](#parameters-explanation).
### What are some practical applications of using Ultralytics YOLOv8 with Gradio?
Practical applications of combining Ultralytics YOLOv8 with Gradio include:
- **Real-Time Object Detection Demonstrations:** Ideal for showcasing how object detection works in real-time.
- **Educational Tools:** Useful in academic settings to teach object detection and computer vision concepts.
- **Prototype Development:** Efficient for developing and testing prototype object detection applications quickly.
- **Community and Collaborations:** Making it easy to share models with the community for feedback and collaboration.
For examples of similar use cases, check out the [Ultralytics blog](https://www.ultralytics.com/blog/monitoring-animal-behavior-using-ultralytics-yolov8).

By writing a guide or tutorial, you can help expand our documentation and provide real-world examples that benefit the community.
To contribute, please check out our [Contributing Guide](../help/contributing.md) for instructions on how to submit a Pull Request (PR) 🛠. We eagerly await your contributions!
Let's collaborate to make the Ultralytics YOLO ecosystem more expansive and feature-rich 🙏!
## FAQ
### What is Ultralytics HUB, and how does it streamline the ML workflow?
Ultralytics HUB is a cloud-based platform designed to make machine learning (ML) workflows for Ultralytics models seamless and efficient. By using this tool, you can easily upload datasets, train models, perform real-time tracking, and deploy YOLOv8 models without needing extensive coding skills. You can explore the key features on the [Ultralytics HUB](https://hub.ultralytics.com) page and get started quickly with our [Quickstart](https://docs.ultralytics.com/hub/quickstart/) guide.
### How do I integrate Ultralytics YOLO models with Roboflow for dataset management?
Integrating Ultralytics YOLO models with Roboflow enhances dataset management by providing robust tools for annotation, preprocessing, and augmentation. To get started, follow the steps on the [Roboflow](roboflow.md) integration page. This partnership ensures efficient dataset handling, which is crucial for developing accurate and robust YOLO models.
### Can I track the performance of my Ultralytics models using MLFlow?
Yes, you can. Integrating MLFlow with Ultralytics models allows you to track experiments, improve reproducibility, and streamline the entire ML lifecycle. Detailed instructions for setting up this integration can be found on the [MLFlow](mlflow.md) integration page. This integration is particularly useful for monitoring model metrics and managing the ML workflow efficiently.
### What are the benefits of using Neural Magic for YOLOv8 model optimization?
Neural Magic optimizes YOLOv8 models by leveraging techniques like Quantization Aware Training (QAT) and pruning, resulting in highly efficient, smaller models that perform better on resource-limited hardware. Check out the [Neural Magic](neural-magic.md) integration page to learn how to implement these optimizations for superior performance and leaner models. This is especially beneficial for deployment on edge devices.
### How do I deploy Ultralytics YOLO models with Gradio for interactive demos?
To deploy Ultralytics YOLO models with Gradio for interactive object detection demos, you can follow the steps outlined on the [Gradio](gradio.md) integration page. Gradio allows you to create easy-to-use web interfaces for real-time model inference, making it an excellent tool for showcasing your YOLO model's capabilities in a user-friendly format suitable for both developers and end-users.
By addressing these common questions, we aim to improve your user experience and provide valuable insights into the powerful capabilities of Ultralytics products.

## Conclusion
MLflow logging integration with Ultralytics YOLO offers a streamlined way to keep track of your machine learning experiments. It empowers you to monitor performance metrics and manage artifacts effectively, thus aiding in robust model development and deployment. For further details please visit the MLflow [official documentation](https://mlflow.org/docs/latest/index.html).
## FAQ
### How do I set up MLflow logging with Ultralytics YOLO?
To set up MLflow logging with Ultralytics YOLO, you first need to ensure MLflow is installed. You can install it using pip:
```bash
pip install mlflow
```
Next, enable MLflow logging in Ultralytics settings. This can be controlled using the `mlflow` key. For more information, see the [settings guide](../quickstart.md#ultralytics-settings).
!!! Example "Update Ultralytics MLflow Settings"
=== "Python"
```python
from ultralytics import settings
# Update a setting
settings.update({"mlflow": True})
# Reset settings to default values
settings.reset()
```
=== "CLI"
```bash
# Update a setting
yolo settings runs_dir='/path/to/runs'
# Reset settings to default values
yolo settings reset
```
Finally, start a local MLflow server for tracking:
```bash
mlflow server --backend-store-uri runs/mlflow
```
### What metrics and parameters can I log using MLflow with Ultralytics YOLO?
Ultralytics YOLO with MLflow supports logging various metrics, parameters, and artifacts throughout the training process:
- **Metrics Logging**: Tracks metrics at the end of each epoch and upon training completion.
- **Parameter Logging**: Logs all parameters used in the training process.
- **Artifacts Logging**: Saves model artifacts like weights and configuration files after training.
For more detailed information, visit the [Ultralytics YOLO tracking documentation](#features).
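As a minimal end-to-end sketch, assuming the `mlflow` setting has been enabled as shown earlier, a standard training run then logs these automatically:

```python
from ultralytics import YOLO, settings

settings.update({"mlflow": True})  # enable the MLflow integration

model = YOLO("yolov8n.pt")
model.train(data="coco8.yaml", epochs=3)  # per-epoch metrics, params, and artifacts are logged
```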
### Can I disable MLflow logging once it is enabled?
Yes, you can disable MLflow logging for Ultralytics YOLO by updating the settings. Here's how you can do it using the CLI:
```bash
yolo settings mlflow=False
```
For further customization and resetting settings, refer to the [settings guide](../quickstart.md#ultralytics-settings).
### How can I start and stop an MLflow server for Ultralytics YOLO tracking?
To start an MLflow server for tracking your experiments in Ultralytics YOLO, use the following command:
```bash
mlflow server --backend-store-uri runs/mlflow
```
This command starts a local server at http://127.0.0.1:5000 by default. If you need to stop running MLflow server instances, use the following bash command:
```bash
ps aux | grep 'mlflow' | grep -v 'grep' | awk '{print $2}' | xargs kill -9
```
Refer to the [commands section](#commands) for more command options.
### What are the benefits of integrating MLflow with Ultralytics YOLO for experiment tracking?
Integrating MLflow with Ultralytics YOLO offers several benefits for managing your machine learning experiments:
- **Enhanced Experiment Tracking**: Easily track and compare different runs and their outcomes.
- **Improved Model Reproducibility**: Ensure that your experiments are reproducible by logging all parameters and artifacts.
- **Performance Monitoring**: Visualize performance metrics over time to make data-driven decisions for model improvements.
For an in-depth look at setting up and leveraging MLflow with Ultralytics YOLO, explore the [MLflow Integration for Ultralytics YOLO](#introduction) documentation.

In this guide, we've gone over exporting Ultralytics YOLOv8 models to the NCNN format.
For detailed instructions on usage, please refer to the [official NCNN documentation](https://ncnn.readthedocs.io/en/latest/index.html).
Also, if you're interested in exploring other integration options for Ultralytics YOLOv8, be sure to visit our [integration guide page](index.md) for further insights and information.
## FAQ
### How do I export Ultralytics YOLOv8 models to NCNN format?
To export your Ultralytics YOLOv8 model to NCNN format, follow these steps:
- **Python**: Use the `export` function from the YOLO class.

    ```python
    from ultralytics import YOLO

    # Load the YOLOv8 model
    model = YOLO("yolov8n.pt")

    # Export to NCNN format
    model.export(format="ncnn")  # creates '/yolov8n_ncnn_model'
    ```

- **CLI**: Use the `yolo` command with the `export` argument.

    ```bash
    yolo export model=yolov8n.pt format=ncnn  # creates '/yolov8n_ncnn_model'
    ```
For detailed export options, check the [Export](../modes/export.md) page in the documentation.
### What are the advantages of exporting YOLOv8 models to NCNN?
Exporting your Ultralytics YOLOv8 models to NCNN offers several benefits:
- **Efficiency**: NCNN models are optimized for mobile and embedded devices, ensuring high performance even with limited computational resources.
- **Quantization**: NCNN supports techniques like quantization that improve model speed and reduce memory usage.
- **Broad Compatibility**: You can deploy NCNN models on multiple platforms, including Android, iOS, Linux, and macOS.
For more details, see the [Export to NCNN](#why-should-you-export-to-ncnn) section in the documentation.
### Why should I use NCNN for my mobile AI applications?
NCNN, developed by Tencent, is specifically optimized for mobile platforms. Key reasons to use NCNN include:
- **High Performance**: Designed for efficient and fast processing on mobile CPUs.
- **Cross-Platform**: Compatible with popular frameworks such as TensorFlow and ONNX, making it easier to convert and deploy models across different platforms.
- **Community Support**: Active community support ensures continual improvements and updates.
To understand more, visit the [NCNN overview](#key-features-of-ncnn-models) in the documentation.
### What platforms are supported for NCNN model deployment?
NCNN is versatile and supports various platforms:
- **Mobile**: Android, iOS.
- **Embedded Systems and IoT Devices**: Devices like Raspberry Pi and NVIDIA Jetson.
- **Desktop and Servers**: Linux, Windows, and macOS.
If running models on a Raspberry Pi isn't fast enough, converting to the NCNN format could speed things up as detailed in our [Raspberry Pi Guide](../guides/raspberry-pi.md).
### How can I deploy Ultralytics YOLOv8 NCNN models on Android?
To deploy your YOLOv8 models on Android:
1. **Build for Android**: Follow the [NCNN Build for Android](https://github.com/Tencent/ncnn/wiki/how-to-build#build-for-android) guide.
2. **Integrate with Your App**: Use the NCNN Android SDK to integrate the exported model into your application for efficient on-device inference.
For step-by-step instructions, refer to our guide on [Deploying YOLOv8 NCNN Models](#deploying-exported-yolov8-ncnn-models).
For more advanced guides and use cases, visit the [Ultralytics documentation page](../guides/model-deployment-options.md).

@ -160,3 +160,52 @@ This guide explored integrating Ultralytics' YOLOv8 with Neural Magic's DeepSpar
For more detailed information and advanced usage, visit [Neural Magic's DeepSparse documentation](https://docs.neuralmagic.com/products/deepsparse/). Also, check out Neural Magic's documentation on the integration with YOLOv8 [here](https://github.com/neuralmagic/deepsparse/tree/main/src/deepsparse/yolov8#yolov8-inference-pipelines) and watch a great session on it [here](https://www.youtube.com/watch?v=qtJ7bdt52x8).
Additionally, for a broader understanding of various YOLOv8 integrations, visit the [Ultralytics integration guide page](../integrations/index.md), where you can discover a range of other exciting integration possibilities.
## FAQ
### What is Neural Magic's DeepSparse Engine and how does it optimize YOLOv8 performance?
Neural Magic's DeepSparse Engine is an inference runtime designed to optimize the execution of neural networks on CPUs through advanced techniques such as sparsity, pruning, and quantization. By integrating DeepSparse with YOLOv8, you can achieve GPU-like performance on standard CPUs, significantly enhancing inference speed, model efficiency, and overall performance while maintaining accuracy. For more details, check out the [Neural Magic's DeepSparse section](#neural-magics-deepsparse).
### How can I install the needed packages to deploy YOLOv8 using Neural Magic's DeepSparse?
Installing the required packages for deploying YOLOv8 with Neural Magic's DeepSparse is straightforward. You can easily install them using the CLI. Here's the command you need to run:
```bash
pip install deepsparse[yolov8]
```
Once installed, follow the steps provided in the [Installation section](#step-1-installation) to set up your environment and start using DeepSparse with YOLOv8.
### How do I convert YOLOv8 models to ONNX format for use with DeepSparse?
To convert YOLOv8 models to the ONNX format, which is required for compatibility with DeepSparse, you can use the following CLI command:
```bash
yolo task=detect mode=export model=yolov8n.pt format=onnx opset=13
```
This command will export your YOLOv8 model (`yolov8n.pt`) to a format (`yolov8n.onnx`) that can be utilized by the DeepSparse Engine. More information about model export can be found in the [Model Export section](#step-2-exporting-yolov8-to-onnx-format).
### How do I benchmark YOLOv8 performance on the DeepSparse Engine?
Benchmarking YOLOv8 performance on DeepSparse helps you analyze throughput and latency to ensure your model is optimized. You can use the following CLI command to run a benchmark:
```bash
deepsparse.benchmark model_path="path/to/yolov8n.onnx" --scenario=sync --input_shapes="[1,3,640,640]"
```
This command will provide you with vital performance metrics. For more details, see the [Benchmarking Performance section](#step-4-benchmarking-performance).
### Why should I use Neural Magic's DeepSparse with YOLOv8 for object detection tasks?
Integrating Neural Magic's DeepSparse with YOLOv8 offers several benefits:
- **Enhanced Inference Speed:** Achieves up to 525 FPS, dramatically accelerating YOLOv8 inference.
- **Optimized Model Efficiency:** Uses sparsity, pruning, and quantization techniques to reduce model size and computational needs while maintaining accuracy.
- **High Performance on Standard CPUs:** Offers GPU-like performance on cost-effective CPU hardware.
- **Streamlined Integration:** User-friendly tools for easy deployment and integration.
- **Flexibility:** Supports both standard and sparsity-optimized YOLOv8 models.
- **Cost-Effective:** Reduces operational expenses through efficient resource utilization.
For a deeper dive into these advantages, visit the [Benefits of Integrating Neural Magic's DeepSparse with YOLOv8 section](#benefits-of-integrating-neural-magics-deepsparse-with-yolov8).
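To make these benefits concrete, here is a minimal sketch of running the exported ONNX model through a DeepSparse pipeline (the image path is illustrative; `yolov8n.onnx` is assumed to come from the export step above):

```python
from deepsparse import Pipeline

# Create a DeepSparse pipeline for the exported YOLOv8 ONNX model
yolo_pipeline = Pipeline.create(task="yolov8", model_path="yolov8n.onnx")

# Run inference on a local image (path is illustrative)
results = yolo_pipeline(images=["image.jpg"])
print(results)
```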

@ -132,3 +132,82 @@ In this guide, you've learned how to export Ultralytics YOLOv8 models to ONNX fo
For further details on usage, visit the [ONNX official documentation](https://onnx.ai/onnx/intro/).
Also, if you'd like to know more about other Ultralytics YOLOv8 integrations, visit our [integration guide page](../integrations/index.md). You'll find plenty of useful resources and insights there.
## FAQ
### How do I export YOLOv8 models to ONNX format using Ultralytics?
To export your YOLOv8 models to ONNX format using Ultralytics, follow these steps:
!!! Example "Usage"
=== "Python"
```python
from ultralytics import YOLO
# Load the YOLOv8 model
model = YOLO("yolov8n.pt")
# Export the model to ONNX format
model.export(format="onnx") # creates 'yolov8n.onnx'
# Load the exported ONNX model
onnx_model = YOLO("yolov8n.onnx")
# Run inference
results = onnx_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"
```bash
# Export a YOLOv8n PyTorch model to ONNX format
yolo export model=yolov8n.pt format=onnx # creates 'yolov8n.onnx'
# Run inference with the exported model
yolo predict model=yolov8n.onnx source='https://ultralytics.com/images/bus.jpg'
```
For more details, visit the [export documentation](../modes/export.md).
### What are the advantages of using ONNX Runtime for deploying YOLOv8 models?
Using ONNX Runtime for deploying YOLOv8 models offers several advantages:
- **Cross-platform compatibility**: ONNX Runtime supports various platforms, such as Windows, macOS, and Linux, ensuring your models run smoothly across different environments.
- **Hardware acceleration**: ONNX Runtime can leverage hardware-specific optimizations for CPUs, GPUs, and dedicated accelerators, providing high-performance inference.
- **Framework interoperability**: Models trained in popular frameworks like PyTorch or TensorFlow can be easily converted to ONNX format and run using ONNX Runtime.
Learn more by checking the [ONNX Runtime documentation](https://onnxruntime.ai/docs/api/python/api_summary.html).
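For illustration, a minimal ONNX Runtime session over the exported model might look like the sketch below; YOLOv8 pre- and post-processing are omitted, and the 640x640 input shape is an assumption based on the default export:

```python
import numpy as np
import onnxruntime as ort

# Create an inference session; providers control which hardware backend is used
session = ort.InferenceSession("yolov8n.onnx", providers=["CPUExecutionProvider"])

# Build a dummy NCHW float32 input matching the default 640x640 export
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)

# Run the model; outputs are raw predictions that still need YOLO postprocessing
outputs = session.run(None, {input_name: dummy})
print(outputs[0].shape)
```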
### What deployment options are available for YOLOv8 models exported to ONNX?
YOLOv8 models exported to ONNX can be deployed on various platforms including:
- **CPUs**: Utilizing ONNX Runtime for optimized CPU inference.
- **GPUs**: Leveraging NVIDIA CUDA for high-performance GPU acceleration.
- **Edge devices**: Running lightweight models on edge and mobile devices for real-time, on-device inference.
- **Web browsers**: Executing models directly within web browsers for interactive web-based applications.
For more information, explore our guide on [model deployment options](../guides/model-deployment-options.md).
### Why should I use ONNX format for Ultralytics YOLOv8 models?
Using ONNX format for Ultralytics YOLOv8 models provides numerous benefits:
- **Interoperability**: ONNX allows models to be transferred between different machine learning frameworks seamlessly.
- **Performance Optimization**: ONNX Runtime can enhance model performance by utilizing hardware-specific optimizations.
- **Flexibility**: ONNX supports various deployment environments, enabling you to use the same model on different platforms without modification.
Refer to the comprehensive guide on [exporting YOLOv8 models to ONNX](https://www.ultralytics.com/blog/export-and-optimize-a-yolov8-model-for-inference-on-openvino).
### How can I troubleshoot issues when exporting YOLOv8 models to ONNX?
When exporting YOLOv8 models to ONNX, you might encounter common issues such as mismatched dependencies or unsupported operations. To troubleshoot these problems:
1. Verify that you have the correct version of required dependencies installed.
2. Check the official [ONNX documentation](https://onnx.ai/onnx/intro/) for supported operators and features.
3. Review the error messages for clues and consult the [Ultralytics Common Issues guide](../guides/yolo-common-issues.md).
If issues persist, contact Ultralytics support for further assistance.
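As a first diagnostic step, a quick structural check of the exported file with the `onnx` package can confirm whether the graph itself is valid; a minimal sketch:

```python
import onnx

# Load and validate the exported graph; check_model raises on malformed models
model = onnx.load("yolov8n.onnx")
onnx.checker.check_model(model)
print(f"IR version: {model.ir_version}, opset: {model.opset_import[0].version}")
```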

@ -284,3 +284,107 @@ For the Intel® Data Center GPU Flex Series, the OpenVINO format was able to del
The benchmarks underline the effectiveness of OpenVINO as a tool for deploying deep learning models. By converting models to the OpenVINO format, developers can achieve significant performance improvements, making it easier to deploy these models in real-world applications.
For more detailed information and instructions on using OpenVINO, refer to the [official OpenVINO documentation](https://docs.openvino.ai/).
## FAQ
### How do I export YOLOv8 models to OpenVINO format?
Exporting YOLOv8 models to the OpenVINO format can significantly enhance CPU speed and enable GPU and NPU acceleration on Intel hardware. To export, you can use either Python or CLI as shown below:
!!! Example
=== "Python"
```python
from ultralytics import YOLO
# Load a YOLOv8n PyTorch model
model = YOLO("yolov8n.pt")
# Export the model
model.export(format="openvino") # creates 'yolov8n_openvino_model/'
```
=== "CLI"
```bash
# Export a YOLOv8n PyTorch model to OpenVINO format
yolo export model=yolov8n.pt format=openvino # creates 'yolov8n_openvino_model/'
```
For more information, refer to the [export formats documentation](../modes/export.md).
### What are the benefits of using OpenVINO with YOLOv8 models?
Using Intel's OpenVINO toolkit with YOLOv8 models offers several benefits:
1. **Performance**: Achieve up to 3x speedup on CPU inference and leverage Intel GPUs and NPUs for acceleration.
2. **Model Optimizer**: Convert, optimize, and execute models from popular frameworks like PyTorch, TensorFlow, and ONNX.
3. **Ease of Use**: Over 80 tutorial notebooks are available to help users get started, including ones for YOLOv8.
4. **Heterogeneous Execution**: Deploy models on various Intel hardware with a unified API.
For detailed performance comparisons, visit our [benchmarks section](#openvino-yolov8-benchmarks).
### How can I run inference using a YOLOv8 model exported to OpenVINO?
After exporting a YOLOv8 model to OpenVINO format, you can run inference using Python or CLI:
!!! Example
=== "Python"
```python
from ultralytics import YOLO
# Load the exported OpenVINO model
ov_model = YOLO("yolov8n_openvino_model/")
# Run inference
results = ov_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"
```bash
# Run inference with the exported model
yolo predict model=yolov8n_openvino_model source='https://ultralytics.com/images/bus.jpg'
```
Refer to our [predict mode documentation](../modes/predict.md) for more details.
### Why should I choose Ultralytics YOLOv8 over other models for OpenVINO export?
Ultralytics YOLOv8 is optimized for real-time object detection with high accuracy and speed. Specifically, when combined with OpenVINO, YOLOv8 provides:
- Up to 3x speedup on Intel CPUs
- Seamless deployment on Intel GPUs and NPUs
- Consistent and comparable accuracy across various export formats
For in-depth performance analysis, check our detailed [YOLOv8 benchmarks](#openvino-yolov8-benchmarks) on different hardware.
### Can I benchmark YOLOv8 models on different formats such as PyTorch, ONNX, and OpenVINO?
Yes, you can benchmark YOLOv8 models in various formats including PyTorch, TorchScript, ONNX, and OpenVINO. Use the following code snippet to run benchmarks on your chosen dataset:
!!! Example
=== "Python"
```python
from ultralytics.utils.benchmarks import benchmark

# Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
results = benchmark(model="yolov8n.pt", data="coco8.yaml")
```
=== "CLI"
```bash
# Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
yolo benchmark model=yolov8n.pt data=coco8.yaml
```
For detailed benchmark results, refer to our [benchmarks section](#openvino-yolov8-benchmarks) and [export formats](../modes/export.md) documentation.

@ -120,3 +120,83 @@ In this guide, we explored the process of exporting Ultralytics YOLOv8 models to
For further details on usage, visit the [PaddlePaddle official documentation](https://www.paddlepaddle.org.cn/documentation/docs/en/guides/index_en.html)
Want to explore more ways to integrate your Ultralytics YOLOv8 models? Our [integration guide page](index.md) explores various options, equipping you with valuable resources and insights.
## FAQ
### How do I export Ultralytics YOLOv8 models to PaddlePaddle format?
Exporting Ultralytics YOLOv8 models to PaddlePaddle format is straightforward. You can use the `export` method of the YOLO class to perform this export. Here are examples using both Python and the CLI:
!!! Example "Usage"
=== "Python"
```python
from ultralytics import YOLO
# Load the YOLOv8 model
model = YOLO("yolov8n.pt")
# Export the model to PaddlePaddle format
model.export(format="paddle") # creates '/yolov8n_paddle_model'
# Load the exported PaddlePaddle model
paddle_model = YOLO("./yolov8n_paddle_model")
# Run inference
results = paddle_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"
```bash
# Export a YOLOv8n PyTorch model to PaddlePaddle format
yolo export model=yolov8n.pt format=paddle # creates 'yolov8n_paddle_model/'
# Run inference with the exported model
yolo predict model='./yolov8n_paddle_model' source='https://ultralytics.com/images/bus.jpg'
```
For more detailed setup and troubleshooting, check the [Ultralytics Installation Guide](../quickstart.md) and [Common Issues Guide](../guides/yolo-common-issues.md).
### What are the advantages of using PaddlePaddle for model deployment?
PaddlePaddle offers several key advantages for model deployment:
- **Performance Optimization**: PaddlePaddle excels in efficient model execution and reduced memory usage.
- **Dynamic-to-Static Graph Compilation**: It supports dynamic-to-static compilation, allowing for runtime optimizations.
- **Operator Fusion**: By merging compatible operations, it reduces computational overhead.
- **Quantization Techniques**: Supports both post-training and quantization-aware training, enabling lower-precision data representations for improved performance.
You can achieve enhanced results by exporting your Ultralytics YOLOv8 models to PaddlePaddle, ensuring flexibility and high performance across various applications and hardware platforms. Learn more about PaddlePaddle's features [here](https://www.paddlepaddle.org.cn/en).
### Why should I choose PaddlePaddle for deploying my YOLOv8 models?
PaddlePaddle, developed by Baidu, is optimized for industrial and commercial AI deployments. Its large developer community and robust framework provide extensive tools similar to TensorFlow and PyTorch. By exporting your YOLOv8 models to PaddlePaddle, you leverage:
- **Enhanced Performance**: Optimal execution speed and reduced memory footprint.
- **Flexibility**: Wide compatibility with various devices from smartphones to cloud servers.
- **Scalability**: Efficient parallel processing capabilities for distributed environments.
These features make PaddlePaddle a compelling choice for deploying YOLOv8 models in production settings.
### How does PaddlePaddle improve model performance over other frameworks?
PaddlePaddle employs several advanced techniques to optimize model performance:
- **Dynamic-to-Static Graph**: Converts models into a static computational graph for runtime optimizations.
- **Operator Fusion**: Combines compatible operations to minimize memory transfer and increase inference speed.
- **Quantization**: Reduces model size and increases efficiency using lower-precision data while maintaining accuracy.
These techniques prioritize efficient model execution, making PaddlePaddle an excellent option for deploying high-performance YOLOv8 models. For more on optimization, see the [PaddlePaddle official documentation](https://www.paddlepaddle.org.cn/documentation/docs/en/guides/index_en.html).
### What deployment options does PaddlePaddle offer for YOLOv8 models?
PaddlePaddle provides flexible deployment options:
- **Paddle Serving**: Deploys models as RESTful APIs, ideal for production with features like model versioning and online A/B testing.
- **Paddle Inference API**: Gives low-level control over model execution for custom applications (see the sketch after this list).
- **Paddle Lite**: Optimizes models for mobile and embedded devices' limited resources.
- **Paddle.js**: Enables deploying models directly within web browsers.
These options cover a broad range of deployment scenarios, from on-device inference to scalable cloud services. Explore more deployment strategies on the [Ultralytics Model Deployment Options page](../guides/model-deployment-options.md).
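As an illustration of the Paddle Inference API option above, the sketch below loads the exported model directory directly with PaddlePaddle; the `model.pdmodel`/`model.pdiparams` file names are assumptions about the export layout:

```python
from paddle.inference import Config, create_predictor

# Point the config at the exported inference model files (names assumed)
config = Config("yolov8n_paddle_model/model.pdmodel", "yolov8n_paddle_model/model.pdiparams")
config.disable_gpu()  # run on CPU for this sketch

# Create a predictor and inspect its input names
predictor = create_predictor(config)
print(predictor.get_input_names())
```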

@ -84,3 +84,32 @@ This guide explored the Paperspace Gradient integration for training YOLOv8 mode
For further exploration, visit [PaperSpace's official documentation](https://docs.digitalocean.com/products/paperspace/).
Also, visit the [Ultralytics integration guide page](index.md) to learn more about different YOLOv8 integrations. It's full of insights and tips to take your computer vision projects to the next level.
## FAQ
### How do I train a YOLOv8 model using Paperspace Gradient?
Training a YOLOv8 model with Paperspace Gradient is straightforward and efficient. First, sign in to the [Paperspace console](https://console.paperspace.com/github/ultralytics/ultralytics). Next, click the “Start Machine” button to initiate a managed GPU environment. Once the environment is ready, you can run the notebook's cells to start training your YOLOv8 model. For detailed instructions, refer to our [YOLOv8 Model Training guide](../modes/train.md).
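The notebook cells typically run standard Ultralytics training code; a minimal sketch of what such a cell might contain is shown below (dataset and epoch count are illustrative):

```python
from ultralytics import YOLO

# Load a pre-trained model inside the Gradient notebook
model = YOLO("yolov8n.pt")

# Train on a sample dataset; the managed GPU is used automatically
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
```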
### What are the advantages of using Paperspace Gradient for YOLOv8 projects?
Paperspace Gradient offers several unique advantages for training and deploying YOLOv8 models:
- **Hardware Flexibility:** Choose from various CPU, GPU, and TPU configurations.
- **One-Click Notebooks:** Use pre-configured Jupyter Notebooks for YOLOv8 without worrying about environment setup.
- **Experiment Tracking:** Automatic tracking of hyperparameters, metrics, and code changes.
- **Dataset Management:** Efficiently manage your datasets within Gradient.
- **Model Serving:** Deploy models as REST APIs easily.
- **Real-time Monitoring:** Monitor model performance and resource utilization through a dashboard.
### Why should I choose Ultralytics YOLOv8 over other object detection models?
Ultralytics YOLOv8 stands out for its real-time object detection capabilities and high accuracy. Its seamless integration with platforms like Paperspace Gradient enhances productivity by simplifying the training and deployment process. YOLOv8 supports various use cases, from security systems to retail inventory management. Explore more about YOLOv8's advantages [here](https://www.ultralytics.com/yolo).
### Can I deploy my YOLOv8 model on edge devices using Paperspace Gradient?
Yes, you can deploy YOLOv8 models on edge devices using Paperspace Gradient. The platform supports various deployment formats like TFLite and Edge TPU, which are optimized for edge devices. After training your model on Gradient, refer to our [export guide](../modes/export.md) for instructions on converting your model to the desired format.
### How does experiment tracking in Paperspace Gradient help improve YOLOv8 training?
Experiment tracking in Paperspace Gradient streamlines the model development process by automatically logging hyperparameters, metrics, and code changes. This allows you to easily compare different training runs, identify optimal configurations, and reproduce successful experiments.

@ -183,3 +183,102 @@ plt.show()
In this documentation, we covered common workflows to analyze the results of experiments run with Ray Tune using Ultralytics. The key steps include loading the experiment results from a directory, performing basic experiment-level and trial-level analysis and plotting metrics.
Explore further by looking into Ray Tune's [Analyze Results](https://docs.ray.io/en/latest/tune/examples/tune_analyze_results.html) docs page to get the most out of your hyperparameter tuning experiments.
## FAQ
### How do I tune the hyperparameters of my YOLOv8 model using Ray Tune?
To tune the hyperparameters of your Ultralytics YOLOv8 model using Ray Tune, follow these steps:
1. **Install the required packages:**
```bash
pip install -U ultralytics "ray[tune]"
pip install wandb # optional for logging
```
2. **Load your YOLOv8 model and start tuning:**
```python
from ultralytics import YOLO
# Load a YOLOv8 model
model = YOLO("yolov8n.pt")
# Start tuning with the COCO8 dataset
result_grid = model.tune(data="coco8.yaml", use_ray=True)
```
This utilizes Ray Tune's advanced search strategies and parallelism to efficiently optimize your model's hyperparameters. For more information, check out the [Ray Tune documentation](https://docs.ray.io/en/latest/tune/index.html).
### What are the default hyperparameters for YOLOv8 tuning with Ray Tune?
Ultralytics YOLOv8 uses the following default hyperparameters for tuning with Ray Tune:
| Parameter | Value Range | Description |
| --------------- | -------------------------- | ------------------------------ |
| `lr0` | `tune.uniform(1e-5, 1e-1)` | Initial learning rate |
| `lrf` | `tune.uniform(0.01, 1.0)` | Final learning rate factor |
| `momentum` | `tune.uniform(0.6, 0.98)` | Momentum |
| `weight_decay` | `tune.uniform(0.0, 0.001)` | Weight decay |
| `warmup_epochs` | `tune.uniform(0.0, 5.0)` | Warmup epochs |
| `box` | `tune.uniform(0.02, 0.2)` | Box loss weight |
| `cls` | `tune.uniform(0.2, 4.0)` | Class loss weight |
| `hsv_h` | `tune.uniform(0.0, 0.1)` | Hue augmentation range |
| `translate` | `tune.uniform(0.0, 0.9)` | Translation augmentation range |
These hyperparameters can be customized to suit your specific needs. For a complete list and more details, refer to the [Hyperparameter Tuning](../guides/hyperparameter-tuning.md) guide.
### How can I integrate Weights & Biases with my YOLOv8 model tuning?
To integrate Weights & Biases (W&B) with your Ultralytics YOLOv8 tuning process:
1. **Install W&B:**
```bash
pip install wandb
```
2. **Modify your tuning script:**
```python
import wandb
from ultralytics import YOLO
wandb.init(project="YOLO-Tuning", entity="your-entity")
# Load YOLO model
model = YOLO("yolov8n.pt")
# Tune hyperparameters
result_grid = model.tune(data="coco8.yaml", use_ray=True)
```
This setup will allow you to monitor the tuning process, track hyperparameter configurations, and visualize results in W&B.
### Why should I use Ray Tune for hyperparameter optimization with YOLOv8?
Ray Tune offers numerous advantages for hyperparameter optimization:
- **Advanced Search Strategies:** Utilizes algorithms like Bayesian Optimization and HyperOpt for efficient parameter search.
- **Parallelism:** Supports parallel execution of multiple trials, significantly speeding up the tuning process.
- **Early Stopping:** Employs strategies like ASHA to terminate under-performing trials early, saving computational resources.
Ray Tune seamlessly integrates with Ultralytics YOLOv8, providing an easy-to-use interface for tuning hyperparameters effectively. To get started, check out the [Efficient Hyperparameter Tuning with Ray Tune and YOLOv8](../guides/hyperparameter-tuning.md) guide.
### How can I define a custom search space for YOLOv8 hyperparameter tuning?
To define a custom search space for your YOLOv8 hyperparameter tuning with Ray Tune:
```python
from ray import tune
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
search_space = {"lr0": tune.uniform(1e-5, 1e-1), "momentum": tune.uniform(0.6, 0.98)}
result_grid = model.tune(data="coco8.yaml", space=search_space, use_ray=True)
```
This customizes the range of hyperparameters like initial learning rate and momentum to be explored during the tuning process. For advanced configurations, refer to the [Custom Search Space Example](#custom-search-space-example) section.

@ -241,3 +241,29 @@ Below are a few of the many pieces of feedback we have received for using YOLOv8
<img src="https://media.roboflow.com/ultralytics/rf_showcase_2.png" alt="Showcase image" width="500">
<img src="https://media.roboflow.com/ultralytics/rf_showcase_3.png" alt="Showcase image" width="500">
</p>
## FAQ
### How do I label data for YOLOv8 models using Roboflow?
Labeling data for YOLOv8 models using Roboflow is straightforward with Roboflow Annotate. First, create a project on Roboflow and upload your images. After uploading, select the batch of images and click "Start Annotating." You can use the `B` key for bounding boxes or the `P` key for polygons. For faster annotation, use the SAM-based label assistant by clicking the cursor icon in the sidebar. Detailed steps can be found [here](#upload-convert-and-label-data-for-yolov8-format).
### What services does Roboflow offer for collecting YOLOv8 training data?
Roboflow provides two key services for collecting YOLOv8 training data: [Universe](https://universe.roboflow.com/?ref=ultralytics) and [Collect](https://roboflow.com/collect?ref=ultralytics). Universe offers access to over 250,000 vision datasets, while Collect helps you gather images using a webcam and automated prompts.
### How can I manage and analyze my YOLOv8 dataset using Roboflow?
Roboflow offers robust dataset management tools, including dataset search, tagging, and Health Check. Use the search feature to find images based on text descriptions or tags. Health Check provides insights into dataset quality, showing class balance, image sizes, and annotation heatmaps. This helps optimize dataset performance before training YOLOv8 models. Detailed information can be found [here](#dataset-management-for-yolov8).
### How do I export my YOLOv8 dataset from Roboflow?
To export your YOLOv8 dataset from Roboflow, you need to create a dataset version. Click "Versions" in the sidebar, then "Create New Version" and apply any desired augmentations. Once the version is generated, click "Export Dataset" and choose the YOLOv8 format. Follow this process [here](#export-data-in-40-formats-for-model-training).
### How can I integrate and deploy YOLOv8 models with Roboflow?
Integrate and deploy YOLOv8 models on Roboflow by uploading your YOLOv8 weights through a few lines of Python code. Use the provided script to authenticate and upload your model, which will create an API for deployment. For details on the script and further instructions, see [this section](#upload-custom-yolov8-model-weights-for-testing-and-deployment).
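A minimal sketch of that upload step with the `roboflow` Python package might look like this; the API key, project ID, version number, and weights path are all placeholders:

```python
import roboflow

# Authenticate and select the target project/version (all identifiers are placeholders)
rf = roboflow.Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace().project("PROJECT_ID")
version = project.version(1)

# Upload trained YOLOv8 weights to create a hosted inference API
version.deploy(model_type="yolov8", model_path="runs/detect/train/")
```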
### What tools does Roboflow provide for evaluating YOLOv8 models?
Roboflow offers model evaluation tools, including a confusion matrix and vector analysis plots. Access these tools from the "View Detailed Evaluation" button on your model page. These features help identify model performance issues and find areas for improvement. For more information, refer to [this section](#how-to-evaluate-yolov8-models).

@ -61,93 +61,180 @@ Before diving into the usage instructions, be sure to check out the range of [YO
=== "Python"
```python
from ultralytics import YOLO
rom ultralytics import YOLO
# Load a pre-trained model
model = YOLO('yolov8n.pt')
Load a pre-trained model
odel = YOLO('yolov8n.pt')
# Train the model
results = model.train(data='coco8.yaml', epochs=100, imgsz=640)
```
Train the model
esults = model.train(data='coco8.yaml', epochs=100, imgsz=640)
``
Upon running the usage code snippet above, you can expect the following output:
ning the usage code snippet above, you can expect the following output:
```plaintext
TensorBoard: Start with 'tensorboard --logdir path_to_your_tensorboard_logs', view at http://localhost:6006/
```
text
ard: Start with 'tensorboard --logdir path_to_your_tensorboard_logs', view at http://localhost:6006/
```
put indicates that TensorBoard is now actively monitoring your YOLOv8 training session. You can access the TensorBoard dashboard by visiting the provided URL (http://localhost:6006/) to view real-time training metrics and model performance. For users working in Google Colab, the TensorBoard will be displayed in the same cell where you executed the TensorBoard configuration commands.
information related to the model training process, be sure to check our [YOLOv8 Model Training guide](../modes/train.md). If you are interested in learning more about logging, checkpoints, plotting, and file management, read our [usage guide on configuration](../usage/cfg.md).
standing Your TensorBoard for YOLOv8 Training
's focus on understanding the various features and components of TensorBoard in the context of YOLOv8 training. The three key sections of the TensorBoard are Time Series, Scalars, and Graphs.
Series
Series feature in the TensorBoard offers a dynamic and detailed perspective of various training metrics over time for YOLOv8 models. It focuses on the progression and trends of metrics across training epochs. Here's an example of what you can expect to see.
(https://github.com/ultralytics/ultralytics/assets/25847604/20b3e038-0356-465e-a37e-1ea232c68354)
Features of Time Series in TensorBoard
er Tags and Pinned Cards**: This functionality allows users to filter specific metrics and pin cards for quick comparison and access. It's particularly useful for focusing on specific aspects of the training process.
iled Metric Cards**: Time Series divides metrics into different categories like learning rate (lr), training (train), and validation (val) metrics, each represented by individual cards.
hical Display**: Each card in the Time Series section shows a detailed graph of a specific metric over the course of training. This visual representation aids in identifying trends, patterns, or anomalies in the training process.
epth Analysis**: Time Series provides an in-depth analysis of each metric. For instance, different learning rate segments are shown, offering insights into how adjustments in learning rate impact the model's learning curve.
ortance of Time Series in YOLOv8 Training
Series section is essential for a thorough analysis of the YOLOv8 model's training progress. It lets you track the metrics in real time to promptly identify and solve issues. It also offers a detailed view of each metrics progression, which is crucial for fine-tuning the model and enhancing its performance.
ars
in the TensorBoard are crucial for plotting and analyzing simple metrics like loss and accuracy during the training of YOLOv8 models. They offer a clear and concise view of how these metrics evolve with each training epoch, providing insights into the model's learning effectiveness and stability. Here's an example of what you can expect to see.
(https://github.com/ultralytics/ultralytics/assets/25847604/f9228193-13e9-4768-9edf-8fa15ecd24fa)
Features of Scalars in TensorBoard
ning Rate (lr) Tags**: These tags show the variations in the learning rate across different segments (e.g., `pg0`, `pg1`, `pg2`). This helps us understand the impact of learning rate adjustments on the training process.
ics Tags**: Scalars include performance indicators such as:
AP50 (B)`: Mean Average Precision at 50% Intersection over Union (IoU), crucial for assessing object detection accuracy.
AP50-95 (B)`: Mean Average Precision calculated over a range of IoU thresholds, offering a more comprehensive evaluation of accuracy.
recision (B)`: Indicates the ratio of correctly predicted positive observations, key to understanding prediction accuracy.
ecall (B)`: Important for models where missing a detection is significant, this metric measures the ability to detect all relevant instances.
This output indicates that TensorBoard is now actively monitoring your YOLOv8 training session. You can access the TensorBoard dashboard by visiting the provided URL (http://localhost:6006/) to view real-time training metrics and model performance. For users working in Google Colab, the TensorBoard will be displayed in the same cell where you executed the TensorBoard configuration commands.
learn more about the different metrics, read our guide on [performance metrics](../guides/yolo-performance-metrics.md).
For more information related to the model training process, be sure to check our [YOLOv8 Model Training guide](../modes/train.md). If you are interested in learning more about logging, checkpoints, plotting, and file management, read our [usage guide on configuration](../usage/cfg.md).
ning and Validation Tags (`train`, `val`)**: These tags display metrics specifically for the training and validation datasets, allowing for a comparative analysis of model performance across different data sets.
## Understanding Your TensorBoard for YOLOv8 Training
ortance of Monitoring Scalars
Now, let's focus on understanding the various features and components of TensorBoard in the context of YOLOv8 training. The three key sections of the TensorBoard are Time Series, Scalars, and Graphs.
g scalar metrics is crucial for fine-tuning the YOLOv8 model. Variations in these metrics, such as spikes or irregular patterns in loss graphs, can highlight potential issues such as overfitting, underfitting, or inappropriate learning rate settings. By closely monitoring these scalars, you can make informed decisions to optimize the training process, ensuring that the model learns effectively and achieves the desired performance.
### Time Series
erence Between Scalars and Time Series
The Time Series feature in the TensorBoard offers a dynamic and detailed perspective of various training metrics over time for YOLOv8 models. It focuses on the progression and trends of metrics across training epochs. Here's an example of what you can expect to see.
th Scalars and Time Series in TensorBoard are used for tracking metrics, they serve slightly different purposes. Scalars focus on plotting simple metrics such as loss and accuracy as scalar values. They provide a high-level overview of how these metrics change with each training epoch. While, the time-series section of the TensorBoard offers a more detailed timeline view of various metrics. It is particularly useful for monitoring the progression and trends of metrics over time, providing a deeper dive into the specifics of the training process.
![image](https://github.com/ultralytics/ultralytics/assets/25847604/20b3e038-0356-465e-a37e-1ea232c68354)
hs
#### Key Features of Time Series in TensorBoard
hs section of the TensorBoard visualizes the computational graph of the YOLOv8 model, showing how operations and data flow within the model. It's a powerful tool for understanding the model's structure, ensuring that all layers are connected correctly, and for identifying any potential bottlenecks in data flow. Here's an example of what you can expect to see.
- **Filter Tags and Pinned Cards**: This functionality allows users to filter specific metrics and pin cards for quick comparison and access. It's particularly useful for focusing on specific aspects of the training process.
(https://github.com/ultralytics/ultralytics/assets/25847604/039028e0-4ab3-4170-bfa8-f93ce483f615)
- **Detailed Metric Cards**: Time Series divides metrics into different categories like learning rate (lr), training (train), and validation (val) metrics, each represented by individual cards.
re particularly useful for debugging the model, especially in complex architectures typical in deep learning models like YOLOv8. They help in verifying layer connections and the overall design of the model.
- **Graphical Display**: Each card in the Time Series section shows a detailed graph of a specific metric over the course of training. This visual representation aids in identifying trends, patterns, or anomalies in the training process.
ry
- **In-Depth Analysis**: Time Series provides an in-depth analysis of each metric. For instance, different learning rate segments are shown, offering insights into how adjustments in learning rate impact the model's learning curve.
de aims to help you use TensorBoard with YOLOv8 for visualization and analysis of machine learning model training. It focuses on explaining how key TensorBoard features can provide insights into training metrics and model performance during YOLOv8 training sessions.
#### Importance of Time Series in YOLOv8 Training
re detailed exploration of these features and effective utilization strategies, you can refer to TensorFlow's official [TensorBoard documentation](https://www.tensorflow.org/tensorboard/get_started) and their [GitHub repository](https://github.com/tensorflow/tensorboard).
The Time Series section is essential for a thorough analysis of the YOLOv8 model's training progress. It lets you track the metrics in real time to promptly identify and solve issues. It also offers a detailed view of each metrics progression, which is crucial for fine-tuning the model and enhancing its performance.
learn more about the various integrations of Ultralytics? Check out the [Ultralytics integrations guide page](../integrations/index.md) to see what other exciting capabilities are waiting to be discovered!
### Scalars
## FAQ
Scalars in the TensorBoard are crucial for plotting and analyzing simple metrics like loss and accuracy during the training of YOLOv8 models. They offer a clear and concise view of how these metrics evolve with each training epoch, providing insights into the model's learning effectiveness and stability. Here's an example of what you can expect to see.
do I integrate YOLOv8 with TensorBoard for real-time visualization?
![image](https://github.com/ultralytics/ultralytics/assets/25847604/f9228193-13e9-4768-9edf-8fa15ecd24fa)
ing YOLOv8 with TensorBoard allows for real-time visual insights during model training. First, install the necessary package:
#### Key Features of Scalars in TensorBoard
ple "Installation"
- **Learning Rate (lr) Tags**: These tags show the variations in the learning rate across different segments (e.g., `pg0`, `pg1`, `pg2`). This helps us understand the impact of learning rate adjustments on the training process.
"CLI"
```bash
# Install the required package for YOLOv8 and Tensorboard
pip install ultralytics
```
Next, configure TensorBoard to log your training runs, then start TensorBoard:
!!! Example "Configure TensorBoard for Google Colab"
=== "Python"
```ipython
%load_ext tensorboard
%tensorboard --logdir path/to/runs
```
Finally, during training, YOLOv8 automatically logs metrics like loss and accuracy to TensorBoard. You can monitor these metrics by visiting [http://localhost:6006/](http://localhost:6006/).
- **Metrics Tags**: Scalars include performance indicators such as:
For a comprehensive guide, refer to our [YOLOv8 Model Training guide](../modes/train.md).
- `mAP50 (B)`: Mean Average Precision at 50% Intersection over Union (IoU), crucial for assessing object detection accuracy.
### What benefits does using TensorBoard with YOLOv8 offer?
- `mAP50-95 (B)`: Mean Average Precision calculated over a range of IoU thresholds, offering a more comprehensive evaluation of accuracy.
Using TensorBoard with YOLOv8 provides several visualization tools essential for efficient model training:
- `Precision (B)`: Indicates the ratio of correctly predicted positive observations, key to understanding prediction accuracy.
- **Real-Time Metrics Tracking:** Track key metrics such as loss, accuracy, precision, and recall live.
- **Model Graph Visualization:** Understand and debug the model architecture by visualizing computational graphs.
- **Embedding Visualization:** Project embeddings to lower-dimensional spaces for better insight.
- `Recall (B)`: Important for models where missing a detection is significant, this metric measures the ability to detect all relevant instances.
These tools enable you to make informed adjustments to enhance your YOLOv8 model's performance. For more details on TensorBoard features, check out the TensorFlow [TensorBoard guide](https://www.tensorflow.org/tensorboard/get_started).
- To learn more about the different metrics, read our guide on [performance metrics](../guides/yolo-performance-metrics.md).
### How can I monitor training metrics using TensorBoard when training a YOLOv8 model?
- **Training and Validation Tags (`train`, `val`)**: These tags display metrics specifically for the training and validation datasets, allowing for a comparative analysis of model performance across different data sets.
To monitor training metrics while training a YOLOv8 model with TensorBoard, follow these steps:
#### Importance of Monitoring Scalars
1. **Install TensorBoard and YOLOv8:** Run `pip install ultralytics` which includes TensorBoard.
2. **Configure TensorBoard Logging:** During the training process, YOLOv8 logs metrics to a specified log directory.
3. **Start TensorBoard:** Launch TensorBoard using the command `tensorboard --logdir path/to/your/tensorboard/logs`.
Observing scalar metrics is crucial for fine-tuning the YOLOv8 model. Variations in these metrics, such as spikes or irregular patterns in loss graphs, can highlight potential issues such as overfitting, underfitting, or inappropriate learning rate settings. By closely monitoring these scalars, you can make informed decisions to optimize the training process, ensuring that the model learns effectively and achieves the desired performance.
The TensorBoard dashboard, accessible via [http://localhost:6006/](http://localhost:6006/), provides real-time insights into various training metrics. For a deeper dive into training configurations, visit our [YOLOv8 Configuration guide](../usage/cfg.md).
### Difference Between Scalars and Time Series
### What kind of metrics can I visualize with TensorBoard when training YOLOv8 models?
While both Scalars and Time Series in TensorBoard are used for tracking metrics, they serve slightly different purposes. Scalars focus on plotting simple metrics such as loss and accuracy as scalar values. They provide a high-level overview of how these metrics change with each training epoch. While, the time-series section of the TensorBoard offers a more detailed timeline view of various metrics. It is particularly useful for monitoring the progression and trends of metrics over time, providing a deeper dive into the specifics of the training process.
When training YOLOv8 models, TensorBoard allows you to visualize an array of important metrics including:
### Graphs
- **Loss (Training and Validation):** Indicates how well the model is performing during training and validation.
- **Accuracy/Precision/Recall:** Key performance metrics to evaluate detection accuracy.
- **Learning Rate:** Track learning rate changes to understand its impact on training dynamics.
- **mAP (mean Average Precision):** For a comprehensive evaluation of object detection accuracy at various IoU thresholds.
The Graphs section of the TensorBoard visualizes the computational graph of the YOLOv8 model, showing how operations and data flow within the model. It's a powerful tool for understanding the model's structure, ensuring that all layers are connected correctly, and for identifying any potential bottlenecks in data flow. Here's an example of what you can expect to see.
These visualizations are essential for tracking model performance and making necessary optimizations. For more information on these metrics, refer to our [Performance Metrics guide](../guides/yolo-performance-metrics.md).
![image](https://github.com/ultralytics/ultralytics/assets/25847604/039028e0-4ab3-4170-bfa8-f93ce483f615)
### Can I use TensorBoard in a Google Colab environment for training YOLOv8?
Graphs are particularly useful for debugging the model, especially in complex architectures typical in deep learning models like YOLOv8. They help in verifying layer connections and the overall design of the model.
Yes, you can use TensorBoard in a Google Colab environment to train YOLOv8 models. Here's a quick setup:
## Summary
!!! Example "Configure TensorBoard for Google Colab"
This guide aims to help you use TensorBoard with YOLOv8 for visualization and analysis of machine learning model training. It focuses on explaining how key TensorBoard features can provide insights into training metrics and model performance during YOLOv8 training sessions.
=== "Python"
For a more detailed exploration of these features and effective utilization strategies, you can refer to TensorFlow's official [TensorBoard documentation](https://www.tensorflow.org/tensorboard/get_started) and their [GitHub repository](https://github.com/tensorflow/tensorboard).
```ipython
%load_ext tensorboard
%tensorboard --logdir path/to/runs
```
Then, run the YOLOv8 training script:
```python
from ultralytics import YOLO
# Load a pre-trained model
model = YOLO("yolov8n.pt")
# Train the model
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
```
Want to learn more about the various integrations of Ultralytics? Check out the [Ultralytics integrations guide page](../integrations/index.md) to see what other exciting capabilities are waiting to be discovered!
TensorBoard will visualize the training progress within Colab, providing real-time insights into metrics like loss and accuracy. For additional details on configuring YOLOv8 training, see our detailed [YOLOv8 Installation guide](../quickstart.md).

@ -453,3 +453,94 @@ In this guide, we focused on converting Ultralytics YOLOv8 models to NVIDIA's Te
For more information on usage details, take a look at the [TensorRT official documentation](https://docs.nvidia.com/deeplearning/tensorrt/).
If you're curious about additional Ultralytics YOLOv8 integrations, our [integration guide page](../integrations/index.md) provides an extensive selection of informative resources and insights.
## FAQ
### How do I convert YOLOv8 models to TensorRT format?
To convert your Ultralytics YOLOv8 models to TensorRT format for optimized NVIDIA GPU inference, follow these steps:
1. **Install the required package**:
```bash
pip install ultralytics
```
2. **Export your YOLOv8 model**:
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
model.export(format="engine") # creates 'yolov8n.engine'
# Run inference
model = YOLO("yolov8n.engine")
results = model("https://ultralytics.com/images/bus.jpg")
```
For more details, visit the [YOLOv8 Installation guide](../quickstart.md) and the [export documentation](../modes/export.md).
### What are the benefits of using TensorRT for YOLOv8 models?
Using TensorRT to optimize YOLOv8 models offers several benefits:
- **Faster Inference Speed**: TensorRT optimizes the model layers and uses precision calibration (INT8 and FP16) to speed up inference without significantly sacrificing accuracy (see the FP16 export sketch after this list).
- **Memory Efficiency**: TensorRT manages tensor memory dynamically, reducing overhead and improving GPU memory utilization.
- **Layer Fusion**: Combines multiple layers into single operations, reducing computational complexity.
- **Kernel Auto-Tuning**: Automatically selects optimized GPU kernels for each model layer, ensuring maximum performance.
For more information, explore the detailed features of TensorRT [here](https://developer.nvidia.com/tensorrt) and read our [TensorRT overview section](#tensorrt).
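For example, FP16 (half-precision) can be enabled at export time with a single flag; a minimal sketch, assuming a CUDA-capable GPU is available:

```python
from ultralytics import YOLO

# Export with FP16 (half-precision) enabled
model = YOLO("yolov8n.pt")
model.export(format="engine", half=True)  # creates 'yolov8n.engine'

# Reload the engine and run inference
trt_model = YOLO("yolov8n.engine")
results = trt_model("https://ultralytics.com/images/bus.jpg")
```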
### Can I use INT8 quantization with TensorRT for YOLOv8 models?
Yes, you can export YOLOv8 models using TensorRT with INT8 quantization. This process involves post-training quantization (PTQ) and calibration:
1. **Export with INT8**:
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
model.export(format="engine", batch=8, workspace=4, int8=True, data="coco.yaml")
```
2. **Run inference**:
```python
from ultralytics import YOLO
model = YOLO("yolov8n.engine", task="detect")
result = model.predict("https://ultralytics.com/images/bus.jpg")
```
For more details, refer to the [exporting TensorRT with INT8 quantization section](#exporting-tensorrt-with-int8-quantization).
### How do I deploy YOLOv8 TensorRT models on an NVIDIA Triton Inference Server?
Deploying YOLOv8 TensorRT models on an NVIDIA Triton Inference Server can be done using the following resources:
- **[Deploy Ultralytics YOLOv8 with Triton Server](../guides/triton-inference-server.md)**: Step-by-step guidance on setting up and using Triton Inference Server.
- **[NVIDIA Triton Inference Server Documentation](https://developer.nvidia.com/blog/deploying-deep-learning-nvidia-tensorrt/)**: Official NVIDIA documentation for detailed deployment options and configurations.
These guides will help you integrate YOLOv8 models efficiently in various deployment environments.
### What are the performance improvements observed with YOLOv8 models exported to TensorRT?
Performance improvements with TensorRT can vary based on the hardware used. Here are some typical benchmarks:
- **NVIDIA A100**:
- **FP32** Inference: ~0.52 ms / image
- **FP16** Inference: ~0.34 ms / image
- **INT8** Inference: ~0.28 ms / image
- Slight reduction in mAP with INT8 precision, but significant improvement in speed.
- **Consumer GPUs (e.g., RTX 3080)**:
- **FP32** Inference: ~1.06 ms / image
- **FP16** Inference: ~0.62 ms / image
- **INT8** Inference: ~0.52 ms / image
Detailed performance benchmarks for different hardware configurations can be found in the [performance section](#ultralytics-yolo-tensorrt-export-performance).
For more comprehensive insights into TensorRT performance, refer to the [Ultralytics documentation](../modes/export.md) and our performance analysis reports.

@ -124,3 +124,81 @@ In this guide, we explored how to export Ultralytics YOLOv8 models to the TF Gra
For further details on usage, visit the [TF GraphDef official documentation](https://www.tensorflow.org/api_docs/python/tf/Graph).
For more information on integrating Ultralytics YOLOv8 with other platforms and frameworks, don't forget to check out our [integration guide page](index.md). It has great resources and insights to help you make the most of YOLOv8 in your projects.
## FAQ
### How do I export a YOLOv8 model to TF GraphDef format?
Ultralytics YOLOv8 models can be exported to TensorFlow GraphDef (TF GraphDef) format seamlessly. This format provides a serialized, platform-independent representation of the model, ideal for deploying in varied environments like mobile and web. To export a YOLOv8 model to TF GraphDef, follow these steps:
!!! Example "Usage"
=== "Python"
```python
from ultralytics import YOLO
# Load the YOLOv8 model
model = YOLO("yolov8n.pt")
# Export the model to TF GraphDef format
model.export(format="pb") # creates 'yolov8n.pb'
# Load the exported TF GraphDef model
tf_graphdef_model = YOLO("yolov8n.pb")
# Run inference
results = tf_graphdef_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"
```bash
# Export a YOLOv8n PyTorch model to TF GraphDef format
yolo export model="yolov8n.pt" format="pb" # creates 'yolov8n.pb'
# Run inference with the exported model
yolo predict model="yolov8n.pb" source="https://ultralytics.com/images/bus.jpg"
```
For more information on different export options, visit the [Ultralytics documentation on model export](../modes/export.md).
### What are the benefits of using TF GraphDef for YOLOv8 model deployment?
Exporting YOLOv8 models to the TF GraphDef format offers multiple advantages, including:
1. **Platform Independence**: TF GraphDef provides a platform-independent format, allowing models to be deployed across various environments including mobile and web browsers.
2. **Optimizations**: The format enables several optimizations, such as constant folding, quantization, and graph transformations, which enhance execution efficiency and reduce memory usage.
3. **Hardware Acceleration**: Models in TF GraphDef format can leverage hardware accelerators like GPUs, TPUs, and AI chips for performance gains.
Read more about the benefits in the [TF GraphDef section](#why-should-you-export-to-tf-graphdef) of our documentation.
### Why should I use Ultralytics YOLOv8 over other object detection models?
Ultralytics YOLOv8 offers numerous advantages compared to other models like YOLOv5 and YOLOv7. Some key benefits include:
1. **State-of-the-Art Performance**: YOLOv8 provides exceptional speed and accuracy for real-time object detection, segmentation, and classification.
2. **Ease of Use**: Features a user-friendly API for model training, validation, prediction, and export, making it accessible for both beginners and experts.
3. **Broad Compatibility**: Supports multiple export formats including ONNX, TensorRT, CoreML, and TensorFlow, for versatile deployment options.
Explore further details in our [introduction to YOLOv8](https://docs.ultralytics.com/models/yolov8/).
### How can I deploy a YOLOv8 model on specialized hardware using TF GraphDef?
Once a YOLOv8 model is exported to TF GraphDef format, you can deploy it across various specialized hardware platforms. Typical deployment scenarios include:
- **TensorFlow Serving**: Use TensorFlow Serving for scalable model deployment in production environments. It supports model management and efficient serving.
- **Mobile Devices**: Convert TF GraphDef models to TensorFlow Lite, optimized for mobile and embedded devices, enabling on-device inference.
- **Web Browsers**: Deploy models using TensorFlow.js for client-side inference in web applications.
- **AI Accelerators**: Leverage TPUs and custom AI chips for accelerated inference.
Check the [deployment options](#deployment-options-with-tf-graphdef) section for detailed information.
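For a sense of what consuming the frozen graph looks like outside Ultralytics, the sketch below loads `yolov8n.pb` with TensorFlow's compat API; the tensor names passed to `prune` are hypothetical placeholders that depend on the exported graph:

```python
import tensorflow as tf

# Read the serialized GraphDef from disk
with tf.io.gfile.GFile("yolov8n.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())


# Wrap the imported graph as a callable TF2 function
def _import_graph():
    tf.compat.v1.import_graph_def(graph_def, name="")


wrapped = tf.compat.v1.wrap_function(_import_graph, [])

# Prune to input/output tensors; the names below are hypothetical and should be
# confirmed by inspecting graph_def.node in the exported model
infer = wrapped.prune("x:0", "Identity:0")
```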
### Where can I find solutions for common issues while exporting YOLOv8 models?
For troubleshooting common issues with exporting YOLOv8 models, Ultralytics provides comprehensive guides and resources. If you encounter problems during installation or model export, refer to:
- **[Common Issues Guide](../guides/yolo-common-issues.md)**: Offers solutions to frequently faced problems.
- **[Installation Guide](../quickstart.md)**: Step-by-step instructions for setting up the required packages.
These resources should help you resolve most issues related to YOLOv8 model export and deployment.

@ -118,3 +118,80 @@ In this guide, we explored how to export Ultralytics YOLOv8 models to the TF Sav
For further details on usage, visit the [TF SavedModel official documentation](https://www.tensorflow.org/guide/saved_model).
For more information on integrating Ultralytics YOLOv8 with other platforms and frameworks, don't forget to check out our [integration guide page](index.md). It's packed with great resources to help you make the most of YOLOv8 in your projects.
## FAQ
### How do I export an Ultralytics YOLO model to TensorFlow SavedModel format?
Exporting an Ultralytics YOLO model to the TensorFlow SavedModel format is straightforward. You can use either Python or CLI to achieve this:
!!! Example "Exporting YOLOv8 to TF SavedModel"
=== "Python"
```python
from ultralytics import YOLO
# Load the YOLOv8 model
model = YOLO("yolov8n.pt")
# Export the model to TF SavedModel format
model.export(format="saved_model") # creates '/yolov8n_saved_model'
# Load the exported TF SavedModel for inference
tf_savedmodel_model = YOLO("./yolov8n_saved_model")
results = tf_savedmodel_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"
```bash
# Export the YOLOv8 model to TF SavedModel format
yolo export model=yolov8n.pt format=saved_model # creates 'yolov8n_saved_model/'
# Run inference with the exported model
yolo predict model='./yolov8n_saved_model' source='https://ultralytics.com/images/bus.jpg'
```
Refer to the [Ultralytics Export documentation](../modes/export.md) for more details.
### Why should I use the TensorFlow SavedModel format?
The TensorFlow SavedModel format offers several advantages for model deployment:
- **Portability:** It provides a language-neutral format, making it easy to share and deploy models across different environments.
- **Compatibility:** Integrates seamlessly with tools like TensorFlow Serving, TensorFlow Lite, and TensorFlow.js, which are essential for deploying models on various platforms, including web and mobile applications.
- **Complete encapsulation:** Encodes the model architecture, weights, and compilation information, allowing for straightforward sharing and training continuation.
For more benefits and deployment options, check out the [Ultralytics YOLO model deployment options](../guides/model-deployment-options.md).
### What are the typical deployment scenarios for TF SavedModel?
TF SavedModel can be deployed in various environments, including:
- **TensorFlow Serving:** Ideal for production environments requiring scalable and high-performance model serving.
- **Cloud Platforms:** Supports major cloud services like Google Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure for scalable model deployment.
- **Mobile and Embedded Devices:** Using TensorFlow Lite to convert TF SavedModels allows for deployment on mobile devices, IoT devices, and microcontrollers.
- **TensorFlow Runtime:** For C++ environments that need low-latency, high-performance inference.
For detailed deployment options, visit the official guides on [deploying TensorFlow models](https://www.tensorflow.org/tfx/guide/serving).
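Before wiring a model into TensorFlow Serving, it can help to confirm the SavedModel loads and exposes the expected signature in plain TensorFlow. Below is a minimal sketch assuming the default `yolov8n_saved_model` export directory; the `images` keyword and 640x640 input shape are assumptions, so check the printed signature first.

```python
import tensorflow as tf

# Sketch: load and inspect the exported SavedModel directly with TensorFlow.
# Assumes the 'yolov8n_saved_model' directory from model.export(format="saved_model").
saved_model = tf.saved_model.load("yolov8n_saved_model")
infer = saved_model.signatures["serving_default"]

# Print the expected input names, shapes, and dtypes before calling the model
print(infer.structured_input_signature)

# Run inference on a dummy batch; the 'images' keyword and shape are assumptions,
# so use the names and shapes printed above
outputs = infer(images=tf.zeros([1, 640, 640, 3], dtype=tf.float32))
print({name: tensor.shape for name, tensor in outputs.items()})
```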
### How can I install the necessary packages to export YOLOv8 models?
To export YOLOv8 models, you need to install the `ultralytics` package. Run the following command in your terminal:
```bash
pip install ultralytics
```
For more detailed installation instructions and best practices, refer to our [Ultralytics Installation guide](../quickstart.md). If you encounter any issues, consult our [Common Issues guide](../guides/yolo-common-issues.md).
### What are the key features of the TensorFlow SavedModel format?
TF SavedModel format is beneficial for AI developers due to the following features:
- **Portability:** Allows sharing and deployment across various environments effortlessly.
- **Ease of Deployment:** Encapsulates the computational graph, trained parameters, and metadata into a single package, which simplifies loading and inference.
- **Asset Management:** Supports external assets like vocabularies, ensuring they are available when the model loads.
For further details, explore the [official TensorFlow documentation](https://www.tensorflow.org/guide/saved_model).

@ -38,9 +38,9 @@ TF.js provides a range of options to deploy your machine learning models:
- **In-Browser ML Applications:** You can build web applications that run machine learning models directly in the browser, eliminating the need for server-side computation and reducing server load.
- **Node.js Applications:** TensorFlow.js also supports deployment in Node.js environments, enabling the development of server-side machine learning applications. It is particularly useful for applications that require the processing power of a server or access to server-side data.
- **Chrome Extensions:** An interesting deployment scenario is the creation of Chrome extensions with TensorFlow.js. For instance, you can develop an extension that allows users to right-click on an image within any webpage to classify it using a pre-trained ML model. TensorFlow.js can be integrated into everyday web browsing experiences to provide immediate insights or augmentations based on machine learning.
## Exporting YOLOv8 Models to TensorFlow.js
@ -116,3 +116,79 @@ In this guide, we learned how to export Ultralytics YOLOv8 models to the TensorF
For further details on usage, visit the [TensorFlow.js official documentation](https://www.tensorflow.org/js/guide).
For more information on integrating Ultralytics YOLOv8 with other platforms and frameworks, don't forget to check out our [integration guide page](index.md). It's packed with great resources to help you make the most of YOLOv8 in your projects.
## FAQ
### How do I export Ultralytics YOLOv8 models to TensorFlow.js format?
Exporting Ultralytics YOLOv8 models to TensorFlow.js (TF.js) format is straightforward. You can follow these steps:
!!! Example "Usage"
=== "Python"
```python
from ultralytics import YOLO
# Load the YOLOv8 model
model = YOLO("yolov8n.pt")
# Export the model to TF.js format
model.export(format="tfjs") # creates '/yolov8n_web_model'
# Load the exported TF.js model
tfjs_model = YOLO("./yolov8n_web_model")
# Run inference
results = tfjs_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"
```bash
# Export a YOLOv8n PyTorch model to TF.js format
yolo export model=yolov8n.pt format=tfjs # creates '/yolov8n_web_model'
# Run inference with the exported model
yolo predict model='./yolov8n_web_model' source='https://ultralytics.com/images/bus.jpg'
```
For more details about supported export options, visit the [Ultralytics documentation page on deployment options](../guides/model-deployment-options.md).
### Why should I export my YOLOv8 models to TensorFlow.js?
Exporting YOLOv8 models to TensorFlow.js offers several advantages, including:
1. **Local Execution:** Models can run directly in the browser or Node.js, reducing latency and enhancing user experience.
2. **Cross-Platform Support:** TF.js supports multiple environments, allowing flexibility in deployment.
3. **Offline Capabilities:** Enables applications to function without an internet connection, ensuring reliability and privacy.
4. **GPU Acceleration:** Leverages WebGL for GPU acceleration, optimizing performance on devices with limited resources.
For a comprehensive overview, see our [Integrations with TensorFlow.js](../integrations/tfjs.md).
### How does TensorFlow.js benefit browser-based machine learning applications?
TensorFlow.js is specifically designed for efficient execution of ML models in browsers and Node.js environments. Here's how it benefits browser-based applications:
- **Reduces Latency:** Runs machine learning models locally, providing immediate results without relying on server-side computations.
- **Improves Privacy:** Keeps sensitive data on the user's device, minimizing security risks.
- **Enables Offline Use:** Models can operate without an internet connection, ensuring consistent functionality.
- **Supports Multiple Backends:** Offers flexibility with backends like CPU, WebGL, WebAssembly (WASM), and WebGPU for varying computational needs.
Interested in learning more about TF.js? Check out the [official TensorFlow.js guide](https://www.tensorflow.org/js/guide).
### What are the key features of TensorFlow.js for deploying YOLOv8 models?
Key features of TensorFlow.js include:
- **Cross-Platform Support:** TF.js can be used in both web browsers and Node.js, providing extensive deployment flexibility.
- **Multiple Backends:** Supports CPU, WebGL for GPU acceleration, WebAssembly (WASM), and WebGPU for advanced operations.
- **Offline Capabilities:** Models can run directly in the browser without internet connectivity, making it ideal for developing responsive web applications.
For deployment scenarios and more in-depth information, see our section on [Deployment Options with TensorFlow.js](#deploying-exported-yolov8-tensorflowjs-models).
### Can I deploy a YOLOv8 model on server-side Node.js applications using TensorFlow.js?
Yes, TensorFlow.js allows the deployment of YOLOv8 models on Node.js environments. This enables server-side machine learning applications that benefit from the processing power of a server and access to server-side data. Typical use cases include real-time data processing and machine learning pipelines on backend servers.
To get started with Node.js deployment, refer to the [Run TensorFlow.js in Node.js](https://www.tensorflow.org/js/guide/nodejs) guide from TensorFlow.

@ -120,3 +120,74 @@ In this guide, we focused on how to export to TFLite format. By converting your
For further details on usage, visit the [TFLite official documentation](https://www.tensorflow.org/lite/guide).
Also, if you're curious about other Ultralytics YOLOv8 integrations, make sure to check out our [integration guide page](../integrations/index.md). You'll find tons of helpful info and insights waiting for you there.
## FAQ
### How do I export a YOLOv8 model to TFLite format?
To export a YOLOv8 model to TFLite format, you can use the Ultralytics library. First, install the required package using:
```bash
pip install ultralytics
```
Then, use the following code snippet to export your model:
```python
from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO("yolov8n.pt")

# Export the model to TFLite format
model.export(format="tflite")  # creates 'yolov8n_float32.tflite'
```
For CLI users, you can achieve this with:
```bash
yolo export model=yolov8n.pt format=tflite # creates 'yolov8n_float32.tflite'
```
For more details, visit the [Ultralytics export guide](../modes/export.md).
### What are the benefits of using TensorFlow Lite for YOLOv8 model deployment?
TensorFlow Lite (TFLite) is an open-source deep learning framework designed for on-device inference, making it ideal for deploying YOLOv8 models on mobile, embedded, and IoT devices. Key benefits include:
- **On-device optimization**: Minimize latency and enhance privacy by processing data locally.
- **Platform compatibility**: Supports Android, iOS, embedded Linux, and microcontrollers (MCUs).
- **Performance**: Utilizes hardware acceleration to optimize model speed and efficiency.
To learn more, check out the [TFLite guide](https://www.tensorflow.org/lite/guide).
### Is it possible to run YOLOv8 TFLite models on Raspberry Pi?
Yes, you can run YOLOv8 TFLite models on Raspberry Pi to improve inference speeds. First, export your model to TFLite format as explained [here](#how-do-i-export-a-yolov8-model-to-tflite-format). Then, use a tool like TensorFlow Lite Interpreter to execute the model on your Raspberry Pi.
For further optimizations, you might consider using [Coral Edge TPU](https://coral.withgoogle.com/). For detailed steps, refer to our [Raspberry Pi deployment guide](../guides/raspberry-pi.md).
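As a rough sketch of that interpreter step, the loop below runs the exported file outside the Ultralytics API. It assumes the default `yolov8n_float32.tflite` export; decoding the raw output tensor into boxes and classes is model-specific and omitted here.

```python
import numpy as np
import tensorflow as tf  # on Raspberry Pi, the lighter 'tflite_runtime' package also works

# Sketch: run an exported YOLOv8 TFLite model with the TFLite Interpreter.
# 'yolov8n_float32.tflite' is the file created by model.export(format="tflite").
interpreter = tf.lite.Interpreter(model_path="yolov8n_float32.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy image; real code should resize and normalize a camera frame to match
image = np.zeros(input_details[0]["shape"], dtype=np.float32)
interpreter.set_tensor(input_details[0]["index"], image)
interpreter.invoke()

# Raw predictions; decoding boxes, scores, and classes is model-specific
predictions = interpreter.get_tensor(output_details[0]["index"])
print(predictions.shape)
```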
### Can I use TFLite models on microcontrollers for YOLOv8 predictions?
Yes, TFLite supports deployment on microcontrollers with very limited resources. TFLite's core runtime fits in just 16 KB of memory on an Arm Cortex-M3 and can run many basic models. For YOLOv8, this generally means heavily quantized, reduced-size variants, making TFLite suitable for devices with minimal computational power and memory.
To get started, visit the [TFLite Micro for Microcontrollers guide](https://www.tensorflow.org/lite/microcontrollers).
### What platforms are compatible with TFLite exported YOLOv8 models?
TensorFlow Lite provides extensive platform compatibility, allowing you to deploy YOLOv8 models on a wide range of devices, including:
- **Android and iOS**: Native support through TFLite Android and iOS libraries.
- **Embedded Linux**: Ideal for single-board computers such as Raspberry Pi.
- **Microcontrollers**: Suitable for MCUs with constrained resources.
For more information on deployment options, see our detailed [deployment guide](#deploying-exported-yolov8-tflite-models).
### How do I troubleshoot common issues during YOLOv8 model export to TFLite?
If you encounter errors while exporting YOLOv8 models to TFLite, common solutions include:
- **Check package compatibility**: Ensure you're using compatible versions of Ultralytics and TensorFlow. Refer to our [installation guide](../quickstart.md).
- **Model support**: Verify that the specific YOLOv8 model supports TFLite export by checking [here](../modes/export.md).
For additional troubleshooting tips, visit our [Common Issues guide](../guides/yolo-common-issues.md).
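When checking compatibility, a quick way to capture your environment for a bug report is to print the installed versions and run the built-in diagnostics. A small sketch, assuming both packages are installed:

```python
import tensorflow as tf

import ultralytics

# Print versions and environment diagnostics to include in bug reports
print(f"ultralytics {ultralytics.__version__}")
print(f"tensorflow {tf.__version__}")
ultralytics.checks()
```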

@ -124,3 +124,81 @@ In this guide, we explored the process of exporting Ultralytics YOLOv8 models to
For further details on usage, visit [TorchScript's official documentation](https://pytorch.org/docs/stable/jit.html).
Also, if you'd like to know more about other Ultralytics YOLOv8 integrations, visit our [integration guide page](../integrations/index.md). You'll find plenty of useful resources and insights there.
## FAQ
### What is Ultralytics YOLOv8 model export to TorchScript?
Exporting an Ultralytics YOLOv8 model to TorchScript allows for flexible, cross-platform deployment. TorchScript, a part of the PyTorch ecosystem, facilitates the serialization of models, which can then be executed in environments that lack Python support. This makes it ideal for deploying models on embedded systems, C++ environments, mobile applications, and even web browsers. Exporting to TorchScript enables efficient performance and wider applicability of your YOLOv8 models across diverse platforms.
### How can I export my YOLOv8 model to TorchScript using Ultralytics?
To export a YOLOv8 model to TorchScript, you can use the following example code:
!!! Example "Usage"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load the YOLOv8 model
        model = YOLO("yolov8n.pt")

        # Export the model to TorchScript format
        model.export(format="torchscript")  # creates 'yolov8n.torchscript'

        # Load the exported TorchScript model
        torchscript_model = YOLO("yolov8n.torchscript")

        # Run inference
        results = torchscript_model("https://ultralytics.com/images/bus.jpg")
        ```

    === "CLI"

        ```bash
        # Export a YOLOv8n PyTorch model to TorchScript format
        yolo export model=yolov8n.pt format=torchscript  # creates 'yolov8n.torchscript'

        # Run inference with the exported model
        yolo predict model=yolov8n.torchscript source='https://ultralytics.com/images/bus.jpg'
        ```
For more details about the export process, refer to the [Ultralytics documentation on exporting](../modes/export.md).
### Why should I use TorchScript for deploying YOLOv8 models?
Using TorchScript for deploying YOLOv8 models offers several advantages:
- **Portability**: Exported models can run in environments without the need for Python, such as C++ applications, embedded systems, or mobile devices.
- **Optimization**: TorchScript supports static graph execution and Just-In-Time (JIT) compilation, which can optimize model performance.
- **Cross-Language Integration**: TorchScript models can be integrated into other programming languages, enhancing flexibility and expandability.
- **Serialization**: Models can be serialized, allowing for platform-independent loading and inference.
For more insights into deployment, visit the [PyTorch Mobile Documentation](https://pytorch.org/mobile/home/), [TorchServe Documentation](https://pytorch.org/serve/getting_started.html), and [C++ Deployment Guide](https://pytorch.org/tutorials/advanced/cpp_export.html).
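The portability and serialization points are easy to see in practice: the exported file can be loaded with `torch.jit.load` and run without the Ultralytics API at all. A minimal sketch, where the 640x640 input shape is an assumption that should match your export settings:

```python
import torch

# Sketch: run the exported TorchScript model with plain PyTorch, independent
# of the Ultralytics API. 'yolov8n.torchscript' is created by
# model.export(format="torchscript").
ts_model = torch.jit.load("yolov8n.torchscript")
ts_model.eval()

# Dummy 640x640 RGB batch; match the imgsz used at export time
dummy = torch.zeros(1, 3, 640, 640)
with torch.no_grad():
    raw_preds = ts_model(dummy)  # raw tensor output; box decoding is up to the caller
```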
### What are the installation steps for exporting YOLOv8 models to TorchScript?
To install the required package for exporting YOLOv8 models, use the following command:
!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required package for YOLOv8
        pip install ultralytics
        ```
For detailed instructions, visit the [Ultralytics Installation guide](../quickstart.md). If any issues arise during installation, consult the [Common Issues guide](../guides/yolo-common-issues.md).
### How do I deploy my exported TorchScript YOLOv8 models?
After exporting YOLOv8 models to the TorchScript format, you can deploy them across a variety of platforms:
- **C++ API**: Ideal for low-overhead, highly efficient production environments.
- **Mobile Deployment**: Use [PyTorch Mobile](https://pytorch.org/mobile/home/) for iOS and Android applications.
- **Cloud Deployment**: Utilize services like [TorchServe](https://pytorch.org/serve/getting_started.html) for scalable server-side deployment.
Explore comprehensive guidelines for deploying models in these settings to take full advantage of TorchScript's capabilities.

@ -63,35 +63,33 @@ Before diving into the usage instructions for YOLOv8 model training with Weights
=== "Python"
```python
import wandb
from wandb.integration.ultralytics import add_wandb_callback
```python
import wandb
from wandb.integration.ultralytics import add_wandb_callback
from ultralytics import YOLO
from ultralytics import YOLO
# Step 1: Initialize a Weights & Biases run
wandb.init(project="ultralytics", job_type="training")
# Initialize a Weights & Biases run
wandb.init(project="ultralytics", job_type="training")
# Step 2: Define the YOLOv8 Model and Dataset
model_name = "yolov8n"
dataset_name = "coco8.yaml"
model = YOLO(f"{model_name}.pt")
# Load a YOLO model
model = YOLO("yolov8n.pt")
# Step 3: Add W&B Callback for Ultralytics
add_wandb_callback(model, enable_model_checkpointing=True)
# Add W&B Callback for Ultralytics
add_wandb_callback(model, enable_model_checkpointing=True)
# Step 4: Train and Fine-Tune the Model
model.train(project="ultralytics", data=dataset_name, epochs=5, imgsz=640)
# Train and Fine-Tune the Model
model.train(project="ultralytics", data="coco8.yaml", epochs=5, imgsz=640)
# Step 5: Validate the Model
model.val()
# Validate the Model
model.val()
# Step 6: Perform Inference and Log Results
model(["path/to/image1", "path/to/image2"])
# Perform Inference and Log Results
model(["path/to/image1", "path/to/image2"])
# Step 7: Finalize the W&B Run
wandb.finish()
```
# Finalize the W&B Run
wandb.finish()
```
### Understanding the Code
@ -150,3 +148,86 @@ This guide helped you explore Ultralytics' YOLOv8 integration with Weights & Bia
For further details on usage, visit [Weights & Biases' official documentation](https://docs.wandb.ai/guides/integrations/ultralytics).
Also, be sure to check out the [Ultralytics integration guide page](../integrations/index.md), to learn more about different exciting integrations.
## FAQ
### How do I install the required packages for YOLOv8 and Weights & Biases?
To install the required packages for YOLOv8 and Weights & Biases, open your command line interface and run:
```bash
pip install --upgrade ultralytics==8.0.186 wandb
```
For further guidance on installation steps, refer to our [YOLOv8 Installation guide](../quickstart.md). If you encounter issues, consult the [Common Issues guide](../guides/yolo-common-issues.md) for troubleshooting tips.
### What are the benefits of integrating Ultralytics YOLOv8 with Weights & Biases?
Integrating Ultralytics YOLOv8 with Weights & Biases offers several benefits including:
- **Real-Time Metrics Tracking:** Observe metric changes during training for immediate insights.
- **Hyperparameter Optimization:** Improve model performance by fine-tuning learning rate, batch size, etc.
- **Comparative Analysis:** Side-by-side comparison of different training runs.
- **Resource Monitoring:** Keep track of CPU, GPU, and memory usage.
- **Model Artifacts Management:** Easy access and sharing of model checkpoints.
Explore these features in detail in the Weights & Biases Dashboard section above.
### How can I configure Weights & Biases for YOLOv8 training?
To configure Weights & Biases for YOLOv8 training, follow these steps:
1. Run the following Python code to initialize Weights & Biases and log in:

    ```python
    import wandb

    wandb.login()
    ```
2. Retrieve your API key from the Weights & Biases website.
3. Use the API key to authenticate your development environment.
Detailed setup instructions can be found in the Configuring Weights & Biases section above.
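For headless or CI environments where the interactive login prompt is inconvenient, the key can also be passed programmatically. A small sketch, assuming your API key is stored in the `WANDB_API_KEY` environment variable:

```python
import os

import wandb

# Sketch: authenticate non-interactively, e.g. on a remote training server.
# Assumes the API key is exported in the WANDB_API_KEY environment variable.
wandb.login(key=os.environ["WANDB_API_KEY"])
```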
### How do I train a YOLOv8 model using Weights & Biases?
For training a YOLOv8 model using Weights & Biases, use the following steps in a Python script:
```python
import wandb
from wandb.integration.ultralytics import add_wandb_callback

from ultralytics import YOLO

# Initialize a Weights & Biases run
wandb.init(project="ultralytics", job_type="training")

# Load a YOLO model
model = YOLO("yolov8n.pt")

# Add W&B Callback for Ultralytics
add_wandb_callback(model, enable_model_checkpointing=True)

# Train and Fine-Tune the Model
model.train(project="ultralytics", data="coco8.yaml", epochs=5, imgsz=640)

# Validate the Model
model.val()

# Perform Inference and Log Results
model(["path/to/image1", "path/to/image2"])

# Finalize the W&B Run
wandb.finish()
```
This script initializes Weights & Biases, sets up the model, trains it, and logs results. For more details, visit the Usage section above.
### Why should I use Ultralytics YOLOv8 with Weights & Biases over other platforms?
Ultralytics YOLOv8 integrated with Weights & Biases offers several unique advantages:
- **High Efficiency:** Real-time tracking of training metrics and performance optimization.
- **Scalability:** Easily manage large-scale training jobs with robust resource monitoring and utilization tools.
- **Interactivity:** A user-friendly interactive UI for data visualization and model management.
- **Community and Support:** Strong integration documentation and community support with flexible customization and enhancement options.
For comparisons with other platforms like Comet and ClearML, refer to [Ultralytics integrations](../integrations/index.md).

@ -36,3 +36,25 @@ We welcome contributions from the community! If you've mastered a particular asp
To get started, please read our [Contributing Guide](../help/contributing.md) for guidelines on how to open up a Pull Request (PR) 🛠. We look forward to your contributions!
Let's work together to make the Ultralytics YOLO ecosystem more robust and versatile 🙏!
## FAQ
### How can I use Ultralytics YOLO for real-time object counting?
Ultralytics YOLOv8 can be used for real-time object counting by leveraging its advanced object detection capabilities. You can follow our detailed guide on [Object Counting](../guides/object-counting.md) to set up YOLOv8 for live video stream analysis. Simply install YOLOv8, load your model, and process video frames to count objects dynamically.
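As a taste of the approach, a naive per-frame count takes only a few lines. This is a sketch with a placeholder video path; the full guide adds ID tracking and region logic so objects are not recounted across frames.

```python
import cv2

from ultralytics import YOLO

# Sketch: naive per-frame counting; 'video.mp4' is a placeholder path.
# Production counting should also track object IDs across frames (see the guide).
model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("video.mp4")

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)
    print(f"Objects in frame: {len(results[0].boxes)}")

cap.release()
```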
### What are the benefits of using Ultralytics YOLO for security systems?
Ultralytics YOLOv8 enhances security systems by offering real-time object detection and alert mechanisms. By employing YOLOv8, you can create a security alarm system that triggers alerts when new objects are detected in the surveillance area. Learn how to set up a [Security Alarm System](../guides/security-alarm-system.md) with YOLOv8 for robust security monitoring.
### How can Ultralytics YOLO improve queue management systems?
Ultralytics YOLOv8 can significantly improve queue management systems by accurately counting and tracking people in queues, thus helping to reduce wait times and optimize service efficiency. Follow our detailed guide on [Queue Management](../guides/queue-management.md) to learn how to implement YOLOv8 for effective queue monitoring and analysis.
### Can Ultralytics YOLO be used for workout monitoring?
Yes, Ultralytics YOLOv8 can be effectively used for monitoring workouts by tracking and analyzing fitness routines in real-time. This allows for precise evaluation of exercise form and performance. Explore our guide on [Workouts Monitoring](../guides/workouts-monitoring.md) to learn how to set up an AI-powered workout monitoring system using YOLOv8.
### How does Ultralytics YOLO help in creating heatmaps for data visualization?
Ultralytics YOLOv8 can generate heatmaps to visualize data intensity across a given area, highlighting regions of high activity or interest. This feature is particularly useful in understanding patterns and trends in various computer vision tasks. Learn more about creating and using [Heatmaps](../guides/heatmaps.md) with YOLOv8 for comprehensive data analysis and visualization.

@ -21,7 +21,7 @@ theme:
name: material
language: en
custom_dir: docs/overrides/
logo: https://github.com/ultralytics/assets/raw/main/logo/Ultralytics_Logotype_Reverse.svg
logo: https://raw.githubusercontent.com/ultralytics/assets/main/logo/Ultralytics_Logotype_Reverse.svg
favicon: assets/favicon.ico
icon:
repo: fontawesome/brands/github
@ -617,7 +617,7 @@ plugins:
add_authors: True
add_json_ld: True
add_share_buttons: True
default_image: https://github.com/ultralytics/assets/blob/main/yolov8/banner-yolov8.png
default_image: https://raw.githubusercontent.com/ultralytics/assets/main/yolov8/banner-yolov8.png
- mkdocs-jupyter
- redirects:
redirect_maps:

@ -93,7 +93,7 @@ dev = [
"mkdocstrings[python]",
"mkdocs-jupyter", # for notebooks
"mkdocs-redirects", # for 301 redirects
"mkdocs-ultralytics-plugin>=0.0.48", # for meta descriptions and images, dates and authors
"mkdocs-ultralytics-plugin>=0.0.49", # for meta descriptions and images, dates and authors
]
export = [
"onnx>=1.12.0", # ONNX export

@ -276,7 +276,7 @@ class BaseModel(nn.Module):
batch (dict): Batch to compute loss on
preds (torch.Tensor | List[torch.Tensor]): Predictions.
"""
if not hasattr(self, "criterion"):
if getattr(self, "criterion", None) is None:
self.criterion = self.init_criterion()
preds = self.forward(batch["img"]) if preds is None else preds
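# Note: getattr(self, "criterion", None) is None also covers the case where the
# attribute exists but was explicitly set to None, which hasattr(self, "criterion")
# would miss, so the criterion is correctly (re)initialized in both situations.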
