@ -56,14 +56,14 @@ To stop the serve command and terminate the local server, you can use the `CTRL+
For multi-language MkDocs sites, use the following additional steps:
1. Add all new language `*.md` files to git commit: `git add docs/**/*.md -f`
2. Build all languages to the `/site` directory. Verify that the top-level `/site` directory contains `CNAME`, `robots.txt` and `sitemap.xml` files, if applicable.
```bash
# Remove existing /site directory
rm -rf site
# Build the default site
mkdocs build -f docs/mkdocs.yml

# Then loop through all language-specific YAML config files in the docs directory
for file in docs/mkdocs_*.yml; do
    echo "Building MkDocs site with configuration file: $file"
    mkdocs build -f "$file"
done
```
@ -80,7 +80,7 @@ keywords: computer vision, datasets,
2. **Annotate Images**: Annotate these images with bounding boxes, segments, or keypoints, depending on the task.
3. **Export Annotations**: Convert these annotations into the `*.txt` file format supported by Ultralytics.
4. **Organize Dataset**: Arrange your dataset into the correct folder structure. It should contain top-level `train/` and `val/` folders, and within each, `images/` and `labels/` subfolders.
@ -80,7 +80,7 @@ Contributing a new dataset involves several steps to ensure
2. **Annotate Images**: Annotate these images with bounding boxes, segments, or keypoints, depending on the task.
3. **Export Annotations**: Convert these annotations into the YOLO `*.txt` file format supported by Ultralytics.
4. **Organize Dataset**: Arrange your dataset into the correct folder structure. You should have top-level `train/` and `val/` directories, and within each, an `images/` and `labels/` subdirectory.
@ -10,7 +10,7 @@ The [Argoverse](https://www.argoverse.org/) dataset is a collection of data desi
!!! Note
The Argoverse dataset `*.zip` file required for training was removed from Amazon S3 after the shutdown of Argo AI by Ford, but we have made it available for manual download on [Google Drive](https://drive.google.com/file/d/1st9qW3BeIwQsnR0t8mRpvbsSWIo16ACi/view?usp=drive_link).
@ -12,7 +12,7 @@ Training a robust and accurate object detection model requires a comprehensive d
### Ultralytics YOLO format
The Ultralytics YOLO format is a dataset configuration format that allows you to define the dataset root directory, the relative paths to training/validation/testing image directories or `*.txt` files containing image paths, and a dictionary of class names. Here is an example:
```yaml
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/coco8 # dataset root dir
train: images/train # train images (relative to 'path')
val: images/val # val images (relative to 'path')
test: # test images (optional)

# Classes (abridged; a real config lists every class index and name)
names:
  0: person
  1: bicycle
```
@ -87,7 +87,7 @@ Contributing a new dataset involves several steps to ensure that it aligns well
2. **Annotate Images**: Annotate these images with bounding boxes, segments, or keypoints, depending on the task.
3. **Export Annotations**: Convert these annotations into the YOLO `*.txt` file format which Ultralytics supports (an example label row is shown after this list).
4. **Organize Dataset**: Arrange your dataset into the correct folder structure. You should have `train/` and `val/` top-level directories, and within each, an `images/` and `labels/` subdirectory.
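For reference, each YOLO `*.txt` label file contains one row per object in `class x_center y_center width height` format, with box coordinates normalized to the image dimensions (the values below are illustrative):

```
0 0.481719 0.634028 0.690625 0.713278
1 0.736516 0.247188 0.498875 0.476417
```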
@ -14,7 +14,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
## Recipe Walk Through
1. Begin with the necessary imports
```py
from pathlib import Path

import cv2  # OpenCV, used for contour drawing and cropping in later steps
import numpy as np  # used for mask arrays and image copies
```
@ -28,9 +28,9 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
See the Ultralytics [Quickstart](../quickstart.md/#install-ultralytics) Installation section for a quick walkthrough on installing the required libraries.
***
2. Load a model and run the `predict()` method on a source.
```py
from ultralytics import YOLO

# Load a pretrained segmentation model
m = YOLO("yolov8n-seg.pt")

# Run inference (pass a source argument to use your own images)
res = m.predict()
```
@ -55,11 +55,11 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
For additional information about Segmentation Models, visit the [Segment Task](../tasks/segment.md#models) page. To learn more about the `predict()` method, see the [Predict Mode](../modes/predict.md) section of the Documentation.
***
3. Now iterate over the results and the contours. For workflows that want to save an image to file, the source image `base-name` and the detection `class-label` are retrieved for later use (optional).
```{ .py .annotate }
# (2) Iterate detection results (helpful for multiple images)
for r in res:
    img = np.copy(r.orig_img)
    img_name = Path(r.path).stem  # source image base-name

    # Iterate each object contour (helpful for multiple detections)
    for ci, c in enumerate(r):
        label = c.names[c.boxes.cls.tolist().pop()]  # detection class-label
```
@ -79,13 +79,13 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
A single image will only iterate the first loop once. A single image with only a single detection will iterate each loop _only_ once.
***
4. Start with generating a binary mask from the source image and then draw a filled contour onto the mask. This will allow the object to be isolated from the other parts of the image. An example from `bus.jpg` for one of the detected `person` class objects is shown on the right.
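A sketch of this step, reusing the `img` and `c` variables from the loop above (the `contour` handling is unpacked in the expandable section below):

```py
# Create a binary mask with the same height and width as the source image
b_mask = np.zeros(img.shape[:2], np.uint8)

# Extract this detection's mask contour points and reshape them for OpenCV
contour = c.masks.xy.pop().astype(np.int32).reshape(-1, 1, 2)

# Draw the contour filled-in, so the object region becomes white on black
_ = cv2.drawContours(b_mask, [contour], -1, (255, 255, 255), cv2.FILLED)
```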
@ -116,7 +116,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
<summary> Expand to understand what is happening when defining the <code>contour</code> variable.</summary>
<p>
- `c.masks.xy` :: Provides the coordinates of the mask contour points in the format `(x, y)`. For more details, refer to the [Masks Section from Predict Mode](../modes/predict.md#masks).
- `.pop()` :: As `masks.xy` is a list containing a single element, this element is extracted using the `pop()` method.
@ -143,9 +143,9 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
</details>
<p></p>
***
5. Next, there are 2 options for how to move forward with the image from this point, and a subsequent option for each.
### Object Isolation Options
@ -256,9 +256,9 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
This is a built-in feature of the Ultralytics library. See the `save_crop` argument for [Predict Mode Inference Arguments](../modes/predict.md/#inference-arguments) for details.
***
6. <u>What to do next is entirely left to you as the developer.</u> A basic example of one possible next step (saving the image to file for future use) is shown.
- **NOTE:** this step is optional and can be skipped if not required for your specific use case.
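A minimal sketch of this optional step, assuming `iso_crop` holds the isolated object produced by one of the options above:

```py
# Save the isolated object to file, named by source image, class label, and index
_ = cv2.imwrite(f"{img_name}_{label}-{ci}.png", iso_crop)
```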
@ -275,7 +275,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
Here, all steps from the previous section are combined into a single block of code. For repeated use, it would be optimal to define a function to perform some or all of the commands contained in the `for`-loops, but that is an exercise left to the reader.
You've come a long way on your journey with YOLOv8. You've diligently collected data, meticulously annotated it, and put in the hours to train and rigorously evaluate your custom YOLOv8 model. Now, it’s time to put your model to work for your specific application, use case, or project. But there's a critical decision that stands before you: how to export and deploy your model effectively.
This guide walks you through YOLOv8’s deployment options and the essential factors to consider to choose the right option for your project.
Object blurring with [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics/) involves applying a blurring effect to specific detected objects in an image or video. This can be achieved using the YOLOv8 model capabilities to identify and manipulate objects within a given scene.
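As a rough sketch of the idea (the weights file and image path below are placeholders), each detected bounding-box region can be blurred in place with OpenCV:

```py
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # placeholder detection weights
im = cv2.imread("image.jpg")  # placeholder source image

# Blur each detected object's bounding-box region in place
for box in model(im)[0].boxes.xyxy.int().tolist():
    x1, y1, x2, y2 = box
    im[y1:y2, x1:x2] = cv2.blur(im[y1:y2, x1:x2], (35, 35))

cv2.imwrite("blurred.jpg", im)
```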
@ -56,6 +56,6 @@ We hope that the resources here will help you get the most out of HUB. Please br
- [**Models: Training and Exporting**](models.md). Train YOLOv5 and YOLOv8 models on your custom datasets and export them to various formats for deployment.
- [**Integrations: Options**](integrations.md). Explore different integration options for your trained models, such as TensorFlow, ONNX, OpenVINO, CoreML, and PaddlePaddle.
- [**Ultralytics HUB App**](app/index.md). Learn about the Ultralytics App for iOS and Android, which allows you to run models directly on your mobile device.
- [**iOS**](app/ios.md). Learn about YOLO CoreML models accelerated on Apple's Neural Engine on iPhones and iPads.
- [**Android**](app/android.md). Explore TFLite acceleration on mobile devices.
- [**Inference API**](inference_api.md). Understand how to use the Inference API for running your trained models in the cloud to generate predictions.
Additionally, you can try FastSAM through a [Colab demo](https://colab.research.google.com/drive/1oX14f6IneGGw612WgVlAiy91UHwFAvr9?usp=sharing) or on the [HuggingFace web demo](https://huggingface.co/spaces/An-619/FastSAM) for a visual experience.
@ -89,4 +89,4 @@ If you use Baidu's RT-DETR in your research or development work, please cite the
We would like to acknowledge Baidu and the [PaddlePaddle](https://github.com/PaddlePaddle/PaddleDetection) team for creating and maintaining this valuable resource for the computer vision community. Their contribution to the field with the development of the Vision Transformers-based real-time object detector, RT-DETR, is greatly appreciated.
@ -117,4 +117,4 @@ If you employ YOLO-NAS in your research or development work, please cite SuperGr
We express our gratitude to Deci AI's [SuperGradients](https://github.com/Deci-AI/super-gradients/) team for their efforts in creating and maintaining this valuable resource for the computer vision community. We believe YOLO-NAS, with its innovative architecture and superior object detection capabilities, will become a critical tool for developers and researchers alike.
@ -761,7 +761,5 @@ Here's a Python script using OpenCV (`cv2`) and YOLOv8 to run inference on video
This script will run predictions on each frame of the video, visualize the results, and display them in a window. The loop can be exited by pressing 'q'.
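A minimal sketch of such a loop, assuming a local `video.mp4` and the `yolov8n.pt` weights:

```py
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("video.mp4")

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    results = model(frame)  # run inference on the current frame
    cv2.imshow("YOLOv8 Inference", results[0].plot())  # visualize results
    if cv2.waitKey(1) & 0xFF == ord("q"):  # exit by pressing 'q'
        break

cap.release()
cv2.destroyAllWindows()
```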
@ -18,7 +18,6 @@ The output of an oriented object detector is a set of rotated bounding boxes tha
YOLOv8 OBB models use the `-obb` suffix, i.e. `yolov8n-obb.pt` and are pretrained on [DOTAv1](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/DOTAv1.yaml).
| Ships Detection using OBB | Vehicle Detection using OBB |
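As a minimal illustration (the image path is a placeholder, and this assumes a recent Ultralytics release where rotated boxes are exposed via the result's `obb` attribute):

```py
from ultralytics import YOLO

# Load a pretrained oriented bounding box model (note the -obb suffix)
model = YOLO("yolov8n-obb.pt")

# Run inference; rotated boxes are available on results[0].obb
results = model("aerial_image.jpg")  # placeholder image path
```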
description: 'Learn how to use Ultralytics YOLO through the command line: train models, run predictions, and export models to different formats easily using terminal commands.'
keywords: Ultralytics, YOLO, CLI, train, validation, prediction, command line interface, YOLO CLI, YOLO terminal, model training, prediction, exporting
@ -161,8 +161,8 @@ The objectness losses of the three prediction layers (`P3`, `P4`, `P5`) are weig
The YOLOv5 architecture makes some important changes to the box prediction strategy compared to earlier versions of YOLO. In YOLOv2 and YOLOv3, the box coordinates were directly predicted using the activation of the last layer.
Compare the center point offset before and after scaling. The center point offset range is adjusted from (0, 1) to (-0.5, 1.5), so the predicted offset can easily reach exactly 0 or 1, which a plain sigmoid output can only approach asymptotically.
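For reference, the revised box predictions (where $\sigma$ is the sigmoid function, $c_x, c_y$ are the grid cell offsets, and $p_w, p_h$ are the anchor prior dimensions) are:

$$
b_x = (2\sigma(t_x) - 0.5) + c_x \qquad b_y = (2\sigma(t_y) - 0.5) + c_y
$$

$$
b_w = p_w \cdot (2\sigma(t_w))^2 \qquad b_h = p_h \cdot (2\sigma(t_h))^2
$$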
@ -197,11 +197,11 @@ This process follows these steps:
@ -77,7 +77,7 @@ Export in `YOLOv5 Pytorch` format, then copy the snippet into your training scri
### 2.1 Create `dataset.yaml`
[COCO128](https://www.kaggle.com/ultralytics/coco128) is an example small tutorial dataset composed of the first 128 images in [COCO](https://cocodataset.org/) train2017. These same 128 images are used for both training and validation to verify our training pipeline is capable of overfitting. [data/coco128.yaml](https://github.com/ultralytics/yolov5/blob/master/data/coco128.yaml), shown below, is the dataset config file that defines 1) the dataset root directory `path` and relative paths to `train` / `val` / `test` image directories (or `*.txt` files with image paths) and 2) a class `names` dictionary:
```yaml
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/coco128 # dataset root dir
train: images/train2017 # train images (relative to 'path') 128 images
val: images/train2017 # val images (relative to 'path') 128 images
test: # test images (optional)

# Classes (abridged; the full file lists all 80 COCO class names)
names:
  0: person
  1: bicycle
```
@ -114,7 +114,7 @@ The label file corresponding to the above image contains 2 persons (class `0`) a
### 2.3 Organize Directories
Organize your train and val images and labels according to the example below. YOLOv5 assumes `/coco128` is inside a `/datasets` directory **next to** the `/yolov5` directory. **YOLOv5 locates labels automatically for each image** by replacing the last instance of `/images/` in each image path with `/labels/`. For example:
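```bash
../datasets/coco128/images/im0.jpg  # illustrative image path
../datasets/coco128/labels/im0.txt  # corresponding label path
```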
@ -80,7 +80,7 @@ Contributing a new dataset involves several steps to ensure that it
2. **Annotate Images**: Annotate these images with bounding boxes, segments, or keypoints, depending on the task.
3. **Export Annotations**: Convert these annotations into the YOLO `*.txt` file format that Ultralytics supports.
4. **Organize Dataset**: Organize your dataset into the correct folder structure. You should have top-level `train/` and `val/` directories, and within each, an `images/` and `labels/` subdirectory.
@ -80,7 +80,7 @@ Contributing a new dataset involves several steps to ensure
2. **Annotate Images**: Annotate these images with bounding boxes, segments, or keypoints, depending on the task.
3. **Export Annotations**: Convert these annotations into the YOLO `*.txt` file format supported by Ultralytics.
4. **Organize Dataset**: Arrange your dataset into the correct folder structure. You should have top-level `train/` and `val/` directories, and within each, an `images/` and `labels/` subdirectory.
@ -83,7 +83,7 @@ Ultralytics computer vision tasks
Annotate these images with bounding boxes, segments, or keypoints, depending on the task.
3. **Export Annotations**:
Export these annotations into the YOLO `*.txt` file format, which Ultralytics supports.
4. **Organize Dataset**:
Organize your dataset into the correct folder structure. You should have top-level `train/` and `val/` directories, and within each, `images/` and `labels/` subdirectories.
You can pass a `*.pt` pretrained PyTorch model and a `*.yaml` configuration file to the `YOLO()` class in Python to create a model instance:
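A minimal sketch of both cases:

```py
from ultralytics import YOLO

model = YOLO("yolov8n.pt")    # build from pretrained PyTorch weights
model = YOLO("yolov8n.yaml")  # or build a new, untrained model from a YAML config
```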
@ -80,7 +80,7 @@ Contributing a new dataset involves several steps to ensure that
2. **Annotate Images**: Annotate these images with bounding boxes, segments, or keypoints, depending on the task.
3. **Export Annotations**: Convert these annotations into the YOLO `*.txt` file format that Ultralytics supports.
4. **Organize Dataset**: Organize your dataset into the correct folder structure. You should have top-level `train/` and `val/` directories, and within each, an `images/` and `labels/` subdirectory.
@ -80,7 +80,7 @@ Ultralytics provides support for various
2. **Annotate Images**: Label these images with bounding boxes, segments, or keypoints, depending on the task.
3. **Export Annotations**: Convert these annotations into the YOLO `*.txt` file format, which Ultralytics supports.
4. **Organize Dataset**: Arrange your dataset into the correct folder structure. You should have top-level `train/` and `val/` directories, each containing `images/` and `labels/` subdirectories.