From 2a73bf7046d1c8eb37a51c52b1db2aaee2a09120 Mon Sep 17 00:00:00 2001
From: Ultralytics Assistant <135830346+UltralyticsAssistant@users.noreply.github.com>
Date: Fri, 6 Sep 2024 04:47:15 +0800
Subject: [PATCH] Update URLs to redirects (#16048)

---
 CONTRIBUTING.md | 4 +--
 README.md | 24 ++++++++--------
 README.zh-CN.md | 24 ++++++++--------
 docs/README.md | 8 +++---
 docs/coming_soon_template.md | 8 +++---
 docs/en/datasets/classify/imagenet10.md | 2 +-
 docs/en/datasets/classify/index.md | 2 +-
 docs/en/datasets/detect/coco8.md | 6 ++--
 docs/en/datasets/detect/roboflow-100.md | 4 +--
 docs/en/datasets/obb/dota8.md | 4 +--
 docs/en/datasets/pose/coco8-pose.md | 4 +--
 docs/en/datasets/pose/index.md | 4 +--
 docs/en/datasets/pose/tiger-pose.md | 8 +++---
 docs/en/datasets/segment/carparts-seg.md | 8 +++---
 docs/en/datasets/segment/coco8-seg.md | 6 ++--
 docs/en/datasets/segment/crack-seg.md | 6 ++--
 docs/en/datasets/segment/package-seg.md | 6 ++--
 .../guides/data-collection-and-annotation.md | 2 +-
 docs/en/guides/defining-project-goals.md | 2 +-
 docs/en/guides/docker-quickstart.md | 4 +--
 docs/en/guides/hyperparameter-tuning.md | 2 +-
 docs/en/guides/model-deployment-options.md | 2 +-
 docs/en/guides/model-deployment-practices.md | 2 +-
 docs/en/guides/model-evaluation-insights.md | 2 +-
 .../model-monitoring-and-maintenance.md | 2 +-
 docs/en/guides/model-testing.md | 2 +-
 docs/en/guides/model-training-tips.md | 2 +-
 docs/en/guides/nvidia-jetson.md | 2 +-
 ...ng-openvino-latency-vs-throughput-modes.md | 2 +-
 .../en/guides/preprocessing_annotated_data.md | 2 +-
 docs/en/guides/raspberry-pi.md | 2 +-
 docs/en/guides/steps-of-a-cv-project.md | 2 +-
 docs/en/guides/streamlit-live-inference.md | 2 +-
 docs/en/guides/triton-inference-server.md | 10 +++---
 docs/en/guides/yolo-common-issues.md | 6 ++--
 docs/en/guides/yolo-performance-metrics.md | 2 +-
 docs/en/help/CI.md | 6 ++--
 docs/en/help/FAQ.md | 4 +--
 docs/en/help/code_of_conduct.md | 4 +--
 docs/en/help/contributing.md | 4 +--
 docs/en/help/minimum_reproducible_example.md | 2 +-
 docs/en/help/privacy.md | 6 ++--
 docs/en/help/security.md | 10 +++---
 docs/en/hub/api/index.md | 6 ++--
 docs/en/hub/app/android.md | 16 +++++------
 docs/en/hub/cloud-training.md | 4 +--
 docs/en/hub/datasets.md | 14 +++++-----
 docs/en/hub/index.md | 10 +++---
 docs/en/hub/inference-api.md | 14 +++++-----
 docs/en/hub/integrations.md | 24 ++++++++--------
 docs/en/hub/models.md | 24 ++++++++--------
 docs/en/hub/pro.md | 2 +-
 docs/en/hub/projects.md | 6 ++--
 docs/en/hub/quickstart.md | 6 ++--
 docs/en/hub/teams.md | 2 +-
 docs/en/index.md | 10 +++---
 docs/en/integrations/clearml.md | 2 +-
 docs/en/integrations/comet.md | 4 +--
 docs/en/integrations/coreml.md | 6 ++--
 docs/en/integrations/dvc.md | 2 +-
 docs/en/integrations/edge-tpu.md | 2 +-
 docs/en/integrations/index.md | 16 +++++------
 docs/en/integrations/mlflow.md | 2 +-
 docs/en/integrations/neural-magic.md | 2 +-
 docs/en/integrations/ray-tune.md | 2 +-
 docs/en/integrations/roboflow.md | 28 +++++++++----------
 docs/en/integrations/tensorboard.md | 2 +-
 docs/en/models/sam.md | 2 +-
 docs/en/models/yolo-world.md | 4 +--
 docs/en/models/yolov10.md | 4 +--
 docs/en/models/yolov5.md | 2 +-
 docs/en/models/yolov8.md | 2 +-
 docs/en/models/yolov9.md | 2 +-
 docs/en/modes/train.md | 2 +-
 docs/en/quickstart.md | 2 +-
 docs/en/tasks/detect.md | 2 +-
 docs/en/tasks/pose.md | 2 +-
 docs/en/tasks/segment.md | 2 +-
 .../docker_image_quickstart_tutorial.md | 2 +-
 .../google_cloud_quickstart_tutorial.md | 2 +-
 docs/en/yolov5/index.md | 4 +--
 .../tutorials/hyperparameter_evolution.md | 2 +-
 docs/en/yolov5/tutorials/model_ensembling.md | 2 +-
 docs/en/yolov5/tutorials/model_export.md | 6 ++--
 .../tutorials/model_pruning_and_sparsity.md | 2 +-
 .../en/yolov5/tutorials/multi_gpu_training.md | 2 +-
 .../tutorials/pytorch_hub_model_loading.md | 4 +--
 .../roboflow_datasets_integration.md | 12 ++++----
 .../tutorials/test_time_augmentation.md | 4 +--
 docs/en/yolov5/tutorials/train_custom_data.md | 12 ++++----
 .../transfer_learning_with_frozen_layers.md | 2 +-
 ultralytics/cfg/models/README.md | 2 +-
 92 files changed, 253 insertions(+), 253 deletions(-)

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 0c564dadef..d884e43b4a 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -6,7 +6,7 @@ keywords: Ultralytics, YOLO, open-source, contribution, pull request, code of co

# Contributing to Ultralytics Open-Source Projects

-Welcome! We're thrilled that you're considering contributing to our [Ultralytics](https://ultralytics.com) [open-source](https://github.com/ultralytics) projects. Your involvement not only helps enhance the quality of our repositories but also benefits the entire community. This guide provides clear guidelines and best practices to help you get started.
+Welcome! We're thrilled that you're considering contributing to our [Ultralytics](https://www.ultralytics.com/) [open-source](https://github.com/ultralytics) projects. Your involvement not only helps enhance the quality of our repositories but also benefits the entire community. This guide provides clear guidelines and best practices to help you get started.

@@ -131,7 +131,7 @@ We encourage all contributors to familiarize themselves with the terms of the AG

## Conclusion

-Thank you for your interest in contributing to [Ultralytics](https://ultralytics.com) [open-source](https://github.com/ultralytics) YOLO projects. Your participation is essential in shaping the future of our software and building a vibrant community of innovation and collaboration. Whether you're enhancing code, reporting bugs, or suggesting new features, your contributions are invaluable.
+Thank you for your interest in contributing to [Ultralytics](https://www.ultralytics.com/) [open-source](https://github.com/ultralytics) YOLO projects. Your participation is essential in shaping the future of our software and building a vibrant community of innovation and collaboration. Whether you're enhancing code, reporting bugs, or suggesting new features, your contributions are invaluable.

We're excited to see your ideas come to life and appreciate your commitment to advancing object detection technology. Together, let's continue to grow and innovate in this exciting open-source journey. Happy coding! 🚀🌟

diff --git a/README.md b/README.md
index 0505701dd8..1cc1686e67 100644
--- a/README.md
+++ b/README.md
@@ -4,7 +4,7 @@
-[中文](https://docs.ultralytics.com/zh/) | [한국어](https://docs.ultralytics.com/ko/) | [日本語](https://docs.ultralytics.com/ja/) | [Русский](https://docs.ultralytics.com/ru/) | [Deutsch](https://docs.ultralytics.com/de/) | [Français](https://docs.ultralytics.com/fr/) | [Español](https://docs.ultralytics.com/es/) | [Português](https://docs.ultralytics.com/pt/) | [Türkçe](https://docs.ultralytics.com/tr/) | [Tiếng Việt](https://docs.ultralytics.com/vi/) | [العربية](https://docs.ultralytics.com/ar/)
@@ -21,7 +21,7 @@ keywords: COCO8, Ultralytics, dataset, object detection, YOLOv8, training, valid
Watch: Ultralytics COCO Dataset Overview
@@ -101,7 +101,7 @@ The dataset has been released under the [AGPL-3.0 License](https://git
### What is the Ultralytics Tiger-Pose dataset used for?
-The Ultralytics Tiger-Pose dataset is designed for pose estimation tasks, consisting of 263 images sourced from a [YouTube video](https://www.youtube.com/watch?v=MIBAT6BGE6U&pp=ygUbVGlnZXIgd2Fsa2luZyByZWZlcmVuY2UubXA0). The dataset is divided into 210 training images and 53 validation images. It is particularly useful for testing, training, and refining pose estimation algorithms using [Ultralytics HUB](https://hub.ultralytics.com) and [YOLOv8](https://github.com/ultralytics/ultralytics).
+The Ultralytics Tiger-Pose dataset is designed for pose estimation tasks, consisting of 263 images sourced from a [YouTube video](https://www.youtube.com/watch?v=MIBAT6BGE6U&pp=ygUbVGlnZXIgd2Fsa2luZyByZWZlcmVuY2UubXA0). The dataset is divided into 210 training images and 53 validation images. It is particularly useful for testing, training, and refining pose estimation algorithms using [Ultralytics HUB](https://hub.ultralytics.com/) and [YOLOv8](https://github.com/ultralytics/ultralytics).
### How do I train a YOLOv8 model on the Tiger-Pose dataset?
@@ -161,4 +161,4 @@ To perform inference using a YOLOv8 model trained on the Tiger-Pose dataset, you
### What are the benefits of using the Tiger-Pose dataset for pose estimation?
-The Tiger-Pose dataset, despite its manageable size of 210 images for training, provides a diverse collection of images that are ideal for testing pose estimation pipelines. The dataset helps identify potential errors and acts as a preliminary step before working with larger datasets. Additionally, the dataset supports the training and refinement of pose estimation algorithms using advanced tools like [Ultralytics HUB](https://hub.ultralytics.com) and [YOLOv8](https://github.com/ultralytics/ultralytics), enhancing model performance and accuracy.
+The Tiger-Pose dataset, despite its manageable size of 210 images for training, provides a diverse collection of images that are ideal for testing pose estimation pipelines. The dataset helps identify potential errors and acts as a preliminary step before working with larger datasets. Additionally, the dataset supports the training and refinement of pose estimation algorithms using advanced tools like [Ultralytics HUB](https://hub.ultralytics.com/) and [YOLOv8](https://github.com/ultralytics/ultralytics), enhancing model performance and accuracy.
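For context, the training flow the Tiger-Pose FAQ describes reduces to a few lines — a minimal sketch using the `tiger-pose.yaml` config bundled with the package:

```python
from ultralytics import YOLO

# Fine-tune a pretrained YOLOv8 pose model on the Tiger-Pose dataset
model = YOLO("yolov8n-pose.pt")
results = model.train(data="tiger-pose.yaml", epochs=100, imgsz=640)
```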
diff --git a/docs/en/datasets/segment/carparts-seg.md b/docs/en/datasets/segment/carparts-seg.md
index d5799954be..f0d020ff46 100644
--- a/docs/en/datasets/segment/carparts-seg.md
+++ b/docs/en/datasets/segment/carparts-seg.md
@@ -6,7 +6,7 @@ keywords: Carparts Segmentation Dataset, Roboflow, computer vision, automotive A
# Roboflow Universe Carparts Segmentation Dataset
-The [Roboflow](https://roboflow.com/?ref=ultralytics) [Carparts Segmentation Dataset](https://universe.roboflow.com/gianmarco-russo-vt9xr/car-seg-un1pm) is a curated collection of images and videos designed for computer vision applications, specifically focusing on segmentation tasks related to car parts. This dataset provides a diverse set of visuals captured from multiple perspectives, offering valuable annotated examples for training and testing segmentation models.
+The [Roboflow](https://roboflow.com/?ref=ultralytics) [Carparts Segmentation Dataset](https://universe.roboflow.com/gianmarco-russo-vt9xr/car-seg-un1pm?ref=ultralytics) is a curated collection of images and videos designed for computer vision applications, specifically focusing on segmentation tasks related to car parts. This dataset provides a diverse set of visuals captured from multiple perspectives, offering valuable annotated examples for training and testing segmentation models.
Whether you're working on automotive research, developing AI solutions for vehicle maintenance, or exploring computer vision applications, the Carparts Segmentation Dataset serves as a valuable resource for enhancing accuracy and efficiency in your projects.
@@ -100,13 +100,13 @@ If you integrate the Carparts Segmentation dataset into your research or develop
}
```
-We extend our thanks to the Roboflow team for their dedication in developing and managing the Carparts Segmentation dataset, a valuable resource for vehicle maintenance and research projects. For additional details about the Carparts Segmentation dataset and its creators, please visit the [CarParts Segmentation Dataset Page](https://universe.roboflow.com/gianmarco-russo-vt9xr/car-seg-un1pm).
+We extend our thanks to the Roboflow team for their dedication in developing and managing the Carparts Segmentation dataset, a valuable resource for vehicle maintenance and research projects. For additional details about the Carparts Segmentation dataset and its creators, please visit the [CarParts Segmentation Dataset Page](https://universe.roboflow.com/gianmarco-russo-vt9xr/car-seg-un1pm?ref=ultralytics).
## FAQ
### What is the Roboflow Carparts Segmentation Dataset?
-The [Roboflow Carparts Segmentation Dataset](https://universe.roboflow.com/gianmarco-russo-vt9xr/car-seg-un1pm) is a curated collection of images and videos specifically designed for car part segmentation tasks in computer vision. This dataset includes a diverse range of visuals captured from multiple perspectives, making it an invaluable resource for training and testing segmentation models for automotive applications.
+The [Roboflow Carparts Segmentation Dataset](https://universe.roboflow.com/gianmarco-russo-vt9xr/car-seg-un1pm?ref=ultralytics) is a curated collection of images and videos specifically designed for car part segmentation tasks in computer vision. This dataset includes a diverse range of visuals captured from multiple perspectives, making it an invaluable resource for training and testing segmentation models for automotive applications.
### How can I use the Carparts Segmentation Dataset with Ultralytics YOLOv8?
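A minimal sketch of the usage this question covers, assuming the bundled `carparts-seg.yaml` dataset config:

```python
from ultralytics import YOLO

# Fine-tune a pretrained YOLOv8 segmentation model on Carparts-Seg
model = YOLO("yolov8n-seg.pt")
results = model.train(data="carparts-seg.yaml", epochs=100, imgsz=640)
```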
@@ -157,4 +157,4 @@ The dataset configuration file for the Carparts Segmentation dataset, `carparts-
The Carparts Segmentation Dataset provides rich, annotated data essential for developing high-accuracy segmentation models in automotive computer vision. This dataset's diversity and detailed annotations improve model training, making it ideal for applications like vehicle maintenance automation, enhancing vehicle safety systems, and supporting autonomous driving technologies. Partnering with a robust dataset accelerates AI development and ensures better model performance.
-For more details, visit the [CarParts Segmentation Dataset Page](https://universe.roboflow.com/gianmarco-russo-vt9xr/car-seg-un1pm).
+For more details, visit the [CarParts Segmentation Dataset Page](https://universe.roboflow.com/gianmarco-russo-vt9xr/car-seg-un1pm?ref=ultralytics).
diff --git a/docs/en/datasets/segment/coco8-seg.md b/docs/en/datasets/segment/coco8-seg.md
index f22d6a68a3..e4aa6bef84 100644
--- a/docs/en/datasets/segment/coco8-seg.md
+++ b/docs/en/datasets/segment/coco8-seg.md
@@ -8,9 +8,9 @@ keywords: COCO8-Seg, Ultralytics, segmentation dataset, YOLOv8, COCO 2017, model
## Introduction
-[Ultralytics](https://ultralytics.com) COCO8-Seg is a small, but versatile instance segmentation dataset composed of the first 8 images of the COCO train 2017 set, 4 for training and 4 for validation. This dataset is ideal for testing and debugging segmentation models, or for experimenting with new detection approaches. With 8 images, it is small enough to be easily manageable, yet diverse enough to test training pipelines for errors and act as a sanity check before training larger datasets.
+[Ultralytics](https://www.ultralytics.com/) COCO8-Seg is a small, but versatile instance segmentation dataset composed of the first 8 images of the COCO train 2017 set, 4 for training and 4 for validation. This dataset is ideal for testing and debugging segmentation models, or for experimenting with new detection approaches. With 8 images, it is small enough to be easily manageable, yet diverse enough to test training pipelines for errors and act as a sanity check before training larger datasets.
-This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.com) and [YOLOv8](https://github.com/ultralytics/ultralytics).
+This dataset is intended for use with Ultralytics [HUB](https://hub.ultralytics.com/) and [YOLOv8](https://github.com/ultralytics/ultralytics).
## Dataset YAML
@@ -82,7 +82,7 @@ We would like to acknowledge the COCO Consortium for creating and maintaining th
### What is the COCO8-Seg dataset, and how is it used in Ultralytics YOLOv8?
-The **COCO8-Seg dataset** is a compact instance segmentation dataset by Ultralytics, consisting of the first 8 images from the COCO train 2017 set—4 images for training and 4 for validation. This dataset is tailored for testing and debugging segmentation models or experimenting with new detection methods. It is particularly useful with Ultralytics [YOLOv8](https://github.com/ultralytics/ultralytics) and [HUB](https://hub.ultralytics.com) for rapid iteration and pipeline error-checking before scaling to larger datasets. For detailed usage, refer to the model [Training](../../modes/train.md) page.
+The **COCO8-Seg dataset** is a compact instance segmentation dataset by Ultralytics, consisting of the first 8 images from the COCO train 2017 set—4 images for training and 4 for validation. This dataset is tailored for testing and debugging segmentation models or experimenting with new detection methods. It is particularly useful with Ultralytics [YOLOv8](https://github.com/ultralytics/ultralytics) and [HUB](https://hub.ultralytics.com/) for rapid iteration and pipeline error-checking before scaling to larger datasets. For detailed usage, refer to the model [Training](../../modes/train.md) page.
### How can I train a YOLOv8n-seg model using the COCO8-Seg dataset?
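A minimal sketch answering the question above, using the `coco8-seg.yaml` config that ships with the package:

```python
from ultralytics import YOLO

# Train on the 8-image COCO8-Seg dataset as a pipeline sanity check
model = YOLO("yolov8n-seg.pt")
results = model.train(data="coco8-seg.yaml", epochs=100, imgsz=640)
```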
diff --git a/docs/en/datasets/segment/crack-seg.md b/docs/en/datasets/segment/crack-seg.md
index 5fa99dfbbf..32113dfc6d 100644
--- a/docs/en/datasets/segment/crack-seg.md
+++ b/docs/en/datasets/segment/crack-seg.md
@@ -6,7 +6,7 @@ keywords: Roboflow, Crack Segmentation Dataset, Ultralytics, transportation safe
# Roboflow Universe Crack Segmentation Dataset
-The [Roboflow](https://roboflow.com/?ref=ultralytics) [Crack Segmentation Dataset](https://universe.roboflow.com/university-bswxt/crack-bphdr) stands out as an extensive resource designed specifically for individuals involved in transportation and public safety studies. It is equally beneficial for those working on the development of self-driving car models or simply exploring computer vision applications for recreational purposes.
+The [Roboflow](https://roboflow.com/?ref=ultralytics) [Crack Segmentation Dataset](https://universe.roboflow.com/university-bswxt/crack-bphdr?ref=ultralytics) stands out as an extensive resource designed specifically for individuals involved in transportation and public safety studies. It is equally beneficial for those working on the development of self-driving car models or simply exploring computer vision applications for recreational purposes.
Comprising a total of 4029 static images captured from diverse road and wall scenarios, this dataset emerges as a valuable asset for tasks related to crack segmentation. Whether you are delving into the intricacies of transportation research or seeking to enhance the accuracy of your self-driving car models, this dataset provides a rich and varied collection of images to support your endeavors.
@@ -90,13 +90,13 @@ If you incorporate the crack segmentation dataset into your research or developm
}
```
-We would like to acknowledge the Roboflow team for creating and maintaining the Crack Segmentation dataset as a valuable resource for the road safety and research projects. For more information about the Crack segmentation dataset and its creators, visit the [Crack Segmentation Dataset Page](https://universe.roboflow.com/university-bswxt/crack-bphdr).
+We would like to acknowledge the Roboflow team for creating and maintaining the Crack Segmentation dataset as a valuable resource for the road safety and research projects. For more information about the Crack segmentation dataset and its creators, visit the [Crack Segmentation Dataset Page](https://universe.roboflow.com/university-bswxt/crack-bphdr?ref=ultralytics).
## FAQ
### What is the Roboflow Crack Segmentation Dataset?
-The [Roboflow Crack Segmentation Dataset](https://universe.roboflow.com/university-bswxt/crack-bphdr) is a comprehensive collection of 4029 static images designed specifically for transportation and public safety studies. It is ideal for tasks such as self-driving car model development and infrastructure maintenance. The dataset includes training, testing, and validation sets, aiding in accurate crack detection and segmentation.
+The [Roboflow Crack Segmentation Dataset](https://universe.roboflow.com/university-bswxt/crack-bphdr?ref=ultralytics) is a comprehensive collection of 4029 static images designed specifically for transportation and public safety studies. It is ideal for tasks such as self-driving car model development and infrastructure maintenance. The dataset includes training, testing, and validation sets, aiding in accurate crack detection and segmentation.
### How do I train a model using the Crack Segmentation Dataset with Ultralytics YOLOv8?
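A minimal sketch for this training question, assuming the bundled `crack-seg.yaml` config; validation on the held-out split follows from the same object:

```python
from ultralytics import YOLO

# Fine-tune on Crack-Seg, then validate on its held-out split
model = YOLO("yolov8n-seg.pt")
model.train(data="crack-seg.yaml", epochs=100, imgsz=640)
metrics = model.val()
```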
diff --git a/docs/en/datasets/segment/package-seg.md b/docs/en/datasets/segment/package-seg.md
index bf88410fb6..2aec99a21f 100644
--- a/docs/en/datasets/segment/package-seg.md
+++ b/docs/en/datasets/segment/package-seg.md
@@ -6,7 +6,7 @@ keywords: Roboflow, Package Segmentation Dataset, computer vision, package ident
# Roboflow Universe Package Segmentation Dataset
-The [Roboflow](https://roboflow.com/?ref=ultralytics) [Package Segmentation Dataset](https://universe.roboflow.com/factorypackage/factory_package) is a curated collection of images specifically tailored for tasks related to package segmentation in the field of computer vision. This dataset is designed to assist researchers, developers, and enthusiasts working on projects related to package identification, sorting, and handling.
+The [Roboflow](https://roboflow.com/?ref=ultralytics) [Package Segmentation Dataset](https://universe.roboflow.com/factorypackage/factory_package?ref=ultralytics) is a curated collection of images specifically tailored for tasks related to package segmentation in the field of computer vision. This dataset is designed to assist researchers, developers, and enthusiasts working on projects related to package identification, sorting, and handling.
Containing a diverse set of images showcasing various packages in different contexts and environments, the dataset serves as a valuable resource for training and evaluating segmentation models. Whether you are engaged in logistics, warehouse automation, or any application requiring precise package analysis, the Package Segmentation Dataset provides a targeted and comprehensive set of images to enhance the performance of your computer vision algorithms.
@@ -89,13 +89,13 @@ If you integrate the crack segmentation dataset into your research or developmen
}
```
-We express our gratitude to the Roboflow team for their efforts in creating and maintaining the Package Segmentation dataset, a valuable asset for logistics and research projects. For additional details about the Package Segmentation dataset and its creators, please visit the [Package Segmentation Dataset Page](https://universe.roboflow.com/factorypackage/factory_package).
+We express our gratitude to the Roboflow team for their efforts in creating and maintaining the Package Segmentation dataset, a valuable asset for logistics and research projects. For additional details about the Package Segmentation dataset and its creators, please visit the [Package Segmentation Dataset Page](https://universe.roboflow.com/factorypackage/factory_package?ref=ultralytics).
## FAQ
### What is the Roboflow Package Segmentation Dataset and how can it help in computer vision projects?
-The [Roboflow Package Segmentation Dataset](https://universe.roboflow.com/factorypackage/factory_package) is a curated collection of images tailored for tasks involving package segmentation. It includes diverse images of packages in various contexts, making it invaluable for training and evaluating segmentation models. This dataset is particularly useful for applications in logistics, warehouse automation, and any project requiring precise package analysis. It helps optimize logistics and enhance vision models for accurate package identification and sorting.
+The [Roboflow Package Segmentation Dataset](https://universe.roboflow.com/factorypackage/factory_package?ref=ultralytics) is a curated collection of images tailored for tasks involving package segmentation. It includes diverse images of packages in various contexts, making it invaluable for training and evaluating segmentation models. This dataset is particularly useful for applications in logistics, warehouse automation, and any project requiring precise package analysis. It helps optimize logistics and enhance vision models for accurate package identification and sorting.
### How do I train an Ultralytics YOLOv8 model on the Package Segmentation Dataset?
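A minimal sketch for this question, assuming the bundled `package-seg.yaml` config, with a quick prediction call afterwards:

```python
from ultralytics import YOLO

# Fine-tune on Package-Seg, then run a quick prediction with the trained model
model = YOLO("yolov8n-seg.pt")
model.train(data="package-seg.yaml", epochs=100, imgsz=640)
results = model("path/to/image.jpg")
```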
diff --git a/docs/en/guides/data-collection-and-annotation.md b/docs/en/guides/data-collection-and-annotation.md
index 2a7cb149f8..7939d12a42 100644
--- a/docs/en/guides/data-collection-and-annotation.md
+++ b/docs/en/guides/data-collection-and-annotation.md
@@ -137,7 +137,7 @@ Bouncing your ideas and queries off other computer vision enthusiasts can help a
### Where to Find Help and Support
- **GitHub Issues:** Visit the YOLOv8 GitHub repository and use the [Issues tab](https://github.com/ultralytics/ultralytics/issues) to raise questions, report bugs, and suggest features. The community and maintainers are there to help with any issues you face.
-- **Ultralytics Discord Server:** Join the [Ultralytics Discord server](https://ultralytics.com/discord/) to connect with other users and developers, get support, share knowledge, and brainstorm ideas.
+- **Ultralytics Discord Server:** Join the [Ultralytics Discord server](https://discord.com/invite/ultralytics) to connect with other users and developers, get support, share knowledge, and brainstorm ideas.
### Official Documentation
diff --git a/docs/en/guides/defining-project-goals.md b/docs/en/guides/defining-project-goals.md
index 3282cfe2d5..fcd32f12f2 100644
--- a/docs/en/guides/defining-project-goals.md
+++ b/docs/en/guides/defining-project-goals.md
@@ -115,7 +115,7 @@ Connecting with other computer vision enthusiasts can be incredibly helpful for
### Community Support Channels
- **GitHub Issues:** Head over to the YOLOv8 GitHub repository. You can use the [Issues tab](https://github.com/ultralytics/ultralytics/issues) to raise questions, report bugs, and suggest features. The community and maintainers can assist with specific problems you encounter.
-- **Ultralytics Discord Server:** Become part of the [Ultralytics Discord server](https://ultralytics.com/discord/). Connect with fellow users and developers, seek support, exchange knowledge, and discuss ideas.
+- **Ultralytics Discord Server:** Become part of the [Ultralytics Discord server](https://discord.com/invite/ultralytics). Connect with fellow users and developers, seek support, exchange knowledge, and discuss ideas.
### Comprehensive Guides and Documentation
diff --git a/docs/en/guides/docker-quickstart.md b/docs/en/guides/docker-quickstart.md
index 90b86ed6d4..6d08fac0b5 100644
--- a/docs/en/guides/docker-quickstart.md
+++ b/docs/en/guides/docker-quickstart.md
@@ -10,7 +10,7 @@ keywords: Ultralytics, Docker, Quickstart Guide, CPU support, GPU support, NVIDI
diff --git a/docs/en/guides/steps-of-a-cv-project.md b/docs/en/guides/steps-of-a-cv-project.md
index 3b98171d30..ec9b84ff2f 100644
--- a/docs/en/guides/steps-of-a-cv-project.md
+++ b/docs/en/guides/steps-of-a-cv-project.md
@@ -189,7 +189,7 @@ Connecting with a community of computer vision enthusiasts can help you tackle a
### Community Resources
- **GitHub Issues:** Check out the [YOLOv8 GitHub repository](https://github.com/ultralytics/ultralytics/issues) and use the Issues tab to ask questions, report bugs, and suggest new features. The active community and maintainers are there to help with specific issues.
-- **Ultralytics Discord Server:** Join the [Ultralytics Discord server](https://ultralytics.com/discord/) to interact with other users and developers, get support, and share insights.
+- **Ultralytics Discord Server:** Join the [Ultralytics Discord server](https://discord.com/invite/ultralytics) to interact with other users and developers, get support, and share insights.
### Official Documentation
diff --git a/docs/en/guides/streamlit-live-inference.md b/docs/en/guides/streamlit-live-inference.md
index d6a356136d..24388eb302 100644
--- a/docs/en/guides/streamlit-live-inference.md
+++ b/docs/en/guides/streamlit-live-inference.md
@@ -86,7 +86,7 @@ Engage with the community to learn more, troubleshoot issues, and share your pro
### Where to Find Help and Support
- **GitHub Issues:** Visit the [Ultralytics GitHub repository](https://github.com/ultralytics/ultralytics/issues) to raise questions, report bugs, and suggest features.
-- **Ultralytics Discord Server:** Join the [Ultralytics Discord server](https://ultralytics.com/discord/) to connect with other users and developers, get support, share knowledge, and brainstorm ideas.
+- **Ultralytics Discord Server:** Join the [Ultralytics Discord server](https://discord.com/invite/ultralytics) to connect with other users and developers, get support, share knowledge, and brainstorm ideas.
### Official Documentation
diff --git a/docs/en/guides/triton-inference-server.md b/docs/en/guides/triton-inference-server.md
index dc69e9f390..1879bf78f3 100644
--- a/docs/en/guides/triton-inference-server.md
+++ b/docs/en/guides/triton-inference-server.md
@@ -6,7 +6,7 @@ keywords: Triton Inference Server, YOLOv8, Ultralytics, NVIDIA, deep learning, A
# Triton Inference Server with Ultralytics YOLOv8
-The [Triton Inference Server](https://developer.nvidia.com/nvidia-triton-inference-server) (formerly known as TensorRT Inference Server) is an open-source software solution developed by NVIDIA. It provides a cloud inference solution optimized for NVIDIA GPUs. Triton simplifies the deployment of AI models at scale in production. Integrating Ultralytics YOLOv8 with Triton Inference Server allows you to deploy scalable, high-performance deep learning inference workloads. This guide provides steps to set up and test the integration.
+The [Triton Inference Server](https://developer.nvidia.com/triton-inference-server) (formerly known as TensorRT Inference Server) is an open-source software solution developed by NVIDIA. It provides a cloud inference solution optimized for NVIDIA GPUs. Triton simplifies the deployment of AI models at scale in production. Integrating Ultralytics YOLOv8 with Triton Inference Server allows you to deploy scalable, high-performance deep learning inference workloads. This guide provides steps to set up and test the integration.
@@ -147,7 +147,7 @@ By following the above steps, you can deploy and run Ultralytics YOLOv8 models e
### How do I set up Ultralytics YOLOv8 with NVIDIA Triton Inference Server?
-Setting up [Ultralytics YOLOv8](https://docs.ultralytics.com/models/yolov8) with [NVIDIA Triton Inference Server](https://developer.nvidia.com/nvidia-triton-inference-server) involves a few key steps:
+Setting up [Ultralytics YOLOv8](https://docs.ultralytics.com/models/yolov8) with [NVIDIA Triton Inference Server](https://developer.nvidia.com/triton-inference-server) involves a few key steps:
1. **Export YOLOv8 to ONNX format**:
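A sketch of this first step via the standard export API (the `dynamic=True` flag for variable input shapes is an assumption, not taken from this hunk):

```python
from ultralytics import YOLO

# Export a pretrained YOLOv8 model to ONNX for the Triton model repository
model = YOLO("yolov8n.pt")
onnx_file = model.export(format="onnx", dynamic=True)
```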
@@ -213,7 +213,7 @@ This setup can help you efficiently deploy YOLOv8 models at scale on Triton Infe
### What benefits does using Ultralytics YOLOv8 with NVIDIA Triton Inference Server offer?
-Integrating [Ultralytics YOLOv8](../models/yolov8.md) with [NVIDIA Triton Inference Server](https://developer.nvidia.com/nvidia-triton-inference-server) provides several advantages:
+Integrating [Ultralytics YOLOv8](../models/yolov8.md) with [NVIDIA Triton Inference Server](https://developer.nvidia.com/triton-inference-server) provides several advantages:
- **Scalable AI Inference**: Triton allows serving multiple models from a single server instance, supporting dynamic model loading and unloading, making it highly scalable for diverse AI workloads.
- **High Performance**: Optimized for NVIDIA GPUs, Triton Inference Server ensures high-speed inference operations, perfect for real-time applications such as object detection.
@@ -223,7 +223,7 @@ For detailed instructions on setting up and running YOLOv8 with Triton, you can
### Why should I export my YOLOv8 model to ONNX format before using Triton Inference Server?
-Using ONNX (Open Neural Network Exchange) format for your [Ultralytics YOLOv8](../models/yolov8.md) model before deploying it on [NVIDIA Triton Inference Server](https://developer.nvidia.com/nvidia-triton-inference-server) offers several key benefits:
+Using ONNX (Open Neural Network Exchange) format for your [Ultralytics YOLOv8](../models/yolov8.md) model before deploying it on [NVIDIA Triton Inference Server](https://developer.nvidia.com/triton-inference-server) offers several key benefits:
- **Interoperability**: ONNX format supports transfer between different deep learning frameworks (such as PyTorch, TensorFlow), ensuring broader compatibility.
- **Optimization**: Many deployment environments, including Triton, optimize for ONNX, enabling faster inference and better performance.
@@ -242,7 +242,7 @@ You can follow the steps in the [exporting guide](../modes/export.md) to complet
### Can I run inference using the Ultralytics YOLOv8 model on Triton Inference Server?
-Yes, you can run inference using the [Ultralytics YOLOv8](../models/yolov8.md) model on [NVIDIA Triton Inference Server](https://developer.nvidia.com/nvidia-triton-inference-server). Once your model is set up in the Triton Model Repository and the server is running, you can load and run inference on your model as follows:
+Yes, you can run inference using the [Ultralytics YOLOv8](../models/yolov8.md) model on [NVIDIA Triton Inference Server](https://developer.nvidia.com/triton-inference-server). Once your model is set up in the Triton Model Repository and the server is running, you can load and run inference on your model as follows:
```python
from ultralytics import YOLO

# Load the Triton Server model
model = YOLO("http://localhost:8000/yolo", task="detect")

# Run inference on the server
results = model("path/to/image.jpg")
```
diff --git a/docs/en/guides/yolo-common-issues.md b/docs/en/guides/yolo-common-issues.md
index 849b44c42a..77351eaa06 100644
--- a/docs/en/guides/yolo-common-issues.md
+++ b/docs/en/guides/yolo-common-issues.md
@@ -121,7 +121,7 @@ You can access these metrics from the training logs or by using tools like Tenso
- [TensorBoard](https://www.tensorflow.org/tensorboard): TensorBoard is a popular choice for visualizing training metrics, including loss, accuracy, and more. You can integrate it with your YOLOv8 training process.
- [Comet](https://bit.ly/yolov8-readme-comet): Comet provides an extensive toolkit for experiment tracking and comparison. It allows you to track metrics, hyperparameters, and even model weights. Integration with YOLO models is also straightforward, providing you with a complete overview of your experiment cycle.
-- [Ultralytics HUB](https://hub.ultralytics.com): Ultralytics HUB offers a specialized environment for tracking YOLO models, giving you a one-stop platform to manage metrics, datasets, and even collaborate with your team. Given its tailored focus on YOLO, it offers more customized tracking options.
+- [Ultralytics HUB](https://hub.ultralytics.com/): Ultralytics HUB offers a specialized environment for tracking YOLO models, giving you a one-stop platform to manage metrics, datasets, and even collaborate with your team. Given its tailored focus on YOLO, it offers more customized tracking options.
Each of these tools offers its own set of advantages, so you may want to consider the specific needs of your project when making a choice.
@@ -270,7 +270,7 @@ Engaging with a community of like-minded individuals can significantly enhance y
**GitHub Issues:** The YOLOv8 repository on GitHub has an [Issues tab](https://github.com/ultralytics/ultralytics/issues) where you can ask questions, report bugs, and suggest new features. The community and maintainers are active here, and it's a great place to get help with specific problems.
-**Ultralytics Discord Server:** Ultralytics has a [Discord server](https://ultralytics.com/discord/) where you can interact with other users and the developers.
+**Ultralytics Discord Server:** Ultralytics has a [Discord server](https://discord.com/invite/ultralytics) where you can interact with other users and the developers.
### Official Documentation and Resources
@@ -312,7 +312,7 @@ This sets the training process to the first GPU. Consult the `nvidia-smi` comman
### How can I monitor and track my YOLOv8 model training progress?
-Tracking and visualizing training progress can be efficiently managed through tools like [TensorBoard](https://www.tensorflow.org/tensorboard), [Comet](https://bit.ly/yolov8-readme-comet), and [Ultralytics HUB](https://hub.ultralytics.com). These tools allow you to log and visualize metrics such as loss, precision, recall, and mAP. Implementing [early stopping](#continuous-monitoring-parameters) based on these metrics can also help achieve better training outcomes.
+Tracking and visualizing training progress can be efficiently managed through tools like [TensorBoard](https://www.tensorflow.org/tensorboard), [Comet](https://bit.ly/yolov8-readme-comet), and [Ultralytics HUB](https://hub.ultralytics.com/). These tools allow you to log and visualize metrics such as loss, precision, recall, and mAP. Implementing [early stopping](#continuous-monitoring-parameters) based on these metrics can also help achieve better training outcomes.
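As one concrete route from the tools listed above, a sketch that toggles the package-level TensorBoard logger before training (assumes default `runs/` output paths):

```python
from ultralytics import YOLO, settings

# Enable TensorBoard logging, then train; inspect with `tensorboard --logdir runs`
settings.update({"tensorboard": True})
model = YOLO("yolov8n.pt")
model.train(data="coco8.yaml", epochs=3, imgsz=640)
```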
### What should I do if YOLOv8 is not recognizing my dataset format?
diff --git a/docs/en/guides/yolo-performance-metrics.md b/docs/en/guides/yolo-performance-metrics.md
index ad59d4eb9d..d885b9eab3 100644
--- a/docs/en/guides/yolo-performance-metrics.md
+++ b/docs/en/guides/yolo-performance-metrics.md
@@ -159,7 +159,7 @@ Tapping into a community of enthusiasts and experts can amplify your journey wit
- **GitHub Issues:** The YOLOv8 repository on GitHub has an [Issues tab](https://github.com/ultralytics/ultralytics/issues) where you can ask questions, report bugs, and suggest new features. The community and maintainers are active here, and it's a great place to get help with specific problems.
-- **Ultralytics Discord Server:** Ultralytics has a [Discord server](https://ultralytics.com/discord/) where you can interact with other users and the developers.
+- **Ultralytics Discord Server:** Ultralytics has a [Discord server](https://discord.com/invite/ultralytics) where you can interact with other users and the developers.
### Official Documentation and Resources:
diff --git a/docs/en/help/CI.md b/docs/en/help/CI.md
index 0cea46c329..93b1ad3222 100644
--- a/docs/en/help/CI.md
+++ b/docs/en/help/CI.md
@@ -40,9 +40,9 @@ Remember, a successful CI test does not mean that everything is perfect. It is a
Code coverage is a metric that represents the percentage of your codebase that is executed when your tests run. It provides insight into how well your tests exercise your code and can be crucial in identifying untested parts of your application. A high code coverage percentage is often associated with a lower likelihood of bugs. However, it's essential to understand that code coverage does not guarantee the absence of defects. It merely indicates which parts of the code have been executed by the tests.
-### Integration with [codecov.io](https://codecov.io/)
+### Integration with [codecov.io](https://about.codecov.io/)
-At Ultralytics, we have integrated our repositories with [codecov.io](https://codecov.io/), a popular online platform for measuring and visualizing code coverage. Codecov provides detailed insights, coverage comparisons between commits, and visual overlays directly on your code, indicating which lines were covered.
+At Ultralytics, we have integrated our repositories with [codecov.io](https://about.codecov.io/), a popular online platform for measuring and visualizing code coverage. Codecov provides detailed insights, coverage comparisons between commits, and visual overlays directly on your code, indicating which lines were covered.
By integrating with Codecov, we aim to maintain and improve the quality of our code by focusing on areas that might be prone to errors or need further testing.
@@ -84,4 +84,4 @@ Automated [PyPI publishing](https://github.com/ultralytics/ultralytics/actions/w
### How does Ultralytics measure code coverage and why is it important?
-Ultralytics measures code coverage by integrating with [Codecov](https://codecov.io/github/ultralytics/ultralytics), providing insights into how much of the codebase is executed during tests. High code coverage can indicate well-tested code, helping to uncover untested areas that might be prone to bugs. Detailed code coverage metrics can be explored via badges displayed on our main repositories or directly on [Codecov](https://codecov.io/gh/ultralytics/ultralytics).
+Ultralytics measures code coverage by integrating with [Codecov](https://app.codecov.io/github/ultralytics/ultralytics), providing insights into how much of the codebase is executed during tests. High code coverage can indicate well-tested code, helping to uncover untested areas that might be prone to bugs. Detailed code coverage metrics can be explored via badges displayed on our main repositories or directly on [Codecov](https://app.codecov.io/gh/ultralytics/ultralytics).
diff --git a/docs/en/help/FAQ.md b/docs/en/help/FAQ.md
index 5165a953d9..24472e77f4 100644
--- a/docs/en/help/FAQ.md
+++ b/docs/en/help/FAQ.md
@@ -6,7 +6,7 @@ keywords: Ultralytics, YOLO, FAQ, object detection, hardware requirements, fine-
# Ultralytics YOLO Frequently Asked Questions (FAQ)
-This FAQ section addresses common questions and issues users might encounter while working with [Ultralytics](https://ultralytics.com) YOLO repositories.
+This FAQ section addresses common questions and issues users might encounter while working with [Ultralytics](https://www.ultralytics.com/) YOLO repositories.
## FAQ
@@ -222,7 +222,7 @@ Ultralytics provides a wealth of resources to help you get started and master th
- 💻 [GitHub repository](https://github.com/ultralytics/ultralytics): Source code, example scripts, and community contributions.
- ✍️ [Ultralytics blog](https://www.ultralytics.com/blog): In-depth articles, use cases, and technical insights.
- 💬 [Community forums](https://community.ultralytics.com/): Connect with other users, ask questions, and share your experiences.
-- 🎥 [YouTube channel](https://youtube.com/ultralytics?sub_confirmation=1): Video tutorials, demos, and webinars on various Ultralytics topics.
+- 🎥 [YouTube channel](https://www.youtube.com/ultralytics?sub_confirmation=1): Video tutorials, demos, and webinars on various Ultralytics topics.
These resources provide code examples, real-world use cases, and step-by-step guides for various tasks using Ultralytics models.
diff --git a/docs/en/help/code_of_conduct.md b/docs/en/help/code_of_conduct.md
index c8638cc61f..625ed601e4 100644
--- a/docs/en/help/code_of_conduct.md
+++ b/docs/en/help/code_of_conduct.md
@@ -78,7 +78,7 @@ Community leaders will follow these Community Impact Guidelines in determining t
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.0, available at https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
-Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity).
+Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/inclusion).
For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.
@@ -104,6 +104,6 @@ Contributing to Ultralytics means engaging positively and respectfully with othe
### Where can I find additional information about the Ultralytics Code of Conduct?
-For more comprehensive details about the Ultralytics Code of Conduct, including reporting guidelines and enforcement policies, you can visit the [Contributor Covenant homepage](https://www.contributor-covenant.org/version/2/0/code_of_conduct.html) or check the [FAQ section of Contributor Covenant](https://www.contributor-covenant.org/faq). Learn more about Ultralytics' goals and initiatives on [our brand page](https://www.ultralytics.com/brand) and [about page](https://www.ultralytics.com/about).
+For more comprehensive details about the Ultralytics Code of Conduct, including reporting guidelines and enforcement policies, you can visit the [Contributor Covenant homepage](https://www.contributor-covenant.org/version/2/0/code_of_conduct/) or check the [FAQ section of Contributor Covenant](https://www.contributor-covenant.org/faq/). Learn more about Ultralytics' goals and initiatives on [our brand page](https://www.ultralytics.com/brand) and [about page](https://www.ultralytics.com/about).
Should you have more questions or need further assistance, check our [Help Center](../help/FAQ.md) and [Contributing Guide](../help/contributing.md) for more information.
diff --git a/docs/en/help/contributing.md b/docs/en/help/contributing.md
index a4c23e99dd..637c1ae86e 100644
--- a/docs/en/help/contributing.md
+++ b/docs/en/help/contributing.md
@@ -6,7 +6,7 @@ keywords: Ultralytics, YOLO, open-source, contribution, pull request, code of co
# Contributing to Ultralytics Open-Source Projects
-Welcome! We're thrilled that you're considering contributing to our [Ultralytics](https://ultralytics.com) [open-source](https://github.com/ultralytics) projects. Your involvement not only helps enhance the quality of our repositories but also benefits the entire community. This guide provides clear guidelines and best practices to help you get started.
+Welcome! We're thrilled that you're considering contributing to our [Ultralytics](https://www.ultralytics.com/) [open-source](https://github.com/ultralytics) projects. Your involvement not only helps enhance the quality of our repositories but also benefits the entire community. This guide provides clear guidelines and best practices to help you get started.
@@ -133,7 +133,7 @@ We encourage all contributors to familiarize themselves with the terms of the AG
## Conclusion
-Thank you for your interest in contributing to [Ultralytics](https://ultralytics.com) [open-source](https://github.com/ultralytics) YOLO projects. Your participation is essential in shaping the future of our software and building a vibrant community of innovation and collaboration. Whether you're enhancing code, reporting bugs, or suggesting new features, your contributions are invaluable.
+Thank you for your interest in contributing to [Ultralytics](https://www.ultralytics.com/) [open-source](https://github.com/ultralytics) YOLO projects. Your participation is essential in shaping the future of our software and building a vibrant community of innovation and collaboration. Whether you're enhancing code, reporting bugs, or suggesting new features, your contributions are invaluable.
We're excited to see your ideas come to life and appreciate your commitment to advancing object detection technology. Together, let's continue to grow and innovate in this exciting open-source journey. Happy coding! 🚀🌟
diff --git a/docs/en/help/minimum_reproducible_example.md b/docs/en/help/minimum_reproducible_example.md
index 92eb629938..eb4e25368c 100644
--- a/docs/en/help/minimum_reproducible_example.md
+++ b/docs/en/help/minimum_reproducible_example.md
@@ -6,7 +6,7 @@ keywords: Ultralytics, YOLO, Minimum Reproducible Example, MRE, bug report, issu
# Creating a Minimum Reproducible Example for Bug Reports in Ultralytics YOLO Repositories
-When submitting a bug report for [Ultralytics](https://ultralytics.com) [YOLO](https://github.com/ultralytics) repositories, it's essential to provide a [Minimum Reproducible Example (MRE)](https://stackoverflow.com/help/minimal-reproducible-example). An MRE is a small, self-contained piece of code that demonstrates the problem you're experiencing. Providing an MRE helps maintainers and contributors understand the issue and work on a fix more efficiently. This guide explains how to create an MRE when submitting bug reports to Ultralytics YOLO repositories.
+When submitting a bug report for [Ultralytics](https://www.ultralytics.com/) [YOLO](https://github.com/ultralytics) repositories, it's essential to provide a [Minimum Reproducible Example (MRE)](https://stackoverflow.com/help/minimal-reproducible-example). An MRE is a small, self-contained piece of code that demonstrates the problem you're experiencing. Providing an MRE helps maintainers and contributors understand the issue and work on a fix more efficiently. This guide explains how to create an MRE when submitting bug reports to Ultralytics YOLO repositories.
## 1. Isolate the Problem
diff --git a/docs/en/help/privacy.md b/docs/en/help/privacy.md
index 0453569e3d..a053f199fe 100644
--- a/docs/en/help/privacy.md
+++ b/docs/en/help/privacy.md
@@ -7,7 +7,7 @@ keywords: Ultralytics, data collection, YOLO, Python package, Google Analytics,
## Overview
-[Ultralytics](https://ultralytics.com) is dedicated to the continuous enhancement of the user experience and the capabilities of our Python package, including the advanced YOLO models we develop. Our approach involves the gathering of anonymized usage statistics and crash reports, helping us identify opportunities for improvement and ensuring the reliability of our software. This transparency document outlines what data we collect, its purpose, and the choice you have regarding this data collection.
+[Ultralytics](https://www.ultralytics.com/) is dedicated to the continuous enhancement of the user experience and the capabilities of our Python package, including the advanced YOLO models we develop. Our approach involves the gathering of anonymized usage statistics and crash reports, helping us identify opportunities for improvement and ensuring the reliability of our software. This transparency document outlines what data we collect, its purpose, and the choice you have regarding this data collection.
## Anonymized Google Analytics
@@ -37,7 +37,7 @@ We take several measures to ensure the privacy and security of the data you entr
## Sentry Crash Reporting
-[Sentry](https://sentry.io/) is a developer-centric error tracking software that aids in identifying, diagnosing, and resolving issues in real-time, ensuring the robustness and reliability of applications. Within our package, it plays a crucial role by providing insights through crash reporting, significantly contributing to the stability and ongoing refinement of our software.
+[Sentry](https://sentry.io/welcome/) is a developer-centric error tracking software that aids in identifying, diagnosing, and resolving issues in real-time, ensuring the robustness and reliability of applications. Within our package, it plays a crucial role by providing insights through crash reporting, significantly contributing to the stability and ongoing refinement of our software.
!!! Note
@@ -138,7 +138,7 @@ Ultralytics takes user privacy seriously. We design our data collection practice
## Questions or Concerns
-If you have any questions or concerns about our data collection practices, please reach out to us via our [contact form](https://ultralytics.com/contact) or via [support@ultralytics.com](mailto:support@ultralytics.com). We are dedicated to ensuring our users feel informed and confident in their privacy when using our package.
+If you have any questions or concerns about our data collection practices, please reach out to us via our [contact form](https://www.ultralytics.com/contact) or via [support@ultralytics.com](mailto:support@ultralytics.com). We are dedicated to ensuring our users feel informed and confident in their privacy when using our package.
## FAQ
diff --git a/docs/en/help/security.md b/docs/en/help/security.md
index 553a0b2408..39fe3829ff 100644
--- a/docs/en/help/security.md
+++ b/docs/en/help/security.md
@@ -5,7 +5,7 @@ keywords: Ultralytics security policy, Snyk scanning, CodeQL scanning, Dependabo
# Ultralytics Security Policy
-At [Ultralytics](https://ultralytics.com), the security of our users' data and systems is of utmost importance. To ensure the safety and security of our [open-source projects](https://github.com/ultralytics), we have implemented several measures to detect and prevent security vulnerabilities.
+At [Ultralytics](https://www.ultralytics.com/), the security of our users' data and systems is of utmost importance. To ensure the safety and security of our [open-source projects](https://github.com/ultralytics), we have implemented several measures to detect and prevent security vulnerabilities.
## Snyk Scanning
@@ -15,7 +15,7 @@ We utilize [Snyk](https://snyk.io/advisor/python/ultralytics) to conduct compreh
## GitHub CodeQL Scanning
-Our security strategy includes GitHub's [CodeQL](https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/about-code-scanning-with-codeql) scanning. CodeQL delves deep into our codebase, identifying complex vulnerabilities like SQL injection and XSS by analyzing the code's semantic structure. This advanced level of analysis ensures early detection and resolution of potential security risks.
+Our security strategy includes GitHub's [CodeQL](https://docs.github.com/en/code-security/code-scanning/introduction-to-code-scanning/about-code-scanning-with-codeql) scanning. CodeQL delves deep into our codebase, identifying complex vulnerabilities like SQL injection and XSS by analyzing the code's semantic structure. This advanced level of analysis ensures early detection and resolution of potential security risks.
[![CodeQL](https://github.com/ultralytics/ultralytics/actions/workflows/codeql.yaml/badge.svg)](https://github.com/ultralytics/ultralytics/actions/workflows/codeql.yaml)
@@ -31,7 +31,7 @@ We employ GitHub [secret scanning](https://docs.github.com/en/code-security/secr
We enable private vulnerability reporting, allowing users to discreetly report potential security issues. This approach facilitates responsible disclosure, ensuring vulnerabilities are handled securely and efficiently.
-If you suspect or discover a security vulnerability in any of our repositories, please let us know immediately. You can reach out to us directly via our [contact form](https://ultralytics.com/contact) or via [security@ultralytics.com](mailto:security@ultralytics.com). Our security team will investigate and respond as soon as possible.
+If you suspect or discover a security vulnerability in any of our repositories, please let us know immediately. You can reach out to us directly via our [contact form](https://www.ultralytics.com/contact) or via [security@ultralytics.com](mailto:security@ultralytics.com). Our security team will investigate and respond as soon as possible.
We appreciate your help in keeping all Ultralytics open-source projects secure and safe for everyone 🙏.
@@ -57,7 +57,7 @@ To see the Snyk badge and learn more about its deployment, check the [Snyk Scann
### What is CodeQL and how does it enhance security for Ultralytics?
-[CodeQL](https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/about-code-scanning-with-codeql) is a security analysis tool integrated into Ultralytics' workflow via GitHub. It delves deep into the codebase to identify complex vulnerabilities such as SQL injection and Cross-Site Scripting (XSS). CodeQL analyzes the semantic structure of the code to provide an advanced level of security, ensuring early detection and mitigation of potential risks.
+[CodeQL](https://docs.github.com/en/code-security/code-scanning/introduction-to-code-scanning/about-code-scanning-with-codeql) is a security analysis tool integrated into Ultralytics' workflow via GitHub. It delves deep into the codebase to identify complex vulnerabilities such as SQL injection and Cross-Site Scripting (XSS). CodeQL analyzes the semantic structure of the code to provide an advanced level of security, ensuring early detection and mitigation of potential risks.
For more information on how CodeQL is used, visit the [GitHub CodeQL Scanning section](#github-codeql-scanning).
@@ -69,6 +69,6 @@ For more details, explore the [GitHub Dependabot Alerts section](#github-dependa
### How does Ultralytics handle private vulnerability reporting?
-Ultralytics encourages users to report potential security issues through private channels. Users can report vulnerabilities discreetly via the [contact form](https://ultralytics.com/contact) or by emailing [security@ultralytics.com](mailto:security@ultralytics.com). This ensures responsible disclosure and allows the security team to investigate and address vulnerabilities securely and efficiently.
+Ultralytics encourages users to report potential security issues through private channels. Users can report vulnerabilities discreetly via the [contact form](https://www.ultralytics.com/contact) or by emailing [security@ultralytics.com](mailto:security@ultralytics.com). This ensures responsible disclosure and allows the security team to investigate and address vulnerabilities securely and efficiently.
For more information on private vulnerability reporting, refer to the [Private Vulnerability Reporting section](#private-vulnerability-reporting).
diff --git a/docs/en/hub/api/index.md b/docs/en/hub/api/index.md
index b417161514..9ae12c3db5 100644
--- a/docs/en/hub/api/index.md
+++ b/docs/en/hub/api/index.md
@@ -17,13 +17,13 @@ Welcome to the Ultralytics "Under Construction" page! Here, we're hard at work d
This placeholder page is your first stop for upcoming developments. Keep an eye out for:
-- **Newsletter:** Subscribe [here](https://ultralytics.com/#newsletter) for the latest news.
+- **Newsletter:** Subscribe [here](https://www.ultralytics.com/#newsletter) for the latest news.
- **Social Media:** Follow us [here](https://www.linkedin.com/company/ultralytics) for updates and teasers.
-- **Blog:** Visit our [blog](https://ultralytics.com/blog) for detailed insights.
+- **Blog:** Visit our [blog](https://www.ultralytics.com/blog) for detailed insights.
## We Value Your Input 🗣️
-Your feedback shapes our future releases. Share your thoughts and suggestions [here](https://ultralytics.com/contact).
+Your feedback shapes our future releases. Share your thoughts and suggestions [here](https://www.ultralytics.com/contact).
## Thank You, Community! 🌍
diff --git a/docs/en/hub/app/android.md b/docs/en/hub/app/android.md
index c3c19b0c17..365180545d 100644
--- a/docs/en/hub/app/android.md
+++ b/docs/en/hub/app/android.md
@@ -60,7 +60,7 @@ INT8 (or 8-bit integer) quantization further reduces the model's size and comput
## Delegates and Performance Variability
-Different delegates are available on Android devices to accelerate model inference. These delegates include CPU, [GPU](https://www.tensorflow.org/lite/android/delegates/gpu), [Hexagon](https://www.tensorflow.org/lite/android/delegates/hexagon) and [NNAPI](https://www.tensorflow.org/lite/android/delegates/nnapi). The performance of these delegates varies depending on the device's hardware vendor, product line, and specific chipsets used in the device.
+Different delegates are available on Android devices to accelerate model inference. These delegates include CPU, [GPU](https://ai.google.dev/edge/litert/android/gpu), [Hexagon](https://developer.android.com/ndk/guides/neuralnetworks/migration-guide) and [NNAPI](https://developer.android.com/ndk/guides/neuralnetworks/migration-guide). The performance of these delegates varies depending on the device's hardware vendor, product line, and specific chipsets used in the device.
1. **CPU**: The default option, with reasonable performance on most devices.
2. **GPU**: Utilizes the device's GPU for faster inference. It can provide a significant performance boost on devices with powerful GPUs.
@@ -69,13 +69,13 @@ Different delegates are available on Android devices to accelerate model inferen
Here's a table showing the primary vendors, their product lines, popular devices, and supported delegates:
-| Vendor | Product Lines | Popular Devices | Delegates Supported |
-| --------------------------------------- | ------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------ |
-| [Qualcomm](https://www.qualcomm.com/) | [Snapdragon (e.g., 800 series)](https://www.qualcomm.com/snapdragon) | [Samsung Galaxy S21](https://www.samsung.com/global/galaxy/galaxy-s21-5g/), [OnePlus 9](https://www.oneplus.com/9), [Google Pixel 6](https://store.google.com/product/pixel_6) | CPU, GPU, Hexagon, NNAPI |
-| [Samsung](https://www.samsung.com/) | [Exynos (e.g., Exynos 2100)](https://www.samsung.com/semiconductor/minisite/exynos/) | [Samsung Galaxy S21 (Global version)](https://www.samsung.com/global/galaxy/galaxy-s21-5g/) | CPU, GPU, NNAPI |
-| [MediaTek](https://i.mediatek.com/) | [Dimensity (e.g., Dimensity 1200)](https://i.mediatek.com/dimensity-1200) | [Realme GT](https://www.realme.com/global/realme-gt), [Xiaomi Redmi Note](https://www.mi.com/en/phone/redmi/note-list) | CPU, GPU, NNAPI |
-| [HiSilicon](https://www.hisilicon.com/) | [Kirin (e.g., Kirin 990)](https://www.hisilicon.com/en/products/Kirin) | [Huawei P40 Pro](https://consumer.huawei.com/en/phones/p40-pro/), [Huawei Mate 30 Pro](https://consumer.huawei.com/en/phones/mate30-pro/) | CPU, GPU, NNAPI |
-| [NVIDIA](https://www.nvidia.com/) | [Tegra (e.g., Tegra X1)](https://developer.nvidia.com/content/tegra-x1) | [NVIDIA Shield TV](https://www.nvidia.com/en-us/shield/shield-tv/), [Nintendo Switch](https://www.nintendo.com/switch/) | CPU, GPU, NNAPI |
+| Vendor | Product Lines | Popular Devices | Delegates Supported |
+| ----------------------------------------- | ------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------ |
+| [Qualcomm](https://www.qualcomm.com/) | [Snapdragon (e.g., 800 series)](https://www.qualcomm.com/snapdragon/overview) | [Samsung Galaxy S21](https://www.samsung.com/global/galaxy/galaxy-s21-5g/), [OnePlus 9](https://www.oneplus.com/9), [Google Pixel 6](https://store.google.com/product/pixel_6) | CPU, GPU, Hexagon, NNAPI |
+| [Samsung](https://www.samsung.com/) | [Exynos (e.g., Exynos 2100)](https://www.samsung.com/semiconductor/minisite/exynos/) | [Samsung Galaxy S21 (Global version)](https://www.samsung.com/global/galaxy/galaxy-s21-5g/) | CPU, GPU, NNAPI |
+| [MediaTek](https://i.mediatek.com/) | [Dimensity (e.g., Dimensity 1200)](https://i.mediatek.com/dimensity-1200) | [Realme GT](https://www.realme.com/global/realme-gt), [Xiaomi Redmi Note](https://www.mi.com/global/phone/redmi/note-list) | CPU, GPU, NNAPI |
+| [HiSilicon](https://www.hisilicon.com/cn) | [Kirin (e.g., Kirin 990)](https://www.hisilicon.com/en/products/Kirin) | [Huawei P40 Pro](https://consumer.huawei.com/en/phones/), [Huawei Mate 30 Pro](https://consumer.huawei.com/en/phones/) | CPU, GPU, NNAPI |
+| [NVIDIA](https://www.nvidia.com/) | [Tegra (e.g., Tegra X1)](https://developer.nvidia.com/content/tegra-x1) | [NVIDIA Shield TV](https://www.nvidia.com/en-us/shield/shield-tv/), [Nintendo Switch](https://www.nintendo.com/switch/) | CPU, GPU, NNAPI |
Please note that the list of devices mentioned is not exhaustive and may vary depending on the specific chipsets and device models. Always test your models on your target devices to ensure compatibility and optimal performance.
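For readers who want to see delegate selection in practice, here is a minimal Python sketch using the TFLite interpreter API. The model filename and the delegate library name are illustrative assumptions, and an on-device Android app would typically do the equivalent through the Java/Kotlin TFLite APIs:

```python
import tensorflow as tf

# Load a GPU delegate; the shared-library name varies by platform and build
# (an illustrative assumption, not a path shipped with this guide).
delegate = tf.lite.experimental.load_delegate("libtensorflowlite_gpu_delegate.so")

# Attach the delegate so supported ops run on the GPU instead of the CPU.
interpreter = tf.lite.Interpreter(
    model_path="yolov8n_int8.tflite",  # hypothetical INT8-quantized export
    experimental_delegates=[delegate],
)
interpreter.allocate_tensors()
print(interpreter.get_input_details())
```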
diff --git a/docs/en/hub/cloud-training.md b/docs/en/hub/cloud-training.md
index 9d09a18fc0..42dd681080 100644
--- a/docs/en/hub/cloud-training.md
+++ b/docs/en/hub/cloud-training.md
@@ -6,9 +6,9 @@ keywords: Ultralytics HUB, cloud training, model training, Pro Plan, easy AI set
# Ultralytics HUB Cloud Training
-We've listened to the high demand and widespread interest and are thrilled to unveil [Ultralytics HUB](https://ultralytics.com/hub) Cloud Training, offering a single-click training experience for our [Pro](./pro.md) users!
+We've listened to the high demand and widespread interest and are thrilled to unveil [Ultralytics HUB](https://www.ultralytics.com/hub) Cloud Training, offering a single-click training experience for our [Pro](./pro.md) users!
-[Ultralytics HUB](https://ultralytics.com/hub) [Pro](./pro.md) users can finetune [Ultralytics HUB](https://ultralytics.com/hub) models on a custom dataset using our Cloud Training solution, making the model training process easy. Say goodbye to complex setups and hello to streamlined workflows with [Ultralytics HUB](https://ultralytics.com/hub)'s intuitive interface.
+[Ultralytics HUB](https://www.ultralytics.com/hub) [Pro](./pro.md) users can finetune [Ultralytics HUB](https://www.ultralytics.com/hub) models on a custom dataset using our Cloud Training solution, making the model training process easy. Say goodbye to complex setups and hello to streamlined workflows with [Ultralytics HUB](https://www.ultralytics.com/hub)'s intuitive interface.
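As a rough sketch of the workflow, assuming a valid API key and a model already created in HUB (both placeholders below), a cloud training run can also be started from Python:

```python
from ultralytics import YOLO, hub

hub.login("YOUR_API_KEY")  # placeholder; generate a key in your HUB settings

# Placeholder model URL, copied from the model page in Ultralytics HUB.
model = YOLO("https://hub.ultralytics.com/models/MODEL_ID")
results = model.train()  # training arguments come from the HUB model config
```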
diff --git a/docs/en/models/yolo-world.md b/docs/en/models/yolo-world.md
--- a/docs/en/models/yolo-world.md
+++ b/docs/en/models/yolo-world.md
@@ -275,7 +275,7 @@ This approach provides a powerful means of customizing state-of-the-art object d
| Dataset | Type | Samples | Boxes | Annotation Files |
| ----------------------------------------------------------------- | --------- | ------- | ----- | ------------------------------------------------------------------------------------------------------------------------------------------ |
| [Objects365v1](https://opendatalab.com/OpenDataLab/Objects365_v1) | Detection | 609k | 9621k | [objects365_train.json](https://opendatalab.com/OpenDataLab/Objects365_v1) |
-| [GQA](https://nlp.stanford.edu/data/gqa/images.zip) | Grounding | 621k | 3681k | [final_mixed_train_no_coco.json](https://huggingface.co/GLIPModel/GLIP/blob/main/mdetr_annotations/final_mixed_train_no_coco.json) |
+| [GQA](https://downloads.cs.stanford.edu/nlp/data/gqa/images.zip) | Grounding | 621k | 3681k | [final_mixed_train_no_coco.json](https://huggingface.co/GLIPModel/GLIP/blob/main/mdetr_annotations/final_mixed_train_no_coco.json) |
| [Flickr30k](https://shannon.cs.illinois.edu/DenotationGraph/) | Grounding | 149k | 641k | [final_flickr_separateGT_train.json](https://huggingface.co/GLIPModel/GLIP/blob/main/mdetr_annotations/final_flickr_separateGT_train.json) |
- Val data
diff --git a/docs/en/models/yolov10.md b/docs/en/models/yolov10.md
index 482c5cda39..fb99f4d1ad 100644
--- a/docs/en/models/yolov10.md
+++ b/docs/en/models/yolov10.md
@@ -6,7 +6,7 @@ keywords: YOLOv10, real-time object detection, NMS-free, deep learning, Tsinghua
# YOLOv10: Real-Time End-to-End Object Detection
-YOLOv10, built on the [Ultralytics](https://ultralytics.com) [Python package](https://pypi.org/project/ultralytics/) by researchers at [Tsinghua University](https://www.tsinghua.edu.cn/en/), introduces a new approach to real-time object detection, addressing both the post-processing and model architecture deficiencies found in previous YOLO versions. By eliminating non-maximum suppression (NMS) and optimizing various model components, YOLOv10 achieves state-of-the-art performance with significantly reduced computational overhead. Extensive experiments demonstrate its superior accuracy-latency trade-offs across multiple model scales.
+YOLOv10, built on the [Ultralytics](https://www.ultralytics.com/) [Python package](https://pypi.org/project/ultralytics/) by researchers at [Tsinghua University](https://www.tsinghua.edu.cn/en/), introduces a new approach to real-time object detection, addressing both the post-processing and model architecture deficiencies found in previous YOLO versions. By eliminating non-maximum suppression (NMS) and optimizing various model components, YOLOv10 achieves state-of-the-art performance with significantly reduced computational overhead. Extensive experiments demonstrate its superior accuracy-latency trade-offs across multiple model scales.
![YOLOv10 consistent dual assignment for NMS-free training](https://github.com/ultralytics/docs/releases/download/0/yolov10-consistent-dual-assignment.avif)
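A minimal usage sketch with the `ultralytics` package follows; the weights filename and image path are placeholders:

```python
from ultralytics import YOLO

# Load a YOLOv10 nano checkpoint (downloaded automatically if not cached).
model = YOLO("yolov10n.pt")

# Run NMS-free, end-to-end inference on an image (placeholder path).
results = model.predict("path/to/image.jpg")
results[0].show()
```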
@@ -223,7 +223,7 @@ YOLOv10 sets a new standard in real-time object detection by addressing the shor
## Citations and Acknowledgements
-We would like to acknowledge the YOLOv10 authors from [Tsinghua University](https://www.tsinghua.edu.cn/en/) for their extensive research and significant contributions to the [Ultralytics](https://ultralytics.com) framework:
+We would like to acknowledge the YOLOv10 authors from [Tsinghua University](https://www.tsinghua.edu.cn/en/) for their extensive research and significant contributions to the [Ultralytics](https://www.ultralytics.com/) framework:
!!! Quote ""
diff --git a/docs/en/models/yolov5.md b/docs/en/models/yolov5.md
index 9927d06c5d..57b562423a 100644
--- a/docs/en/models/yolov5.md
+++ b/docs/en/models/yolov5.md
@@ -111,7 +111,7 @@ If you use YOLOv5 or YOLOv5u in your research, please cite the Ultralytics YOLOv
}
```
-Please note that YOLOv5 models are provided under [AGPL-3.0](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) and [Enterprise](https://ultralytics.com/license) licenses.
+Please note that YOLOv5 models are provided under [AGPL-3.0](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) and [Enterprise](https://www.ultralytics.com/license) licenses.
## FAQ
diff --git a/docs/en/models/yolov8.md b/docs/en/models/yolov8.md
index 72ee275099..aecbc157c0 100644
--- a/docs/en/models/yolov8.md
+++ b/docs/en/models/yolov8.md
@@ -183,7 +183,7 @@ If you use the YOLOv8 model or any other software from this repository in your w
}
```
-Please note that the DOI is pending and will be added to the citation once it is available. YOLOv8 models are provided under [AGPL-3.0](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) and [Enterprise](https://ultralytics.com/license) licenses.
+Please note that the DOI is pending and will be added to the citation once it is available. YOLOv8 models are provided under [AGPL-3.0](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) and [Enterprise](https://www.ultralytics.com/license) licenses.
## FAQ
diff --git a/docs/en/models/yolov9.md b/docs/en/models/yolov9.md
index 3cefff6f25..2a32176086 100644
--- a/docs/en/models/yolov9.md
+++ b/docs/en/models/yolov9.md
@@ -6,7 +6,7 @@ keywords: YOLOv9, object detection, real-time, PGI, GELAN, deep learning, MS COC
# YOLOv9: A Leap Forward in Object Detection Technology
-YOLOv9 marks a significant advancement in real-time object detection, introducing groundbreaking techniques such as Programmable Gradient Information (PGI) and the Generalized Efficient Layer Aggregation Network (GELAN). This model demonstrates remarkable improvements in efficiency, accuracy, and adaptability, setting new benchmarks on the MS COCO dataset. The YOLOv9 project, while developed by a separate open-source team, builds upon the robust codebase provided by [Ultralytics](https://ultralytics.com) [YOLOv5](yolov5.md), showcasing the collaborative spirit of the AI research community.
+YOLOv9 marks a significant advancement in real-time object detection, introducing groundbreaking techniques such as Programmable Gradient Information (PGI) and the Generalized Efficient Layer Aggregation Network (GELAN). This model demonstrates remarkable improvements in efficiency, accuracy, and adaptability, setting new benchmarks on the MS COCO dataset. The YOLOv9 project, while developed by a separate open-source team, builds upon the robust codebase provided by [Ultralytics](https://www.ultralytics.com/) [YOLOv5](yolov5.md), showcasing the collaborative spirit of the AI research community.
diff --git a/docs/en/modes/train.md b/docs/en/modes/train.md
index 1614490817..56d6004fa9 100644
--- a/docs/en/modes/train.md
+++ b/docs/en/modes/train.md
@@ -291,7 +291,7 @@ Remember to sign in to your Comet account on their website and get your API key.
### ClearML
-[ClearML](https://www.clear.ml/) is an open-source platform that automates tracking of experiments and helps with efficient sharing of resources. It is designed to help teams manage, execute, and reproduce their ML work more efficiently.
+[ClearML](https://clear.ml/) is an open-source platform that automates tracking of experiments and helps with efficient sharing of resources. It is designed to help teams manage, execute, and reproduce their ML work more efficiently.
To use ClearML:
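As a minimal sketch, assuming `clearml` is installed (`pip install clearml`) and initialized with `clearml-init`, experiment tracking is picked up automatically once training starts:

```python
from ultralytics import YOLO

# With the clearml package installed and configured, Ultralytics detects it
# and logs metrics, parameters, and artifacts to your ClearML workspace.
model = YOLO("yolov8n.pt")
model.train(data="coco8.yaml", epochs=3)  # placeholder dataset and epoch count
```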
diff --git a/docs/en/quickstart.md b/docs/en/quickstart.md
index 957d6c9b7e..5f41a7a0bb 100644
--- a/docs/en/quickstart.md
+++ b/docs/en/quickstart.md
@@ -143,7 +143,7 @@ See the `ultralytics` [pyproject.toml](https://github.com/ultralytics/ultralytic
!!! Tip "Tip"
- PyTorch requirements vary by operating system and CUDA requirements, so it's recommended to install PyTorch first following instructions at [https://pytorch.org/get-started/locally](https://pytorch.org/get-started/locally).
-    PyTorch requirements vary by operating system and CUDA requirements, so it's recommended to install PyTorch first following instructions at [https://pytorch.org/get-started/locally](https://pytorch.org/get-started/locally).
+    PyTorch requirements vary by operating system and CUDA requirements, so it's recommended to install PyTorch first following instructions at [https://pytorch.org/get-started/locally](https://pytorch.org/get-started/locally/).
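After installing PyTorch, a quick sanity check along these lines confirms that the build matches your CUDA setup:

```python
import torch

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True only if a CUDA build matches your driver
```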
diff --git a/docs/en/tasks/detect.md b/docs/en/tasks/detect.md
index 42f88843e2..d5b4e1f0dc 100644
--- a/docs/en/tasks/detect.md
+++ b/docs/en/tasks/detect.md
@@ -41,7 +41,7 @@ YOLOv8 pretrained Detect models are shown here. Detect, Segment and Pose models
| [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l.pt) | 640 | 52.9 | 375.2 | 2.39 | 43.7 | 165.2 |
| [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x.pt) | 640 | 53.9 | 479.1 | 3.53 | 68.2 | 257.8 |
-- **mAPval** values are for single-model single-scale on [COCO val2017](https://cocodataset.org) dataset. <br>Reproduce by `yolo val detect data=coco.yaml device=0`
+- **mAPval** values are for single-model single-scale on [COCO val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val detect data=coco.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val detect data=coco8.yaml batch=1 device=0|cpu`
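These validation results can also be reproduced from Python; here is a minimal sketch on the small COCO8 subset (a full run would use `data="coco.yaml"`):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
metrics = model.val(data="coco8.yaml")  # COCO8 as a quick smoke test
print(metrics.box.map)  # mAP50-95
print(metrics.speed)    # preprocess/inference/postprocess times in ms
```

The same pattern applies to the pose and segment tables below, with `yolo val pose` and `yolo val segment` and their respective data files.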
## Train
diff --git a/docs/en/tasks/pose.md b/docs/en/tasks/pose.md
index ac6fc7a108..ffa0a39ffb 100644
--- a/docs/en/tasks/pose.md
+++ b/docs/en/tasks/pose.md
@@ -75,7 +75,7 @@ YOLOv8 pretrained Pose models are shown here. Detect, Segment and Pose models ar
| [YOLOv8x-pose](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-pose.pt) | 640 | 69.2 | 90.2 | 1607.1 | 3.73 | 69.4 | 263.2 |
| [YOLOv8x-pose-p6](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-pose-p6.pt) | 1280 | 71.6 | 91.2 | 4088.7 | 10.04 | 99.1 | 1066.4 |
-- **mAPval** values are for single-model single-scale on [COCO Keypoints val2017](https://cocodataset.org) dataset. <br>Reproduce by `yolo val pose data=coco-pose.yaml device=0`
+- **mAPval** values are for single-model single-scale on [COCO Keypoints val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val pose data=coco-pose.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val pose data=coco8-pose.yaml batch=1 device=0|cpu`
## Train
diff --git a/docs/en/tasks/segment.md b/docs/en/tasks/segment.md
index d6eaf7a04f..96090fb2ee 100644
--- a/docs/en/tasks/segment.md
+++ b/docs/en/tasks/segment.md
@@ -42,7 +42,7 @@ YOLOv8 pretrained Segment models are shown here. Detect, Segment and Pose models
| [YOLOv8l-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l-seg.pt) | 640 | 52.3 | 42.6 | 572.4 | 2.79 | 46.0 | 220.5 |
| [YOLOv8x-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-seg.pt) | 640 | 53.4 | 43.4 | 712.1 | 4.02 | 71.8 | 344.1 |
-- **mAPval** values are for single-model single-scale on [COCO val2017](https://cocodataset.org) dataset. <br>Reproduce by `yolo val segment data=coco.yaml device=0`
+- **mAPval** values are for single-model single-scale on [COCO val2017](https://cocodataset.org/) dataset. <br>Reproduce by `yolo val segment data=coco.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val segment data=coco8-seg.yaml batch=1 device=0|cpu`
## Train
diff --git a/docs/en/yolov5/environments/docker_image_quickstart_tutorial.md b/docs/en/yolov5/environments/docker_image_quickstart_tutorial.md
index 5618fed52d..93c9f0a16a 100644
--- a/docs/en/yolov5/environments/docker_image_quickstart_tutorial.md
+++ b/docs/en/yolov5/environments/docker_image_quickstart_tutorial.md
@@ -14,7 +14,7 @@ You can also explore other quickstart options for YOLOv5, such as our [Colab Not
1. **NVIDIA Driver**: Version 455.23 or higher. Download from [Nvidia's website](https://www.nvidia.com/Download/index.aspx).
2. **NVIDIA-Docker**: Allows Docker to interact with your local GPU. Installation instructions are available on the [NVIDIA-Docker GitHub repository](https://github.com/NVIDIA/nvidia-docker).
-3. **Docker Engine - CE**: Version 19.03 or higher. Download and installation instructions can be found on the [Docker website](https://docs.docker.com/install/).
+3. **Docker Engine - CE**: Version 19.03 or higher. Download and installation instructions can be found on the [Docker website](https://docs.docker.com/get-started/get-docker/).
## Step 1: Pull the YOLOv5 Docker Image
diff --git a/docs/en/yolov5/environments/google_cloud_quickstart_tutorial.md b/docs/en/yolov5/environments/google_cloud_quickstart_tutorial.md
index 5645ef6450..cdd397dc17 100644
--- a/docs/en/yolov5/environments/google_cloud_quickstart_tutorial.md
+++ b/docs/en/yolov5/environments/google_cloud_quickstart_tutorial.md
@@ -8,7 +8,7 @@ keywords: YOLOv5, Google Cloud Platform, GCP, Deep Learning VM, object detection
Embarking on the journey of artificial intelligence and machine learning can be exhilarating, especially when you leverage the power and flexibility of a cloud platform. Google Cloud Platform (GCP) offers robust tools tailored for machine learning enthusiasts and professionals alike. One such tool is the Deep Learning VM that is preconfigured for data science and ML tasks. In this tutorial, we will navigate through the process of setting up YOLOv5 on a GCP Deep Learning VM. Whether you're taking your first steps in ML or you're a seasoned practitioner, this guide is designed to provide you with a clear pathway to implementing object detection models powered by YOLOv5.
-🆓 Plus, if you're a fresh GCP user, you're in luck with a [$300 free credit offer](https://cloud.google.com/free/docs/gcp-free-tier#free-trial) to kickstart your projects.
+🆓 Plus, if you're a fresh GCP user, you're in luck with a [$300 free credit offer](https://cloud.google.com/free/docs/free-cloud-features#free-trial) to kickstart your projects.
In addition to GCP, explore other accessible quickstart options for YOLOv5, like our [Colab Notebook](https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb) for a browser-based experience, or the scalability of [Amazon AWS](./aws_quickstart_tutorial.md). Furthermore, container aficionados can utilize our official Docker image at [Docker Hub](https://hub.docker.com/r/ultralytics/yolov5) for an encapsulated environment.
diff --git a/docs/en/yolov5/index.md b/docs/en/yolov5/index.md
index 6d2946fd0f..ba4e4e3ab8 100644
--- a/docs/en/yolov5/index.md
+++ b/docs/en/yolov5/index.md
@@ -52,7 +52,7 @@ Here's a compilation of comprehensive tutorials that will guide you through diff
## Supported Environments
-Ultralytics provides a range of ready-to-use environments, each pre-installed with essential dependencies such as [CUDA](https://developer.nvidia.com/cuda), [CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/), and [PyTorch](https://pytorch.org/), to kickstart your projects.
+Ultralytics provides a range of ready-to-use environments, each pre-installed with essential dependencies such as [CUDA](https://developer.nvidia.com/cuda-zone), [CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/), and [PyTorch](https://pytorch.org/), to kickstart your projects.
- **Free GPU Notebooks**:
- **Google Cloud**: [GCP Quickstart Guide](environments/google_cloud_quickstart_tutorial.md)
@@ -85,7 +85,7 @@ This badge indicates that all [YOLOv5 GitHub Actions](https://github.com/ultraly
## Connect and Contribute
-Your journey with YOLOv5 doesn't have to be a solitary one. Join our vibrant community on [GitHub](https://github.com/ultralytics/yolov5), connect with professionals on [LinkedIn](https://www.linkedin.com/company/ultralytics/), share your results on [Twitter](https://twitter.com/ultralytics), and find educational resources on [YouTube](https://youtube.com/ultralytics?sub_confirmation=1). Follow us on [TikTok](https://www.tiktok.com/@ultralytics) and [BiliBili](https://ultralytics.com/bilibili) for more engaging content.
+Your journey with YOLOv5 doesn't have to be a solitary one. Join our vibrant community on [GitHub](https://github.com/ultralytics/yolov5), connect with professionals on [LinkedIn](https://www.linkedin.com/company/ultralytics/), share your results on [Twitter](https://twitter.com/ultralytics), and find educational resources on [YouTube](https://www.youtube.com/ultralytics?sub_confirmation=1). Follow us on [TikTok](https://www.tiktok.com/@ultralytics) and [BiliBili](https://ultralytics.com/bilibili) for more engaging content.
Interested in contributing? We welcome contributions of all forms; from code improvements and bug reports to documentation updates. Check out our [contributing guidelines](../help/contributing.md/) for more information.
diff --git a/docs/en/yolov5/tutorials/hyperparameter_evolution.md b/docs/en/yolov5/tutorials/hyperparameter_evolution.md
index 174f818ca1..8b2a132479 100644
--- a/docs/en/yolov5/tutorials/hyperparameter_evolution.md
+++ b/docs/en/yolov5/tutorials/hyperparameter_evolution.md
@@ -151,7 +151,7 @@ We recommend a minimum of 300 generations of evolution for best results. Note th
## Supported Environments
-Ultralytics provides a range of ready-to-use environments, each pre-installed with essential dependencies such as [CUDA](https://developer.nvidia.com/cuda), [CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/), and [PyTorch](https://pytorch.org/), to kickstart your projects.
+Ultralytics provides a range of ready-to-use environments, each pre-installed with essential dependencies such as [CUDA](https://developer.nvidia.com/cuda-zone), [CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/), and [PyTorch](https://pytorch.org/), to kickstart your projects.
- **Free GPU Notebooks**:
- **Google Cloud**: [GCP Quickstart Guide](../environments/google_cloud_quickstart_tutorial.md)
diff --git a/docs/en/yolov5/tutorials/model_ensembling.md b/docs/en/yolov5/tutorials/model_ensembling.md
index 625fe0a406..f358106537 100644
--- a/docs/en/yolov5/tutorials/model_ensembling.md
+++ b/docs/en/yolov5/tutorials/model_ensembling.md
@@ -132,7 +132,7 @@ Done. (0.223s)
## Supported Environments
-Ultralytics provides a range of ready-to-use environments, each pre-installed with essential dependencies such as [CUDA](https://developer.nvidia.com/cuda), [CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/), and [PyTorch](https://pytorch.org/), to kickstart your projects.
+Ultralytics provides a range of ready-to-use environments, each pre-installed with essential dependencies such as [CUDA](https://developer.nvidia.com/cuda-zone), [CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/), and [PyTorch](https://pytorch.org/), to kickstart your projects.
- **Free GPU Notebooks**:
- **Google Cloud**: [GCP Quickstart Guide](../environments/google_cloud_quickstart_tutorial.md)
diff --git a/docs/en/yolov5/tutorials/model_export.md b/docs/en/yolov5/tutorials/model_export.md
index 05b5a53adf..dc68c84c10 100644
--- a/docs/en/yolov5/tutorials/model_export.md
+++ b/docs/en/yolov5/tutorials/model_export.md
@@ -36,7 +36,7 @@ YOLOv5 inference is officially supported in 11 formats:
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov5s.mlmodel` |
| [TensorFlow SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov5s_saved_model/` |
| [TensorFlow GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov5s.pb` |
-| [TensorFlow Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov5s.tflite` |
+| [TensorFlow Lite](https://ai.google.dev/edge/litert) | `tflite` | `yolov5s.tflite` |
| [TensorFlow Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov5s_edgetpu.tflite` |
| [TensorFlow.js](https://www.tensorflow.org/js) | `tfjs` | `yolov5s_web_model/` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov5s_paddle_model/` |
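Once a model has been exported to one of these formats, it can be loaded back for inference through the `custom` entry point. A hedged sketch follows; the ONNX filename assumes a prior `--include onnx` export:

```python
import torch

# Load an exported model by path; the backend is inferred from the file suffix.
model = torch.hub.load("ultralytics/yolov5", "custom", path="yolov5s.onnx")
results = model("https://ultralytics.com/images/zidane.jpg")
results.print()
```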
@@ -224,7 +224,7 @@ YOLOv5 OpenCV DNN C++ inference on exported ONNX model examples:
YOLOv5 OpenVINO C++ inference examples:
- [https://github.com/dacquaviva/yolov5-openvino-cpp-python](https://github.com/dacquaviva/yolov5-openvino-cpp-python)
-- [https://github.com/UNeedCryDear/yolov5-seg-opencv-dnn-cpp](https://github.com/UNeedCryDear/yolov5-seg-opencv-dnn-cpp)
+- [https://github.com/UNeedCryDear/yolov5-seg-opencv-onnxruntime-cpp](https://github.com/UNeedCryDear/yolov5-seg-opencv-onnxruntime-cpp)
## TensorFlow.js Web Browser Inference
@@ -232,7 +232,7 @@ YOLOv5 OpenVINO C++ inference examples:
## Supported Environments
-Ultralytics provides a range of ready-to-use environments, each pre-installed with essential dependencies such as [CUDA](https://developer.nvidia.com/cuda), [CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/), and [PyTorch](https://pytorch.org/), to kickstart your projects.
+Ultralytics provides a range of ready-to-use environments, each pre-installed with essential dependencies such as [CUDA](https://developer.nvidia.com/cuda-zone), [CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/), and [PyTorch](https://pytorch.org/), to kickstart your projects.
- **Free GPU Notebooks**:
- **Google Cloud**: [GCP Quickstart Guide](../environments/google_cloud_quickstart_tutorial.md)
diff --git a/docs/en/yolov5/tutorials/model_pruning_and_sparsity.md b/docs/en/yolov5/tutorials/model_pruning_and_sparsity.md
index 6142510ae3..8bda8772e1 100644
--- a/docs/en/yolov5/tutorials/model_pruning_and_sparsity.md
+++ b/docs/en/yolov5/tutorials/model_pruning_and_sparsity.md
@@ -95,7 +95,7 @@ In the results we can observe that we have achieved a **sparsity of 30%** in our
## Supported Environments
-Ultralytics provides a range of ready-to-use environments, each pre-installed with essential dependencies such as [CUDA](https://developer.nvidia.com/cuda), [CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/), and [PyTorch](https://pytorch.org/), to kickstart your projects.
+Ultralytics provides a range of ready-to-use environments, each pre-installed with essential dependencies such as [CUDA](https://developer.nvidia.com/cuda-zone), [CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/), and [PyTorch](https://pytorch.org/), to kickstart your projects.
- **Free GPU Notebooks**:
- **Google Cloud**: [GCP Quickstart Guide](../environments/google_cloud_quickstart_tutorial.md)
diff --git a/docs/en/yolov5/tutorials/multi_gpu_training.md b/docs/en/yolov5/tutorials/multi_gpu_training.md
index df269b0cc0..4a56570fdd 100644
--- a/docs/en/yolov5/tutorials/multi_gpu_training.md
+++ b/docs/en/yolov5/tutorials/multi_gpu_training.md
@@ -171,7 +171,7 @@ If you went through all the above, feel free to raise an Issue by giving as much
## Supported Environments
-Ultralytics provides a range of ready-to-use environments, each pre-installed with essential dependencies such as [CUDA](https://developer.nvidia.com/cuda), [CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/), and [PyTorch](https://pytorch.org/), to kickstart your projects.
+Ultralytics provides a range of ready-to-use environments, each pre-installed with essential dependencies such as [CUDA](https://developer.nvidia.com/cuda-zone), [CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/), and [PyTorch](https://pytorch.org/), to kickstart your projects.
- **Free GPU Notebooks**:
- **Google Cloud**: [GCP Quickstart Guide](../environments/google_cloud_quickstart_tutorial.md)
diff --git a/docs/en/yolov5/tutorials/pytorch_hub_model_loading.md b/docs/en/yolov5/tutorials/pytorch_hub_model_loading.md
index eb1b62c99c..b00d94db4c 100644
--- a/docs/en/yolov5/tutorials/pytorch_hub_model_loading.md
+++ b/docs/en/yolov5/tutorials/pytorch_hub_model_loading.md
@@ -4,7 +4,7 @@ description: Learn how to load YOLOv5 from PyTorch Hub for seamless model infere
keywords: YOLOv5, PyTorch Hub, model loading, Ultralytics, object detection, machine learning, AI, tutorial, inference
---
-📚 This guide explains how to load YOLOv5 🚀 from PyTorch Hub at [https://pytorch.org/hub/ultralytics_yolov5](https://pytorch.org/hub/ultralytics_yolov5).
+📚 This guide explains how to load YOLOv5 🚀 from PyTorch Hub at [https://pytorch.org/hub/ultralytics_yolov5](https://pytorch.org/hub/ultralytics_yolov5/).
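The simplest loading path looks like this minimal sketch (the sample image URL is the one used throughout the YOLOv5 docs):

```python
import torch

# Download and load pretrained YOLOv5s from PyTorch Hub.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Inference on a sample image; results can be printed, shown, or saved.
results = model("https://ultralytics.com/images/zidane.jpg")
results.print()
```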
## Before You Start
@@ -359,7 +359,7 @@ model = torch.hub.load("ultralytics/yolov5", "custom", path="yolov5s_paddle_mode
## Supported Environments
-Ultralytics provides a range of ready-to-use environments, each pre-installed with essential dependencies such as [CUDA](https://developer.nvidia.com/cuda), [CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/), and [PyTorch](https://pytorch.org/), to kickstart your projects.
+Ultralytics provides a range of ready-to-use environments, each pre-installed with essential dependencies such as [CUDA](https://developer.nvidia.com/cuda-zone), [CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/), and [PyTorch](https://pytorch.org/), to kickstart your projects.
- **Free GPU Notebooks**:
- **Google Cloud**: [GCP Quickstart Guide](../environments/google_cloud_quickstart_tutorial.md)
diff --git a/docs/en/yolov5/tutorials/roboflow_datasets_integration.md b/docs/en/yolov5/tutorials/roboflow_datasets_integration.md
index d154b5c5ba..ac1454b201 100644
--- a/docs/en/yolov5/tutorials/roboflow_datasets_integration.md
+++ b/docs/en/yolov5/tutorials/roboflow_datasets_integration.md
@@ -12,14 +12,14 @@ You can now use Roboflow to organize, label, prepare, version, and host your dat
Ultralytics offers two licensing options:
- - The [AGPL-3.0 License](https://github.com/ultralytics/ultralytics/blob/main/LICENSE), an [OSI-approved](https://opensource.org/licenses/) open-source license ideal for students and enthusiasts.
- - The [Enterprise License](https://ultralytics.com/license) for businesses seeking to incorporate our AI models into their products and services.
+ - The [AGPL-3.0 License](https://github.com/ultralytics/ultralytics/blob/main/LICENSE), an [OSI-approved](https://opensource.org/license) open-source license ideal for students and enthusiasts.
+ - The [Enterprise License](https://www.ultralytics.com/license) for businesses seeking to incorporate our AI models into their products and services.
- For more details see [Ultralytics Licensing](https://ultralytics.com/license).
+ For more details see [Ultralytics Licensing](https://www.ultralytics.com/license).
## Upload
-You can upload your data to Roboflow via [web UI](https://docs.roboflow.com/adding-data), [REST API](https://docs.roboflow.com/adding-data/upload-api), or [Python](https://docs.roboflow.com/python).
+You can upload your data to Roboflow via [web UI](https://docs.roboflow.com/adding-data?ref=ultralytics), [REST API](https://docs.roboflow.com/adding-data/upload-api?ref=ultralytics), or [Python](https://docs.roboflow.com/python?ref=ultralytics).
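For the Python route, the upload takes a few lines with the `roboflow` package; the API key and project slug below are placeholders:

```python
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")  # placeholder key from your Roboflow settings
project = rf.workspace().project("your-project-slug")  # placeholder project
project.upload("path/to/image.jpg")  # single-image upload
```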
## Labeling
@@ -52,13 +52,13 @@ We have released a custom training tutorial demonstrating all of the above capab
## Active Learning
-The real world is messy and your model will invariably encounter situations your dataset didn't anticipate. Using [active learning](https://blog.roboflow.com/what-is-active-learning/) is an important strategy to iteratively improve your dataset and model. With the Roboflow and YOLOv5 integration, you can quickly make improvements on your model deployments by using a battle tested machine learning pipeline.
+The real world is messy and your model will invariably encounter situations your dataset didn't anticipate. Using [active learning](https://blog.roboflow.com/what-is-active-learning/?ref=ultralytics) is an important strategy to iteratively improve your dataset and model. With the Roboflow and YOLOv5 integration, you can quickly make improvements on your model deployments by using a battle tested machine learning pipeline.
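A hedged sketch of one such loop, confidence-based sampling, is shown below; the image paths and the 0.4 threshold are illustrative assumptions rather than part of the integration:

```python
import torch

# Flag images where YOLOv5 is unsure so they can be sent back for labeling.
model = torch.hub.load("ultralytics/yolov5", "yolov5s")
images = ["unlabeled/img1.jpg", "unlabeled/img2.jpg"]  # hypothetical pool
results = model(images)

to_label = [
    img
    for img, det in zip(images, results.xyxy)  # det columns: x1, y1, x2, y2, conf, cls
    if len(det) == 0 or float(det[:, 4].min()) < 0.4  # no detections or low confidence
]
print(f"{len(to_label)} images queued for annotation")
```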