From 6183b59d8a1c3fe67ed1a6fab093799a8a85cbe7 Mon Sep 17 00:00:00 2001 From: Abirami Vina Date: Sun, 28 Jan 2024 04:48:47 +0300 Subject: [PATCH] Add TensorRT Docs Integrations Page (#7855) --- docs/en/integrations/index.md | 2 + docs/en/integrations/tensorrt.md | 128 +++++++++++++++++++++++++++++++ docs/mkdocs.yml | 1 + 3 files changed, 131 insertions(+) create mode 100644 docs/en/integrations/tensorrt.md diff --git a/docs/en/integrations/index.md b/docs/en/integrations/index.md index 6d9b791a69..afb7d1c624 100644 --- a/docs/en/integrations/index.md +++ b/docs/en/integrations/index.md @@ -44,6 +44,8 @@ Welcome to the Ultralytics Integrations page! This page provides an overview of - [ONNX](onnx.md): An open-source format created by [Microsoft](https://www.microsoft.com) for facilitating the transfer of AI models between various frameworks, enhancing the versatility and deployment flexibility of Ultralytics models. +- [TensorRT](tensorrt.md): Developed by [NVIDIA](https://www.nvidia.com/), this high-performance deep learning inference framework and model format optimizes AI models for accelerated speed and efficiency on NVIDIA GPUs, ensuring streamlined deployment. + ### Export Formats We also support a variety of model export formats for deployment in different environments. Here are the available formats: diff --git a/docs/en/integrations/tensorrt.md b/docs/en/integrations/tensorrt.md new file mode 100644 index 0000000000..88b3bf46fb --- /dev/null +++ b/docs/en/integrations/tensorrt.md @@ -0,0 +1,128 @@ +--- +comments: true +description: Discover the power and flexibility of exporting Ultralytics YOLOv8 models to TensorRT format for enhanced performance and efficiency on NVIDIA GPUs. +keywords: Ultralytics, YOLOv8, TensorRT Export, Model Deployment, GPU Acceleration, NVIDIA Support, CUDA Deployment +--- + +# TensorRT Export for YOLOv8 Models + +Deploying computer vision models in high-performance environments can require a format that maximizes speed and efficiency. This is especially true when you are deploying your model on NVIDIA GPUs. + +By using the TensorRT export format, you can enhance your [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models for swift and efficient inference on NVIDIA hardware. This guide will give you easy-to-follow steps for the conversion process and help you make the most of NVIDIA's advanced technology in your deep learning projects. + +## TensorRT + +

+<!-- Image: TensorRT Overview -->

+
+[TensorRT](https://developer.nvidia.com/tensorrt), developed by NVIDIA, is an advanced software development kit (SDK) designed for high-speed deep learning inference. It's well-suited for real-time applications like object detection.
+
+This toolkit optimizes deep learning models for NVIDIA GPUs, resulting in faster and more efficient operations. During conversion, models undergo optimizations that include layer fusion, precision calibration (INT8 and FP16), dynamic tensor memory management, and kernel auto-tuning. Converting deep learning models into the TensorRT format allows developers to fully realize the potential of NVIDIA GPUs.
+
+TensorRT is known for its compatibility with various model formats, including TensorFlow, PyTorch, and ONNX, providing developers with a flexible solution for integrating and optimizing models from different frameworks. This versatility enables efficient model deployment across diverse hardware and software environments.
+
+## Key Features of TensorRT Models
+
+TensorRT models offer a range of key features that contribute to their efficiency and effectiveness in high-speed deep learning inference:
+
+- **Precision Calibration**: TensorRT supports precision calibration, allowing models to be fine-tuned for specific accuracy requirements. This includes support for reduced precision formats like INT8 and FP16, which can further boost inference speed while maintaining acceptable accuracy levels. A short export sketch showing how to request reduced precision follows the Layer Fusion illustration below.
+
+- **Layer Fusion**: The TensorRT optimization process includes layer fusion, where multiple layers of a neural network are combined into a single operation. This reduces computational overhead and improves inference speed by minimizing memory access and computation.
+

+<!-- Image: TensorRT Layer Fusion -->

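+To make precision calibration concrete, here is a minimal sketch of requesting an FP16 engine through the Ultralytics export API. It assumes the `half` export argument is available in your installed `ultralytics` release; INT8 export additionally requires calibration data and may not be supported in every version, so verify both against the export documentation before relying on them.
+
+```python
+from ultralytics import YOLO
+
+# Load a YOLOv8 model and export it as a TensorRT engine with FP16 precision
+# ('half=True' is assumed to be supported by your ultralytics release)
+model = YOLO('yolov8n.pt')
+model.export(format='engine', half=True)  # creates 'yolov8n.engine'
+
+# The exported engine loads and runs like any other YOLO model
+trt_model = YOLO('yolov8n.engine')
+results = trt_model('https://ultralytics.com/images/bus.jpg')
+```
+
+On GPUs with hardware FP16 support, this typically reduces memory use and latency with only a minor accuracy impact; INT8 can go further but needs representative calibration images.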
+
+- **Dynamic Tensor Memory Management**: TensorRT efficiently manages tensor memory usage during inference, reducing memory overhead and optimizing memory allocation. This results in more efficient GPU memory utilization.
+
+- **Automatic Kernel Tuning**: TensorRT applies automatic kernel tuning to select the best-performing GPU kernel for each layer of the model. This adaptive approach ensures that the model takes full advantage of the GPU's computational power.
+
+## Deployment Options in TensorRT
+
+Before we look at the code for exporting YOLOv8 models to the TensorRT format, let's understand where TensorRT models are normally used.
+
+TensorRT offers several deployment options, and each option balances ease of integration, performance optimization, and flexibility differently:
+
+- **Deploying within TensorFlow (TF-TRT)**: This method integrates TensorRT into TensorFlow, allowing optimized models to run in a familiar TensorFlow environment. It's useful for models with a mix of supported and unsupported layers, as TF-TRT handles them efficiently; a brief conversion sketch follows the illustration below.
+

+<!-- Image: TensorRT Overview -->

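+The sketch below illustrates the TF-TRT option named above; it is not part of the YOLOv8 export path. It assumes you already have a TensorFlow SavedModel (the `my_saved_model` path is hypothetical) and a TensorFlow build with TensorRT support installed.
+
+```python
+from tensorflow.python.compiler.tensorrt import trt_convert as trt
+
+# Convert an existing TensorFlow SavedModel with TF-TRT
+# ('my_saved_model' is a hypothetical path to a SavedModel you already have)
+converter = trt.TrtGraphConverterV2(input_saved_model_dir='my_saved_model')
+converter.convert()  # supported subgraphs are replaced with TensorRT-optimized ops
+converter.save('my_saved_model_trt')  # unsupported layers keep running in TensorFlow
+```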
+
+- **Standalone TensorRT Runtime API**: Offers granular control, ideal for performance-critical applications. It's more complex but allows for custom implementation of unsupported operators.
+
+- **NVIDIA Triton Inference Server**: An option that supports models from various frameworks. Particularly suited for cloud or edge inferencing, it provides features like concurrent model execution and model analysis.
+
+## Exporting YOLOv8 Models to TensorRT
+
+You can improve execution efficiency and optimize performance by converting YOLOv8 models to TensorRT format.
+
+### Installation
+
+To install the required package, run:
+
+!!! Tip "Installation"
+
+    === "CLI"
+
+        ```bash
+        # Install the required package for YOLOv8
+        pip install ultralytics
+        ```
+
+For detailed instructions and best practices related to the installation process, check our [YOLOv8 Installation guide](../quickstart.md). While installing the required packages for YOLOv8, if you encounter any difficulties, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips.
+
+### Usage
+
+Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
+
+!!! Example "Usage"
+
+    === "Python"
+
+        ```python
+        from ultralytics import YOLO
+
+        # Load the YOLOv8 model
+        model = YOLO('yolov8n.pt')
+
+        # Export the model to TensorRT format
+        model.export(format='engine')  # creates 'yolov8n.engine'
+
+        # Load the exported TensorRT model
+        tensorrt_model = YOLO('yolov8n.engine')
+
+        # Run inference
+        results = tensorrt_model('https://ultralytics.com/images/bus.jpg')
+        ```
+
+    === "CLI"
+
+        ```bash
+        # Export a YOLOv8n PyTorch model to TensorRT format
+        yolo export model=yolov8n.pt format=engine  # creates 'yolov8n.engine'
+
+        # Run inference with the exported model
+        yolo predict model=yolov8n.engine source='https://ultralytics.com/images/bus.jpg'
+        ```
+
+For more details about the export process, visit the [Ultralytics documentation page on exporting](../modes/export.md).
+
+## Deploying Exported YOLOv8 TensorRT Models
+
+Having successfully exported your Ultralytics YOLOv8 models to TensorRT format, you're now ready to deploy them. For in-depth instructions on deploying your TensorRT models in various settings, take a look at the following resources:
+
+- **[Deploying Deep Neural Networks with NVIDIA TensorRT](https://developer.nvidia.com/blog/deploying-deep-learning-nvidia-tensorrt/)**: This article explains how to use NVIDIA TensorRT to deploy deep neural networks on GPU-based deployment platforms efficiently.
+
+- **[End-to-End AI for NVIDIA-Based PCs: NVIDIA TensorRT Deployment](https://developer.nvidia.com/blog/end-to-end-ai-for-nvidia-based-pcs-nvidia-tensorrt-deployment/)**: This blog post explains the use of NVIDIA TensorRT for optimizing and deploying AI models on NVIDIA-based PCs.
+
+- **[GitHub Repository for NVIDIA TensorRT](https://github.com/NVIDIA/TensorRT)**: This is the official GitHub repository that contains the source code and documentation for NVIDIA TensorRT.
+
+## Summary
+
+In this guide, we focused on converting Ultralytics YOLOv8 models to NVIDIA's TensorRT model format. This conversion step is crucial for improving the efficiency and speed of YOLOv8 models, making them more effective and suitable for diverse deployment environments.
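+
+As a compact recap of the workflow covered above, the sketch below exports a TensorRT engine with two optional arguments mentioned in this guide, dynamic input shapes and the builder workspace size, then reloads the engine for a quick check. Treat `dynamic` and `workspace` as assumptions to verify against the export documentation for your `ultralytics` version.
+
+```python
+from ultralytics import YOLO
+
+# Export a TensorRT engine with dynamic input shapes and a 4 GiB builder workspace
+# ('dynamic' and 'workspace' are optional export arguments; confirm your release supports them)
+model = YOLO('yolov8n.pt')
+model.export(format='engine', dynamic=True, workspace=4)
+
+# Reload the exported engine and run a quick sanity check
+trt_model = YOLO('yolov8n.engine')
+trt_model('https://ultralytics.com/images/bus.jpg')
+```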
+ +For more information on usage details, take a look at the [TensorRT official documentation](https://docs.nvidia.com/deeplearning/tensorrt/). + +If you're curious about additional Ultralytics YOLOv8 integrations, our [integration guide page](../integrations/index.md) provides an extensive selection of informative resources and insights. diff --git a/docs/mkdocs.yml b/docs/mkdocs.yml index ac08e208d6..d7eaad2b62 100644 --- a/docs/mkdocs.yml +++ b/docs/mkdocs.yml @@ -337,6 +337,7 @@ nav: - Comet ML: integrations/comet.md - OpenVINO: integrations/openvino.md - ONNX: integrations/onnx.md + - TensorRT: integrations/tensorrt.md - Ray Tune: integrations/ray-tune.md - Roboflow: integrations/roboflow.md - MLflow: integrations/mlflow.md