Merge branch 'main' into yolov9

pull/8571/head
Glenn Jocher 10 months ago committed by GitHub
commit 6ecdc05bc0
7 changed files:

1. docs/en/integrations/index.md (8 changed lines)
2. docs/en/integrations/ncnn.md (120 changed lines)
3. mkdocs.yml (7 changed lines)
4. ultralytics/utils/loss.py (4 changed lines)
5. ultralytics/utils/metrics.py (70 changed lines)
6. ultralytics/utils/plotting.py (4 changed lines)
7. ultralytics/utils/torch_utils.py (2 changed lines)

docs/en/integrations/index.md

@@ -40,19 +40,21 @@ Welcome to the Ultralytics Integrations page! This page provides an overview of
 - [Neural Magic](neural-magic.md): Leverage Quantization Aware Training (QAT) and pruning techniques to optimize Ultralytics models for superior performance and leaner size.
-- [Gradio](../integrations/gradio.md) 🚀 NEW: Deploy Ultralytics models with Gradio for real-time, interactive object detection demos.
-- [OpenVINO](openvino.md): Intel's toolkit for optimizing and deploying computer vision models efficiently across various Intel CPU and GPU platforms.
+- [Gradio](gradio.md) 🚀 NEW: Deploy Ultralytics models with Gradio for real-time, interactive object detection demos.
+- [TorchScript](torchscript.md): Developed as part of the [PyTorch](https://pytorch.org/) framework, TorchScript enables efficient execution and deployment of machine learning models in various production environments without the need for Python dependencies.
 - [ONNX](onnx.md): An open-source format created by [Microsoft](https://www.microsoft.com) for facilitating the transfer of AI models between various frameworks, enhancing the versatility and deployment flexibility of Ultralytics models.
+- [OpenVINO](openvino.md): Intel's toolkit for optimizing and deploying computer vision models efficiently across various Intel CPU and GPU platforms.
 - [TensorRT](tensorrt.md): Developed by [NVIDIA](https://www.nvidia.com/), this high-performance deep learning inference framework and model format optimizes AI models for accelerated speed and efficiency on NVIDIA GPUs, ensuring streamlined deployment.
 - [CoreML](coreml.md): CoreML, developed by [Apple](https://www.apple.com/), is a framework designed for efficiently integrating machine learning models into applications across iOS, macOS, watchOS, and tvOS, using Apple's hardware for effective and secure model deployment.
 - [TFLite](tflite.md): Developed by [Google](https://www.google.com), TFLite is a lightweight framework for deploying machine learning models on mobile and edge devices, ensuring fast, efficient inference with minimal memory footprint.
-- [TorchScript](torchscript.md): Developed as part of the [PyTorch](https://pytorch.org/) framework, TorchScript enables efficient execution and deployment of machine learning models in various production environments without the need for Python dependencies.
+- [NCNN](ncnn.md): Developed by [Tencent](http://www.tencent.com/), NCNN is an efficient neural network inference framework tailored for mobile devices. It enables direct deployment of AI models into apps, optimizing performance across various mobile platforms.

 ### Export Formats

docs/en/integrations/ncnn.md (new file)

@@ -0,0 +1,120 @@
---
comments: true
description: Uncover how to improve your Ultralytics YOLOv8 model's performance using the NCNN export format that is suitable for devices with limited computation resources.
keywords: Ultralytics, YOLOv8, NCNN Export, Export YOLOv8, Model Deployment
---
# How to Export to NCNN from YOLOv8 for Smooth Deployment
Deploying computer vision models on devices with limited computational power, such as mobile or embedded systems, can be tricky. You need a format optimized for these constraints, so that even devices with limited processing power can handle advanced computer vision tasks well.
The export to NCNN format feature allows you to optimize your [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models for lightweight device-based applications. In this guide, we'll walk you through how to convert your models to the NCNN format, making it easier for your models to perform well on various mobile and embedded devices.
## Why should you export to NCNN?
<p align="center">
<img width="100%" src="https://repository-images.githubusercontent.com/494294418/207a2e12-dc16-41a6-a39e-eae26e662638" alt="NCNN overview">
</p>
The [NCNN](https://github.com/Tencent/ncnn) framework, developed by Tencent, is a high-performance neural network inference computing framework optimized specifically for mobile platforms, including mobile phones, embedded devices, and IoT devices. NCNN is compatible with a wide range of platforms, including Linux, Android, iOS, and macOS.
NCNN is known for its fast processing speed on mobile CPUs and enables rapid deployment of deep learning models to mobile platforms. This makes it easier to build smart apps, putting the power of AI right at your fingertips.
## Key Features of NCNN Models
NCNN models offer a wide range of key features that enable on-device machine learning by helping developers run their models on mobile, embedded, and edge devices:
- **Efficient and High-Performance**: NCNN models are made to be efficient and lightweight, optimized for running on mobile and embedded devices like Raspberry Pi with limited resources. They can also achieve high performance with high accuracy on various computer vision-based tasks.
- **Quantization**: NCNN models often support quantization, a technique that reduces the precision of the model's weights and activations. This leads to further performance improvements and a smaller memory footprint (a brief reduced-precision export sketch follows this list).
- **Compatibility**: NCNN models are compatible with popular deep learning frameworks like [TensorFlow](https://www.tensorflow.org/), [Caffe](https://caffe.berkeleyvision.org/), and [ONNX](https://onnx.ai/). This compatibility allows developers to use existing models and workflows easily.
- **Easy to Use**: NCNN models are designed for easy integration into various applications, thanks to their compatibility with popular deep learning frameworks. Additionally, NCNN offers user-friendly tools for converting models between different formats, ensuring smooth interoperability across the development landscape.
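As a hedged illustration of working with reduced precision, the sketch below requests a half-precision (FP16) NCNN export through the standard Ultralytics export arguments. Treat the `half` and `imgsz` arguments as assumptions to verify against the [export documentation](../modes/export.md) for your installed version.

```python
from ultralytics import YOLO

# Minimal sketch, assuming the 'half' and 'imgsz' export arguments apply to the NCNN format
model = YOLO('yolov8n.pt')
model.export(format='ncnn', half=True, imgsz=640)  # FP16 export at 640x640 input size
```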
## Deployment Options with NCNN
Before we look at the code for exporting YOLOv8 models to the NCNN format, let’s understand how NCNN models are normally used.
NCNN models, designed for efficiency and performance, are compatible with a variety of deployment platforms:
- **Mobile Deployment**: Specifically optimized for Android and iOS, allowing for seamless integration into mobile applications for efficient on-device inference.
- **Embedded Systems and IoT Devices**: If you find that running inference on a Raspberry Pi with the [Ultralytics Guide](../guides/raspberry-pi.md) isn't fast enough, switching to an NCNN exported model could help speed things up. NCNN is great for devices like Raspberry Pi and NVIDIA Jetson, especially in situations where you need quick processing right on the device.
- **Desktop and Server Deployment**: Capable of being deployed in desktop and server environments across Linux, Windows, and macOS, supporting development, training, and evaluation with higher computational capacities.
## Export to NCNN: Converting Your YOLOv8 Model
You can expand model compatibility and deployment flexibility by converting YOLOv8 models to NCNN format.
### Installation
To install the required packages, run:
!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required package for YOLOv8
        pip install ultralytics
        ```
For detailed instructions and best practices related to the installation process, check our [Ultralytics Installation guide](../quickstart.md). While installing the required packages for YOLOv8, if you encounter any difficulties, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips.
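Once the package is installed, a quick way to confirm your environment before exporting is the built-in checks helper. This is a minimal sketch assuming the `checks()` function exposed by the `ultralytics` package in recent versions.

```python
# Minimal environment check, assuming ultralytics exposes a checks() helper
import ultralytics

ultralytics.checks()  # prints Python, torch, and hardware details useful for troubleshooting
```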
### Usage
Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can verify that the model you select supports export functionality [here](../modes/export.md).
!!! Example "Usage"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load the YOLOv8 model
        model = YOLO('yolov8n.pt')

        # Export the model to NCNN format
        model.export(format='ncnn')  # creates '/yolov8n_ncnn_model'

        # Load the exported NCNN model
        ncnn_model = YOLO('./yolov8n_ncnn_model')

        # Run inference
        results = ncnn_model('https://ultralytics.com/images/bus.jpg')
        ```

    === "CLI"

        ```bash
        # Export a YOLOv8n PyTorch model to NCNN format
        yolo export model=yolov8n.pt format=ncnn  # creates '/yolov8n_ncnn_model'

        # Run inference with the exported model
        yolo predict model='./yolov8n_ncnn_model' source='https://ultralytics.com/images/bus.jpg'
        ```
For more details about supported export options, visit the [Ultralytics documentation page on deployment options](../guides/model-deployment-options.md).
## Deploying Exported YOLOv8 NCNN Models
After successfully exporting your Ultralytics YOLOv8 models to NCNN format, you can now deploy them. The primary and recommended first step for running an NCNN model is to load it with `YOLO("./yolov8n_ncnn_model")`, as outlined in the previous usage code snippet (a short sketch of this path also follows the list below). However, for in-depth instructions on deploying your NCNN models in various other settings, take a look at the following resources:
- **[Android](https://github.com/Tencent/ncnn/wiki/how-to-build#build-for-android)**: This page explains how to build NCNN for Android and use NCNN models for tasks like object detection in Android applications.
- **[macOS](https://github.com/Tencent/ncnn/wiki/how-to-build#build-for-macos)**: Understand how to build and use NCNN models on macOS.
- **[Linux](https://github.com/Tencent/ncnn/wiki/how-to-build#build-for-linux)**: Explore this page to learn how to deploy NCNN models on Linux, including limited-resource devices like the Raspberry Pi.
- **[Windows x64 using VS2017](https://github.com/Tencent/ncnn/wiki/how-to-build#build-for-windows-x64-using-visual-studio-community-2017)**: Explore this page to learn how to deploy NCNN models on Windows x64 using Visual Studio Community 2017.
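As a short, hedged sketch of that recommended first path, the snippet below loads the exported model with the Ultralytics Python API and works with the returned results. The model path mirrors the usage example above, and the result-handling calls (`boxes`, `save()`) are assumptions about the current Ultralytics `Results` API, so confirm them against your installed version.

```python
from ultralytics import YOLO

# Load the NCNN model exported earlier (default export folder assumed)
ncnn_model = YOLO('./yolov8n_ncnn_model')

# Run inference on an image
results = ncnn_model('https://ultralytics.com/images/bus.jpg')

# Inspect detections and save an annotated copy of the image
for result in results:
    print(result.boxes.xyxy)  # bounding boxes in xyxy format
    result.save(filename='bus_annotated.jpg')  # save the annotated image
```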
## Summary
In this guide, we've gone over exporting Ultralytics YOLOv8 models to the NCNN format. This conversion step is crucial for improving the efficiency and speed of YOLOv8 models, making them more effective and suitable for limited-resource computing environments.
For detailed instructions on usage, please refer to the [official NCNN documentation](https://ncnn.readthedocs.io/en/latest/index.html).
Also, if you're interested in exploring other integration options for Ultralytics YOLOv8, be sure to visit our [integration guide page](index.md) for further insights and information.

mkdocs.yml

@@ -339,13 +339,14 @@ nav:
 - Clearml Logging: yolov5/tutorials/clearml_logging_integration.md
 - Integrations:
 - integrations/index.md
-- Comet ML: integrations/comet.md
-- OpenVINO: integrations/openvino.md
+- TorchScript: integrations/torchscript.md
 - ONNX: integrations/onnx.md
+- OpenVINO: integrations/openvino.md
 - TensorRT: integrations/tensorrt.md
 - CoreML: integrations/coreml.md
 - TFLite: integrations/tflite.md
-- TorchScript: integrations/torchscript.md
+- NCNN: integrations/ncnn.md
+- Comet ML: integrations/comet.md
 - Ray Tune: integrations/ray-tune.md
 - Roboflow: integrations/roboflow.md
 - MLflow: integrations/mlflow.md

ultralytics/utils/loss.py

@@ -137,10 +137,10 @@ class KeypointLoss(nn.Module):
     def forward(self, pred_kpts, gt_kpts, kpt_mask, area):
         """Calculates keypoint loss factor and Euclidean distance loss for predicted and actual keypoints."""
-        d = (pred_kpts[..., 0] - gt_kpts[..., 0]) ** 2 + (pred_kpts[..., 1] - gt_kpts[..., 1]) ** 2
+        d = (pred_kpts[..., 0] - gt_kpts[..., 0]).pow(2) + (pred_kpts[..., 1] - gt_kpts[..., 1]).pow(2)
         kpt_loss_factor = kpt_mask.shape[1] / (torch.sum(kpt_mask != 0, dim=1) + 1e-9)
         # e = d / (2 * (area * self.sigmas) ** 2 + 1e-9)  # from formula
-        e = d / (2 * self.sigmas) ** 2 / (area + 1e-9) / 2  # from cocoeval
+        e = d / (2 * self.sigmas).pow(2) / (area + 1e-9) / 2  # from cocoeval
         return (kpt_loss_factor.view(-1, 1) * ((1 - torch.exp(-e)) * kpt_mask)).mean()

ultralytics/utils/metrics.py

@@ -116,10 +116,12 @@ def bbox_iou(box1, box2, xywh=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7
     cw = b1_x2.maximum(b2_x2) - b1_x1.minimum(b2_x1)  # convex (smallest enclosing box) width
     ch = b1_y2.maximum(b2_y2) - b1_y1.minimum(b2_y1)  # convex height
     if CIoU or DIoU:  # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1
-        c2 = cw**2 + ch**2 + eps  # convex diagonal squared
-        rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 + (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4  # center dist ** 2
+        c2 = cw.pow(2) + ch.pow(2) + eps  # convex diagonal squared
+        rho2 = (
+            (b2_x1 + b2_x2 - b1_x1 - b1_x2).pow(2) + (b2_y1 + b2_y2 - b1_y1 - b1_y2).pow(2)
+        ) / 4  # center dist**2
         if CIoU:  # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47
-            v = (4 / math.pi**2) * (torch.atan(w2 / h2) - torch.atan(w1 / h1)).pow(2)
+            v = (4 / math.pi**2) * ((w2 / h2).atan() - (w1 / h1).atan()).pow(2)
             with torch.no_grad():
                 alpha = v / (v - iou + (1 + eps))
             return iou - (rho2 / c2 + v * alpha)  # CIoU
@@ -162,12 +164,12 @@ def kpt_iou(kpt1, kpt2, area, sigma, eps=1e-7):
     Returns:
         (torch.Tensor): A tensor of shape (N, M) representing keypoint similarities.
     """
-    d = (kpt1[:, None, :, 0] - kpt2[..., 0]) ** 2 + (kpt1[:, None, :, 1] - kpt2[..., 1]) ** 2  # (N, M, 17)
+    d = (kpt1[:, None, :, 0] - kpt2[..., 0]).pow(2) + (kpt1[:, None, :, 1] - kpt2[..., 1]).pow(2)  # (N, M, 17)
     sigma = torch.tensor(sigma, device=kpt1.device, dtype=kpt1.dtype)  # (17, )
     kpt_mask = kpt1[..., 2] != 0  # (N, 17)
-    e = d / (2 * sigma) ** 2 / (area[:, None, None] + eps) / 2  # from cocoeval
+    e = d / (2 * sigma).pow(2) / (area[:, None, None] + eps) / 2  # from cocoeval
     # e = d / ((area[None, :, None] + eps) * sigma) ** 2 / 2  # from formula
-    return (torch.exp(-e) * kpt_mask[:, None]).sum(-1) / (kpt_mask.sum(-1)[:, None] + eps)
+    return ((-e).exp() * kpt_mask[:, None]).sum(-1) / (kpt_mask.sum(-1)[:, None] + eps)

 def _get_covariance_matrix(boxes):
@@ -181,13 +183,13 @@ def _get_covariance_matrix(boxes):
         (torch.Tensor): Covariance metrixs corresponding to original rotated bounding boxes.
     """
     # Gaussian bounding boxes, ignore the center points (the first two columns) because they are not needed here.
-    gbbs = torch.cat((torch.pow(boxes[:, 2:4], 2) / 12, boxes[:, 4:]), dim=-1)
+    gbbs = torch.cat((boxes[:, 2:4].pow(2) / 12, boxes[:, 4:]), dim=-1)
     a, b, c = gbbs.split(1, dim=-1)
-    return (
-        a * torch.cos(c) ** 2 + b * torch.sin(c) ** 2,
-        a * torch.sin(c) ** 2 + b * torch.cos(c) ** 2,
-        a * torch.cos(c) * torch.sin(c) - b * torch.sin(c) * torch.cos(c),
-    )
+    cos = c.cos()
+    sin = c.sin()
+    cos2 = cos.pow(2)
+    sin2 = sin.pow(2)
+    return a * cos2 + b * sin2, a * sin2 + b * cos2, (a - b) * cos * sin

 def probiou(obb1, obb2, CIoU=False, eps=1e-7):
@@ -208,26 +210,21 @@ def probiou(obb1, obb2, CIoU=False, eps=1e-7):
     a2, b2, c2 = _get_covariance_matrix(obb2)

     t1 = (
-        ((a1 + a2) * (torch.pow(y1 - y2, 2)) + (b1 + b2) * (torch.pow(x1 - x2, 2)))
-        / ((a1 + a2) * (b1 + b2) - (torch.pow(c1 + c2, 2)) + eps)
+        ((a1 + a2) * (y1 - y2).pow(2) + (b1 + b2) * (x1 - x2).pow(2)) / ((a1 + a2) * (b1 + b2) - (c1 + c2).pow(2) + eps)
     ) * 0.25
-    t2 = (((c1 + c2) * (x2 - x1) * (y1 - y2)) / ((a1 + a2) * (b1 + b2) - (torch.pow(c1 + c2, 2)) + eps)) * 0.5
+    t2 = (((c1 + c2) * (x2 - x1) * (y1 - y2)) / ((a1 + a2) * (b1 + b2) - (c1 + c2).pow(2) + eps)) * 0.5
     t3 = (
-        torch.log(
-            ((a1 + a2) * (b1 + b2) - (torch.pow(c1 + c2, 2)))
-            / (4 * torch.sqrt((a1 * b1 - torch.pow(c1, 2)).clamp_(0) * (a2 * b2 - torch.pow(c2, 2)).clamp_(0)) + eps)
-            + eps
-        )
-        * 0.5
-    )
-    bd = t1 + t2 + t3
-    bd = torch.clamp(bd, eps, 100.0)
-    hd = torch.sqrt(1.0 - torch.exp(-bd) + eps)
+        ((a1 + a2) * (b1 + b2) - (c1 + c2).pow(2))
+        / (4 * ((a1 * b1 - c1.pow(2)).clamp_(0) * (a2 * b2 - c2.pow(2)).clamp_(0)).sqrt() + eps)
+        + eps
+    ).log() * 0.5
+    bd = (t1 + t2 + t3).clamp(eps, 100.0)
+    hd = (1.0 - (-bd).exp() + eps).sqrt()
     iou = 1 - hd
     if CIoU:  # only include the wh aspect ratio part
         w1, h1 = obb1[..., 2:4].split(1, dim=-1)
         w2, h2 = obb2[..., 2:4].split(1, dim=-1)
-        v = (4 / math.pi**2) * (torch.atan(w2 / h2) - torch.atan(w1 / h1)).pow(2)
+        v = (4 / math.pi**2) * ((w2 / h2).atan() - (w1 / h1).atan()).pow(2)
         with torch.no_grad():
             alpha = v / (v - iou + (1 + eps))
         return iou - v * alpha  # CIoU
@@ -255,21 +252,16 @@ def batch_probiou(obb1, obb2, eps=1e-7):
     a2, b2, c2 = (x.squeeze(-1)[None] for x in _get_covariance_matrix(obb2))

     t1 = (
-        ((a1 + a2) * (torch.pow(y1 - y2, 2)) + (b1 + b2) * (torch.pow(x1 - x2, 2)))
-        / ((a1 + a2) * (b1 + b2) - (torch.pow(c1 + c2, 2)) + eps)
+        ((a1 + a2) * (y1 - y2).pow(2) + (b1 + b2) * (x1 - x2).pow(2)) / ((a1 + a2) * (b1 + b2) - (c1 + c2).pow(2) + eps)
     ) * 0.25
-    t2 = (((c1 + c2) * (x2 - x1) * (y1 - y2)) / ((a1 + a2) * (b1 + b2) - (torch.pow(c1 + c2, 2)) + eps)) * 0.5
+    t2 = (((c1 + c2) * (x2 - x1) * (y1 - y2)) / ((a1 + a2) * (b1 + b2) - (c1 + c2).pow(2) + eps)) * 0.5
     t3 = (
-        torch.log(
-            ((a1 + a2) * (b1 + b2) - (torch.pow(c1 + c2, 2)))
-            / (4 * torch.sqrt((a1 * b1 - torch.pow(c1, 2)).clamp_(0) * (a2 * b2 - torch.pow(c2, 2)).clamp_(0)) + eps)
-            + eps
-        )
-        * 0.5
-    )
-    bd = t1 + t2 + t3
-    bd = torch.clamp(bd, eps, 100.0)
-    hd = torch.sqrt(1.0 - torch.exp(-bd) + eps)
+        ((a1 + a2) * (b1 + b2) - (c1 + c2).pow(2))
+        / (4 * ((a1 * b1 - c1.pow(2)).clamp_(0) * (a2 * b2 - c2.pow(2)).clamp_(0)).sqrt() + eps)
+        + eps
+    ).log() * 0.5
+    bd = (t1 + t2 + t3).clamp(eps, 100.0)
+    hd = (1.0 - (-bd).exp() + eps).sqrt()
     return 1 - hd

ultralytics/utils/plotting.py

@@ -1028,13 +1028,13 @@ def feature_visualization(x, module_type, stage, n=32, save_dir=Path("runs/detec
     for m in ["Detect", "Pose", "Segment"]:
         if m in module_type:
             return
-    batch, channels, height, width = x.shape  # batch, channels, height, width
+    _, channels, height, width = x.shape  # batch, channels, height, width
     if height > 1 and width > 1:
         f = save_dir / f"stage{stage}_{module_type.split('.')[-1]}_features.png"  # filename
         blocks = torch.chunk(x[0].cpu(), channels, dim=0)  # select batch index 0, block by channels
         n = min(n, channels)  # number of plots
-        fig, ax = plt.subplots(math.ceil(n / 8), 8, tight_layout=True)  # 8 rows x n/8 cols
+        _, ax = plt.subplots(math.ceil(n / 8), 8, tight_layout=True)  # 8 rows x n/8 cols
         ax = ax.ravel()
         plt.subplots_adjust(wspace=0.05, hspace=0.05)
         for i in range(n):

ultralytics/utils/torch_utils.py

@@ -115,7 +115,7 @@ def select_device(device="", batch=0, newline=False, verbose=True):
             device = "0"
         visible = os.environ.get("CUDA_VISIBLE_DEVICES", None)
         os.environ["CUDA_VISIBLE_DEVICES"] = device  # set environment variable - must be before assert is_available()
-        if not (torch.cuda.is_available() and torch.cuda.device_count() >= len(device.replace(",", ""))):
+        if not (torch.cuda.is_available() and torch.cuda.device_count() >= len(device.split(","))):
            LOGGER.info(s)
            install = (
                "See https://pytorch.org/get-started/locally/ for up-to-date torch install instructions if no "
