`py-cpuinfo` Exception context manager fix (#14814)

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Branch: pull/14817/head
Committed by Glenn Jocher via GitHub, 4 months ago
parent f955fedb7f
commit 7ecab94b29
Changed files:

1. .github/workflows/publish.yml (1 change)
2. docs/en/models/index.md (2 changes)
3. docs/en/models/sam-2.md (24 changes)
4. mkdocs.yml (3 changes)
5. ultralytics/utils/torch_utils.py (7 changes)

.github/workflows/publish.yml:

```diff
@@ -168,6 +168,7 @@ jobs:
       PERSONAL_ACCESS_TOKEN: ${{ secrets.PERSONAL_ACCESS_TOKEN }}
       INDEXNOW_KEY: ${{ secrets.INDEXNOW_KEY_DOCS }}
     run: |
+      pip install black
       export JUPYTER_PLATFORM_DIRS=1
       python docs/build_docs.py
       git clone https://github.com/ultralytics/docs.git docs-repo
```

docs/en/models/index.md:

```diff
@@ -21,7 +21,7 @@ Here are some of the key models supported:
 7. **[YOLOv9](yolov9.md)**: An experimental model trained on the Ultralytics [YOLOv5](yolov5.md) codebase implementing Programmable Gradient Information (PGI).
 8. **[YOLOv10](yolov10.md)**: By Tsinghua University, featuring NMS-free training and efficiency-accuracy driven architecture, delivering state-of-the-art performance and latency.
 9. **[Segment Anything Model (SAM)](sam.md)**: Meta's original Segment Anything Model (SAM).
-10. **[Segment Anything Model 2 (SAM2)](sam2.md)**: The next generation of Meta's Segment Anything Model (SAM) for videos and images.
+10. **[Segment Anything Model 2 (SAM2)](sam-2.md)**: The next generation of Meta's Segment Anything Model (SAM) for videos and images.
 11. **[Mobile Segment Anything Model (MobileSAM)](mobile-sam.md)**: MobileSAM for mobile applications, by Kyung Hee University.
 12. **[Fast Segment Anything Model (FastSAM)](fast-sam.md)**: FastSAM by Image & Video Analysis Group, Institute of Automation, Chinese Academy of Sciences.
 13. **[YOLO-NAS](yolo-nas.md)**: YOLO Neural Architecture Search (NAS) Models.
```

docs/en/models/sam-2.md:

```diff
@@ -112,7 +112,7 @@ pip install ultralytics
 The following table details the available SAM 2 models, their pre-trained weights, supported tasks, and compatibility with different operating modes like [Inference](../modes/predict.md), [Validation](../modes/val.md), [Training](../modes/train.md), and [Export](../modes/export.md).

 | Model Type  | Pre-trained Weights                                                                   | Tasks Supported                              | Inference | Validation | Training | Export |
-| ---------- | ------------------------------------------------------------------------------------- | -------------------------------------------- | --------- | ---------- | -------- | ------ |
+| ----------- | ------------------------------------------------------------------------------------ | -------------------------------------------- | --------- | ---------- | -------- | ------ |
 | SAM 2 base  | [sam2_b.pt](https://github.com/ultralytics/assets/releases/download/v8.2.0/sam2_b.pt) | [Instance Segmentation](../tasks/segment.md) | ✅        | ❌         | ❌       | ❌     |
 | SAM 2 large | [sam2_l.pt](https://github.com/ultralytics/assets/releases/download/v8.2.0/sam2_l.pt) | [Instance Segmentation](../tasks/segment.md) | ✅        | ❌         | ❌       | ❌     |
```
````diff
@@ -129,10 +129,10 @@ SAM2 can be utilized across a broad spectrum of tasks, including real-time video
     === "Python"

         ```python
-        from ultralytics import SAM2
+        from ultralytics import SAM

         # Load a model
-        model = SAM2("sam2_b.pt")
+        model = SAM("sam2_b.pt")

         # Display model information (optional)
         model.info()
````
````diff
@@ -153,10 +153,10 @@ SAM2 can be utilized across a broad spectrum of tasks, including real-time video
     === "Python"

         ```python
-        from ultralytics import SAM2
+        from ultralytics import SAM

         # Load a model
-        model = SAM2("sam2_b.pt")
+        model = SAM("sam2_b.pt")

         # Display model information (optional)
         model.info()
````
````diff
@@ -261,10 +261,10 @@ If SAM2 is a crucial part of your research or development work, please cite it u
    === "BibTeX"

        ```bibtex
-       @article{kirillov2024sam2,
-         title={SAM2: Segment Anything Model 2},
-         author={Alexander Kirillov and others},
-         journal={arXiv preprint arXiv:2401.12741},
+       @article{ravi2024sam2,
+         title={SAM 2: Segment Anything in Images and Videos},
+         author={Ravi, Nikhila and Gabeur, Valentin and Hu, Yuan-Ting and Hu, Ronghang and Ryali, Chaitanya and Ma, Tengyu and Khedr, Haitham and R{\"a}dle, Roman and Rolland, Chloe and Gustafson, Laura and Mintun, Eric and Pan, Junting and Alwala, Kalyan Vasudev and Carion, Nicolas and Wu, Chao-Yuan and Girshick, Ross and Doll{\'a}r, Piotr and Feichtenhofer, Christoph},
+         journal={arXiv preprint},
          year={2024}
        }
        ```
````
````diff
@@ -296,10 +296,10 @@ SAM2 can be utilized for real-time video segmentation by leveraging its promptab
    === "Python"

        ```python
-       from ultralytics import SAM2
+       from ultralytics import SAM

        # Load a model
-       model = SAM2("sam2_b.pt")
+       model = SAM("sam2_b.pt")

        # Display model information (optional)
        model.info()
````
````diff
@@ -311,7 +311,7 @@ SAM2 can be utilized for real-time video segmentation by leveraging its promptab
        results = model("path/to/image.jpg", points=[150, 150], labels=[1])
        ```

-For more comprehensive usage, refer to the [How to Use SAM2](#how-to-use-sam2-versatility-in-image-and-video-segmentation) section.
+For more comprehensive usage, refer to the [How to Use SAM 2](#how-to-use-sam-2-versatility-in-image-and-video-segmentation) section.

 ### What datasets are used to train SAM 2, and how do they enhance its performance?
````

mkdocs.yml:

```diff
@@ -239,7 +239,7 @@ nav:
       - YOLOv9: models/yolov9.md
       - YOLOv10: models/yolov10.md
       - SAM (Segment Anything Model): models/sam.md
-      - SAM2 (Segment Anything Model 2): models/sam2.md
+      - SAM2 (Segment Anything Model 2): models/sam-2.md
       - MobileSAM (Mobile Segment Anything Model): models/mobile-sam.md
       - FastSAM (Fast Segment Anything Model): models/fast-sam.md
       - YOLO-NAS (Neural Architecture Search): models/yolo-nas.md
```
```diff
@@ -659,6 +659,7 @@ plugins:
         sdk.md: index.md
         hub/inference_api.md: hub/inference-api.md
         usage/hyperparameter_tuning.md: integrations/ray-tune.md
+        models/sam2.md: models/sam-2.md
         reference/base_pred.md: reference/engine/predictor.md
         reference/base_trainer.md: reference/engine/trainer.md
         reference/exporter.md: reference/engine/exporter.md
```
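The new `models/sam2.md: models/sam-2.md` entry is a redirect map, so links to the old page URL keep working after the rename. Assuming these entries live under the mkdocs-redirects plugin (whose config key is `redirect_maps`), the surrounding configuration has roughly this shape; this is a minimal sketch, not the project's exact file:

```yaml
# Sketch of an mkdocs-redirects configuration (assumed plugin).
# At build time, each old path on the left gets a stub page that
# forwards the browser to the new path on the right.
plugins:
  - redirects:
      redirect_maps:
        models/sam2.md: models/sam-2.md # old path -> new path
```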

@ -1,5 +1,5 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license # Ultralytics YOLO 🚀, AGPL-3.0 license
import contextlib
import gc import gc
import math import math
import os import os
```diff
@@ -101,13 +101,16 @@ def autocast(enabled: bool, device: str = "cuda"):
 def get_cpu_info():
     """Return a string with system CPU information, i.e. 'Apple M2'."""
-    import cpuinfo  # pip install py-cpuinfo
-
-    k = "brand_raw", "hardware_raw", "arch_string_raw"  # info keys sorted by preference (not all keys always available)
-    info = cpuinfo.get_cpu_info()  # info dict
-    string = info.get(k[0] if k[0] in info else k[1] if k[1] in info else k[2], "unknown")
-    return string.replace("(R)", "").replace("CPU ", "").replace("@ ", "")
+    with contextlib.suppress(Exception):
+        import cpuinfo  # pip install py-cpuinfo
+
+        k = "brand_raw", "hardware_raw", "arch_string_raw"  # keys sorted by preference (not all keys always available)
+        info = cpuinfo.get_cpu_info()  # info dict
+        string = info.get(k[0] if k[0] in info else k[1] if k[1] in info else k[2], "unknown")
+        return string.replace("(R)", "").replace("CPU ", "").replace("@ ", "")
+    return "unknown"


 def select_device(device="", batch=0, newline=False, verbose=True):
     """
```

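The substance of this fix is the `contextlib.suppress` pattern: if `py-cpuinfo` is missing or raises while querying the CPU, the exception is swallowed and execution falls through to the `"unknown"` fallback instead of crashing the caller. A minimal standalone sketch of the same pattern (illustrative only; `safe_info` and `provider` are hypothetical names, not Ultralytics code):

```python
import contextlib


def safe_info(provider):
    """Return provider() output, falling back to 'unknown' on any exception."""
    with contextlib.suppress(Exception):
        # Anything raised inside this block (ImportError, KeyError, ...) is
        # silently discarded, and execution resumes after the with-statement.
        return provider()
    return "unknown"  # reached only when provider() raised


print(safe_info(lambda: "Apple M2"))  # Apple M2
print(safe_info(lambda: 1 / 0))  # unknown
```

Note that the `return` inside the `with` block exits the function before the fallback line, so the fallback runs only on the exception path.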