Here are some of the key models supported:
7. **[YOLOv9](yolov9.md)**: An experimental model trained on the Ultralytics [YOLOv5](yolov5.md) codebase implementing Programmable Gradient Information (PGI).
8. **[YOLOv10](yolov10.md)**: By Tsinghua University, featuring NMS-free training and efficiency-accuracy driven architecture, delivering state-of-the-art performance and latency.
9. **[Segment Anything Model (SAM)](sam.md)**: Meta's original Segment Anything Model (SAM).
10. **[Segment Anything Model 2 (SAM2)](sam-2.md)**: The next generation of Meta's Segment Anything Model (SAM) for videos and images.
11. **[Mobile Segment Anything Model (MobileSAM)](mobile-sam.md)**: MobileSAM for mobile applications, by Kyung Hee University.
12. **[Fast Segment Anything Model (FastSAM)](fast-sam.md)**: FastSAM by Image & Video Analysis Group, Institute of Automation, Chinese Academy of Sciences.
The following table details the available SAM 2 models, their pre-trained weights, supported tasks, and compatibility with different operating modes like [Inference](../modes/predict.md), [Validation](../modes/val.md), [Training](../modes/train.md), and [Export](../modes/export.md).
| Model Type  | Pre-trained Weights                                                                   | Tasks Supported                              | Inference | Validation | Training | Export |
| ----------- | ------------------------------------------------------------------------------------- | -------------------------------------------- | --------- | ---------- | -------- | ------ |
| SAM 2 base  | [sam2_b.pt](https://github.com/ultralytics/assets/releases/download/v8.2.0/sam2_b.pt) | [Instance Segmentation](../tasks/segment.md) | ✅        | ❌         | ❌       | ❌     |
| SAM 2 large | [sam2_l.pt](https://github.com/ultralytics/assets/releases/download/v8.2.0/sam2_l.pt) | [Instance Segmentation](../tasks/segment.md) | ✅        | ❌         | ❌       | ❌     |
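When selecting a checkpoint programmatically, the table above can be mirrored as a small lookup. The sketch below is purely illustrative — the dictionary and helper are not part of the Ultralytics API:

```python
# Supported modes per SAM 2 checkpoint, mirroring the table above.
# Illustrative only; not part of the Ultralytics API.
SAM2_MODELS = {
    "sam2_b.pt": {"task": "segment", "inference": True, "validation": False, "training": False, "export": False},
    "sam2_l.pt": {"task": "segment", "inference": True, "validation": False, "training": False, "export": False},
}


def supports(weights: str, mode: str) -> bool:
    """Return True if the given SAM 2 checkpoint supports the given operating mode."""
    return SAM2_MODELS[weights].get(mode, False)


print(supports("sam2_b.pt", "inference"))  # True
print(supports("sam2_l.pt", "training"))  # False
```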
SAM 2 can be utilized across a broad spectrum of tasks, including real-time video segmentation.
=== "Python"
=== "Python"
```python
```python
from ultralytics import SAM2
from ultralytics import SAM
# Load a model
# Load a model
model = SAM2("sam2_b.pt")
model = SAM("sam2_b.pt")
# Display model information (optional)
# Display model information (optional)
model.info()
model.info()
If SAM 2 is a crucial part of your research or development work, please cite it using the following reference:
=== "BibTeX"
=== "BibTeX"
```bibtex
```bibtex
@article{kirillov2024sam2,
@article{ravi2024sam2,
title={SAM2: Segment Anything Model 2},
title={SAM 2: Segment Anything in Images and Videos},
author={Alexander Kirillov and others},
author={Ravi, Nikhila and Gabeur, Valentin and Hu, Yuan-Ting and Hu, Ronghang and Ryali, Chaitanya and Ma, Tengyu and Khedr, Haitham and R{\"a}dle, Roman and Rolland, Chloe and Gustafson, Laura and Mintun, Eric and Pan, Junting and Alwala, Kalyan Vasudev and Carion, Nicolas and Wu, Chao-Yuan and Girshick, Ross and Doll{\'a}r, Piotr and Feichtenhofer, Christoph},
journal={arXiv preprint arXiv:2401.12741},
journal={arXiv preprint},
year={2024}
year={2024}
}
}
```
```
SAM 2 can be utilized for real-time video segmentation by leveraging its promptable segmentation capabilities.

=== "Python"

    ```python
    from ultralytics import SAM

    # Load a model
    model = SAM("sam2_b.pt")

    # Display model information (optional)
    model.info()
    ```