diff --git a/docs/en/models/sam-2.md b/docs/en/models/sam-2.md
index ef76f8a3cf..952d333641 100644
--- a/docs/en/models/sam-2.md
+++ b/docs/en/models/sam-2.md
@@ -168,47 +168,45 @@ SAM 2 can be utilized across a broad spectrum of tasks, including real-time vide
- This example demonstrates how SAM 2 can be used to segment the entire content of an image or video if no prompts (bboxes/points/masks) are provided.
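A minimal sketch of the prompt-free flow described above, assuming a `sam2_b.pt` checkpoint (one of the weights named later in this diff) and a placeholder image path:

```python
from ultralytics import SAM

# Load a SAM 2 checkpoint (sam2_b.pt is illustrative; other SAM 2 weights work the same way)
model = SAM("sam2_b.pt")

# With no bboxes/points/masks supplied, the model segments everything it finds
results = model("path/to/image.jpg")  # placeholder path
```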
-## SAM comparison vs YOLOv8
+## SAM 2 comparison vs YOLOv8
-Here we compare Meta's smallest SAM model, SAM-b, with Ultralytics smallest segmentation model, [YOLOv8n-seg](../tasks/segment.md):
+Here we compare Meta's smallest SAM 2 model, SAM2-t, with Ultralytics smallest segmentation model, [YOLOv8n-seg](../tasks/segment.md):
-| Model | Size | Parameters | Speed (CPU) |
-| ---------------------------------------------- | -------------------------- | ---------------------- | -------------------------- |
-| Meta's SAM-b | 358 MB | 94.7 M | 51096 ms/im |
-| [MobileSAM](mobile-sam.md) | 40.7 MB | 10.1 M | 46122 ms/im |
-| [FastSAM-s](fast-sam.md) with YOLOv8 backbone | 23.7 MB | 11.8 M | 115 ms/im |
-| Ultralytics [YOLOv8n-seg](../tasks/segment.md) | **6.7 MB** (53.4x smaller) | **3.4 M** (27.9x less) | **59 ms/im** (866x faster) |
+| Model                                          | Size<br>(MB)             | Parameters<br>(M)            | Speed (CPU)<br>(ms/im)            |
+| ---------------------------------------------- | ----------------------- | ---------------------------- | --------------------------------- |
+| [Meta SAM-b](sam.md) | 375 | 93.7 | 161440 |
+| Meta SAM2-b | 162 | 80.8 | 121923 |
+| Meta SAM2-t | 78.1 | 38.9 | 85155 |
+| [MobileSAM](mobile-sam.md) | 40.7 | 10.1 | 98543 |
+| [FastSAM-s](fast-sam.md) with YOLOv8 backbone | 23.7 | 11.8 | 140 |
+| Ultralytics [YOLOv8n-seg](../tasks/segment.md) | **6.7** (11.7x smaller) | **3.4** (11.4x less) | **79.5** (1071x faster) |
This comparison shows the order-of-magnitude differences in the model sizes and speeds between models. Whereas SAM presents unique capabilities for automatic segmenting, it is not a direct competitor to YOLOv8 segment models, which are smaller, faster and more efficient.
-Tests run on a 2023 Apple M2 Macbook with 16GB of RAM. To reproduce this test:
+Tests run on a 2023 Apple M2 MacBook with 16GB of RAM using `torch==2.3.1` and `ultralytics==8.2.82`. To reproduce this test:
!!! Example
=== "Python"
```python
- from ultralytics import SAM, YOLO, FastSAM
+ from ultralytics import ASSETS, SAM, YOLO, FastSAM
- # Profile SAM-b
- model = SAM("sam_b.pt")
- model.info()
- model("ultralytics/assets")
-
- # Profile MobileSAM
- model = SAM("mobile_sam.pt")
- model.info()
- model("ultralytics/assets")
+        # Profile SAM-b, SAM2-b, SAM2-t, MobileSAM
+ for file in ["sam_b.pt", "sam2_b.pt", "sam2_t.pt", "mobile_sam.pt"]:
+ model = SAM(file)
+ model.info()
+ model(ASSETS)
# Profile FastSAM-s
model = FastSAM("FastSAM-s.pt")
model.info()
- model("ultralytics/assets")
+ model(ASSETS)
# Profile YOLOv8n-seg
model = YOLO("yolov8n-seg.pt")
model.info()
- model("ultralytics/assets")
+ model(ASSETS)
```
## Auto-Annotation: Efficient Dataset Creation
@@ -331,11 +329,13 @@ This mechanism ensures continuity even when objects are temporarily obscured or
SAM 2 and Ultralytics YOLOv8 serve different purposes and excel in different areas. While SAM 2 is designed for comprehensive object segmentation with advanced features like zero-shot generalization and real-time performance, YOLOv8 is optimized for speed and efficiency in object detection and segmentation tasks. Here's a comparison:
-| Model | Size | Parameters | Speed (CPU) |
-| ---------------------------------------------- | -------------------------- | ---------------------- | -------------------------- |
-| Meta's SAM-b | 358 MB | 94.7 M | 51096 ms/im |
-| [MobileSAM](mobile-sam.md) | 40.7 MB | 10.1 M | 46122 ms/im |
-| [FastSAM-s](fast-sam.md) with YOLOv8 backbone | 23.7 MB | 11.8 M | 115 ms/im |
-| Ultralytics [YOLOv8n-seg](../tasks/segment.md) | **6.7 MB** (53.4x smaller) | **3.4 M** (27.9x less) | **59 ms/im** (866x faster) |
+| Model                                          | Size<br>(MB)             | Parameters<br>(M)            | Speed (CPU)<br>(ms/im)            |
+| ---------------------------------------------- | ----------------------- | ---------------------------- | --------------------------------- |
+| [Meta SAM-b](sam.md) | 375 | 93.7 | 161440 |
+| Meta SAM2-b | 162 | 80.8 | 121923 |
+| Meta SAM2-t | 78.1 | 38.9 | 85155 |
+| [MobileSAM](mobile-sam.md) | 40.7 | 10.1 | 98543 |
+| [FastSAM-s](fast-sam.md) with YOLOv8 backbone | 23.7 | 11.8 | 140 |
+| Ultralytics [YOLOv8n-seg](../tasks/segment.md) | **6.7** (11.7x smaller) | **3.4** (11.4x less) | **79.5** (1071x faster) |
-For more details, see the [SAM comparison vs YOLOv8](#sam-comparison-vs-yolov8) section.
+For more details, see the [SAM 2 comparison vs YOLOv8](#sam-2-comparison-vs-yolov8) section.
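As a rough illustration of the trade-off shown in the table above, a sketch that runs the smallest SAM 2 and YOLOv8 segmentation checkpoints from that table on the same input (the image path is a placeholder):

```python
from ultralytics import SAM, YOLO

image = "path/to/image.jpg"  # placeholder input

# Prompt-free SAM 2 segmentation: zero-shot, but far slower on CPU
sam2 = SAM("sam2_t.pt")
sam2_results = sam2(image)

# YOLOv8n-seg: fixed-class instance segmentation, much smaller and faster
yolo = YOLO("yolov8n-seg.pt")
yolo_results = yolo(image)
```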
diff --git a/docs/en/models/sam.md b/docs/en/models/sam.md
index d7da2be334..6060361606 100644
--- a/docs/en/models/sam.md
+++ b/docs/en/models/sam.md
@@ -138,12 +138,12 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
Here we compare Meta's smallest SAM model, SAM-b, with Ultralytics smallest segmentation model, [YOLOv8n-seg](../tasks/segment.md):
-| Model | Size | Parameters | Speed (CPU) |
-| ---------------------------------------------- | -------------------------- | ---------------------- | -------------------------- |
-| Meta's SAM-b | 358 MB | 94.7 M | 51096 ms/im |
-| [MobileSAM](mobile-sam.md) | 40.7 MB | 10.1 M | 46122 ms/im |
-| [FastSAM-s](fast-sam.md) with YOLOv8 backbone | 23.7 MB | 11.8 M | 115 ms/im |
-| Ultralytics [YOLOv8n-seg](../tasks/segment.md) | **6.7 MB** (53.4x smaller) | **3.4 M** (27.9x less) | **59 ms/im** (866x faster) |
+| Model                                          | Size<br>(MB)             | Parameters<br>(M)            | Speed (CPU)<br>(ms/im)            |
+| ---------------------------------------------- | ----------------------- | ---------------------------- | --------------------------------- |
+| Meta SAM-b | 358 | 94.7 | 51096 |
+| [MobileSAM](mobile-sam.md) | 40.7 | 10.1 | 46122 |
+| [FastSAM-s](fast-sam.md) with YOLOv8 backbone | 23.7 | 11.8 | 115 |
+| Ultralytics [YOLOv8n-seg](../tasks/segment.md) | **6.7** (53.4x smaller) | **3.4** (27.9x less) | **59** (866x faster) |
This comparison shows the order-of-magnitude differences in the model sizes and speeds between models. Whereas SAM presents unique capabilities for automatic segmenting, it is not a direct competitor to YOLOv8 segment models, which are smaller, faster and more efficient.
@@ -154,27 +154,23 @@ Tests run on a 2023 Apple M2 Macbook with 16GB of RAM. To reproduce this test:
=== "Python"
```python
- from ultralytics import SAM, YOLO, FastSAM
+ from ultralytics import ASSETS, SAM, YOLO, FastSAM
- # Profile SAM-b
- model = SAM("sam_b.pt")
- model.info()
- model("ultralytics/assets")
-
- # Profile MobileSAM
- model = SAM("mobile_sam.pt")
- model.info()
- model("ultralytics/assets")
+ # Profile SAM-b, MobileSAM
+ for file in ["sam_b.pt", "mobile_sam.pt"]:
+ model = SAM(file)
+ model.info()
+ model(ASSETS)
# Profile FastSAM-s
model = FastSAM("FastSAM-s.pt")
model.info()
- model("ultralytics/assets")
+ model(ASSETS)
# Profile YOLOv8n-seg
model = YOLO("yolov8n-seg.pt")
model.info()
- model("ultralytics/assets")
+ model(ASSETS)
```
## Auto-Annotation: A Quick Path to Segmentation Datasets
diff --git a/ultralytics/__init__.py b/ultralytics/__init__.py
index a4e8dd21e2..4645f8c8f1 100644
--- a/ultralytics/__init__.py
+++ b/ultralytics/__init__.py
@@ -1,6 +1,6 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
-__version__ = "8.2.82"
+__version__ = "8.2.83"
import os
diff --git a/ultralytics/cfg/__init__.py b/ultralytics/cfg/__init__.py
index 2f89c7dc63..6b8edff251 100644
--- a/ultralytics/cfg/__init__.py
+++ b/ultralytics/cfg/__init__.py
@@ -793,11 +793,7 @@ def entrypoint(debug=""):
from ultralytics import FastSAM
model = FastSAM(model)
- elif "sam2" in stem:
- from ultralytics import SAM2
-
- model = SAM2(model)
- elif "sam" in stem:
+ elif "sam_" in stem or "sam2_" in stem:
from ultralytics import SAM
model = SAM(model)
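A minimal sketch of what this routing means in practice: both SAM and SAM 2 checkpoints (the `sam_b.pt` and `sam2_b.pt` weights used in the docs examples above) are handled by the single `SAM` class, mirroring the updated `entrypoint` logic:

```python
from ultralytics import SAM

# SAM and SAM 2 checkpoints now resolve to the same SAM wrapper class
for ckpt in ["sam_b.pt", "sam2_b.pt"]:
    model = SAM(ckpt)
    model.info()
```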