Update `FastSAM` and `SAM` docs (#14499)

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
pull/14508/head
Laughing authored 9 months ago, committed by GitHub
parent 81544c6d71
commit c87600037d
1. docs/en/models/fast-sam.md (14 changes)
2. docs/en/models/sam.md (8 changes)

docs/en/models/fast-sam.md
@@ -81,19 +81,19 @@ To perform object detection on an image, use the `predict` method as shown below
 prompt_process = FastSAMPrompt(source, everything_results, device="cpu")
 # Everything prompt
-ann = prompt_process.everything_prompt()
+results = prompt_process.everything_prompt()
 # Bbox default shape [0,0,0,0] -> [x1,y1,x2,y2]
-ann = prompt_process.box_prompt(bbox=[200, 200, 300, 300])
+results = prompt_process.box_prompt(bbox=[200, 200, 300, 300])
 # Text prompt
-ann = prompt_process.text_prompt(text="a photo of a dog")
+results = prompt_process.text_prompt(text="a photo of a dog")
 # Point prompt
 # points default [[0,0]] [[x1,y1],[x2,y2]]
 # point_label default [0] [1,0] 0:background, 1:foreground
-ann = prompt_process.point_prompt(points=[[200, 200]], pointlabel=[1])
+results = prompt_process.point_prompt(points=[[200, 200]], pointlabel=[1])
-prompt_process.plot(annotations=ann, output="./")
+prompt_process.plot(annotations=results, output="./")
 ```
 === "CLI"
@@ -105,6 +105,10 @@ To perform object detection on an image, use the `predict` method as shown below
 This snippet demonstrates the simplicity of loading a pre-trained model and running a prediction on an image.
+!!! Note
+
+    All the `results` returned in the examples above are [Results](../modes/predict.md#working-with-results) objects, which allow easy access to the predicted masks and the source image.
 ### Val Usage
 Validation of the model on a dataset can be done as follows:
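Not part of the commit, but for context: a minimal sketch of how the renamed `results` flow through the updated FastSAM prompt example and how the new Note applies in practice. The `FastSAM-s.pt` weights name, the sample image path, and the predictor arguments are assumptions drawn from the surrounding FastSAM docs, not from this diff.

```python
# Editor's sketch (not part of this commit) of the updated FastSAM prompt workflow.
from ultralytics import FastSAM
from ultralytics.models.fastsam import FastSAMPrompt

source = "ultralytics/assets/bus.jpg"  # assumed sample image

# Assumed weights and args, following the standard FastSAM docs example
model = FastSAM("FastSAM-s.pt")
everything_results = model(source, device="cpu", retina_masks=True, imgsz=1024, conf=0.4, iou=0.9)

prompt_process = FastSAMPrompt(source, everything_results, device="cpu")
results = prompt_process.text_prompt(text="a photo of a dog")

# Per the Note added in this commit, `results` are Results objects,
# so predicted masks and the source image are directly accessible
for r in results:
    if r.masks is not None:
        print(r.masks.data.shape)  # mask tensor: (num_masks, H, W)

prompt_process.plot(annotations=results, output="./")
```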

docs/en/models/sam.md
@@ -56,10 +56,10 @@ The Segment Anything Model can be employed for a multitude of downstream tasks
 model.info()
 # Run inference with bboxes prompt
-model("ultralytics/assets/zidane.jpg", bboxes=[439, 437, 524, 709])
+results = model("ultralytics/assets/zidane.jpg", bboxes=[439, 437, 524, 709])
 # Run inference with points prompt
-model("ultralytics/assets/zidane.jpg", points=[900, 370], labels=[1])
+results = model("ultralytics/assets/zidane.jpg", points=[900, 370], labels=[1])
 ```
 !!! Example "Segment everything"
@@ -128,6 +128,10 @@ The Segment Anything Model can be employed for a multitude of downstream tasks
 results = predictor(source="ultralytics/assets/zidane.jpg", crop_n_layers=1, points_stride=64)
 ```
+!!! Note
+
+    All the `results` returned in the examples above are [Results](../modes/predict.md#working-with-results) objects, which allow easy access to the predicted masks and the source image.
 - More additional args for `Segment everything` see [`Predictor/generate` Reference](../reference/models/sam/predict.md).
 ## SAM comparison vs YOLOv8
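Similarly, an editor's sketch (not in the diff) of consuming the renamed `results` from the SAM example as Results objects; the `sam_b.pt` weights name follows the SAM docs and is an assumption here, while the prompt values are taken from the hunk above.

```python
# Editor's sketch (not part of this commit) showing the renamed `results`
# from the SAM example consumed as Results objects.
from ultralytics import SAM

model = SAM("sam_b.pt")  # assumed weights name from the SAM docs

# Inference with a bbox prompt, as in the hunk above
results = model("ultralytics/assets/zidane.jpg", bboxes=[439, 437, 524, 709])

for r in results:
    if r.masks is not None:
        print(r.masks.data.shape)  # predicted masks as a tensor
    r.show()  # overlay the masks on the source image
```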
