---
comments: true
description: Explore the diverse range of YOLO family, SAM, MobileSAM, FastSAM, YOLO-NAS, and RT-DETR models supported by Ultralytics. Get started with examples for both CLI and Python usage.
keywords: Ultralytics, documentation, YOLO, SAM, MobileSAM, FastSAM, YOLO-NAS, RT-DETR, models, architectures, Python, CLI
---

# Models Supported by Ultralytics

Welcome to Ultralytics' model documentation! We offer support for a wide range of models, each tailored to specific tasks like object detection, instance segmentation, image classification, pose estimation, and multi-object tracking. If you're interested in contributing your model architecture to Ultralytics, check out our Contributing Guide.

Here are some of the key models supported:

1. **YOLOv3**: The third iteration of the YOLO model family, originally by Joseph Redmon, known for its efficient real-time object detection capabilities.
2. **YOLOv4**: A darknet-native update to YOLOv3, released by Alexey Bochkovskiy in 2020.
3. **YOLOv5**: An improved version of the YOLO architecture by Ultralytics, offering better performance and speed trade-offs compared to previous versions.
4. **YOLOv6**: Released by Meituan in 2022, and in use in many of the company's autonomous delivery robots.
5. **YOLOv7**: Updated YOLO models released in 2022 by the authors of YOLOv4.
6. **YOLOv8**: The latest version of the YOLO family, featuring enhanced capabilities such as instance segmentation, pose/keypoints estimation, and classification (see the sketch after this list).
7. **Segment Anything Model (SAM)**: Meta's foundation model for promptable image segmentation.
8. **Mobile Segment Anything Model (MobileSAM)**: A lightweight SAM variant for mobile applications, by Kyung Hee University.
9. **Fast Segment Anything Model (FastSAM)**: A real-time, CNN-based alternative to SAM by the Image & Video Analysis Group, Institute of Automation, Chinese Academy of Sciences.
10. **YOLO-NAS**: YOLO models produced with Neural Architecture Search (NAS), offering strong accuracy-latency trade-offs.
11. **Real-Time Detection Transformer (RT-DETR)**: Baidu's PaddlePaddle real-time detection transformer models.
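
Since YOLOv8 covers several tasks, here is a minimal sketch of loading its task-specific variants; the `-seg`, `-pose`, and `-cls` checkpoint names follow the standard Ultralytics naming convention:

```python
from ultralytics import YOLO

# Each task has its own pretrained checkpoint; the task is inferred from the weights
detect = YOLO('yolov8n.pt')        # object detection
segment = YOLO('yolov8n-seg.pt')   # instance segmentation
pose = YOLO('yolov8n-pose.pt')     # pose/keypoints estimation
classify = YOLO('yolov8n-cls.pt')  # image classification

results = segment('path/to/bus.jpg')  # run segmentation inference on an image
```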



**Watch:** Run Ultralytics YOLO models in just a few lines of code.

## Getting Started: Usage Examples

!!! example ""

=== "Python"

    PyTorch pretrained `*.pt` models, as well as configuration `*.yaml` files, can be passed to the `YOLO()`, `SAM()`, `NAS()`, and `RTDETR()` classes to create a model instance in Python:

    ```python
    from ultralytics import YOLO

    # Load a COCO-pretrained YOLOv8n model
    model = YOLO('yolov8n.pt')

    # Display model information (optional)
    model.info()

    # Train the model on the COCO8 example dataset for 100 epochs
    results = model.train(data='coco8.yaml', epochs=100, imgsz=640)

    # Run inference with the YOLOv8n model on the 'bus.jpg' image
    results = model('path/to/bus.jpg')
    ```
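
    The other model classes work the same way. As a rough sketch, assuming the default pretrained checkpoint names from the Ultralytics docs (`sam_b.pt`, `rtdetr-l.pt`, `yolo_nas_s.pt`):

    ```python
    from ultralytics import NAS, RTDETR, SAM

    # Instantiate each architecture from its pretrained weights
    sam_model = SAM('sam_b.pt')           # Segment Anything Model
    rtdetr_model = RTDETR('rtdetr-l.pt')  # Baidu's RT-DETR
    nas_model = NAS('yolo_nas_s.pt')      # YOLO-NAS

    # All model classes share the same predict interface
    results = rtdetr_model('path/to/bus.jpg')
    ```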

=== "CLI"

    CLI commands are available to run the models directly:

    ```bash
    # Load a COCO-pretrained YOLOv8n model and train it on the COCO8 example dataset for 100 epochs
    yolo train model=yolov8n.pt data=coco8.yaml epochs=100 imgsz=640

    # Load a COCO-pretrained YOLOv8n model and run inference on the 'bus.jpg' image
    yolo predict model=yolov8n.pt source=path/to/bus.jpg
    ```
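
Beyond training and prediction, the same model object also supports validation and export from Python. A minimal sketch, assuming ONNX export dependencies are available:

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')

# Validate accuracy on the COCO8 example dataset
metrics = model.val(data='coco8.yaml')

# Export to ONNX format; returns the path of the exported file
onnx_path = model.export(format='onnx')
```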

## Contributing New Models

Interested in contributing your model to Ultralytics? Great! We're always open to expanding our model portfolio.

1. **Fork the Repository**: Start by forking the Ultralytics GitHub repository.

2. **Clone Your Fork**: Clone your fork to your local machine and create a new branch to work on.

3. **Implement Your Model**: Add your model following the coding standards and guidelines provided in our Contributing Guide.

4. **Test Thoroughly**: Make sure to test your model rigorously, both in isolation and as part of the pipeline.

5. **Create a Pull Request**: Once you're satisfied with your model, create a pull request to the main repository for review.

6. **Code Review & Merging**: After review, if your model meets our criteria, it will be merged into the main repository.

For detailed steps, consult our Contributing Guide.