# Models

Welcome to the Ultralytics Models directory! Here you will find a wide variety of pre-configured model configuration files (`*.yaml`s) that can be used to create custom YOLO models. The models in this directory have been expertly crafted and fine-tuned by the Ultralytics team to provide the best performance for a wide range of object detection and image segmentation tasks.

These model configurations cover a wide range of scenarios, from simple object detection to more complex tasks like instance segmentation and object tracking. They are also designed to run efficiently on a variety of hardware platforms, from CPUs to GPUs. Whether you are a seasoned machine learning practitioner or just getting started with YOLO, this directory provides a great starting point for your custom model development needs.

To get started, simply browse through the models in this directory and find one that best suits your needs. Once you've selected a model, you can use the provided `*.yaml` file to train and deploy your custom YOLO model with ease. See full details at the Ultralytics Docs, and if you need help or have any questions, feel free to reach out to the Ultralytics team for support. So, don't wait, start creating your custom YOLO model now!

## Usage

Model `*.yaml` files may be used directly in the Command Line Interface (CLI) with a `yolo` command:

```bash
yolo task=detect mode=train model=yolov8n.yaml data=coco128.yaml epochs=100
```

They may also be used directly in a Python environment, and they accept the same arguments as in the CLI example above:

```python
from ultralytics import YOLO

model = YOLO("model.yaml")  # build a new model from a YAML config
# model = YOLO("model.pt")  # load a pre-trained model if available
model.info()  # display model information
model.train(data="coco128.yaml", epochs=100)  # train the model
```

## Pre-trained Model Architectures

Ultralytics supports many model architectures. Visit the models page to view detailed information and usage. Any of these models can be used by loading their configs, or pretrained checkpoints if available.

Want to add your model architecture? Here's how you can contribute.

### 1. YOLOv8

**About** - Cutting-edge Detection, Segmentation, Classification and Pose models developed by Ultralytics.

**Available Models**:

- Detection - `yolov8n`, `yolov8s`, `yolov8m`, `yolov8l`, `yolov8x`
- Instance Segmentation - `yolov8n-seg`, `yolov8s-seg`, `yolov8m-seg`, `yolov8l-seg`, `yolov8x-seg`
- Classification - `yolov8n-cls`, `yolov8s-cls`, `yolov8m-cls`, `yolov8l-cls`, `yolov8x-cls`
- Pose - `yolov8n-pose`, `yolov8s-pose`, `yolov8m-pose`, `yolov8l-pose`, `yolov8x-pose`, `yolov8x-pose-p6`
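The model names above follow a regular pattern: the base architecture (`yolov8`), a scale suffix (`n`/`s`/`m`/`l`/`x`), and an optional task suffix (`-seg`, `-cls`, `-pose`). As a small illustration of that naming scheme (the `build_name` helper below is hypothetical, not part of the ultralytics package):

```python
# Hypothetical helper illustrating the YOLOv8 naming scheme;
# not part of the ultralytics API.
SCALES = ["n", "s", "m", "l", "x"]
TASK_SUFFIXES = {"detect": "", "segment": "-seg", "classify": "-cls", "pose": "-pose"}

def build_name(scale: str, task: str = "detect") -> str:
    """Compose a YOLOv8 model name such as 'yolov8n-seg'."""
    if scale not in SCALES:
        raise ValueError(f"unknown scale: {scale}")
    return f"yolov8{scale}{TASK_SUFFIXES[task]}"

print(build_name("n", "segment"))  # -> yolov8n-seg
print(build_name("x"))             # -> yolov8x
```

Appending `.yaml` to any of these names gives the config file to pass to `YOLO(...)`, as in the Usage section above.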
#### Performance

**Detection**

| Model   | size<br>(pixels) | mAP<sup>val</sup><br>50-95 | Speed<br>CPU ONNX<br>(ms) | Speed<br>A100 TensorRT<br>(ms) | params<br>(M) | FLOPs<br>(B) |
| ------- | ---------------- | -------------------------- | ------------------------- | ------------------------------ | ------------- | ------------ |
| YOLOv8n | 640              | 37.3                       | 80.4                      | 0.99                           | 3.2           | 8.7          |
| YOLOv8s | 640              | 44.9                       | 128.4                     | 1.20                           | 11.2          | 28.6         |
| YOLOv8m | 640              | 50.2                       | 234.7                     | 1.83                           | 25.9          | 78.9         |
| YOLOv8l | 640              | 52.9                       | 375.2                     | 2.39                           | 43.7          | 165.2        |
| YOLOv8x | 640              | 53.9                       | 479.1                     | 3.53                           | 68.2          | 257.8        |
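The table shows the usual speed-accuracy tradeoff: each step up in scale buys mAP at a latency cost, with diminishing returns at the large end. A small sketch of that tradeoff (pure Python, with the figures copied from the detection table above):

```python
# mAP (val 50-95) and CPU ONNX latency (ms), copied from the table above.
models = [
    ("YOLOv8n", 37.3, 80.4),
    ("YOLOv8s", 44.9, 128.4),
    ("YOLOv8m", 50.2, 234.7),
    ("YOLOv8l", 52.9, 375.2),
    ("YOLOv8x", 53.9, 479.1),
]

# mAP gained per extra millisecond of CPU latency when stepping up one scale.
for (small, map_s, ms_s), (big, map_b, ms_b) in zip(models, models[1:]):
    gain = (map_b - map_s) / (ms_b - ms_s)
    print(f"{small} -> {big}: +{map_b - map_s:.1f} mAP, {gain:.3f} mAP/ms")
```

The per-millisecond gain shrinks at every step (roughly 0.16 mAP/ms from n to s, under 0.01 from l to x), which is why the smaller scales are usually the better starting point unless you need the last few points of accuracy.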

**Segmentation**

| Model       | size<br>(pixels) | mAP<sup>box</sup><br>50-95 | mAP<sup>mask</sup><br>50-95 | Speed<br>CPU ONNX<br>(ms) | Speed<br>A100 TensorRT<br>(ms) | params<br>(M) | FLOPs<br>(B) |
| ----------- | ---------------- | -------------------------- | --------------------------- | ------------------------- | ------------------------------ | ------------- | ------------ |
| YOLOv8n-seg | 640              | 36.7                       | 30.5                        | 96.1                      | 1.21                           | 3.4           | 12.6         |
| YOLOv8s-seg | 640              | 44.6                       | 36.8                        | 155.7                     | 1.47                           | 11.8          | 42.6         |
| YOLOv8m-seg | 640              | 49.9                       | 40.8                        | 317.0                     | 2.18                           | 27.3          | 110.2        |
| YOLOv8l-seg | 640              | 52.3                       | 42.6                        | 572.4                     | 2.79                           | 46.0          | 220.5        |
| YOLOv8x-seg | 640              | 53.4                       | 43.4                        | 712.1                     | 4.02                           | 71.8          | 344.1        |

**Classification**

| Model       | size<br>(pixels) | acc<br>top1 | acc<br>top5 | Speed<br>CPU ONNX<br>(ms) | Speed<br>A100 TensorRT<br>(ms) | params<br>(M) | FLOPs<br>(B) at 640 |
| ----------- | ---------------- | ----------- | ----------- | ------------------------- | ------------------------------ | ------------- | ------------------- |
| YOLOv8n-cls | 224              | 66.6        | 87.0        | 12.9                      | 0.31                           | 2.7           | 4.3                 |
| YOLOv8s-cls | 224              | 72.3        | 91.1        | 23.4                      | 0.35                           | 6.4           | 13.5                |
| YOLOv8m-cls | 224              | 76.4        | 93.2        | 85.4                      | 0.62                           | 17.0          | 42.7                |
| YOLOv8l-cls | 224              | 78.0        | 94.1        | 163.0                     | 0.87                           | 37.5          | 99.7                |
| YOLOv8x-cls | 224              | 78.4        | 94.3        | 232.0                     | 1.01                           | 57.4          | 154.8               |

**Pose**

| Model           | size<br>(pixels) | mAP<sup>pose</sup><br>50-95 | mAP<sup>pose</sup><br>50 | Speed<br>CPU ONNX<br>(ms) | Speed<br>A100 TensorRT<br>(ms) | params<br>(M) | FLOPs<br>(B) |
| --------------- | ---------------- | --------------------------- | ------------------------ | ------------------------- | ------------------------------ | ------------- | ------------ |
| YOLOv8n-pose    | 640              | 49.7                        | 79.7                     | 131.8                     | 1.18                           | 3.3           | 9.2          |
| YOLOv8s-pose    | 640              | 59.2                        | 85.8                     | 233.2                     | 1.42                           | 11.6          | 30.2         |
| YOLOv8m-pose    | 640              | 63.6                        | 88.8                     | 456.3                     | 2.00                           | 26.4          | 81.0         |
| YOLOv8l-pose    | 640              | 67.0                        | 89.9                     | 784.5                     | 2.59                           | 44.4          | 168.6        |
| YOLOv8x-pose    | 640              | 68.9                        | 90.4                     | 1607.1                    | 3.73                           | 69.4          | 263.2        |
| YOLOv8x-pose-p6 | 1280             | 71.5                        | 91.3                     | 4088.7                    | 10.04                          | 99.1          | 1066.4       |

### 2. YOLOv5u

**About** - Anchor-free YOLOv5 models with a new detection head and a better speed-accuracy tradeoff.

**Available Models**:

- Detection P5/32 - `yolov5nu`, `yolov5su`, `yolov5mu`, `yolov5lu`, `yolov5xu`
- Detection P6/64 - `yolov5n6u`, `yolov5s6u`, `yolov5m6u`, `yolov5l6u`, `yolov5x6u`
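The P5/32 and P6/64 labels refer to the model's maximum feature-map stride, and input image sizes are normally multiples of that stride (640 for stride 32, 1280 for stride 64). A minimal sketch of that divisibility check, written in plain Python as an illustration (not the ultralytics implementation):

```python
import math

def round_imgsz(imgsz: int, stride: int) -> int:
    """Round an image size up to the nearest multiple of the model's max stride."""
    return math.ceil(imgsz / stride) * stride

print(round_imgsz(640, 32))   # -> 640 (already valid for P5/32 models)
print(round_imgsz(1280, 64))  # -> 1280 (already valid for P6/64 models)
print(round_imgsz(700, 32))   # -> 704 (rounded up to the next multiple of 32)
```

This is why the P6/64 rows in the table below are benchmarked at 1280 rather than 640: the extra P6 output level targets larger inputs and larger objects.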
#### Performance

**Detection**

| Model     | size<br>(pixels) | mAP<sup>val</sup><br>50-95 | Speed<br>CPU ONNX<br>(ms) | Speed<br>A100 TensorRT<br>(ms) | params<br>(M) | FLOPs<br>(B) |
| --------- | ---------------- | -------------------------- | ------------------------- | ------------------------------ | ------------- | ------------ |
| YOLOv5nu  | 640              | 34.3                       | 73.6                      | 1.06                           | 2.6           | 7.7          |
| YOLOv5su  | 640              | 43.0                       | 120.7                     | 1.27                           | 9.1           | 24.0         |
| YOLOv5mu  | 640              | 49.0                       | 233.9                     | 1.86                           | 25.1          | 64.2         |
| YOLOv5lu  | 640              | 52.2                       | 408.4                     | 2.50                           | 53.2          | 135.0        |
| YOLOv5xu  | 640              | 53.2                       | 763.2                     | 3.81                           | 97.2          | 246.4        |
| YOLOv5n6u | 1280             | 42.1                       | -                         | -                              | 4.3           | 7.8          |
| YOLOv5s6u | 1280             | 48.6                       | -                         | -                              | 15.3          | 24.6         |
| YOLOv5m6u | 1280             | 53.6                       | -                         | -                              | 41.2          | 65.7         |
| YOLOv5l6u | 1280             | 55.7                       | -                         | -                              | 86.1          | 137.4        |
| YOLOv5x6u | 1280             | 56.8                       | -                         | -                              | 155.4         | 250.7        |