English | 简体中文

[Badges: Ultralytics CI · YOLOv8 Citation · Docker Pulls · Run on Gradient · Open In Colab · Open In Kaggle]

Ultralytics YOLOv8, developed by Ultralytics, is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection, image segmentation and image classification tasks.

To request an Enterprise License please complete the form at Ultralytics Licensing.

Documentation

See below for quickstart installation and usage examples, and see the YOLOv8 Docs for full documentation on training, validation, prediction and deployment.

Install

Pip install the ultralytics package, including all requirements from requirements.txt, in a Python>=3.7.0 environment with PyTorch>=1.7.

pip install ultralytics
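After installation, you can confirm the package is importable and check which release you have; a minimal sketch (the printed version string depends on the release pip resolved):

```python
# Verify the installation by importing the package and printing its version.
import ultralytics

print(ultralytics.__version__)
```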
Usage

YOLOv8 may be used directly in the Command Line Interface (CLI) with a yolo command:

yolo task=detect mode=predict model=yolov8n.pt source="https://ultralytics.com/images/bus.jpg"

yolo can be used for a variety of tasks and modes and accepts additional arguments, e.g. imgsz=640. See a full list of available yolo arguments in the YOLOv8 Docs.

yolo task=detect    mode=train    model=yolov8n.pt        args...
          classify       predict        yolov8n-cls.yaml  args...
          segment        val            yolov8n-seg.yaml  args...
                         export         yolov8n.pt        format=onnx  args...
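For example, the task/mode/model columns above can be combined into commands like the following. This is a short sketch reusing only arguments that appear elsewhere in this README (data, epochs, imgsz, source, format); adjust dataset and weight names to your own setup:

```bash
# Train the nano detection model on the coco128 example dataset for 3 epochs
yolo task=detect mode=train model=yolov8n.pt data=coco128.yaml epochs=3 imgsz=640

# Run segmentation inference on a sample image with pretrained nano segmentation weights
yolo task=segment mode=predict model=yolov8n-seg.pt source="https://ultralytics.com/images/bus.jpg"

# Export the detection model to ONNX
yolo task=detect mode=export model=yolov8n.pt format=onnx
```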

YOLOv8 may also be used directly in a Python environment, and accepts the same arguments as in the CLI example above:

from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n.yaml")  # build a new model from scratch
model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

# Use the model
results = model.train(data="coco128.yaml", epochs=3)  # train the model
results = model.val()  # evaluate model performance on the validation set
results = model("https://ultralytics.com/images/bus.jpg")  # predict on an image
success = YOLO("yolov8n.pt").export(format="onnx")  # export a model to ONNX format

Models download automatically from the latest Ultralytics release.
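The prediction call returns per-image results that can be inspected programmatically. A minimal sketch, assuming (as in the ultralytics Python API) that model() returns a list of Results objects whose boxes attribute holds xyxy coordinates, confidences and class indices:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model("https://ultralytics.com/images/bus.jpg")  # list of per-image Results

for result in results:
    boxes = result.boxes  # assumed: detected boxes with xyxy/conf/cls tensors
    for xyxy, conf, cls in zip(boxes.xyxy, boxes.conf, boxes.cls):
        print(f"class={int(cls)} conf={float(conf):.2f} box={xyxy.tolist()}")
```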

Known Issues / TODOs

We are still working on several parts of YOLOv8! We aim to complete these soon to bring the YOLOv8 feature set up to par with YOLOv5, including export to and inference with all the same formats. We are also writing a YOLOv8 paper which we will submit to arxiv.org once complete.

  • TensorFlow exports
  • DDP resume
  • arxiv.org paper

Checkpoints

All YOLOv8 pretrained models are available here. Detection and Segmentation models are pretrained on the COCO dataset, while Classification models are pretrained on the ImageNet dataset.

Models download automatically from the latest Ultralytics release on first use.
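In Python, the task-specific checkpoints referenced below can be loaded by name; a sketch assuming the -seg and -cls weight files follow the naming used in the CLI examples and download automatically on first use:

```python
from ultralytics import YOLO

detector = YOLO("yolov8n.pt")        # COCO-pretrained detection weights
segmenter = YOLO("yolov8n-seg.pt")   # COCO-pretrained segmentation weights
classifier = YOLO("yolov8n-cls.pt")  # ImageNet-pretrained classification weights
```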

Detection
| Model   | size (pixels) | mAP val 50-95 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B) |
| ------- | ------------- | ------------- | ------------------- | ------------------------ | ---------- | --------- |
| YOLOv8n | 640           | 37.3          | 80.4                | 0.99                     | 3.2        | 8.7       |
| YOLOv8s | 640           | 44.9          | 128.4               | 1.20                     | 11.2       | 28.6      |
| YOLOv8m | 640           | 50.2          | 234.7               | 1.83                     | 25.9       | 78.9      |
| YOLOv8l | 640           | 52.9          | 375.2               | 2.39                     | 43.7       | 165.2     |
| YOLOv8x | 640           | 53.9          | 479.1               | 3.53                     | 68.2       | 257.8     |
  • mAP val values are for single-model single-scale on the COCO val2017 dataset.
    Reproduce by yolo mode=val task=detect data=coco.yaml device=0
  • Speed averaged over COCO val images using an Amazon EC2 P4d instance.
    Reproduce by yolo mode=val task=detect data=coco128.yaml batch=1 device=0/cpu
Segmentation
| Model   | size (pixels) | mAP box 50-95 | mAP mask 50-95 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B) |
| ------- | ------------- | ------------- | -------------- | ------------------- | ------------------------ | ---------- | --------- |
| YOLOv8n | 640           | 36.7          | 30.5           | 96.11               | 1.21                     | 3.4        | 12.6      |
| YOLOv8s | 640           | 44.6          | 36.8           | 155.7               | 1.47                     | 11.8       | 42.6      |
| YOLOv8m | 640           | 49.9          | 40.8           | 317.0               | 2.18                     | 27.3       | 110.2     |
| YOLOv8l | 640           | 52.3          | 42.6           | 572.4               | 2.79                     | 46.0       | 220.5     |
| YOLOv8x | 640           | 53.4          | 43.4           | 712.1               | 4.02                     | 71.8       | 344.1     |
  • mAP val values are for single-model single-scale on the COCO val2017 dataset.
    Reproduce by yolo mode=val task=segment data=coco.yaml device=0
  • Speed averaged over COCO val images using an Amazon EC2 P4d instance.
    Reproduce by yolo mode=val task=segment data=coco128-seg.yaml batch=1 device=0/cpu
Classification
| Model   | size (pixels) | acc top1 | acc top5 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B) at 640 |
| ------- | ------------- | -------- | -------- | ------------------- | ------------------------ | ---------- | ---------------- |
| YOLOv8n | 224           | 66.6     | 87.0     | 12.9                | 0.31                     | 2.7        | 4.3              |
| YOLOv8s | 224           | 72.3     | 91.1     | 23.4                | 0.35                     | 6.4        | 13.5             |
| YOLOv8m | 224           | 76.4     | 93.2     | 85.4                | 0.62                     | 17.0       | 42.7             |
| YOLOv8l | 224           | 78.0     | 94.1     | 163.0               | 0.87                     | 37.5       | 99.7             |
| YOLOv8x | 224           | 78.4     | 94.3     | 232.0               | 1.01                     | 57.4       | 154.8            |
  • acc values are model accuracies on the ImageNet dataset validation set.
    Reproduce by yolo mode=val task=classify data=path/to/ImageNet device=0
  • Speed averaged over ImageNet val images using an Amazon EC2 P4d instance.
    Reproduce by yolo mode=val task=classify data=path/to/ImageNet batch=1 device=0/cpu
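The CLI reproduction commands above also have Python equivalents; a sketch assuming val() accepts the same data/device arguments as the CLI and that the datasets are available locally:

```python
from ultralytics import YOLO

# Validate each pretrained model on the same datasets as the CLI commands above.
det_metrics = YOLO("yolov8n.pt").val(data="coco.yaml", device=0)
seg_metrics = YOLO("yolov8n-seg.pt").val(data="coco.yaml", device=0)
cls_metrics = YOLO("yolov8n-cls.pt").val(data="path/to/ImageNet", device=0)
```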

Integrations




  • Roboflow: Label and export your custom datasets directly to YOLOv8 for training with Roboflow.
  • ClearML (NEW): Automatically track, visualize and even remotely train YOLOv8 using ClearML (open-source!).
  • Comet (NEW): Free forever, Comet lets you save YOLOv8 models, resume training, and interactively visualize and debug predictions.
  • Neural Magic (NEW): Run YOLOv8 inference up to 6x faster with Neural Magic DeepSparse.

Ultralytics HUB

Ultralytics HUB is our NEW no-code solution to visualize datasets, train YOLOv8 🚀 models, and deploy to the real world in a seamless experience. Get started for Free now! Also run YOLOv8 models on your iOS or Android device by downloading the Ultralytics App!

Contribute

We love your input! YOLOv5 and YOLOv8 would not be possible without help from our community. Please see our Contributing Guide to get started, and fill out our Survey to send us feedback on your experience. Thank you 🙏 to all our contributors!

License

YOLOv8 is available under two different licenses:

  • GPL-3.0 License: See LICENSE file for details.
  • Enterprise License: Provides greater flexibility for commercial product development without the open-source requirements of GPL-3.0. Typical use cases are embedding Ultralytics software and AI models in commercial products and applications. Request an Enterprise License at Ultralytics Licensing.

Contact

For YOLOv8 bugs and feature requests please visit GitHub Issues. For professional support please Contact Us.