Co-authored-by: Onuralp Sezer <thunderbirdtr@gmail.com>
Co-authored-by: Ayush Chaurasia <ayush.chaurarsia@gmail.com>
pull/200/head^2
Glenn Jocher 2 years ago committed by GitHub
parent 840c35a0aa
commit c8e3c5db4b
1. README.md (2 changes)
2. ultralytics/__init__.py (2 changes)
3. ultralytics/models/README.md (44 changes)
4. ultralytics/yolo/utils/callbacks/base.py (3 changes)
5. ultralytics/yolo/utils/callbacks/wb.py (48 changes)

README.md
@@ -10,7 +10,7 @@
<div>
<a href="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml"><img src="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml/badge.svg" alt="Ultralytics CI"></a>
<a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="YOLOv8 Citation"></a>
<a href="https://hub.docker.com/r/ultralytics/yolov5"><img src="https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker" alt="Docker Pulls"></a>
<a href="https://hub.docker.com/r/ultralytics/ultralytics"><img src="https://img.shields.io/docker/pulls/ultralytics/ultralytics?logo=docker" alt="Docker Pulls"></a>
<br>
<a href="https://console.paperspace.com/github/ultralytics/ultralytics"><img src="https://assets.paperspace.io/img/gradient-badge.svg" alt="Run on Gradient"/></a>
<a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>

ultralytics/__init__.py
@@ -1,6 +1,6 @@
# Ultralytics YOLO 🚀, GPL-3.0 license
__version__ = "8.0.0"
__version__ = "8.0.2"
from ultralytics.hub import checks
from ultralytics.yolo.engine.model import YOLO
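For reference (not part of the diff), the bumped version and the public `YOLO` entry point can be exercised as below; a minimal sketch assuming the package is installed, with the `yolov8n.yaml` config name taken from the models/README.md changes further down:

```python
import ultralytics
from ultralytics import YOLO

print(ultralytics.__version__)  # expected to print "8.0.2" after this change

model = YOLO("yolov8n.yaml")  # build a model from a bundled config, as in the models README below
model.info()  # display model information
```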

ultralytics/models/README.md
@@ -1,24 +1,36 @@
## Models HUB
## Models
Here are the models that are supported out-of-the-box with Ultralytics. For a detailed view and navigation, visit [model hub](<>) section of the docs.

Welcome to the Ultralytics Models directory! Here you will find a wide variety of pre-configured model configuration
files (`*.yaml`s) that can be used to create custom YOLO models. The models in this directory have been expertly crafted
and fine-tuned by the Ultralytics team to provide the best performance for a wide range of object detection and image
segmentation tasks.

These model configurations cover a wide range of scenarios, from simple object detection to more complex tasks like
instance segmentation and object tracking. They are also designed to run efficiently on a variety of hardware platforms,
from CPUs to GPUs. Whether you are a seasoned machine learning practitioner or just getting started with YOLO, this
directory provides a great starting point for your custom model development needs.

To get started, simply browse through the models in this directory and find one that best suits your needs. Once you've
selected a model, you can use the provided `*.yaml` file to train and deploy your custom YOLO model with ease. See full
details at the Ultralytics [Docs](https://docs.ultralytics.com), and if you need help or have any questions, feel free
to reach out to the Ultralytics team for support. So, don't wait, start creating your custom YOLO model now!
### Usage

You can simply set the `model` parameter to any available yaml config or pretrained weights.

Model `*.yaml` files may be used directly in the Command Line Interface (CLI) with a `yolo` command:
```bash
yolo task=... mode=... model=yolov5n.yaml
yolo task=detect mode=train model=yolov8n.yaml data=coco128.yaml epochs=100
```
| Model              | Version | size (pixels) | mAPval 50-95 | Speed CPU b1 (ms) | params (M) | FLOPs @640 (B) | model file    | Pretrained Weights |
| ------------------ | ------- | ------------- | ------------ | ----------------- | ---------- | -------------- | ------------- | ------------------ |
| YOLOv5n            | v6.3    | 640           | 28.0         | 45                | 1.9        | 4.5            | yolov5n.yaml  | -                  |
| YOLOv5s            | -       | 640           | 37.4         | 98                | 7.2        | 16.5           | yolov5s.yaml  | -                  |
| YOLOv5m            | -       | 640           | 45.4         | 224               | 21.2       | 49.0           | yolov5m.yaml  | -                  |
| YOLOv5l            | -       | 640           | 49.0         | 430               | 46.5       | 109.1          | yolov5l.yaml  | -                  |
| YOLOv5x            | -       | 640           | 50.7         | 766               | 86.7       | 205.7          | yolov5x.yaml  | -                  |
| YOLOv5n6           | -       | 1280          | 36.0         | 153               | 3.2        | 4.6            | yolov5n6.yaml | -                  |
| YOLOv5s6           | -       | 1280          | 44.8         | 385               | 12.6       | 16.8           | yolov5s6.yaml | -                  |
| YOLOv5m6           | -       | 1280          | 51.3         | 887               | 35.7       | 50.0           | yolov5m6.yaml | -                  |
| YOLOv5l6           | -       | 1280          | 53.7         | 1784              | 76.8       | 111.4          | yolov5l6.yaml | -                  |
| YOLOv5x6 + \[TTA\] | -       | 1280<br>1536  | 55.0<br>55.8 | 3136<br>-         | 140.7<br>- | 209.8<br>-     | yolov5x6.yaml | -                  |

They may also be used directly in a Python environment, and accept the same
[arguments](https://docs.ultralytics.com/config/) as in the CLI example above:
```python
from ultralytics import YOLO
model = YOLO("yolov8n.yaml") # build a YOLOv8n model from scratch
model.info() # display model information
model.train(data="coco128.yaml", epochs=100) # train the model
```
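As a complement to the config-file example above, pretrained weights can also be passed as the `model` argument (per the usage note earlier in this hunk); this is a hedged sketch, assuming a `yolov8n.pt` checkpoint is available locally or downloadable in this version:

```python
from ultralytics import YOLO

# Assumption: a *.pt checkpoint is accepted wherever a *.yaml config is.
model = YOLO("yolov8n.pt")  # load pretrained weights instead of building from scratch
model.info()  # display model information
model.train(data="coco128.yaml", epochs=10)  # fine-tune with the same arguments as above
```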

ultralytics/yolo/utils/callbacks/base.py
@@ -143,8 +143,7 @@ def add_integration_callbacks(instance):
    from .comet import callbacks as comet_callbacks
    from .hub import callbacks as hub_callbacks
    from .tensorboard import callbacks as tb_callbacks
    from .wb import callbacks as wb_callbacks
    for x in clearml_callbacks, comet_callbacks, hub_callbacks, tb_callbacks, wb_callbacks:
    for x in clearml_callbacks, comet_callbacks, hub_callbacks, tb_callbacks:
        for k, v in x.items():
            instance.callbacks[k].append(v)  # callback[name].append(func)
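The loop above merges each integration's `{event_name: function}` dict into the trainer's per-event callback lists; here is a minimal standalone sketch of that pattern (the class and names are illustrative stand-ins, not the real Ultralytics trainer):

```python
from collections import defaultdict


# Illustrative stand-in only; not the real Ultralytics trainer class.
class DummyTrainer:
    def __init__(self):
        self.callbacks = defaultdict(list)  # event name -> list of callback functions


def on_train_end(trainer):
    print("training finished")


integration_callbacks = {"on_train_end": on_train_end}  # same {event: function} shape as wb.py below

trainer = DummyTrainer()
for k, v in integration_callbacks.items():
    trainer.callbacks[k].append(v)  # callback[name].append(func), mirroring add_integration_callbacks

for fn in trainer.callbacks["on_train_end"]:
    fn(trainer)  # fire the event
```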

ultralytics/yolo/utils/callbacks/wb.py
@@ -1,48 +0,0 @@
# Ultralytics YOLO 🚀, GPL-3.0 license

from ultralytics.yolo.utils.torch_utils import get_flops, get_num_params

try:
    import wandb

    assert hasattr(wandb, '__version__')
except (ImportError, AssertionError):
    wandb = None


def on_pretrain_routine_start(trainer):
    wandb.init(project=trainer.args.project or "YOLOv8", name=trainer.args.name, config=dict(
        trainer.args)) if not wandb.run else wandb.run


def on_fit_epoch_end(trainer):
    wandb.run.log(trainer.metrics, step=trainer.epoch + 1)
    if trainer.epoch == 0:
        model_info = {
            "model/parameters": get_num_params(trainer.model),
            "model/GFLOPs": round(get_flops(trainer.model), 3),
            "model/speed(ms)": round(trainer.validator.speed[1], 3)}
        wandb.run.log(model_info, step=trainer.epoch + 1)


def on_train_epoch_end(trainer):
    wandb.run.log(trainer.label_loss_items(trainer.tloss, prefix="train"), step=trainer.epoch + 1)
    wandb.run.log(trainer.lr, step=trainer.epoch + 1)
    if trainer.epoch == 1:
        wandb.run.log({f.stem: wandb.Image(str(f))
                       for f in trainer.save_dir.glob('train_batch*.jpg')},
                      step=trainer.epoch + 1)


def on_train_end(trainer):
    art = wandb.Artifact(type="model", name=f"run_{wandb.run.id}_model")
    if trainer.best.exists():
        art.add_file(trainer.best)
        wandb.run.log_artifact(art)


callbacks = {
    "on_pretrain_routine_start": on_pretrain_routine_start,
    "on_train_epoch_end": on_train_epoch_end,
    "on_fit_epoch_end": on_fit_epoch_end,
    "on_train_end": on_train_end} if wandb else {}