`ultralytics 8.0.92` updates and fixes (#2361)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Yonghye Kwon <developer.0hye@gmail.com>
Co-authored-by: introvin <vinod.4166@gmail.com>
Co-authored-by: marinmarcillat <58145636+marinmarcillat@users.noreply.github.com>
Co-authored-by: BIGBOSS-FOX <47949596+BIGBOSS-FOX@users.noreply.github.com>
pull/2373/head^2 v8.0.92
Glenn Jocher 2 years ago committed by GitHub
parent 3fd317edfd
commit 0ebd3f2959
15 changed files (changed line counts in parentheses):
  1. .pre-commit-config.yaml (4)
  2. docs/modes/train.md (4)
  3. docs/overrides/partials/source-file.html (26)
  4. docs/quickstart.md (2)
  5. docs/reference/yolo/utils/plotting.md (5)
  6. docs/tasks/classify.md (14)
  7. mkdocs.yml (17)
  8. setup.py (11)
  9. ultralytics/__init__.py (2)
  10. ultralytics/tracker/track.py (2)
  11. ultralytics/vit/sam/modules/mask_generator.py (4)
  12. ultralytics/yolo/engine/predictor.py (6)
  13. ultralytics/yolo/utils/__init__.py (15)
  14. ultralytics/yolo/utils/plotting.py (3)
  15. ultralytics/yolo/v8/detect/train.py (2)

@@ -22,7 +22,7 @@ repos:
- id: detect-private-key
- repo: https://github.com/asottile/pyupgrade
rev: v3.3.1
rev: v3.3.2
hooks:
- id: pyupgrade
name: Upgrade code
@@ -34,7 +34,7 @@ repos:
name: Sort imports
- repo: https://github.com/google/yapf
rev: v0.32.0
rev: v0.33.0
hooks:
- id: yapf
name: YAPF formatting

@@ -2,10 +2,6 @@
comments: true
---
---
comments: true
---
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">
**Train mode** is used for training a YOLOv8 model on a custom dataset. In this mode, the model is trained using the

@@ -0,0 +1,26 @@
{% import "partials/language.html" as lang with context %}
<!-- taken from
https://github.com/squidfunk/mkdocs-material/blob/master/src/partials/source-file.html -->
<br>
<div class="md-source-file">
<small>
<!-- mkdocs-git-revision-date-localized-plugin -->
{% if page.meta.git_revision_date_localized %}
📅 {{ lang.t("source.file.date.updated") }}:
{{ page.meta.git_revision_date_localized }}
{% if page.meta.git_creation_date_localized %}
<br />
🎂 {{ lang.t("source.file.date.created") }}:
{{ page.meta.git_creation_date_localized }}
{% endif %}
<!-- mkdocs-git-revision-date-plugin -->
{% elif page.meta.revision_date %}
📅 {{ lang.t("source.file.date.updated") }}:
{{ page.meta.revision_date }}
{% endif %}
</small>
</div>

@@ -96,7 +96,7 @@ CLI requires no customization or Python code. You can simply run all tasks from
!!! warning "Warning"
Arguments must be passed as `arg=val` pairs, split by an equals `=` sign and delimited by spaces ` ` between pairs. Do not use `--` argument prefixes or commas `,` beteen arguments.
Arguments must be passed as `arg=val` pairs, split by an equals `=` sign and delimited by spaces ` ` between pairs. Do not use `--` argument prefixes or commas `,` between arguments.
- `yolo predict model=yolov8n.pt imgsz=640 conf=0.25` &nbsp;
- `yolo predict model yolov8n.pt imgsz 640 conf 0.25` &nbsp;
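
For comparison, the same prediction can be issued from the Python API, where settings are ordinary keyword arguments rather than `arg=val` pairs. A minimal sketch (the image URL is just an illustrative source):

```python
from ultralytics import YOLO

# Equivalent of `yolo predict model=yolov8n.pt imgsz=640 conf=0.25` from Python.
model = YOLO('yolov8n.pt')
results = model.predict(source='https://ultralytics.com/images/bus.jpg', imgsz=640, conf=0.25)
```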

@@ -32,3 +32,8 @@
---
:::ultralytics.yolo.utils.plotting.output_to_target
<br><br>
# feature_visualization
---
:::ultralytics.yolo.utils.plotting.feature_visualization
<br><br>

@@ -77,10 +77,16 @@ see the [Configuration](../usage/cfg.md) page.
The YOLO classification dataset format is same as the torchvision format. Each class of images has its own folder and you have to simply pass the path of the dataset folder, i.e, `yolo classify train data="path/to/dataset"`
```
dataset/
├── class1/
├── class2/
├── class3/
├── ...
├── train/
├──── class1/
├──── class2/
├──── class3/
├──── ...
├── val/
├──── class1/
├──── class2/
├──── class3/
├──── ...
```
## Val
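
To illustrate the new split layout, here is a minimal classification training call that points at the dataset root shown above (a sketch; `path/to/dataset` is a placeholder and the hyperparameters are illustrative):

```python
from ultralytics import YOLO

# Expects dataset/train/<class>/ and dataset/val/<class>/ as laid out above.
model = YOLO('yolov8n-cls.pt')
model.train(data='path/to/dataset', epochs=10, imgsz=224)
```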

@@ -12,6 +12,8 @@ theme:
custom_dir: docs/overrides
logo: https://github.com/ultralytics/assets/raw/main/logo/Ultralytics_Logotype_Reverse.svg
favicon: assets/favicon.ico
icon:
repo: fontawesome/brands/github
font:
text: Roboto
code: Roboto Mono
@@ -55,6 +57,7 @@ copyright: <a href="https://ultralytics.com" target="_blank">Ultralytics 2023.</
extra:
# version:
# provider: mike # version drop-down menu
robots: robots.txt
analytics:
provider: google
property: G-2M5EHKC0BH
@@ -91,9 +94,6 @@ extra:
extra_css:
- stylesheets/style.css
extra_files:
- robots.txt
markdown_extensions:
# Div text decorators
- admonition
@@ -289,6 +289,9 @@ nav:
plugins:
- mkdocstrings
- search
- git-revision-date-localized:
type: timeago
enable_creation_date: true
- redirects:
redirect_maps:
callbacks.md: usage/callbacks.md
@@ -338,6 +341,7 @@ plugins:
yolov5/hyp_evolution.md: yolov5/tutorials/hyperparameter_evolution.md
yolov5/pruning_sparsity.md: yolov5/tutorials/model_pruning_and_sparsity.md
yolov5/comet.md: yolov5/tutorials/comet_logging_integration.md
yolov5/clearml.md: yolov5/tutorials/clearml_logging_integration.md
yolov5/tta.md: yolov5/tutorials/test_time_augmentation.md
yolov5/multi_gpu_training.md: yolov5/tutorials/multi_gpu_training.md
yolov5/ensemble.md: yolov5/tutorials/model_ensembling.md
@@ -351,3 +355,10 @@ plugins:
yolov5/tutorials/yolov5_neural_magic_tutorial.md: yolov5/tutorials/neural_magic_pruning_quantization.md
yolov5/tutorials/model_ensembling_tutorial.md: yolov5/tutorials/model_ensembling.md
yolov5/tutorials/pytorch_hub_tutorial.md: yolov5/tutorials/pytorch_hub_model_loading.md
yolov5/tutorials/yolov5_architecture_tutorial.md: yolov5/tutorials/architecture_description.md
yolov5/tutorials/multi_gpu_training_tutorial.md: yolov5/tutorials/multi_gpu_training.md
yolov5/tutorials/yolov5_pytorch_hub_tutorial.md: yolov5/tutorials/pytorch_hub_model_loading.md
yolov5/tutorials/model_export_tutorial.md: yolov5/tutorials/model_export.md
yolov5/tutorials/jetson_nano_tutorial.md: yolov5/tutorials/running_on_jetson_nano.md
yolov5/tutorials/yolov5_model_ensembling_tutorial.md: yolov5/tutorials/model_ensembling.md
reference/base_val.md: index.md

@@ -39,8 +39,15 @@ setup(
install_requires=REQUIREMENTS + PKG_REQUIREMENTS,
extras_require={
'dev': [
'check-manifest', 'pytest', 'pytest-cov', 'coverage', 'mkdocs-material', 'mkdocstrings[python]',
'mkdocs-redirects'],
'check-manifest',
'pytest',
'pytest-cov',
'coverage',
'mkdocs-material',
'mkdocstrings[python]',
'mkdocs-redirects', # for 301 redirects
'mkdocs-git-revision-date-localized-plugin', # for created/updated dates
],
'export': ['coremltools>=6.0', 'openvino-dev>=2022.3', 'tensorflowjs'], # automatically installs tensorflow
},
classifiers=[

@@ -1,6 +1,6 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
__version__ = '8.0.91'
__version__ = '8.0.92'
from ultralytics.hub import start
from ultralytics.vit.sam import SAM
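
A quick sanity check that the installed package matches this release:

```python
import ultralytics

print(ultralytics.__version__)  # expected: '8.0.92'
```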

@@ -47,7 +47,7 @@ def on_predict_postprocess_end(predictor):
tracks = predictor.trackers[i].update(det, im0s[i])
if len(tracks) == 0:
continue
idx = tracks[:, -1].tolist()
idx = tracks[:, -1].astype(int)
predictor.results[i] = predictor.results[i][idx]
predictor.results[i].update(boxes=torch.as_tensor(tracks[:, :-1]))
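
The switch from `.tolist()` to `.astype(int)` matters because the last column of the tracker output is a float detection index, and only an integer array can be used to re-index the results. A small self-contained sketch of the idea (the arrays are made up for illustration):

```python
import numpy as np

# Made-up tracker output: one row per track, last column is the index of the
# matched detection (stored as a float alongside the box coordinates).
tracks = np.array([[10., 20., 50., 60., 1., 2.],
                   [30., 40., 80., 90., 2., 0.]])
detections = np.array(['det0', 'det1', 'det2'])

idx = tracks[:, -1].astype(int)  # array([2, 0]) -- valid integer indices
print(detections[idx])           # ['det2' 'det0']: results reordered to match the tracks
# tracks[:, -1].tolist() would give [2.0, 0.0]; float indices raise IndexError here.
```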

@@ -82,8 +82,8 @@ class SamAutomaticMaskGenerator:
memory.
"""
assert (points_per_side is None) != (point_grids is
None), 'Exactly one of points_per_side or point_grid must be provided.'
assert (points_per_side is None) != (point_grids is None), \
'Exactly one of points_per_side or point_grid must be provided.'
if points_per_side is not None:
self.point_grids = build_all_layer_point_grids(
points_per_side,
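
The reformatted assertion is an exclusive-or check: exactly one of `points_per_side` and `point_grids` may be supplied, since comparing the two `is None` booleans with `!=` is true only when they differ. A small sketch of the same pattern (the function name is illustrative):

```python
def check_points(points_per_side=None, point_grids=None):
    # `!=` on two booleans behaves like XOR: passes only when exactly one argument is given.
    assert (points_per_side is None) != (point_grids is None), \
        'Exactly one of points_per_side or point_grid must be provided.'

check_points(points_per_side=32)          # ok
check_points(point_grids=[[(0.5, 0.5)]])  # ok
# check_points()                          # AssertionError: neither given
# check_points(32, [[(0.5, 0.5)]])        # AssertionError: both given
```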

@@ -115,10 +115,8 @@ class BasePredictor:
im (torch.Tensor | List(np.ndarray)): (N, 3, h, w) for tensor, [(h, w, 3) x N] for list.
"""
if not isinstance(im, torch.Tensor):
auto = all(x.shape == im[0].shape for x in im) and self.model.pt
if not auto:
LOGGER.warning(
'WARNING ⚠ Source shapes differ. For optimal performance supply similarly-shaped sources.')
same_shapes = all(x.shape == im[0].shape for x in im)
auto = same_shapes and self.model.pt
im = np.stack([LetterBox(self.imgsz, auto=auto, stride=self.model.stride)(image=x) for x in im])
im = im[..., ::-1].transpose((0, 3, 1, 2)) # BGR to RGB, BHWC to BCHW, (n, 3, h, w)
im = np.ascontiguousarray(im) # contiguous
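
For context on the `auto` flag: `np.stack` only works when every letterboxed image ends up with the same shape, so rectangular (`auto=True`) letterboxing is reserved for same-shaped sources on a PyTorch model, while differing sources are padded to the fixed square `imgsz`. A rough sketch of the shape constraint (dummy arrays stand in for the LetterBox output):

```python
import numpy as np

# Two dummy BGR frames with different shapes.
frames = [np.zeros((480, 640, 3), dtype=np.uint8),
          np.zeros((720, 1280, 3), dtype=np.uint8)]

same_shapes = all(f.shape == frames[0].shape for f in frames)
print(same_shapes)  # False -> auto would be False, forcing a fixed output size

# np.stack needs identical shapes, so each frame must be resized/padded to one
# common size before batching (stand-ins below for the LetterBox output).
padded = [np.zeros((640, 640, 3), dtype=np.uint8) for _ in frames]
batch = np.stack(padded)
print(batch.shape)  # (2, 640, 640, 3)
```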

@@ -259,13 +259,14 @@ def yaml_save(file='data.yaml', data=None):
# Create parent directories if they don't exist
file.parent.mkdir(parents=True, exist_ok=True)
# Convert Path objects to strings
for k, v in data.items():
if isinstance(v, Path):
dict[k] = str(v)
# Dump data to file in YAML format
with open(file, 'w') as f:
# Dump data to file in YAML format, converting Path objects to strings
yaml.safe_dump({k: str(v) if isinstance(v, Path) else v
for k, v in data.items()},
f,
sort_keys=False,
allow_unicode=True)
yaml.safe_dump(data, f, sort_keys=False, allow_unicode=True)
def yaml_load(file='data.yaml', append_filename=False):
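
For reference, the new `yaml_save` body converts `Path` values to strings inline while dumping (the removed loop assigned into the built-in `dict` type rather than `data`, so the conversion never worked and `Path` values could not be serialized). A standalone sketch of the same idea:

```python
from pathlib import Path
import yaml

data = {'name': 'exp', 'save_dir': Path('runs/detect/exp'), 'epochs': 100}

with open('data.yaml', 'w') as f:
    # Path objects become plain strings on the fly; other values are dumped unchanged.
    yaml.safe_dump({k: str(v) if isinstance(v, Path) else v for k, v in data.items()},
                   f,
                   sort_keys=False,
                   allow_unicode=True)

print(Path('data.yaml').read_text())
# name: exp
# save_dir: runs/detect/exp
# epochs: 100
```
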
@@ -759,7 +760,7 @@ ENVIRONMENT = 'Colab' if is_colab() else 'Kaggle' if is_kaggle() else 'Jupyter'
TESTS_RUNNING = is_pytest_running() or is_github_actions_ci()
set_sentry()
# OpenCV Multilanguage-friendly functions ------------------------------------------------------------------------------------
# OpenCV Multilanguage-friendly functions ------------------------------------------------------------------------------
imshow_ = cv2.imshow # copy to avoid recursion errors

@@ -481,9 +481,6 @@ def feature_visualization(x, module_type, stage, n=32, save_dir=Path('runs/detec
stage (int): Module stage within the model.
n (int, optional): Maximum number of feature maps to plot. Defaults to 32.
save_dir (Path, optional): Directory to save results. Defaults to Path('runs/detect/exp').
Returns:
None: This function does not return any value; it saves the visualization to the specified directory.
"""
for m in ['Detect', 'Pose', 'Segment']:
if m in module_type:
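
A hedged usage sketch for `feature_visualization`, assuming it is called outside a normal training or predict run (the tensor and `module_type` value are made up; stages named Detect/Pose/Segment are skipped by the loop above):

```python
from pathlib import Path
import torch
from ultralytics.yolo.utils.plotting import feature_visualization

save_dir = Path('runs/detect/exp')
save_dir.mkdir(parents=True, exist_ok=True)  # the function writes its PNG into save_dir

# Fake feature maps: batch of 1, 64 channels, 80x80 spatial size.
x = torch.randn(1, 64, 80, 80)

# Plots up to n feature-map channels and saves the grid; nothing is returned.
feature_visualization(x, module_type='Conv', stage=2, n=16, save_dir=save_dir)
```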

@@ -212,7 +212,6 @@ class Loss:
pred_scores.detach().sigmoid(), (pred_bboxes.detach() * stride_tensor).type(gt_bboxes.dtype),
anchor_points * stride_tensor, gt_labels, gt_bboxes, mask_gt)
target_bboxes /= stride_tensor
target_scores_sum = max(target_scores.sum(), 1)
# cls loss
@@ -221,6 +220,7 @@ class Loss:
# bbox loss
if fg_mask.sum():
target_bboxes /= stride_tensor
loss[0], loss[2] = self.bbox_loss(pred_distri, pred_bboxes, anchor_points, target_bboxes, target_scores,
target_scores_sum, fg_mask)
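
The relocation above means the stride normalization of `target_bboxes` now only happens when at least one anchor is matched to a ground-truth box, since the normalized boxes are only consumed by the bbox/DFL loss. A toy sketch of the guarded path (shapes and values are invented):

```python
import torch

# Invented example: 4 anchors with xyxy targets in image pixels and per-anchor strides.
target_bboxes = torch.tensor([[0., 0., 64., 64.],
                              [8., 8., 32., 32.],
                              [0., 0., 16., 16.],
                              [4., 4., 12., 12.]])
stride_tensor = torch.tensor([[8.], [8.], [16.], [16.]])
fg_mask = torch.tensor([True, False, True, False])  # anchors assigned to ground truth

if fg_mask.sum():
    # Convert pixel coordinates to feature-map units only when the bbox loss will run.
    target_bboxes = target_bboxes / stride_tensor
    print(target_bboxes[fg_mask])
```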
