From d6bb3046a84bdfa20f478b052aaeab8feeaed3d2 Mon Sep 17 00:00:00 2001
From: Glenn Jocher
Date: Sat, 27 Apr 2024 13:16:40 +0200
Subject: [PATCH] Docs Colab, OBB and typos fixes (#10366)

Co-authored-by: Olivier Louvignes
Co-authored-by: RainRat
---
 docs/en/guides/coral-edge-tpu-on-raspberry-pi.md    | 8 ++++----
 docs/en/guides/isolating-segmentation-objects.md    | 4 ++--
 docs/en/guides/nvidia-jetson.md                     | 2 +-
 docs/en/guides/view-results-in-terminal.md          | 2 +-
 docs/en/integrations/google-colab.md                | 2 +-
 docs/en/modes/predict.md                            | 2 ++
 docs/en/tasks/classify.md                           | 2 +-
 docs/en/tasks/obb.md                                | 2 +-
 docs/en/tasks/pose.md                               | 2 +-
 docs/en/tasks/segment.md                            | 2 +-
 examples/YOLOv8-CPP-Inference/README.md             | 2 +-
 examples/YOLOv8-ONNXRuntime-Rust/src/ort_backend.rs | 2 +-
 examples/YOLOv8-Region-Counter/readme.md            | 2 +-
 13 files changed, 18 insertions(+), 16 deletions(-)

diff --git a/docs/en/guides/coral-edge-tpu-on-raspberry-pi.md b/docs/en/guides/coral-edge-tpu-on-raspberry-pi.md
index 90f9145dc1..f104663717 100644
--- a/docs/en/guides/coral-edge-tpu-on-raspberry-pi.md
+++ b/docs/en/guides/coral-edge-tpu-on-raspberry-pi.md
@@ -82,7 +82,7 @@ To use the Edge TPU, you need to convert your model into a compatible format. It
         from ultralytics import YOLO

         # Load a model
-        model = YOLO('path/to/model.pt')  # Load a official model or custom model
+        model = YOLO('path/to/model.pt')  # Load an official model or custom model

         # Export the model
         model.export(format='edgetpu')
@@ -91,7 +91,7 @@ To use the Edge TPU, you need to convert your model into a compatible format. It
     === "CLI"

         ```bash
-        yolo export model=path/to/model.pt format=edgetpu  # Export a official model or custom model
+        yolo export model=path/to/model.pt format=edgetpu  # Export an official model or custom model
         ```

 The exported model will be saved in the `<model_name>_saved_model/` folder with the name `<model_name>_full_integer_quant_edgetpu.tflite`.
@@ -108,7 +108,7 @@ After exporting your model, you can run inference with it using the following co
         from ultralytics import YOLO

         # Load a model
-        model = YOLO('path/to/edgetpu_model.tflite')  # Load a official model or custom model
+        model = YOLO('path/to/edgetpu_model.tflite')  # Load an official model or custom model

         # Run Prediction
         model.predict("path/to/source.png")
@@ -117,7 +117,7 @@ After exporting your model, you can run inference with it using the following co
     === "CLI"

         ```bash
-        yolo predict model=path/to/edgetpu_model.tflite source=path/to/source.png  # Load a official model or custom model
+        yolo predict model=path/to/edgetpu_model.tflite source=path/to/source.png  # Load an official model or custom model
         ```

 Find comprehensive information on the [Predict](../modes/predict.md) page for full prediction mode details.

diff --git a/docs/en/guides/isolating-segmentation-objects.md b/docs/en/guides/isolating-segmentation-objects.md
index f8e9a7a654..3ef965b709 100644
--- a/docs/en/guides/isolating-segmentation-objects.md
+++ b/docs/en/guides/isolating-segmentation-objects.md
@@ -108,7 +108,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab

     1. For more info on `c.masks.xy` see [Masks Section from Predict Mode](../modes/predict.md#masks).

-    2. Here, the values are cast into `np.int32` for compatibility with `drawContours()` function from OpenCV.
+    2. Here the values are cast into `np.int32` for compatibility with `drawContours()` function from OpenCV.

     3. The OpenCV `drawContours()` function expects contours to have a shape of `[N, 1, 2]` expand section below for more details.
@@ -145,7 +145,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab

     ***

-5. Next the there are 2 options for how to move forward with the image from this point and a subsequent option for each.
+5. Next there are 2 options for how to move forward with the image from this point and a subsequent option for each.
 ### Object Isolation Options

diff --git a/docs/en/guides/nvidia-jetson.md b/docs/en/guides/nvidia-jetson.md
index dbf7e261c7..af656967dd 100644
--- a/docs/en/guides/nvidia-jetson.md
+++ b/docs/en/guides/nvidia-jetson.md
@@ -54,7 +54,7 @@ The first step after getting your hands on an NVIDIA Jetson device is to flash N

 The fastest way to get started with Ultralytics YOLOv8 on NVIDIA Jetson is to run with pre-built docker image for Jetson.

-Execute the below command to pull the Docker containter and run on Jetson. This is based on [l4t-pytorch](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-pytorch) docker image which contains PyTorch and Torchvision in a Python3 environment.
+Execute the below command to pull the Docker container and run on Jetson. This is based on [l4t-pytorch](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-pytorch) docker image which contains PyTorch and Torchvision in a Python3 environment.

 ```sh
 t=ultralytics/ultralytics:latest-jetson && sudo docker pull $t && sudo docker run -it --ipc=host --runtime=nvidia $t

diff --git a/docs/en/guides/view-results-in-terminal.md b/docs/en/guides/view-results-in-terminal.md
index d0a00bbbe6..24f382cf92 100644
--- a/docs/en/guides/view-results-in-terminal.md
+++ b/docs/en/guides/view-results-in-terminal.md
@@ -10,7 +10,7 @@ keywords: YOLOv8, VSCode, Terminal, Remote Development, Ultralytics, SSH, Object

 Sixel example of image in Terminal

-Image from the the [libsixel](https://saitoha.github.io/libsixel/) website.
+Image from the [libsixel](https://saitoha.github.io/libsixel/) website.

 ## Motivation

diff --git a/docs/en/integrations/google-colab.md b/docs/en/integrations/google-colab.md
index 0d37c6c89f..6df3fe894a 100644
--- a/docs/en/integrations/google-colab.md
+++ b/docs/en/integrations/google-colab.md
@@ -95,7 +95,7 @@ There are many options for training and evaluating YOLOv8 models, so what makes

 If you’d like to dive deeper into Google Colab, here are a few resources to guide you.

-- **[Training Custom Datasets with Ultralytics YOLOv8 in Google Colab](https://www.ultralytics.com/blog/training-custom-datasets-with-ultralytics-yolov8-in-google-Colab)**: Learn how to train custom datasets with Ultralytics YOLOv8 on Google Colab. This comprehensive blog post will take you through the entire process, from initial setup to the training and evaluation stages.
+- **[Training Custom Datasets with Ultralytics YOLOv8 in Google Colab](https://www.ultralytics.com/blog/training-custom-datasets-with-ultralytics-yolov8-in-google-colab)**: Learn how to train custom datasets with Ultralytics YOLOv8 on Google Colab. This comprehensive blog post will take you through the entire process, from initial setup to the training and evaluation stages.

 - **[Curated Notebooks](https://colab.google/notebooks/)**: Here you can explore a series of organized and educational notebooks, each grouped by specific topic areas.
diff --git a/docs/en/modes/predict.md b/docs/en/modes/predict.md
index 7b36783e6d..c8ad07d4bb 100644
--- a/docs/en/modes/predict.md
+++ b/docs/en/modes/predict.md
@@ -69,6 +69,7 @@ Ultralytics YOLO models return either a Python list of `Results` objects, or a m
             masks = result.masks  # Masks object for segmentation masks outputs
             keypoints = result.keypoints  # Keypoints object for pose outputs
             probs = result.probs  # Probs object for classification outputs
+            obb = result.obb  # Oriented boxes object for OBB outputs
             result.show()  # display to screen
             result.save(filename='result.jpg')  # save to disk
         ```
@@ -90,6 +91,7 @@ Ultralytics YOLO models return either a Python list of `Results` objects, or a m
             masks = result.masks  # Masks object for segmentation masks outputs
             keypoints = result.keypoints  # Keypoints object for pose outputs
             probs = result.probs  # Probs object for classification outputs
+            obb = result.obb  # Oriented boxes object for OBB outputs
             result.show()  # display to screen
             result.save(filename='result.jpg')  # save to disk
         ```

diff --git a/docs/en/tasks/classify.md b/docs/en/tasks/classify.md
index 18a641d50f..716e702e2b 100644
--- a/docs/en/tasks/classify.md
+++ b/docs/en/tasks/classify.md
@@ -83,7 +83,7 @@ YOLO classification dataset format can be found in detail in the [Dataset Guide]

 ## Val

-Validate trained YOLOv8n-cls model accuracy on the MNIST160 dataset. No argument need to passed as the `model` retains it's training `data` and arguments as model attributes.
+Validate trained YOLOv8n-cls model accuracy on the MNIST160 dataset. No argument need to passed as the `model` retains its training `data` and arguments as model attributes.

 !!! Example

diff --git a/docs/en/tasks/obb.md b/docs/en/tasks/obb.md
index 117deb4fe8..7f27fd379b 100644
--- a/docs/en/tasks/obb.md
+++ b/docs/en/tasks/obb.md
@@ -103,7 +103,7 @@ OBB dataset format can be found in detail in the [Dataset Guide](../datasets/obb
 ## Val

 Validate trained YOLOv8n-obb model accuracy on the DOTA8 dataset. No argument need to passed as the `model`
-retains it's training `data` and arguments as model attributes.
+retains its training `data` and arguments as model attributes.

 !!! Example

diff --git a/docs/en/tasks/pose.md b/docs/en/tasks/pose.md
index 64bb107019..5d4d80a19e 100644
--- a/docs/en/tasks/pose.md
+++ b/docs/en/tasks/pose.md
@@ -97,7 +97,7 @@ YOLO pose dataset format can be found in detail in the [Dataset Guide](../datase
 ## Val

 Validate trained YOLOv8n-pose model accuracy on the COCO128-pose dataset. No argument need to passed as the `model`
-retains it's training `data` and arguments as model attributes.
+retains its training `data` and arguments as model attributes.

 !!! Example

diff --git a/docs/en/tasks/segment.md b/docs/en/tasks/segment.md
index e9d0199b89..0421b2e180 100644
--- a/docs/en/tasks/segment.md
+++ b/docs/en/tasks/segment.md
@@ -83,7 +83,7 @@ YOLO segmentation dataset format can be found in detail in the [Dataset Guide](.
 ## Val

 Validate trained YOLOv8n-seg model accuracy on the COCO128-seg dataset. No argument need to passed as the `model`
-retains it's training `data` and arguments as model attributes.
+retains its training `data` and arguments as model attributes.

 !!! Example

diff --git a/examples/YOLOv8-CPP-Inference/README.md b/examples/YOLOv8-CPP-Inference/README.md
index 601c1d0c93..5bb2586dd6 100644
--- a/examples/YOLOv8-CPP-Inference/README.md
+++ b/examples/YOLOv8-CPP-Inference/README.md
@@ -13,7 +13,7 @@ cd examples/YOLOv8-CPP-Inference
 # Add a **yolov8\_.onnx** and/or **yolov5\_.onnx** model(s) to the ultralytics folder.
 # Edit the **main.cpp** to change the **projectBasePath** to match your user.
-# Note that by default the CMake file will try and import the CUDA library to be used with the OpenCVs dnn (cuDNN) GPU Inference.
+# Note that by default the CMake file will try to import the CUDA library to be used with the OpenCVs dnn (cuDNN) GPU Inference.
 # If your OpenCV build does not use CUDA/cuDNN you can remove that import call and run the example on CPU.

 mkdir build

diff --git a/examples/YOLOv8-ONNXRuntime-Rust/src/ort_backend.rs b/examples/YOLOv8-ONNXRuntime-Rust/src/ort_backend.rs
index 5be93bdc58..857baaebae 100644
--- a/examples/YOLOv8-ONNXRuntime-Rust/src/ort_backend.rs
+++ b/examples/YOLOv8-ONNXRuntime-Rust/src/ort_backend.rs
@@ -161,7 +161,7 @@ impl OrtBackend {
             Ok(metadata) => match metadata.custom("task") {
                 Err(_) => panic!("Can not get custom value. Try making it explicit by `--task`"),
                 Ok(value) => match value {
-                    None => panic!("No correspoing value of `task` found in metadata. Make it explicit by `--task`"),
+                    None => panic!("No corresponding value of `task` found in metadata. Make it explicit by `--task`"),
                     Some(task) => match task.as_str() {
                         "classify" => YOLOTask::Classify,
                         "detect" => YOLOTask::Detect,

diff --git a/examples/YOLOv8-Region-Counter/readme.md b/examples/YOLOv8-Region-Counter/readme.md
index 2acf0a5534..4ab8e7fca1 100644
--- a/examples/YOLOv8-Region-Counter/readme.md
+++ b/examples/YOLOv8-Region-Counter/readme.md
@@ -50,7 +50,7 @@ python yolov8_region_counter.py --source "path/to/video.mp4" --save-img --weight

 # If you want to detect specific class (first class and third class)
 python yolov8_region_counter.py --source "path/to/video.mp4" --classes 0 2 --weights "path/to/model.pt"

-# If you dont want to save results
+# If you don't want to save results
 python yolov8_region_counter.py --source "path/to/video.mp4" --view-img
 ```
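
For readers skimming the hunks above, the headers such as `@@ -69,6 +69,7 @@` encode where each change lands: old-file start line and line count, then new-file start line and count (so `-69,6 +69,7` means six old lines become seven, i.e. one line was added). A minimal, stdlib-only sketch of decoding these headers (not part of the patch itself):

```python
import re

def parse_hunk_header(line):
    """Parse a unified-diff hunk header like '@@ -82,7 +82,7 @@ ...'
    into (old_start, old_count, new_start, new_count).
    A missing count (e.g. '@@ -5 +5 @@') defaults to 1 per the diff format."""
    m = re.match(r"@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? @@", line)
    if not m:
        raise ValueError("not a hunk header: " + line)
    old_start, old_count, new_start, new_count = m.groups()
    return (int(old_start), int(old_count or 1),
            int(new_start), int(new_count or 1))

# The predict.md hunk above: 6 old lines replaced by 7 new ones (one added line)
print(parse_hunk_header("@@ -69,6 +69,7 @@ Ultralytics YOLO models return..."))
# (69, 6, 69, 7)
```

The trailing text after the second `@@` is just context (the nearest enclosing heading or function) and is ignored by `git apply`.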
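
The `result.obb` lines added to predict.md expose oriented bounding boxes, which Ultralytics stores in an xywhr layout (center x/y, width, height, rotation). As a rough illustration of what that representation means — a pure-Python sketch, not the Ultralytics implementation, with the angle assumed to be in radians — the four corner points of such a box can be recovered like this:

```python
import math

def obb_to_corners(cx, cy, w, h, r):
    """Convert a rotated box (center x/y, width, height, rotation r in
    radians) to its four corner points, listed counter-clockwise."""
    cos_r, sin_r = math.cos(r), math.sin(r)
    dx, dy = w / 2, h / 2
    corners = []
    for sx, sy in ((-dx, -dy), (dx, -dy), (dx, dy), (-dx, dy)):
        # Rotate each half-extent offset by r, then translate to the center
        corners.append((cx + sx * cos_r - sy * sin_r,
                        cy + sx * sin_r + sy * cos_r))
    return corners

# With r=0 this degenerates to the ordinary axis-aligned xywh corners
print(obb_to_corners(10.0, 10.0, 4.0, 2.0, 0.0))
# [(8.0, 9.0), (12.0, 9.0), (12.0, 11.0), (8.0, 11.0)]
```

In practice the library provides this conversion itself (the `obb` result object carries corner-point views alongside xywhr), so the sketch is only to make the added doc lines concrete.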