YOLOv8 - TFLite Runtime
This example shows how to run inference with a YOLOv8 TFLite model. It supports FP32, FP16, and INT8 models.
Installation
Installing tflite-runtime
To load TFLite models, install the tflite-runtime package using:
pip install tflite-runtime
Installing tensorflow-gpu (For NVIDIA GPU Users)
Leverage GPU acceleration with NVIDIA GPUs by installing tensorflow-gpu:
pip install tensorflow-gpu
Note: Ensure you have compatible GPU drivers installed on your system.
Installing tensorflow (CPU Version)
For CPU usage or non-NVIDIA GPUs, install TensorFlow with:
pip install tensorflow
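Either package provides a TFLite Interpreter class. The quick check below is illustrative only (it is not part of this example); it reports which interpreter is available and, when full TensorFlow is installed, whether a GPU is visible:

```python
# Sanity-check the installed runtime (illustrative, not part of main.py).
try:
    from tflite_runtime.interpreter import Interpreter
    print("Using tflite-runtime interpreter:", Interpreter)
except ImportError:
    import tensorflow as tf
    print("Using TensorFlow", tf.__version__)
    print("GPUs visible:", tf.config.list_physical_devices("GPU"))
```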
Usage
Follow these instructions to run YOLOv8 after successful installation.
Convert the YOLOv8 model to TFLite format:
yolo export model=yolov8n.pt imgsz=640 format=tflite int8
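The same export can also be run from Python; this is a brief sketch assuming the ultralytics package is installed:

```python
from ultralytics import YOLO

# Export yolov8n.pt to TFLite with INT8 quantization (mirrors the CLI command above).
model = YOLO("yolov8n.pt")
model.export(format="tflite", imgsz=640, int8=True)
```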
Locate the TFLite model in yolov8n_saved_model. Then, execute the following in your terminal:
python main.py --model yolov8n_full_integer_quant.tflite --img image.jpg --conf 0.25 --iou 0.45 --metadata "metadata.yaml"
Replace yolov8n_full_integer_quant.tflite with the path to your TFLite model, image.jpg with the path to your input image, and metadata.yaml with the metadata file generated by ultralytics during export, then adjust the confidence (conf) and IoU (iou) thresholds as necessary.
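For reference, the core of main.py follows the standard TFLite Interpreter workflow. The sketch below is a simplified illustration assuming an FP32 model, not a substitute for the script: it omits the letterbox preprocessing, INT8 (de)quantization handling, metadata parsing, and NMS that main.py performs.

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter  # with full TensorFlow: tf.lite.Interpreter

# Load the exported model and allocate tensors.
interpreter = Interpreter(model_path="yolov8n_saved_model/yolov8n_float32.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Build a dummy input of the expected shape, e.g. (1, 640, 640, 3), values in [0, 1].
_, height, width, _ = input_details[0]["shape"]
image = np.random.rand(1, height, width, 3).astype(np.float32)

# Run inference and read back the raw predictions; main.py decodes these into
# boxes, scores, and class IDs and applies the conf/iou thresholds (NMS).
interpreter.set_tensor(input_details[0]["index"], image)
interpreter.invoke()
predictions = interpreter.get_tensor(output_details[0]["index"])
print(predictions.shape)
```

The yolov8n_float32.tflite filename above is an assumption based on the export naming convention; use whichever .tflite file your export produced.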
Output
The output shows the detections along with the class label and confidence score for each detected object.