#!/usr/bin/env python
'''
Tracker demo
For usage, download the models using the links below:
For GOTURN:
goturn.prototxt and goturn.caffemodel: https://github.com/opencv/opencv_extra/tree/c4219d5eb3105ed8e634278fad312a1a8d2c182d/testdata/tracking
For DaSiamRPN:
network: https://www.dropbox.com/s/rr1lk9355vzolqv/dasiamrpn_model.onnx?dl=0
kernel_r1: https://www.dropbox.com/s/999cqx5zrfi7w4p/dasiamrpn_kernel_r1.onnx?dl=0
kernel_cls1: https://www.dropbox.com/s/qvmtszx5h339a0w/dasiamrpn_kernel_cls1.onnx?dl=0
For NanoTrack:
nanotrack_backbone: https://github.com/HonglinChu/SiamTrackers/blob/master/NanoTrack/models/nanotrackv2/nanotrack_backbone_sim.onnx
nanotrack_headneck: https://github.com/HonglinChu/SiamTrackers/blob/master/NanoTrack/models/nanotrackv2/nanotrack_head_sim.onnx
USAGE:
tracker.py [-h] [--input INPUT] [--tracker_algo TRACKER_ALGO]
[--goturn GOTURN] [--goturn_model GOTURN_MODEL]
[--dasiamrpn_net DASIAMRPN_NET]
[--dasiamrpn_kernel_r1 DASIAMRPN_KERNEL_R1]
[--dasiamrpn_kernel_cls1 DASIAMRPN_KERNEL_CLS1]
[--dasiamrpn_backend DASIAMRPN_BACKEND]
[--dasiamrpn_target DASIAMRPN_TARGET]
[--nanotrack_backbone NANOTRACK_BACKEND] [--nanotrack_headneck NANOTRACK_TARGET]
[--vittrack_net VITTRACK_MODEL]
'''
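# The tracker API this sample wraps is small; a minimal sketch of standalone
# use (assuming an already-read first frame and a known initial box):
#
#     tracker = cv.TrackerMIL_create()
#     tracker.init(first_frame, (x, y, w, h))   # bbox as (x, y, w, h) ints
#     ok, box = tracker.update(next_frame)      # ok is False when tracking fails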
# Python 2/3 compatibility
from __future__ import print_function
import sys
import numpy as np
import cv2 as cv
import argparse
from video import create_capture, presets
class App(object):

    def __init__(self, args):
        self.args = args
        self.trackerAlgorithm = args.tracker_algo
        self.tracker = self.createTracker()

    def createTracker(self):
        if self.trackerAlgorithm == 'mil':
            tracker = cv.TrackerMIL_create()
        elif self.trackerAlgorithm == 'goturn':
            params = cv.TrackerGOTURN_Params()
            params.modelTxt = self.args.goturn
            params.modelBin = self.args.goturn_model
            tracker = cv.TrackerGOTURN_create(params)
        elif self.trackerAlgorithm == 'dasiamrpn':
            params = cv.TrackerDaSiamRPN_Params()
            params.model = self.args.dasiamrpn_net
            params.kernel_cls1 = self.args.dasiamrpn_kernel_cls1
            params.kernel_r1 = self.args.dasiamrpn_kernel_r1
            tracker = cv.TrackerDaSiamRPN_create(params)
        elif self.trackerAlgorithm == 'nanotrack':
            params = cv.TrackerNano_Params()
            params.backbone = self.args.nanotrack_backbone
            params.neckhead = self.args.nanotrack_headneck
            tracker = cv.TrackerNano_create(params)
        elif self.trackerAlgorithm == 'vittrack':
            params = cv.TrackerVit_Params()
            params.net = self.args.vittrack_net
            tracker = cv.TrackerVit_create(params)
        else:
            sys.exit("Tracker {} is not recognized. Please use one of the available trackers: mil, goturn, dasiamrpn, nanotrack, vittrack.".format(self.trackerAlgorithm))
        return tracker
    def initializeTracker(self, image):
        while True:
            print('==> Select object ROI for tracker ...')
            bbox = cv.selectROI('tracking', image)
            print('ROI: {}'.format(bbox))
            if bbox[2] <= 0 or bbox[3] <= 0:
                sys.exit("ROI selection cancelled. Exiting...")

            try:
                self.tracker.init(image, bbox)
            except Exception as e:
                print('Unable to initialize tracker with requested bounding box. Is there any object?')
                print(e)
                print('Try again ...')
                continue

            return
    def run(self):
        videoPath = self.args.input
        print('Using video: {}'.format(videoPath))

        camera = create_capture(cv.samples.findFileOrKeep(videoPath), presets['cube'])
        if not camera.isOpened():
            sys.exit("Can't open video stream: {}".format(videoPath))

        ok, image = camera.read()
        if not ok:
            sys.exit("Can't read first frame")
        assert image is not None

        cv.namedWindow('tracking')
        self.initializeTracker(image)

        print("==> Tracking is started. Press 'SPACE' to re-initialize tracker or 'ESC' for exit...")

        while camera.isOpened():
            ok, image = camera.read()
            if not ok:
                print("Can't read frame")
                break

            ok, newbox = self.tracker.update(image)
            #print(ok, newbox)
            if ok:
                cv.rectangle(image, newbox, (200, 0, 0))

            cv.imshow("tracking", image)
            k = cv.waitKey(1)
            if k == 32:  # SPACE
                self.initializeTracker(image)
            if k == 27:  # ESC
                break

        print('Done')
if __name__ == '__main__':
    print(__doc__)
    parser = argparse.ArgumentParser(description="Run tracker")
    parser.add_argument("--input", type=str, default="vtest.avi", help="Path to video source")
    parser.add_argument("--tracker_algo", type=str, default="nanotrack", help="One of available tracking algorithms: mil, goturn, dasiamrpn, nanotrack, vittrack")
    parser.add_argument("--goturn", type=str, default="goturn.prototxt", help="Path to GOTURN architecture")
    parser.add_argument("--goturn_model", type=str, default="goturn.caffemodel", help="Path to GOTURN model")
    parser.add_argument("--dasiamrpn_net", type=str, default="dasiamrpn_model.onnx", help="Path to onnx model of DaSiamRPN net")
    parser.add_argument("--dasiamrpn_kernel_r1", type=str, default="dasiamrpn_kernel_r1.onnx", help="Path to onnx model of DaSiamRPN kernel_r1")
    parser.add_argument("--dasiamrpn_kernel_cls1", type=str, default="dasiamrpn_kernel_cls1.onnx", help="Path to onnx model of DaSiamRPN kernel_cls1")
    parser.add_argument("--nanotrack_backbone", type=str, default="nanotrack_backbone_sim.onnx", help="Path to onnx model of NanoTrack backbone")
    parser.add_argument("--nanotrack_headneck", type=str, default="nanotrack_head_sim.onnx", help="Path to onnx model of NanoTrack headneck")
    parser.add_argument("--vittrack_net", type=str, default="vitTracker.onnx", help="Path to onnx model of vittrack")

    args = parser.parse_args()
    App(args).run()
    cv.destroyAllWindows()