# YOLOX_deepsort_tracker


## 🎉 How to use

### ↳ Tracker example

```python
import cv2
from tracker import Tracker

tracker = Tracker()    # instantiate Tracker

cap = cv2.VideoCapture('test.mp4')  # open a video file

while True:
    _, img = cap.read()  # read a frame from the video
    if img is None:
        break

    img_visual, bbox = tracker.update(img)  # feed one frame and get the result

    cv2.imshow('demo', img_visual)  # show the visualized frame
    cv2.waitKey(1)
    if cv2.getWindowProperty('demo', cv2.WND_PROP_AUTOSIZE) < 1:
        break  # exit when the window is closed

cap.release()
cv2.destroyAllWindows()
```

Tracker uses YOLOX as the detector to get each target's bounding box, and DeepSORT to assign an ID to each bounding box.
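
The exact layout of the returned `bbox` is not documented here; assuming each entry carries the box corners plus a track ID, a minimal sketch for consuming the output could look like this (check `tracker.update`'s return value in the repo to confirm):

```python
# A minimal sketch, assuming each entry of `bbox` is (x1, y1, x2, y2, track_id);
# inspect tracker.update's return value in the repo for the exact layout.
seen_ids = set()
for x1, y1, x2, y2, track_id in bbox:
    seen_ids.add(track_id)
    cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
    cv2.putText(img, f'ID {track_id}', (int(x1), int(y1) - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
print(f'{len(seen_ids)} unique targets seen so far')
```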

### ↳ Select specific categories

If you only want to track specific categories, set the `filter_classes` parameter.

For example:

```python
tracker = Tracker(filter_classes=['car', 'person'])
```
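
The class names follow the detector's training labels; the pretrained YOLOX models are trained on COCO, so names such as `'car'` and `'person'` refer to COCO categories.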

### ↳ Detector example

If you don't need tracking and just want to use YOLOX for object detection, the `Detector` class makes inference easy.

For example:

```python
import cv2
from detector import Detector

detector = Detector()  # instantiate Detector

img = cv2.imread('YOLOX/assets/dog.jpg')  # load an image
result = detector.detect(img)  # detect targets

img_visual = result['visual']  # visualized image
cv2.imshow('detect', img_visual)  # show the result
cv2.waitKey(0)
```

You can also get more information, such as the raw image, bounding boxes, scores, and class IDs, from the detector's result.
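
As a sketch of reading those extra fields — the key names below are inferred from the description above and may differ, so inspect `result.keys()` in your version to confirm:

```python
# Key names inferred from the description above (raw_img / bounding box /
# score / class_id) — confirm them with print(result.keys()).
raw_img = result['raw_img']
for box, score, class_id in zip(result['boundingbox'], result['score'], result['class_id']):
    print(f'class {class_id} at {box}, confidence {score:.2f}')
```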

## 🎨 Install

1. Clone the repository recursively:

   ```bash
   git clone --recurse-submodules https://github.com/pmj110119/YOLOX_deepsort_tracker.git
   ```

   If you already cloned and forgot to use `--recurse-submodules`, you can run `git submodule update --init` (this clones the latest YOLOX repository).

2. Make sure you fulfill all the requirements: Python 3.8 or later with all `requirements.txt` dependencies installed, including `torch>=1.7`. To install them, run:

   ```bash
   pip install -r requirements.txt
   ```
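
To quickly check that the installed environment satisfies the `torch>=1.7` requirement (and whether a GPU is visible), a small sanity check helps:

```python
import torch

print('torch version:', torch.__version__)        # should be >= 1.7
print('CUDA available:', torch.cuda.is_available())
```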

## ⚡ Select a YOLOX family model

1. Train your own model or download pretrained models from https://github.com/Megvii-BaseDetection/YOLOX:

   | Model | size | mAP<sup>test</sup><br>0.5:0.95 | Speed V100<br>(ms) | Params<br>(M) | FLOPs<br>(G) | weights |
   | --- | --- | --- | --- | --- | --- | --- |
   | YOLOX-s | 640 | 39.6 | 9.8 | 9.0 | 26.8 | onedrive/github |
   | YOLOX-m | 640 | 46.4 | 12.3 | 25.3 | 73.8 | onedrive/github |
   | YOLOX-l | 640 | 50.0 | 14.5 | 54.2 | 155.6 | onedrive/github |
   | YOLOX-x | 640 | 51.2 | 17.3 | 99.1 | 281.9 | onedrive/github |
   | YOLOX-Darknet53 | 640 | 47.4 | 11.1 | 63.7 | 185.3 | onedrive/github |

   Download `yolox_s.pth` to the `weights` folder, which is Tracker's default model path.

2. You can also use other YOLOX models as the detector. For example:

    """
    YOLO family: yolox-s, yolox-m, yolox-l, yolox-x, yolox-tiny, yolox-nano, yolov3
    """
    # yolox-s example
    detector = Tracker(model='yolox-s', ckpt='./yolox_s.pth')
    # yolox-m example
    detector = Tracker(model='yolox-m', ckpt='./yolox_m.pth')
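
   These parameters combine with the ones shown earlier; for instance, a lighter model restricted to one class (the checkpoint path below is just a placeholder for your downloaded weights):

   ```python
   # Combining documented parameters: a smaller model plus class filtering.
   # './yolox_tiny.pth' is a placeholder — point it at your own checkpoint.
   tracker = Tracker(model='yolox-tiny', ckpt='./yolox_tiny.pth', filter_classes=['person'])
   ```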

## 🌹 Run demo

```bash
python demo.py --path=test.mp4
```
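
If you'd rather save the tracked result to disk than display it, a minimal sketch built on the `Tracker` API from above and OpenCV's `VideoWriter` could look like this (`output.mp4` and the `mp4v` codec are illustrative choices, not part of the repo):

```python
import cv2
from tracker import Tracker

tracker = Tracker()
cap = cv2.VideoCapture('test.mp4')

# Mirror the input stream's settings for the output file.
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter('output.mp4', cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))

while True:
    _, img = cap.read()
    if img is None:
        break
    img_visual, bbox = tracker.update(img)  # same call as the tracker example
    writer.write(img_visual)

cap.release()
writer.release()
```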