Releases: fcakyon/yolov5-pip
v5.0.5
- Synchronized with the ultralytics/yolov5 repo as of 11.05.21.
PLUS:
- neptune.ai experiment logging support:
yolo_train --data coco.yaml --weights yolov5s.pt --neptune_token YOUR_TOKEN --neptune_project YOUR/PROJECT
- mmdet-style metric logging support:
yolo_train --data coco.yaml --weights yolov5s.pt --mmdet_tags
v5.0.3
- Updated to the ultralytics/yolov5 repo as of 24.04.21.
v5.0.1
v5.0.0
Basic Usage
import yolov5
# model
model = yolov5.load('yolov5s')
# image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# inference
results = model(img)
# inference with larger input size
results = model(img, size=1280)
# inference with test time augmentation
results = model(img, augment=True)
# show results
results.show()
# save results
results.save(save_dir='results/')
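Beyond show() and save(), the Results object exposes the raw predictions: results.xyxy[0] holds one row per detection in [x1, y1, x2, y2, confidence, class] corner format. As a minimal sketch of working with those boxes (plain floats stand in for tensor values; the helper name is illustrative, not part of the package), converting a corner-format box to center-based xywh looks like:

```python
# Sketch: convert one [x1, y1, x2, y2] box (the layout of results.xyxy rows)
# to center-based [xc, yc, w, h]. Plain floats stand in for tensor values.
def xyxy_to_xywh(x1, y1, x2, y2):
    w = x2 - x1
    h = y2 - y1
    return (x1 + w / 2, y1 + h / 2, w, h)

print(xyxy_to_xywh(100.0, 50.0, 300.0, 250.0))  # → (200.0, 150.0, 200.0, 200.0)
```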
Scripts
You can call the yolo_train, yolo_detect, and yolo_test commands after installing the package via pip:
Training
Run the commands below to reproduce results on the COCO dataset (the dataset auto-downloads on first use). Training times for YOLOv5s/m/l/x are 2/4/6/8 days on a single V100 (multi-GPU is proportionally faster). Use the largest --batch-size
your GPU allows (the batch sizes shown are for 16 GB devices).
$ yolo_train --data coco.yaml --cfg yolov5s.yaml --weights '' --batch-size 64
                                         yolov5m                            40
                                         yolov5l                            24
                                         yolov5x                            16
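The batch sizes above are tuned for 16 GB GPUs. As a rough starting point for other cards, they can be scaled with available memory; the sketch below assumes roughly linear scaling (an approximation, since real memory use is not perfectly linear), using the reference values from the table:

```python
# Naive sketch: scale the 16 GB reference batch sizes to another GPU memory
# budget. Assumes roughly linear scaling, which is only an approximation.
REFERENCE_BATCH = {"yolov5s": 64, "yolov5m": 40, "yolov5l": 24, "yolov5x": 16}

def scaled_batch(model, gpu_mem_gb, ref_mem_gb=16):
    return max(1, int(REFERENCE_BATCH[model] * gpu_mem_gb / ref_mem_gb))

print(scaled_batch("yolov5s", 8))   # → 32
print(scaled_batch("yolov5x", 32))  # → 32
```

Treat the result as a starting point only; out-of-memory errors mean the estimate was too high for that model and image size.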
Inference
The yolo_detect command runs inference on a variety of sources, downloading models automatically from the latest YOLOv5 release and saving results to runs/detect:
$ yolo_detect --source 0 # webcam
file.jpg # image
file.mp4 # video
path/ # directory
path/*.jpg # glob
rtsp://170.93.143.139/rtplive/470011e600ef003a004ee33696235daa # rtsp stream
rtmp://192.168.1.105/live/test # rtmp stream
http://112.50.243.8/PLTV/88888888/224/3221225900/1.m3u8 # http stream
To run inference on the example images in data/images:
$ yolo_detect --source data/images --weights yolov5s.pt --conf 0.25
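The --conf 0.25 flag discards detections whose confidence falls below that threshold. Its effect can be sketched in plain Python (the detection dicts are illustrative placeholders, not the package's actual output type):

```python
# Sketch of confidence-threshold filtering, mirroring the --conf 0.25 flag.
# These dicts are illustrative only, not yolov5-pip's real output format.
detections = [
    {"label": "person", "conf": 0.91},
    {"label": "tie", "conf": 0.31},
    {"label": "person", "conf": 0.12},
]
conf_thres = 0.25
kept = [d for d in detections if d["conf"] >= conf_thres]
print([d["label"] for d in kept])  # → ['person', 'tie']
```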
v4.0.14
v4.0.13
v4.0.12
v4.0.11
- Fixed inference from a string image filepath (https://github.com/fcakyon/yolov5-pip/issues/9)