How to increase camera capture FPS on a Raspberry Pi 4B (8 GB) with a best.onnx model #13144
👋 Hello @Killuagg, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution. If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it. If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Requirements

Python>=3.8.0 with all requirements.txt dependencies installed, including PyTorch>=1.8. To get started:

```bash
git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install
```

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled).

Status

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

Introducing YOLOv8 🚀

We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 🚀! Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. Check out our YOLOv8 Docs for details and get started with:

```bash
pip install ultralytics
```
@Killuagg hi there,

Thank you for reaching out and for providing details about your setup and issue. To help you increase the FPS for your camera capture on the Raspberry Pi 4B, here are a few suggestions:

- Reduce the inference image size (e.g. --img 320 instead of 640); smaller inputs are substantially faster on CPU.
- Use a smaller model variant such as YOLOv5n, which trades a little accuracy for a large speedup on edge devices.
- Export the model to a CPU-friendly format (ONNX, NCNN, or OpenVINO) rather than running the PyTorch weights directly.
- Decouple frame capture from inference, e.g. with threading, so the camera keeps grabbing frames while the model runs. To see which change helps most, measure your end-to-end FPS; a minimal sketch follows below.
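A self-contained timing sketch, using OpenCV only (the 100-frame window and camera index 0 are illustrative, and the model call is a placeholder for your own inference):

```python
import time

import cv2

cap = cv2.VideoCapture(0)  # same camera as --source 0
n, t0 = 0, time.time()
while n < 100:  # time 100 frames
    ret, frame = cap.read()
    if not ret:
        break
    # run your model on `frame` here to measure end-to-end FPS
    n += 1
cap.release()
print(f"{n / (time.time() - t0):.1f} FPS")
```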
If you continue to experience issues, please provide a minimal reproducible example of your code. This will help us investigate further. You can find more details on creating a minimal reproducible example here. Feel free to reach out if you have any more questions or need further assistance. The YOLO community and the Ultralytics team are always here to help! 😊
Thanks for your reply. First, when I try to run detect.py with --img 320, it produces an error: expected size 640, not 320. So I can only run at 640 on my Raspberry Pi. If I want to run a TensorRT model on my Raspberry Pi, do I need a GPU? The only device available is the CPU. Also, is there any code inside detect.py that limits my FPS?
Hi @Killuagg,

Thank you for your follow-up and for providing additional details. Let's address your concerns one by one.

Image Size Error

The error you encountered (expected size 640, not 320) occurs because ONNX models are exported with a fixed input size. Your best.onnx was exported at 640, so detect.py cannot feed it 320 inputs; to run at 320 you need to re-export the model from the original PyTorch weights (see the sketch below).

TensorRT on Raspberry Pi

TensorRT requires an NVIDIA GPU, so it will not run on a Raspberry Pi 4B, which only offers a CPU for inference. You can still optimize your CPU setup:

- Re-export the model at a smaller fixed input size, as described above.
- Try CPU-oriented runtimes such as ONNX Runtime, NCNN, or OpenVINO instead of native PyTorch.
- Decouple frame capture from inference with threading so the camera is never blocked by the model (example below).

Regarding FPS limits: there is no artificial cap in detect.py; the FPS you see is simply how fast your CPU can process each frame.
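A minimal sketch of the re-export workflow (this assumes you still have the original best.pt training weights; paths and the 320 size are illustrative):

```bash
# ONNX exports have a fixed input size, so re-export at 320
python export.py --weights best.pt --include onnx --imgsz 320

# Then run detection with the matching size
python detect.py --weights best.onnx --img 320 --conf 0.7 --source 0
```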
Code Example for Threading

Here's an example of how you might use threading to decouple frame capture from inference, so the camera keeps grabbing frames while the model runs:

```python
import threading
import time

import cv2
from yolov5 import YOLOv5  # standalone `yolov5` pip package (expects .pt weights)

# Load model
model = YOLOv5("best.pt")

# Shared state between the two threads
frame = None
lock = threading.Lock()

def capture_frames():
    """Continuously grab the most recent frame from the webcam."""
    global frame
    cap = cv2.VideoCapture(0)
    while True:
        ret, f = cap.read()
        if not ret:
            break
        with lock:
            frame = f
    cap.release()

def run_inference():
    """Run the model on the latest captured frame."""
    while True:
        with lock:
            f = None if frame is None else frame.copy()
        if f is not None:
            results = model.predict(f)
            # ... process / display results here ...
        time.sleep(0.01)  # adjust sleep time as needed

# Start threads
thread1 = threading.Thread(target=capture_frames, daemon=True)
thread2 = threading.Thread(target=run_inference, daemon=True)
thread1.start()
thread2.start()
thread1.join()
thread2.join()
```

Verify Latest Versions

Please ensure you are using the latest versions of the YOLOv5 repository and its dependencies (git pull, then pip install -r requirements.txt).

Minimum Reproducible Example

If you continue to experience issues, please provide a minimal reproducible example of your code. This will help us investigate further. You can find more details on creating a minimal reproducible example here. Feel free to reach out if you have any more questions or need further assistance. The YOLO community and the Ultralytics team are always here to help! 😊
Thank you for sharing the info. May I know another method that doesn't use TensorRT? I mean, is there a solution involving only the CPU, not the GPU? Sorry for asking. Also, does training with 2000 images affect the FPS? I have another model trained on 800 images and the FPS is the same. And why, after I run detect.py with --source 0 (webcam), can the saved MP4 file not be played on my Raspberry Pi or on Windows 11?
Hi @Killuagg,

Thank you for your detailed follow-up! Let's address your questions and concerns step by step.

CPU-Only Optimization

If you're looking to optimize YOLOv5 inference on a CPU-only setup, here are a few strategies you can employ:

- Use a smaller model variant (YOLOv5n) and a smaller input size (--img 320 with a matching export).
- Run the ONNX model through ONNX Runtime, and consider dynamic INT8 quantization to shrink and speed up the model (see the sketch below).
- Try CPU-oriented runtimes such as NCNN or OpenVINO, which are often faster than plain PyTorch on ARM devices.
- Keep the capture/inference threading approach from earlier so the camera never waits on the model.
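As an illustration of the quantization point above, a minimal sketch using ONNX Runtime's dynamic quantization (assumes onnxruntime is installed; the output filename is just an example, and accuracy should be re-validated afterwards):

```python
from onnxruntime.quantization import QuantType, quantize_dynamic

# Convert weights to INT8; activations stay float (dynamic quantization)
quantize_dynamic(
    model_input="best.onnx",
    model_output="best-int8.onnx",  # hypothetical output path
    weight_type=QuantType.QUInt8,
)
```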
Dataset Size Impact

The number of images used for training (2000 vs. 800) does not affect FPS during inference. Inference speed depends on the model architecture, the input image size, and the computational power of your device; the training-set size only changes the learned weights, not the cost per frame. A larger dataset can, however, improve the model's accuracy.

Video Playback Issues

Regarding the MP4 file not playing on your Raspberry Pi and Windows 11, this is most likely related to the codec used when saving the video. Ensure the video is saved with a widely supported codec. Here's an example of how to save the video correctly:

```python
import cv2

cap = cv2.VideoCapture(0)  # webcam

# Define the codec and create a VideoWriter object.
# 'mp4v' (MPEG-4) works for .mp4 files; use 'XVID' for .avi files.
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
out = cv2.VideoWriter("output.mp4", fourcc, 20.0, (640, 480))

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    frame = cv2.resize(frame, (640, 480))  # frame size must match the writer
    out.write(frame)  # write the frame

# Release everything when the job is finished
cap.release()
out.release()
cv2.destroyAllWindows()
```

Minimum Reproducible Example

To help us better understand and resolve your issue, could you please provide a minimal reproducible example of your code? This will allow us to reproduce the bug and investigate a solution. You can find more details on creating a minimal reproducible example here. This step is crucial for us to provide accurate and effective support.

Verify Latest Versions

Lastly, please ensure you are using the latest versions of the YOLOv5 repository and its dependencies. Feel free to reach out if you have any more questions or need further assistance. The YOLO community and the Ultralytics team are always here to help! 😊
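To quickly check whether the written output.mp4 itself is valid (before suspecting the player), a small sketch:

```python
import cv2

cap = cv2.VideoCapture("output.mp4")
print("opened:", cap.isOpened())  # False -> the file or codec is the problem
print("frames:", int(cap.get(cv2.CAP_PROP_FRAME_COUNT)))
cap.release()
```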
[The user pasted their full modified detect1.py here: a copy of YOLOv5's detect.py (Ultralytics YOLOv5 🚀, AGPL-3.0 license) with the usual usage notes, argparse/torch imports, and parse_opt()/main() structure, plus pyttsx3 added to initialize a text-to-speech engine (engine = pyttsx3.init()). The paste is omitted.]

I am using my modified detect1.py file from YOLOv5 PyTorch. I already followed the code you showed, but it still cannot show the video. Can you help me modify the code I shared?
Hi @Killuagg,

Thank you for sharing your detailed code and setup. Let's address your concerns step by step.

Video Playback Issues

The issue could be with how the video is saved or with how it is displayed; let's make sure both are handled properly.

Ensure Correct Video Saving

First, make sure the video is saved with a widely supported codec, exactly as in the VideoWriter snippet from the previous reply: create the writer once with the 'mp4v' FourCC, write each annotated frame, and release the writer when done.

Ensure Correct Video Display

Next, let's ensure that the display logic is handled correctly. Here's a simplified version of your detect1.py inference loop:

```python
import cv2
import torch

from models.common import DetectMultiBackend
from utils.dataloaders import LoadStreams
from utils.general import check_img_size, non_max_suppression, scale_boxes
from utils.plots import Annotator, colors

# Load model
device = torch.device("cpu")  # change to 'cuda' if you have a GPU
model = DetectMultiBackend("best.onnx", device=device)
stride, names = model.stride, model.names
imgsz = check_img_size((640, 640), s=stride)  # check image size

# Dataloader
source = "0"  # webcam
dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=True)

# Run inference
model.warmup(imgsz=(1, 3, *imgsz))  # warmup
stop = False
for path, im, im0s, vid_cap, s in dataset:
    im = torch.from_numpy(im).to(device)
    im = im.float() / 255.0  # 0-255 to 0.0-1.0
    if len(im.shape) == 3:
        im = im[None]  # expand for batch dim

    # Inference
    pred = model(im)

    # NMS
    pred = non_max_suppression(pred, 0.25, 0.45, None, False, max_det=1000)

    # Process predictions
    for i, det in enumerate(pred):  # per image
        im0 = im0s[i].copy()
        annotator = Annotator(im0, line_width=3, example=str(names))
        if len(det):
            # Rescale boxes from inference size back to original frame size
            det[:, :4] = scale_boxes(im.shape[2:], det[:, :4], im0.shape).round()
            for *xyxy, conf, cls in reversed(det):
                label = f"{names[int(cls)]} {conf:.2f}"
                annotator.box_label(xyxy, label, color=colors(int(cls), True))

        # Display results
        cv2.imshow(str(path), annotator.result())
        if cv2.waitKey(1) == ord("q"):  # press q to quit
            stop = True
            break
    if stop:
        break

cv2.destroyAllWindows()
```

Verify Latest Versions

Please ensure you are using the latest versions of the YOLOv5 repository and its dependencies.

Minimum Reproducible Example

If the issue persists, please provide a minimal reproducible example of your code. This will help us investigate further. You can find more details on creating a minimal reproducible example here. This step is crucial for us to provide accurate and effective support. Feel free to reach out if you have any more questions or need further assistance. The YOLO community and the Ultralytics team are always here to help! 😊
I am sorry, I am confused about where I need to place the code inside detect.py.
Hi @Killuagg,

Thank you for your patience and for providing more details about your setup. Let's clarify where to place the code within your detect1.py script. In the stock detect.py layout, both the display call and the VideoWriter write belong inside run(), in the per-image loop where the annotated frame is produced, as shown in the sketch below.
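An abridged placement sketch (not standalone code; the surrounding structure follows the stock script, and `out` is the cv2.VideoWriter from the earlier snippet, created once before the loop):

```python
for i, det in enumerate(pred):  # per image, inside run()'s main loop
    im0 = im0s[i].copy()
    annotator = Annotator(im0, line_width=3, example=str(names))

    # ... draw boxes with annotator.box_label(...) as in detect.py ...

    im0 = annotator.result()

    # 1) Display: the imshow/waitKey pair goes here
    cv2.imshow("YOLOv5", im0)
    cv2.waitKey(1)  # 1 ms; keeps the window responsive

    # 2) Save: write the annotated frame with the pre-created writer
    out.write(im0)
```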
I have evaluated my model with val.py. The dataset was images extracted from video. When tested with a test dataset from Google, it has high metrics. If I use a test dataset extracted from Raspberry Pi video, I only get around 60% on the metrics. How can I improve this?
Hi @Killuagg,

Thank you for reaching out and sharing your evaluation results. It's great that your model performs well on the Google test dataset; the drop on frames extracted from Raspberry Pi video points to a domain gap between your training images and the Pi camera's output. Let's explore some potential reasons and solutions:

- Image quality: Pi webcam frames typically have lower resolution, more motion blur, and different lighting and color balance than curated web images.
- Training data mismatch: if the training set contains few (or no) Pi-captured frames, the model has never seen that domain.
- Solution: capture frames with the actual Pi camera, label them, add them to the training set, and fine-tune the model (see the sketch below).
- Solution: use augmentations that mimic the deployment conditions (blur, noise, brightness/contrast shifts) during training.
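For the fine-tuning step, a minimal command sketch (pi_data.yaml is a hypothetical dataset config that includes your Pi-captured frames; epochs and image size are illustrative):

```bash
python train.py --weights best.pt --data pi_data.yaml --img 640 --epochs 50
```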
If you could provide a minimal reproducible example of your code, it would help us investigate further. You can find more details on creating a minimal reproducible example here. This step is crucial for us to provide accurate and effective support. Feel free to reach out if you have any more questions or need further assistance. The YOLO community and the Ultralytics team are always here to help! 😊
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed! Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Search before asking
YOLOv5 Component
Detection
Bug
Hi, I am currently trying to build traffic sign detection and recognition using YOLOv5 PyTorch with the YOLOv5s model. I am using the detect.py file to run the model, and the FPS I get is only 1. The dataset contains around 2K images, trained for 200 epochs. I run the code with:

python detect.py --weights best.onnx --img 640 --conf 0.7 --source 0

Is there any modification to the code so that I can get more than 4 FPS?
Environment
- Raspberry Pi 4B with 8 GB RAM
- Webcam
- Model: best.onnx
- Trained using YOLOv5 PyTorch
Minimal Reproducible Example
No response
Additional
No response
Are you willing to submit a PR?