
Raspberry Pi webcam predictions #29

Open
pythonsus opened this issue Jun 5, 2024 · 5 comments
Comments

@pythonsus

How could I change the prediction video source to my Raspberry Pi webcam's video? (I have an RTSP stream if required.)

@pderrenger
Member

Hello!

To use your Raspberry Pi webcam's video for predictions with an RTSP stream, you can modify the source input in the inference command. Here’s how you can do it using the Ultralytics YOLO model:

If you're using Python:

from ultralytics import YOLO

# Load your YOLO model
model = YOLO("yolov8n.pt")

# Run inference on the RTSP stream
results = model("rtsp://your_stream_address")

Or, if you prefer using the CLI:

yolo predict model=yolov8n.pt source="rtsp://your_stream_address"

Just replace "rtsp://your_stream_address" with your actual RTSP stream URL. This will direct the model to process the video feed from your webcam.
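For a live feed you will usually want streaming mode, so results arrive frame by frame instead of the whole video being buffered first. Here is a minimal sketch; the stream address is a placeholder, and the `is_rtsp_url` helper is ours, not part of the Ultralytics API:

```python
from urllib.parse import urlparse


def is_rtsp_url(source: str) -> bool:
    """Basic sanity check: the source should be an rtsp:// URL with a host."""
    parsed = urlparse(source)
    return parsed.scheme == "rtsp" and bool(parsed.netloc)


def run_stream_inference(source: str) -> None:
    """Run YOLO predictions on a live RTSP feed, one result per frame."""
    from ultralytics import YOLO  # requires `pip install ultralytics`

    if not is_rtsp_url(source):
        raise ValueError(f"not an RTSP URL: {source!r}")
    model = YOLO("yolov8n.pt")
    # stream=True returns a generator, yielding results frame by frame
    for result in model(source, stream=True):
        print(f"{len(result.boxes)} objects in this frame")


# Example (replace with your Pi's actual stream address):
# run_stream_inference("rtsp://192.168.1.50:8554/cam")
```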

Happy coding! 🚀

@pythonsus
Author

pythonsus commented Jun 6, 2024

How do I implement this in the YOLO iOS app @pderrenger

@pderrenger
Member

Hello @pythonsus,

To implement YOLO model inference in an iOS app, you would typically use CoreML. First, convert the YOLO model to CoreML format, then integrate it into your iOS project using Swift or Objective-C.

  1. Convert the model using the Ultralytics export tool:

    yolo export model=yolov8n.pt format=coreml
  2. Import the .mlmodel into your Xcode project and use the Core ML framework to handle the predictions.

For detailed steps on model conversion and iOS implementation, you might want to check out Apple's Core ML documentation.
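The export in step 1 can also be driven from Python instead of the CLI. A hedged sketch follows; the function names are ours, and the exact artifact extension (`.mlmodel` vs. `.mlpackage`) depends on your `ultralytics` and `coremltools` versions:

```python
from pathlib import Path


def export_to_coreml(weights: str = "yolov8n.pt") -> str:
    """Export a YOLO checkpoint to Core ML and return the artifact path."""
    from ultralytics import YOLO  # requires `pip install ultralytics coremltools`

    model = YOLO(weights)
    return model.export(format="coreml")


def expected_artifact_name(weights: str, suffix: str = ".mlpackage") -> str:
    """Name the export is expected to produce: same stem, Core ML suffix."""
    return Path(weights).with_suffix(suffix).name
```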

Best of luck with your app development! 🚀

@pythonstuff8

pythonstuff8 commented Jun 7, 2024

How can I implement this code in the yolo-ios-app in Swift? In other words, how can I use an RTSP stream as the source for yolo-ios-app in Swift (not in Python)? @pderrenger

from ultralytics import YOLO

# Load your YOLO model
model = YOLO("yolov8n.pt")

# Run inference on the RTSP stream
results = model("rtsp://your_stream_address")

@pderrenger
Member

Hello @pythonstuff8,

To use an RTSP stream as a source for the YOLO model in an iOS app using Swift, you'll need to handle the stream differently since Swift does not directly support the Python code you've shown.

Here’s a general approach:

  1. Capture RTSP Stream: Use a library like MobileVLCKit to capture the video stream in your iOS app.
  2. Convert YOLO Model: Export the YOLO model to CoreML format as mentioned previously.
  3. Process Frames: As you receive frames from the RTSP stream, convert them into a format suitable for CoreML (typically a CVPixelBuffer).
  4. Run Inference: Use the CoreML model to run predictions on these frames and handle the output accordingly.

You'll need to handle the video stream and frame processing asynchronously to ensure smooth performance in your app.
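Step 3 is where most of the work is: each frame must be scaled to the model's input resolution while preserving its aspect ratio (YOLO's "letterbox" step) before being wrapped in a `CVPixelBuffer`. The Swift code depends on your capture library, but the sizing arithmetic is the same in any language; here is a small Python sketch of it, assuming YOLOv8's standard 640-pixel input size:

```python
def letterbox_size(width: int, height: int, target: int = 640) -> tuple:
    """Scaled (w, h) that fits a frame into a target x target model input
    without distorting the aspect ratio; the remainder is padded."""
    scale = target / max(width, height)
    return round(width * scale), round(height * scale)


# A 1920x1080 frame scales to 640x360, leaving 140 px of padding
# above and below to fill the 640x640 model input.
```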

Best of luck with your implementation! 🚀
