**Problem**
Of about 20 different webcams tried, roughly 10 work out of the box and the other 10 fail with various errors. This makes the user experience of working with NVIDIA Jetson poor, because even a supposedly simple task - attaching a webcam and running a basic VLM - is a hit-or-miss experience.
**Debugging**
Different cameras, especially newer ones, can generally be made to work via `gst-launch-1.0` by adjusting parameters, but these parameters cannot be passed to Jetson-Utils. For example, some cameras require an extra `h264parse` before `nvv4l2decoder`, but the logic that currently generates the GStreamer pipeline string is complex and even has camera-specific workarounds.
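For illustration, this is the kind of manual fix that already works from the command line today. The device path and caps below are assumptions for one typical USB webcam that outputs H.264; the essential part is the extra `h264parse` before `nvv4l2decoder`, which the current pipeline generation cannot be told to insert:

```python
# Minimal sketch of launching such a pipeline from Python via GStreamer's
# own bindings, equivalent to:
#   gst-launch-1.0 v4l2src device=/dev/video0 ! \
#     video/x-h264,width=1280,height=720,framerate=30/1 ! \
#     h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw ! autovideosink
# /dev/video0 and the caps are assumptions for one particular camera.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch(
    "v4l2src device=/dev/video0 ! "
    "video/x-h264,width=1280,height=720,framerate=30/1 ! "
    "h264parse ! "          # the extra parser some newer cameras need
    "nvv4l2decoder ! "      # Jetson hardware decoder
    "nvvidconv ! video/x-raw ! autovideosink"
)
pipeline.set_state(Gst.State.PLAYING)

# Run until error or end-of-stream, then tear down.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)
```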
**Solution 1**
Rather than trying to accommodate hundreds of cameras, allow the user to pass a full GStreamer pipeline string into the `nano_llm.plugins` `VideoSource`. The current pipeline string generation logic could remain untouched, but something like `VideoSourceLowLevel(pipelinestring)` could be used by more advanced users to handle essentially all cameras (at least those handled by GStreamer).
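A minimal sketch of what that escape hatch could look like. `VideoSourceLowLevel` is the proposed class, not an existing one, and the appsink-based wiring is just one assumption about how it might pull frames; because the string is opaque to nano_llm, any camera that works under `gst-launch-1.0` would work here too:

```python
# Hypothetical sketch only: VideoSourceLowLevel does not exist in nano_llm.
# It hands the user-supplied string straight to GStreamer, skipping all
# automatic pipeline generation, and pulls decoded frames from an appsink.
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstApp', '1.0')
from gi.repository import Gst, GstApp  # GstApp needed for try_pull_sample

Gst.init(None)

class VideoSourceLowLevel:
    def __init__(self, pipelinestring):
        # The caller must terminate the pipeline with an appsink named
        # 'output', e.g. "... ! nvvidconv ! video/x-raw ! appsink name=output"
        self.pipeline = Gst.parse_launch(pipelinestring)
        self.sink = self.pipeline.get_by_name('output')
        self.pipeline.set_state(Gst.State.PLAYING)

    def capture(self, timeout_ns=Gst.SECOND):
        # Returns a Gst.Sample (caps + buffer), or None on timeout/EOS.
        return self.sink.try_pull_sample(timeout_ns)

    def close(self):
        self.pipeline.set_state(Gst.State.NULL)

source = VideoSourceLowLevel(
    "v4l2src device=/dev/video0 ! video/x-h264 ! h264parse ! "
    "nvv4l2decoder ! nvvidconv ! video/x-raw ! appsink name=output"
)
sample = source.capture()
source.close()
```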
**Solution 2**
Add a new flag `--input-decoder=cpu` or equivalent so that users can bypass the brittle v4l2 logic, which tends to work only for very basic webcams like the Logitech C270.
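Again a hedged sketch: the flag name is the one proposed above, but the argument parsing and the exact pipeline fragments are assumptions. `avdec_h264` (software decode, from gst-libav) handles nearly any camera, while `nvv4l2decoder` is the Jetson hardware path:

```python
# Hypothetical sketch: select the H.264 decoder fragment from a
# --input-decoder flag instead of the current fixed hardware path.
import argparse

def decoder_fragment(kind):
    if kind == 'cpu':
        # Software decode: slower, but works with almost any camera.
        return 'h264parse ! avdec_h264'
    # Default: Jetson hardware decode (current behavior).
    return 'h264parse ! nvv4l2decoder ! nvvidconv'

parser = argparse.ArgumentParser()
parser.add_argument('--input-decoder', choices=['gpu', 'cpu'], default='gpu')
args = parser.parse_args()

pipeline_str = (
    'v4l2src device=/dev/video0 ! video/x-h264 ! '
    f'{decoder_fragment(args.input_decoder)} ! '
    'video/x-raw ! appsink name=output'
)
print(pipeline_str)
```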
Happy to help with this; please let me know if this could be useful.