Using Ouster instead of Velodyne #4
Hi, could you tell me which lidar you use? I would like to know if the point cloud is projected in laser layers.
We use the Ouster OS1-128-U.
What camera are you using? Is the camera calibrated with the OS1-128-U? If you don't see the image automatically in RViz, it could be because of a wrong image topic name. Keep in mind that the package I coded was made for a lidar with few laser layers to interpolate. If you use a lidar with 128 laser layers, the interpolation might not run in real time due to the huge volume of point cloud data. In the demonstration video, I used a 16-layer lidar and interpolated it up to 10 times, making a virtual 160-layer point cloud. Maybe you don't need to interpolate your OS1-128-U lidar at all, but just have it calibrated correctly with the Lidar-Camera package.
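For illustration, the layer interpolation described above can be sketched as inserting linearly interpolated rings between each pair of adjacent laser rings. This is a hypothetical simplification, not the package's actual code; it assumes rings are stored as lists of corresponding (x, y, z) points:

```python
def interpolate_layers(rings, factor):
    """Insert (factor - 1) linearly interpolated rings between each pair of
    adjacent laser rings. `rings` is a list of rings; each ring is a list of
    (x, y, z) tuples, with corresponding points sharing the same index."""
    out = []
    for lower, upper in zip(rings, rings[1:]):
        out.append(lower)
        for k in range(1, factor):
            t = k / factor  # fraction of the way from `lower` to `upper`
            out.append([tuple((1 - t) * a + t * b for a, b in zip(p, q))
                        for p, q in zip(lower, upper)])
    out.append(rings[-1])
    return out
```

With 128 real layers the number of interpolated points grows quickly, which is why real-time interpolation becomes questionable on dense lidars.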
We just realized that we use ROS Noetic. Do you know if this package also works with it?
The Lidar-Camera calibration has a package branch with ROS Noetic. link
Our Lidar-Camera fusion code was tested with ROS Noetic. The results are detailed in this issue.
Hmm. Would it be possible for you to look over our problem? If you help us, we will thank you with a little bit of PayPal support :D
The purpose of the issues is to let the GitHub community know about bugs and fix them. You can describe the problem in these messages and we will help you at no cost. Tell me, what is the problem? Have you been able to use the Lidar-Camera package?
I have calibrated a Velodyne HDL-64 LiDAR and camera. I gave the correct input topics in the launch file, but I am able to visualize only the LiDAR data and the camera image separately. I am not able to visualize the fused data and depth detection. Can you please help?
Hello, can you upload an image of rqt_graph to visualize the nodes and their connections?
Please launch only the lidar_camera_node node.
This node already performs the point cloud interpolation internally.
The lidar_camera_node node is not subscribing to the image topic. Check the name of the image topic going into the node. If you don't know how to see the topics of your rosbag, use `rosbag info <your_bag.bag>`.
As you are using a rosbag, I suggest you use the vlp16OnImg_offline.launch launch file.
The lidar camera node needs the topic /velodyne_points and the image topic /camera/color/image_raw.
Could you check whether the image and point cloud topics of your rosbag have a timestamp in the header? The lidar camera node subscribes to both topics by synchronizing them with the time stamp in the topic header.
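In ROS this pairing is typically handled by `message_filters.ApproximateTimeSynchronizer`. The matching idea it relies on can be sketched with a plain-Python helper (hypothetical, simplified; real message_filters also manages queues and callbacks):

```python
def pair_by_timestamp(cloud_stamps, image_stamps, slop=0.05):
    """Pair point-cloud and image messages whose header stamps (in seconds)
    differ by at most `slop`, mimicking the idea behind ROS's approximate
    time synchronization. Returns a list of (cloud_index, image_index)."""
    pairs = []
    j = 0
    for i, tc in enumerate(cloud_stamps):
        # advance j to the image whose stamp is closest to this cloud's stamp
        while (j + 1 < len(image_stamps)
               and abs(image_stamps[j + 1] - tc) <= abs(image_stamps[j] - tc)):
            j += 1
        if abs(image_stamps[j] - tc) <= slop:
            pairs.append((i, j))
    return pairs
```

If one of your topics has a zero or missing header stamp, no pair ever falls within the tolerance and the fusion callback never fires, which matches the symptom of seeing both streams but no fused output.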
Yes, it has a timestamp. A few months back, with some other algorithm, I already projected this point cloud onto the image, then applied YOLO to the image with the projected point cloud. It classifies obstacles and creates bounding boxes, and inside each bounding box the points from that obstacle are also there. But my next doubt is: how do I calculate the depth of that obstacle?
Could you show me the roslaunch file? I want to see the parameters you have set. I also want to know the calibration matrix in the cfg folder.
Have you obtained the rlc and tlc values with any calibration method? And set them to:
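For context, rlc and tlc are the lidar-to-camera rotation and translation. How such extrinsics are used to put a lidar point onto the image plane can be sketched with a pinhole model (a minimal illustration; the package's actual parameter names and projection code may differ):

```python
def project_point(p_lidar, rlc, tlc, fx, fy, cx, cy):
    """Transform a lidar point into the camera frame using the extrinsics
    (rlc: 3x3 rotation as nested lists, tlc: 3-vector translation), then
    project it with a pinhole model (fx, fy: focal lengths in pixels;
    cx, cy: principal point). Returns (u, v, depth), or None if the point
    lies behind the camera."""
    # p_cam = rlc @ p_lidar + tlc
    p_cam = [sum(rlc[r][c] * p_lidar[c] for c in range(3)) + tlc[r]
             for r in range(3)]
    x, y, z = p_cam
    if z <= 0:  # behind the image plane; cannot be projected
        return None
    u = fx * x / z + cx
    v = fy * y / z + cy
    return (u, v, z)
```

If rlc or tlc are wrong, points project to the wrong pixels (or behind the camera entirely), which is why the calibration matrix matters before any fusion can look right.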
To calculate the depth of an object detected by YOLO, you need the pixel coordinates of the detected objects. The bounding box of each detected object is then traversed with two for loops, and inside the loops you query the value of the depth point that is projected onto the image plane at that pixel. Then you can average those values or take their median. The article discusses how the distances of the objects are calculated and how shrinking the bounding box improves the depth estimation.
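The procedure described above can be sketched as follows. Here `depth_image` is a hypothetical 2D array holding the projected lidar depth at each pixel (0 where no lidar point landed), and `shrink` trims the box edges as the reply suggests; this is an illustration of the idea, not the article's code:

```python
def object_depth(depth_image, bbox, shrink=0.0):
    """Estimate an object's depth as the median of the nonzero depth values
    inside its (optionally shrunken) bounding box.
    bbox = (x_min, y_min, x_max, y_max) in pixels; shrink in [0, 0.5)
    trims that fraction off each side to exclude background points that
    often sit near the box edges."""
    x_min, y_min, x_max, y_max = bbox
    dx = int((x_max - x_min) * shrink)
    dy = int((y_max - y_min) * shrink)
    depths = []
    for v in range(y_min + dy, y_max - dy):        # rows
        for u in range(x_min + dx, x_max - dx):    # columns
            d = depth_image[v][u]
            if d > 0:                              # pixel has a lidar point
                depths.append(d)
    if not depths:
        return None  # no lidar points fell inside the box
    depths.sort()
    n = len(depths)
    mid = n // 2
    return depths[mid] if n % 2 else 0.5 * (depths[mid - 1] + depths[mid])
```

The median is usually preferable to the mean here, since stray background points inside the box would otherwise bias the estimate toward larger distances.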
Thank you so much for helping me out and giving your time. The point cloud projected onto the image is working perfectly, and the colored point cloud is also working perfectly. Now can you please help me with reference code for depth estimation of an obstacle?
Sorry, but I don't have any code on GitHub that does that process.
Hello, have you continued your research on depth estimation?
Hey, my colleague and I are trying to use the code you provided.
But we use another lidar. Both lidars use the ROS datatype PointCloud2.
We changed the input ROS topics and also the launch files.
Everything starts without a problem.
When RViz starts, we can see the image from the lidar, and we can configure the inputs so that we can see the image from the camera. But we can't see any fusion.
Can you help us?