Distorted reconstructed point cloud from RGB-D camera #20
Comments
Hmmm... not sure I can interpret what is going on with the depth camera given your images. We have used the depth camera sensors successfully in other projects, but they may have less exacting requirements than yours. If I was to guess […]

Regarding URP: our highest priority is to support URP rendering, but unfortunately we do not have a clear timeline at the moment to get that work done. I believe the issue with it working under URP is just the shader.
I will retest with the sample scene to make sure that the distortion is there, and get back to you, so we can have a starting point that you could possibly look into. Furthermore, I will try to infer or modify the camera info topic to see if that improves it.
Hello @micahpearlman, I have retested with the sample scene with similar results; here are the pictures of the scene and the point cloud, and I also provide the camera info topic here. The effect is still clearly visible and not easy to ignore for my rail detection application, and the distortion is also noticeable when objects are further away.

Furthermore, I tried to infer the camera info data for your camera using an implementation from the Robotics Hub; the camera info topic […]

I will appreciate your help. In the meantime I will try to read the theory behind the camera_info / point cloud reconstruction and experiment with a manually constructed camera info topic. Thank you
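For anyone experimenting with a manually constructed camera info topic, a minimal sketch of ideal pinhole intrinsics is below (Python; the 640x480 resolution and 60° vertical FOV are placeholder values, and it assumes square pixels and the plumb_bob distortion model with all coefficients zero):

```python
import math

def ideal_camera_info(width, height, vertical_fov_deg):
    """Intrinsics for an ideal, distortion-free pinhole camera, e.g. for a
    hand-crafted sensor_msgs/CameraInfo message."""
    fy = (height / 2.0) / math.tan(math.radians(vertical_fov_deg) / 2.0)
    fx = fy                              # square pixels
    cx, cy = width / 2.0, height / 2.0   # principal point at the image centre
    K = [fx, 0.0, cx,
         0.0, fy, cy,
         0.0, 0.0, 1.0]                  # row-major 3x3 K, as in CameraInfo
    D = [0.0, 0.0, 0.0, 0.0, 0.0]        # plumb_bob with zero distortion
    return K, D

# e.g. a 640x480 image rendered with a 60 degree vertical FOV (placeholder values)
print(ideal_camera_info(640, 480, 60.0))
```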
@panagelak my guess is the camera info: the distortion model is incorrect. Unfortunately we don't have a clear timeline on when we can look further into this. We would very much appreciate any contributions you may have to this project if you are able to further pinpoint the issue in code. Happy to help walk you through the code. The main place it is calculated is here:
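For context, what depth_image_proc does at that point is essentially the standard pinhole back-projection sketched below (a minimal numpy sketch of the general technique, not the project's actual code); note that it treats each depth value as the Z distance along the optical axis and derives x and y from the pixel position and the four intrinsics fx, fy, cx, cy:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres, Z along the optical axis) into an
    Nx3 point cloud using the four pinhole intrinsics from camera_info."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel column / row indices
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```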
@panagelak if you uncheck "Physical Camera" in the camera parameters, does it change at all?
Hello, I think I located the issue. I iterated over the values of the depth image and I believe there is a problem with how the depth values are represented. The depth value (distance) seems to be calculated like a ray hitting the object, so it grows the further the pixel is from the image center, while ROS expects the depth image to contain the distance along the camera's optical axis (the orthogonal/Z distance). So if you filled the FOV with a wall at a distance of 1 m, you should see 1.0 m everywhere in the depth data. I verified this by reproducing the scene with a Gazebo camera and checking its depth values. Furthermore, regarding the // Fill in XYZ step, it requires only 4 parameters (which are inferred from the camera info topic), and for z it just passes through the value from the depth image. I tried to fix the depth values so they represent the orthogonal distance from the camera (based on pixel location, FOV etc.) but was unsuccessful. Let me know if you have any ideas. Thank you
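If the image really does store ray lengths, the correction to Z-depth is a per-pixel division by the normalised ray length. Below is a minimal numpy sketch of that conversion, assuming fx/fy/cx/cy match the published camera_info; in the project itself this would more naturally live in the depth shader or the C# publisher rather than on the ROS side:

```python
import numpy as np

def range_to_z_depth(range_img, fx, fy, cx, cy):
    """Convert a 'range' image (Euclidean distance from the camera origin to
    the hit point along each pixel's ray) into the Z-depth image that
    depth_image_proc expects (distance along the optical axis)."""
    h, w = range_img.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    xp = (u - cx) / fx                   # un-normalised ray direction (xp, yp, 1)
    yp = (v - cy) / fy
    ray_len = np.sqrt(xp * xp + yp * yp + 1.0)
    return range_img / ray_len           # Z = range * cos(angle to optical axis)
```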
@micahpearlman also, unchecking "Physical Camera" seemed to improve it a little.
So theoretically the […]. That being said, try adjusting the […]. Also, in […]. None of these should have anything to do with the distortion; I'm just shooting in the dark.
Any luck with this?
No, unfortunately I was busy with my work projects. I tried to modify the zorgbd shader a little, unsuccessfully. The issue lies firstly in the depth data values (e.g. a wall 1 m in front of the camera and parallel to the image plane should fill the data with 1.0 everywhere), and secondarily, if at all, with the camera info.
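One quick way to test that hypothesis empirically is to place a flat wall 1 m in front of the camera and inspect the published depth values, for example with a small rospy script like the sketch below (the topic name is a placeholder):

```python
#!/usr/bin/env python
# Sanity check: with a flat wall 1 m in front of the camera, every pixel should
# read ~1.0 if the image stores Z-depth; corners noticeably larger than the
# centre indicate ray distances instead. (Depth may be 32FC1 in metres or
# 16UC1 in millimetres depending on the publisher.)
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def on_depth(msg):
    depth = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
    h, w = depth.shape[:2]
    rospy.loginfo("center=%.3f corner=%.3f", depth[h // 2, w // 2], depth[0, 0])

rospy.init_node("depth_sanity_check")
# Topic name is a placeholder; remap it to the simulated camera's depth topic.
rospy.Subscriber("/camera/depth/image_raw", Image, on_depth)
rospy.spin()
```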
Hello, I think this project is really good for simulating sensors in Unity as a replacement for Gazebo.
What drew me here was specifically the depth camera implementation, which I couldn't find in a proper ROS-compatible form anywhere else;
combining it with the Unity Robotics Hub visualization package in two Unity instances, I find myself not needing either Gazebo or RViz.
However, I noticed some things.
The reconstructed point cloud (built from the depth/RGB images and camera_info by depth_image_proc) appears to be distorted; this is also present in the sample scene.
I am not sure whether the fault lies with the camera_info, the depth data, or a bug in depth_image_proc.
I assume you have encountered a similar problem, and I would like to know if you can help me with it.
Here are some pictures describing the problem
Furthermore, I noticed that when changing the graphics setting to the URP render pipeline, the depth camera was not able to render, although URP has Vulkan support.
I have some projects that require URP, and it will probably become the default pipeline in the future.
Thanks a lot!
Panagiotis