First off, I'd like to just thank you for making this code base openly available. It is very quick to set up and easy to work with. I was able to reproduce the results on KITTI with little effort. Thank you!
My question is rather specific to the rotation estimation, and I am aware it probably doesn't matter too much given the option to run the refinement step on the global registration result. However, I am curious about the mechanism behind it.
Specifically, when training DGR on KITTI, I noticed that the relative rotation error (RRE) never seems to drop below approximately 2.5 degrees, even after multiple epochs.
Moreover, I am unable to reduce the error even when overfitting a minibatch of just four KITTI samples. I can get the RTE as low as 5 cm, but the rotation error never drops below 2.56 deg, even after 100+ iterations on the same batch. Reducing the translation weight doesn't help either. Every sample seems to get stuck at 0.0447 rad (≈ 2.56 deg) of rotation error and simply can't go lower.
The error never gets any lower even after tens of iterations.
Have you encountered this while working on Deep Global Registration? It's not a major issue, but I am wondering whether it could be due to how the model is structured (backpropagating the error through the correspondences) or to some other reason.
Thank you,
Andrei
P.S. Here is an easy way to support overfitting in trainer.py:
First, one can modify get_data:
def get_data(self, iterator):
  while True:
    if self.overfit:
      # Fetch one batch on the first call, then keep returning the cached one.
      if self._cached_data is None:
        self._cached_data = iterator.next()
      else:
        print("Returning the same old batch!")
      return self._cached_data
    return iterator.next()
And just set self.overfit = True (and initialize self._cached_data = None) in the constructor, pass it via the config, etc.
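For completeness, a minimal sketch of the matching constructor change (the overfit config key is a hypothetical name; adapt it to however the config is wired up):

# Inside the trainer's __init__ (sketch; the `overfit` config key is hypothetical):
self.overfit = getattr(config, 'overfit', False)  # return the same batch when True
self._cached_data = None  # holds the cached batch for overfitting runs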
While trying to overfit the model (with no data augmentation, batch normalization, or regularization), we observed the same issue under several different training scenarios. The rotation error gets stuck at 2.563 degrees and does not go lower, even when registering a small point cloud directly with itself.
Our observations show that this does not depend on the voxel size either: the commenter above saw it on KITTI, and we saw it with 3DMatch features at voxel sizes of 0.025 and 0.5.
Have you previously encountered this issue, or do you have an idea of what may be causing it? We weren't able to find where the error comes from, but it is interesting that its lower bound is the same 2.56 degrees in all cases.
Update: We found the cause of the error: it comes from the clamping operation applied in the batch_rotation_error function. Clamping the arccos argument to at most 0.999 restricts the minimum rotation error to arccos(0.999) ≈ 2.563 degrees. We can send a pull request if you are interested in fixing the rotation error.
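For reference, here is a minimal sketch of the computation involved (not the repo's exact implementation; only the function name batch_rotation_error and the 0.999 clamp come from the discussion above):

import math
import torch

# Relative rotation error: theta = arccos((trace(R1^T R2) - 1) / 2).
# With the arccos argument clamped to at most 0.999, even a perfect
# prediction (argument == 1) reports arccos(0.999) ~= 0.0447 rad ~= 2.563 deg.
def batch_rotation_error(R1, R2, eps=0.999):
  # R1, R2: (B, 3, 3) rotation matrices.
  trace = torch.einsum('bij,bij->b', R1, R2)  # trace(R1^T @ R2) per batch element
  cos_theta = torch.clamp((trace - 1) / 2, -eps, eps)
  return torch.acos(cos_theta)

print(math.degrees(math.acos(0.999)))  # ~2.5627 degrees -- the observed floor

One possible fix is to tighten the clamp (e.g. 1 - 1e-7), which still avoids the infinite gradient of acos at ±1 while lowering the artificial floor to a negligible value.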