
Issue about the comparison between the predicted pose and the ground truth pose #6

Open
LiXinghui-666 opened this issue Jul 18, 2022 · 2 comments


@LiXinghui-666

When I compare the optimized camera pose with the real relative pose transformation of the two input images, there is a large deviation, especially in the rotation angle phi. What transformation should be done before comparing? I don't know which part is the problem.
Below is the code I used for the comparison:

The optimized relative pose transformation:

```python
pose_pred = predicted_poses[-1].copy()
pose_relative = np.linalg.inv(input_pose) @ pose_pred
print('Pose prediction:', np.arctan2(pose_relative[1, 0], pose_relative[0, 0]) * 180 / np.pi)
```

Output: `Pose prediction: 2.1300169042266903`

The real relative pose transformation:

```python
pose_convert = np.linalg.inv(input_pose_np) @ target_pose_np
phi_convert = np.arctan2(pose_convert[1, 0], pose_convert[0, 0]) * 180 / np.pi
print(f"Phi int: {phi_int}")
print(f"Phi target: {phi_target}")
print(f"Phi convert: {phi_convert}")
```

Output: `Phi int: 155.91559686333247`, `Phi target: 119.24629196439332`, `Phi convert: -1.5495584242778404e-05`
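A note on the comparison itself: reading phi from a single `arctan2` of two matrix entries only gives the rotation about one axis and silently assumes a particular axis convention, so a convention mismatch between the predicted and ground-truth poses can produce exactly this kind of large deviation. A more robust check is the geodesic angle between the two rotation matrices, which is convention-independent. Below is a minimal sketch, assuming both poses are 4x4 homogeneous camera-to-world matrices; the variable names `T_pred`, `T_gt`, and the helper `yaw_pose` are hypothetical, not from the repository:

```python
import numpy as np

def rotation_error_deg(T_pred, T_gt):
    """Geodesic angle (degrees) between the rotation parts of two 4x4 poses.

    T_pred, T_gt: hypothetical 4x4 homogeneous transforms (assumption,
    not the repo's API). Uses trace(R) = 1 + 2*cos(theta).
    """
    R_rel = T_pred[:3, :3].T @ T_gt[:3, :3]
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

def yaw_pose(phi_deg):
    """Build a pose that rotates by phi_deg about the z-axis (illustration only)."""
    c, s = np.cos(np.radians(phi_deg)), np.sin(np.radians(phi_deg))
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0.0],
                          [s,  c, 0.0],
                          [0.0, 0.0, 1.0]])
    return T

# Two poses that differ by a 30-degree yaw:
print(rotation_error_deg(yaw_pose(10.0), yaw_pose(40.0)))  # → 30.0
```

If this geodesic error is small while the per-axis phi comparison is not, the two poses likely agree up to an axis or handedness convention, and the fix is to convert one pose into the other's frame before extracting angles.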

@yuchen-ji

> (quoting @LiXinghui-666's comment above)

I have the same question. Could you give us some help? @yenchenlin

@KJZhuAutomatic

@LiXinghui-666 Where did you find `target_pose_np`? The example only supplies two images of a car, and even the `input_pose` is virtual.
