It seems the optimization function `solver.retarget()` accepts two inputs: `hand_joint_seq` and `hand_frame_seq`.
The `hand_joint_seq` can easily be acquired from third-party libraries such as MediaPipe. However, I wonder how you obtained the `hand_frame_seq`, which represents the transformations between hand landmarks.
I would appreciate it if you could let me know how you obtained the `hand_frame_seq` — whether it comes from another library, or whether the implementation code is somewhere in this repository.
Best regards
hand_frame_seq refers to the sequence of hand poses derived from detectors that utilize the MANO framework. Since MediaPipe operates independently of the MANO/SMPL-X model, it doesn't provide such data.
If you're specifically interested in the retargeting aspect of DexMV, I suggest exploring our new retargeting library. It offers significantly enhanced performance compared to the previous retargeting module found in the DexMV repository.
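To illustrate the kind of data involved: a MANO-based detector outputs per-joint rotations in addition to joint positions, and the two together define a full 6-DoF frame per hand joint. The sketch below shows one way such a frame sequence could be assembled as homogeneous transforms. The function name `joints_to_frames` and the array layout are hypothetical, chosen only to mirror the `hand_joint_seq` / `hand_frame_seq` naming in this issue; this is not the repository's actual implementation.

```python
# Hypothetical sketch: combining per-joint rotations and positions
# (as a MANO-based detector would provide) into 4x4 hand frames.
# Not the actual code from this repository.
import numpy as np

def joints_to_frames(joint_pos_seq, joint_rot_seq):
    """Combine per-joint rotations (T, J, 3, 3) and positions (T, J, 3)
    into homogeneous transforms of shape (T, J, 4, 4)."""
    T, J = joint_pos_seq.shape[:2]
    frames = np.tile(np.eye(4), (T, J, 1, 1))
    frames[..., :3, :3] = joint_rot_seq  # rotation block
    frames[..., :3, 3] = joint_pos_seq   # translation column
    return frames

# Toy usage: 2 timesteps, 21 joints, identity rotations at the origin.
pos = np.zeros((2, 21, 3))
rot = np.tile(np.eye(3), (2, 21, 1, 1))
hand_frame_seq = joints_to_frames(pos, rot)
print(hand_frame_seq.shape)  # (2, 21, 4, 4)
```

The key point is that MediaPipe only gives you the translation part (landmark positions), so the rotation block of each frame is missing unless you use a MANO/SMPL-X-aware detector.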