
Problematic normalization #3

Open
Coco-hanqi opened this issue Feb 14, 2022 · 7 comments
Labels
bug Something isn't working

@Coco-hanqi

[Screenshot: Screen Shot 2022-02-13 at 5 36 52 PM]
I got a validation accuracy of around 58%, lower than the one reported in the paper. Is the lower accuracy caused by this "Problematic normalization" error?

@matyasbohacek
Owner

Thanks for opening this ticket, @Coco-hanqi.

Yes, this seems strange and is probably related to the under-performing accuracy. Let me try to reproduce this behavior on a new machine and investigate the source of the problem -- I will get back to you here shortly.

@matyasbohacek matyasbohacek added the bug Something isn't working label Feb 15, 2022
@matyasbohacek matyasbohacek self-assigned this Feb 15, 2022
@caicairay

I ran into this problem as well. It seems to be caused by problematic keypoint data: the left and right shoulder keypoints overlap in some frames.
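If the shoulder keypoints coincide, any normalization that divides by the shoulder span degenerates into a division by (nearly) zero. A minimal sketch of that failure mode and a guard against it (function and variable names are illustrative, not the repository's actual code):

```python
def normalize_by_shoulders(points, left_shoulder, right_shoulder, eps=1e-6):
    """Scale 2D keypoints by the shoulder span; guard against overlap.

    `points` is a list of (x, y) tuples; the shoulders are (x, y) tuples too.
    Illustrative sketch only -- not the repository's implementation.
    """
    span = ((left_shoulder[0] - right_shoulder[0]) ** 2
            + (left_shoulder[1] - right_shoulder[1]) ** 2) ** 0.5
    if span < eps:
        # Shoulders overlap in this frame -> the "Problematic normalization"
        # case. Fall back to leaving the frame unscaled (or skip the frame).
        return points
    return [(x / span, y / span) for x, y in points]
```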

@matyasbohacek
Owner

matyasbohacek commented Apr 5, 2022

Dear @Coco-hanqi & @caicairay,

I finally managed to reproduce this on an older external machine. Upon further research, it seems that some PyTorch versions have various low-level issues with model checkpointing via saving the state dictionary. Interestingly enough, the same code with no edits worked on my macOS machine. Hence, the script has been rewritten to save the entire model object instead, which resolved this on the Linux instance where I reproduced it.
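For reference, these are the two checkpointing styles being contrasted, sketched in minimal PyTorch (the toy model and filenames are illustrative, not the repository's script):

```python
import torch

# Toy model for illustration.
model = torch.nn.Linear(4, 2)

# (a) Checkpoint the state dict -- weights only, decoupled from the class
# definition (the style the issue suggests misbehaved on some setups):
torch.save(model.state_dict(), "checkpoint_state.pth")
restored = torch.nn.Linear(4, 2)
restored.load_state_dict(torch.load("checkpoint_state.pth"))

# (b) Checkpoint the entire model object (what the rewritten script
# reportedly does) -- pickles the class too, so loading requires the same
# code layout to be importable:
torch.save(model, "checkpoint_full.pth")
restored_full = torch.load("checkpoint_full.pth", weights_only=False)  # PyTorch >= 1.13
```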

Please, let me know if this issue persists, and we can look into this deeper. Feel free to reach out should any other problems or questions arise.

@andresherrera97

andresherrera97 commented Aug 29, 2022

I'm getting the same "Problematic normalization" issue on an Ubuntu 22 machine. Any update on this? Should I expect it to affect performance?

@andresherrera97

I realized that the "Problematic normalization" error occurs during training when the augmentation happens to be rotation. Disabling the rotation option prevents the error, and accuracy on the test set increases to 60%, which is still below the results in the paper.
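For context, rotation augmentation of 2D keypoints is a standard rotation about some center point; after such a rotation (or subsequent clipping) the shoulder keypoints can end up nearly coincident, which would trigger the normalization error described above. A minimal sketch of such a rotation (illustrative only, not the repository's augmentation code):

```python
import math

def rotate_points(points, angle_deg, center=(0.5, 0.5)):
    """Rotate a list of (x, y) keypoints around `center` by `angle_deg` degrees.

    Illustrative sketch of a generic 2D keypoint rotation augmentation.
    """
    a = math.radians(angle_deg)
    cx, cy = center
    out = []
    for x, y in points:
        dx, dy = x - cx, y - cy
        out.append((cx + dx * math.cos(a) - dy * math.sin(a),
                    cy + dx * math.sin(a) + dy * math.cos(a)))
    return out
```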

@saurus2

saurus2 commented Nov 15, 2022

Hi,
I had the same problem: the validation accuracy never exceeded 60%.

I first ran it on macOS and hit the problem, then tried my campus HPC server and got the same result.

I used the WLASL 100 25fps dataset. Does anybody have an idea?

@ShortCatislong

> I realized that the "Problematic normalization" error occurs during training when the augmentation happens to be rotation. Disabling the rotation option prevents the error, and accuracy on the test set increases to 60%, which is still below the results in the paper.

Could you please remind me which rotation method you disabled:
__rotate(), augment_rotate(), or augment_arm_joint_rotate()?


6 participants