
Loss function #7

Open
alebas-fr opened this issue Mar 2, 2023 · 6 comments

Comments

@alebas-fr

Hello,

I don't fully understand your loss function. Can you give me some details, please? How do you calculate the loss?

Another question: if I don't use the graph in my architecture, is it still possible to use the same loss formula that you use?

Thanks a lot!

@abduallahmohamed
Owner

Thanks for asking:
Section 4 in the paper explains the loss function.
A TL;DR:
The loss function has 3 components:
1- MSE between the joints and the prediction
2- A consistency loss that:
2-a Makes sure the intra-distances between joints are consistent
2-b Makes sure the angles between the joints are consistent
AKA, we don't want to end up with weird angles and/or weird skeleton bone lengths. Hope this explains it.
For the second question, the loss function is independent of the graph configuration, so you can use it with any other model.
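
For a concrete picture, a minimal sketch of such a three-part loss in PyTorch might look like the following. The names three_part_loss_sketch, bone_pairs, angle_triplets, lambda_dist, and lambda_angle are illustrative placeholders, not the repository's actual identifiers; Section 4 of the paper has the exact formulation.

import torch.nn as nn

mse = nn.MSELoss()
cos = nn.CosineSimilarity(dim=-1)

def three_part_loss_sketch(pred, target, bone_pairs, angle_triplets,
                           lambda_dist=1.0, lambda_angle=1.0):
    # pred, target: (batch, time, joints, 3) joint positions.
    # bone_pairs:   (i, j) joint index pairs forming skeleton bones.
    # angle_triplets: (i, j, k) joints, with the angle measured at joint j.

    # 1) MSE between the predicted and ground-truth joints.
    loss = mse(pred, target)

    # 2a) Intra-distance (bone-length) consistency: predicted distances
    #     between connected joints should match the ground-truth distances.
    for i, j in bone_pairs:
        d_pred = (pred[..., i, :] - pred[..., j, :]).norm(dim=-1)
        d_trgt = (target[..., i, :] - target[..., j, :]).norm(dim=-1)
        loss = loss + lambda_dist * mse(d_pred, d_trgt)

    # 2b) Angle consistency: the cosine of the angle at joint j, formed by
    #     the bones (j, i) and (j, k), should match the ground truth.
    for i, j, k in angle_triplets:
        c_pred = cos(pred[..., i, :] - pred[..., j, :],
                     pred[..., k, :] - pred[..., j, :])
        c_trgt = cos(target[..., i, :] - target[..., j, :],
                     target[..., k, :] - target[..., j, :])
        loss = loss + lambda_angle * mse(c_pred, c_trgt)

    return loss

The explicit Python loops are kept for readability; in practice the pairwise distances and angles can be computed in a vectorized way.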

@alebas-fr
Author

Thank you, but in your code there are 2 cases: whether args.use_cons is set or not.
In the else branch, you use just this loss:

l2loss = nn.MSELoss().cuda()
def graph_loss(V_pred, V_trgt):
    return l2loss(V_pred, V_trgt)

and this branch is taken because the use_cons argument is set to False in your code!

Can you give some explanation, please?

Thanks

@alebas-fr
Author

What formula do you use to calculate the loss and get the results reported in your article, please? (There are many cases in the loss function in your code.)

@abduallahmohamed
Owner

args.use_cons enables or disables the consistency loss.
This is for the ablation study; see Table 3 for the effect of the different parts of the consistency loss.
The results in Tables 1 & 2 are the best results from Table 3.
So, for each dataset, you can get the flags that enable/disable the consistency loss or its internal parts (cosine, L2) from the ablation table.
For GTA-IM, the best results are obtained when the consistency loss and all of its parts are enabled.

Does this help?
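
To make the flag behaviour concrete, here is a minimal sketch of how such enable/disable switches could gate the loss terms. The flag names use_cons, use_l2, and use_cosine and the helper callables are assumptions for illustration, not a copy of the repository code.

import torch.nn as nn

l2loss = nn.MSELoss()

def ablation_loss_sketch(V_pred, V_trgt, args,
                         distance_term=None, angle_term=None):
    # distance_term / angle_term: callables returning the bone-length (L2)
    # and angle (cosine) consistency terms; placeholders for this sketch.

    # Base term: plain MSE. This corresponds to the `else` branch discussed
    # above, i.e. what you get when use_cons is disabled.
    loss = l2loss(V_pred, V_trgt)

    if getattr(args, "use_cons", False):
        # The internal parts of the consistency loss can be toggled
        # separately for the ablation study.
        if getattr(args, "use_l2", False) and distance_term is not None:
            loss = loss + distance_term(V_pred, V_trgt)
        if getattr(args, "use_cosine", False) and angle_term is not None:
            loss = loss + angle_term(V_pred, V_trgt)

    return loss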

@alebas-fr
Author

Yes, thank you. On the other hand, for PROX I saw that only L2 leads to good performance (unlike GTA), but I didn't understand why. Can you explain it to me, please?

Also, I have another question: can your visualization script be used on the PROX dataset? I want to do the visualization on PROX; how should I do that, please?

@abduallahmohamed
Owner

I didn't invest in visualizing PROX.
For the PROX performance, quoting from the paper:
[screenshots of the relevant passages from the paper]

AKA, it's a noisy and biased dataset unlike GTA-IM.
