Random Expression Sampling #20

Open

oneThousand1000 opened this issue Jul 26, 2024 · 2 comments
@oneThousand1000
Hi! Thanks for sharing your nice work!

I am wondering how to sample meaningful expression latent codes, for example those presented in the "Latent Expression Interpolation" section of your project page.

[screenshot: interpolation examples from the project page]

Looking forward to your reply!

@SimonGiebenhain
Owner

Hi oneThousand1000,

For this visualization I took expressions from the expression_codebook learned on the training set.
However, with the NPHM model there are often some unnatural deformations in the lip region.
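
In case it helps, here is a minimal sketch of what I mean (hypothetical names and sizes, not the exact API of this repository). It assumes the learned expression codes live in a torch.nn.Embedding, as is common for auto-decoder style codebooks, and linearly interpolates between two of its entries:

```python
import torch

# Hypothetical setup: per-frame/per-sequence expression codes stored as an
# embedding table; num_embeddings and embedding_dim are placeholders.
expression_codebook = torch.nn.Embedding(num_embeddings=1000, embedding_dim=100)
# In practice you would load the trained weights from your checkpoint, e.g.:
# expression_codebook.load_state_dict(checkpoint["expression_codebook"])

# Pick two expressions seen during training and linearly interpolate.
z_a = expression_codebook.weight[10].detach()
z_b = expression_codebook.weight[42].detach()

for t in torch.linspace(0.0, 1.0, steps=8):
    z_expr = (1.0 - t) * z_a + t * z_b
    # z_expr would then be passed to the expression/deformation network
    # together with a fixed identity code to reconstruct the deformed head.
```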

MonoNPHM provides much better movement of the face and is more robust/consistent.

FYI, NPHM models forward deformations (compatible with rasterization) and MonoNPHM represents backward deformations (compatible with NeRF/ray-based rendering).
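
To make that distinction concrete, here is a purely conceptual sketch (placeholder network and shapes, not the code of either repository) showing the direction in which each deformation field is queried:

```python
import torch

# Stand-in for a learned deformation MLP; the real networks are conditioned
# on identity/expression codes and are considerably larger.
deform_net = torch.nn.Sequential(
    torch.nn.Linear(3 + 100, 128), torch.nn.ReLU(), torch.nn.Linear(128, 3)
)

def deform(points, z_expr):
    # Predict a per-point offset conditioned on the expression code.
    inp = torch.cat([points, z_expr.expand(points.shape[0], -1)], dim=-1)
    return points + deform_net(inp)

x_canonical = torch.rand(1024, 3)   # template / canonical-space points
x_observed = torch.rand(1024, 3)    # e.g. samples along camera rays
z_expr = torch.rand(100)

# Forward (NPHM-style): canonical -> posed, so a template mesh can be
# deformed and then rasterized.
x_posed = deform(x_canonical, z_expr)

# Backward (MonoNPHM-style): observed -> canonical, so the canonical
# SDF/appearance field can be evaluated at ray samples (NeRF-style rendering).
x_canonical_pred = deform(x_observed, z_expr)
```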

Are you looking to do anything specific?
Let me know what you would need.

Kind regards,
Simon

@oneThousand1000
Author

Thank you for your response!

I have tried using MonoNPHM and have two questions:

  1. I processed a video with approximately 240 frames, and it took over 10 hours to complete. Is this processing time typical for MonoNPHM?
  2. Can the MonoNPHM model be fitted to a point cloud, similar to fitting_pointclouds.py from the NPHM repository?

Thank you again!
