Thanks for the great work. The demos I have seen on the project website are all quite short (within 10 seconds) and travel only a limited distance. Have you tried training StreetSurf on larger scenes, such as those spanning hundreds of kilometers? If so, would the current framework face any scalability issues? I am curious why you haven't followed the scene-splitting approach used in Block-NeRF, Mega-NeRF, and SUDS to readily scale to larger scenes.
I understand the distant-view branch might not be impacted, because NeRF++ is internally robust to such large scenes thanks to its inverted-sphere reparameterization. But what about the close-range branch that is based on NGP + NeuS? Will the hash grid have enough capacity to store features for larger scenes, or would you have to adjust N_{max}, T, and L (notation from the NGP paper) according to the size of the training sequence?
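To make the concern concrete: with the NGP notation above, the number of dense cells at the finest level grows cubically with the scene's linear extent, while the hash table holds only T entries, so collision pressure grows cubically if T is held fixed. A minimal back-of-the-envelope sketch (parameter values are illustrative defaults from the NGP paper, not StreetSurf's actual config):

```python
import math

def ngp_levels(n_min=16, n_max=2048, L=16):
    # Per-level grid resolutions from the NGP paper:
    # N_l = floor(n_min * b^l), with growth factor b chosen so that
    # level L-1 reaches n_max.
    b = math.exp((math.log(n_max) - math.log(n_min)) / (L - 1))
    return [math.floor(n_min * b ** l) for l in range(L)]

def collision_factor(resolution, T=2**19):
    # Ratio of dense grid cells to hash-table entries; values > 1 mean
    # multiple cells must share (collide on) one feature vector.
    return resolution ** 3 / T

finest = ngp_levels()[-1]  # 2048 with the defaults above
# If the scene's linear extent grows 10x or 100x but we want to keep the
# same metric cell size, the effective finest resolution scales with it,
# and collisions per entry scale cubically for a fixed T:
for scale in (1, 10, 100):
    eff = finest * scale
    print(f"extent x{scale}: finest grid {eff}, "
          f"cells per hash entry ~{collision_factor(eff):.0f}")
```

The question, then, is whether the hash function's graceful degradation under collisions is enough for hundred-kilometer sequences, or whether T and N_{max} must be scaled up (or the scene split) to keep reconstruction quality.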
Looking forward to your answers!