Training from Scratch #3
Comments
Hello, in addition to @Mahsa13473's comment, can you also provide the approximate training time?
Hi, by the time we submitted, we used the old one, which everyone else used as well. We used the ImageNet-pretrained VGG16 (provided by the official TensorFlow release), as shown in the command in the README. We haven't tried training everything from scratch yet, since I guess the dataset itself is not big enough for the network to understand 2D images perfectly.
The training time can vary from 1 to 3 days depending on your GPU, but I'd say at most 3 days. The bottleneck is on the CPU, since we have to read the SDF ground truth and image h5 files on the fly, so if you have a better CPU, or an SSD for SDF/image storage, you can train faster.
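For what it's worth, below is a minimal sketch of hiding that h5 read latency behind a background prefetch thread. The file list and dataset keys (`img`, `sdf_value`) are hypothetical placeholders, not the repo's actual data loader.

```python
# Sketch: prefetch SDF/image h5 reads in a worker thread so the GPU is not
# starved by on-the-fly disk I/O. Keys and file names are hypothetical.
import threading
import queue

import h5py
import numpy as np


def prefetch_batches(file_paths, out_queue):
    """Read each h5 file in a background thread and push (img, sdf) pairs."""
    for path in file_paths:
        with h5py.File(path, "r") as f:
            img = np.asarray(f["img"])        # hypothetical dataset key
            sdf = np.asarray(f["sdf_value"])  # hypothetical dataset key
        out_queue.put((img, sdf))
    out_queue.put(None)  # sentinel: no more data


def batches(file_paths, capacity=8):
    """Yield prefetched (img, sdf) pairs while the worker keeps reading ahead."""
    q = queue.Queue(maxsize=capacity)
    worker = threading.Thread(target=prefetch_batches, args=(file_paths, q), daemon=True)
    worker.start()
    while True:
        item = q.get()
        if item is None:
            break
        yield item
```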
Hi, I'm also training the network from scratch using the pre-trained VGG16, but I can't get the same result. Did you use the pre-trained VGG16? @Mahsa13473
Hi. Yes, but I couldn't get the same result with the pretrained VGG16. However, I tried a few months ago, so I'm not sure how it works with the updated version of the code. @asurada404
Hello, does anyone know where the pretrained model is?
Download vgg_16.ckpt and save to …
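In case it helps, here is a minimal sketch for fetching the TF-slim VGG16 checkpoint; the target directory below is a placeholder, so save it wherever the README specifies.

```python
# Sketch: download and unpack the TF-slim vgg_16.ckpt (ImageNet weights).
# TARGET_DIR is hypothetical -- use the path the README expects.
import os
import tarfile
import urllib.request

VGG_URL = "http://download.tensorflow.org/models/vgg_16_2016_08_28.tar.gz"
TARGET_DIR = "pretrained"  # placeholder directory

os.makedirs(TARGET_DIR, exist_ok=True)
archive_path = os.path.join(TARGET_DIR, "vgg_16_2016_08_28.tar.gz")
urllib.request.urlretrieve(VGG_URL, archive_path)

# The archive contains a single vgg_16.ckpt file.
with tarfile.open(archive_path) as tar:
    tar.extractall(TARGET_DIR)
print("Saved", os.path.join(TARGET_DIR, "vgg_16.ckpt"))
```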
@asurada404 Thanks!
@Xharlie In your opinion, what's missing in the dataset that makes it unable to understand 2D images perfectly?
The VGG was used as an encoder to extract the features of the image.
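A rough sketch of that idea using TF 1.x slim's stock VGG16, not the repo's actual network code; the chosen feature layers, input size, and checkpoint path are just for illustration.

```python
# Sketch: use VGG16 as an image encoder and restore its ImageNet weights.
# Assumes TF 1.x with tf.contrib.slim; layer picks here are illustrative.
import tensorflow as tf
import tensorflow.contrib.slim as slim
from tensorflow.contrib.slim.nets import vgg

# 224x224 is just the stock VGG16 input size, not necessarily what the repo uses.
images = tf.placeholder(tf.float32, [None, 224, 224, 3])

with slim.arg_scope(vgg.vgg_arg_scope()):
    _, end_points = vgg.vgg_16(images, num_classes=1000, is_training=False)

# Multi-scale feature maps can be pulled out of end_points by name.
feats = [end_points["vgg_16/conv2/conv2_2"],
         end_points["vgg_16/conv3/conv3_3"],
         end_points["vgg_16/conv4/conv4_3"]]

# Restore only the VGG variables from the downloaded ImageNet checkpoint.
saver = tf.train.Saver(var_list=slim.get_variables("vgg_16"))
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.restore(sess, "vgg_16.ckpt")  # path is a placeholder
```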
@asurada404 That makes sense. So the …
You can find more details in this paper, @JohnG0024.
Did anyone successfully reproduce the results? I trained the network with ground truth camera parameters and made no modifications to the code. The train/test split is the 3D-R2N2 one. I trained for about 3 days, approximately 23 epochs. The SDF loss stopped dropping, so I assumed the network had converged, but I only got poor visual results on the test-set models.
Hi there,
Thanks for releasing the code, it is amazing work!
I tried to train the network from scratch and followed all the steps mentioned in the README file, but I couldn't get the same results as the pretrained model.
I was wondering which hyperparameters were used for the pretrained one. Are they the same as the defaults in train_sdf.py?
How many epochs did you train to get the best accuracy?
Also, which dataset was used for training, the old one or the new one that you mentioned in the README?