Implementation in pytorch #15
@jeong-tae Hi, I'm also trying to reproduce this paper with TensorFlow, and I'm also having trouble with the APN. For your question, I think we should use early stopping during training. Besides this, I have some doubts about the APN. As I understand it, the input is a batch of images and we get a set of points (tx, ty, tl) for the attended region, so should we use these three-dimensional points to crop the current batch of images for training? If so, when can we move on to the next batch of data?
@Ostnie I think we use the points to crop the current batch; the points refer to the current image, so it must be that. Actually, I did try early stopping for the APN pretraining, but when to stop? The loss does not converge well.
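A rough sketch of that cropping step, assuming (tx, ty, tl) are pixel coordinates of the square's center and its half-length (the function name and sizes here are hypothetical, not from the paper's code):

```python
import torch
import torch.nn.functional as F

def crop_batch(images, boxes, out_size=224):
    # Hypothetical hard crop: for each image in the batch, cut the square
    # centered at (tx, ty) with half-length tl, then resize the patch back
    # to a fixed size so the next scale's network sees a zoomed-in view.
    crops = []
    for img, (tx, ty, tl) in zip(images, boxes):
        x0, x1 = int(tx - tl), int(tx + tl)
        y0, y1 = int(ty - tl), int(ty + tl)
        patch = img[:, y0:y1, x0:x1]
        crops.append(F.interpolate(
            patch.unsqueeze(0), size=(out_size, out_size),
            mode="bilinear", align_corners=False).squeeze(0))
    return torch.stack(crops)
```

Note that this hard (integer-indexed) crop is not differentiable with respect to (tx, ty, tl); it only illustrates the data flow between scales.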
@jeong-tae As you said, we should crop the current image and send it to the VGG19, then use its loss to update the APN parameters. Then we get three new points; should we still repeat the previous steps? I'm really confused about the loss of the APN, and I'm not sure how to calculate it. I guess it depends on the classification output of VGG19. As in formula (8), loss = rank loss + cross-entropy loss, is that right?
Following the paper, we should repeat this two times. The losses are not backpropagated together: the rank loss is for the APN, and the cross-entropy loss is for the conv/classifier layers. As the authors said, they should be computed in an alternating way.
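One way to realize that alternating scheme in PyTorch is to toggle `requires_grad` per phase, so each loss only updates its own sub-network. The module names below are stand-ins for illustration, not the actual RA-CNN code:

```python
import torch.nn as nn

# Hypothetical stand-ins: `backbone` plays the role of the shared VGG
# conv/classifier, `apn` the attention proposal network.
backbone = nn.Linear(8, 4)
apn = nn.Linear(8, 3)  # predicts (tx, ty, tl)

def set_phase(phase):
    # "cls": cross-entropy updates only the conv/classifier.
    # "apn": rank loss updates only the APN.
    for p in backbone.parameters():
        p.requires_grad = (phase == "cls")
    for p in apn.parameters():
        p.requires_grad = (phase == "apn")
```

In a training loop you would call `set_phase("cls")`, step on the cross-entropy loss, then `set_phase("apn")` and step on the rank loss, alternating until both converge.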
@jeong-tae Yes, you are right. Then I have some doubts about the rank loss: is it calculated from the output of the softmax layers in VGG19? That seems strange to me, because a loss carries information about its own network's parameters. Can we use VGG's loss to update the APN? I don't know how to do this; could you please show me some code for it?
Yes, it is. You can use the output of the softmax layer. I think the purpose of the rank loss is to close the gap between the performances at different scales. By doing this, the APN will propose a more precise region to increase the performance at each scale.
When I learned the backpropagation algorithm, I understood that a loss is not just a number measuring the difference between the prediction and the ground truth; it also carries information about the impact of each parameter in the network on the final loss. If we use the loss value of VGG, then that loss does not contain APN information. Although the two share most layers, the last few fully connected layers are independent of each other. In other words, if you give me a loss value from VGG and ask me to backpropagate it to optimize the parameters of the APN, I don't think it can be done. I may be wrong, but based on the backpropagation algorithm as I understand it, I really cannot see how this method works.
The rank loss is the gap between VGG1 and VGG2. You can think of it as meta-learning that teaches the difference between two networks (in this case, VGG1 and VGG2). The gap arises between different scales under attention, so the APN learns where we should focus. If the gap is large enough, the APN will try to reduce it by proposing a better attention region.
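The gap described above is typically written as a pairwise hinge on the true-class softmax probabilities of two consecutive scales: the finer scale should score the true class higher than the coarser one by at least some margin. A minimal sketch (the margin value here is an assumption, not taken from the paper):

```python
import torch

def pairwise_rank_loss(p_t_coarse, p_t_fine, margin=0.05):
    # Hinge-style rank loss: penalizes the case where the finer scale's
    # true-class probability does not exceed the coarser scale's by `margin`.
    # p_t_coarse / p_t_fine: softmax probabilities of the true class at
    # scale s and scale s+1, one value per sample in the batch.
    return torch.clamp(p_t_coarse - p_t_fine + margin, min=0).mean()
```

When the finer scale already beats the coarser one by more than the margin, the loss is zero and the APN receives no gradient; otherwise the gradient pushes the APN toward a region that improves the finer scale.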
@jeong-tae This confuses me. It seems right, but how can I backpropagate VGG's loss to the APN? I can't understand it, and it really upsets me. In TensorFlow, I don't know how to set the APN's loss to VGG's loss; could you please show me how PyTorch accomplishes this step?
Oh, you mean backpropagation for the APN? I will finish the code work soon and make it public. Then you can see the whole process as well!
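For reference, what makes this backpropagation possible is replacing the hard crop with a smooth boxcar mask built from steep sigmoids: the masked image stays differentiable with respect to (tx, ty, tl), so the rank-loss gradient can flow back into the APN. A minimal sketch of such a mask (the steepness `k` is an assumption):

```python
import torch

def attention_mask(tx, ty, tl, size, k=10.0):
    # Smooth 2-D boxcar: M(x, y) = [h(x - (tx - tl)) - h(x - (tx + tl))]
    #                              * [h(y - (ty - tl)) - h(y - (ty + tl))],
    # where h is a steep sigmoid. Multiplying the image by M approximates
    # a crop while keeping gradients w.r.t. (tx, ty, tl).
    xs = torch.arange(size, dtype=torch.float32)
    h = lambda u: torch.sigmoid(k * u)
    mx = h(xs - (tx - tl)) - h(xs - (tx + tl))
    my = h(xs - (ty - tl)) - h(xs - (ty + tl))
    return my[:, None] * mx[None, :]
```

The mask is near 1 inside the proposed square and near 0 outside, and `tl.grad` is populated after a backward pass through anything computed from the masked image.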
@jeong-tae https://github.com/Charleo85/DeepCar — this library may help you; it is written in PyTorch.
@Ostnie oh, very nice! thx! |
@Ostnie I published the code and need some help. If you are still interested in an implementation in another framework, come to https://github.com/jeong-tae/RACNN-pytorch and let's work together.
@jeong-tae Oh, great! I will study it soon, but I'm not familiar with PyTorch; let's have a try first!
Hi @jeong-tae, I'm trying to reproduce RA-CNN too. I have some doubts about the data preprocessing. In PyTorch, image pixels are rescaled to between 0 and 1, which is different from Caffe, where they stay in [0, 255]. Do you think this difference will influence the performance?
@jackshaw Hello, see https://stackoverflow.com/questions/4674623/why-do-we-have-to-normalize-the-input-for-an-artificial-neural-network
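To make the convention difference concrete: torchvision's `ToTensor` scales pixels to [0, 1] and reorders to CHW before normalization, while Caffe typically works on [0, 255] HWC arrays. A small sketch of converting one convention to the other (the ImageNet mean/std values used in the test are an assumption, not from this thread):

```python
import torch

def to_torch_convention(img_255, mean, std):
    # img_255: HWC tensor with values in [0, 255] (Caffe-style).
    # Returns a CHW tensor scaled to [0, 1] and channel-normalized,
    # matching torchvision's ToTensor + Normalize behavior.
    x = img_255.permute(2, 0, 1).float() / 255.0
    return (x - mean[:, None, None]) / std[:, None, None]
```

As long as the pretrained weights were trained with the same convention you feed them at test time, either scale works; mixing the two is a common source of accuracy drops like the one discussed below.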
@jeong-tae Thanks very much for your reply. Have you ever tried the available Caffe pretrained model? I can only get 74% accuracy, far from the reported 85%. I think I must be missing some important detail when preparing my test data, but I cannot figure out what. I just resized the shortest side of each image and then converted the resized images to LMDB format.
Nope, I didn't. In PyTorch, there is the image resize preprocessing used in the paper.
@jeong-tae I think step 2 is something like:
I think so too, exactly the same!
Could you send me the Caffe source code?
How can I get the ground truth (tx, ty, tl)?
Hi,
I am working on an implementation to reproduce this paper with PyTorch, but I am stuck on pre-training the APN network.
The original code doesn't give details about learning the APN network (step 2), nor about the convergence condition: if the loss fluctuates forever, when should I stop training?
Has anyone made progress in reproducing this? The released test code is useless for reproducing the results.
How can we try RA-CNN on other public datasets?
If anyone is interested in reproducing this, please contact me; we can discuss the training details further.