low rank deep neural networks #8
Comments
Yes.
@wenwei202 Thanks for your remarkable work.
@bachml I did not measure the speedup on ResNet. Decomposing to rank 1 should have some benefit. Is it an issue with the implementation?
@wenwei202 More tests on my baseline (a 27-layer ResNet) show that the issue is related to multi-threaded BLAS performance (Caffe on CPU).
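For what it's worth, a minimal NumPy sketch (not Caffe code, and not from the sfm branch) of one way to isolate the BLAS threading effect: pin the BLAS to a single thread, time the skinny GEMMs that a rank-1 layer produces, then rerun without the pinning to compare. The shapes below are illustrative.

```python
# Sketch only: assumes NumPy is linked against a multi-threaded BLAS
# (e.g. OpenBLAS). The env vars must be set before numpy is imported.
import os
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["OPENBLAS_NUM_THREADS"] = "1"

import time
import numpy as np

# Shapes mimicking an im2col'ed rank-1 conv layer: one 3x3x64 filter applied
# at 32*32 positions, followed by a 128x1 linear combination (a 1x1 conv).
patches = np.random.randn(3 * 3 * 64, 32 * 32).astype(np.float32)
filt = np.random.randn(1, 3 * 3 * 64).astype(np.float32)
combo = np.random.randn(128, 1).astype(np.float32)

start = time.perf_counter()
for _ in range(1000):
    response = filt @ patches   # (1, 1024): single-filter convolution
    output = combo @ response   # (128, 1024): linear-combination layer
print(f"1000 rank-1 passes: {time.perf_counter() - start:.3f}s (1 BLAS thread)")
# Rerun without the env vars above: if the multi-threaded run is slower on
# these tiny GEMMs, the threading overhead is the bottleneck, not the math.
```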
@bachml In the rank-1 case, the conv layer is decomposed into a conv layer with only one filter plus a linear-combination layer, which is essentially a conv layer with 1x1 kernels. Some code optimization may be required to fully exploit this kind of compactness.
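To make that decomposition concrete, here is a minimal NumPy sketch (illustrative only; the function name and shapes are mine, not from the sfm branch) that takes an (n, c, k, k) conv weight and produces the single shared filter plus the n x 1 combination weights via a truncated SVD:

```python
import numpy as np

def rank1_decompose(W):
    """Approximate a conv weight W of shape (n, c, k, k) by a rank-1
    factorization: one shared (c, k, k) filter plus an (n, 1) combination
    (equivalent to a 1x1 conv). Returns (basis_filter, combination)."""
    n, c, k, _ = W.shape
    W2d = W.reshape(n, c * k * k)               # flatten each filter to a row
    U, S, Vt = np.linalg.svd(W2d, full_matrices=False)
    combination = U[:, :1] * S[:1]              # (n, 1) linear-combination weights
    basis_filter = Vt[:1].reshape(1, c, k, k)   # single shared (c, k, k) filter
    return basis_filter, combination

# Illustrative usage: a 3x3 conv with 64 input and 128 output channels.
W = np.random.randn(128, 64, 3, 3).astype(np.float32)
basis, combo = rank1_decompose(W)
W_approx = (combo @ basis.reshape(1, -1)).reshape(W.shape)
print("relative error:", np.linalg.norm(W - W_approx) / np.linalg.norm(W))
```

The single filter produces one feature map and the 1x1 combination mixes it into n output channels, which is why the rank-1 layer is so cheap, but also why the gain only shows up if both small layers are implemented efficiently.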
In case you are still interested in this research topic, the details are covered in the paper, which has just been accepted to ICCV 2017.
Issue summary
I am working on low-rank deep neural networks to speed up testing (inference) for better deployability. Is anyone working on similar stuff?
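The test-time gain comes from replacing one big GEMM with two thin ones; a generic rank-r sketch for a fully connected layer (not code from the sfm branch) looks like this:

```python
import numpy as np

# Generic low-rank sketch: factor an m x n fully connected weight into two
# thin matrices of rank r, so a forward pass costs roughly r*(m+n) MACs per
# input instead of m*n.
m, n, r = 1024, 1024, 64
W = np.random.randn(m, n).astype(np.float32)

U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * S[:r]        # (m, r)
B = Vt[:r]                  # (r, n)

x = np.random.randn(n).astype(np.float32)
y_full = W @ x              # original layer: m*n MACs
y_low = A @ (B @ x)         # factored layer: r*(n + m) MACs
print("approximation error:", np.linalg.norm(y_full - y_low) / np.linalg.norm(y_full))
```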
Steps to reproduce
Code is in https://github.com/wenwei202/caffe/tree/sfm.
Related publication in ICCV 2017