This repository has been archived by the owner on Oct 16, 2019. It is now read-only.
Thanks a lot for the overall Android project the author posted; it saved me the time of compiling from source.
I use the Caffe2 MobileNetV1-SSD model for object detection, and it works. However, the model takes about 2.2 s to infer one image on my Android phone (admittedly with a modest CPU, a Qualcomm Snapdragon 616), whereas it takes about 500 ms per image on Windows 7 on a desktop CPU. Does `caffe2::predictor` run multithreaded? If not, how can I configure it to? Is there any other way to improve performance on Android?
Thanks a lot if someone could help me with this.
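One possible direction (a hedged sketch, not a confirmed answer from this thread): Caffe2's Android demos speed up ARM inference by rewriting the predict net to use the NNPACK engine, Caffe2's multithreaded mobile backend for convolution-heavy ops. The file names and the exact set of ops rewritten below are assumptions for illustration; whether NNPACK accelerates every op in an SSD graph depends on the Caffe2 build.

```cpp
// Hedged sketch: switch NNPACK-capable ops to the "NNPACK" engine before
// constructing the predictor. Paths and the op list are illustrative.
#include "caffe2/core/init.h"
#include "caffe2/core/predictor.h"
#include "caffe2/utils/proto_utils.h"

int main(int argc, char** argv) {
  caffe2::GlobalInit(&argc, &argv);

  caffe2::NetDef init_net, predict_net;
  CAFFE_ENFORCE(caffe2::ReadProtoFromFile("init_net.pb", &init_net));
  CAFFE_ENFORCE(caffe2::ReadProtoFromFile("predict_net.pb", &predict_net));

  // Prefer the NNPACK engine where it applies; NNPACK parallelizes
  // convolutions across cores, while the default CPU ops may run
  // single-threaded on mobile.
  for (auto& op : *predict_net.mutable_op()) {
    if (op.type() == "Conv" || op.type() == "Relu" ||
        op.type() == "MaxPool" || op.type() == "AveragePool") {
      op.set_engine("NNPACK");
    }
  }

  caffe2::Predictor predictor(init_net, predict_net);
  // ... feed input tensors and call predictor(...) as before.
  return 0;
}
```

Beyond the engine choice, building Caffe2 with `-DUSE_NNPACK=ON` and NEON enabled, and quantizing or shrinking the model, are the usual levers on this class of hardware.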