Speeding up Bioscrape Inference #38
I will bring this up again in BE240 recitation to see if we can get help on this from someone...
Reviving this issue, since there has been some progress on it over the last few months. A related issue is to add GPU compatibility for inference. @Farnazmdi (bioscrape reviewer for JOSS) suggested Numpyro as a possible option that may speed up the inference module. @WilliamIX I believe you have worked on both of these aspects - parallelization and GPU compatibility. Do you want to add your thoughts here?
Thanks for suggesting this @ayush9pandey and @Farnazmdi. The short answer is that the bulk of bioscrape was written before Numpyro and similar tools were commonly used and stable. I agree that such libraries provide a route to potential GPU compatibility; however, they would likely require a complete overhaul of the entire library, switching from compiled Cython code to just-in-time compilation methods with completely different dependencies and implementation. At a higher level, a number of GPU-enabled ODE and SSA solvers have become available in the past few years, and I think these are better options for GPU-based inference approaches. Bioscrape is not aiming to be the fastest inference engine possible, but rather to strike a balance between speed and a functional set of unique features (many of which, such as stochastic delays, aren't readily available in other simulators).
emcee is designed to support parallelization via its `pool` argument - we should make use of this!
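To illustrate the suggestion above: emcee's `EnsembleSampler` accepts a `pool` argument and calls the pool's `map` method to evaluate the log-probability of all walkers concurrently, so an expensive simulation-based likelihood (like a bioscrape trajectory fit) is farmed out across processes. The sketch below uses a trivial Gaussian `log_prob` as a stand-in for the real likelihood; the function names here are illustrative, not bioscrape API.

```python
# Sketch of the pool-based parallelization pattern emcee uses.
# `log_prob` stands in for an expensive simulation-based likelihood;
# in real use you would pass it (and the pool) to emcee, e.g.:
#   sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, pool=pool)
import numpy as np
from multiprocessing import Pool

def log_prob(theta):
    # Placeholder likelihood: standard-normal log-density up to a constant.
    # A bioscrape likelihood would instead simulate the model at `theta`
    # and compare the trajectory to data.
    return -0.5 * float(np.sum(np.asarray(theta) ** 2))

def parallel_log_probs(walkers, pool):
    # This is essentially what emcee does internally each step when
    # given a pool: one likelihood evaluation per walker, in parallel.
    return list(pool.map(log_prob, walkers))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    walkers = [rng.standard_normal(3) for _ in range(8)]
    with Pool(2) as pool:
        lps = parallel_log_probs(walkers, pool)
    print(f"evaluated {len(lps)} walkers in parallel")
```

Because the walkers' likelihood evaluations within a step are independent, the speedup is close to linear in the number of cores as long as a single evaluation dominates the per-step overhead.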