perf: drop use of awkward in yield uncertainty calculation #408
Conversation
Codecov Report

Patch coverage: 100.00% (project coverage unchanged at 100.00%).

Additional details and impacted files:

    @@           Coverage Diff            @@
    ##           master      #408   +/-   ##
    =========================================
      Coverage   100.00%   100.00%
    =========================================
      Files           22        22
      Lines         2069      2072     +3
      Branches       334       337     +3
    =========================================
    + Hits          2069      2072     +3

☔ View full report in Codecov by Sentry.
Co-authored-by: Alexander Held <45009355+alexander-held@users.noreply.github.com>
I had some perf suggestions after this popped up in my notifications:

Pre-convert the parameters once, outside the loop:

    # calculate the model distribution for every parameter varied up and down
    # within the respective uncertainties
    # ensure float for correct addition
    float_parameters = parameters.astype(float)
    for i_par in range(model.config.npars):
        # central parameter values, but one parameter varied within its uncertainty
        up_pars = float_parameters.copy()
        np.add.at(up_pars, i_par, uncertainty[i_par])
        down_pars = float_parameters.copy()
        np.subtract.at(down_pars, i_par, uncertainty[i_par])

Pre-allocate stacked arrays and use in-place assignment (unsure of the shapes here, so this is pseudo-code):

    up_comb_next = np.empty(...)
    up_comb_next[...] = up_comb
    np.sum(up_comb, axis=0, out=up_comb_next[...])

Pre-allocate the variations array as well:

    up_variations = np.empty(..., dtype=...)
    for i_par in range(model.config.npars):
        ...
        up_variations[i_par] = up_yields

It might also be possible to do the above without the `up_yields = np.concatenate((up_comb, up_yields_channel_sum), axis=1)` step, i.e. directly assign the parts into the pre-allocated array. I'm not sure; see the sketch after this comment.
Looks all good to me, thanks a lot for getting this ready!
* drop use of awkward in yield uncertainty calculations
* reduced memory footprint and performance improvements for yield uncertainty calculations

`ak.sum` is much more expensive than anticipated, so this changes the matrix operations in `yield_stdev` to use `numpy` instead, as per the suggestion by @alexander-held. Partially addresses #409, follows initial improvements done in #315.
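For context on the `numpy` rewrite, below is a hedged sketch of the kind of matrix operation involved: propagating per-parameter up/down yield variations through a parameter correlation matrix with plain `numpy` arrays rather than awkward-array sums. The array shapes and inputs are synthetic assumptions for illustration; this is not the actual `yield_stdev` implementation.

```python
import numpy as np

# assumed sizes and synthetic inputs, for illustration only
n_pars, n_bins = 5, 8
rng = np.random.default_rng(1)

up_variations = rng.normal(loc=10.0, scale=0.5, size=(n_pars, n_bins))
down_variations = rng.normal(loc=10.0, scale=0.5, size=(n_pars, n_bins))
corr_mat = np.eye(n_pars)  # assumed parameter correlation matrix

# symmetrized effect of each parameter on each bin
sym_unc = (up_variations - down_variations) / 2  # shape (n_pars, n_bins)

# total variance per bin: sum_ij corr[i, j] * sym_unc[i, b] * sym_unc[j, b]
total_variance = np.einsum("ib,ij,jb->b", sym_unc, corr_mat, sym_unc)
total_stdev = np.sqrt(total_variance)
print(total_stdev)
```

Expressed this way, the whole combination is a single `einsum` over fixed-size `numpy` arrays, which is the sort of operation that avoids the cost of repeated `ak.sum` calls.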