When benchmarking a large number of small arrays, I see almost no difference in execution time between 100_000 9-element arrays and 100_000 23-element arrays. This indicates that some constant overhead dominates for small arrays.

I have already tried removing the Python loop structure (notorious for being slow) from the measurements, without it making much of a difference. Could the overhead be in cffi? Or something on the Python side in the init?
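A minimal timing sketch along these lines can separate the per-call overhead from the per-element work. Here `process` is a hypothetical stand-in for the cffi-wrapped function under test; swap in the real binding to reproduce the measurement:

```python
import timeit
import numpy as np

def process(arr):
    # Placeholder for the cffi call (hypothetical); replace with the
    # actual wrapped function when running this against the library.
    return arr.sum()

def bench(n_elements, n_calls=100_000):
    # Prebuild the inputs so allocation is excluded from the timed region.
    arrays = [np.random.rand(n_elements) for _ in range(n_calls)]
    start = timeit.default_timer()
    for a in arrays:
        process(a)
    return timeit.default_timer() - start

t9 = bench(9)
t23 = bench(23)
print(f"9-element:  {t9:.3f} s")
print(f"23-element: {t23:.3f} s")

# Model total time as t(n) = c + k*n per call. Solving with the two
# measurements estimates the constant per-call cost c (e.g. cffi
# marshalling or Python-side init) separately from the per-element cost k.
c = (23 * t9 - 9 * t23) / (23 - 9) / 100_000
print(f"estimated per-call overhead: {c * 1e9:.0f} ns")
```

If the two totals are nearly equal, the fitted constant `c` will dominate, which would point at per-call marshalling rather than the array-size-dependent work.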
Not a priority, I agree, but I don't think it should be closed before we look at it more closely. We have so few issues that we can afford to leave low-priority ones open without worrying about clutter.