Continuous benchmarking #80
Before we commence the actual benchmarking process, it's crucial to conduct preliminary research to determine what tasks we can and should perform ourselves and what components we can potentially leverage from other packages or libraries. Here are the key aspects we need to investigate:
This task has been added to the milestone for tracking and prioritization.
@bartvanerp
Just did some very extensive research, covering:
- Benchmarking Methodology
- Benchmark Targets
- Reporting and Visualization
- Results Storage
- Benchmark Execution

@bvdmitri @albertpod let me know what you think.
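As a concrete starting point for the methodology discussion, a minimal BenchmarkTools.jl suite could look like the sketch below. The group names and the benchmarked expression are placeholders for illustration, not actual package internals:

```julia
using BenchmarkTools

# Top-level suite; groups can mirror the package's module structure.
const SUITE = BenchmarkGroup()
SUITE["linalg"] = BenchmarkGroup(["matrix", "core"])

# $-interpolation avoids benchmarking global-variable access.
A = rand(100, 100)
SUITE["linalg"]["matmul"] = @benchmarkable $A * $A

# Tune evaluation/sample counts once, then run the whole suite.
tune!(SUITE)
results = run(SUITE; verbose = false)
```

The resulting `BenchmarkGroup` of trials can then be saved and diffed between runs.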
Aside from the real performance benchmarks, we can also already start building a test suite for allocations using …
Today we discussed the issue together with @albertpod and @bvdmitri. We agree on the following plan:
- All of our packages will need to be extended with a benchmark suite containing performance and memory (allocation) benchmarks. Alternative metrics can be added later once we have developed suitable methods for testing them.
- @bartvanerp will make a start with this for the … Starting in January we will extend the benchmark suites to our other packages and will divide tasks.
- For now we will ask everyone to run the benchmarks locally when filing a PR. The benchmarking diff/results will need to be uploaded with the PR.
- Future work will be to automate this using a custom GitHub runner (our Raspberry Pi), and to visualize results online.
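For the "run locally and upload the diff with the PR" step, PkgBenchmark.jl provides a judge-style comparison between two git refs. A possible local workflow could be the following; the package name and branch names are placeholders, and the exported file name is only a suggestion:

```julia
using PkgBenchmark

# Compare the PR branch against the base branch.
# "MyPackage", "my-feature-branch" and "main" are placeholders.
comparison = judge("MyPackage", "my-feature-branch", "main")

# Export a markdown report that can be attached to the PR.
export_markdown("benchmark_diff.md", comparison)
```

The generated markdown file contains the per-benchmark time and memory ratios, which is exactly the diff we want attached to each PR.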
Made a start with the benchmarks for … There is one point which I need to adjust in my above statements: let's skip the extra allocation benchmarks, as these are automatically included in …
Coming back to the memory benchmarking: I think it will still be good to create tests for in-place functions, which we assume to be non-allocating, to check whether they are indeed still non-allocating. Kind of like a test which checks …
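Such a non-allocation check can be written with Base's `@allocated`, as long as the measurement happens after a warm-up call (so compilation is not counted) and inside a function (so non-`const` globals do not cause spurious allocations). `add!` here is a hypothetical in-place function, not one from our packages:

```julia
using Test

# Hypothetical in-place function that should not allocate.
add!(y, a, b) = (y .= a .+ b; y)

function alloc_count()
    y, a, b = zeros(10), rand(10), rand(10)
    add!(y, a, b)                 # warm-up: trigger compilation first
    @allocated add!(y, a, b)      # measure the second call
end

alloc_count()                     # warm up the measurement itself
@test alloc_count() == 0
```

Tests like this catch regressions where a refactor silently makes an in-place kernel start allocating.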
I used …
Let's make sure we have a benchmark suite boilerplate set up before the …
I'm moving this to …
In the future it would be good to have some kind of benchmarking system in our CI, so that we become aware of how changes in our code impact performance. An example of such a system is provided by FluxBench.jl and its corresponding website.
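Before a custom runner exists, one lightweight way to get there is a GitHub Actions job that runs the benchmark comparison on every pull request. The sketch below is only a hypothetical starting point: the action versions and the `benchmark/run_ci.jl` script path are assumptions, not existing files in our repositories:

```yaml
name: Benchmarks
on: [pull_request]

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: julia-actions/setup-julia@v2
      - name: Run benchmark comparison
        # benchmark/run_ci.jl is a placeholder script that would judge
        # the PR branch against the base branch and print a report.
        run: julia --project=benchmark benchmark/run_ci.jl
```

Once the Raspberry Pi runner is available, only the `runs-on` label would need to change.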