Enable benchmark monitoring with regression CI hook #2265
/bounty $750 |
💎 $750 bounty • ZIO
Steps to solve:
Additional opportunities:
Thank you for contributing to zio/zio-http!
|
I've been making some good progress on this in a separate repo. /attempt #2265

I'm going to make a GitHub action that will parse the JMH output and compare its performance against past run data (serialized and stored in a separate branch). If the benchmarks fall beneath the configured threshold, it will fail CI. I'm also going to try to have it post the benchmark results as a comment on the Pull Request. This action can be a separate …
|
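To make that comparison step concrete, here is a minimal Scala sketch of what such a check could look like: it reads JMH CSV output for a baseline run and the current run, then exits non-zero when a benchmark's throughput score drops by more than a threshold. The file names, the use of JMH's CSV report format, and the 10% threshold are illustrative assumptions, not details of the actual action.

```scala
import scala.io.Source

// Minimal sketch, assuming JMH was run with a CSV report (-rf csv) and that
// higher scores are better (throughput mode). Not the actual action's code.
object CompareBenchmarks {

  // JMH CSV columns (no @Param columns assumed):
  // "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
  def loadScores(path: String): Map[String, Double] =
    Source.fromFile(path).getLines().drop(1).map { line =>
      val cols = line.split(",").map(_.trim.stripPrefix("\"").stripSuffix("\""))
      cols(0) -> cols(4).toDouble
    }.toMap

  def main(args: Array[String]): Unit = {
    val threshold = 0.10 // fail if a score drops by more than 10% (assumed value)
    val baseline  = loadScores("baseline.csv")
    val current   = loadScores("current.csv")

    val regressions = current.collect {
      case (name, score) if baseline.get(name).exists(b => score < b * (1 - threshold)) =>
        s"$name: ${baseline(name)} -> $score"
    }

    if (regressions.nonEmpty) {
      regressions.foreach(r => println(s"REGRESSION: $r"))
      sys.exit(1) // non-zero exit fails the CI step
    } else println("No regressions detected.")
  }
}
```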
Thank you for your serious attitude towards zio-http performance! It is already one of the fastest contemporary Scala web servers; just see the results here and here. Here are a couple of ideas for JMH benchmarking:
Also, my 2 cents for HTTP-server benchmarking:
|
UPDATE: I've created the following JMH Benchmark Action repository. One bigger concern is that these benchmarks take a good deal of time to run, even on a relatively powerful M2 Mac. Doing this in CI, even configured with fewer iterations/forks, will still be slow. One option would be to only run this action if there's a …

UPDATE: And, as usual, the complexity unfurls itself as you approach the end. It turns out it's not as simple to merely "comment on a pull request" as it first appeared (more info here: #2369). But I have spotted a workaround. Another thought from that PR: There's a lot of variance in certain very high …
|
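On the run-time concern, one lever is JMH's warmup/measurement/fork settings. Below is a minimal sketch of a CI-tuned benchmark, assuming sbt-jmh; the package name, iteration counts, and workload are made up and not taken from the actual zio-http benchmarks.

```scala
package zio.http.benchmarks // hypothetical package name

import java.util.concurrent.TimeUnit
import org.openjdk.jmh.annotations._

// CI-oriented settings: few warmup/measurement iterations and a single fork,
// trading statistical confidence for wall-clock time.
@State(Scope.Benchmark)
@BenchmarkMode(Array(Mode.Throughput))
@OutputTimeUnit(TimeUnit.SECONDS)
@Warmup(iterations = 2, time = 1)
@Measurement(iterations = 3, time = 1)
@Fork(1)
class ExampleCiBenchmark {

  // Placeholder workload standing in for a real zio-http benchmark.
  @Benchmark
  def encodeSomething(): Int =
    (1 to 1000).map(_ * 2).sum
}
```

With sbt-jmh, the same settings can typically be overridden on the command line instead (e.g. `Jmh/run -i 3 -wi 2 -f 1`), so the annotated defaults can stay suitable for thorough local runs.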
/attempt #2265 |
Alrighty. A summary of open design questions:
|
@kitlangton are you still on this or can I make an attempt? |
/claim #2502 |
/attempt #2265 |
Hi @jdegoes - please check this solution.

Impact: Undetected Performance Regressions: Without benchmark monitoring, performance regressions may go unnoticed, leading to degraded system performance. The lack of a CI hook for regression testing means changes in the codebase may not undergo performance testing during the CI/CD pipeline.

Inspect Current Monitoring Setup: Observe the absence of benchmark monitoring in the current system. Explore the system configuration or relevant scripts to enable benchmark monitoring. Execute benchmark monitoring after attempting to enable it. After the task is completed, benchmark monitoring should be active, capturing relevant performance metrics.

Benchmark Monitoring: Integrate a suitable benchmark monitoring tool or solution into the system configuration. Implement a CI hook that triggers regression testing for performance-related changes.

Example CI/CD Configuration (GitLab CI):

```yaml
stages:
  - benchmark

benchmark:
  stage: benchmark
  script:
    - ./run_benchmarks.sh
```

Ensure the selected benchmark monitoring tool aligns with system requirements.

Proof of Concept: Assuming you are using a Unix-like system and want to integrate Apache Benchmark (ab) for benchmarking, here's a basic script:

run_benchmarks.sh:

```bash
#!/bin/bash

# Set variables
TARGET_URL="http://your-api-endpoint.com/"
BENCHMARK_RESULTS_FILE="benchmark_results.txt" # output file (name assumed)

# Run Apache Benchmark (ab)
ab -n 100 -c 10 "$TARGET_URL" > "$BENCHMARK_RESULTS_FILE"

# Print benchmark results
cat "$BENCHMARK_RESULTS_FILE"
```

This script sends 100 requests (-n 100) with a concurrency of 10 (-c 10) to the specified API endpoint ($TARGET_URL) and writes the results to $BENCHMARK_RESULTS_FILE.
|
@uzmi1: Reminder that in 7 days the bounty will become up for grabs, so please submit a pull request before then 🙏 |
/claim #2265 |
The bounty is up for grabs! Everyone is welcome to … |
/attempt #2265 |
@nermalcat69: Reminder that in 7 days the bounty will become up for grabs, so please submit a pull request before then 🙏 |
The bounty is up for grabs! Everyone is welcome to … |
After digging a little bit, I found a few flaws in the build:
|
/attempt 2265
|
Currently, we're running each benchmark for both the current branch and the base branch, which doubles the time required. The approach I'm considering: run the base benchmarks on each push to the main branch and save the results as an artifact. During a pull request run, execute the benchmarks for the current branch, download the base artifact, compare the current results with the base results using a shell script, and upload the comparison results. If the regression exceeds a certain threshold, we fail the CI. I will divide the task into two PRs.
|
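As a rough illustration of the artifact round-trip described in the previous comment, here is a small Scala sketch assuming a simple key=value text format for the saved scores; the format, file handling, and helper names are illustrative assumptions, and the actual PR may simply reuse JMH's own JSON or CSV output.

```scala
import java.nio.file.{Files, Paths}
import scala.io.Source
import scala.jdk.CollectionConverters._

// Hypothetical helpers for the baseline artifact: the main-branch job would
// call `save` after its JMH run, and the pull-request job would call `load`
// on the downloaded artifact before comparing against the current run.
object BaselineArtifact {

  def save(path: String, scores: Map[String, Double]): Unit = {
    val lines = scores.map { case (name, score) => s"$name=$score" }.toList
    Files.write(Paths.get(path), lines.asJava)
  }

  def load(path: String): Map[String, Double] =
    Source.fromFile(path).getLines().flatMap { line =>
      line.split("=", 2) match {
        case Array(name, score) => Some(name -> score.toDouble)
        case _                  => None // skip malformed lines
      }
    }.toMap
}
```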
@jdegoes I have created a pull request for the current issue: pull_request.
Hey @jdegoes, #2751 would completely close this issue. As stated in the comment above, I have divided the solution into two PRs. Can you review? It's in a working state.
We need JMH-based benchmarks to be run as part of CI, with automatic failure if performance on some benchmark falls below some threshold set in configuration.
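As one possible shape for that configured threshold, here is a small Scala sketch; the class, field names, defaults, and the example benchmark name are illustrative assumptions, not an existing zio-http API.

```scala
// Hypothetical threshold configuration: a default allowed drop in percent,
// plus per-benchmark overrides, consumed by the CI comparison step.
final case class BenchmarkThresholds(
  defaultMaxDropPercent: Double = 10.0,
  perBenchmark: Map[String, Double] = Map.empty
) {
  def maxDropFor(benchmark: String): Double =
    perBenchmark.getOrElse(benchmark, defaultMaxDropPercent)
}

object BenchmarkThresholds {
  // Example: a stricter budget for one hot-path benchmark (name is made up).
  val example: BenchmarkThresholds =
    BenchmarkThresholds(
      defaultMaxDropPercent = 10.0,
      perBenchmark = Map("HttpRouteTextPerf.benchmarkApp" -> 5.0)
    )
}
```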