[EBPF] refactored gpu probe to decouple init and start phases #30615
Conversation
Regression Detector

Regression Detector Results (Metrics dashboard)

Baseline: fb7c383
Optimization Goals: ✅ No significant changes detected
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | basic_py_check | % cpu utilization | +3.17 | [-0.76, +7.09] | 1 | Logs |
| ➖ | idle | memory utilization | +0.79 | [+0.74, +0.83] | 1 | Logs, bounds checks dashboard |
| ➖ | quality_gate_idle | memory utilization | +0.35 | [+0.30, +0.40] | 1 | Logs, bounds checks dashboard |
| ➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | +0.13 | [-0.61, +0.87] | 1 | Logs |
| ➖ | file_to_blackhole_1000ms_latency | egress throughput | +0.07 | [-0.42, +0.56] | 1 | Logs |
| ➖ | file_to_blackhole_500ms_latency | egress throughput | +0.05 | [-0.19, +0.30] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency | egress throughput | +0.05 | [-0.28, +0.38] | 1 | Logs |
| ➖ | file_to_blackhole_100ms_latency | egress throughput | +0.04 | [-0.19, +0.26] | 1 | Logs |
| ➖ | idle_all_features | memory utilization | +0.01 | [-0.10, +0.11] | 1 | Logs, bounds checks dashboard |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | -0.00 | [-0.01, +0.01] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | -0.01 | [-0.09, +0.06] | 1 | Logs |
| ➖ | file_to_blackhole_300ms_latency | egress throughput | -0.06 | [-0.23, +0.12] | 1 | Logs |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | -0.11 | [-0.17, -0.05] | 1 | Logs |
| ➖ | file_tree | memory utilization | -0.23 | [-0.36, -0.11] | 1 | Logs |
| ➖ | quality_gate_idle_all_features | memory utilization | -0.45 | [-0.55, -0.35] | 1 | Logs, bounds checks dashboard |
| ➖ | pycheck_lots_of_tags | % cpu utilization | -1.18 | [-4.68, +2.32] | 1 | Logs |
Bounds Checks: ❌ Failed
| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| ❌ | idle | memory_usage | 9/10 | bounds checks dashboard |
| ❌ | quality_gate_idle | memory_usage | 9/10 | bounds checks dashboard |
| ✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_300ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| ✅ | idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we flag a change in performance as a "regression" -- a change worth investigating further -- if all of the following criteria are true (see the sketch after this list):

- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between the baseline and comparison variants.
- Its configuration does not mark it "erratic".
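The decision rule above is mechanical enough to state as code. Here is a minimal sketch in Go, with made-up names (`Experiment`, `IsRegression`) and the 5% tolerance and 90% CI from above hard-coded; it is not the Regression Detector's actual implementation:

```go
package main

import (
	"fmt"
	"math"
)

// Experiment holds one row of the results table; field names are illustrative.
type Experiment struct {
	Name      string
	DeltaMean float64 // estimated Δ mean %
	CILow     float64 // lower bound of the 90% CI on Δ mean %
	CIHigh    float64 // upper bound of the 90% CI on Δ mean %
	Erratic   bool    // whether the experiment's configuration marks it "erratic"
}

// IsRegression applies the three criteria: an effect size of at least 5%,
// a 90% CI that excludes zero, and a configuration not marked "erratic".
func IsRegression(e Experiment) bool {
	effectLargeEnough := math.Abs(e.DeltaMean) >= 5.0
	ciExcludesZero := e.CILow > 0 || e.CIHigh < 0
	return effectLargeEnough && ciExcludesZero && !e.Erratic
}

func main() {
	// basic_py_check from the table: +3.17% with CI [-0.76, +7.09].
	e := Experiment{Name: "basic_py_check", DeltaMean: 3.17, CILow: -0.76, CIHigh: 7.09}
	fmt.Println(IsRegression(e)) // false: |Δ| < 5% and the CI contains zero
}
```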
Force-pushed from 72ad447 to 992fe29
Test changes on VM

Use this command from test-infra-definitions to manually test this PR's changes on a VM: `inv create-vm --pipeline-id=48347710 --os-family=ubuntu`

Note: This applies to commit 750171b
- moved attacherConfig creation into the probe pkg
- updated some funcs' comments
- added a call to Start in the UTs
… creation phase (NewProbe) to follow the other modules' pattern (Tracer, USMMonitor)
added a detailed comment to explain the logic
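To make the shape of the refactor concrete, here is a minimal Go sketch of an init/start split along the Tracer/USMMonitor pattern the commits reference. All names (`Probe`, `Config`, `attacherConfig`) are illustrative stand-ins, not the gpu package's real API:

```go
package gpu

import "errors"

// Config stands in for the probe's settings; its contents are hypothetical.
type Config struct{}

// attacherConfig stands in for the eBPF attachment settings whose creation
// the commits move into the probe pkg.
type attacherConfig struct{}

// Probe owns the gpu monitoring state.
type Probe struct {
	cfg      Config
	attacher attacherConfig
	started  bool
}

// NewProbe covers only the init phase: allocate and validate, with no side
// effects, mirroring how Tracer and USMMonitor are constructed.
func NewProbe(cfg Config) (*Probe, error) {
	return &Probe{
		cfg:      cfg,
		attacher: attacherConfig{ /* derived from cfg */ },
	}, nil
}

// Start covers the start phase: the side-effecting work (loading and
// attaching eBPF programs) that previously ran during construction.
func (p *Probe) Start() error {
	if p.started {
		return errors.New("probe already started")
	}
	// ... load eBPF objects and attach probes here ...
	p.started = true
	return nil
}
```

Splitting the phases this way lets callers (and tests) construct the probe cheaply and decide separately when the eBPF machinery is brought up.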
Force-pushed from 30425b9 to 10f9aa6
Co-authored-by: Guillermo Julián <gjulianm@users.noreply.github.com>
…e' into valeri.pliskin/refactor-gpu-probe
removed redundant nil checks
/merge
🚂 MergeQueue: pull request added to the queue
What does this PR do?
Decouples the "start phase" logic from the "init phase" for the gpu module.
Subsequent PRs will further decouple some of the internal structs and fields of the gpu probe.
Motivation
Describe how to test/QA your changes
The refactor is covered by the existing UTs of the gpu pkg; a hypothetical test sketch follows.
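With the phases separated, a unit test can construct the probe and then exercise the start phase as an explicit step, matching the "added a call to Start in the UTs" commit. A hedged sketch reusing the illustrative names from the snippet above, not the gpu pkg's real tests:

```go
package gpu

import "testing"

func TestProbeStartsAfterInit(t *testing.T) {
	// Init phase: construction alone should succeed without touching eBPF.
	p, err := NewProbe(Config{})
	if err != nil {
		t.Fatalf("NewProbe: %v", err)
	}
	// Start phase: now an explicit, separately testable step.
	if err := p.Start(); err != nil {
		t.Fatalf("Start: %v", err)
	}
}
```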
Possible Drawbacks / Trade-offs
Additional Notes
Jira ticket