[CONTINT-3524] Collect metrics of SBOM check (long running) in agent status #22313
Conversation
Bloop Bleep... Dogbot Here

Regression Detector Results

Run ID: 28bec092-46ce-4e0d-97a3-aa84686c6ed3

Performance changes are noted in the perf column of each table.

No significant changes in experiment optimization goals. Confidence level: 90.00%. There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.
| perf | experiment | goal | Δ mean % | Δ mean % CI |
|---|---|---|---|---|
| ➖ | file_tree | memory utilization | +1.01 | [+0.95, +1.08] |
| ➖ | idle | memory utilization | +0.66 | [+0.63, +0.69] |
| ➖ | file_to_blackhole | % cpu utilization | -0.83 | [-7.37, +5.71] |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI |
|---|---|---|---|---|
| ➖ | otel_to_otel_logs | ingress throughput | +2.40 | [+1.66, +3.15] |
| ➖ | file_tree | memory utilization | +1.01 | [+0.95, +1.08] |
| ➖ | idle | memory utilization | +0.66 | [+0.63, +0.69] |
| ➖ | process_agent_standard_check_with_stats | memory utilization | +0.32 | [+0.28, +0.36] |
| ➖ | process_agent_real_time_mode | memory utilization | +0.28 | [+0.25, +0.30] |
| ➖ | process_agent_standard_check | memory utilization | +0.14 | [+0.10, +0.18] |
| ➖ | uds_dogstatsd_to_api | ingress throughput | +0.00 | [-0.04, +0.04] |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.05, +0.05] |
| ➖ | trace_agent_json | ingress throughput | -0.00 | [-0.04, +0.04] |
| ➖ | trace_agent_msgpack | ingress throughput | -0.00 | [-0.03, +0.02] |
| ➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | -0.58 | [-2.03, +0.86] |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | -0.77 | [-0.82, -0.71] |
| ➖ | file_to_blackhole | % cpu utilization | -0.83 | [-7.37, +5.71] |
Explanation
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we flag a change in performance as a "regression" -- a change worth investigating further -- if all of the following criteria are true:

- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that, if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between the baseline and comparison variants.
- Its configuration does not mark it "erratic".
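The three criteria above can be sketched as a small decision function. This is a hypothetical helper illustrating the rule, not the detector's actual code:

```go
package main

import "fmt"

// ci holds the bounds of the 90.00% confidence interval for Δ mean %.
type ci struct{ lo, hi float64 }

// isRegression applies the three criteria described above:
// |Δ mean %| ≥ 5.00%, a confidence interval that excludes zero,
// and an experiment whose configuration is not marked "erratic".
func isRegression(deltaMeanPct float64, interval ci, erratic bool) bool {
	bigEnough := deltaMeanPct >= 5.0 || deltaMeanPct <= -5.0
	excludesZero := interval.lo > 0 || interval.hi < 0
	return bigEnough && excludesZero && !erratic
}

func main() {
	// tcp_syslog_to_blackhole above: its CI excludes zero, but the
	// effect size is below the 5% threshold, so it is not flagged.
	fmt.Println(isRegression(-0.77, ci{-0.82, -0.71}, false))
}
```

Note that a statistically significant change (CI excluding zero) is still ignored when the effect size is small, which is why the table above shows only ➖ entries.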
LGTM
The https://github.com/DataDog/datadog-agent/blob/main/pkg/status/render/templates/collectorHTML.tmpl file also needs to be updated.
LGTM 🎉
Would it be a problem if the SBOM check wasn't a long running check? There are several core checks that do work in the background but are not long running. I don't see this discussed in the RFC.
Hi @vickenty. The name "long running" check comes from the fact that the … For the SBOM check, we need to send the SBOM report as soon as the image is present on the host. That is why we have implemented this check as a "long running check". Migrating the SBOM check to a scheduled check doesn't make sense, because it is not running the check that triggers the generation of the SBOM report payload. So maybe we can replace "long running" with "asynchronous" for this kind of check, if that is clearer in terms of behavior for the end user. WDYT?
A regular check can very well have background tasks driven by external events; see …
This PR introduces a reusable template that can easily allow us to port our long running checks as normal checks. I think it corresponds well to what you describe, as the main logic will run in the background. Besides, IMO the specification requires clarification. The fact that the check could start doing some work after calling … Rather than migrating tons of checks to more-or-less respect the current interface, I think that defining a common standard for our checks is the easiest way for us and will benefit our long-term support. WDYT?
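The distinction discussed above can be sketched in a few lines. These are assumed shapes for illustration, not the agent's real check interfaces: a scheduled check does all its work inside `Run`, while an asynchronous ("long running") check starts a background worker that reacts to external events, such as a new image appearing on the host.

```go
package main

import "fmt"

// check is a minimal stand-in for the agent's check interface.
type check interface {
	Run() error
}

// scheduledCheck does its work synchronously, once per collection interval.
type scheduledCheck struct{}

func (scheduledCheck) Run() error { return nil }

// asyncCheck emits a payload whenever an event arrives, not on a schedule.
type asyncCheck struct {
	events   chan string
	payloads chan string
}

func (c *asyncCheck) Run() error {
	// Run only starts the worker; payloads are produced as events arrive.
	go func() {
		for ev := range c.events {
			c.payloads <- "sbom payload for " + ev
		}
		close(c.payloads)
	}()
	return nil
}

// Both styles satisfy the same interface.
var _ check = scheduledCheck{}
var _ check = (*asyncCheck)(nil)

func main() {
	c := &asyncCheck{events: make(chan string), payloads: make(chan string)}
	_ = c.Run()
	c.events <- "image:nginx" // external event, e.g. an image pulled on the host
	close(c.events)
	for p := range c.payloads {
		fmt.Println(p)
	}
}
```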
```
Service Checks: Last Run: {{humanize .ServiceChecks}}, Total: {{humanize .TotalServiceChecks}}
{{- if .TotalHistogramBuckets}}
Histogram Buckets: Last Run: {{humanize .HistogramBuckets}}, Total: {{humanize .TotalHistogramBuckets}}
{{ if .LongRunning -}}
```
Would it be possible to avoid duplicating the template, and only put the execution statistics behind a condition on `.LongRunning`? (And the same for the three other templates.)
IMO it is clearer like this; also, it conflicts with a previous comment: #22313 (comment).
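For context on the suggestion being discussed, here is a sketch of a single shared template guarding long-running-specific lines with `{{if .LongRunning}}`. The field names and `render` helper are assumptions for illustration, not the agent's actual template data:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// One shared stats template: only the long-running-specific line is
// conditional, instead of duplicating the whole template per check kind.
const stats = `Total Runs: {{.TotalRuns}}
{{- if .LongRunning}}
Long Running Check
{{- end}}
`

// render executes the template for a check that ran 12 times.
func render(longRunning bool) string {
	var b strings.Builder
	tmpl := template.Must(template.New("stats").Parse(stats))
	if err := tmpl.Execute(&b, map[string]interface{}{
		"TotalRuns":   12,
		"LongRunning": longRunning,
	}); err != nil {
		panic(err)
	}
	return b.String()
}

func main() {
	fmt.Print(render(true))
	fmt.Print(render(false))
}
```

The `{{-` trim markers keep the extra blank line out of the output when the condition is false, so regular checks render exactly as before.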
LGTM
/merge

🚂 MergeQueue: This merge request is not mergeable yet because of pending checks or missing approvals. It will be added to the queue as soon as checks pass and/or approvals are given.

This merge request was unqueued. If you need support, contact us on Slack #ci-interfaces!

/merge

🚂 MergeQueue: Pull request added to the queue. There are 6 builds ahead! (estimated merge in less than 2h)
What does this PR do?
Adds the SBOM check to the output of `agent status` and implements a generic wrapper to convert other checks as well. It implements the solution of the following RFC.
Motivation
We would like to monitor the status of long running checks; SBOM is one of them.
Additional Notes
Adding a field `LongRunning bool` to the `MetricsStat` data structure was necessary for formatting. `Interval` can't be 0; otherwise, stats wouldn't be collected and the check would be scheduled only once.

Possible Drawbacks / Trade-offs

The long running check will not go through the normal check loop.
Describe how to test/QA your changes
Deploy the agent with SBOM collection enabled. Here is the setting in the helm chart: https://github.com/DataDog/helm-charts/blob/716fc5c56344d64ddf9a1841e9d70d96a7a8fd94/charts/datadog/values.yaml#L707
Wait a few seconds and run `agent status`. It should output the SBOM check status. Look at the logs: every 15s, check metrics should be collected.

Check that `agent status` and `agent launch-gui` are still working as expected. The templates were refactored, so we need to make sure that these two commands still work as expected.
Reviewer's Checklist
- The triage milestone is set.
- A `major_change` label is applied if your change has a major impact on the code base, impacts multiple teams, or changes important well-established internals of the Agent. This label will be used during QA to make sure each team pays extra attention to the changed behavior. For any customer-facing change, add a release note.
- The `changelog/no-changelog` label has been applied.
- The `qa/skip-qa` label, with either the required `qa/done` or `qa/no-code-change` label, is applied.
- A `team/..` label has been applied, indicating the team(s) that should QA this change.
- If applicable, the `need-change/operator` and `need-change/helm` labels have been applied.
- If applicable, a `k8s/<min-version>` label has been applied, indicating the lowest Kubernetes version compatible with this feature.